593659
Nicole Oresme
Nicole Oresme (1 January 1325 – 11 July 1382), also known as Nicolas Oresme, Nicholas Oresme, or Nicolas d'Oresme, was a French philosopher of the later Middle Ages. He wrote influential works on economics, mathematics, physics, astrology, astronomy, philosophy, and theology; was Bishop of Lisieux, a translator, and a counselor of King Charles V of France; and was one of the most original thinkers of 14th-century Europe. Life. Nicole Oresme was born c. 1320–1325 in the village of Allemagnes (today's Fleury-sur-Orne) in the vicinity of Caen, Normandy, in the diocese of Bayeux. Practically nothing is known concerning his family. The fact that Oresme attended the royally sponsored and subsidised College of Navarre, an institution for students too poor to pay their expenses while studying at the University of Paris, makes it probable that he came from a peasant family. Oresme studied the "arts" in Paris, together with Jean Buridan (the so-called founder of the French school of natural philosophy), Albert of Saxony and perhaps Marsilius of Inghen, and there received the Magister Artium. He was already a regent master in arts by 1342, during the crisis over William of Ockham's natural philosophy. In 1348, he was a student of theology in Paris. In 1356, he received his doctorate and in the same year he became grand master ("grand-maître") of the College of Navarre. In 1364, he was appointed dean of the Cathedral of Rouen. Around 1369, he began a series of translations of Aristotelian works at the request of Charles V, who granted him a pension in 1371; with royal support, Oresme was appointed bishop of Lisieux in 1377. In 1382, he died in Lisieux. Scientific work. Cosmology. In his "Livre du ciel et du monde" Oresme discussed a range of evidence for and against the daily rotation of the Earth on its axis. 
From astronomical considerations, he maintained that if the Earth were moving and not the celestial spheres, all the movements that we see in the heavens that are computed by the astronomers would appear exactly the same as if the spheres were rotating around the Earth. He rejected the physical argument that if the Earth were moving the air would be left behind causing a great wind from east to west. In his view the Earth, Water, and Air would all share the same motion. As to the scriptural passage that speaks of the motion of the Sun, he concludes that "this passage conforms to the customary usage of popular speech" and is not to be taken literally. He also noted that it would be more economical for the small Earth to rotate on its axis than the immense sphere of the stars. Nonetheless, he concluded that none of these arguments were conclusive and "everyone maintains, and I think myself, that the heavens do move and not the Earth." Critiques of astrology. In his mathematical work, Oresme developed the notion of incommensurate fractions, fractions that could not be expressed as powers of one another, and made probabilistic, statistical arguments as to their relative frequency. From this, he argued that it was very probable that the length of the day and the year were incommensurate (irrational), as indeed were the periods of the motions of the moon and the planets. Consequently, he noted that planetary conjunctions and oppositions would never recur in exactly the same way. Oresme maintained that this disproves the claims of astrologers who, thinking "they know with punctual exactness the motions, aspects, conjunctions and oppositions… [judge] rashly and erroneously about future events." Oresme's critique of astrology in his "Livre de divinacions" treats it as having six parts. The first part, essentially astronomy, the study of the movements of heavenly bodies, he considers good science but not precisely knowable. 
The second part deals with the influences of the heavenly bodies on earthly events at all scales. Oresme does not deny such influence, but states, in line with a commonly held opinion, that it could either be that arrangements of heavenly bodies signify events, purely symbolically, or that they actually cause such events, deterministically. Mediaevalist Chauncey Wood remarks that this major elision "makes it very difficult to determine who believed what about astrology". The third part concerns predictiveness, covering events at three different scales: great events such as plagues, famines, floods and wars; weather, winds and storms; and medicine, with influences on the humours, the four Aristotelian fluids of the body. Oresme criticizes all of these as misdirected, though he accepts that prediction is a legitimate area of study, and argues that the effect on the weather is less well known than the effect on great events. He observes that sailors and farmers are better at predicting weather than astrologers, and specifically attacks the astrological basis of prediction, noting correctly that the zodiac has moved relative to the fixed stars (because of precession of the equinoxes) since the zodiac was first described in ancient times. These first three parts are what Oresme considers the physical influences of the stars and planets (including sun and moon) on the earth, and while he offers critiques of them, he accepts that effects exist. The last three parts are what Oresme considers to concern (good or bad) fortune. They are interrogations, meaning asking the stars when to do things such as business deals; elections, meaning choosing the best time to do things such as getting married or fighting a war; and nativities, meaning the natal astrology with birth charts that forms much of modern astrological practice. Oresme classifies interrogations and elections as "totally false" arts, but his critique of nativities is more measured. 
He denies that any path is predetermined by the heavenly bodies, because humans have free will, but he accepts that the heavenly bodies can influence behaviour and habitual mood, via the combination of humours in each person. Overall, Oresme's skepticism is strongly shaped by his understanding of the scope of astrology. He accepts things a modern skeptic would reject, and rejects some things — such as the knowability of planetary movements, and effects on weather — that are accepted by modern science. Sense perception. In discussing the propagation of light and sound, Oresme adopted the common medieval doctrine of the multiplication of species, as it had been developed by optical writers such as Alhacen, Robert Grosseteste, Roger Bacon, John Pecham, and Witelo. Oresme maintained that these species were immaterial, but corporeal (i.e., three-dimensional) entities. Mathematics. Oresme's most important contributions to mathematics are contained in "Tractatus de configurationibus qualitatum et motuum". In a quality, or accidental form, such as heat, he distinguished the "intensio" (the degree of heat at each point) and the "extensio" (as the length of the heated rod). These two terms were often replaced by "latitudo" and "longitudo". For the sake of clarity, Oresme conceived the idea of visualizing these concepts by plane figures, approaching what we would now call rectangular coordinates. The intensity of the quality was represented by a length or "latitudo" proportional to the intensity erected perpendicular to the base at a given point on the base line, which represents the "longitudo". Oresme proposed that the geometrical form of such a figure could be regarded as corresponding to a characteristic of the quality itself. Oresme defined a uniform quality as that which is represented by a line parallel to the longitude, and any other quality as difform. 
Uniformly varying qualities are represented by a straight line inclined to the axis of the longitude, while he described many cases of nonuniformly varying qualities. Oresme extended this doctrine to figures of three dimensions. He considered this analysis applicable to many different qualities such as hotness, whiteness, and sweetness. Significantly for later developments, Oresme applied this concept to the analysis of local motion where the "latitudo" or intensity represented the speed, the "longitudo" represented the time, and the area of the figure represented the distance travelled. He shows that his method of figuring the latitude of forms is applicable to the movement of a point, on condition that the time is taken as longitude and the speed as latitude; the quantity of the motion is, then, the space covered in a given time. In virtue of this transposition, the theorem of the "latitudo uniformiter difformis" became the law of the space traversed in case of uniformly varied motion; thus Oresme stated the rule over two centuries before Galileo made it famous. Diagrams of the velocity of an accelerating object against time in "On the Latitude of Forms" by Oresme have been cited to credit Oresme with the discovery of "proto bar charts". In "De configurationibus" Oresme introduces the concept of curvature as a measure of departure from straightness; for circles he takes the curvature to be inversely proportional to the radius, and he attempts to extend this to other curves as a continuously varying magnitude. Significantly, Oresme developed the first proof of the divergence of the harmonic series. His proof, requiring less advanced mathematics than current standard tests for divergence (for example, the integral test), begins by noting that for any "n" that is a power of 2, the "n"/2 terms of the series from 1/("n"/2 + 1) up to 1/"n" are each at least 1/"n", so together they sum to at least 1/2. 
For instance, there is one term 1/2, then two terms 1/3 + 1/4 that together sum to at least 1/2, then four terms 1/5 + 1/6 + 1/7 + 1/8 that also sum to at least 1/2, and so on. Thus the series must be greater than the series 1 + 1/2 + 1/2 + 1/2 + ..., which does not have a finite limit. This proves that the harmonic series must be divergent. This argument shows that the sum of the first "n" terms grows at least as fast as formula_0. Oresme was the first mathematician to prove this fact; after his proof was lost, it was not proven again until the 17th century, by Pietro Mengoli. He also worked on fractional powers and the notion of probability over infinite sequences, ideas that would not be developed further for another three and five centuries, respectively. On local motion. Oresme, like many of his contemporaries such as John Buridan and Albert of Saxony, shaped and critiqued Aristotle's and Averroes's theories of motion to their own liking. Taking inspiration from the theories of "forma fluens" and "fluxus formae", Oresme would suggest his own descriptions for change and motion in his commentary on "Physics". "Forma fluens" is described by William of Ockham as "Every thing that is moved is moved by a mover," and "fluxus formae" as "Every motion is produced by a mover." Buridan and Albert of Saxony each subscribed to the classic interpretation of flux being an innate part of an object, but Oresme differs from his contemporaries in this aspect. Oresme agrees with "fluxus formae" in that motion is attributed to an object, but holds that an object is “set into” motion, rather than “given” motion, denying a distinction between a motionless object and an object in motion. To Oresme, an object moves, but it is not a moving object. Once an object begins movement through the three dimensions it has a new “modus rei” or “way of being,” which should only be described from the perspective of the moving object, rather than from a distinct point. 
This line of thought coincides with Oresme's challenge to the structure of the universe. Oresme's description of motion was not popular, although it was thorough. One Richard Brinkley is thought to be an inspiration for the modus-rei description, but this is uncertain. Political thought. Oresme provided the first modern vernacular translations of Aristotle's moral works that are still extant today. Between 1371 and 1377 he translated Aristotle's "Ethics", "Politics" and "Economics" (the last of which is nowadays considered to be pseudo-Aristotelian) into Middle French. He also extensively commented on these texts, thereby expressing some of his political views. Like his predecessors Albert the Great, Thomas Aquinas and Peter of Auvergne (and quite unlike Aristotle), Oresme favours monarchy as the best form of government. His criterion for good government is the common good. A king (by definition good) takes care of the common good, whereas a tyrant works for his own profit. A monarch can ensure the stability and durability of his reign by letting the people participate in government. This has rather confusingly and anachronistically been called popular sovereignty. Like Albert the Great, Thomas Aquinas, Peter of Auvergne and especially Marsilius of Padua, whom he occasionally quotes, Oresme conceives of this popular participation as rather restrictive: only the multitude of reasonable, wise and virtuous men should be allowed political participation by electing and correcting the prince, changing the law and passing judgement. Oresme, however, categorically denies the right of rebellion, since it endangers the common good. Unlike earlier commentators, Oresme holds the law to be superior to the king's will. It must only be changed in cases of extreme necessity. Oresme favours moderate kingship, thereby rejecting contemporary absolutist thought, usually promoted by adherents of Roman law. 
Furthermore, Oresme does not subscribe to contemporary conceptions of the French king as sacred, as promoted in contemporary works such as the "Traité du sacre". Although he heavily criticises the Church as corrupt, tyrannical and oligarchical, he never fundamentally questions its necessity for the spiritual well-being of the faithful. It has traditionally been thought that Oresme's Aristotelian translations had a major influence on King Charles V's politics: Charles' laws concerning the line of succession and the possibility of a regency for an underage king have been accredited to Oresme, as has the election of several high-ranking officials by the king's council in the early 1370s. Oresme may have conveyed Marsilian and conciliarist thought to Jean Gerson and Christine de Pizan. Economics. With his "Treatise on the origin, nature, law, and alterations of money" ("De origine, natura, jure et mutationibus monetarum"), one of the earliest manuscripts devoted to an economic matter, Oresme brings an interesting insight on the medieval conception of money. Oresme outlines his theoretical position in Parts 3 and 4 of "De moneta", which he completed between 1356 and 1360. His belief is that humans have a natural right to own property; this property belongs to the individual and the community. In Part 4, Oresme addresses the political problem of how a monarch can be held accountable for putting the common good before private affairs. Though in an emergency the monarchy might claim all money, Oresme states that any ruler who does so is a “tyrant dominating slaves”. Oresme was one of the first medieval theorists to reject the monarch's right to claim all money and to defend “his subjects’ right to own private property.” Psychology. Oresme also wrote extensively on psychology. He worked within the medieval doctrine of the “inner senses” and studied the perception of the world. 
Oresme anticipated ideas of 19th- and 20th-century psychology in the fields of cognitive psychology, perception psychology, the psychology of consciousness, and psychophysics. He anticipated the psychology of the unconscious and proposed a theory of the unconscious conclusions of perception. He developed many ideas beyond quality, quantity, categories and terms, which were later labeled a “theory of cognition”. Posthumous reputation. Oresme's economic thought remained well regarded centuries after his death. In a 1920 "Essay on Medieval Economic Teaching", Irish economist George O'Brien summed up the favorable academic consensus over Oresme's "Treatise on the origin, nature, law, and alterations of money": The merits of this work have excited the unanimous admiration of all who have studied it. Roscher says that it contains 'a theory of money, elaborated in the fourteenth century, which remains perfectly correct to-day, under the test of the principles applied in the nineteenth century, and that with a brevity, a precision, a clarity, and a simplicity of language which is a striking proof of the superior genius of its author.' According to Brants, 'the treatise of Oresme is one of the first to be devoted "ex professo" to an economic subject, and it expresses many ideas which are very just, more just than those which held the field for a long period after him, under the name of mercantilism, and more just than those which allowed of the reduction of money as if it were nothing more than a counter of exchange.' 'Oresme's treatise on money,' says Macleod, 'may be justly said to stand at the head of modern economic literature. This treatise laid the foundations of monetary science, which are now accepted by all sound economists.' 'Oresme's completely secular and naturalistic method of treating one of the most important problems of political economy,' says Espinas, 'is a signal of the approaching end of the Middle Ages and the dawn of the Renaissance.' Dr. 
Cunningham adds his tribute of praise: 'The conceptions of national wealth and national power were ruling ideas in economic matters for several centuries, and Oresme appears to be the earliest of the economic writers by whom they were explicitly adopted as the very basis of his argument…. A large number of points of economic doctrine in regard to coinage are discussed with much judgment and clearness.' Endemann alone is inclined to quarrel with the pre-eminence of Oresme; but on this question, he is in a minority of one. Notes.
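Oresme's grouping argument for the divergence of the harmonic series, described in the mathematics section above, can be checked numerically. The following Python sketch (our own illustration, not from the source) verifies that the sum of the first 2^m terms is at least 1 + m/2, i.e. that the partial sums grow at least as fast as (1/2) log2 n:

```python
def harmonic_partial_sum(n):
    """Sum of the first n terms of the harmonic series 1 + 1/2 + 1/3 + ..."""
    return sum(1.0 / k for k in range(1, n + 1))

# Oresme's grouping: the block 1/(n/2 + 1) + ... + 1/n has n/2 terms,
# each at least 1/n, so every doubling of n adds at least 1/2 to the sum.
for m in range(1, 11):
    n = 2 ** m
    assert harmonic_partial_sum(n) >= 1 + m / 2
    print(n, round(harmonic_partial_sum(n), 4))
```

The lower bound is attained exactly at n = 2 (the sum is 1.5) and becomes strictly loose afterwards, since only the last term of each block equals 1/n.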
[ { "math_id": 0, "text": "(1/2) \\log_2 n" } ]
https://en.wikipedia.org/wiki?curid=593659
593693
Point (geometry)
Fundamental object of geometry In geometry, a point is an abstract idealization of an exact position, without size, in physical space, or its generalization to other kinds of mathematical spaces. As zero-dimensional objects, points are usually taken to be the fundamental indivisible elements comprising the space, of which one-dimensional curves, two-dimensional surfaces, and higher-dimensional objects consist; conversely, a point can be determined by the intersection of two curves or three surfaces, called a "vertex" or "corner". In classical Euclidean geometry, a point is a primitive notion, defined as "that which has no part". Points and other primitive notions are not defined in terms of other concepts, but only by certain formal properties, called axioms, that they must satisfy; for example, "there is exactly one straight line that passes through two distinct points". As physical diagrams, geometric figures are made with tools such as a compass, scriber, or pen, whose pointed tip can mark a small dot or prick a small hole representing a point, or can be drawn across a surface to represent a curve. Since the advent of analytic geometry, points are often defined or represented in terms of numerical coordinates. In modern mathematics, a space of points is typically treated as a set, a point set. An "isolated point" is an element of some subset of points which has some neighborhood containing no other points of the subset. Points in Euclidean geometry. Points, considered within the framework of Euclidean geometry, are one of the most fundamental objects. Euclid originally defined the point as "that which has no part". In the two-dimensional Euclidean plane, a point is represented by an ordered pair (x, y) of numbers, where the first number conventionally represents the horizontal and is often denoted by x, and the second number conventionally represents the vertical and is often denoted by y. 
This idea is easily generalized to three-dimensional Euclidean space, where a point is represented by an ordered triplet (x, y, z) with the additional third number representing depth and often denoted by z. Further generalizations are represented by an ordered tuple of n terms, ("a"1, "a"2, … , "a""n") where n is the dimension of the space in which the point is located. Many constructs within Euclidean geometry consist of an infinite collection of points that conform to certain axioms. This is usually represented by a set of points; as an example, a line is an infinite set of points of the form formula_0 where "c"1 through "cn" and d are constants and n is the dimension of the space. Similar constructions exist that define the plane, line segment, and other related concepts. A line segment consisting of only a single point is called a degenerate line segment. In addition to defining points and constructs related to points, Euclid also postulated a key idea about points, that any two points can be connected by a straight line. This is easily confirmed under modern extensions of Euclidean geometry, and had lasting consequences at its introduction, allowing the construction of almost all the geometric concepts known at the time. However, Euclid's postulation of points was neither complete nor definitive, and he occasionally assumed facts about points that did not follow directly from his axioms, such as the ordering of points on the line or the existence of specific points. In spite of this, modern expansions of the system serve to remove these assumptions. Dimension of a point. There are several inequivalent definitions of dimension in mathematics. In all of the common definitions, a point is 0-dimensional. Vector space dimension. The dimension of a vector space is the maximum size of a linearly independent subset. In a vector space consisting of a single point (which must be the zero vector 0), there is no linearly independent subset. 
The zero vector is not itself linearly independent, because there is a non-trivial linear combination making it zero: formula_1. Topological dimension. The topological dimension of a topological space formula_2 is defined to be the minimum value of "n", such that every finite open cover formula_3 of formula_2 admits a finite open cover formula_4 of formula_2 which refines formula_3 in which no point is included in more than "n"+1 elements. If no such minimal "n" exists, the space is said to be of infinite covering dimension. A point is zero-dimensional with respect to the covering dimension because every open cover of the space has a refinement consisting of a single open set. Hausdorff dimension. Let "X" be a metric space. If "S" ⊂ "X" and "d" ∈ [0, ∞), the "d"-dimensional Hausdorff content of "S" is the infimum of the set of numbers "δ" ≥ 0 such that there is some (indexed) collection of balls formula_5 covering "S" with "ri" > 0 for each "i" ∈ "I" that satisfies formula_6 The Hausdorff dimension of "X" is defined by formula_7 A point has Hausdorff dimension 0 because it can be covered by a single ball of arbitrarily small radius. Geometry without points. Although the notion of a point is generally considered fundamental in mainstream geometry and topology, there are some systems that forgo it, e.g. noncommutative geometry and pointless topology. A "pointless" or "pointfree" space is defined not as a set, but via some structure (algebraic or logical respectively) which looks like a well-known function space on the set: an algebra of continuous functions or an algebra of sets respectively. More precisely, such structures generalize well-known spaces of functions in a way that the operation "take a value at this point" may not be defined. A further tradition starts from some books of A. N. Whitehead in which the notion of region is assumed as a primitive together with the one of "inclusion" or "connection". Point masses and the Dirac delta function. 
Often in physics and mathematics, it is useful to think of a point as having non-zero mass or charge (this is especially common in classical electromagnetism, where electrons are idealized as points with non-zero charge). The Dirac delta function, or δ function, is (informally) a generalized function on the real number line that is zero everywhere except at zero, with an integral of one over the entire real line. The delta function is sometimes thought of as an infinitely high, infinitely thin spike at the origin, with total area one under the spike, and physically represents an idealized point mass or point charge. It was introduced by theoretical physicist Paul Dirac. In the context of signal processing it is often referred to as the unit impulse symbol (or function). Its discrete analog is the Kronecker delta function, which is usually defined on a finite domain and takes values 0 and 1. See also. Notes. References.
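The relationship between the Dirac delta and its discrete analog can be illustrated in a few lines of Python. This sketch (our own, not from the source) shows the "sifting" behaviour: summing against the Kronecker delta picks out a single value, just as integrating f(x)·δ(x − a) yields f(a):

```python
def kronecker_delta(i, j):
    """Discrete analog of the Dirac delta: 1 when the indices match, 0 otherwise."""
    return 1 if i == j else 0

# Sifting property: summing f[j] * delta(i, j) over j picks out f[i],
# the discrete counterpart of integrating f(x) * delta(x - a) to get f(a).
f = [10, 20, 30, 40]
picked = sum(f[j] * kronecker_delta(2, j) for j in range(len(f)))
print(picked)  # 30
```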
[ { "math_id": 0, "text": " L = \\lbrace (a_1,a_2,...a_n) \\mid a_1c_1 + a_2c_2 + ... a_nc_n = d \\rbrace," }, { "math_id": 1, "text": "1 \\cdot \\mathbf{0}=\\mathbf{0}" }, { "math_id": 2, "text": "X" }, { "math_id": 3, "text": "\\mathcal{A}" }, { "math_id": 4, "text": "\\mathcal{B}" }, { "math_id": 5, "text": "\\{B(x_i,r_i):i\\in I\\}" }, { "math_id": 6, "text": "\\sum_{i\\in I} r_i^d<\\delta. " }, { "math_id": 7, "text": "\\operatorname{dim}_{\\operatorname{H}}(X):=\\inf\\{d\\ge 0: C_H^d(X)=0\\}." } ]
https://en.wikipedia.org/wiki?curid=593693
5937299
Pendulum (mechanics)
Free swinging suspended body A pendulum is a body suspended from a fixed support such that it freely swings back and forth under the influence of gravity. When a pendulum is displaced sideways from its resting, equilibrium position, it is subject to a restoring force due to gravity that will accelerate it back towards the equilibrium position. When released, the restoring force acting on the pendulum's mass causes it to oscillate about the equilibrium position, swinging it back and forth. The mathematics of pendulums is in general quite complicated. Simplifying assumptions can be made, which in the case of a simple pendulum allow the equations of motion to be solved analytically for small-angle oscillations. Simple gravity pendulum. A "simple gravity pendulum" is an idealized mathematical model of a real pendulum. It is a weight (or bob) on the end of a massless cord suspended from a pivot, without friction. Since in the model there is no frictional energy loss, when given an initial displacement it swings back and forth with a constant amplitude. The model assumes that the rod or cord is rigid and massless, that the bob is a point mass, that motion occurs only in two dimensions, and that neither friction nor air resistance removes energy. The differential equation which governs the motion of a simple pendulum is d2θ/dt2 + (g/ℓ) sin θ = 0 (Eq. 1), where g is the magnitude of the gravitational field, ℓ is the length of the rod or cord, and θ is the angle from the vertical to the pendulum. "Force" derivation of (Eq. 1): Consider Figure 1 on the right, which shows the forces acting on a simple pendulum. Note that the path of the pendulum sweeps out an arc of a circle. The angle θ is measured in radians, and this is crucial for this formula. The blue arrow is the gravitational force acting on the bob, and the violet arrows are that same force resolved into components parallel and perpendicular to the bob's instantaneous motion. 
The direction of the bob's instantaneous velocity always points along the red axis, which is considered the tangential axis because its direction is always tangent to the circle. Consider Newton's second law, formula_0 where F is the sum of forces on the object, m is mass, and a is the acceleration. Newton's equation can be applied to the tangential axis only. This is because only changes in speed are of concern and the bob is forced to stay in a circular path. The short violet arrow represents the component of the gravitational force in the tangential axis, and trigonometry can be used to determine its magnitude. Thus, formula_1 where g is the acceleration due to gravity near the surface of the earth. The negative sign on the right hand side implies that θ and a always point in opposite directions. This makes sense because when a pendulum swings further to the left, it is expected to accelerate back toward the right. This linear acceleration a along the red axis can be related to the change in angle θ by the arc length formulas; s is arc length: formula_2 thus: formula_3 &lt;templatestyles src="Math_proof/styles.css" /&gt; "Torque" derivation of (Eq. 1) Equation (1) can be obtained using two definitions for torque. formula_4 First start by defining the torque on the pendulum bob using the force due to gravity. formula_5 where l is the length vector of the pendulum and Fg is the force due to gravity. For now just consider the magnitude of the torque on the pendulum. formula_6 where m is the mass of the pendulum, g is the acceleration due to gravity, l is the length of the pendulum, and θ is the angle between the length vector and the force due to gravity. Next rewrite the angular momentum. formula_7 Again just consider the magnitude of the angular momentum. formula_8 and its time derivative formula_9 The magnitudes can then be compared using τ = formula_10 thus: formula_11 which is the same result as obtained through force analysis. 
"Energy" derivation of (Eq. 1): It can also be obtained via the conservation of mechanical energy principle: any object falling a vertical distance formula_12 would acquire kinetic energy equal to that which it lost to the fall. In other words, gravitational potential energy is converted into kinetic energy. Change in potential energy is given by formula_13 The change in kinetic energy (body started from rest) is given by formula_14 Since no energy is lost, the gain in one must be equal to the loss in the other formula_15 The change in velocity for a given change in height can be expressed as formula_16 Using the arc length formula above, this equation can be rewritten in terms of dθ/dt: formula_17 where h is the vertical distance the pendulum fell. Look at Figure 2, which presents the trigonometry of a simple pendulum. If the pendulum starts its swing from some initial angle "θ"0, then "y"0, the vertical distance from the screw, is given by formula_18 Similarly, "y"1, the vertical distance at the swing angle, is given by formula_19 Then h is the difference of the two: formula_20 Expressed in terms of dθ/dt, this gives Eq. 2. This equation is known as the "first integral of motion"; it gives the velocity in terms of the location and includes an integration constant related to the initial displacement ("θ"0). Next, differentiate with respect to time, applying the chain rule, to get the acceleration formula_21 which is the same result as obtained through force analysis. "Lagrange" derivation of (Eq. 1): Equation 1 can additionally be obtained through Lagrangian Mechanics. 
More specifically, using the Euler–Lagrange equations (or Lagrange's equations of the second kind) by identifying the Lagrangian of the system (formula_22), the constraints (formula_23) and solving the following system of equations formula_24 If the origin of the Cartesian coordinate system is defined as the point of suspension (or simply pivot), then the bob is at formula_25 formula_26 and the velocity of the bob, calculated via differentiating the coordinates with respect to time (using dot notation to indicate the time derivatives) formula_27 formula_28 Thus, the Lagrangian is formula_29 The Euler-Lagrange equation (singular as there is only one constraint, formula_30) is thus formula_31 Which can then be rearranged to match Equation 1, obtained through force analysis. formula_32 Deriving via Lagrangian Mechanics, while excessive with a single pendulum, is useful for more complicated, chaotic systems, such as a double pendulum. Small-angle approximation. The differential equation given above is not easily solved, and there is no solution that can be written in terms of elementary functions. However, adding a restriction to the size of the oscillation's amplitude gives a form whose solution can be easily obtained. If it is assumed that the angle is much less than 1 radian (often cited as less than 0.1 radians, about 6°), or formula_33 then substituting for sin "θ" into Eq. 1 using the small-angle approximation, formula_34 yields the equation for a harmonic oscillator, formula_35 The error due to the approximation is of order "θ"3 (from the Taylor expansion for sin "θ"). Let the starting angle be "θ"0. If it is assumed that the pendulum is released with zero angular velocity, the solution becomes formula_36 The motion is simple harmonic motion where "θ"0 is the amplitude of the oscillation (that is, the maximum angle between the rod of the pendulum and the vertical). 
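The quality of the small-angle solution can be checked against a direct numerical integration of Eq. 1. The following Python sketch is our own illustration (the integrator, step size, and g = 9.81 m/s² are assumptions, not from the source); it uses a simple semi-implicit Euler scheme:

```python
import math

def simulate_pendulum(theta0, length, t_end, g=9.81, dt=1e-4):
    """Integrate theta'' = -(g/length) * sin(theta), starting from rest at
    theta0, with a semi-implicit Euler scheme; return theta at time t_end."""
    theta, omega = theta0, 0.0
    for _ in range(int(round(t_end / dt))):
        omega -= (g / length) * math.sin(theta) * dt
        theta += omega * dt
    return theta

theta0, length, t_end = 0.05, 1.0, 1.0   # 0.05 rad is well inside the small-angle regime
numerical = simulate_pendulum(theta0, length, t_end)
harmonic = theta0 * math.cos(math.sqrt(9.81 / length) * t_end)
print(numerical, harmonic)  # the two agree closely at this small amplitude
```

At larger starting angles the two curves drift apart, since the true period grows with amplitude (as computed in the next section) while the harmonic solution's does not.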
The corresponding approximate period of the motion is then formula_37 which is known as Christiaan Huygens's law for the period. Note that under the small-angle approximation, the period is independent of the amplitude "θ"0; this is the property of isochronism that Galileo discovered. Rule of thumb for pendulum length. Rearranging formula_38 gives formula_39 If SI units are used (i.e. measuring in metres and seconds), and assuming the measurement is taking place on the Earth's surface, then "g" ≈ 9.81 m/s2, and "g"/π2 ≈ 1 m/s2 (0.994 to 3 decimal places). Therefore, relatively reasonable approximations for the length and period are: formula_40 where "T"0 is the number of seconds between "two" beats (one beat for each side of the swing), and l is measured in metres. Arbitrary-amplitude period. For amplitudes beyond the small angle approximation, one can compute the exact period by first inverting the equation for the angular velocity obtained from the energy method (Eq. 2), formula_41 and then integrating over one complete cycle, formula_42 or twice the half-cycle formula_43 or four times the quarter-cycle formula_44 which leads to formula_45 Note that this integral diverges as "θ"0 approaches the vertical formula_46 so that a pendulum with just the right energy to go vertical will never actually get there. (Conversely, a pendulum close to its maximum can take an arbitrarily long time to fall down.) This integral can be rewritten in terms of elliptic integrals as formula_47 where F is the incomplete elliptic integral of the first kind defined by formula_48 Or more concisely, by the substitution formula_49 expressing θ in terms of u, formula_50 (Eq. 3). Here K is the complete elliptic integral of the first kind defined by formula_51 For comparison of the approximation to the full solution, consider a pendulum of length 1 m on Earth ("g" = 9.80665 m/s2) released from an initial angle of 10 degrees; its period is formula_52 The linear approximation gives formula_53 The difference between the two values, less than 0.2%, is much less than that caused by the variation of g with geographical location. From here there are many ways to proceed to calculate the elliptic integral. Legendre polynomial solution for the elliptic integral. Given Eq. 3 and the Legendre polynomial solution for the elliptic integral: formula_54 where "n"!! denotes the double factorial, an exact solution to the period of a simple pendulum is: formula_55 Figure 4 shows the relative errors using the power series. "T"0 is the linear approximation, and "T"2 to "T"10 include respectively the terms up to the 2nd to the 10th powers. Power series solution for the elliptic integral. Another formulation of the above solution can be found if the following Maclaurin series: formula_56 is used in the Legendre polynomial solution above. The resulting power series is: formula_57 with more fractions available in the On-Line Encyclopedia of Integer Sequences with OEIS:  having the numerators and OEIS:  having the denominators. Arithmetic-geometric mean solution for elliptic integral. Given Eq. 3 and the arithmetic–geometric mean solution of the elliptic integral: formula_58 where "M"("x","y") is the arithmetic-geometric mean of x and y. This yields an alternative and faster-converging formula for the period: formula_59 The first iteration of this algorithm gives formula_60 This approximation has a relative error of less than 1% for angles up to 96.11 degrees. 
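The arithmetic–geometric mean form of the period is straightforward to evaluate numerically. The following standard-library sketch (the function names are ours; ℓ = 1 m and "g" = 9.80665 m/s2 match the worked comparison) implements formula_59 directly:

```python
import math

def agm(a, b, tol=1e-15):
    """Arithmetic-geometric mean M(a, b) of two positive numbers."""
    while abs(a - b) > tol * max(a, b):
        a, b = (a + b) / 2, math.sqrt(a * b)
    return (a + b) / 2

def linear_period(length=1.0, g=9.80665):
    """Small-angle period T0 = 2*pi*sqrt(l/g)."""
    return 2 * math.pi * math.sqrt(length / g)

def exact_period(theta0, length=1.0, g=9.80665):
    """Arbitrary-amplitude period T = 2*pi / M(1, cos(theta0/2)) * sqrt(l/g)."""
    return 2 * math.pi / agm(1.0, math.cos(theta0 / 2)) * math.sqrt(length / g)
```

At "θ"0 = 10° this returns ≈ 2.0102 s against the linear 2.0064 s, reproducing the figures quoted above.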
Since formula_61, the expression can be written more concisely as formula_62 The second order expansion of formula_63 reduces to formula_64 A second iteration of this algorithm gives formula_65 This second approximation has a relative error of less than 1% for angles up to 163.10 degrees. Approximate formulae for the nonlinear pendulum period. Though the exact period formula_66 can be determined, for any finite amplitude formula_67 rad, by evaluating the corresponding complete elliptic integral formula_68, where formula_69, this is often avoided in applications because it is not possible to express this integral in a closed form in terms of elementary functions. This has motivated research into simple approximate formulae for the increase of the pendulum period with amplitude (useful in introductory physics labs, classical mechanics, electromagnetism, acoustics, electronics, superconductivity, etc.). The approximate formulae found by different authors can be grouped into a few classes. Of course, the increase of formula_66 with amplitude is more apparent when formula_75, as has been observed in many experiments using either a rigid rod or a disc. As accurate timers and sensors are currently available even in introductory physics labs, the experimental errors found in ‘very large-angle’ experiments are already small enough for a comparison with the exact period, and a very good agreement between theory and experiments in which friction is negligible has been found. Since this activity has been encouraged by many instructors, a simple approximate formula for the pendulum period valid for all possible amplitudes, to which experimental data could be compared, was sought. In 2008, Lima derived a weighted-average formula with this characteristic: formula_76 where formula_77, which presents a maximum error of only 0.6% (at formula_78). Arbitrary-amplitude angular displacement. 
The Fourier series expansion of formula_79 is given by formula_80 where formula_23 is the elliptic nome, formula_81 formula_82 and formula_83 is the angular frequency. If one defines formula_84 then formula_23 can be approximated using the expansion formula_85 (see OEIS: ). Note that formula_86 for formula_87, thus the approximation is applicable even for large amplitudes. Equivalently, the angle can be given in terms of the Jacobi elliptic function formula_88 with modulus formula_89 formula_90 For small formula_91, formula_92, formula_93 and formula_94, so the solution is well-approximated by the small-angle solution given above. Examples. The animations below depict the motion of a simple (frictionless) pendulum with increasing amounts of initial displacement of the bob, or equivalently increasing initial velocity. The small graph above each pendulum is the corresponding phase plane diagram; the horizontal axis is displacement and the vertical axis is velocity. With a large enough initial velocity the pendulum does not oscillate back and forth but rotates completely around the pivot. Compound pendulum. A compound pendulum (or physical pendulum) is one where the rod is not massless, and may have extended size; that is, an arbitrarily shaped rigid body swinging by a pivot formula_95. In this case the pendulum's period depends on its moment of inertia formula_96 around the pivot point. The equation of torque gives: formula_97 where: formula_98 is the angular acceleration. formula_99 is the torque. The torque is generated by gravity, so: formula_100 where formula_101 is the mass of the body, formula_102 is the distance from the pivot point to the body's centre of mass, and formula_103 is the angle from the vertical. Hence, under the small-angle approximation, formula_104 (or equivalently when formula_105), formula_106 where formula_96 is the moment of inertia of the body about the pivot point formula_95. 
The expression for formula_98 is of the same form as the conventional simple pendulum and gives a period of formula_107 and a frequency of formula_108 If the initial angle is taken into consideration (for large amplitudes), then the expression for formula_98 becomes: formula_109 and gives a period of: formula_110 where formula_111 is the maximum angle of oscillation (with respect to the vertical) and formula_112 is the complete elliptic integral of the first kind. An important concept is the equivalent length, formula_113, the length of a simple pendulum that has the same angular frequency formula_114 as the compound pendulum: formula_115 Consider the following cases: for a simple pendulum of length formula_116, we have formula_117 and formula_118, so formula_119 and hence formula_120, as expected. For a uniform rod of length formula_116 swinging about one of its ends, formula_121 and formula_122, so formula_123 and hence formula_124. For a rod of mass formula_125 carrying a bob of mass formula_126 at its free end, the total mass is formula_127 with formula_128 and formula_129, which gives formula_130 where formula_131. Notice that these formulae reduce to the two previous cases when the mass of the rod or of the bob, respectively, is set to zero. Also notice that the formula does not depend on the masses of the bob and the rod separately, but only on their ratio, formula_132. An approximation can be made for formula_133: formula_134 Notice how similar it is to the angular frequency in a spring-mass system with effective mass. Damped, driven pendulum. The above discussion focuses on a pendulum bob only acted upon by the force of gravity. Suppose a damping force, e.g. air resistance, as well as a sinusoidal driving force acts on the body. This system is a damped, driven oscillator, and is chaotic. Equation (1) can be written as formula_135 (see the Torque derivation of Equation (1) above). A damping term and forcing term can be added to the right hand side to get formula_136 where the damping is assumed to be directly proportional to the angular velocity (this is true for low-speed air resistance, see also Drag (physics)). formula_137 and formula_138 are constants defining the amplitude of forcing and the degree of damping respectively. formula_139 is the angular frequency of the driving oscillations. 
Dividing through by formula_140: formula_141 For a physical pendulum: formula_142 This equation exhibits chaotic behaviour. The exact motion of this pendulum can only be found numerically and is highly dependent on initial conditions, e.g. the initial velocity and the starting amplitude. However, the small angle approximation outlined above can still be used under the required conditions to give an approximate analytical solution. Physical interpretation of the imaginary period. The Jacobian elliptic function that expresses the position of a pendulum as a function of time is a doubly periodic function with a real period and an imaginary period. The real period is, of course, the time it takes the pendulum to go through one full cycle. Paul Appell pointed out a physical interpretation of the imaginary period: if "θ"0 is the maximum angle of one pendulum and 180° − "θ"0 is the maximum angle of another, then the real period of each is the magnitude of the imaginary period of the other. Coupled pendula. Coupled pendulums can affect each other's motion, either through a direct connection (such as a spring connecting the bobs) or through motions in a supporting structure (such as a tabletop). The equations of motion for two identical simple pendulums coupled by a spring connecting the bobs can be obtained using Lagrangian mechanics. The kinetic energy of the system is: formula_143 where formula_101 is the mass of the bobs, formula_144 is the length of the strings, and formula_145, formula_146 are the angular displacements of the two bobs from equilibrium. The potential energy of the system is: formula_147 where formula_148 is the gravitational acceleration, and formula_89 is the spring constant. The displacement formula_149 of the spring from its equilibrium position assumes the small angle approximation. 
The Lagrangian is then formula_150 which leads to the following set of coupled differential equations: formula_151 Adding and subtracting these two equations in turn, and applying the small angle approximation, gives two harmonic oscillator equations in the variables formula_152 and formula_153: formula_154 with the corresponding solutions formula_155 where formula_156 and formula_157, formula_158, formula_98, formula_159 are constants of integration. Expressing the solutions in terms of formula_145 and formula_146 alone: formula_160 If the bobs are not given an initial push, then the condition formula_161 requires formula_162, which gives (after some rearranging): formula_163 References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "F=ma" }, { "math_id": 1, "text": "\\begin{align} F &= -mg\\sin\\theta = ma, \\qquad \\text{so} \\\\ a &= -g \\sin\\theta,\\end{align}" }, { "math_id": 2, "text": "\\begin{align} s &= \\ell\\theta, \\\\ v &= \\frac{ds}{dt} = \\ell\\frac{d\\theta}{dt}, \\\\ a &= \\frac{d^2s}{dt^2} = \\ell\\frac{d^2\\theta}{dt^2}, \\end{align}" }, { "math_id": 3, "text": "\\begin{align} \\ell\\frac{d^2\\theta}{dt^2} &= -g \\sin\\theta, \\\\ \\frac{d^2\\theta}{dt^2} + \\frac{g}{\\ell} \\sin\\theta &= 0. \\end{align} " }, { "math_id": 4, "text": "\\boldsymbol{\\tau} = \\mathbf{r} \\times \\mathbf{F} = \\frac{d\\mathbf{L}}{dt}." }, { "math_id": 5, "text": "\\boldsymbol{ \\tau } = \\mathbf{l} \\times \\mathbf{F}_\\mathrm{g} ," }, { "math_id": 6, "text": "|\\boldsymbol{\\tau}| = -mg\\ell\\sin\\theta," }, { "math_id": 7, "text": "\\mathbf{ L } = \\mathbf{r} \\times \\mathbf{p} = m \\mathbf{r} \\times (\\boldsymbol{\\omega} \\times \\mathbf{r}) ." }, { "math_id": 8, "text": " |\\mathbf{L}| = mr^2 \\omega = m \\ell^2 \\frac{d\\theta}{dt} ." }, { "math_id": 9, "text": "\\frac {d}{dt}|\\mathbf{L}| = m \\ell^2 \\frac{d^2\\theta}{dt^2} ," }, { "math_id": 10, "text": "-mg\\ell\\sin\\theta = m \\ell^2 \\frac{d^2\\theta}{dt^2} ," }, { "math_id": 11, "text": "\\frac{d^2\\theta}{dt^2} + \\frac{g}{\\ell} \\sin\\theta = 0," }, { "math_id": 12, "text": "h" }, { "math_id": 13, "text": "\\Delta U = mgh." }, { "math_id": 14, "text": "\\Delta K = \\tfrac12 mv^2." }, { "math_id": 15, "text": "\\tfrac12 mv^2=mgh." }, { "math_id": 16, "text": "v = \\sqrt{2gh}." }, { "math_id": 17, "text": "\\begin{align} v = \\ell\\frac{d\\theta}{dt} &= \\sqrt{2gh}, \\quad \\text{so} \\\\ \\frac{d\\theta}{dt} &= \\frac{\\sqrt{2gh}}{\\ell}, \\end{align}" }, { "math_id": 18, "text": "y_0 = \\ell\\cos\\theta_0." }, { "math_id": 19, "text": "y_1 = \\ell\\cos\\theta." }, { "math_id": 20, "text": "h = \\ell\\left(\\cos\\theta-\\cos\\theta_0\\right)." 
}, { "math_id": 21, "text": "\\begin{align}\n\\frac{d}{dt}\\frac{d\\theta}{dt} &= \\frac{d}{dt}\\sqrt{\\frac{2g} \\ell \\left(\\cos\\theta-\\cos\\theta_0\\right)}, \\\\\n\\frac{d^2\\theta}{dt^2} & = \\frac12\\frac{-\\frac{2g} \\ell \\sin\\theta}{\\sqrt{\\frac{2g} \\ell (\\cos\\theta-\\cos\\theta_0)}}\\frac{d\\theta}{dt} \\\\\n& = \\frac12\\frac{-\\frac{2g}{\\ell} \\sin\\theta}{\\sqrt{\\frac{2g}{\\ell} (\\cos\\theta-\\cos\\theta_0)}}\\sqrt{\\frac{2g}{\\ell} (\\cos\\theta-\\cos\\theta_0)} = -\\frac g \\ell \\sin\\theta, \\\\\n\\frac{d^2\\theta}{dt^2} &+ \\frac{g}{\\ell} \\sin\\theta = 0,\n\\end{align}" }, { "math_id": 22, "text": "\\mathcal{L}" }, { "math_id": 23, "text": "q" }, { "math_id": 24, "text": "\\frac{d}{dt}\\left(\\frac{\\partial \\mathcal{L}}{\\partial \\dot{q_j}}\\right) = \\frac{\\partial \\mathcal{L}}{\\partial q_j}." }, { "math_id": 25, "text": " x = \\ell\\sin{\\theta}," }, { "math_id": 26, "text": " y = -\\ell\\cos{\\theta}," }, { "math_id": 27, "text": " \\dot{x} = -\\ell\\dot{\\theta}\\cos{\\theta}," }, { "math_id": 28, "text": " \\dot{y} = \\ell\\dot{\\theta}\\sin{\\theta}." }, { "math_id": 29, "text": "\\begin{align}\n\\mathcal{L} &= E_k - E_p \\\\\n&= \\frac{1}{2} m v^2 + m g h \\\\\n&= \\frac{1}{2} m (\\dot{x}^2 + \\dot{y}^2) - m g \\ell (1 - \\cos{\\theta}) \\\\\n&= \\frac{1}{2} m \\ell^2 \\dot{\\theta}^2 - m g \\ell + m g \\ell \\cos{\\theta}.\n\\end{align}" }, { "math_id": 30, "text": "q = \\theta" }, { "math_id": 31, "text": "\\begin{align}\n\\frac{d}{dt}\\left(\\frac{\\partial \\mathcal{L}}{\\partial \\dot{\\theta}}\\right) &= \\frac{\\partial \\mathcal{L}}{\\partial \\theta} \\\\\n\\frac{d}{dt}(m \\ell^2 \\dot{\\theta}) &= -m g \\ell \\sin{\\theta} \\\\\nm \\ell^2 \\ddot{\\theta} &= -m g \\ell \\sin{\\theta} \\\\\n\\ddot{\\theta} &= -\\frac{g}{\\ell}\\sin{\\theta}. \\\\\n\\end{align}" }, { "math_id": 32, "text": " \\frac{d^2 \\theta}{d t^2} + \\frac{g}{\\ell}\\sin{\\theta} = 0." 
}, { "math_id": 33, "text": "\\theta \\ll 1," }, { "math_id": 34, "text": "\\sin\\theta\\approx\\theta," }, { "math_id": 35, "text": "\\frac{d^2\\theta}{dt^2}+\\frac{g}{\\ell} \\theta=0." }, { "math_id": 36, "text": "\\theta(t) = \\theta_0\\cos\\left(\\sqrt\\frac{g}{\\ell}\\,t\\right) \\quad\\quad\\quad\\quad \\theta_0 \\ll 1." }, { "math_id": 37, "text": "T_0 = 2\\pi\\sqrt{\\frac\\ell g} \\quad\\quad\\quad\\quad\\quad \\theta_0 \\ll 1" }, { "math_id": 38, "text": "T_0 = 2\\pi\\sqrt{\\frac \\ell g}" }, { "math_id": 39, "text": " \\ell = \\frac{g}{\\pi^2}\\frac{T_0^2} 4." }, { "math_id": 40, "text": "\\begin{align} \\ell &\\approx \\frac{T_0^2}{4}, \\\\ T_0 &\\approx 2 \\sqrt\\ell \\end{align}" }, { "math_id": 41, "text": "\\frac{dt}{d\\theta} = \\sqrt\\frac\\ell{2g}\\frac{1}{\\sqrt{\\cos\\theta-\\cos\\theta_0}}" }, { "math_id": 42, "text": "T = t(\\theta_0 \\rightarrow 0 \\rightarrow -\\theta_0 \\rightarrow 0 \\rightarrow\\theta_0)," }, { "math_id": 43, "text": "T = 2 t(\\theta_0 \\rightarrow 0 \\rightarrow -\\theta_0)," }, { "math_id": 44, "text": "T = 4 t(\\theta_0 \\rightarrow 0)," }, { "math_id": 45, "text": "T = 4\\sqrt\\frac\\ell{2g}\\int^{\\theta_0}_0 \\frac{d\\theta}{\\sqrt{\\cos\\theta-\\cos\\theta_0}} ." }, { "math_id": 46, "text": " \\lim_{\\theta_0 \\to \\pi} T = \\infty, " }, { "math_id": 47, "text": "T = 4\\sqrt\\frac\\ell g F\\left( \\frac{\\pi} 2, \\sin \\frac{\\theta_0} 2\\right) " }, { "math_id": 48, "text": "F(\\varphi , k) = \\int_0^\\varphi \\frac {du} {\\sqrt{1-k^2\\sin^2 u}}\\,." }, { "math_id": 49, "text": "\\sin{u} = \\frac{\\sin\\frac{\\theta}{2}}{\\sin\\frac{\\theta_0}{2}}" }, { "math_id": 50, "text": "T = \\frac{2 T_0}{\\pi} K(k), \\qquad \\text{where} \\quad k=\\sin\\frac{\\theta_0}{2}." }, { "math_id": 51, "text": "K(k) = F \\left( \\frac \\pi 2, k \\right) = \\int_0^\\frac{\\pi}{2} \\frac{du}{\\sqrt{1-k^2\\sin^2 u}}\\,." 
}, { "math_id": 52, "text": "4\\sqrt{\\frac{1\\text{ m}}{g}}\\ K\\left(\\sin\\frac{10^\\circ} {2} \\right)\\approx 2.0102\\text{ s}." }, { "math_id": 53, "text": "2\\pi \\sqrt{\\frac{1\\text{ m}}{g}} \\approx 2.0064\\text{ s}." }, { "math_id": 54, "text": "K(k) =\\frac{\\pi}{2}\\sum_{n=0}^\\infty \\left(\\frac{(2n-1)!!}{(2n)!!}k^{n}\\right)^{2}" }, { "math_id": 55, "text": "\\begin{alignat}{2}\nT & = 2\\pi \\sqrt\\frac \\ell g \\left( 1+ \\left( \\frac{1}{2} \\right)^2 \\sin^2 \\frac{\\theta_0}{2} + \\left( \\frac{1 \\cdot 3}{2 \\cdot 4} \\right)^2 \\sin^4 \\frac{\\theta_0}{2} + \\left( \\frac {1 \\cdot 3 \\cdot 5}{2 \\cdot 4 \\cdot 6} \\right)^2 \\sin^6 \\frac{\\theta_0}{2} + \\cdots \\right) \\\\\n & = 2\\pi \\sqrt\\frac\\ell g \\cdot \\sum_{n=0}^\\infty \\left( \\left ( \\frac{(2n)!}{( 2^n \\cdot n! )^2} \\right )^2 \\cdot \\sin^{2 n}\\frac{\\theta_0}{2} \\right).\\end{alignat}" }, { "math_id": 56, "text": "\\sin \\frac{\\theta_0} 2 =\\frac12\\theta_0 - \\frac{1}{48}\\theta_0^3 + \\frac{1}{3\\,840}\\theta_0^5 - \\frac{1}{645\\,120}\\theta_0^7 + \\cdots." }, { "math_id": 57, "text": "T = 2\\pi \\sqrt\\frac\\ell g \\left( 1+ \\frac{1}{16}\\theta_0^2 + \\frac{11}{3\\,072}\\theta_0^4 + \\frac{173}{737\\,280}\\theta_0^6 + \\frac{22\\,931}{1\\,321\\,205\\,760}\\theta_0^8 + \\frac{1\\,319\\,183}{951\\,268\\,147\\,200}\\theta_0^{10} + \\frac{233\\,526\\,463}{2\\,009\\,078\\,326\\,886\\,400}\\theta_0^{12} + \\cdots \\right)," }, { "math_id": 58, "text": "K(k) = \\frac {\\pi}{2 M(1-k,1+k)}," }, { "math_id": 59, "text": "T = \\frac{2\\pi}{M\\left(1, \\cos\\frac{\\theta_0} 2 \\right)} \\sqrt\\frac\\ell g." }, { "math_id": 60, "text": "T_1 = \\frac{2T_0}{1 + \\cos \\frac{\\theta_0}{2}}. " }, { "math_id": 61, "text": "\\frac{1}{2}\\left(1+\\cos\\left(\\frac{\\theta_0}{2}\\right)\\right) = \\cos^2 \\frac{\\theta_0}{4}," }, { "math_id": 62, "text": "T_1 = T_0 \\sec^2 \\frac{\\theta_0}{4}." 
}, { "math_id": 63, "text": "\\sec^2(\\theta_0/4)" }, { "math_id": 64, "text": "T \\approx T_0 \\left(1 + \\frac{\\theta_0^2}{16} \\right)." }, { "math_id": 65, "text": "T_2 = \\frac{4T_0}{1 + \\cos \\frac{\\theta_0}{2} + 2\\sqrt{\\cos \\frac{\\theta_0}{2}}} = \\frac{4T_0}{\\left(1 + \\sqrt{\\cos \\frac{\\theta_0}{2}} \\right)^2}. " }, { "math_id": 66, "text": "T" }, { "math_id": 67, "text": "\\theta_0 < \\pi" }, { "math_id": 68, "text": "K(k)" }, { "math_id": 69, "text": "k \\equiv \\sin(\\theta_0/2)" }, { "math_id": 70, "text": "\\pi/2" }, { "math_id": 71, "text": "\\pi" }, { "math_id": 72, "text": "T \\approx -\\,T_0 \\, \\frac{\\ln{a}}{1-a}" }, { "math_id": 73, "text": "a \\equiv \\cos{(\\theta_0/2)}" }, { "math_id": 74, "text": "T \\approx \\frac{2}{\\pi}\\,T_0\\,\\ln{(4/a)}" }, { "math_id": 75, "text": "\\pi/2<\\theta_0<\\pi" }, { "math_id": 76, "text": "T \\approx \\frac{r\\,a^2\\,T_\\text{Lima} +k^2\\,T_\\text{Cromer}}{r\\,a^2+k^2}," }, { "math_id": 77, "text": "r = 7.17" }, { "math_id": 78, "text": "\\theta_0 = 95^\\circ" }, { "math_id": 79, "text": "\\theta(t)" }, { "math_id": 80, "text": "\n\\theta(t) = 8\\sum_{n \\geq 1\\text{ odd}}\\frac{(-1)^{\\left\\lfloor{n/2}\\right\\rfloor}}{n}\\frac{q^{n/2}}{1+q^{n}}\\cos(n\\omega t)\n" }, { "math_id": 81, "text": "q=\\exp\\left({-\\pi K\\bigl(\\sqrt{\\textstyle 1-k^2}\\bigr) \\big/ K(k)}\\right)," }, { "math_id": 82, "text": "k=\\sin (\\theta_0/2)," }, { "math_id": 83, "text": "\\omega=2\\pi/T" }, { "math_id": 84, "text": "\\varepsilon=\\frac12\\cdot \\frac{1-\\sqrt{\\cos(\\theta_0/2)}}{1+\\sqrt{\\cos(\\theta_0/2)}} " }, { "math_id": 85, "text": "q = \\varepsilon + 2\\varepsilon^5 + 15\\varepsilon^{9} + 150\\varepsilon^{13} + 1707\\varepsilon^{17} + 20910\\varepsilon^{21} + \\cdots " }, { "math_id": 86, "text": "\\varepsilon < \\tfrac 1 2" }, { "math_id": 87, "text": "\\theta_0<\\pi" }, { "math_id": 88, "text": "\\operatorname{cd}" }, { "math_id": 89, "text": "k" }, { "math_id": 90, "text": 
"\\theta(t)=2\\arcsin\\left(k\\operatorname{cd}\\left(\\sqrt{\\frac{g}{\\ell}}t;k\\right)\\right),\\quad k=\\sin\\frac{\\theta_0}{2}." }, { "math_id": 91, "text": "x" }, { "math_id": 92, "text": "\\sin x\\approx x" }, { "math_id": 93, "text": "\\arcsin x\\approx x" }, { "math_id": 94, "text": "\\operatorname{cd}(t;0)=\\cos t" }, { "math_id": 95, "text": "O" }, { "math_id": 96, "text": "I_O" }, { "math_id": 97, "text": "\\tau = I \\alpha" }, { "math_id": 98, "text": "\\alpha" }, { "math_id": 99, "text": "\\tau" }, { "math_id": 100, "text": "\\tau = - m g r_\\oplus \\sin\\theta" }, { "math_id": 101, "text": "m" }, { "math_id": 102, "text": "r_\\oplus" }, { "math_id": 103, "text": "\\theta" }, { "math_id": 104, "text": "\\sin\\theta\\approx\\theta" }, { "math_id": 105, "text": "\\theta_\\mathrm{max}\\ll 1" }, { "math_id": 106, "text": "\\alpha = \\ddot{\\theta} = -\\frac{mgr_\\oplus}{I_O}\\sin\\theta \\approx -\\frac{mgr_\\oplus}{I_O}\\theta" }, { "math_id": 107, "text": "T = 2 \\pi \\sqrt{\\frac{I_O}{mgr_\\oplus}}" }, { "math_id": 108, "text": "f = \\frac{1}{T} = \\frac{1}{2\\pi} \\sqrt{\\frac{mgr_\\oplus}{I_O}}" }, { "math_id": 109, "text": "\\alpha = \\ddot{\\theta} = -\\frac{mgr_\\oplus}{I_O}\\sin\\theta" }, { "math_id": 110, "text": "T = 4 \\operatorname{K}\\left(\\sin^2\\frac{\\theta_\\mathrm{max}}{2}\\right) \\sqrt{\\frac{I_O}{mgr_\\oplus}}" }, { "math_id": 111, "text": "\\theta_\\mathrm{max}" }, { "math_id": 112, "text": "\\operatorname{K}(k)" }, { "math_id": 113, "text": "\\ell^\\mathrm{eq}" }, { "math_id": 114, "text": "\\omega_0" }, { "math_id": 115, "text": " {\\omega_0}^2 = \\frac{g}{\\ell^\\mathrm{eq}} := \\frac{mgr_\\oplus}{I_O} \\implies \\ell^\\mathrm{eq} = \\frac{I_O}{mr_\\oplus} " }, { "math_id": 116, "text": "\\ell" }, { "math_id": 117, "text": "r_\\oplus=\\ell" }, { "math_id": 118, "text": "I_O=m\\ell^2" }, { "math_id": 119, "text": "{\\omega_0}^2 = \\frac{mgr_\\oplus}{I_O}=\\frac{mg\\ell}{m\\ell^2}=\\frac{g}{\\ell}" }, { "math_id": 120, "text": 
"\\ell^\\mathrm{eq}=\\ell" }, { "math_id": 121, "text": "r_\\oplus=\\frac{1}{2}\\ell" }, { "math_id": 122, "text": "I_O=\\frac{1}{3}m\\ell^2" }, { "math_id": 123, "text": "{\\omega_0}^2 = \\frac{mgr_\\oplus}{I_O}=\\frac{mg\\,\\frac{1}{2}\\ell}{\\frac{1}{3}m\\ell^2}=\\frac{g}{\\frac{2}{3}\\ell}" }, { "math_id": 124, "text": "\\ell^\\mathrm{eq}=\\frac{2}{3}\\ell" }, { "math_id": 125, "text": "m_\\mathrm{rod}" }, { "math_id": 126, "text": "m_\\mathrm{bob}" }, { "math_id": 127, "text": "m_\\mathrm{bob}+m_\\mathrm{rod}" }, { "math_id": 128, "text": "m r_\\oplus=m_\\mathrm{bob}\\ell+m_\\mathrm{rod}\\frac{\\ell}{2}" }, { "math_id": 129, "text": "I_O=m_\\mathrm{bob}\\ell^2+\\frac{1}{3}m_\\mathrm{rod}\\ell^2" }, { "math_id": 130, "text": "{\\omega_0}^2 = \\frac{mgr_\\oplus}{I_O}=\\frac{\\left(m_\\mathrm{bob}\\ell+m_\\mathrm{rod}\\frac{\\ell}{2}\\right)g}{m_\\mathrm{bob}\\ell^2+\\frac{1}{3}m_\\mathrm{rod}\\ell^2} = \\frac{g}{\\ell} \\frac{m_\\mathrm{bob}+\\frac{m_\\mathrm{rod}}{2}}{m_\\mathrm{bob}+\\frac{m_\\mathrm{rod}}{3}} = \\frac{g}{\\ell} \\frac{1+\\frac{m_\\mathrm{rod}}{2m_\\mathrm{bob}}}{1+\\frac{m_\\mathrm{rod}}{3m_\\mathrm{bob}}} " }, { "math_id": 131, "text": "\\ell^\\mathrm{eq} = \\ell \\frac{1+\\frac{m_\\mathrm{rod}}{3m_\\mathrm{bob}}} {1+\\frac{m_\\mathrm{rod}}{2m_\\mathrm{bob}}} " }, { "math_id": 132, "text": "\\frac{m_\\mathrm{rod}}{m_\\mathrm{bob}}" }, { "math_id": 133, "text": "\\frac{m_\\mathrm{rod}}{m_\\mathrm{bob}}\\ll 1" }, { "math_id": 134, "text": "{\\omega_0}^2 \\approx \\frac{g}{\\ell} \\left( 1+\\frac{1}{6}\\frac{m_\\mathrm{rod}}{m_\\mathrm{bob}}+\\cdots\\right)" }, { "math_id": 135, "text": "ml^2 \\frac{d^2 \\theta}{dt^2} = -mgl \\sin \\theta" }, { "math_id": 136, "text": "ml^2 \\frac{d^2 \\theta}{dt^2} = -mgl\\sin \\theta - b\\frac{d\\theta}{dt} + a\\cos(\\Omega t)" }, { "math_id": 137, "text": "a" }, { "math_id": 138, "text": "b" }, { "math_id": 139, "text": "\\Omega" }, { "math_id": 140, "text": "ml^2" }, { "math_id": 141, "text": "\\frac{d^2 
\\theta}{dt^2} + \\frac{b}{ml^2}\\frac{d\\theta}{dt} + \\frac{g}{l}{\\sin \\theta} - \\frac{a}{ml^2}\\cos (\\Omega t) = 0." }, { "math_id": 142, "text": "\\frac{d^2 \\theta}{dt^2} + \\frac{b}{I}\\frac{d\\theta}{dt} + \\frac{mgr_\\oplus}{I}{\\sin \\theta} - \\frac{a}{I}\\cos (\\Omega t) = 0." }, { "math_id": 143, "text": "E_\\text{K}=\\frac{1}{2}mL^2\\left(\\dot\\theta_1^2+\\dot\\theta_2^2\\right)" }, { "math_id": 144, "text": "L" }, { "math_id": 145, "text": "\\theta_1" }, { "math_id": 146, "text": "\\theta_2" }, { "math_id": 147, "text": "E_\\text{p}=mgL(2-\\cos\\theta_1-\\cos\\theta_2)+\\frac{1}{2}kL^2(\\theta_2-\\theta_1)^2" }, { "math_id": 148, "text": "g" }, { "math_id": 149, "text": "L(\\theta_2-\\theta_1)" }, { "math_id": 150, "text": "\\mathcal{L}=\\frac{1}{2}mL^2\\left(\\dot\\theta_1^2+\\dot\\theta_2^2\\right)-mgL(2-\\cos\\theta_1-\\cos\\theta_2)-\\frac{1}{2} k L^2(\\theta_2-\\theta_1)^2" }, { "math_id": 151, "text": "\\begin{align}\n\\ddot\\theta_1+\\frac{g}{L}\\sin\\theta_1+\\frac{k}{m}(\\theta_1-\\theta_2)&=0 \\\\\n\\ddot\\theta_2+\\frac{g}{L}\\sin\\theta_2-\\frac{k}{m}(\\theta_1-\\theta_2)&=0\n\\end{align}" }, { "math_id": 152, "text": "\\theta_1+\\theta_2" }, { "math_id": 153, "text": "\\theta_1-\\theta_2" }, { "math_id": 154, "text": "\\begin{align}\n\\ddot\\theta_1+\\ddot\\theta_2+\\frac{g}{L}(\\theta_1+\\theta_2)&=0 \\\\\n\\ddot\\theta_1-\\ddot\\theta_2+\\left(\\frac{g}{L}+2\\frac{k}{m}\\right)(\\theta_1-\\theta_2)&=0\n\\end{align}" }, { "math_id": 155, "text": "\\begin{align}\n\\theta_1+\\theta_2&=A\\cos(\\omega_1t+\\alpha) \\\\\n\\theta_1-\\theta_2&=B\\cos(\\omega_2t+\\beta)\n\\end{align}" }, { "math_id": 156, "text": "\\begin{align}\n\\omega_1&=\\sqrt{\\frac{g}{L}} \\\\\n\\omega_2&=\\sqrt{\\frac{g}{L}+2\\frac{k}{m}}\n\\end{align}" }, { "math_id": 157, "text": "A" }, { "math_id": 158, "text": "B" }, { "math_id": 159, "text": "\\beta" }, { "math_id": 160, "text": 
"\\begin{align}\n\\theta_1&=\\frac{1}{2}A\\cos(\\omega_1t+\\alpha)+\\frac{1}{2}B\\cos(\\omega_2t+\\beta) \\\\\n\\theta_2&=\\frac{1}{2}A\\cos(\\omega_1t+\\alpha)-\\frac{1}{2}B\\cos(\\omega_2t+\\beta)\n\\end{align}" }, { "math_id": 161, "text": "\\dot\\theta_1(0)=\\dot\\theta_2(0)=0" }, { "math_id": 162, "text": "\\alpha=\\beta=0" }, { "math_id": 163, "text": "\\begin{align}\nA&=\\theta_1(0)+\\theta_2(0)\\\\\nB&=\\theta_1(0)-\\theta_2(0)\n\\end{align}" } ]
https://en.wikipedia.org/wiki?curid=5937299
593773
Unimodular matrix
Integer matrices with +1 or −1 determinant; invertible over the integers. In mathematics, a unimodular matrix "M" is a square integer matrix having determinant +1 or −1. Equivalently, it is an integer matrix that is invertible over the integers: there is an integer matrix "N" that is its inverse (these are equivalent under Cramer's rule). Thus every equation "Mx" = "b", where "M" and "b" both have integer components and "M" is unimodular, has an integer solution. The "n" × "n" unimodular matrices form a group called the "n" × "n" general linear group over formula_0, which is denoted formula_1. Examples of unimodular matrices. Unimodular matrices form a subgroup of the general linear group under matrix multiplication; in particular, the identity matrix, the inverse of any unimodular matrix, and the product of two unimodular matrices are all unimodular. Total unimodularity. A totally unimodular matrix (TU matrix) is a matrix for which every square non-singular submatrix is unimodular. Equivalently, every square submatrix has determinant 0, +1 or −1. A totally unimodular matrix need not be square itself. From the definition it follows that any submatrix of a totally unimodular matrix is itself totally unimodular (TU). Furthermore, it follows that any TU matrix has only 0, +1 or −1 entries. The converse is not true, i.e., a matrix with only 0, +1 or −1 entries is not necessarily totally unimodular. A matrix is TU if and only if its transpose is TU. Totally unimodular matrices are extremely important in polyhedral combinatorics and combinatorial optimization since they give a quick way to verify that a linear program is integral (has an integral optimum, when any optimum exists). Specifically, if "A" is TU and "b" is integral, then linear programs of forms like formula_3 or formula_4 have integral optima, for any "c". Hence if "A" is totally unimodular and "b" is integral, every extreme point of the feasible region (e.g. formula_5) is integral and thus the feasible region is an integral polyhedron. Common totally unimodular matrices. 1. 
The unoriented incidence matrix of a bipartite graph, which is the coefficient matrix for bipartite matching, is totally unimodular (TU). (The unoriented incidence matrix of a non-bipartite graph is not TU.) More generally, in the appendix to a paper by Heller and Tompkins, A.J. Hoffman and D. Gale prove the following. Let formula_6 be an "m" by "n" matrix whose rows can be partitioned into two disjoint sets formula_7 and formula_8. Then the following four conditions together are sufficient for "A" to be totally unimodular: every entry of "A" is 0, +1 or −1; every column of "A" contains at most two non-zero entries; if two non-zero entries in a column of "A" have the same sign, then the row of one is in formula_7 and the other in formula_8; and if two non-zero entries in a column of "A" have opposite signs, then the rows of both are in formula_7, or both are in formula_8. It was realized later that these conditions define an incidence matrix of a balanced signed graph; thus, this example says that the incidence matrix of a signed graph is totally unimodular if the signed graph is balanced. The converse is valid for signed graphs without half edges (this generalizes the property of the unoriented incidence matrix of a graph). 2. The constraints of maximum flow and minimum cost flow problems yield a coefficient matrix with these properties (and with empty "C"). Thus, such network flow problems with bounded integer capacities have an integral optimal value. Note that this does not apply to multi-commodity flow problems, in which it is possible to have a fractional optimal value even with bounded integer capacities. 3. The consecutive-ones property: if "A" is (or can be permuted into) a 0-1 matrix in which for every row, the 1s appear consecutively, then "A" is TU. (The same holds for columns since the transpose of a TU matrix is also TU.) 4. Every network matrix is TU. The rows of a network matrix correspond to a tree "T" = ("V", "R"), each of whose arcs has an arbitrary orientation (it is not necessary that there exist a root vertex "r" such that the tree is "rooted into "r"" or "out of "r""). The columns correspond to another set "C" of arcs on the same vertex set "V". To compute the entry at row "R" and column "C" = "st", look at the "s"-to-"t" path "P" in "T"; then the entry is +1 if the arc of row "R" appears in "P" with the same orientation, −1 if it appears with the opposite orientation, and 0 if it does not appear in "P". See more in Schrijver (2003). 5. 
Ghouila-Houri showed that a matrix is TU iff for every subset "R" of rows, there is an assignment formula_9 of signs to rows so that the signed sum formula_10 (which is a row vector of the same width as the matrix) has all its entries in formula_11 (i.e. the row-submatrix has discrepancy at most one). This and several other if-and-only-if characterizations are proven in Schrijver (1998). 6. Hoffman and Kruskal proved the following theorem. Suppose formula_12 is a directed graph without 2-dicycles, formula_13 is the set of all dipaths in formula_12, and formula_6 is the 0-1 incidence matrix of formula_14 versus formula_13. Then formula_6 is totally unimodular if and only if every simple arbitrarily-oriented cycle in formula_12 consists of alternating forwards and backwards arcs. 7. Suppose a matrix has 0, +1 and −1 entries and in each column, the entries are non-decreasing from top to bottom (so all −1s are on top, then 0s, then 1s are on the bottom). Fujishige showed that the matrix is TU iff every 2-by-2 submatrix has determinant in formula_16. 8. Seymour (1980) proved a full characterization of all TU matrices, which we describe here only informally. Seymour's theorem is that a matrix is TU if and only if it is a certain natural combination of some network matrices and some copies of a particular 5-by-5 TU matrix. Concrete examples. 1. The following matrix is totally unimodular: formula_17 This matrix arises as the coefficient matrix of the constraints in the linear programming formulation of the maximum flow problem on a small flow network. 2. Any matrix of the form formula_18 is "not" totally unimodular, since it has a square submatrix of determinant −2. Abstract linear algebra. Abstract linear algebra considers matrices with entries from any commutative ring formula_19, not limited to the integers. In this context, a unimodular matrix is one that is invertible over the ring; equivalently, one whose determinant is a unit. The group of such matrices is denoted formula_20. 
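To illustrate invertibility over the integers (a hypothetical sketch of ours, not taken from the article), Cramer's rule gives an integer inverse whenever the determinant is a unit of the integers, i.e. ±1:

```python
def inverse_2x2_over_Z(m):
    """Integer inverse of a 2x2 integer matrix with determinant +1 or -1,
    via the adjugate: inverse = adj(M) / det(M)."""
    (a, b), (c, d) = m
    det = a * d - b * c
    if det not in (1, -1):
        raise ValueError("matrix is not unimodular over the integers")
    # Division by +1 or -1 keeps every entry an integer.
    return [[d // det, -b // det], [-c // det, a // det]]
```

For example, [[2, 1], [1, 1]] has determinant 1, and the function returns its integer inverse [[1, −1], [−1, 2]].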
A rectangular formula_21-by-formula_22 matrix is said to be unimodular if it can be extended with formula_23 rows in formula_24 to a unimodular square matrix. Over a field, "unimodular" has the same meaning as "non-singular". "Unimodular" here refers to matrices with coefficients in some ring (often the integers) which are invertible over that ring, and one uses "non-singular" to mean matrices that are invertible over the field.
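The determinant characterization of total unimodularity can be checked directly for small matrices. The sketch below is a brute-force test (exponential in the matrix size, so only suitable for small examples; practical recognition uses Seymour's decomposition), applied to the two concrete examples from the text: the 4-by-6 maximum-flow coefficient matrix (formula_17), and a matrix containing the 2-by-2 submatrix of determinant −2:

```python
import itertools

import numpy as np


def is_totally_unimodular(A, tol=1e-9):
    """Brute-force TU test: every square submatrix must have
    determinant in {-1, 0, +1}.  Exponential in the matrix size."""
    A = np.asarray(A, dtype=float)
    m, n = A.shape
    for k in range(1, min(m, n) + 1):
        for rows in itertools.combinations(range(m), k):
            for cols in itertools.combinations(range(n), k):
                d = np.linalg.det(A[np.ix_(rows, cols)])
                if abs(d - round(d)) > tol or round(d) not in (-1, 0, 1):
                    return False
    return True


# The 4x6 max-flow coefficient matrix from the text (formula_17) is TU:
A = [[-1, -1,  0,  0,  0, +1],
     [+1,  0, -1, -1,  0,  0],
     [ 0, +1, +1,  0, -1,  0],
     [ 0,  0,  0, +1, +1, -1]]

# Any matrix containing this 2x2 submatrix (determinant -2) is not TU:
B = [[1,  1],
     [1, -1]]
```

The transpose invariance mentioned for the consecutive-ones property also follows immediately from this test, since every square submatrix of the transpose is the transpose of a square submatrix.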
[ { "math_id": 0, "text": "\\mathbb{Z}" }, { "math_id": 1, "text": "\\operatorname{GL}_n(\\mathbb{Z})" }, { "math_id": 2, "text": " \\det(A \\otimes B) = (\\det A)^q (\\det B)^p, " }, { "math_id": 3, "text": "\\{\\min c^\\top x \\mid Ax \\ge b, x \\ge 0\\}" }, { "math_id": 4, "text": "\\{\\max c^\\top x \\mid Ax \\le b\\}" }, { "math_id": 5, "text": "\\{x \\mid Ax \\ge b\\}" }, { "math_id": 6, "text": "A" }, { "math_id": 7, "text": "B" }, { "math_id": 8, "text": "C" }, { "math_id": 9, "text": "s:R \\to \\pm1" }, { "math_id": 10, "text": "\\sum_{r \\in R} s(r)r" }, { "math_id": 11, "text": "\\{0,\\pm1\\}" }, { "math_id": 12, "text": "G" }, { "math_id": 13, "text": "P" }, { "math_id": 14, "text": "V(G)" }, { "math_id": 15, "text": "\\pm" }, { "math_id": 16, "text": "0, \\pm1" }, { "math_id": 17, "text": "A=\\left[\\begin{array}{rrrrrr}\n-1 & -1 & 0 & 0 & 0 & +1\\\\\n+1 & 0 & -1 & -1 & 0 & 0\\\\\n0 & +1 & +1 & 0 & -1 & 0\\\\\n0 & 0 & 0 & +1 & +1 & -1\n\\end{array}\\right]." }, { "math_id": 18, "text": "A=\\left[\\begin{array}{ccccc}\n & \\vdots & & \\vdots \\\\\n\\dotsb & +1 & \\dotsb & +1 & \\dotsb \\\\\n & \\vdots & & \\vdots \\\\\n\\dotsb & +1 & \\dotsb & -1 & \\dotsb \\\\\n & \\vdots & & \\vdots\n\\end{array}\\right]." }, { "math_id": 19, "text": "R" }, { "math_id": 20, "text": "\\operatorname{GL}_n(R)" }, { "math_id": 21, "text": "k" }, { "math_id": 22, "text": "m" }, { "math_id": 23, "text": "m-k" }, { "math_id": 24, "text": "R^m" } ]
https://en.wikipedia.org/wiki?curid=593773
5938327
Billbergia
Genus of flowering plants Billbergia is a genus of flowering plants in the family Bromeliaceae, subfamily Bromelioideae. Description. The "Billbergia" species are rosette-forming, evergreen perennials, usually epiphytic, occasionally terrestrial or lithophytic in habit. They are mostly medium-sized species with narrow leaf funnels. Most species grow as epiphytes on other plants; some grow on rocks or directly on the ground. Water collects in the leaf funnels, and many funnels harbor small biotopes with several species of animals, algae and aquatic plants. The stiff leaves always have reinforced margins (as with all genera of the Bromelioideae) and a spined tip. In some species and varieties, the leaves are strikingly colored. In many species, water-absorbing scales (trichomes) cover the leaves and often the inflorescence as well. They often bloom with brilliantly colored flowers on long-lasting inflorescences. The inflorescence is terminal on a scape and may hang, stand erect, or be decurved. Strikingly colored bracts often sit on the inflorescence; the color red dominates (usually with a blue component). Flowers bisexual, sessile or conspicuously pedicellate; sepals free; petals free, in threes with a double perianth, with basal appendages, often spirally recurved at anthesis; stamens free or adnate to the petals, the anthers without appendages; inferior ovary. There are three sepals. The three petals often show various shades of blue, but yellow, green and white also occur. Birds are the pollinators of the blue-flowered species. An important characteristic that distinguishes them from other genera is that their petals curl up when they wither. The individual flowers bloom for only a few hours and can be pollinated for much less time. Most species have small scales (ligules) at the base of the petals. The six stamens and the style often protrude far from the flower. A large part of the species blooms at night.
The flower formula is: formula_0 to formula_1 The fruits are multi-seeded berries, often strongly colored when ripe; red to blue dominate here. The fruits are eaten by animals (mainly by birds, less often by bats and monkeys). The seeds are excreted undigested and end up on branches with the feces. Taxonomy. The Swedish botanist Carl Peter Thunberg (1743–1828) established the genus "Billbergia" in "Plantarum Brasiliensium" ..., 3, 1821 p. 30, with the type species being "Billbergia speciosa". The genus, named for the Swedish botanist, zoologist, and anatomist Gustaf Johan Billberg (1772–1844), is divided into two subgenera: "Billbergia" and "Helicodea". Species in subgenus "Helicodea" are distinguishable by their tightly recurved 'clock spring' flower petals, unlike other billbergias, where the petals are flared. Distribution. They are native to forest and scrub, up to an altitude of , in southern Mexico, the West Indies, Central America and South America, with many species endemic to Brazil. References.
[ { "math_id": 0, "text": "\\star" }, { "math_id": 1, "text": " \\downarrow \\; K_3 \\; C_{(3)} \\; A_{3+3} \\; G_{\\overline{(3)}}" } ]
https://en.wikipedia.org/wiki?curid=5938327
59383794
Nanoparticle drug delivery
Technologies for targeted drug delivery Nanoparticle drug delivery systems are engineered technologies that use nanoparticles for the targeted delivery and controlled release of therapeutic agents. The modern form of a drug delivery system should minimize side-effects and reduce both dosage and dosage frequency. Recently, nanoparticles have attracted attention due to their potential application for effective drug delivery. Nanomaterials exhibit different chemical and physical properties or biological effects compared to larger-scale counterparts that can be beneficial for drug delivery systems. Some important advantages of nanoparticles are their high surface-area-to-volume ratio, chemical and geometric tunability, and their ability to interact with biomolecules to facilitate uptake across the cell membrane. The large surface area also has a large affinity for drugs and small molecules, like ligands or antibodies, for targeting and controlled release purposes. Nanoparticles refer to a large family of materials, both organic and inorganic. Each material has uniquely tunable properties and thus can be selectively designed for specific applications. Despite the many advantages of nanoparticles, there are also many challenges, including but not limited to: nanotoxicity, biodistribution and accumulation, and the clearance of nanoparticles by the human body. The National Institute of Biomedical Imaging and Bioengineering has issued the following prospects for future research in nanoparticle drug delivery systems: The development of new drug systems is time-consuming; it takes approximately seven years to complete fundamental research and development before advancing to preclinical animal studies. Characterization. Nanoparticle drug delivery focuses on maximizing drug efficacy and minimizing cytotoxicity. Fine-tuning nanoparticle properties for effective drug delivery involves addressing the following factors.
The surface-area-to-volume ratio of nanoparticles can be altered to allow for more ligand binding to the surface. Increasing ligand binding efficiency can decrease dosage and minimize nanoparticle toxicity. Minimizing dosage or dosage frequency also lowers the mass of nanoparticle per mass of drug, thus achieving greater efficiency. Surface functionalization of nanoparticles is another important design aspect and is often accomplished by bioconjugation or passive adsorption of molecules onto the nanoparticle surface. By functionalizing nanoparticle surfaces with ligands that enhance drug binding, suppress immune response, or provide targeting/controlled release capabilities, both a greater efficacy and lower toxicity are achieved. Efficacy is increased as more drug is delivered to the target site, and toxic side effects are lowered by minimizing the total level of drug in the body. The composition of the nanoparticle can be chosen according to the target environment or desired effect. For example, liposome-based nanoparticles can be biologically degraded after delivery, thus minimizing the risk of accumulation and toxicity after the therapeutic cargo has been released. Metal nanoparticles, such as gold nanoparticles, have optical qualities (also described in nanomaterials) that allow for less invasive imaging techniques. Furthermore, the photothermal response of nanoparticles to optical stimulation can be directly utilized for tumor therapy. Platforms. Current nanoparticle drug delivery systems can be cataloged based on their platform composition into several groups: polymeric nanoparticles, inorganic nanoparticles, viral nanoparticles, lipid-based nanoparticles, and nanoparticle albumin-bound (nab) technology. Each family has its unique characteristics. Polymeric nanoparticles. Polymeric nanoparticles are synthetic polymers with a size ranging from 10 to 100 nm. Common synthetic polymeric nanoparticles include polyacrylamide, polyacrylate, and chitosan.
Drug molecules can be incorporated either during or after polymerization. Depending on the polymerization chemistry, the drug can be covalently bonded, encapsulated in a hydrophobic core, or conjugated electrostatically. Common synthetic strategies for polymeric nanoparticles include microfluidic approaches, electrodropping, high pressure homogenization, and emulsion-based interfacial polymerization. Polymer biodegradability is an important aspect to consider when choosing the appropriate nanoparticle chemistry. Nanocarriers composed of biodegradable polymers undergo hydrolysis in the body, producing biocompatible small molecules such as lactic acid and glycolic acid. Polymeric nanoparticles can be created via self assembly or other methods such as particle replication in nonwetting templates (PRINT) which allows customization of composition, size, and shape of the nanoparticle using tiny molds. Dendrimers. Dendrimers are unique hyper-branched synthetic polymers with monodispersed size, well-defined structure, and a highly functionalized terminal surface. They are typically composed of synthetic or natural amino acid, nucleic acids, and carbohydrates. Therapeutics can be loaded with relative ease onto the interior of the dendrimers or the terminal surface of the branches via electrostatic interaction, hydrophobic interactions, hydrogen bonds, chemical linkages, or covalent conjugation. Drug-dendrimer conjugation can elongate the half-life of drugs. Currently, dendrimer use in biological systems is limited due to dendrimer toxicity and limitations in their synthesis methods. Dendrimers are also confined within a narrow size range (&lt;15 nm) and current synthesis methods are subject to low yield. The surface groups will reach the de Gennes dense packing limit at high generation level, which seals the interior from the bulk solution – this can be useful for encapsulation of hydrophobic, poorly soluble drug molecules. 
The seal can be tuned by intramolecular interactions between adjacent surface groups, which can be varied by the condition of the solution, such as pH, polarity, and temperature, a property which can be utilized to tailor encapsulation and controlled release properties. Inorganic Nanoparticles and Nanocrystals. Inorganic nanoparticles have emerged as highly valuable functional building blocks for drug delivery systems due to their well-defined and highly tunable properties such as size, shape, and surface functionalization. Inorganic nanoparticles have been widely adopted in biological and medical applications ranging from imaging and diagnoses to drug delivery. Inorganic nanoparticles are usually composed of inert metals such as gold and titanium that form nanospheres; however, iron oxide nanoparticles have also become an option. Quantum dots (QDs), or inorganic semiconductor nanocrystals, have also emerged as valuable tools in the field of bionanotechnology because of their unique size-dependent optical properties and versatile surface chemistry. Their diameters (2–10 nm) are on the order of the exciton Bohr radius, resulting in quantum confinement effects analogous to the "particle-in-a-box" model. As a result, optical and electronic properties of quantum dots vary with their size: nanocrystals of larger sizes will emit lower energy light upon fluorescence excitation. Surface engineering of QDs is crucial for creating nanoparticle–biomolecule hybrids capable of participating in biological processes. Manipulation of nanocrystal core composition, size, and structure changes QD photo-physical properties. Designing coating materials which encapsulate the QD core in an organic shell makes nanocrystals biocompatible, and QDs can be further decorated with biomolecules to enable more specific interaction with biological targets.
The design of an inorganic nanocrystal core coupled with a biologically compatible organic shell and surface ligands can combine the useful properties of both materials, i.e. the optical properties of the QDs and the biological functions of the ligands attached. Toxicity. While application of inorganic nanoparticles in bionanotechnology shows encouraging advancements from a materials science perspective, the use of such materials in vivo is limited by issues related to toxicity, biodistribution and bioaccumulation. Because metal inorganic nanoparticle systems degrade into their constituent metal atoms, challenges may arise from the interactions of these materials with biosystems, and a considerable amount of the particles may remain in the body after treatment, leading to buildup of metal particles potentially resulting in toxicity. Recently, however, some studies have shown that certain nanoparticle environmental toxicity effects are not apparent until nanoparticles undergo transformations to release free metal ions. Under aerobic and anaerobic conditions, it was found that copper, silver, and titanium nanoparticles released low or insignificant levels of metal ions. This is evidence that copper, silver, and titanium NPs are slow to release metal ions, and may therefore appear at low levels in the environment. Additionally, nanoshell coatings significantly protect against degradation in the cellular environment and also reduce QD toxicity by reducing metal ion leakage from the core. Organic Nanocrystals. Organic nanocrystals consist of pure drugs and surface active agents required for stabilization. They are defined as carrier-free submicron colloidal drug delivery systems with a mean particle size in the nanometer range. The primary importance of the formulation of drugs into nanocrystals is the increase in particle surface area in contact with the dissolution medium, therefore increasing bioavailability. A number of drug products formulated in this way are on the market.
Solubility. One of the issues faced by drug delivery is the solubility of the drug in the body; around 40% of newly discovered chemical entities in drug discovery are poorly soluble in water. This low solubility affects the bioavailability of the drug, meaning the rate at which the drug reaches the circulatory system and thus the target site. Low bioavailability is most commonly seen in oral administration, which is the preferred choice for drug administration due to its convenience, low costs, and good patient compliance. A measure to improve poor bioavailability is to inject the drugs in a solvent mixture with a solubilizing agent. However, results show this solution is ineffective, with the solubilizing agent demonstrating side-effects and/or toxicity. Nanocrystals used for drug delivery can increase saturation solubility and dispersion velocity. Generally, saturation solubility is thought to be a function of temperature, but in the case of nanocrystals it also depends on other factors, such as crystalline structure and particle size. The Ostwald-Freundlich equation below shows this relationship: formula_0 where Cs is the saturation solubility of the nanocrystal, C𝛼 is the solubility of the drug at a non-nano scale, σ is the interfacial tension of the substance, V is the molar volume of the particle, R is the gas constant, T is the absolute temperature, 𝜌 is the density of the solid, and r is the radius. The advantage of nanocrystals is that they can improve oral absorption, bioavailability, and onset of action, and reduce intersubject variability. Consequently, nanocrystals are now being produced and are on the market for a variety of purposes ranging from antidepressants to appetite stimulants. Nanocrystals can be produced in two different ways: the top-down method or the bottom-up method. Bottom-up technologies are also known as nanoprecipitation. This technique involves dissolving a drug in a suitable solvent and then precipitating it with a non-solvent.
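The Ostwald-Freundlich relation above can be evaluated numerically. The sketch below implements the equation exactly as written in the text (formula_0), with hypothetical material constants that are not taken from the text; it illustrates the qualitative claim that the predicted saturation solubility rises as the particle radius shrinks and approaches the bulk solubility for large particles:

```python
import math

# Hypothetical illustrative constants for a generic drug (not from the text).
sigma = 0.05      # interfacial tension, N/m
V     = 2e-4      # molar volume, m^3/mol
R     = 8.314     # gas constant, J/(mol*K)
T     = 310.0     # absolute temperature (body temperature), K
rho   = 1.3e3     # density of the solid, kg/m^3


def solubility_ratio(r):
    """Cs / C_alpha from the Ostwald-Freundlich equation as written
    in the text: log10(Cs/Ca) = 2*sigma*V / (2.303*R*T*rho*r)."""
    exponent = 2 * sigma * V / (2.303 * R * T * rho * r)
    return 10 ** exponent


# Smaller radius -> larger saturation solubility; for large particles
# the ratio approaches 1 (bulk behavior).
for r in (1e-9, 1e-8, 1e-7):
    print(r, solubility_ratio(r))
```

With these made-up constants the effect is small in absolute terms; the point of the sketch is only the monotone dependence on r, which holds for any positive choice of the parameters.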
On the other hand, top-down technologies use force to reduce the size of a particle to nanometers, usually done by milling a drug. Top-down methods are preferred when working with poorly soluble drugs. Stability. A disadvantage of using nanocrystals for drug delivery is nanocrystal stability. Instability problems of nanocrystalline structures derive from thermodynamic processes such as particle aggregation, amorphization, and bulk crystallization. Particles at the nanoscopic scale feature a relative excess of Gibbs free energy, due to their higher surface area to volume ratio. To reduce this excess energy, it is generally favorable for aggregation to occur. Thus, individual nanocrystals are relatively unstable by themselves and will generally aggregate. This is particularly problematic in top-down production of nanocrystals. Methods such as high-pressure homogenization and bead milling tend to increase instabilities by increasing surface areas; to compensate, or as a response to high pressure, individual particles may aggregate or turn amorphous in structure. Such methods can also lead to the reprecipitation of the drug by surpassing the solubility beyond the saturation point (Ostwald ripening). One method to overcome aggregation and retain or increase nanocrystal stability is by use of stabilizer molecules. These molecules, which interact with the surface of the nanocrystals and prevent aggregation via ionic repulsion or steric barriers between the individual nanocrystals, include surfactants and are generally useful for stabilizing suspensions of nanocrystals. Concentrations of surfactants that are too high, however, may inhibit nanocrystal stability and enhance crystal growth or aggregation. It has been shown that certain surfactants, upon reaching a critical concentration, begin to self-assemble into micelles, which then compete with nanocrystal surfaces for other surfactant molecules.
With fewer surface molecules interacting with the nanocrystal surface, crystal growth and aggregation are reported to occur at increased rates. Use of surfactant at optimal concentrations reportedly allows for higher stability, larger drug capacity as a carrier, and sustained drug release. In a study using PEG as a stabilizer, it was found that nanocrystals treated with PEG showed enhanced accumulation at tumor sites and greater blood circulation than those not treated with PEG. Amorphization can occur in top-down methods of production. With different intramolecular arrangements, amorphization of nanocrystals leads to different thermodynamic and kinetic properties that affect drug delivery and kinetics. Transition to amorphous structures is reported to occur through production practices such as spray drying, lyophilization, and mechanical mechanisms, such as milling. This amorphization has reportedly been observed with or without the presence of stabilizer in a dry milling process. Using a wet milling process with surfactant, however, significantly reduced amorphization, suggesting that the solvent (in this case water) and surfactant could inhibit amorphization for some top-down production methods that otherwise reportedly facilitate it. Liposome delivery. Liposomes are spherical vesicles composed of synthetic or natural phospholipids that self-assemble in aqueous solution in sizes ranging from tens of nanometers to micrometers. The resulting vesicle, which has an aqueous core surrounded by a hydrophobic membrane, can be loaded with a wide variety of hydrophobic or hydrophilic molecules for therapeutic purposes. Liposomes are typically synthesized with naturally occurring phospholipids, mainly phosphatidylcholine. Cholesterol is often included in the formulation to adjust the rigidity of the membrane and to increase stability. The molecular cargo is loaded through liposome formation in aqueous solution, solvent exchange mechanisms, or pH gradient methods.
Various molecules can also be chemically conjugated to the surface of the liposome to alter recognition properties. One typical modification is conjugating polyethylene glycol (PEG) to the vesicle surface. The hydrophilic polymer prevents recognition by macrophages and decreases clearance. The size, surface charge, and bilayer fluidity also alter liposome delivery kinetics. Liposomes diffuse from the bloodstream into the interstitial space near the target site. As the cell membrane itself is composed of phospholipids, liposomes can directly fuse with the membrane and release the cargo into the cytosol, or may enter the cell through phagocytosis or other active transport pathways. Liposomal delivery has various advantages. Liposomes increase the solubility, stability, and uptake of drug molecules. Peptides, polymers, and other molecules can be conjugated to the surface of a liposome for targeted delivery. Conjugating various ligands can facilitate binding to target cells based on the receptor-ligand interaction. Vesicle size and surface chemistry can also be tuned to increase circulation time. Various FDA-approved liposomal drugs are in clinical use in the US. The anthracycline drug doxorubicin is delivered with phospholipid-cholesterol liposomes to treat AIDS-related Kaposi sarcoma and multiple myeloma with high efficacy and low toxicity. Many others are undergoing clinical trials, and liposomal drug delivery remains an active field of research today, with potential applications including nucleic acid therapy, brain targeting, and tumor therapy. Viral vectors, viral-like particles, and biological nanocarriers. Viruses can be used to deliver genes for genetic engineering or gene therapy. Commonly used viruses include adenoviruses, retroviruses, and various bacteriophages. The surface of the viral particle can also be modified with ligands to increase targeting capabilities.
While viral vectors can be used to great efficacy, one concern is that they may cause off-target effects due to their natural tropism. This usually requires replacing the proteins causing virus-cell interactions with chimeric proteins. In addition to using viruses, drug molecules can also be encapsulated in protein particles derived from the viral capsid, or virus-like particles (VLPs). VLPs are easier to manufacture than viruses, and their structural uniformity allows VLPs to be produced precisely in large amounts. VLPs also have easy-to-modify surfaces, allowing the possibility for targeted delivery. There are various methods of packaging the molecule into the capsid; most take advantage of the capsid's ability to self-assemble. One strategy is to alter the pH gradient outside the capsid to create pores on the capsid surface and trap the desired molecule. Other methods use aggregators such as leucine zippers or polymer-DNA amphiphiles to induce capsid formation and capture drug molecules. It is also possible to chemically conjugate drugs directly onto the reactive sites on the capsid surface, often involving the formation of amide bonds. After being introduced to the organism, VLPs often have broad tissue distribution, rapid clearance, and are generally non-toxic. They may, however, like viruses, invoke an immune response, so immune-masking agents may be necessary. Nanoparticle Albumin-bound (nab) Technology. Nanoparticle albumin-bound technology utilizes the protein albumin as a carrier for hydrophobic chemotherapy drugs through noncovalent binding. Because albumin is already a natural carrier of hydrophobic particles and is able to transcytose molecules bound to itself, albumin-composed nanoparticles have become an effective strategy for the treatment of many diseases in clinical research. Delivery and release mechanisms. An ideal drug delivery system should have effective targeting and controlled release.
The two main targeting strategies are passive targeting and active targeting. Passive targeting depends on the fact that tumors have abnormally structured blood vessels that favor accumulation of relatively large macromolecules and nanoparticles. This so-called enhanced permeability and retention effect (EPR) allows the drug carrier to be transported specifically to the tumor cells. Active targeting is, as the name suggests, much more specific and is achieved by taking advantage of receptor-ligand interactions at the surface of the cell membrane. Controlled drug release systems can be achieved through several methods. Rate-programmed drug delivery systems are tuned to the diffusivity of active agents across the membrane. Another delivery-release mechanism is activation-modulated drug delivery, where the release is triggered by environmental stimuli. The stimuli can be external, such as the introduction of chemical activators or activation by light or electromagnetic fields, or biological, such as pH, temperature, and osmotic pressure, which can vary widely throughout the body. Polymeric nanoparticles. For polymeric nanoparticles, the induction of stimuli-responsiveness has usually relied heavily upon well-known polymers that possess an inherent stimuli-responsiveness. Certain polymers that can undergo reversible phase transitions due to changes in temperature or pH have attracted interest. Arguably the most utilized polymer for activation-modulated delivery is the thermo-responsive polymer poly(N-isopropylacrylamide). It is readily soluble in water at room temperature but precipitates reversibly when the temperature is raised above its lower critical solution temperature (LCST), changing from an extended chain conformation to a collapsed chain. This feature presents a way to change the hydrophilicity of a polymer via temperature.
Efforts also focus on dual stimuli-responsive drug delivery systems, which can be harnessed to control the release of the encapsulated drug. For example, the triblock copolymer of poly(ethylene glycol)-b-poly(3-aminopropyl-methacrylamide)-b-poly(N-isopropylacrylamide) (PEG-b-PAPMA-b-PNIPAm) can self-assemble to form micelles, possessing a core–shell–corona architecture above the lower critical solution temperature. It is also pH responsive. Therefore, drug release can be tuned by changing either temperature or pH conditions. Inorganic nanoparticles. Drug delivery strategies of inorganic nanoparticles are dependent on material properties. The active targeting of inorganic nanoparticle drug carriers is often achieved by surface functionalization with specific ligands of nanoparticles. For example, the inorganic multifunctional nanovehicle (5-FU/Fe3O4/αZrP@CHI-FA-R6G) is able to accomplish tumor optical imaging and therapy simultaneously. It can be directed to the location of cancer cells with sustained release behavior. Studies have also been done on gold nanoparticle responses to local near-infrared (NIR) light as a stimulus for drug release. In one study, gold nanoparticles functionalized with double-stranded DNA and loaded with drug molecules were irradiated with NIR light. The particles generated heat and denatured the double-stranded DNA, which triggered the release of drugs at the target site. Studies also suggest that a porous structure is beneficial to attain a sustained or pulsatile release. Porous inorganic materials demonstrate high mechanical and chemical stability within a range of physiological conditions. The well-defined surface properties, such as high pore volume, narrow pore diameter distribution, and high surface area allow the entrapment of drugs, proteins and other biogenic molecules with predictable and reproducible release patterns. Toxicity.
Some of the same properties that make nanoparticles efficient drug carriers also contribute to their toxicity. For example, gold nanoparticles are known to interact with proteins through surface adsorption, forming a protein corona, which can be utilized for cargo loading and immune shielding. However, this protein-adsorption property can also disrupt normal protein function that is essential for homeostasis, especially when the protein contains exposed sulfur groups. The photothermal effect, which can be induced to kill tumor cells, may also create reactive oxygen species that impose oxidative stress on surrounding healthy cells. Gold nanoparticles of sizes below 4-5 nm fit in DNA grooves which can interfere with transcription, gene regulation, replication, and other processes that rely on DNA-protein binding. Lack of biodegradability for some nanoparticle chemistries can lead to accumulation in certain tissues, thus interfering with a wide range of biological processes. Currently, there is no regulatory framework in the United States for testing nanoparticles for their general impact on health and on the environment. References.
[ { "math_id": 0, "text": "log (\\frac{C_s}{C_{\\alpha}}) = \\frac{2\\sigma V}{2.303RT\\rho r}" } ]
https://en.wikipedia.org/wiki?curid=59383794
59384081
Isotropy representation
Linear representation of a group on the tangent space to a fixed point of the group. In differential geometry, the isotropy representation is a natural linear representation of a Lie group, that is acting on a manifold, on the tangent space to a fixed point. Construction. Given a Lie group action formula_0 on a manifold "M", if "G""o" is the stabilizer of a point "o" (isotropy subgroup at "o"), then, for each "g" in "G""o", formula_1 fixes "o" and thus taking the derivative at "o" gives the map formula_2 By the chain rule, formula_3 and thus there is a representation: formula_4 given by formula_5. It is called the isotropy representation at "o". For example, if formula_6 is a conjugation action of "G" on itself, then the isotropy representation formula_7 at the identity element "e" is the adjoint representation of formula_8.
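The construction can be checked numerically on a standard example that is not from the text: for G = SO(3) acting linearly on the sphere, the stabilizer of the north pole o = (0, 0, 1) is the group of rotations about the z-axis, and the isotropy representation is the induced 2-by-2 rotation of the tangent plane; the chain-rule identity formula_3 makes ρ a homomorphism. A sketch:

```python
import numpy as np


def rot_z(t):
    """Element of the stabilizer G_o: rotation by angle t about the
    z-axis, which fixes the north pole o = (0, 0, 1) under the
    linear SO(3) action on the sphere."""
    c, s = np.cos(t), np.sin(t)
    return np.array([[c, -s, 0.0],
                     [s,  c, 0.0],
                     [0.0, 0.0, 1.0]])


o = np.array([0.0, 0.0, 1.0])


def isotropy_rep(g):
    """d(sigma_g)_o restricted to T_o S^2 = span{e1, e2}.  Since the
    action is linear, the derivative at o is g itself, and the tangent
    plane is invariant, so we read off the upper-left 2x2 block."""
    return g[:2, :2]


g, h = rot_z(0.7), rot_z(-1.3)
# g fixes o, and rho(gh) = rho(g) rho(h), as the chain rule predicts.
```

Here the homomorphism property is immediate because rotations about a common axis compose by adding angles, but the same numerical check applies to any matrix group action with a fixed point.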
[ { "math_id": 0, "text": "(G, \\sigma)" }, { "math_id": 1, "text": "\\sigma_g: M \\to M" }, { "math_id": 2, "text": "(d\\sigma_g)_o: T_o M \\to T_o M." }, { "math_id": 3, "text": "(d \\sigma_{gh})_o = d (\\sigma_g \\circ \\sigma_h)_o = (d \\sigma_g)_o \\circ (d \\sigma_h)_o" }, { "math_id": 4, "text": "\\rho: G_o \\to \\operatorname{GL}(T_o M)" }, { "math_id": 5, "text": "\\rho(g) = (d \\sigma_g)_o" }, { "math_id": 6, "text": "\\sigma" }, { "math_id": 7, "text": "\\rho" }, { "math_id": 8, "text": "G = G_e" } ]
https://en.wikipedia.org/wiki?curid=59384081
593908
Propagation of uncertainty
Effect of variables' uncertainties on the uncertainty of a function based on them In statistics, propagation of uncertainty (or propagation of error) is the effect of variables' uncertainties (or errors, more specifically random errors) on the uncertainty of a function based on them. When the variables are the values of experimental measurements they have uncertainties due to measurement limitations (e.g., instrument precision) which propagate due to the combination of variables in the function. The uncertainty "u" can be expressed in a number of ways. It may be defined by the absolute error Δ"x". Uncertainties can also be defined by the relative error (Δ"x")/"x", which is usually written as a percentage. Most commonly, the uncertainty on a quantity is quantified in terms of the standard deviation, σ, which is the positive square root of the variance. The value of a quantity and its error are then expressed as an interval "x" ± "u". However, the most general way of characterizing uncertainty is by specifying its probability distribution. If the probability distribution of the variable is known or can be assumed, in theory it is possible to get any of its statistics. In particular, it is possible to derive confidence limits to describe the region within which the true value of the variable may be found. For example, the 68% confidence limits for a one-dimensional variable belonging to a normal distribution are approximately ± one standard deviation "σ" from the central value "x", which means that the region "x" ± "σ" will cover the true value in roughly 68% of cases. If the uncertainties are correlated then covariance must be taken into account. Correlation can arise from two different sources. First, the "measurement errors" may be correlated. Second, when the underlying values are correlated across a population, the "uncertainties in the group averages" will be correlated. 
In a general context where a nonlinear function modifies the uncertain parameters (correlated or not), the standard tools to propagate uncertainty, and infer the resulting quantity's probability distribution/statistics, are sampling techniques from the Monte Carlo method family. For very expensive data or complex functions, the calculation of the error propagation may be very expensive, so that a surrogate model or a parallel computing strategy may be necessary. In some particular cases, the uncertainty propagation calculation can be done through simple algebraic procedures. Some of these scenarios are described below. Linear combinations. Let formula_0 be a set of "m" functions, which are linear combinations of formula_1 variables formula_2 with combination coefficients formula_3: formula_4 or in matrix notation, formula_5 Also let the variance–covariance matrix of "x" = ("x"1, ..., "x""n") be denoted by formula_6 and let the mean value be denoted by formula_7: formula_8 formula_9 is the outer product. Then, the variance–covariance matrix formula_10 of "f" is given by formula_11 In component notation, the equation formula_12 reads formula_13 This is the most general expression for the propagation of error from one set of variables onto another. When the errors on "x" are uncorrelated, the general expression simplifies to formula_14 where formula_15 is the variance of the "k"-th element of the "x" vector. Note that even though the errors on "x" may be uncorrelated, the errors on "f" are in general correlated; in other words, even if formula_6 is a diagonal matrix, formula_10 is in general a full matrix. 
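The matrix form formula_12 can be evaluated directly; the coefficient and covariance matrices below are illustrative values invented for the example:

```python
import numpy as np

# Sketch of the matrix form of linear error propagation,
# Sigma_f = A Sigma_x A^T.  All numbers are illustrative.
A = np.array([[1.0, 2.0],
              [0.0, 1.0]])           # combination coefficients (m x n)
Sigma_x = np.array([[0.04, 0.01],
                    [0.01, 0.09]])   # variance-covariance matrix of x

Sigma_f = A @ Sigma_x @ A.T          # propagated covariance of f = A x
print(Sigma_f)
```

Even when `Sigma_x` is diagonal (uncorrelated inputs), `Sigma_f` is in general a full matrix, i.e. the outputs become correlated.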
The general expressions for a scalar-valued function "f" are a little simpler (here a is a row vector): formula_16 formula_17 Each covariance term formula_18 can be expressed in terms of the correlation coefficient formula_19 by formula_20, so that an alternative expression for the variance of "f" is formula_21 In the case that the variables in "x" are uncorrelated, this simplifies further to formula_22 In the simple case of identical coefficients and variances, we find formula_23 For the arithmetic mean, formula_24, the result is the standard error of the mean: formula_25 Non-linear combinations. When "f" is a set of non-linear combinations of the variables "x", an interval propagation could be performed in order to compute intervals which contain all consistent values for the variables. In a probabilistic approach, the function "f" must usually be linearised by approximation to a first-order Taylor series expansion, though in some cases, exact formulae can be derived that do not depend on the expansion, as is the case for the exact variance of products. The Taylor expansion would be: formula_26 where formula_27 denotes the partial derivative of "fk" with respect to the "i"-th variable, evaluated at the mean value of all components of vector "x". Or in matrix notation, formula_28 where J is the Jacobian matrix. Since f0 is a constant, it does not contribute to the error on f. Therefore, the propagation of error follows the linear case, above, but replacing the linear coefficients, "Aki" and "Akj", by the partial derivatives, formula_29 and formula_30. In matrix notation, formula_31 That is, the Jacobian of the function is used to transform the rows and columns of the variance-covariance matrix of the argument. Note that this is equivalent to the matrix expression for the linear case with formula_32. Simplification. 
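The linearised (Jacobian) propagation can be sketched with a made-up map f(x, y) = (xy, x/y) and illustrative uncertainties:

```python
import numpy as np

# First-order propagation Sigma_f = J Sigma_x J^T for the nonlinear map
# f(x, y) = (x*y, x/y).  Values and uncertainties are illustrative.
x, y = 2.0, 4.0
Sigma_x = np.diag([0.1**2, 0.2**2])   # uncorrelated inputs

# Jacobian of f evaluated at the mean values (x, y):
J = np.array([[y,   x],               # d(xy)/dx,   d(xy)/dy
              [1/y, -x/y**2]])        # d(x/y)/dx,  d(x/y)/dy

Sigma_f = J @ Sigma_x @ J.T           # linearised covariance of f
print(Sigma_f)
```

As in the linear case, only the coefficients change: the Jacobian entries play the role of the "Aki".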
Neglecting correlations or assuming independent variables yields a common formula among engineers and experimental scientists to calculate error propagation, the variance formula: formula_33 where formula_34 represents the standard deviation of the function formula_35, formula_36 represents the standard deviation of formula_37, formula_38 represents the standard deviation of formula_39, and so forth. This formula is based on the linear characteristics of the gradient of formula_35 and therefore it is a good estimation for the standard deviation of formula_35 as long as formula_40 are small enough. Specifically, the linear approximation of formula_35 has to be close to formula_35 inside a neighbourhood of radius formula_40. Example. Any non-linear differentiable function, formula_41, of two variables, formula_42 and formula_43, can be expanded as formula_44 If we take the variance on both sides and use the formula for the variance of a linear combination of variables formula_45 then we obtain formula_46 where formula_47 is the standard deviation of the function formula_35, formula_48 is the standard deviation of formula_42, formula_49 is the standard deviation of formula_43 and formula_50 is the covariance between formula_42 and formula_43. In the particular case that formula_51, formula_52, formula_53. Then formula_54 or formula_55 where formula_56 is the correlation between formula_42 and formula_43. When the variables formula_42 and formula_43 are uncorrelated, formula_57. Then formula_58 Caveats and warnings. Error estimates for non-linear functions are biased on account of using a truncated series expansion. The extent of this bias depends on the nature of the function. For example, the bias on the error calculated for log(1+"x") increases as "x" increases, since the expansion to "x" is a good approximation only when "x" is near zero. 
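The uncorrelated variance formula can be sketched for the product case formula_51; the numbers are illustrative, and the final assertion checks the equivalent relative form:

```python
import math

# Uncorrelated variance formula applied to f = a*b; numbers are illustrative.
a, s_a = 10.0, 0.3
b, s_b = 5.0,  0.2

# Partial derivatives: df/da = b, df/db = a
s_f = math.sqrt((b * s_a)**2 + (a * s_b)**2)

# Equivalent relative form: (s_f/f)^2 = (s_a/a)^2 + (s_b/b)^2
f = a * b
assert abs(s_f - f * math.sqrt((s_a / a)**2 + (s_b / b)**2)) < 1e-12
print(s_f)
```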
For highly non-linear functions, there exist five categories of probabilistic approaches for uncertainty propagation; see Uncertainty quantification for details. Reciprocal and shifted reciprocal. In the special case of the inverse or reciprocal formula_59, where formula_60 follows a standard normal distribution, the resulting distribution is a reciprocal standard normal distribution, and there is no definable variance. However, in the slightly more general case of a shifted reciprocal function formula_61 for formula_62 following a general normal distribution, then mean and variance statistics do exist in a principal value sense, if the difference between the pole formula_63 and the mean formula_64 is real-valued. Ratios. Ratios are also problematic; normal approximations exist under certain conditions. Example formulae. This table shows the variances and standard deviations of simple functions of the real variables formula_65 with standard deviations formula_66 covariance formula_67 and correlation formula_68 The real-valued coefficients formula_42 and formula_43 are assumed exactly known (deterministic), i.e., formula_69 In the right-hand columns of the table, formula_70 and formula_71 are expectation values, and formula_35 is the value of the function calculated at those values. For uncorrelated variables (formula_72, formula_73) expressions for more complicated functions can be derived by combining simpler functions. For example, repeated multiplication, assuming no correlation, gives formula_74 For the case formula_75 we also have Goodman's expression for the exact variance: for the uncorrelated case it is formula_76 and therefore we have formula_77 Effect of correlation on differences. If "A" and "B" are uncorrelated, their difference "A" − "B" will have more variance than either of them. 
An increasing positive correlation (formula_78) will decrease the variance of the difference, converging to zero variance for perfectly correlated variables with the same variance. On the other hand, a negative correlation (formula_79) will further increase the variance of the difference, compared to the uncorrelated case. For example, the self-subtraction "f" = "A" − "A" has zero variance formula_80 only if the variate is perfectly autocorrelated (formula_81). If "A" is uncorrelated, formula_82 then the output variance is twice the input variance, formula_83 And if "A" is perfectly anticorrelated, formula_84 then the input variance is quadrupled in the output, formula_85 (notice formula_86 for "f" = "aA" − "aA" in the table above). Example calculations. Inverse tangent function. We can calculate the uncertainty propagation for the inverse tangent function as an example of using partial derivatives to propagate error. Define formula_87 where formula_88 is the absolute uncertainty on our measurement of x. The derivative of "f"("x") with respect to x is formula_89 Therefore, our propagated uncertainty is formula_90 where formula_91 is the absolute propagated uncertainty. Resistance measurement. A practical application is an experiment in which one measures current, I, and voltage, V, on a resistor in order to determine the resistance, R, using Ohm's law, "R" = "V" / "I". Given the measured variables with uncertainties, "I" ± "σ""I" and "V" ± "σ""V", and neglecting their possible correlation, the uncertainty in the computed quantity, "σ""R", is: formula_92 See also. &lt;templatestyles src="Div col/styles.css"/&gt; References. &lt;templatestyles src="Reflist/styles.css" /&gt;
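The resistance example can be checked numerically; the measured values and uncertainties below are invented for illustration:

```python
import math

# Uncertainty on R = V / I, neglecting correlation between V and I.
# The measured values and uncertainties are invented for illustration.
V, s_V = 12.0, 0.1    # volts
I, s_I = 2.0,  0.05   # amperes

R = V / I
# Partial derivatives: dR/dV = 1/I, dR/dI = -V/I**2
s_R = math.sqrt((s_V / I)**2 + (V * s_I / I**2)**2)

# Same result in the equivalent relative form R*sqrt((s_V/V)^2 + (s_I/I)^2):
assert abs(s_R - R * math.sqrt((s_V / V)**2 + (s_I / I)**2)) < 1e-12
print(R, s_R)
```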
[ { "math_id": 0, "text": "\\{f_k(x_1, x_2, \\dots, x_n)\\}" }, { "math_id": 1, "text": "n" }, { "math_id": 2, "text": "x_1, x_2, \\dots, x_n" }, { "math_id": 3, "text": "A_{k1}, A_{k2}, \\dots,A_{kn}, (k = 1, \\dots, m)" }, { "math_id": 4, "text": "f_k = \\sum_{i=1}^n A_{ki} x_i," }, { "math_id": 5, "text": "\\mathbf{f} = \\mathbf{A} \\mathbf{x}." }, { "math_id": 6, "text": "\\boldsymbol\\Sigma^x" }, { "math_id": 7, "text": "\\boldsymbol{\\mu}" }, { "math_id": 8, "text": "\\boldsymbol\\Sigma^x = E[(\\mathbf{x}-\\boldsymbol\\mu)\\otimes (\\mathbf{x}-\\boldsymbol\\mu)]=\n\\begin{pmatrix}\n \\sigma^2_1 & \\sigma_{12} & \\sigma_{13} & \\cdots \\\\\n \\sigma_{21} & \\sigma^2_2 & \\sigma_{23} & \\cdots\\\\\n \\sigma_{31} & \\sigma_{32} & \\sigma^2_3 & \\cdots \\\\\n \\vdots & \\vdots & \\vdots & \\ddots\n\\end{pmatrix} = \n\\begin{pmatrix}\n {\\Sigma}^x_{11} & {\\Sigma}^x_{12} & {\\Sigma}^x_{13} & \\cdots \\\\\n {\\Sigma}^x_{21} & {\\Sigma}^x_{22} & {\\Sigma}^x_{23} & \\cdots\\\\\n {\\Sigma}^x_{31} & {\\Sigma}^x_{32} & {\\Sigma}^x_{33} & \\cdots \\\\\n \\vdots & \\vdots & \\vdots & \\ddots\n\\end{pmatrix}.\n" }, { "math_id": 9, "text": "\\otimes" }, { "math_id": 10, "text": "\\boldsymbol\\Sigma^f" }, { "math_id": 11, "text": "\\boldsymbol\\Sigma^f\n= E[(\\mathbf{f} - E[\\mathbf{f}]) \\otimes (\\mathbf{f} - E[\\mathbf{f}])]\n= E[\\mathbf{A}(\\mathbf{x}-\\boldsymbol\\mu) \\otimes \\mathbf{A}(\\mathbf{x}-\\boldsymbol\\mu)]\n= \\mathbf{A} E[(\\mathbf{x}-\\boldsymbol\\mu) \\otimes (\\mathbf{x}-\\boldsymbol\\mu)]\\mathbf{A}^\\mathrm{T}\n= \\mathbf{A} \\boldsymbol\\Sigma^x \\mathbf{A}^\\mathrm{T}." }, { "math_id": 12, "text": "\\boldsymbol\\Sigma^f = \\mathbf{A} \\boldsymbol\\Sigma^x \\mathbf{A}^\\mathrm{T}" }, { "math_id": 13, "text": "\\Sigma^f_{ij} = \\sum_k^n \\sum_l^n A_{ik} {\\Sigma}^x_{kl} A_{jl}." 
}, { "math_id": 14, "text": "\\Sigma^f_{ij} = \\sum_k^n A_{ik} \\Sigma^x_k A_{jk}," }, { "math_id": 15, "text": "\\Sigma^x_k = \\sigma^2_{x_k}" }, { "math_id": 16, "text": "f = \\sum_i^n a_i x_i = \\mathbf{a x}," }, { "math_id": 17, "text": "\\sigma^2_f = \\sum_i^n \\sum_j^n a_i \\Sigma^x_{ij} a_j = \\mathbf{a} \\boldsymbol\\Sigma^x \\mathbf{a}^\\mathrm{T}." }, { "math_id": 18, "text": "\\sigma_{ij}" }, { "math_id": 19, "text": "\\rho_{ij}" }, { "math_id": 20, "text": "\\sigma_{ij} = \\rho_{ij} \\sigma_i \\sigma_j" }, { "math_id": 21, "text": "\\sigma^2_f = \\sum_i^n a_i^2 \\sigma^2_i + \\sum_i^n \\sum_{j (j \\ne i)}^n a_i a_j \\rho_{ij} \\sigma_i \\sigma_j." }, { "math_id": 22, "text": "\\sigma^2_f = \\sum_i^n a_i^2 \\sigma^2_i." }, { "math_id": 23, "text": "\\sigma_f = \\sqrt{n}\\, |a| \\sigma." }, { "math_id": 24, "text": "a=1/n" }, { "math_id": 25, "text": "\\sigma_f = \\frac{\\sigma} {\\sqrt{n}}." }, { "math_id": 26, "text": "f_k \\approx f^0_k+ \\sum_i^n \\frac{\\partial f_k}{\\partial {x_i}} x_i " }, { "math_id": 27, "text": "\\partial f_k/\\partial x_i" }, { "math_id": 28, "text": "\\mathrm{f} \\approx \\mathrm{f}^0 + \\mathrm{J} \\mathrm{x}\\," }, { "math_id": 29, "text": "\\frac{\\partial f_k}{\\partial x_i}" }, { "math_id": 30, "text": "\\frac{\\partial f_k}{\\partial x_j}" }, { "math_id": 31, "text": "\\mathrm{\\Sigma}^\\mathrm{f} = \\mathrm{J} \\mathrm{\\Sigma}^\\mathrm{x} \\mathrm{J}^\\top." 
}, { "math_id": 32, "text": "\\mathrm{J = A}" }, { "math_id": 33, "text": "s_f = \\sqrt{ \\left(\\frac{\\partial f}{\\partial x}\\right)^2 s_x^2 + \\left(\\frac{\\partial f}{\\partial y} \\right)^2 s_y^2 + \\left(\\frac{\\partial f}{\\partial z} \\right)^2 s_z^2 + \\cdots}" }, { "math_id": 34, "text": "s_f" }, { "math_id": 35, "text": "f" }, { "math_id": 36, "text": "s_x" }, { "math_id": 37, "text": "x" }, { "math_id": 38, "text": "s_y" }, { "math_id": 39, "text": "y" }, { "math_id": 40, "text": "s_x, s_y, s_z,\\ldots" }, { "math_id": 41, "text": "f(a,b)" }, { "math_id": 42, "text": "a" }, { "math_id": 43, "text": "b" }, { "math_id": 44, "text": "f\\approx f^0+\\frac{\\partial f}{\\partial a}a+\\frac{\\partial f}{\\partial b}b." }, { "math_id": 45, "text": "\\operatorname{Var}(aX + bY) = a^2\\operatorname{Var}(X) + b^2\\operatorname{Var}(Y) + 2ab \\operatorname{Cov}(X, Y)," }, { "math_id": 46, "text": "\\sigma^2_f\\approx\\left| \\frac{\\partial f}{\\partial a}\\right| ^2\\sigma^2_a+\\left| \\frac{\\partial f}{\\partial b}\\right|^2\\sigma^2_b+2\\frac{\\partial f}{\\partial a}\\frac{\\partial f} {\\partial b}\\sigma_{ab}," }, { "math_id": 47, "text": "\\sigma_{f}" }, { "math_id": 48, "text": "\\sigma_{a}" }, { "math_id": 49, "text": "\\sigma_{b}" }, { "math_id": 50, "text": "\\sigma_{ab} = \\sigma_{a}\\sigma_{b} \\rho_{ab}" }, { "math_id": 51, "text": "f = ab" }, { "math_id": 52, "text": "\\frac{\\partial f}{\\partial a} = b" }, { "math_id": 53, "text": "\\frac{\\partial f}{\\partial b} = a" }, { "math_id": 54, "text": "\\sigma^2_f \\approx b^2\\sigma^2_a+a^2 \\sigma_b^2+2ab\\,\\sigma_{ab}" }, { "math_id": 55, "text": "\\left(\\frac{\\sigma_f}{f}\\right)^2 \\approx \\left(\\frac{\\sigma_a}{a} \\right)^2 + \\left(\\frac{\\sigma_b}{b}\\right)^2 + 2\\left(\\frac{\\sigma_a}{a}\\right)\\left(\\frac{\\sigma_b}{b}\\right)\\rho_{ab}" }, { "math_id": 56, "text": "\\rho_{ab}" }, { "math_id": 57, "text": "\\rho_{ab}=0" }, { "math_id": 58, "text": 
"\\left(\\frac{\\sigma_f}{f}\\right)^2 \\approx \\left(\\frac{\\sigma_a}{a} \\right)^2 + \\left(\\frac{\\sigma_b}{b}\\right)^2." }, { "math_id": 59, "text": "1/B" }, { "math_id": 60, "text": "B=N(0,1)" }, { "math_id": 61, "text": "1/(p-B)" }, { "math_id": 62, "text": "B=N(\\mu,\\sigma)" }, { "math_id": 63, "text": "p" }, { "math_id": 64, "text": "\\mu" }, { "math_id": 65, "text": "A, B" }, { "math_id": 66, "text": "\\sigma_A, \\sigma_B," }, { "math_id": 67, "text": "\\sigma_{AB} = \\rho_{AB} \\sigma_A \\sigma_B," }, { "math_id": 68, "text": "\\rho_{AB}." }, { "math_id": 69, "text": "\\sigma_a = \\sigma_b = 0." }, { "math_id": 70, "text": "A" }, { "math_id": 71, "text": "B" }, { "math_id": 72, "text": "\\rho_{AB} = 0" }, { "math_id": 73, "text": "\\sigma_{AB} = 0" }, { "math_id": 74, "text": "f = ABC; \\qquad \\left(\\frac{\\sigma_f}{f}\\right)^2 \\approx \\left(\\frac{\\sigma_A}{A}\\right)^2 + \\left(\\frac{\\sigma_B}{B}\\right)^2+ \\left(\\frac{\\sigma_C}{C}\\right)^2." }, { "math_id": 75, "text": "f = AB " }, { "math_id": 76, "text": "V(XY)= E(X)^2 V(Y) + E(Y)^2 V(X) + E((X-E(X))^2 (Y-E(Y))^2)," }, { "math_id": 77, "text": "\\sigma_f^2 = A^2\\sigma_B^2 + B^2\\sigma_A^2 + \\sigma_A^2\\sigma_B^2." }, { "math_id": 78, "text": "\\rho_{AB} \\to 1" }, { "math_id": 79, "text": "\\rho_{AB} \\to -1" }, { "math_id": 80, "text": "\\sigma_f^2 = 0" }, { "math_id": 81, "text": "\\rho_A = 1" }, { "math_id": 82, "text": "\\rho_A = 0," }, { "math_id": 83, "text": "\\sigma_f^2 = 2\\sigma^2_A." }, { "math_id": 84, "text": "\\rho_A = -1," }, { "math_id": 85, "text": "\\sigma_f^2 = 4 \\sigma^2_A" }, { "math_id": 86, "text": "1 - \\rho_A = 2" }, { "math_id": 87, "text": "f(x) = \\arctan(x)," }, { "math_id": 88, "text": "\\Delta_x" }, { "math_id": 89, "text": "\\frac{d f}{d x} = \\frac{1}{1+x^2}." 
}, { "math_id": 90, "text": "\\Delta_{f} \\approx \\frac{\\Delta_x}{1+x^2}," }, { "math_id": 91, "text": "\\Delta_f" }, { "math_id": 92, "text": "\\sigma_R \\approx \\sqrt{ \\sigma_V^2 \\left(\\frac{1}{I}\\right)^2 + \\sigma_I^2 \\left(\\frac{-V}{I^2}\\right)^2 } = R\\sqrt{ \\left(\\frac{\\sigma_V}{V}\\right)^2 + \\left(\\frac{\\sigma_I}{I}\\right)^2 }." } ]
https://en.wikipedia.org/wiki?curid=593908
59400228
Lions–Magenes lemma
In mathematics, the Lions–Magenes lemma (or theorem) is a result in the theory of Sobolev spaces of Banach space-valued functions, which provides a criterion for moving a time derivative of a function out of its action (as a functional) on the function itself. Statement of the lemma. Let "X"0, "X" and "X"1 be three Hilbert spaces with "X"0 ⊆ "X" ⊆ "X"1. Suppose that "X"0 is continuously embedded in "X" and that "X" is continuously embedded in "X"1, and that "X"1 is the dual space of "X"0. Denote the norm on "X" by || ⋅ ||"X", and denote the action of "X"1 on "X"0 by formula_0. Suppose for some formula_1 that formula_2 is such that its time derivative formula_3. Then formula_4 is almost everywhere equal to a function continuous from formula_5 into formula_6, and moreover the following equality holds in the sense of scalar distributions on formula_7: formula_8 The above equality is meaningful, since the functions formula_9 are both integrable on formula_5. Notes. It is important to note that this lemma does not extend to the case where formula_10 is such that its time derivative formula_11 for formula_12. For example, the energy equality for the 3-dimensional Navier–Stokes equations is not known to hold for weak solutions, since a weak solution formula_4 is only known to satisfy formula_13 and formula_14 (where formula_15 is a Sobolev space and formula_16 is its dual space), which is not enough to apply the Lions–Magenes lemma (one would need formula_17, but this is not known to be true for weak solutions). References. &lt;templatestyles src="Reflist/styles.css" /&gt;
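For smooth functions the stated equality is just the product rule for the inner product on "X" combined with the duality pairing; a heuristic sketch (not a proof — the full lemma handles weak derivatives via a density argument):

```latex
\frac{d}{dt}\,\|u(t)\|_X^2
  = \frac{d}{dt}\,\bigl(u(t),\,u(t)\bigr)_X
  = 2\,\bigl(\dot{u}(t),\,u(t)\bigr)_X
  = 2\,\langle \dot{u}(t),\,u(t)\rangle,
```

where the last step uses the identification of the inner product on "X" with the duality pairing between "X"1 and "X"0, with "X" acting as pivot space.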
[ { "math_id": 0, "text": "\\langle\\cdot,\\cdot\\rangle" }, { "math_id": 1, "text": "T>0" }, { "math_id": 2, "text": "u \\in L^2 ([0, T]; X_0)" }, { "math_id": 3, "text": "\\dot{u} \\in L^2 ([0, T]; X_1)" }, { "math_id": 4, "text": "u" }, { "math_id": 5, "text": "[0,T]" }, { "math_id": 6, "text": "X" }, { "math_id": 7, "text": "(0,T)" }, { "math_id": 8, "text": "\\frac{1}{2}\\frac{d}{dt} \\|u\\|_X^2 = \\langle\\dot{u},u\\rangle" }, { "math_id": 9, "text": "t\\rightarrow \\|u\\|_X^2, \\quad t\\rightarrow \\langle \\dot{u}(t),u(t)\\rangle" }, { "math_id": 10, "text": "u \\in L^p ([0, T]; X_0)" }, { "math_id": 11, "text": "\\dot{u} \\in L^q ([0, T]; X_1)" }, { "math_id": 12, "text": "1/p + 1/q>1" }, { "math_id": 13, "text": "u \\in L^2 ([0, T]; H^1)" }, { "math_id": 14, "text": "\\dot{u} \\in L^{4/3}([0, T]; H^{-1})" }, { "math_id": 15, "text": "H^1" }, { "math_id": 16, "text": "H^{-1}" }, { "math_id": 17, "text": "\\dot{u} \\in L^2([0, T]; H^{-1})" } ]
https://en.wikipedia.org/wiki?curid=59400228
594072
QR algorithm
Algorithm to calculate eigenvalues In numerical linear algebra, the QR algorithm or QR iteration is an eigenvalue algorithm: that is, a procedure to calculate the eigenvalues and eigenvectors of a matrix. The QR algorithm was developed in the late 1950s by John G. F. Francis and by Vera N. Kublanovskaya, working independently. The basic idea is to perform a QR decomposition, writing the matrix as a product of an orthogonal matrix and an upper triangular matrix, multiply the factors in the reverse order, and iterate. The practical QR algorithm. Formally, let "A" be a real matrix of which we want to compute the eigenvalues, and let "A"0 := "A". At the k-th step (starting with "k" = 0), we compute the QR decomposition "A""k" = "Q""k" "R""k" where "Q""k" is an orthogonal matrix (i.e., "Q"T = "Q"−1) and "R""k" is an upper triangular matrix. We then form "A""k"+1 = "R""k" "Q""k". Note that formula_0 so all the "A""k" are similar and hence they have the same eigenvalues. The algorithm is numerically stable because it proceeds by "orthogonal" similarity transforms. Under certain conditions, the matrices "A""k" converge to a triangular matrix, the Schur form of "A". The eigenvalues of a triangular matrix are listed on the diagonal, and the eigenvalue problem is solved. In testing for convergence it is impractical to require exact zeros, but the Gershgorin circle theorem provides a bound on the error. In this crude form the iterations are relatively expensive. This can be mitigated by first bringing the matrix A to upper Hessenberg form (which costs formula_1 arithmetic operations using a technique based on Householder reduction), with a finite sequence of orthogonal similarity transforms, somewhat like a two-sided QR decomposition. (For QR decomposition, the Householder reflectors are multiplied only on the left, but for the Hessenberg case they are multiplied on both left and right.) 
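The basic iteration can be sketched directly; the 2×2 symmetric matrix is an arbitrary example, and a production implementation would work on the Hessenberg form and use shifts, as described below:

```python
import numpy as np

# Sketch of the basic (unshifted) QR iteration A_{k+1} = R_k Q_k.
# The matrix is an arbitrary symmetric example with distinct eigenvalues.
A = np.array([[2.0, 1.0],
              [1.0, 3.0]])

Ak = A.copy()
for _ in range(50):
    Q, R = np.linalg.qr(Ak)
    Ak = R @ Q               # equals Q^T A_k Q, an orthogonal similarity

# The iterates converge to a triangular (here diagonal) matrix whose
# diagonal entries are the eigenvalues of A.
print(np.sort(np.diag(Ak)))
```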
Determining the QR decomposition of an upper Hessenberg matrix costs formula_2 arithmetic operations. Moreover, because the Hessenberg form is already nearly upper-triangular (it has just one nonzero entry below each diagonal), using it as a starting point reduces the number of steps required for convergence of the QR algorithm. If the original matrix is symmetric, then the upper Hessenberg matrix is also symmetric and thus tridiagonal, and so are all the "A""k". This procedure costs formula_3 arithmetic operations using a technique based on Householder reduction. Determining the QR decomposition of a symmetric tridiagonal matrix costs formula_4 operations. The rate of convergence depends on the separation between eigenvalues, so a practical algorithm will use shifts, either explicit or implicit, to increase separation and accelerate convergence. A typical symmetric QR algorithm isolates each eigenvalue (then reduces the size of the matrix) with only one or two iterations, making it efficient as well as robust. Visualization. The basic QR algorithm can be visualized in the case where "A" is a positive-definite symmetric matrix. In that case, "A" can be depicted as an ellipse in 2 dimensions or an ellipsoid in higher dimensions. The relationship between the input to the algorithm and a single iteration can then be depicted as in Figure 1 (click to see an animation). Note that the LR algorithm is depicted alongside the QR algorithm. A single iteration causes the ellipse to tilt or "fall" towards the x-axis. In the event where the large semi-axis of the ellipse is parallel to the x-axis, one iteration of QR does nothing. Another situation where the algorithm "does nothing" is when the large semi-axis is parallel to the y-axis instead of the x-axis. In that event, the ellipse can be thought of as balancing precariously without being able to fall in either direction. In both situations, the matrix is diagonal. 
A situation where an iteration of the algorithm "does nothing" is called a fixed point. The strategy employed by the algorithm is iteration towards a fixed-point. Observe that one fixed point is stable while the other is unstable. If the ellipse were tilted away from the unstable fixed point by a very small amount, one iteration of QR would cause the ellipse to tilt away from the fixed point instead of towards. Eventually though, the algorithm would converge to a different fixed point, but it would take a long time. Finding eigenvalues versus finding eigenvectors. It's worth pointing out that finding even a single eigenvector of a symmetric matrix is not computable (in exact real arithmetic according to the definitions in computable analysis). This difficulty exists whenever the multiplicities of a matrix's eigenvalues are not knowable. On the other hand, the same problem does not exist for finding eigenvalues. The eigenvalues of a matrix are always computable. We will now discuss how these difficulties manifest in the basic QR algorithm. This is illustrated in Figure 2. Recall that the ellipses represent positive-definite symmetric matrices. As the two eigenvalues of the input matrix approach each other, the input ellipse changes into a circle. A circle corresponds to a multiple of the identity matrix. A near-circle corresponds to a near-multiple of the identity matrix whose eigenvalues are nearly equal to the diagonal entries of the matrix. Therefore the problem of approximately finding the eigenvalues is shown to be easy in that case. But notice what happens to the semi-axes of the ellipses. An iteration of QR (or LR) tilts the semi-axes less and less as the input ellipse gets closer to being a circle. The eigenvectors can only be known when the semi-axes are parallel to the x-axis and y-axis. The number of iterations needed to achieve near-parallelism increases without bound as the input ellipse becomes more circular. 
While it may be impossible to compute the eigendecomposition of an arbitrary symmetric matrix, it is always possible to perturb the matrix by an arbitrarily small amount and compute the eigendecomposition of the resulting matrix. In the case when the matrix is depicted as a near-circle, the matrix can be replaced with one whose depiction is a perfect circle. In that case, the matrix is a multiple of the identity matrix, and its eigendecomposition is immediate. Be aware though that the resulting eigenbasis can be quite far from the original eigenbasis. Speeding up: Shifting and deflation. The slowdown when the ellipse gets more circular has a converse: it turns out that when the ellipse gets more stretched, and less circular, the rotation of the ellipse becomes faster. Such a stretch can be induced when the matrix formula_5 which the ellipse represents gets replaced with formula_6 where formula_7 is approximately the smallest eigenvalue of formula_5. In this case, the ratio of the two semi-axes of the ellipse approaches formula_8. In higher dimensions, shifting like this makes the length of the smallest semi-axis of an ellipsoid small relative to the other semi-axes, which speeds up convergence to the smallest eigenvalue, but does not speed up convergence to the other eigenvalues. This becomes useless when the smallest eigenvalue is fully determined, so the matrix must then be "deflated", which simply means removing its last row and column. The issue with the unstable fixed point also needs to be addressed. The shifting heuristic is often designed to deal with this problem as well: practical shifts are often discontinuous and randomised. Wilkinson's shift, which is well suited to symmetric matrices like the ones visualised here, is in particular discontinuous. The implicit QR algorithm. In modern computational practice, the QR algorithm is performed in an implicit version which makes the use of multiple shifts easier to introduce. 
The matrix is first brought to upper Hessenberg form formula_9 as in the explicit version; then, at each step, the first column of formula_10 is transformed via a small-size Householder similarity transformation to the first column of formula_11 (or formula_12), where formula_11, of degree formula_13, is the polynomial that defines the shifting strategy (often formula_14, where formula_7 and formula_15 are the two eigenvalues of the trailing formula_16 principal submatrix of formula_10, the so-called "implicit double-shift"). Then successive Householder transformations of size formula_17 are performed in order to return the working matrix formula_10 to upper Hessenberg form. This operation is known as "bulge chasing", due to the peculiar shape of the non-zero entries of the matrix along the steps of the algorithm. As in the first version, deflation is performed as soon as one of the sub-diagonal entries of formula_10 is sufficiently small. Renaming proposal. Since in the modern implicit version of the procedure no QR decompositions are explicitly performed, some authors, for instance Watkins, suggested changing its name to "Francis algorithm". Golub and Van Loan use the term "Francis QR step". Interpretation and convergence. The QR algorithm can be seen as a more sophisticated variation of the basic "power" eigenvalue algorithm. Recall that the power algorithm repeatedly multiplies "A" times a single vector, normalizing after each iteration. The vector converges to an eigenvector of the largest eigenvalue. Instead, the QR algorithm works with a complete basis of vectors, using QR decomposition to renormalize (and orthogonalize). For a symmetric matrix "A", upon convergence, "AQ" = "QΛ", where "Λ" is the diagonal matrix of eigenvalues to which "A" converged, and where "Q" is a composite of all the orthogonal similarity transforms required to get there. Thus the columns of "Q" are the eigenvectors. History. 
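This interpretation can be sketched for a small symmetric matrix by accumulating the orthogonal factors (an illustrative example, not a production implementation):

```python
import numpy as np

# Accumulate the orthogonal factors of the QR iteration; for a symmetric
# matrix the composite Q converges to a matrix of eigenvectors.
# The 3x3 matrix is an arbitrary symmetric example.
A = np.array([[4.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 2.0]])

Ak = A.copy()
Qtot = np.eye(3)
for _ in range(200):
    Q, R = np.linalg.qr(Ak)
    Ak = R @ Q
    Qtot = Qtot @ Q          # composite of all similarity transforms

Lam = np.diag(np.diag(Ak))   # converged (approximately diagonal) iterate
# Upon convergence A Qtot = Qtot Lam, so the columns of Qtot are eigenvectors.
print(np.diag(Lam))
```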
The QR algorithm was preceded by the "LR algorithm", which uses the LU decomposition instead of the QR decomposition. The QR algorithm is more stable, so the LR algorithm is rarely used nowadays. However, it represents an important step in the development of the QR algorithm. The LR algorithm was developed in the early 1950s by Heinz Rutishauser, who worked at that time as a research assistant of Eduard Stiefel at ETH Zurich. Stiefel suggested that Rutishauser use the sequence of moments "y"0T "A""k" "x"0, "k" = 0, 1, … (where "x"0 and "y"0 are arbitrary vectors) to find the eigenvalues of "A". Rutishauser took an algorithm of Alexander Aitken for this task and developed it into the "quotient–difference algorithm" or "qd algorithm". After arranging the computation in a suitable shape, he discovered that the qd algorithm is in fact the iteration "A""k" = "L""k""U""k" (LU decomposition), "A""k"+1 = "U""k""L""k", applied on a tridiagonal matrix, from which the LR algorithm follows. Other variants. One variant of the "QR algorithm", "the Golub-Kahan-Reinsch" algorithm starts with reducing a general matrix into a bidiagonal one. This variant of the "QR algorithm" for the computation of singular values was first described by . The LAPACK subroutine DBDSQR implements this iterative method, with some modifications to cover the case where the singular values are very small . Together with a first step using Householder reflections and, if appropriate, QR decomposition, this forms the DGESVD routine for the computation of the singular value decomposition. The QR algorithm can also be implemented in infinite dimensions with corresponding convergence results. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": " A_{k+1} = R_k Q_k = Q_k^{-1} Q_k R_k Q_k = Q_k^{-1} A_k Q_k = Q_k^{\\mathsf{T}} A_k Q_k, " }, { "math_id": 1, "text": "\\tfrac{10}{3} n^3 + \\mathcal{O}(n^2)" }, { "math_id": 2, "text": "6 n^2 + \\mathcal{O}(n)" }, { "math_id": 3, "text": "\\tfrac{4}{3} n^3 + \\mathcal{O}(n^2)" }, { "math_id": 4, "text": "\\mathcal{O}(n)" }, { "math_id": 5, "text": "M" }, { "math_id": 6, "text": "M-\\lambda I" }, { "math_id": 7, "text": "\\lambda" }, { "math_id": 8, "text": "\\infty" }, { "math_id": 9, "text": "A_0=QAQ^{\\mathsf{T}}" }, { "math_id": 10, "text": "A_k" }, { "math_id": 11, "text": "p(A_k)" }, { "math_id": 12, "text": "p(A_k)e_1" }, { "math_id": 13, "text": "r" }, { "math_id": 14, "text": "p(x)=(x-\\lambda)(x-\\bar\\lambda)" }, { "math_id": 15, "text": "\\bar\\lambda" }, { "math_id": 16, "text": "2 \\times 2" }, { "math_id": 17, "text": "r+1" } ]
https://en.wikipedia.org/wiki?curid=594072
59409110
Magnetic topological insulator
Topological insulators of magnetic materials In physics, magnetic topological insulators are three dimensional magnetic materials with a non-trivial topological index protected by a symmetry other than time-reversal. This type of material conducts electricity on its outer surface, but its volume behaves like an insulator. In contrast with a non-magnetic topological insulator, a magnetic topological insulator can have naturally gapped surface states as long as the quantizing symmetry is broken at the surface. These gapped surfaces exhibit a topologically protected half-quantized surface anomalous Hall conductivity (formula_0) perpendicular to the surface. The sign of the half-quantized surface anomalous Hall conductivity depends on the specific surface termination. Theory. Axion coupling. The formula_1 classification of a 3D crystalline topological insulator can be understood in terms of the axion coupling formula_2, a scalar quantity that is determined from the ground state wavefunction: formula_3 where formula_4 is a shorthand notation for the Berry connection matrix formula_5, where formula_6 is the cell-periodic part of the ground state Bloch wavefunction. The topological nature of the axion coupling is evident if one considers gauge transformations. In this condensed matter setting a gauge transformation is a unitary transformation between states at the same formula_7 point formula_8. Now a gauge transformation will cause formula_9, formula_10. Since a gauge choice is arbitrary, this property tells us that formula_2 is only well defined in an interval of length formula_11, e.g. formula_12. The final ingredient we need to acquire a formula_1 classification based on the axion coupling comes from observing how crystalline symmetries act on formula_2. 
Crystalline symmetries either preserve the axion coupling or reverse its sign: lattice translations formula_13 and rotations formula_14 leave it invariant, formula_15, whereas time-reversal formula_16 and inversion formula_17 flip it, formula_18. The consequence is that if time-reversal or inversion is a symmetry of the crystal, we must have formula_19, and that can only be true if formula_20 (trivial) or formula_21 (non-trivial) (note that formula_22 and formula_21 are identified), giving us a formula_1 classification. Furthermore, we can combine inversion or time-reversal with other symmetries that do not affect formula_2 to obtain new symmetries that quantize formula_2. For example, mirror symmetry can always be expressed as formula_23, giving rise to crystalline topological insulators, while the first intrinsic magnetic topological insulator MnBiformula_24Teformula_25 has the quantizing symmetry formula_26. Surface anomalous Hall conductivity. So far we have discussed the mathematical properties of the axion coupling. Physically, a non-trivial axion coupling (formula_27) will result in a half-quantized surface anomalous Hall conductivity (formula_28) if the surface states are gapped. To see this, note that in general formula_29 has two contributions. One comes from the axion coupling formula_30, a quantity determined from bulk considerations as we have seen, while the other is the Berry phase formula_31 of the surface states at the Fermi level and therefore depends on the surface. In summary, for a given surface termination, the component of the surface anomalous Hall conductivity perpendicular to the surface will be formula_32. The expression for formula_29 is defined only formula_33 because a surface property (formula_29) can be determined from a bulk property (formula_2) only up to a quantum. To see this, consider a block of a material with some initial formula_2 which we wrap with a 2D quantum anomalous Hall insulator with Chern index formula_34. As long as we do this without closing the surface gap, we are able to increase formula_29 by formula_35 without altering the bulk, and therefore without altering the axion coupling formula_2. 
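The mod-formula_33 arithmetic above can be made concrete with a small numerical sketch (function name and reduction convention are illustrative, not from the source; conductivities are expressed in units of e^2/h):

```python
import math

def surface_ahc(theta, phi):
    """Perpendicular surface anomalous Hall conductivity, in units of e^2/h.

    sigma = -(theta - phi) / (2*pi), defined only modulo 1 (i.e. mod e^2/h);
    here it is reduced to the representative interval [-1/2, 1/2).
    """
    sigma = -(theta - phi) / (2 * math.pi)
    return sigma - math.floor(sigma + 0.5)

# Non-trivial axion coupling (theta = pi) with a gapped surface (phi = 0):
half_quantized = surface_ahc(math.pi, 0.0)      # half quantum, -e^2/(2h)

# Wrapping the sample in a Chern-number-1 layer shifts the surface Berry
# phase by 2*pi, changing sigma by a full quantum e^2/h -- the same class:
wrapped = surface_ahc(math.pi, -2 * math.pi)
```

The reduction modulo one quantum is exactly why only the half-integer part of the surface conductivity is a bulk (axion) property.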
One of the most dramatic effects occurs when formula_36 and time-reversal symmetry is present, i.e. in a non-magnetic topological insulator. Since formula_37 is a pseudovector on the surface of the crystal, it must respect the surface symmetries, and formula_16 is one of them; but formula_38, resulting in formula_39. This forces formula_40 on "every surface", resulting in a Dirac cone (or more generally an odd number of Dirac cones) on "every surface" and therefore making the boundary of the material conducting. On the other hand, if time-reversal symmetry is absent, other symmetries can quantize formula_36 without forcing formula_37 to vanish. The most extreme case is inversion symmetry (I). Inversion is never a surface symmetry, and therefore a non-zero formula_37 is allowed. In the case that a surface is gapped, we have formula_41, which results in a half-quantized surface AHC, formula_42. A related treatment in terms of a half-quantized surface Hall conductivity is also valid for understanding topological insulators in a magnetic field, giving an effective axion description of the electrodynamics of these materials. This term leads to several interesting predictions, including a quantized magnetoelectric effect. Evidence for this effect has recently been given in THz spectroscopy experiments performed at Johns Hopkins University. Experimental realizations. Magnetic topological insulators have proven difficult to create experimentally. In 2023 it was estimated that a magnetic topological insulator might be developed in 15 years' time. A compound made from manganese, bismuth, and tellurium (MnBi2Te4) has been predicted to be a magnetic topological insulator. In 2024, scientists at the University of Chicago used MnBi2Te4 to develop a form of optical memory which is switched using lasers. This memory storage device could store data more quickly and efficiently, including in quantum computing. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "e^2/2h" }, { "math_id": 1, "text": "\\mathbb{Z}_2" }, { "math_id": 2, "text": "\\theta" }, { "math_id": 3, "text": "\\theta = -\\frac{1}{4\\pi}\\int_{\\rm BZ} d^3k \\, \\epsilon^{\\alpha \\beta \\gamma} \\text{Tr} \\Big[ \\mathcal{A}_\\alpha \\partial_\\beta \\mathcal{A}_\\gamma -i\\frac{2}{3} \\mathcal{A}_\\alpha \\mathcal{A}_\\beta \\mathcal{A}_\\gamma \\Big]" }, { "math_id": 4, "text": "\\mathcal{A}_\\alpha" }, { "math_id": 5, "text": "\\mathcal{A}_j^{nm}(\\mathbf{k}) = \\langle u_{n\\mathbf{k}} | i\\partial_{k_j} | u_{m\\mathbf{k}} \\rangle" }, { "math_id": 6, "text": "| u_{m\\mathbf{k}} \\rangle" }, { "math_id": 7, "text": "\\mathbf{k}" }, { "math_id": 8, "text": "|\\tilde{\\psi}_{n\\mathbf{k}}\\rangle = U_{mn}(\\mathbf{k})|\\psi_{n\\mathbf{k}}\\rangle" }, { "math_id": 9, "text": " \\theta \\rightarrow \\theta +2\\pi n" }, { "math_id": 10, "text": "n \\in \\mathbb{N}" }, { "math_id": 11, "text": "2\\pi" }, { "math_id": 12, "text": "\\theta \\in [-\\pi,\\pi]" }, { "math_id": 13, "text": "\\tau_q" }, { "math_id": 14, "text": "C_n" }, { "math_id": 15, "text": "\\theta \\rightarrow \\theta " }, { "math_id": 16, "text": "T" }, { "math_id": 17, "text": "I" }, { "math_id": 18, "text": "\\theta \\rightarrow -\\theta " }, { "math_id": 19, "text": "\\theta = -\\theta " }, { "math_id": 20, "text": "\\theta = 0" }, { "math_id": 21, "text": "\\pi" }, { "math_id": 22, "text": "-\\pi" }, { "math_id": 23, "text": "m=I*C_2" }, { "math_id": 24, "text": "_2" }, { "math_id": 25, "text": "_4" }, { "math_id": 26, "text": "S=T*\\tau_{1/2}" }, { "math_id": 27, "text": "\\theta = \\pi" }, { "math_id": 28, "text": "\\sigma^{\\text{surf}}_{\\text{AHC}}=e^2/2h" }, { "math_id": 29, "text": "\\sigma^{\\text{surf}}_{\\text{AHC}}" }, { "math_id": 30, "text": "\\theta " }, { "math_id": 31, "text": "\\phi " }, { "math_id": 32, "text": "\\sigma^{\\text{surf}}_{\\text{AHC}} = -\\frac{e^2}{h}\\frac{\\theta-\\phi}{2\\pi} \\ \\text{mod} \\ e^2/h " }, { "math_id": 33, 
"text": "\\text{mod} \\ e^2/h " }, { "math_id": 34, "text": "C=1" }, { "math_id": 35, "text": "e^2/h" }, { "math_id": 36, "text": "\\theta=\\pi" }, { "math_id": 37, "text": "\\boldsymbol{\\sigma}^{\\text{surf}}_{\\text{AHC}}" }, { "math_id": 38, "text": "T\\boldsymbol{\\sigma}^{\\text{surf}}_{\\text{AHC}} =- \\boldsymbol{\\sigma}^{\\text{surf}}_{\\text{AHC}}" }, { "math_id": 39, "text": "\\boldsymbol{\\sigma}^{\\text{surf}}_{\\text{AHC}} = 0" }, { "math_id": 40, "text": "\\phi = \\pi" }, { "math_id": 41, "text": "\\phi = 0" }, { "math_id": 42, "text": "\\sigma^{\\text{surf}}_{\\text{AHC}} = -\\frac{e^2}{2h}" } ]
https://en.wikipedia.org/wiki?curid=59409110
59413690
Kerr–Dold vortex
In fluid dynamics, the Kerr–Dold vortex is an exact solution of the Navier–Stokes equations, representing steady periodic vortices superposed on a stagnation point flow (or extensional flow). The solution was discovered by Oliver S. Kerr and John W. Dold in 1994. These steady solutions exist as a result of a balance between vortex stretching by the extensional flow and viscous diffusion, similar to the Burgers vortex. These vortices were observed experimentally in a four-roll mill apparatus by Lagnado and L. Gary Leal. Mathematical description. The stagnation point flow, which is itself an exact solution of the Navier–Stokes equations, is given by formula_0, where formula_1 is the strain rate. To this flow, an additional periodic disturbance can be added such that the new velocity field can be written as formula_2 where the disturbances formula_3 and formula_4 are assumed to be periodic in the formula_5 direction with a fundamental wavenumber formula_6. Kerr and Dold showed that such disturbances exist with finite amplitude, thus making the solution an exact solution of the Navier–Stokes equations. Introducing a stream function formula_7 for the disturbance velocity components, the equations for the disturbances in vorticity–streamfunction formulation can be shown to reduce to formula_8 where formula_9 is the disturbance vorticity. A single parameter formula_10, which measures the strength of the converging flow relative to viscous dissipation, is obtained upon non-dimensionalization. The solution will be assumed to be formula_11 Since formula_7 is real, it is easy to verify that formula_12 Since the expected vortex structure has the symmetry formula_13, we have formula_14. Upon substitution, an infinite sequence of non-linearly coupled differential equations is obtained. To derive the following equations, the Cauchy product rule is used. 
The equations are formula_15 The boundary conditions formula_16 and the corresponding symmetry condition are enough to solve the problem. It can be shown that non-trivial solutions exist only when formula_17 On solving these equations numerically, it is found that keeping the first 7 to 8 terms suffices to produce accurate results. The solution for formula_18, namely formula_19, was already discovered by Craik and Criminale in 1986. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "\\mathbf{U}=(0,-Ay,Az)" }, { "math_id": 1, "text": "A" }, { "math_id": 2, "text": "\\mathbf{u}=\\begin{bmatrix}0 \\\\-Ay \\\\ Az \\end{bmatrix} + \\begin{bmatrix}u(x,y) \\\\v(x,y) \\\\ 0 \\end{bmatrix}" }, { "math_id": 3, "text": "u(x,y)" }, { "math_id": 4, "text": "v(x,y)" }, { "math_id": 5, "text": "x" }, { "math_id": 6, "text": "k" }, { "math_id": 7, "text": "\\psi" }, { "math_id": 8, "text": "\\begin{align}\n\\omega &= -\\left(\\frac{\\partial^2}{\\partial x^2}+\\frac{\\partial^2}{\\partial y^2}\\right)\\psi\\\\[6pt]\n\\frac{\\partial \\psi}{\\partial y} \\frac{\\partial \\omega}{\\partial x} - \\frac{\\partial \\psi}{\\partial x} \\frac{\\partial \\omega}{\\partial y} &- A y\\frac{\\partial \\omega}{\\partial y} - A\\omega = \\nu\\left(\\frac{\\partial^2}{\\partial x^2}+\\frac{\\partial^2}{\\partial y^2}\\right)\\omega\n\\end{align}" }, { "math_id": 9, "text": "\\omega" }, { "math_id": 10, "text": "\\lambda = \\frac{A}{\\nu k^2}" }, { "math_id": 11, "text": "\\psi = \\sum_{k=-\\infty}^\\infty [a_k(y) + i b_k(y)]e^{-ikx}." }, { "math_id": 12, "text": "a_k= a_{-k},\\,b_k = - b_{-k},\\, b_0 =0." }, { "math_id": 13, "text": "\\psi(x,y)=\\psi(-x,-y),\\, \\psi(x,y)=-\\psi(\\pi-x,y)" }, { "math_id": 14, "text": "a_0=b_1=0" }, { "math_id": 15, "text": "\n\\begin{align}\n& a_k''''+ Ay a_k''' + (A-2k^2)a_k''- k^2 Ay a_k'- k^2 Aa_k + k^4 a_k\\\\[6pt]\n& {} + i\\left[b_k'''' + A y b_k''' + (A-2k^2)b_k'' - k^2 Ay b_k' - k^2 Ab_k + k^4 b_k \\right]\\\\[6pt]\n= {} & i \\sum_{\\ell=-\\infty}^\\infty \\left\\{\\left(a_{k-\\ell}' + ib_{k-\\ell}'\\right)\\left[\\ell a_\\ell'' - \\ell^3 a_\\ell + i(\\ell b_\\ell'' - \\ell^3 b_\\ell)\\right]\n- (k-\\ell) \\left(a_{k-\\ell}+ib_{k-\\ell}\\right)\\left[a_\\ell''' - \\ell^2 a_\\ell' + i(b_\\ell''' - \\ell^2 b_\\ell')\\right]\\right\\}.\n\\end{align}\n" }, { "math_id": 16, "text": "a_k'(0)=b_k(0)=a_k(\\infty)=b_k(\\infty)=0" }, { "math_id": 17, "text": "\\lambda>1." 
}, { "math_id": 18, "text": "\\lambda=1" }, { "math_id": 19, "text": "\\psi=\\cos x" } ]
https://en.wikipedia.org/wiki?curid=59413690
5941535
Regular number
Numbers that evenly divide powers of 60 Regular numbers are numbers that evenly divide powers of 60 (or, equivalently, powers of 30). Equivalently, they are the numbers whose only prime divisors are 2, 3, and 5. As an example, 60^2 = 3600 = 48 × 75, so as divisors of a power of 60 both 48 and 75 are regular. These numbers arise in several areas of mathematics and its applications, and have different names coming from their different areas of study. Number theory. Formally, a regular number is an integer of the form formula_0, for nonnegative integers formula_1, formula_2, and formula_3. Such a number is a divisor of formula_4. The regular numbers are also called 5-smooth, indicating that their greatest prime factor is at most 5. More generally, a k-smooth number is a number whose greatest prime factor is at most k. The first few regular numbers are &lt;templatestyles src="Block indent/styles.css"/&gt; Several other sequences at the On-Line Encyclopedia of Integer Sequences have definitions involving 5-smooth numbers. Although the regular numbers appear dense within the range from 1 to 60, they are quite sparse among the larger integers. A regular number formula_5 is less than or equal to some threshold formula_6 if and only if the point formula_7 belongs to the tetrahedron bounded by the coordinate planes and the plane formula_8, as can be seen by taking logarithms of both sides of the inequality formula_9. Therefore, the number of regular numbers that are at most formula_6 can be estimated as the volume of this tetrahedron, which is formula_10 Even more precisely, using big O notation, the number of regular numbers up to formula_6 is formula_11 and it has been conjectured that the error term of this approximation is actually formula_12. A similar formula for the number of 3-smooth numbers up to formula_6 is given by Srinivasa Ramanujan in his first letter to G. H. Hardy. Babylonian mathematics. 
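The counting estimate can be checked against a brute-force enumeration (a sketch; function names are illustrative, and the sharpened formula with the conjectured formula_12 error term is used):

```python
import math

def regular_numbers_up_to(N):
    """All 5-smooth numbers <= N, by enumerating products 2^i * 3^j * 5^k."""
    out = []
    p2 = 1
    while p2 <= N:
        p23 = p2
        while p23 <= N:
            p235 = p23
            while p235 <= N:
                out.append(p235)
                p235 *= 5
            p23 *= 3
        p2 *= 2
    return sorted(out)

def refined_estimate(N):
    """(ln(N*sqrt(30)))^3 / (6 ln2 ln3 ln5), the sharpened count estimate."""
    return math.log(N * math.sqrt(30)) ** 3 / (
        6 * math.log(2) * math.log(3) * math.log(5))
```

For N = 60 the brute-force count is 26 while the refined estimate gives about 26.5, illustrating how good the approximation already is for small thresholds.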
In the Babylonian sexagesimal notation, the reciprocal of a regular number has a finite representation. If formula_13 divides formula_14, then the sexagesimal representation of formula_15 is just that for formula_16, shifted by some number of places. This allows for easy division by these numbers: to divide by formula_13, multiply by formula_15, then shift. For instance, consider division by the regular number 54 = 2^1·3^3. 54 is a divisor of 60^3, and 60^3/54 = 4000, so dividing by 54 in sexagesimal can be accomplished by multiplying by 4000 and shifting three places. In sexagesimal 4000 = 1×3600 + 6×60 + 40×1, or (as listed by Joyce) 1:6:40. Thus, 1/54, in sexagesimal, is 1/60 + 6/60^2 + 40/60^3, also denoted 1:6:40 as Babylonian notational conventions did not specify the power of the starting digit. Conversely 1/4000 = 54/60^3, so division by 1:6:40 = 4000 can be accomplished by instead multiplying by 54 and shifting three sexagesimal places. The Babylonians used tables of reciprocals of regular numbers, some of which still survive. These tables existed relatively unchanged throughout Babylonian times. One tablet from Seleucid times, by someone named Inaqibıt-Anu, contains the reciprocals of 136 of the 231 six-place regular numbers whose first place is 1 or 2, listed in order. It also includes reciprocals of some numbers of more than six places, such as 3^23 (2 1 4 8 3 0 27 in sexagesimal), whose reciprocal has 17 sexagesimal digits. Noting the difficulty of both calculating these numbers and sorting them, Donald Knuth in 1972 hailed Inaqibıt-Anu as "the first man in history to solve a computational problem that takes longer than one second of time on a modern electronic computer!" (Two tables are also known giving approximations of reciprocals of non-regular numbers, one of which gives reciprocals for all the numbers from 56 to 80.) 
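The multiply-and-shift recipe can be sketched in a few lines (helper names are illustrative; digits are listed most significant first):

```python
def to_sexagesimal(n):
    """Base-60 digits of a positive integer, most significant first."""
    digits = []
    while n:
        n, r = divmod(n, 60)
        digits.append(r)
    return digits[::-1] or [0]

def reciprocal_digits(n, k):
    """Sexagesimal digits of 1/n, computed as 60**k / n for regular n dividing 60**k."""
    q, r = divmod(60 ** k, n)
    assert r == 0, "n must be regular and k large enough"
    return to_sexagesimal(q)

# Division by 54: multiply by 1/54 = 1:6:40 and shift three places.
assert reciprocal_digits(54, 3) == [1, 6, 40]
```

As a check on the tablet example, `reciprocal_digits(3**23, 23)` comes out 17 digits long, matching the 17-sexagesimal-digit reciprocal mentioned above.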
Although the primary reason for preferring regular numbers to other numbers involves the finiteness of their reciprocals, some Babylonian calculations other than reciprocals also involved regular numbers. For instance, tables of regular squares have been found, and the broken tablet Plimpton 322 has been interpreted by Neugebauer as listing Pythagorean triples formula_17 generated by formula_18 and formula_19 both regular and less than 60. Fowler and Robson discuss the calculation of square roots, such as how the Babylonians found an approximation to the square root of 2, perhaps using regular number approximations of fractions such as 17/12. Music theory. In music theory, the just intonation of the diatonic scale involves regular numbers: the pitches in a single octave of this scale have frequencies proportional to the numbers in the sequence 24, 27, 30, 32, 36, 40, 45, 48 of nearly consecutive regular numbers. Thus, for an instrument with this tuning, all pitches are regular-number harmonics of a single fundamental frequency. This scale is called a 5-limit tuning, meaning that the interval between any two pitches can be described as a product 2^i·3^j·5^k of powers of the prime numbers up to 5, or equivalently as a ratio of regular numbers. 5-limit musical scales other than the familiar diatonic scale of Western music have also been used, both in traditional musics of other cultures and in modern experimental music; one survey lists 31 different 5-limit scales, drawn from a larger database of musical scales. Each of these 31 scales shares with diatonic just intonation the property that all intervals are ratios of regular numbers. Euler's tonnetz provides a convenient graphical representation of the pitches in any 5-limit tuning, by factoring out the octave relationships (powers of two) so that the remaining values form a planar grid. 
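The scale and its intervals can be checked directly with Python's `fractions` module (a sketch; the tonic is taken as 24):

```python
from fractions import Fraction

def is_regular(n):
    """True if n is 5-smooth, i.e. has no prime factor larger than 5."""
    for p in (2, 3, 5):
        while n % p == 0:
            n //= p
    return n == 1

# One octave of the just diatonic scale, as relative frequencies.
scale = [24, 27, 30, 32, 36, 40, 45, 48]

# Intervals above the tonic reduce to ratios of regular numbers.
intervals = [Fraction(f, scale[0]) for f in scale]
```

The intervals come out as 1, 9/8, 5/4, 4/3, 3/2, 5/3, 15/8 and 2/1, the familiar just major scale, and every numerator and denominator is regular.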
Some music theorists have stated more generally that regular numbers are fundamental to tonal music itself, and that pitch ratios based on primes larger than 5 cannot be consonant. However, the equal temperament of modern pianos is not a 5-limit tuning, and some modern composers have experimented with tunings based on primes larger than five. In connection with the application of regular numbers to music theory, it is of interest to find pairs of regular numbers that differ by one. There are exactly ten such pairs formula_20, and each such pair defines a superparticular ratio formula_21 that is meaningful as a musical interval. These intervals are 2/1 (the octave), 3/2 (the perfect fifth), 4/3 (the perfect fourth), 5/4 (the just major third), 6/5 (the just minor third), 9/8 (the just major tone), 10/9 (the just minor tone), 16/15 (the just diatonic semitone), 25/24 (the just chromatic semitone), and 81/80 (the syntonic comma). In the Renaissance theory of universal harmony, musical ratios were used in other applications, including the architecture of buildings. In connection with the analysis of these shared musical and architectural ratios, for instance in the architecture of Palladio, the regular numbers have also been called the harmonic whole numbers. Algorithms. Algorithms for calculating the regular numbers in ascending order were popularized by Edsger Dijkstra. Dijkstra (1976, 1981) attributes to Hamming the problem of building the infinite ascending sequence of all 5-smooth numbers; this problem is now known as Hamming's problem, and the numbers so generated are also called the Hamming numbers. Dijkstra's ideas to compute these numbers are the following: the sequence of Hamming numbers begins with the number 1; the remaining values in the sequence are of the form formula_22, formula_23, and formula_24, where formula_25 is any Hamming number; therefore, the sequence formula_26 may be generated by outputting the value 1 and then merging the sequences formula_27, formula_28, and formula_29. This algorithm is often used to demonstrate the power of a lazy functional programming language, because (implicitly) concurrent efficient implementations, using a constant number of arithmetic operations per generated value, are easily constructed as described above. 
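Dijkstra's merge idea can be sketched with a heap standing in for the lazy streams (an illustrative equivalent, not the constant-operations lazy formulation):

```python
import heapq

def hamming_numbers(count):
    """First `count` Hamming (regular) numbers in ascending order.

    Start from 1; every later value is 2h, 3h or 5h for an earlier value h.
    A heap plus a seen-set plays the role of merging the streams 2H, 3H, 5H.
    """
    heap, seen, out = [1], {1}, []
    while len(out) < count:
        h = heapq.heappop(heap)
        out.append(h)
        for m in (2 * h, 3 * h, 5 * h):
            if m not in seen:
                seen.add(m)
                heapq.heappush(heap, m)
    return out
```

The first values are 1, 2, 3, 4, 5, 6, 8, 9, 10, 12, …; scanning the output for consecutive values also recovers the ten pairs of regular numbers differing by one that are mentioned above.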
Similarly efficient strict functional or imperative sequential implementations are also possible, whereas explicitly concurrent generative solutions might be non-trivial. In the Python programming language, lazy functional code for generating regular numbers is used as one of the built-in tests for correctness of the language's implementation. A related problem, discussed by Knuth, is to list all formula_3-digit sexagesimal numbers in ascending order (see #Babylonian mathematics above). In algorithmic terms, this is equivalent to generating (in order) the subsequence of the infinite sequence of regular numbers ranging from formula_14 to formula_30. Early computer code generated these numbers out of order and then sorted them; Knuth describes an ad hoc algorithm, attributed to an earlier author, for generating the six-digit numbers more quickly, but it does not generalize in a straightforward way to larger values of formula_3. Another algorithm computes tables of this type in linear time for arbitrary values of formula_3. Other applications. It has been shown that, when formula_13 is a regular number divisible by 8, the generating function of an formula_13-dimensional extremal even unimodular lattice is the formula_13th power of a polynomial. As with other classes of smooth numbers, regular numbers are important as problem sizes in computer programs for performing the fast Fourier transform, a technique for analyzing the dominant frequencies of signals in time-varying data. For instance, some FFT methods require that the transform length be a regular number. Book VIII of Plato's "Republic" involves an allegory of marriage centered on the highly regular number 60^4 = 12,960,000 and its divisors (see Plato's number). Later scholars have invoked both Babylonian mathematics and music theory in an attempt to explain this passage. 
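As a sketch of the FFT-length use, here is a routine (the name is borrowed informally from common FFT libraries, not from the source) finding the smallest regular number at least as large as a target:

```python
def next_regular(target):
    """Smallest 5-smooth number >= target, e.g. for padding an FFT length."""
    if target <= 1:
        return 1
    best = None
    p5 = 1
    while best is None or p5 < best:
        p35 = p5
        while best is None or p35 < best:
            if p35 >= target:
                candidate = p35
            else:
                # smallest power of two lifting p35 to at least target
                shift = ((target + p35 - 1) // p35 - 1).bit_length()
                candidate = p35 << shift
            if best is None or candidate < best:
                best = candidate
            p35 *= 3
        p5 *= 5
    return best
```

Only products 3^j·5^k below the current best need to be examined, since each is completed by the cheapest power of two; this keeps the search small even for large targets.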
Certain species of bamboo release large numbers of seeds in synchrony (a process called masting) at intervals that have been estimated as regular numbers of years, with different intervals for different species, including examples with intervals of 10, 15, 16, 30, 32, 48, 60, and 120 years. It has been hypothesized that the biological mechanism for timing and synchronizing this process lends itself to smooth numbers, and in particular in this case to 5-smooth numbers. Although the estimated masting intervals for some other species of bamboo are not regular numbers of years, this may be explainable as measurement error. Notes. &lt;templatestyles src="Reflist/styles.css" /&gt; References. &lt;templatestyles src="Refbegin/styles.css" /&gt;
[ { "math_id": 0, "text": "2^i\\cdot 3^j\\cdot 5^k" }, { "math_id": 1, "text": "i" }, { "math_id": 2, "text": "j" }, { "math_id": 3, "text": "k" }, { "math_id": 4, "text": "60^{\\max(\\lceil i\\,/2\\rceil,j,k)}" }, { "math_id": 5, "text": "n=2^i\\cdot 3^j\\cdot 5^k" }, { "math_id": 6, "text": "N" }, { "math_id": 7, "text": "(i,j,k)" }, { "math_id": 8, "text": "i\\ln 2+j\\ln 3+k\\ln 5\\le\\ln N," }, { "math_id": 9, "text": "2^i\\cdot 3^j\\cdot 5^k\\le N" }, { "math_id": 10, "text": "\\frac{\\log_2 N\\,\\log_3 N\\,\\log_5 N}{6}." }, { "math_id": 11, "text": "\\frac{\\left(\\ln(N\\sqrt{30})\\right)^3}{6\\ln 2 \\ln 3 \\ln 5}+O(\\log N)," }, { "math_id": 12, "text": "O(\\log\\log N)" }, { "math_id": 13, "text": "n" }, { "math_id": 14, "text": "60^k" }, { "math_id": 15, "text": "1/n" }, { "math_id": 16, "text": "60^k/n" }, { "math_id": 17, "text": "( p^2 - q^2,\\, 2pq,\\, p^2 + q^2 )" }, { "math_id": 18, "text": "p" }, { "math_id": 19, "text": "q" }, { "math_id": 20, "text": "(x,x+1)" }, { "math_id": 21, "text": "\\tfrac{x+1}{x}" }, { "math_id": 22, "text": "2h" }, { "math_id": 23, "text": "3h" }, { "math_id": 24, "text": "5h" }, { "math_id": 25, "text": "h" }, { "math_id": 26, "text": "H" }, { "math_id": 27, "text": "2H" }, { "math_id": 28, "text": "3H" }, { "math_id": 29, "text": "5H" }, { "math_id": 30, "text": "60^{k+1}" } ]
https://en.wikipedia.org/wiki?curid=5941535
594303
True airspeed
Speed of an aircraft relative to the air mass through which it is flying The true airspeed (TAS; also KTAS, for "knots true airspeed") of an aircraft is the speed of the aircraft relative to the air mass through which it is flying. The true airspeed is important information for accurate navigation of an aircraft. Traditionally it is measured using an analogue TAS indicator, but as the Global Positioning System has become available for civilian use, the importance of such air-measuring instruments has decreased. Since "indicated", as opposed to "true", airspeed is a better indicator of margin above the stall, true airspeed is not used for controlling the aircraft; for these purposes the indicated airspeed – IAS or KIAS (knots indicated airspeed) – is used. However, since indicated airspeed only shows true speed through the air at standard sea level pressure and temperature, a TAS meter is necessary for navigation purposes at cruising altitude in less dense air. The IAS meter reads very nearly the TAS at lower altitude and at lower speed. On jet airliners the TAS meter is usually hidden at speeds below . Neither provides for accurate speed over the ground, since surface winds or winds aloft are not taken into account. Performance. TAS is the appropriate speed to use when calculating the range of an airplane. It is the speed normally listed on the flight plan, also used in flight planning, before considering the effects of wind. Airspeed sensing errors. The airspeed indicator (ASI), driven by ram air into a pitot tube and still air into a barometric static port, shows what is called indicated airspeed (IAS). The differential pressure is affected by air density. The ratio between the two measurements is temperature-dependent and pressure-dependent, according to the ideal gas law. At sea level in the International Standard Atmosphere (ISA) and at low speeds where air compressibility is negligible (i.e., assuming a constant air density), IAS corresponds to TAS. 
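Away from sea-level ISA conditions the two speeds diverge; the low-speed relation TAS = EAS/√(ρ/ρ₀), with EAS ≈ IAS at low speed, can be sketched using the standard ISA troposphere model (constants are the usual ISA values; function names are illustrative):

```python
RHO0 = 1.225  # kg/m^3, ISA sea-level air density

def isa_density(h_m):
    """Approximate ISA tropospheric air density (kg/m^3) at altitude h_m metres."""
    T0, L, g, R = 288.15, 0.0065, 9.80665, 287.058
    return RHO0 * ((T0 - L * h_m) / T0) ** (g / (R * L) - 1)

def tas_from_eas(eas, h_m):
    """TAS = EAS / sqrt(rho/rho0): low-speed, incompressible approximation."""
    return eas / (isa_density(h_m) / RHO0) ** 0.5
```

At 10,000 ft (3048 m) the ISA density ratio is about 0.74, so 100 knots EAS corresponds to roughly 116 knots TAS, illustrating why the indicated value under-reads the true speed at altitude.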
When the air density or temperature around the aircraft differs from standard sea level conditions, IAS will no longer correspond to TAS, thus it will no longer reflect aircraft performance. The ASI will indicate less than TAS when the air density decreases due to a change in altitude or air temperature. For this reason, TAS cannot be measured directly. In flight, it can be calculated either by using an E6B flight calculator or its equivalent. For low speeds, the data required are static air temperature, pressure altitude and IAS (or CAS for more precision). Above approximately , the compressibility error rises significantly and TAS must be calculated by the Mach speed. Mach incorporates the above data including the compressibility factor. Modern aircraft instrumentation use an "air data computer" to perform this calculation in real time and display the TAS reading directly on the electronic flight instrument system. Since temperature variations are of a smaller influence, the ASI error can be estimated as indicating about 2% less than TAS per of altitude above sea level. For example, an aircraft flying at in the international standard atmosphere with an IAS of , is actually flying at TAS. Use in navigation calculations. To maintain a desired ground track while flying in the moving airmass, the pilot of an aircraft must use knowledge of wind speed, wind direction, and true air speed to determine the required heading. See also wind triangle. Calculating true airspeed. Low-speed flight. At low speeds and altitudes, IAS and CAS are close to equivalent airspeed (EAS). formula_0 TAS can be calculated as a function of EAS and air density: formula_1 where formula_2 is true airspeed, formula_3 is equivalent airspeed, formula_4 is the air density at sea level in the International Standard Atmosphere (15 °C and 1013.25 hectopascals, corresponding to a density of 1.225 kg/m3), formula_5 is the density of the air in which the aircraft is flying. High-speed flight. 
TAS can be calculated as a function of Mach number and static air temperature: formula_6 where formula_7 is the speed of sound at standard sea level (), formula_8 is Mach number, formula_9 is static air temperature in kelvins, formula_10 is the temperature at standard sea level (288.15 K). For manual calculation of TAS in knots, where Mach number and static air temperature are known, the expression may be simplified to formula_11 (remembering that temperature is in kelvins). Combining the above with the expression for Mach number gives an expression for TAS as a function of impact pressure, static pressure and static air temperature (valid for subsonic flow): formula_12 where: formula_13 is impact pressure, formula_14 is static pressure. Electronic flight instrument systems (EFIS) contain an air data computer with inputs of impact pressure, static pressure and total air temperature. In order to compute TAS, the air data computer must convert total air temperature to static air temperature. This is also a function of Mach number: formula_15 where formula_16 total air temperature. In simple aircraft, without an air data computer or machmeter, true airspeed can be calculated as a function of calibrated airspeed and local air density (or static air temperature and pressure altitude, which determine density). Some airspeed indicators incorporate a slide rule mechanism to perform this calculation. Otherwise, it can be performed using a flight computer such as the E6B (a handheld circular slide rule). See also. &lt;templatestyles src="Div col/styles.css"/&gt;
[ { "math_id": 0, "text": "\\rho_0 (EAS)^2 = \\rho (TAS)^2" }, { "math_id": 1, "text": "\\mathrm{TAS} =\\frac { \\mathrm{EAS}}{\\sqrt{\\frac{\\rho}{\\rho_0}}}" }, { "math_id": 2, "text": "\\mathrm{TAS}" }, { "math_id": 3, "text": "\\mathrm{EAS}" }, { "math_id": 4, "text": "\\rho_0" }, { "math_id": 5, "text": "\\rho" }, { "math_id": 6, "text": "\\mathrm{TAS} ={a_0} M\\sqrt{T\\over T_0}," }, { "math_id": 7, "text": "{a_0}" }, { "math_id": 8, "text": "M" }, { "math_id": 9, "text": "T" }, { "math_id": 10, "text": "T_0" }, { "math_id": 11, "text": "\n\\mathrm{TAS} = 39M\\sqrt{T}\n" }, { "math_id": 12, "text": "\\mathrm{TAS} = a_0\\sqrt{\\frac{5T}{T_0}\\left[\\left(\\frac{q_c}{P} + 1\\right)^\\frac{2}{7} - 1\\right]}," }, { "math_id": 13, "text": "q_c" }, { "math_id": 14, "text": "P" }, { "math_id": 15, "text": "\nT = \\frac{T_\\text{t}}{1 + 0.2M^2},\n" }, { "math_id": 16, "text": "T_\\text{t} = " } ]
https://en.wikipedia.org/wiki?curid=594303
594312
Indicated airspeed
Displayed on the airspeed indicator on an aircraft Indicated airspeed (IAS) is the airspeed of an aircraft as measured by its pitot-static system and displayed by the airspeed indicator (ASI). This is the pilots' primary airspeed reference. This value is not corrected for installation error, instrument error, or the actual encountered air density, being instead calibrated to always reflect the adiabatic compressible flow of the International Standard Atmosphere at sea level. It uses the difference between total pressure and static pressure, provided by the system, to either mechanically or electronically measure dynamic pressure. The dynamic pressure includes terms for both density and airspeed. Since the airspeed indicator cannot know the density, it is by design calibrated to assume the sea level standard atmospheric density when calculating airspeed. Since the actual density will vary considerably from this assumed value as the aircraft changes altitude, IAS varies considerably from true airspeed (TAS), the relative velocity between the aircraft and the surrounding air mass. Calibrated airspeed (CAS) is the IAS corrected for instrument and position error. An aircraft's indicated airspeed in knots is typically abbreviated "KIAS" for "Knots-Indicated Air Speed" (vs. "KCAS" for calibrated airspeed and "KTAS" for true airspeed). The IAS is an important value for the pilot because it is the indicated speeds which are specified in the aircraft flight manual for such important performance values as the stall speed. These speeds, in true airspeed terms, vary considerably depending upon density altitude. However, at typical civilian operating speeds, the aircraft's aerodynamic structure responds to dynamic pressure alone, and the aircraft will perform the same when at the same dynamic pressure. 
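The "same dynamic pressure, same behaviour" point can be sketched numerically (incompressible approximation; names and sample figures are illustrative):

```python
RHO0 = 1.225  # kg/m^3: sea-level ISA density assumed by the ASI calibration

def dynamic_pressure(rho, v):
    """Incompressible dynamic pressure q = 1/2 * rho * v^2 (Pa)."""
    return 0.5 * rho * v * v

def indicated_airspeed(q):
    """Speed an idealised (incompressible, error-free) ASI displays for q, m/s."""
    return (2.0 * q / RHO0) ** 0.5

# Two flights at (almost) the same dynamic pressure read the same IAS,
# even though their true speeds through the air differ:
q_sl  = dynamic_pressure(1.225, 60.0)   # sea level, TAS 60 m/s
q_alt = dynamic_pressure(0.905, 69.8)   # ~10,000 ft density, higher TAS
```

Both cases display close to 60 m/s indicated, which is exactly why stall and other handling speeds can be published as fixed indicated values.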
Since it is this same dynamic pressure that drives the airspeed indicator, an aircraft will always, for example, stall at the published "indicated" airspeed (for the current configuration) regardless of density, altitude or true airspeed. Furthermore, the IAS is specified in some regulations, and by air traffic control when directing pilots, since the airspeed indicator displays that speed (by definition) and it is the pilot's primary airspeed reference when operating below transonic or supersonic speeds. Calculation. Indicated airspeed measured by a pitot tube can be approximately expressed by the following equation, derived from Bernoulli's equation. formula_0 NOTE: The above equation applies only to conditions that can be treated as incompressible. Liquids are treated as incompressible under almost all conditions. Gases under certain conditions can be approximated as incompressible. See Compressibility. The compression effects can be corrected for by use of the Poisson constant. This compensation corresponds to equivalent airspeed (EAS). formula_1 where: IAS vs CAS. The IAS is not the actual speed through the air even when the aircraft is at sea level under International Standard Atmosphere conditions (15 °C, 1013 hPa, 0% humidity). The IAS needs to be corrected for known instrument and position errors to show true airspeed under those specific atmospheric conditions, and this is the CAS (Calibrated Airspeed). Despite this, the pilot's primary airspeed reference, the ASI, shows IAS (by definition). The relationship between CAS and IAS is known and documented for each aircraft type and model. IAS and V speeds. The aircraft's pilot manual usually gives critical V speeds as IAS, those speeds indicated by the airspeed indicator. This is because the aircraft behaves similarly at the same IAS no matter what the TAS is. For example: 
A pilot landing at a hot and high airfield will use the same IAS to fly the aircraft at the correct approach and landing speeds as when landing at a cold sea level airfield, even though the TAS must differ considerably between the two landings. Whereas IAS can be reliably used for monitoring critical speeds well below the speed of sound, this is not so at higher speeds. An example: because (1) the compressibility of air changes considerably approaching the speed of sound, and (2) the speed of sound varies considerably with temperature and therefore altitude, the maximum speed at which an aircraft structure is safe, the never-exceed speed (abbreviated "V"NE), is specified at several differing altitudes in the operating manuals of faster aircraft, as shown in the sample table below. Ref: "Pilot's Notes for Tempest V Sabre IIA Engine" - Air Ministry A.P.2458C-PN IAS and navigation. For navigation, it is necessary to convert IAS to TAS and/or ground speed (GS). With the advent of Doppler radar navigation and, more recently, GPS receivers, along with other advanced navigation equipment that allows pilots to read ground speed directly, the in-flight TAS calculation is becoming unnecessary for the purposes of navigation estimation. TAS remains the primary measure of an aircraft's cruise performance in manufacturers' specifications, speed comparisons and pilot reports. Other airspeeds. From IAS, the other airspeeds discussed above (CAS, EAS, TAS) can also be calculated. On large jet aircraft the IAS is by far the most important speed indicator. Most aircraft speed limitations are based on IAS, as IAS closely reflects dynamic pressure. TAS is usually displayed as well, but purely for advisory information and generally not in a prominent location. Modern jet airliners also include ground speed (GS) and a Machmeter. Ground speed shows the aircraft's actual speed relative to the ground, usually provided by a GPS or similar system. 
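The incompressible relation from the Calculation section and the IAS-to-TAS conversion can be sketched together in Python. This is a minimal illustration, not avionics code: the ISA constants (sea-level density 1.225 kg/m³, temperature 288.15 K, lapse rate 6.5 K/km) are standard values, the function names are invented for the example, and compressibility and instrument/position error are neglected (so IAS ≈ CAS ≈ EAS).

```python
import math

RHO_0 = 1.225            # ISA sea-level air density, kg/m^3
T_0 = 288.15             # ISA sea-level temperature, K
LAPSE = 0.0065           # ISA tropospheric lapse rate, K/m
G, R = 9.80665, 287.053  # gravity (m/s^2), specific gas constant for air

def ias_incompressible(p_t, p_s):
    """Approximate IAS (m/s) from total and static pressure (Pa),
    via the incompressible Bernoulli relation with sea-level density."""
    return math.sqrt(2.0 * (p_t - p_s) / RHO_0)

def isa_density(alt_m):
    """ISA air density in the troposphere (below about 11 km)."""
    t = T_0 - LAPSE * alt_m
    return RHO_0 * (t / T_0) ** (G / (R * LAPSE) - 1.0)

def tas_from_cas(cas, alt_m):
    """TAS from CAS at a given altitude, ignoring compressibility
    (valid at low Mach): TAS = CAS * sqrt(rho_0 / rho)."""
    return cas * math.sqrt(RHO_0 / isa_density(alt_m))
```

For example, a dynamic pressure of 2,000 Pa reads as roughly 57 m/s of IAS, and a CAS of 100 m/s at 5,000 m in the ISA corresponds to a TAS of about 129 m/s, illustrating how TAS grows with altitude at constant indicated speed.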
Ground speed is just a pilot aid to estimate whether the flight is on time, behind or ahead of schedule. It is not used for takeoff and landing purposes, since the speed that matters to a flying aircraft is always its speed relative to the surrounding air. The Machmeter is, on subsonic aircraft, a warning indicator. Subsonic aircraft must not fly faster than a specific percentage of the speed of sound. Usually passenger airliners do not fly faster than around 85% of the speed of sound, or Mach 0.85. Supersonic aircraft, like the Concorde and military fighters, use the Machmeter as the main speed instrument with the exception of take-offs and landings. Some aircraft also have a taxi speed indicator for use on the ground. Since the IAS often starts at around (on jet airliners), pilots may need extra help while taxiing the aircraft on the ground. Its range is around . See also. &lt;templatestyles src="Div col/styles.css"/&gt; References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "IAS \\approx \\sqrt{\\frac{2 (p_t - p_s)}{\\rho(0)}}" }, { "math_id": 1, "text": "u = \\sqrt{ \\frac{2 \\gamma}{\\gamma - 1} \\frac{p_s}{\\rho} \\left[\\left(\\frac{p_t}{p_s}\\right)^\\frac{\\gamma-1}{\\gamma}-1 \\right] }" }, { "math_id": 2, "text": "u" }, { "math_id": 3, "text": "p_t" }, { "math_id": 4, "text": "p_s" }, { "math_id": 5, "text": "\\rho(0)" }, { "math_id": 6, "text": " kg/m^3" }, { "math_id": 7, "text": "\\ \\gamma\\," } ]
https://en.wikipedia.org/wiki?curid=594312
59434042
Local average treatment effect
Econometric effect In econometrics and related empirical fields, the local average treatment effect (LATE), also known as the complier average causal effect (CACE), is the effect of a treatment for subjects who comply with the experimental treatment assigned to their sample group. It is not to be confused with the average treatment effect (ATE), which includes compliers and non-compliers together. Compliance refers to the human-subject response to a proposed experimental treatment condition. The LATE is calculated similarly to the ATE, but excludes non-compliant subjects. If the goal is to evaluate the effect of a treatment in ideal, compliant subjects, the LATE value will give a more precise estimate. However, it may lack external validity by ignoring the effect of non-compliance that is likely to occur in the real-world deployment of a treatment method. The LATE can be estimated by a ratio of the estimated intent-to-treat effect and the estimated proportion of compliers, or alternatively through an instrumental variable estimator. The LATE was first introduced in the econometrics literature by Guido W. Imbens and Joshua D. Angrist in 1994, who shared one half of the 2021 Nobel Memorial Prize in Economic Sciences. As summarized by the Nobel Committee, the LATE framework "significantly altered how researchers approach empirical questions using data generated from either natural experiments or randomized experiments with incomplete compliance to the assigned treatment. At the core, the LATE interpretation clarifies what can and cannot be learned from such experiments." The phenomenon of non-compliant subjects (patients) is also known in medical research. In the biostatistics literature, Baker and Lindeman (1994) independently developed the LATE method for a binary outcome with the paired availability design and the key monotonicity assumption. Baker, Kramer, Lindeman (2016) summarized the history of its development. 
Various papers called both Imbens and Angrist (1994) and Baker and Lindeman (1994) seminal. An early version of LATE involved one-sided noncompliance (and hence no monotonicity assumption). In 1983 Baker wrote a technical report describing LATE for one-sided noncompliance that was published in 2016 in a supplement. In 1984, Bloom published a paper on LATE with one-sided noncompliance. For a history of multiple discoveries involving LATE see Baker and Lindeman (2024). General definition. The typical terminology of the Rubin causal model is used to measure the LATE, with units indexed by formula_0 and a binary treatment indicator, formula_1 for unit formula_2. The term formula_3 is used to denote the potential outcome of unit formula_2 under treatment formula_1. In an ideal experiment, all subjects assigned to the treatment will comply with the treatment, while those that are assigned to control will remain untreated. In reality, however, the compliance rate is often imperfect, which prevents researchers from identifying the ATE. In such cases, estimating the LATE becomes the more feasible option. The LATE is the average treatment effect among a specific subset of the subjects, who in this case would be the compliers. Potential outcome framework. The LATE is defined within the potential outcomes framework of causal inference. The treatment effect for subject formula_2 is formula_4. It is impossible to simultaneously observe formula_5 and formula_6 for the same subject. At any given time, a subject can be observed only in its treated (formula_5) or untreated (formula_6) state. Through random assignment, the expected untreated potential outcome of the control group is the same as that of the treatment group, and the expected treated potential outcome of the treatment group is the same as that of the control group. 
The random assignment assumption thus allows one to take the difference between the average outcome in the treatment group and the average outcome in the control group as the overall average treatment effect, such that: formula_7 Non-compliance framework. Researchers frequently encounter non-compliance problems in their experiments, whereby subjects fail to comply with their experimental assignments. In an experiment with non-compliance, the subjects can be divided into four subgroups: compliers, always-takers, never-takers and defiers. The term formula_8 represents the treatment that subject formula_2 actually takes when their treatment assignment is formula_9. Compliers are subjects who will take the treatment if and only if they were assigned to the treatment group, i.e., the subpopulation with formula_10 and formula_11. Non-compliers are composed of the three remaining subgroups: always-takers, who take the treatment regardless of assignment (formula_12 for both values of formula_9); never-takers, who never take the treatment (formula_13 for both values of formula_9); and defiers, who take the opposite of their assigned treatment (formula_14 and formula_15). Non-compliance can take two forms: one-sided (compliers and never-takers only) and two-sided (all four subgroups may be present). In the case of one-sided non-compliance, a number of the subjects who were assigned to the treatment group remain untreated. Subjects are thus divided into compliers and never-takers, such that formula_16 for all formula_17, while formula_18 or formula_19. In the case of two-sided non-compliance, a number of the subjects assigned to the treatment group fail to receive the treatment, while a number of the subjects assigned to the control group receive the treatment. In this case, subjects are divided into the four subgroups, such that both formula_20 and formula_21 can be 0 or 1. Given non-compliance, certain assumptions are required to estimate the LATE. Under one-sided non-compliance, non-interference and excludability are assumed. Under two-sided non-compliance, non-interference, excludability, and monotonicity are assumed. Assumptions under one-sided non-compliance. 
The non-interference assumption, otherwise known as the Stable Unit Treatment Value Assumption (SUTVA), is composed of two parts. First, a subject's treatment status depends only on their own assignment: if formula_23, then formula_24 for any assignment vectors formula_25. Second, a subject's potential outcomes depend only on their own assignment and treatment: if formula_23 and formula_26, then formula_27. The excludability assumption requires that potential outcomes respond to treatment itself, formula_22, not treatment assignment, formula_9. Formally, formula_28. So under this assumption, only formula_29 matters. The plausibility of the excludability assumption must also be assessed on a case-by-case basis. Assumptions under two-sided non-compliance. Under two-sided non-compliance, monotonicity is additionally assumed: formula_30 for every subject formula_17, which rules out defiers (subjects with formula_32). Identification. The LATE is given by formula_33, where formula_34 and formula_35. The formula_36 measures the average effect of experimental assignment on outcomes without accounting for the proportion of the group that was actually treated (i.e., an average of those assigned to treatment minus the average of those assigned to control). In experiments with full compliance, formula_37. The formula_38 measures the proportion of subjects who are treated when they are assigned to the treatment group, minus the proportion who would have been treated even if they had been assigned to the control group, i.e., formula_38 equals the share of compliers. Proof. Under one-sided noncompliance, no subject assigned to the control group takes the treatment, therefore formula_39, so that formula_40. If all subjects were assigned to treatment, the expected potential outcomes would be a weighted average of the treated potential outcomes among compliers, and the untreated potential outcomes among never-takers, such that formula_41 If all subjects were assigned to control, however, the expected potential outcomes would be a weighted average of the untreated potential outcomes among compliers and never-takers, such that formula_42 Through substitution, the ITT is expressed as a weighted average of the ITT among the two subpopulations (compliers and never-takers), such that formula_43 Given the exclusion and monotonicity assumptions, the second half of this equation should be zero. 
As such, formula_44 Application: hypothetical schedule of the potential outcomes under two-sided noncompliance. The table below lays out the hypothetical schedule of potential outcomes under two-sided noncompliance. The ATE is the average of formula_45: formula_46 The LATE is the ATE among the compliers, so formula_47 The ITT is the average of formula_48, so formula_49 formula_50 is the share of compliers: formula_51 Thus formula_52 LATE in the instrumental variable framework. LATE can be thought of through an IV framework. Treatment assignment formula_9 is the instrument that drives the causal effect on outcome formula_53 through the variable of interest formula_22, such that formula_9 only influences formula_53 through the endogenous variable formula_22, and through no other path. This would produce the treatment effect for compliers. In addition to the potential outcomes framework mentioned above, the LATE can also be estimated through the Structural Equation Modeling (SEM) framework, originally developed for econometric applications. SEM is derived through the following equations: formula_54 formula_55 The first equation captures the first-stage effect of formula_9 on formula_22, adjusting for variance, where formula_56. In the second equation, the coefficient formula_57 captures the reduced-form effect of formula_9 on formula_53: formula_58 The covariate-adjusted IV estimator is the ratio formula_59 Similar to the nonzero compliance assumption, the coefficient formula_60 in the first-stage regression needs to be significant to make formula_61 a valid instrument. However, because of SEM's strict assumption of a constant effect on every individual, the potential outcomes framework is in more prevalent use today. Generalizing LATE. The primary goal of running an experiment is to obtain causal leverage, and it does so by randomly assigning subjects to experimental conditions, which sets it apart from observational studies. 
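The worked figures from the hypothetical schedule above can be reproduced with a short script checking the identity LATE = ITT/ITT_D. The individual treatment effects and complier indicators below are read directly off the sums quoted in the text; the variable names are illustrative.

```python
# Individual treatment effects Y_i(1) - Y_i(0) for the nine hypothetical
# subjects, and whether each is a complier (read off the sums in the text).
effects  = [3, 2, 4, 3, 6, 6, 4, 4, 3]
complier = [True, False, True, False, True, False, True, True, False]

ate = sum(effects) / len(effects)                        # 35/9, about 3.9
late = sum(e for e, c in zip(effects, complier) if c) / sum(complier)  # 4.2
# Only compliers change treatment status with assignment, so (given
# exclusion) each non-complier contributes zero to the intent-to-treat.
itt = sum(e for e, c in zip(effects, complier) if c) / len(effects)    # 21/9
itt_d = sum(complier) / len(complier)                    # share of compliers, 5/9
assert abs(itt / itt_d - late) < 1e-9                    # LATE = ITT / ITT_D
```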
In an experiment with perfect compliance, the average treatment effect can be obtained. However, many experiments are likely to experience either one-sided or two-sided non-compliance. In the presence of non-compliance, the ATE can no longer be recovered. Instead, what is recovered is the average treatment effect for a certain subpopulation known as the compliers, which is the LATE. When there may exist heterogeneous treatment effects across groups, the LATE is unlikely to be equivalent to the ATE. In one example, Angrist (1989) attempts to estimate the causal effect of serving in the military on earnings, using the draft lottery as an instrument. The compliers are those who were induced by the draft lottery to serve in the military. If the research interest is on how to compensate those involuntarily taxed by the draft, LATE would be useful, since the research targets compliers. However, if researchers are concerned about a more universal draft for future interpretation, then the ATE would be more important (Imbens 2009). Generalizing from the LATE to the ATE thus becomes an important issue when the research interest lies with the causal treatment effect on a broader population, not just the compliers. In these cases, the LATE may not be the parameter of interest, and researchers have questioned its utility. Other researchers, however, have countered this criticism by proposing new methods to generalize from the LATE to the ATE. Most of these involve some form of reweighting from the LATE, under certain key assumptions that allow for extrapolation from the compliers. Reweighting. The intuition behind reweighting comes from the notion that given a certain strata, the distribution among the compliers may not reflect the distribution of the broader population. Thus, to retrieve the ATE, it is necessary to reweight based on the information gleaned from compliers. There are a number of ways that reweighting can be used to obtain the ATE from the LATE. 
Reweighting by ignorability assumption. By leveraging instrumental variables, Aronow and Carnegie (2013) propose a new reweighting method called Inverse Compliance Score weighting (ICSW), with an intuition similar to that behind inverse probability weighting (IPW). This method assumes compliance propensity is a pre-treatment covariate and that compliers have the same average treatment effect within their strata. ICSW first estimates the conditional probability of being a complier (the compliance score) for each subject by maximum likelihood, given covariates, then reweights each unit by the inverse of its compliance score, so that the compliers have a covariate distribution matching that of the full population. ICSW is applicable in both one-sided and two-sided noncompliance situations. Although a subject's compliance score cannot be directly observed, the probability of compliance can be estimated by observing the compliance condition of the same stratum, in other words those that share the same covariate profile. The compliance score is treated as a latent pretreatment covariate, which is independent of treatment assignment formula_62. For each unit formula_17, the compliance score is denoted as formula_63, where formula_64 is the covariate vector for unit formula_65. In the one-sided noncompliance case, the population consists of only compliers and never-takers. All units assigned to the treatment group that take the treatment will be compliers. Thus, a simple bivariate regression of "D" on "X" can predict the probability of compliance. In the two-sided noncompliance case, the compliance score is estimated using maximum likelihood, assuming a probit model for compliance and a Bernoulli distribution for "D", where formula_66,
and formula_67 is a vector of coefficients to be estimated, and formula_68 is the cumulative distribution function of the probit model. By the LATE theorem, the average treatment effect for compliers can be estimated with the equation: formula_69 Defining formula_70, the ICSW estimator is the weighted analogue: formula_71 This estimator is equivalent to using a 2SLS estimator with these weights. An essential assumption of ICSW is treatment homogeneity within strata, which means the treatment effect should on average be the same for everyone in the stratum, not just for the compliers. If this assumption holds, the LATE is equal to the ATE within each covariate profile. Denote this as: formula_72 Notice this is a less restrictive assumption than the traditional ignorability assumption, as it only concerns the covariates relevant to the compliance score, allowing heterogeneity along other covariates. The second assumption is consistency of formula_73 for formula_74, and the third assumption is nonzero compliance for each stratum, an extension of the IV assumption of nonzero compliance over the population. This is a reasonable assumption: if the compliance score were zero for some stratum, its inverse would be infinite. The ICSW estimator is more sensitive than the IV estimator, as it incorporates more covariate information, and so might have higher variance. This is a general problem for IPW-style estimation, and it is exaggerated when certain strata have small populations and the compliance rate is low. One way to mitigate this is to winsorize the estimates; Aronow and Carnegie set the threshold at 0.275, so that any compliance score lower than 0.275 is replaced by this value. The bootstrap is also recommended throughout to reduce uncertainty (Abadie 2002). Reweighting under monotonicity assumption. 
In another approach, one might assume that an underlying utility model links the never-takers, compliers, and always-takers. The ATE can be estimated by reweighting based on an extrapolation of the complier treated and untreated potential outcomes to the never-takers and always-takers. The following method is one that has been proposed by Amanda Kowalski. First, all subjects are assumed to have a utility function, determined by their individual gains from treatment and costs from treatment. Based on an underlying assumption of monotonicity, the never-takers, compliers, and always-takers can be arranged on the same continuum based on their utility function. This assumes that the always-takers have such a high utility from taking the treatment that they will take it even without encouragement. On the other hand, the never-takers have such a low utility function that they will not take the treatment despite encouragement. Thus, the never-takers can be aligned with the compliers with the lowest utilities, and the always-takers with the compliers with the highest utility functions. In an experimental population, several aspects can be observed: the treated potential outcomes of the always-takers (those who are treated in the control group); the untreated potential outcomes of the never-takers (those who remain untreated in the treatment group); the treated potential outcomes of the always-takers and compliers (those who are treated in the treatment group); and the untreated potential outcomes of the compliers and never-takers (those who are untreated in the control group). However, the treated and untreated potential outcomes of the compliers should be extracted from the latter two observations. To do so, the LATE must be extracted from the treated population. Assuming no defiers, it can be assumed that the treated group in the treatment condition consists of both always-takers and compliers. 
From the observations of the treated outcomes in the control group, the average treated outcome for always-takers can be extracted, as well as their share of the overall population. As such, the weighted average can be undone and the treated potential outcome for the compliers can be obtained; then, the LATE is subtracted to get the untreated potential outcomes for the compliers. This move will then allow extrapolation from the compliers to obtain the ATE. Returning to the weak monotonicity assumption, which assumes that the utility function always runs in one direction, the utility of a marginal complier would be similar to the utility of a never-taker on one end, and that of an always-taker on the other end. The always-takers will have the same untreated potential outcomes as the highest-utility compliers, i.e., the compliers' maximum untreated potential outcome. Again, this is based on the underlying utility model linking the subgroups, which assumes that the utility function of an always-taker would not be lower than the utility function of a complier. The same logic would apply to the never-takers, who are assumed to have a utility function that will always be lower than that of a complier. Given this, extrapolation is possible by projecting the untreated potential outcomes of the compliers to the always-takers, and the treated potential outcomes of the compliers to the never-takers. In other words, if it is assumed that the untreated compliers are informative about always-takers, and the treated compliers are informative about never-takers, then the treated always-takers can be compared to their “as-if” untreated counterparts, and the untreated never-takers to their “as-if” treated counterparts. This will then allow the calculation of the overall treatment effect. Extrapolation under the weak monotonicity assumption will provide a bound, rather than a point-estimate. Limitations. 
The estimation of the extrapolation to ATE from the LATE requires certain key assumptions, which may vary from one approach to another. While some may assume homogeneity within covariates, and thus extrapolate based on strata, others may instead assume monotonicity. All will assume the absence of defiers within the experimental population. Some of these assumptions may be weaker than others; for example, the monotonicity assumption is weaker than the ignorability assumption. However, there are other trade-offs to consider, such as whether the estimates produced are point estimates or bounds. Ultimately, the literature on generalizing the LATE relies entirely on key assumptions. It is not a design-based approach per se, and the field of experiments is not usually in the habit of comparing groups unless they are randomly assigned. Even when assumptions are difficult to verify, researchers can probe them through experimental design. For example, in a typical field experiment where the instrument is “encouragement to treatment”, treatment heterogeneity could be detected by varying the intensity of encouragement. If the compliance rate remains stable under different intensities, it could be a signal of homogeneity across groups. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": " i = 1, \\ldots, N " }, { "math_id": 1, "text": " z_i " }, { "math_id": 2, "text": " i " }, { "math_id": 3, "text": " Y_i(z_i) " }, { "math_id": 4, "text": " Y_i(1)-Y_i(0) " }, { "math_id": 5, "text": " Y_i(1) " }, { "math_id": 6, "text": " Y_i(0) " }, { "math_id": 7, "text": "ATE= E[Y_i(1)-Y_i(0)]=E[Y_i(1)]-E[Y_i(0)]=E[Y_i(1)|Z_i=1]-E[Y_i(0)|Z_i=0] " }, { "math_id": 8, "text": "d_i(z)" }, { "math_id": 9, "text": "z_i" }, { "math_id": 10, "text": "d_i(1)=1" }, { "math_id": 11, "text": "d_i(0)=0" }, { "math_id": 12, "text": "d_i(z)=1" }, { "math_id": 13, "text": "d_i(z)=0" }, { "math_id": 14, "text": "d_i(1)=0" }, { "math_id": 15, "text": "d_i(0)=1\n" }, { "math_id": 16, "text": "d_i(0) = 0" }, { "math_id": 17, "text": "i" }, { "math_id": 18, "text": "d_i(1)= 0 " }, { "math_id": 19, "text": " 1 " }, { "math_id": 20, "text": "d_i(0)" }, { "math_id": 21, "text": "d_i(1)" }, { "math_id": 22, "text": "d_i" }, { "math_id": 23, "text": "z_i=z_i'" }, { "math_id": 24, "text": "D_i(\\mathbf{z})=D_i(\\mathbf{z}')" }, { "math_id": 25, "text": "\\mathbf{z}" }, { "math_id": 26, "text": "d_i=d_i'" }, { "math_id": 27, "text": "Y_i(z,d)=Y_i(z',d)" }, { "math_id": 28, "text": " Y_i(z,d)=Y_i(d) " }, { "math_id": 29, "text": "d" }, { "math_id": 30, "text": "d_i(1) \\geq d_i(0)" }, { "math_id": 31, "text": " d_i " }, { "math_id": 32, "text": "d_i(1) < d_i(0)" }, { "math_id": 33, "text": " LATE = \\frac{ITT}{ITT_D} " }, { "math_id": 34, "text": " ITT=E[Y_i(z=1)]-E[Y_i(z=0)]" }, { "math_id": 35, "text": " ITT_D = E[d_i(z=1)]-E[d_i(z=0)]" }, { "math_id": 36, "text": "ITT " }, { "math_id": 37, "text": "ITT = ATE " }, { "math_id": 38, "text": "ITT_D " }, { "math_id": 39, "text": " E[d_i(z=0)]=0" }, { "math_id": 40, "text": " ITT_D = E[d_i(z=1)]=P[d_i(1)=1]" }, { "math_id": 41, "text": "\\begin{align} {\\displaystyle E[Y_{i}(z=1)]=E[Y_{i}(d(1),z=1)]} = E[Y_i(z=1,d=1)|d_i(1)=1]*P[d_i(1)=1] & \\\\ +E[Y_i(z=1,d=0)|d_i(1)=0]* (1-P[d_i(1)=1])\n \\end{align}\n\n" }, { 
"math_id": 42, "text": "\\begin{align} {\\displaystyle E[Y_{i}(z=0)]=E[Y_{i}(d=0,z=0)]} = E[Y_i(z=0,d=0)|d_i(1)=1]*P[d_i(1)=1] & \\\\ +E[Y_i(z=0,d=0)|d_i(1)=0]* (1-P[d_i(1)=1])\n \\end{align}\n\n" }, { "math_id": 43, "text": " \\begin{alignat}{2} \nITT= E[Y_i(z=1)]-E[Y_i(z=0)] = E[Y_i(z=1,d=1)-Y_i(z=0,d=0)|d_i(1)=1]*P[d_i(1)=1]+&\\\\ \nE[Y_i(z=1,d=0)-Y_i(z=0,d=0)|d_i(1)=0]*P[d_i(1)=0]\n \\end{alignat}" }, { "math_id": 44, "text": " \\begin{align} \\frac{ITT}{ITT_D}= & \\frac {E[Y_i(z=1,d=1)-Y_i(z=0,d=0)|d_i(1)=1]*P[d_i(1)=1]}{P[d_i(1)=1]} \n\\\\ = & E[Y_i(d=1)-Y_i(d=0)|d_i(1)=1]\n\\\\ = & LATE\n\\end{align}\n" }, { "math_id": 45, "text": "Y_i(d=1)- Y_i(d=0)" }, { "math_id": 46, "text": "ATE= \\frac{3+2+4+3+6+6+4+4+3}{9}=\\frac{35}{9}=3.9 " }, { "math_id": 47, "text": "LATE = \\frac{3+4+6+4+4}{5}=4.2" }, { "math_id": 48, "text": "Y_i(z=1)-Y_i(z=0)" }, { "math_id": 49, "text": "ITT = \\frac{3+0+4+0+6+0+4+4+0}{9}=\\frac{21}{9}=2.3" }, { "math_id": 50, "text": " ITT_D " }, { "math_id": 51, "text": " ITT_D = \\frac{5}{9}" }, { "math_id": 52, "text": " \\frac{ITT}{ITT_D}= \\frac{21/9}{5/9}=\\frac{21}{5}=4.2=LATE " }, { "math_id": 53, "text": "Y_i" }, { "math_id": 54, "text": "D_i = \\alpha_0 + \\alpha_1 Z_i + \\xi_{1i}" }, { "math_id": 55, "text": "Y_i = \\beta_0 + \\beta_1 Z_i + \\xi_{2i}" }, { "math_id": 56, "text": "\\alpha_1=Cov(D,Z)/var(Z)" }, { "math_id": 57, "text": "\\beta_1" }, { "math_id": 58, "text": "\\beta_1=Cov(Y,Z)/var(Z)" }, { "math_id": 59, "text": "\\tau_{LATE}=\\frac{\\beta_1}{\\alpha_1}=\\frac{Cov(Y,Z)/Var(Z)}{Cov(D,Z)/Var(Z) } = \\frac{Cov(Y,Z)}{Cov(D,Z)}" }, { "math_id": 60, "text": "\\alpha_1 " }, { "math_id": 61, "text": "z " }, { "math_id": 62, "text": "Z" }, { "math_id": 63, "text": "P_{Ci}=Pr(D_1>D_0|X=x_i)" }, { "math_id": 64, "text": "x_i" }, { "math_id": 65, "text": "i " }, { "math_id": 66, "text": "\\hat{\\Pr{c_i}}=\\hat{\\Pr}(D_1>D_0|X=x_i)=F(\\hat{\\theta}_{A,C,x_i})(1-F(\\hat{\\theta}_{A|A,C,x_i}))^3" }, { "math_id": 67, "text": 
"\\theta" }, { "math_id": 68, "text": "F(.)" }, { "math_id": 69, "text": "\\tau_{LATE}=\\frac{\\sum_{i=1}^n {Z_i}{Y_i}/\\sum_{i=1}^n {Z_i}-\\sum_{i=1}^n {(1-Z_i)}{Y_i}/\\sum_{i=1}^n {(1-Z_i)}}{\n \\sum_{i=1}^n {Z_i}{D_i}/\\sum_{i=1}^n {Z_i}-\\sum_{i=1}^n {(1-Z_i)}{D_i}/\\sum_{i=1}^n {(1-Z_i)}}\n" }, { "math_id": 70, "text": "\\hat{w_{Ci}}=1/\\hat{Pr_{Ci}}" }, { "math_id": 71, "text": "\\tau_{ATE}=\\frac{\\sum_{i=1}^n \\hat{W_i}{Z_i}{Y_i}/\\sum_{i=1}^n \\hat{W_i}{Z_i}-\\sum_{i=1}^n \\hat{W_i}{(1-Z_i)}{Y_i}/\\sum_{i=1}^n {\\hat{W_i}(1-Z_i)}}{\n \\sum_{i=1}^n \\hat{W_i}{Z_i}{D_i}/\\sum_{i=1}^n \\hat{W_i}{Z_i}-\\sum_{i=1}^n \\hat{W_i}{(1-Z_i)}{D_i}/\\sum_{i=1}^n \\hat{W_i}{(1-Z_i)}}\n" }, { "math_id": 72, "text": "\\text{for all }x \\in Supp(X), E[Y_1-Y_0|D_1>D_0]" }, { "math_id": 73, "text": "\\hat{Pr_{Ci}}" }, { "math_id": 74, "text": "Pr_{Ci}" } ]
https://en.wikipedia.org/wiki?curid=59434042
59435402
Joseph Knar
Austrian mathematician (1800–1864) Joseph Knar (January 1, 1800 – June 1, 1864) was an Austrian mathematician working at the University of Graz. He is best known for discovering Knar's formula, an infinite product formula involving the gamma function. Life. From a poor family, Knar graduated from the University of Graz at the age of 19 after studying mathematics and law. In 1821, he became a full professor, a position which he held until his death from a stroke in 1864. He published several books including a two-volume textbook "Lehrbuch der Elementarmathematik" on elementary mathematics. Knar's Formula. Knar's most famous mathematical contribution was his discovery of the infinite product formula formula_0 for formula_1. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
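Knar's formula is easy to check numerically, since each factor of the product tends to 1 geometrically fast and a few dozen terms suffice. The sketch below (function name and term count are arbitrary) uses Python's standard-library gamma function.

```python
import math

def knar_rhs(x, terms=60):
    """Partial product of the right-hand side of Knar's formula:
    2^(2x) * prod_{n=1}^{terms} Gamma(1/2 + x / 2^n) / sqrt(pi)."""
    prod = 1.0
    for n in range(1, terms + 1):
        prod *= math.gamma(0.5 + x / 2.0 ** n) / math.sqrt(math.pi)
    return 2.0 ** (2.0 * x) * prod

# The partial products approach Gamma(1 + x) for x > 0:
for x in (0.5, 1.0, 2.5):
    assert abs(knar_rhs(x) - math.gamma(1.0 + x)) < 1e-9
```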
[ { "math_id": 0, "text": "\\Gamma(1+x) = 2^{2x}\\prod_{n=1}^{\\infty}\\frac{\\Gamma(\\frac{1}{2}+2^{-n}x)}{\\sqrt{\\pi}}" }, { "math_id": 1, "text": "x > 0" } ]
https://en.wikipedia.org/wiki?curid=59435402
5943744
Divisor summatory function
Summatory function of the divisor-counting function In number theory, the divisor summatory function is a function that is a sum over the divisor function. It frequently occurs in the study of the asymptotic behaviour of the Riemann zeta function. The various studies of the behaviour of the divisor function are sometimes called divisor problems. Definition. The divisor summatory function is defined as formula_0 where formula_1 is the divisor function. The divisor function counts the number of ways that the integer "n" can be written as a product of two integers. More generally, one defines formula_2 where "d""k"("n") counts the number of ways that "n" can be written as a product of "k" numbers. This quantity can be visualized as the count of the number of lattice points fenced off by a hyperbolic surface in "k" dimensions. Thus, for "k"=2, "D"("x") = "D"2("x") counts the number of points on a square lattice bounded on the left by the vertical axis, on the bottom by the horizontal axis, and to the upper-right by the hyperbola "jk" = "x". Roughly, this shape may be envisioned as a hyperbolic simplex. This allows us to provide an alternative expression for "D"("x"), and a simple way to compute it in formula_3 time: formula_4, where formula_5 If the hyperbola in this context is replaced by a circle, then determining the value of the resulting function is known as the Gauss circle problem. Sequence of D(n) (sequence in the OEIS): 0, 1, 3, 5, 8, 10, 14, 16, 20, 23, 27, 29, 35, 37, 41, 45, 50, 52, 58, 60, 66, 70, 74, 76, 84, 87, 91, 95, 101, 103, 111, ... Dirichlet's divisor problem. Finding a closed form for this summed expression seems to be beyond the techniques available, but it is possible to give approximations. The leading behavior of the series is given by formula_6 where formula_7 is the Euler–Mascheroni constant, and the error term is formula_8 Here, formula_9 denotes Big-O notation. 
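The identity above gives a direct O(√x) algorithm for D(x). The sketch below (function name illustrative) also checks the listed sequence values.

```python
from math import isqrt

def divisor_summatory(x):
    """D(x) = sum_{n <= x} d(n), computed in O(sqrt(x)) time via the
    hyperbola method: D(x) = 2*sum_{k <= u} floor(x/k) - u^2, u = floor(sqrt(x))."""
    u = isqrt(x)
    return 2 * sum(x // k for k in range(1, u + 1)) - u * u

# Matches the listed sequence: D(1) = 1, D(2) = 3, ..., D(10) = 27.
assert [divisor_summatory(n) for n in range(1, 11)] == [1, 3, 5, 8, 10, 14, 16, 20, 23, 27]
```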
This estimate can be proven using the Dirichlet hyperbola method, and was first established by Dirichlet in 1849. The Dirichlet divisor problem, precisely stated, is to improve this error bound by finding the smallest value of formula_10 for which formula_11 holds true for all formula_12. As of today, this problem remains unsolved. Progress has been slow. Voronoi showed in 1903 that formula_13. Hardy showed in 1916 that formula_14; in fact, there is a constant formula_15 such that formula_16 and formula_17 each hold for infinitely many values of "x". A long sequence of successively sharper upper bounds followed: formula_18, formula_19, formula_20, formula_21, formula_22, formula_23, formula_24, and, currently the best, formula_25 (Huxley, 2003). Many of the same methods work for this problem and for Gauss's circle problem, another lattice-point counting problem. Section F1 of "Unsolved Problems in Number Theory" surveys what is known and not known about these problems. So, formula_26 lies somewhere between 1/4 and 131/416 (approx. 0.3149); it is widely conjectured to be 1/4. Theoretical evidence lends credence to this conjecture, since formula_27 has a (non-Gaussian) limiting distribution. The value of 1/4 would also follow from a conjecture on exponent pairs. Piltz divisor problem. In the generalized case, one has formula_28 where formula_29 is a polynomial of degree formula_30. Using simple estimates, it is readily shown that formula_31 for integer formula_32. As in the formula_33 case, the infimum of the bound is not known for any value of formula_34. Computing these infima is known as the Piltz divisor problem, after the name of the German mathematician Adolf Piltz. Defining the order formula_35 as the smallest value for which formula_36 holds, for any formula_37, one has the following results (note that formula_38 is the formula_10 of the previous section): formula_39 formula_40 and formula_41 It is conjectured that formula_42. Mellin transform. Both portions may be expressed as Mellin transforms: formula_43 for formula_44. Here, formula_45 is the Riemann zeta function. Similarly, one has formula_46 with formula_47. The leading term of formula_48 is obtained by shifting the contour past the double pole at formula_49: the leading term is just the residue, by Cauchy's integral formula. In general, one has formula_50 and likewise for formula_51, for formula_32. 
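The generalized counts D_k(x) appearing in the Piltz divisor problem can be computed by a direct recursion on the definition. The naive sketch below is for illustration, not efficiency; the function name is invented.

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def D(k, x):
    """D_k(x) = sum_{n <= x} d_k(n), via D_k(x) = sum_{m <= x} D_{k-1}(floor(x/m)).
    For k = 1, d_1(n) = 1 for every n, so D_1(x) = x for integer x >= 0."""
    if x <= 0:
        return 0
    if k == 1:
        return x
    return sum(D(k - 1, x // m) for m in range(1, x + 1))

# D(2, .) is the ordinary divisor summatory function, e.g. D(2, 6) = 14;
# d_3(2) = 3 counts the ordered factorizations (2,1,1), (1,2,1), (1,1,2),
# so D(3, 2) = d_3(1) + d_3(2) = 4.
assert D(2, 6) == 14 and D(3, 2) == 4
```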
Notes. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "D(x)=\\sum_{n\\le x} d(n) = \\sum_{j,k \\atop jk\\le x} 1" }, { "math_id": 1, "text": "d(n)=\\sigma_0(n) = \\sum_{j,k \\atop jk=n} 1" }, { "math_id": 2, "text": "D_k(x)=\\sum_{n\\le x} d_k(n)= \\sum_{m\\le x}\\sum_{mn\\le x} d_{k-1}(n)" }, { "math_id": 3, "text": "O(\\sqrt{x})" }, { "math_id": 4, "text": "D(x)=\\sum_{k=1}^x \\left\\lfloor\\frac{x}{k}\\right\\rfloor = 2 \\sum_{k=1}^u \\left\\lfloor\\frac{x}{k}\\right\\rfloor - u^2" }, { "math_id": 5, "text": "u = \\left\\lfloor \\sqrt{x}\\right\\rfloor" }, { "math_id": 6, "text": "D(x) = x\\log x + x(2\\gamma-1) + \\Delta(x)\\ " }, { "math_id": 7, "text": "\\gamma" }, { "math_id": 8, "text": "\\Delta(x) = O\\left(\\sqrt{x}\\right)." }, { "math_id": 9, "text": "O" }, { "math_id": 10, "text": "\\theta" }, { "math_id": 11, "text": "\\Delta(x) = O\\left(x^{\\theta+\\epsilon}\\right)" }, { "math_id": 12, "text": "\\epsilon > 0" }, { "math_id": 13, "text": "O(x^{1/3}\\log x)." }, { "math_id": 14, "text": "\\inf \\theta \\ge 1/4" }, { "math_id": 15, "text": "K" }, { "math_id": 16, "text": "\\Delta(x) > Kx^{1/4}" }, { "math_id": 17, "text": "\\Delta(x) < -Kx^{1/4}" }, { "math_id": 18, "text": "\\inf \\theta \\le 33/100 = 0.33" }, { "math_id": 19, "text": "\\inf \\theta \\le 27/82 = 0.3\\overline{29268}" }, { "math_id": 20, "text": "\\inf \\theta \\le 15/46 = 0.32608695652..." }, { "math_id": 21, "text": "\\inf \\theta \\le 12/37 = 0.\\overline{324}" }, { "math_id": 22, "text": "\\inf \\theta \\le 346/1067 = 0.32427366448..." }, { "math_id": 23, "text": "\\inf \\theta \\le 35/108 = 0.32\\overline{407}" }, { "math_id": 24, "text": "\\inf \\theta \\leq 7/22 = 0.3\\overline{18}" }, { "math_id": 25, "text": "\\inf \\theta \\leq 131/416 = 0.31490384615..." 
}, { "math_id": 26, "text": "\\inf \\theta" }, { "math_id": 27, "text": "\\Delta(x)/x^{1/4}" }, { "math_id": 28, "text": "D_k(x) = xP_k(\\log x)+\\Delta_k(x) \\," }, { "math_id": 29, "text": "P_k" }, { "math_id": 30, "text": "k-1" }, { "math_id": 31, "text": "\\Delta_k(x)=O\\left(x^{1-1/k} \\log^{k-2} x\\right)" }, { "math_id": 32, "text": "k\\ge 2" }, { "math_id": 33, "text": "k=2" }, { "math_id": 34, "text": " k" }, { "math_id": 35, "text": "\\alpha_k" }, { "math_id": 36, "text": "\\Delta_k(x)=O\\left(x^{\\alpha_k+\\varepsilon}\\right)" }, { "math_id": 37, "text": "\\varepsilon>0" }, { "math_id": 38, "text": "\\alpha_2" }, { "math_id": 39, "text": "\\alpha_2\\le\\frac{131}{416}\\ ," }, { "math_id": 40, "text": "\\alpha_3 \\le\\frac{43}{96}\\ ," }, { "math_id": 41, "text": "\n\\begin{align}\n\\alpha_k & \\le \\frac{3k-4}{4k}\\quad(4\\le k\\le 8) \\\\[6pt]\n\\alpha_9 & \\le\\frac{35}{54}\\ ,\\quad \\alpha_{10}\\le\\frac{41}{60}\\ ,\\quad \\alpha_{11}\\le\\frac{7}{10} \\\\[6pt]\n\\alpha_k & \\le \\frac{k-2}{k+2}\\quad(12\\le k\\le 25) \\\\[6pt]\n\\alpha_k & \\le \\frac{k-1}{k+4}\\quad(26\\le k\\le 50) \\\\[6pt]\n\\alpha_k & \\le \\frac{31k-98}{32k}\\quad(51\\le k\\le 57) \\\\[6pt]\n\\alpha_k & \\le \\frac{7k-34}{7k}\\quad(k\\ge 58)\n\\end{align}\n" }, { "math_id": 42, "text": "\\alpha_k =\\frac{k-1}{2k}\\ ." }, { "math_id": 43, "text": "D(x)=\\frac{1}{2\\pi i} \\int_{c-i\\infty}^{c+i\\infty} \n\\zeta^2(w) \\frac {x^w}{w}\\, dw" }, { "math_id": 44, "text": "c>1" }, { "math_id": 45, "text": "\\zeta(s)" }, { "math_id": 46, "text": "\\Delta(x)=\\frac{1}{2\\pi i} \\int_{c^\\prime-i\\infty}^{c^\\prime+i\\infty} \n\\zeta^2(w) \\frac {x^w}{w} \\,dw" }, { "math_id": 47, "text": "0<c^\\prime<1" }, { "math_id": 48, "text": "D(x)" }, { "math_id": 49, "text": "w=1" }, { "math_id": 50, "text": "D_k(x)=\\frac{1}{2\\pi i} \\int_{c-i\\infty}^{c+i\\infty} \n\\zeta^k(w) \\frac {x^w}{w} \\,dw" }, { "math_id": 51, "text": "\\Delta_k(x)" } ]
https://en.wikipedia.org/wiki?curid=5943744
59438
Thermal conductivity and resistivity
Capacity of a material to conduct heat The thermal conductivity of a material is a measure of its ability to conduct heat. It is commonly denoted by formula_0, formula_1, or formula_2 and is measured in W·m−1·K−1. Heat transfer occurs at a lower rate in materials of low thermal conductivity than in materials of high thermal conductivity. For instance, metals typically have high thermal conductivity and are very efficient at conducting heat, while the opposite is true for insulating materials such as mineral wool or Styrofoam. Correspondingly, materials of high thermal conductivity are widely used in heat sink applications, and materials of low thermal conductivity are used as thermal insulation. The reciprocal of thermal conductivity is called thermal resistivity. The defining equation for thermal conductivity is formula_3, where formula_4 is the heat flux, formula_5 is the thermal conductivity, and formula_6 is the temperature gradient. This is known as Fourier's Law for heat conduction. Although commonly expressed as a scalar, the most general form of thermal conductivity is a second-rank tensor. However, the tensorial description only becomes necessary in materials which are anisotropic. Definition. Simple definition. Consider a solid material placed between two environments of different temperatures. Let formula_8 be the temperature at formula_9 and formula_10 be the temperature at formula_11, and suppose formula_12. An example of this scenario is a building on a cold winter day; the solid material in this case is the building wall, separating the cold outdoor environment from the warm indoor environment. According to the second law of thermodynamics, heat will flow from the hot environment to the cold one as the temperature difference is equalized by diffusion. This is quantified in terms of a heat flux formula_7, which gives the rate, per unit area, at which heat flows in a given direction (in this case minus x-direction). 
In many materials, formula_7 is observed to be directly proportional to the temperature difference and inversely proportional to the separation distance formula_13: formula_14 The constant of proportionality formula_0 is the thermal conductivity; it is a physical property of the material. In the present scenario, since formula_12, heat flows in the minus x-direction and formula_7 is negative, which in turn means that formula_15. In general, formula_0 is always defined to be positive. The same definition of formula_0 can also be extended to gases and liquids, provided other modes of energy transport, such as convection and radiation, are eliminated or accounted for. The preceding derivation assumes that formula_0 does not change significantly as the temperature is varied from formula_8 to formula_10. Cases in which the temperature variation of formula_0 is non-negligible must be addressed using the more general definition of formula_0 discussed below. General definition. Thermal conduction is defined as the transport of energy due to random molecular motion across a temperature gradient. It is distinguished from energy transport by convection and molecular work in that it does not involve macroscopic flows or work-performing internal stresses. Energy flow due to thermal conduction is classified as heat and is quantified by the vector formula_16, which gives the heat flux at position formula_17 and time formula_18. According to the second law of thermodynamics, heat flows from high to low temperature. Hence, it is reasonable to postulate that formula_16 is proportional to the gradient of the temperature field formula_19, i.e. formula_20 where the constant of proportionality, formula_21, is the thermal conductivity. This is called Fourier's law of heat conduction. Despite its name, it is not a law but a definition of thermal conductivity in terms of the independent physical quantities formula_16 and formula_19. 
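In one dimension the law above reduces to a flux proportional to the temperature drop over the thickness, which can be evaluated directly. A minimal numerical sketch (the wall geometry and conductivity value are assumed for illustration, not taken from the text):

```python
def heat_flux(k, t_hot, t_cold, thickness):
    """One-dimensional Fourier's law, magnitude form: q = k * dT / L.
    k in W/(m*K), temperatures in K or degC, thickness in m.
    Returns the flux in W/m^2 flowing from the hot face to the cold face."""
    return k * (t_hot - t_cold) / thickness

# Assumed example: a 0.30 m masonry wall (k ~ 0.6 W/(m*K)),
# 20 degC indoors, -5 degC outdoors.
q = heat_flux(0.6, 20.0, -5.0, 0.30)
print(q)  # ~ 50 W/m^2
```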
As such, its usefulness depends on the ability to determine formula_0 for a given material under given conditions. The constant formula_0 itself usually depends on formula_19 and thereby implicitly on space and time. An explicit space and time dependence could also occur if the material is inhomogeneous or changing with time. In some solids, thermal conduction is anisotropic, i.e. the heat flux is not always parallel to the temperature gradient. To account for such behavior, a tensorial form of Fourier's law must be used: formula_22 where formula_23 is a symmetric second-rank tensor called the thermal conductivity tensor. An implicit assumption in the above description is the presence of local thermodynamic equilibrium, which allows one to define a temperature field formula_19. This assumption could be violated in systems that are unable to attain local equilibrium, as might happen in the presence of strong nonequilibrium driving or long-ranged interactions. Other quantities. In engineering practice, it is common to work in terms of quantities which are derived from thermal conductivity and implicitly take into account design-specific features such as component dimensions. For instance, thermal conductance is defined as the quantity of heat that passes in unit time through a plate of "particular area and thickness" when its opposite faces differ in temperature by one kelvin. For a plate of thermal conductivity formula_0, area formula_24 and thickness formula_13, the conductance is formula_25, measured in W⋅K−1. The relationship between thermal conductivity and conductance is analogous to the relationship between electrical conductivity and electrical conductance. Thermal resistance is the inverse of thermal conductance. It is a convenient measure to use in multicomponent design since thermal resistances are additive when occurring in series. 
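Because series resistances add, the heat flow through a layered assembly is simple to sketch: each layer contributes L/(kA), and the total temperature drop divided by the summed resistance gives the heat flow. The layer materials and dimensions below are assumptions for illustration:

```python
def thermal_resistance(k, area, thickness):
    """R = L / (k*A) in K/W, the reciprocal of the conductance k*A/L."""
    return thickness / (k * area)

# Assumed three-layer wall of area 10 m^2:
# (k in W/(m*K), thickness in m) for brick, insulation, plasterboard.
layers = [(0.6, 0.10), (0.04, 0.05), (0.2, 0.012)]
area = 10.0
r_total = sum(thermal_resistance(k, area, L) for k, L in layers)
heat_flow = 25.0 / r_total  # 25 K across the whole assembly, in W
print(r_total, heat_flow)
```

Note that the thin insulation layer dominates the total resistance, which is why resistances (not conductances) are the convenient bookkeeping quantity for series composites.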
There is also a measure known as the heat transfer coefficient: the quantity of heat that passes per unit time through a unit area of a plate of particular thickness when its opposite faces differ in temperature by one kelvin. In ASTM C168-15, this area-independent quantity is referred to as the "thermal conductance". The reciprocal of the heat transfer coefficient is thermal insulance. In summary, for a plate of thermal conductivity formula_0, area formula_24 and thickness formula_13, the conductance is formula_25, the thermal resistance is its reciprocal, the heat transfer coefficient is the conductance per unit area, and the thermal insulance is the reciprocal of the heat transfer coefficient. The heat transfer coefficient is also known as thermal admittance in the sense that the material may be seen as admitting heat to flow. An additional term, thermal transmittance, quantifies the thermal conductance of a structure along with heat transfer due to convection and radiation. It is measured in the same units as thermal conductance and is sometimes known as the "composite thermal conductance". The term "U-value" is also used. Finally, thermal diffusivity formula_29 combines thermal conductivity with density and specific heat: formula_30. As such, it quantifies the "thermal inertia" of a material, i.e. the relative difficulty in heating a material to a given temperature using heat sources applied at the boundary. Units. In the International System of Units (SI), thermal conductivity is measured in watts per meter-kelvin (W/(m⋅K)). Some papers report in watts per centimeter-kelvin [W/(cm⋅K)]. However, physicists use other convenient units as well, e.g., in cgs units, where esu/(cm·s·K) is used. The Lorenz number, defined as L = κ/(σT), is a quantity independent of the carrier density and the scattering mechanism. Its value for a gas of non-interacting electrons (typical carriers in good metallic conductors) is 2.72×10−13 esu/K2, or equivalently, 2.44×10−8 W·Ω/K2. In imperial units, thermal conductivity is measured in BTU/(h⋅ft⋅°F). The dimension of thermal conductivity is M1L1T−3Θ−1, expressed in terms of the dimensions mass (M), length (L), time (T), and temperature (Θ). 
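A quick conversion between the SI and imperial units mentioned above follows from the standard definitions of the BTU, the foot, and the Fahrenheit degree (the conversion factors below are those definitions, not values from the text):

```python
# Standard conversion factors: 1 BTU = 1055.056 J, 1 ft = 0.3048 m,
# one Fahrenheit degree = 5/9 of a kelvin, 1 h = 3600 s.
BTU, FT, HOUR, DEG_F = 1055.056, 0.3048, 3600.0, 5.0 / 9.0

def si_to_imperial(k_si):
    """Convert a thermal conductivity from W/(m*K) to BTU/(h*ft*degF)."""
    return k_si * HOUR * FT * DEG_F / BTU

print(si_to_imperial(1.0))    # ~0.578 BTU/(h*ft*degF)
print(si_to_imperial(400.0))  # a copper-like value, ~231 BTU/(h*ft*degF)
```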
Other units which are closely related to the thermal conductivity are in common use in the construction and textile industries. The construction industry makes use of measures such as the R-value (resistance) and the U-value (transmittance or conductance). Although related to the thermal conductivity of a material used in an insulation product or assembly, R- and U-values are measured per unit area, and depend on the specified thickness of the product or assembly. Likewise the textile industry has several units including the tog and the clo which express thermal resistance of a material in a way analogous to the R-values used in the construction industry. Measurement. There are several ways to measure thermal conductivity; each is suitable for a limited range of materials. Broadly speaking, there are two categories of measurement techniques: "steady-state" and "transient". Steady-state techniques infer the thermal conductivity from measurements on the state of a material once a steady-state temperature profile has been reached, whereas transient techniques operate on the instantaneous state of a system during the approach to steady state. Lacking an explicit time component, steady-state techniques do not require complicated signal analysis (steady state implies constant signals). The disadvantage is that a well-engineered experimental setup is usually needed, and the time required to reach steady state precludes rapid measurement. In comparison with solid materials, the thermal properties of fluids are more difficult to study experimentally. This is because in addition to thermal conduction, convective and radiative energy transport are usually present unless measures are taken to limit these processes. The formation of an insulating boundary layer can also result in an apparent reduction in the thermal conductivity. Experimental values. The thermal conductivities of common substances span at least four orders of magnitude. 
Gases generally have low thermal conductivity, and pure metals have high thermal conductivity. For example, under standard conditions the thermal conductivity of copper is over 10,000 times that of air. Of all materials, allotropes of carbon, such as graphite and diamond, are usually credited with having the highest thermal conductivities at room temperature. The thermal conductivity of natural diamond at room temperature is several times higher than that of a highly conductive metal such as copper (although the precise value varies depending on the diamond type). Thermal conductivities of selected substances are tabulated below; an expanded list can be found in the list of thermal conductivities. These values are illustrative estimates only, as they do not account for measurement uncertainties or variability in material definitions. Influencing factors. Temperature. The effect of temperature on thermal conductivity is different for metals and nonmetals. In metals, heat conductivity is primarily due to free electrons. Following the Wiedemann–Franz law, thermal conductivity of metals is approximately proportional to the absolute temperature (in kelvins) times electrical conductivity. In pure metals the electrical conductivity decreases with increasing temperature and thus the product of the two, the thermal conductivity, stays approximately constant. However, as temperatures approach absolute zero, the thermal conductivity decreases sharply. In alloys the change in electrical conductivity is usually smaller and thus thermal conductivity increases with temperature, often proportionally to temperature. Many pure metals have a peak thermal conductivity between 2 K and 10 K. On the other hand, heat conductivity in nonmetals is mainly due to lattice vibrations (phonons). Except for high-quality crystals at low temperatures, the phonon mean free path is not reduced significantly at higher temperatures. 
Thus, the thermal conductivity of nonmetals is approximately constant at high temperatures. At low temperatures well below the Debye temperature, thermal conductivity decreases, as does the heat capacity, due to carrier scattering from defects. Chemical phase. When a material undergoes a phase change (e.g. from solid to liquid), the thermal conductivity may change abruptly. For instance, when ice melts to form liquid water at 0 °C, the thermal conductivity changes from 2.18 W/(m⋅K) to 0.56 W/(m⋅K). Even more dramatically, the thermal conductivity of a fluid diverges in the vicinity of the vapor-liquid critical point. Thermal anisotropy. Some substances, such as non-cubic crystals, can exhibit different thermal conductivities along different crystal axes. Sapphire is a notable example of variable thermal conductivity based on orientation and temperature, with 35 W/(m⋅K) along the c axis and 32 W/(m⋅K) along the a axis. Wood generally conducts better along the grain than across it. Other examples of materials where the thermal conductivity varies with direction are metals that have undergone heavy cold pressing, laminated materials, cables, the materials used for the Space Shuttle thermal protection system, and fiber-reinforced composite structures. When anisotropy is present, the direction of heat flow may differ from the direction of the thermal gradient. Electrical conductivity. In metals, thermal conductivity is approximately correlated with electrical conductivity according to the Wiedemann–Franz law, as freely moving valence electrons transfer not only electric current but also heat energy. However, the general correlation between electrical and thermal conductance does not hold for other materials, due to the increased importance of phonon carriers for heat in non-metals. Highly electrically conductive silver is less thermally conductive than diamond, which is an electrical insulator but conducts heat via phonons due to its orderly array of atoms. 
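The Wiedemann–Franz relation above can be checked numerically: dividing a metal's thermal conductivity by its electrical conductivity times the absolute temperature should give roughly the Lorenz constant, ~2.44×10−8 W·Ω/K2. The copper property values below are typical room-temperature handbook figures, assumed for illustration:

```python
L0 = 2.44e-8  # Sommerfeld value of the Lorenz number, W*Ohm/K^2

def lorenz_number(k_thermal, sigma_electrical, temperature):
    """Wiedemann-Franz ratio L = k / (sigma * T)."""
    return k_thermal / (sigma_electrical * temperature)

# Assumed copper at 300 K: k ~ 400 W/(m*K), sigma ~ 5.96e7 S/m.
L_cu = lorenz_number(400.0, 5.96e7, 300.0)
print(L_cu)  # ~2.2e-8 W*Ohm/K^2, within roughly 10% of L0
```

The residual discrepancy reflects the fact that phonons also carry some heat and that electron scattering is not perfectly elastic, which is consistent with the law being only approximate.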
Magnetic field. The influence of magnetic fields on thermal conductivity is known as the thermal Hall effect or Righi–Leduc effect. Gaseous phases. In the absence of convection, air and other gases are good insulators. Therefore, many insulating materials function simply by having a large number of gas-filled pockets which obstruct heat conduction pathways. Examples of these include expanded and extruded polystyrene (popularly referred to as "styrofoam") and silica aerogel, as well as warm clothes. Natural, biological insulators such as fur and feathers achieve similar effects by trapping air in pores, pockets, or voids. Low-density gases, such as hydrogen and helium, typically have high thermal conductivity. Dense gases such as xenon and dichlorodifluoromethane have low thermal conductivity. An exception, sulfur hexafluoride, a dense gas, has a relatively high thermal conductivity due to its high heat capacity. Argon and krypton, gases denser than air, are often used in insulated glazing (double paned windows) to improve their insulation characteristics. The thermal conductivity through bulk materials in porous or granular form is governed by the type of gas in the gaseous phase, and its pressure. At low pressures, the thermal conductivity of a gaseous phase is reduced, with this behavior governed by the Knudsen number, defined as formula_31, where formula_32 is the mean free path of gas molecules and formula_33 is the typical gap size of the space filled by the gas. In a granular material formula_33 corresponds to the characteristic size of the gaseous phase in the pores or intergranular spaces. Isotopic purity. The thermal conductivity of a crystal can depend strongly on isotopic purity, assuming other lattice defects are negligible. A notable example is diamond: at a temperature of around 100 K the thermal conductivity increases from 10,000 W·m−1·K−1 for natural type IIa diamond (98.9% 12C), to 41,000 for 99.9% enriched synthetic diamond. 
A value of 200,000 is predicted for 99.999% 12C at 80 K, assuming an otherwise pure crystal. The thermal conductivity of 99% isotopically enriched cubic boron nitride is ~ 1400 W·m−1·K−1, which is 90% higher than that of natural boron nitride. Molecular origins. The molecular mechanisms of thermal conduction vary among different materials, and in general depend on details of the microscopic structure and molecular interactions. As such, thermal conductivity is difficult to predict from first-principles. Any expressions for thermal conductivity which are exact and general, e.g. the Green-Kubo relations, are difficult to apply in practice, typically consisting of averages over multiparticle correlation functions. A notable exception is a monatomic dilute gas, for which a well-developed theory exists expressing thermal conductivity accurately and explicitly in terms of molecular parameters. In a gas, thermal conduction is mediated by discrete molecular collisions. In a simplified picture of a solid, thermal conduction occurs by two mechanisms: 1) the migration of free electrons and 2) lattice vibrations (phonons). The first mechanism dominates in pure metals and the second in non-metallic solids. In liquids, by contrast, the precise microscopic mechanisms of thermal conduction are poorly understood. Gases. In a simplified model of a dilute monatomic gas, molecules are modeled as rigid spheres which are in constant motion, colliding elastically with each other and with the walls of their container. Consider such a gas at temperature formula_34 and with density formula_35, specific heat formula_36 and molecular mass formula_37. Under these assumptions, an elementary calculation yields for the thermal conductivity formula_38 where formula_39 is a numerical constant of order formula_40, formula_41 is the Boltzmann constant, and formula_1 is the mean free path, which measures the average distance a molecule travels between collisions. 
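The elementary hard-sphere result can be evaluated concretely. Taking the order-one numerical constant formula_39 to be 1/3 (a common textbook choice; the text only states that it is of order one) and writing the result as k = β·ρ·c_v·v̄·λ with v̄ the mean molecular speed, the density cancels because ρλ is density-independent. The argon parameters below are assumed handbook values:

```python
import math

K_B = 1.380649e-23  # Boltzmann constant, J/K

def kinetic_conductivity(temperature, mass, diameter, beta=1.0 / 3.0):
    """Elementary hard-sphere estimate k = beta * rho * c_v * v_mean * l.
    Since l ~ 1/rho, the product rho*l = m / (sqrt(2)*pi*d^2) is
    density-independent, so no pressure is needed."""
    c_v = 1.5 * K_B / mass                               # monatomic, per kg
    v_mean = math.sqrt(8 * K_B * temperature / (math.pi * mass))
    rho_l = mass / (math.sqrt(2) * math.pi * diameter**2)
    return beta * c_v * v_mean * rho_l

# Assumed argon: m = 6.63e-26 kg, hard-sphere d ~ 3.4e-10 m, at 300 K.
k_ar = kinetic_conductivity(300.0, 6.63e-26, 3.4e-10)
print(k_ar)  # a few mW/(m*K); the measured value is ~0.018 W/(m*K)
```

The estimate lands within a factor of a few of experiment, which is about the accuracy one should expect from so crude a model; note also that the only temperature dependence left is the √T in the mean speed, anticipating the discussion that follows.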
Since formula_1 is inversely proportional to density, this equation predicts that thermal conductivity is independent of density for fixed temperature. The explanation is that increasing density increases the number of molecules which carry energy but decreases the average distance formula_1 a molecule can travel before transferring its energy to a different molecule: these two effects cancel out. For most gases, this prediction agrees well with experiments at pressures up to about 10 atmospheres. At higher densities, the simplifying assumption that energy is only transported by the translational motion of particles no longer holds, and the theory must be modified to account for the transfer of energy across a finite distance at the moment of collision between particles, as well as the locally non-uniform density in a high density gas. This modification has been carried out, yielding Revised Enskog Theory, which predicts a density dependence of the thermal conductivity in dense gases. Typically, experiments show a more rapid increase with temperature than formula_42 (here, formula_1 is independent of formula_34). This failure of the elementary theory can be traced to the oversimplified "hard sphere" model, which both ignores the "softness" of real molecules, and the attractive forces present between real molecules, such as dispersion forces. To incorporate more complex interparticle interactions, a systematic approach is necessary. One such approach is provided by Chapman–Enskog theory, which derives explicit expressions for thermal conductivity starting from the Boltzmann equation. The Boltzmann equation, in turn, provides a statistical description of a dilute gas for "generic" interparticle interactions. For a monatomic gas, expressions for formula_0 derived in this way take the form formula_43 where formula_44 is an effective particle diameter and formula_45 is a function of temperature whose explicit form depends on the interparticle interaction law. 
For rigid elastic spheres, formula_45 is independent of formula_34 and very close to formula_40. More complex interaction laws introduce a weak temperature dependence. The precise nature of the dependence is not always easy to discern, however, as formula_45 is defined as a multi-dimensional integral which may not be expressible in terms of elementary functions, but must be evaluated numerically. However, for particles interacting through a Mie potential (a generalisation of the Lennard-Jones potential) highly accurate correlations for formula_45 in terms of reduced units have been developed. An alternate, equivalent way to present the result is in terms of the gas viscosity formula_46, which can also be calculated in the Chapman–Enskog approach: formula_47 where formula_48 is a numerical factor which in general depends on the molecular model. For smooth spherically symmetric molecules, however, formula_48 is very close to formula_49, not deviating by more than formula_50 for a variety of interparticle force laws. Since formula_0, formula_46, and formula_36 are each well-defined physical quantities which can be measured independent of each other, this expression provides a convenient test of the theory. For monatomic gases, such as the noble gases, the agreement with experiment is fairly good. For gases whose molecules are not spherically symmetric, the expression formula_51 still holds. In contrast with spherically symmetric molecules, however, formula_48 varies significantly depending on the particular form of the interparticle interactions: this is a result of the energy exchanges between the internal and translational degrees of freedom of the molecules. An explicit treatment of this effect is difficult in the Chapman–Enskog approach. Alternately, the approximate expression formula_52 was suggested by Eucken, where formula_53 is the heat capacity ratio of the gas. 
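Eucken's expression is commonly written k = μ·c_v·(9γ − 5)/4; this explicit form is an assumption here, since the text gives only the symbolic relation formula_52. With typical handbook values for nitrogen (also assumed), it reproduces the measured conductivity well:

```python
def eucken_conductivity(viscosity, c_v, gamma):
    """Eucken's approximation for polyatomic gases:
    k = (9*gamma - 5)/4 * mu * c_v."""
    return (9.0 * gamma - 5.0) / 4.0 * viscosity * c_v

# Assumed N2 at 300 K: mu ~ 1.78e-5 Pa*s, c_v = (5/2)*R/M ~ 743 J/(kg*K),
# gamma = 7/5 for a diatomic gas, giving an Eucken factor of 1.9.
k_n2 = eucken_conductivity(1.78e-5, 743.0, 1.4)
print(k_n2)  # ~0.025 W/(m*K); the measured value is ~0.026 W/(m*K)
```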
The entirety of this section assumes the mean free path formula_1 is small compared with macroscopic (system) dimensions. In extremely dilute gases this assumption fails, and thermal conduction is described instead by an apparent thermal conductivity which decreases with density. Ultimately, as the density goes to formula_54 the system approaches a vacuum, and thermal conduction ceases entirely. Liquids. The exact mechanisms of thermal conduction are poorly understood in liquids: there is no molecular picture which is both simple and accurate. An example of a simple but very rough theory is that of Bridgman, in which a liquid is ascribed a local molecular structure similar to that of a solid, i.e. with molecules located approximately on a lattice. Elementary calculations then lead to the expression formula_55 where formula_56 is the Avogadro constant, formula_57 is the volume of a mole of liquid, and formula_58 is the speed of sound in the liquid. This is commonly called "Bridgman's equation". Metals. For metals at low temperatures the heat is carried mainly by the free electrons. In this case the mean velocity is the Fermi velocity which is temperature independent. The mean free path is determined by the impurities and the crystal imperfections which are temperature independent as well. So the only temperature-dependent quantity is the heat capacity "c", which, in this case, is proportional to "T". So formula_59 with "k"0 a constant. For pure metals, "k"0 is large, so the thermal conductivity is high. At higher temperatures the mean free path is limited by the phonons, so the thermal conductivity tends to decrease with temperature. In alloys the density of the impurities is very high, so "l" and, consequently "k", are small. Therefore, alloys, such as stainless steel, can be used for thermal insulation. Lattice waves, phonons, in dielectric solids. 
Heat transport in both amorphous and crystalline dielectric solids is by way of elastic vibrations of the lattice (i.e., phonons). This transport mechanism is theorized to be limited by the elastic scattering of acoustic phonons at lattice defects. This has been confirmed by the experiments of Chang and Jones on commercial glasses and glass ceramics, where the mean free paths were found to be limited by "internal boundary scattering" to length scales of 10−2 cm to 10−3 cm. The phonon mean free path has been associated directly with the effective relaxation length for processes without directional correlation. If Vg is the group velocity of a phonon wave packet, then the relaxation length formula_60 is defined as: formula_61 where "t" is the characteristic relaxation time. Since longitudinal waves have a much greater phase velocity than transverse waves, "V"long is much greater than "V"trans, and the relaxation length or mean free path of longitudinal phonons will be much greater. Thus, thermal conductivity will be largely determined by the speed of longitudinal phonons. Regarding the dependence of wave velocity on wavelength or frequency (dispersion), low-frequency phonons of long wavelength will be limited in relaxation length by elastic Rayleigh scattering. This type of light scattering from small particles is proportional to the fourth power of the frequency. For higher frequencies, the power of the frequency will decrease until at highest frequencies scattering is almost frequency independent. Similar arguments were subsequently generalized to many glass forming substances using Brillouin scattering. Phonons in the acoustical branch dominate the phonon heat conduction as they have greater energy dispersion and therefore a greater distribution of phonon velocities. 
Additional optical modes could also be caused by the presence of internal structure (i.e., charge or mass) at a lattice point; it is implied that the group velocity of these modes is low and therefore their contribution to the lattice thermal conductivity "λ"L (formula_62) is small. Each phonon mode can be split into one longitudinal and two transverse polarization branches. By extrapolating the phenomenology of lattice points to the unit cells it is seen that the total number of degrees of freedom is 3"pq" where "p" is the number of primitive cells with "q" atoms/unit cell. Of these, only 3"p" are associated with the acoustic modes; the remaining 3"p"("q" − 1) are accommodated through the optical branches. This implies that structures with larger "p" and "q" contain a greater number of optical modes and a reduced "λ"L. From these ideas, it can be concluded that increasing crystal complexity, which is described by a complexity factor CF (defined as the number of atoms/primitive unit cell), decreases "λ"L. This was done by assuming that the relaxation time "τ" decreases with increasing number of atoms in the unit cell and then scaling the parameters of the expression for thermal conductivity at high temperatures accordingly. Describing anharmonic effects is complicated because an exact treatment as in the harmonic case is not possible, and phonons are no longer exact eigensolutions to the equations of motion. Even if the state of motion of the crystal could be described with a plane wave at a particular time, its accuracy would deteriorate progressively with time. Time development would have to be described by introducing a spectrum of other phonons, which is known as the phonon decay. The two most important anharmonic effects are the thermal expansion and the phonon thermal conductivity. 
Only when the phonon number ‹n› deviates from the equilibrium value ‹n›0 can a thermal current arise, as stated in the following expression formula_63 where "v" is the energy transport velocity of phonons. Only two mechanisms exist that can cause time variation of ‹"n"› in a particular region: the number of phonons that diffuse into the region from neighboring regions differs from those that diffuse out, or phonons decay inside the same region into other phonons. A special form of the Boltzmann equation formula_64 states this. When steady-state conditions are assumed, the total time derivative of the phonon number is zero, because the temperature is constant in time and therefore the phonon number also stays constant. Time variation due to phonon decay is described with a relaxation time ("τ") approximation formula_65 which states that the more the phonon number deviates from its equilibrium value, the more its time variation increases. When steady-state conditions and local thermal equilibrium are assumed, we get the following equation formula_66 Using the relaxation time approximation for the Boltzmann equation and assuming steady-state conditions, the phonon thermal conductivity "λ"L can be determined. The temperature dependence of "λ"L originates from the variety of processes, whose significance for "λ"L depends on the temperature range of interest. The mean free path is one factor that determines the temperature dependence of "λ"L, as stated in the following equation formula_67 where Λ is the mean free path for phonons and formula_68 denotes the heat capacity. This equation is a result of combining the four previous equations with each other and knowing that formula_69 for cubic or isotropic systems and formula_70. At low temperatures (&lt; 10 K) the anharmonic interaction does not influence the mean free path and therefore, the thermal resistivity is determined only from processes for which q-conservation does not hold. 
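Before turning to the individual scattering processes, note that the kinetic expression formula_67 has the standard form λL = (1/3)·C·v·Λ; inverted, it gives a rough "gray" (single effective mean free path) estimate of Λ from measured data. The silicon figures below are assumed, typical room-temperature values:

```python
def phonon_mean_free_path(k, c_volumetric, v_sound):
    """Invert the kinetic formula lambda_L = (1/3) * C * v * Lambda
    to get an effective 'gray' phonon mean free path in meters."""
    return 3.0 * k / (c_volumetric * v_sound)

# Assumed silicon at room temperature: k ~ 150 W/(m*K),
# volumetric heat capacity C ~ 1.66e6 J/(m^3*K),
# averaged sound velocity v ~ 6000 m/s.
mfp = phonon_mean_free_path(150.0, 1.66e6, 6000.0)
print(mfp)  # ~4.5e-8 m, i.e. tens of nanometers
```

This single effective value hides the fact that the real mean free path is strongly frequency-dependent, but it correctly signals that nanostructuring on the tens-of-nanometers scale can suppress the lattice conductivity.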
These processes include the scattering of phonons by crystal defects, or scattering from the surface of the crystal in the case of a high-quality single crystal. Therefore, the thermal conductance depends on the external dimensions of the crystal and the quality of the surface. Thus, the temperature dependence of "λ"L is determined by the specific heat and is therefore proportional to "T"3. The phonon quasimomentum is defined as ℏq and differs from normal momentum in that it is only defined to within an arbitrary reciprocal lattice vector. At higher temperatures (10 K &lt; "T" &lt; "Θ"), the conservation of energy formula_71 and quasimomentum formula_72, where q1 is the wave vector of the incident phonon and q2, q3 are the wave vectors of the resultant phonons, may also involve a reciprocal lattice vector G, complicating the energy transport process. These processes can also reverse the direction of energy transport. Therefore, these processes are known as Umklapp (U) processes, and they can only occur when phonons with sufficiently large "q"-vectors are excited, because unless the sum of q2 and q3 points outside the Brillouin zone the momentum is conserved and the process is normal scattering (N-process). The probability of a phonon having energy "E" is given by the Boltzmann distribution formula_73. For a U-process to occur, the decaying phonon must have a wave vector q1 that is roughly half the diameter of the Brillouin zone, because otherwise quasimomentum would not be conserved. Therefore, these phonons have to possess an energy of formula_74, which is a significant fraction of the Debye energy needed to generate new phonons. The probability for this is proportional to formula_75, with formula_76. The temperature dependence of the mean free path has an exponential form formula_77. The presence of the reciprocal lattice wave vector implies a net phonon backscattering and a resistance to phonon and thermal transport, resulting in a finite "λ"L, as it means that momentum is not conserved. 
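The resulting temperature dependence can be sketched with a toy kinetic model, λL = (1/3)"c"v"v"Λ, assuming an Umklapp-limited mean free path of the form exp(Θ/"bT") − 1 (subtracting 1 so that the high-temperature limit reduces to the linearised Θ/"bT" term); all parameter values are illustrative, not material data:

```python
import numpy as np

# Toy kinetic model: lambda_L = (1/3) * c_v * v * mfp(T), with an
# Umklapp-limited mean free path mfp ~ exp(Theta/(b*T)) - 1.  All
# parameter values (Theta, b, c_v, v, mfp0) are illustrative only.
def lattice_conductivity(T, Theta=300.0, b=2.0, c_v=1.0, v=1.0, mfp0=1.0):
    mfp = mfp0 * np.expm1(Theta / (b * T))  # exp(Theta/(b*T)) - 1
    return c_v * v * mfp / 3.0

T = np.array([150.0, 600.0, 1200.0])  # below, above, far above Theta
lam = lattice_conductivity(T)

# Above Theta the exponential linearises to Theta/(b*T), so doubling T
# roughly halves lambda_L -- the T**-1 behaviour of Eucken's law.
ratio = lam[1] / lam[2]
```

The ratio of the conductivities at 600 K and 1200 K comes out close to 2, illustrating the high-temperature "T"−1 trend, while the low-temperature value is much larger because the exponential suppression of U-processes lengthens the mean free path.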
Only momentum non-conserving processes can cause thermal resistance. At high temperatures ("T" &gt; Θ), the mean free path and therefore "λ"L have a temperature dependence "T"−1, which follows from formula_77 by making the approximation formula_78 and writing formula_79. This dependency is known as Eucken's law and originates from the temperature dependence of the probability for a U-process to occur. Thermal conductivity is usually described by the Boltzmann equation with the relaxation time approximation, in which phonon scattering is the limiting factor. Another approach is to use analytic models, molecular dynamics, or Monte Carlo based methods to describe thermal conductivity in solids. Short wavelength phonons are strongly scattered by impurity atoms if an alloyed phase is present, but mid and long wavelength phonons are less affected. Mid and long wavelength phonons carry a significant fraction of heat, so to further reduce the lattice thermal conductivity one has to introduce structures that scatter these phonons. This is achieved by introducing an interface scattering mechanism, which requires structures whose characteristic length is longer than that of an impurity atom. Some possible ways to realize these interfaces are nanocomposites and embedded nanoparticles or structures. Prediction. Because thermal conductivity depends continuously on quantities like temperature and material composition, it cannot be fully characterized by a finite number of experimental measurements. Predictive formulas become necessary if experimental values are not available under the physical conditions of interest. This capability is important in thermophysical simulations, where quantities like temperature and pressure vary continuously with space and time, and may encompass extreme conditions inaccessible to direct measurement. In fluids. 
For the simplest fluids, such as monatomic gases and their mixtures at low to moderate densities, "ab initio" quantum mechanical computations can accurately predict thermal conductivity in terms of fundamental atomic properties—that is, without reference to existing measurements of thermal conductivity or other transport properties. This method uses Chapman-Enskog theory or Revised Enskog Theory to evaluate the thermal conductivity, taking fundamental intermolecular potentials as input, which are computed "ab initio" from a quantum mechanical description. For most fluids, such high-accuracy, first-principles computations are not feasible. Rather, theoretical or empirical expressions must be fit to existing thermal conductivity measurements. If such an expression is fit to high-fidelity data over a large range of temperatures and pressures, then it is called a "reference correlation" for that material. Reference correlations have been published for many pure materials; examples are carbon dioxide, ammonia, and benzene. Many of these cover temperature and pressure ranges that encompass gas, liquid, and supercritical phases. Thermophysical modeling software often relies on reference correlations for predicting thermal conductivity at user-specified temperature and pressure. These correlations may be proprietary. Examples are REFPROP (proprietary) and CoolProp (open-source). Thermal conductivity can also be computed using the Green-Kubo relations, which express transport coefficients in terms of the statistics of molecular trajectories. The advantage of these expressions is that they are formally exact and valid for general systems. The disadvantage is that they require detailed knowledge of particle trajectories, available only in computationally expensive simulations such as molecular dynamics. An accurate model for interparticle interactions is also required, which may be difficult to obtain for complex molecules. History. 
Jan Ingenhousz and the thermal conductivity of different metals. In a 1780 letter to Benjamin Franklin, Dutch-born British scientist Jan Ingenhousz relates an experiment which enabled him to rank seven different metals according to their thermal conductivities: You remembre you gave me a wire of five metals all drawn thro the same hole Viz. one, of gould, one of silver, copper steel and iron. I supplyed here the two others Viz. the one of tin the other of lead. I fixed these seven wires into a wooden frame at an equal distance of one an other ... I dipt the seven wires into this melted wax as deep as the wooden frame ... By taking them out they were cov[e]red with a coat of wax ... When I found that this crust was there about of an equal thikness upon all the wires, I placed them all in a glased earthen vessel full of olive oil heated to some degrees under boiling, taking care that each wire was dipt just as far in the oil as the other ... Now, as they had been all dipt alike at the same time in the same oil, it must follow, that the wire, upon which the wax had been melted the highest, had been the best conductor of heat. ... Silver conducted heat far the best of all other metals, next to this was copper, then gold, tin, iron, steel, Lead.
[ { "math_id": 0, "text": "k" }, { "math_id": 1, "text": "\\lambda" }, { "math_id": 2, "text": "\\kappa" }, { "math_id": 3, "text": " \\mathbf{q} = - k \\nabla T" }, { "math_id": 4, "text": "\\mathbf{q}" }, { "math_id": 5, "text": " k " }, { "math_id": 6, "text": "\\nabla T " }, { "math_id": 7, "text": "q" }, { "math_id": 8, "text": "T_1" }, { "math_id": 9, "text": "x=0" }, { "math_id": 10, "text": "T_2" }, { "math_id": 11, "text": "x=L" }, { "math_id": 12, "text": "T_2 > T_1" }, { "math_id": 13, "text": "L" }, { "math_id": 14, "text": "\nq = -k \\cdot \\frac{T_2 - T_1}{L}.\n" }, { "math_id": 15, "text": "k>0" }, { "math_id": 16, "text": "\\mathbf{q}(\\mathbf{r}, t)" }, { "math_id": 17, "text": "\\mathbf{r}" }, { "math_id": 18, "text": "t" }, { "math_id": 19, "text": "T(\\mathbf{r}, t)" }, { "math_id": 20, "text": "\n\\mathbf{q}(\\mathbf{r}, t) = -k \\nabla T(\\mathbf{r}, t),\n" }, { "math_id": 21, "text": "k > 0" }, { "math_id": 22, "text": "\n\\mathbf{q}(\\mathbf{r}, t) = -\\boldsymbol{\\kappa} \\cdot \\nabla T(\\mathbf{r}, t)\n" }, { "math_id": 23, "text": "\\boldsymbol{\\kappa}" }, { "math_id": 24, "text": "A" }, { "math_id": 25, "text": "kA/L" }, { "math_id": 26, "text": "L/(kA)" }, { "math_id": 27, "text": "k/L" }, { "math_id": 28, "text": "L/k" }, { "math_id": 29, "text": "\\alpha" }, { "math_id": 30, "text": "\\alpha = \\frac{ k }{ \\rho c_{p} }" }, { "math_id": 31, "text": "K_n=l/d" }, { "math_id": 32, "text": "l" }, { "math_id": 33, "text": "d" }, { "math_id": 34, "text": "T" }, { "math_id": 35, "text": "\\rho" }, { "math_id": 36, "text": "c_v" }, { "math_id": 37, "text": "m" }, { "math_id": 38, "text": "\nk = \\beta \\rho \\lambda c_v \\sqrt{\\frac{2k_\\text{B} T}{\\pi m}},\n" }, { "math_id": 39, "text": "\\beta" }, { "math_id": 40, "text": "1" }, { "math_id": 41, "text": "k_\\text{B}" }, { "math_id": 42, "text": "k \\propto \\sqrt{T}" }, { "math_id": 43, "text": "\nk = \\frac{25}{32} \\frac{\\sqrt{\\pi m k_\\text{B} T}}{\\pi \\sigma^2 \\Omega(T)} 
c_v,\n" }, { "math_id": 44, "text": "\\sigma" }, { "math_id": 45, "text": "\\Omega(T)" }, { "math_id": 46, "text": "\\mu" }, { "math_id": 47, "text": "\nk = f \\mu c_v,\n" }, { "math_id": 48, "text": "f" }, { "math_id": 49, "text": "2.5" }, { "math_id": 50, "text": "1%" }, { "math_id": 51, "text": "k = f \\mu c_v" }, { "math_id": 52, "text": "f = (1/4){(9 \\gamma - 5)}" }, { "math_id": 53, "text": "\\gamma" }, { "math_id": 54, "text": "0" }, { "math_id": 55, "text": "\nk = 3(N_\\text{A} / V)^{2/3} k_\\text{B} v_\\text{s},\n" }, { "math_id": 56, "text": "N_\\text{A}" }, { "math_id": 57, "text": "V" }, { "math_id": 58, "text": "v_\\text{s}" }, { "math_id": 59, "text": "k=k_0\\,T \\text{ (metal at low temperature)} " }, { "math_id": 60, "text": "l\\;" }, { "math_id": 61, "text": "l\\;=V_\\text{g} t" }, { "math_id": 62, "text": "\\kappa " }, { "math_id": 63, "text": "Q_x=\\frac{1}{V} \\sum_{q,j} {\\hslash \\omega \\left (\\left \\langle n \\right \\rangle-{ \\left \\langle n \\right \\rangle}^0 \\right)v_x}\\text{,}" }, { "math_id": 64, "text": "\\frac{d\\left \\langle n\\right \\rangle}{dt}={\\left(\\frac{\\partial \\left \\langle n\\right \\rangle}{\\partial t}\\right)}_{\\text{diff.}}+{\\left(\\frac{\\partial \\left \\langle n\\right \\rangle}{\\partial t}\\right)}_\\text{decay}" }, { "math_id": 65, "text": "{\\left(\\frac{\\partial \\left \\langle n\\right \\rangle}{\\partial t}\\right)}_\\text{decay}=-\\text{ }\\frac{\\left \\langle n\\right \\rangle-{\\left \\langle n\\right \\rangle}^{0}}{\\tau}," }, { "math_id": 66, "text": "{\\left(\\frac{\\partial \\left(n\\right)}{\\partial t}\\right)}_\\text{diff.}=-{v}_{x}\\frac{\\partial {\\left(n\\right)}^{0}}{\\partial T}\\frac{\\partial T}{\\partial x}\\text{.}" }, { "math_id": 67, "text": "{\\lambda}_{L}=\\frac{1}{3V}\\sum _{q,j}v\\left(q,j\\right)\\Lambda \\left(q,j\\right)\\frac{\\partial}{\\partial T}\\epsilon \\left(\\omega \\left(q,j\\right),T\\right)," }, { "math_id": 68, "text": "\\frac{\\partial}{\\partial 
T}\\epsilon" }, { "math_id": 69, "text": "\\left \\langle v_x^2\\right \\rangle=\\frac{1}{3}v^2" }, { "math_id": 70, "text": "\\Lambda =v\\tau " }, { "math_id": 71, "text": "\\hslash {\\omega}_{1}=\\hslash {\\omega}_{2}+\\hslash {\\omega}_{3}" }, { "math_id": 72, "text": "\\mathbf{q}_{1}=\\mathbf{q}_{2}+\\mathbf{q}_{3}+\\mathbf{G}" }, { "math_id": 73, "text": "P\\propto {e}^{-E/kT}" }, { "math_id": 74, "text": "\\sim k\\Theta /2" }, { "math_id": 75, "text": "{e}^{-\\Theta /bT}" }, { "math_id": 76, "text": "b=2" }, { "math_id": 77, "text": "{e}^{\\Theta /bT}" }, { "math_id": 78, "text": "{e}^{x}\\propto x\\text{ },\\text{ }\\left(x\\right) < 1" }, { "math_id": 79, "text": "x=\\Theta /bT" } ]
https://en.wikipedia.org/wiki?curid=59438
59441634
Sparse Fourier transform
The sparse Fourier transform (SFT) is a kind of discrete Fourier transform (DFT) for handling big data signals. Specifically, it is used in GPS synchronization, spectrum sensing and analog-to-digital converters. The fast Fourier transform (FFT) plays an indispensable role in many scientific domains, especially in signal processing, and is considered one of the top-10 algorithms of the twentieth century. However, with the advent of the big data era, the FFT still needs to be improved in order to save more computing power. Recently, the sparse Fourier transform (SFT) has gained a considerable amount of attention, because it performs well in analyzing long sequences of data with few signal components. Definition. Consider a sequence "x""n" of complex numbers. By the inverse discrete Fourier transform, "x""n" can be written as formula_0 Similarly, "X""k" can be represented as formula_1 Hence, from the equations above, the mapping is formula_2. Single frequency recovery. Assume only a single frequency exists in the sequence. In order to recover this frequency from the sequence, it is reasonable to utilize the relationship between adjacent points of the sequence. Phase encoding. The frequency "k" can be obtained from the phase of the ratio of adjacent points of the sequence. In other words, formula_3 Notice that formula_4. An aliasing-based search. Seeking the frequency "k" can also be done with the Chinese remainder theorem (CRT). Take formula_5 as an example. Now, we have three relatively prime integers 100, 101, and 103. Thus, the equation can be described as formula_6 By the CRT, we have formula_7 Randomly binning frequencies. Now, we wish to explore the case of multiple frequencies, instead of a single frequency. Adjacent frequencies can be separated by the scaling "c" and modulation "b" properties. Namely, by randomly choosing the parameters "c" and "b", the distribution of all frequencies can be made almost uniform. 
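Both single-frequency recovery ideas can be sketched in a few lines. The CRT example follows the text (k = 104,134 with moduli 100, 101, 103), while the length-1024 phase-encoding signal and its frequency are illustrative assumptions:

```python
import numpy as np
from math import prod

# Phase encoding: for x_n = X_k * exp(2j*pi*k*n/N), the angle of the
# ratio of adjacent samples is 2*pi*k/N, revealing k directly.
# N and k_true are illustrative choices.
N = 1024
k_true = 37
n = np.arange(N)
x = 0.8 * np.exp(2j * np.pi * k_true * n / N)
k_est = round(np.angle(x[1] / x[0]) * N / (2 * np.pi)) % N

# Aliasing-based search: recover k from its residues modulo the
# pairwise coprime lengths 100, 101, 103, as in the text's example.
def crt(residues, moduli):
    M = prod(moduli)
    total = 0
    for r, m in zip(residues, moduli):
        Mi = M // m
        total += r * Mi * pow(Mi, -1, m)  # modular inverse of Mi mod m
    return total % M

k_big = 104134
moduli = [100, 101, 103]
recovered = crt([k_big % m for m in moduli], moduli)
```

Here `k_est` recovers the planted frequency 37 from a single pair of adjacent samples, and `recovered` reconstructs 104,134 because the product 100·101·103 = 1,040,300 exceeds it.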
The figure "Spread all frequencies" shows that by randomly binning frequencies, we can utilize single frequency recovery to seek the main components. formula_8 where "c" is the scaling property and "b" is the modulation property. By randomly choosing "c" and "b", the whole spectrum looks almost like a uniform distribution. Then, passing the signal through filter banks, built for example from Gaussians, indicator functions, spike trains, or Dolph-Chebyshev filters, separates the frequencies so that each bank contains only a single frequency. The prototypical SFT. Generally, all SFT algorithms follow three stages. Identifying frequencies. By randomly binning frequencies, all components can be separated. They are then taken into filter banks so that each band contains only a single frequency, and it is convenient to use the methods mentioned above to recover that frequency. Estimating coefficients. After identifying frequencies, we have many frequency components. We can use the Fourier transform to estimate their coefficients. formula_9 Repeating. Finally, by repeating these two stages we can extract the most important components from the original signal. formula_10 Sparse Fourier transform in the discrete setting. In 2012, Hassanieh, Indyk, Katabi, and Price proposed an algorithm that takes formula_11 samples and runs in the same running time. Sparse Fourier transform in the high dimensional setting. In 2014, Indyk and Kapralov proposed an algorithm that takes formula_12 samples and runs in nearly linear time in formula_13. In 2016, Kapralov proposed an algorithm that uses a sublinear number of samples formula_14 and sublinear decoding time formula_15. In 2019, Nakos, Song, and Wang introduced a new algorithm that uses a nearly optimal number of samples formula_16 and requires nearly linear decoding time. A dimension-incremental algorithm based on sampling along rank-1 lattices was proposed by Potts and Volkmer. Sparse Fourier transform in the continuous setting. 
There are several works on generalizing the discrete setting to the continuous setting. Implementations. There are several implementations from MIT, MSU, ETH and the University of Technology Chemnitz (TUC), which are freely available online.
[ { "math_id": 0, "text": "\nx_n=(F^*X)_n=\\sum_{k=0}^{N-1}X_k e^{j\\frac{2\\pi}{N}kn}.\n" }, { "math_id": 1, "text": "\nX_k=\\frac{1}{N}(Fx)_k=\\frac{1}{N}\\sum_{n=0}^{N-1}x_n e^{-j\\frac{2\\pi}{N}kn}.\n" }, { "math_id": 2, "text": "F:\\mathbb C^N\\to \\mathbb C^N" }, { "math_id": 3, "text": "\n\\frac{x_{n+1}}{x_n}=e^{j\\frac{2\\pi}{N}k} = \\cos\\left(\\frac{2\\pi k}{N}\\right)+j \\sin\\left(\\frac{2\\pi k}{N}\\right).\n" }, { "math_id": 4, "text": "x_n \\in \\mathbb C^N" }, { "math_id": 5, "text": "k=104{,}134" }, { "math_id": 6, "text": "\nk=104{,}134\\equiv \\left\\{ \\begin{array}{rl} 34 & \\bmod 100, \\\\ 3 & \\bmod 101, \\\\ 1 & \\bmod 103. \\end{array} \\right.\n" }, { "math_id": 7, "text": "\nk=104{,}134\\bmod (100\\cdot101\\cdot103)=104{,}134\\bmod 1{,}040{,}300\n" }, { "math_id": 8, "text": "\nx_n'=X_k e^{j\\frac{2\\pi}{N}(c\\cdot k+b)},\n" }, { "math_id": 9, "text": "\nX_k'=\\frac 1 L \\sum_{l=1}^L x_n'e^{-j\\frac{2\\pi}{N}n'\\ell}\n" }, { "math_id": 10, "text": "\nx_n-\\sum_{k'=1}^k X_k' e^{j\\frac{2\\pi}{N}k'n}\n" }, { "math_id": 11, "text": " O ( k \\log n \\log (n/k) ) " }, { "math_id": 12, "text": " 2^{O(d \\log d)} k \\log n " }, { "math_id": 13, "text": " n " }, { "math_id": 14, "text": " 2^{O(d^2)} k \\log n \\log \\log n " }, { "math_id": 15, "text": " k \\log^{O(d)} n " }, { "math_id": 16, "text": " O( k \\log n \\log k ) " } ]
https://en.wikipedia.org/wiki?curid=59441634
59441761
Network synthesis
A design technique for linear electrical circuits Network synthesis is a design technique for linear electrical circuits. Synthesis starts from a prescribed impedance function of frequency or frequency response and then determines the possible networks that will produce the required response. The technique is to be compared to network analysis in which the response (or other behaviour) of a given circuit is calculated. Prior to network synthesis, only network analysis was available, but this requires that one already knows what form of circuit is to be analysed. There is no guarantee that the chosen circuit will be the closest possible match to the desired response, nor that the circuit is the simplest possible. Network synthesis directly addresses both these issues. Network synthesis has historically been concerned with synthesising passive networks, but is not limited to such circuits. The field was founded by Wilhelm Cauer after reading Ronald M. Foster's 1924 paper "A reactance theorem". Foster's theorem provided a method of synthesising LC circuits with arbitrary number of elements by a partial fraction expansion of the impedance function. Cauer extended Foster's method to RC and RL circuits, found new synthesis methods, and methods that could synthesise a general RLC circuit. Other important advances before World War II are due to Otto Brune and Sidney Darlington. In the 1940s Raoul Bott and Richard Duffin published a synthesis technique that did not require transformers in the general case (the elimination of which had been troubling researchers for some time). In the 1950s, a great deal of effort was put into the question of minimising the number of elements required in a synthesis, but with only limited success. Little was done in the field until the 2000s when the issue of minimisation again became an active area of research, but as of 2023, is still an unsolved problem. 
A primary application of network synthesis is the design of network synthesis filters, but this is not its only application. Amongst others are impedance matching networks, time-delay networks, directional couplers, and equalisation. In the 2000s, network synthesis began to be applied to mechanical systems as well as electrical, notably in Formula One racing. Overview. Network synthesis is all about designing an electrical network that behaves in a prescribed way without any preconception of the network form. Typically, an impedance is required to be synthesised using passive components. That is, a network consisting of resistances (R), inductances (L) and capacitances (C). Such networks always have an impedance, denoted formula_0, in the form of a rational function of the complex frequency variable "s". That is, the impedance is the ratio of two polynomials in "s". There are three broad areas of study in network synthesis: approximating a requirement with a rational function, synthesising that function into a network, and determining equivalents of the synthesised network. Approximation. The idealised prescribed function will rarely be capable of being exactly described by polynomials. It is therefore not possible to synthesise a network to exactly reproduce it. A simple, and common, example is the brick-wall filter. This is the ideal response of a low-pass filter but its piecewise continuous response is impossible to represent with polynomials because of the discontinuities. To overcome this difficulty, a rational function is found that closely approximates the prescribed function using approximation theory. In general, the closer the approximation is required to be, the higher the degree of the polynomial and the more elements will be required in the network. There are many polynomials and functions used in network synthesis for this purpose. The choice depends on which parameters of the prescribed function the designer wishes to optimise. 
One of the earliest approximations used was the Butterworth polynomials, which result in a maximally flat response in the passband. A common choice is the Chebyshev approximation, in which the designer specifies how much the passband response can deviate from the ideal in exchange for improvements in other parameters. Other approximations are available for optimising time delay, impedance matching, roll-off, and many other requirements. Realisation. Given a rational function, it is usually necessary to determine whether the function is realisable as a discrete passive network. All such networks are described by a rational function, but not all rational functions are realisable as a discrete passive network. Historically, network synthesis was concerned exclusively with such networks. Modern active components have made this limitation less relevant in many applications, but at the higher radio frequencies passive networks are still the technology of choice. There is a simple property of rational functions that predicts whether the function is realisable as a passive network. Once it is determined that a function is realisable, there are a number of algorithms available that will synthesise a network from it. Equivalence. A network realisation from a rational function is not unique. The same function may realise many equivalent networks. It is known that affine transformations of the impedance matrix formed in mesh analysis of a network are all impedance matrices of equivalent networks. Other impedance transformations are known, but whether there are further equivalence classes that remain to be discovered is an open question. A major area of research in network synthesis has been to find the realisation which uses the minimum number of elements. This question has not been fully solved for the general case, but solutions are available for many networks with practical applications. History. 
The field of network synthesis was founded by German mathematician and scientist Wilhelm Cauer (1900–1945). The first hint towards a theory came from American mathematician Ronald M. Foster (1896–1998) when he published "A reactance theorem" in 1924. Cauer immediately recognised the importance of this work and set about generalising and extending it. His thesis in 1926 was on "The realisation of impedances of prescribed frequency dependence" and is the beginning of the field. Cauer's most detailed work was done during World War II, but he was killed shortly before the end of the war. His work could not be widely published during the war, and it was not until 1958 that his family collected his papers and published them for the wider world. Meanwhile, progress had been made in the United States based on Cauer's pre-war publications and material captured during the war. English self-taught mathematician and scientist Oliver Heaviside (1850–1925) was the first to show that the impedance of an RLC network was always a rational function of a frequency operator, but provided no method of realising a network from a rational function. Cauer found a necessary condition for a rational function to be realisable as a passive network. South African Otto Brune (1901–1982) later coined the term positive-real function (PRF) for this condition. Cauer postulated that PRF was a necessary and sufficient condition but could not prove it, and suggested it as a research project to Brune, who was his graduate student in the United States at the time. Brune published the missing proof in his 1931 doctoral thesis. Foster's realisation was limited to LC networks and was in one of two forms; either a number of series LC circuits in parallel, or a number of parallel LC circuits in series. Foster's method was to expand formula_0 into partial fractions. Cauer showed that Foster's method could be extended to RL and RC networks. 
Cauer also found another method; expanding formula_0 as a continued fraction which results in a ladder network, again in two possible forms. In general, a PRF will represent an RLC network; with all three kinds of element present the realisation is trickier. Both Cauer and Brune used ideal transformers in their realisations of RLC networks. Having to include transformers is undesirable in a practical implementation of a circuit. A method of realisation that did not require transformers was provided in 1949 by Hungarian-American mathematician Raoul Bott (1923–2005) and American physicist Richard Duffin (1909–1996). The Bott and Duffin method provides an expansion by repeated application of Richards' theorem, a 1947 result due to American physicist and applied mathematician Paul I. Richards (1923–1978). The resulting Bott-Duffin networks have limited practical use (at least for rational functions of high degree) because the number of components required grows exponentially with the degree. A number of variations of the original Bott-Duffin method all reduce the number of elements in each section from six to five, but still with exponentially growing overall numbers. Papers achieving this include Pantell (1954), Reza (1954), Storer (1954) and Fialkow &amp; Gest (1955). As of 2010, there has been no further significant advance in synthesising rational functions. In 1939, American electrical engineer Sidney Darlington showed that any PRF can be realised as a two-port network consisting only of L and C elements and terminated at its output with a resistor. That is, only one resistor is required in any network, the remaining components being lossless. The theorem was independently discovered by both Cauer and Giovanni Cocci. The corollary problem, to find a synthesis of PRFs using R and C elements with only one inductor, is an unsolved problem in network theory. 
Another unsolved problem is finding a proof of Darlington's conjecture (1955) that any RC 2-port with a common terminal can be realised as a series-parallel network. An important consideration in practical networks is to minimise the number of components, especially the wound components—inductors and transformers. Despite great efforts being put into minimisation, no general theory of minimisation has ever been discovered as there is for the Boolean algebra of digital circuits. Cauer used elliptic rational functions to produce approximations to ideal filters. A special case of the elliptic rational functions is the Chebyshev polynomials, due to Pafnuty Chebyshev (1821–1894), which are an important part of approximation theory. Chebyshev polynomials are widely used to design filters. In 1930, British physicist Stephen Butterworth (1885–1958) designed the Butterworth filter, otherwise known as the maximally-flat filter, using Butterworth polynomials. Butterworth's work was entirely independent of Cauer, but it was later found that the Butterworth polynomials were a limiting case of the Chebyshev polynomials. Even earlier (1929) and again independently, American engineer and scientist Edward Lawry Norton (1898–1983) designed a maximally-flat mechanical filter with a response entirely analogous to Butterworth's electrical filter. In the 2000s, interest in further developing network synthesis theory was given a boost when the theory started to be applied to large mechanical systems. The unsolved problem of minimisation is much more important in the mechanical domain than the electrical due to the size and cost of components. In 2017, researchers at the University of Cambridge, limiting themselves to considering biquadratic rational functions, determined that Bott-Duffin realisations of such functions for all series-parallel networks and most arbitrary networks had the minimum number of reactances (Hughes, 2017). 
They found this result surprising as it showed that the Bott-Duffin method was not quite so non-minimal as previously thought. This research partly centred on revisiting the "Ladenheim Catalogue". This is an enumeration of all distinct RLC networks with no more than two reactances and three resistances. Edward Ladenheim carried out this work in 1948 while a student of Foster. The relevance of the catalogue is that all these networks are realised by biquadratic functions. Applications. The single most widely used application of network synthesis is in the design of signal processing filters. The modern designs of such filters are almost always some form of network synthesis filter. Another application is the design of impedance matching networks. Impedance matching at a single frequency requires only a trivial network—usually one component. Impedance matching over a wide band, however, requires a more complex network, even in the case that the source and load resistances do not vary with frequency. Doing this with passive elements and without the use of transformers results in a filter-like design. Furthermore, if the load is not a pure resistance then it is only possible to achieve a perfect match at a number of discrete frequencies; the match over the band as a whole must be approximated. The designer first prescribes the frequency band over which the matching network is to operate, and then designs a band-pass filter for that band. The only essential difference between a standard filter and a matching network is that the source and load impedances are not equal. There are differences between filters and matching networks in which parameters are important. Unless the network has a dual function, the designer is not too concerned over the behaviour of the impedance matching network outside the passband. It does not matter if the transition band is not very narrow, or that the stopband has poor attenuation. 
In fact, trying to improve the bandwidth beyond what is strictly necessary will detract from the accuracy of the impedance match. With a given number of elements in the network, narrowing the design bandwidth improves the matching and vice versa. The limitations of impedance matching networks were first investigated by American engineer and scientist Hendrik Wade Bode in 1945, and the principle that they must necessarily be filter-like was established by Italian-American computer scientist Robert Fano in 1950. One parameter in the passband that is usually set for filters is the maximum insertion loss. For impedance matching networks, a better match can be obtained by also setting a minimum loss. That is, the gain never rises to unity at any point. Time-delay networks can be designed by network synthesis with filter-like structures. It is not possible to design a delay network that has a constant delay at all frequencies in a band. An approximation to this behaviour must be used limited to a prescribed bandwidth. The prescribed delay will occur at most at a finite number of spot frequencies. The Bessel filter has maximally-flat time-delay. The application of network synthesis is not limited to the electrical domain. It can be applied to systems in any energy domain that can be represented as a network of linear components. In particular, network synthesis has found applications in mechanical networks in the mechanical domain. Consideration of mechanical network synthesis led Malcolm C. Smith to propose a new mechanical network element, the inerter, which is analogous to the electrical capacitor. Mechanical components with the inertance property have found an application in the suspensions of Formula One racing cars. Synthesis techniques. Synthesis begins by choosing an approximation technique that delivers a rational function approximating the required function of the network. 
If the function is to be implemented with passive components, the function must also meet the conditions of a positive-real function (PRF). The synthesis technique used depends in part on what form of network is desired, and in part on how many kinds of elements are needed in the network. A one-element-kind network is a trivial case, reducing to an impedance of a single element. A two-element-kind network (LC, RC, or RL) can be synthesised with Foster or Cauer synthesis. A three-element-kind network (an RLC network) requires more advanced treatment such as Brune or Bott-Duffin synthesis. Which, and how many kinds of, elements are required can be determined by examining the poles and zeroes (collectively called critical frequencies) of the function. The requirement on the critical frequencies is given for each kind of network in the relevant sections below. Foster synthesis. Foster's synthesis, in its original form, can be applied only to LC networks. A PRF represents a two-element-kind LC network if the critical frequencies of formula_0 all exist on the formula_1 axis of the complex plane of formula_2 (the "s"-plane) and alternate between poles and zeroes. There must be a single critical frequency at the origin and at infinity, and all the rest must be in conjugate pairs. formula_0 must be the ratio of an even and an odd polynomial, and their degrees must differ by exactly one. These requirements are a consequence of Foster's reactance theorem. Foster I form. Foster's first form (Foster I form) synthesises formula_0 as a set of parallel LC circuits in series. For example, formula_3 can be expanded into partial fractions as, formula_4 The first term represents a series inductor, a consequence of formula_0 having a pole at infinity. If it had had a pole at the origin, that would represent a series capacitor. The remaining two terms each represent conjugate pairs of poles on the formula_1 axis.
Each of these terms can be synthesised as a parallel LC circuit by comparison with the impedance expression for such a circuit, formula_5 The resulting circuit is shown in the figure. Foster II form. Foster II form synthesises formula_0 as a set of series LC circuits in parallel. The same method of expanding into partial fractions is used as for Foster I form, but applied to the admittance, formula_6, instead of formula_0. Using the same example PRF as before, formula_7 Expanded in partial fractions, formula_8 The first term represents a shunt inductor, a consequence of formula_6 having a pole at the origin (or, equivalently, formula_0 has a zero at the origin). If it had had a pole at infinity, that would represent a shunt capacitor. The remaining two terms each represent conjugate pairs of poles on the formula_1 axis. Each of these terms can be synthesised as a series LC circuit by comparison with the admittance expression for such a circuit, formula_9 The resulting circuit is shown in the figure. Extension to RC or RL networks. Foster synthesis can be extended to any two-element-kind network. For instance, the partial fraction terms of an RC network in Foster I form will each represent an R and C element in parallel. In this case, the partial fractions will be of the form, formula_10 Other forms and element kinds follow by analogy. As with an LC network, the PRF can be tested to see if it is an RC or RL network by examining the critical frequencies. The critical frequencies must all be on the negative real axis and alternate between poles and zeroes, and there must be an equal number of each. If the critical frequency nearest, or at, the origin is a pole, then the PRF is an RC network if it represents a formula_0, or it is an RL network if it represents a formula_6. Vice versa if the critical frequency nearest, or at, the origin is a zero. These extensions of the theory also apply to the Cauer forms described below. Immittance.
In the Foster synthesis above, the same expansion procedure is used in both the Foster I form and the Foster II form. It is convenient, especially in theoretical works, to treat them together as an immittance rather than separately as either an impedance or an admittance. It is only necessary to declare whether the function represents an impedance or an admittance at the point that an actual circuit needs to be realised. Immittance can also be used in the same way with the Cauer I and Cauer II forms and other procedures. Cauer synthesis. Cauer synthesis is an alternative to Foster synthesis, and the conditions that a PRF must meet are exactly the same as for Foster synthesis. Like Foster synthesis, there are two forms of Cauer synthesis, and both can be extended to RC and RL networks. Cauer I form. The Cauer I form expands formula_0 into a continued fraction. Using the same example as used for the Foster I form, formula_11 or, in more compact notation, formula_12 The terms of this expansion can be directly implemented as the component values of a ladder network as shown in the figure. The given PRF may have a denominator that has a greater degree than the numerator. In such cases, the multiplicative inverse of the function is expanded instead. That is, if the function represents formula_0, then formula_6 is expanded instead and vice versa. Cauer II form. Cauer II form expands formula_0 in exactly the same way as Cauer I form except that the lowest-degree term is extracted first in the continued fraction expansion rather than the highest-degree term as is done in Cauer I form. The example used for the Cauer I form and the Foster forms, when expanded as a Cauer II form, results in some elements having negative values. This particular PRF, therefore, cannot be realised in passive components as a Cauer II form without the inclusion of transformers or mutual inductances.
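The continued-fraction coefficients of the Cauer I expansion above (formula_12) can be reproduced mechanically: extract the highest-degree quotient term by polynomial division, invert the remainder, and repeat. A pure-Python sketch using exact rational arithmetic (function names are illustrative, not from the source):

```python
from fractions import Fraction

def poly_div(num, den):
    """Polynomial long division; coefficient lists, highest power first.
    Returns (quotient, remainder)."""
    num = num[:]
    quot = []
    while len(num) >= len(den):
        c = num[0] / den[0]
        quot.append(c)
        pad = den + [Fraction(0)] * (len(num) - len(den))
        num = [a - c * b for a, b in zip(num, pad)]
        num.pop(0)          # the leading term cancels exactly
    return quot, num

def cauer_I(num, den):
    """Cauer I continued fraction: extract the highest-degree quotient term,
    invert the remainder, and repeat.  Returns the coefficient of s in each
    extracted term."""
    terms = []
    while any(den):
        quot, rem = poly_div(num, den)
        terms.append(quot[0])
        num, den = den, rem
    return terms

# Z(s) = (9s^5 + 30s^3 + 24s) / (18s^4 + 36s^2 + 8), the Cauer I example above
num = [Fraction(c) for c in (9, 0, 30, 0, 24, 0)]
den = [Fraction(c) for c in (18, 0, 36, 0, 8)]
print([str(t) for t in cauer_I(num, den)])   # → ['1/2', '3/2', '2', '3/2', '1/2']
```

The five coefficients are exactly the ladder element values 0.5s, 1.5s, 2s, 1.5s, 0.5s of the compact notation above.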
The essential reason that the example formula_0 cannot be realised as a Cauer II form is that this form has a high-pass topology. The first element extracted in the continued fraction is a series capacitor. This makes it impossible for the zero of formula_0 at the origin to be realised. The Cauer I form, on the other hand, has a low-pass topology and naturally has a zero at the origin. However, the formula_6 of this function can be realised as a Cauer II form since the first element extracted is a shunt inductor. This gives a pole at the origin for formula_6, but that translates to the necessary zero at the origin for formula_0. The continued fraction expansion is, formula_13 and the realised network is shown in the figure. Brune synthesis. The Brune synthesis can synthesise any arbitrary PRF, so in general it will result in a 3-element-kind (i.e. RLC) network. The poles and zeroes can lie anywhere in the left-hand half of the complex plane. The Brune method starts with some preliminary steps to eliminate critical frequencies on the imaginary axis as in the Foster method. These preliminary steps are sometimes called the "Foster preamble". There is then a cycle of steps to produce a cascade of Brune sections. Removal of critical frequencies on the imaginary axis. Poles and zeroes on the formula_14 axis represent L and C elements that can be extracted from the PRF. Specifically, a pole of formula_0 at the origin represents a series capacitor, a pole at infinity represents a series inductor, and a conjugate pair of poles at formula_15 represents a parallel resonant circuit in series, resonant at formula_16; zeroes of formula_0 are poles of formula_6 and are extracted analogously as shunt elements. After these extractions, the remainder PRF has no critical frequencies on the imaginary axis and is known as a "minimum reactance, minimum susceptance function". Brune synthesis proper begins with such a function. Broad outline of method. The essence of the Brune method is to create a conjugate pair of zeroes on the formula_1 axis by extracting the real and imaginary parts of the function at that frequency, and then extract the pair of zeroes as a resonant circuit. This is the first Brune section of the synthesised network.
The resulting remainder is another minimum reactance function that is two degrees lower. The cycle is then repeated, each cycle producing one more Brune section of the final network until just a constant value (a resistance) remains. The Brune synthesis is canonical, that is, the number of elements in the final synthesised network is equal to the number of arbitrary coefficients in the impedance function. The number of elements in the synthesised circuit cannot therefore be reduced any further. Removal of minimum resistance. A minimum reactance function will have a minimum real part, formula_17, at some frequency formula_18. This resistance can be extracted from the function leaving a remainder of another PRF called a "minimum positive-real function", or just "minimum function". For example, the minimum reactance function formula_19 has formula_20 and formula_21. The minimum function, formula_22, is therefore, formula_23 Removal of a negative inductance or capacitance. Since formula_24 has no real part, we can write, formula_25 For the example function, formula_26 In this case, formula_27 is negative, and we interpret it as the reactance of a negative-valued inductor, formula_28. Thus, formula_29 and formula_30 after substituting in the values of formula_18 and formula_31. This inductance is then extracted from formula_22, leaving another PRF, formula_32, formula_33 The reason for extracting a negative value is because formula_34 is a PRF, which it would not be if formula_28 were positive. This guarantees that formula_32 will also be PRF (because the sum of two PRFs is also PRF). For cases where formula_27 is a positive value, the admittance function is used instead and a negative capacitance is extracted. How these negative values are implemented is explained in a later section. Removal of a conjugate pair of zeroes. Both the real and imaginary parts of formula_35 have been removed in previous steps. 
This leaves a pair of zeroes in formula_32 at formula_36 as shown by factorising the example function; formula_37 Since such a pair of zeroes represents a shunt resonant circuit, we extract it as a pair of poles from the admittance function, formula_38 The rightmost term is the extracted resonant circuit with formula_39 and formula_40. The network synthesised so far is shown in the figure. Removal of a pole at infinity. formula_41 must have a pole at infinity, since one was created there by the extraction of a negative inductance. This pole can now be extracted as a positive inductance. formula_42 Thus formula_43 as shown in the figure. Replacing negative inductance with a transformer. The negative inductance cannot be implemented directly with passive components. However, the "tee" of inductors can be converted into mutually coupled inductors which absorb the negative inductance. With a coupling coefficient of unity (tightly coupled), the mutual inductance, formula_44, in the example case is 2.0. Rinse and repeat. In general, formula_45 will be another minimum reactance function, and the Brune cycle is then repeated to extract another Brune section. In the example case, the original PRF was of degree 2, so after reducing it by two degrees, only a constant term is left which, trivially, synthesises as a resistance. Positive "X". In step two of the cycle it was mentioned that a negative element value must be extracted in order to guarantee a PRF remainder. If formula_27 is positive, the element extracted must be a shunt capacitor instead of a series inductor if the element is to be negative. It is extracted from the admittance formula_46 instead of the impedance formula_22. The circuit topology arrived at in step four of the cycle is a Π (pi) of capacitors plus an inductor instead of a tee of inductors plus a capacitor. It can be shown that this Π of capacitors plus inductor is an equivalent circuit of the tee of inductors plus capacitor.
Thus, it is permissible to extract a positive inductance and then proceed as though formula_32 were PRF, even though it is not. The correct result will still be arrived at, and the remainder function will be PRF, so it can be fed into the next cycle. Bott-Duffin synthesis. The Bott-Duffin synthesis begins, as with the Brune synthesis, by removing all poles and zeroes on the formula_1 axis. Then Richards' theorem is invoked, which states that, for formula_47 if formula_0 is a PRF, then formula_48 is a PRF for all real, positive values of formula_49. Making formula_0 the subject of the expression results in, formula_50 An example of one cycle of Bott-Duffin synthesis is shown in the figures. The four terms in this expression are, respectively, a PRF (formula_51 in the diagram), an inductance, formula_52, in parallel with it, another PRF (formula_53 in the diagram), and a capacitance, formula_54, in parallel with it. A pair of critical frequencies on the formula_1 axis is then extracted from each of the two new PRFs (details not given here), each realised as a resonant circuit. The two residual PRFs (formula_55 and formula_56 in the diagram) are each two degrees lower than formula_0. The same procedure is then repeatedly applied to the new PRFs generated until just a single element remains. Since the number of PRFs generated doubles with each cycle, the number of elements synthesised will grow exponentially. Although the Bott-Duffin method avoids the use of transformers and can be applied to any expression capable of realisation as a passive network, it has limited practical use due to the high component count required. Bayard synthesis. Bayard synthesis is a state-space synthesis method based on the Gauss factorisation procedure. This method returns a synthesis using the minimum number of resistors and contains no gyrators. However, the method is non-canonical and will, in general, return a non-minimal number of reactance elements. Darlington synthesis.
Darlington synthesis starts from a different perspective from the techniques discussed so far, which all start from a prescribed rational function and realise it as a one-port impedance. Darlington synthesis starts with a prescribed rational function that is the desired transfer function of a two-port network. Darlington showed that any PRF can be realised as a two-port network using only L and C elements with a single resistor terminating the output port. The Darlington and related methods are called the "insertion loss method". The method can be extended to multi-port networks with each port terminated with a single resistor. The Darlington method, in general, will require transformers or coupled inductors. However, most common filter types can be constructed by the Darlington method without these undesirable features. Active and digital realisations. If the requirement to use only passive elements is lifted, then the realisation can be greatly simplified. Amplifiers can be used to buffer the parts of the network from each other so that they do not interact. Each buffered cell can directly realise a pair of poles of the rational function. There is then no need for any kind of iterative expansion of the function. The first example of this kind of synthesis is due to Stephen Butterworth in 1930. The Butterworth filter he produced became a classic of filter design, but it is more frequently implemented with purely passive rather than active components. More generally applicable designs of this kind include the Sallen–Key topology due to R. P. Sallen and E. L. Key in 1955 at MIT Lincoln Laboratory, and the biquadratic filter. Like the Darlington approach, Butterworth and Sallen–Key start with a prescribed transfer function rather than an impedance. A major practical advantage of active implementation is that it can avoid the use of wound components (transformers and inductors) altogether. These are undesirable for manufacturing reasons.
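To illustrate how such a buffered cell realises a pole pair directly, the well-known unity-gain Sallen–Key low-pass stage has the transfer function H(s) = 1/(R1R2C1C2 s² + (R1 + R2)C2 s + 1). A short sketch computing the resulting natural frequency and Q (the component values are illustrative, not from the source):

```python
import math

def sallen_key_lowpass(r1, r2, c1, c2):
    """Natural frequency (rad/s) and Q of a unity-gain Sallen-Key low-pass
    stage, H(s) = 1 / (R1*R2*C1*C2*s^2 + (R1 + R2)*C2*s + 1)."""
    w0 = 1 / math.sqrt(r1 * r2 * c1 * c2)
    q = math.sqrt(r1 * r2 * c1 * c2) / ((r1 + r2) * c2)
    return w0, q

# Equal resistors with C1 = 2*C2 gives Q = 1/sqrt(2): a second-order
# Butterworth stage with no inductors at all.
w0, q = sallen_key_lowpass(10e3, 10e3, 20e-9, 10e-9)
print(round(w0), round(q, 3))   # → 7071 0.707
```

Cascading such stages, each handling one pole pair, replaces the iterative expansions of the passive methods above.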
Another feature of active designs is that they are not limited to PRFs. Digital realisations, like active circuits, are not limited to PRFs and can implement any rational function simply by programming it in. However, the function may not be stable. That is, it may lead to oscillation. PRFs are guaranteed to be stable, but other functions may not be. The stability of a rational function can be determined by examining the poles and zeroes of the function and applying the Nyquist stability criterion.
[ { "math_id": 0, "text": "Z(s)" }, { "math_id": 1, "text": "i \\omega" }, { "math_id": 2, "text": "s = \\sigma + i \\omega" }, { "math_id": 3, "text": "Z(s) = \\frac {9s^5 + 30s^3 + 24s}{18s^4 + 36s^2 + 8} " }, { "math_id": 4, "text": "Z(s) = {s \\over 2} + \\frac {(25 + 11 \\sqrt 5)s}{5(9 + 3 \\sqrt 5)s^2 +20} + \\frac {(25 - 11 \\sqrt 5)s}{5(9 - 3 \\sqrt 5)s^2 +20} \\approx {s \\over 2} + \\frac {2.48s}{3.93s^2 +1} + \\frac {0.020s}{0.573s^2 + 1}" }, { "math_id": 5, "text": "Z_{LC}(s) = \\frac {Ls}{LCs^2 + 1} " }, { "math_id": 6, "text": "Y(s)" }, { "math_id": 7, "text": "Y(s) = {1 \\over Z(s)} = \\frac {18s^4 + 36s^2 + 8}{9s^5 + 30s^3 + 24s} " }, { "math_id": 8, "text": "Y(s) \\simeq {1 \\over 3s} + \\frac {2.498s}{0.6346s^2 +1} + \\frac {1.415s}{0.4719s^2 + 1}" }, { "math_id": 9, "text": "Y_{LC}(s) = \\frac {Cs}{LCs^2 + 1} " }, { "math_id": 10, "text": "Z_{RC}(s) = \\frac {R}{RCs + 1} " }, { "math_id": 11, "text": "Z(s) = 0.5s + \\cfrac{1}{1.5s+\\cfrac{1}{2s+\\cfrac{1}{1.5s+\\cfrac{1}{0.5s}}}}" }, { "math_id": 12, "text": "Z(s) = [0.5s;1.5s,2s,1.5s,0.5s]." }, { "math_id": 13, "text": "Y(s) \\simeq \\left [ {1 \\over 3s}; {1 \\over 1.083s} , {1 \\over 0.2175s} , {1 \\over 1.735s} \\right ]" }, { "math_id": 14, "text": "j \\omega" }, { "math_id": 15, "text": "s= \\pm i \\omega_c" }, { "math_id": 16, "text": "\\omega_c" }, { "math_id": 17, "text": "R_ \\text {min}" }, { "math_id": 18, "text": "\\omega_0" }, { "math_id": 19, "text": "Z(s) = \\frac {3s^2 + 3s + 6}{2s^2 + s + 2} " }, { "math_id": 20, "text": "\\omega_ \\text {min} = \\sqrt 2" }, { "math_id": 21, "text": "R_ \\text {min} = 1" }, { "math_id": 22, "text": "Z_1(s)" }, { "math_id": 23, "text": "Z_1(s) = Z(s) - R_ \\text {min} = \\frac {s^2 + 2s + 4}{2s^2 + s + 2}" }, { "math_id": 24, "text": "Z_1(i \\omega_0)" }, { "math_id": 25, "text": "Z_1(i \\omega_0) = iX \\ ." }, { "math_id": 26, "text": "Z_1(i \\omega_0) = -i \\sqrt 2 = iX \\ ." 
}, { "math_id": 27, "text": "X" }, { "math_id": 28, "text": "L_1" }, { "math_id": 29, "text": "iX = i \\omega_0 L_1" }, { "math_id": 30, "text": "L_1 = -1 " }, { "math_id": 31, "text": "iX" }, { "math_id": 32, "text": "Z_2(s)" }, { "math_id": 33, "text": "Z_2(s) = Z_1(s) - sL_1 = \\frac {2s^3 + 2s^2 + 4s +4}{2s^2 + s +2} \\ ." }, { "math_id": 34, "text": "- sL_1" }, { "math_id": 35, "text": "Z(i \\omega_0)" }, { "math_id": 36, "text": "\\pm i \\omega_0" }, { "math_id": 37, "text": "Z_2(s) = \\frac {2s^3 + 2s^2 + 4s +4}{2s^2 + s +2} = \\frac {(s^2 + 2)(2s + 2)}{2s^2 + s + 2} \\ ." }, { "math_id": 38, "text": " \\begin{align}\nY_2(s) & = {1 \\over Z_2(s)} = \\frac {2s^2 + s + 2}{(s^2 + 2)(2s + 2)} \\\\\n& = {1 \\over {2s +2}} + \\frac {s/2}{s^2 + 2} \\\\\n& = Y_3(s) + \\frac {s/2}{s^2 + 2} \\\\\n\\end{align} " }, { "math_id": 39, "text": "L_2 = 2" }, { "math_id": 40, "text": "C_2 = 1/4" }, { "math_id": 41, "text": "Z_3 (s)" }, { "math_id": 42, "text": "Z_3 (s) = {1 \\over Y_3 (s)} = 2s + 2 = Z_4 (s) + 2s." }, { "math_id": 43, "text": "L_3 = 2" }, { "math_id": 44, "text": "M" }, { "math_id": 45, "text": "Z_4(s)" }, { "math_id": 46, "text": "Y_1(s)" }, { "math_id": 47, "text": "R(s) = \\frac {kZ(s)-sZ(k)}{kZ(k)-sZ(s)} " }, { "math_id": 48, "text": "R(s)" }, { "math_id": 49, "text": "k" }, { "math_id": 50, "text": "Z(s) = \\left ( \\frac {R(s)}{Z(k)} + \\frac {k}{sZ(k)} \\right )^{-1} + \\left ( \\frac {1}{Z(k)R(s)} + \\frac {s}{kZ(k)} \\right )^{-1}" }, { "math_id": 51, "text": "Z_2 (s)" }, { "math_id": 52, "text": "L" }, { "math_id": 53, "text": "Z_1 (s)" }, { "math_id": 54, "text": "C" }, { "math_id": 55, "text": "Y_3 (s)" }, { "math_id": 56, "text": "Z_4 (s)" } ]
https://en.wikipedia.org/wiki?curid=59441761
5944391
Hopcroft–Karp algorithm
Algorithm for maximum cardinality matching In computer science, the Hopcroft–Karp algorithm (sometimes more accurately called the Hopcroft–Karp–Karzanov algorithm) is an algorithm that takes a bipartite graph as input and produces a maximum-cardinality matching as output — a set of as many edges as possible with the property that no two edges share an endpoint. It runs in formula_0 time in the worst case, where formula_1 is the set of edges in the graph, formula_2 is the set of vertices of the graph, and it is assumed that formula_3. In the case of dense graphs the time bound becomes formula_4, and for sparse random graphs it runs in time formula_5 with high probability. The algorithm was discovered by John Hopcroft and Richard Karp (1973) and independently by Alexander Karzanov (1973). As in previous methods for matching, such as the Hungarian algorithm, the Hopcroft–Karp algorithm repeatedly increases the size of a partial matching by finding "augmenting paths". These paths are sequences of edges of the graph, which alternate between edges in the matching and edges out of the partial matching, and where the initial and final edges are not in the partial matching. Finding an augmenting path allows us to increment the size of the partial matching, by simply toggling the edges of the augmenting path (putting in the partial matching those that were not, and vice versa). Simpler algorithms for bipartite matching, such as the Ford–Fulkerson algorithm, find one augmenting path per iteration: the Hopcroft–Karp algorithm instead finds a maximal set of shortest augmenting paths, so as to ensure that only formula_6 iterations are needed instead of formula_7 iterations. The same performance of formula_0 can be achieved to find maximum-cardinality matchings in arbitrary graphs, with the more complicated algorithm of Micali and Vazirani. The Hopcroft–Karp algorithm can be seen as a special case of Dinic's algorithm for the maximum-flow problem. Augmenting paths.
A vertex that is not the endpoint of an edge in some partial matching formula_8 is called a "free vertex". The basic concept that the algorithm relies on is that of an "augmenting path", a path that starts at a free vertex, ends at a free vertex, and alternates between unmatched and matched edges within the path. It follows from this definition that, except for the endpoints, all other vertices (if any) in an augmenting path must be non-free vertices. An augmenting path could consist of only two vertices (both free) and a single unmatched edge between them. If formula_8 is a matching, and formula_9 is an augmenting path relative to formula_8, then the symmetric difference of the two sets of edges, formula_10, would form a matching with size formula_11. Thus, by finding augmenting paths, an algorithm may increase the size of the matching. Conversely, suppose that a matching formula_8 is not optimal, and let formula_9 be the symmetric difference formula_12 where formula_13 is an optimal matching. Because formula_8 and formula_13 are both matchings, every vertex has degree at most 2 in formula_9. So formula_9 must form a collection of disjoint cycles, of paths with an equal number of matched and unmatched edges in formula_8, of augmenting paths for formula_8, and of augmenting paths for formula_13; but the latter is impossible because formula_13 is optimal. Now, the cycles and the paths with equal numbers of matched and unmatched edges do not contribute to the difference in size between formula_8 and formula_13, so this difference is equal to the number of augmenting paths for formula_8 in formula_9. Thus, whenever there exists a matching formula_13 larger than the current matching formula_8, there must also exist an augmenting path. If no augmenting path can be found, an algorithm may safely terminate, since in this case formula_8 must be optimal.
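The augmentation step formula_10 is literally a symmetric difference of edge sets; a tiny Python sketch of toggling an augmenting path (the vertex names are illustrative):

```python
def augment(matching, path):
    """Symmetric difference M ⊕ P: toggle each edge of the augmenting path."""
    return matching ^ path

# Matching {(u2, v2)}; augmenting path u1 - v2 - u2 - v3, with u1 and v3 free.
M = {("u2", "v2")}
P = {("u1", "v2"), ("u2", "v2"), ("u2", "v3")}
M = augment(M, P)
print(sorted(M))   # → [('u1', 'v2'), ('u2', 'v3')]
```

The matched edge (u2, v2) is removed and the two unmatched edges of the path are added, so the matching grows by exactly one edge, as in the argument above.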
An augmenting path in a matching problem is closely related to the augmenting paths arising in maximum flow problems, paths along which one may increase the amount of flow between the terminals of the flow. It is possible to transform the bipartite matching problem into a maximum flow instance, such that the alternating paths of the matching problem become augmenting paths of the flow problem. It suffices to insert two vertices, source and sink, and insert edges of unit capacity from the source to each vertex in formula_14, and from each vertex in formula_2 to the sink; and let edges from formula_14 to formula_2 have unit capacity. A generalization of the technique used in the Hopcroft–Karp algorithm to find maximum flow in an arbitrary network is known as Dinic's algorithm. Algorithm. The algorithm may be expressed in the following pseudocode.

Input: Bipartite graph formula_15
Output: Matching formula_16

    formula_17
    repeat
        formula_18 "maximal set of vertex-disjoint shortest augmenting paths"
        formula_19
    until formula_20

In more detail, let formula_14 and formula_2 be the two sets in the bipartition of formula_21, and let the matching from formula_14 to formula_2 at any time be represented as the set formula_8. The algorithm is run in phases. Each phase consists of a breadth-first search that partitions the vertices of the graph into layers, starting from the free vertices of formula_14, followed by a depth-first search through this layered structure that finds a maximal set of vertex-disjoint shortest augmenting paths; the paths found are then used to enlarge formula_8. The algorithm terminates when no more augmenting paths are found in the breadth-first search part of one of the phases. Analysis. Each phase consists of a single breadth-first search and a single depth-first search. Thus, a single phase may be implemented in formula_25 time. Therefore, the first formula_26 phases, in a graph with formula_27 vertices and formula_28 edges, take time formula_0. Each phase increases the length of the shortest augmenting path by at least one: the phase finds a maximal set of augmenting paths of the given length, so any remaining augmenting path must be longer.
Therefore, once the initial formula_26 phases of the algorithm are complete, the shortest remaining augmenting path has at least formula_26 edges in it. However, the symmetric difference of the eventual optimal matching and of the partial matching "M" found by the initial phases forms a collection of vertex-disjoint augmenting paths and alternating cycles. If each of the paths in this collection has length at least formula_26, there can be at most formula_26 paths in the collection, and the size of the optimal matching can differ from the size of formula_8 by at most formula_26 edges. Since each phase of the algorithm increases the size of the matching by at least one, there can be at most formula_26 additional phases before the algorithm terminates. Since the algorithm performs a total of at most formula_29 phases, it takes a total time of formula_0 in the worst case. In many instances, however, the algorithm may be even faster than this worst-case analysis indicates. For instance, in the average case for sparse bipartite random graphs, it has been shown (improving a previous result) that with high probability all non-optimal matchings have augmenting paths of logarithmic length. As a consequence, for these graphs, the Hopcroft–Karp algorithm takes formula_30 phases and formula_31 total time. Comparison with other bipartite matching algorithms. For sparse graphs, the Hopcroft–Karp algorithm continues to have the best known worst-case performance, but for dense graphs (formula_32) a more recent algorithm achieves a slightly better time bound, formula_33. This algorithm is based on using a push-relabel maximum flow algorithm and then, when the matching created by this algorithm becomes close to optimum, switching to the Hopcroft–Karp method. Several authors have performed experimental comparisons of bipartite matching algorithms.
Their results in general tend to show that the Hopcroft–Karp method is not as good in practice as it is in theory: it is outperformed both by simpler breadth-first and depth-first strategies for finding augmenting paths, and by push-relabel techniques. Non-bipartite graphs. The same idea of finding a maximal set of shortest augmenting paths works also for finding maximum cardinality matchings in non-bipartite graphs, and for the same reasons the algorithms based on this idea take formula_6 phases. However, for non-bipartite graphs, the task of finding the augmenting paths within each phase is more difficult. Building on the work of several slower predecessors, Micali and Vazirani showed how to implement a phase in linear time, resulting in a non-bipartite matching algorithm with the same time bound as the Hopcroft–Karp algorithm for bipartite graphs. The Micali–Vazirani technique is complex, and its authors did not provide full proofs of their results; subsequently, a "clear exposition" was published, and alternative methods were described by other authors. In 2012, Vazirani offered a new simplified proof of the Micali–Vazirani algorithm. Pseudocode.
where U and V are the left and right sides of the bipartite graph and NIL is a special null vertex

function BFS() is
    for each u in U do
        if Pair_U[u] = NIL then
            Dist[u] := 0
            Enqueue(Q, u)
        else
            Dist[u] := ∞
    Dist[NIL] := ∞
    while Empty(Q) = false do
        u := Dequeue(Q)
        if Dist[u] < Dist[NIL] then
            for each v in Adj[u] do
                if Dist[Pair_V[v]] = ∞ then
                    Dist[Pair_V[v]] := Dist[u] + 1
                    Enqueue(Q, Pair_V[v])
    return Dist[NIL] ≠ ∞

function DFS(u) is
    if u ≠ NIL then
        for each v in Adj[u] do
            if Dist[Pair_V[v]] = Dist[u] + 1 then
                if DFS(Pair_V[v]) = true then
                    Pair_V[v] := u
                    Pair_U[u] := v
                    return true
        Dist[u] := ∞
        return false
    return true

function Hopcroft–Karp is
    for each u in U do
        Pair_U[u] := NIL
    for each v in V do
        Pair_V[v] := NIL
    matching := 0
    while BFS() = true do
        for each u in U do
            if Pair_U[u] = NIL then
                if DFS(u) = true then
                    matching := matching + 1
    return matching

Explanation. Let the vertices of our graph be partitioned into U and V, and consider a partial matching, as indicated by the Pair_U and Pair_V tables that contain the one vertex to which each vertex of U and of V is matched, or NIL for unmatched vertices. The key idea is to add two dummy vertices on each side of the graph: uDummy connected to all unmatched vertices in U and vDummy connected to all unmatched vertices in V. Now, if we run a breadth-first search (BFS) from uDummy to vDummy then we can get the paths of minimal length that connect currently unmatched vertices in U to currently unmatched vertices in V. Note that, as the graph is bipartite, these paths always alternate between vertices in U and vertices in V, and we require in our BFS that when going from V to U, we always select a matched edge. If we reach an unmatched vertex of V, then we end at vDummy and the search for paths in the BFS terminates.
To summarize, the BFS starts at unmatched vertices in U and goes to all their neighbors in V; if all are matched, then it goes back to the vertices in U to which all these vertices are matched (and which were not visited before), then it goes to all the neighbors of these vertices, and so on, until one of the vertices reached in V is unmatched. Observe in particular that BFS marks the unmatched nodes of U with distance 0, then increments the distance every time it comes back to U. This guarantees that the paths considered in the BFS are of minimal length to connect unmatched vertices of U to unmatched vertices of V while always going back from V to U on edges that are currently part of the matching. In particular, the special NIL vertex, which corresponds to vDummy, then gets assigned a finite distance, so the BFS function returns true iff some path has been found. If no path has been found, then there are no augmenting paths left and the matching is of maximum cardinality. If BFS returns true, then we can go ahead and update the pairing for vertices on the minimal-length paths found from U to V: we do so using a depth-first search (DFS). Note that each vertex in V on such a path, except for the last one, is currently matched. So we can explore with the DFS, making sure that the paths that we follow correspond to the distances computed in the BFS. We update along every such path by removing from the matching all edges of the path that are currently in the matching, and adding to the matching all edges of the path that are currently not in the matching: as this is an augmenting path (the first and last edges of the path were not part of the matching, and the path alternated between matched and unmatched edges), this increases the number of edges in the matching. This is the same as replacing the current matching by the symmetric difference between the current matching and the entire path. Note that the code ensures that all augmenting paths that we consider are vertex disjoint.
Indeed, after doing the symmetric difference for a path, none of its vertices can be considered again in the DFS, because Dist[Pair_V[v]] will no longer be equal to Dist[u] + 1 (it would be exactly Dist[u]). Also observe that the DFS does not visit the same vertex multiple times. This is thanks to the following lines:

    Dist[u] := ∞
    return false

When the DFS fails to find any shortest augmenting path from a vertex u, it marks u by setting Dist[u] to infinity, so that u is not visited again. One last observation is that we actually don't need uDummy: its role is simply to put all unmatched vertices of U in the queue when we start the BFS. As for vDummy, it is denoted as NIL in the pseudocode above. Notes. References.
[ { "math_id": 0, "text": "O(|E|\\sqrt{|V|})" }, { "math_id": 1, "text": "E" }, { "math_id": 2, "text": "V" }, { "math_id": 3, "text": "|E|=\\Omega(|V|)" }, { "math_id": 4, "text": "O(|V|^{2.5})" }, { "math_id": 5, "text": "O(|E|\\log |V|)" }, { "math_id": 6, "text": "O(\\sqrt{|V|})" }, { "math_id": 7, "text": "O(|V|)" }, { "math_id": 8, "text": "M" }, { "math_id": 9, "text": "P" }, { "math_id": 10, "text": "M \\oplus P" }, { "math_id": 11, "text": "|M| + 1" }, { "math_id": 12, "text": "M \\oplus M^*" }, { "math_id": 13, "text": "M^*" }, { "math_id": 14, "text": "U" }, { "math_id": 15, "text": "G(U \\cup V, E)" }, { "math_id": 16, "text": "M \\subseteq E" }, { "math_id": 17, "text": "M \\leftarrow \\empty" }, { "math_id": 18, "text": "\\mathcal P \\leftarrow \\{P_1, P_2, \\dots, P_k\\}" }, { "math_id": 19, "text": "M \\leftarrow M \\oplus (P_1 \\cup P_2 \\cup \\dots \\cup P_k)" }, { "math_id": 20, "text": "\\mathcal P = \\empty" }, { "math_id": 21, "text": "G" }, { "math_id": 22, "text": "k" }, { "math_id": 23, "text": "F" }, { "math_id": 24, "text": "v" }, { "math_id": 25, "text": "O(|E|)" }, { "math_id": 26, "text": "\\sqrt{|V|}" }, { "math_id": 27, "text": "|V|" }, { "math_id": 28, "text": "|E|" }, { "math_id": 29, "text": "2\\sqrt{|V|}" }, { "math_id": 30, "text": "O(\\log |V|)" }, { "math_id": 31, "text": "O(|E| \\log |V|)" }, { "math_id": 32, "text": "|E|=\\Omega(|V|^2)" }, { "math_id": 33, "text": "O\\left(|V|^{1.5}\\sqrt{\\frac{|E|}{\\log |V|}}\\right)" } ]
https://en.wikipedia.org/wiki?curid=5944391
59444
Energy level
Different states of quantum systems A quantum mechanical system or particle that is bound—that is, confined spatially—can only take on certain discrete values of energy, called energy levels. This contrasts with classical particles, which can have any amount of energy. The term is commonly used for the energy levels of the electrons in atoms, ions, or molecules, which are bound by the electric field of the nucleus, but can also refer to energy levels of nuclei or vibrational or rotational energy levels in molecules. The energy spectrum of a system with such discrete energy levels is said to be quantized. In chemistry and atomic physics, an electron shell, or principal energy level, may be thought of as the orbit of one or more electrons around an atom's nucleus. The closest shell to the nucleus is called the "1 shell" (also called "K shell"), followed by the "2 shell" (or "L shell"), then the "3 shell" (or "M shell"), and so on farther and farther from the nucleus. The shells correspond with the principal quantum numbers ("n" = 1, 2, 3, 4, ...) or are labeled alphabetically with letters used in the X-ray notation (K, L, M, N, ...). Each shell can contain only a fixed number of electrons: the first shell can hold up to two electrons, the second shell can hold up to eight (2 + 6) electrons, the third shell can hold up to 18 (2 + 6 + 10) electrons, and so on. The general formula is that the "n"th shell can in principle hold up to 2"n"² electrons. Since electrons are electrically attracted to the nucleus, an atom's electrons will generally occupy outer shells only if the inner shells have already been completely filled by other electrons. However, this is not a strict requirement: atoms may have two or even three incomplete outer shells. (See Madelung rule for more details.) For an explanation of why electrons exist in these shells, see electron configuration.
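The shell-capacity rule above can be checked with a small illustrative snippet (the function names are ours, chosen only for this sketch): each subshell with azimuthal quantum number ℓ holds 2(2ℓ + 1) electrons, and these capacities sum to 2"n"² per shell.

```python
def subshell_capacities(n):
    """Electron capacities of the subshells (s, p, d, ...) of the n-th shell.

    The subshell with azimuthal quantum number l holds 2 * (2l + 1) electrons.
    """
    return [2 * (2 * l + 1) for l in range(n)]

def shell_capacity(n):
    """Maximum number of electrons in the n-th shell: 2 * n**2."""
    return 2 * n ** 2

# The third (M) shell: 2 + 6 + 10 = 18 electrons.
print(subshell_capacities(3))  # [2, 6, 10]
print(shell_capacity(3))       # 18
```

The first four shells hold 2, 8, 18, and 32 electrons, matching the text.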
If the potential energy is set to zero at infinite distance from the atomic nucleus or molecule, the usual convention, then bound electron states have negative potential energy. If an atom, ion, or molecule is at the lowest possible energy level, it and its electrons are said to be in the "ground state". If it is at a higher energy level, it is said to be "excited", or any electrons that have higher energy than the ground state are "excited". An energy level is regarded as degenerate if there is more than one measurable quantum mechanical state associated with it. Explanation. Quantized energy levels result from the wave behavior of particles, which gives a relationship between a particle's energy and its wavelength. For a confined particle such as an electron in an atom, the wave functions that have well defined energies have the form of a standing wave. States having well-defined energies are called stationary states because they are the states that do not change in time. Informally, these states correspond to a whole number of wavelengths of the wavefunction along a closed path (a path that ends where it started), such as a circular orbit around an atom, where the number of wavelengths gives the type of atomic orbital (0 for s-orbitals, 1 for p-orbitals and so on). Elementary examples that show mathematically how energy levels come about are the particle in a box and the quantum harmonic oscillator. Any superposition (linear combination) of energy states is also a quantum state, but such states change with time and do not have well-defined energies. A measurement of the energy results in the collapse of the wavefunction, which results in a new state that consists of just a single energy state. Measurement of the possible energy levels of an object is called spectroscopy. History. The first evidence of quantization in atoms was the observation of spectral lines in light from the sun in the early 1800s by Joseph von Fraunhofer and William Hyde Wollaston. 
The notion of energy levels was proposed in 1913 by Danish physicist Niels Bohr in the Bohr theory of the atom. The modern quantum mechanical theory giving an explanation of these energy levels in terms of the Schrödinger equation was advanced by Erwin Schrödinger and Werner Heisenberg in 1926. Atoms. Intrinsic energy levels. In the formulas below for the energy of electrons at various levels in an atom, the zero point for energy is set when the electron in question has completely left the atom; i.e., when the electron's principal quantum number "n" = ∞. When the electron is bound to the atom with any smaller value of "n", the electron's energy is lower and is considered negative. Orbital state energy level: atom/ion with nucleus + one electron. Assume there is one electron in a given atomic orbital in a hydrogen-like atom (ion). The energy of its state is mainly determined by the electrostatic interaction of the (negative) electron with the (positive) nucleus. The energy levels of an electron around a nucleus are given by: formula_0 (typically between 1 eV and 10³ eV), where "R"∞ is the Rydberg constant, Z is the atomic number, n is the principal quantum number, "h" is the Planck constant, and "c" is the speed of light. For hydrogen-like atoms (ions) only, the Rydberg levels depend only on the principal quantum number n. This equation is obtained by combining the Rydberg formula for any hydrogen-like element (shown below) with "E" = "hν" = "hc" / "λ", assuming that the principal quantum number "n" above equals "n"1 in the Rydberg formula and "n"2 = ∞ (the principal quantum number of the energy level the electron descends from when emitting a photon). The Rydberg formula was derived from empirical spectroscopic emission data.
formula_1 An equivalent formula can be derived quantum mechanically from the time-independent Schrödinger equation with a kinetic energy Hamiltonian operator using a wave function as an eigenfunction to obtain the energy levels as eigenvalues, but the Rydberg constant would be replaced by other fundamental physics constants. Electron–electron interactions in atoms. If there is more than one electron around the atom, electron–electron interactions raise the energy level. These interactions are often neglected if the spatial overlap of the electron wavefunctions is low. For multi-electron atoms, interactions between electrons cause the preceding equation to be no longer accurate as stated simply with Z as the atomic number. A simple (though not complete) way to understand this is as a shielding effect, where the outer electrons see an effective nucleus of reduced charge, since the inner electrons are bound tightly to the nucleus and partially cancel its charge. This leads to an approximate correction where Z is substituted with an effective nuclear charge symbolized as "Z"eff that depends strongly on the principal quantum number. formula_2 In such cases, the orbital types (determined by the azimuthal quantum number ℓ) as well as their levels within the molecule affect "Z"eff and therefore also affect the various atomic electron energy levels. The Aufbau principle of filling an atom with electrons for an electron configuration takes these differing energy levels into account. For filling an atom with electrons in the ground state, the lowest energy levels are filled first, consistent with the Pauli exclusion principle, the Aufbau principle, and Hund's rule. Fine structure splitting. Fine structure arises from relativistic kinetic energy corrections, spin–orbit coupling (an electrodynamic interaction between the electron's spin and motion and the nucleus's electric field) and the Darwin term (contact term interaction of s shell electrons inside the nucleus).
These affect the levels by a typical order of magnitude of 10⁻³ eV. Hyperfine structure. This even finer structure is due to electron–nucleus spin–spin interaction, resulting in a change in the energy levels by a typical order of magnitude of 10⁻⁴ eV. Energy levels due to external fields. Zeeman effect. There is an interaction energy associated with the magnetic dipole moment, μ"L", arising from the electronic orbital angular momentum, "L", given by formula_3 with formula_4. Additionally, the magnetic moment arising from the electron spin must be taken into account. Due to relativistic effects (Dirac equation), there is a magnetic moment, μ"S", arising from the electron spin formula_5, with "g""S" the electron-spin g-factor (about 2), resulting in a total magnetic moment, μ, formula_6. The interaction energy therefore becomes formula_7. Molecules. Chemical bonds between atoms in a molecule form because they make the situation more stable for the involved atoms, which generally means the total energy of the involved atoms in the molecule is lower than if the atoms were not so bonded. As separate atoms approach each other to covalently bond, their orbitals affect each other's energy levels to form bonding and antibonding molecular orbitals. The energy level of the bonding orbitals is lower, and the energy level of the antibonding orbitals is higher. For the bond in the molecule to be stable, the covalent bonding electrons occupy the lower energy bonding orbital, which may be signified by such symbols as σ or π depending on the situation. Corresponding anti-bonding orbitals can be signified by adding an asterisk to get σ* or π* orbitals. A non-bonding orbital in a molecule is an orbital with electrons in outer shells which do not participate in bonding, and its energy level is the same as that of the constituent atom. Such orbitals can be designated as n orbitals. The electrons in an n orbital are typically lone pairs.
In polyatomic molecules, different vibrational and rotational energy levels are also involved. Roughly speaking, a molecular energy state (i.e., an eigenstate of the molecular Hamiltonian) is the sum of the electronic, vibrational, rotational, nuclear, and translational components, such that: formula_8 where "E"electronic is an eigenvalue of the electronic molecular Hamiltonian (the value of the potential energy surface) at the equilibrium geometry of the molecule. The molecular energy levels are labelled by the molecular term symbols. The specific energies of these components vary with the specific energy state and the substance. Energy level diagrams. There are various types of energy level diagrams for bonds between atoms in a molecule. Energy level transitions. Electrons in atoms and molecules can change (make "transitions" in) energy levels by emitting or absorbing a photon (of electromagnetic radiation), whose energy must be exactly equal to the energy difference between the two levels. Electrons can also be completely removed from a chemical species such as an atom, molecule, or ion. Complete removal of an electron from an atom can be a form of ionization, which is effectively moving the electron out to an orbital with an infinite principal quantum number, in effect so far away as to have practically no further effect on the remaining atom (ion). For various types of atoms, there are 1st, 2nd, 3rd, etc. ionization energies for removing the 1st, then the 2nd, then the 3rd, etc. of the highest energy electrons, respectively, from the atom originally in the ground state. Corresponding amounts of energy can also be released, sometimes in the form of photon energy, when electrons are added to positively charged ions or sometimes atoms. Molecules can also undergo transitions in their vibrational or rotational energy levels. Energy level transitions can also be nonradiative, meaning emission or absorption of a photon is not involved.
An atom, ion, or molecule in the ground state can be excited to a higher energy level by absorbing a photon whose energy is equal to the energy difference between the levels. Conversely, an excited species can go to a lower energy level by spontaneously emitting a photon whose energy is equal to the energy difference. A photon's energy is equal to the Planck constant ("h") times its frequency (f) and thus is proportional to its frequency, or inversely to its wavelength (λ). Δ"E" = "hf" = "hc" / "λ", since "c", the speed of light, equals "fλ". Correspondingly, many kinds of spectroscopy are based on detecting the frequency or wavelength of the emitted or absorbed photons to provide information on the material analyzed, including information on the energy levels and electronic structure of materials obtained by analyzing the spectrum. An asterisk is commonly used to designate an excited state. An electron transition in a molecule's bond from a ground state to an excited state may have a designation such as σ → σ*, π → π*, or n → π* meaning excitation of an electron from a σ bonding to a σ antibonding orbital, from a π bonding to a π antibonding orbital, or from an n non-bonding to a π antibonding orbital. Reverse electron transitions for all these types of excited molecules are also possible to return to their ground states, which can be designated as σ* → σ, π* → π, or π* → n. A transition in an energy level of an electron in a molecule may be combined with a vibrational transition and called a vibronic transition. A vibrational and rotational transition may be combined by rovibrational coupling. In rovibronic coupling, electron transitions are simultaneously combined with both vibrational and rotational transitions.
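To make the relation Δ"E" = "hc" / "λ" concrete, here is a small illustrative calculation; the helper names are ours, and the Bohr-model level energy −13.6 eV · Z²/n² is the hydrogen-like formula from the previous section:

```python
HC_EV_NM = 1239.842    # Planck constant times speed of light, in eV*nm
RYDBERG_EV = 13.6057   # hydrogen ionization energy from n = 1, in eV

def hydrogen_level(n, z=1):
    """Bohr-model energy of level n for a hydrogen-like ion, in eV."""
    return -RYDBERG_EV * z ** 2 / n ** 2

def photon_wavelength(delta_e):
    """Wavelength (nm) of a photon with energy delta_e (eV): lambda = hc / E."""
    return HC_EV_NM / delta_e

# n = 3 -> n = 2 transition in hydrogen (the red H-alpha Balmer line):
delta_e = hydrogen_level(3) - hydrogen_level(2)  # ~1.89 eV
print(photon_wavelength(delta_e))                # ~656 nm
```

The same two lines give any hydrogen series: for example, n = 2 → 1 yields a photon of about 10.2 eV, in the ultraviolet.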
Photons involved in transitions may have energy of various ranges in the electromagnetic spectrum, such as X-ray, ultraviolet, visible light, infrared, or microwave radiation, depending on the type of transition. In a very general way, energy level differences between electronic states are larger, differences between vibrational levels are intermediate, and differences between rotational levels are smaller, although there can be overlap. Translational energy levels are practically continuous and can be calculated as kinetic energy using classical mechanics. Higher temperature causes fluid atoms and molecules to move faster, increasing their translational energy, and thermally excites molecules to higher average amplitudes of vibrational and rotational modes (excites the molecules to higher internal energy levels). This means that as temperature rises, translational, vibrational, and rotational contributions to molecular heat capacity let molecules absorb heat and hold more internal energy. Conduction of heat typically occurs as molecules or atoms collide, transferring heat between each other. At even higher temperatures, electrons can be thermally excited to higher energy orbitals in atoms or molecules. A subsequent drop of an electron to a lower energy level can release a photon, causing a possibly coloured glow. An electron farther from the nucleus has higher potential energy than an electron closer to the nucleus, and thus is less bound to the nucleus, since its potential energy is negative and inversely dependent on its distance from the nucleus. Crystalline materials. Crystalline solids are found to have energy bands, instead of or in addition to energy levels. Electrons can take on any energy within an unfilled band. At first this appears to be an exception to the requirement for energy levels. However, as shown in band theory, energy bands are actually made up of many discrete energy levels which are too close together to resolve.
Within a band the number of levels is of the order of the number of atoms in the crystal, so although electrons are actually restricted to these energies, they appear to be able to take on a continuum of values. The important energy levels in a crystal are the top of the valence band, the bottom of the conduction band, the Fermi level, the vacuum level, and the energy levels of any defect states in the crystal. References.
[ { "math_id": 0, "text": "E_n = - h c R_{\\infty} \\frac{Z^2}{n^2}" }, { "math_id": 1, "text": "\\frac{1}{\\lambda} = RZ^2 \\left(\\frac{1}{n_1^2}-\\frac{1}{n_2^2}\\right)" }, { "math_id": 2, "text": "E_{n,\\ell} = - h c R_{\\infty} \\frac{{Z_{\\rm eff}}^2}{n^2}" }, { "math_id": 3, "text": "U = -\\boldsymbol{\\mu}_L\\cdot\\mathbf{B}" }, { "math_id": 4, "text": "-\\boldsymbol{\\mu}_L = \\dfrac{e\\hbar}{2m}\\mathbf{L} = \\mu_B\\mathbf{L}" }, { "math_id": 5, "text": "-\\boldsymbol{\\mu}_S = -\\mu_\\text{B} g_S \\mathbf{S}" }, { "math_id": 6, "text": "\\boldsymbol{\\mu} = \\boldsymbol{\\mu}_L + \\boldsymbol{\\mu}_S" }, { "math_id": 7, "text": "U_B = -\\boldsymbol{\\mu}\\cdot\\mathbf{B} = \\mu_\\text{B} B (M_L + g_S M_S)" }, { "math_id": 8, "text": "E = E_\\text{electronic} + E_\\text{vibrational} + E_\\text{rotational} + E_\\text{nuclear} + E_\\text{translational}" } ]
https://en.wikipedia.org/wiki?curid=59444
59448183
Convolutional sparse coding
Neural network coding model The convolutional sparse coding paradigm is an extension of the global sparse coding model, in which a redundant dictionary is modeled as a concatenation of circulant matrices. While the global sparsity constraint describes signal formula_0 as a linear combination of a few atoms in the redundant dictionary formula_1, usually expressed as formula_2 for a sparse vector formula_3, the alternative dictionary structure adopted by the convolutional sparse coding model allows the sparsity prior to be applied locally instead of globally: independent patches of formula_4 are generated by "local" dictionaries operating over stripes of formula_5. The local sparsity constraint allows stronger uniqueness and stability conditions than the global sparsity prior, and has been shown to be a versatile tool for inverse problems in fields such as image understanding and computer vision. Also, a recently proposed multi-layer extension of the model has shown conceptual benefits for more complex signal decompositions, as well as a tight connection to the convolutional neural network model, allowing a deeper understanding of how the latter operates. Overview. Given a signal of interest formula_0 and a redundant dictionary formula_1, the sparse coding problem consists of retrieving a sparse vector formula_3, denominated the sparse representation of formula_4, such that formula_6. Intuitively, this implies that formula_4 is expressed as a linear combination of a small number of elements in formula_7. The global sparsity prior has been shown to be useful in many ill-posed inverse problems such as image inpainting, super-resolution, and coding.
It has been of particular interest for image understanding and computer vision tasks involving natural images, allowing redundant dictionaries to be efficiently inferred. As an extension to the global sparsity constraint, recent pieces in the literature have revisited the model to reach a more profound understanding of its uniqueness and stability conditions. Interestingly, by imposing a local sparsity prior on formula_5, meaning that its independent patches can be interpreted as sparse vectors themselves, the structure in formula_7 can be understood as a "local" dictionary operating over each independent patch. This model extension is denominated convolutional sparse coding (CSC) and drastically reduces the burden of estimating signal representations while being characterized by stronger uniqueness and stability conditions. Furthermore, it allows formula_5 to be efficiently estimated via projected gradient descent algorithms such as orthogonal matching pursuit (OMP) and basis pursuit (BP), while performing in a local fashion. Besides its versatility in inverse problems, recent efforts have focused on the multi-layer version of the model and provided evidence of its reliability for recovering multiple underlying representations. Moreover, a tight connection between such a model and the well-established convolutional neural network model (CNN) was revealed, providing a new tool for a more rigorous understanding of its theoretical conditions. The convolutional sparse coding model provides a very efficient set of tools to solve a wide range of inverse problems, including image denoising, image inpainting, and image super-resolution. By imposing local sparsity constraints, it allows the global coding problem to be tackled efficiently by iteratively estimating disjoint patches and assembling them into a global signal.
Furthermore, by adopting a multi-layer sparse model, which results from imposing the sparsity constraint on the signal's inherent representations themselves, the resulting "layered" pursuit algorithm keeps the strong uniqueness and stability conditions from the single-layer model. This extension also provides some interesting notions about the relation between its sparsity prior and the forward pass of the convolutional neural network, which helps explain how the theoretical benefits of the CSC model can give a strong mathematical meaning to the CNN structure. Sparse coding paradigm. Basic concepts and models are presented to explain in detail the convolutional sparse representation framework. On the grounds that the sparsity constraint has been proposed under different models, a short description of them is presented to show its evolution up to the model of interest. Also included are the concepts of mutual coherence and the restricted isometry property, used to establish uniqueness and stability guarantees. Global sparse coding model. Let signal formula_8 be expressed as a linear combination of a small number of atoms from a given dictionary formula_9. Alternatively, the signal can be expressed as formula_2, where formula_10 corresponds to the sparse representation of formula_4, which selects the atoms to combine and their weights. Subsequently, given formula_7, the task of recovering formula_5 from either the noise-free signal itself or an observation is denominated sparse coding. Considering the noise-free scenario, the coding problem is formulated as follows: formula_11 The effect of the formula_12 norm is to favor solutions with as many zero elements as possible. Furthermore, given an observation affected by bounded energy noise: formula_13, the pursuit problem is reformulated as: formula_14 Stability and uniqueness guarantees for the global sparse model.
Let the "spark" of formula_15 be defined as the smallest number of columns that are linearly dependent: formula_16 Then, from the triangle inequality, the sparsest vector formula_5 satisfies: formula_17. Although the spark provides an upper bound, it is infeasible to compute in practical scenarios. Instead, let the mutual coherence be a measure of similarity between atoms in formula_7. Assuming formula_18-norm unit atoms, the mutual coherence of formula_7 is defined as: formula_19, where formula_20 are atoms. Based on this metric, it can be proven that the true sparse representation formula_21 can be recovered if and only if formula_22. Similarly, under the presence of noise, an upper bound for the distance between the true sparse representation formula_23 and its estimation formula_24 can be established via the restricted isometry property (RIP). A k-RIP matrix formula_7 with constant formula_25 corresponds to: formula_26, where formula_27 is the smallest number that satisfies the inequality for every formula_28. Then, assuming formula_29, it is guaranteed that formula_30. Solving such a general pursuit problem is a hard task if no structure is imposed on dictionary formula_7. This implies learning large, highly overcomplete representations, which is extremely expensive. Assuming such a burden has been met and a representative dictionary has been obtained for a given signal formula_4, typically based on prior information, formula_21 can be estimated via several pursuit algorithms. Pursuit algorithms for the global sparse model. Two basic methods for solving the global sparse coding problem are orthogonal matching pursuit (OMP) and basis pursuit (BP). OMP is a greedy algorithm that iteratively selects the atom best correlated with the residual between formula_4 and a current estimation, followed by a projection onto a subset of pre-selected atoms.
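The greedy OMP loop just described can be sketched in a few lines. For simplicity, this illustrative sketch assumes a dictionary with orthonormal atoms, so the least-squares projection step collapses to independent inner products (the function and variable names are ours, not from any library):

```python
def omp(D, x, k):
    """Greedy OMP sketch for a dictionary with orthonormal atoms.

    D: list of unit-norm, mutually orthogonal atoms (each a list of floats),
    x: signal, k: target sparsity. With orthonormal atoms, projecting onto
    the selected atoms reduces to subtracting independent inner products.
    """
    n = len(x)
    coeffs = [0.0] * len(D)
    residual = list(x)
    for _ in range(k):
        # Select the atom best correlated with the current residual.
        scores = [abs(sum(a[i] * residual[i] for i in range(n))) for a in D]
        j = max(range(len(D)), key=lambda t: scores[t])
        c = sum(D[j][i] * residual[i] for i in range(n))
        coeffs[j] += c
        # Subtract the projection to update the residual.
        residual = [residual[i] - c * D[j][i] for i in range(n)]
    return coeffs, residual

# x = 3*e0 + 2*e2 in R^3, with the canonical (orthonormal) dictionary:
D = [[1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, 1.0]]
coeffs, residual = omp(D, [3.0, 0.0, 2.0], k=2)
print(coeffs)  # [3.0, 0.0, 2.0]
```

For a general (non-orthonormal) dictionary, the projection step is a least-squares solve over all atoms selected so far, which this sketch omits.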
On the other hand, basis pursuit is a more sophisticated approach that replaces the original coding problem by a linear programming problem. Based on these algorithms, the global sparse coding model provides considerably loose bounds for the uniqueness and stability of formula_24. To overcome this, additional priors are imposed over formula_7 to guarantee tighter bounds and uniqueness conditions. The reader is referred to (, section 2) for details regarding these properties. Convolutional sparse coding model. A local prior is adopted such that each overlapping section of formula_5 is sparse. Let formula_31 be constructed from shifted versions of a local dictionary formula_32. Then, formula_4 is formed by products between formula_33 and local patches of formula_34. From the latter, formula_5 can be re-expressed by formula_35 disjoint sparse vectors formula_36: formula_37. Similarly, let formula_38 be a set of formula_39 consecutive vectors formula_40. Then, each disjoint segment in formula_4 is expressed as: formula_41, where operator formula_42 extracts overlapping patches of size formula_43 starting at index formula_44. Thus, formula_45 contains only formula_46 nonzero columns. Hence, by introducing operator formula_47 which exclusively preserves them: formula_48 where formula_49 is known as the stripe dictionary, which is independent of formula_44, and formula_50 is denominated the i-th stripe. So, formula_4 corresponds to a patch aggregation or convolutional interpretation: formula_51, where formula_20 corresponds to the i-th atom from the local dictionary formula_52 and formula_53 is constructed by elements of patches formula_54: formula_55. Given the new dictionary structure, let the formula_56 pseudo-norm be defined as: formula_57. Then, for the noise-free and noise-corrupted scenarios, the problem can be respectively reformulated as: formula_58 Stability and uniqueness guarantees for the convolutional sparse model.
For the local approach, the mutual coherence of formula_7 satisfies: formula_59 So, if a solution obeys formula_60, then it is the sparsest solution to the formula_56 problem. Thus, under the local formulation, the same number of non-zeros is permitted for each stripe instead of the full vector. Similar to the global model, the CSC is solved via OMP and BP methods, the latter contemplating the use of the iterative shrinkage thresholding algorithm (ISTA) for splitting the pursuit into smaller problems. Based on the formula_56 pseudo-norm, if a solution formula_5 exists satisfying formula_61, then both methods are guaranteed to recover it. Moreover, the local model guarantees recovery independently of the signal dimension, as opposed to the formula_12 prior. Stability conditions for OMP and BP are also guaranteed if the exact recovery condition (ERC) is met for a support formula_62 with a constant formula_63. The ERC is defined as: formula_64, where formula_65 denotes the pseudo-inverse. Algorithm 1 shows the global pursuit method based on ISTA.

Algorithm 1: 1D CSC via local iterative soft-thresholding.
"Input:" formula_52: local dictionary, formula_66: observation, formula_67: regularization parameter, formula_68: step size for codice_0, codice_1: tolerance factor, codice_2: maximum number of iterations.
formula_69 (Initialize disjoint patches.)
formula_70 (Initialize residual patches.)
formula_71
Repeat:
    formula_72 (Coding along disjoint patches.)
    formula_73
    formula_74 (Patch aggregation.)
    formula_75 (Update residuals.)
    formula_76
Until formula_77 codice_1 or formula_78 codice_2.

Multi-layered convolutional sparse coding model. By imposing the sparsity prior on the inherent structure of formula_4, strong conditions for a unique representation and feasible methods for estimating it are granted.
Similarly, such a constraint can be applied to its representation itself, generating a cascade of sparse representations: each code is defined by a few atoms of a given set of convolutional dictionaries. Based on these criteria, yet another extension denominated multi-layer convolutional sparse coding (ML-CSC) is proposed. A set of analytical dictionaries formula_79 can be efficiently designed, where sparse representations at each layer formula_80 are guaranteed by imposing the sparsity prior over the dictionaries themselves. In other words, by considering dictionaries to be stride convolutional matrices, i.e., atoms of the local dictionaries shift by formula_81 elements instead of a single one, where formula_81 corresponds to the number of channels in the previous layer, it is guaranteed that the formula_82 norm of the representations along layers is bounded. For example, given the dictionaries formula_83, the signal is modeled as formula_84, where formula_85 is its sparse code, and formula_86 is the sparse code of formula_85. Then, the estimation of each representation is formulated as an optimization problem for both noise-free and noise-corrupted scenarios, respectively. Assuming formula_87: formula_88 In what follows, theoretical guarantees for the uniqueness and stability of this extended model are described. Theorem 1: (Uniqueness of sparse representations) Suppose signal formula_4 satisfies the ML-CSC model for a set of convolutional dictionaries formula_89 with mutual coherence formula_90. If the true sparse representations satisfy formula_91, then a solution to the problem formula_92 will be its unique solution if the thresholds are chosen to satisfy: formula_93. Theorem 2: (Global stability of the noise-corrupted scenario) Suppose signal formula_4 satisfies the ML-CSC model for a set of convolutional dictionaries formula_89 and is contaminated with noise formula_94, where formula_95, resulting in formula_96.
If formula_97 and formula_98, then the estimated representations formula_92 satisfy: formula_100. Projection-based algorithms. A simple approach to solving the ML-CSC problem, via either the formula_12 or the formula_101 norm, is to compute inner products between formula_4 and the dictionary atoms to identify the most representative ones. Such a projection is described as: formula_102 which has closed-form solutions via the hard-thresholding operator formula_103 and the soft-thresholding operator formula_104, respectively. If a nonnegative constraint is also imposed, the problem can be expressed via the formula_101 norm as: formula_105 whose closed-form solution corresponds to the soft nonnegative thresholding operator formula_106, where formula_107. Guarantees for the layered soft-thresholding approach are included in the Appendix (Section 6.2). Theorem 3: (Stable recovery of the multi-layered soft-thresholding algorithm) Consider a signal formula_4 that satisfies the ML-CSC model for a set of convolutional dictionaries formula_108 with mutual coherences formula_109, and that is contaminated with noise formula_94, where formula_110, resulting in formula_96. Denote by formula_111 and formula_112 the lowest and highest entries in formula_113, respectively. Let formula_114 be the estimated sparse representations obtained with thresholds formula_115. If formula_116 and formula_117 is chosen according to: formula_118 Then, formula_119 has the same support as formula_120, and formula_121, for formula_122 Connections to convolutional neural networks. Recall the forward pass of the convolutional neural network model, used in both the training and inference steps. Let formula_123 be its input and formula_124 the filters at layer formula_125, which are followed by the rectified linear unit (ReLU) formula_126, for bias formula_127. 
Based on this elementary block, taking formula_128 as an example, the CNN output can be expressed as: formula_129 Finally, comparing the CNN algorithm and the layered thresholding approach under the nonnegative constraint, it is straightforward to show that both are equivalent: formula_130 As explained in what follows, this naive approach to solving the coding problem is a particular case of a more stable projected gradient descent algorithm for the ML-CSC model. Equipped with the stability conditions of both approaches, one gains a clearer understanding of the class of signals a CNN can recover, the noise conditions under which an estimate can be accurately attained, and how the network's structure can be modified to improve its theoretical guarantees. The reader is referred to (, section 5) for details regarding their connection. Pursuit algorithms for the multi-layer CSC model. A crucial limitation of the forward pass is that it is unable to recover the unique solution of the deep coding problem (DCP), whose existence has been demonstrated. So, instead of using a thresholding approach at each layer, a full pursuit method is adopted, termed layered basis pursuit (LBP). Considering the projection onto the formula_101 ball, the following problem is proposed: formula_131 where each layer is solved as an independent CSC problem, and formula_132 is proportional to the noise level at each layer. Among the methods for solving the layered coding problem, ISTA is an efficient decoupling alternative. In what follows, a short summary of the guarantees for the LBP is established. Theorem 4: (Recovery guarantee) Consider a signal formula_4 characterized by a set of sparse vectors formula_99, convolutional dictionaries formula_89, and their corresponding mutual coherences formula_133. If formula_134, then the LBP algorithm is guaranteed to recover the sparse representations. 
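The equivalence noted above between the layered soft nonnegative thresholding and the CNN forward pass can be checked numerically. The following NumPy sketch uses random dense matrices as stand-ins for the (convolutional) dictionaries, and takes the biases as the negated thresholds (sign conventions for the bias vary); all names and sizes are illustrative:

```python
import numpy as np

def soft_nonneg(z, beta):
    # Nonnegative soft-thresholding: S^+_beta(z) = max(z - beta, 0)
    return np.maximum(z - beta, 0.0)

def relu(z):
    return np.maximum(z, 0.0)

rng = np.random.default_rng(0)
x = rng.standard_normal(20)
D1 = rng.standard_normal((20, 40))   # dense stand-ins for the
D2 = rng.standard_normal((40, 60))   # convolutional dictionaries
beta1, beta2 = 0.3, 0.1

# Two-layer layered thresholding of the ML-CSC model ...
gamma2 = soft_nonneg(D2.T @ soft_nonneg(D1.T @ x, beta1), beta2)

# ... coincides with a CNN forward pass whose filters are the transposed
# dictionaries and whose biases are the negated thresholds.
z2 = relu(D2.T @ relu(D1.T @ x - beta1) - beta2)

assert np.allclose(gamma2, z2)
```

The identity is exact, since max(z - beta, 0) = ReLU(z + b) for b = -beta applied elementwise at every layer.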
Theorem 5: (Stability in the presence of noise) Consider the contaminated signal formula_135, where formula_136 and formula_4 is characterized by a set of sparse vectors formula_99 and convolutional dictionaries formula_89. Let formula_137 be the solutions obtained via the LBP algorithm with parameters formula_138. If formula_139 and formula_140, then: (i) the support of the solution formula_141 is contained in that of formula_120, (ii) formula_142, and (iii) any entry greater in absolute value than formula_143 is guaranteed to be recovered. Applications of the convolutional sparse coding model: image inpainting. As a practical example, an efficient image inpainting method for color images via the CSC model is shown. Consider the three-channel dictionary formula_144, where formula_145 denotes the formula_81-th atom at channel formula_68. Signal formula_4 is represented by a single cross-channel sparse representation formula_5, with stripes denoted as formula_146. Given an observation formula_147, where randomly chosen channels at unknown pixel locations are fixed to zero, in a similar way to impulse noise, the problem is formulated as: formula_148 By means of ADMM, the cost function is decoupled into simpler sub-problems, allowing an efficient estimation of formula_5. Algorithm 2 describes the procedure, where formula_149 is the DFT representation of formula_150, the convolutional matrix for the term formula_151. Likewise, formula_152 and formula_153 correspond to the DFT representations of formula_154 and formula_155, respectively, formula_156 corresponds to the soft-thresholding function with argument formula_157, and the formula_158 norm is defined as the formula_18 norm along the channel dimension formula_68 followed by the formula_101 norm along the spatial dimension formula_81. The reader is referred to (, Section II) for details on the ADMM implementation and the dictionary learning procedure. 
Algorithm 2: Color image inpainting via the convolutional sparse coding model. "Input:" formula_159: DFT of convolutional matrices formula_160, formula_161: Color observation, formula_67: Regularization parameter, formula_162: step sizes for codice_5, codice_1: tolerance factor, codice_2: maximum number of iterations. formula_163 Repeat formula_164 formula_165 formula_166 formula_167 Until formula_168 codice_1 or formula_169 codice_2. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
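Both Algorithm 1 and Algorithm 2 reduce to repeated soft-thresholded gradient steps. A minimal, non-convolutional ISTA sketch conveys the core update; the dictionary, sparsity pattern, and parameters below are hypothetical toy choices, not the algorithms above in full:

```python
import numpy as np

def soft_threshold(z, t):
    """Elementwise soft-thresholding S_t(z) = sign(z) * max(|z| - t, 0)."""
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def ista(D, y, lam, max_iter=3000, tol=1e-9):
    """Minimize 0.5*||D g - y||_2^2 + lam*||g||_1 by iterative soft-thresholding."""
    c = np.linalg.norm(D, 2) ** 2          # step-size constant >= ||D||_2^2
    gamma = np.zeros(D.shape[1])
    for _ in range(max_iter):
        grad = D.T @ (D @ gamma - y)       # gradient of the quadratic term
        nxt = soft_threshold(gamma - grad / c, lam / c)
        if np.linalg.norm(nxt - gamma) < tol:
            return nxt
        gamma = nxt
    return gamma

# Recover a sparse code from a noiseless toy measurement.
rng = np.random.default_rng(1)
D = rng.standard_normal((30, 50))
D /= np.linalg.norm(D, axis=0)             # unit-norm atoms
g_true = np.zeros(50)
g_true[[3, 17]] = [1.5, -2.0]
y = D @ g_true
g_hat = ista(D, y, lam=0.02)
```

The convolutional variants replace the dense multiplications with patch extraction/aggregation (Algorithm 1) or FFT-domain products (Algorithm 2), but the thresholded descent step is the same.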
[ { "math_id": 0, "text": "\\mathbf{x}\\in \\mathbb{R}^{N}" }, { "math_id": 1, "text": "\\mathbf{D}\\in\\mathbb{R}^{N\\times M}, M\\gg N" }, { "math_id": 2, "text": "\\mathbf{x}=\\mathbf{D}\\mathbf{\\Gamma}" }, { "math_id": 3, "text": "\\mathbf{\\Gamma}\\in \\mathbb{R}^{M}" }, { "math_id": 4, "text": "\\mathbf{x}" }, { "math_id": 5, "text": "\\mathbf{\\Gamma}" }, { "math_id": 6, "text": "\\mathbf{x}= \\mathbf{D}\\mathbf{\\Gamma}" }, { "math_id": 7, "text": "\\mathbf{D}" }, { "math_id": 8, "text": "\\mathbf{x}\\in \\mathbb{R}^N" }, { "math_id": 9, "text": "\\mathbf{D}\\in \\mathbb{R}^{N \\times M}, M>N" }, { "math_id": 10, "text": "\\mathbf{\\Gamma}\\in \\mathbb{R}^M" }, { "math_id": 11, "text": "\\begin{aligned}\n \\hat{\\mathbf{\\Gamma}}_{\\text{ideal}}&= \\underset{\\mathbf{\\Gamma}}{\\text{argmin}}\\; \\| \\mathbf{\\Gamma}\\|_{0}\\; \\text{s.t.}\\; \\mathbf{D}\\mathbf{\\Gamma}=\\mathbf{x}.\\end{aligned}" }, { "math_id": 12, "text": "\\ell_{0}" }, { "math_id": 13, "text": "\\mathbf{Y}= \\mathbf{D}\\mathbf{\\Gamma}+ \\mathbf{E},\\|\\mathbf{E}\\|_{2}<\\varepsilon" }, { "math_id": 14, "text": "\\begin{aligned}\n \\hat{\\mathbf{\\Gamma}}_{\\text{noise}}&= \\underset{\\mathbf{\\Gamma}}{\\text{argmin}}\\; \\| \\mathbf{\\Gamma}\\|_{0}\\; \\text{ s.t. 
} \\|\\mathbf{D}\\mathbf{\\Gamma}-\\mathbf{Y}\\|_{2}<\\varepsilon.\\end{aligned}" }, { "math_id": 15, "text": "\\mathbf{\\mathbf{D}}" }, { "math_id": 16, "text": "\\begin{aligned}\n \\sigma(\\mathbf{D})=\\underset{\\mathbf{\\Gamma}}{\\text{min}} \\quad \\|\\mathbf{\\Gamma}\\|_{0} \\quad \\text{s.t.}\\quad \\mathbf{D \\Gamma}=0, \\quad \\mathbf{\\Gamma}\\neq 0.\\end{aligned}" }, { "math_id": 17, "text": "\\|\\mathbf{\\Gamma}\\|_{0}<\\frac{\\sigma(\\mathbf{D})}{2}" }, { "math_id": 18, "text": "\\ell_{2}" }, { "math_id": 19, "text": "\\mu(\\mathbf{D})= \\max_{i\\neq j} \\|\\mathbf{d_i^T}\\mathbf{d_j}\\|_2" }, { "math_id": 20, "text": "\\mathbf{d}_{i}" }, { "math_id": 21, "text": "\\mathbf{\\Gamma}^{*}" }, { "math_id": 22, "text": "\\|\\mathbf{\\Gamma}^{*}\\|_0 < \\frac{1}{2}\\big(1+\\frac{1}{\\mu(\\mathbf{D})} \\big)" }, { "math_id": 23, "text": "\\mathbf{\\Gamma^{*}}" }, { "math_id": 24, "text": "\\hat{\\mathbf{\\Gamma}}" }, { "math_id": 25, "text": "\\delta_{k}" }, { "math_id": 26, "text": "(1-\\delta_k)\\|\\mathbf{\\Gamma}\\|_2^2 \\leq \\|\\mathbf{D\\Gamma}\\|_2^2 \\leq (1+\\delta_k)\\|\\mathbf{\\Gamma}\\|_2^2" }, { "math_id": 27, "text": "\\delta_k" }, { "math_id": 28, "text": "\\|\\mathbf{\\Gamma}\\|_{0}=k" }, { "math_id": 29, "text": "\\|\\mathbf{\\Gamma}\\|_0<\\frac{1}{2}\\big(1+\\frac{1}{\\mu(\\mathbf{D})} \\big)" }, { "math_id": 30, "text": "\\|\\mathbf{\\hat{\\Gamma}-\\Gamma^{*}}\\|_{2}^{2}\\leq \\frac{4\\varepsilon^2}{1-\\mu(\\mathbf{D})(2\\|\\mathbf{\\Gamma}\\|_0-1)}" }, { "math_id": 31, "text": "\\mathbf{D}\\in \\mathbb{R}^{N \\times Nm}" }, { "math_id": 32, "text": "\\mathbf{D_{L}}\\in\\mathbb{R}^{n \\times m}, m\\ll M" }, { "math_id": 33, "text": "\\mathbf{D_{L}}" }, { "math_id": 34, "text": "\\mathbf{\\Gamma}\\in\\mathbb{R}^{mN}" }, { "math_id": 35, "text": "N" }, { "math_id": 36, "text": "\\alpha_{i}\\in \\mathbb{R}^{m}" }, { "math_id": 37, "text": "\\mathbf{\\Gamma}\\in \\{\\alpha_{1},\\alpha_{2},\\dots, \\alpha_{N}\\}^{T}" }, { "math_id": 38, 
"text": "\\gamma" }, { "math_id": 39, "text": "(2n-1)" }, { "math_id": 40, "text": "\\alpha_{i}" }, { "math_id": 41, "text": "\\mathbf{x}_{i}=\\mathbf{R}_{i}\\mathbf{D}\\mathbf{\\Gamma}" }, { "math_id": 42, "text": "\\mathbf{R}_{i}\\in \\mathbb{R}^{n\\times N}" }, { "math_id": 43, "text": "n" }, { "math_id": 44, "text": "i" }, { "math_id": 45, "text": "\\mathbf{R}_{i}\\mathbf{D}" }, { "math_id": 46, "text": "(2n-1)m" }, { "math_id": 47, "text": "\\mathbf{S}_{i}\\in \\mathbf{R}^{(2n-1)m \\times Nm}" }, { "math_id": 48, "text": "\\begin{aligned}\n \\mathbf{x}_{i}&= \\underset{\\Omega}{\\underbrace{\\mathbf{R}_{i}\\mathbf{D}\\mathbf{S}_{i}^{T}}}\\underset{\\gamma_{i}}{\\underbrace{(S_{i}\\mathbf{\\Gamma})}},\\end{aligned}" }, { "math_id": 49, "text": "\\Omega" }, { "math_id": 50, "text": "\\gamma_{i}" }, { "math_id": 51, "text": "\\begin{aligned}\n \\mathbf{x}&= \\sum_{i=1}^{N}\\mathbf{R}_{i}^{T}\\mathbf{D}_{L}\\alpha_{i}= \\sum_{i=1}^{m}\\mathbf{d}_{i}\\ast \\mathbf{z_{i}}.\\end{aligned}" }, { "math_id": 52, "text": "\\mathbf{D}_{L}" }, { "math_id": 53, "text": "\\mathbf{z_{i}}" }, { "math_id": 54, "text": "\\alpha" }, { "math_id": 55, "text": "\\mathbf{z_{i}}\\triangleq (\\alpha_{1,i}, \\alpha_{2,i},\\dots, \\alpha_{N,i})^{T}" }, { "math_id": 56, "text": "\\ell_{0,\\infty}" }, { "math_id": 57, "text": "\\|\\mathbf{\\Gamma}\\|_{0,\\infty}\\triangleq \\underset{i}{\\text{ max}}\\; \\|\\gamma_{i}\\|_{0}" }, { "math_id": 58, "text": "\\begin{aligned}\n \\hat{\\mathbf{\\Gamma}}_{\\text{ideal}}&= \\underset{\\mathbf{\\Gamma}}{\\text{argmin}}\\; \\| \\mathbf{\\Gamma}\\|_{0,\\infty}\\; \\text{s.t.}\\; \\mathbf{D}\\mathbf{\\Gamma}=\\mathbf{x},\\\\\n \\hat{\\mathbf{\\Gamma}}_{\\text{noise}}&= \\underset{\\mathbf{\\Gamma}}{\\text{argmin}}\\; \\| \\mathbf{\\Gamma}\\|_{0,\\infty}\\; \\text{s.t.}\\; \\|\\mathbf{Y}-\\mathbf{D}\\mathbf{\\Gamma}\\|_{2}<\\varepsilon.\\end{aligned}" }, { "math_id": 59, "text": "\\mu(\\mathbf{D})\\geq \\big(\\frac{m-1}{m(2n-1)-1}\\big)^{1/2}." 
}, { "math_id": 60, "text": "\\|\\mathbf{\\Gamma}\\|_{0,\\infty}< \\frac{1}{2}\\big(1+\\frac{1}{\\mu(\\mathbf{D})}\\big)" }, { "math_id": 61, "text": "\\|\\mathbf{\\Gamma}\\|_{0,\\infty}<\\frac{1}{2}\\big(1+\\frac{1}{\\mu(\\mathbf{D})} \\big)" }, { "math_id": 62, "text": "\\mathcal{T}" }, { "math_id": 63, "text": "\\theta" }, { "math_id": 64, "text": "\\theta= 1-\\underset{i\\notin \\mathcal{T}}{\\text{max}} \\|\\mathbf{D}_{\\mathcal{T}}^{\\dagger}\\mathbf{d}_{i}\\|_{1}>0" }, { "math_id": 65, "text": "\\dagger" }, { "math_id": 66, "text": "\\mathbf{y}" }, { "math_id": 67, "text": "\\lambda" }, { "math_id": 68, "text": "c" }, { "math_id": 69, "text": "\\{\\boldsymbol{\\alpha}_{i}\\}^{(0)}\\gets \\{\\mathbf{0}_{N\\times 1}\\}" }, { "math_id": 70, "text": "\\{\\mathbf{r}_{i}\\}^{(0)}\\gets \\{\\mathbf{R}_{i}\\mathbf{y}\\}" }, { "math_id": 71, "text": "k\\gets 0" }, { "math_id": 72, "text": "\\{\\boldsymbol{\\alpha}_i\\}^{(k)}\\gets \\mathcal{S}_{\\frac{\\lambda}{c}}\\big( \\{\\boldsymbol{\\alpha}_i\\}^{(k-1)}+\\frac{1}{c}\\{\\mathbf{D}_{L}^{T}\\mathbf{r}_i\\}^{(k-1)} \\big)" }, { "math_id": 73, "text": "\\boldsymbol{\\alpha}_i" }, { "math_id": 74, "text": "\\hat{\\mathbf{x}}^{(k)}\\gets \\sum_{i}\\mathbf{R}_{i}^{T}\\mathbf{D}_{L}\\boldsymbol{\\alpha}_{i}^{(k)}" }, { "math_id": 75, "text": "\\{\\mathbf{r}_{i}\\}^{(k)}\\gets \\mathbf{R}_{i}\\big( \\mathbf{y}-\\hat{\\mathbf{x}}^{(k)} \\big)" }, { "math_id": 76, "text": "k \\gets k+ 1" }, { "math_id": 77, "text": "\\|\\hat{\\mathbf{x}}^{(k)}- \\hat{\\mathbf{x}}^{(k-1)}\\|_{2}<" }, { "math_id": 78, "text": "k>" }, { "math_id": 79, "text": "\\{\\mathbf{D}\\}_{k=1}^{K}" }, { "math_id": 80, "text": "\\{\\mathbf{\\Gamma}\\}_{k=1}^{K}" }, { "math_id": 81, "text": "m" }, { "math_id": 82, "text": "\\|\\mathbf{\\Gamma}\\|_{0,\\infty}" }, { "math_id": 83, "text": "\\mathbf{D}_{1} \\in \\mathbb{R}^{N\\times Nm_{1}}, \\mathbf{D}_{2} \\in \\mathbb{R}^{Nm_{1}\\times Nm_{2}}" }, { "math_id": 84, "text": 
"\\mathbf{D}_{1}\\mathbf{\\Gamma}_{1}= \\mathbf{D}_{1}(\\mathbf{D}_{2}\\mathbf{\\Gamma}_{2})" }, { "math_id": 85, "text": "\\mathbf{\\Gamma}_{1}" }, { "math_id": 86, "text": "\\mathbf{\\Gamma}_{2}" }, { "math_id": 87, "text": "\\mathbf{\\Gamma}_{0}=\\mathbf{x}" }, { "math_id": 88, "text": "\\begin{aligned}\n \\text{Find}\\; \\{\\mathbf{\\Gamma}_{i}\\}_{i=1}^{K}\\;\\text{s.t.}&\\; \\mathbf{\\Gamma}_{i-1}=\\mathbf{D}_{i}\\mathbf{\\Gamma}_{i},\\; \\|\\mathbf{\\Gamma}_{i}\\|_{0,\\infty}\\leq \\lambda_{i}\\\\\n \\text{Find}\\; \\{\\mathbf{\\Gamma}_{i}\\}_{i=1}^{K}\\; \\text{s.t.} &\\;\\|\\mathbf{\\Gamma}_{i-1}-\\mathbf{D}_{i}\\mathbf{\\Gamma}_{i}\\|_{2}\\leq \\varepsilon_{i},\\; \\|\\mathbf{\\Gamma}_{i}\\|_{0,\\infty}\\leq \\lambda_{i}\\end{aligned}" }, { "math_id": 89, "text": "\\{\\mathbf{D}_{i}\\}_{i=1}^{K}" }, { "math_id": 90, "text": "\\{\\mu(\\mathbf{D}_{i})\\}_{i=1}^{K}" }, { "math_id": 91, "text": "\\{\\mathbf{\\Gamma}\\}_{i=1}^{K}<\\frac{1}{2}\\big(1+\\frac{1}{\\mu(\\mathbf{D}_{i})}\\big)" }, { "math_id": 92, "text": "\\{\\hat{\\mathbf{\\Gamma}_{i}}\\}_{i=1}^{K}" }, { "math_id": 93, "text": "\\lambda_{i}<\\frac{1}{2}\\big(1+\\frac{1}{\\mu(\\mathbf{D}_{i})} \\big)" }, { "math_id": 94, "text": "\\mathbf{E}" }, { "math_id": 95, "text": "\\|\\mathbf{E}\\|_{2}\\leq \\varepsilon_{0}" }, { "math_id": 96, "text": "\\mathbf{Y=X+E}" }, { "math_id": 97, "text": "\\lambda_{i}<\\frac{1}{2}\\big(1+\\frac{1}{\\mu(\\mathbf{D}_{i})}\\big)" }, { "math_id": 98, "text": "\\varepsilon_{i}^{2}=\\frac{4\\varepsilon_{i-1}^{2}}{1-(2\\|\\mathbf{\\Gamma}_{i}\\|_{0,\\infty}-1)\\mu(\\mathbf{D}_{i})}" }, { "math_id": 99, "text": "\\{\\mathbf{\\Gamma}_{i}\\}_{i=1}^{K}" }, { "math_id": 100, "text": "\\|\\mathbf{\\Gamma}_{i}-\\hat{\\mathbf{\\Gamma}}_{i}\\|_{2}^{2}\\leq \\varepsilon_{i}^{2}" }, { "math_id": 101, "text": "\\ell_{1}" }, { "math_id": 102, "text": "\\begin{aligned}\n\\hat{\\mathbf{\\Gamma}}_{\\ell_p}&= \\underset{\\mathbf{\\Gamma}}{\\operatorname{argmin}} 
\\frac{1}{2}\\|\\mathbf{\\Gamma}-\\mathbf{D}^{T}\\mathbf{x}\\|_2^2 +\\beta\\|\\mathbf{\\Gamma}\\|_p & p\\in\\{0,1\\},\\end{aligned}" }, { "math_id": 103, "text": "\\mathcal{H}_{\\beta}(\\mathbf{D}^{T}\\mathbf{x})" }, { "math_id": 104, "text": "\\mathcal{S}_{\\beta}(\\mathbf{D}^{T}\\mathbf{x})" }, { "math_id": 105, "text": "\\begin{aligned}\n\\hat{\\mathbf{\\Gamma}}&= \\underset{\\mathbf{\\Gamma}}{\\text{argmin}}\\; \\frac{1}{2}\\|\\mathbf{\\Gamma}-\\mathbf{D}^T\\mathbf{x}\\|_2^2+\\beta\\|\\mathbf{\\Gamma}\\|_1,\\; \\text{ s.t. } \\mathbf{\\Gamma}\\geq 0,\\end{aligned}" }, { "math_id": 106, "text": "\\mathcal{S}_{\\beta}^{+}(\\mathbf{D}^{T}\\mathbf{x})" }, { "math_id": 107, "text": "\\mathcal{S}_{\\beta}^{+}(z)\\triangleq \\max(z-\\beta,0)" }, { "math_id": 108, "text": "\\{\\mathbf{D}_i\\}_{i=1}^K" }, { "math_id": 109, "text": "\\{\\mu(\\mathbf{D}_i)\\}_{i=1}^K" }, { "math_id": 110, "text": "\\|\\mathbf{E}\\|_2\\leq \\varepsilon_0" }, { "math_id": 111, "text": "|\\mathbf{\\Gamma}_i^{\\min}|" }, { "math_id": 112, "text": "|\\mathbf{\\Gamma}_i^{\\max}|" }, { "math_id": 113, "text": "\\mathbf{\\Gamma}_i" }, { "math_id": 114, "text": "\\{\\hat{\\mathbf{\\Gamma}}_i\\}_{i=1}^K" }, { "math_id": 115, "text": "\\{\\beta_i\\}_{i=1}^K" }, { "math_id": 116, "text": "\\|\\mathbf{\\Gamma}_i\\|_{0,\\infty}<\\frac{1}{2}\\big(1+\\frac{1}{\\mu(\\mathbf{D}_{i})}\\frac{|\\mathbf{\\Gamma}_i^{\\min}|}{|\\mathbf{\\Gamma}_i^{\\min}|}\\big)-\\frac{1}{\\mu(\\mathbf{D}_{i})} \\frac{\\varepsilon_{i-1}}{|\\mathbf{\\Gamma}_i^{\\max}|}" }, { "math_id": 117, "text": "\\beta_i" }, { "math_id": 118, "text": "\\begin{aligned}\n\\|\\mathbf{\\Gamma}_i\\|_{0,\\infty}^s<\\frac{1}{2}\\big( 1+\\frac{1}{\\mu(\\mathbf{D}_i)} frac{|\\mathbf{\\Gamma}_i^{\\min}|}{|\\mathbf{\\Gamma}_i^{\\max}|} \\big)-\\frac{1}{\\mu(\\mathbf{D}_i)}\\frac{\\varepsilon_{i-1}}{|\\mathbf{\\Gamma}_i^{\\max}|}\n\\end{aligned}" }, { "math_id": 119, "text": "\\hat{\\mathbf{\\Gamma}}_{i}" }, { "math_id": 120, "text": 
"\\mathbf{\\Gamma}_{i}" }, { "math_id": 121, "text": "\\|\\mathbf{\\Gamma}_{i}-\\hat{\\mathbf{\\Gamma}_i}\\|_{2,\\infty}\\leq \\varepsilon_i" }, { "math_id": 122, "text": "\\varepsilon_i=\\sqrt{\\|\\mathbf{\\Gamma}_i\\|_{0,\\infty}}(\\varepsilon_{i-1}+\\mu(\\mathbf{D}_i)(\\|\\mathbf{\\Gamma}_i\\|_{0,\\infty}-1)|\\mathbf{\\Gamma}_i^{\\max}|+\\beta_{i})" }, { "math_id": 123, "text": "\\mathbf{x}\\in \\mathbb{R}^{Mm_{1}}" }, { "math_id": 124, "text": "\\mathbf{W}_{k}\\in\\mathbb{R}^{N\\times m_{1}}" }, { "math_id": 125, "text": "k" }, { "math_id": 126, "text": "\\text{ReLU}(\\mathbf{x})= \\max(0, x)" }, { "math_id": 127, "text": "\\mathbf{b}\\in \\mathbb{R}^{Mm_{1}}" }, { "math_id": 128, "text": "K=2" }, { "math_id": 129, "text": "\\begin{aligned}\n \\mathbf{Z}_{2}&= \\text{ReLU}\\big(\\mathbf{W}_{2}^{T}\\; \\text{ReLU}(\\mathbf{W}_{1}^{T}\\mathbf{x})+\\mathbf{b}_{1})+\\mathbf{b}_{2}\\;\\big).\\end{aligned}" }, { "math_id": 130, "text": "\\begin{aligned}\n \\hat{\\mathbf{\\Gamma}}&= \\mathcal{S}^{+}_{\\beta_{2}}\\big(\\mathbf{D}_{2}^{T}\\mathcal{S}^{+}_{\\beta_{1}}(\\mathbf{D}_{1}^{T}\\mathbf{x}) \\big)\\\\\n &= \\text{ReLU}\\big(\\mathbf{W}_{2}^{T} \\text{ReLU}(\\mathbf{W}_{1}^{T}\\mathbf{x}+\\beta_{1})+\\beta_{2}\\big).\\end{aligned}" }, { "math_id": 131, "text": "\\begin{aligned}\n \\hat{\\mathbf{\\Gamma}}_i & =\\underset{\\mathbf{\\Gamma}_{i}}{\\text{argmin}}\\; \\frac{1}{2}\\|\\mathbf{D}_{i}\\mathbf{\\Gamma}_{i}-\\hat{\\mathbf{\\Gamma}}_{i}\\|_{2}^{2}+\\; \\xi_{i}\\|\\mathbf{\\Gamma}_{i}\\|_{1},\\end{aligned}" }, { "math_id": 132, "text": "\\xi_{i}" }, { "math_id": 133, "text": "\\{\\mu\\big(\\mathbf{D}_{i}\\big)\\}_{i=1}^{K}" }, { "math_id": 134, "text": "\\|\\mathbf{\\Gamma}_{i}\\|_{0,\\infty}<\\frac{1}{2}\\big(1+\\frac{1}{\\mu(\\mathbf{D}_{i})}\\big)" }, { "math_id": 135, "text": "\\mathbf{Y}=\\mathbf{X+E}" }, { "math_id": 136, "text": "\\|\\mathbf{E}\\|_{0,\\infty}\\leq \\varepsilon_{0}" }, { "math_id": 137, "text": 
"\\{\\hat{\\mathbf{\\Gamma}}_{i}\\}_{i=1}^{K}" }, { "math_id": 138, "text": "\\{\\xi\\}_{i=1}^{K}" }, { "math_id": 139, "text": "\\|\\mathbf{\\Gamma}_{i}\\|_{0,\\infty}<\\frac{1}{3}\\big(1+\\frac{1}{\\mu(\\mathbf{D}_{i})}\\big)" }, { "math_id": 140, "text": "\\xi_{i}=4\\varepsilon_{i-1}" }, { "math_id": 141, "text": "\\hat{\\mathbf{\\Gamma}}_i" }, { "math_id": 142, "text": "\\|\\mathbf{\\Gamma}_{i}-\\hat\\mathbf{\\Gamma}_i\\|_{2,\\infty}\\leq \\varepsilon_{i}" }, { "math_id": 143, "text": "\\frac{\\varepsilon_{i}}{\\sqrt{\\|\\mathbf{\\Gamma}_{i}\\|_{0\\infty}}}" }, { "math_id": 144, "text": "\\mathbf{D} \\in \\mathbb{R}^{N \\times M \\times 3}" }, { "math_id": 145, "text": "\\mathbf{d}_{c,m}" }, { "math_id": 146, "text": "\\mathbf{z}_{i}" }, { "math_id": 147, "text": "\\mathbf{y}=\\{\\mathbf{y}_{r}, \\mathbf{y}_{g}, \\mathbf{y}_{b}\\}" }, { "math_id": 148, "text": "\\begin{aligned}\n \\{\\mathbf{\\hat{z}}_{i}\\}&=\\underset{\\{\\mathbf{z}_{i}\\}}{\\text{argmin}}\\frac{1}{2}\\sum_{c}\\bigg\\|\\sum_{i}\\mathbf{d}_{c,i}\\ast \\mathbf{z}_{i} -\\mathbf{y}_{c}\\bigg\\|_{2}^{2}+\\lambda \\sum_{i}\\|\\mathbf{z}_{i}\\|_{1}.\\end{aligned}" }, { "math_id": 149, "text": "\\hat{D}_{c,m}" }, { "math_id": 150, "text": "D_{c,m}" }, { "math_id": 151, "text": "\\mathbf{d}_{c,i}\\ast \\mathbf{z}_{i}" }, { "math_id": 152, "text": "\\hat{\\mathbf{x}}_{m}" }, { "math_id": 153, "text": "\\hat{\\mathbf{z}}_{m}" }, { "math_id": 154, "text": "\\mathbf{x}_{m}" }, { "math_id": 155, "text": "\\mathbf{z}_{m}" }, { "math_id": 156, "text": "\\mathcal{S}_{\\beta}(.)" }, { "math_id": 157, "text": "\\beta" }, { "math_id": 158, "text": "\\ell_{1,2}" }, { "math_id": 159, "text": "\\hat{\\mathbf{D}}_{c,m}" }, { "math_id": 160, "text": "\\mathbf{D}_{c,m}" }, { "math_id": 161, "text": "\\mathbf{y}=\\{\\mathbf{y}_{r},\\mathbf{y}_{g},\\mathbf{y}_{b}\\}" }, { "math_id": 162, "text": "\\{\\mu, \\rho\\}" }, { "math_id": 163, "text": "k\\gets k+1" }, { "math_id": 164, "text": 
"\\{\\hat{\\mathbf{z}}_{m}\\}^{(k+1)}\\gets\\underset{\\{\\hat{\\mathbf{x}}_{m}\\}}{\\text{argmin}}\\;\\frac{1}{2}\\sum_{c}\\big\\|\\sum_{m}\\hat{\\mathbf{D}}_{c,m} \\hat{\\mathbf{z}}_{m}-\\hat{\\mathbf{y}}_{c} \\big\\|+\\frac{\\rho}{2}\\sum_{m}\\|\\hat{\\mathbf{z}}_{m}- (\\hat{\\mathbf{y}}_{m}+\\hat{\\mathbf{u}}_{m}^{(k)})\\|_{2}^{2}." }, { "math_id": 165, "text": "\\{\\mathbf{y}_{c,m}\\}^{(k+1)}\\gets \\underset{\\{\\mathbf{y}_{c,m}\\}}{\\text{argmin}}\\;\\lambda \\sum_{c}\\sum_{m}\\|\\mathbf{y}_{c,m}\\|_{1}+\\mu\\|\\{\\mathbf{x}_{c,m}^{(k+1)}\\}\\|_{2,1}+\\frac{\\rho}{2}\\sum_{m}\\|\\mathbf{z}_{m}^{(k+1)}- (\\mathbf{y}+\\mathbf{u}_{m}^{(k)})\\|_{2}^{2}." }, { "math_id": 166, "text": "\\mathbf{y}_{m}^{(k+1)}=\\mathcal{S}_{\\lambda/\\rho}\\big( \\mathbf{x}_{m}^{(k+1)}+\\mathbf{u}_{m}^{(k)} \\big)." }, { "math_id": 167, "text": "k \\gets k+1" }, { "math_id": 168, "text": "\\|\\{\\mathbf{z}_{m}\\}^{(k+1)}-\\{\\mathbf{z}_{m}\\}^{(k)}\\|_{2}< " }, { "math_id": 169, "text": "i>" } ]
https://en.wikipedia.org/wiki?curid=59448183
59458
ElGamal encryption
Public-key cryptosystem In cryptography, the ElGamal encryption system is an asymmetric key encryption algorithm for public-key cryptography which is based on the Diffie–Hellman key exchange. It was described by Taher Elgamal in 1985. ElGamal encryption is used in the free GNU Privacy Guard software, recent versions of PGP, and other cryptosystems. The Digital Signature Algorithm (DSA) is a variant of the ElGamal signature scheme, which should not be confused with ElGamal encryption. ElGamal encryption can be defined over any cyclic group formula_0, like the multiplicative group of integers modulo "n" if and only if "n" is 1, 2, 4, "p""k" or 2"p""k", where "p" is an odd prime and "k" &gt; 0. Its security depends upon the difficulty of a certain problem in formula_0 related to computing discrete logarithms. The algorithm. The algorithm can be described as first performing a Diffie–Hellman key exchange to establish a shared secret formula_1, then using this as a one-time pad for encrypting the message. ElGamal encryption is performed in three phases: the key generation, the encryption, and the decryption. The first is purely key exchange, whereas the latter two mix key exchange computations with message computations. Key generation. The first party, Alice, generates a key pair as follows: she generates an efficient description of a cyclic group formula_2 of order formula_3 with generator formula_4 (let formula_5 represent the identity element of formula_2); she chooses an integer formula_6 randomly from formula_7; she computes formula_8; and she publishes formula_9 as her public key, retaining formula_6 as her private key, which must be kept secret. Encryption. A second party, Bob, encrypts a message formula_10 to Alice under her public key formula_9 as follows: he maps formula_10 to an element formula_11 of formula_2 using a reversible mapping function; he chooses an integer formula_12 randomly from formula_7; he computes the shared secret formula_13; he computes formula_14 and formula_15; and he sends the ciphertext formula_16 to Alice. Note that if one knows both the ciphertext formula_16 and the plaintext formula_11, one can easily find the shared secret formula_1, since formula_17. Therefore, a new formula_12 and hence a new formula_1 is generated for every message to improve security. For this reason, formula_12 is also called an ephemeral key. Decryption. Alice decrypts a ciphertext formula_18 with her private key formula_6 as follows: she computes formula_19 (since formula_20, it holds that formula_21, which is the same shared secret used by Bob in encryption); she computes formula_22, the inverse of formula_1 in the group formula_2 (if formula_2 is a subgroup of a multiplicative group of integers modulo formula_23, the inverse can be found with the extended Euclidean algorithm, or alternatively as formula_24, since formula_25); and she computes formula_26, which she then maps back to the plaintext message formula_10 (this is correct because formula_27, and hence formula_28). Practical use. 
Like most public key systems, the ElGamal cryptosystem is usually used as part of a hybrid cryptosystem, where the message itself is encrypted using a symmetric cryptosystem, and ElGamal is then used to encrypt only the symmetric key. This is because asymmetric cryptosystems like ElGamal are usually slower than symmetric ones for the same level of security, so it is faster to encrypt the message, which can be arbitrarily large, with a symmetric cipher, and then use ElGamal only to encrypt the symmetric key, which usually is quite small compared to the size of the message. Security. The security of the ElGamal scheme depends on the properties of the underlying group formula_0 as well as any padding scheme used on the messages. If the computational Diffie–Hellman assumption (CDH) holds in the underlying cyclic group formula_0, then the encryption function is one-way. If the decisional Diffie–Hellman assumption (DDH) holds in formula_0, then ElGamal achieves semantic security. Semantic security is not implied by the computational Diffie–Hellman assumption alone. See Decisional Diffie–Hellman assumption for a discussion of groups where the assumption is believed to hold. ElGamal encryption is unconditionally malleable, and therefore is not secure under chosen ciphertext attack. For example, given an encryption formula_18 of some (possibly unknown) message formula_11, one can easily construct a valid encryption formula_29 of the message formula_30. To achieve chosen-ciphertext security, the scheme must be further modified, or an appropriate padding scheme must be used. Depending on the modification, the DDH assumption may or may not be necessary. Other schemes related to ElGamal which achieve security against chosen ciphertext attacks have also been proposed. The Cramer–Shoup cryptosystem is secure under chosen ciphertext attack assuming DDH holds for formula_0. Its proof does not use the random oracle model. 
Another proposed scheme is DHIES, whose proof requires an assumption that is stronger than the DDH assumption. Efficiency. ElGamal encryption is probabilistic, meaning that a single plaintext can be encrypted to many possible ciphertexts, with the consequence that a general ElGamal encryption produces a 1:2 expansion in size from plaintext to ciphertext. Encryption under ElGamal requires two exponentiations; however, these exponentiations are independent of the message and can be computed ahead of time if needed. Decryption requires one exponentiation and one computation of a group inverse, which can, however, be easily combined into just one exponentiation. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
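The three phases, and the malleability discussed above, can be illustrated with a toy implementation over a small multiplicative group; the prime below is far too small for any real security, and a real deployment would use a vetted library with standardized parameters:

```python
# Toy ElGamal over Z_p^* (illustration only; parameters are insecure).
import random

p = 467                  # small prime, group operation is mod-p multiplication
q = p - 1                # order of Z_p^*
g = 2                    # group element used as the generator (toy choice)

# Key generation (Alice)
x = random.randrange(1, q)       # private key
h = pow(g, x, p)                 # public key component h = g^x

# Encryption (Bob): fresh ephemeral y per message
m = 123                          # message, already mapped into the group
y = random.randrange(1, q)
s = pow(h, y, p)                 # shared secret s = h^y
c1, c2 = pow(g, y, p), (m * s) % p

# Decryption (Alice): s = c1^x and s^{-1} = c1^{q-x}, so m = c2 * c1^{q-x}
m_dec = (c2 * pow(c1, q - x, p)) % p
assert m_dec == m

# Malleability: scaling c2 by 2 yields a valid encryption of 2m mod p
m_forged = ((2 * c2) % p * pow(c1, q - x, p)) % p
assert m_forged == (2 * m) % p
```

Decryption succeeds for any choice of x and y because c2 * c1^(q-x) = m * (g^(p-1))^y = m mod p by Fermat's little theorem.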
[ { "math_id": 0, "text": "G" }, { "math_id": 1, "text": "s" }, { "math_id": 2, "text": "G\\," }, { "math_id": 3, "text": "q\\," }, { "math_id": 4, "text": "g" }, { "math_id": 5, "text": "e" }, { "math_id": 6, "text": "x" }, { "math_id": 7, "text": "\\{1, \\ldots, q-1\\}" }, { "math_id": 8, "text": "h := g^x" }, { "math_id": 9, "text": "(G,q,g,h)" }, { "math_id": 10, "text": "M" }, { "math_id": 11, "text": "m" }, { "math_id": 12, "text": "y" }, { "math_id": 13, "text": "s := h^y" }, { "math_id": 14, "text": "c_1 := g^y" }, { "math_id": 15, "text": "c_2 := m \\cdot s" }, { "math_id": 16, "text": "(c_1,c_2)" }, { "math_id": 17, "text": "c_2 \\cdot m^{-1} = s" }, { "math_id": 18, "text": "(c_1, c_2)" }, { "math_id": 19, "text": "s := c_1^x" }, { "math_id": 20, "text": "c_1 = g^y" }, { "math_id": 21, "text": "c_1^x = g^{xy} = h^y" }, { "math_id": 22, "text": "s^{-1}" }, { "math_id": 23, "text": "n" }, { "math_id": 24, "text": "c_1^{q-x}" }, { "math_id": 25, "text": "s \\cdot c_1^{q-x} = g^{xy} \\cdot g^{(q-x)y} = (g^{q})^y = e^y = e" }, { "math_id": 26, "text": "m := c_2 \\cdot s^{-1}" }, { "math_id": 27, "text": " c_2 = m \\cdot s" }, { "math_id": 28, "text": "c_2 \\cdot s^{-1} = (m \\cdot s) \\cdot s^{-1} = m \\cdot e = m" }, { "math_id": 29, "text": "(c_1, 2 c_2)" }, { "math_id": 30, "text": "2m" } ]
https://en.wikipedia.org/wiki?curid=59458
59458004
LaRa
LaRa (Lander Radioscience) is a Belgian radio science experiment that will be placed onboard Kazachok, planned to be launched in 2022. LaRa will monitor the Doppler frequency shift of a radio signal traveling between the Martian lander and the Earth. These Doppler measurements will be used to precisely observe the orientation and rotation of Mars, leading to a better knowledge of the internal structure of the planet. Instrument description. LaRa will obtain coherent two-way Doppler measurements from the X band radio link between Kazachok and large antennas on Earth, like those of the Deep Space Network. The relative radial velocity between the Earth and the Martian lander is inferred from Doppler shifts measured at the Earth ground stations. Masers at the Earth's ground stations ensure the frequency stability. Véronique Dehant, scientist at the Royal Observatory of Belgium, is the Principal Investigator of the experiment. Antwerp Space N.V., a subsidiary of OHB SE, is the manufacturer of the LaRa instrument. The main parts of the transponder are the coherent detector, the transmitter with the Solid-State Power Amplifier, the microcontroller unit, the receiver, and the power supply unit. The Allan deviation (quantifying the frequency stability of the signal) of the measurements is expected to be lower than formula_0 at a 60-second integration time. The LaRa high-performance antennas were designed at the Université catholique de Louvain in Belgium to obtain an optimal antenna gain centered on an elevation (angle of the line-of-sight from the lander to Earth) of about 30° to 55°. There will be three antennas: two for transmission (for redundancy purposes) and one for reception. Cables connect the transponder to the three antennas. Belgium and the Belgian Federal Science Policy Office (BELSPO) fund the development and the manufacturing of LaRa through ESA's PRODEX program. Scientific objectives. 
LaRa will study the rotation of Mars as well as its internal structure, with particular focus on its core. It will observe the Martian precession rate, the nutations, and the length-of-day variations, as well as the polar motion. The precession and the nutations are variations in the orientation of Mars's rotation axis in space, the precession being the very long term motion (with a period of about 170,000 years for Mars) while the nutations are the variations with shorter periods (annual, semi-annual, ter-annual... periods). A precise measurement of the Martian nutations enables an independent determination of the size and density of the liquid core because of a resonance in the nutation amplitudes. The resonant amplification of the low-frequency forced nutations depends sensitively on the size, moment of inertia, and flattening of the core. This amplification is expected to correspond to a displacement of between a few and forty centimeters at the surface of Mars. Observing the amplification makes it possible to confirm the liquid state of the core and to determine some of its properties. LaRa will also measure variations in the rotation angular momentum due to the redistribution of masses, such as the migration of ice from the polar caps to the atmosphere and the sublimation/condensation cycle of atmospheric CO2. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
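The Doppler measurement principle can be sketched numerically. The carrier frequency below is a typical X-band value assumed purely for illustration (the text does not state LaRa's exact carrier), and only the first-order two-way Doppler relation is used:

```python
# First-order two-way Doppler sketch: a coherent transponder re-radiates the
# uplink, so the received shift is roughly df = -2 * v_r / c * f_carrier.
C = 299_792_458.0          # speed of light, m/s
F_X_BAND = 8.4e9           # X-band carrier in Hz (typical value, assumed)

def two_way_doppler(v_radial_m_s):
    """Two-way Doppler shift in Hz for a given radial velocity (m/s)."""
    return -2.0 * v_radial_m_s * F_X_BAND / C

# A 1 m/s Earth-lander radial velocity maps to a shift of roughly 56 Hz.
shift = two_way_doppler(1.0)

# A fractional frequency stability (Allan deviation) of 1e-13 over 60 s
# corresponds, for a two-way link, to a radial-velocity noise floor of
# about c * sigma_y / 2, i.e. on the order of 15 micrometers per second.
sigma_v = C * 1e-13 / 2.0
```

This back-of-the-envelope scale is why maser-referenced ground stations are needed: centimeter-level surface displacements over a Martian day translate into micrometer-per-second velocity signatures.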
[ { "math_id": 0, "text": "10^{-13}" } ]
https://en.wikipedia.org/wiki?curid=59458004
59459308
Amitsur complex
In algebra, the Amitsur complex is a natural complex associated to a ring homomorphism. It was introduced by Shimshon Amitsur (1959). When the homomorphism is faithfully flat, the Amitsur complex is exact (thus determining a resolution), which is the basis of the theory of faithfully flat descent. The notion should be thought of as a mechanism to go beyond the conventional localization of rings and modules. Definition. Let formula_0 be a homomorphism of (not necessarily commutative) rings. First define the cosimplicial set formula_1 (where formula_2 refers to formula_3, not formula_4) as follows. Define the face maps formula_5 by inserting formula_6 at the formula_7th spot: formula_8 Define the degeneracies formula_9 by multiplying out the formula_7th and formula_10th spots: formula_11 They satisfy the "obvious" cosimplicial identities and thus formula_12 is a cosimplicial set. It then determines the complex with the augmentation formula_13, the Amitsur complex: formula_14 where formula_15 Exactness of the Amitsur complex. Faithfully flat case. In the above notations, if formula_13 is right faithfully flat, then a theorem of Alexander Grothendieck states that the (augmented) complex formula_16 is exact and thus is a resolution. More generally, if formula_13 is right faithfully flat, then, for each left formula_17-module formula_18, formula_19 is exact. "Proof": Step 1: The statement is true if formula_20 splits as a ring homomorphism. That "formula_13 splits" means formula_21 for some homomorphism formula_22 (formula_23 is a retraction and formula_13 a section). Given such a formula_23, define formula_24 by formula_25 An easy computation shows the following identity: with formula_26, formula_27. This says that formula_28 is a homotopy operator, so formula_29 induces the zero map on cohomology: i.e., the complex is exact. Step 2: The statement is true in general. We remark that formula_30 is a section of formula_31. 
Thus, Step 1 applied to the split ring homomorphism formula_32 implies: formula_33 where formula_34, is exact. Since formula_35, etc., by "faithfully flat", the original sequence is exact. formula_36 Arc topology case. Bhargav Bhatt and Peter Scholze (2019, §8) show that the Amitsur complex is exact if formula_17 and formula_37 are (commutative) perfect rings, and the map is required to be a covering in the arc topology (which is a weaker condition than being a cover in the flat topology). Notes. &lt;templatestyles src="Reflist/styles.css" /&gt; Citations. &lt;templatestyles src="Reflist/styles.css" /&gt; References. &lt;templatestyles src="Refbegin/styles.css" /&gt;
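The face maps and the alternating-sum differential defined above can be sketched in code (a minimal sketch under simplifying assumptions: pure tensors are modeled as plain tuples, formal integer combinations as coefficient dictionaries, and the ring structure is ignored, so only the combinatorics of inserting the unit is exercised, which is enough to check that the differential squares to zero):

```python
from collections import Counter

ONE = "1"  # stands for the unit element inserted by the face maps

def face(t, i):
    """d^i: insert the unit at the i-th spot of the pure tensor t."""
    return t[:i] + (ONE,) + t[i:]

def delta(chain):
    """Amitsur differential: the alternating sum of the face maps,
    acting on formal integer combinations of pure tensors."""
    out = Counter()
    for t, c in chain.items():
        for i in range(len(t) + 1):          # d^0, ..., d^{len(t)}
            out[face(t, i)] += (-1) ** i * c
    return Counter({t: c for t, c in out.items() if c != 0})

x = Counter({("a", "b"): 1})           # a pure tensor a ⊗ b in S ⊗_R S
assert delta(delta(x)) == Counter()    # δ∘δ = 0
```

The check passes for any starting tensor, reflecting that the cosimplicial identities d^j ∘ d^i = d^i ∘ d^{j-1} (for i &lt; j) already hold at the level of tuples.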
[ { "math_id": 0, "text": "\\theta: R \\to S" }, { "math_id": 1, "text": "C^\\bullet = S^{\\otimes \\bullet+1}" }, { "math_id": 2, "text": "\\otimes" }, { "math_id": 3, "text": "\\otimes_R" }, { "math_id": 4, "text": "\\otimes_{\\Z}" }, { "math_id": 5, "text": "d^i : S^{\\otimes {n+1}} \\to S^{\\otimes n+2}" }, { "math_id": 6, "text": "1" }, { "math_id": 7, "text": "i" }, { "math_id": 8, "text": "d^i(x_0 \\otimes \\cdots \\otimes x_n) = x_0 \\otimes \\cdots \\otimes x_{i-1} \\otimes 1 \\otimes x_i \\otimes \\cdots \\otimes x_n." }, { "math_id": 9, "text": "s^i : S^{\\otimes n+1} \\to S^{\\otimes n}" }, { "math_id": 10, "text": "(i+1)" }, { "math_id": 11, "text": "s^i(x_0 \\otimes \\cdots \\otimes x_n) = x_0 \\otimes \\cdots \\otimes x_i x_{i+1} \\otimes \\cdots \\otimes x_n." }, { "math_id": 12, "text": "S^{\\otimes \\bullet + 1}" }, { "math_id": 13, "text": "\\theta" }, { "math_id": 14, "text": "0 \\to R \\,\\overset{\\theta}\\to\\, S \\,\\overset{\\delta^0}\\to\\, S^{\\otimes 2} \\,\\overset{\\delta^1}\\to\\, S^{\\otimes 3} \\to \\cdots" }, { "math_id": 15, "text": "\\delta^n = \\sum_{i=0}^{n+1} (-1)^i d^i." 
}, { "math_id": 16, "text": "0 \\to R \\overset{\\theta}\\to S^{\\otimes \\bullet + 1}" }, { "math_id": 17, "text": "R" }, { "math_id": 18, "text": "M" }, { "math_id": 19, "text": "0 \\to M \\to S \\otimes_R M \\to S^{\\otimes 2} \\otimes_R M \\to S^{\\otimes 3} \\otimes_R M \\to \\cdots" }, { "math_id": 20, "text": "\\theta : R \\to S" }, { "math_id": 21, "text": "\\rho \\circ \\theta = \\operatorname{id}_R" }, { "math_id": 22, "text": "\\rho : S \\to R" }, { "math_id": 23, "text": "\\rho" }, { "math_id": 24, "text": "h : S^{\\otimes n+1} \\otimes M \\to S^{\\otimes n} \\otimes M" }, { "math_id": 25, "text": "\\begin{align}\n& h(x_0 \\otimes m) = \\rho(x_0) \\otimes m, \\\\\n& h(x_0 \\otimes \\cdots \\otimes x_n \\otimes m) = \\theta(\\rho(x_0)) x_1 \\otimes \\cdots \\otimes x_n \\otimes m.\n\\end{align}" }, { "math_id": 26, "text": "\\delta^{-1}=\\theta \\otimes \\operatorname{id}_M : M \\to S \\otimes_R M" }, { "math_id": 27, "text": "h \\circ \\delta^n + \\delta^{n-1} \\circ h = \\operatorname{id}_{S^{\\otimes n+1} \\otimes M}" }, { "math_id": 28, "text": "h" }, { "math_id": 29, "text": "\\operatorname{id}_{S^{\\otimes n+1} \\otimes M}" }, { "math_id": 30, "text": "S \\to T := S \\otimes_R S, \\, x \\mapsto 1 \\otimes x" }, { "math_id": 31, "text": "T \\to S, \\, x \\otimes y \\mapsto xy" }, { "math_id": 32, "text": "S \\to T" }, { "math_id": 33, "text": "0 \\to M_S \\to T \\otimes_S M_S \\to T^{\\otimes 2} \\otimes_S M_S \\to \\cdots," }, { "math_id": 34, "text": "M_S = S \\otimes_R M" }, { "math_id": 35, "text": "T \\otimes_S M_S \\simeq S^{\\otimes 2} \\otimes_R M" }, { "math_id": 36, "text": "\\square" }, { "math_id": 37, "text": "S" } ]
https://en.wikipedia.org/wiki?curid=59459308
59463403
Touwsrivier CPV Solar Project
Touwsrivier CPV Solar Project is a 44 MWp (36 MWAC) concentrator photovoltaics (CPV) power station located 13 km outside the town of Touwsrivier in the Western Cape of South Africa. The installation reached full capacity in December 2014 and is the second largest operating CPV facility in the world. Electricity produced by the plant is fed into the national grid operated by Eskom under a 20-year power purchase agreement (PPA). Facility construction details. The facility consists of 1500 dual-axis CX-S530-II solar tracking systems divided into 60 sections. The 25 systems of each section are connected in parallel to a central grid-connected 630 kW inverter. Each system supports 12 CX-M500 modules which are each rated to produce 2450 Wp. Each module contains 2,400 Fresnel lenses to concentrate sunlight 500 times onto multi-junction solar cells, allowing a greater efficiency than other photovoltaic power plants. The facility is sited on 190 hectares near a similar 60 kW CPV pilot plant on the neighbouring Aquila private game reserve. Group Five Construction (Pty) Ltd served as the EPC contractor for the balance of the project. It is the world's largest assembly of Soitec's Concentrix Solar technology. Ownership, funding, and operations. Soitec initiated the project under the South African government's Renewable Energy Independent Power Producer (REIPP) programme. Construction was financed with a US$100 million (R1 billion) bond special purpose vehicle (SPV) on the Johannesburg Stock Exchange. The project is owned by Soitec (20%); the Public Investment Corporation, which is the South African Government's employee pension fund (40% through a preferred share structure); Pele Green Energy (Pty) Ltd (35%); and the Touwsrivier Community Trust (5%). Pele Energy also provides oversight of ongoing operation and improvement activities with a subsidiary of juwi Renewable Energies. Local community. 
Like other similar solar projects in South Africa, a profit sharing and investment agreement exists with the local community whereby a share of the profits from the plant is invested in improving the town of Touws River. This includes the construction of a hydroponics farm employing 30 people and upgrades to the town's primary school. Ongoing maintenance and security operations at the plant also employ about 35 people. Electricity production. Monthly capacity and production data for grid-connected photovoltaic plants in South Africa are available in aggregate from the Renewable Energy Data and Information Service. Data from individual plants is restricted due to Department of Energy confidentiality protocols. Annual electricity production for the Touwsrivier CPV plant has been close to expected targets for the first five years of operation (2015-2019) as summarized in this bond credit rating opinion from Moody's. Note that the plant's 44 MWp peak DC rating is specified under "concentrator standard test conditions" (CSTC) of DNI=1000 W/m2, AM1.5D, and Tcell=25 °C, as per the IEC 62670 standard convention. Production capacity is 36 MW based on IEC 62670 "concentrator standard operating conditions" (CSOC) of DNI=900 W/m2, AM1.5D, Tambient=20 °C, and wind speed=2 m/s, and is also the value quoted by several sources as representing the plant's expected AC capacity (denoted as MWAC). A capacity factor of 0.230 (23.0%) then corresponds to annual production of: formula_0 See also. &lt;templatestyles src="Stack/styles.css"/&gt; References. &lt;templatestyles src="Reflist/styles.css" /&gt;
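The annual-production arithmetic above can be reproduced directly (a quick check using the figures quoted in the article):

```python
# Annual production from the CSOC AC rating and the capacity factor.
capacity_mw = 36.0          # MW (AC, CSOC rating)
capacity_factor = 0.230
hours_per_year = 365 * 24   # 8760 h

annual_mwh = capacity_mw * capacity_factor * hours_per_year
print(f"{annual_mwh:,.0f} MW·h")  # 72,533 MW·h, i.e. the ~72,500 MW·h figure
```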
[ { "math_id": 0, "text": "(36\\ \\mbox{MW}) \\times (0.230\\ ) \\times (365\\ \\mbox{days}) \\times (24\\ \\mbox{hours/day}) = 72,500\\ \\mbox{MW·h}" } ]
https://en.wikipedia.org/wiki?curid=59463403
594682
Compactly generated group
In mathematics, a compactly generated (topological) group is a topological group "G" which is algebraically generated by one of its compact subsets. This should not be confused with the unrelated notion (widely used in algebraic topology) of a compactly generated space, i.e. one whose topology is generated (in a suitable sense) by its compact subspaces. Definition. A topological group "G" is said to be compactly generated if there exists a compact subset "K" of "G" such that formula_0 So if "K" is symmetric, i.e. "K" = "K"−1, then formula_1 Locally compact case. This property is interesting in the case of locally compact topological groups, since locally compact compactly generated topological groups can be approximated by locally compact, separable metric factor groups of "G". More precisely, for a sequence "U""n" of open identity neighborhoods, there exists a normal subgroup "N" contained in the intersection of that sequence, such that "G"/"N" is locally compact metric separable (the Kakutani-Kodaira-Montgomery-Zippin theorem). References. &lt;templatestyles src="Reflist/styles.css" /&gt;
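A standard illustrative example (not from this article): the additive group of the reals is compactly generated, with the powers "K""n" read additively as "n"-fold sums:

```latex
K = [-1,1] = K^{-1}, \qquad
\underbrace{K + \cdots + K}_{n\ \text{times}} = [-n,\,n], \qquad
(\mathbb{R},+) = \bigcup_{n \in \mathbb{N}} [-n,\,n].
```

By contrast, every compact subset of a discrete group is finite, so a discrete group is compactly generated exactly when it is finitely generated.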
[ { "math_id": 0, "text": "\\langle K\\rangle = \\bigcup_{n \\in \\mathbb{N}} (K \\cup K^{-1})^n = G." }, { "math_id": 1, "text": "G = \\bigcup_{n \\in \\mathbb{N}} K^n." } ]
https://en.wikipedia.org/wiki?curid=594682
59469
Linear cryptanalysis
Form of cryptanalysis In cryptography, linear cryptanalysis is a general form of cryptanalysis based on finding affine approximations to the action of a cipher. Attacks have been developed for block ciphers and stream ciphers. Linear cryptanalysis is one of the two most widely used attacks on block ciphers, the other being differential cryptanalysis. The discovery is attributed to Mitsuru Matsui, who first applied the technique to the FEAL cipher (Matsui and Yamagishi, 1992). Subsequently, Matsui published an attack on the Data Encryption Standard (DES), eventually leading to the first experimental cryptanalysis of the cipher reported in the open community (Matsui, 1993; 1994). The attack on DES is not generally practical, requiring 2^47 known plaintexts. A variety of refinements to the attack have been suggested, including using multiple linear approximations or incorporating non-linear expressions, leading to a generalized partitioning cryptanalysis. Evidence of security against linear cryptanalysis is usually expected of new cipher designs. Overview. There are two parts to linear cryptanalysis. The first is to construct linear equations relating plaintext, ciphertext and key bits that have a high bias; that is, whose probabilities of holding (over the space of all possible values of their variables) are as close as possible to 0 or 1. The second is to use these linear equations in conjunction with known plaintext-ciphertext pairs to derive key bits. Constructing linear equations. For the purposes of linear cryptanalysis, a linear equation expresses the equality of two expressions which consist of binary variables combined with the exclusive-or (XOR) operation. 
For example, the following equation, from a hypothetical cipher, states that the XOR sum of the first and third plaintext bits (as in a block cipher's block) and the first ciphertext bit is equal to the second bit of the key: formula_0 In an ideal cipher, any linear equation relating plaintext, ciphertext and key bits would hold with probability 1/2. Since the equations dealt with in linear cryptanalysis will vary in probability, they are more accurately referred to as linear "approximations". The procedure for constructing approximations is different for each cipher. In the most basic type of block cipher, a substitution–permutation network, analysis is concentrated primarily on the S-boxes, the only nonlinear part of the cipher (i.e. the operation of an S-box cannot be encoded in a linear equation). For small enough S-boxes, it is possible to enumerate every possible linear equation relating the S-box's input and output bits, calculate their biases and choose the best ones. Linear approximations for S-boxes then must be combined with the cipher's other actions, such as permutation and key mixing, to arrive at linear approximations for the entire cipher. The piling-up lemma is a useful tool for this combination step. There are also techniques for iteratively improving linear approximations (Matsui 1994). Deriving key bits. Having obtained a linear approximation of the form: formula_1 we can then apply a straightforward algorithm (Matsui's Algorithm 2), using known plaintext-ciphertext pairs, to guess at the values of the key bits involved in the approximation. For each set of values of the key bits on the right-hand side (referred to as a "partial key"), count how many times the approximation holds true over all the known plaintext-ciphertext pairs; call this count "T". The partial key whose "T" has the greatest absolute difference from half the number of plaintext-ciphertext pairs is designated as the most likely set of values for those key bits. 
This is because it is assumed that the correct partial key will cause the approximation to hold with a high bias. The magnitude of the bias is significant here, as opposed to the magnitude of the probability itself. This procedure can be repeated with other linear approximations, obtaining guesses at values of key bits, until the number of unknown key bits is low enough that they can be attacked with brute force. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
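The counting step can be illustrated on the hypothetical one-equation example above (a toy sketch: the "cipher" below is invented so that the approximation P1 ⊕ P3 ⊕ C1 = K2 holds with bias 0.3, and since only a single key bit is guessed, the sign of the bias, rather than its magnitude, picks out K2, which is closer in spirit to Matsui's Algorithm 1 than to Algorithm 2):

```python
import random

random.seed(42)

# Toy setting for the hypothetical approximation above:
#   P1 xor P3 xor C1 = K2   (zero-indexed below: p[0] ^ p[2] ^ c1 = k2),
# arranged to hold with probability 0.8, i.e. bias 0.3.
def encrypt_first_bit(p, k2):
    c1 = p[0] ^ p[2] ^ k2
    if random.random() < 0.2:      # the approximation fails 20% of the time
        c1 ^= 1
    return c1

k2 = 1                             # the secret key bit to recover
pairs = []
for _ in range(1000):              # known plaintext-ciphertext pairs
    p = [random.randint(0, 1) for _ in range(4)]
    pairs.append((p, encrypt_first_bit(p, k2)))

# Counting step: T = how often P1 ^ P3 ^ C1 = 1 over all pairs.
T = sum(p[0] ^ p[2] ^ c1 for p, c1 in pairs)
guess = 1 if T > len(pairs) / 2 else 0   # the sign of the bias reveals K2
assert guess == k2
```

With 1000 pairs, T lands near 800, far from the 500 expected of an ideal cipher, so the key bit is recovered with overwhelming confidence.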
[ { "math_id": 0, "text": "\n P_1 \\oplus P_3 \\oplus C_1 = K_2.\n" }, { "math_id": 1, "text": "\nP_{i_1} \\oplus P_{i_2} \\oplus \\cdots \\oplus C_{j_1} \\oplus C_{j_2} \\oplus \\cdots = K_{k_1} \\oplus K_{k_2} \\oplus \\cdots\n" } ]
https://en.wikipedia.org/wiki?curid=59469
59470
Digital Signature Algorithm
Digital verification standard The Digital Signature Algorithm (DSA) is a public-key cryptosystem and Federal Information Processing Standard for digital signatures, based on the mathematical concept of modular exponentiation and the discrete logarithm problem. In a public-key cryptosystem, two keys are generated: data can only be encrypted with the public key and encrypted data can only be decrypted with the private key. DSA is a variant of the Schnorr and ElGamal signature schemes. The National Institute of Standards and Technology (NIST) proposed DSA for use in their Digital Signature Standard (DSS) in 1991, and adopted it as FIPS 186 in 1994. Five revisions to the initial specification have been released. The newest specification is FIPS 186-5, from February 2023. DSA is patented but NIST has made this patent available worldwide royalty-free. Specification FIPS 186-5 indicates DSA will no longer be approved for digital signature generation, but may be used to verify signatures generated prior to the implementation date of that standard. Overview. The DSA works in the framework of public-key cryptosystems and is based on the algebraic properties of modular exponentiation, together with the discrete logarithm problem, which is considered to be computationally intractable. The algorithm uses a key pair consisting of a public key and a private key. The private key is used to generate a digital signature for a message, and such a signature can be verified by using the signer's corresponding public key. The digital signature provides message authentication (the receiver can verify the origin of the message), integrity (the receiver can verify that the message has not been modified since it was signed) and non-repudiation (the sender cannot falsely claim that they have not signed the message). History. In 1982, the U.S. government solicited proposals for a public key signature standard. 
In August 1991 the National Institute of Standards and Technology (NIST) proposed DSA for use in their Digital Signature Standard (DSS). Initially there was significant criticism, especially from software companies that had already invested effort in developing digital signature software based on the RSA cryptosystem. Nevertheless, NIST adopted DSA as a Federal standard (FIPS 186) in 1994. Five revisions to the initial specification have been released: FIPS 186–1 in 1998, FIPS 186–2 in 2000, FIPS 186–3 in 2009, FIPS 186–4 in 2013, and FIPS 186–5 in 2023. Standard FIPS 186-5 forbids signing with DSA, while allowing verification of signatures generated prior to the standard's implementation date. It is to be replaced by newer signature schemes such as EdDSA. DSA is covered by U.S. patent 5231668, filed July 26, 1991 and now expired, and attributed to David W. Kravitz, a former NSA employee. This patent was given to "The United States of America as represented by the Secretary of Commerce, Washington, D.C.", and NIST has made this patent available worldwide royalty-free. Claus P. Schnorr claims that his U.S. patent 4995082 (also now expired) covered DSA; this claim is disputed. In 1993, Dave Banisar managed to get confirmation, via a FOIA request, that the DSA algorithm was designed not by NIST but by the NSA. OpenSSH announced that DSA is scheduled to be removed in 2025. Operation. The DSA algorithm involves four operations: key generation (which creates the key pair), key distribution, signing and signature verification. 1. Key generation. Key generation has two phases. The first phase is a choice of "algorithm parameters" which may be shared between different users of the system, while the second phase computes a single key pair for one user. Parameter generation. The algorithm parameters are (formula_7, formula_6, formula_14). These may be shared between different users of the system. Per-user keys. 
Given a set of parameters, the second phase computes the key pair for a single user: formula_15 is the private key and formula_18 is the public key. 2. Key distribution. The signer should publish the public key formula_18. That is, they should send the key to the receiver via a reliable, but not necessarily secret, mechanism. The signer should keep the private key formula_15 secret. 3. Signing. A message formula_19 is signed as follows: The signature is formula_25 The calculation of formula_20 and formula_26 amounts to creating a new per-message key. The modular exponentiation in computing formula_26 is the most computationally expensive part of the signing operation, but it may be computed before the message is known. Calculating the modular inverse formula_27 is the second most expensive part, and it may also be computed before the message is known. It may be computed using the extended Euclidean algorithm or using Fermat's little theorem as formula_28. 4. Signature Verification. One can verify that a signature formula_25 is a valid signature for a message formula_19 as follows: Correctness of the algorithm. The signature scheme is correct in the sense that the verifier will always accept genuine signatures. This can be shown as follows: First, since formula_36, it follows that formula_37 by Fermat's little theorem. Since formula_38 and formula_6 is prime, formula_14 must have order formula_6. The signer computes formula_39 Thus formula_40 Since formula_14 has order formula_6 we have formula_41 Finally, the correctness of DSA follows from formula_42 Sensitivity. With DSA, the entropy, secrecy, and uniqueness of the random signature value formula_20 are critical. It is so critical that violating any one of those three requirements can reveal the entire private key to an attacker. 
Using the same value twice (even while keeping formula_20 secret), using a predictable value, or leaking even a few bits of formula_20 in each of several signatures, is enough to reveal the private key formula_15. This issue affects both DSA and Elliptic Curve Digital Signature Algorithm (ECDSA) – in December 2010, the group "fail0verflow" announced the recovery of the ECDSA private key used by Sony to sign software for the PlayStation 3 game console. The attack was made possible because Sony failed to generate a new random formula_20 for each signature. This issue can be prevented by deriving formula_20 deterministically from the private key and the message hash, as described by . This ensures that formula_20 is different for each formula_43 and unpredictable for attackers who do not know the private key formula_15. In addition, malicious implementations of DSA and ECDSA can be created where formula_20 is chosen in order to subliminally leak information via signatures. For example, an offline private key could be leaked from a perfect offline device that only released innocent-looking signatures. Implementations. Below is a list of cryptographic libraries that provide support for DSA: References. &lt;templatestyles src="Reflist/styles.css" /&gt;
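The four operations can be walked through with toy numbers (a sketch only: the parameters below are tiny and completely insecure, the hash is SHA-256 reduced mod q, and the per-message value k is chosen by simple iteration rather than at random, which a real implementation must never do):

```python
import hashlib

# Toy parameters: q divides p - 1  (12109 = 12 * 1009 + 1; both prime).
p, q = 12109, 1009
g = pow(2, (p - 1) // q, p)        # g = h^((p-1)/q) mod p with h = 2
assert g != 1                       # so g has order exactly q

def Hm(message: bytes) -> int:
    """Message hash, reduced mod q (a toy stand-in for H(m))."""
    return int.from_bytes(hashlib.sha256(message).digest(), "big") % q

x = 57                  # private key, 0 < x < q
y = pow(g, x, p)        # public key

def sign(m: bytes, k: int):
    """Returns (r, s), or None if r or s is 0 (retry with a fresh k)."""
    r = pow(g, k, p) % q
    s = pow(k, -1, q) * (Hm(m) + x * r) % q
    return None if r == 0 or s == 0 else (r, s)

def verify(m: bytes, r: int, s: int) -> bool:
    if not (0 < r < q and 0 < s < q):
        return False
    w = pow(s, -1, q)                        # w = s^-1 mod q
    u1, u2 = Hm(m) * w % q, r * w % q
    v = pow(g, u1, p) * pow(y, u2, p) % p % q
    return v == r

k, sig = 2, None
while sig is None:          # in real DSA, k must be random and secret
    sig = sign(b"hello", k)
    k += 1

assert verify(b"hello", *sig)
```

The correctness argument of the previous section is exactly what makes the final `verify` call succeed: g has order q, so the exponents can be reduced mod q throughout.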
[ { "math_id": 0, "text": "H" }, { "math_id": 1, "text": "|H|" }, { "math_id": 2, "text": "N" }, { "math_id": 3, "text": "L" }, { "math_id": 4, "text": "N < L" }, { "math_id": 5, "text": "N \\leq |H|" }, { "math_id": 6, "text": "q" }, { "math_id": 7, "text": "p" }, { "math_id": 8, "text": "p - 1" }, { "math_id": 9, "text": "h" }, { "math_id": 10, "text": "\\{ 2 \\ldots p-2 \\}" }, { "math_id": 11, "text": "g := h^{(p - 1)/q} \\mod p" }, { "math_id": 12, "text": "g=1" }, { "math_id": 13, "text": "h=2" }, { "math_id": 14, "text": "g" }, { "math_id": 15, "text": "x" }, { "math_id": 16, "text": "\\{ 1 \\ldots q-1 \\}" }, { "math_id": 17, "text": "y := g^x \\mod p" }, { "math_id": 18, "text": "y" }, { "math_id": 19, "text": "m" }, { "math_id": 20, "text": "k" }, { "math_id": 21, "text": "r := \\left(g^{k}\\bmod\\,p\\right)\\bmod\\,q" }, { "math_id": 22, "text": "r=0" }, { "math_id": 23, "text": "s := \\left(k^{-1}\\left(H(m)+xr\\right)\\right)\\bmod\\,q" }, { "math_id": 24, "text": "s=0" }, { "math_id": 25, "text": "\\left(r,s\\right)" }, { "math_id": 26, "text": "r" }, { "math_id": 27, "text": "k^{-1}\\bmod\\,q" }, { "math_id": 28, "text": "k^{q-2}\\bmod\\,q" }, { "math_id": 29, "text": "0 < r < q" }, { "math_id": 30, "text": "0 < s < q" }, { "math_id": 31, "text": " w := s^{-1} \\bmod\\,q" }, { "math_id": 32, "text": "u_1 := H(m) \\cdot w\\, \\bmod\\,q" }, { "math_id": 33, "text": "u_2 := r \\cdot w\\, \\bmod\\,q" }, { "math_id": 34, "text": " v := \\left(g^{u_1}y^{u_2} \\bmod\\,p\\right) \\bmod\\,q" }, { "math_id": 35, "text": "v = r" }, { "math_id": 36, "text": "g=h^{(p-1)/q}~\\text{mod}~p" }, { "math_id": 37, "text": "g^q \\equiv h^{p-1} \\equiv 1 \\mod p" }, { "math_id": 38, "text": "g>0" }, { "math_id": 39, "text": "s=k^{-1}(H(m)+xr)\\bmod\\,q" }, { "math_id": 40, "text": "\n\\begin{align}\nk & \\equiv H(m)s^{-1}+xrs^{-1}\\\\\n & \\equiv H(m)w + xrw \\pmod{q}\n\\end{align}\n" }, { "math_id": 41, "text": "\n\\begin{align}\ng^k & \\equiv g^{H(m)w}g^{xrw}\\\\\n & 
\\equiv g^{H(m)w}y^{rw}\\\\\n & \\equiv g^{u_1}y^{u_2} \\pmod{p}\n\\end{align}\n" }, { "math_id": 42, "text": "\\begin{align}\n r &= (g^k \\bmod\\,p) \\bmod\\,q\\\\\n &= (g^{u_1}y^{u_2} \\bmod\\,p) \\bmod\\,q\\\\\n &= v\n\\end{align}" }, { "math_id": 43, "text": "H(m)" } ]
https://en.wikipedia.org/wiki?curid=59470
59471688
Thermoelectric acclimatization
Thermoelectric acclimatization relies on the ability of a Peltier cell to absorb heat on one side and reject heat on the other. Consequently, it is possible to use such cells for heating on one side and cooling on the other, and hence as a temperature control system. Peltier cell heat pump. A typical Peltier cell based heat pump can be realized by coupling the thermoelectric generators with photovoltaic air-cooled panels, as described in the PhD thesis of Alexandra Thedeby. The system is considered together with an air plant that ensures the possibility of heating on one side and cooling on the other; changing the configuration allows both winter and summer acclimatization. These elements are expected to be an effective component of zero-energy buildings if coupled with solar thermal energy and photovoltaics, with particular reference to creating radiant heat pumps on the walls of a building. It must be remarked that this acclimatization method ensures the ideal efficiency during summer cooling if coupled with a photovoltaic generator. The air circulation could also be used to cool the PV modules. The most important engineering requirement is the accurate design of heat sinks to optimize the heat exchange and minimize the fluid-dynamic losses. Thermodynamic parameters. The efficiency can be determined by the following relation: formula_0 where formula_1 is the temperature of the cooling surface and formula_2 is the temperature of the heating surface. The key energy phenomena, and the reason for defining a specific use of thermoelectric elements (Figure 1) as heat pumps, reside in the energy fluxes that those elements allow realizing: the heat leakage formula_3, given by formula_4; the cooling flux formula_5, given by formula_6; the heating flux formula_7, given by formula_8; and the electrical power formula_9, given by formula_10. Here formula_11, formula_12 is the electric current, α the Seebeck coefficient, R the electric resistance, S the surface area, d the cell thickness, and k the thermal conductivity. The efficiencies of the system are the cooling coefficient formula_13 and the heating coefficient formula_14. COP can be calculated according to Cannistraro. Final uses. 
Thermoelectric heat pumps can be used for local acclimatization, removing local discomfort situations. For example, thermoelectric ceilings are today at an advanced research stage, with the aim of increasing indoor comfort conditions according to Fanger, such as the discomfort that may appear in the presence of large glassed surfaces, and for small building acclimatization if coupled with solar systems. Such systems are of key importance for new zero-emission passive buildings because of their very high COP values and the high performance achievable through an accurate exergy optimization of the system. At the industrial level, thermoelectric acclimatization appliances are currently under development. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
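The heat fluxes and the two COPs can be evaluated numerically (a sketch with made-up, illustrative parameter values, not from any datasheet; the standard thermoelectric relations are used, with the hot-side flux written in terms of the hot-side temperature so that the electrical input equals the difference of the two fluxes):

```python
# Illustrative (made-up) cell parameters
alpha = 0.05      # Seebeck coefficient, V/K
R = 2.0           # electrical resistance, ohm
k = 1.5           # thermal conductivity, W/(m*K)
S = 1.0e-3        # surface area, m^2
d = 3.0e-3        # cell thickness, m
I = 2.0           # current, A
T_C, T_H = 290.0, 310.0
dT = T_H - T_C

leak = (k / d) * S * dT                      # conductive backflow, W
Q_C = alpha * I * T_C - I**2 * R / 2 - leak  # heat absorbed, cold side, W
Q_H = alpha * I * T_H + I**2 * R / 2 - leak  # heat rejected, hot side, W
E_el = Q_H - Q_C                             # electrical input = alpha*I*dT + I^2*R

cop_cooling = Q_C / E_el   # here 1.5
cop_heating = Q_H / E_el   # here 2.5; exceeds cop_cooling by exactly 1
```

The energy balance is visible in the numbers: the two coefficients of performance always differ by 1, since the rejected heat is the absorbed heat plus the electrical input.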
[ { "math_id": 0, "text": "\eta= \frac{T_H - T_C}{T_H}" }, { "math_id": 1, "text": "T_C" }, { "math_id": 2, "text": "T_H" }, { "math_id": 3, "text": "\dot{Q}_L" }, { "math_id": 4, "text": "\dot{Q}_L= \frac{L}{d}S(T_H-T_C)" }, { "math_id": 5, "text": "\dot{Q}_C" }, { "math_id": 6, "text": "\dot{Q}_C=\alpha I T_C - \frac{I^2 R}{2} - \frac{k}{d}S \Delta T" }, { "math_id": 7, "text": "\dot{Q}_H" }, { "math_id": 8, "text": "\dot{Q}_H=\alpha I T_H + \frac{I^2 R}{2} - \frac{k}{d}S \Delta T" }, { "math_id": 9, "text": "\dot{E}_{EL}" }, { "math_id": 10, "text": "\dot{E}_{EL}=\alpha I \Delta T + I^2 R" }, { "math_id": 11, "text": "\Delta T = T_H-T_C" }, { "math_id": 12, "text": "I" }, { "math_id": 13, "text": "\eta _C = \frac{\dot{Q}_C}{\dot{E}_{EL}}" }, { "math_id": 14, "text": "\eta _H = \frac{\dot{Q}_H}{\dot{E}_{EL}}" } ]
https://en.wikipedia.org/wiki?curid=59471688
59472228
Boule (gambling game)
Gambling game similar to roulette Boule (French for 'ball') is a gambling game, similar to roulette, that dates back to the popular 19th-century game of "Petits Chevaux" ('Little Horses'). Playing. The wheel is divided into 18 pockets which are numbered from 1 to 9, each number occurring twice. The numbers 1, 3, 6 and 8 are black, while the numbers 2, 4, 7 and 9 are red, and 5 is yellow. Instead of the ivory ball used in roulette, a rubber ball is used in Boule. Betting options. Even odds. The number five corresponds to the "zéro" in roulette: if the "boule" falls on the five, all simple chance bets are lost. House advantage. The overall house advantage for all forms of betting in Boule is formula_0 = 11.11%. Boule is thus rather disadvantageous for the punter; by comparison, in European roulette, the house advantage for simple chance play is 1.35%, and the house advantage for multiple chance play is 2.70%. Boule is played for low stakes compared to roulette, especially in resorts where there is no concession for the Grand Jeu, i.e. a casino. Petits Chevaux. In Petits Chevaux, also called Jeu des Petits Chevaux or Rösslispiel, there are the same betting options as in Boule. The winning number is not chosen, however, by throwing a ball, but by a mechanical device that simulates a horse race "en miniature". "Petits Chevaux" was the predecessor of Boule. The pockets of many Boule wheels are decorated with pictures of horses in commemoration of "Petits Chevaux". Around 1900 Petits Chevaux was a very popular casino game. A surviving gaming table by the firm of J. A. Jost (Paris c. 1905) is displayed in the Swiss Museum of Games in La Tour-de-Peilz. Until 2003, a similar game, the horse roulette game of Klondyke, was played in Baden-Baden's casino at the time of the Iffezheim Races. The gaming apparatus is housed today in Baden-Baden's Municipal Museum. 
In France the name "Petits Chevaux" is also used for a variant of Pachisi or Mensch, ärgere dich nicht; the playing stones are in the shape of knights; see Jeu des petits chevaux.
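The 1/9 house advantage on even-money bets follows from counting pockets (a quick check; red and black are as listed above, and both pockets of the yellow 5 lose for the punter):

```python
from fractions import Fraction

# 18 pockets: each of 1..9 appears twice; 5 plays the role of roulette's zero.
pockets = list(range(1, 10)) * 2
red = {2, 4, 7, 9}

# Even-money bet on red: +1 on a red number, -1 otherwise (including 5).
ev = Fraction(sum(1 if n in red else -1 for n in pockets), len(pockets))
print(-ev)  # house advantage: 1/9, about 11.11 %
```

By symmetry the same 8-winning-out-of-18 count, and hence the same 1/9 edge, applies to any even-money bet.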
[ { "math_id": 0, "text": "\\frac{1}{9}" } ]
https://en.wikipedia.org/wiki?curid=59472228
5947843
Welch–Satterthwaite equation
Equation to approximate pooled degrees of freedom In statistics and uncertainty analysis, the Welch–Satterthwaite equation is used to calculate an approximation to the effective degrees of freedom of a linear combination of independent sample variances, also known as the pooled degrees of freedom, corresponding to the pooled variance. For "n" sample variances "s""i"2 ("i" = 1, ..., "n"), each respectively having "ν""i" degrees of freedom, one often computes the linear combination formula_0 where formula_1 is a real positive number, typically formula_2. In general, the probability distribution of "χ′" cannot be expressed analytically. However, its distribution can be approximated by another chi-squared distribution, whose effective degrees of freedom are given by the Welch–Satterthwaite equation: formula_3 There is "no" assumption that the underlying population variances "σ""i"2 are equal. This is known as the Behrens–Fisher problem. The result can be used to perform approximate statistical inference tests. The simplest application of this equation is in performing Welch's "t"-test. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
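The equation can be evaluated directly (a minimal sketch; the helper name is ours, and the default weights are the typical choice mentioned above):

```python
def welch_satterthwaite(s2, nu, k=None):
    """Effective (pooled) degrees of freedom for the combination sum_i k_i s_i^2."""
    if k is None:
        k = [1.0 / (v + 1) for v in nu]   # the typical choice k_i = 1/(nu_i + 1)
    num = sum(ki * si for ki, si in zip(k, s2)) ** 2
    den = sum((ki * si) ** 2 / vi for ki, si, vi in zip(k, s2, nu))
    return num / den

# Two samples of equal size and variance: degrees of freedom simply add up.
print(round(welch_satterthwaite([1.0, 1.0], [9, 9]), 6))   # 18.0
# Unequal variances and sizes give a smaller, non-integer value.
print(round(welch_satterthwaite([4.0, 9.0], [9, 19]), 2))  # ≈ 25.41
```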
[ { "math_id": 0, "text": "\n \\chi' = \\sum_{i=1}^n k_i s_i^2.\n" }, { "math_id": 1, "text": "k_i " }, { "math_id": 2, "text": " k_i=\\frac{1}{\\nu_i+1}" }, { "math_id": 3, "text": "\n \\nu_{\\chi'} \\approx \\frac{\\displaystyle\\left(\\sum_{i=1}^n k_i s_i^2\\right)^2}\n {\\displaystyle\\sum_{i=1}^n \\frac{(k_i s_i^2)^2}\n {\\nu_i}\n }\n" } ]
https://en.wikipedia.org/wiki?curid=5947843
59487764
Permutation representation
In mathematics, the term permutation representation of a (typically finite) group formula_0 can refer to either of two closely related notions: a representation of formula_0 as a group of permutations, or as a group of permutation matrices. The term also refers to the combination of the two. Abstract permutation representation. A permutation representation of a group formula_0 on a set formula_1 is a homomorphism from formula_0 to the symmetric group of formula_1: formula_2 The image formula_3 is a permutation group and the elements of formula_0 are represented as permutations of formula_1. A permutation representation is equivalent to an action of formula_0 on the set formula_1: formula_4 See the article on group action for further details. Linear permutation representation. If formula_0 is a permutation group of degree formula_5, then the permutation representation of formula_0 is the linear representation of formula_0 formula_6 which maps formula_7 to the corresponding permutation matrix (here formula_8 is an arbitrary field). That is, formula_0 acts on formula_9 by permuting the standard basis vectors. This notion of a permutation representation can, of course, be composed with the previous one to represent an arbitrary abstract group formula_0 as a group of permutation matrices. One first represents formula_0 as a permutation group and then maps each permutation to the corresponding matrix. Representing formula_0 as a permutation group acting on itself by translation, one obtains the regular representation. Character of the permutation representation. Given a group formula_0 and a finite set formula_1 with formula_0 acting on the set formula_1 then the character formula_10 of the permutation representation is exactly the number of fixed points of formula_1 under the action of formula_11 on formula_1. That is formula_12 the number of points of formula_1 fixed by formula_11. 
This follows since, if we represent the map formula_11 by a matrix with basis given by the elements of formula_1, we get a permutation matrix of formula_1. Now the character of this representation is defined as the trace of this permutation matrix. An element on the diagonal of a permutation matrix is 1 if the corresponding point of formula_1 is fixed, and 0 otherwise. So we can conclude that the trace of the permutation matrix is exactly equal to the number of fixed points of formula_1. For example, if formula_13 and formula_14, the character of the permutation representation can be computed with the formula formula_12 the number of points of formula_1 fixed by formula_15. So formula_16, as only 3 is fixed; formula_17, as no element of formula_1 is fixed; and formula_18, as every element of formula_1 is fixed. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
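The fixed-point computation in the S3 example can be checked mechanically (a small sketch; permutations are written zero-indexed as tuples of images, so the article's points {1, 2, 3} become {0, 1, 2}):

```python
def perm_matrix(perm):
    """Permutation matrix sending basis vector e_j to e_{perm[j]}."""
    n = len(perm)
    return [[1 if perm[j] == i else 0 for j in range(n)] for i in range(n)]

def trace(M):
    return sum(M[i][i] for i in range(len(M)))

# Elements of S3 acting on {0, 1, 2} (the article's {1, 2, 3}, shifted):
elements = {
    "(12)":  (1, 0, 2),   # swap the first two points, fix the third
    "(123)": (1, 2, 0),   # the 3-cycle
    "e":     (0, 1, 2),   # the identity
}

for name, g in elements.items():
    fixed = sum(1 for x in range(len(g)) if g[x] == x)
    assert trace(perm_matrix(g)) == fixed
    print(name, fixed)    # characters 1, 0, 3, matching the example
```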
[ { "math_id": 0, "text": "G" }, { "math_id": 1, "text": "X" }, { "math_id": 2, "text": "\\rho\\colon G \\to \\operatorname{Sym}(X)." }, { "math_id": 3, "text": "\\rho(G)\\sub \\operatorname{Sym}(X)" }, { "math_id": 4, "text": "G\\times X \\to X." }, { "math_id": 5, "text": "n" }, { "math_id": 6, "text": "\\rho\\colon G\\to \\operatorname{GL}_n(K)" }, { "math_id": 7, "text": "g\\in G" }, { "math_id": 8, "text": "K" }, { "math_id": 9, "text": "K^n" }, { "math_id": 10, "text": "\\chi" }, { "math_id": 11, "text": "\\rho(g)" }, { "math_id": 12, "text": "\\chi(g)=" }, { "math_id": 13, "text": "G=S_3" }, { "math_id": 14, "text": "X=\\{1, 2, 3\\}" }, { "math_id": 15, "text": "g" }, { "math_id": 16, "text": "\\chi((12))=\\operatorname{tr}(\\begin{bmatrix} 0 & 1 & 0\\\\ 1 & 0 & 0\\\\ 0 & 0 & 1\\end{bmatrix})=1" }, { "math_id": 17, "text": "\\chi((123))=\\operatorname{tr}(\\begin{bmatrix} 0 & 1 & 0\\\\ 0 & 0 & 1\\\\ 1 & 0 & 0\\end{bmatrix})=0" }, { "math_id": 18, "text": "\\chi(1)=\\operatorname{tr}(\\begin{bmatrix} 1 & 0 & 0\\\\ 0 & 1 & 0\\\\ 0 & 0 & 1\\end{bmatrix})=3" } ]
https://en.wikipedia.org/wiki?curid=59487764
59497
Solubility
Capacity of a substance to dissolve in a homogeneous way In chemistry, solubility is the ability of a substance, the solute, to form a solution with another substance, the solvent. Insolubility is the opposite property, the inability of the solute to form such a solution. The extent of the solubility of a substance in a specific solvent is generally measured as the concentration of the solute in a saturated solution, one in which no more solute can be dissolved. At this point, the two substances are said to be at the solubility equilibrium. For some solutes and solvents, there may be no such limit, in which case the two substances are said to be "miscible in all proportions" (or just "miscible"). The solute can be a solid, a liquid, or a gas, while the solvent is usually solid or liquid. Both may be pure substances, or may themselves be solutions. Gases are always miscible in all proportions, except in very extreme situations, and a solid or liquid can be "dissolved" in a gas only by passing into the gaseous state first. The solubility mainly depends on the composition of solute and solvent (including their pH and the presence of other dissolved substances) as well as on temperature and pressure. The dependency can often be explained in terms of interactions between the particles (atoms, molecules, or ions) of the two substances, and of thermodynamic concepts such as enthalpy and entropy. Under certain conditions, the concentration of the solute can exceed its usual solubility limit. The result is a supersaturated solution, which is metastable and will rapidly exclude the excess solute if a suitable nucleation site appears. The concept of solubility does not apply when there is an irreversible chemical reaction between the two substances, such as the reaction of calcium hydroxide with hydrochloric acid; even though one might say, informally, that one "dissolved" the other. 
The solubility is also not the same as the rate of solution, which is how fast a solid solute dissolves in a liquid solvent. This property depends on many other variables, such as the physical form of the two substances and the manner and intensity of mixing. The concept and measure of solubility are extremely important in many sciences besides chemistry, such as geology, biology, physics, and oceanography, as well as in engineering, medicine, agriculture, and even in non-technical activities like painting, cleaning, cooking, and brewing. Most chemical reactions of scientific, industrial, or practical interest only happen after the reagents have been dissolved in a suitable solvent. Water is by far the most common such solvent. The term "soluble" is sometimes used for materials that can form colloidal suspensions of very fine solid particles in a liquid. The quantitative solubility of such substances is generally not well-defined, however. Quantification of solubility. The solubility of a specific solute in a specific solvent is generally expressed as the concentration of a saturated solution of the two. Any of the several ways of expressing concentration of solutions can be used, such as the mass, volume, or amount in moles of the solute for a specific mass, volume, or mole amount of the solvent or of the solution. Per quantity of solvent. In particular, chemical handbooks often express the solubility as grams of solute per 100 millilitres of solvent (g/(100 mL), often written as g/100 ml), or as grams of solute per decilitre of solvent (g/dL); or, less commonly, as grams of solute per litre of solvent (g/L). The quantity of solvent can instead be expressed in mass, as grams of solute per 100 grams of solvent (g/(100 g), often written as g/100 g), or as grams of solute per kilogram of solvent (g/kg). The number may be expressed as a percentage in this case, and the abbreviation "w/w" may be used to indicate "weight per weight". 
(The values in g/L and g/kg are similar for water, but that may not be the case for other solvents.) Alternatively, the solubility of a solute can be expressed in moles instead of mass. For example, if the quantity of solvent is given in kilograms, the value is the molality of the solution (mol/kg). Per quantity of solution. The solubility of a substance in a liquid may also be expressed as the quantity of solute per quantity of "solution", rather than of solvent. For example, following the common practice in titration, it may be expressed as moles of solute per litre of solution (mol/L), the molarity of the latter. In more specialized contexts the solubility may be given by the mole fraction (moles of solute per total moles of solute plus solvent) or by the mass fraction at equilibrium (mass of solute per mass of solute plus solvent). Both are dimensionless numbers between 0 and 1 which may be expressed as percentages (%). Liquid and gaseous solutes. For solutions of liquids or gases in liquids, the quantities of both substances may be given by volume rather than mass or mole amount, such as litres of solute per litre of solvent, or litres of solute per litre of solution. The value may be given as a percentage, and the abbreviation "v/v" for "volume per volume" may be used to indicate this choice. Conversion of solubility values. Conversion between these various ways of measuring solubility may not be trivial, since it may require knowing the density of the solution, which is often not measured and cannot be predicted. While the total mass is conserved by dissolution, the final volume may be different from both the volume of the solvent and the sum of the two volumes. Moreover, many solids (such as acids and salts) will dissociate in non-trivial ways when dissolved; conversely, the solvent may form coordination complexes with the molecules or ions of the solute. 
In those cases, the sum of the moles of molecules of solute and solvent is not really the total moles of independent particles in the solution. To sidestep that problem, the solubility per mole of solution is usually computed and quoted as if the solute does not dissociate or form complexes; that is, by pretending that the mole amount of solution is the sum of the mole amounts of the two substances. Qualifiers used to describe extent of solubility. The extent of solubility ranges widely, from infinitely soluble (without limit, i.e. miscible) such as ethanol in water, to essentially insoluble, such as titanium dioxide in water. A number of other descriptive terms are also used to qualify the extent of solubility for a given application. For example, the U.S. Pharmacopoeia gives the following terms, according to the mass "m"sv of solvent required to dissolve one unit of mass "m"su of solute: (The solubilities of the examples are approximate, for water at 20–25 °C.) The thresholds to describe something as insoluble, or similar terms, may depend on the application. For example, one source states that substances are described as "insoluble" when their solubility is less than 0.1 g per 100 mL of solvent. Molecular view. Solubility occurs under dynamic equilibrium, which means that solubility results from the simultaneous and opposing processes of dissolution and phase joining (e.g. precipitation of solids). A stable state of the solubility equilibrium occurs when the rates of dissolution and re-joining are equal, meaning the relative amounts of dissolved and non-dissolved materials are constant. If the solvent is removed, all of the substance that had dissolved is recovered. The term "solubility" is also used in some fields where the solute is altered by solvolysis. For example, many metals and their oxides are said to be "soluble in hydrochloric acid", although in fact the aqueous acid irreversibly degrades the solid to give soluble products. 
Most ionic solids dissociate when dissolved in polar solvents. In those cases where the solute is not recovered upon evaporation of the solvent, the process is referred to as solvolysis. The thermodynamic concept of solubility does not apply straightforwardly to solvolysis. When a solute dissolves, it may form several species in the solution. For example, an aqueous solution of cobalt(II) chloride can afford , each of which interconverts. Factors affecting solubility. Solubility is defined for specific phases. For example, the solubilities of aragonite and calcite in water are expected to differ, even though they are both polymorphs of calcium carbonate and have the same chemical formula. The solubility of one substance in another is determined by the balance of intermolecular forces between the solvent and solute, and the entropy change that accompanies the solvation. Factors such as temperature and pressure will alter this balance, thus changing the solubility. Solubility may also strongly depend on the presence of other species dissolved in the solvent, for example, complex-forming anions (ligands) in liquids. Solubility will also depend on the excess or deficiency of a common ion in the solution, a phenomenon known as the common-ion effect. To a lesser extent, solubility will depend on the ionic strength of solutions. The last two effects can be quantified using the equation for solubility equilibrium. For a solid that dissolves in a redox reaction, solubility is expected to depend on the potential (within the range of potentials under which the solid remains the thermodynamically stable phase). For example, the solubility of gold in high-temperature water is observed to be almost an order of magnitude higher (i.e. about ten times higher) when the redox potential is controlled using a highly oxidizing Fe3O4-Fe2O3 redox buffer than with a moderately oxidizing Ni-NiO buffer. 
Solubility (metastable, at concentrations approaching saturation) also depends on the physical size of the crystal or droplet of solute (or, strictly speaking, on the specific surface area or molar surface area of the solute). For quantification, see the equation in the article on solubility equilibrium. For highly defective crystals, solubility may increase with the increasing degree of disorder. Both of these effects occur because of the dependence of solubility constant on the Gibbs energy of the crystal. The last two effects, although often difficult to measure, are of practical importance. For example, they provide the driving force for precipitate aging (the crystal size spontaneously increasing with time). Temperature. The solubility of a given solute in a given solvent is a function of temperature. Depending on the change in enthalpy (Δ"H") of the dissolution reaction, "i.e.", on the endothermic (Δ"H" &gt; 0) or exothermic (Δ"H" &lt; 0) character of the dissolution reaction, the solubility of a given compound may increase or decrease with temperature. The van 't Hoff equation relates the change of solubility equilibrium constant ("K"sp) to temperature change and to reaction enthalpy change. For most solids and liquids, their solubility increases with temperature because their dissolution reaction is endothermic (Δ"H" &gt; 0). In liquid water at high temperatures (e.g., approaching the critical temperature), the solubility of ionic solutes tends to decrease due to the change of properties and structure of liquid water; the lower dielectric constant results in a less polar solvent and in a change of hydration energy affecting the Δ"G" of the dissolution reaction. Gaseous solutes exhibit more complex behavior with temperature. 
As the temperature is raised, gases usually become less soluble in water (exothermic dissolution reaction related to their hydration) (to a minimum, which is below 120 °C for most permanent gases), but more soluble in organic solvents (endothermic dissolution reaction related to their solvation). The chart shows solubility curves for some typical solid inorganic salts in liquid water (temperature is in degrees Celsius, i.e. kelvins minus 273.15). Many salts behave like barium nitrate and disodium hydrogen arsenate, and show a large increase in solubility with temperature (Δ"H" &gt; 0). Some solutes (e.g. sodium chloride in water) exhibit solubility that is fairly independent of temperature (Δ"H" ≈ 0). A few, such as calcium sulfate (gypsum) and cerium(III) sulfate, become less soluble in water as temperature increases (Δ"H" &lt; 0). This is also the case for calcium hydroxide (portlandite), whose solubility at 70 °C is about half of its value at 25 °C. The dissolution of calcium hydroxide in water is also an exothermic process (Δ"H" &lt; 0). As dictated by the van 't Hoff equation and Le Chatelier's principle, lower temperatures favor the dissolution of Ca(OH)2, so portlandite solubility increases at low temperature. This temperature dependence is sometimes referred to as "retrograde" or "inverse" solubility. Occasionally, a more complex pattern is observed, as with sodium sulfate, where the less soluble decahydrate crystal (mirabilite) loses water of crystallization at 32 °C to form a more soluble anhydrous phase (thenardite) with a smaller change in Gibbs free energy (Δ"G") in the dissolution reaction. The solubility of organic compounds nearly always increases with temperature. The technique of recrystallization, used for purification of solids, depends on a solute's different solubilities in hot and cold solvent. A few exceptions exist, such as certain cyclodextrins. Pressure. 
For condensed phases (solids and liquids), the pressure dependence of solubility is typically weak and usually neglected in practice. Assuming an ideal solution, the dependence can be quantified as: formula_0 where the index formula_1 iterates the components, formula_2 is the mole fraction of the formula_1-th component in the solution, formula_3 is the pressure, the index formula_4 refers to constant temperature, formula_5 is the partial molar volume of the formula_1-th component in the solution, formula_6 is the partial molar volume of the formula_7-th component in the dissolving solid, and formula_8 is the universal gas constant. The pressure dependence of solubility does occasionally have practical significance. For example, precipitation fouling of oil fields and wells by calcium sulfate (which decreases its solubility with decreasing pressure) can result in decreased productivity with time. Solubility of gases. Henry's law is used to quantify the solubility of gases in solvents. The solubility of a gas in a solvent is directly proportional to the partial pressure of that gas above the solvent. This relationship is similar to Raoult's law and can be written as: formula_9 where formula_10 is a temperature-dependent constant (for example, 769.2 L·atm/mol for dioxygen (O2) in water at 298 K), formula_11 is the partial pressure (in atm), and formula_12 is the concentration of the dissolved gas in the liquid (in mol/L). The solubility of gases is sometimes also quantified using Bunsen solubility coefficient. In the presence of small bubbles, the solubility of the gas does not depend on the bubble radius in any other way than through the effect of the radius on pressure (i.e. the solubility of gas in the liquid in contact with small bubbles is increased due to pressure increase by Δ"p" = 2γ/"r"; see Young–Laplace equation). Henry's law is valid for gases that do not undergo change of chemical speciation on dissolution. 
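Henry's law as stated above can be applied directly. A minimal sketch, using the kH value quoted in the text for O2 in water at 298 K; the 0.21 atm partial pressure of O2 (roughly its share of dry air at 1 atm) is an illustrative assumption:

```python
# Henry's law: p = kH * c, so the dissolved concentration is c = p / kH.
def henry_concentration(p_atm, kH_L_atm_per_mol):
    """Dissolved gas concentration (mol/L) at partial pressure p_atm (atm)."""
    return p_atm / kH_L_atm_per_mol

# kH = 769.2 L·atm/mol for O2 in water at 298 K (value quoted in the text).
c_O2 = henry_concentration(0.21, 769.2)
print(f"{c_O2:.2e} mol/L")  # about 2.7e-4 mol/L of dissolved O2
```

Doubling the partial pressure doubles the dissolved concentration, which is the proportionality the law expresses.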
Sieverts' law shows a case when this assumption does not hold. The carbon dioxide solubility in seawater is also affected by temperature, pH of the solution, and by the carbonate buffer. The decrease of solubility of carbon dioxide in seawater when temperature increases is also an important positive feedback exacerbating past and future climate changes, as observed in ice cores from the Vostok site in Antarctica. At the geological time scale, because of the Milankovitch cycles, when the astronomical parameters of the Earth orbit and its rotation axis progressively change and modify the solar irradiance at the Earth surface, temperature starts to increase. When a deglaciation period is initiated, the progressive warming of the oceans releases CO2 into the atmosphere because of its lower solubility in warmer sea water. In turn, higher levels of CO2 in the atmosphere increase the greenhouse effect and carbon dioxide acts as an amplifier of the general warming. Polarity. A popular aphorism used for predicting solubility is "like dissolves like", also expressed in Latin as "Similia similibus solventur". This statement indicates that a solute will dissolve best in a solvent that has a similar chemical structure to itself, based on favorable entropy of mixing. This view is simplistic, but it is a useful rule of thumb. The overall solvation capacity of a solvent depends primarily on its polarity. For example, a very polar (hydrophilic) solute such as urea is very soluble in highly polar water, less soluble in fairly polar methanol, and practically insoluble in non-polar solvents such as benzene. In contrast, a non-polar or lipophilic solute such as naphthalene is insoluble in water, fairly soluble in methanol, and highly soluble in non-polar benzene. 
In even simpler terms, an ionic compound (with positive and negative ions) such as sodium chloride (common salt) is easily soluble in a highly polar solvent (with some separation of positive (δ+) and negative (δ−) charges in the covalent molecule) such as water; thus the sea is salty because it has accumulated dissolved salts since early geological ages. The solubility is favored by entropy of mixing (Δ"S") and depends on enthalpy of dissolution (Δ"H") and the hydrophobic effect. The free energy of dissolution (Gibbs energy) depends on temperature and is given by the relationship: Δ"G" = Δ"H" – TΔ"S". Smaller Δ"G" means greater solubility. Chemists often exploit differences in solubilities to separate and purify compounds from reaction mixtures, using the technique of liquid-liquid extraction. This applies in vast areas of chemistry from drug synthesis to spent nuclear fuel reprocessing. Rate of dissolution. Dissolution is not an instantaneous process. The rate of solubilization (in kg/s) is related to the solubility product and the surface area of the material. The speed at which a solid dissolves may depend on its crystallinity or lack thereof in the case of amorphous solids and the surface area (crystallite size) and the presence of polymorphism. Many practical systems illustrate this effect, for example in designing methods for controlled drug delivery. In some cases, solubility equilibria can take a long time to establish (hours, days, months, or many years; depending on the nature of the solute and other factors). The rate of dissolution can often be expressed by the Noyes–Whitney equation or the Nernst and Brunner equation of the form: formula_13 where: formula_14 is the mass of dissolved material, formula_15 is time, formula_16 is the surface area of the interface between the dissolving substance and the solvent, formula_17 is the diffusion coefficient, formula_18 is the thickness of the diffusion boundary layer, formula_19 is the concentration of the substance at the surface, and formula_20 is the concentration of the substance in the bulk of the solvent. For dissolution limited by diffusion (or mass transfer if mixing is present), formula_19 is equal to the solubility of the substance. 
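The Noyes–Whitney form quoted above can be evaluated directly. In this sketch all numerical values are illustrative assumptions, not measured data:

```python
# Noyes-Whitney / Nernst-Brunner rate law: dm/dt = A * (D/d) * (Cs - Cb).
def dissolution_rate(A, D, d, Cs, Cb):
    """Mass dissolution rate (kg/s) for surface area A (m^2), diffusion
    coefficient D (m^2/s), boundary-layer thickness d (m), surface
    concentration Cs and bulk concentration Cb (both kg/m^3)."""
    return A * (D / d) * (Cs - Cb)

# Illustrative (assumed) values: a 1 cm^2 crystal, typical small-molecule
# diffusivity, a 10-micron boundary layer, Cs set to the solubility, Cb = 0.
rate = dissolution_rate(A=1e-4, D=1e-9, d=1e-5, Cs=360.0, Cb=0.0)
print(rate, "kg/s")
```

As the bulk concentration Cb approaches the solubility Cs, the driving force (Cs − Cb) and hence the rate fall toward zero, which is the approach to saturation described in the text.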
When the dissolution rate of a pure substance is normalized to the surface area of the solid (which usually changes with time during the dissolution process), then it is expressed in kg/(m²·s) and referred to as "intrinsic dissolution rate". The intrinsic dissolution rate is defined by the United States Pharmacopeia. Dissolution rates vary by orders of magnitude between different systems. Typically, very low dissolution rates parallel low solubilities, and substances with high solubilities exhibit high dissolution rates, as suggested by the Noyes–Whitney equation. Theories of solubility. Solubility product. Solubility constants are used to describe saturated solutions of ionic compounds of relatively low solubility (see solubility equilibrium). The solubility constant is a special case of an equilibrium constant. Since it is a product of ion concentrations in equilibrium, it is also known as the solubility product. It describes the balance between dissolved ions from the salt and undissolved salt. The solubility constant is also "applicable" (i.e. useful) to precipitation, the reverse of the dissolving reaction. As with other equilibrium constants, temperature can affect the numerical value of solubility constant. While the solubility constant is not as simple as solubility, the value of this constant is generally independent of the presence of other species in the solvent. Other theories. The Flory–Huggins solution theory is a theoretical model describing the solubility of polymers. The Hansen solubility parameters and the Hildebrand solubility parameters are empirical methods for the prediction of solubility. It is also possible to predict solubility from other physical constants such as the enthalpy of fusion. The octanol–water partition coefficient, usually expressed as its logarithm (Log P), is a measure of differential solubility of a compound in a hydrophobic solvent (1-octanol) and a hydrophilic solvent (water). 
The logarithm of these two values enables compounds to be ranked in terms of hydrophilicity (or hydrophobicity). The energy change associated with dissolving is usually given per mole of solute as the enthalpy of solution. Applications. Solubility is of fundamental importance in a large number of scientific disciplines and practical applications, ranging from ore processing and nuclear reprocessing to the use of medicines, and the transport of pollutants. Solubility is often said to be one of the "characteristic properties of a substance", which means that solubility is commonly used to describe the substance, to indicate a substance's polarity, to help to distinguish it from other substances, and as a guide to applications of the substance. For example, indigo is described as "insoluble in water, alcohol, or ether but soluble in chloroform, nitrobenzene, or concentrated sulfuric acid". Solubility of a substance is useful when separating mixtures. For example, a mixture of salt (sodium chloride) and silica may be separated by dissolving the salt in water, and filtering off the undissolved silica. The synthesis of chemical compounds, by the milligram in a laboratory, or by the ton in industry, both make use of the relative solubilities of the desired product, as well as unreacted starting materials, byproducts, and side products to achieve separation. Another example of this is the synthesis of benzoic acid from phenylmagnesium bromide and dry ice. Benzoic acid is more soluble in an organic solvent such as dichloromethane or diethyl ether, and when shaken with this organic solvent in a separatory funnel, will preferentially dissolve in the organic layer. The other reaction products, including the magnesium bromide, will remain in the aqueous layer, clearly showing that separation based on solubility is achieved. This process, known as liquid–liquid extraction, is an important technique in synthetic chemistry. Recycling is used to ensure maximum extraction. 
Differential solubility. In flowing systems, differences in solubility often determine the dissolution-precipitation driven transport of species. This happens when different parts of the system experience different conditions. Even slightly different conditions can result in significant effects, given sufficient time. For example, relatively low solubility compounds are found to be soluble in more extreme environments, resulting in geochemical and geological effects of the activity of hydrothermal fluids in the Earth's crust. These are often the source of high quality economic mineral deposits and precious or semi-precious gems. In the same way, compounds with low solubility will dissolve over extended time (geological time), resulting in significant effects such as extensive cave systems or karstic land surfaces. Solubility of ionic compounds in water. Some ionic compounds (salts) dissolve in water, which arises because of the attraction between positive and negative charges (see: solvation). For example, the salt's positive ions (e.g. Ag+) attract the partially negative oxygen atom in water. Likewise, the salt's negative ions (e.g. Cl−) attract the partially positive hydrogens in water. Note: the oxygen atom is partially negative because it is more electronegative than hydrogen, and vice versa (see: chemical polarity). However, there is a limit to how much salt can be dissolved in a given volume of water. This concentration is the solubility and is related to the solubility product, "K"sp. This equilibrium constant depends on the type of salt (AgCl vs. NaCl, for example), temperature, and the common ion effect. 
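For a sparingly soluble 1:1 salt such as AgCl, the limit follows directly from the solubility product: Ksp = [Ag+][Cl−] = s², so the molar solubility is s = √Ksp. A minimal sketch using the Ksp value quoted in the text:

```python
# Molar solubility of a 1:1 salt MX from its solubility product:
# Ksp = [M+][X-] = s^2, hence s = sqrt(Ksp).
import math

def molar_solubility_1to1(Ksp):
    """Molar solubility s (mol/L) of a 1:1 salt, no common-ion effects."""
    return math.sqrt(Ksp)

s_AgCl = molar_solubility_1to1(1.8e-10)  # Ksp of AgCl from the text
print(f"{s_AgCl:.2e} mol/L")  # about 1.34e-5 mol/L
```

Note this simple square-root form assumes a 1:1 stoichiometry and the absence of other silver or chloride salts; salts with other stoichiometries, or solutions with a common ion, need the full equilibrium expression.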
One can calculate the amount of AgCl that will dissolve in 1 liter of pure water as follows:
"K"sp = [Ag+] × [Cl−] / M² (definition of solubility product; M = mol/L)
"K"sp = 1.8 × 10⁻¹⁰ (from a table of solubility products)
[Ag+] = [Cl−], in the absence of other silver or chloride salts, so
[Ag+]² = 1.8 × 10⁻¹⁰ M²
[Ag+] = 1.34 × 10⁻⁵ mol/L
The result: 1 liter of water can dissolve 1.34 × 10⁻⁵ moles of AgCl at room temperature. Compared with other salts, AgCl is poorly soluble in water. For instance, table salt (NaCl) has a much higher "K"sp = 36 and is, therefore, more soluble. The following table gives an overview of solubility rules for various ionic compounds. Solubility of organic compounds. The principle outlined above under polarity, that "like dissolves like", is the usual guide to solubility with organic systems. For example, petroleum jelly will dissolve in gasoline because both petroleum jelly and gasoline are non-polar hydrocarbons. It will not, on the other hand, dissolve in ethyl alcohol or water, since the polarity of these solvents is too high. Sugar will not dissolve in gasoline, since sugar is too polar in comparison with gasoline. A mixture of gasoline and sugar can therefore be separated by filtration or extraction with water. Solid solution. This term is often used in the field of metallurgy to refer to the extent that an alloying element will dissolve into the base metal without forming a separate phase. The solvus or solubility line (or curve) is the line (or lines) on a phase diagram that give the limits of solute addition. That is, the lines show the maximum amount of a component that can be added to another component and still be in solid solution. In the solid's crystalline structure, the 'solute' element can either take the place of the matrix within the lattice (a substitutional position; for example, chromium in iron) or take a place in a space between the lattice points (an interstitial position; for example, carbon in iron). 
In microelectronic fabrication, solid solubility refers to the maximum concentration of impurities one can place into the substrate. In solid compounds (as opposed to elements), the solubility of a solute element can also depend on the phases separating out in equilibrium. For example, the amount of Sn soluble in the ZnSb phase can depend significantly on whether the phases separating out in equilibrium are (Zn4Sb3+Sn(L)) or (ZnSnSb2+Sn(L)). Besides these, the ZnSb compound with Sn as a solute can separate out into other combinations of phases after the solubility limit is reached, depending on the initial chemical composition during synthesis. Each combination produces a different solubility of Sn in ZnSb. Hence, solubility studies in compounds that are concluded upon the first instance of observing secondary phases separating out might underestimate the solubility. While the maximum number of phases separating out at once in equilibrium can be determined by the Gibbs phase rule, for chemical compounds there is no limit on the number of such phase-separating combinations itself. Hence, establishing the "maximum solubility" in solid compounds experimentally can be difficult, requiring equilibration of many samples. If the dominant crystallographic defect (mostly interstitial or substitutional point defects) involved in the solid solution can be chemically intuited beforehand, then using some simple thermodynamic guidelines can considerably reduce the number of samples required to establish maximum solubility. Incongruent dissolution. Many substances dissolve congruently (i.e. the composition of the solid and the dissolved solute stoichiometrically match). However, some substances may dissolve incongruently, whereby the composition of the solute in solution does not match that of the solid. This solubilization is accompanied by alteration of the "primary solid" and possibly formation of a secondary solid phase. 
However, in general, some primary solid also remains and a complex solubility equilibrium is established. For example, dissolution of albite may result in formation of gibbsite. In this case, the solubility of albite is expected to depend on the solid-to-solvent ratio. This kind of solubility is of great importance in geology, where it results in formation of metamorphic rocks. In principle, both congruent and incongruent dissolution can lead to the formation of secondary solid phases in equilibrium. So, in the field of materials science, the solubility for both cases is described more generally on chemical composition phase diagrams. Solubility prediction. Solubility is a property of interest in many aspects of science, including but not limited to: environmental predictions, biochemistry, pharmacy, drug design, agrochemical design, and protein ligand binding. Aqueous solubility is of fundamental interest owing to the vital biological and transportation functions played by water. In addition to this clear scientific interest in water solubility and solvent effects, accurate predictions of solubility are important industrially. The ability to accurately predict a molecule's solubility represents potentially large financial savings in many chemical product development processes, such as pharmaceuticals. In the pharmaceutical industry, solubility predictions form part of the early stage lead optimisation process of drug candidates. Solubility remains a concern all the way to formulation. A number of methods have been applied to such predictions including quantitative structure–activity relationships (QSAR), quantitative structure–property relationships (QSPR) and data mining. These models provide efficient predictions of solubility and represent the current standard. The drawback of such models is that they can lack physical insight. 
A method founded in physical theory, capable of achieving similar levels of accuracy at a sensible cost, would be a powerful tool scientifically and industrially. Methods founded in physical theory tend to use thermodynamic cycles, a concept from classical thermodynamics. The two common thermodynamic cycles used involve either the calculation of the free energy of sublimation (solid to gas without going through a liquid state) and the free energy of solvating a gaseous molecule (gas to solution), or the free energy of fusion (solid to a molten phase) and the free energy of mixing (molten to solution). These two processes are represented in the following diagrams. These cycles have been used for attempts at first principles predictions (solving using the fundamental physical equations) using physically motivated solvent models, to create parametric equations and QSPR models and combinations of the two. The use of these cycles enables the calculation of the solvation free energy indirectly via either gas (in the sublimation cycle) or a melt (fusion cycle). This is helpful as calculating the free energy of solvation directly is extremely difficult. The free energy of solvation can be converted to a solubility value using various formulae, the most general case being shown below, where the numerator is the free energy of solvation, "R" is the gas constant and "T" is the temperature in kelvins. formula_21 Well-known fitted equations for solubility prediction are the general solubility equations. These equations stem from the work of Yalkowsky "et al". The original formula is given first, followed by a revised formula which takes a different assumption of complete miscibility in octanol. formula_22 formula_23 These equations are founded on the principles of the fusion cycle. Notes. &lt;templatestyles src="Reflist/styles.css" /&gt; References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": " \left(\frac{\partial \ln N_i}{\partial P} \right)_T = -\frac{V_{i,aq}-V_{i,cr}} {RT} " }, { "math_id": 1, "text": "i" }, { "math_id": 2, "text": "N_i" }, { "math_id": 3, "text": "P" }, { "math_id": 4, "text": "T" }, { "math_id": 5, "text": "V_{i,aq}" }, { "math_id": 6, "text": "V_{i,cr}" }, { "math_id": 7, "text": "i" }, { "math_id": 8, "text": "R" }, { "math_id": 9, "text": " p = k_{\rm H}\, c " }, { "math_id": 10, "text": "k_{\rm H}" }, { "math_id": 11, "text": "p" }, { "math_id": 12, "text": "c" }, { "math_id": 13, "text": "\frac {\mathrm{d}m} {\mathrm{d}t} = A \frac {D} {d} (C_\mathrm{s}-C_\mathrm{b})" }, { "math_id": 14, "text": "m" }, { "math_id": 15, "text": "t" }, { "math_id": 16, "text": "A" }, { "math_id": 17, "text": "D" }, { "math_id": 18, "text": "d" }, { "math_id": 19, "text": "C_s" }, { "math_id": 20, "text": "C_b" }, { "math_id": 21, "text": "\log S(V_{m}) = \frac{\Delta G_\text{solvation}}{-2.303RT}" }, { "math_id": 22, "text": "\n \log_{10} (S) = 0.8 - \log_{10} (P) - 0.01(\text{melting point} -25)\n " }, { "math_id": 23, "text": "\n \log_{10} (S) = 0.5 - \log_{10} (P) - 0.01(\text{melting point} -25)\n " } ]
https://en.wikipedia.org/wiki?curid=59497
59497796
Zero point (photometry)
Calibration factor in a photometric system In astronomy, the zero point in a photometric system is defined as the magnitude of an object that produces 1 count per second on the detector. The zero point is used to calibrate a system to the standard magnitude system, as the flux detected from stars will vary from detector to detector. Traditionally, Vega is used as the calibration star for the zero point magnitude in specific pass bands (U, B, and V), although often, an average of multiple stars is used for higher accuracy. It is not often practical to find Vega in the sky to calibrate the detector, so for general purposes, any star may be used in the sky that has a known apparent magnitude. General formula. The equation for the magnitude of an object in a given band is formula_0 where M is the magnitude of an object, F is the flux at a specific wavelength, and S is the sensitivity function of a given instrument. Under ideal conditions, the sensitivity is 1 inside a pass band and 0 outside a pass band. The constant C is determined from the zero point magnitude using the above equation, by setting the magnitude equal to 0. Vega as calibration. Under most circumstances, Vega is used as the zero point, but in reality, an elaborate "bootstrap" system is used to calibrate a detector. The calibration typically takes place through extensive observational photometry as well as the use of theoretical atmospheric models. Bolometric magnitude zero point. While the zero point is defined to be that of Vega for passband filters, there is no defined zero point for bolometric magnitude, and traditionally, the calibrating star has been the sun. However, the IAU has recently defined the absolute bolometric magnitude and apparent bolometric magnitude zero points to be 3.0128×10^28 W and 2.51802×10^−8 W/m^2, respectively. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
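In practice, converting a measured count rate to a calibrated magnitude is one line of arithmetic: since the zero point is defined as the magnitude of a 1 count/s source, m = ZP − 2.5·log10(count rate). A minimal sketch (the numerical zero point below is illustrative, not a real instrument's value):

```python
import math

def magnitude(count_rate, zero_point):
    """Apparent magnitude from a detector count rate (counts per second).

    By the definition in the text, a source producing exactly
    1 count/s has magnitude equal to the zero point.
    """
    return zero_point - 2.5 * math.log10(count_rate)
```

A source 100 times brighter than the 1 count/s reference comes out exactly 5 magnitudes brighter, matching the logarithmic magnitude scale.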
[ { "math_id": 0, "text": "M = -2.5\\log_{10}\\left(\\int_0^\\infty F(\\lambda)\\,S\\,d\\lambda\\right) + C," } ]
https://en.wikipedia.org/wiki?curid=59497796
5949923
Pontecorvo–Maki–Nakagawa–Sakata matrix
Model of neutrino oscillation In particle physics, the Pontecorvo–Maki–Nakagawa–Sakata matrix (PMNS matrix), Maki–Nakagawa–Sakata matrix (MNS matrix), lepton mixing matrix, or neutrino mixing matrix is a unitary mixing matrix which contains information on the mismatch of quantum states of neutrinos when they propagate freely and when they take part in weak interactions. It is a model of neutrino oscillation. This matrix was introduced in 1962 by Ziro Maki, Masami Nakagawa, and Shoichi Sakata, to explain the neutrino oscillations predicted by Bruno Pontecorvo. The PMNS matrix. The Standard Model of particle physics contains three generations or "flavors" of neutrinos, formula_0, formula_1, and formula_2, each labeled with a subscript showing the charged lepton that it partners with in the charged-current weak interaction. These three eigenstates of the weak interaction form a complete, orthonormal basis for the Standard Model neutrino. Similarly, one can construct an eigenbasis out of three neutrino states of definite mass, formula_3, formula_4, and formula_5, which diagonalize the neutrino's free-particle Hamiltonian. Observations of neutrino oscillation established experimentally that for neutrinos, as for quarks, these two eigenbases are different – they are 'rotated' relative to each other. Consequently, each flavor eigenstate can be written as a combination of mass eigenstates, called a "superposition", and vice versa. The PMNS matrix, with components formula_6 corresponding to the amplitude of mass eigenstate formula_7 in terms of flavor formula_8 "e", "μ", "τ"; parameterizes the unitary transformation between the two bases: formula_9 The vector on the left represents a generic neutrino expressed in the flavor-eigenstate basis, and on the right is the PMNS matrix multiplied by a vector representing that same neutrino in the mass-eigenstate basis. 
A neutrino of a given flavor formula_10 is thus a "mixed" state of neutrinos with distinct mass: If one could measure directly that neutrino's mass, it would be found to have mass formula_11 with probability formula_12. The PMNS matrix for antineutrinos is identical to the matrix for neutrinos under CPT symmetry. Due to the difficulties of detecting neutrinos, it is much more difficult to determine the individual coefficients than in the equivalent matrix for the quarks (the CKM matrix). Assumptions. Standard Model. In the Standard Model, the PMNS matrix is unitary. This implies that the sum of the squares of the values in each row and in each column, which represent the probabilities of different possible events given the same starting point, add up to 100%. In the simplest case, the Standard Model posits three generations of neutrinos with Dirac mass that oscillate between three neutrino mass eigenvalues, an assumption that is made when best fit values for its parameters are calculated. Other models. In other models the PMNS matrix is not necessarily unitary, and additional parameters are necessary to describe all possible neutrino mixing parameters in other models of neutrino oscillation and mass generation, such as the see-saw model, and in general, in the case of neutrinos that have Majorana mass rather than Dirac mass. There are also additional mass parameters and mixing angles in a simple extension of the PMNS matrix in which there are more than three flavors of neutrinos, regardless of the character of neutrino mass. As of July 2014, scientists studying neutrino oscillation are actively considering fits of the experimental neutrino oscillation data to an extended PMNS matrix with a fourth, light "sterile" neutrino and four mass eigenvalues, although the current experimental data tends to disfavor that possibility. Parameterization. In general, there are nine degrees of freedom in any unitary three by three matrix. 
However, in the case of the PMNS matrix, five of those real parameters can be absorbed as phases of the lepton fields and thus the PMNS matrix can be fully described by four free parameters. The PMNS matrix is most commonly parameterized by three mixing angles (formula_13, formula_14, and formula_15) and a single phase angle called formula_16 related to charge-parity violations (i.e. differences in the rates of oscillation between two states with opposite starting points which makes the order in time in which events take place necessary to predict their oscillation rates), in which case the matrix can be written as: formula_17 where formula_18 and formula_19 are used to denote formula_20 and formula_21 respectively. In the case of Majorana neutrinos, two extra complex phases are needed, as the phase of Majorana fields cannot be freely redefined due to the condition formula_22. An infinite number of possible parameterizations exist; one other common example being the Wolfenstein parameterization. The mixing angles have been measured by a variety of experiments (see neutrino mixing for a description). The CP-violating phase formula_23 has not been measured directly, but estimates can be obtained by fits using the other measurements. Experimentally measured parameter values. As of November 2022, the current best-fit values from Nu-FIT.org, from direct and indirect measurements, using normal ordering, are: formula_24 As of November 2022, the 3 σ ranges (99.7% confidence) for the magnitudes of the elements of the matrix were: formula_25 Notes. &lt;templatestyles src="Reflist/styles.css" /&gt; References. &lt;templatestyles src="Reflist/styles.css" /&gt;
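The parameterization above is straightforward to check numerically: build the PMNS matrix as the product of the three factor matrices and verify unitarity row by row. A sketch in plain Python, using the November 2022 best-fit angles quoted above (the helper names are invented for this example):

```python
import cmath
import math

def matmul3(a, b):
    """3x3 matrix product for nested-list matrices."""
    return [[sum(a[i][k] * b[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

def pmns(theta12, theta23, theta13, delta_cp):
    """PMNS matrix as R23 * U13(delta_CP) * R12, angles in radians."""
    s12, c12 = math.sin(theta12), math.cos(theta12)
    s23, c23 = math.sin(theta23), math.cos(theta23)
    s13, c13 = math.sin(theta13), math.cos(theta13)
    r23 = [[1, 0, 0], [0, c23, s23], [0, -s23, c23]]
    u13 = [[c13, 0, s13 * cmath.exp(-1j * delta_cp)],
           [0, 1, 0],
           [-s13 * cmath.exp(1j * delta_cp), 0, c13]]
    r12 = [[c12, s12, 0], [-s12, c12, 0], [0, 0, 1]]
    return matmul3(matmul3(r23, u13), r12)
```

Each row's squared magnitudes sum to 1 (the flavor-composition probabilities discussed above), and |U_e3| = sin θ13 lands inside the quoted 3σ range for that element.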
[ { "math_id": 0, "text": "\\nu_\\mathrm{e}" }, { "math_id": 1, "text": "\\nu_\\mu" }, { "math_id": 2, "text": "\\nu_\\tau" }, { "math_id": 3, "text": "\\nu_1" }, { "math_id": 4, "text": "\\nu_2" }, { "math_id": 5, "text": "\\nu_3" }, { "math_id": 6, "text": "U_{\\alpha\\,i}" }, { "math_id": 7, "text": "\\,i = 1, 2, 3\\;" }, { "math_id": 8, "text": "~ \\alpha = \\;" }, { "math_id": 9, "text": "\\begin{bmatrix} ~ \\nu_\\mathrm{e} \\\\ ~ \\nu_\\mu \\\\ ~ \\nu_\\tau ~ \\end{bmatrix} \n= \\begin{bmatrix} ~ U_{\\mathrm{e} 1} ~ & ~ U_{\\mathrm{e} 2} ~ & ~ U_{\\mathrm{e} 3} \\\\ ~ U_{\\mu 1} & ~ U_{\\mu 2} ~ & ~ U_{\\mu 3} \\\\ ~ U_{\\tau 1} ~ & ~ U_{\\tau 2} ~ & ~ U_{\\tau 3} \\end{bmatrix} \\begin{bmatrix} ~ \\nu_1 \\\\ ~ \\nu_2 \\\\ ~ \\nu_3 ~ \\end{bmatrix} ~." }, { "math_id": 10, "text": "\\alpha" }, { "math_id": 11, "text": "m_i" }, { "math_id": 12, "text": "\\left|U_{\\alpha\\,i}\\right|^2" }, { "math_id": 13, "text": "\\theta_{12}" }, { "math_id": 14, "text": "\\theta_{23}" }, { "math_id": 15, "text": "\\theta_{13}" }, { "math_id": 16, "text": "\\delta_{\\mathrm{CP}}" }, { "math_id": 17, "text": " \\begin{align} & \\begin{bmatrix} 1 & 0 & 0 \\\\ 0 & c_{23} & s_{23} \\\\ 0 & -s_{23} & c_{23} \\end{bmatrix}\n \\begin{bmatrix} c_{13} & 0 & s_{13}e^{-i\\delta_\\mathrm{CP}} \\\\ 0 & 1 & 0 \\\\ -s_{13}e^{i\\delta_\\mathrm{CP}} & 0 & c_{13} \\end{bmatrix}\n \\begin{bmatrix} c_{12} & s_{12} & 0 \\\\ -s_{12} & c_{12} & 0 \\\\ 0 & 0 & 1 \\end{bmatrix} \\\\\n & = \\begin{bmatrix} c_{12}c_{13} & s_{12} c_{13} & s_{13}e^{-i\\delta_\\mathrm{CP}} \\\\\n -s_{12}c_{23} - c_{12}s_{23}s_{13}e^{i\\delta_\\mathrm{CP}} & c_{12}c_{23} - s_{12}s_{23}s_{13}e^{i\\delta_\\mathrm{CP}} & s_{23}c_{13}\\\\\n s_{12}s_{23} - c_{12}c_{23}s_{13}e^{i\\delta_\\mathrm{CP}} & -c_{12}s_{23} - s_{12}c_{23}s_{13}e^{i\\delta_\\mathrm{CP}} & c_{23}c_{13} \\end{bmatrix}, \\end{align} " }, { "math_id": 18, "text": "s_{ij}" }, { "math_id": 19, "text": "c_{ij}" }, { "math_id": 20, "text": "\\sin\\theta_{ij}" }, 
{ "math_id": 21, "text": "\\cos\\theta_{ij}" }, { "math_id": 22, "text": "\\nu = \\nu^c~" }, { "math_id": 23, "text": "\\delta_\\mathrm{CP}" }, { "math_id": 24, "text": "\n\\begin{align}\n\\theta_{12} & = {33.41^\\circ}^{+0.75^\\circ}_{-0.72^\\circ} \\\\\n\\theta_{23} & = {49.1^\\circ}^{+1.0^\\circ}_{-1.3^\\circ}\\\\\n\\theta_{13} & = {8.54^\\circ}^{+0.11^\\circ}_{-0.12^\\circ} \\\\\n\\delta_{\\textrm{CP}} & = {197^\\circ}^{+42^\\circ}_{-25^\\circ} \\\\\n\\end{align}\n" }, { "math_id": 25, "text": "\n|U| = \\begin{bmatrix}\n ~ |U_{\\mathrm{e} 1}| ~ & |U_{\\mathrm{e} 2}| ~ & |U_{\\mathrm{e} 3}| \\\\\n ~ |U_{\\mu 1}| ~ & |U_{\\mu 2}| ~ & |U_{\\mu 3}| \\\\\n ~ |U_{\\tau 1}| ~ & |U_{\\tau 2}| ~ & |U_{\\tau 3}| ~ \n\\end{bmatrix} = \\left[\\begin{array}{rrr}\n ~ 0.803 \\sim 0.845 ~~ & 0.514 \\sim 0.578 ~~ & 0.142 \\sim 0.155 ~ \\\\\n ~ 0.233 \\sim 0.505 ~~ & 0.460 \\sim 0.693 ~~ & 0.630 \\sim 0.779 ~ \\\\\n ~ 0.262 \\sim 0.525 ~~ & 0.473 \\sim 0.702 ~~ & 0.610 \\sim 0.762 ~\n\\end{array}\\right]\n" }, { "math_id": 26, "text": "\\theta_{12} =\\," }, { "math_id": 27, "text": "\\theta_{23} =\\," }, { "math_id": 28, "text": "\\theta_{13} =\\," }, { "math_id": 29, "text": "\\theta_{12} \\approx 35.3^\\circ\\,," }, { "math_id": 30, "text": "\\theta_{23} = 45^\\circ\\,," }, { "math_id": 31, "text": "\\theta_{13} = 0^\\circ\\," }, { "math_id": 32, "text": " \\delta_{\\textrm{CP}} = {197^\\circ}^{+42^\\circ}_{-25^\\circ} " }, { "math_id": 33, "text": "\\, {169^\\circ} \\le \\delta_{\\textrm{CP}} \\le {246^\\circ} \\," } ]
https://en.wikipedia.org/wiki?curid=5949923
59500109
Correlation function
Correlation as a function of distance A correlation function is a function that gives the statistical correlation between random variables, contingent on the spatial or temporal distance between those variables. If one considers the correlation function between random variables representing the same quantity measured at two different points, then this is often referred to as an autocorrelation function, which is made up of autocorrelations. Correlation functions of different random variables are sometimes called cross-correlation functions to emphasize that different variables are being considered and because they are made up of cross-correlations. Correlation functions are a useful indicator of dependencies as a function of distance in time or space, and they can be used to assess the distance required between sample points for the values to be effectively uncorrelated. In addition, they can form the basis of rules for interpolating values at points for which there are no observations. Correlation functions used in astronomy, financial analysis, econometrics, and statistical mechanics differ only in the particular stochastic processes they are applied to. In quantum field theory there are correlation functions over quantum distributions. Definition. For possibly distinct random variables "X"("s") and "Y"("t") at different points "s" and "t" of some space, the correlation function is formula_0 where formula_1 is described in the article on correlation. In this definition, it has been assumed that the stochastic variables are scalar-valued. If they are not, then more complicated correlation functions can be defined. For example, if "X"("s") is a random vector with "n" elements and "Y"(t) is a vector with "q" elements, then an "n"×"q" matrix of correlation functions is defined with formula_2 element formula_3 When "n"="q", sometimes the trace of this matrix is focused on. If the probability distributions have any target space symmetries, i.e. 
symmetries in the value space of the stochastic variable (also called internal symmetries), then the correlation matrix will have induced symmetries. Similarly, if there are symmetries of the space (or time) domain in which the random variables exist (also called spacetime symmetries), then the correlation function will have corresponding space or time symmetries. Examples of important spacetime symmetries are — Higher order correlation functions are often defined. A typical correlation function of order "n" is (the angle brackets represent the expectation value) formula_4 If the random vector has only one component variable, then the indices formula_2 are redundant. If there are symmetries, then the correlation function can be broken up into irreducible representations of the symmetries — both internal and spacetime. Properties of probability distributions. With these definitions, the study of correlation functions is similar to the study of probability distributions. Many stochastic processes can be completely characterized by their correlation functions; the most notable example is the class of Gaussian processes. Probability distributions defined on a finite number of points can always be normalized, but when these are defined over continuous spaces, then extra care is called for. The study of such distributions started with the study of random walks and led to the notion of the Itō calculus. The Feynman path integral in Euclidean space generalizes this to other problems of interest to statistical mechanics. Any probability distribution which obeys a condition on correlation functions called reflection positivity leads to a local quantum field theory after Wick rotation to Minkowski spacetime (see Osterwalder-Schrader axioms). The operation of renormalization is a specified set of mappings from the space of probability distributions to itself. A quantum field theory is called renormalizable if this mapping has a fixed point which gives a quantum field theory. 
References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "C(s,t) = \\operatorname{corr} ( X(s), Y(t) ) ," }, { "math_id": 1, "text": "\\operatorname{corr}" }, { "math_id": 2, "text": "i,j" }, { "math_id": 3, "text": "C_{ij}(s,t) = \\operatorname{corr}( X_i(s), Y_j(t) )." }, { "math_id": 4, "text": "C_{i_1i_2\\cdots i_n}(s_1,s_2,\\cdots,s_n) = \\langle X_{i_1}(s_1) X_{i_2}(s_2) \\cdots X_{i_n}(s_n)\\rangle." } ]
https://en.wikipedia.org/wiki?curid=59500109
59500649
Simple Lie algebra
In algebra, a simple Lie algebra is a Lie algebra that is non-abelian and contains no nonzero proper ideals. The classification of real simple Lie algebras is one of the major achievements of Wilhelm Killing and Élie Cartan. A direct sum of simple Lie algebras is called a semisimple Lie algebra. A simple Lie group is a connected Lie group whose Lie algebra is simple. Complex simple Lie algebras. A finite-dimensional simple complex Lie algebra is isomorphic to one of the following: formula_0, formula_1, formula_2 (classical Lie algebras) or one of the five exceptional Lie algebras. To each finite-dimensional complex semisimple Lie algebra formula_3, there exists a corresponding diagram (called the Dynkin diagram) where the nodes denote the simple roots, the nodes are joined (or not joined) by a number of lines depending on the angles between the simple roots, and arrows indicate whether the roots are longer or shorter. The Dynkin diagram of formula_3 is connected if and only if formula_3 is simple. All possible connected Dynkin diagrams are the following: where "n" is the number of the nodes (the simple roots). The correspondence of the diagrams and complex simple Lie algebras is as follows: (A"n") formula_4 (B"n") formula_5 (C"n") formula_6 (D"n") formula_7 The remaining diagrams correspond to the exceptional Lie algebras. Real simple Lie algebras. If formula_8 is a finite-dimensional real simple Lie algebra, its complexification is either (1) simple or (2) a product of a simple complex Lie algebra and its conjugate. For example, the complexification of formula_0 thought of as a real Lie algebra is formula_9. Thus, a real simple Lie algebra can be classified by the classification of complex simple Lie algebras together with some additional information. This can be done using Satake diagrams, which generalize Dynkin diagrams. See also Table of Lie groups#Real Lie algebras for a partial list of real simple Lie algebras. Notes. &lt;templatestyles src="Reflist/styles.css" /&gt;
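The smallest classical example, sl(2) (traceless 2×2 matrices under the commutator bracket), can be checked concretely. The basis below is one illustrative choice, picked so that the brackets come out as [e1,e2] = e1, [e2,e3] = e3, [e1,e3] = 2·e2 — a common presentation of the sl(2,ℝ) relations:

```python
def matmul2(a, b):
    """2x2 matrix product for nested-list matrices."""
    return [[sum(a[i][k] * b[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def commutator(a, b):
    """[a, b] = ab - ba for 2x2 matrices."""
    ab, ba = matmul2(a, b), matmul2(b, a)
    return [[ab[i][j] - ba[i][j] for j in range(2)] for i in range(2)]

# A traceless basis of sl(2, R):
e1 = [[0.0, 1.0], [0.0, 0.0]]
e2 = [[-0.5, 0.0], [0.0, 0.5]]
e3 = [[0.0, 0.0], [-1.0, 0.0]]
```

Non-abelianness is visible immediately: the brackets of distinct basis elements are nonzero.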
[ { "math_id": 0, "text": "\\mathfrak{sl}_n \\mathbb{C}" }, { "math_id": 1, "text": "\\mathfrak{so}_n \\mathbb{C}" }, { "math_id": 2, "text": "\\mathfrak{sp}_{2n} \\mathbb{C}" }, { "math_id": 3, "text": "\\mathfrak{g}" }, { "math_id": 4, "text": "\\quad \\mathfrak{sl}_{n+1} \\mathbb{C}" }, { "math_id": 5, "text": "\\quad \\mathfrak{so}_{2n+1} \\mathbb{C}" }, { "math_id": 6, "text": "\\quad \\mathfrak{sp}_{2n} \\mathbb{C}" }, { "math_id": 7, "text": "\\quad \\mathfrak{so}_{2n} \\mathbb{C}" }, { "math_id": 8, "text": "\\mathfrak{g}_0" }, { "math_id": 9, "text": "\\mathfrak{sl}_n \\mathbb{C} \\times \\overline{\\mathfrak{sl}_n \\mathbb{C}}" } ]
https://en.wikipedia.org/wiki?curid=59500649
59500886
Classification of low-dimensional real Lie algebras
This mathematics-related list provides Mubarakzyanov's classification of low-dimensional real Lie algebras, published in Russian in 1963. It complements the article on Lie algebra in the area of abstract algebra. An English version and review of this classification was published by Popovych et al. in 2003. Mubarakzyanov's Classification. Let formula_0 be formula_1-dimensional Lie algebra over the field of real numbers with generators formula_2, formula_3. For each algebra formula_4 we adduce only non-zero commutators between basis elements. formula_10 formula_14 formula_16 formula_18 formula_22 formula_24 formula_27 formula_30 Three-dimensional. Algebra formula_17 can be considered as an extreme case of formula_23, when formula_31, forming contraction of Lie algebra. Over the field formula_32 algebras formula_23, formula_28 are isomorphic to formula_33 and formula_25, respectively. formula_36 formula_38 formula_14 formula_41 formula_18 formula_22 formula_45 formula_47 formula_49 formula_51 formula_53 formula_55 formula_57 formula_59 formula_61 formula_63 formula_65 formula_67 formula_69 Four-dimensional. Algebra formula_70 can be considered as an extreme case of formula_71, when formula_72, forming contraction of Lie algebra. Over the field formula_32 algebras formula_44, formula_48, formula_73, formula_74, formula_75 are isomorphic to formula_43, formula_46, formula_76, formula_77, formula_78, respectively. Notes. &lt;templatestyles src="Reflist/styles.css" /&gt;
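Each entry in such a list can be sanity-checked mechanically: the structure constants must be antisymmetric and satisfy the Jacobi identity. A sketch for formula_28, i.e. the algebra with non-zero commutators [e2,e3] = e1, [e3,e1] = e2, [e1,e2] = e3; the same pattern works for any algebra in the list by swapping the table `C` (the table encoding and helper names here are invented for the example):

```python
# Non-zero structure constants of the algebra with [e2,e3]=e1,
# [e3,e1]=e2, [e1,e2]=e3; keys are 1-based basis index pairs,
# values the coordinates of the resulting bracket.
C = {(2, 3): (1, 0, 0), (3, 1): (0, 1, 0), (1, 2): (0, 0, 1)}

def bracket(x, y):
    """Bilinear extension of the bracket table to coordinate vectors."""
    out = [0.0, 0.0, 0.0]
    for i in range(3):
        for j in range(3):
            if i == j or x[i] * y[j] == 0:
                continue
            if (i + 1, j + 1) in C:
                v, sign = C[(i + 1, j + 1)], 1.0
            elif (j + 1, i + 1) in C:
                v, sign = C[(j + 1, i + 1)], -1.0  # antisymmetry
            else:
                continue
            for k in range(3):
                out[k] += sign * x[i] * y[j] * v[k]
    return out

def jacobiator(x, y, z):
    """[x,[y,z]] + [y,[z,x]] + [z,[x,y]]; zero when Jacobi holds."""
    t1 = bracket(x, bracket(y, z))
    t2 = bracket(y, bracket(z, x))
    t3 = bracket(z, bracket(x, y))
    return [t1[k] + t2[k] + t3[k] for k in range(3)]
```

Checking the jacobiator on all triples of basis vectors suffices, since it is trilinear in its arguments.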
[ { "math_id": 0, "text": "{\\mathfrak g}_n" }, { "math_id": 1, "text": "n" }, { "math_id": 2, "text": " e_1, \\dots, e_n " }, { "math_id": 3, "text": " n \\leq 4" }, { "math_id": 4, "text": "{\\mathfrak g}" }, { "math_id": 5, "text": "{\\mathfrak g}_1" }, { "math_id": 6, "text": "2{\\mathfrak g}_1" }, { "math_id": 7, "text": "\\mathbb{R}^2" }, { "math_id": 8, "text": "{\\mathfrak g}_{2.1}" }, { "math_id": 9, "text": "\\mathfrak{aff}(1)=\\left\\{\\begin{pmatrix} a&b \\\\ 0&0 \\end{pmatrix}\\,:\\,a,b\\in\\mathbb{R}\\right\\}" }, { "math_id": 10, "text": "[e_1, e_2] = e_1." }, { "math_id": 11, "text": "3{\\mathfrak g}_1" }, { "math_id": 12, "text": "{\\mathfrak g}_{2.1}\\oplus {\\mathfrak g}_1 " }, { "math_id": 13, "text": "{\\mathfrak g}_{3.1}" }, { "math_id": 14, "text": "[e_2, e_3] = e_1;" }, { "math_id": 15, "text": "{\\mathfrak g}_{3.2}" }, { "math_id": 16, "text": "[e_1, e_3] = e_1, \\quad [e_2, e_3] = e_1 + e_2; " }, { "math_id": 17, "text": "{\\mathfrak g}_{3.3}" }, { "math_id": 18, "text": "[e_1, e_3] = e_1, \\quad [e_2, e_3] = e_2;" }, { "math_id": 19, "text": "{\\mathfrak g}_{3.4}" }, { "math_id": 20, "text": "\\mathfrak{p}(1,1)" }, { "math_id": 21, "text": "\\alpha = -1" }, { "math_id": 22, "text": "[e_1, e_3] = e_1, \\quad [e_2, e_3] = \\alpha e_2, \\quad -1 \\leq \\alpha < 1, \\quad \\alpha \\neq 0;" }, { "math_id": 23, "text": "{\\mathfrak g}_{3.5}" }, { "math_id": 24, "text": "[e_1, e_3] = \\beta e_1 - e_2, \\quad [e_2, e_3] = e_1 + \\beta e_2, \\quad \\beta \\geq 0;" }, { "math_id": 25, "text": "{\\mathfrak g}_{3.6}" }, { "math_id": 26, "text": "\\mathfrak{sl}(2, \\mathbb R )," }, { "math_id": 27, "text": "[e_1, e_2] = e_1, \\quad [e_2, e_3] = e_3, \\quad [e_1, e_3] = 2 e_2;" }, { "math_id": 28, "text": "{\\mathfrak g}_{3.7}" }, { "math_id": 29, "text": "\\mathfrak{so}(3)," }, { "math_id": 30, "text": "[e_2, e_3] = e_1, \\quad [e_3, e_1] = e_2, \\quad [e_1, e_2] = e_3." 
}, { "math_id": 31, "text": " \\beta \\rightarrow \\infty " }, { "math_id": 32, "text": "{\\mathbb C}" }, { "math_id": 33, "text": "{\\mathfrak g}_{3.4} " }, { "math_id": 34, "text": "4{\\mathfrak g}_1" }, { "math_id": 35, "text": "{\\mathfrak g}_{2.1} \\oplus 2{\\mathfrak g}_1" }, { "math_id": 36, "text": "[e_1, e_2] = e_1;" }, { "math_id": 37, "text": "2{\\mathfrak g}_{2.1}" }, { "math_id": 38, "text": "[e_1, e_2] = e_1 \\quad [e_3, e_4] = e_3;" }, { "math_id": 39, "text": "{\\mathfrak g}_{3.1} \\oplus {\\mathfrak g}_1" }, { "math_id": 40, "text": "{\\mathfrak g}_{3.2} \\oplus {\\mathfrak g}_1" }, { "math_id": 41, "text": "[e_1, e_3] = e_1, \\quad [e_2, e_3] = e_1 + e_2;" }, { "math_id": 42, "text": "{\\mathfrak g}_{3.3} \\oplus {\\mathfrak g}_1" }, { "math_id": 43, "text": "{\\mathfrak g}_{3.4} \\oplus {\\mathfrak g}_1" }, { "math_id": 44, "text": "{\\mathfrak g}_{3.5} \\oplus {\\mathfrak g}_1" }, { "math_id": 45, "text": "[e_1, e_3] = \\beta e_1 - e_2 \\quad [e_2, e_3] = e_1 + \\beta e_2, \\quad \\beta \\geq 0;" }, { "math_id": 46, "text": "{\\mathfrak g}_{3.6} \\oplus {\\mathfrak g}_1" }, { "math_id": 47, "text": "[e_1, e_2] = e_1, \\quad [e_2, e_3] = e_3, \\quad [e_1, e_3] = 2 e_2;" }, { "math_id": 48, "text": "{\\mathfrak g}_{3.7} \\oplus {\\mathfrak g}_1" }, { "math_id": 49, "text": "[e_1, e_2] = e_3, \\quad [e_2, e_3] = e_1, \\quad [e_3, e_1] = e_2;" }, { "math_id": 50, "text": "{\\mathfrak g}_{4.1} " }, { "math_id": 51, "text": "[e_2, e_4] = e_1, \\quad [e_3, e_4] = e_2;" }, { "math_id": 52, "text": "{\\mathfrak g}_{4.2} " }, { "math_id": 53, "text": "[e_1, e_4] = \\beta e_1, \\quad [e_2, e_4] = e_2, \\quad [e_3, e_4] = e_2 + e_3, \\quad \\beta \\neq 0;" }, { "math_id": 54, "text": "{\\mathfrak g}_{4.3} " }, { "math_id": 55, "text": "[e_1, e_4] = e_1, \\quad [e_3, e_4] = e_2;" }, { "math_id": 56, "text": "{\\mathfrak g}_{4.4} " }, { "math_id": 57, "text": "[e_1, e_4] = e_1, \\quad [e_2, e_4] = e_1 + e_2, \\quad [e_3, e_4] = e_2+e_3;" }, { "math_id": 58, 
"text": "{\\mathfrak g}_{4.5} " }, { "math_id": 59, "text": "[e_1, e_4] = \\alpha e_1, \\quad [e_2, e_4] = \\beta e_2, \\quad [e_3, e_4] = \\gamma e_3, \\quad \\alpha \\beta \\gamma \\neq 0;" }, { "math_id": 60, "text": "{\\mathfrak g}_{4.6} " }, { "math_id": 61, "text": "[e_1, e_4] = \\alpha e_1, \\quad [e_2, e_4] = \\beta e_2 - e_3, \\quad [e_3, e_4] = e_2 + \\beta e_3, \\quad \\alpha > 0;" }, { "math_id": 62, "text": "{\\mathfrak g}_{4.7} " }, { "math_id": 63, "text": "[e_2, e_3] = e_1, \\quad [e_1, e_4] = 2 e_1, \\quad [e_2, e_4] = e_2, \\quad [e_3, e_4] = e_2 + e_3;" }, { "math_id": 64, "text": "{\\mathfrak g}_{4.8} " }, { "math_id": 65, "text": "[e_2, e_3] = e_1, \\quad [e_1, e_4] = (1 + \\beta)e_1, \\quad [e_2, e_4] = e_2, \\quad [e_3, e_4] = \\beta e_3, \\quad -1 \\leq \\beta \\leq 1;" }, { "math_id": 66, "text": "{\\mathfrak g}_{4.9} " }, { "math_id": 67, "text": "[e_2, e_3] = e_1, \\quad [e_1, e_4] = 2 \\alpha e_1, \\quad [e_2, e_4] = \\alpha e_2 - e_3, \\quad [e_3, e_4] = e_2 + \\alpha e_3, \\quad \\alpha \\geq 0;" }, { "math_id": 68, "text": "{\\mathfrak g}_{4.10} " }, { "math_id": 69, "text": "[e_1, e_3] = e_1, \\quad [e_2, e_3] = e_2, \\quad [e_1, e_4] = -e_2, \\quad [e_2, e_4] = e_1." }, { "math_id": 70, "text": "{\\mathfrak g}_{4.3}" }, { "math_id": 71, "text": "{\\mathfrak g}_{4.2}" }, { "math_id": 72, "text": " \\beta \\rightarrow 0 " }, { "math_id": 73, "text": "{\\mathfrak g}_{4.6}" }, { "math_id": 74, "text": "{\\mathfrak g}_{4.9}" }, { "math_id": 75, "text": "{\\mathfrak g}_{4.10}" }, { "math_id": 76, "text": "{\\mathfrak g}_{4.5}" }, { "math_id": 77, "text": "{\\mathfrak g}_{4.8}" }, { "math_id": 78, "text": "{2\\mathfrak g}_{2.1}" } ]
https://en.wikipedia.org/wiki?curid=59500886
59501534
Lie operad
In mathematics, the Lie operad is an operad whose algebras are Lie algebras. The notion (at least one version) was introduced by Ginzburg and Kapranov in their formulation of Koszul duality. Definition à la Ginzburg–Kapranov. Fix a base field "k" and let formula_0 denote the free Lie algebra over "k" with generators formula_1 and formula_2 the subspace spanned by all the bracket monomials containing each formula_3 exactly once. The symmetric group formula_4 acts on formula_0 by permutations of the generators and, under that action, formula_5 is invariant. The operadic composition is given by substituting expressions (with renumbered variables) for variables. Then, formula_6 is an operad. Koszul dual. The Koszul dual of formula_7 is the commutative-ring operad, an operad whose algebras are the commutative rings over "k". Notes. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "\\mathcal{Lie}(x_1, \\dots, x_n)" }, { "math_id": 1, "text": "x_1, \\dots, x_n" }, { "math_id": 2, "text": "\\mathcal{Lie}(n) \\subset \\mathcal{Lie}(x_1, \\dots, x_n)" }, { "math_id": 3, "text": "x_i" }, { "math_id": 4, "text": "S_n" }, { "math_id": 5, "text": "\\mathcal{Lie}(n)" }, { "math_id": 6, "text": "\\mathcal{Lie} = \\{ \\mathcal{Lie}(n) \\}" }, { "math_id": 7, "text": "\\mathcal{Lie}" } ]
https://en.wikipedia.org/wiki?curid=59501534
5950276
Aerostatics
Study of gases that are not in motion A subfield of fluid statics, aerostatics is the study of gases that are not in motion with respect to the coordinate system in which they are considered. The corresponding study of gases in motion is called aerodynamics. Aerostatics studies the distribution of density, especially in air. One of the applications of this is the barometric formula. An aerostat is a lighter-than-air craft, such as an airship or balloon, which uses the principles of aerostatics to float. Basic laws. Treatment of the equations of gaseous behaviour at rest is generally taken, as in hydrostatics, to begin with a consideration of the general equations of momentum for fluid flow, which can be expressed as: formula_0, where formula_1 is the mass density of the fluid, formula_2 is the instantaneous velocity, formula_3 is fluid pressure, formula_4 are the external body forces acting on the fluid, and formula_5 is the viscous stress tensor. As the fluid's static nature mandates that formula_6 and that formula_7, the following set of partial differential equations representing the basic equations of aerostatics is found: formula_8 However, the presence of a non-constant density, as is found in gaseous fluid systems (due to the compressibility of gases), requires the inclusion of the ideal gas law: formula_9, where formula_10 denotes the specific gas constant and formula_11 the temperature of the gas, in order to obtain the valid aerostatic partial differential equations: formula_12, which can be employed to compute the pressure distribution in gases whose thermodynamic states are given by the equation of state for ideal gases. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
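For an isothermal ideal gas, the aerostatic balance and the ideal gas law combine into dP/dz = −(P/(R_s·T))·g, whose closed-form solution is the barometric formula. A sketch comparing a numerical integration against that closed form — the sea-level values and the dry-air specific gas constant below are assumed illustrative inputs, not taken from the text:

```python
import math

G = 9.81     # gravitational acceleration, m/s^2 (assumed)
RS = 287.05  # specific gas constant of dry air, J/(kg*K) (assumed)

def pressure_euler(z, p0=101325.0, temp=288.15, steps=100_000):
    """Forward-Euler integration of dP/dz = -(P / (RS * temp)) * G."""
    p, dz = p0, z / steps
    for _ in range(steps):
        p -= (p / (RS * temp)) * G * dz
    return p

def pressure_barometric(z, p0=101325.0, temp=288.15):
    """Closed-form barometric formula for an isothermal gas column."""
    return p0 * math.exp(-G * z / (RS * temp))
```

With a fine enough step, the Euler integration reproduces the exponential profile to well under 0.1% at a few kilometres of altitude.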
[ { "math_id": 0, "text": " \rho [{\partial U_j\over\partial t} + U_i {\partial U_j\over\partial x_i}] = -{\partial P\over\partial x_j} - {\partial \tau_{ij}\over\partial x_i} + \rho g_j " }, { "math_id": 1, "text": " \rho " }, { "math_id": 2, "text": " U_j " }, { "math_id": 3, "text": " P " }, { "math_id": 4, "text": " g " }, { "math_id": 5, "text": " \tau_{ij} " }, { "math_id": 6, "text": " U_j = 0 " }, { "math_id": 7, "text": " \tau_{ij} = 0 " }, { "math_id": 8, "text": " {\partial P\over\partial x_j} = \rho g_j " }, { "math_id": 9, "text": " {P\over\rho} = RT " }, { "math_id": 10, "text": " R " }, { "math_id": 11, "text": " T " }, { "math_id": 12, "text": " {\partial P\over\partial x_j} = \rho \hat{g_j} = {P\over RT} \hat{g_j}" } ]
https://en.wikipedia.org/wiki?curid=5950276
59503919
Pentagramma mirificum
Pentagramma mirificum (Latin for "miraculous pentagram") is a star polygon on a sphere, composed of five great circle arcs, all of whose internal angles are right angles. This shape was described by John Napier in his 1614 book "Mirifici Logarithmorum Canonis Descriptio" ("Description of the Admirable Table of Logarithms") along with rules that link the values of trigonometric functions of five parts of a right spherical triangle (two angles and three sides). The properties of "pentagramma mirificum" were studied, among others, by Carl Friedrich Gauss. Geometric properties. On a sphere, both the angles and the sides of a triangle (arcs of great circles) are measured as angles. There are five right angles, each measuring formula_0 at formula_1, formula_2, formula_3, formula_4, and formula_5 There are ten arcs, each measuring formula_6 formula_7, formula_8, formula_9, formula_10, formula_11, formula_12, formula_13, formula_14, formula_15, and formula_16 In the spherical pentagon formula_17, every vertex is the pole of the opposite side. For instance, point formula_18 is the pole of equator formula_19, point formula_20 — the pole of equator formula_21, etc. At each vertex of pentagon formula_17, the external angle is equal in measure to the opposite side. For instance, formula_22 etc. Napier's circles of spherical triangles formula_23, formula_24, formula_25, formula_26, and formula_27 are rotations of one another. Gauss's formulas. Gauss introduced the notation formula_28 The following identities hold, allowing the determination of any three of the above quantities from the two remaining ones: formula_29 Gauss proved the following "beautiful equality" ("schöne Gleichung"): formula_30 It is satisfied, for instance, by numbers formula_31, whose product formula_32 is equal to formula_33. 
Proof of the first part of the equality: formula_34 Proof of the second part of the equality: formula_35 From Gauss comes also the formula formula_36 where formula_37 is the area of pentagon formula_17. Gnomonic projection. The image of spherical pentagon formula_17 in the gnomonic projection (a projection from the centre of the sphere) onto any plane tangent to the sphere is a rectilinear pentagon. Its five vertices formula_38 unambiguously determine a conic section; in this case, an ellipse. Gauss showed that the altitudes of pentagram formula_38 (lines passing through vertices and perpendicular to opposite sides) intersect at a single point formula_39, which is the image of the point of tangency of the plane to the sphere. Arthur Cayley observed that, if we place the origin of a Cartesian coordinate system at point formula_39, then the coordinates of vertices formula_38: formula_40 formula_41 satisfy the equalities formula_42 formula_43 formula_44 formula_45 formula_46, where formula_47 is the length of the radius of the sphere. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
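Gauss's cyclic identities and the "beautiful equality" can be verified exactly, in rational arithmetic, for the example values (α, β, γ, δ, ε) = (9, 2/3, 2, 5, 1/3) quoted above; a quick sketch:

```python
from fractions import Fraction as F

# Example values from the text, as exact rationals (no floating-point rounding).
a, b, g, d, e = F(9), F(2, 3), F(2), F(5), F(1, 3)

# The five cyclic identities: 1 + alpha = gamma*delta, and so on.
assert 1 + a == g * d and 1 + b == d * e and 1 + g == a * e
assert 1 + d == a * b and 1 + e == b * g

# The "beautiful equality": the product equals 3 plus the sum of the five
# quantities, and its square equals the product of the five terms (1 + x).
prod = a * b * g * d * e
assert prod == 3 + a + b + g + d + e          # both sides equal 20
assert prod ** 2 == (1 + a) * (1 + b) * (1 + g) * (1 + d) * (1 + e)
```

Using `fractions.Fraction` makes the checks exact equalities rather than approximate float comparisons.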
[ { "math_id": 0, "text": "\\pi/2," }, { "math_id": 1, "text": "A" }, { "math_id": 2, "text": "B" }, { "math_id": 3, "text": "C" }, { "math_id": 4, "text": "D" }, { "math_id": 5, "text": "E." }, { "math_id": 6, "text": "\\pi/2:" }, { "math_id": 7, "text": "PC" }, { "math_id": 8, "text": "PE" }, { "math_id": 9, "text": "QD" }, { "math_id": 10, "text": "QA" }, { "math_id": 11, "text": "RE" }, { "math_id": 12, "text": "RB" }, { "math_id": 13, "text": "SA" }, { "math_id": 14, "text": "SC" }, { "math_id": 15, "text": "TB" }, { "math_id": 16, "text": "TD." }, { "math_id": 17, "text": "PQRST" }, { "math_id": 18, "text": "P" }, { "math_id": 19, "text": "RS" }, { "math_id": 20, "text": "Q" }, { "math_id": 21, "text": "ST" }, { "math_id": 22, "text": "\\angle APT = \\angle BPQ = RS,\\; \\angle BQP = \\angle CQR = ST," }, { "math_id": 23, "text": "APT" }, { "math_id": 24, "text": "BQP" }, { "math_id": 25, "text": "CRQ" }, { "math_id": 26, "text": "DSR" }, { "math_id": 27, "text": "ETS" }, { "math_id": 28, "text": "(\\alpha, \\beta, \\gamma, \\delta, \\varepsilon) = (\\tan^2 TP, \\tan^2 PQ, \\tan^2 QR, \\tan^2 RS, \\tan^2 ST)." 
}, { "math_id": 29, "text": "\\begin{align}\n1 + \\alpha &= \\gamma\\delta &1 + \\beta &= \\delta\\varepsilon &1 + \\gamma &=\\alpha \\varepsilon \\\\\n1 + \\delta &= \\alpha\\beta &1 + \\varepsilon &= \\beta\\gamma.\n\\end{align}" }, { "math_id": 30, "text": "\\begin{align}\n \\alpha\\beta\\gamma\\delta\\varepsilon &=\\; 3 + \\alpha + \\beta + \\gamma + \\delta + \\varepsilon \\\\\n&=\\; \\sqrt{(1+\\alpha)(1+\\beta)(1+\\gamma)(1+\\delta)(1+\\varepsilon)}.\n\\end{align}" }, { "math_id": 31, "text": "(\\alpha, \\beta, \\gamma, \\delta, \\varepsilon) = (9, 2/3, 2, 5, 1/3)" }, { "math_id": 32, "text": "\\alpha\\beta\\gamma\\delta\\varepsilon" }, { "math_id": 33, "text": "20" }, { "math_id": 34, "text": "\\begin{align}\n \\alpha\\beta\\gamma\\delta\\varepsilon \n&= \\alpha\\beta\\gamma\\left(\\frac{1+\\alpha}{\\gamma}\\right)\\left(\\frac{1+\\gamma}{\\alpha}\\right) \n= \\beta(1+\\alpha)(1+\\gamma)\\\\\n&= \\beta + \\alpha\\beta + \\beta\\gamma + \\alpha\\beta\\gamma = \\beta + (1 + \\delta) + (1 + \\varepsilon) + \\alpha(1 + \\varepsilon) \\\\\n&= 2 + \\alpha + \\beta + \\delta + \\varepsilon + 1 + \\gamma \\\\\n& = 3 + \\alpha + \\beta + \\gamma + \\delta + \\varepsilon\n\\end{align}\n" }, { "math_id": 35, "text": "\\begin{align}\n\\alpha\\beta\\gamma\\delta\\varepsilon &= \\sqrt{\\alpha^2\\beta^2\\gamma^2\\delta^2\\varepsilon^2} \\\\\n&= \\sqrt{\\gamma\\delta \\cdot \\delta\\varepsilon \\cdot \\varepsilon\\alpha \\cdot \\alpha\\beta \\cdot \\beta\\gamma}\\\\\n&= \\sqrt{(1+\\alpha)(1+\\beta)(1+\\gamma)(1+\\delta)(1+\\varepsilon)}\n\\end{align}\n" }, { "math_id": 36, "text": "(1+i\\sqrt{^{^{\\!}}\\alpha})(1+i\\sqrt{\\beta})(1+i\\sqrt{^{^{\\!}}\\gamma})(1+i\\sqrt{\\delta})(1+i\\sqrt{^{^{\\!}}\\varepsilon}) = \\alpha\\beta\\gamma\\delta\\varepsilon e^{iA_{PQRST}}," }, { "math_id": 37, "text": "A_{PQRST} = 2\\pi - (|\\overset{\\frown}{PQ}| + |\\overset{\\frown}{QR}| + |\\overset{\\frown}{RS}| + |\\overset{\\frown}{ST}| + |\\overset{\\frown}{TP}|)" }, { "math_id": 38, 
"text": "P'Q'R'S'T'" }, { "math_id": 39, "text": "O'" }, { "math_id": 40, "text": "(x_1, y_1),\\ldots," }, { "math_id": 41, "text": "(x_5, y_5)" }, { "math_id": 42, "text": "x_1 x_4 + y_1 y_4 =" }, { "math_id": 43, "text": "x_2 x_5 + y_2 y_5 =" }, { "math_id": 44, "text": "x_3 x_1 + y_3 y_1 =" }, { "math_id": 45, "text": "x_4 x_2 + y_4 y_2 =" }, { "math_id": 46, "text": "x_5 x_3 + y_5 y_3 = -\\rho^2" }, { "math_id": 47, "text": "\\rho" } ]
https://en.wikipedia.org/wiki?curid=59503919
59505168
Bell-shaped function
Mathematical function having a characteristic "bell"-shaped curve A bell-shaped function or simply 'bell curve' is a mathematical function having a characteristic "bell"-shaped curve. These functions are typically continuous or smooth, approach zero asymptotically as x tends to positive or negative infinity, and have a single maximum (they are unimodal). Hence, the integral of a bell-shaped function is typically a sigmoid function. Bell-shaped functions are also commonly symmetric. Many common probability distribution functions are bell curves. Some bell-shaped functions, such as the Gaussian function and the probability density function of the Cauchy distribution, can be used to construct sequences of functions with decreasing variance that approach the Dirac delta distribution. Indeed, the Dirac delta can roughly be thought of as a bell curve with variance tending to zero. Some examples include: formula_0 formula_1 formula_2 formula_3 formula_4 formula_5 formula_6 formula_7 References. &lt;templatestyles src="Reflist/styles.css" /&gt;
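The defining properties (symmetry about the mode, a single maximum, decay to zero in the tails) can be spot-checked numerically for a few of the examples listed above; a minimal sketch using the Gaussian, hyperbolic secant, and logistic-derivative forms with unit parameters (the parameter choices are illustrative assumptions):

```python
import math

# Three of the bell-shaped functions listed above, with unit parameters.
bells = {
    "gaussian": lambda x: math.exp(-x * x / 2),
    "sech": lambda x: 2 / (math.exp(x) + math.exp(-x)),
    "logistic_deriv": lambda x: math.exp(x) / (1 + math.exp(x)) ** 2,
}

for name, f in bells.items():
    # Symmetric about x = 0 ...
    assert all(abs(f(x) - f(-x)) < 1e-12 for x in (0.5, 1.0, 2.0)), name
    # ... with a single maximum at the mode ...
    assert f(0) > f(1) > f(2), name
    # ... and asymptotic decay toward zero in the tails.
    assert f(20) < 1e-6, name
```

The same checks pass for the other listed examples on their respective supports; the compact-support bump function is zero (not merely small) outside |x| < b.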
[ { "math_id": 0, "text": "f(x) = a e^{-(x-b)^2/(2c^2)}" }, { "math_id": 1, "text": "f(x) =\\frac 1 {1+\\left|\\frac{x-c} a \\right|^{2b}}" }, { "math_id": 2, "text": "f(x) = \\operatorname{sech}(x)=\\frac{2}{e^x+e^{-x}}" }, { "math_id": 3, "text": "f(x) = \\frac{8a^3}{x^2+4a^2}" }, { "math_id": 4, "text": "\\varphi_b(x)=\\begin{cases}\\exp\\frac{b^2}{x^2-b^2} & |x|<b, \\\\0 & |x|\\geq b.\\end{cases}" }, { "math_id": 5, "text": "f(x;\\mu,s) = \\begin{cases} \\frac 1 {2s} \\left[ 1 +\\cos\\left(\\frac{x-\\mu}s \\pi\\right)\\right] & \\text{for }\\mu-s \\le x \\le \\mu+s, \\\\[3pt] 0 & \\text{otherwise.} \\end{cases} " }, { "math_id": 6, "text": "f(x)=\\frac{e^x}{\\left(1+e^x\\right)^2} " }, { "math_id": 7, "text": "f(x)=\\frac{1}{(1+x^2)^{3/2}} " } ]
https://en.wikipedia.org/wiki?curid=59505168
59512190
Permutation category
Type of mathematical category In mathematics, the permutation category is a category in which the objects are the natural numbers, the morphisms from an object n to itself are the elements of the symmetric group formula_0, and there are no morphisms between distinct objects (that is, between m and n with formula_1). It is equivalent as a category to the category of finite sets and bijections between them. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
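A minimal computational model of this category, with objects as natural numbers and the morphisms n → n given by the permutations of {0, …, n−1} (the function names here are illustrative, not standard):

```python
from itertools import permutations

def hom(m, n):
    """Morphisms m -> n: the symmetric group S_n when m == n, empty otherwise."""
    return [tuple(p) for p in permutations(range(n))] if m == n else []

def compose(p, q):
    """Composition p . q of two permutations of the same object."""
    assert len(p) == len(q)
    return tuple(p[q[i]] for i in range(len(q)))

def identity(n):
    """Identity morphism on the object n."""
    return tuple(range(n))

# Hom(n, n) has n! elements, and Hom(m, n) is empty for m != n.
assert len(hom(3, 3)) == 6 and hom(2, 3) == []
# The identity is neutral for composition.
p = (2, 0, 1)
assert compose(p, identity(3)) == p == compose(identity(3), p)
```

The equivalence with finite sets and bijections amounts to the fact that every bijection between two n-element sets corresponds, after choosing orderings, to a permutation of {0, …, n−1}.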
[ { "math_id": 0, "text": "S_n" }, { "math_id": 1, "text": "m\\neq n" } ]
https://en.wikipedia.org/wiki?curid=59512190
59515025
Metabolic Score for Insulin Resistance
The Metabolic Score for Insulin Resistance (METS-IR) is a metabolic index developed with the aim of quantifying peripheral insulin sensitivity in humans; it was first described under the name METS-IR by Bello-Chavolla et al. in 2018. It was developed by the Metabolic Research Disease Unit at the Instituto Nacional de Ciencias Médicas Salvador Zubirán and validated against the euglycemic hyperinsulinemic clamp and the frequently-sampled intravenous glucose tolerance test in a Mexican population. It is a non-insulin-based alternative to insulin-based methods to quantify peripheral insulin sensitivity and an alternative to SPINA Carb, the Homeostatic Model Assessment (HOMA-IR) and the quantitative insulin sensitivity check index (QUICKI). METS-IR is currently validated for its use to assess cardio-metabolic risk in Latino populations. Derivation and validation. METS-IR was generated using linear regression against the M-value adjusted by lean body mass obtained from the glucose clamp technique in Mexican subjects with and without type 2 diabetes mellitus. It is estimated using fasting laboratory values including glucose (in mg/dL), triglycerides (mg/dL) and high-density lipoprotein cholesterol (HDL-C, in mg/dL) along with body-mass index (BMI). The index can be estimated using the following formula: formula_0 The index holds a significant correlation with the M-value adjusted by lean mass ("ρ" = −0.622) obtained from the euglycemic hyperinsulinaemic clamp study adjusted for age and gender, as well as with minimal model estimates of glucose sensitivity. In an open population cohort study in a Mexican population, METS-IR was shown to predict incident type 2 diabetes mellitus, and a value of METS-IR &gt;50.0 indicated an up to three-fold higher risk of developing type 2 diabetes after an average of three years of follow-up. In a nationwide population-based study of Chinese subjects, METS-IR was also shown to identify subjects with metabolic syndrome independent of adiposity. 
METS-IR also predicts visceral fat content, subcutaneous adipose tissue, fasting insulin levels and ectopic fat accumulation in the liver and pancreas. Comparison to other indexes. METS-IR was compared against other non-insulin-based methods to approximate insulin sensitivity, including the Triglyceride-Glucose index (TyG), the triglyceride to HDL-C ratio, and the TyG-BMI index, yielding a higher correlation and a larger area under the receiver operating characteristic curve compared to these other measures. When assessing its utility for identifying metabolic syndrome in Chinese subjects, Yu et al. suggested that the TyG and TG/HDL-C indexes had superior performance in their population owing to ethnic-specific variations in body composition. Given the role of ethnicity in modifying the performance of insulin sensitivity fasting-based indexes, further evaluations in different populations are required to establish the performance of non-insulin-based methods. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
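The published formula translates directly into code; a minimal sketch (the function name and the example inputs are illustrative, not from the source):

```python
import math

def mets_ir(glucose_mg_dl, triglycerides_mg_dl, hdl_c_mg_dl, bmi_kg_m2):
    """METS-IR = ln(2*glucose + triglycerides) * BMI / ln(HDL-C),
    with glucose, triglycerides, and HDL-C in mg/dL and BMI in kg/m^2."""
    return (math.log(2 * glucose_mg_dl + triglycerides_mg_dl) * bmi_kg_m2
            / math.log(hdl_c_mg_dl))

# Illustrative values: fasting glucose 90 mg/dL, triglycerides 150 mg/dL,
# HDL-C 50 mg/dL, BMI 25 kg/m^2 -> a score well below the >50.0 risk cutoff.
score = mets_ir(90, 150, 50, 25)
assert score < 50.0
```

The index rises with glucose, triglycerides, and BMI and falls with HDL-C, matching the direction of the cardio-metabolic risk associations described above.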
[ { "math_id": 0, "text": "METS-IR =\\frac{\\ln[2*Glucose (mg/dL) + Triglycerides (mg/dL)]*BMI (kg/m^2)}{\\ln[HDL-C (mg/dL)]}" } ]
https://en.wikipedia.org/wiki?curid=59515025
5951576
History of electromagnetic theory
The history of electromagnetic theory begins with ancient measures to understand atmospheric electricity, in particular lightning. People then had little understanding of electricity, and were unable to explain the phenomena. Scientific understanding and research into the nature of electricity grew throughout the eighteenth and nineteenth centuries through the work of researchers such as André-Marie Ampère, Charles-Augustin de Coulomb, Michael Faraday, Carl Friedrich Gauss and James Clerk Maxwell. In the 19th century it had become clear that electricity and magnetism were related, and their theories were unified: wherever charges are in motion electric current results, and magnetism is due to electric current. The source for electric field is electric charge, whereas that for magnetic field is electric current (charges in motion). Ancient and classical history. The knowledge of static electricity dates back to the earliest civilizations, but for millennia it remained merely an interesting and mystifying phenomenon, without a theory to explain its behavior, and it was often confused with magnetism. The ancients were acquainted with rather curious properties possessed by two minerals, amber (Greek: , ) and magnetic iron ore ( , "the Magnesian stone, lodestone"). Amber, when rubbed, attracts lightweight objects, such as feathers; magnetic iron ore has the power of attracting iron. Based on his find of an Olmec hematite artifact in Central America, the American astronomer John Carlson has suggested that "the Olmec may have discovered and used the geomagnetic lodestone compass earlier than 1000 BC". If true, this "predates the Chinese discovery of the geomagnetic lodestone compass by more than a millennium". Carlson speculates that the Olmecs may have used similar artifacts as a directional device for astrological or geomantic purposes, or to orient their temples, the dwellings of the living or the interments of the dead. 
The earliest Chinese literature reference to magnetism lies in a 4th-century BC book called "Book of the Devil Valley Master" (鬼谷子): "The lodestone makes iron come or it attracts it." Long before any knowledge of electromagnetism existed, people were aware of the effects of electricity. Lightning and other manifestations of electricity such as St. Elmo's fire were known in ancient times, but it was not understood that these phenomena had a common origin. Ancient Egyptians were aware of shocks when interacting with electric fish (such as the electric catfish) or other animals (such as electric eels). The shocks from animals were apparent to observers since pre-history by a variety of peoples that came into contact with them. Texts from 2750 BC by the ancient Egyptians referred to these fish as "thunderer of the Nile" and saw them as the "protectors" of all the other fish. Another possible approach to the discovery of the identity of lightning and electricity from any other source, is to be attributed to the Arabs, who before the 15th century used the same Arabic word for lightning () and the electric ray. Thales of Miletus, writing at around 600 BC, noted that rubbing fur on various substances such as amber would cause them to attract specks of dust and other light objects. Thales wrote on the effect now known as static electricity. The Greeks noted that if they rubbed the amber for long enough they could even get an electric spark to jump. The ancient Indian medical text "Sushruta Samhita" describes using magnetic properties of the lodestone to remove arrows embedded in a person's body. These electrostatic phenomena were again reported millennia later by Roman and Arabic naturalists and physicians. Several ancient writers, such as Pliny the Elder and Scribonius Largus, attested to the numbing effect of electric shocks delivered by catfish and electric rays. 
Pliny in his books writes: "The ancient Tuscans by their learning hold that there are nine gods that send forth lightning and those of eleven sorts." This was in general the early pagan idea of lightning. The ancients held some concept that shocks could travel along conducting objects. Patients with ailments such as gout or headache were directed to touch electric fish in the hope that the powerful jolt might cure them. A group of objects found in Iraq in 1938 dated to the early centuries AD (Sassanid Mesopotamia), called the Baghdad Battery, resembles a galvanic cell and is believed by some to have been used for electroplating. The claims are controversial because of the lack of supporting evidence and theories for the uses of the artifacts, the absence of physical evidence on the objects conducive to electrical functions, and uncertainty over whether they were electrical in nature. As a result, the nature of these objects is based on speculation, and the function of these artifacts remains in doubt. Magnetic attraction was once accounted for by Aristotle and Thales as the working of a soul in the stone. Middle Ages and the Renaissance. The magnetic needle compass was developed in the 11th century and it improved the accuracy of navigation by employing the astronomical concept of true north ("Dream Pool Essays", 1088). The Chinese scientist Shen Kuo (1031–1095) was the first person known to write about the magnetic needle compass, and by the 12th century the Chinese were known to use the lodestone compass for navigation. In Europe, the first description of the compass and its use for navigation is by Alexander Neckam (1187), although the use of compasses was already common. Its development, in European history, was due to Flavio Gioja from Amalfi. In the 13th century, Peter Peregrinus, a native of Maricourt in Picardy, conducted experiments on magnetism and wrote the first extant treatise describing the properties of magnets and pivoting compass needles. 
In 1282, the properties of magnets and the dry compasses were discussed by Al-Ashraf Umar II, a Yemeni scholar. The dry compass was invented around 1300 by Italian inventor Flavio Gioja. Archbishop Eustathius of Thessalonica, Greek scholar and writer of the 12th century, records that "Woliver", king of the Goths, was able to draw sparks from his body. The same writer states that a certain philosopher was able while dressing to draw sparks from his clothes, a result seemingly akin to that obtained by Robert Symmer in his silk stocking experiments, a careful account of which may be found in the "Philosophical Transactions", 1759. Italian physician Gerolamo Cardano wrote about electricity in "De Subtilitate" (1550) distinguishing, perhaps for the first time, between electrical and magnetic forces. 17th century. Toward the late 16th century, a physician of Queen Elizabeth's time, William Gilbert, in "De Magnete", expanded on Cardano's work and invented the Neo-Latin word from (), the Greek word for "amber". Gilbert undertook a number of careful electrical experiments, in the course of which he discovered that many substances other than amber, such as sulphur, wax, glass, etc., were capable of manifesting electrical properties. Gilbert also discovered that a heated body lost its electricity and that moisture prevented the electrification of all bodies, due to the now well-known fact that moisture impaired the insulation of such bodies. He also noticed that electrified substances attracted all other substances indiscriminately, whereas a magnet only attracted iron. The many discoveries of this nature earned for Gilbert the title of "founder of the electrical science". By investigating the forces on a light metallic needle, balanced on a point, he extended the list of electric bodies, and found also that many substances, including metals and natural magnets, showed no attractive forces when rubbed. 
He noticed that dry weather with north or east wind was the most favourable atmospheric condition for exhibiting electric phenomena—an observation liable to misconception until the difference between conductor and insulator was understood. Gilbert's work was followed up by Robert Boyle (1627–1691), the famous natural philosopher who was once described as "father of Chemistry, and uncle of the Earl of Cork." Boyle was one of the founders of the Royal Society when it met privately in Oxford, and became a member of the council after the Society was incorporated by Charles II in 1663. He left a detailed account of his research under the title of "Experiments on the Origin of Electricity". He discovered electrified bodies attracted light substances in a vacuum, indicating the electrical effect did not depend upon the air as a medium. He also added resin, and other substances, to the then known list of electrics. In 1663 Otto von Guericke invented a device that is now recognized as an early (possibly the first) electrostatic generator, but he did not recognize it primarily as an electrical device or conduct electrical experiments with it. By the end of the 17th century, researchers had developed practical means of generating electricity by friction with an electrostatic generator, but the development of electrostatic machines did not begin in earnest until the 18th century, when they became fundamental instruments in the studies about the new science of electricity. The first usage of the word "electricity" is ascribed to Sir Thomas Browne in his 1646 work, "Pseudodoxia Epidemica". The first appearance of the term "electromagnetism" was in "Magnes", by the Jesuit luminary Athanasius Kircher, in 1641, which carries the provocative chapter-heading: ""Elektro-magnetismos" i.e. On the Magnetism of amber, or electrical attractions and their causes" ( "id est sive De Magnetismo electri, seu electricis attractionibus earumque causis"). 18th century. 
Improving the electric machine. The electric machine was subsequently improved by Francis Hauksbee, his student Litzendorf, and by Prof. Georg Matthias Bose, about 1750. Litzendorf, researching for Christian August Hausen, substituted a glass ball for the sulphur ball of Guericke. Bose was the first to employ the "prime conductor" in such machines, this consisting of an iron rod held in the hand of a person whose body was insulated by standing on a block of resin. Ingenhousz, during 1746, invented electric machines made of plate glass. Experiments with the electric machine were largely aided by the discovery that a glass plate, coated on both sides with tinfoil, would accumulate electric charge when connected with a source of electromotive force. The electric machine was soon further improved by Andrew Gordon, a Scotsman, Professor at Erfurt, who substituted a glass cylinder in place of a glass globe; and by Giessing of Leipzig who added a "rubber" consisting of a cushion of woollen material. The collector, consisting of a series of metal points, was added to the machine by Benjamin Wilson about 1746, and in 1762, John Canton of England (also the inventor of the first pith-ball electroscope in 1754) improved the efficiency of electric machines by sprinkling an amalgam of tin over the surface of the rubber. Electrics and non-electrics. In 1729, Stephen Gray conducted a series of experiments that demonstrated the difference between conductors and non-conductors (insulators), showing amongst other things that a metal wire and even packthread conducted electricity, whereas silk did not. In one of his experiments he sent an electric current through 800 feet of hempen thread which was suspended at intervals by loops of silk thread. When he tried to conduct the same experiment substituting the silk for finely spun brass wire, he found that the electric current was no longer carried throughout the hemp cord, but instead seemed to vanish into the brass wire. 
From this experiment he classified substances into two categories: "electrics" like glass, resin and silk and "non-electrics" like metal and water. "Non-electrics" conducted charges while "electrics" held the charge. Vitreous and resinous. Intrigued by Gray's results, in 1732, C. F. du Fay began to conduct several experiments. In his first experiment, Du Fay concluded that all objects except metals, animals, and liquids could be electrified by rubbing and that metals, animals and liquids could be electrified by means of an electric machine, thus discrediting Gray's "electrics" and "non-electrics" classification of substances. In 1733 Du Fay discovered what he believed to be two kinds of frictional electricity; one generated from rubbing glass, the other from rubbing resin. From this, Du Fay theorized that electricity consists of two electrical fluids, "vitreous" and "resinous", that are separated by friction and that neutralize each other when combined. This picture of electricity was also supported by Christian Gottlieb Kratzenstein in his theoretical and experimental works. The two-fluid theory would later give rise to the concept of "positive" and "negative" electrical charges devised by Benjamin Franklin. Leyden jar. The Leyden jar, a type of capacitor for electrical energy in large quantities, was invented independently by Ewald Georg von Kleist on 11 October 1745 and by Pieter van Musschenbroek in 1745–1746 at Leiden University (the latter location giving the device its name). William Watson, when experimenting with the Leyden jar, discovered in 1747 that a discharge of static electricity was equivalent to an electric current. Capacitance was first observed by von Kleist in 1745. Von Kleist happened to hold, near his electric machine, a small bottle, in the neck of which there was an iron nail. Touching the iron nail accidentally with his other hand he received a severe electric shock. 
In much the same way Musschenbroek, assisted by Cunaeus, received a more severe shock from a somewhat similar glass bottle. Sir William Watson of England greatly improved this device, by covering the bottle, or jar, outside and in with tinfoil. This piece of electrical apparatus will be easily recognized as the well-known Leyden jar, so called by the Abbot Nollet of Paris, after the place of its discovery. In 1741, John Ellicott "proposed to measure the strength of electrification by its power to raise a weight in one scale of a balance while the other was held over the electrified body and pulled to it by its attractive power". As early as 1746, Jean-Antoine Nollet (1700–1770) had performed experiments on the propagation speed of electricity. By involving 200 Carthusian monks connected from hand to hand by iron wires so as to form a circle of about 1.6 km, he was able to prove that this speed is finite, even though very high. In 1749, Sir William Watson conducted numerous experiments to ascertain the velocity of electricity in a wire. These experiments, although perhaps not so intended, also demonstrated the possibility of transmitting signals to a distance by electricity. In these experiments, the signal appeared to travel the 12,276-foot length of the insulated wire instantaneously. Le Monnier in France had previously made somewhat similar experiments, sending shocks through an iron wire 1,319 feet long. About 1750, the first experiments in electrotherapy were made. Various experimenters made tests to ascertain the physiological and therapeutical effects of electricity. Typical for this effort was Kratzenstein in Halle, who in 1744 wrote a treatise on the subject. Demainbray in Edinburgh examined the effects of electricity upon plants and concluded that the growth of two myrtle trees was quickened by electrification. 
These myrtles were electrified "during the whole month of October, 1746, and they put forth branches and blossoms sooner than other shrubs of the same kind not electrified." Abbé Ménon in France tried the effects of a continued application of electricity upon men and birds and found that the subjects experimented on lost weight, thus apparently showing that electricity quickened the excretions. The efficacy of electric shocks in cases of paralysis was tested in the county hospital at Shrewsbury, England, with rather poor success. Late 18th century. Benjamin Franklin promoted his investigations of electricity and theories through the famous, though extremely dangerous, experiment of having his son fly a kite through a storm-threatened sky. A key attached to the kite string sparked and charged a Leyden jar, thus establishing the link between lightning and electricity. Following these experiments, he invented a lightning rod. It is either Franklin (more frequently) or Ebenezer Kinnersley of Philadelphia (less frequently) who is considered to have established the convention of positive and negative electricity. Theories regarding the nature of electricity were quite vague at this period, and those prevalent were more or less conflicting. Franklin considered that electricity was an imponderable fluid pervading everything, and which, in its normal condition, was uniformly distributed in all substances. He assumed that the electrical manifestations obtained by rubbing glass were due to the production of an excess of the electric fluid in that substance and that the manifestations produced by rubbing wax were due to a deficit of the fluid. This explanation was opposed by supporters of the "two-fluid" theory like Robert Symmer in 1759. In this theory, the vitreous and resinous electricities were regarded as imponderable fluids, each fluid being composed of mutually repellent particles while the particles of the opposite electricities are mutually attractive. 
When the two fluids unite as a result of their attraction for one another, their effect upon external objects is neutralized. The act of rubbing a body decomposes the fluids, one of which remains in excess on the body and manifests itself as vitreous or resinous electricity. Up to the time of Franklin's historic kite experiment, the identity of the electricity developed by rubbing and by electrostatic machines (frictional electricity) with lightning had not been generally established. Dr. Wall, Abbot Nollet, Hauksbee, Stephen Gray and John Henry Winkler had indeed suggested the resemblance between the phenomena of "electricity" and "lightning", Gray having intimated that they only differed in degree. It was doubtless Franklin, however, who first proposed tests to determine the sameness of the phenomena. In a letter to Peter Collinson of London, on 19 October 1752, Franklin, referring to his kite experiment, wrote, &lt;templatestyles src="Template:Blockquote/styles.css" /&gt;"At this key the phial (Leyden jar) may be charged; and from the electric fire thus obtained spirits may be kindled, and all the other electric experiments be formed which are usually done by the help of a rubbed glass globe or tube, and thereby the sameness of the electric matter with that of lightning be completely demonstrated." On 10 May 1752 Thomas-François Dalibard, at Marly (near Paris), using a vertical iron rod 40 feet long, obtained results corresponding to those recorded by Franklin and somewhat prior to the date of Franklin's experiment. Franklin's important demonstration of the sameness of frictional electricity and lightning added zest to the efforts of the many experimenters in this field in the last half of the 18th century, to advance the progress of the science. 
Franklin's observations aided later scientists such as Michael Faraday, Luigi Galvani, Alessandro Volta, André-Marie Ampère and Georg Simon Ohm, whose collective work provided the basis for modern electrical technology and for whom fundamental units of electrical measurement are named. Others who would advance the field of knowledge included William Watson, Georg Matthias Bose, Smeaton, Louis-Guillaume Le Monnier, Jacques de Romas, Jean Jallabert, Giovanni Battista Beccaria, Tiberius Cavallo, John Canton, Robert Symmer, Abbot Nollet, John Henry Winkler, Benjamin Wilson, Ebenezer Kinnersley, Joseph Priestley, Franz Aepinus, Edward Hussey Delaval, Henry Cavendish, and Charles-Augustin de Coulomb. Descriptions of many of the experiments and discoveries of these early electrical scientists may be found in the scientific publications of the time, notably the "Philosophical Transactions", "Philosophical Magazine", "Cambridge Mathematical Journal", "Young's Natural Philosophy", Priestley's "History of Electricity", Franklin's "Experiments and Observations on Electricity", Cavallo's "Treatise on Electricity" and De la Rive's "Treatise on Electricity". Henry Elles was one of the first people to suggest links between electricity and magnetism. In 1757 he claimed that he had written to the Royal Society in 1755 about the links between electricity and magnetism, asserting that "there are some things in the power of magnetism very similar to those of electricity" but he did "not by any means think them the same". In 1760 he similarly claimed that in 1750 he had been the first "to think how the electric fire may be the cause of thunder". Among the more important of the electrical research and experiments during this period were those of Franz Aepinus, a noted German scholar (1724–1802), and Henry Cavendish of London, England. Franz Aepinus is credited as the first to conceive of the view of the reciprocal relationship of electricity and magnetism. 
In his work "Tentamen Theoriae Electricitatis et Magnetismi", published in Saint Petersburg in 1759, he gives the following amplification of Franklin's theory, which in some of its features is measurably in accord with present-day views: "The particles of the electric fluid repel each other, attract and are attracted by the particles of all bodies with a force that decreases in proportion as the distance increases; the electric fluid exists in the pores of bodies; it moves unobstructedly through non-electrics (conductors), but moves with difficulty in insulators; the manifestations of electricity are due to the unequal distribution of the fluid in a body, or to the approach of bodies unequally charged with the fluid." Aepinus formulated a corresponding theory of magnetism excepting that, in the case of magnetic phenomena, the fluids only acted on the particles of iron. He also made numerous electrical experiments apparently showing that, in order to manifest electrical effects, tourmaline must be heated to between 37.5 °C and 100 °C. In fact, tourmaline remains unelectrified when its temperature is uniform, but manifests electrical properties when its temperature is rising or falling. Crystals that manifest electrical properties in this way are termed pyroelectric; along with tourmaline, these include sulphate of quinine and quartz. Henry Cavendish independently conceived a theory of electricity nearly akin to that of Aepinus. In 1784, he was perhaps the first to utilize an electric spark to produce an explosion of hydrogen and oxygen in the proper proportions that would create pure water. Cavendish also discovered the inductive capacity of dielectrics (insulators), and, as early as 1778, measured the specific inductive capacity for beeswax and other substances by comparison with an air condenser. Around 1784 C. A. 
Coulomb devised the torsion balance, discovering what is now known as Coulomb's law: the force exerted between two small electrified bodies varies inversely as the square of the distance, not as Aepinus in his theory of electricity had assumed, merely inversely as the distance. According to the theory advanced by Cavendish, "the particles attract and are attracted inversely as some less power of the distance than the cube." A large part of the domain of electricity became virtually annexed by Coulomb's discovery of the law of inverse squares. Through the experiments of William Watson and others proving that electricity could be transmitted to a distance, the idea of making practical use of this phenomenon began, around 1753, to engross the minds of inquisitive people. To this end, suggestions as to the employment of electricity in the transmission of intelligence were made. The first of the methods devised for this purpose was probably that of Georges-Louis Le Sage in 1774. This method consisted of 24 wires, insulated from one another and each having a pith ball connected to its distant end. Each wire represented a letter of the alphabet. To send a message, a desired wire was charged momentarily with electricity from an electric machine, whereupon the pith ball connected to that wire would fly out. Other methods of telegraphing in which frictional electricity was employed were also tried, some of which are described in the history of the telegraph. The era of galvanic or voltaic electricity represented a revolutionary break from the historical focus on frictional electricity. Alessandro Volta discovered that chemical reactions could be used to create positively charged anodes and negatively charged cathodes. When a conductor was attached between these, the difference in the electrical potential (also known as voltage) drove a current between them through the conductor. 
The potential difference between two points is measured in units of volts in recognition of Volta's work. The first mention of voltaic electricity, although not recognized as such at the time, was probably made by Johann Georg Sulzer in 1767, who, upon placing a small disc of zinc under his tongue and a small disc of copper over it, observed a peculiar taste when the respective metals touched at their edges. Sulzer assumed that when the metals came together they were set into vibration, acting upon the nerves of the tongue to produce the effects noticed. In 1790, Prof. Luigi Aloisio Galvani of Bologna, while conducting experiments on "animal electricity", noticed the twitching of a frog's legs in the presence of an electric machine. He observed that a frog's muscle, suspended on an iron balustrade by a copper hook passing through its dorsal column, underwent lively convulsions without any extraneous cause, the electric machine being at this time absent. To account for this phenomenon, Galvani assumed that electricity of opposite kinds existed in the nerves and muscles of the frog, the muscles and nerves constituting the charged coatings of a Leyden jar. Galvani published the results of his discoveries, together with his hypothesis, which engrossed the attention of the physicists of that time. The most prominent of these was Volta, professor of physics at Pavia, who contended that the results observed by Galvani were the result of the two metals, copper and iron, acting as electromotors, and that the muscles of the frog played the part of a conductor, completing the circuit. This precipitated a long discussion between the adherents of the conflicting views. One group agreed with Volta that the electric current was the result of an electromotive force of contact at the two metals; the other adopted a modification of Galvani's view and asserted that the current was the result of a chemical affinity between the metals and the acids in the pile. 
Michael Faraday wrote in the preface to his "Experimental Researches", relative to the question of whether metallic contact is productive of a part of the electricity of the voltaic pile: "I see no reason as yet to alter the opinion I have given; ... but the point itself is of such great importance that I intend at the first opportunity renewing the inquiry, and, if I can, rendering the proofs either on the one side or the other, undeniable to all." Even Faraday himself, however, did not settle the controversy, and while the views of the advocates on both sides of the question have undergone modifications, as subsequent investigations and discoveries demanded, up to 1918 diversity of opinion on these points continued to crop out. Volta made numerous experiments in support of his theory and ultimately developed the pile or battery, which was the precursor of all subsequent chemical batteries, and possessed the distinguishing merit of being the first means by which a prolonged continuous current of electricity was obtainable. Volta communicated a description of his pile to the Royal Society of London and shortly thereafter Nicholson and Carlisle (1800) produced the decomposition of water by means of the electric current, using Volta's pile as the source of electromotive force. 19th century. Early 19th century. In 1800 Alessandro Volta constructed the first device to produce a large electric current, later known as the electric battery. Napoleon, informed of his works, summoned him in 1801 for a command performance of his experiments. He received many medals and decorations, including the Légion d'honneur. Davy in 1806, employing a voltaic pile of approximately 250 cells, or couples, decomposed potash and soda, showing that these substances were respectively the oxides of potassium and sodium, metals which previously had been unknown. 
These experiments were the beginning of electrochemistry, the investigation of which Faraday took up, and concerning which in 1833 he announced his important law of electrochemical equivalents, viz.: "The same quantity of electricity — that is, the same electric current — decomposes chemically equivalent quantities of all the bodies which it traverses; hence the weights of elements separated in these electrolytes are to each other as their chemical equivalents." Employing a battery of 2,000 elements of a voltaic pile Humphry Davy in 1809 gave the first public demonstration of the electric arc light, using charcoal enclosed in a vacuum. It is important to note that it was not until many years after the discovery of the voltaic pile that the sameness of animal and frictional electricity with voltaic electricity was clearly recognized and demonstrated. Thus as late as January 1833 we find Faraday writing in a paper on the electricity of the electric ray: "After an examination of the experiments of Walsh, Ingenhousz, Henry Cavendish, Sir H. Davy, and Dr. Davy, no doubt remains on my mind as to the identity of the electricity of the torpedo with common (frictional) and voltaic electricity; and I presume that so little will remain on the mind of others as to justify my refraining from entering at length into the philosophical proof of that identity. The doubts raised by Sir Humphry Davy have been removed by his brother, Dr. Davy; the results of the latter being the reverse of those of the former. ... The general conclusion which must, I think, be drawn from this collection of facts (a table showing the similarity of properties of the diversely named electricities) is, that electricity, whatever may be its source, is identical in its nature." It is proper to state, however, that prior to Faraday's time the similarity of electricity derived from different sources was more than suspected. 
Thus William Hyde Wollaston wrote in 1801: "This similarity in the means by which both electricity and galvanism (voltaic electricity) appear to be excited in addition to the resemblance that has been traced between their effects shows that they are both essentially the same and confirm an opinion that has already been advanced by others, that all the differences discoverable in the effects of the latter may be owing to its being less intense, but produced in much larger quantity." In the same paper Wollaston describes certain experiments in which he uses very fine wire in a solution of sulphate of copper through which he passed electric currents from an electric machine. This is interesting in connection with the later-day use of almost similarly arranged fine wires in electrolytic receivers in wireless, or radio-telegraphy. In the first half of the 19th century many very important additions were made to the world's knowledge concerning electricity and magnetism. For example, in 1820 Hans Christian Ørsted of Copenhagen discovered the deflecting effect of an electric current traversing a wire upon a suspended magnetic needle. This discovery gave a clue to the subsequently proved intimate relationship between electricity and magnetism, which was promptly followed up by Ampère. Some months later, in September 1820, Ampère presented the first elements of his new theory, which he developed in the following years, culminating in 1827 with the publication of his "Mémoire sur la théorie mathématique des phénomènes électrodynamiques uniquement déduite de l'expérience" (Memoir on the Mathematical Theory of Electrodynamic Phenomena, Uniquely Deduced from Experience), announcing his celebrated theory of electrodynamics, relating to the force that one current exerts upon another by its electromagnetic effects. Ampère brought a multitude of phenomena into theory by his investigations of the mechanical forces between conductors supporting currents and magnets. James Clerk Maxwell, in his "A Treatise on Electricity and Magnetism", named Ampère "the Newton of electricity". 
The German physicist Seebeck discovered in 1821 that when heat is applied to the junction of two metals that had been soldered together an electric current is set up. This is termed thermoelectricity. Seebeck's device consists of a strip of copper bent at each end and soldered to a plate of bismuth. A magnetic needle is placed parallel with the copper strip. When the heat of a lamp is applied to the junction of the copper and bismuth an electric current is set up which deflects the needle. Around this time, Siméon Denis Poisson attacked the difficult problem of induced magnetization, and his results, though differently expressed, still stand as a most important first approximation. It was in the application of mathematics to physics that his services to science were performed. Perhaps the most original, and certainly the most permanent in their influence, were his memoirs on the theory of electricity and magnetism, which virtually created a new branch of mathematical physics. George Green wrote "An Essay on the Application of Mathematical Analysis to the Theories of Electricity and Magnetism" in 1828. The essay introduced several important concepts, among them a theorem similar to the modern Green's theorem, the idea of potential functions as currently used in physics, and the concept of what are now called Green's functions. George Green was the first person to create a mathematical theory of electricity and magnetism and his theory formed the foundation for the work of other scientists such as James Clerk Maxwell, William Thomson, and others. Peltier in 1834 discovered an effect opposite to thermoelectricity, namely, that when a current is passed through a couple of dissimilar metals the temperature is lowered or raised at the junction of the metals, depending on the direction of the current. This is termed the Peltier effect. 
The variations of temperature are found to be proportional to the strength of the current and not to the square of the strength of the current as in the case of heat due to the ordinary resistance of a conductor. This second law is the I²R law, discovered experimentally in 1841 by the English physicist Joule. In other words, this important law is that the heat generated in any part of an electric circuit is directly proportional to the product of the resistance R of that part of the circuit and the square of the current I flowing through it. In 1822 Johann Schweigger devised the first galvanometer. This instrument was subsequently much improved by Wilhelm Weber (1833). In 1825 William Sturgeon of Woolwich, England, invented the horseshoe and straight bar electromagnet, receiving therefor the silver medal of the Society of Arts. In 1837 Carl Friedrich Gauss and Weber (both noted workers of this period) jointly invented a reflecting galvanometer for telegraph purposes. This was the forerunner of the Thomson reflecting and other exceedingly sensitive galvanometers once used in submarine signaling and still widely employed in electrical measurements. Arago in 1824 made the important discovery that when a copper disc is rotated in its own plane, and if a magnetic needle be freely suspended on a pivot over the disc, the needle will rotate with the disc. If on the other hand the needle is fixed it will tend to retard the motion of the disc. This effect was termed Arago's rotations. Futile attempts were made by Charles Babbage, Peter Barlow, John Herschel and others to explain this phenomenon. The true explanation was reserved for Faraday, namely, that electric currents are induced in the copper disc by the cutting of the magnetic lines of force of the needle, which currents in turn react on the needle. 
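The contrast drawn above between the Peltier effect, which varies with the first power of the current and reverses with its direction, and Joule heating, which varies with the square of the current, can be illustrated with a short modern sketch. The coefficient and resistance values below are illustrative only, not measurements from any historical apparatus.

```python
# Contrast Peltier heating (proportional to I) with Joule heating
# (proportional to I^2 * R).  All numeric constants are illustrative.

def peltier_heat(current_a, peltier_coeff_v=0.05):
    """Heat exchanged at a junction per second (W); sign follows current direction."""
    return peltier_coeff_v * current_a

def joule_heat(current_a, resistance_ohm=2.0):
    """Heat dissipated in a resistance per second (W); always non-negative."""
    return current_a ** 2 * resistance_ohm

for i in (0.5, 1.0, 2.0):
    print(f"I={i} A  Peltier={peltier_heat(i):.3f} W  Joule={joule_heat(i):.3f} W")

# Doubling the current doubles the Peltier term but quadruples the Joule term;
# reversing the current flips the sign of the Peltier term only.
print(peltier_heat(-1.0), joule_heat(-1.0))
```

The sketch makes the two proportionality laws of the text directly visible: only the Peltier term changes sign when the current is reversed, matching the observation that the junction is "lowered or raised" in temperature depending on the direction of the current.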
Georg Simon Ohm did his work on resistance in the years 1825 and 1826, and published his results in 1827 as the book "Die galvanische Kette, mathematisch bearbeitet". He drew considerable inspiration from Fourier's work on heat conduction in the theoretical explanation of his work. For experiments, he initially used voltaic piles, but later used a thermocouple as this provided a more stable voltage source in terms of internal resistance and constant potential difference. He used a galvanometer to measure current, and knew that the voltage between the thermocouple terminals was proportional to the junction temperature. He then added test wires of varying length, diameter, and material to complete the circuit. He found that his data could be modeled through a simple equation whose variables were the reading of a galvanometer, the length of the test conductor, the thermocouple junction temperature, and a constant characteristic of the entire setup. From this, Ohm determined his law of proportionality and published his results. In 1827, he announced the now famous law that bears his name, that is: Electromotive force = Current × Resistance. Ohm brought into order a host of puzzling facts connecting electromotive force and electric current in conductors, which all previous electricians had only succeeded in loosely binding together qualitatively under some rather vague statements. Ohm found that the results could be summed up in so simple a law, and by Ohm's discovery a large part of the domain of electricity became annexed to theory. Faraday and Henry. The discovery of electromagnetic induction was made almost simultaneously, although independently, by Michael Faraday, who was first to make the discovery in 1831, and Joseph Henry in 1832. Henry's discovery of self-induction and his work on spiral conductors using a copper coil were made public in 1835, just before those of Faraday. 
In 1831 began the epoch-making researches of Michael Faraday, the famous pupil and successor of Humphry Davy at the head of the Royal Institution, London, relating to electric and electromagnetic induction. The remarkable researches of Faraday, the "prince of experimentalists", on electrostatics and electrodynamics and the induction of currents were rather long in being brought from the crude experimental state to a compact system expressing their real essence. Faraday was not a competent mathematician, but had he been one, he would have been greatly assisted in his researches, have saved himself much useless speculation, and would have anticipated much later work. He would, for instance, knowing Ampère's theory, by his own results have readily been led to Neumann's theory, and the connected work of Helmholtz and Thomson. Faraday's studies and researches extended from 1831 to 1855 and a detailed description of his experiments, deductions and speculations are to be found in his compiled papers, entitled "Experimental Researches in Electricity". Faraday was by profession a chemist. He was not in the remotest degree a mathematician in the ordinary sense — indeed it is a question if in all his writings there is a single mathematical formula. The experiment which led Faraday to the discovery of electromagnetic induction was made as follows: He constructed what is now and was then termed an induction coil, the primary and secondary wires of which were wound on a wooden bobbin, side by side, and insulated from one another. In the circuit of the primary wire he placed a battery of approximately 100 cells. In the secondary wire he inserted a galvanometer. On making his first test he observed no results, the galvanometer remaining quiescent, but on increasing the length of the wires he noticed a deflection of the galvanometer in the secondary wire when the circuit of the primary wire was made and broken. 
This was the first observed instance of the development of electromotive force by electromagnetic induction. He also discovered that induced currents are established in a second closed circuit when the current strength is varied in the first wire, and that the direction of the current in the secondary circuit is opposite to that in the first circuit. He likewise found that a current is induced in a secondary circuit when another circuit carrying a current is moved to and from the first circuit, and that the approach or withdrawal of a magnet to or from a closed circuit induces momentary currents in the latter. In short, within the space of a few months Faraday discovered by experiment virtually all the laws and facts now known concerning electro-magnetic induction and magneto-electric induction. Upon these discoveries, with scarcely an exception, depends the operation of the telephone, the dynamo machine, and incidental to the dynamo electric machine practically all the gigantic electrical industries of the world, including electric lighting, electric traction, the operation of electric motors for power purposes, and electro-plating, electrotyping, etc. In his investigations of the peculiar manner in which iron filings arrange themselves on a cardboard or glass in proximity to the poles of a magnet, Faraday conceived the idea of magnetic "lines of force" extending from pole to pole of the magnet and along which the filings tend to place themselves. On the discovery being made that magnetic effects accompany the passage of an electric current in a wire, it was also assumed that similar magnetic lines of force whirled around the wire. 
For convenience and to account for induced electricity it was then assumed that when these lines of force are "cut" by a wire in passing across them or when the lines of force in rising and falling cut the wire, a current of electricity is developed, or to be more exact, an electromotive force is developed in the wire that sets up a current in a closed circuit. Faraday advanced what has been termed the "molecular theory of electricity" which assumes that electricity is the manifestation of a peculiar condition of the molecule of the body rubbed or the ether surrounding the body. Faraday also, by experiment, discovered paramagnetism and diamagnetism, namely, that all solids and liquids are either attracted or repelled by a magnet. For example, iron, nickel, cobalt, manganese, chromium, etc., are paramagnetic (attracted by magnetism), whilst other substances, such as bismuth, phosphorus, antimony, zinc, etc., are repelled by magnetism or are diamagnetic. Brugmans of Leyden in 1778 and Le Baillif and Becquerel in 1827 had previously discovered diamagnetism in the case of bismuth and antimony. Faraday also rediscovered specific inductive capacity in 1837, the results of the experiments by Cavendish not having been published at that time. He also predicted the retardation of signals on long submarine cables due to the inductive effect of the insulation of the cable, in other words, the static capacity of the cable. In 1816 telegraph pioneer Francis Ronalds had also observed signal retardation on his buried telegraph lines, attributing it to induction. The 25 years immediately following Faraday's discoveries of electromagnetic induction were fruitful in the promulgation of laws and facts relating to induced currents and to magnetism. In 1834 Heinrich Lenz and Moritz von Jacobi independently demonstrated the now familiar fact that the currents induced in a coil are proportional to the number of turns in the coil. 
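The proportionality to the number of turns noted just above is expressed in modern terms by Faraday's law of induction, in which the average induced electromotive force equals the number of turns multiplied by the rate of change of magnetic flux through the coil. A minimal numerical sketch follows; the flux and timing values are illustrative, not historical data.

```python
# Faraday's law of induction (average form): EMF = N * dPhi / dt,
# where N is the number of turns and dPhi the flux change in webers.
# The flux change and interval below are illustrative values.

def induced_emf(turns, delta_flux_wb, delta_t_s):
    """Average EMF in volts for a flux change over an interval."""
    return turns * delta_flux_wb / delta_t_s

# The same flux change of 0.01 Wb over 0.1 s through coils of
# different turn counts: doubling the turns doubles the EMF.
print(induced_emf(100, 0.01, 0.1))
print(induced_emf(200, 0.01, 0.1))
```

This is exactly the relation Lenz and Jacobi demonstrated experimentally: with the flux change held fixed, the induced electromotive force scales linearly with the turn count.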
Lenz also announced at that time his important law that, in all cases of electromagnetic induction, the induced currents have such a direction that their reaction tends to stop the motion that produces them, a law that was perhaps deducible from Faraday's explanation of Arago's rotations. The induction coil was first designed by Nicholas Callan in 1836. In 1845 Joseph Henry, the American physicist, published an account of his valuable and interesting experiments with induced currents of a high order, showing that currents could be induced from the secondary of an induction coil to the primary of a second coil, thence to its secondary wire, and so on to the primary of a third coil, etc. Heinrich Daniel Ruhmkorff further developed the induction coil; the Ruhmkorff coil was patented in 1851, and he utilized long windings of copper wire to achieve a spark of approximately 2 inches (50 mm) in length. In 1857, after examining a greatly improved version made by an American inventor, Edward Samuel Ritchie, Ruhmkorff improved his design (as did other engineers), using glass insulation and other innovations to allow the production of still longer sparks. Middle 19th century. Up to the middle of the 19th century, indeed up to about 1870, electrical science was, it may be said, a sealed book to the majority of electrical workers. Prior to this time a number of handbooks had been published on electricity and magnetism, notably Auguste de La Rive's exhaustive "Treatise on Electricity" in 1851 (French) and 1853 (English); August Beer's "Einleitung in die Elektrostatik, die Lehre vom Magnetismus und die Elektrodynamik"; Wiedemann's "Galvanismus"; and Riess' "Reibungselektricität". 
But these works consisted in the main of details of experiments with electricity and magnetism, and dealt but little with the laws and facts of those phenomena. Henry d'Abria published the results of some researches into the laws of induced currents, but owing to the complexity of the investigation they were not productive of very notable results. Around the mid-19th century, Fleeming Jenkin's work on electricity and magnetism and Clerk Maxwell's "Treatise on Electricity and Magnetism" were published. These books were departures from the beaten path. As Jenkin states in the preface to his work, the science of the schools was so dissimilar from that of the practical electrician that it was quite impossible to give students sufficient, or even approximately sufficient, textbooks. A student, he said, might have mastered de la Rive's large and valuable treatise and yet feel as if in an unknown country and listening to an unknown tongue in the company of practical men. As another writer has said, with the coming of Jenkin's and Maxwell's books all impediments in the way of electrical students were removed, "the full meaning of Ohm's law becomes clear; electromotive force, difference of potential, resistance, current, capacity, lines of force, magnetization and chemical affinity were measurable, and could be reasoned about, and calculations could be made about them with as much certainty as calculations in dynamics". About 1850, Kirchhoff published his laws relating to branched or divided circuits. He also showed mathematically that according to the then prevailing electrodynamic theory, electricity would be propagated along a perfectly conducting wire with the velocity of light. Helmholtz investigated mathematically the effects of induction upon the strength of a current and deduced therefrom equations, which experiment confirmed, showing amongst other important points the retarding effect of self-induction under certain conditions of the circuit. 
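The retarding effect of self-induction that Helmholtz analyzed is captured, in the standard modern textbook form rather than Helmholtz's own notation, by the exponential rise of current in a circuit containing resistance R and self-inductance L after an electromotive force V is applied: i(t) = (V/R)(1 − e^(−Rt/L)). The component values in the sketch below are illustrative.

```python
import math

# Current growth in an RL circuit after the EMF is applied:
#   i(t) = (V / R) * (1 - exp(-R * t / L))
# Self-induction retards the approach to the steady value V / R.
# Component values here are illustrative.

def rl_current(t_s, emf_v=10.0, resistance_ohm=5.0, inductance_h=0.5):
    """Instantaneous current in amperes at time t_s seconds after switch-on."""
    steady = emf_v / resistance_ohm
    return steady * (1.0 - math.exp(-resistance_ohm * t_s / inductance_h))

tau = 0.5 / 5.0  # time constant L / R = 0.1 s
for t in (0.0, tau, 5 * tau):
    print(f"t={t:.2f} s  i={rl_current(t):.4f} A")
# At t = 0 no current flows; after one time constant the current has
# reached about 63% of its final value of 2 A.
```

The larger the self-inductance relative to the resistance, the longer the time constant and the more pronounced the retardation, which is the qualitative point Helmholtz's equations established.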
In 1853, Sir William Thomson (later Lord Kelvin) predicted as a result of mathematical calculations the oscillatory nature of the electric discharge of a condenser circuit. To Henry, however, belongs the credit of discerning as a result of his experiments in 1842 the oscillatory nature of the Leyden jar discharge. He wrote: "The phenomena require us to admit the existence of a principal discharge in one direction, and then several reflex actions backward and forward, each more feeble than the preceding, until the equilibrium is obtained". These oscillations were subsequently observed by B. W. Feddersen (1857), who, using a rotating concave mirror, projected an image of the electric spark upon a sensitive plate, thereby obtaining a photograph of the spark which plainly indicated the alternating nature of the discharge. Sir William Thomson was also the discoverer of the electric convection of heat (the "Thomson" effect). He designed for electrical measurements of precision his quadrant and absolute electrometers. The reflecting galvanometer and siphon recorder, as applied to submarine cable signaling, are also due to him. About 1876 the American physicist Henry Augustus Rowland of Baltimore demonstrated the important fact that a static charge carried around produces the same magnetic effects as an electric current. The importance of this discovery consists in that it may afford a plausible theory of magnetism, namely, that magnetism may be the result of directed motion of rows of molecules carrying static charges. After Faraday's discovery that electric currents could be developed in a wire by causing it to cut across the lines of force of a magnet, it was to be expected that attempts would be made to construct machines to avail of this fact in the development of voltaic currents. The first machine of this kind was due to Hippolyte Pixii (1832). It consisted of two bobbins of iron wire, opposite which the poles of a horseshoe magnet were caused to rotate. 
As this produced in the coils of the wire an alternating current, Pixii arranged a commutating device (commutator) that converted the alternating current of the coils or armature into a direct current in the external circuit. This machine was followed by improved forms of magneto-electric machines due to Edward Samuel Ritchie, Joseph Saxton, Edward M. Clarke 1834, Emil Stohrer 1843, Floris Nollet 1849, Shepperd 1856, Van Maldern, Werner von Siemens, Henry Wilde and others. A notable advance in the art of dynamo construction was made by Samuel Alfred Varley in 1866 and by Siemens and Charles Wheatstone, who independently discovered that when a coil of wire, or armature, of the dynamo machine is rotated between the poles (or in the "field") of an electromagnet, a weak current is set up in the coil due to residual magnetism in the iron of the electromagnet, and that if the circuit of the armature be connected with the circuit of the electromagnet, the weak current developed in the armature increases the magnetism in the field. This further increases the magnetic lines of force in which the armature rotates, which still further increases the current in the electromagnet, thereby producing a corresponding increase in the field magnetism, and so on, until the maximum electromotive force which the machine is capable of developing is reached. By means of this principle the dynamo machine develops its own magnetic field, thereby much increasing its efficiency and economical operation. Not by any means, however, was the dynamo electric machine perfected at the time mentioned. In 1860 an important improvement had been made by Dr. Antonio Pacinotti of Pisa who devised the first electric machine with a ring armature. This machine was first used as an electric motor, but afterward as a generator of electricity. 
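The self-excitation process described above, in which residual magnetism yields a small initial current that strengthens the field, raising the current further until the machine's maximum electromotive force is reached, can be sketched as a toy iterative model. All constants below are illustrative and are not drawn from any actual machine.

```python
# Toy model of dynamo self-excitation.  The field starts from residual
# magnetism; armature output is taken proportional to the field, and the
# field grows with the current but is limited by iron saturation.
# All numbers are illustrative.

def excite(residual_field=0.02, gain=1.5, saturation=1.0, steps=50):
    """Return the field strength (arbitrary units) at each step of build-up."""
    field = residual_field
    history = [field]
    for _ in range(steps):
        current = field                      # armature current tracks the field
        field = min(saturation, gain * current)  # field growth, capped by saturation
        history.append(field)
    return history

h = excite()
print(h[0], h[5], h[-1])  # field builds up from the residual value to saturation
```

With the gain above unity the field grows geometrically from the residual value until the saturation cap halts it, mirroring the text's description of the build-up "until the maximum electromotive force which the machine is capable of developing is reached".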
The discovery of the principle of the reversibility of the dynamo electric machine (variously attributed to Walenn 1860; Pacinotti 1864; Fontaine, Gramme 1873; Deprez 1881, and others) whereby it may be used as an electric motor or as a generator of electricity has been termed one of the greatest discoveries of the 19th century. In 1872 the drum armature was devised by Hefner-Alteneck. This machine in a modified form was subsequently known as the Siemens dynamo. These machines were presently followed by the Schuckert, Gulcher, Fein, Brush, Hochhausen, Edison and the dynamo machines of numerous other inventors. In the early days of dynamo machine construction the machines were mainly arranged as direct current generators, and perhaps the most important application of such machines at that time was in electro-plating, for which purpose machines of low voltage and large current strength were employed. Beginning about 1887 alternating current generators came into extensive operation and the commercial development of the transformer, by means of which currents of low voltage and high current strength are transformed to currents of high voltage and low current strength, and vice versa, in time revolutionized the transmission of electric power to long distances. Likewise the introduction of the rotary converter (in connection with the "step-down" transformer) which converts alternating currents into direct currents (and vice versa) has effected large economies in the operation of electric power systems. Before the introduction of dynamo electric machines, voltaic, or primary, batteries were extensively used for electro-plating and in telegraphy. There are two distinct types of voltaic cells, namely, the "open" and the "closed", or "constant", type. 
The open type, in brief, is that which, operated on closed circuit, becomes after a short time polarized; that is, gases are liberated in the cell which settle on the negative plate and establish a resistance that reduces the current strength. After a brief interval of open circuit these gases are eliminated or absorbed and the cell is again ready for operation. Closed circuit cells are those in which the gases in the cells are absorbed as quickly as liberated and hence the output of the cell is practically uniform. The Leclanché and Daniell cells, respectively, are familiar examples of the "open" and "closed" type of voltaic cell. Batteries of the Daniell or "gravity" type were employed almost universally in the United States and Canada as the source of electromotive force in telegraphy before the dynamo machine became available. In the late 19th century, the term luminiferous aether, meaning light-bearing aether, was a conjectured medium for the propagation of light. The word "aether" stems via Latin from the Greek αιθήρ, from a root meaning to kindle, burn, or shine. It signifies the substance which was thought in ancient times to fill the upper regions of space, beyond the clouds. Maxwell. In 1864 James Clerk Maxwell of Edinburgh announced his electromagnetic theory of light, which was perhaps the greatest single step in the world's knowledge of electricity. Maxwell had studied and commented on the field of electricity and magnetism as early as 1855/6 when "On Faraday's lines of force" was read to the Cambridge Philosophical Society. The paper presented a simplified model of Faraday's work, and how the two phenomena were related. He reduced all of the current knowledge into a linked set of differential equations with 20 equations in 20 variables. This work was later published as "On Physical Lines of Force" in March 1861. 
In order to determine the force which is acting on any part of the machine we must find its momentum, and then calculate the rate at which this momentum is being changed. This rate of change will give us the force. The method of calculation which it is necessary to employ was first given by Lagrange, and afterwards developed, with some modifications, by Hamilton. It is usually referred to as Hamilton's principle; when the equations in the original form are used they are known as Lagrange's equations. Maxwell showed how these methods of calculation could be applied to the electromagnetic field. The energy of a dynamical system is partly kinetic, partly potential. Maxwell supposes that the magnetic energy of the field is kinetic energy, the electric energy potential. Around 1862, while lecturing at King's College, Maxwell calculated that the speed of propagation of an electromagnetic field is approximately that of the speed of light. He considered this to be more than just a coincidence, commenting, "We can scarcely avoid the conclusion that light consists in the transverse undulations of the same medium which is the cause of electric and magnetic phenomena." Working on the problem further, Maxwell showed that the equations predict the existence of waves of oscillating electric and magnetic fields that travel through empty space at a speed that could be predicted from simple electrical experiments; using the data available at the time, Maxwell obtained a velocity of 310,740,000 m/s. In his 1864 paper "A Dynamical Theory of the Electromagnetic Field", Maxwell wrote, "The agreement of the results seems to show that light and magnetism are affections of the same substance, and that light is an electromagnetic disturbance propagated through the field according to electromagnetic laws". As already noted herein, Faraday, and before him Ampère and others, had inklings that the luminiferous ether of space was also the medium for electric action.
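The numerical coincidence Maxwell noticed can be reproduced from the two electrical constants his theory relates; a short sketch using present-day values (not Maxwell's own 1860s data):

```python
import math

# The speed of electromagnetic waves follows from the electrical
# constants of free space: c = 1 / sqrt(mu_0 * epsilon_0).
mu_0 = 4e-7 * math.pi          # vacuum permeability, H/m (classical defined value)
epsilon_0 = 8.8541878128e-12   # vacuum permittivity, F/m

c = 1.0 / math.sqrt(mu_0 * epsilon_0)
# Essentially the modern value of 299,792,458 m/s; Maxwell's own data
# gave 310,740,000 m/s, within a few percent of the measured speed of light.
assert abs(c - 299_792_458) / 299_792_458 < 1e-3
```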
It was known by calculation and experiment that the velocity of electricity was approximately 186,000 miles per second; that is, equal to the velocity of light, which in itself suggests the idea of a relationship between electricity and light. A number of the earlier philosophers or mathematicians, as Maxwell terms them, of the 19th century held the view that electromagnetic phenomena were explainable by action at a distance. Maxwell, following Faraday, contended that the seat of the phenomena was in the medium. The methods of the mathematicians in arriving at their results were synthetical, while Faraday's methods were analytical. Faraday in his mind's eye saw lines of force traversing all space where the mathematicians saw centres of force attracting at a distance. Faraday sought the seat of the phenomena in real actions going on in the medium; the mathematicians were satisfied that they had found it in a power of action at a distance on the electric fluids. Both of these methods, as Maxwell points out, had succeeded in explaining the propagation of light as an electromagnetic phenomenon, while their fundamental conceptions of the quantities concerned differed radically. The mathematicians assumed that insulators were barriers to electric currents; that, for instance, in a Leyden jar or electric condenser the electricity was accumulated at one plate and that by some occult action at a distance electricity of an opposite kind was attracted to the other plate. Maxwell, looking further than Faraday, reasoned that if light is an electromagnetic phenomenon and is transmissible through dielectrics such as glass, the phenomenon must be in the nature of electromagnetic currents in the dielectrics.
He therefore contended that in the charging of a condenser, for instance, the action did not stop at the insulator, but that "displacement" currents are set up in the insulating medium, which currents continue until the resisting force of the medium equals that of the charging force. In a closed conductor circuit, an electric current is also a displacement of electricity. The conductor offers a certain resistance, akin to friction, to the displacement of electricity, and heat is developed in the conductor, proportional to the square of the current (as already stated herein), which current flows as long as the impelling electric force continues. This resistance may be likened to that met with by a ship as it displaces the water in its progress. The resistance of the dielectric is of a different nature and has been compared to the compression of multitudes of springs, which, under compression, yield with an increasing back pressure, up to a point where the total back pressure equals the initial pressure. When the initial pressure is withdrawn the energy expended in compressing the "springs" is returned to the circuit, concurrently with the return of the springs to their original condition, thus producing a reaction in the opposite direction. Consequently, the current due to the displacement of electricity in a conductor may be continuous, while the displacement currents in a dielectric are momentary; and in a circuit or medium which contains but little resistance compared with capacity or inductance reaction, the currents of discharge are of an oscillatory or alternating nature. Maxwell extended this view of displacement currents in dielectrics to the ether of free space.
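The condition for an oscillatory discharge described above has a standard modern form: a series RLC loop rings when its resistance is small compared with the inductive/capacitive reaction, i.e. R < 2·sqrt(L/C). A rough sketch (function names and component values are hypothetical, chosen only for illustration):

```python
import math

def is_oscillatory(r_ohm, l_henry, c_farad):
    """True when a series RLC discharge is underdamped (oscillatory)."""
    return r_ohm < 2.0 * math.sqrt(l_henry / c_farad)

def ringing_frequency_hz(l_henry, c_farad):
    # Undamped natural frequency f = 1 / (2*pi*sqrt(L*C)).
    return 1.0 / (2.0 * math.pi * math.sqrt(l_henry * c_farad))

# A low-resistance loop (Leyden-jar-like) rings; a lossy one does not.
assert is_oscillatory(1.0, 1e-3, 1e-6)        # 1 ohm << 63 ohm threshold
assert not is_oscillatory(100.0, 1e-3, 1e-6)
assert abs(ringing_frequency_hz(1e-3, 1e-6) - 5033.0) < 1.0
```

This is why condenser (capacitor) discharges through low-resistance circuits, such as a Leyden jar and a loop of wire, produce the alternating currents the text mentions.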
If light is the manifestation of alterations of electric currents in the ether, vibrating at the rate of light vibrations, these vibrations by induction set up corresponding vibrations in adjoining portions of the ether, and in this way the undulations corresponding to those of light are propagated as an electromagnetic effect in the ether. Maxwell's electromagnetic theory of light obviously involved the existence of electric waves in free space, and his followers set themselves the task of experimentally demonstrating the truth of the theory. By 1871, Maxwell could already reflect on the philosophy of science. End of the 19th century. In 1887, the German physicist Heinrich Hertz, in a series of experiments, proved the actual existence of electromagnetic waves, showing that transverse free space electromagnetic waves can travel over some distance as predicted by Maxwell and Faraday. Hertz published his work in a book titled "Electric Waves: Being Researches on the Propagation of Electric Action with Finite Velocity Through Space". The discovery of electromagnetic waves in space led to the development of radio in the closing years of the 19th century. The electron as a unit of charge in electrochemistry was posited by G. Johnstone Stoney in 1874, who also coined the term "electron" in 1894. Plasma was first identified in a Crookes tube, and so described by Sir William Crookes in 1879 (he called it "radiant matter"). The place of electricity in leading up to the discovery of those beautiful phenomena of the Crookes tube (due to Sir William Crookes), viz., cathode rays, and later to the discovery of Roentgen or X-rays, must not be overlooked, since without electricity as the excitant of the tube the discovery of the rays might have been postponed indefinitely. It has been noted herein that Dr. William Gilbert was termed the founder of electrical science. This must, however, be regarded as a comparative statement.
Oliver Heaviside was a self-taught scholar who reformulated Maxwell's field equations in terms of electric and magnetic forces and energy flux, and independently co-formulated vector analysis. During the late 1890s a number of physicists proposed that electricity, as observed in studies of electrical conduction in conductors, electrolytes, and cathode ray tubes, consisted of discrete units, which were given a variety of names, but the reality of these units had not been confirmed in a compelling way. However, there were also indications that the cathode rays had wavelike properties. Faraday, Weber, Helmholtz, Clifford and others had glimpses of this view, and the experimental work of Zeeman, Goldstein, Crookes, J. J. Thomson and others greatly strengthened it. Weber predicted that electrical phenomena were due to the existence of electrical atoms, the influence of which on one another depended on their position and relative accelerations and velocities. Helmholtz and others also contended that the existence of electrical atoms followed from Faraday's laws of electrolysis, and Johnstone Stoney, to whom is due the term "electron", showed that each chemical ion of the decomposed electrolyte carries a definite and constant quantity of electricity; inasmuch as these charged ions are separated on the electrodes as neutral substances, there must be an instant, however brief, when the charges are capable of existing separately as electrical atoms. In 1887, Clifford wrote: "There is great reason to believe that every material atom carries upon it a small electric current, if it does not wholly consist of this current." In 1896, J. J. Thomson performed experiments indicating that cathode rays really were particles, found an accurate value for their charge-to-mass ratio e/m, and found that e/m was independent of cathode material.
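Thomson's charge-to-mass result can be checked against modern constants; a sketch using present-day CODATA values (not his 1897 figures):

```python
# Compare Thomson's "corpuscle" (the electron) with the hydrogen ion.
e = 1.602176634e-19       # elementary charge, C
m_e = 9.1093837015e-31    # electron mass, kg
m_H = 1.6735575e-27       # hydrogen atom mass, kg

e_over_m = e / m_e
assert abs(e_over_m - 1.7588e11) / 1.7588e11 < 1e-3   # ~1.76e11 C/kg

# The corpuscle's mass is roughly 1/1836 of the hydrogen ion's,
# consistent with Thomson's order-of-magnitude "one thousandth".
assert 1800 < m_H / m_e < 1840
```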
He made good estimates of both the charge e and the mass m, finding that cathode ray particles, which he called "corpuscles", had perhaps one thousandth of the mass of the least massive ion known (hydrogen). He further showed that the negatively charged particles produced by radioactive materials, by heated materials, and by illuminated materials were universal. The nature of the Crookes tube "cathode ray" matter was identified by Thomson in 1897. In the late 19th century, the Michelson–Morley experiment was performed by Albert A. Michelson and Edward W. Morley at what is now Case Western Reserve University. It is generally considered to be the first strong evidence against the theory of a luminiferous aether. The experiment has also been referred to as "the kicking-off point for the theoretical aspects of the Second Scientific Revolution." Primarily for this work, Michelson was awarded the Nobel Prize in 1907. Dayton Miller continued with experiments, conducting thousands of measurements and eventually developing the most accurate interferometer in the world at that time. Miller and others, such as Morley, continued observations and experiments dealing with these concepts. A range of proposed aether-dragging theories could explain the null result, but these were more complex and tended to use arbitrary-looking coefficients and physical assumptions. By the end of the 19th century electrical engineers had become a distinct profession, separate from physicists and inventors. They created companies that investigated, developed and perfected the techniques of electricity transmission, and gained support from governments all over the world for starting the first worldwide electrical telecommunication network, the telegraph network. Pioneers in this field included Werner von Siemens, founder of Siemens AG in 1847, and John Pender, founder of Cable & Wireless.
William Stanley made the first public demonstration of a transformer that enabled commercial delivery of alternating current in 1886. Large two-phase alternating current generators were built by a British electrician, J. E. H. Gordon, in 1882. Lord Kelvin and Sebastian Ferranti also developed early alternators, producing frequencies between 100 and 300 hertz. After 1891, polyphase alternators were introduced to supply currents of multiple differing phases. Later alternators were designed for varying alternating-current frequencies between sixteen and about one hundred hertz, for use with arc lighting, incandescent lighting and electric motors. The possibility of obtaining the electric current in large quantities, and economically, by means of dynamo electric machines gave impetus to the development of incandescent and arc lighting. Until these machines had attained a commercial basis, voltaic batteries were the only available source of current for electric lighting and power. The cost of these batteries, however, and the difficulties of maintaining them in reliable operation prohibited their use for practical lighting purposes. The date of the employment of arc and incandescent lamps may be set at about 1877. Even in 1880, however, but little headway had been made toward the general use of these illuminants; the rapid subsequent growth of this industry is a matter of general knowledge. The employment of storage batteries, which were originally termed secondary batteries or accumulators, began about 1879. Such batteries are now utilized on a large scale as auxiliaries to the dynamo machine in electric power-houses and substations, in electric automobiles and in immense numbers in automobile ignition and starting systems, also in fire alarm telegraphy and other signal systems. For the 1893 World's Columbian International Exposition in Chicago, General Electric proposed to power the entire fair with direct current.
Westinghouse slightly undercut GE's bid and used the fair to debut their alternating current based system, showing how their system could power poly-phase motors and all the other AC and DC exhibits at the fair. Second Industrial Revolution. The Second Industrial Revolution, also known as the Technological Revolution, was a phase of rapid industrialization in the final third of the 19th century and the beginning of the 20th. Along with the expansion of railroads, iron and steel production, widespread use of machinery in manufacturing, and greatly increased use of steam power and petroleum, the period saw expansion in the use of electricity and the adaptation of electromagnetic theory in developing various technologies. The 1880s saw the spread of large scale commercial electric power systems, first used for lighting and eventually for electro-motive power and heating. Early systems used both alternating current and direct current. Large centralized power generation became possible when it was recognized that alternating current electric power lines could use transformers to take advantage of the fact that each doubling of the voltage would allow the same size cable to transmit the same amount of power four times the distance. Transformers were used to raise the voltage at the point of generation (a representative number is a generator voltage in the low kilovolt range) to a much higher voltage (tens of thousands to several hundred thousand volts) for primary transmission, followed by several downward transformations for commercial and residential use. Between 1885 and 1890 poly-phase currents combined with electromagnetic induction and practical AC induction motors were developed. The International Electro-Technical Exhibition of 1891 featured the long-distance transmission of high-power, three-phase electric current. It was held between 16 May and 19 October on the disused site of the three former "Westbahnhöfe" (Western Railway Stations) in Frankfurt am Main.
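The voltage-doubling claim above follows from simple line-loss arithmetic: the fractional loss in a line is I²R/P = PR/V², and resistance grows with distance, so for a fixed cable and fixed fractional loss the reachable distance scales as V². A sketch with hypothetical example values:

```python
# Sketch of the scaling claim: for a fixed cable type and a fixed
# fractional line loss, transmissible distance grows as V**2.

def loss_fraction(power_w, volts, ohms_per_km, km):
    """Fraction of transmitted power dissipated in the line."""
    current = power_w / volts          # I = P / V
    return current**2 * ohms_per_km * km / power_w   # I^2 R / P

base = loss_fraction(1e6, 10_000, 0.1, 10)       # 1 MW at 10 kV over 10 km
doubled = loss_fraction(1e6, 20_000, 0.1, 40)    # same power, 2x voltage, 4x distance
assert abs(base - doubled) < 1e-12               # same 1% fractional loss
```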
The exhibition featured the first long-distance transmission of high-power, three-phase electric current, which was generated 175 km away at Lauffen am Neckar. As a result of this successful field trial, three-phase current became established for electrical transmission networks throughout the world. Much was done to improve railroad terminal facilities, and it was difficult to find a steam railroad engineer who would have denied that all the important steam railroads of the country would eventually be operated electrically. In other directions the progress of events as to the utilization of electric power was expected to be equally rapid. In every part of the world the power of falling water, nature's perpetual motion machine, which had been going to waste since the world began, was now being converted into electricity and transmitted by wire hundreds of miles to points where it was usefully and economically employed. The first windmill for electricity production was built in Scotland in July 1887 by the Scottish electrical engineer James Blyth. Across the Atlantic, in Cleveland, Ohio, a larger and heavily engineered machine was designed and constructed in 1887–88 by Charles F. Brush; it was built by his engineering company at his home and operated until 1900. The Brush wind turbine had a rotor in diameter and was mounted on a 60-foot (18 m) tower. Although large by today's standards, the machine was only rated at 12 kW; it turned relatively slowly since it had 144 blades. The connected dynamo was used either to charge a bank of batteries or to operate up to 100 incandescent light bulbs, three arc lamps, and various motors in Brush's laboratory. The machine fell into disuse after 1900 when electricity became available from Cleveland's central stations, and was abandoned in 1908. 20th century.
Various units of electricity and magnetism have been adopted and named by representatives of the electrical engineering institutes of the world, which units and names have been confirmed and legalized by the governments of the United States and other countries. Thus the volt, from the Italian Volta, has been adopted as the practical unit of electromotive force; the ohm, from the enunciator of Ohm's law, as the practical unit of resistance; the ampere, after the eminent French scientist of that name, as the practical unit of current strength; and the henry as the practical unit of inductance, after Joseph Henry and in recognition of his early and important experimental work in mutual induction. Dewar and John Ambrose Fleming predicted that at absolute zero, pure metals would become perfect electromagnetic conductors (though, later, Dewar altered his opinion on the disappearance of resistance, believing that there would always be some resistance). Walther Hermann Nernst developed the third law of thermodynamics and stated that absolute zero was unattainable. Carl von Linde and William Hampson, both commercial researchers, filed for patents on the Joule–Thomson effect at nearly the same time. Linde's patent was the climax of 20 years of systematic investigation of established facts, using a regenerative counterflow method. Hampson's design was also of a regenerative method. The combined process became known as the Linde–Hampson liquefaction process. Heike Kamerlingh Onnes purchased a Linde machine for his research. Zygmunt Florenty Wróblewski conducted research into electrical properties at low temperatures, though his research ended early due to his accidental death. Around 1864, Karol Olszewski and Wróblewski predicted the electrical phenomena of dropping resistance levels at ultra-cold temperatures. Olszewski and Wróblewski documented evidence of this in the 1880s.
A milestone was achieved on 10 July 1908 when Onnes at Leiden University produced liquefied helium for the first time; this achievement opened the way to his discovery of superconductivity in 1911. In 1900, William Du Bois Duddell developed the singing arc, producing melodic sounds, from a low to a high tone, from an arc lamp. Lorentz and Poincaré. Between 1900 and 1910, many scientists like Wilhelm Wien, Max Abraham, Hermann Minkowski, or Gustav Mie believed that all forces of nature are of electromagnetic origin (the so-called "electromagnetic world view"). This was connected with the electron theory developed between 1892 and 1904 by Hendrik Lorentz. Lorentz introduced a strict separation between matter (electrons) and the aether, whereby in his model the ether is completely motionless and is not set in motion in the neighborhood of ponderable matter. Contrary to earlier electron models, the electromagnetic field of the ether appears as a mediator between the electrons, and changes in this field can propagate no faster than the speed of light. In 1896, three years after submitting his thesis on the Kerr effect, Pieter Zeeman disobeyed the direct orders of his supervisor and used laboratory equipment to measure the splitting of spectral lines by a strong magnetic field. Lorentz theoretically explained the Zeeman effect on the basis of his theory, for which both received the Nobel Prize in Physics in 1902. A fundamental concept of Lorentz's theory in 1895 was the "theorem of corresponding states" for terms of order v/c. This theorem states that a moving observer (relative to the ether) makes the same observations as a resting observer. This theorem was extended for terms of all orders by Lorentz in 1904.
Lorentz noticed that it was necessary to change the space-time variables when changing frames, and introduced concepts like physical length contraction (1892) to explain the Michelson–Morley experiment, and the mathematical concept of local time (1895) to explain the aberration of light and the Fizeau experiment. That resulted in the formulation of the so-called Lorentz transformation by Joseph Larmor (1897, 1900) and Lorentz (1899, 1904). As Lorentz later noted (1921, 1928), he considered the time indicated by clocks resting in the aether as "true" time, while local time was seen by him as a heuristic working hypothesis and a mathematical artifice. Therefore, Lorentz's theorem is seen by modern historians as being a mathematical transformation from a "real" system resting in the aether into a "fictitious" system in motion. Continuing the work of Lorentz, Henri Poincaré between 1895 and 1905 formulated on many occasions the principle of relativity and tried to harmonize it with electrodynamics. He declared simultaneity only a convenient convention which depends on the speed of light, whereby the constancy of the speed of light would be a useful postulate for making the laws of nature as simple as possible. In 1900 he interpreted Lorentz's local time as the result of clock synchronization by light signals, and introduced the electromagnetic momentum by comparing electromagnetic energy to what he called a "fictitious fluid" of mass E/c². Finally, in June and July 1905, he declared the relativity principle a general law of nature, including gravitation. He corrected some mistakes of Lorentz and proved the Lorentz covariance of the electromagnetic equations. Poincaré also suggested that there exist non-electrical forces to stabilize the electron configuration, and asserted that gravitation is a non-electrical force as well, contrary to the electromagnetic world view.
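The Lorentz transformation mentioned above, in its 1899/1904 form, is x' = γ(x − vt), t' = γ(t − vx/c²), with Lorentz's "local time" appearing in t'. A numerical sketch (example values hypothetical) verifying its characteristic property, the invariance of the interval c²t² − x²:

```python
import math

c = 299_792_458.0  # speed of light, m/s

def boost(x, t, v):
    """Lorentz boost of event (x, t) to a frame moving at velocity v."""
    gamma = 1.0 / math.sqrt(1.0 - (v / c) ** 2)
    return gamma * (x - v * t), gamma * (t - v * x / c**2)

x, t, v = 1.0e8, 1.0, 0.6 * c
xp, tp = boost(x, t, v)

# The spacetime interval is the same in both frames.
interval = (c * t) ** 2 - x ** 2
interval_p = (c * tp) ** 2 - xp ** 2
assert abs(interval - interval_p) / abs(interval) < 1e-9
```

For Lorentz this was a formal device; Einstein's 1905 reinterpretation of t' as the time actually measured by moving clocks is what the following section turns to.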
However, historians have pointed out that he still used the notion of an ether and distinguished between "apparent" and "real" time, and therefore did not invent special relativity in its modern understanding. Einstein's "Annus Mirabilis". In 1905, while he was working in the patent office, Albert Einstein had four papers published in the "Annalen der Physik", the leading German physics journal; these are the papers that history has come to call the "Annus Mirabilis papers". All four papers are today recognized as tremendous achievements, and hence 1905 is known as Einstein's "Wonderful Year". At the time, however, they were not noticed by most physicists as being important, and many of those who did notice them rejected them outright. Some of this work, such as the theory of light quanta, remained controversial for years. Mid-20th century. The first formulation of a quantum theory describing radiation and matter interaction is due to Paul Dirac, who, during the 1920s, was first able to compute the coefficient of spontaneous emission of an atom. Dirac described the quantization of the electromagnetic field as an ensemble of harmonic oscillators with the introduction of the concept of creation and annihilation operators of particles. In the following years, with contributions from Wolfgang Pauli, Eugene Wigner, Pascual Jordan, Werner Heisenberg and an elegant formulation of quantum electrodynamics due to Enrico Fermi, physicists came to believe that, in principle, it would be possible to perform any computation for any physical process involving photons and charged particles. However, further studies by Felix Bloch with Arnold Nordsieck, and Victor Weisskopf, in 1937 and 1939, revealed that such computations were reliable only at a first order of perturbation theory, a problem already pointed out by Robert Oppenheimer.
At higher orders in the series infinities emerged, making such computations meaningless and casting serious doubts on the internal consistency of the theory itself. With no solution for this problem known at the time, it appeared that a fundamental incompatibility existed between special relativity and quantum mechanics. In December 1938, the German chemists Otto Hahn and Fritz Strassmann sent a manuscript to "Naturwissenschaften" reporting they had detected the element barium after bombarding uranium with neutrons; simultaneously, they communicated these results to Lise Meitner. Meitner and her nephew Otto Robert Frisch correctly interpreted these results as being nuclear fission. Frisch confirmed this experimentally on 13 January 1939. In 1944, Hahn received the Nobel Prize in Chemistry for the discovery of nuclear fission. Some historians who have documented the history of the discovery of nuclear fission believe Meitner should have been awarded the Nobel Prize with Hahn. Difficulties with the quantum theory increased through the end of the 1940s. Improvements in microwave technology made it possible to take more precise measurements of the shift of the levels of a hydrogen atom, now known as the Lamb shift, and of the magnetic moment of the electron. These experiments unequivocally exposed discrepancies which the theory was unable to explain. With the invention of bubble chambers and spark chambers in the 1950s, experimental particle physics discovered a large and ever-growing number of particles called hadrons. It seemed that such a large number of particles could not all be fundamental. Shortly after the end of the war in 1945, Bell Labs formed a Solid State Physics Group, led by William Shockley and chemist Stanley Morgan; other personnel included John Bardeen and Walter Brattain, physicist Gerald Pearson, chemist Robert Gibney, electronics expert Hilbert Moore and several technicians.
Their assignment was to seek a solid-state alternative to fragile glass vacuum tube amplifiers. Their first attempts were based on Shockley's ideas about using an external electrical field on a semiconductor to affect its conductivity. These experiments failed every time, in all sorts of configurations and materials. The group was at a standstill until Bardeen suggested a theory that invoked surface states that prevented the field from penetrating the semiconductor. The group changed its focus to study these surface states, and they met almost daily to discuss the work. The rapport of the group was excellent, and ideas were freely exchanged. As to the problems in the electron experiments, a path to a solution was given by Hans Bethe. In 1947, while traveling by train from New York to Schenectady after giving a talk at the Shelter Island conference on the subject, Bethe completed the first non-relativistic computation of the shift of the lines of the hydrogen atom as measured by Lamb and Retherford. Despite the limitations of the computation, agreement was excellent. The idea was simply to attach infinities to corrections of mass and charge that were actually fixed to a finite value by experiments. In this way, the infinities get absorbed in those constants and yield a finite result in good agreement with experiments. This procedure was named renormalization. Based on Bethe's intuition and fundamental papers on the subject by Shin'ichirō Tomonaga, Julian Schwinger, Richard Feynman and Freeman Dyson, it was finally possible to get fully covariant formulations that were finite at any order in a perturbation series of quantum electrodynamics. Tomonaga, Schwinger and Feynman were jointly awarded the Nobel Prize in Physics in 1965 for their work in this area.
Their contributions, and those of Freeman Dyson, were about covariant and gauge-invariant formulations of quantum electrodynamics that allow computations of observables at any order of perturbation theory. Feynman's mathematical technique, based on his diagrams, initially seemed very different from the field-theoretic, operator-based approach of Schwinger and Tomonaga, but Freeman Dyson later showed that the two approaches were equivalent. Renormalization, the need to attach a physical meaning to certain divergences appearing in the theory through integrals, has subsequently become one of the fundamental aspects of quantum field theory and has come to be seen as a criterion for a theory's general acceptability. Even though renormalization works very well in practice, Feynman was never entirely comfortable with its mathematical validity, even referring to renormalization as a "shell game" and "hocus pocus". QED has served as the model and template for all subsequent quantum field theories. Building on the work of Peter Higgs, Jeffrey Goldstone, and others, Sheldon Glashow, Steven Weinberg and Abdus Salam independently showed how the weak nuclear force and quantum electrodynamics could be merged into a single electroweak force. Robert Noyce credited Kurt Lehovec for the "principle of p–n junction isolation" caused by the action of a biased p–n junction (the diode) as a key concept behind the integrated circuit. Jack Kilby recorded his initial ideas concerning the integrated circuit in July 1958 and successfully demonstrated the first working integrated circuit on September 12, 1958. In his patent application of February 6, 1959, Kilby described his new device as "a body of semiconductor material ... wherein all the components of the electronic circuit are completely integrated." Kilby won the 2000 Nobel Prize in Physics for his part in the invention of the integrated circuit. Robert Noyce also came up with his own idea of an integrated circuit, half a year later than Kilby.
Noyce's chip solved many practical problems that Kilby's had not. Noyce's chip, made at Fairchild Semiconductor, was made of silicon, whereas Kilby's chip was made of germanium. Philo Farnsworth developed the Farnsworth–Hirsch fusor, or simply fusor, an apparatus designed by Farnsworth to create nuclear fusion. Unlike most controlled fusion systems, which slowly heat a magnetically confined plasma, the fusor injects high temperature ions directly into a reaction chamber, thereby avoiding a considerable amount of complexity. When first introduced to the fusion research world in the late 1960s, the fusor was the first device that could clearly demonstrate that it was producing fusion reactions at all. Hopes at the time were high that it could be quickly developed into a practical power source. However, as with other fusion experiments, development into a power source has proven difficult. Nevertheless, the fusor has since become a practical neutron source and is produced commercially for this role. Parity violation. The mirror image of an electromagnet produces a field with the opposite polarity. Thus the north and south poles of a magnet have the same symmetry as left and right. Prior to 1956, it was believed that this symmetry was perfect, and that a technician would be unable to distinguish the north and south poles of a magnet except by reference to left and right. In that year, T. D. Lee and C. N. Yang predicted the nonconservation of parity in the weak interaction. To the surprise of many physicists, in 1957 C. S. Wu and collaborators at the U.S. National Bureau of Standards demonstrated that under suitable conditions for polarization of nuclei, the beta decay of cobalt-60 preferentially releases electrons toward the south pole of an external magnetic field, and a somewhat higher number of gamma rays toward the north pole. As a result, the experimental apparatus does not behave comparably to its mirror image. Electroweak theory.
The first step towards the Standard Model was Sheldon Glashow's discovery, in 1960, of a way to combine the electromagnetic and weak interactions. In 1967, Steven Weinberg and Abdus Salam incorporated the Higgs mechanism into Glashow's electroweak theory, giving it its modern form. The Higgs mechanism is believed to give rise to the masses of all the elementary particles in the Standard Model. This includes the masses of the W and Z bosons, and the masses of the fermions – i.e. the quarks and leptons. After the neutral weak currents caused by boson exchange were discovered at CERN in 1973, the electroweak theory became widely accepted and Glashow, Salam, and Weinberg shared the 1979 Nobel Prize in Physics for discovering it. The W and Z bosons were discovered experimentally in 1983, and their masses were found to be as the Standard Model predicted. The theory of the strong interaction, to which many contributed, acquired its modern form around 1973–74, when experiments confirmed that the hadrons were composed of fractionally charged quarks. The establishment of quantum chromodynamics in the 1970s finalized a set of fundamental and exchange particles, which allowed for the establishment of a "standard model" based on the mathematics of gauge invariance, which successfully described all forces except for gravity, and which remains generally accepted within the domain to which it is designed to be applied. The "standard model" groups the electroweak interaction theory and quantum chromodynamics into a structure denoted by the gauge group "SU(3)×SU(2)×U(1)". The formulation of the unification of the electromagnetic and weak interactions in the standard model is due to Steven Weinberg and Abdus Salam, building on Sheldon Glashow's earlier work. 
After the discovery, made at CERN, of the existence of neutral weak currents, mediated by the boson foreseen in the standard model, the physicists Salam, Glashow and Weinberg received the 1979 Nobel Prize in Physics for their electroweak theory. Since then, discoveries of the bottom quark (1977), the top quark (1995), the tau neutrino (2000) and the Higgs boson (2012) have given credence to the Standard Model. 21st century. Electromagnetic technologies. There are a range of emerging energy technologies. By 2007, solid state micrometer-scale electric double-layer capacitors based on advanced superionic conductors had been developed for low-voltage electronics such as deep-sub-voltage nanoelectronics and related technologies (the 22 nm technological node of CMOS and beyond). Also, the nanowire battery, a lithium-ion battery, was invented by a team led by Dr. Yi Cui in 2007. Magnetic resonance. Reflecting the fundamental importance and applicability of magnetic resonance imaging in medicine, Paul Lauterbur of the University of Illinois at Urbana–Champaign and Sir Peter Mansfield of the University of Nottingham were awarded the 2003 Nobel Prize in Physiology or Medicine for their "discoveries concerning magnetic resonance imaging". The Nobel citation acknowledged Lauterbur's insight of using magnetic field gradients to determine spatial localization, a discovery that allowed rapid acquisition of 2D images. Wireless electricity. Wireless electricity is a form of wireless energy transfer, the ability to provide electrical energy to remote objects without wires. The term WiTricity was coined in 2005 by Dave Gerding and later used for a project led by Prof. Marin Soljačić in 2007. The MIT researchers successfully demonstrated the ability to power a 60 watt light bulb wirelessly, using two 5-turn copper coils of 60 cm (24 in) diameter, that were 2 m (7 ft) away, at roughly 45% efficiency. 
This technology can potentially be used in a large variety of applications, including consumer, industrial, medical and military. Its aim is to reduce the dependence on batteries. Further applications for this technology include transmission of information: it would not interfere with radio waves and thus could be used as a cheap and efficient communication device without requiring a license or a government permit. Unified theories. A Grand Unified Theory (GUT) is a model in particle physics in which, at high energy, the electromagnetic force is merged with the other two gauge interactions of the Standard Model, the weak and strong nuclear forces. Many candidates have been proposed, but none is directly supported by experimental evidence. GUTs are often seen as intermediate steps towards a "Theory of Everything" (TOE), a putative theory of theoretical physics that fully explains and links together all known physical phenomena, and, ideally, has predictive power for the outcome of any experiment that could be carried out in principle. No such theory has yet been accepted by the physics community. Open problems. The quantum theory of magnetic charge started with a paper by the physicist Paul A. M. Dirac in 1931. The detection of magnetic monopoles is an open problem in experimental physics. In some theoretical models, magnetic monopoles are unlikely to be observed, because they are too massive to be created in particle accelerators, and also too rare in the Universe to enter a particle detector with much probability. After more than twenty years of intensive research, the origin of high-temperature superconductivity is still not clear, but it seems that instead of "electron-phonon" attraction mechanisms, as in conventional superconductivity, one is dealing with genuine "electronic" mechanisms (e.g. by antiferromagnetic correlations), and instead of s-wave pairing, d-wave pairing is substantial. 
One goal of all this research is room-temperature superconductivity. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "m=E/c^2" }, { "math_id": 1, "text": "E = m c^2" } ]
https://en.wikipedia.org/wiki?curid=5951576
5951995
Voice inversion
Method of obscuring transmission Voice inversion scrambling is an analog method of obscuring the content of a transmission. It is sometimes used in public service radio, automobile racing, cordless telephones and the Family Radio Service. Without a descrambler, the transmission makes the speaker "sound like Donald Duck". Despite the term, the technique operates on the passband of the information and so can be applied to any information being transmitted. Forms and details. There are various forms of voice inversion which offer differing levels of security. Overall, voice inversion scrambling offers little true security as software and even hobbyist kits are available from kit makers for scrambling and descrambling. The cadence of the speech is not changed. It is often easy to guess what is happening in the conversation by listening for other audio cues like questions, short responses and other language cadences. In the simplest form of voice inversion, the frequency formula_0 of each component is replaced with formula_1, where formula_2 is the frequency of a carrier wave. This can be done by amplitude modulating the speech signal with the carrier, then applying a low-pass filter to select the lower sideband. This will make the low tones of the voice sound like high ones and vice versa. This process also occurs naturally if a radio receiver is tuned to a single sideband transmission but set to decode the wrong sideband. There are more advanced forms of voice inversion which are more complex and require more effort to descramble. One method is to use a random code to choose the carrier frequency and then change this code in real time. This is called "Rolling Code voice inversion" and one can often hear the "ticks" in the transmission which signal the changing of the inversion point. Another method is "split band voice inversion". This is where the band is split and then each band is inverted separately. 
A rolling code can also be added to this method for variable split band inversion (VSB). Common carrier frequencies are: 2.632 kHz, 2.718 kHz, 2.868 kHz, 3.023 kHz, 3.107 kHz, 3.196 kHz, 3.333 kHz, 3.339 kHz, 3.496 kHz, 3.729 kHz and 4.096 kHz. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
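The simplest inversion described above, mapping each frequency formula_0 to formula_1 by amplitude-modulating with a carrier and keeping only the lower sideband, can be sketched digitally. The sketch below is an idealization (the low-pass filter is applied in the frequency domain rather than with analog circuitry, and the 3.3 kHz carrier is just an assumed value close to the common carrier frequencies listed above):

```python
import numpy as np

def invert_voice(signal, sample_rate, carrier_hz=3300.0):
    """Spectrally invert `signal`: each component at frequency p moves to
    carrier_hz - p. This is done by amplitude-modulating with the carrier
    and then keeping only the lower sideband."""
    n = len(signal)
    t = np.arange(n) / sample_rate
    # Amplitude modulation with the carrier wave.
    modulated = signal * np.cos(2 * np.pi * carrier_hz * t)
    # Idealized low-pass filter: zero every component at or above the
    # carrier frequency, leaving the lower sideband.
    spectrum = np.fft.rfft(modulated)
    freqs = np.fft.rfftfreq(n, d=1.0 / sample_rate)
    spectrum[freqs >= carrier_hz] = 0.0
    return np.fft.irfft(spectrum, n)
```

Because the mapping is its own inverse, applying the same function with the same carrier restores the original frequencies (at reduced amplitude), which is one reason plain inversion offers so little security.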
[ { "math_id": 0, "text": "p" }, { "math_id": 1, "text": "s-p" }, { "math_id": 2, "text": "s" } ]
https://en.wikipedia.org/wiki?curid=5951995
59521
Preadditive category
Mathematical category whose hom sets form Abelian groups In mathematics, specifically in category theory, a preadditive category is another name for an Ab-category, i.e., a category that is enriched over the category of abelian groups, Ab. That is, an Ab-category C is a category such that every hom-set Hom("A","B") in C has the structure of an abelian group, and composition of morphisms is bilinear, in the sense that composition of morphisms distributes over the group operation. In formulas: formula_0 and formula_1 where + is the group operation. Some authors have used the term "additive category" for preadditive categories, but here we follow the current trend of reserving this term for certain special preadditive categories (see below). Examples. The most obvious example of a preadditive category is the category Ab itself. More precisely, Ab is a closed monoidal category. Note that commutativity is crucial here; it ensures that the sum of two group homomorphisms is again a homomorphism. In contrast, the category of all groups is not closed. See Medial category. Other common examples: These will give you an idea of what to think of; for more examples, follow the links below. Elementary properties. Because every hom-set Hom("A","B") is an abelian group, it has a zero element 0. This is the zero morphism from "A" to "B". Because composition of morphisms is bilinear, the composition of a zero morphism and any other morphism (on either side) must be another zero morphism. If you think of composition as analogous to multiplication, then this says that multiplication by zero always results in a product of zero, which is a familiar intuition. Extending this analogy, the fact that composition is bilinear in general becomes the distributivity of multiplication over addition. Focusing on a single object "A" in a preadditive category, these facts say that the endomorphism hom-set Hom("A","A") is a ring, if we define multiplication in the ring to be composition. 
This ring is the endomorphism ring of "A". Conversely, every ring (with identity) is the endomorphism ring of some object in some preadditive category. Indeed, given a ring "R", we can define a preadditive category R to have a single object "A", let Hom("A","A") be "R", and let composition be ring multiplication. Since "R" is an abelian group and multiplication in a ring is bilinear (distributive), this makes R a preadditive category. Category theorists will often think of the ring "R" and the category R as two different representations of the same thing, so that a particularly perverse category theorist might define a ring as a preadditive category with exactly one object (in the same way that a monoid can be viewed as a category with only one object—and forgetting the additive structure of the ring gives us a monoid). In this way, preadditive categories can be seen as a generalisation of rings. Many concepts from ring theory, such as ideals, Jacobson radicals, and factor rings can be generalized in a straightforward manner to this setting. When attempting to write down these generalizations, one should think of the morphisms in the preadditive category as the "elements" of the "generalized ring". Additive functors. If formula_2 and formula_3 are preadditive categories, then a functor formula_4 is additive if it too is enriched over the category formula_5. That is, formula_6 is additive if and only if, given any objects formula_7 and formula_8 of formula_2, the function formula_9 is a group homomorphism. Most functors studied between preadditive categories are additive. For a simple example, if the rings formula_10 and formula_11 are represented by the one-object preadditive categories formula_12 and formula_13, then a ring homomorphism from formula_10 to formula_11 is represented by an additive functor from formula_12 to formula_13, and conversely. 
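The correspondence between bilinearity of composition and the ring distributive laws is easy to check concretely: in Ab, the endomorphisms of Z² are 2×2 integer matrices, pointwise addition is the abelian-group structure on the hom-set, and composition is matrix multiplication. A small illustrative sketch (the matrices chosen are arbitrary examples):

```python
def add(f, g):
    """Pointwise sum of two 2x2 integer matrices (the group operation on Hom)."""
    return [[f[i][j] + g[i][j] for j in range(2)] for i in range(2)]

def compose(f, g):
    """Composition f after g, i.e. the matrix product f @ g."""
    return [[sum(f[i][k] * g[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

f = [[1, 2], [0, 1]]
g = [[3, 0], [1, 1]]
h = [[0, 1], [1, 0]]

# Bilinearity of composition = the two distributive laws of the
# endomorphism ring:
assert compose(f, add(g, h)) == add(compose(f, g), compose(f, h))
assert compose(add(f, g), h) == add(compose(f, h), compose(g, h))
```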
If formula_2 and formula_3 are categories and formula_3 is preadditive, then the functor category formula_14 is also preadditive, because natural transformations can be added in a natural way. If formula_2 is preadditive too, then the category formula_15 of additive functors and all natural transformations between them is also preadditive. The latter example leads to a generalization of modules over rings: If formula_2 is a preadditive category, then formula_16 is called the module category over formula_2. When formula_2 is the one-object preadditive category corresponding to the ring formula_10, this reduces to the ordinary category of (left) formula_10-modules. Again, virtually all concepts from the theory of modules can be generalised to this setting. R-linear categories. More generally, one can consider a category C enriched over the monoidal category of modules over a commutative ring R, called an R-linear category. In other words, each hom-set formula_17 in C has the structure of an R-module, and composition of morphisms is R-bilinear. When considering functors between two R-linear categories, one often restricts to those that are R-linear, so those that induce R-linear maps on each hom-set. Biproducts. Any finite product in a preadditive category must also be a coproduct, and conversely. In fact, finite products and coproducts in preadditive categories can be characterised by the following "biproduct condition": The object "B" is a biproduct of the objects "A"1, ..., "An" if and only if there are "projection morphisms" "p""j": "B" → "A""j" and "injection morphisms" "i""j": "A""j" → "B", such that ("i"1∘"p"1) + ··· + ("in"∘"pn") is the identity morphism of "B", "pj"∘"ij" is the identity morphism of Aj, and "p""j"∘"ik" is the zero morphism from "A""k" to "Aj" whenever "j" and "k" are distinct. This biproduct is often written "A"1 ⊕ ··· ⊕ "An", borrowing the notation for the direct sum. 
This is because the biproduct in well known preadditive categories like Ab "is" the direct sum. However, although infinite direct sums make sense in some categories, like Ab, infinite biproducts do "not" make sense. The biproduct condition in the case "n" = 0 simplifies drastically; "B" is a "nullary biproduct" if and only if the identity morphism of "B" is the zero morphism from "B" to itself, or equivalently if the hom-set Hom("B","B") is the trivial ring. Note that because a nullary biproduct will be both terminal (a nullary product) and initial (a nullary coproduct), it will in fact be a zero object. Indeed, the term "zero object" originated in the study of preadditive categories like Ab, where the zero object is the zero group. A preadditive category in which every biproduct exists (including a zero object) is called "additive". Further facts about biproducts that are mainly useful in the context of additive categories may be found under that subject. Kernels and cokernels. Because the hom-sets in a preadditive category have zero morphisms, the notions of kernel and cokernel make sense. That is, if "f": "A" → "B" is a morphism in a preadditive category, then the kernel of "f" is the equaliser of "f" and the zero morphism from "A" to "B", while the cokernel of "f" is the coequaliser of "f" and this zero morphism. Unlike with products and coproducts, the kernel and cokernel of "f" are generally not equal in a preadditive category. When specializing to the preadditive categories of abelian groups or modules over a ring, this notion of kernel coincides with the ordinary notion of a kernel of a homomorphism, if one identifies the ordinary kernel "K" of "f": "A" → "B" with its embedding "K" → "A". However, in a general preadditive category there may exist morphisms without kernels and/or cokernels. There is a convenient relationship between the kernel and cokernel and the abelian group structure on the hom-sets. 
Given parallel morphisms "f" and "g", the equaliser of "f" and "g" is just the kernel of "g" − "f", if either exists, and the analogous fact is true for coequalisers. The alternative term "difference kernel" for binary equalisers derives from this fact. A preadditive category in which all biproducts, kernels, and cokernels exist is called "pre-abelian". Further facts about kernels and cokernels in preadditive categories that are mainly useful in the context of pre-abelian categories may be found under that subject. Special cases. Most of these special cases of preadditive categories have been mentioned above, but they are gathered here for reference. The preadditive categories most commonly studied are in fact abelian categories; for example, Ab is an abelian category.
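The biproduct condition stated earlier can also be verified in a concrete case. The sketch below checks it for "B" = Z ⊕ Z in Ab, with the usual projections and injections represented as plain functions (an illustrative check, not a general implementation):

```python
# Projections p_j : Z ⊕ Z → Z and injections i_j : Z → Z ⊕ Z.
def p1(v): return v[0]
def p2(v): return v[1]
def i1(x): return (x, 0)
def i2(x): return (0, x)

def add_vec(u, v):
    """The abelian-group structure on Z ⊕ Z (pointwise addition)."""
    return (u[0] + v[0], u[1] + v[1])

v = (3, -7)
# (i1 ∘ p1) + (i2 ∘ p2) is the identity morphism of B.
assert add_vec(i1(p1(v)), i2(p2(v))) == v
# p_j ∘ i_j is the identity morphism of A_j.
assert p1(i1(5)) == 5 and p2(i2(5)) == 5
# p_j ∘ i_k is the zero morphism whenever j and k are distinct.
assert p1(i2(5)) == 0 and p2(i1(5)) == 0
```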
[ { "math_id": 0, "text": "\n f\\circ (g + h) = (f\\circ g) + (f\\circ h)\n" }, { "math_id": 1, "text": "\n (f + g)\\circ h = (f\\circ h) + (g\\circ h),\n" }, { "math_id": 2, "text": "C" }, { "math_id": 3, "text": "D" }, { "math_id": 4, "text": "F : C \\rightarrow D" }, { "math_id": 5, "text": "Ab" }, { "math_id": 6, "text": "F" }, { "math_id": 7, "text": "A" }, { "math_id": 8, "text": "B" }, { "math_id": 9, "text": "F:\\text{Hom}(A,B)\\rightarrow \\text{Hom}(F(A),F(B))" }, { "math_id": 10, "text": "R" }, { "math_id": 11, "text": "S" }, { "math_id": 12, "text": "C_R" }, { "math_id": 13, "text": "C_S" }, { "math_id": 14, "text": "D^C" }, { "math_id": 15, "text": "\\text{Add}(C,D)" }, { "math_id": 16, "text": "\\text{Mod}(C)\\mathbin{:=} \\text{Add}(C,Ab)" }, { "math_id": 17, "text": "\\text{Hom}(A,B)" } ]
https://en.wikipedia.org/wiki?curid=59521
59529
Solubility equilibrium
Thermodynamic equilibrium between a solid and a solution of the same compound Solubility equilibrium is a type of dynamic equilibrium that exists when a chemical compound in the solid state is in chemical equilibrium with a solution of that compound. The solid may dissolve unchanged, with dissociation, or with chemical reaction with another constituent of the solution, such as acid or alkali. Each solubility equilibrium is characterized by a temperature-dependent "solubility product" which functions like an equilibrium constant. Solubility equilibria are important in pharmaceutical, environmental and many other scenarios. Definitions. A solubility equilibrium exists when a chemical compound in the solid state is in chemical equilibrium with a solution containing the compound. This type of equilibrium is an example of dynamic equilibrium in that some individual molecules migrate between the solid and solution phases such that the rates of dissolution and precipitation are equal to one another. When equilibrium is established and the solid has not all dissolved, the solution is said to be saturated. The concentration of the solute in a saturated solution is known as the solubility. Units of solubility may be molar (mol dm−3) or expressed as mass per unit volume, such as μg mL−1. Solubility is temperature dependent. A solution containing a higher concentration of solute than the solubility is said to be supersaturated. A supersaturated solution may be induced to come to equilibrium by the addition of a "seed" which may be a tiny crystal of the solute, or a tiny solid particle, which initiates precipitation. There are three main types of solubility equilibria. In each case an equilibrium constant can be specified as a quotient of activities. This equilibrium constant is dimensionless as activity is a dimensionless quantity. 
However, use of activities is very inconvenient, so the equilibrium constant is usually divided by the quotient of activity coefficients, to become a quotient of concentrations. See Equilibrium chemistry#Equilibrium constant for details. Moreover, the activity of a solid is, by definition, equal to 1 so it is omitted from the defining expression. For a chemical equilibrium formula_0 the solubility product, "K"sp for the compound A"p"B"q" is defined as follows formula_1 where [A] and [B] are the concentrations of A and B in a saturated solution. A solubility product has a similar functionality to an equilibrium constant though formally "K"sp has the dimension of (concentration)"p"+"q". Effects of conditions. Temperature effect. Solubility is sensitive to changes in temperature. For example, sugar is more soluble in hot water than cool water. It occurs because solubility products, like other types of equilibrium constants, are functions of temperature. In accordance with Le Chatelier's Principle, when the dissolution process is endothermic (heat is absorbed), solubility increases with rising temperature. This effect is the basis for the process of recrystallization, which can be used to purify a chemical compound. When dissolution is exothermic (heat is released) solubility decreases with rising temperature. Sodium sulfate shows increasing solubility with temperature below about 32.4 °C, but a decreasing solubility at higher temperature. This is because the solid phase is the decahydrate (Na2SO4·10H2O) below the transition temperature, but a different hydrate above that temperature. 
The dependence on temperature of solubility for an ideal solution (achieved for low solubility substances) is given by the following expression containing the enthalpy of melting, Δ"m""H", and the mole fraction formula_2 of the solute at saturation: formula_3 where formula_4 is the partial molar enthalpy of the solute at infinite dilution and formula_5 the enthalpy per mole of the pure crystal. This differential expression for a non-electrolyte can be integrated on a temperature interval to give: formula_6 For nonideal solutions activity of the solute at saturation appears instead of mole fraction solubility in the derivative with respect to temperature: formula_7 Common-ion effect. The common-ion effect is the effect of decreased solubility of one salt when another salt that has an ion in common with it is also present. For example, the solubility of silver chloride, AgCl, is lowered when sodium chloride, a source of the common ion chloride, is added to a suspension of AgCl in water. formula_8 The solubility, "S", in the absence of a common ion can be calculated as follows. The concentrations [Ag+] and [Cl−] are equal because one mole of AgCl would dissociate into one mole of Ag+ and one mole of Cl−. Let the concentration of [Ag+(aq)] be denoted by "x". Then formula_9 formula_10 "K"sp for AgCl is equal to 1.77 × 10−10 mol2 dm−6 at 25 °C, so the solubility is 1.33 × 10−5 mol dm−3. Now suppose that sodium chloride is also present, at a concentration of 0.01 mol dm−3 = 0.01 M. The solubility, ignoring any possible effect of the sodium ions, is now calculated by formula_11 This is a quadratic equation in "x", which is also equal to the solubility. formula_12 In the case of silver chloride, "x"2 is very much smaller than 0.01 M "x", so the first term can be ignored. Therefore formula_13 a considerable reduction from 1.33 × 10−5 mol dm−3. In gravimetric analysis for silver, the reduction in solubility due to the common ion effect is used to ensure "complete" precipitation of AgCl. Particle size effect. 
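The common-ion calculation for AgCl is easy to reproduce numerically. The sketch below uses the standard literature value "K"sp ≈ 1.77 × 10−10 mol2 dm−6 at 25 °C (consistent with the 1.77 × 10−8 mol dm−3 result quoted in the text) and solves the quadratic exactly rather than dropping the "x"2 term:

```python
import math

# Ksp of AgCl at 25 °C, in mol^2 dm^-6 (literature value).
KSP_AGCL = 1.77e-10

def solubility_simple(ksp):
    """Solubility of a 1:1 salt in pure water: x^2 = Ksp."""
    return math.sqrt(ksp)

def solubility_common_ion(ksp, common_ion_conc):
    """Solubility with an added common ion: solve x*(c + x) = Ksp
    exactly via the quadratic formula (positive root)."""
    c = common_ion_conc
    return (-c + math.sqrt(c * c + 4.0 * ksp)) / 2.0

s0 = solubility_simple(KSP_AGCL)            # about 1.33e-5 mol dm^-3
s1 = solubility_common_ion(KSP_AGCL, 0.01)  # about 1.77e-8 mol dm^-3
```

The exact root agrees with the approximate value "K"sp/0.01 M to many digits, confirming that dropping the "x"2 term is harmless here.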
The thermodynamic solubility constant is defined for large monocrystals. Solubility will increase with decreasing size of solute particle (or droplet) because of the additional surface energy. This effect is generally small unless particles become very small, typically smaller than 1 μm. The effect of the particle size on the solubility constant can be quantified as follows: formula_14 where "K"A is the solubility constant for the solute particles with the molar surface area "A", "K"A→0 is the solubility constant for a substance with molar surface area tending to zero (i.e., when the particles are large), "γ" is the surface tension of the solute particle in the solvent, "A"m is the molar surface area of the solute (in m2/mol), "R" is the universal gas constant, and "T" is the absolute temperature. Salt effects. The salt effects (salting in and salting-out) refer to the fact that the presence of a salt which has no ion in common with the solute has an effect on the ionic strength of the solution and hence on activity coefficients, so that the equilibrium constant, expressed as a concentration quotient, changes. Phase effect. Equilibria are defined for specific crystal phases. Therefore, the solubility product is expected to be different depending on the phase of the solid. For example, aragonite and calcite will have different solubility products even though they both have the same chemical identity (calcium carbonate). Under any given conditions one phase will be thermodynamically more stable than the other; therefore, this phase will form when thermodynamic equilibrium is established. However, kinetic factors may favor the formation of the unfavorable precipitate (e.g. aragonite), which is then said to be in a metastable state. In pharmacology, the metastable state is sometimes referred to as the amorphous state. Amorphous drugs have higher solubility than their crystalline counterparts due to the absence of long-distance interactions inherent in the crystal lattice. 
Thus, it takes less energy to solvate the molecules in amorphous phase. The effect of amorphous phase on solubility is widely used to make drugs more soluble. Pressure effect. For condensed phases (solids and liquids), the pressure dependence of solubility is typically weak and usually neglected in practice. Assuming an ideal solution, the dependence can be quantified as: formula_15 where formula_2 is the mole fraction of the formula_16-th component in the solution, formula_17 is the pressure, formula_18 is the absolute temperature, formula_19 is the partial molar volume of the formula_16th component in the solution, formula_20 is the partial molar volume of the formula_16th component in the dissolving solid, and formula_21 is the universal gas constant. The pressure dependence of solubility does occasionally have practical significance. For example, precipitation fouling of oil fields and wells by calcium sulfate (which decreases its solubility with decreasing pressure) can result in decreased productivity with time. Quantitative aspects. Simple dissolution. Dissolution of an organic solid can be described as an equilibrium between the substance in its solid and dissolved forms. For example, when sucrose (table sugar) forms a saturated solution formula_22 An equilibrium expression for this reaction can be written, as for any chemical reaction (products over reactants): formula_23 where "K"o is called the thermodynamic solubility constant. The braces indicate activity. The activity of a pure solid is, by definition, unity. Therefore formula_24 The activity of a substance, A, in solution can be expressed as the product of the concentration, [A], and an activity coefficient, "γ". When "K"o is divided by "γ", the solubility constant, "K"s, formula_25 is obtained. This is equivalent to defining the standard state as the saturated solution so that the activity coefficient is equal to one. 
The solubility constant is a true constant only if the activity coefficient is not affected by the presence of any other solutes that may be present. The unit of the solubility constant is the same as the unit of the concentration of the solute. For sucrose "K"s = 1.971 mol dm−3 at 25 °C. This shows that the solubility of sucrose at 25 °C is nearly 2 mol dm−3 (540 g/L). Sucrose is unusual in that it does not easily form a supersaturated solution at higher concentrations, as do most other carbohydrates. Dissolution with dissociation. Ionic compounds normally dissociate into their constituent ions when they dissolve in water. For example, for silver chloride: &lt;chem display="block"&gt;AgCl_{(s)} &lt;=&gt; Ag^+_{(aq)}{} + Cl^-_{(aq)} &lt;/chem&gt; The expression for the equilibrium constant for this reaction is: formula_26 where formula_27 is the thermodynamic equilibrium constant and braces indicate activity. The activity of a pure solid is, by definition, equal to one. When the solubility of the salt is very low the activity coefficients of the ions in solution are nearly equal to one. By setting them to be actually equal to one this expression reduces to the solubility product expression: formula_28 For 2:2 and 3:3 salts, such as CaSO4 and FePO4, the general expression for the solubility product is the same as for a 1:1 electrolyte formula_29 formula_30 (electrical charges are omitted in general expressions, for simplicity of notation) With an unsymmetrical salt like Ca(OH)2 the solubility expression is given by formula_31 formula_32 Since the concentration of hydroxide ions is twice the concentration of calcium ions this reduces to formula_33 In general, with the chemical equilibrium formula_34 formula_35 and the following table, showing the relationship between the solubility of a compound and the value of its solubility product, can be derived. Solubility products are often expressed in logarithmic form. 
Thus, for calcium sulfate, with "K"sp = 4.8 × 10−5 mol2 dm−6, log "K"sp = −4.32. The smaller the value of "K"sp, or the more negative the log value, the lower the solubility. Some salts are not fully dissociated in solution. Examples include MgSO4, famously discovered by Manfred Eigen to be present in seawater as both an inner sphere complex and an outer sphere complex. The solubility of such salts is calculated by the method outlined in dissolution with reaction. Hydroxides. The solubility product for the hydroxide of a metal ion, M"n"+, is usually defined as follows: formula_36 formula_37 However, general-purpose computer programs are designed to use hydrogen ion concentrations with the alternative definitions. formula_38 formula_39 For hydroxides, solubility products are often given in a modified form, "K"*sp, using hydrogen ion concentration in place of hydroxide ion concentration. The two values are related by the self-ionization constant for water, "K"w. formula_40 formula_41 formula_42 For example, at ambient temperature, for calcium hydroxide, Ca(OH)2, lg "K"sp is ca. −5 and lg "K"*sp ≈ −5 + 2 × 14 ≈ 23. Dissolution with reaction. A typical reaction with dissolution involves a weak base, B, dissolving in an acidic aqueous solution. formula_43 This reaction is very important for pharmaceutical products. Dissolution of weak acids in alkaline media is similarly important. formula_44 The uncharged molecule usually has lower solubility than the ionic form, so solubility depends on pH and the acid dissociation constant of the solute. The term "intrinsic solubility" is used to describe the solubility of the un-ionized form in the absence of acid or alkali. Leaching of aluminium salts from rocks and soil by acid rain is another example of dissolution with reaction: alumino-silicates are bases which react with the acid to form soluble species, such as Al3+(aq). Formation of a chemical complex may also change solubility. 
A well-known example is the addition of a concentrated solution of ammonia to a suspension of silver chloride, in which dissolution is favoured by the formation of an ammine complex. formula_45 When sufficient ammonia is added to a suspension of silver chloride, the solid dissolves. The addition of water softeners to washing powders to inhibit the formation of soap scum provides an example of practical importance. Experimental determination. The determination of solubility is fraught with difficulties. First and foremost is the difficulty in establishing that the system is in equilibrium at the chosen temperature. This is because both precipitation and dissolution reactions may be extremely slow. If the process is very slow solvent evaporation may be an issue. Supersaturation may occur. With very insoluble substances, the concentrations in solution are very low and difficult to determine. The methods used fall broadly into two categories, static and dynamic. Static methods. In static methods a mixture is brought to equilibrium and the concentration of a species in the solution phase is determined by chemical analysis. This usually requires separation of the solid and solution phases. In order to do this the equilibration and separation should be performed in a thermostatted room. Very low concentrations can be measured if a radioactive tracer is incorporated in the solid phase. A variation of the static method is to add a solution of the substance in a non-aqueous solvent, such as dimethyl sulfoxide, to an aqueous buffer mixture. Immediate precipitation may occur giving a cloudy mixture. The solubility measured for such a mixture is known as "kinetic solubility". The cloudiness is due to the fact that the precipitate particles are very small resulting in Tyndall scattering. In fact the particles are so small that the particle size effect comes into play and kinetic solubility is often greater than equilibrium solubility. 
Over time the cloudiness will disappear as the size of the crystallites increases, and eventually equilibrium will be reached in a process known as precipitate ageing. Dynamic methods. Solubility values of organic acids, bases, and ampholytes of pharmaceutical interest may be obtained by a process called "Chasing equilibrium solubility". In this procedure, a quantity of substance is first dissolved at a pH where it exists predominantly in its ionized form and then a precipitate of the neutral (un-ionized) species is formed by changing the pH. Subsequently, the rate of change of pH due to precipitation or dissolution is monitored and strong acid and base titrants are added to adjust the pH to discover the equilibrium conditions when the two rates are equal. The advantage of this method is that it is relatively fast as the quantity of precipitate formed is quite small. However, the performance of the method may be affected by the formation of supersaturated solutions. References. &lt;templatestyles src="Reflist/styles.css" /&gt; External links. A number of computer programs are available to do the calculations. They include:
[ { "math_id": 0, "text": "\\mathrm A_p \\mathrm B_q \\leftrightharpoons p\\mathrm A + q\\mathrm B" }, { "math_id": 1, "text": "K_\\mathrm{sp} = [\\mathrm A]^p[\\mathrm B]^q" }, { "math_id": 2, "text": "x_i" }, { "math_id": 3, "text": " \\left(\\frac{\\partial \\ln x_i}{\\partial T} \\right)_P = \\frac{\\bar{H}_{i,\\mathrm{aq}}-H_{i,\\mathrm{cr}}}{RT^2}" }, { "math_id": 4, "text": " \\bar{H}_{i,\\mathrm{aq}}" }, { "math_id": 5, "text": " H_{i,\\mathrm{cr}}" }, { "math_id": 6, "text": " \\ln x_i=\\frac{\\Delta _m H_i}{R} \\left(\\frac 1 {T_f} - \\frac{1}{T} \\right)" }, { "math_id": 7, "text": " \\left(\\frac{\\partial \\ln a_i}{\\partial T} \\right)_P= \\frac{H_{i,\\mathrm{aq}}-H_{i,\\mathrm{cr}}}{RT^2}" }, { "math_id": 8, "text": "\\mathrm{AgCl(s) \\leftrightharpoons Ag^+ (aq) + Cl^- (aq) }" }, { "math_id": 9, "text": "K_\\mathrm{sp}=\\mathrm{[Ag^+] [Cl^-]}= x^2" }, { "math_id": 10, "text": " \\text{Solubility} = \\mathrm{[Ag^+]=[Cl^-]} = x = \\sqrt{K_\\mathrm{sp}} " }, { "math_id": 11, "text": "K_\\mathrm{sp}=\\mathrm{[Ag^+] [Cl^-]}=x(0.01 \\,\\text{M} + x)" }, { "math_id": 12, "text": " x^2 + 0.01 \\, \\text{M}\\, x - K_{sp} = 0" }, { "math_id": 13, "text": "\\text{Solubility}=\\mathrm{[Ag^+]} = x = \\frac{K_\\mathrm{sp}}{0.01 \\,\\text{M}} = \\mathrm{1.77 \\times 10^{-8} \\, mol \\, dm^{-3}}" }, { "math_id": 14, "text": "\\log(^*K_{A}) = \\log(^*K_{A \\to 0}) + \\frac{\\gamma A_\\mathrm{m}} {3.454RT}" }, { "math_id": 15, "text": " \\left(\\frac{\\partial \\ln x_i}{\\partial P} \\right)_T = -\\frac{\\bar{V}_{i,\\mathrm{aq}}-V_{i,\\mathrm{cr}}} {RT} " }, { "math_id": 16, "text": "i" }, { "math_id": 17, "text": "P" }, { "math_id": 18, "text": "T" }, { "math_id": 19, "text": "\\bar{V}_{i,\\text{aq}}" }, { "math_id": 20, "text": "V_{i,\\text{cr}}" }, { "math_id": 21, "text": "R" }, { "math_id": 22, "text": "\\mathrm { C_{12} H_{22} O_{11}(s) \\leftrightharpoons C_{12} H_{22} O_{11} (aq)}" }, { "math_id": 23, "text": "K^\\ominus = 
\\frac{\\left\\{\\mathrm{{C}_{12}{H}_{22}{O}_{11}(aq)}\\right\\}}{ \\left \\{\\mathrm{{C}_{12}{H}_{22}{O}_{11}(s)}\\right\\}}" }, { "math_id": 24, "text": "K^\\ominus = \\left\\{\\mathrm{{C}_{12}{H}_{22}{O}_{11}(aq)}\\right\\}" }, { "math_id": 25, "text": "K_\\mathrm{s} = \\left[\\mathrm{{C}_{12}{H}_{22}{O}_{11}(aq)}\\right]" }, { "math_id": 26, "text": "K^\\ominus\n= \\frac{\\left\\{\\ce{Ag+}_\\ce{(aq)}\\right\\}\\left\\{\\ce{Cl-}_\\ce{(aq)}\\right\\}}{ \\left\\{\\ce{AgCl_{(s)}}\\right\\}}\n=\\left\\{\\ce{Ag+}_\\ce{(aq)}\\right\\}\\left\\{\\ce{Cl-}_\\ce{(aq)}\\right\\}\n" }, { "math_id": 27, "text": "K^\\ominus" }, { "math_id": 28, "text": "K_\\ce{sp} = [\\ce{Ag+}] [\\ce{Cl-}]= [\\ce{Ag+}]^2= [\\ce{Cl-}]^2." }, { "math_id": 29, "text": " \\mathrm{AB} \\leftrightharpoons \\mathrm{A}^{p+} + \\mathrm{B}^{p-}" }, { "math_id": 30, "text": "K_{sp}= \\mathrm{[A] [B]} = \\mathrm{[A]^2}= \\mathrm{[B]^2}" }, { "math_id": 31, "text": " \\ce{ Ca(OH)_2 <=> {Ca}^{2+} + 2OH^- }" }, { "math_id": 32, "text": "K_{sp} = \\ce{[Ca]} \\ce{[OH]}^2 " }, { "math_id": 33, "text": "\\mathrm{K_{sp} = 4[Ca]^3 }" }, { "math_id": 34, "text": " \\ce{A}_p \\ce{B}_q ~\\ce{\\leftrightharpoons}~ p\\ce{A}^{n+} + q\\ce{B}^{m-}" }, { "math_id": 35, "text": " \\ce{[B]} = \\frac{q}{p}\\ce{[A] } " }, { "math_id": 36, "text": "\\mathrm{M(OH)_n \\leftrightharpoons \\mathrm{M^{n+} + n OH^-}}" }, { "math_id": 37, "text": "K_{sp} = \\mathrm{[M^{n+}] [OH^-]^n} " }, { "math_id": 38, "text": "\\mathrm{M(OH)_n + nH^+ \\leftrightharpoons M^{n+} + n H_2O }" }, { "math_id": 39, "text": "K^*_\\text{sp} = \\mathrm{[M^{n+}] [H^+]^{-n}} " }, { "math_id": 40, "text": "K_\\mathrm{w} = [\\mathrm{H^+}] [\\mathrm{OH^-}]" }, { "math_id": 41, "text": "K^*_\\text{sp} = \\frac{K_\\text{sp}}{(K_\\text{w})^n}" }, { "math_id": 42, "text": "\\log K^*_\\text{sp} = \\log K_\\text{sp} - n \\log K_\\text{w}" }, { "math_id": 43, "text": "\\mathrm {B} \\mathrm{(s)} + \\mathrm H^+ \\mathrm {(aq)} \\leftrightharpoons \\mathrm {BH}^+ 
(\\mathrm{aq)}" }, { "math_id": 44, "text": "\\mathrm{ HA(s) + OH^-(aq) \\leftrightharpoons A^- (aq) + H_2O}" }, { "math_id": 45, "text": "\\mathrm{AgCl(s) + 2 NH_3(aq) \\leftrightharpoons [Ag(NH_3)_2]^+(aq) + Cl^- (aq)}" } ]
https://en.wikipedia.org/wiki?curid=59529
59530924
S-object
In algebraic topology, an formula_0-object (also called a symmetric sequence) is a sequence formula_1 of objects such that each formula_2 comes with an action of the symmetric group formula_3. The category of combinatorial species is equivalent to the category of finite formula_0-sets (roughly because the permutation category is equivalent to the category of finite sets and bijections.) S-module. By "formula_0-module", we mean an formula_0-object in the category formula_4 of finite-dimensional vector spaces over a field "k" of characteristic zero (the symmetric groups act from the right by convention). Then each formula_0-module determines a Schur functor on formula_4. This definition of formula_0-module shares its name with the considerably better-known model for highly structured ring spectra due to Elmendorf, Kriz, Mandell and May. Notes. &lt;templatestyles src="Reflist/styles.css" /&gt; References. &lt;templatestyles src="Reflist/styles.css" /&gt;
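The Schur functor determined by an S-module can be written down explicitly; the following is a sketch of the standard construction from the literature (the coinvariants notation is not defined in this article and is assumed here):

```latex
% Schur functor attached to an S-module M: for a finite-dimensional
% vector space V over k,
\[
  \mathbb{S}_M(V) \;=\; \bigoplus_{n \ge 0} M(n) \otimes_{\mathbb{S}_n} V^{\otimes n},
\]
% where the tensor product is taken over the group algebra of the
% symmetric group \mathbb{S}_n, acting on M(n) by the given right action
% and on V^{\otimes n} by permuting the tensor factors.
```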
[ { "math_id": 0, "text": "\\mathbb{S}" }, { "math_id": 1, "text": "\\{ X(n) \\}" }, { "math_id": 2, "text": "X(n)" }, { "math_id": 3, "text": "\\mathbb{S}_n" }, { "math_id": 4, "text": "\\mathsf{Vect}" } ]
https://en.wikipedia.org/wiki?curid=59530924
5953545
Rate-of-return regulation
Rate-of-return regulation (also cost-based regulation) is a system for setting the prices charged by government-regulated monopolies, such as public utilities. It attempts to set prices at efficient (non-monopolistic, competitive) levels equal to the efficient costs of production, plus a government-permitted rate of return on capital. Rate-of-return regulation has been criticized because it encourages cost-padding and because if the rate is set too high, it encourages regulated firms to adopt capital-labor ratios that are too high. That is known as the Averch–Johnson effect, or simply "gold-plating." Under rate-of-return regulation, regulated monopolies have no incentive to minimize their capital purchases, since prices are set equal to their costs of production. Rate-of-return regulation was dominant in the US for a number of years in the government regulation of utility companies and other natural monopolies. Such companies, if not regulated, could easily charge far higher rates since consumers would pay any price for essential goods such as electricity or water. Method of Regulation. Rate-of-return regulation is considered fair because it gives the company the opportunity to recover the costs of serving its customers while protecting consumers from paying exorbitant prices. Under this method of regulation, government regulators examine the firm's rate base, cost of capital, operating expenses, depreciation expenses and taxes in order to estimate the total revenue needed for the firm to fully recover its expenses. Revenue Requirement Calculation. Regulators combine a company's expenses and cost of capital to calculate a revenue requirement. This revenue requirement becomes the target revenue for setting prices. formula_0 Here formula_1 is the revenue requirement, formula_2 is the rate base (the value of the firm's invested capital), formula_3 is the allowed rate of return, formula_4 is operating expenses, formula_5 is depreciation expense and formula_6 is taxes. Basics for Assessing Rate of Return. 
The goal of rate-of-return regulation is for the regulator to evaluate the effects of different price levels on a public utility's potential earnings, protect consumers and provide the utility the opportunity to receive a "fair" rate of return on its investment. There are five criteria utilized by regulators to assess the suitable rate of return for a firm. Advantages of Rate-of-Return Regulation. Rate-of-Return regulation was mainly used due to its ability to be sustainable in the long-term and resistant to changes in the company's conditions as well as its popularity among investors. While regulation of this type prevents monopolies with the potential to make large profits from doing so, such as electricity companies, it provides stability. Investors will not make as large dividends off of regulated utility companies; however, they will be able to make fairly constant, substantial returns despite fluctuations in the economy or firm composure. Investor risk is minimized since the regulator's prudence in price setting is constrained by the method used to set the regulation rate. Therefore, investors can depend on consistency, which can be an attractive offer, especially in a volatile world market. Furthermore, regulation of this sort protects the firm from negative public opinion while providing the consumer with ease of mind. Throughout history, due to their large profits, public opinion has turned against monopolies, which eventually resulted in severe anti-trust laws in the early 20th century. Unregulated monopolies such as Standard Oil that pulled vast profits quickly became the subjects of negative public opinion, the original source of regulation of monopolies. With rate-of-return regulation, consumers can rely on the government to ensure that they are paying fair prices for their electricity and other regulated services, and not feeding into a business of trusts and greed. Disadvantages of Rate-of-Return Regulation and Criticism. 
The central problem with rate-of-return regulation, the reason most countries with economic regulation have switched to alternate methods of regulating such firms, is that rate-of-return regulation does not provide strong incentives for regulated firms to operate efficiently. The main form of this weakness is the Averch-Johnson effect. Firms regulated in this manner may engage in disproportionate capital accumulation, which in turn will heighten the price level allotted by the government regulator, raising the firm's short-term profits. Unnecessary capital expenditures and operating expenses would increase the firm's revenue requirement (R) as a result of both an increase in operating expenses (E) and depreciation costs (d). Depreciation costs rise due to the fact that as a firm obtains more capital, that physical capital will depreciate over time, thereby raising the overall depreciation cost and their regulated price level as allocated by the government. History of Rate-of-Return Regulation. The right of states to prescribe rates was affirmed in the United States Supreme Court case of "Munn v. Illinois" of 1877. This case generally allowed states to regulate certain businesses and practices within their borders, including railroads, which had risen to substantial power at the time. This case was one of six that were later dubbed the "Granger Cases", all concerning the proper degree of government regulation on private industry. While the political sentiment of the early 20th century was increasingly anti-monopoly and anti-trust, government officials recognized the need for some goods and services to be provided by monopolies. In specific cases, a monopolistic economic model is more efficient than a perfectly competitive model. This type of firm is called a "Natural monopoly" due to the fact that the cost-technology of the industry is markedly high, suggesting that it is more effective for only one or a few firms to dominate production. 
In a monopolistic market, one or several firms can make the large investment necessary, and in turn provide a large enough percentage of the output to cover the costs of their large initial investment. In a competitive market, numerous firms would be required to spend large sums on the necessary capital only to produce a small quantity of output, thereby sacrificing economic efficiency. The system of rate setting was developed through a series of Supreme Court cases beginning with the "Smyth v. Ames" case in 1898. In this so-called "Maximum Freight Case", the Supreme Court defined the constitutional limits of governmental power to set railroad utility rates. The Court stated that regulated industries had a right to "fair return." This was later overturned in the "Federal Power Commission v. Hope Natural Gas Company" case, but it was important to the development of rate-of-return regulation and more generally, to the practice of government regulation of private industry. As the concept of rate-of-return regulation spread throughout the anti-trust leaning America, the question of "what profit should investors receive?" became the main decisive issue. This was the question the "Hope" case set out to answer in 1944. Failing prices in the late 19th century raised the issue of whether profit should be based on the amount the investors originally invested in assets years earlier, or on the lower current asset value resulting from a drop in overall price level. The "Hope" case settled on a compromise for asset valuation. With respect to debt capital, "Hope" accepted the original historic cost as reasonable for valuating the debt portion of the asset rate base and allowing the historically agreed upon interest rate as its rate of return. However, with respect to equity capital, "Hope" determined that the current return value would be acceptable. Therefore, asset valuation was to be calculated by regulators based on a combination of historical cost and current return value. 
Rate-of-return regulation was primarily used in the United States to regulate utility companies that provide goods such as electricity, gas, telephone service, water, and television cable to the general public. Despite its relative success in regulating such companies, rate-of-return regulation was gradually replaced in the late 20th century by new, more efficient forms of regulation such as Price-cap regulation and Revenue-cap regulation. Price-cap regulation was developed in the 1980s by British Treasury economist Stephen Littlechild and was gradually incorporated globally into monopoly regulations. Price-cap regulation adjusts firm prices according to a price cap index which reflects the inflation rate in the economy generally, efficiencies a specific firm is able to utilize relative to the average firm in the economy, and the inflation in a firm's output prices relative to the average firm in the economy. Revenue-cap regulation is a similar means of regulating monopolies, except instead of prices being the regulated variable, regulators set revenue limits. These new forms of regulation gradually replaced rate-of-return regulation in the American and global economies. While rate-of-return regulation is very susceptible to the Averch-Johnson effect, new forms of regulation avoid this loophole by using indexes to properly evaluate firm efficiency and use of resources. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
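The revenue-requirement formula and the Averch–Johnson incentive described above can be illustrated numerically; the following sketch uses entirely hypothetical figures, chosen only to show the arithmetic:

```python
# Numeric sketch of the revenue-requirement formula  R = (B × r) + E + d + T
# and of the Averch–Johnson ("gold-plating") incentive it creates.
# All figures are hypothetical.
def revenue_requirement(B, r, E, d, T):
    """Rate base * allowed return + operating expenses + depreciation + taxes."""
    return B * r + E + d + T

B, r = 1_000_000, 0.08              # rate base ($) and allowed rate of return
E, d, T = 250_000, 50_000, 30_000   # operating expenses, depreciation, taxes ($)

baseline = revenue_requirement(B, r, E, d, T)
print(baseline)  # 410000.0: the target revenue used to set prices

# Gold-plating: unnecessary capital still raises allowed revenue, through
# both the larger rate base and the larger depreciation charge.
extra_capital = 200_000
extra_dep = extra_capital / 10      # say, straight-line over 10 years
padded = revenue_requirement(B + extra_capital, r, E, d + extra_dep, T)
print(padded - baseline)  # 36000.0 more allowed revenue, with no extra output
```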
[ { "math_id": 0, "text": "R = (B \\times r) + E + d + T" }, { "math_id": 1, "text": "R" }, { "math_id": 2, "text": "B" }, { "math_id": 3, "text": "r" }, { "math_id": 4, "text": "E" }, { "math_id": 5, "text": "d" }, { "math_id": 6, "text": "T" } ]
https://en.wikipedia.org/wiki?curid=5953545
59538
Monomorphism
Injective homomorphism In the context of abstract algebra or universal algebra, a monomorphism is an injective homomorphism. A monomorphism from X to Y is often denoted with the notation formula_0. In the more general setting of category theory, a monomorphism (also called a monic morphism or a mono) is a left-cancellative morphism. That is, an arrow "f" : "X" → "Y" such that for all objects "Z" and all morphisms "g"1, "g"2: "Z" → "X", formula_1 Monomorphisms are a categorical generalization of injective functions (also called "one-to-one functions"); in some categories the notions coincide, but monomorphisms are more general, as in the examples below. In the setting of posets intersections are idempotent: the intersection of anything with itself is itself. Monomorphisms generalize this property to arbitrary categories. A morphism is a monomorphism if it is idempotent with respect to pullbacks. The categorical dual of a monomorphism is an epimorphism, that is, a monomorphism in a category "C" is an epimorphism in the dual category "C"op. Every section is a monomorphism, and every retraction is an epimorphism. Relation to invertibility. Left-invertible morphisms are necessarily monic: if "l" is a left inverse for "f" (meaning "l" is a morphism and formula_2), then "f" is monic, as formula_3 A left-invertible morphism is called a split mono or a section. However, a monomorphism need not be left-invertible. For example, in the category Group of all groups and group homomorphisms among them, if "H" is a subgroup of "G" then the inclusion "f" : "H" → "G" is always a monomorphism; but "f" has a left inverse in the category if and only if "H" has a normal complement in "G". A morphism "f" : "X" → "Y" is monic if and only if the induced map "f"∗ : Hom("Z", "X") → Hom("Z", "Y"), defined by "f"∗("h") = "f" ∘ "h" for all morphisms "h" : "Z" → "X", is injective for all objects "Z". Examples. 
Every morphism in a concrete category whose underlying function is injective is a monomorphism; in other words, if morphisms are actually functions between sets, then any morphism which is a one-to-one function will necessarily be a monomorphism in the categorical sense. In the category of sets the converse also holds, so the monomorphisms are exactly the injective morphisms. The converse also holds in most naturally occurring categories of algebras because of the existence of a free object on one generator. In particular, it is true in the categories of all groups, of all rings, and in any abelian category. It is not true in general, however, that all monomorphisms must be injective in other categories; that is, there are settings in which the morphisms are functions between sets, but one can have a function that is not injective and yet is a monomorphism in the categorical sense. For example, in the category Div of divisible (abelian) groups and group homomorphisms between them there are monomorphisms that are not injective: consider, for example, the quotient map "q" : Q → Q/Z, where Q is the rationals under addition, Z the integers (also considered a group under addition), and Q/Z is the corresponding quotient group. This is not an injective map, as for example every integer is mapped to 0. Nevertheless, it is a monomorphism in this category. This follows from the implication "q" ∘ "h" = 0 ⇒ "h" = 0, which we will now prove. If "h" : "G" → Q, where "G" is some divisible group, and "q" ∘ "h" = 0, then "h"("x") ∈ Z, ∀ "x" ∈ "G". Now fix some "x" ∈ "G". Without loss of generality, we may assume that "h"("x") ≥ 0 (otherwise, choose −"x" instead). Then, letting "n" = "h"("x") + 1, since "G" is a divisible group, there exists some "y" ∈ "G" such that "x" = "ny", so "h"("x") = "n" "h"("y"). From this, and 0 ≤ "h"("x") &lt; "h"("x") + 1 = "n", it follows that formula_4 Since "h"("y") ∈ Z, it follows that "h"("y") = 0, and thus "h"("x") = 0 = "h"(−"x"), ∀ "x" ∈ "G". 
This says that "h" = 0, as desired. To go from that implication to the fact that "q" is a monomorphism, assume that "q" ∘ "f" = "q" ∘ "g" for some morphisms "f", "g" : "G" → Q, where "G" is some divisible group. Then "q" ∘ ("f" − "g") = 0, where ("f" − "g") : "x" ↦ "f"("x") − "g"("x"). (Since ("f" − "g")(0) = 0, and ("f" − "g")("x" + "y") = ("f" − "g")("x") + ("f" − "g")("y"), it follows that ("f" − "g") ∈ Hom("G", Q)). From the implication just proved, "q" ∘ ("f" − "g") = 0 ⇒ "f" − "g" = 0 ⇔ ∀ "x" ∈ "G", "f"("x") = "g"("x") ⇔ "f" = "g". Hence "q" is a monomorphism, as claimed. Related concepts. There are also useful concepts of "regular monomorphism", "extremal monomorphism", "immediate monomorphism", "strong monomorphism", and "split monomorphism". Terminology. The companion terms "monomorphism" and "epimorphism" were originally introduced by Nicolas Bourbaki; Bourbaki uses "monomorphism" as shorthand for an injective function. Early category theorists believed that the correct generalization of injectivity to the context of categories was the cancellation property given above. While this is not exactly true for monic maps, it is very close, so this has caused little trouble, unlike the case of epimorphisms. Saunders Mac Lane attempted to make a distinction between what he called "monomorphisms", which were maps in a concrete category whose underlying maps of sets were injective, and "monic maps", which are monomorphisms in the categorical sense of the word. This distinction never came into general use. Another name for monomorphism is "extension", although this has other uses too. Notes. &lt;templatestyles src="Reflist/styles.css" /&gt;
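In the category of finite sets, where monomorphisms coincide with the injective maps, the left-cancellation property from the definition can be checked mechanically; a brute-force sketch (illustrative only, not part of the article):

```python
from itertools import product

# Brute-force check, in the category of finite sets, that a map f: X -> Y
# is left-cancellative (f∘g1 = f∘g2 ⇒ g1 = g2 for all g1, g2: Z -> X)
# exactly when it is injective.
def is_left_cancellative(f, X, Z):
    # enumerate all pairs of maps g1, g2 : Z -> X as tuples indexed by Z
    for g1 in product(X, repeat=len(Z)):
        for g2 in product(X, repeat=len(Z)):
            if g1 != g2 and all(f[g1[i]] == f[g2[i]] for i in range(len(Z))):
                return False  # f∘g1 = f∘g2 but g1 ≠ g2: not monic
    return True

X, Y, Z = [0, 1], [0, 1, 2], [0, 1]
injective_f = {0: 0, 1: 2}     # injective X -> Y
collapsing_f = {0: 0, 1: 0}    # not injective

print(is_left_cancellative(injective_f, X, Z))   # True
print(is_left_cancellative(collapsing_f, X, Z))  # False
```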
[ { "math_id": 0, "text": "X\\hookrightarrow Y" }, { "math_id": 1, "text": "f \\circ g_1 = f \\circ g_2 \\implies g_1 = g_2." }, { "math_id": 2, "text": "l \\circ f = \\operatorname{id}_{X}" }, { "math_id": 3, "text": "f \\circ g_1 = f \\circ g_2 \\Rightarrow l\\circ f\\circ g_1 = l\\circ f\\circ g_2 \\Rightarrow g_1 = g_2." }, { "math_id": 4, "text": "0 \\leq \\frac{h(x)}{h(x) + 1} = h(y) < 1 " }, { "math_id": 5, "text": "\\mu" }, { "math_id": 6, "text": "\\mu=\\varphi\\circ\\varepsilon" }, { "math_id": 7, "text": "\\varepsilon" }, { "math_id": 8, "text": "\\mu=\\mu'\\circ\\varepsilon" }, { "math_id": 9, "text": "\\mu'" }, { "math_id": 10, "text": "\\mu:C\\to D" }, { "math_id": 11, "text": "\\varepsilon:A\\to B" }, { "math_id": 12, "text": "\\alpha:A\\to C" }, { "math_id": 13, "text": "\\beta:B\\to D" }, { "math_id": 14, "text": "\\beta\\circ\\varepsilon=\\mu\\circ\\alpha" }, { "math_id": 15, "text": "\\delta:B\\to C" }, { "math_id": 16, "text": "\\delta\\circ\\varepsilon=\\alpha" }, { "math_id": 17, "text": "\\mu\\circ\\delta=\\beta" }, { "math_id": 18, "text": "\\varepsilon\\circ\\mu=1" } ]
https://en.wikipedia.org/wiki?curid=59538
59539
Epimorphism
Surjective homomorphism In category theory, an epimorphism is a morphism "f" : "X" → "Y" that is right-cancellative in the sense that, for all objects "Z" and all morphisms "g"1, "g"2: "Y" → "Z", formula_0 Epimorphisms are categorical analogues of onto or surjective functions (and in the category of sets the concept corresponds exactly to the surjective functions), but they may not exactly coincide in all contexts; for example, the inclusion formula_1 is a ring epimorphism. The dual of an epimorphism is a monomorphism (i.e. an epimorphism in a category "C" is a monomorphism in the dual category "C"op). Many authors in abstract algebra and universal algebra define an epimorphism simply as an "onto" or surjective homomorphism. Every epimorphism in this algebraic sense is an epimorphism in the sense of category theory, but the converse is not true in all categories. In this article, the term "epimorphism" will be used in the sense of category theory given above. For more on this, see below. Examples. Every morphism in a concrete category whose underlying function is surjective is an epimorphism. In many concrete categories of interest the converse is also true. For example, in the following categories, the epimorphisms are exactly those morphisms that are surjective on the underlying sets: However, there are also many concrete categories of interest where epimorphisms fail to be surjective. A few examples are: The above differs from the case of monomorphisms where it is more frequently true that monomorphisms are precisely those whose underlying functions are injective. As for examples of epimorphisms in non-concrete categories: Properties. Every isomorphism is an epimorphism; indeed only a right-sided inverse is needed: if there exists a morphism "j" : "Y" → "X" such that "fj" = id"Y", then "f": "X" → "Y" is easily seen to be an epimorphism. A map with such a right-sided inverse is called a split epi. 
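In the category of finite sets, where epimorphisms are exactly the surjections, right-cancellation can likewise be verified by brute force; a small illustrative script (not part of the article):

```python
from itertools import product

# Brute-force check, in the category of finite sets, that a map f: X -> Y
# is right-cancellative (g1∘f = g2∘f ⇒ g1 = g2 for all g1, g2: Y -> Z)
# exactly when it is surjective onto Y.
def is_right_cancellative(f, X, Y, Z):
    # enumerate all pairs of maps g1, g2 : Y -> Z as tuples indexed by Y
    for g1 in product(Z, repeat=len(Y)):
        for g2 in product(Z, repeat=len(Y)):
            if g1 != g2 and all(g1[f[x]] == g2[f[x]] for x in X):
                return False  # g1∘f = g2∘f but g1 ≠ g2: not epic
    return True

X, Y, Z = [0, 1, 2], [0, 1], [0, 1]
surjective_f = {0: 0, 1: 1, 2: 1}      # onto Y
non_surjective_f = {0: 0, 1: 0, 2: 0}  # misses 1 in Y

print(is_right_cancellative(surjective_f, X, Y, Z))      # True
print(is_right_cancellative(non_surjective_f, X, Y, Z))  # False
```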
In a topos, a map that is both a monic morphism and an epimorphism is an isomorphism. The composition of two epimorphisms is again an epimorphism. If the composition "fg" of two morphisms is an epimorphism, then "f" must be an epimorphism. As some of the above examples show, the property of being an epimorphism is not determined by the morphism alone, but also by the category of context. If "D" is a subcategory of "C", then every morphism in "D" that is an epimorphism when considered as a morphism in "C" is also an epimorphism in "D". However the converse need not hold; the smaller category can (and often will) have more epimorphisms. As for most concepts in category theory, epimorphisms are preserved under equivalences of categories: given an equivalence "F" : "C" → "D", a morphism "f" is an epimorphism in the category "C" if and only if "F"("f") is an epimorphism in "D". A duality between two categories turns epimorphisms into monomorphisms, and vice versa. The definition of epimorphism may be reformulated to state that "f" : "X" → "Y" is an epimorphism if and only if the induced maps formula_2 are injective for every choice of "Z". This in turn is equivalent to the induced natural transformation formula_3 being a monomorphism in the functor category Set"C". Every coequalizer is an epimorphism, a consequence of the uniqueness requirement in the definition of coequalizers. It follows in particular that every cokernel is an epimorphism. The converse, namely that every epimorphism be a coequalizer, is not true in all categories. In many categories it is possible to write every morphism as the composition of an epimorphism followed by a monomorphism. For instance, given a group homomorphism "f" : "G" → "H", we can define the group "K" = im("f") and then write "f" as the composition of the surjective homomorphism "G" → "K" that is defined like "f", followed by the injective homomorphism "K" → "H" that sends each element to itself. 
Such a factorization of an arbitrary morphism into an epimorphism followed by a monomorphism can be carried out in all abelian categories and also in all the concrete categories mentioned above in (though not in all concrete categories). Related concepts. Among other useful concepts are "regular epimorphism", "extremal epimorphism", "immediate epimorphism", "strong epimorphism", and "split epimorphism". There is also the notion of homological epimorphism in ring theory. A morphism "f": "A" → "B" of rings is a homological epimorphism if it is an epimorphism and it induces a full and faithful functor on derived categories: D("f") : D("B") → D("A"). A morphism that is both a monomorphism and an epimorphism is called a bimorphism. Every isomorphism is a bimorphism but the converse is not true in general. For example, the map from the half-open interval [0,1) to the unit circle S1 (thought of as a subspace of the complex plane) that sends "x" to exp(2πi"x") (see Euler's formula) is continuous and bijective but not a homeomorphism since the inverse map is not continuous at 1, so it is an instance of a bimorphism that is not an isomorphism in the category Top. Another example is the embedding Q → R in the category Haus; as noted above, it is a bimorphism, but it is not bijective and therefore not an isomorphism. Similarly, in the category of rings, the map Z → Q is a bimorphism but not an isomorphism. Epimorphisms are used to define abstract quotient objects in general categories: two epimorphisms "f"1 : "X" → "Y"1 and "f"2 : "X" → "Y"2 are said to be "equivalent" if there exists an isomorphism "j" : "Y"1 → "Y"2 with "j" "f"1 = "f"2. This is an equivalence relation, and the equivalence classes are defined to be the quotient objects of "X". Terminology. The companion terms "epimorphism" and "monomorphism" were first introduced by Bourbaki. Bourbaki uses "epimorphism" as shorthand for a surjective function. 
Early category theorists believed that epimorphisms were the correct analogue of surjections in an arbitrary category, similar to how monomorphisms are very nearly an exact analogue of injections. Unfortunately this is incorrect; strong or regular epimorphisms behave much more closely to surjections than ordinary epimorphisms. Saunders Mac Lane attempted to create a distinction between "epimorphisms", which were maps in a concrete category whose underlying set maps were surjective, and "epic morphisms", which are epimorphisms in the modern sense. However, this distinction never caught on. It is a common mistake to believe that epimorphisms are either identical to surjections or that they are a better concept. Unfortunately this is rarely the case; epimorphisms can be very mysterious and have unexpected behavior. It is very difficult, for example, to classify all the epimorphisms of rings. In general, epimorphisms are their own unique concept, related to surjections but fundamentally different. Notes. &lt;templatestyles src="Reflist/styles.css" /&gt; References. &lt;templatestyles src="Refbegin/styles.css" /&gt;
[ { "math_id": 0, "text": "g_1 \\circ f = g_2 \\circ f \\implies g_1 = g_2." }, { "math_id": 1, "text": " \\mathbb{Z}\\to\\mathbb{Q} " }, { "math_id": 2, "text": "\\begin{matrix}\\operatorname{Hom}(Y,Z) &\\rightarrow& \\operatorname{Hom}(X,Z)\\\\\ng &\\mapsto& gf\\end{matrix}" }, { "math_id": 3, "text": "\\begin{matrix}\\operatorname{Hom}(Y,-) &\\rightarrow& \\operatorname{Hom}(X,-)\\end{matrix}" }, { "math_id": 4, "text": "\\varepsilon" }, { "math_id": 5, "text": "\\varepsilon=\\mu\\circ\\varphi" }, { "math_id": 6, "text": "\\mu" }, { "math_id": 7, "text": "\\varepsilon=\\mu\\circ\\varepsilon'" }, { "math_id": 8, "text": "\\varepsilon'" }, { "math_id": 9, "text": "\\varepsilon:A\\to B" }, { "math_id": 10, "text": "\\mu:C\\to D" }, { "math_id": 11, "text": "\\alpha:A\\to C" }, { "math_id": 12, "text": "\\beta:B\\to D" }, { "math_id": 13, "text": "\\beta\\circ\\varepsilon=\\mu\\circ\\alpha" }, { "math_id": 14, "text": "\\delta:B\\to C" }, { "math_id": 15, "text": "\\delta\\circ\\varepsilon=\\alpha" }, { "math_id": 16, "text": "\\mu\\circ\\delta=\\beta" }, { "math_id": 17, "text": "\\varepsilon\\circ\\mu=1" } ]
https://en.wikipedia.org/wiki?curid=59539
5954184
De Haas–Van Alphen effect
Quantum mechanical magnetic effect The De Haas–Van Alphen effect, often abbreviated to DHVA, is a quantum mechanical effect in which the magnetic susceptibility of a pure metal crystal oscillates as the intensity of the magnetic field "B" is increased. It can be used to determine the Fermi surface of a material. Other quantities also oscillate, such as the electrical resistivity (Shubnikov–de Haas effect), specific heat, and sound attenuation and speed. It is named after Wander Johannes de Haas and his student Pieter M. van Alphen. The DHVA effect comes from the orbital motion of itinerant electrons in the material. An equivalent phenomenon at low magnetic fields is known as Landau diamagnetism. Description. The differential magnetic susceptibility of a material is defined as formula_0 where formula_1 is the applied external magnetic field and formula_2 the magnetization of the material. Such that formula_3, where formula_4 is the vacuum permeability. For practical purposes, the applied and the measured field are approximately the same formula_5 (if the material is not ferromagnetic). The oscillations of the differential susceptibility when plotted against formula_6, have a period formula_7 (in teslas−1) that is inversely proportional to the area formula_8 of the extremal orbit of the Fermi surface (m−2), in the direction of the applied field, that is formula_9, where formula_10 is Planck constant and formula_11 is the elementary charge. The existence of more than one extremal orbit leads to multiple periods becoming superimposed. A more precise formula, known as Lifshitz–Kosevich formula, can be obtained using semiclassical approximations. The modern formulation allows the experimental determination of the Fermi surface of a metal from measurements performed with different orientations of the magnetic field around the sample. History. Experimentally it was discovered in 1930 by W.J. de Haas and P.M. 
van Alphen during a careful study of the magnetization of a single crystal of bismuth, which oscillated as a function of the field. The inspiration for the experiment was the recently discovered Shubnikov–de Haas effect by Lev Shubnikov and De Haas, which showed oscillations of the electrical resistivity as a function of a strong magnetic field. De Haas thought that the magnetoresistance should behave in an analogous way. The theoretical prediction of the phenomenon had been formulated before the experiment, in the same year, by Lev Landau, but he discarded it because he thought that the magnetic fields necessary for its demonstration could not yet be created in a laboratory. The effect was described mathematically using Landau quantization of the electron energies in an applied magnetic field. A strong homogeneous magnetic field (typically several teslas) and a low temperature are required to cause a material to exhibit the DHVA effect. Later in life, in private discussion, David Shoenberg asked Landau why he had thought that an experimental demonstration was not possible. He answered that Pyotr Kapitsa, Shoenberg's advisor, had convinced him that such homogeneity of the field was impractical. The DHVA effect gained wider relevance after Lars Onsager (1952), and independently Ilya Lifshitz and Arnold Kosevich (1954), pointed out that the phenomenon could be used to image the Fermi surface of a metal. In 1954, Lifshitz and Aleksei Pogorelov determined the range of applicability of the theory and described how to determine the shape of any arbitrary convex Fermi surface by measuring the extremal sections. Lifshitz and Pogorelov also found a relation between the temperature dependence of the oscillations and the cyclotron mass of an electron. By the 1970s the Fermi surface of most metallic elements had been reconstructed using the De Haas–Van Alphen and Shubnikov–de Haas effects.
Other techniques to study the Fermi surface have appeared since, such as angle-resolved photoemission spectroscopy (ARPES). References. &lt;templatestyles src="Reflist/styles.css" /&gt;
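The Onsager relation quoted in the description (oscillation period inversely proportional to the extremal cross-sectional area) is easy to check numerically. The sketch below is illustrative only: the period value is a hypothetical input chosen to be of the magnitude observed in simple metals, not a measurement from the source.

```python
import math

# Exact SI constants (2019 redefinition)
e = 1.602176634e-19        # elementary charge, C
h = 6.62607015e-34         # Planck constant, J s
hbar = h / (2 * math.pi)   # reduced Planck constant

def extremal_area(period):
    """Invert the Onsager relation P(1/B) = 2*pi*e / (hbar*S)
    to recover the extremal Fermi-surface cross-section S (in m^-2)."""
    return 2 * math.pi * e / (hbar * period)

P = 1.7e-5                 # hypothetical dHvA period in 1/T
S = extremal_area(P)       # extremal orbit area in reciprocal space
F = 1 / P                  # corresponding dHvA frequency in teslas
print(f"S = {S:.3e} m^-2, F = {F:.3e} T")
```

In an actual experiment several extremal orbits contribute at once, so the measured susceptibility is Fourier-analysed as a function of 1/B to separate the superimposed periods.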
[ { "math_id": 0, "text": "\\chi=\\frac{\\partial M}{\\partial H}" }, { "math_id": 1, "text": "H" }, { "math_id": 2, "text": "M" }, { "math_id": 3, "text": "\\mathbf{B}=\\mu_0 (\\mathbf{H}+\\mathbf{M})" }, { "math_id": 4, "text": "\\mu_0" }, { "math_id": 5, "text": "\\mathbf{B}\\approx\\mu_0 \\mathbf{H}" }, { "math_id": 6, "text": "1/B" }, { "math_id": 7, "text": "P" }, { "math_id": 8, "text": "S" }, { "math_id": 9, "text": "P\\left( B^{-1} \\right) = \\frac{2 \\pi e}{\\hbar S}" }, { "math_id": 10, "text": "\\hbar" }, { "math_id": 11, "text": "e" } ]
https://en.wikipedia.org/wiki?curid=5954184
59542010
Elisa Frota Pessoa
Brazilian physicist (1921–2018) Elisa Frota Pessoa, born Elisa Esther Habbema de Maia (17 January 1921 – 28 December 2018), was a Brazilian experimental physicist. She was one of the first women to graduate in physics in Brazil, in 1942, and a founding member of the Centro Brasileiro de Pesquisas Físicas (Brazilian Center of Physical Research). She was distinguished by her studies of radioactivity with nuclear emulsions; reactions and disintegrations of K and π mesons in nuclear emulsions; and reactions of protons and deuterons with nuclei of average masses. Early life. She was born in Rio de Janeiro, daughter of Juvenal Moreira Maia and Elisa Habbema de Maia. She became interested in science in 1935, in the "ginasial" (middle school) at the Escola Paulo de Frontin. Her greatest influence was professor Plinio Süssekind da Rocha, with whom she took physics classes; he followed her progress closely and guided her, giving her subjects outside the program to study. At the end of high school Elisa wanted to study engineering, against her family's wishes, since her father, who was very conservative, considered that the best career for a woman was marriage. This, however, did not prevent her from enrolling in the physics course of the Faculty of Philosophy of the University of Brazil (now the Federal University of Rio de Janeiro), from which she graduated in 1942. Career. Together with Sonja Ashauer, who graduated the same year at USP, she was one of the first two women to graduate in physics in Brazil. She soon excelled in the course, and in her second year she was called by professor Joaquim da Costa Ribeiro to be his assistant. She worked with Costa Ribeiro, without receiving a salary, until 1944, when she was hired by the university. At the age of 18, she married her former teacher, the biologist Oswaldo Frota-Pessoa, with whom she had two children, Sonia and Roberto. In 1951, she separated from her husband and began living with the physicist Jayme Tiomno.
Along with Tiomno and other physicists who graduated around the same time, such as José Leite Lopes, Cesar Lattes, and Mario Schenberg, she promoted science in Brazil, while facing prejudice for being a woman separated from her husband at a time when divorce was illegal in Brazil. In 1949, she was one of the founders of the Brazilian Center for Physical Research (CBPF), where she was Chief of the Nuclear Emulsions Division until 1964. In 1950, she published with Neusa Margem the first research article of the new institution: "Sobre a desintegração do méson pesado positivo". This work was the first to obtain results that experimentally supported the V-A theory of weak interactions. Her later work, published in 1969, put an end to a long controversy over the possibility of the "π" meson having non-zero spin. She also collaborated with European researchers in the study of K mesons. Frota Pessoa moved to Brasília in 1965 to work at the University of Brasília. She then transferred to the University of São Paulo, but was expelled under the AI-5 decree in April 1969. Fleeing persecution by the military dictatorship, she worked in Europe and the US, where she collaborated in the training of Brazilian physicists. In 1975, Elisa began setting up an emulsion laboratory at PUC with the assistance of Ernst Hamburger from IFUSP. Two years later, in 1977, as a member of the Experimental Physics Department at IFUSP, she continued her work at PUC in collaboration with IFUSP. In 1980, she resumed her work at CBPF, setting up a nuclear emulsion laboratory for nuclear spectroscopy. Even after her compulsory retirement in 1991, she remained at the center until 1995 as an emeritus professor. Death. Elisa died on 28 December 2018, in Rio de Janeiro. She left five grandchildren (a biochemist, an art historian, an economist, an engineering student and a high school graduate) and six great-grandchildren. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "\\Sigma" } ]
https://en.wikipedia.org/wiki?curid=59542010
5954264
Mock modular form
Complex-differentiable part of a Maass wave function In mathematics, a mock modular form is the holomorphic part of a harmonic weak Maass form, and a mock theta function is essentially a mock modular form of weight 1/2. The first examples of mock theta functions were described by Srinivasa Ramanujan in his last letter to G. H. Hardy, written in 1920, and in his lost notebook. Sander Zwegers discovered that adding certain non-holomorphic functions to them turns them into harmonic weak Maass forms. History. &lt;templatestyles src="Template:Quote_box/styles.css" /&gt; "Suppose there is a function in the Eulerian form and suppose that all or an infinity of points are exponential singularities, and also suppose that at these points the asymptotic form closes as neatly as in the cases of (A) and (B). The question is: Is the function taken the sum of two functions one of which is an ordinary "θ"-function and the other a (trivial) function which is O(1) at "all" the points "e"2"m"π"i"/"n"? ... When it is not so, I call the function a Mock θ-function." Ramanujan's original definition of a mock theta function Ramanujan's 12 January 1920 letter to Hardy listed 17 examples of functions that he called mock theta functions, and his lost notebook contained several more examples. (Ramanujan used the term "theta function" for what today would be called a modular form.) Ramanujan pointed out that they have an asymptotic expansion at the cusps, similar to that of modular forms of weight 1/2, possibly with poles at cusps, but cannot be expressed in terms of "ordinary" theta functions. He called functions with similar properties "mock theta functions". Zwegers later discovered the connection of mock theta functions with weak Maass forms. Ramanujan associated an order to his mock theta functions, which was not clearly defined. Before the work of Zwegers, the orders of known mock theta functions included 3, 5, 6, 7, 8, and 10.
Ramanujan's notion of order later turned out to correspond to the conductor of the Nebentypus character of the weight 1/2 harmonic Maass forms which admit Ramanujan's mock theta functions as their holomorphic projections. In the next few decades, Ramanujan's mock theta functions were studied by Watson, Andrews, Selberg, Hickerson, Choi, McIntosh, and others, who proved Ramanujan's statements about them and found several more examples and identities. (Most of the "new" identities and examples were already known to Ramanujan and reappeared in his lost notebook.) In 1936, Watson found that under the action of elements of the modular group, the order 3 mock theta functions almost transform like modular forms of weight 1/2 (multiplied by suitable powers of "q"), except that there are "error terms" in the functional equations, usually given as explicit integrals. However, for many years there was no good definition of a mock theta function. This changed in 2001 when Zwegers discovered the relation with non-holomorphic modular forms, Lerch sums, and indefinite theta series. Zwegers showed, using the previous work of Watson and Andrews, that the mock theta functions of orders 3, 5, and 7 can be written as the sum of a weak Maass form of weight 1/2 and a function that is bounded along geodesics ending at cusps. The weak Maass form has eigenvalue 3/16 under the hyperbolic Laplacian (the same value as holomorphic modular forms of weight 1/2); however, it increases exponentially fast near cusps, so it does not satisfy the usual growth condition for Maass wave forms. Zwegers proved this result in three different ways, by relating the mock theta functions to Hecke's theta functions of indefinite lattices of dimension 2, to Appell–Lerch sums, and to meromorphic Jacobi forms. Zwegers's fundamental result shows that mock theta functions are the "holomorphic parts" of real analytic modular forms of weight 1/2. This allows one to extend many results about modular forms to mock theta functions.
In particular, like modular forms, mock theta functions all lie in certain explicit finite-dimensional spaces, which reduces the long and hard proofs of many identities between them to routine linear algebra. For the first time it became possible to produce an infinite number of examples of mock theta functions; before this work there were only about 50 examples known (most of which were first found by Ramanujan). As further applications of Zwegers's ideas, Kathrin Bringmann and Ken Ono showed that certain q-series arising from the Rogers–Fine basic hypergeometric series are related to holomorphic parts of harmonic weak Maass forms, and showed that the asymptotic series for the coefficients of the order 3 mock theta function "f"("q") studied by George Andrews and Leila Dragonette converges to the coefficients. In particular, mock theta functions have asymptotic expansions at cusps of the modular group, acting on the upper half-plane, that resemble those of modular forms of weight 1/2 with poles at the cusps. Definition. A mock modular form will be defined as the "holomorphic part" of a harmonic weak Maass form. Fix a weight "k", usually with 2"k" integral. Fix a subgroup Γ of SL2("Z") (or of the metaplectic group if "k" is half-integral) and a character "ρ" of Γ. A modular form "f" for this character and this group Γ transforms under elements of Γ by formula_0 A weak Maass form of weight "k" is a continuous function on the upper half plane that transforms like a modular form of weight "k" and is an eigenfunction of the weight "k" Laplacian operator, and is called harmonic if its eigenvalue is ("k"/2)(1 − "k"/2). This is the eigenvalue of holomorphic weight "k" modular forms, so these are all examples of harmonic weak Maass forms. (A Maass form is a weak Maass form that decreases rapidly at cusps.)
So a harmonic weak Maass form is annihilated by the differential operator formula_1 If "F" is any harmonic weak Maass form then the function "g" given by formula_2 is holomorphic and transforms like a modular form of weight "k", though it may not be holomorphic at cusps. If we can find any other function "g"* with the same image "g", then "F" − "g"* will be holomorphic. Such a function is given by inverting the differential operator by integration; for example we can define formula_3 where formula_4 is essentially the incomplete gamma function. The integral converges whenever "g" has a zero at the cusp "i"∞, and the incomplete gamma function can be extended by analytic continuation, so this formula can be used to define the holomorphic part "g"* of "F" even in the case when "g" is meromorphic at "i"∞, though this requires some care if "k" is 1 or not integral or if "n" = 0. The inverse of the differential operator is far from unique, as we can add any holomorphic function to "g"* without affecting its image, and as a result the function "g"* need not be invariant under the group Γ. The function "h" = "F" − "g"* is called the holomorphic part of "F". A mock modular form is defined to be the holomorphic part "h" of some harmonic weak Maass form "F". So there is an isomorphism from the space of mock modular forms "h" to a subspace of the harmonic weak Maass forms. The mock modular form "h" is holomorphic but not quite modular, while "h" + "g"* is modular but not quite holomorphic. The space of mock modular forms of weight "k" contains the space of nearly modular forms ("modular forms that may be meromorphic at cusps") of weight "k" as a subspace. The quotient is (antilinearly) isomorphic to the space of holomorphic modular forms of weight 2 − "k". The weight-(2 − "k") modular form "g" corresponding to a mock modular form "h" is called its shadow. It is quite common for different mock theta functions to have the same shadow.
For example, the 10 mock theta functions of order 5 found by Ramanujan fall into two groups of 5, where all the functions in each group have the same shadow (up to multiplication by a constant). Don Zagier defines a mock theta function as a rational power of "q" = e2π"i"𝜏 times a mock modular form of weight 1/2 whose shadow is a theta series of the form formula_5 for a positive rational "κ" and an odd periodic function "ε". (Any such theta series is a modular form of weight 3/2.) The rational power of "q" is a historical accident. Most mock modular forms and weak Maass forms have rapid growth at cusps. It is common to impose the condition that they grow at most exponentially fast at cusps (which for mock modular forms means they are "meromorphic" at cusps). The space of mock modular forms (of given weight and group) whose growth is bounded by some fixed exponential function at cusps is finite-dimensional. Appell–Lerch sums. Appell–Lerch sums, a generalization of Lambert series, were first studied by Paul Émile Appell and Mathias Lerch. Watson studied the order 3 mock theta functions by expressing them in terms of Appell–Lerch sums, and Zwegers used them to show that mock theta functions are essentially mock modular forms. The Appell–Lerch series is formula_6 where formula_7 and formula_8 The modified series formula_9 where formula_10 and "y" = Im(𝜏) and formula_11 satisfies the following transformation properties formula_12 In other words, the modified Appell–Lerch series transforms like a modular form with respect to 𝜏. Since mock theta functions can be expressed in terms of Appell–Lerch series, this means that mock theta functions transform like modular forms if a certain non-analytic series is added to them. Indefinite theta series.
George Andrews showed that several of Ramanujan's fifth order mock theta functions are equal to quotients of the form Θ(𝜏)/"θ"(𝜏), where "θ"(𝜏) is a modular form of weight 1/2 and Θ(𝜏) is a theta function of an indefinite binary quadratic form, and Dean Hickerson proved similar results for seventh order mock theta functions. Zwegers showed how to complete the indefinite theta functions to produce real analytic modular forms, and used this to give another proof of the relation between mock theta functions and weak Maass wave forms. Meromorphic Jacobi forms. George Andrews observed that some of Ramanujan's fifth order mock theta functions could be expressed in terms of quotients of Jacobi's theta functions. Zwegers used this idea to express mock theta functions as Fourier coefficients of meromorphic Jacobi forms. The quasimodular Eisenstein series formula_13 of weight 2 and level 1 is a mock modular form of weight 2, with shadow a constant. This means that formula_14 transforms like a modular form of weight 2 (where 𝜏 = "x" + "iy"). Another example is Zagier's Eisenstein series formula_15 where formula_16 and "y" = Im(𝜏), "q" = e2π"i"𝜏; its holomorphic part, the generating function of the Hurwitz class numbers "H"("N"), is a mock modular form of weight 3/2. Examples. Mock theta functions are mock modular forms of weight 1/2 whose shadow is a unary theta function, multiplied by a rational power of "q" (for historical reasons). Before the work of Zwegers led to a general method for constructing them, most examples were given as basic hypergeometric functions, but this is largely a historical accident, and most mock theta functions have no known simple expression in terms of such functions. The "trivial" mock theta functions are the (holomorphic) modular forms of weight 1/2, which were classified by Serre and Stark, who showed that they could all be written in terms of theta functions of 1-dimensional lattices. The following examples use the q-Pochhammer symbols ("a";"q")"n" which are defined as: formula_17 Order 2. Some order 2 mock theta functions were studied by McIntosh.
formula_18 (sequence in the OEIS) formula_19 (sequence in the OEIS) formula_20 (sequence in the OEIS) The function "μ" was found by Ramanujan in his lost notebook. These are related to the functions listed in the section on order-8 functions by formula_21 formula_22 formula_23 Order 3. Ramanujan mentioned four order-3 mock theta functions in his letter to Hardy, and listed a further three in his lost notebook, which were rediscovered by G. N. Watson. Watson proved the relations between them stated by Ramanujan and also found their transformations under elements of the modular group by expressing them as Appell–Lerch sums. Dragonette described the asymptotic expansion of their coefficients. Zwegers related them to harmonic weak Maass forms. See also the monograph by Nathan Fine. The seven order-3 mock theta functions given by Ramanujan are formula_24, (sequence in the OEIS). formula_25 (sequence in the OEIS). formula_26 (sequence in the OEIS). formula_27 (sequence in the OEIS). formula_28 (sequence in the OEIS). formula_29 (sequence in the OEIS). formula_30 (sequence in the OEIS). The first four of these form a group with the same shadow (up to a constant), and so do the last three. More precisely, the functions satisfy the following relations (found by Ramanujan and proved by Watson): formula_31 Order 5. Ramanujan wrote down ten mock theta functions of order 5 in his 1920 letter to Hardy, and stated some relations between them that were proved by Watson. In his lost notebook he stated some further identities relating these functions, equivalent to the mock theta conjectures, that were proved by Hickerson. Andrews found representations of many of these functions as the quotient of an indefinite theta series by modular forms of weight 1/2.
formula_32 (sequence in the OEIS) formula_33 (sequence in the OEIS) formula_34 (sequence in the OEIS) formula_35 (sequence in the OEIS) formula_36 (sequence in the OEIS) formula_37 (sequence in the OEIS) formula_38 (sequence in the OEIS) formula_39 (sequence in the OEIS) formula_40 (sequence in the OEIS) formula_41 (sequence in the OEIS) formula_42 (sequence in the OEIS) formula_43 (sequence in the OEIS) Order 6. Ramanujan wrote down seven mock theta functions of order 6 in his lost notebook, and stated 11 identities between them, which were proved by Andrews and Hickerson. Two of Ramanujan's identities relate "φ" and "ψ" at various arguments, four of them express "φ" and "ψ" in terms of Appell–Lerch series, and the last five identities express the remaining five sixth-order mock theta functions in terms of "φ" and "ψ". Berndt and Chan discovered two more sixth-order functions. The order 6 mock theta functions are: formula_44 (sequence in the OEIS) formula_45 (sequence in the OEIS) formula_46 (sequence in the OEIS) formula_47 (sequence in the OEIS) formula_48 (sequence in the OEIS) formula_49 (sequence in the OEIS) formula_50 (sequence in the OEIS) formula_51 (sequence in the OEIS) formula_52 (sequence in the OEIS) Order 7. Ramanujan gave three mock theta functions of order 7 in his 1920 letter to Hardy: formula_53 formula_54 formula_55 They were studied by Selberg, who found asymptotic expansions for their coefficients, and by Andrews. Hickerson found representations of many of these functions as the quotients of indefinite theta series by modular forms of weight 1/2. Zwegers described their modular transformation properties. These three mock theta functions have different shadows, so unlike the case of Ramanujan's order-3 and order-5 functions, there are no linear relations between them and ordinary modular forms. The corresponding weak Maass forms are formula_56 where formula_57 and formula_58 is more or less the complementary error function.
Under the metaplectic group, these three functions transform according to a certain 3-dimensional representation as follows formula_59 In other words, they are the components of a level 1 vector-valued harmonic weak Maass form of weight 1/2. Order 8. Gordon and McIntosh found eight mock theta functions of order 8. They found five linear relations involving them, expressed four of the functions as Appell–Lerch sums, and described their transformations under the modular group. The two functions "V"1 and "U"0 were found earlier by Ramanujan in his lost notebook. formula_60 (sequence in the OEIS) formula_61 (sequence in the OEIS) formula_62 (sequence in the OEIS) formula_63 (sequence in the OEIS) formula_64 (sequence in the OEIS) formula_65 (sequence in the OEIS) formula_66 (sequence in the OEIS) formula_67 (sequence in the OEIS) Order 10. Ramanujan listed four order-10 mock theta functions in his lost notebook: formula_68 formula_69 formula_70 formula_71 He stated some relations between them, which were proved by Choi. Notes. &lt;templatestyles src="Reflist/styles.css" /&gt; References. &lt;templatestyles src="Refbegin/styles.css" /&gt; Further reading. &lt;templatestyles src="Refbegin/styles.css" /&gt;
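The q-series definitions above are easy to experiment with numerically. The following sketch (an illustration, not part of the article) does truncated power-series arithmetic in pure Python to expand Ramanujan's third-order mock theta function f(q) = Σ q^(n²)/((−q;q)_n)², recovering its first few coefficients.

```python
N = 12  # truncate all power series at q^N

def mul(a, b):
    """Multiply two truncated power series given as coefficient lists."""
    c = [0] * N
    for i, ai in enumerate(a):
        if ai:
            for j, bj in enumerate(b):
                if i + j < N:
                    c[i + j] += ai * bj
    return c

def inv(a):
    """Invert a power series with constant term 1 (recursive formula)."""
    c = [0] * N
    c[0] = 1
    for k in range(1, N):
        c[k] = -sum(a[j] * c[k - j] for j in range(1, k + 1))
    return c

def poch(n):
    """q-Pochhammer symbol (-q; q)_n = prod_{j=1}^{n} (1 + q^j)."""
    p = [0] * N
    p[0] = 1
    for j in range(1, n + 1):
        term = [0] * N
        term[0] = 1
        if j < N:
            term[j] = 1
        p = mul(p, term)
    return p

# f(q) = sum_{n >= 0} q^(n^2) / ((-q; q)_n)^2   (Ramanujan, order 3)
f = [0] * N
n = 0
while n * n < N:
    numer = [0] * N
    numer[n * n] = 1
    denom = mul(poch(n), poch(n))
    f = [x + y for x, y in zip(f, mul(numer, inv(denom)))]
    n += 1

print(f[:7])  # [1, 1, -2, 3, -3, 3, -5]
```

The same scaffolding works for any of the Eulerian series listed in the article, by changing the numerator and denominator in the loop.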
[ { "math_id": 0, "text": "f\\left(\\frac{a\\tau + b}{c\\tau + d}\\right) = \\rho{\n \\begin{pmatrix}\n a & b \\\\\n c & d\n \\end{pmatrix}\n}(c\\tau + d)^kf(\\tau)" }, { "math_id": 1, "text": "\\frac{\\partial}{\\partial \\tau}y^k\\frac{\\partial}{\\partial \\overline\\tau}" }, { "math_id": 2, "text": "g = y^k\\frac{\\partial \\overline{F}}{\\partial \\tau} = \\sum_n b_nq^n" }, { "math_id": 3, "text": "g^*(\\tau) = \\left(\\frac{i}{2}\\right)^{k - 1} \\int_{-\\overline\\tau}^{i\\infty} (z+\\tau)^{-k}\\overline{g(-\\overline z)}\\,dz= \\sum_nn^{k - 1}\\overline {b_n}\\beta_k(4ny)q^{-n + 1}" }, { "math_id": 4, "text": "\\displaystyle \\beta_k(t) = \\int_t^\\infty u^{-k} e^{-\\pi u} \\,du" }, { "math_id": 5, "text": "\\sum_{n\\in Z}\\varepsilon(n)nq^{\\kappa n^2}" }, { "math_id": 6, "text": "\\mu(u, v; \\tau) = \\frac{a^\\frac{1}{2}}{\\theta(v; \\tau)}\\sum_{n\\in Z}\\frac{(-b)^nq^{\\frac{1}{2}n(n + 1)}}{1 - aq^n}" }, { "math_id": 7, "text": "\\displaystyle q = e^{2\\pi i \\tau},\\quad a = e^{2\\pi i u},\\quad b = e^{2\\pi i v}" }, { "math_id": 8, "text": "\\theta(v,\\tau) = \\sum_{n\\in Z}(-1)^n b^{n + \\frac{1}{2}}q^{\\frac{1}{2}\\left(n + \\frac{1}{2}\\right)^2}." 
}, { "math_id": 9, "text": "\\hat\\mu(u, v; \\tau) = \\mu(u, v; \\tau) - \\frac{1}{2}R(u - v; \\tau)" }, { "math_id": 10, "text": "R(z;\\tau) = \\sum_{\\nu\\in Z + \\frac{1}{2}}(-1)^{\\nu - \\frac{1}{2}}\\left({\\rm sign}(\\nu) - E\\left[\\left(\\nu + \\frac{\\Im(z)}{y}\\right)\\sqrt{2y}\\right]\\right)e^{-2\\pi i \\nu z}q^{-\\frac{1}{2}\\nu^2}" }, { "math_id": 11, "text": "E(z) = 2\\int_0^ze^{-\\pi u^2}\\,du" }, { "math_id": 12, "text": "\\begin{align}\n \\hat\\mu(u + 1, v; \\tau) &= a^{-1}bq^{-\\frac{1}{2}}\\hat\\mu(u + \\tau, v; \\tau) \\\\\n & {} = -\\hat\\mu(u, v; \\tau) \\\\\n e^{\\frac{2}{8}\\pi i}\\hat\\mu(u, v; \\tau + 1) &= \\hat\\mu(u,v;\\tau) \\\\\n & {} = -\\left(\\frac{\\tau}{i}\\right)^{-\\frac{1}{2}}e^{\\frac{\\pi i}{\\tau} (u - v)^2}\\hat\\mu\\left(\\frac{u}{\\tau},\\frac{v}{\\tau};-\\frac{1}{\\tau}\\right).\n\\end{align}" }, { "math_id": 13, "text": "\\displaystyle E_2(\\tau) = 1-24\\sum_{n>0}\\sigma_1(n)q^n" }, { "math_id": 14, "text": "\\displaystyle E_2(\\tau) -\\frac{3}{\\pi y}" }, { "math_id": 15, "text": "F(\\tau) = \\sum_NH(N)q^n + y^{-1/2}\\sum_{n\\in Z}\\beta(4\\pi n^2y)q^{-n^2}" }, { "math_id": 16, "text": "\\beta(x) = \\frac{1}{16\\pi}\\int_1^\\infty u^{-3/2}e^{-xu}du" }, { "math_id": 17, "text": "(a;q)_n = \\prod_{0\\le j<n}(1-aq^j) = (1-a)(1-aq)\\cdots(1-aq^{n-1})." 
}, { "math_id": 18, "text": "A(q) = \\sum_{n\\ge 0} \\frac{q^{(n+1)^2}(-q;q^2)_n}{(q;q^2)^2_{n+1}} = \\sum_{n\\ge 0} \\frac{q^{n+1}(-q^2;q^2)_n}{(q;q^2)_{n+1}}" }, { "math_id": 19, "text": "B(q) = \\sum_{n\\ge 0} \\frac{q^{n(n+1)}(-q^2;q^2)_n}{(q;q^2)^2_{n+1}} = \\sum_{n\\ge 0} \\frac{q^{n}(-q;q^2)_n}{(q;q^2)_{n+1}}" }, { "math_id": 20, "text": "\\mu(q) = \\sum_{n\\ge 0} \\frac{(-1)^nq^{n^2}(q;q^2)_n}{(-q^2;q^2)^2_{n}} " }, { "math_id": 21, "text": " U_0(q) - 2U_1(q) = \\mu(q)" }, { "math_id": 22, "text": " V_0(q) - V_0(-q) = 4qB(q^2)" }, { "math_id": 23, "text": " V_1(q) + V_1(-q) = 2A(q^2)" }, { "math_id": 24, "text": "\nf(q) = \\sum_{n\\ge 0} \\frac{q^{n^2}}{(-q; q)_n^2} = \\frac{2}{\\prod_{n>0}(1-q^n)}\\sum_{n\\in \\mathbf{Z}}\\frac{(-1)^nq^{n(3n+1)/2}}{1+q^n}\n" }, { "math_id": 25, "text": "\n\\phi(q) = \\sum_{n\\ge 0} \\frac{q^{n^2}}{(-q^2;q^2)_n} = \\frac{1}{\\prod_{n>0}(1-q^n)}\\sum_{n\\in \\mathbf{Z}}\\frac{(-1)^n(1+q^n)q^{n(3n+1)/2}}{1+q^{2n}}\n" }, { "math_id": 26, "text": "\n\\psi(q) = \\sum_{n > 0} \\frac{q^{n^2}}{(q;q^2)_n} = \\frac{q}{\\prod_{n>0}(1-q^{4n})}\\sum_{n\\in \\mathbf{Z}}\\frac{(-1)^nq^{6n(n+1)}}{1-q^{4n+1}}\n" }, { "math_id": 27, "text": "\n\\chi(q) = \\sum_{n\\ge 0} \\frac{q^{n^2}}{\\prod_{1\\le i\\le n}(1-q^i+q^{2i})} = \\frac{1}{2 \\prod_{n>0}(1-q^n)}\\sum_{n\\in \\mathbf{Z}}\\frac{(-1)^n(1+q^n)q^{n(3n+1)/2}}{1-q^n+q^{2n}}\n" }, { "math_id": 28, "text": "\n\\omega(q) = \\sum_{n\\ge 0} \\frac{q^{2n(n+1)}}{(q;q^2)^2_{n+1}} = \\frac{1}{\\prod_{n>0}(1-q^{2n})}\\sum_{n\\ge 0}{(-1)^nq^{3n(n+1)} \\frac{1+q^{2n+1}}{1-q^{2n+1}}}\n" }, { "math_id": 29, "text": "\n\\nu(q) = \\sum_{n\\ge 0} \\frac{q^{n(n+1)}}{(-q;q^2)_{n+1}} = \\frac{1}{\\prod_{n>0}(1-q^n)}\\sum_{n\\ge 0}{(-1)^nq^{3n(n+1)/2}\\frac{1-q^{2n+1}}{1+q^{2n+1}}}\n" }, { "math_id": 30, "text": "\n\\rho(q) = \\sum_{n\\ge 0} \\frac{q^{2n(n+1)}}{\\prod_{0\\le i\\le n}(1+q^{2i+1}+q^{4i+2})} = \\frac{1}{\\prod_{n>0}(1-q^{2n})}\\sum_{n\\ge 0}{(-1)^nq^{3n(n+1)} 
\\frac{1-q^{4n+2}}{1+q^{2n+1}+q^{4n+2}}}\n" }, { "math_id": 31, "text": "\\begin{align}\n 2\\phi(-q) - f(q) &= f(q) + 4\\psi(-q) = \\theta_4(0,q)\\prod_{r > 0}\\left(1 + q^r\\right)^{-1} \\\\\n 4\\chi(q) - f(q) &= 3\\theta_4^2(0, q^3)\\prod_{r > 0}\\left(1 - q^r\\right)^{-1} \\\\\n 2\\rho(q) + \\omega(q) &= 3\\left(\\frac{1}{2}q^{-\\frac{3}{8}}\\theta_2(0, q^\\frac{3}{2})\\right)^2\\prod_{r > 0}\\left(1 - q^{2r}\\right)^{-1} \\\\\n \\nu(\\pm q) \\pm q\\omega\\left(q^2\\right) &= \\frac{1}{2}q^{-\\frac{1}{4}}\\theta_2(0, q)\\prod_{r > 0}\\left(1 + q^{2r}\\right) \\\\\n f\\left(q^8\\right) \\pm 2q\\omega(\\pm q) \\pm 2q^3\\omega\\left(-q^4\\right) &= \\theta_3(0, \\pm q)\\theta_3^2\\left(0, q^2\\right)\\prod_{r > 0}\\left(1 - q^{4r}\\right)^{-2} \\\\\n f(q^8) + q\\omega(q) - q\\omega(-q) &= \\theta_3(0, q^4)\n \\theta_3^2(0, q^2)\\prod_{r > 0}\\left(1 - q^{4r}\\right)^{-2}\n\\end{align}" }, { "math_id": 32, "text": "f_0(q) = \\sum_{n\\ge 0} \\frac{q^{n^2}}{(-q;q)_{n}}" }, { "math_id": 33, "text": "f_1(q) = \\sum_{n\\ge 0} \\frac{q^{n^2+n}}{(-q;q)_{n}}" }, { "math_id": 34, "text": "\\phi_0(q) = \\sum_{n\\ge 0} {q^{n^2}(-q;q^2)_{n}}" }, { "math_id": 35, "text": "\\phi_1(q) = \\sum_{n\\ge 0} {q^{(n+1)^2}(-q;q^2)_{n}}" }, { "math_id": 36, "text": "\\psi_0(q) = \\sum_{n\\ge 0} {q^{(n+1)(n+2)/2}(-q;q)_{n}}" }, { "math_id": 37, "text": "\\psi_1(q) = \\sum_{n\\ge 0} {q^{n(n+1)/2}(-q;q)_{n}}" }, { "math_id": 38, "text": "\\chi_0(q) = \\sum_{n\\ge 0} \\frac{q^{n}}{(q^{n+1};q)_{n}} = 2F_0(q)-\\phi_0(-q)" }, { "math_id": 39, "text": "\\chi_1(q) = \\sum_{n\\ge 0} \\frac{q^{n}}{(q^{n+1};q)_{n+1}} = 2F_1(q)+q^{-1}\\phi_1(-q)" }, { "math_id": 40, "text": "F_0(q) = \\sum_{n\\ge 0} \\frac{q^{2n^2}}{(q;q^2)_{n}}" }, { "math_id": 41, "text": "F_1(q) = \\sum_{n\\ge 0} \\frac{q^{2n^2+2n}}{(q;q^2)_{n+1}}" }, { "math_id": 42, "text": "\\Psi_0(q) = -1 + \\sum_{n \\ge 0} \\frac{ q^{5n^2}}{(1-q)(1-q^4)(1-q^6)(1-q^9)...(1-q^{5n-1})(1-q^{5n+1})}" }, { "math_id": 43, "text": "\\Psi_1(q) = -1 + 
\\sum_{n \\ge 0} \\frac{ q^{5n^2}}{(1-q^2)(1-q^3)(1-q^7)(1-q^8)...(1-q^{5n-2})(1-q^{5n+2}) }" }, { "math_id": 44, "text": "\\phi(q) = \\sum_{n\\ge 0} \\frac{(-1)^nq^{n^2}(q;q^2)_n}{(-q;q)_{2n}}" }, { "math_id": 45, "text": "\\psi(q) = \\sum_{n\\ge 0} \\frac{(-1)^nq^{(n+1)^2}(q;q^2)_n}{(-q;q)_{2n+1}}" }, { "math_id": 46, "text": "\\rho(q) = \\sum_{n\\ge 0} \\frac{q^{n(n+1)/2}(-q;q)_n}{(q;q^2)_{n+1}}" }, { "math_id": 47, "text": "\\sigma(q) = \\sum_{n\\ge 0} \\frac{q^{(n+1)(n+2)/2}(-q;q)_n}{(q;q^2)_{n+1}}" }, { "math_id": 48, "text": "\\lambda(q) = \\sum_{n\\ge 0} \\frac{(-1)^nq^{n}(q;q^2)_n}{(-q;q)_{n}}" }, { "math_id": 49, "text": "2\\mu(q) = \\sum_{n\\ge 0} \\frac{(-1)^nq^{n+1}(1+q^n)(q;q^2)_n}{(-q;q)_{n+1}}" }, { "math_id": 50, "text": "\\gamma(q) = \\sum_{n\\ge 0} \\frac{q^{n^2}(q;q)_n}{(q^3;q^3)_{n}}" }, { "math_id": 51, "text": "\\phi_{-}(q) = \\sum_{n\\ge 1} \\frac{q^{n}(-q;q)_{2n-1}}{(q;q^2)_{n}}" }, { "math_id": 52, "text": "\\psi_{-}(q) = \\sum_{n\\ge 1} \\frac{q^{n}(-q;q)_{2n-2}}{(q;q^2)_{n}}" }, { "math_id": 53, "text": "\\displaystyle F_0(q) = \\sum_{n\\ge 0}\\frac{q^{n^2}}{(q^{n+1};q)_n}" }, { "math_id": 54, "text": "\\displaystyle F_1(q) = \\sum_{n\\ge 0}\\frac{q^{n^2}}{(q^{n};q)_n}" }, { "math_id": 55, "text": "\\displaystyle F_2(q) = \\sum_{n\\ge 0}\\frac{q^{n(n+1)}}{(q^{n+1};q)_{n+1}}" }, { "math_id": 56, "text": "\n\\begin{align}\nM_1(\\tau) & = q^{-1/168}F_1(q) + R_{7,1}(\\tau) \\\\[4pt]\nM_2(\\tau) & = -q^{-25/168}F_2(q) + R_{7,2}(\\tau) \\\\[4pt]\nM_3(\\tau) & = q^{47/168}F_3(q) + R_{7,3}(\\tau)\n\\end{align}\n" }, { "math_id": 57, "text": "R_{p,j}(\\tau) = \\!\\!\\!\\! 
\\sum_{n\\equiv j\\bmod p}\\binom{12}{n}\\sgn(n)\\beta\\left(\\frac{n^2y}{6p}\\right)q^{-n^2/24p}" }, { "math_id": 58, "text": "\\beta(x) = \\int_x^\\infty u^{-1/2}e^{-\\pi u} \\, du" }, { "math_id": 59, "text": "\n\\begin{align}\nM_j\\left(-\\frac{1}{\\tau}\\right) & = \\sqrt\\frac{\\tau}{7i}\\,\\sum_{k=1}^32\\sin\\left(\\frac{6\\pi jk}{7}\\right)M_k(\\tau) \\\\[6pt]\nM_1(\\tau+1) & = e^{-2\\pi i/168} M_1(\\tau), \\\\[6pt]\nM_2(\\tau+1) & = e^{-2\\times 25\\pi i/168} M_2(\\tau), \\\\[6pt]\nM_3(\\tau+1) & = e^{-2\\times 121\\pi i/168} M_3(\\tau).\n\\end{align}\n" }, { "math_id": 60, "text": "S_0(q) = \\sum_{n\\ge 0} \\frac{q^{n^2} (-q;q^2)_n }{ (-q^2;q^2)_n}" }, { "math_id": 61, "text": "S_1(q) = \\sum_{n\\ge 0} \\frac{q^{n(n+2)} (-q;q^2)_n }{ (-q^2;q^2)_n}" }, { "math_id": 62, "text": "T_0(q) = \\sum_{n\\ge 0} \\frac{q^{(n+1)(n+2)} (-q^2;q^2)_n }{ (-q;q^2)_{n+1}}" }, { "math_id": 63, "text": "T_1(q) = \\sum_{n\\ge 0} \\frac{q^{n(n+1)} (-q^2;q^2)_n }{ (-q;q^2)_{n+1}}" }, { "math_id": 64, "text": "U_0(q) = \\sum_{n\\ge 0} \\frac{q^{n^2} (-q;q^2)_n }{ (-q^4;q^4)_n}" }, { "math_id": 65, "text": "U_1(q) = \\sum_{n\\ge 0} \\frac{q^{(n+1)^2} (-q;q^2)_n }{ (-q^2;q^4)_{n+1}}" }, { "math_id": 66, "text": "V_0(q) = -1+2\\sum_{n\\ge 0} \\frac{q^{n^2} (-q;q^2)_n }{ (q;q^2)_n} = -1+2\\sum_{n\\ge 0} \\frac{q^{2n^2} (-q^2;q^4)_n}{(q;q^2)_{2n+1}}" }, { "math_id": 67, "text": "V_1(q) = \\sum_{n\\ge 0} \\frac{q^{(n+1)^2} (-q;q^2)_n }{ (q;q^2)_{n+1}} = \\sum_{n\\ge 0} \\frac{q^{2n^2+2n+1} (-q^4;q^4)_n}{(q;q^2)_{2n+2}}" }, { "math_id": 68, "text": " \\phi(q)=\\sum_{n\\ge 0}\\frac{q^{n(n+1)/2}}{(q;q^2)_{n+1}}" }, { "math_id": 69, "text": " \\psi(q)=\\sum_{n\\ge 0}\\frac{q^{(n+1)(n+2)/2}}{(q;q^2)_{n+1}}" }, { "math_id": 70, "text": " \\Chi(q)=\\sum_{n\\ge 0}\\frac{(-1)^nq^{n^2}}{(-q;q)_{2n}}" }, { "math_id": 71, "text": " \\chi(q)=\\sum_{n\\ge 0}\\frac{(-1)^nq^{(n+1)^2}}{(-q;q)_{2n+1}}" } ]
https://en.wikipedia.org/wiki?curid=5954264
59543266
Alexandroff plank
Topological space mathematics Alexandroff plank in topology, an area of mathematics, is a topological space that serves as an instructive example. Definition. The construction of the Alexandroff plank starts by defining the topological space formula_0 to be the Cartesian product of formula_1 and formula_2 where formula_3 is the first uncountable ordinal, and both factors carry the interval topology. The topology formula_4 is extended to a topology formula_5 by adding the sets of the form formula_6 where formula_7 The Alexandroff plank is the topological space formula_8 It is called a plank because it is constructed from a subspace of the product of two spaces. Properties. The space formula_9 has the following properties: References. &lt;templatestyles src="Reflist/styles.css" /&gt;
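Assembling the placeholders, the construction can be written out explicitly; this restates the definition above in plain notation:

```latex
% Underlying space and distinguished corner point:
X = [0,\omega_1] \times [-1,1], \qquad p = (\omega_1, 0) \in X.
% Basic open sets added to the product (interval) topology \tau to form \sigma:
U(\alpha,n) = \{p\} \cup \bigl( (\alpha,\omega_1] \times (0,\tfrac{1}{n}) \bigr),
\qquad \alpha < \omega_1,\ n \in \mathbb{N}.
```

The Alexandroff plank is then the space formula_8; the sets U(α,n) serve as a neighborhood base at the corner point p in the extended topology.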
[ { "math_id": 0, "text": "(X,\\tau)" }, { "math_id": 1, "text": "[0,\\omega_1]" }, { "math_id": 2, "text": "[-1,1]," }, { "math_id": 3, "text": "\\omega_1" }, { "math_id": 4, "text": "\\tau" }, { "math_id": 5, "text": "\\sigma" }, { "math_id": 6, "text": "U(\\alpha,n) = \\{p\\} \\cup (\\alpha,\\omega_1] \\times (0,1/n)" }, { "math_id": 7, "text": "p = (\\omega_1,0) \\in X." }, { "math_id": 8, "text": "(X,\\sigma)." }, { "math_id": 9, "text": "(X,\\sigma)" }, { "math_id": 10, "text": "C = \\{(\\alpha,0) : \\alpha < \\omega_1\\}" }, { "math_id": 11, "text": "(\\omega_1,0)," }, { "math_id": 12, "text": "C" }, { "math_id": 13, "text": "(\\omega_1,0)." }, { "math_id": 14, "text": "U(\\alpha,n)" }, { "math_id": 15, "text": "\\{(\\omega_1,-1/n) : n=2,3,\\dots\\}" }, { "math_id": 16, "text": "\\{V_\\alpha\\}" }, { "math_id": 17, "text": "[0,\\omega_1)" }, { "math_id": 18, "text": "\\{U_\\alpha\\}" }, { "math_id": 19, "text": "X" }, { "math_id": 20, "text": "U_1 = \\{(0,\\omega_1)\\} \\cup ([0,\\omega_1] \\times (0,1])," }, { "math_id": 21, "text": "U_2 = [0,\\omega_1] \\times [-1,0)," }, { "math_id": 22, "text": "U_\\alpha = V_\\alpha \\times [-1,1]" } ]
https://en.wikipedia.org/wiki?curid=59543266
59543651
Arens square
Topological space mathematics In mathematics, the Arens square is a topological space, named for Richard Friederich Arens. Its role is mainly to serve as a counterexample. Definition. The Arens square is the topological space formula_0 where formula_1 The topology formula_2 is defined from the following basis. Every point of formula_3 is given the local basis of relatively open sets inherited from the Euclidean topology on formula_4. The remaining points of formula_5 are given the local bases formula_6, formula_7, and formula_8. Properties. The space formula_9 is:
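Written out from the placeholders, the underlying set and the three families of local basis sets are:

```latex
% The underlying set: rational interior points, the two corner points,
% and a mid-column of points at irrational heights:
X = \bigl( (0,1)^2 \cap \mathbb{Q}^2 \bigr) \cup \{(0,0)\} \cup \{(1,0)\}
    \cup \{ (1/2,\, r\sqrt{2}) \mid r \in \mathbb{Q},\ 0 < r\sqrt{2} < 1 \}.
% Local basis sets for the non-Euclidean points, indexed by n:
U_n(0,0) = \{(0,0)\} \cup \{ (x,y) \mid 0 < x < \tfrac14,\ 0 < y < \tfrac1n \}
U_n(1,0) = \{(1,0)\} \cup \{ (x,y) \mid \tfrac34 < x < 1,\ 0 < y < \tfrac1n \}
U_n(1/2, r\sqrt{2}) = \{ (x,y) \mid \tfrac14 < x < \tfrac34,\ |y - r\sqrt{2}| < \tfrac1n \}
```

The mid-column points sit at irrational heights r√2, so they never coincide with the rational grid points of the square.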
[ { "math_id": 0, "text": "(X,\\tau)," }, { "math_id": 1, "text": "X=((0,1)^2\\cap\\mathbb{Q}^2)\\cup\\{(0,0)\\}\\cup\\{(1,0)\\}\\cup\\{(1/2,r\\sqrt{2})|\\ r\\in\\mathbb{Q},\\ 0<r\\sqrt{2}<1\\}" }, { "math_id": 2, "text": "\\tau" }, { "math_id": 3, "text": "(0,1)^2\\cap\\mathbb{Q}^2" }, { "math_id": 4, "text": "(0,1)^2" }, { "math_id": 5, "text": "X" }, { "math_id": 6, "text": "U_n(0,0)=\\{(0,0)\\}\\cup\\{(x,y)|\\ 0<x<1/4,\\ 0<y<1/n\\}" }, { "math_id": 7, "text": "U_n(1,0)=\\{(1,0)\\}\\cup\\{(x,y)|\\ 3/4<x<1,\\ 0<y<1/n\\}" }, { "math_id": 8, "text": "U_n(1/2,r\\sqrt{2})=\\{(x,y)|1/4<x<3/4,\\ |y-r\\sqrt{2}|<1/n\\}" }, { "math_id": 9, "text": "(X,\\tau)" }, { "math_id": 10, "text": "(0,0)" }, { "math_id": 11, "text": "(0,1)" }, { "math_id": 12, "text": "(1/2,r\\sqrt{2})" }, { "math_id": 13, "text": "r\\in\\mathbb{Q}" }, { "math_id": 14, "text": "(0,0)\\in U_n(0,0)" }, { "math_id": 15, "text": "U" }, { "math_id": 16, "text": "(0,0)\\in U\\subset \\overline{U}\\subset U_n(0,0)" }, { "math_id": 17, "text": "\\overline{U}" }, { "math_id": 18, "text": "1/4" }, { "math_id": 19, "text": "U_n(0,0)" }, { "math_id": 20, "text": "n\\in\\mathbb{N}" }, { "math_id": 21, "text": "f:X\\to [0,1]" }, { "math_id": 22, "text": "f(0,0)=0" }, { "math_id": 23, "text": "f(1,0)=1" }, { "math_id": 24, "text": "[0,1/4)" }, { "math_id": 25, "text": "(3/4,1]" }, { "math_id": 26, "text": "[0,1]" }, { "math_id": 27, "text": "U_m(1,0)" }, { "math_id": 28, "text": "m,n\\in\\mathbb{N}" }, { "math_id": 29, "text": "r\\sqrt{2}<\\min\\{1/n,1/m\\}" }, { "math_id": 30, "text": "f(1/2,r\\sqrt{2})" }, { "math_id": 31, "text": "[0,1/4)\\cap(3/4,1]=\\emptyset" }, { "math_id": 32, "text": "f(1/2,r\\sqrt{2})\\notin[0,1/4)" }, { "math_id": 33, "text": "U\\ni f(1/2,r\\sqrt{2})" }, { "math_id": 34, "text": "\\overline{U}\\cap[0,1/4)=\\emptyset" }, { "math_id": 35, "text": "\\overline{[0,1/4)}" }, { "math_id": 36, "text": "f" }, { "math_id": 37, "text": "U_k(1/2,r\\sqrt{2})" }, { "math_id": 38, "text": 
"k\\in\\mathbb{N}" }, { "math_id": 39, "text": "f(1/2,r\\sqrt{2})\\notin(3/4,1]" }, { "math_id": 40, "text": "\\{(0,0),(1,0)\\}" }, { "math_id": 41, "text": "A" }, { "math_id": 42, "text": "x\\in[0,1]" }, { "math_id": 43, "text": "(x,1/4)" } ]
https://en.wikipedia.org/wiki?curid=59543651
5954800
Rabin signature algorithm
Digital signature scheme In cryptography, the Rabin signature algorithm is a method of digital signature originally proposed by Michael O. Rabin in 1978. The Rabin signature algorithm was one of the first digital signature schemes proposed. By introducing the use of hashing as an essential step in signing, it was the first design to meet what is now the modern standard of security against forgery, existential unforgeability under chosen-message attack, assuming suitably scaled parameters. Rabin signatures resemble RSA signatures with exponent formula_0, but this leads to qualitative differences that enable more efficient implementation and a security guarantee relative to the difficulty of integer factorization, which has not been proven for RSA. However, Rabin signatures have seen relatively little use or standardization outside IEEE P1363 in comparison to RSA signature schemes such as RSASSA-PKCS1-v1_5 and RSASSA-PSS. Definition. The Rabin signature scheme is parametrized by a randomized hash function formula_1 of a message formula_2 and formula_3-bit randomization string formula_4. A public key is a pair of integers formula_5 with formula_6 and formula_7 odd. formula_8 is chosen arbitrarily and may be a fixed constant. A signature on a message formula_2 is a pair formula_9 of a formula_3-bit string formula_4 and an integer formula_10 such that formula_11 The private key for a public key formula_5 is the secret odd prime factorization formula_12 of formula_7, chosen uniformly at random from some space of large primes. Let formula_13, formula_14, and formula_15. To make a signature on a message formula_2, the signer picks a formula_3-bit string formula_4 uniformly at random, and computes formula_16. If formula_17 is a quadratic nonresidue modulo formula_7, then the signer repeats this process with a different random bit-string formula_4. 
Otherwise, the signer computes formula_18 using a standard algorithm for computing square roots modulo a prime; picking formula_19 makes it easiest. Square roots are not unique, and different variants of the signature scheme make different choices of square root; in any case, the signer must ensure not to reveal two different roots for the same hash formula_20. The signer then uses the Chinese remainder theorem to solve the system formula_21 for formula_10. A solution to this system satisfies formula_22. If no such formula_10 can be found, this process is repeated with a different random string formula_4. Otherwise, the signer uses the pair formula_9 as the signature. The number of random strings formula_4 that must be tried is geometrically distributed, with an average of about 4 trials, because about 1/4 of all integers are quadratic residues modulo formula_7. Verification is simple: given the message formula_2 and a signature formula_9, the signature is valid if and only if the relation formula_23 holds. Security. Security against any adversary defined generically in terms of a hash function formula_24 (i.e., security in the random oracle model) follows from the difficulty of factoring formula_7: any such adversary with high probability of success at forgery can, with nearly as high probability, find two distinct square roots formula_25 and formula_26 of a random integer formula_20 modulo formula_7. If formula_27 then formula_28 is a nontrivial factor of formula_7, since formula_29 so formula_30 but formula_31. Formalizing the security in modern terms requires filling in some additional details, such as the codomain of formula_24; if we set a standard size formula_32 for the prime factors, formula_33, then we might specify formula_34.
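The signing loop and the verification check described above can be sketched in a few lines of Python. This is a minimal illustration of the simplified variant with formula_36 (that is, b = 0) and both primes congruent to 3 mod 4; the concrete hash construction (SHA-256 over the concatenation of message and randomizer, with a 16-byte randomizer) is an assumption made for this example, not part of the scheme's definition:

```python
import hashlib
import secrets

def H(m, u, n):
    # Illustrative randomized hash: interpret SHA-256(m || u) as an integer mod n.
    # (The scheme only requires some randomized hash into the integers mod n.)
    return int.from_bytes(hashlib.sha256(m + u).digest(), "big") % n

def sign(m, p, q, k=16):
    """Rabin signing with b = 0 and p = q = 3 (mod 4); k is the randomizer size in bytes."""
    n = p * q
    while True:
        u = secrets.token_bytes(k)          # fresh randomizer each trial
        c = H(m, u, n)
        # Euler's criterion: c must be a quadratic residue mod p and mod q
        # (the power is 1 for a residue, 0 if p divides c, p - 1 for a nonresidue)
        if pow(c, (p - 1) // 2, p) not in (0, 1):
            continue
        if pow(c, (q - 1) // 2, q) not in (0, 1):
            continue
        # Since p, q = 3 (mod 4), a square root of c mod p is c^((p+1)/4) mod p
        xp = pow(c, (p + 1) // 4, p)
        xq = pow(c, (q + 1) // 4, q)
        # Chinese remainder theorem: combine xp (mod p) and xq (mod q) into x (mod n)
        x = (xp * q * pow(q, -1, p) + xq * p * pow(p, -1, q)) % n
        return u, x

def verify(m, u, x, n):
    # With b = 0 the relation x(x + b) = H(m, u) (mod n) reduces to x^2 = H(m, u) (mod n)
    return x * x % n == H(m, u, n)
```

Both residue tests succeed together for roughly 1/4 of the random strings, so the loop runs about four times on average, matching the geometric distribution described above.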
Randomization of the hash function was introduced to allow the signer to find a quadratic residue, but randomized hashing for signatures later became relevant in its own right for tighter security theorems and resilience to collision attacks on fixed hash functions. Variants. Removing formula_8. The quantity formula_8 in the public key adds no security, since any algorithm to solve congruences formula_35 for formula_10 given formula_8 and formula_20 can be trivially used as a subroutine in an algorithm to compute square roots modulo formula_7 and vice versa, so implementations can safely set formula_36 for simplicity; formula_8 was discarded altogether in treatments after the initial proposal. After removing formula_8, the equations for formula_37 and formula_38 become: formula_39 Rabin–Williams. The Rabin signature scheme was later tweaked by Williams in 1980 to choose formula_40 and formula_41, and to replace a square root formula_10 by a tweaked square root formula_42, with formula_43 and formula_44, so that a signature instead satisfies formula_45 which allows the signer to create a signature in a single trial without sacrificing security. This variant is known as Rabin–Williams. Others. Further variants allow tradeoffs between signature size and verification speed, partial message recovery, signature compression (down to one-half size), and public key compression (down to one-third size), still without sacrificing security. Variants without the hash function have been published in textbooks, crediting Rabin for exponent 2 but not for the use of a hash function. These variants are trivially broken; for example, the signature formula_46 can be forged by anyone as a valid signature on the message formula_47 if the signature verification equation is formula_48 instead of formula_49.
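The key step of the security reduction in the Security section above, recovering a factor of formula_7 from two distinct square roots of the same residue, can be reproduced numerically. The primes and the base value below are toy values chosen only for illustration; real keys use large random primes:

```python
from math import gcd

p, q = 10007, 10039        # toy primes, both = 3 (mod 4)
n = p * q
c = pow(123, 2, n)         # a quadratic residue mod n with a known root

# Square roots of c modulo each prime factor (valid because p, q = 3 mod 4)
xp = pow(c, (p + 1) // 4, p)
xq = pow(c, (q + 1) // 4, q)

def crt(a, b):
    # Combine a (mod p) and b (mod q) into a single residue mod n
    return (a * q * pow(q, -1, p) + b * p * pow(p, -1, q)) % n

x1 = crt(xp, xq)
x2 = crt(xp, (-xq) % q)    # a different root: same mod p, negated mod q

# Both square to c, but x1 is not congruent to +-x2 mod n, so their
# difference shares exactly one prime factor with n:
factor = gcd(x1 - x2, n)   # recovers the factor p
```

Here `factor` equals p, because x1 and x2 agree modulo p but differ modulo q, exactly as in the forgery-to-factoring argument.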
In the original paper, the hash function formula_1 was written with the notation formula_50, with "C" for "compression", and using juxtaposition to denote concatenation of formula_51 and formula_52 as bit strings: By convention, when wishing to sign a given message, formula_51, [the signer] formula_53 adds as suffix a word formula_52 of an agreed upon length formula_3. The choice of formula_52 is randomized each time a message is to be signed. The signer now compresses formula_54 by a hashing function to a word formula_55, so that as a binary number formula_56… This notation has led some later authors to overlook the formula_57 and to misread formula_58 as a multiplication, creating the misapprehension that the signature scheme is trivially broken. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "e=2" }, { "math_id": 1, "text": "H(m, u)" }, { "math_id": 2, "text": "m" }, { "math_id": 3, "text": "k" }, { "math_id": 4, "text": "u" }, { "math_id": 5, "text": "(n, b)" }, { "math_id": 6, "text": "0 \\leq b < n" }, { "math_id": 7, "text": "n" }, { "math_id": 8, "text": "b" }, { "math_id": 9, "text": "(u, x)" }, { "math_id": 10, "text": "x" }, { "math_id": 11, "text": "x (x + b) \\equiv H(m, u) \\pmod n." }, { "math_id": 12, "text": "p\\cdot q" }, { "math_id": 13, "text": "d = (b/2) \\bmod n" }, { "math_id": 14, "text": "d_p = (b/2) \\bmod p" }, { "math_id": 15, "text": "d_q = (b/2) \\bmod q" }, { "math_id": 16, "text": "c := H(m, u)" }, { "math_id": 17, "text": "c + d^2" }, { "math_id": 18, "text": "\n \\begin{align}\n x_p &:= \\Bigl(-d_p \\pm \\sqrt{c + {d_p}^2}\\Bigr) \\bmod p \\\\\n x_q &:= \\Bigl(-d_q \\pm \\sqrt{c + {d_q}^2}\\Bigr) \\bmod q,\n \\end{align}\n " }, { "math_id": 19, "text": "p \\equiv q \\equiv 3 \\pmod 4" }, { "math_id": 20, "text": "c" }, { "math_id": 21, "text": "\n \\begin{align}\n x &\\equiv x_p \\pmod p \\\\\n x &\\equiv x_q \\pmod q\n \\end{align}\n " }, { "math_id": 22, "text": "x (x + b) \\equiv c\\pmod n" }, { "math_id": 23, "text": "x (x + b) \\equiv H(m, u) \\pmod n" }, { "math_id": 24, "text": "H" }, { "math_id": 25, "text": "x_1" }, { "math_id": 26, "text": "x_2" }, { "math_id": 27, "text": "x_1 \\pm x_2 \\not\\equiv 0 \\pmod n" }, { "math_id": 28, "text": "\\gcd(x_1 \\pm x_2, n)" }, { "math_id": 29, "text": "{x_1}^2 \\equiv {x_2}^2 \\equiv c \\pmod n" }, { "math_id": 30, "text": "n \\mid {x_1}^2 - {x_2}^2 = (x_1 + x_2) (x_1 - x_2)" }, { "math_id": 31, "text": "n \\nmid x_1 \\pm x_2" }, { "math_id": 32, "text": "K" }, { "math_id": 33, "text": "2^{K - 1} < p < q < 2^K" }, { "math_id": 34, "text": "H\\colon \\{0,1\\}^* \\times \\{0,1\\}^k \\to \\{0,1\\}^K" }, { "math_id": 35, "text": "x (x + b) \\equiv c \\pmod n" }, { "math_id": 36, "text": "b = 0" }, { "math_id": 37, "text": "x_p" }, { "math_id": 38, 
"text": "x_q" }, { "math_id": 39, "text": "\n \\begin{align}\n x_p &:= \\pm \\sqrt{c} \\bmod p \\\\\n x_q &:= \\pm \\sqrt{c} \\bmod q,\n \\end{align}\n " }, { "math_id": 40, "text": "p \\equiv 3 \\pmod 8" }, { "math_id": 41, "text": "q \\equiv 7 \\pmod 8" }, { "math_id": 42, "text": "(e, f, x)" }, { "math_id": 43, "text": "e = \\pm1" }, { "math_id": 44, "text": "f \\in \\{1,2\\}" }, { "math_id": 45, "text": "\ne f x^2 \\equiv H(m, u) \\pmod n,\n" }, { "math_id": 46, "text": "x = 2" }, { "math_id": 47, "text": "m = 4" }, { "math_id": 48, "text": "x^2 \\equiv m \\pmod n" }, { "math_id": 49, "text": "x^2 \\equiv H(m, u) \\pmod n" }, { "math_id": 50, "text": "C(MU)" }, { "math_id": 51, "text": "M" }, { "math_id": 52, "text": "U" }, { "math_id": 53, "text": "P" }, { "math_id": 54, "text": "M_1 = MU" }, { "math_id": 55, "text": "C(M_1) = c" }, { "math_id": 56, "text": "c \\leq n" }, { "math_id": 57, "text": "C" }, { "math_id": 58, "text": "MU" } ]
https://en.wikipedia.org/wiki?curid=5954800
5955036
Organofluorine chemistry
Study of chemical compounds containing fluorine-carbon bonds Organofluorine chemistry describes the chemistry of organofluorine compounds, organic compounds that contain a carbon–fluorine bond. Organofluorine compounds find diverse applications ranging from oil and water repellents to pharmaceuticals, refrigerants, and reagents in catalysis. In addition to these applications, some organofluorine compounds are pollutants because of their contributions to ozone depletion, global warming, bioaccumulation, and toxicity. The area of organofluorine chemistry often requires special techniques associated with the handling of fluorinating agents. The carbon–fluorine bond. Fluorine has several distinctive differences from all other substituents encountered in organic molecules. As a result, the physical and chemical properties of organofluorines can be distinctive in comparison to other organohalogens. In comparison to aryl chlorides and bromides, aryl fluorides form Grignard reagents only reluctantly. On the other hand, aryl fluorides, e.g. fluoroanilines and fluorophenols, often undergo nucleophilic substitution efficiently. Types of organofluorine compounds. Fluorocarbons. Formally, fluorocarbons only contain carbon and fluorine. Sometimes they are called perfluorocarbons. They can be gases, liquids, waxes, or solids, depending upon their molecular weight. The simplest fluorocarbon is the gas tetrafluoromethane (CF4). Liquids include perfluorooctane and perfluorodecalin. While fluorocarbons with single bonds are stable, unsaturated fluorocarbons are more reactive, especially those with triple bonds. Fluorocarbons are more chemically and thermally stable than hydrocarbons, reflecting the relative inertness of the C-F bond. They are also relatively lipophobic. Because of the reduced intermolecular van der Waals interactions, fluorocarbon-based compounds are sometimes used as lubricants or are highly volatile. 
Fluorocarbon liquids have medical applications as oxygen carriers. The structure of organofluorine compounds can be distinctive. As shown below, perfluorinated aliphatic compounds tend to segregate from hydrocarbons. This "like dissolves like" effect is related to the usefulness of fluorous phases and the use of PFOA in the processing of fluoropolymers. In contrast to the aliphatic derivatives, perfluoroaromatic derivatives tend to form mixed phases with nonfluorinated aromatic compounds, resulting from donor-acceptor interactions between the pi-systems. Fluoropolymers. Polymeric organofluorine compounds are numerous and commercially significant. They range from fully fluorinated species, e.g. PTFE, to partially fluorinated ones, e.g. polyvinylidene fluoride ([CH2CF2]n) and polychlorotrifluoroethylene ([CFClCF2]n). The fluoropolymer polytetrafluoroethylene (PTFE/Teflon) is a solid. Hydrofluorocarbons. Hydrofluorocarbons (HFCs), organic compounds that contain fluorine and hydrogen atoms, are the most common type of organofluorine compounds. They are commonly used in air conditioning and as refrigerants in place of the older chlorofluorocarbons such as R-12 and hydrochlorofluorocarbons such as R-21. They do not harm the ozone layer as much as the compounds they replace; however, they do contribute to global warming. Their atmospheric concentrations and contribution to anthropogenic greenhouse gas emissions are rapidly increasing, causing international concern about their radiative forcing. Fluorocarbons with few C-F bonds behave similarly to the parent hydrocarbons, but their reactivity can be altered significantly. For example, both uracil and 5-fluorouracil are colourless, high-melting crystalline solids, but the latter is a potent anti-cancer drug. The use of the C-F bond in pharmaceuticals is predicated on this altered reactivity. Several drugs and agrochemicals contain only one fluorine center or one trifluoromethyl group.
Unlike other greenhouse gases, which fall under the Paris Agreement, hydrofluorocarbons are the subject of separate international negotiations. In September 2016, the so-called New York Declaration urged a global reduction in the use of HFCs. On 15 October 2016, because of these chemicals' contribution to climate change, negotiators from 197 nations meeting at the summit of the United Nations Environment Programme in Kigali, Rwanda reached a legally binding accord to phase out hydrofluorocarbons (HFCs) in an amendment to the Montreal Protocol. Fluorocarbenes. As indicated throughout this article, fluorine substituents lead to reactivity that differs strongly from that of classical organic chemistry. The premier example is difluorocarbene, CF2, which is a singlet whereas carbene (CH2) has a triplet ground state. This difference is significant because difluorocarbene is a precursor to tetrafluoroethylene. Perfluorinated compounds. Perfluorinated compounds are fluorocarbon derivatives, as they are closely structurally related to fluorocarbons. However, they also contain other atoms, such as nitrogen or iodine, or ionic groups, as in the perfluorinated carboxylic acids. Methods for preparation of C–F bonds. Organofluorine compounds are prepared by numerous routes, depending on the degree and regiochemistry of fluorination sought and the nature of the precursors. The direct fluorination of hydrocarbons with F2, often diluted with N2, is useful for highly fluorinated compounds: R3CH + F2 → R3CF + HF Such reactions, however, are often unselective and require care because hydrocarbons can uncontrollably "burn" in F2, analogous to the combustion of hydrocarbons in O2. For this reason, alternative fluorination methodologies have been developed. Generally, such methods fall into two classes. Electrophilic fluorination. Electrophilic fluorination relies on sources of "F+". Often such reagents feature N-F bonds, for example F-TEDA-BF4.
Asymmetric fluorination, whereby only one of two possible enantiomeric products is generated from a prochiral substrate, relies on electrophilic fluorination reagents. Illustrative of this approach is the preparation of a precursor to anti-inflammatory agents: Electrosynthetic methods. A specialized but important method of electrophilic fluorination involves electrosynthesis. The method is mainly used to perfluorinate, i.e. replace all C–H bonds by C–F bonds. The hydrocarbon is dissolved or suspended in liquid HF, and the mixture is electrolyzed at 5–6 V using Ni anodes. The method was first demonstrated with the preparation of perfluoropyridine (C5F5N) from pyridine (C5H5N). Several variations of this technique have been described, including the use of molten potassium bifluoride or organic solvents. Nucleophilic fluorination. The major alternative to electrophilic fluorination is nucleophilic fluorination, which uses reagents that are sources of "F−" for nucleophilic displacement, typically of chloride and bromide. Metathesis reactions employing alkali metal fluorides are the simplest. For aliphatic compounds this is sometimes called the Finkelstein reaction, while for aromatic compounds it is known as the Halex process. R3CCl + MF → R3CF + MCl (M = Na, K, Cs) Alkyl monofluorides can be obtained from alcohols and the Olah reagent (pyridinium fluoride) or other fluorinating agents. The decomposition of aryldiazonium tetrafluoroborates in the Sandmeyer or Schiemann reactions exploits fluoroborates as F− sources. ArN2BF4 → ArF + N2 + BF3 Although hydrogen fluoride may appear to be an unlikely nucleophile, it is the most common source of fluoride in the synthesis of organofluorine compounds. Such reactions are often catalysed by metal fluorides such as chromium trifluoride.
1,1,1,2-Tetrafluoroethane, a replacement for CFCs, is prepared industrially using this approach: Cl2C=CClH + 4 HF → F3CCFH2 + 3 HCl Notice that this transformation entails two reaction types, metathesis (replacement of Cl− by F−) and hydrofluorination of an alkene. Deoxofluorination. Deoxofluorination converts a variety of oxygen-containing groups into fluorides. The usual reagent is sulfur tetrafluoride: RCO2H + SF4 → RCF3 + SO2 + HF A more convenient alternative to SF4 is diethylaminosulfur trifluoride (DAST), which is a liquid, whereas SF4 is a corrosive gas: Apart from DAST, a wide variety of similar reagents exist, including, but not limited to, 2-pyridinesulfonyl fluoride (PyFluor) and "N"-tosyl-4-chlorobenzenesulfonimidoyl fluoride (SulfoxFluor). Many of these display improved properties such as better safety profiles, higher thermodynamic stability, ease of handling, high enantioselectivity, and selectivity over elimination side-reactions. From fluorinated building blocks. Many organofluorine compounds are generated from reagents that deliver perfluoroalkyl and perfluoroaryl groups. (Trifluoromethyl)trimethylsilane, CF3Si(CH3)3, is used as a source of the trifluoromethyl group, for example. Among the available fluorinated building blocks are CF3X (X = Br, I), C6F5Br, and C3F7I. These species form Grignard reagents that can then be treated with a variety of electrophiles. The development of fluorous technologies (see below, under solvents) is leading to the development of reagents for the introduction of "fluorous tails". A special but significant application of the fluorinated building block approach is the synthesis of tetrafluoroethylene, which is produced on a large scale industrially via the intermediacy of difluorocarbene. The process begins with the thermal (600–800 °C) dehydrochlorination of chlorodifluoromethane: CHClF2 → CF2 + HCl 2 CF2 → C2F4 Sodium fluorodichloroacetate (CAS# 2837-90-3) is used to generate chlorofluorocarbene for cyclopropanations.
18F-Delivery methods. The usefulness of fluorine-containing radiopharmaceuticals in 18F-positron emission tomography has motivated the development of new methods for forming C–F bonds. Because of the short half-life of 18F, these syntheses must be highly efficient, rapid, and easy. Illustrative of the methods is the preparation of fluoride-modified glucose by displacement of a triflate by a labeled fluoride nucleophile: Biological role. Biologically synthesized organofluorines have been found in microorganisms and plants, but not animals. The most common example is fluoroacetate, which occurs as a plant defence against herbivores in at least 40 plants in Australia, Brazil and Africa. Other biologically synthesized organofluorines include ω-fluoro fatty acids, fluoroacetone, and 2-fluorocitrate which are all believed to be biosynthesized in biochemical pathways from the intermediate fluoroacetaldehyde. Adenosyl-fluoride synthase is an enzyme capable of biologically synthesizing the carbon–fluorine bond. Applications. Organofluorine chemistry impacts many areas of everyday life and technology. The C-F bond is found in pharmaceuticals, agrichemicals, fluoropolymers, refrigerants, surfactants, anesthetics, oil-repellents, catalysis, and water-repellents, among others. Pharmaceuticals and agrochemicals. The carbon-fluorine bond is commonly found in pharmaceuticals and agrochemicals because it is generally metabolically stable and fluorine acts as a bioisostere of the hydrogen atom. An estimated 1/5 of pharmaceuticals contain fluorine, including several of the top drugs. Examples include 5-fluorouracil, flunitrazepam (Rohypnol), fluoxetine (Prozac), paroxetine (Paxil), ciprofloxacin (Cipro), mefloquine, and fluconazole. Introducing the carbon–fluorine bond to organic compounds is the major challenge for medicinal chemists using organofluorine chemistry, as the carbon–fluorine bond increases the probability of having a successful drug by about a factor of ten. 
Inhaler propellant. Fluorocarbons are also used as propellants for metered-dose inhalers used to administer some asthma medications. The current generation of propellants consists of hydrofluoroalkanes (HFA), which have replaced CFC-propellant-based inhalers. CFC inhalers were banned as of 2008 as part of the Montreal Protocol because of environmental concerns with the ozone layer. HFA propellant inhalers like Flovent and ProAir (salbutamol) have no generic versions available as of October 2014. Fluorosurfactants. Fluorosurfactants, which have a polyfluorinated "tail" and a hydrophilic "head", serve as surfactants because they concentrate at the liquid-air interface due to their lipophobicity. Fluorosurfactants have low surface energies and dramatically lower surface tension. The fluorosurfactants perfluorooctanesulfonic acid (PFOS) and perfluorooctanoic acid (PFOA) are two of the most studied because of their ubiquity, toxicity, and long residence times in humans and wildlife. Solvents. Fluorinated compounds often display distinct solubility properties. Dichlorodifluoromethane and chlorodifluoromethane were at one time widely used refrigerants. CFCs have potent ozone depletion potential due to the homolytic cleavage of the carbon-chlorine bonds; their use is largely prohibited by the Montreal Protocol. Hydrofluorocarbons (HFCs), such as tetrafluoroethane, serve as CFC replacements because they do not catalyze ozone depletion. Oxygen exhibits a high solubility in perfluorocarbon compounds, reflecting their lipophilicity. Perfluorodecalin has been demonstrated as a blood substitute, transporting oxygen to the lungs. Fluorine-substituted ethers are volatile anesthetics, including the commercial products methoxyflurane, enflurane, isoflurane, sevoflurane and desflurane. Fluorocarbon anesthetics reduce the flammability hazard posed by diethyl ether and cyclopropane. Perfluorinated alkanes are used as blood substitutes.
The solvent 1,1,1,2-tetrafluoroethane has been used for the extraction of natural products such as taxol, evening primrose oil, and vanillin. 2,2,2-Trifluoroethanol is an oxidation-resistant polar solvent. Organofluorine reagents. The development of organofluorine chemistry has contributed many reagents of value beyond organofluorine chemistry. Triflic acid (CF3SO3H) and trifluoroacetic acid (CF3CO2H) are useful throughout organic synthesis. Their strong acidity is attributed to the electronegativity of the trifluoromethyl group, which stabilizes the negative charge. The triflate group (the conjugate base of triflic acid) is a good leaving group in substitution reactions. Fluorous phases. Highly fluorinated substituents, e.g. perfluorohexyl (C6F13), confer distinctive solubility properties on molecules, which facilitates purification of products in organic synthesis. This area, described as "fluorous chemistry," exploits the concept of like-dissolves-like in the sense that fluorine-rich compounds dissolve preferentially in fluorine-rich solvents. Because of the relative inertness of the C-F bond, such fluorous phases are compatible with harsh reagents. This theme has spawned the techniques of "fluorous tagging" and "fluorous protection". Illustrative of fluorous technology is the use of fluoroalkyl-substituted tin hydrides for reductions, the products being easily separated from the spent tin reagent by extraction using fluorinated solvents. Hydrophobic fluorinated ionic liquids, such as organic salts of bistriflimide or hexafluorophosphate, can form phases that are insoluble in both water and organic solvents, producing multiphasic liquids. Organofluorine ligands in coordination chemistry. Organofluorine ligands have long been featured in organometallic and coordination chemistry. One advantage of F-containing ligands is the convenience of 19F NMR spectroscopy for monitoring reactions.
Organofluorine compounds can serve as sigma-donor ligands, as illustrated by the titanium(III) derivative [(C5Me5)2Ti(FC6H5)]BPh4. Most often, however, fluorocarbon substituents are used to enhance the Lewis acidity of metal centers. A premier example is "Eufod," a coordination complex of europium(III) that features a perfluoroheptyl-modified acetylacetonate ligand. This and related species are useful in organic synthesis and as "shift reagents" in NMR spectroscopy. In an area where coordination chemistry and materials science overlap, the fluorination of organic ligands is used to tune the properties of component molecules. For example, the degree and regiochemistry of fluorination of metalated 2-phenylpyridine ligands in platinum(II) complexes significantly modifies the emission properties of the complexes. The coordination chemistry of organofluorine ligands also embraces fluorous technologies. For example, triphenylphosphine has been modified by attachment of perfluoroalkyl substituents that confer solubility in perfluorohexane as well as supercritical carbon dioxide. A specific example is (C8F17C3H6-4-C6H4)3P. Some metal complexes cleave C-F bonds. These reactions are of interest from the perspectives of organic synthesis and remediation of xenochemicals. C-F bond activation has been classified as follows "(i) oxidative addition of fluorocarbon, (ii) M–C bond formation with HF elimination, (iii) M–C bond formation with fluorosilane elimination, (iv) hydrodefluorination of fluorocarbon with M–F bond formation, (v) nucleophilic attack on fluorocarbon, and (vi) defluorination of fluorocarbon". An illustrative metal-mediated C-F activation reaction is the defluorination of fluorohexane by a zirconocene dihydride: Fluorine-containing compounds are often featured in noncoordinating or weakly coordinating anions.
Both tetrakis(pentafluorophenyl)borate, B(C6F5)4−, and the related tetrakis[3,5-bis(trifluoromethyl)phenyl]borate are useful in Ziegler–Natta catalysis and related alkene polymerization methodologies. The fluorinated substituents render the anions weakly basic and enhance their solubility in weakly basic solvents, which are compatible with strong Lewis acids. Materials science. Organofluorine compounds enjoy many niche applications in materials science. With a low coefficient of friction, fluid fluoropolymers are used as specialty lubricants. Fluorocarbon-based greases are used in demanding applications. Representative products include Fomblin and Krytox, made by Solvay Solexis and DuPont, respectively. Certain firearm lubricants such as "Tetra Gun" contain fluorocarbons. Capitalizing on their nonflammability, fluorocarbons are used in fire-fighting foam. Organofluorine compounds are components of liquid crystal displays. The polymeric analogue of triflic acid, Nafion, is a solid acid that is used as the membrane in most low-temperature fuel cells. The bifunctional monomer 4,4'-difluorobenzophenone is a precursor to PEEK-class polymers. Biosynthesis of organofluorine compounds. In contrast to the many naturally occurring organic compounds containing the heavier halides, chloride, bromide, and iodide, only a handful of biologically synthesized carbon-fluorine bonds are known. The most common natural organofluorine species is fluoroacetate, a toxin found in a few species of plants. Others include fluorooleic acid, fluoroacetone, nucleocidin (4'-fluoro-5'-O-sulfamoyladenosine), fluorothreonine, and 2-fluorocitrate. Several of these species are probably biosynthesized from fluoroacetaldehyde. The enzyme fluorinase catalyzes the synthesis of 5'-deoxy-5'-fluoroadenosine (see scheme to right). History. Organofluorine chemistry began in the 1800s with the development of organic chemistry.
The first organofluorine compounds were prepared using antimony trifluoride as the F− source. The nonflammability and nontoxicity of the chlorofluorocarbons CCl3F and CCl2F2 attracted industrial attention in the 1920s. On April 6, 1938, Roy J. Plunkett, a young research chemist working at DuPont's Jackson Laboratory in Deepwater, New Jersey, accidentally discovered polytetrafluoroethylene (PTFE). Subsequent major developments, especially in the US, benefited from expertise gained in the production of uranium hexafluoride. Starting in the late 1940s, a series of electrophilic fluorinating methodologies were introduced, beginning with CoF3. Electrochemical fluorination ("electrofluorination"), which Joseph H. Simons had developed in the 1930s to generate highly stable perfluorinated materials compatible with uranium hexafluoride, was also announced. These new methodologies allowed the synthesis of C-F bonds without using elemental fluorine and without relying on metathetical methods. In 1957, the anticancer activity of 5-fluorouracil was described. This report provided one of the first examples of rational drug design, and the discovery sparked a surge of interest in fluorinated pharmaceuticals and agrichemicals. The discovery of the noble gas compounds, e.g. XeF4, provided a host of new reagents starting in the early 1960s. In the 1970s, fluorodeoxyglucose was established as a useful reagent in 18F positron emission tomography. In Nobel Prize-winning work, CFCs were shown to contribute to the depletion of atmospheric ozone. This discovery alerted the world to the negative consequences of organofluorine compounds and motivated the development of new routes to organofluorine compounds. In 2002, the first C-F bond-forming enzyme, fluorinase, was reported. Environmental and health concerns. Only a few organofluorine compounds are acutely bioactive and highly toxic, such as fluoroacetate and perfluoroisobutene. 
Some organofluorine compounds pose significant risks and dangers to health and the environment. CFCs and HCFCs (hydrochlorofluorocarbons) deplete the ozone layer and are potent greenhouse gases. HFCs are potent greenhouse gases and are facing calls for stricter international regulation and phase-out schedules as a fast-acting greenhouse-emission abatement measure, as are perfluorocarbons (PFCs) and sulfur hexafluoride (SF6). Because of these compounds' effect on climate, the G-20 major economies agreed in 2013 to support initiatives to phase out use of HFCs. They affirmed the roles of the Montreal Protocol and the United Nations Framework Convention on Climate Change in global HFC accounting and reduction. The U.S. and China at the same time announced a bilateral agreement to similar effect. Persistence and bioaccumulation. Because of the strength of the carbon–fluorine bond, many synthetic fluorocarbons and fluorocarbon-based compounds are persistent in the environment. Fluorosurfactants, such as PFOS and PFOA, are persistent global contaminants. Fluorocarbon-based CFCs and tetrafluoromethane have been reported in igneous and metamorphic rock. PFOS is a persistent organic pollutant and may be harming the health of wildlife; the potential health effects of PFOA to humans are under investigation by the C8 Science Panel. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "\\times" } ]
https://en.wikipedia.org/wiki?curid=5955036
59551559
Evdokimov's algorithm
Algorithm for factorization of polynomials In computational number theory, Evdokimov's algorithm, named after Sergei Evdokimov, is an algorithm for factorization of polynomials over finite fields. It was the fastest algorithm known for this problem, from its publication in 1994 until 2020. It can factorize a one-variable polynomial of degree formula_0 over an explicitly given finite field of cardinality formula_1. Assuming the generalized Riemann hypothesis the algorithm runs in deterministic time formula_2 (see Big O notation). This is an improvement on both Berlekamp's algorithm and Rónyai's algorithm in the sense that the first algorithm is polynomial for small characteristic of the field, whereas the second one is polynomial for small formula_0; however, both of them are exponential if no restriction is made. The factorization of a polynomial formula_3 over a ground field formula_4 is reduced to the case when formula_3 has no multiple roots and is completely splitting over formula_4 (i.e. formula_3 has formula_0 distinct roots in formula_4). In order to find a root of formula_3 in this case, the algorithm deals with polynomials not only over the ground field formula_4 but also over a completely splitting semisimple algebra over formula_4 (an example of such an algebra is given by formula_5, where formula_6). The main problem here is to find efficiently a nonzero zero-divisor in the algebra. The GRH is used only to take roots in finite fields in polynomial time. Thus the Evdokimov algorithm, in fact, solves a polynomial equation over a finite field "by radicals" in quasipolynomial time. The analysis of Evdokimov's algorithm is closely related to some problems in association scheme theory. 
With the help of this approach, it was proved that if formula_0 is a prime and formula_7 has a ‘large’ formula_8-smooth divisor formula_9, then a modification of the Evdokimov algorithm finds a nontrivial factor of the polynomial formula_3 in deterministic formula_10 time, assuming GRH and that formula_11. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
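The problem the algorithm solves — finding the roots of a completely splitting polynomial over an explicitly given finite field — can be illustrated for tiny fields by exhaustive search. This sketch is of course not Evdokimov's method (whose point is efficiency for large fields); it only demonstrates the setting:

```python
def roots_in_gf_p(coeffs, p):
    """Roots in GF(p) of the polynomial with coefficients listed from the
    highest degree down, found by trying every field element (Horner's rule)."""
    def evaluate(x):
        acc = 0
        for c in coeffs:
            acc = (acc * x + c) % p
        return acc
    return [x for x in range(p) if evaluate(x) == 0]

# x^2 - 1 is completely splitting over GF(7): two distinct roots, 1 and 6
assert roots_in_gf_p([1, 0, -1], 7) == [1, 6]
# x^3 - 1 also splits completely over GF(7), since 7 ≡ 1 (mod 3)
assert roots_in_gf_p([1, 0, 0, -1], 7) == [1, 2, 4]
# x^2 + 1 has no roots over GF(7), so it is irreducible there
assert roots_in_gf_p([1, 0, 1], 7) == []
```

Exhaustive search costs O(pn) field operations and is hopeless for large cardinality; Evdokimov's contribution is the deterministic quasipolynomial bound quoted above.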
[ { "math_id": 0, "text": "n" }, { "math_id": 1, "text": "q " }, { "math_id": 2, "text": "(n^{\\log n}\\log q)^{{\\mathcal O}(1)}" }, { "math_id": 3, "text": "f" }, { "math_id": 4, "text": "k" }, { "math_id": 5, "text": "k[X]/(f) = k[A]" }, { "math_id": 6, "text": "A = X\\bmod f" }, { "math_id": 7, "text": "n-1" }, { "math_id": 8, "text": "r" }, { "math_id": 9, "text": "s" }, { "math_id": 10, "text": "\\operatorname{poly}(n^r,\\log q)" }, { "math_id": 11, "text": "s=\\Omega\\left(\\sqrt{n/2^r}\\,\\right)" } ]
https://en.wikipedia.org/wiki?curid=59551559
59558857
Stabilization hypothesis
In mathematics, specifically in category theory and algebraic topology, the Baez–Dolan stabilization hypothesis, proposed in , states that suspending a weak "n"-category has no further essential effect after "n" + 2 suspensions. Precisely, it states that the suspension functor formula_0 is an equivalence for formula_1. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "\\mathsf{nCat}_k \\to \\mathsf{nCat}_{k+1}" }, { "math_id": 1, "text": "k \\ge n + 2" } ]
https://en.wikipedia.org/wiki?curid=59558857
59560643
Gautschi's inequality
In real analysis, a branch of mathematics, Gautschi's inequality is an inequality for ratios of gamma functions. It is named after Walter Gautschi. Statement. Let formula_0 be a positive real number, and let formula_1. Then formula_2 History. In 1948, Wendel proved the inequalities formula_3 for formula_4 and formula_1. He used this to determine the asymptotic behavior of a ratio of gamma functions. The upper bound in this inequality is stronger than the one given above. In 1959, Gautschi independently proved two inequalities for ratios of gamma functions. His lower bounds were identical to Wendel's. One of his upper bounds was the one given in the statement above, while the other one was sometimes stronger and sometimes weaker than Wendel's. Consequences. An immediate consequence is the following description of the asymptotic behavior of ratios of gamma functions: formula_5 Proofs. There are several known proofs of Gautschi's inequality. One simple proof is based on the strict logarithmic convexity of Euler's gamma function. By definition, this means that for every formula_6 and formula_7 with formula_8 and every formula_9, we have formula_10 Apply this inequality with formula_11, formula_12, and formula_13. Also apply it with formula_14, formula_15, and formula_16. The resulting inequalities are: formula_17 Rearranging the first of these gives the lower bound, while rearranging the second and applying the trivial estimate formula_18 gives the upper bound. Related inequalities. A survey of inequalities for ratios of gamma functions was written by Qi. The proof by logarithmic convexity gives the stronger upper bound formula_19 Gautschi's original paper proved a different stronger upper bound, formula_20 where formula_21 is the digamma function. Neither of these upper bounds is always stronger than the other. Kershaw proved two tighter inequalities. 
Again assuming that formula_4 and formula_22, formula_23 Gautschi's inequality is specific to a quotient of gamma functions evaluated at two real numbers having a small difference. However, there are extensions to other situations. If formula_0 and formula_24 are positive real numbers, then the convexity of formula_21 leads to the inequality: formula_25 For formula_22, this leads to the estimates formula_26 A related but weaker inequality can be easily derived from the mean value theorem and the monotonicity of formula_21. A more explicit inequality valid for a wider class of arguments is due to Kečkić and Vasić, who proved that if formula_27, then: formula_28 In particular, for formula_1, we have: formula_29 Guo, Qi, and Srivastava proved a similar-looking inequality, valid for all formula_30: formula_31 For formula_1, this leads to: formula_32 References. &lt;templatestyles src="Reflist/styles.css" /&gt;
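Gautschi's inequality is easy to check numerically. The following sketch uses Python's standard-library gamma function to verify the strict bounds for a few sample values of x > 0 and s in (0, 1):

```python
import math

def gautschi_bounds(x, s):
    """Lower bound, the ratio Gamma(x+1)/Gamma(x+s), and the upper bound
    appearing in Gautschi's inequality."""
    ratio = math.gamma(x + 1) / math.gamma(x + s)
    return x ** (1 - s), ratio, (x + 1) ** (1 - s)

# the strict inequalities hold for every x > 0 and s in (0, 1)
for x in (0.3, 1.0, 4.5, 25.0):
    for s in (0.1, 0.5, 0.9):
        lower, ratio, upper = gautschi_bounds(x, s)
        assert lower < ratio < upper
```

The same loop, run with Kershaw's bounds in place of Gautschi's, shows numerically how much tighter they are.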
[ { "math_id": 0, "text": "x" }, { "math_id": 1, "text": "s\\in (0,1)" }, { "math_id": 2, "text": "x^{1 - s} < \\frac{\\Gamma(x + 1)}{\\Gamma(x + s)} < (x + 1)^{1 - s}." }, { "math_id": 3, "text": "\\left(\\frac{x}{x + s}\\right)^{1 - s} \\le \\frac{\\Gamma(x + s)}{x^s\\Gamma(x)} \\le 1" }, { "math_id": 4, "text": "x>0" }, { "math_id": 5, "text": "\\lim_{x \\to \\infty} \\frac{\\Gamma(x + 1)}{\\Gamma(x + s)x^{1-s}} = 1." }, { "math_id": 6, "text": "u" }, { "math_id": 7, "text": "v" }, { "math_id": 8, "text": "u \\neq v" }, { "math_id": 9, "text": "t\\in (0,1)" }, { "math_id": 10, "text": "\\Gamma(tu + (1 - t)v) < \\Gamma(u)^t\\Gamma(v)^{1-t}." }, { "math_id": 11, "text": "u=x" }, { "math_id": 12, "text": "v=x+1" }, { "math_id": 13, "text": "t=1-s" }, { "math_id": 14, "text": "u=x+2" }, { "math_id": 15, "text": "v=x+s+1" }, { "math_id": 16, "text": "t=s" }, { "math_id": 17, "text": "\\begin{align}\n\\Gamma(x + s) &< \\Gamma(x)^{1 - s}\\Gamma(x + 1)^s = x^{s - 1}\\Gamma(x + 1), \\\\\n\\Gamma(x + 1) &< \\Gamma(x + s)^s\\Gamma(x + s + 1)^{1 - s} = (x + s)^{1 - s}\\Gamma(x + s).\n\\end{align}" }, { "math_id": 18, "text": "x + s < x + 1" }, { "math_id": 19, "text": "\\frac{\\Gamma(x + 1)}{\\Gamma(x + s)} < (x + s)^{1 - s}." 
}, { "math_id": 20, "text": "\\frac{\\Gamma(x + 1)}{\\Gamma(x + s)} \\le \\exp((1 - s)\\psi(x + 1))," }, { "math_id": 21, "text": "\\psi" }, { "math_id": 22, "text": "s\\in(0,1)" }, { "math_id": 23, "text": "\\begin{align}\n\\left(x + \\frac{s}{2}\\right)^{1 - s} &< \\frac{\\Gamma(x + 1)}{\\Gamma(x + s)} < \\left[x - \\frac{1}{2} + \\left(s + \\frac{1}{4}\\right)^{1/2}\\right]^{1 - s}, \\\\\n\\exp\\left((1 - s)\\psi(x + s^{1/2})\\right) &< \\frac{\\Gamma(x + 1)}{\\Gamma(x + s)} < \\exp\\left((1 - s)\\psi\\left(x + \\frac{1}{2}(s + 1)\\right)\\right).\n\\end{align}" }, { "math_id": 24, "text": "y" }, { "math_id": 25, "text": "\\frac{1}{2}(\\psi(x) + \\psi(y)) \\le \\frac{\\log \\Gamma(y) - \\log \\Gamma(x)}{y - x} \\le \\psi\\left(\\frac{x + y}{2}\\right)." }, { "math_id": 26, "text": "\\exp\\bigl((1 - s)\\psi(x + s)\\bigr) \\le \\frac{\\Gamma(x + 1)}{\\Gamma(x + s)} \\le \\exp\\left((1 - s)\\psi\\left(x + \\frac{1}{2}(s + 1)\\right)\\right)." }, { "math_id": 27, "text": "y>x>1" }, { "math_id": 28, "text": "\\frac{y^{y-1}}{x^{x-1}}e^{x-y} < \\frac{\\Gamma(y)}{\\Gamma(x)} < \\frac{y^{y-1/2}}{x^{x-1/2}}e^{x-y}." }, { "math_id": 29, "text": "\\frac{(x + 1)^x}{(x + s)^{x + s - 1}}e^{-(1 - s)} < \\frac{\\Gamma(x + 1)}{\\Gamma(x + s)} < \\frac{(x + 1)^{x + 1/2}}{(x + s)^{x + s - 1/2}}e^{-(1 - s)}." }, { "math_id": 30, "text": "y>x>0" }, { "math_id": 31, "text": "\\frac{(x + 1)^{x + 1}}{(y + 1)^{y + 1}}e^{y-x} < \\frac{\\Gamma(x + 1)}{\\Gamma(y + 1)} < \\frac{(x + 1/2)^{x + 1/2}}{(y + 1/2)^{y + 1/2}}e^{y-x}." }, { "math_id": 32, "text": "\\frac{(x + 1)^{x + 1}}{(x + s)^{x + s}}e^{s - 1} < \\frac{\\Gamma(x + 1)}{\\Gamma(x + s)} < \\frac{(x + 1/2)^{x + 1/2}}{(x + s - 1/2)^{x + s - 1/2}}e^{s - 1}." } ]
https://en.wikipedia.org/wiki?curid=59560643
59565
Figure-eight knot (mathematics)
Unique knot with a crossing number of four In knot theory, a figure-eight knot (also called Listing's knot) is the unique knot with a crossing number of four. This makes it the knot with the third-smallest possible crossing number, after the unknot and the trefoil knot. The figure-eight knot is a prime knot. Origin of name. The name is given because tying a normal figure-eight knot in a rope and then joining the ends together, in the most natural way, gives a model of the mathematical knot. Description. A simple parametric representation of the figure-eight knot is as the set of all points ("x","y","z") where formula_0 for "t" varying over the real numbers. The figure-eight knot is prime, alternating, rational with an associated value of 5/3, and achiral. The figure-eight knot is also a fibered knot. This follows from other, less simple (but very interesting) representations of the knot: (1) It is a "homogeneous" closed braid (namely, the closure of the 3-string braid σ1σ2⁻¹σ1σ2⁻¹), and a theorem of John Stallings shows that any closed homogeneous braid is fibered. (2) It is the link at (0,0,0,0) of an isolated critical point of a real-polynomial map F: R4→R2, so (according to a theorem of John Milnor) the Milnor map of F is actually a fibration. Bernard Perron found the first such F for this knot, namely, formula_1 where formula_2 Mathematical properties. The figure-eight knot has played an important role historically (and continues to do so) in the theory of 3-manifolds. Sometime in the mid-to-late 1970s, William Thurston showed that the figure-eight knot was hyperbolic, by decomposing its complement into two ideal hyperbolic tetrahedra. (Robert Riley and Troels Jørgensen, working independently of each other, had earlier shown that the figure-eight knot was hyperbolic by other means.) This construction, new at the time, led him to many powerful results and methods. 
For example, he was able to show that all but ten Dehn surgeries on the figure-eight knot resulted in non-Haken, non-Seifert-fibered irreducible 3-manifolds; these were the first such examples. Many more have been discovered by generalizing Thurston's construction to other knots and links. The figure-eight knot is also the hyperbolic knot whose complement has the smallest possible volume, formula_3 (sequence in the OEIS), where formula_4 is the Lobachevsky function. From this perspective, the figure-eight knot can be considered the simplest hyperbolic knot. The figure-eight knot complement is a double cover of the Gieseking manifold, which has the smallest volume among non-compact hyperbolic 3-manifolds. The figure-eight knot and the (−2,3,7) pretzel knot are the only two hyperbolic knots known to have more than 6 "exceptional surgeries", Dehn surgeries resulting in a non-hyperbolic 3-manifold; they have 10 and 7, respectively. A theorem of Lackenby and Meyerhoff, whose proof relies on the geometrization conjecture and computer assistance, holds that 10 is the largest possible number of exceptional surgeries of any hyperbolic knot. However, it is not currently known whether the figure-eight knot is the only one that achieves the bound of 10. A well-known conjecture is that the bound (except for the two knots mentioned) is 6. The figure-eight knot has genus 1 and is fibered. Therefore, its complement fibers over the circle, the fibers being Seifert surfaces which are 2-dimensional tori with one boundary component. The monodromy map is then a homeomorphism of the 2-torus, which can be represented in this case by the matrix formula_5. Invariants. The Alexander polynomial of the figure-eight knot is formula_6 the Conway polynomial is formula_7 and the Jones polynomial is formula_8 The symmetry between formula_9 and formula_10 in the Jones polynomial reflects the fact that the figure-eight knot is achiral. Notes. 
&lt;templatestyles src="Reflist/styles.css" /&gt; References. &lt;templatestyles src="Reflist/styles.css" /&gt;
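Two of the facts above can be verified directly: the parametric representation is a closed curve of period 2π, and the Jones polynomial is invariant under swapping q and q⁻¹, reflecting achirality. A minimal sketch:

```python
import math
from fractions import Fraction

def figure_eight(t):
    """The parametric representation of the figure-eight knot given above."""
    x = (2 + math.cos(2 * t)) * math.cos(3 * t)
    y = (2 + math.cos(2 * t)) * math.sin(3 * t)
    z = math.sin(4 * t)
    return (x, y, z)

# the curve closes up: the point at t + 2*pi coincides with the point at t
p0, p1 = figure_eight(0.0), figure_eight(2 * math.pi)
assert all(abs(a - b) < 1e-9 for a, b in zip(p0, p1))

def jones(q):
    """Jones polynomial V(q) = q^2 - q + 1 - q^-1 + q^-2 of the figure-eight knot."""
    return q**2 - q + 1 - q**-1 + q**-2

# achirality: V(q) = V(1/q), checked exactly at a rational sample point
q = Fraction(5, 3)
assert jones(q) == jones(1 / q)
```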
[ { "math_id": 0, "text": " \\begin{align}\n x & = \\left(2 + \\cos{(2t)} \\right) \\cos{(3t)} \\\\\n y & = \\left(2 + \\cos{(2t)} \\right) \\sin{(3t)} \\\\\n z & = \\sin{(4t)}\n \\end{align} " }, { "math_id": 1, "text": "F(x, y, z, t)=G(x, y, z^2-t^2, 2zt),\\,\\!" }, { "math_id": 2, "text": "\\begin{align}\n G(x,y,z,t)=\\ & (z(x^2+y^2+z^2+t^2)+x (6x^2-2y^2-2z^2-2t^2), \\\\\n & \\ t x \\sqrt{2}+y (6x^2-2y^2-2z^2-2t^2)).\n \\end{align}" }, { "math_id": 3, "text": "6\\Lambda(\\pi/3) \\approx 2.02988..." }, { "math_id": 4, "text": "\\Lambda" }, { "math_id": 5, "text": "(\\begin{smallmatrix}2&1\\\\1&1\\end{smallmatrix})" }, { "math_id": 6, "text": "\\Delta(t) = -t + 3 - t^{-1},\\ " }, { "math_id": 7, "text": "\\nabla(z) = 1-z^2,\\ " }, { "math_id": 8, "text": "V(q) = q^2 - q + 1 - q^{-1} + q^{-2}.\\ " }, { "math_id": 9, "text": "q" }, { "math_id": 10, "text": "q^{-1}" } ]
https://en.wikipedia.org/wiki?curid=59565
595708
Functional equation (L-function)
In mathematics, the L-functions of number theory are expected to have several characteristic properties, one of which is that they satisfy certain functional equations. There is an elaborate theory of what these equations should be, much of which is still conjectural. Introduction. A prototypical example, the Riemann zeta function has a functional equation relating its value at the complex number "s" with its value at 1 − "s". In every case this relates to some value ζ("s") that is only defined by analytic continuation from the infinite series definition. That is, writing – as is conventional – σ for the real part of "s", the functional equation relates the cases σ > 1 and σ < 0, and also changes a case with 0 < σ < 1 in the "critical strip" to another such case, reflected in the line σ = ½. Therefore, use of the functional equation is basic, in order to study the zeta-function in the whole complex plane. The functional equation in question for the Riemann zeta function takes the simple form formula_0 where "Z"("s") is ζ("s") multiplied by a "gamma-factor", involving the gamma function. This is now read as an 'extra' factor in the Euler product for the zeta-function, corresponding to the infinite prime. Just the same shape of functional equation holds for the Dedekind zeta function of a number field "K", with an appropriate gamma-factor that depends only on the embeddings of "K" (in algebraic terms, on the tensor product of "K" with the real field). There is a similar equation for the Dirichlet L-functions, but this time relating them in pairs: formula_1 with χ a primitive Dirichlet character, χ* its complex conjugate, Λ the L-function multiplied by a gamma-factor, and ε a complex number of absolute value 1, of shape formula_2 where "G"(χ) is a Gauss sum formed from χ. This equation has the same function on both sides if and only if χ is a "real character", taking values in {0,1,−1}. 
Then ε must be 1 or −1, and the case of the value −1 would imply a zero of "Λ"("s") at "s" = ½. According to the theory (of Gauss, in effect) of Gauss sums, the value is always 1, so no such "simple" zero can exist (the function is "even" about the point). Theory of functional equations. A unified theory of such functional equations was given by Erich Hecke, and the theory was taken up again in Tate's thesis by John Tate. Hecke found generalised characters of number fields, now called Hecke characters, for which his proof (based on theta functions) also worked. These characters and their associated L-functions are now understood to be strictly related to complex multiplication, as the Dirichlet characters are to cyclotomic fields. There are also functional equations for the local zeta-functions, arising at a fundamental level for the (analogue of) Poincaré duality in étale cohomology. The Euler products of the Hasse–Weil zeta-function for an algebraic variety "V" over a number field "K", formed by reducing "modulo" prime ideals to get local zeta-functions, are conjectured to have a "global" functional equation; but this is currently considered out of reach except in special cases. The definition can be read directly out of étale cohomology theory, again; but in general some assumption coming from automorphic representation theory seems required to get the functional equation. The Taniyama–Shimura conjecture was a particular case of this as general theory. By relating the gamma-factor aspect to Hodge theory, and detailed studies of the expected ε factor, the theory as empirical has been brought to quite a refined state, even if proofs are missing. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
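The symmetric shape Z(s) = Z(1 − s), with Z(s) = π^(−s/2) Γ(s/2) ζ(s), can be spot-checked at a point using the known special values ζ(2) = π²/6 and ζ(−1) = −1/12. A minimal sketch with only the standard library:

```python
import math

def completed_zeta(s, zeta_value):
    """Z(s) = pi^(-s/2) * Gamma(s/2) * zeta(s), given a known value of zeta(s)."""
    return math.pi ** (-s / 2) * math.gamma(s / 2) * zeta_value

# with s = 2 the functional equation says Z(2) = Z(1 - 2) = Z(-1)
Z_2 = completed_zeta(2.0, math.pi ** 2 / 6)   # Gamma(1) = 1, giving pi/6
Z_m1 = completed_zeta(-1.0, -1.0 / 12)        # Gamma(-1/2) = -2*sqrt(pi)
assert abs(Z_2 - Z_m1) < 1e-12
assert abs(Z_2 - math.pi / 6) < 1e-12
```

Both sides evaluate to π/6, consistent with Z(s) = Z(1 − s).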
[ { "math_id": 0, "text": "Z(s) = Z(1-s) \\, " }, { "math_id": 1, "text": "\\Lambda(s,\\chi)=\\varepsilon\\Lambda(1-s,\\chi^*)" }, { "math_id": 2, "text": "G(\\chi) \\over {\\left |G(\\chi)\\right \\vert}" } ]
https://en.wikipedia.org/wiki?curid=595708
5957084
Indeterminate system
In mathematics, particularly in algebra, an indeterminate system is a system of simultaneous equations (e.g., linear equations) which has more than one solution (sometimes infinitely many solutions). In the case of a linear system, the system may be said to be underspecified, in which case the presence of more than one solution would imply an infinite number of solutions (since the system would be describable in terms of at least one free variable), but that property does not extend to nonlinear systems (e.g., the system with the equation formula_0). An indeterminate system by definition is consistent, in the sense of having at least one solution. For a system of linear equations, the number of equations in an indeterminate system could be the same as the number of unknowns, less than the number of unknowns (an underdetermined system), or greater than the number of unknowns (an overdetermined system). Conversely, any of those three cases may or may not be indeterminate. Examples. The following examples of indeterminate systems of equations have respectively, fewer equations than, as many equations as, and more equations than unknowns: formula_1 formula_2 formula_3 Conditions giving rise to indeterminacy. In linear systems, indeterminacy occurs if and only if the number of independent equations (the rank of the augmented matrix of the system) is less than the number of unknowns and is the same as the rank of the coefficient matrix. For if there are at least as many independent equations as unknowns, that will eliminate any stretches of overlap of the equations' surfaces in the geometric space of the unknowns (aside from possibly a single point), which in turn excludes the possibility of having more than one solution. On the other hand, if the rank of the augmented matrix exceeds (necessarily by one, if at all) the rank of the coefficient matrix, then the equations will jointly contradict each other, which excludes the possibility of having any solution. 
Finding the solution set of an indeterminate linear system. Let the system of equations be written in matrix form as formula_4 where formula_5 is the formula_6 coefficient matrix, formula_7 is the formula_8 vector of unknowns, and formula_9 is an formula_10 vector of constants. In this case, if the system is indeterminate, then the infinite solution set is the set of all vectors formula_7 generated by formula_11 where formula_12 is the Moore–Penrose pseudoinverse of formula_5 and formula_13 is any formula_8 vector. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
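Assuming NumPy is available, the pseudoinverse parametrization of the solution set can be sketched for System 2 from the examples (x + y = 2 and 2x + 2y = 4, a rank-1 indeterminate system):

```python
import numpy as np

A = np.array([[1.0, 1.0],
              [2.0, 2.0]])        # rank 1: the two equations are dependent
b = np.array([2.0, 4.0])          # consistent, so the system is indeterminate

A_pinv = np.linalg.pinv(A)        # Moore-Penrose pseudoinverse A^+
P = np.eye(2) - A_pinv @ A        # projector onto the null space of A

rng = np.random.default_rng(seed=1)
for _ in range(5):
    w = rng.standard_normal(2)    # any vector w yields a member of the set
    x = A_pinv @ b + P @ w
    assert np.allclose(A @ x, b)  # every generated x solves the system
```

Each choice of w produces a different solution, all lying on the line x + y = 2.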
[ { "math_id": 0, "text": "x^2=1 " }, { "math_id": 1, "text": "\\text{System 1: }x+y=2" }, { "math_id": 2, "text": "\\text{System 2: }x+y=2, \\,\\,\\,\\,\\, 2x+2y=4" }, { "math_id": 3, "text": "\\text{System 3: }x+y=2, \\,\\,\\,\\,\\, 2x+2y=4, \\,\\,\\,\\,\\, 3x+3y=6" }, { "math_id": 4, "text": "Ax=b" }, { "math_id": 5, "text": "A" }, { "math_id": 6, "text": "m \\times n" }, { "math_id": 7, "text": "x" }, { "math_id": 8, "text": "n \\times 1" }, { "math_id": 9, "text": "b" }, { "math_id": 10, "text": "m \\times 1" }, { "math_id": 11, "text": "x=A^+b + [I_n-A^+A]w" }, { "math_id": 12, "text": "A^+" }, { "math_id": 13, "text": "w" } ]
https://en.wikipedia.org/wiki?curid=5957084
59573298
Stanisław Knapowski
Polish mathematician Stanisław Knapowski (May 19, 1931 – September 28, 1967) was a Polish mathematician who worked on prime numbers and number theory. Knapowski published 53 papers despite dying at only 36 years old. Life and education. Stanisław Knapowski was the son of Zofia Krysiewicz and Roch Knapowski. His father, Roch Knapowski, was a lawyer in Poznań but later taught at Poznań University. The family moved to the Kielce province in south-eastern Poland after the German invasion of 1939 but returned to Poznań after the war. Stanisław completed his high school education in 1949, excelling at mathematics, and continued on at Poznań University to study mathematics. In 1952 he continued his studies at the University of Wrocław and earned his master's degree in 1954. Knapowski was then appointed an assistant at Adam Mickiewicz University in Poznań under Władysław Orlicz and worked towards his doctorate. He studied under the direction of Pál Turán starting in Lublin in 1956. He published many of his papers with Turán, and Turán wrote a short biography of his life and work in 1971 after his death. Knapowski began to work in this area and finished his doctorate in 1957 with the thesis “Zastosowanie metod Turána w analitycznej teorii liczb” ("Certain applications of Turán's methods in the analytical theory of numbers"). Knapowski spent a year in Cambridge, where he worked with Louis J. Mordell and attended classes by J.W.S. Cassels and Albert Ingham. He visited Belgium, France and the Netherlands. Knapowski then returned to Poznań to finish a second thesis, "On new "explicit formulas" in prime number theory" (1960), completing the post-doctoral qualification needed to lecture at a German university. In 1962 the Polish Mathematical Society awarded him their Mazurkiewicz Prize and he moved to Tulane University in New Orleans, United States. After a very short return to Poland, he left again and taught in Marburg in Germany, Gainesville, Florida and Miami, Florida. Personal life and death. 
Knapowski was a good classical pianist. He was an avid driver. He died in a traffic accident when he lost control of his car while leaving the Miami airport. Work. Knapowski expanded on the work of others in several fields of number theory, including the prime number theorem, modular arithmetic and non-Euclidean geometry. Number of times the Δ("n") prime sign changes. Mathematicians work on primality tests to develop easier ways to find prime numbers when finding them by trial division is not practical. This has many applications in cybersecurity. There is no known formula that directly generates the prime numbers. However, the distribution of primes can be statistically modelled. The prime number theorem, which was proven at the end of the 19th century, says that the probability of a randomly chosen number being prime is inversely proportional to its number of digits (logarithm). At the start of the 19th century, Adrien-Marie Legendre and Carl Friedrich Gauss suggested that as formula_1 becomes large, the number of primes up to formula_1 asymptotically approaches formula_2, where formula_3 is the natural logarithm of formula_1. The logarithmic integral formula_4 where the integral is evaluated at formula_5, also fits the distribution. The prime-counting function formula_6 is defined as the number of primes not greater than formula_0, and formula_7 Bernhard Riemann stated that formula_8 was always negative, but J.E. Littlewood later disproved this. In 1914 J.E. Littlewood proved that there are arbitrarily large values of "x" for which formula_9 and that there are also arbitrarily large values of "x" for which formula_10 Thus the difference π("x") − Li("x") changes sign infinitely many times. Stanley Skewes then gave an upper bound on the smallest natural number formula_1 for which formula_11 Knapowski followed this up and published a paper on the number of times formula_8 changes sign in a given interval. Modular arithmetic. Knapowski worked in other areas of number theory. 
One area was the distribution of prime numbers in different residue classes modulo formula_12. Modular arithmetic modifies usual arithmetic by only using the numbers formula_13, for a natural number formula_0 called the modulus. Any other natural number can be mapped into this system by replacing it by its remainder after division by formula_0. The distribution of the primes looks random, without a pattern. Take a list of consecutive prime numbers and divide them by another prime (like 7) and keep only the remainder (this is called reducing them modulo 7). The result is a sequence of integers from 1 to 6. Knapowski worked to determine the parameters of this modular distribution. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
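The residue-class experiment described above can be reproduced in a few lines: sieve the primes, reduce them modulo 7, and count. In line with Dirichlet's theorem, the six classes 1 through 6 come out roughly equally populated. A small sketch:

```python
from collections import Counter

def primes_up_to(n):
    """Simple sieve of Eratosthenes returning all primes <= n."""
    sieve = [True] * (n + 1)
    sieve[0] = sieve[1] = False
    for p in range(2, int(n ** 0.5) + 1):
        if sieve[p]:
            for multiple in range(p * p, n + 1, p):
                sieve[multiple] = False
    return [p for p in range(2, n + 1) if sieve[p]]

# reduce every prime below 10000 (except 7 itself) modulo 7
residues = Counter(p % 7 for p in primes_up_to(10_000) if p != 7)

# every class 1..6 occurs, each holding a rough sixth of the primes counted
assert set(residues) == {1, 2, 3, 4, 5, 6}
assert all(count > 150 for count in residues.values())
```

Only 7 itself lands in class 0, which is why it is excluded; Knapowski's work concerned how evenly, and how soon, the remaining classes balance out.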
[ { "math_id": 0, "text": "n" }, { "math_id": 1, "text": "x" }, { "math_id": 2, "text": "x/\\log x" }, { "math_id": 3, "text": "\\log x" }, { "math_id": 4, "text": "\\operatorname{li}(n) = \\int_0^n \\frac{dt}{\\log t} " }, { "math_id": 5, "text": " t = n " }, { "math_id": 6, "text": "\\pi(n)" }, { "math_id": 7, "text": "\\Delta(n)=\\pi(n)-\\operatorname{li}(n) " }, { "math_id": 8, "text": " \\Delta(n) " }, { "math_id": 9, "text": "\\pi(x)>\\operatorname{Li}(x) +\\frac13\\frac{\\sqrt x}{\\log x}\\log\\log\\log x," }, { "math_id": 10, "text": "\\pi(x)<\\operatorname{Li}(x) -\\frac13\\frac{\\sqrt x}{\\log x}\\log\\log\\log x." }, { "math_id": 11, "text": "\\pi(x) > \\operatorname{li}(x)," }, { "math_id": 12, "text": " k " }, { "math_id": 13, "text": "\\{0,1,2,\\dots,n-1\\}" } ]
https://en.wikipedia.org/wiki?curid=59573298
5957331
Bellard's formula
Mathematical formula Bellard's formula is used to calculate the "n"th digit of π in base 16. Bellard's formula was discovered by Fabrice Bellard in 1997. It is about 43% faster than the Bailey–Borwein–Plouffe formula (discovered in 1995). It has been used in PiHex, the now-completed distributed computing project. One important application is verifying computations of all digits of pi performed by other means. Rather than having to compute all of the digits twice by two separate algorithms to ensure that a computation is correct, the final digits of a very long all-digits computation can be verified by the much faster Bellard's formula. Formula: formula_0
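A direct floating-point evaluation of the series converges extremely fast, since each term shrinks by a factor of 2¹⁰. A minimal sketch:

```python
import math

def pi_bellard(terms=8):
    """Sum the first few terms of Bellard's series for pi."""
    total = 0.0
    for n in range(terms):
        total += ((-1) ** n / 2 ** (10 * n)) * (
            -(2 ** 5) / (4 * n + 1)
            - 1 / (4 * n + 3)
            + 2 ** 8 / (10 * n + 1)
            - 2 ** 6 / (10 * n + 3)
            - 2 ** 2 / (10 * n + 5)
            - 2 ** 2 / (10 * n + 7)
            + 1 / (10 * n + 9)
        )
    return total / 2 ** 6

# eight terms already reach full double precision
assert abs(pi_bellard() - math.pi) < 1e-12
```

Note that extracting an isolated base-16 digit additionally requires the modular-exponentiation trick of the BBP approach; the sketch above only evaluates the series itself.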
[ { "math_id": 0, "text": "\n\\begin{align}\n\\pi = \\frac1{2^6} \\sum_{n=0}^\\infty \\frac{(-1)^n}{2^{10n}} \\, \\left(-\\frac{2^5}{4n+1} \\right. & {} - \\frac1{4n+3} + \\frac{2^8}{10n+1} - \\frac{2^6}{10n+3} \\left. {} - \\frac{2^2}{10n+5} - \\frac{2^2}{10n+7} + \\frac1{10n+9} \\right)\n\\end{align}\n" } ]
https://en.wikipedia.org/wiki?curid=5957331
59574
Biproduct
In category theory and its applications to mathematics, a biproduct of a finite collection of objects, in a category with zero objects, is both a product and a coproduct. In a preadditive category the notions of product and coproduct coincide for finite collections of objects. The biproduct is a generalization of finite direct sums of modules. Definition. Let C be a category with zero morphisms. Given a finite (possibly empty) collection of objects "A"1, ..., "A""n" in C, their "biproduct" is an object formula_0 in C together with morphisms formula_1 (the "projection morphisms") and formula_2 (the "embedding morphisms") satisfying formula_3 the identity morphism of formula_4 and formula_5 the zero morphism formula_6 for formula_7 and such that formula_8 is a product of the formula_9 and formula_10 is a coproduct of the formula_11 If C is preadditive and the first two conditions hold, then each of the last two conditions is equivalent to formula_12 when "n" > 0. An empty, or nullary, product is always a terminal object in the category, and the empty coproduct is always an initial object in the category. Thus an empty, or nullary, biproduct is always a zero object. Examples. In the category of abelian groups, biproducts always exist and are given by the direct sum. The zero object is the trivial group. Similarly, biproducts exist in the category of vector spaces over a field. The biproduct is again the direct sum, and the zero object is the trivial vector space. More generally, biproducts exist in the category of modules over a ring. On the other hand, biproducts do not exist in the category of groups. Here, the product is the direct product, but the coproduct is the free product. Also, biproducts do not exist in the category of sets. For, the product is given by the Cartesian product, whereas the coproduct is given by the disjoint union. This category does not have a zero object. Block matrix algebra relies upon biproducts in categories of matrices. Properties. If the biproduct formula_13 exists for all pairs of objects "A" and "B" in the category C, and C has a zero object, then all finite biproducts exist, making C both a Cartesian monoidal category and a co-Cartesian monoidal category. 
If the product formula_14 and coproduct formula_15 both exist for some pair of objects "A"1, "A"2 then there is a unique morphism formula_16 such that formula_17 and formula_18 for formula_19 It follows that the biproduct formula_20 exists if and only if "f" is an isomorphism. If C is a preadditive category, then every finite product is a biproduct, and every finite coproduct is a biproduct. For example, if formula_14 exists, then there are unique morphisms formula_21 such that formula_22 and formula_23 for "k" ≠ "l". To see that formula_14 is now also a coproduct, and hence a biproduct, suppose we have morphisms formula_24 for some object formula_25. Define formula_26 Then formula_27 is a morphism from formula_14 to formula_25, and formula_28 for formula_29. In this case we always have formula_30 An additive category is a preadditive category in which all finite biproducts exist. In particular, biproducts always exist in abelian categories. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "A_1 \\oplus \\dots \\oplus A_n" }, { "math_id": 1, "text": "p_k \\!: A_1 \\oplus \\dots \\oplus A_n \\to A_k" }, { "math_id": 2, "text": "i_k \\!: A_k \\to A_1 \\oplus \\dots \\oplus A_n" }, { "math_id": 3, "text": "p_k \\circ i_k = 1_{A_k}" }, { "math_id": 4, "text": "A_k," }, { "math_id": 5, "text": "p_l \\circ i_k = 0" }, { "math_id": 6, "text": "A_k \\to A_l," }, { "math_id": 7, "text": "k \\neq l," }, { "math_id": 8, "text": "\\left( A_1 \\oplus \\dots \\oplus A_n, p_k \\right)" }, { "math_id": 9, "text": "A_k," }, { "math_id": 10, "text": "\\left( A_1 \\oplus \\dots \\oplus A_n, i_k \\right)" }, { "math_id": 11, "text": "A_k." }, { "math_id": 12, "text": "i_1 \\circ p_1 + \\dots + i_n\\circ p_n = 1_{A_1 \\oplus \\dots \\oplus A_n}" }, { "math_id": 13, "text": "A \\oplus B" }, { "math_id": 14, "text": "A_1 \\times A_2" }, { "math_id": 15, "text": "A_1 \\coprod A_2" }, { "math_id": 16, "text": "f: A_1 \\coprod A_2 \\to A_1 \\times A_2" }, { "math_id": 17, "text": "p_k \\circ f \\circ i_k = 1_{A_k},\\ (k = 1, 2)" }, { "math_id": 18, "text": "p_l \\circ f \\circ i_k = 0 " }, { "math_id": 19, "text": "k \\neq l." }, { "math_id": 20, "text": "A_1 \\oplus A_2" }, { "math_id": 21, "text": "i_k: A_k \\to A_1 \\times A_2" }, { "math_id": 22, "text": "p_k \\circ i_k = 1_{A_k},\\ (k = 1, 2)" }, { "math_id": 23, "text": "p_l \\circ i_k = 0 " }, { "math_id": 24, "text": "f_k: A_k \\to X,\\ k=1,2" }, { "math_id": 25, "text": "X" }, { "math_id": 26, "text": "f := f_1 \\circ p_1 + f_2 \\circ p_2." }, { "math_id": 27, "text": "f" }, { "math_id": 28, "text": "f \\circ i_k = f_k" }, { "math_id": 29, "text": "k = 1, 2" }, { "math_id": 30, "text": "i_1 \\circ p_1 + i_2 \\circ p_2 = 1_{A_1 \\times A_2}." } ]
https://en.wikipedia.org/wiki?curid=59574
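As an illustration of the definition above (not part of the article), the biproduct in the category of finite-dimensional vector spaces can be checked concretely with block matrices: the projections and injections of the direct sum satisfy p_k ∘ i_k = 1 and p_l ∘ i_k = 0 for k ≠ l, together with the preadditive identity i_1 ∘ p_1 + i_2 ∘ p_2 = 1. The dimensions 2 and 3 below are arbitrary sample values.

```python
# Sketch (assumed example, not from the article): the biproduct A1 ⊕ A2
# of vector spaces, realized with block matrices.

def eye(n):
    return [[1.0 if r == c else 0.0 for c in range(n)] for r in range(n)]

def zeros(r, c):
    return [[0.0] * c for _ in range(r)]

def matmul(A, B):
    return [[sum(A[r][k] * B[k][c] for k in range(len(B)))
             for c in range(len(B[0]))] for r in range(len(A))]

def matadd(A, B):
    return [[A[r][c] + B[r][c] for c in range(len(A[0]))] for r in range(len(A))]

n1, n2 = 2, 3                      # dim A1 = 2, dim A2 = 3, dim(A1 ⊕ A2) = 5
i1 = eye(n1) + zeros(n2, n1)       # injection A1 -> A1 ⊕ A2  (5×2)
i2 = zeros(n1, n2) + eye(n2)       # injection A2 -> A1 ⊕ A2  (5×3)
p1 = [row + [0.0] * n2 for row in eye(n1)]   # projection A1 ⊕ A2 -> A1 (2×5)
p2 = [[0.0] * n1 + row for row in eye(n2)]   # projection A1 ⊕ A2 -> A2 (3×5)

assert matmul(p1, i1) == eye(n1)             # p_k ∘ i_k = identity on A_k
assert matmul(p2, i2) == eye(n2)
assert matmul(p2, i1) == zeros(n2, n1)       # p_l ∘ i_k = 0 for k ≠ l
assert matmul(p1, i2) == zeros(n1, n2)
# the preadditive condition: i1 ∘ p1 + i2 ∘ p2 = identity on A1 ⊕ A2
assert matadd(matmul(i1, p1), matmul(i2, p2)) == eye(n1 + n2)
print("biproduct axioms verified")
```

Since every entry is an exact 0.0 or 1.0, plain equality suffices here; a general numerical check would compare within a tolerance.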
59577089
Interchange lemma
In the theory of formal languages, the interchange lemma states a necessary condition for a language to be context-free, just like the pumping lemma for context-free languages. It states that for every context-free language formula_0 there is a constant formula_1 such that for all formula_2 and for any collection formula_4 of words of length formula_3, there is a subset formula_5 with formula_6 and decompositions formula_7 such that each of formula_8, formula_9, formula_10 is independent of formula_11; moreover, formula_12, and the words formula_13 are in formula_0 for every formula_11 and formula_14. The first application of the interchange lemma was to show that the set of repetitive strings (i.e., strings of the form formula_15 with formula_16) over an alphabet of three or more characters is not context-free.
[ { "math_id": 0, "text": "L" }, { "math_id": 1, "text": "c>0" }, { "math_id": 2, "text": "n\\geq m\\geq 2" }, { "math_id": 3, "text": "n" }, { "math_id": 4, "text": "R\\subset L" }, { "math_id": 5, "text": "Z=\\{z_1,\\ldots,z_k\\}\\subset R" }, { "math_id": 6, "text": "k\\ge |R|/(cn^2)" }, { "math_id": 7, "text": "z_i=w_ix_iy_i" }, { "math_id": 8, "text": "|w_i|" }, { "math_id": 9, "text": "|x_i|" }, { "math_id": 10, "text": "|y_i|" }, { "math_id": 11, "text": "i" }, { "math_id": 12, "text": "m/2<|x_i|\\leq m" }, { "math_id": 13, "text": "w_ix_jy_i" }, { "math_id": 14, "text": "j" }, { "math_id": 15, "text": "xyyz" }, { "math_id": 16, "text": "|y|>0" } ]
https://en.wikipedia.org/wiki?curid=59577089
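A brute-force illustration (a sketch under assumed parameters, not the lemma's proof): for the context-free language of words over {a, b} with equally many a's and b's, take n = 4 and cut every word as |w| = 1, |x| = 2, |y| = 1. Grouping the words by the number of a's in the middle segment plays the role of the subset Z: within a group, every interchange w_i x_j y_i stays in the language.

```python
# Demonstration (assumed setup, not from the article) of the interchange
# property for L = {w in {a,b}* : #a(w) = #b(w)}, with n = 4 and the
# fixed decomposition |w_i| = 1, |x_i| = 2, |y_i| = 1.
from itertools import product

def in_L(word):
    return word.count("a") == word.count("b")

n = 4
R = ["".join(t) for t in product("ab", repeat=n) if in_L("".join(t))]

# Group words by the number of a's in the middle segment x = z[1:3];
# interchanging middles within a group preserves the a/b balance.
groups = {}
for z in R:
    groups.setdefault(z[1:3].count("a"), []).append(z)

Z = max(groups.values(), key=len)
for zi in Z:
    for zj in Z:
        assert in_L(zi[0] + zj[1:3] + zi[3])  # w_i x_j y_i stays in L
print(len(R), len(Z))  # → 6 4
```

Here |R| = 6 and the largest interchangeable subset has 4 words; the lemma guarantees such a subset of size at least |R|/(cn²) for a suitable constant c.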
595824
Tschirnhaus transformation
Mathematical term; type of polynomial transformation In mathematics, a Tschirnhaus transformation, also known as Tschirnhausen transformation, is a type of mapping on polynomials developed by Ehrenfried Walther von Tschirnhaus in 1683. Simply, it is a method for transforming a polynomial equation of degree formula_0 with some nonzero intermediate coefficients, formula_1, such that some or all of the transformed intermediate coefficients, formula_2, are exactly zero. For example, finding a substitution formula_3 for a cubic equation of degree formula_4, formula_5, such that substituting formula_6 yields a new equation formula_7 such that formula_8, formula_9, or both. More generally, it may be defined conveniently by means of field theory, as the transformation on minimal polynomials implied by a different choice of primitive element. This is the most general transformation of an irreducible polynomial that takes a root to some rational function applied to that root. Definition. For a generic formula_10 degree reducible monic polynomial equation formula_11 of the form formula_12, where formula_13 and formula_14 are polynomials and formula_14 does not vanish at formula_15, formula_16 the Tschirnhaus transformation is the function formula_17 such that the new equation in formula_18, formula_19, has certain special properties, most commonly such that some coefficients, formula_20, are identically zero. Example: Tschirnhaus' method for cubic equations. In Tschirnhaus' 1683 paper, he solved the equation formula_21 using the Tschirnhaus transformation formula_22 Substituting yields the transformed equation formula_23 or formula_24 Setting formula_8 yields formula_25 and finally the Tschirnhaus transformation formula_26, which may be substituted into formula_27 to yield an equation of the form formula_28 Tschirnhaus went on to describe how a Tschirnhaus transformation of the form formula_29 may be used to eliminate two coefficients in a similar way. Generalization.
In detail, let formula_30 be a field, and formula_31 a polynomial over formula_30. If formula_32 is irreducible, then the quotient ring of the polynomial ring formula_33 by the principal ideal generated by formula_32, formula_34, is a field extension of formula_30. We have formula_35 where formula_36 is formula_37 modulo formula_38. That is, any element of formula_39 is a polynomial in formula_36, which is thus a primitive element of formula_39. There will be other choices formula_40 of primitive element in formula_39: for any such choice of formula_40 we will have by definition: formula_41, with polynomials formula_42 and formula_43 over formula_30. Now if formula_44 is the minimal polynomial for formula_40 over formula_30, we can call formula_44 a Tschirnhaus transformation of formula_32. Therefore the set of all Tschirnhaus transformations of an irreducible polynomial is to be described as running over all ways of changing formula_32, but leaving formula_39 the same. This concept is used in reducing quintics to Bring–Jerrard form, for example. There is a connection with Galois theory, when formula_39 is a Galois extension of formula_30. The Galois group may then be considered as all the Tschirnhaus transformations of formula_32 to itself. History. In 1683, Ehrenfried Walther von Tschirnhaus published a method for rewriting a polynomial of degree formula_45 such that the formula_46 and formula_47 terms have zero coefficients. In his paper, Tschirnhaus referenced a method by René Descartes to reduce a quadratic polynomial formula_48 such that the formula_49 term has zero coefficient. In 1786, this work was expanded by Erland Samuel Bring who showed that any generic quintic polynomial could be similarly reduced. In 1834, George Jerrard further expanded Tschirnhaus' work by showing a Tschirnhaus transformation may be used to eliminate the formula_46, formula_47, and formula_50 terms for a general polynomial of degree formula_51. References. 
&lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "n\\ge2" }, { "math_id": 1, "text": "a_1, ..., a_{n-1}" }, { "math_id": 2, "text": "a'_1, ..., a'_{n-1}" }, { "math_id": 3, "text": "y(x)=k_1x^2 + k_2x+k_3" }, { "math_id": 4, "text": "n=3" }, { "math_id": 5, "text": "f(x) = x^3+a_2x^2+a_1x+a_0" }, { "math_id": 6, "text": "x=x(y)" }, { "math_id": 7, "text": "f'(y)=y^3+a'_2y^2+a'_1y+a'_0" }, { "math_id": 8, "text": "a'_1=0" }, { "math_id": 9, "text": "a'_2=0" }, { "math_id": 10, "text": "n^{th}" }, { "math_id": 11, "text": "f(x)=0" }, { "math_id": 12, "text": "f(x) = g(x) / h(x)" }, { "math_id": 13, "text": "g(x)" }, { "math_id": 14, "text": "h(x)" }, { "math_id": 15, "text": "f(x) = 0" }, { "math_id": 16, "text": "f(x) = x^n+a_1x^{n-1}+a_2x^{n-2}+...+a_{n-1}x+a_n=0" }, { "math_id": 17, "text": "y=k_1x^{n-1} + k_2x^{n-2}+...+k_{n-1}x+k_n" }, { "math_id": 18, "text": "y" }, { "math_id": 19, "text": "f'(y)" }, { "math_id": 20, "text": "a'_1,...,a'_{n-1}" }, { "math_id": 21, "text": "f(x)=x^3-px^2+qx-r=0" }, { "math_id": 22, "text": "y(x;a)=x-a\\longleftrightarrow x(y;a)=x=y+a." }, { "math_id": 23, "text": "f'(y;a)=y^3+(3a-p)y^2+(3a^2-2pa+q) y+(a^3-pa^2+qa-r)=0" }, { "math_id": 24, "text": "\\begin{cases} a'_1=3a-p \\\\ a'_2=3a^2-2pa+q \\\\ a'_3=a^3-pa^2+qa-r \\end{cases}." }, { "math_id": 25, "text": "3a-p=0\\rightarrow a=\\frac{p}{3}" }, { "math_id": 26, "text": "y=x-\\frac{p}{3}," }, { "math_id": 27, "text": "f'(y;a)" }, { "math_id": 28, "text": "f'(y)=y^3-q'y-r'." 
}, { "math_id": 29, "text": "x^2(y;a,b)=x^2=bx+y+a" }, { "math_id": 30, "text": "K" }, { "math_id": 31, "text": "P(t)" }, { "math_id": 32, "text": "P" }, { "math_id": 33, "text": "K[t]" }, { "math_id": 34, "text": "K[t]/(P(t)) = L" }, { "math_id": 35, "text": "L = K(\\alpha)" }, { "math_id": 36, "text": "\\alpha" }, { "math_id": 37, "text": "t" }, { "math_id": 38, "text": "(P)" }, { "math_id": 39, "text": "L" }, { "math_id": 40, "text": "\\beta" }, { "math_id": 41, "text": "\\beta = F(\\alpha), \\alpha = G(\\beta)" }, { "math_id": 42, "text": "F" }, { "math_id": 43, "text": "G" }, { "math_id": 44, "text": "Q" }, { "math_id": 45, "text": "n>2" }, { "math_id": 46, "text": "x^{n-1}" }, { "math_id": 47, "text": "x^{n-2}" }, { "math_id": 48, "text": "(n=2)" }, { "math_id": 49, "text": "x" }, { "math_id": 50, "text": "x^{n-3}" }, { "math_id": 51, "text": "n>3" } ]
https://en.wikipedia.org/wiki?curid=595824
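The cubic example above can be checked numerically. A minimal sketch (the sample coefficients p = 6, q = 11, r = 6 are assumed for illustration, not from Tschirnhaus' paper): substituting x = y + a with a = p/3 makes the y² coefficient 3a − p vanish.

```python
# Sketch of Tschirnhaus' cubic example: substitute x = y + a into
# x^3 - p x^2 + q x - r and check that a = p/3 kills the y^2 term.
from fractions import Fraction

def transformed_coeffs(p, q, r, a):
    # Coefficients of y^2, y^1, y^0 after the substitution x = y + a,
    # matching the expansion f'(y; a) in the article.
    a1 = 3 * a - p
    a2 = 3 * a * a - 2 * p * a + q
    a3 = a * a * a - p * a * a + q * a - r
    return a1, a2, a3

p, q, r = Fraction(6), Fraction(11), Fraction(6)   # sample cubic (assumed)
a = p / 3
a1, a2, a3 = transformed_coeffs(p, q, r, a)
assert a1 == 0                      # the quadratic term vanishes

# Cross-check by direct evaluation of (y + a)^3 - p(y + a)^2 + q(y + a) - r
def f(x):
    return x**3 - p * x**2 + q * x - r

for y in map(Fraction, range(-3, 4)):
    assert f(y + a) == y**3 + a1 * y**2 + a2 * y + a3
print(a1, a2, a3)  # → 0 -1 0
```

For this sample cubic (roots 1, 2, 3) the depressed equation is y³ − y = 0, with roots −1, 0, 1 shifted by p/3 = 2.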
59586120
Mantle oxidation state
Application of oxidation state to the study of the Earth's mantle Mantle oxidation state (redox state) applies the concept of oxidation state in chemistry to the study of the Earth's mantle. The chemical concept of oxidation state mainly refers to the valence state of one element, while mantle oxidation state describes the degree of decrease or increase in the valence states of all polyvalent elements in mantle materials confined in a closed system. The mantle oxidation state is controlled by oxygen fugacity and can be benchmarked by specific groups of redox buffers. Mantle oxidation state changes because of the existence of polyvalent elements (elements with more than one valence state, e.g. Fe, Cr, V, Ti, Ce, Eu, C and others). Among them, Fe is the most abundant (≈8 wt% of the mantle) and its oxidation state largely reflects the oxidation state of the mantle. Examining the valence states of other polyvalent elements can also provide information on the mantle oxidation state. It is well known that the oxidation state can influence the partitioning behavior of elements and liquid water between melts and minerals, the speciation of C-O-H-bearing fluids and melts, as well as transport properties like electrical conductivity and creep. The formation of diamond requires both high pressure-temperature conditions and a carbon source. The most common carbon source in the deep Earth is not elemental carbon, and redox reactions need to be involved in diamond formation. Examining the oxidation state can help us predict the P-T conditions of diamond formation and elucidate the origin of deep diamonds. Thermodynamic description of oxidation state. Mantle oxidation state can be quantified as the oxygen fugacity (formula_0) of the system within the framework of thermodynamics. A higher oxygen fugacity implies a more oxygen-rich and more oxidized environment. 
At given pressure-temperature conditions, for any compound or element M that bears the potential to be oxidized by oxygen formula_1 For example, if M is Fe, the redox equilibrium reaction can be Fe+1/2O2=FeO; if M is FeO, the redox equilibrium reaction can be 2FeO+1/2O2=Fe2O3. The Gibbs energy change associated with this reaction is therefore formula_2 Along each isotherm, the partial derivative of "ΔG" with respect to "P" is "ΔV", formula_3. Combining the 2 equations above, formula_4. Therefore, formula_5 (note that the natural logarithm ln has been converted to the base-10 logarithm log in this formula). For a closed system, there might exist more than one of these equilibrium oxidation reactions, but since all these reactions share the same formula_0, examining one of them allows extraction of the oxidation state of the system. Pressure effect on oxygen fugacity. The physics and chemistry of the mantle largely depend on pressure. As mantle minerals are compressed, they are transformed into other minerals at certain depths. Seismic observations of velocity discontinuities and experimental simulations of phase boundaries have both verified the structural transformations within the mantle. As such, the mantle can be further divided into three layers with distinct mineral compositions. Since the mantle mineral composition changes, the mineral hosting environment for polyvalent elements also alters. For each layer, the mineral combination governing the redox reactions is unique and will be discussed in detail below. Upper mantle. Between depths of 30 and 60 km, oxygen fugacity is mainly controlled by the Olivine-Orthopyroxene-Spinel oxidation reaction. formula_6 Under deeper upper mantle conditions, the Olivine-Orthopyroxene-Garnet oxygen barometer is the redox reaction that is used to calibrate oxygen fugacity. formula_7 In this reaction, 4 moles of ferrous ions are oxidized to ferric ions while the other 2 moles of ferrous ions remain unchanged. Transition zone. 
Garnet-Garnet reaction can be used to estimate the redox state of the transition zone. formula_8 A recent study showed that the formula_9 of the transition zone inferred from the Garnet-Garnet reaction is -0.26 to +3 relative to the Fe-FeO (IW, iron-wüstite) oxygen buffer. Lower mantle. Disproportionation of ferrous iron at lower mantle conditions also affects the mantle oxidation state. This reaction is different from the reactions mentioned above as it does not involve the participation of free oxygen. formula_10, where FeO resides in the form of ferropericlase ("Fp") and Fe2O3 resides in the form of bridgmanite ("Bdg"). There is no oxygen fugacity change associated with the reaction. However, as the reaction products differ significantly in density, the metallic iron phase could descend towards the Earth's core and become separated from the mantle. In this case, the mantle loses metallic iron and becomes more oxidized. Implications for diamond formation. The equilibrium reaction involving diamond is formula_11. Examining the oxygen fugacity of the upper mantle and transition zone enables us to compare it with the conditions (equilibrium reaction shown above) required for diamond formation. The results show that the formula_9 is usually 2 units lower than that of the carbonate-carbon reaction, which favors the formation of diamond at transition zone conditions. It has also been reported that a decrease in pH would facilitate the formation of diamond under mantle conditions. formula_12 formula_13 where the subscript "aq" means 'aqueous', implying H2 is dissolved in the solution. Deep diamonds have become important windows to look into the mineralogy of the Earth's interior. Minerals not stable at the surface can be found within inclusions of superdeep diamonds, implying they were stable where these diamonds crystallized. Because of the hardness of diamonds, the high pressure environment is retained even after transport to the surface. 
So far, these superdeep minerals brought up by diamonds include ringwoodite, ice-VII, cubic δ-N2 and Ca-perovskite. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "fO_2" }, { "math_id": 1, "text": "M+\\frac{x}{2}O_2\\rightleftharpoons MO_x" }, { "math_id": 2, "text": "\\Delta G= G(MO_x)-G(M)=\\frac{x}{2}RT lnfO_2" }, { "math_id": 3, "text": "\\frac{\\partial \\Delta G}{\\partial P_{|T}}={\\Delta V}" }, { "math_id": 4, "text": "\\frac{\\partial (lnfO_2))}{\\partial P|_T}=\\frac{2}{xRT}\\Delta V" }, { "math_id": 5, "text": "logfO_2(P)=logfO_2(1bar)+(\\frac{0.8686}{RT})\\int_{1bar}^{P} \\Delta VdP" }, { "math_id": 6, "text": "6Fe_2SiO_4+O_2\\rightleftharpoons3Fe_2Si_2O_6+2Fe_3O_4" }, { "math_id": 7, "text": "4Fe_2SiO_4+2FeSiO_3+O_2\\rightleftharpoons2Fe_3^{2+}Fe_2^{3+}Si_3O_{12}" }, { "math_id": 8, "text": "2Ca_3Al_2Si_3O_{12}+\\frac{4}{3}Fe_3Al_2Si_3O_{12}+2.5Mg_4Si_4O_{12}+O_2\n\\rightleftharpoons2Ca_3Fe_2Si_3O_{12}+\\frac{10}{3}Mg_3Al_2Si_3O_{12}+SiO_2" }, { "math_id": 9, "text": "logfO_2" }, { "math_id": 10, "text": "3Fe^{2+}(Fp)\\rightleftharpoons Fe+2Fe^{3+}(Bdg)" }, { "math_id": 11, "text": "Mg_2Si_2O_6+2MgCO_3\\rightleftharpoons2Mg_2SiO_4+2C(Diamond)+2O_2" }, { "math_id": 12, "text": "HCOO^{-}+H^++H_{2,aq} \\rightleftharpoons C_{diamond}+2H_2O\n" }, { "math_id": 13, "text": "CH_3CH_2COO^-+H^+\\rightleftharpoons3C_{diamond}+H_{2,aq}+2H_2O" } ]
https://en.wikipedia.org/wiki?curid=59586120
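The pressure correction for oxygen fugacity given above can be sketched numerically. All input values below (temperature, ΔV, reference fugacity) are illustrative assumptions, not data from the studies cited, and ΔV is taken as constant so the integral reduces to a simple product.

```python
# Numeric sketch (assumed values) of the pressure dependence
#   log fO2(P) = log fO2(1 bar) + (0.8686 / (R T)) * ∫ ΔV dP,
# with constant ΔV, so the integral from 1 bar (1e5 Pa) to P is ΔV*(P - 1e5).
R = 8.314            # gas constant, J/(mol*K)
T = 1600.0           # temperature, K (illustrative)
dV = -1.2e-6         # ΔV of the redox reaction, m^3/mol (illustrative)
logfO2_1bar = -8.0   # log10 fO2 at 1 bar (illustrative)

def logfO2(P_pa):
    # pressure P_pa in pascals
    return logfO2_1bar + (0.8686 / (R * T)) * dV * (P_pa - 1.0e5)

print(round(logfO2(5.0e9), 2))  # oxygen fugacity at 5 GPa
```

With these sample numbers the negative ΔV makes the system slightly more reduced (lower log fO2) with increasing pressure; a real application would integrate a pressure-dependent ΔV from equation-of-state data.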
595896
Chowla–Mordell theorem
When a Gauss sum is the square root of a prime number, multiplied by a root of unity In mathematics, the Chowla–Mordell theorem is a result in number theory determining cases where a Gauss sum is the square root of a prime number, multiplied by a root of unity. It was proved and published independently by Sarvadaman Chowla and Louis Mordell, around 1951. In detail, if formula_0 is a prime number, formula_1 a nontrivial Dirichlet character modulo formula_0, and formula_2 where formula_3 is a primitive formula_0-th root of unity in the complex numbers, then formula_4 is a root of unity if and only if formula_1 is the quadratic residue symbol modulo formula_0. The 'if' part was known to Gauss: the contribution of Chowla and Mordell was the 'only if' direction. The ratio in the theorem occurs in the functional equation of L-functions.
[ { "math_id": 0, "text": "p" }, { "math_id": 1, "text": "\\chi" }, { "math_id": 2, "text": "G(\\chi)=\\sum \\chi(a) \\zeta^a" }, { "math_id": 3, "text": "\\zeta" }, { "math_id": 4, "text": "\\frac{G(\\chi)}{|G(\\chi)|}" } ]
https://en.wikipedia.org/wiki?curid=595896
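The 'if' direction can be illustrated numerically (an assumed helper script, not from the article): for the quadratic residue character, Gauss's evaluation gives g(1; 5) = √5 and g(1; 7) = i√7, so the normalized sums are the roots of unity 1 and i.

```python
# Numeric illustration: G(chi)/|G(chi)| for the quadratic residue
# character modulo p is a root of unity.
import cmath

def legendre(a, p):
    # quadratic residue symbol via Euler's criterion
    t = pow(a, (p - 1) // 2, p)
    return -1 if t == p - 1 else t

def gauss_sum(p):
    zeta = cmath.exp(2j * cmath.pi / p)   # primitive p-th root of unity
    return sum(legendre(a, p) * zeta**a for a in range(1, p))

ratio5 = gauss_sum(5) / abs(gauss_sum(5))   # p ≡ 1 (mod 4): ratio is 1
ratio7 = gauss_sum(7) / abs(gauss_sum(7))   # p ≡ 3 (mod 4): ratio is i
print(abs(ratio5 - 1) < 1e-9, abs(ratio7 - 1j) < 1e-9)  # → True True
```

For a non-quadratic character the theorem asserts the ratio is never a root of unity, which a finite numerical check can suggest but not prove.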
595898
Quadratic Gauss sum
In number theory, quadratic Gauss sums are certain finite sums of roots of unity. A quadratic Gauss sum can be interpreted as a linear combination of the values of the complex exponential function with coefficients given by a quadratic character; for a general character, one obtains a more general Gauss sum. These objects are named after Carl Friedrich Gauss, who studied them extensively and applied them to quadratic, cubic, and biquadratic reciprocity laws. Definition. For an odd prime number p and an integer a, the quadratic Gauss sum "g"("a"; "p") is defined as formula_0 where formula_1 is a primitive pth root of unity, for example formula_2. Equivalently, formula_3 For a divisible by p the expression formula_4 evaluates to formula_5. Hence, we have formula_6 For a not divisible by p, this expression reduces to formula_7 where formula_8 is the Gauss sum defined for any character "χ" modulo p. Properties. The value of the Gauss sum is an algebraic integer in the "p"th cyclotomic field formula_9. For "a" not divisible by "p", formula_10 The value of the Gauss sum with "a" = 1 is given by the formula: formula_11 In fact, the identity formula_12 was easy to prove and led to one of Gauss's proofs of quadratic reciprocity. However, the determination of the "sign" of the Gauss sum turned out to be considerably more difficult: Gauss could only establish it after several years' work. Later, Dirichlet, Kronecker, Schur and other mathematicians found different proofs. Generalized quadratic Gauss sums. Let "a", "b", "c" be natural numbers. The generalized quadratic Gauss sum "G"("a", "b", "c") is defined by formula_13. The classical quadratic Gauss sum is the sum "g"("a"; "p") = "G"("a", 0, "p"). If gcd("c", "d") = 1, one has formula_14 This is a direct consequence of the Chinese remainder theorem. One has "G"("a", "b", "c") = 0 if gcd("a", "c") &gt; 1, except if gcd("a","c") divides "b", in which case one has formula_15. Thus in the evaluation of quadratic Gauss sums one may always assume gcd("a", "c") = 1. One also has the reciprocity formula formula_16. Define formula_17 for every odd integer "m". The values of Gauss sums with "b" = 0 and gcd("a", "c") = 1 are explicitly given by formula_18 Here ("a"/"c") is the Jacobi symbol. 
This is the famous formula of Carl Friedrich Gauss. If "c" is odd and gcd("a", "c") = 1, one has formula_19 where "ψ"("a") is some number with 4"ψ"("a")"a" ≡ 1 (mod "c"). As another example, if 4 divides "c" and "b" is odd and as always gcd("a", "c") = 1, then "G"("a", "b", "c") = 0. This can, for example, be proved as follows: because of the multiplicative property of Gauss sums we only have to show that "G"("a", "b", 2^"n") = 0 if "n" &gt; 1 and "a", "b" are odd with gcd("a", "c") = 1. If "b" is odd then "an"^2 + "bn" is even for all 0 ≤ "n" &lt; "c" − 1. By Hensel's lemma, for every "q", the equation "an"^2 + "bn" + "q" = 0 has at most two solutions in formula_20/2^"n"formula_20. Because of a counting argument "an"^2 + "bn" runs through all even residue classes modulo "c" exactly two times. The geometric sum formula then shows that "G"("a", "b", 2^"n") = 0. If "c" is odd, squarefree and gcd("a", "c") = 1, then formula_21 If "c" is not squarefree then the right side vanishes while the left side does not. Often the right sum is also called a quadratic Gauss sum. The reduction formula formula_22 holds for "k" ≥ 2 and an odd prime number "p", and for "k" ≥ 4 and "p" = 2. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": " g(a;p) = \\sum_{n=0}^{p-1}\\zeta_p^{an^2}," }, { "math_id": 1, "text": "\\zeta_p" }, { "math_id": 2, "text": "\\zeta_p=\\exp(2\\pi i/p)" }, { "math_id": 3, "text": "g(a;p) = \\sum_{n=0}^{p-1}\\big(1+\\left(\\tfrac{n}{p}\\right)\\big)\\,\\zeta_p^{an}." }, { "math_id": 4, "text": "\\zeta_p^{an^2}" }, { "math_id": 5, "text": "1" }, { "math_id": 6, "text": " g(a;p) = p." }, { "math_id": 7, "text": "g(a;p) = \\sum_{n=0}^{p-1}\\left(\\tfrac{n}{p}\\right)\\,\\zeta_p^{an} = G(a,\\left(\\tfrac{\\cdot}{p}\\right))," }, { "math_id": 8, "text": "G(a,\\chi)=\\sum_{n=0}^{p-1}\\chi(n)\\,\\zeta_p^{an}" }, { "math_id": 9, "text": "\\mathbb{Q}(\\zeta_p)" }, { "math_id": 10, "text": " g(a;p)=\\left(\\tfrac{a}{p}\\right)g(1;p). " }, { "math_id": 11, "text": " g(1;p) =\\sum_{n=0}^{p-1}e^\\frac{2\\pi in^2}{p}=\n\\begin{cases} \n(1+i)\\sqrt{p} & \\text{if}\\ p\\equiv 0 \\pmod 4, \\\\ \n\\sqrt{p} & \\text{if}\\ p\\equiv 1\\pmod 4, \\\\ \n0 & \\text{if}\\ p \\equiv 2 \\pmod 4, \\\\\ni\\sqrt{p} & \\text{if}\\ p\\equiv 3\\pmod 4. \n\\end{cases}" }, { "math_id": 12, "text": "g(1;p)^2=\\left(\\tfrac{-1}{p}\\right)p" }, { "math_id": 13, "text": "G(a,b,c)=\\sum_{n=0}^{c-1} e^{2\\pi i\\frac{a n^2+bn}{c}}" }, { "math_id": 14, "text": "G(a,b,cd)=G(ac,b,d)G(ad,b,c)." 
}, { "math_id": 15, "text": "G(a,b,c)= \\gcd(a,c) \\cdot G\\left(\\frac{a}{\\gcd(a,c)},\\frac{b}{\\gcd(a,c)},\\frac{c}{\\gcd(a,c)}\\right)" }, { "math_id": 16, "text": "\\sum_{n=0}^{|c|-1} e^{\\pi i \\frac{a n^2+bn}{c}} = \\left|\\frac{c}{a}\\right|^\\frac12 e^{\\pi i \\frac{|ac|-b^2}{4ac}} \\sum_{n=0}^{|a|-1} e^{-\\pi i \\frac{c n^2+b n}{a}}" }, { "math_id": 17, "text": " \\varepsilon_m = \\begin{cases} 1 & \\text{if}\\ m\\equiv 1\\pmod 4 \\\\ i & \\text{if}\\ m\\equiv 3\\pmod 4 \\end{cases}" }, { "math_id": 18, "text": "G(a,c) = G(a,0,c) =\n\\begin{cases}\n0 & \\text{if}\\ c\\equiv 2\\pmod 4 \\\\ \n\\varepsilon_c \\sqrt{c} \\left(\\dfrac{a}{c}\\right) & \\text{if}\\ c\\equiv 1\\pmod 2 \\\\ \n(1+i) \\varepsilon_a^{-1} \\sqrt{c} \\left(\\dfrac{c}{a}\\right) & \\text{if}\\ c\\equiv 0\\pmod 4.\n\\end{cases}" }, { "math_id": 19, "text": "G(a,b,c) = \\varepsilon_c \\sqrt{c} \\cdot \\left(\\frac{a}{c}\\right) e^{-2\\pi i \\frac{\\psi(a) b^2}{c}}," }, { "math_id": 20, "text": "\\mathbb{Z}" }, { "math_id": 21, "text": "G(a,0,c) = \\sum_{n=0}^{c-1} \\left(\\frac{n}{c}\\right) e^\\frac{2\\pi i a n}{c}." }, { "math_id": 22, "text": "G\\left(n,p^k\\right) = p\\cdot G\\left(n,p^{k-2}\\right)" } ]
https://en.wikipedia.org/wiki?curid=595898
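Two of the statements above, Gauss's explicit value of g(1; p) and the multiplicativity G(a, b, cd) = G(ac, b, d)·G(ad, b, c) for gcd(c, d) = 1, can be spot-checked numerically with a short script (an illustration, not part of the article):

```python
# Numeric check of Gauss's evaluation and the multiplicative property
# of the generalized quadratic Gauss sum G(a, b, c).
import cmath

def G(a, b, c):
    return sum(cmath.exp(2j * cmath.pi * (a * n * n + b * n) / c)
               for n in range(c))

# g(1; p) = sqrt(p) if p ≡ 1 (mod 4), i*sqrt(p) if p ≡ 3 (mod 4)
for p in (5, 13):
    assert abs(G(1, 0, p) - p**0.5) < 1e-9
for p in (3, 7):
    assert abs(G(1, 0, p) - 1j * p**0.5) < 1e-9

# multiplicativity for coprime moduli, e.g. a = 1, b = 0, c = 3, d = 5
assert abs(G(1, 0, 15) - G(3, 0, 5) * G(5, 0, 3)) < 1e-9
print("ok")
```

Both identities are exact, so the only error in the check is floating-point roundoff, far below the 1e-9 tolerance for these small moduli.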
595929
S-Adenosyl methionine
Chemical compound found in all domains of life with largely unexplored effects &lt;templatestyles src="Chembox/styles.css"/&gt; Chemical compound "S"-Adenosyl methionine (SAM), also known under the commercial names of SAMe, SAM-e, or AdoMet, is a common cosubstrate involved in methyl group transfers, transsulfuration, and aminopropylation. Although these anabolic reactions occur throughout the body, most SAM is produced and consumed in the liver. More than 40 methyl transfers from SAM are known, to various substrates such as nucleic acids, proteins, lipids and secondary metabolites. It is made from adenosine triphosphate (ATP) and methionine by methionine adenosyltransferase. SAM was first discovered by Giulio Cantoni in 1952. In bacteria, SAM is bound by the SAM riboswitch, which regulates genes involved in methionine or cysteine biosynthesis. In eukaryotic cells, SAM serves as a regulator of a variety of processes including DNA, tRNA, and rRNA methylation; immune response; amino acid metabolism; transsulfuration; and more. In plants, SAM is crucial to the biosynthesis of ethylene, an important plant hormone and signaling molecule. Structure. "S"-Adenosyl methionine consists of the adenosyl group attached to the sulfur of methionine, providing it with a positive charge. It is synthesized from ATP and methionine by "S"-adenosylmethionine synthetase enzyme through the following reaction: ATP + L-methionine + H2O formula_0 phosphate + diphosphate + "S"-adenosyl-L-methionine The sulfonium functional group present in "S"-adenosyl methionine is the center of its peculiar reactivity. Depending on the enzyme, "S"-adenosyl methionine can be converted into one of three products: Biochemistry. SAM cycle. The reactions that produce, consume, and regenerate SAM are called the SAM cycle. In the first step of this cycle, the SAM-dependent methylases (EC 2.1.1) that use SAM as a substrate produce "S"-adenosyl homocysteine as a product. 
"S"-Adenosyl homocysteine is a strong negative regulator of nearly all SAM-dependent methylases despite their biological diversity. This is hydrolysed to homocysteine and adenosine by "S"-adenosylhomocysteine hydrolase EC 3.3.1.1 and the homocysteine recycled back to methionine through transfer of a methyl group from 5-methyltetrahydrofolate, by one of the two classes of methionine synthases (i.e. cobalamin-dependent (EC 2.1.1.13) or cobalamin-independent (EC 2.1.1.14)). This methionine can then be converted back to SAM, completing the cycle. In the rate-limiting step of the SAM cycle, MTHFR (methylenetetrahydrofolate reductase) irreversibly reduces 5,10-methylenetetrahydrofolate to 5-methyltetrahydrofolate. Radical SAM enzymes. A large number of enzymes cleave SAM reductively to produce radicals: 5′-deoxyadenosyl 5′-radical, methyl radical, and others. These enzymes are called radical SAMs. They all feature iron-sulfur cluster at their active sites. Most enzymes with this capability share a region of sequence homology that includes the motif CxxxCxxC or a close variant. This sequence provides three cysteinyl thiolate ligands that bind to three of the four metals in the 4Fe-4S cluster. The fourth Fe binds the SAM. The radical intermediates generated by these enzymes perform a wide variety of unusual chemical reactions. Examples of radical SAM enzymes include spore photoproduct lyase, activases of pyruvate formate lyase and anaerobic sulfatases, lysine 2,3-aminomutase, and various enzymes of cofactor biosynthesis, peptide modification, metalloprotein cluster formation, tRNA modification, lipid metabolism, etc. Some radical SAM enzymes use a second SAM as a methyl donor. Radical SAM enzymes are much more abundant in anaerobic bacteria than in aerobic organisms. They can be found in all domains of life and are largely unexplored. A recent bioinformatics study concluded that this family of enzymes includes at least 114,000 sequences including 65 unique reactions. 
Deficiencies in radical SAM enzymes have been associated with a variety of diseases including congenital heart disease, amyotrophic lateral sclerosis, and increased viral susceptibility. Polyamine biosynthesis. Another major role of SAM is in polyamine biosynthesis. Here, SAM is decarboxylated by adenosylmethionine decarboxylase (EC 4.1.1.50) to form "S"-adenosylmethioninamine. This compound then donates its "n"-propylamine group in the biosynthesis of polyamines such as spermidine and spermine from putrescine. SAM is required for cellular growth and repair. It is also involved in the biosynthesis of several hormones and neurotransmitters that affect mood, such as epinephrine. Methyltransferases are also responsible for the addition of methyl groups to the 2′ hydroxyls of the first and second nucleotides next to the 5′ cap in messenger RNA. Therapeutic uses. Osteoarthritis pain. As of 2012, the evidence was inconclusive as to whether SAM can mitigate the pain of osteoarthritis; clinical trials that had been conducted were too small to generalize from. Liver disease. The SAM cycle has been closely tied to the liver since 1947 because people with alcoholic cirrhosis of the liver would accumulate large amounts of methionine in their blood. While multiple lines of evidence from laboratory tests on cells and animal models suggest that SAM might be useful to treat various liver diseases, as of 2012 SAM had not been studied in any large randomized placebo-controlled clinical trials that would allow an assessment of its efficacy and safety. Depression. A 2016 Cochrane review concluded that for major depressive disorder, "Given the absence of high quality evidence and the inability to draw firm conclusions based on that evidence, the use of SAMe for the treatment of depression in adults should be investigated further." 
A 2020 systematic review found that it performed significantly better than placebo, and had similar outcomes to other commonly used antidepressants (imipramine and escitalopram). Anti-cancer treatment. SAM has recently been shown to play a role in epigenetic regulation. DNA methylation is a key regulator in epigenetic modification during mammalian cell development and differentiation. In mouse models, excess levels of SAM have been implicated in erroneous methylation patterns associated with diabetic neuropathy. SAM serves as the methyl donor in cytosine methylation, which is a key epigenetic regulatory process. Because of this impact on epigenetic regulation, SAM has been tested as an anti-cancer treatment. In many cancers, proliferation is dependent on having low levels of DNA methylation. In vitro addition of SAM in such cancers has been shown to remethylate oncogene promoter sequences and decrease the production of proto-oncogenes. In cancers such as colorectal cancer, aberrant global hypermethylation can inhibit promoter regions of tumor-suppressing genes. Contrary to the above, colorectal cancers (CRCs) are characterized by global hypomethylation and promoter-specific DNA methylation. Pharmacokinetics. Oral SAM achieves peak plasma concentrations three to five hours after ingestion of an enteric-coated tablet (400–1000 mg). The half-life is about 100 minutes. Availability in different countries. In Canada, the UK, and the United States, SAM is sold as a dietary supplement under the marketing name SAM-e (also spelled SAME or SAMe). It was introduced in the US in 1999, after the Dietary Supplement Health and Education Act was passed in 1994. It was introduced as a prescription drug in Italy in 1979, in Spain in 1985, and in Germany in 1989. As of 2012, it was sold as a prescription drug in Russia, India, China, Italy, Germany, Vietnam, and Mexico. Adverse effects. Gastrointestinal disorder, dyspepsia and anxiety can occur with SAM consumption. 
Long-term effects are unknown. SAM is a weak DNA-alkylating agent. Another reported side effect of SAM is insomnia; therefore, the supplement is often taken in the morning. Other reports of mild side effects include lack of appetite, constipation, nausea, dry mouth, sweating, and anxiety/nervousness, but in placebo-controlled studies, these side effects occur at about the same incidence in the placebo groups. Interactions and contraindications. Taking SAM at the same time as some drugs may increase the risk of serotonin syndrome, a potentially dangerous condition caused by having too much serotonin. These drugs include, but are not limited to, dextromethorphan (Robitussin), meperidine (Demerol), pentazocine (Talwin), and tramadol (Ultram). SAM can also interact with many antidepressant medications, including tryptophan and the herbal medicine "Hypericum perforatum" (St. John's wort), increasing the potential for serotonin syndrome or other side effects, and may reduce the effectiveness of levodopa for Parkinson's disease. SAM can increase the risk of manic episodes in people who have bipolar disorder. Toxicity. A 2022 study concluded that SAMe could be toxic. Jean-Michel Fustin of Manchester University said that the researchers found that excess SAMe breaks down into adenine and methylthioadenosine in the body, both producing the paradoxical effect of inhibiting methylation. This was observed in laboratory mice, where it harmed health, and in "in vitro" tests on human cells. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
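As a rough numeric companion to the pharmacokinetics noted above (half-life about 100 minutes), a first-order elimination model gives the fraction of the peak plasma concentration remaining over time. The first-order (exponential) model and the function name are illustrative assumptions, not details from the source.

```python
HALF_LIFE_MIN = 100.0  # half-life from the text, in minutes

def fraction_remaining(t_min):
    """Fraction of peak plasma concentration left t_min minutes after the
    peak, assuming simple first-order (exponential) elimination."""
    return 0.5 ** (t_min / HALF_LIFE_MIN)

# One half-life leaves 50% of the peak concentration, two leave 25%.
```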
[ { "math_id": 0, "text": "\\rightleftharpoons" } ]
https://en.wikipedia.org/wiki?curid=595929
59593380
Georgii Polozii
Soviet mathematician Georgii Nikolaevich Polozii (; 23 April 1914 – 26 November 1968) was a Soviet mathematician who worked mostly in pure mathematics, such as complex analysis, approximation theory and numerical analysis. He also worked on elasticity theory, which is used in applied mathematics and physics. He was a Corresponding Member of the Academy of Sciences of the Ukrainian SSR, Doctor of Physical and Mathematical Sciences (1953), and Head of the Department of Computational Mathematics of Kyiv University (1958). Education. In 1933, Polozii graduated from high school in the village of Verkhnyi Baskunchaky of Astrakhan Oblast, and subsequently entered the Faculty of Physics and Mathematics of Saratov University. He graduated from Saratov University in 1937 and stayed to teach until he moved to the University of Kyiv in 1949. Later life. After 1938, Polozii worked at the Department of Mathematical Analysis. He participated in the Soviet-Finnish war. During the German-Soviet war, he was seriously wounded in one of the battles near Nelidovo while serving as an infantry platoon commander. He underwent seven operations, then returned to Saratov University, where he was engaged in scientific and pedagogical work. In 1946 he defended his Ph.D. thesis "Integral images of continuously differentiable functions of a complex variable". In 1949, he began working at the University of Kyiv, first as an associate professor of the Department of Mathematical Physics, and from 1951 to 1958 as its head. In 1953 he defended his doctoral dissertation on the topic "On some methods of the theory of functions in the mechanics of a continuous medium". In 1958 Polozii was elected head of the Department of Computational Mathematics. He died on November 26, 1968, less than a year before realizing his dream of creating a separate faculty for computational mathematics and cybernetics. He is buried in Kyiv at Baykovoye Cemetery. Works. Polozii mostly worked in the following four areas. 
Complex functions. He produced "original results in the theory of functions of a complex variable". A complex function is a function whose domain and range are subsets of the complex plane. For any complex function, the values formula_0 from the domain and their images formula_1 in the range may be separated into real and imaginary parts: formula_2 where formula_3 are all real-valued. In other words, a complex function formula_4 may be decomposed into formula_5 i.e., into two real-valued functions (formula_6, formula_7) of two real variables (formula_8). The basic concepts of complex analysis are often introduced by extending the elementary real functions (e.g., exponential functions, logarithmic functions, and trigonometric functions) into a complex domain and the corresponding complex range. Approximation theory. He developed methods to solve boundary value problems which arise in mathematical physics. His work produced the method of summary representation. He "devised a new approximation method for the solution of problems in elasticity and filtration". Approximation theory tries to develop simpler functions to mimic or get close to more complex ones and to define the errors that can be introduced by these approximations. Numerical analysis. Polozii introduced a new class of ("p","q")-analytic functions, developed a new notion of "p"-analytic functions, defined the notions of derivative and integral for these functions, developed their calculus, and obtained a generalised Cauchy formula. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
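The decomposition described above, f(x + iy) = u(x, y) + i v(x, y), can be illustrated numerically; the sample function f(z) = z² is an arbitrary illustrative choice.

```python
def decompose(f, x, y):
    """Evaluate a complex function at z = x + iy and return the pair
    (u(x, y), v(x, y)) of its real and imaginary parts."""
    w = f(complex(x, y))
    return w.real, w.imag

# For f(z) = z^2 the closed forms are u = x^2 - y^2 and v = 2xy,
# so decompose(lambda z: z * z, 3.0, 2.0) gives (5.0, 12.0).
u, v = decompose(lambda z: z * z, 3.0, 2.0)
```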
[ { "math_id": 0, "text": "z" }, { "math_id": 1, "text": "f(z)" }, { "math_id": 2, "text": "z=x+iy \\quad \\text{ and } \\quad f(z) = f(x+iy)=u(x,y)+iv(x,y)," }, { "math_id": 3, "text": "x,y,u(x,y),v(x,y)" }, { "math_id": 4, "text": "f:\\mathbb{C}\\to\\mathbb{C}" }, { "math_id": 5, "text": "u:\\mathbb{R}^2\\to\\mathbb{R} \\quad \\text{ and } \\quad v:\\mathbb{R}^2\\to\\mathbb{R}," }, { "math_id": 6, "text": "u" }, { "math_id": 7, "text": "v" }, { "math_id": 8, "text": "x,y" } ]
https://en.wikipedia.org/wiki?curid=59593380
59594023
BAITSSS
BAITSSS (Backward-Averaged Iterative Two-Source Surface temperature and energy balance Solution) is a biophysical evapotranspiration (ET) computer model that determines water use, primarily in agricultural landscapes, using remote sensing-based information. It has been developed and refined by Ramesh Dhungel and the water resources group at the University of Idaho's Kimberly Research and Extension Center since 2010. It has been used in different areas in the United States, including southern Idaho, northern California, northwest Kansas, Texas, and Arizona. History of development. BAITSSS originated from the research of Ramesh Dhungel, a graduate student at the University of Idaho, who joined a project called "Producing and integrating time series of gridded evapotranspiration for irrigation management, hydrology and remote sensing applications" under professor Richard G. Allen. In 2012, the initial version of the landscape model was developed in the Python IDLE environment using NARR weather data (~32 kilometers). Dhungel submitted his PhD dissertation in 2014, where the model was called BATANS (backward averaged two source accelerated numerical solution). The model was first published in the journal "Meteorological Applications" in 2016 under the name BAITSSS, as a framework to interpolate ET between satellite overpasses when thermal-based surface temperature is unavailable. The overall concept of backward averaging was introduced to expedite the convergence process of the iteratively solved surface energy balance components, which can be time-consuming and can frequently suffer non-convergence, especially at low wind speeds. In 2017, the landscape BAITSSS model was scripted in Python, together with the GDAL and NumPy libraries, using NLDAS weather data (~12.5 kilometers). 
The detailed independent model was evaluated against weighing-lysimeter-measured ET, infrared temperature (IRT), and net radiometer measurements of drought-tolerant corn and sorghum at the Conservation and Production Research Laboratory in Bushland, Texas, by a group of scientists from USDA-ARS and Kansas State University between 2017 and 2020. Some later developments of BAITSSS include physically based crop productivity components, i.e., biomass and crop yield computation. Rationale. The majority of remote sensing-based instantaneous ET models use evaporative fraction (EF) or reference ET fraction (ETrF), similar to crop coefficients, for computing seasonal values; these models generally lack the soil water balance and irrigation components in the surface energy balance. Another limiting factor is the dependence on thermal-based radiometric surface temperature, which is not always available at the required temporal resolution and is frequently obscured by factors such as cloud cover. BAITSSS was developed to fill these gaps in remote sensing-based models, freeing them from reliance on thermal-based radiometric surface temperature, and to serve as a digital crop water tracker simulating ET maps at high temporal (hourly or sub-hourly) and spatial (30 meter) resolution. BAITSSS utilizes remote sensing-based canopy formation information, i.e., estimation of the seasonal variation of vegetation indices and senescence. Approach and model structure. Surface energy balance is one of the commonly utilized approaches to quantify ET (expressed as latent heat flux), where weather variables and vegetation indices are the drivers of this process. BAITSSS adopts numerous equations to compute surface energy balance and resistances, primarily from Jarvis, 1976, Choudhury and Monteith, 1988, and aerodynamic methods or flux-gradient relationship equations with stability functions associated with Monin–Obukhov similarity theory. Underlying fundamental equations of surface energy balance. 
Latent heat flux (LE). The aerodynamic or flux-gradient equations of latent heat flux in BAITSSS are shown below. formula_0 is the saturation vapor pressure at the canopy and formula_1 is that at the soil, formula_2 is the ambient vapor pressure, rac is the bulk boundary layer resistance of vegetative elements in the canopy, rah is the aerodynamic resistance between the zero plane displacement (d) + roughness length of momentum (zom) and the measurement height (z) of wind speed, ras is the aerodynamic resistance between the substrate and canopy height (d + zom), and rss is the soil surface resistance. formula_3 Sensible heat flux (H) and surface temperature calculation. The flux-gradient equations of sensible heat flux and surface temperature in BAITSSS are shown below. formula_4 formula_5 Canopy resistance (rsc). A typical Jarvis-type equation for rsc adopted in BAITSSS is shown below, where Rc-min is the minimum value of rsc, LAI is leaf area index, fc is the fraction of canopy cover, and the weighting functions representing plant response to solar radiation (F1), air temperature (F2), vapor pressure deficit (F3), and soil moisture (F4) each vary between 0 and 1. formula_6 Equations of soil water balance and irrigation decision. Standard soil water balance equations for the soil surface and the root zone are implemented in BAITSSS for each time step, where irrigation decisions are based on the soil moisture at the root zone. Data. Input. ET models, in general, need information about vegetation (physical properties and vegetation indices) and environmental conditions (weather data) to compute water use. The primary weather data requirements in BAITSSS are solar irradiance (Rs↓), wind speed (uz), air temperature (Ta), relative humidity (RH) or specific humidity (qa), and precipitation (P). The vegetation index requirements in BAITSSS are leaf area index (LAI) and fractional canopy cover (fc), generally estimated from the normalized difference vegetation index (NDVI). 
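The flux-gradient form of latent heat flux above can be sketched in code. The constants and sample resistances below are illustrative assumptions, not calibrated BAITSSS values; the canopy and soil components share the same form, differing only in which vapor pressure and resistances are used.

```python
RHO_A = 1.2    # air density, kg m^-3 (assumed)
CP = 1013.0    # specific heat of moist air, J kg^-1 K^-1 (assumed)
GAMMA = 0.066  # psychrometric constant, kPa K^-1 (assumed)

def latent_heat_flux(e_surface, e_air, resistances):
    """Flux-gradient latent heat flux, W m^-2.

    e_surface   : saturation vapor pressure at canopy or soil, kPa
    e_air       : ambient vapor pressure, kPa
    resistances : series resistances in s m^-1, e.g. (r_ac, r_ah, r_sc)
                  for the canopy term or (r_as, r_ah, r_ss) for the soil term
    """
    return (RHO_A * CP / GAMMA) * (e_surface - e_air) / sum(resistances)

# Example with assumed canopy resistances (s m^-1).
le_canopy = latent_heat_flux(2.0, 1.2, (10.0, 40.0, 80.0))
```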
Automated BAITSSS can compute ET throughout the United States using National Oceanic and Atmospheric Administration (NOAA) weather data (i.e., hourly NLDAS: North American Land Data Assimilation System at 1/8 degree; ~12.5 kilometers), vegetation indices acquired by Landsat, and soil information from SSURGO. Output. BAITSSS generates a large number of variables (fluxes, resistances, and moisture) in gridded form at each time step. The most commonly used outputs are evapotranspiration, evaporation, transpiration, soil moisture, irrigation amount, and surface temperature maps and time series analyses. Agriculture system applications and recognition. BAITSSS was implemented to compute ET in southern Idaho for 2008, and in northern California for 2010. It was used to calculate corn and sorghum ET in Bushland, Texas for 2016, and ET of multiple crops in northwest Kansas for 2013–2017. BAITSSS has been widely discussed among peers around the world, including by Bhattarai et al. in 2017 and Jones et al. in 2019. The United States Senate Committee on Agriculture, Nutrition and Forestry listed BAITSSS in its climate change report. BAITSSS was also covered by articles in Open Access Government, the Landsat science team, Grass & Grain magazine, the National Information Management & Support System (NIMSS), terrestrial ecological models, and key research contributions related to sensible heat flux estimation and irrigation decisions in remote sensing-based ET models. In September 2019, the Northwest Kansas Groundwater Management District 4 (GMD 4), along with BAITSSS, received national recognition from the American Association for the Advancement of Science (AAAS). AAAS highlighted 18 communities across the U.S. that are responding to climate change, including Sheridan County, Kansas, which seeks to prolong the life of the Ogallala Aquifer by minimizing water use, as the aquifer is depleting rapidly due to extensive agricultural practices. 
AAAS discussed the development and use of the intricate ET model BAITSSS and the efforts of Dhungel and other scientists supporting effective use of water in Sheridan County, Kansas. Furthermore, the Upper Republican Regional Advisory Committee of Kansas (June 2019) and GMD 4 discussed the possible benefits and utilization of BAITSSS for managing water use, educational purposes, and cost-share. A short story about the Ogallala Aquifer conservation effort from Kansas State University and GMD 4 using the ET model was published in Mother Earth News (April/May 2020) and Progressive Crop Consultant. Example application. Groundwater and Irrigation. Dhungel et al. (2020), together with field crop scientists, systems analysts, and district water managers, applied BAITSSS at the district water management level, focusing on seasonal ET and annual groundwater withdrawal rates at the Sheridan 6 (SD-6) Local Enhanced Management Plan (LEMA) for a five-year period (2013–2017) in northwest Kansas, United States. BAITSSS-simulated irrigation was compared to reported irrigation and used to infer deficit irrigation within water right management units (WRMUs). In Kansas, groundwater pumping records are legal documents and are maintained by the Kansas Division of Water Resources. The in-season water supply was compared to BAITSSS-simulated ET under well-watered crop conditions. Evapotranspiration Hysteresis and Advection. A study of ET uncertainty associated with ET hysteresis (vapor pressure and net radiation) was conducted using a lysimeter, eddy covariance (EC), and the BAITSSS model (point-scale) in the advective environment of Bushland, Texas. Results indicated that the pattern of hysteresis from BAITSSS closely followed the lysimeter and showed weak hysteresis related to net radiation when compared to EC. However, both the lysimeter and BAITSSS showed strong hysteresis related to VPD when compared to EC. Lettuce Evapotranspiration. 
A study of lettuce evapotranspiration was conducted at Yuma, Arizona using BAITSSS between 2016 and 2020, where model-simulated ET closely followed twelve eddy covariance sites. Challenges and limitations. Simulation of hourly ET at 30 m spatial resolution for a seasonal time scale is computationally challenging and data-intensive. Low wind speeds also complicate the convergence of surface energy balance components. Peer studies by Pan et al. (2017) and Dhungel et al. (2019) pointed out the possible difficulty of parameterization and validation of these kinds of resistance-based models. The simulated irrigation may differ from that actually applied in the field. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "e^{o}_c " }, { "math_id": 1, "text": "e^{o}_s " }, { "math_id": 2, "text": "e_a " }, { "math_id": 3, "text": "LE_{c} = \\frac{\\rho_a c_{p} }{{\\gamma}} \\bigl(\\frac{e^{o}_c - e_a }{{r_{ac} + r_{ah} + r_{sc}}}\\bigr)\\ \\And \\ LE_{s} = \\frac{\\rho_a c_{p} }{{\\gamma}} \\bigl(\\frac{e^{o}_s - e_a }{{r_{as} + r_{ah} + r_{ss}}}\\bigr) " }, { "math_id": 4, "text": "H_{c} = {\\rho_a c_{p} } \\bigl(\\frac{T_c - T_a }{{r_{ah} + r_{ac}}}\\bigr)\\Longleftrightarrow T_{c} = \\frac{H_{c}(r_{ah} + r_{ac}) }{{\\rho_a c_{p}}} + {T_a } " }, { "math_id": 5, "text": "H_{s} = {\\rho_a c_{p} } \\bigl(\\frac{T_s - T_a }{{r_{ah} + r_{as}}}\\bigr)\\Longleftrightarrow T_{s} = \\frac{H_{s}(r_{ah} + r_{as}) }{{\\rho_a c_{p}}} + {T_a } " }, { "math_id": 6, "text": "r_{sc} = \\frac{R_{c-min}}{\\frac{LAI} {f_c} F_1 F_2 F_3 F_4 } " } ]
https://en.wikipedia.org/wiki?curid=59594023
59595
Heine–Borel theorem
Subset of Euclidean space is compact if and only if it is closed and bounded In real analysis, the Heine–Borel theorem, named after Eduard Heine and Émile Borel, states: For a subset "S" of Euclidean space R"n", the following two statements are equivalent: "S" is closed and bounded; "S" is compact, that is, every open cover of "S" has a finite subcover. History and motivation. The history of what today is called the Heine–Borel theorem starts in the 19th century, with the search for solid foundations of real analysis. Central to the theory was the concept of uniform continuity and the theorem stating that every continuous function on a closed and bounded interval is uniformly continuous. Peter Gustav Lejeune Dirichlet was the first to prove this, and he implicitly used the existence of a finite subcover of a given open cover of a closed interval in his proof. He used this proof in his 1852 lectures, which were published only in 1904. Later Eduard Heine, Karl Weierstrass and Salvatore Pincherle used similar techniques. Émile Borel in 1895 was the first to state and prove a form of what is now called the Heine–Borel theorem. His formulation was restricted to countable covers. Pierre Cousin (1895), Lebesgue (1898) and Schoenflies (1900) generalized it to arbitrary covers. Proof. If a set is compact, then it must be closed. Let "S" be a subset of R"n". Observe first the following: if "a" is a limit point of "S", then any finite collection "C" of open sets, such that each open set "U" ∈ "C" is disjoint from some neighborhood "V""U" of "a", fails to be a cover of "S". Indeed, the intersection of the finite family of sets "V""U" is a neighborhood "W" of "a" in R"n". Since "a" is a limit point of "S", "W" must contain a point "x" in "S". This "x" ∈ "S" is not covered by the family "C", because every "U" in "C" is disjoint from "V""U" and hence disjoint from "W", which contains "x". If "S" is compact but not closed, then it has a limit point "a" not in "S". 
Consider a collection "C" ′ consisting of an open neighborhood "N"("x") for each "x" ∈ "S", chosen small enough to not intersect some neighborhood "V""x" of "a". Then "C" ′ is an open cover of "S", but any finite subcollection of "C" ′ has the form of "C" discussed previously, and thus cannot be an open subcover of "S". This contradicts the compactness of "S". Hence, every limit point of "S" is in "S", so "S" is closed. The proof above applies with almost no change to showing that any compact subset "S" of a Hausdorff topological space "X" is closed in "X". If a set is compact, then it is bounded. Let formula_0 be a compact set in formula_1, and formula_2 a ball of radius 1 centered at formula_3. Then the set of all such balls centered at formula_4 is clearly an open cover of formula_0, since formula_5 contains all of formula_0. Since formula_0 is compact, take a finite subcover of this cover. This subcover is a finite union of balls of radius 1. Consider all pairs of centers of these (finitely many) balls (of radius 1) and let formula_6 be the maximum of the distances between them. Then if formula_7 and formula_8 are the centers (respectively) of unit balls containing arbitrary formula_9, the triangle inequality says: formula_10 So the diameter of formula_0 is bounded by formula_11. Lemma: A closed subset of a compact set is compact. Let "K" be a closed subset of a compact set "T" in R"n" and let "C""K" be an open cover of "K". Then "U" = R"n" \ "K" is an open set and formula_12 is an open cover of "T". Since "T" is compact, "C""T" has a finite subcover formula_13 that also covers the smaller set "K". Since "U" does not contain any point of "K", the set "K" is already covered by formula_14 that is a finite subcollection of the original collection "C""K". It is thus possible to extract from any open cover "C""K" of "K" a finite subcover. If a set is closed and bounded, then it is compact. 
If a set "S" in R"n" is bounded, then it can be enclosed within an "n"-box formula_15 where "a" > 0. By the lemma above, it is enough to show that "T"0 is compact. Assume, by way of contradiction, that "T"0 is not compact. Then there exists an infinite open cover "C" of "T"0 that does not admit any finite subcover. Through bisection of each of the sides of "T"0, the box "T"0 can be broken up into 2"n" sub "n"-boxes, each of which has diameter equal to half the diameter of "T"0. Then at least one of the 2"n" sections of "T"0 must require an infinite subcover of "C", otherwise "C" itself would have a finite subcover, by uniting together the finite covers of the sections. Call this section "T"1. Likewise, the sides of "T"1 can be bisected, yielding 2"n" sections of "T"1, at least one of which must require an infinite subcover of "C". Continuing in like manner yields a decreasing sequence of nested "n"-boxes: formula_16 where the side length of "T""k" is (2 "a") / 2"k", which tends to 0 as "k" tends to infinity. Let us define a sequence ("x"k) such that each "x"k is in "T"k. This sequence is Cauchy, because for "j", "k" ≥ "N" both "x"j and "x"k lie in "T""N", whose diameter tends to 0 as "N" tends to infinity; so it must converge to some limit "L". Since each "T""k" is closed, and for each "k" the sequence ("x"k) is eventually always inside "T"k, we see that "L" ∈ "T"k for each "k". Since "C" covers "T"0, then it has some member "U" ∈ "C" such that "L" ∈ "U". Since "U" is open, there is an "n"-ball "B"("L") ⊆ "U". For large enough "k", one has "T""k" ⊆ "B"("L") ⊆ "U", but then the infinite number of members of "C" needed to cover "Tk" can be replaced by just one: "U", a contradiction. Thus, "T"0 is compact. Since "S" is closed and a subset of the compact set "T"0, then "S" is also compact (see the lemma above). Generalization of the Heine–Borel theorem. 
In general metric spaces, we have the following theorem: For a subset formula_0 of a metric space formula_17, the following two statements are equivalent: formula_0 is compact; formula_0 is complete and totally bounded. The above follows directly from Jean Dieudonné, theorem 3.16.1, which states: For a metric space formula_17, the following three conditions are equivalent: formula_17 is compact; every infinite sequence in formula_17 has a cluster point; formula_17 is totally bounded (precompact) and complete. Heine–Borel property. The Heine–Borel theorem does not hold as stated for general metric and topological vector spaces, and this gives rise to the necessity to consider special classes of spaces where this proposition is true. These spaces are said to have the Heine–Borel property. In the theory of metric spaces. A metric space formula_19 is said to have the Heine–Borel property if each closed bounded set in formula_18 is compact. Many metric spaces fail to have the Heine–Borel property, such as the metric space of rational numbers (or indeed any incomplete metric space). Complete metric spaces may also fail to have the property; for instance, no infinite-dimensional Banach space has the Heine–Borel property (as a metric space). Even more trivially, if the real line is not endowed with the usual metric, it may fail to have the Heine–Borel property. A metric space formula_19 has a Heine–Borel metric which is Cauchy locally identical to formula_20 if and only if it is complete, formula_21-compact, and locally compact. In the theory of topological vector spaces. A topological vector space formula_18 is said to have the Heine–Borel property (R.E. Edwards uses the term "boundedly compact space") if each closed bounded set in formula_18 is compact. No infinite-dimensional Banach space has the Heine–Borel property (as a topological vector space). But some infinite-dimensional Fréchet spaces do: for instance, the space formula_22 of smooth functions on an open set formula_23 and the space formula_24 of holomorphic functions on an open set formula_25. More generally, any quasi-complete nuclear space has the Heine–Borel property. 
All Montel spaces have the Heine–Borel property as well. Notes. &lt;templatestyles src="Reflist/styles.css" /&gt;
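A small numeric illustration of why closedness matters in the theorem above: the open sets (1/n, 2), for n = 1, 2, …, cover the non-closed set (0, 1] but admit no finite subcover, whereas any closed interval [ε, 1] with ε > 0 is already covered by the single set (1/n, 2) once 1/n < ε. The function name is our own and this is an illustration, not a proof.

```python
import math

def smallest_n_covering(eps):
    """Smallest n such that the single open set (1/n, 2) contains [eps, 1],
    i.e. the smallest n with 1/n < eps. No such n exists as eps -> 0,
    which is why (0, 1] has no finite subcover from this family."""
    return math.floor(1 / eps) + 1
```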
[ { "math_id": 0, "text": "S" }, { "math_id": 1, "text": "\\mathbf{R}^n" }, { "math_id": 2, "text": "U_x" }, { "math_id": 3, "text": "x\\in\\mathbf{R}^n" }, { "math_id": 4, "text": "x\\in S" }, { "math_id": 5, "text": "\\cup_{x\\in S} U_x" }, { "math_id": 6, "text": "M" }, { "math_id": 7, "text": "C_p" }, { "math_id": 8, "text": "C_q" }, { "math_id": 9, "text": "p,q\\in S" }, { "math_id": 10, "text": "\nd(p, q)\\le d(p, C_p) + d(C_p, C_q) + d(C_q, q)\\le 1 + M + 1 = M + 2.\n" }, { "math_id": 11, "text": "M+2" }, { "math_id": 12, "text": " C_T = C_K \\cup \\{U\\} " }, { "math_id": 13, "text": " C_T'," }, { "math_id": 14, "text": " C_K' = C_T' \\setminus \\{U\\}, " }, { "math_id": 15, "text": " T_0 = [-a, a]^n" }, { "math_id": 16, "text": " T_0 \\supset T_1 \\supset T_2 \\supset \\ldots \\supset T_k \\supset \\ldots " }, { "math_id": 17, "text": "(X, d)" }, { "math_id": 18, "text": "X" }, { "math_id": 19, "text": "(X,d)" }, { "math_id": 20, "text": "d" }, { "math_id": 21, "text": "\\sigma" }, { "math_id": 22, "text": "C^\\infty(\\Omega)" }, { "math_id": 23, "text": "\\Omega\\subset\\mathbb{R}^n" }, { "math_id": 24, "text": "H(\\Omega)" }, { "math_id": 25, "text": "\\Omega\\subset\\mathbb{C}^n" } ]
https://en.wikipedia.org/wiki?curid=59595
59597756
Minimum relevant variables in linear system
Minimum relevant variables in linear system (Min-RVLS) is a problem in mathematical optimization. Given a linear program, it is required to find a feasible solution in which the number of non-zero variables is as small as possible. The problem is known to be NP-hard and even hard to approximate. Definition. A Min-RVLS problem is defined by: The linear system is given by: "A x" "R" "b." It is assumed to be feasible (i.e., satisfied by at least one "x"). Depending on R, there are four different variants of this system: "A x = b, A x ≥ b, A x > b, A x ≠ b". The goal is to find an "n"-by-1 vector "x" that satisfies the system "A x" "R" "b" and, subject to that, contains as few nonzero elements as possible. Special case. The problem Min-RVLS[=] was presented by Garey and Johnson, who called it "minimum weight solution to linear equations". They proved it was NP-hard, but did not consider approximations. Applications. The Min-RVLS problem is important in machine learning and linear discriminant analysis. Given a set of positive and negative examples, it is required to minimize the number of features that are required to correctly classify them. The problem is known as the minimum feature set problem. An algorithm that approximates Min-RVLS within a factor of formula_0 could substantially reduce the number of training samples required to attain a given accuracy level. The shortest codeword problem in coding theory is the same problem as Min-RVLS[=] when the coefficients are in GF(2). Related problems. In minimum unsatisfied linear relations (Min-ULR), we are given a binary relation "R" and a linear system "A x" "R" "b", which is now assumed to be "infeasible". The goal is to find a vector "x" that violates as few relations as possible, while satisfying all the others. Min-ULR[≠] is trivially solvable, since any system of finitely many disequality (≠) constraints over real variables is feasible. 
As for the other three variants: In the complementary problem maximum feasible linear subsystem (Max-FLS), the goal is to find a maximum subset of the constraints that can be satisfied simultaneously. Hardness of approximation. All four variants of Min-RVLS are hard to approximate. In particular, none of the four variants can be approximated within a factor of formula_2, for any formula_3, unless NP is contained in DTIME(formula_4). The hardness is proved by reductions: On the other hand, there is a reduction from Min-RVLS[=] to Min-ULR[=]. It also applies to Min-ULR[≥] and Min-ULR[>], since each equation can be replaced by two complementary inequalities. Therefore, when R is in {=,>,≥}, Min-ULR and Min-RVLS are equivalent in terms of approximation hardness. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
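A brute-force sketch of the Min-RVLS[=] variant defined above: try column supports of increasing size and test, with exact rational arithmetic, whether b lies in the span of the chosen columns. The search is exponential in the number of variables, consistent with the problem's hardness, and the helper names are our own illustrative choices.

```python
from fractions import Fraction
from itertools import combinations

def _consistent(cols, b):
    """True iff some x solves [cols] x = b (Gaussian elimination over Q)."""
    m, k = len(b), len(cols)
    # Augmented matrix [cols | b] with exact rationals.
    M = [[Fraction(cols[j][i]) for j in range(k)] + [Fraction(b[i])]
         for i in range(m)]
    row = 0
    for col in range(k):
        piv = next((r for r in range(row, m) if M[r][col] != 0), None)
        if piv is None:
            continue
        M[row], M[piv] = M[piv], M[row]
        M[row] = [v / M[row][col] for v in M[row]]
        for r in range(m):
            if r != row and M[r][col] != 0:
                f = M[r][col]
                M[r] = [a - f * p for a, p in zip(M[r], M[row])]
        row += 1
    # Inconsistent iff some reduced row reads 0 ... 0 | nonzero.
    return not any(all(v == 0 for v in M[r][:k]) and M[r][k] != 0
                   for r in range(m))

def min_relevant_variables(A, b):
    """Minimum number of nonzeros over solutions of A x = b, or None."""
    m, n = len(A), len(A[0])
    columns = [[A[i][j] for i in range(m)] for j in range(n)]
    for size in range(n + 1):
        for support in combinations(range(n), size):
            if _consistent([columns[j] for j in support], b):
                return size
    return None  # the system is infeasible
```

For example, with A = [[1, 0, 1], [0, 1, 1]] and b = [2, 2], the solution x = (0, 0, 2) uses a single nonzero variable.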
[ { "math_id": 0, "text": "O(\\log(m))" }, { "math_id": 1, "text": "O(n\\cdot m^n / 2^{n-1})" }, { "math_id": 2, "text": "2^{\\log^{1-\\varepsilon}n}" }, { "math_id": 3, "text": "\\varepsilon>0" }, { "math_id": 4, "text": "n^{\\operatorname{polylog}(n)}" }, { "math_id": 5, "text": "m^{\\varepsilon}" } ]
https://en.wikipedia.org/wiki?curid=59597756
5959843
Vector notation
Use of coordinates for representing vectors In mathematics and physics, vector notation is a commonly used notation for representing vectors, which may be Euclidean vectors, or more generally, members of a vector space. For representing a vector, the common typographic convention is lower case, upright boldface type, as in v. The International Organization for Standardization (ISO) recommends either bold italic serif, as in v, or non-bold italic serif accented by a right arrow, as in formula_0. In advanced mathematics, vectors are often represented in a simple italic type, like any variable. History. In 1835 Giusto Bellavitis introduced the idea of equipollent directed line segments formula_1 which resulted in the concept of a vector as an equivalence class of such segments. The term "vector" was coined by W. R. Hamilton around 1843, as he revealed quaternions, a system which uses vectors and scalars to span a four-dimensional space. For a quaternion "q" = "a" + "b"i + "c"j + "d"k, Hamilton used two projections: S "q" = "a", for the scalar part of "q", and V "q" = "b"i + "c"j + "d"k, the vector part. Using the modern terms "cross product" (×) and "dot product" (.), the "quaternion product" of two vectors "p" and "q" can be written "pq" = –"p"."q" + "p"×"q". In 1878, W. K. Clifford severed the two products to make the quaternion operation useful for students in his textbook "Elements of Dynamic". Lecturing at Yale University, Josiah Willard Gibbs supplied notation for the scalar product and vector products, which was introduced in "Vector Analysis". In 1891, Oliver Heaviside argued for Clarendon to distinguish vectors from scalars. He criticized the use of Greek letters by Tait and Gothic letters by Maxwell. In 1912, J.B. Shaw contributed his "Comparative Notation for Vector Expressions" to the "Bulletin" of the Quaternion Society. Subsequently, Alexander Macfarlane described 15 criteria for clear expression with vectors in the same publication. 
Vector ideas were advanced by Hermann Grassmann in 1841, and again in 1862 in the German language. But German mathematicians were not taken with quaternions as much as were English-speaking mathematicians. When Felix Klein was organizing the German mathematical encyclopedia, he assigned Arnold Sommerfeld to standardize vector notation. In 1950, when Academic Press published G. Kuerti’s translation of the second edition of volume 2 of "Lectures on Theoretical Physics" by Sommerfeld, vector notation was the subject of a footnote: "In the original German text, vectors "and" their components are printed in the same Gothic types. The more usual way of making a typographical distinction between the two has been adopted for this translation." Felix Klein commented on differences in notation of vectors and their operations in 1925 through a Mr. Seyfarth who prepared a supplement to "Elementary Mathematics from an Advanced Standpoint — Geometry" after "repeated conferences" with him. The terms line-segment, plane-segment, plane magnitude, inner and outer product come from Grassmann, while the words scalar, vector, scalar product, and vector product came from Hamilton. The disciples of Grassmann, in other ways so orthodox, replaced in part the appropriate expressions of the master by others. The existing terminologies were merged or modified, and the symbols which indicate the separate operations have been used with the greatest arbitrariness. On these accounts even for the expert, a great lack of clearness has crept into this field, which is mathematically so simple. Efforts to unify the various notational terms through committees of the International Congress of Mathematicians were described as follows: The Committee which was set up in Rome for the unification of vector notation did not have the slightest success, as was to have been expected. 
At the following Congress in Cambridge (1912), they had to explain that they had not finished their task, and to request that their time be extended to the meeting of the next Congress, which was to have taken place in Stockholm in 1916, but which was omitted because of the war. The committee on units and symbols met a similar fate. It published in 1921 a proposed notation for vector quantities, which aroused at once and from many sides the most violent opposition. Rectangular coordinates. Given a Cartesian coordinate system, a vector may be specified by its Cartesian coordinates. Tuple notation. A vector v in "n"-dimensional real coordinate space can be specified using a tuple (ordered list) of coordinates: formula_2 Sometimes angle brackets formula_3 are used instead of parentheses. Matrix notation. A vector in formula_4 can also be specified as a row or column matrix containing the ordered set of components. A vector specified as a row matrix is known as a row vector; one specified as a column matrix is known as a column vector. Again, an "n"-dimensional vector formula_5 can be specified in either of the following forms using matrices: formula_6 formula_7 where "v"1, "v"2, …, "v""n" − 1, "v""n" are the components of v. In some advanced contexts, a row and a column vector have different meanings; see covariance and contravariance of vectors for more. Unit vector notation. A vector in formula_8 (or fewer dimensions, such as formula_9 where "v""z" below is zero) can be specified as the sum of the scalar multiples of the components of the vector with the members of the standard basis in formula_8. The basis is represented with the unit vectors formula_10, formula_11, and formula_12. A three-dimensional vector formula_13 can be specified in the following form, using unit vector notation: formula_14 where "v""x", "v""y", and "v""z" are the scalar components of v. Scalar components may be positive or negative; the absolute value of a scalar component is its magnitude. Polar coordinates.
The two polar coordinates of a point in a plane may be considered as a two-dimensional vector. Such a vector consists of a magnitude (or length) and a direction (or angle). The magnitude, typically represented as "r", is the distance from a starting point, the origin, to the point which is represented. The angle, typically represented as "θ" (the Greek letter theta), is the angle, usually measured counterclockwise, between a fixed direction, typically that of the positive "x"-axis, and the direction from the origin to the point. The angle is typically reduced to lie within the range formula_15 radians or formula_16. Ordered set and matrix notations. Vectors can be specified using either ordered pair notation (a subset of ordered set notation using only two components), or matrix notation, as with rectangular coordinates. In these forms, the first component of the vector is "r" (instead of "v"1), and the second component is "θ" (instead of "v"2). To differentiate polar coordinates from rectangular coordinates, the angle may be prefixed with the angle symbol, formula_17. Two-dimensional polar coordinates for "v" can be represented as any of the following, using either ordered pair or matrix notation: formula_18 formula_19 formula_20 formula_21 where "r" is the magnitude, "θ" is the angle, and the angle symbol (formula_17) is optional. Direct notation. Vectors can also be specified using simplified autonomous equations that define "r" and "θ" explicitly. This can be unwieldy, but is useful for avoiding the confusion with two-dimensional rectangular vectors that arises from using ordered pair or matrix notation. A two-dimensional vector whose magnitude is 5 units, and whose direction is "π"/9 radians (20°), can be specified using either of the following forms: formula_22 formula_23 Cylindrical vectors. A cylindrical vector is an extension of the concept of polar coordinates into three dimensions. It is akin to an arrow in the cylindrical coordinate system.
A cylindrical vector is specified by a distance in the "xy"-plane, an angle, and a distance from the "xy"-plane (a height). The first distance, usually represented as "r" or "ρ" (the Greek letter rho), is the magnitude of the projection of the vector onto the "xy"-plane. The angle, usually represented as "θ" or "φ" (the Greek letter phi), is measured as the offset from the line collinear with the "x"-axis in the positive direction; the angle is typically reduced to lie within the range formula_15. The second distance, usually represented as "h" or "z", is the distance from the "xy"-plane to the endpoint of the vector. Ordered set and matrix notations. Cylindrical vectors use polar coordinates, where the second distance component is concatenated as a third component to form ordered triplets (again, a subset of ordered set notation) and matrices. The angle may be prefixed with the angle symbol (formula_17); the distance-angle-distance combination distinguishes cylindrical vectors in this notation from spherical vectors in similar notation. A three-dimensional cylindrical vector "v" can be represented as any of the following, using either ordered triplet or matrix notation: formula_24 formula_25 formula_26 formula_27 where "r" is the magnitude of the projection of v onto the "xy"-plane, "θ" is the angle between the positive "x"-axis and v, and "h" is the height from the "xy"-plane to the endpoint of "v". Again, the angle symbol (formula_17) is optional. Direct notation. A cylindrical vector can also be specified directly, using simplified autonomous equations that define "r" (or "ρ"), "θ" (or "φ"), and "h" (or "z"). Consistency should be used when choosing the names to use for the variables; "ρ" should not be mixed with "θ" and so on. A three-dimensional vector, the magnitude of whose projection onto the "xy"-plane is 5 units, whose angle from the positive "x"-axis is "π"/9 radians (20°), and whose height from the "xy"-plane is 3 units can be specified in any of the following forms: formula_28 formula_29 formula_30 formula_31 Spherical vectors.
A spherical vector is another method for extending the concept of polar vectors into three dimensions. It is akin to an arrow in the spherical coordinate system. A spherical vector is specified by a magnitude, an azimuth angle, and a zenith angle. The magnitude is usually represented as "ρ". The azimuth angle, usually represented as "θ", is the (counterclockwise) offset from the positive "x"-axis. The zenith angle, usually represented as "φ", is the offset from the positive "z"-axis. Both angles are typically reduced to lie within the range from zero (inclusive) to 2"π" (exclusive). Ordered set and matrix notations. Spherical vectors are specified like polar vectors, where the zenith angle is concatenated as a third component to form ordered triplets and matrices. The azimuth and zenith angles may be both prefixed with the angle symbol (formula_17); the prefix should be used consistently to produce the distance-angle-angle combination that distinguishes spherical vectors from cylindrical ones. A three-dimensional spherical vector "v" can be represented as any of the following, using either ordered triplet or matrix notation: formula_32 formula_33 formula_34 formula_35 where "ρ" is the magnitude, "θ" is the azimuth angle, and "φ" is the zenith angle. Direct notation. Like polar and cylindrical vectors, spherical vectors can be specified using simplified autonomous equations, in this case for "ρ", "θ", and "φ". A three-dimensional vector whose magnitude is 5 units, whose azimuth angle is "π"/9 radians (20°), and whose zenith angle is "π"/4 radians (45°) can be specified as: formula_36 formula_37 Operations. In any given vector space, the operations of vector addition and scalar multiplication are defined. Normed vector spaces also define an operation known as the norm (or determination of magnitude). Inner product spaces also define an operation known as the inner product. In formula_4, the inner product is known as the dot product. In formula_8 and formula_38, an additional operation known as the cross product is also defined. Vector addition.
Vector addition is represented with the plus sign used as an operator between two vectors. The sum of two vectors u and v would be represented as: formula_39 Scalar multiplication. Scalar multiplication is represented in the same manner as algebraic multiplication. A scalar beside a vector (either or both of which may be in parentheses) implies scalar multiplication. The two common operators, a dot and a rotated cross, are also acceptable (although the rotated cross is almost never used), but they risk confusion with dot products and cross products, which operate on two vectors. The product of a scalar "k" with a vector v can be represented in any of the following fashions: formula_40 formula_41 Vector subtraction and scalar division. Using the algebraic properties of subtraction and division, along with scalar multiplication, it is also possible to “subtract” two vectors and “divide” a vector by a scalar. Vector subtraction is performed by adding the product of −1 and the second vector operand to the first vector operand. This can be represented by the use of the minus sign as an operator. The difference between two vectors u and v can be represented in either of the following fashions: formula_42 formula_43 Scalar division is performed by multiplying the vector operand with the reciprocal of the scalar operand. This can be represented by the use of the fraction bar or division signs as operators. The quotient of a vector v and a scalar "c" can be represented in any of the following forms: formula_44 formula_45 formula_46 Norm. The norm of a vector is represented with double bars on both sides of the vector. The norm of a vector v can be represented as: formula_47 The norm is also sometimes represented with single bars, like formula_48, but this can be confused with absolute value (which is a type of norm). Inner product. The inner product of two vectors (also known as the scalar product, not to be confused with scalar multiplication) is represented as an ordered pair enclosed in angle brackets.
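These elementary operations (addition, scalar multiplication, subtraction, scalar division, and the norm) map directly onto array arithmetic; a minimal NumPy sketch:

```python
import numpy as np

u = np.array([1.0, 2.0, 3.0])
v = np.array([4.0, -1.0, 2.0])
k = 2.5

total = u + v                 # vector addition u + v
scaled = k * v                # scalar multiplication k v
difference = u - v            # u - v, i.e. u + (-1) v
quotient = v / k              # scalar division, (1/k) v
norm_v = np.linalg.norm(v)    # the norm ||v||

print(total)       # [5. 1. 5.]
print(difference)  # [-3.  3.  1.]
print(norm_v)      # sqrt(21)
```

Here `np.linalg.norm` computes the Euclidean norm; single-bar absolute value of an array (`np.abs`) would instead act componentwise, which is one reason the double-bar notation is preferred.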
The inner product of two vectors u and v would be represented as: formula_49 Dot product. In formula_4, the inner product is also known as the dot product. In addition to the standard inner product notation, the dot product notation (using the dot as an operator) can also be used (and is more common). The dot product of two vectors u and v can be represented as: formula_50 In some older literature, the dot product is implied between two vectors written side-by-side. This notation can be confused with the dyadic product between two vectors. Cross product. The cross product of two vectors (in formula_8) is represented using the rotated cross as an operator. The cross product of two vectors u and v would be represented as: formula_51 By some conventions (e.g. in France and in some areas of higher mathematics), this is also denoted by a wedge, which avoids confusion with the wedge product since the two are functionally equivalent in three dimensions: formula_52 In some older literature, the following notation is used for the cross product between u and v: formula_53 Nabla. Vector notation is used with calculus through the nabla operator: formula_54 With a scalar function "f", the gradient is written as formula_55 with a vector field "F", the divergence is written as formula_56 and with a vector field "F", the curl is written as formula_57 References. &lt;templatestyles src="Reflist/styles.css" /&gt;
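As an illustrative sketch tying the coordinate forms and products together (the helper names below are ours, not a standard API; conventions follow the text: θ is the azimuth from the positive "x"-axis, φ the zenith from the positive "z"-axis):

```python
import numpy as np

def polar_to_cartesian(r, theta):
    """(r, angle θ) -> (x, y); θ measured counterclockwise from the +x-axis."""
    return np.array([r * np.cos(theta), r * np.sin(theta)])

def cylindrical_to_cartesian(r, theta, h):
    """(r, angle θ, h) -> (x, y, z); h is the height above the xy-plane."""
    return np.array([r * np.cos(theta), r * np.sin(theta), h])

def spherical_to_cartesian(rho, theta, phi):
    """(ρ, angle θ, angle φ) -> (x, y, z); θ azimuth, φ zenith angle."""
    return np.array([rho * np.sin(phi) * np.cos(theta),
                     rho * np.sin(phi) * np.sin(theta),
                     rho * np.cos(phi)])

# The article's running example: magnitude 5, azimuth π/9, zenith π/4.
v = spherical_to_cartesian(5.0, np.pi / 9, np.pi / 4)

# Dot and cross products on the standard basis: i·j = 0 and i×j = k.
i_hat, j_hat = np.array([1.0, 0.0, 0.0]), np.array([0.0, 1.0, 0.0])
print(np.dot(i_hat, j_hat))    # 0.0
print(np.cross(i_hat, j_hat))  # [0. 0. 1.]
```

Converting to rectangular coordinates first is the usual practical route, since the dot and cross products take their familiar componentwise forms there.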
[ { "math_id": 0, "text": "\\vec{v}" }, { "math_id": 1, "text": "AB \\bumpeq CD" }, { "math_id": 2, "text": "\\mathbf{v} = (v_1, v_2, \\dots, v_{n - 1}, v_n)" }, { "math_id": 3, "text": "\\langle \\dots \\rangle" }, { "math_id": 4, "text": "\\mathbb{R}^n" }, { "math_id": 5, "text": "\\mathbf{v}" }, { "math_id": 6, "text": "\\mathbf{v} = \\begin{bmatrix} v_1 & v_2 & \\cdots & v_{n - 1} & v_n \\end{bmatrix} = \\begin{pmatrix} v_1 & v_2 & \\cdots & v_{n - 1} & v_n \\end{pmatrix}" }, { "math_id": 7, "text": "\\mathbf{v} = \\begin{bmatrix} v_1 \\\\ v_2 \\\\ \\vdots \\\\ v_{n - 1} \\\\ v_n \\end{bmatrix} = \\begin{pmatrix} v_1 \\\\ v_2 \\\\ \\vdots \\\\ v_{n - 1} \\\\ v_n \\end{pmatrix}" }, { "math_id": 8, "text": "\\mathbb{R}^3" }, { "math_id": 9, "text": "\\mathbb{R}^2" }, { "math_id": 10, "text": "\\boldsymbol{\\hat{\\imath}} = (1, 0, 0)" }, { "math_id": 11, "text": "\\boldsymbol{\\hat{\\jmath}} = (0, 1, 0)" }, { "math_id": 12, "text": "\\boldsymbol{\\hat{k}} = (0, 0, 1)" }, { "math_id": 13, "text": "\\boldsymbol{v}" }, { "math_id": 14, "text": "\\mathbf{v} = v_x \\boldsymbol{\\hat{\\imath}} + v_y \\boldsymbol{\\hat{\\jmath}} + v_z \\boldsymbol{\\hat{k}}" }, { "math_id": 15, "text": "0 \\le \\theta < 2\\pi" }, { "math_id": 16, "text": "0 \\le \\theta < 360^{\\circ}" }, { "math_id": 17, "text": "\\angle" }, { "math_id": 18, "text": "\\mathbf{v} = (r, \\angle \\theta)" }, { "math_id": 19, "text": "\\mathbf{v} = \\langle r, \\angle \\theta \\rangle" }, { "math_id": 20, "text": "\\mathbf{v} = \\begin{bmatrix} r & \\angle \\theta \\end{bmatrix}" }, { "math_id": 21, "text": "\\mathbf{v} = \\begin{bmatrix} r \\\\ \\angle \\theta \\end{bmatrix}" }, { "math_id": 22, "text": "r=5, \\ \\theta={\\pi \\over 9}" }, { "math_id": 23, "text": "r=5, \\ \\theta=20^{\\circ}" }, { "math_id": 24, "text": "\\mathbf{v} = (r, \\angle \\theta, h)" }, { "math_id": 25, "text": "\\mathbf{v} = \\langle r, \\angle \\theta, h \\rangle" }, { "math_id": 26, "text": "\\mathbf{v} = \\begin{bmatrix} r & 
\\angle \\theta & h \\end{bmatrix}" }, { "math_id": 27, "text": "\\mathbf{v} = \\begin{bmatrix} r \\\\ \\angle \\theta \\\\ h \\end{bmatrix}" }, { "math_id": 28, "text": "r=5, \\ \\theta={\\pi \\over 9}, \\ h=3" }, { "math_id": 29, "text": "r=5, \\ \\theta=20^{\\circ}, \\ h=3" }, { "math_id": 30, "text": "\\rho=5, \\ \\phi={\\pi \\over 9}, \\ z=3" }, { "math_id": 31, "text": "\\rho=5, \\ \\phi=20^{\\circ}, \\ z=3" }, { "math_id": 32, "text": "\\mathbf{v} = (\\rho, \\angle \\theta, \\angle \\phi)" }, { "math_id": 33, "text": "\\mathbf{v} = \\langle \\rho, \\angle \\theta, \\angle \\phi \\rangle" }, { "math_id": 34, "text": "\\mathbf{v} = \\begin{bmatrix} \\rho & \\angle \\theta & \\angle \\phi \\end{bmatrix}" }, { "math_id": 35, "text": "\\mathbf{v} = \\begin{bmatrix} \\rho \\\\ \\angle \\theta \\\\ \\angle \\phi \\end{bmatrix} " }, { "math_id": 36, "text": "\\rho=5, \\ \\theta={\\pi \\over 9}, \\ \\phi={\\pi \\over 4}" }, { "math_id": 37, "text": "\\rho=5, \\ \\theta=20^{\\circ}, \\ \\phi=45^{\\circ}" }, { "math_id": 38, "text": "\\mathbb{R}^7" }, { "math_id": 39, "text": "\\mathbf{u} + \\mathbf{v}" }, { "math_id": 40, "text": "k \\mathbf{v}" }, { "math_id": 41, "text": "k \\cdot \\mathbf{v}" }, { "math_id": 42, "text": "\\mathbf{u} + -\\mathbf{v}" }, { "math_id": 43, "text": "\\mathbf{u} - \\mathbf{v}" }, { "math_id": 44, "text": "{1 \\over c} \\mathbf{v}" }, { "math_id": 45, "text": "{\\mathbf{v} \\over c}" }, { "math_id": 46, "text": "{\\mathbf{v} \\div c}" }, { "math_id": 47, "text": "\\|\\mathbf{v}\\|" }, { "math_id": 48, "text": "|\\mathbf{v}|" }, { "math_id": 49, "text": "\\langle \\mathbf{u}, \\mathbf{v} \\rangle" }, { "math_id": 50, "text": "\\mathbf{u} \\cdot \\mathbf{v}" }, { "math_id": 51, "text": "\\mathbf{u} \\times \\mathbf{v}" }, { "math_id": 52, "text": "\\mathbf{u} \\wedge \\mathbf{v}" }, { "math_id": 53, "text": "[\\mathbf{u},\\mathbf{v}]" }, { "math_id": 54, "text": " \\mathbf{i}\\frac{\\partial}{\\partial x} + 
\\mathbf{j}\\frac{\\partial}{\\partial y} + \\mathbf{k}\\frac{\\partial}{\\partial z} " }, { "math_id": 55, "text": "\\nabla f \\, ," }, { "math_id": 56, "text": "\\nabla \\cdot F ," }, { "math_id": 57, "text": "\\nabla \\times F ." } ]
https://en.wikipedia.org/wiki?curid=5959843
59600531
Phase reduction
Phase reduction is a method used to reduce a multi-dimensional dynamical equation describing a nonlinear limit cycle oscillator into a one-dimensional phase equation. Many phenomena in our world such as chemical reactions, electric circuits, mechanical vibrations, cardiac cells, and spiking neurons are examples of rhythmic phenomena, and can be considered as nonlinear limit cycle oscillators. History. The theory of the phase reduction method was first introduced in the 1950s, when Malkin discussed the existence of periodic solutions to nonlinear oscillators under perturbation; in the 1960s, Winfree illustrated the importance of the notion of phase and formulated the phase model for a population of nonlinear oscillators in his studies on biological synchronization. Since then, many researchers have discovered different rhythmic phenomena related to phase reduction theory. Phase model of reduction. Consider the dynamical system of the form formula_0 where formula_1 is the oscillator state variable and formula_2 is the baseline vector field. Let formula_3 be the flow induced by the system, that is, formula_4 is the solution of the system for the initial condition formula_5. This system of differential equations can describe, for example, a conductance-based neuron model with formula_6, where formula_7 represents the voltage difference across the membrane and formula_8 represents the formula_9-dimensional vector of gating variables. When a neuron is perturbed by a stimulus current, the dynamics of the perturbed system will no longer be the same as the dynamics of the baseline neural oscillator. The goal here is to reduce the system by defining a phase for each point in some neighbourhood of the limit cycle. Sufficiently small perturbations (e.g. an external forcing or stimulus applied to the system) may cause a large deviation of the phase, but the amplitude is perturbed only slightly because the limit cycle is attracting.
Hence we need to extend the definition of the phase to points in the neighborhood of the cycle by introducing the asymptotic phase (or latent phase). This allows us to assign a phase to each point in the basin of attraction of a periodic orbit. The set of points in the basin of attraction of formula_10 that share the same asymptotic phase formula_11 is called an isochron, a concept first introduced by Winfree. Isochrons can be shown to exist for such a stable hyperbolic limit cycle formula_10. So for every point formula_12 in some neighbourhood of the cycle, the evolution of the phase formula_13 is given by the relation formula_14, where formula_15 is the natural frequency of the oscillation. By the chain rule, the equation governing the evolution of the phase of the neuron model is then given by the phase model: formula_16 where formula_17 is the gradient of the phase function formula_11 with respect to the neuron's state vector formula_12 (for the derivation of this result, see the references). This means that the formula_18-dimensional system describing the oscillating neuron dynamics is reduced to a simple one-dimensional phase equation. Note that it is impossible to recover the full state formula_12 of the oscillator from the phase formula_19 alone, because formula_11 is not a one-to-one mapping. Phase model with external forcing. Consider now a weakly perturbed system of the form formula_20 where formula_21 is the baseline vector field and formula_22 is a weak periodic external forcing (or stimulus effect) of period formula_23, which can in general be different from formula_24, and frequency formula_25; in a more general formulation, the forcing might also depend on the oscillator state formula_12.
Assuming that the baseline neural oscillator (that is, when formula_26) has an exponentially stable limit cycle formula_27 with period formula_24 that is normally hyperbolic, it can be shown that formula_10 persists under small perturbations. This implies that, for a small perturbation, the perturbed system will remain close to the limit cycle. Hence we assume that such a limit cycle always exists for each neuron. The evolution of the perturbed system in terms of the isochrons is formula_29 where formula_17 is the gradient of the phase formula_11 with respect to the neuron's state vector formula_12, and formula_30 is the stimulus effect driving the firing of the neuron as a function of time formula_31. This phase equation is a partial differential equation (PDE). For a sufficiently small formula_32, a reduced phase model evaluated on the limit cycle formula_10 of the unperturbed system can be given, up to first order in formula_33, by formula_34 where the function formula_35 measures the normalized phase shift due to a small perturbation delivered at any point formula_36 on the limit cycle formula_10, and is called the phase sensitivity function or infinitesimal phase response curve. In order to analyze the reduced phase equation corresponding to the perturbed nonlinear system, we need to solve a PDE, which is not trivial. So we simplify it into an autonomous phase equation for formula_37, which can be analyzed more easily. Assuming that the mismatch between the frequencies formula_38 and formula_39 is sufficiently small so that formula_40, where formula_41 is formula_42, we can introduce a new phase function formula_43.
By the method of averaging, assuming that formula_44 varies little within one period formula_23, we obtain an approximate phase equation formula_45 where formula_46, and formula_47 is a formula_48-periodic function representing the effect of the periodic external forcing on the oscillator phase, defined by formula_49 The graph of this function formula_47 exhibits the dynamics of the approximated phase model. Examples of phase reduction. For a sufficiently small perturbation of a given nonlinear oscillator or a network of coupled oscillators, we can compute the corresponding phase sensitivity function or infinitesimal PRC formula_50. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
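As a concrete numerical check of the asymptotic-phase construction described above, consider a simple planar oscillator (a Stuart–Landau-type model chosen here for illustration, not taken from the article): dr/dt = r(1 − r²), dθ/dt = ω + c(1 − r²). Its limit cycle is r = 1, and the asymptotic phase is Φ(r, θ) = θ − c·ln r, since dΦ/dt = ω exactly along every trajectory; forward-Euler integration confirms that Φ advances at the constant rate ω even starting far from the cycle:

```python
import numpy as np

omega, c = 2.0 * np.pi, 0.5   # natural frequency and nonisochrony parameter (illustrative)
dt, steps = 1.0e-4, 20000     # forward-Euler step size and count (total time t = 2.0)

r, theta = 2.0, 0.3           # start well off the limit cycle r = 1
phi0 = theta - c * np.log(r)  # asymptotic phase: Φ = θ - c ln r

for _ in range(steps):
    dr = r * (1.0 - r**2)
    dtheta = omega + c * (1.0 - r**2)
    r += dt * dr
    theta += dt * dtheta

t = steps * dt
phi = theta - c * np.log(r)
# Φ has advanced by ω t regardless of the initial amplitude,
# while the trajectory itself has relaxed onto the limit cycle.
print(phi - phi0, omega * t)
```

Points on the same isochron of this model satisfy θ − c·ln r = const; the amplitude perturbation decays while the phase offset persists, which is exactly why the one-dimensional phase equation suffices.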
[ { "math_id": 0, "text": "\n\\frac{dx}{dt}=f(x), \n" }, { "math_id": 1, "text": "x\\in \\mathbb{R}^N" }, { "math_id": 2, "text": "f(x)" }, { "math_id": 3, "text": "\\varphi:\\mathbb{R}^N\\times \\mathbb{R} \\rightarrow \\mathbb{R}^N" }, { "math_id": 4, "text": "\\varphi(x_0,t)" }, { "math_id": 5, "text": "x(0)=x_0" }, { "math_id": 6, "text": " x=(V,n)\\in \\mathbb{R}^N" }, { "math_id": 7, "text": " V" }, { "math_id": 8, "text": " n" }, { "math_id": 9, "text": " (N-1)" }, { "math_id": 10, "text": "\\gamma" }, { "math_id": 11, "text": "\\Phi(x)" }, { "math_id": 12, "text": "x" }, { "math_id": 13, "text": "\\varphi=\\Phi(x)" }, { "math_id": 14, "text": " \\frac{d\\varphi}{dt}=\\omega " }, { "math_id": 15, "text": "\\omega=\\frac{2\\pi}{T_0}" }, { "math_id": 16, "text": "\n\\frac{d\\varphi}{dt}=\\nabla\\Phi(x)\\cdot f(x)=\\omega,\n" }, { "math_id": 17, "text": "\\nabla\\Phi(x)" }, { "math_id": 18, "text": "N" }, { "math_id": 19, "text": "\\Phi" }, { "math_id": 20, "text": "\n\\frac{dx(t)}{dt}=f(x)+\\varepsilon g(t),\n" }, { "math_id": 21, "text": " f(x)" }, { "math_id": 22, "text": "\\varepsilon g(t) " }, { "math_id": 23, "text": "T" }, { "math_id": 24, "text": "T_0" }, { "math_id": 25, "text": "\\Omega=2\\pi/T " }, { "math_id": 26, "text": "\\varepsilon=0" }, { "math_id": 27, "text": " \\gamma" }, { "math_id": 28, "text": " \\gamma " }, { "math_id": 29, "text": "\n\\frac{d\\varphi}{dt}=\\omega +\\varepsilon \\, \\nabla\\Phi(x)\\cdot g(t),\n" }, { "math_id": 30, "text": "g(t)" }, { "math_id": 31, "text": "t" }, { "math_id": 32, "text": "\\varepsilon>0" }, { "math_id": 33, "text": "\\varepsilon" }, { "math_id": 34, "text": "\n\\frac{d\\varphi}{dt}=\\omega + \\varepsilon \\, Z(\\varphi) \\cdot g(t),\n" }, { "math_id": 35, "text": "Z(\\varphi):=\\nabla\\Phi(\\gamma(t))" }, { "math_id": 36, "text": " x" }, { "math_id": 37, "text": "\\varphi" }, { "math_id": 38, "text": "\\omega" }, { "math_id": 39, "text": "\\Omega" }, { "math_id": 40, "text": 
"\\omega-\\Omega=\\varepsilon\\delta " }, { "math_id": 41, "text": "\\delta" }, { "math_id": 42, "text": "O(1)" }, { "math_id": 43, "text": " \\psi(t)=\\varphi(t)-\\Omega t" }, { "math_id": 44, "text": "\\psi(t)" }, { "math_id": 45, "text": "\n\\frac{d\\psi(t)}{dt}=\\Delta_\\varepsilon + \\varepsilon\\Gamma(\\psi),\n" }, { "math_id": 46, "text": "\\Delta_\\varepsilon=\\varepsilon\\delta " }, { "math_id": 47, "text": "\\Gamma(\\psi)" }, { "math_id": 48, "text": "2\\pi" }, { "math_id": 49, "text": "\n\\Gamma(\\psi)= \\frac 1 {2\\pi} \\int_0^{2\\pi}Z(\\psi+\\eta)\\cdot g\\left(\\frac\\eta\\Omega\\right) \\, d\\eta .\n" }, { "math_id": 50, "text": "Z(\\varphi)" } ]
https://en.wikipedia.org/wiki?curid=59600531
59601600
Squeeze flow
Squeeze flow (also called squeezing flow, squeezing film flow, or squeeze flow theory) is a type of flow in which a material is pressed out or deformed between two parallel plates or objects. First explored in 1874 by Josef Stefan, squeeze flow describes the outward movement of a droplet of material, its area of contact with the plate surfaces, and the effects of internal and external factors such as temperature, viscoelasticity, and heterogeneity of the material. Several squeeze flow models exist to describe Newtonian and non-Newtonian fluids undergoing squeeze flow under various geometries and conditions. Numerous applications across scientific and engineering disciplines including rheometry, welding engineering, and materials science provide examples of squeeze flow in practical use. Basic Assumptions. Conservation of mass (expressed as a continuity equation), the Navier-Stokes equations for conservation of momentum, and the Reynolds number provide the foundations for calculating and modeling squeeze flow. Such calculations typically assume an incompressible fluid, a two-dimensional system, negligible body forces, and negligible inertial forces. Relating applied force to material thickness: formula_0 Where formula_1 is the applied squeezing force, formula_2 is the length of the droplet, formula_3 is the fluid viscosity, formula_4 is the width of the assumed rectangular plate, formula_5 is the final height of the droplet, and formula_6 is the change in droplet height over time. To simplify most calculations, the applied force is assumed to be constant. Newtonian fluids. Several equations accurately model Newtonian droplet sizes under different initial conditions. Consideration of a single asperity, or surface protrusion, allows for measurement of a very specific cross-section of a droplet.
To measure macroscopic squeeze flow effects, models exist for the two most common plate geometries: circular and rectangular plate squeeze flows. Single asperity. For single asperity squeeze flow: formula_7 Where formula_8 is the initial height of the droplet, formula_5 is the final height of the droplet, formula_1 is the applied squeezing force, formula_9 is the squeezing time, formula_3 is the fluid viscosity, formula_4 is the width of the assumed rectangular plate, and formula_10 is the initial length of the droplet. Based on conservation of mass calculations, the droplet width is inversely proportional to droplet height; as the width increases, the height decreases in response to squeezing forces. Circular plate. For circular plate squeeze flow: formula_11 formula_12 is the radius of the circular plate. Rectangular plate. For rectangular plate squeeze flow: formula_13 These calculations assume a melt layer with a length much larger than the sample width and thickness. Non-Newtonian fluids. Simplifying calculations for Newtonian fluids allows for basic analysis of squeeze flow, but many polymers exhibit properties of non-Newtonian fluids, such as viscoelastic characteristics, under deformation. The power law fluid model is sufficient to describe behaviors above the melting temperature for semicrystalline thermoplastics or the glass transition temperature for amorphous thermoplastics, and the Bingham fluid model provides calculations based on variations in yield stress. Power law fluid. For squeeze flow in a power law fluid: formula_14 Where formula_15 (or formula_16) is the "flow consistency index" and formula_17 is the dimensionless "flow behavior index". formula_18 Where formula_15 is the "flow consistency index", formula_19 is the "initial flow consistency index", formula_20 is the activation energy, formula_12 is the universal gas constant, and formula_21 is the absolute temperature.
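As a numeric illustration of the Newtonian circular-plate relation and the temperature dependence of the consistency index given above (all parameter values here are invented purely for illustration, not measured):

```python
import numpy as np

# Illustrative parameters in SI units (assumed, not from the text).
F = 10.0        # applied squeezing force, N
eta = 1.0e3     # Newtonian viscosity, Pa·s
R_plate = 0.01  # circular plate radius, m
h0 = 1.0e-3     # initial droplet half-height, m

def height_ratio_circular(t):
    """h0/h for Newtonian circular-plate squeeze flow at time t (seconds)."""
    return (1.0 + 16.0 * F * t * h0**2 / (3.0 * np.pi * eta * R_plate**4)) ** 0.5

def consistency_index(T, m0=1.0e4, Ea=3.0e4):
    """Flow consistency index m = m0 exp(-Ea / (R T)), as written in the text."""
    R_gas = 8.314  # universal gas constant, J/(mol·K)
    return m0 * np.exp(-Ea / (R_gas * T))

# Under a constant force the droplet flattens monotonically: h0/h grows with t.
for t in (0.0, 1.0, 10.0):
    print(t, height_ratio_circular(t))
print(consistency_index(450.0))  # consistency index evaluated at 450 K
```

At t = 0 the ratio is exactly 1 (no squeezing yet), and it increases without bound as the constant force keeps thinning the film.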
During experimentation to determine the accuracy of the power law fluid model, observations showed that modeling slow squeeze flow generated inaccurate power law constants (formula_15 and formula_17) using a standard viscometer, and fast squeeze flow demonstrated that polymers may exhibit better lubrication than current constitutive models predict. The current empirical model for power law fluids is relatively accurate for modeling inelastic flows, but certain kinematic flow assumptions and incomplete understanding of polymeric lubrication properties tend to provide inaccurate modeling of power law fluids. Bingham fluid. Bingham fluids exhibit uncommon characteristics during squeeze flow. While undergoing compression, Bingham fluids should fail to move and act as a solid until achieving a yield stress; however, as the parallel plates move closer together, the fluid shows some radial movement. One study proposes a “biviscosity” model in which the Bingham fluid retains some unyielded regions that maintain solid-like properties, while other regions yield and allow for some compression and outward movement. formula_22 Where formula_23 is the "known viscosity" of the Bingham fluid, formula_24 is the "paradoxical viscosity" of the solid-like state, and formula_25 is the "biviscosity region stress". To determine this new stress: formula_26 Where formula_27 is the "yield stress" and formula_28 is the dimensionless "viscosity ratio". If formula_29, the fluid exhibits Newtonian behavior; as formula_30, the Bingham model applies. Applications. Squeeze flow is prevalent in several science and engineering fields. Modeling and experimentation assist with understanding the complexities of squeeze flow during processes such as rheological testing, hot plate welding, and composite material joining. Rheological testing. Squeeze flow rheometry allows for evaluation of polymers under wide ranges of temperatures, shear rates, and flow indexes.
Parallel plate plastometers provide analysis for high viscosity materials such as rubber and glass, cure times for epoxy resins, and fiber-filled suspension flows. While viscometers provide useful results for squeeze flow measurements, testing conditions such as applied rotation rates, material composition, and fluid flow behaviors under shear may require the use of rheometers or other novel setups to obtain accurate data. Hot plate welding. During conventional hot plate welding, a successful joining phase depends on proper maintenance of squeeze flow to ensure that pressure and temperature create an ideal weld. Excessive pressure causes squeeze-out of valuable material and weakens the bond due to fiber realignment in the melt layer, while failure to allow cooling to room temperature creates weak, brittle welds that crack or break completely during use. Composite material joining. Prevalent in the aerospace and automotive industries, composites serve as expensive, yet mechanically strong, materials in the construction of several types of aircraft and vehicles. While aircraft parts are typically composed of thermosetting polymers, thermoplastics may serve as an alternative that permits increased manufacturing of these stronger materials, owing to their ability to be melted and their relatively inexpensive raw materials. Characterization and testing of thermoplastic composites experiencing squeeze flow allow for study of fiber orientations within the melt and final products to determine weld strength. Fiber strand length and size show significant effects on material strength, and squeeze flow causes fibers to orient along the load direction while being perpendicular to the joining direction to achieve the same final properties as thermosetting composites.
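Returning to the Bingham "biviscosity" model quoted earlier, its piecewise stress law can be written as a small function of the shear rate. In this sketch the two branches are matched at the shear rate where they give the same stress (a continuity choice made here for illustration), and all parameter values are invented:

```python
def biviscosity_stress(gamma_dot, eta1, eta2, tau0):
    """Biviscosity (two-viscosity) Bingham stress model, as a sketch.

    eta1: large "paradoxical" viscosity of the quasi-solid, unyielded branch
    eta2: known (plastic) viscosity of the yielded branch
    tau0: yield stress; tau1 follows from tau0 = tau1 * (1 - eps)
    """
    eps = eta2 / eta1                 # dimensionless viscosity ratio
    tau1 = tau0 / (1.0 - eps)         # biviscosity region stress
    gamma1 = tau1 / (eta1 - eta2)     # shear rate where the two branches meet
    if gamma_dot <= gamma1:
        return eta1 * gamma_dot       # unyielded, solid-like region
    return eta2 * gamma_dot + tau1    # yielded region

# As eps -> 0 (eta1 >> eta2), tau1 -> tau0 and the yielded branch tends to the
# classical Bingham law tau = tau0 + eta2 * gamma_dot.
print(biviscosity_stress(1.0, eta1=100.0, eta2=1.0, tau0=10.0))
```

With eps = 1 the two branches coincide and the response is Newtonian, matching the limiting cases stated in the text.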
[ { "math_id": 0, "text": "F=-\\frac{4*L^3*\\eta*W}{h^3}{dh \\over dt}" }, { "math_id": 1, "text": "F" }, { "math_id": 2, "text": "2L" }, { "math_id": 3, "text": "\\eta" }, { "math_id": 4, "text": "W" }, { "math_id": 5, "text": "2h" }, { "math_id": 6, "text": "{dh \\over dt}" }, { "math_id": 7, "text": "\\frac{h_0}{h}=\\left (1+\\frac{5*F*t*h_0^2}{4*\\eta*W*L_0^3}\\right )^{1/5}" }, { "math_id": 8, "text": "2h_0" }, { "math_id": 9, "text": "t" }, { "math_id": 10, "text": "2L_0" }, { "math_id": 11, "text": "\\frac{h_0}{h}=\\left (1+\\frac{16*F*t*h_0^2}{3*\\pi*\\eta*R^4}\\right )^{1/2}" }, { "math_id": 12, "text": "R" }, { "math_id": 13, "text": "\\frac{h_0}{h}=\\left (1+\\frac{F*t*h_0^2}{2*\\mu*W*L^3}\\right )^{1/2}" }, { "math_id": 14, "text": "\\frac{h_0}{h}=\\left ( 1+t*(\\frac{2n+3}{4n+2})(\\frac{(4*h_0*L_0)^{n+1}*F*(n+2)}{(2*L_0)^{2n+3}*W*m})^{1/n}\\right )^{n/2n+3}" }, { "math_id": 15, "text": "m" }, { "math_id": 16, "text": "K" }, { "math_id": 17, "text": "n" }, { "math_id": 18, "text": "m=m_0*exp\\left ( \\frac{-E_a}{R*T} \\right )" }, { "math_id": 19, "text": "m_0" }, { "math_id": 20, "text": "E_a" }, { "math_id": 21, "text": "T" }, { "math_id": 22, "text": "\\tau = \\begin{cases} \\eta_2*{du \\over dy}+\\tau_1, & \\text{if }\\tau\\geq\\tau_1 \\\\ \\eta_1*{du \\over dy}, & \\text{if }\\tau<\\tau_1 \\end{cases}" }, { "math_id": 23, "text": "\\eta_2" }, { "math_id": 24, "text": "\\eta_1" }, { "math_id": 25, "text": "\\tau_1" }, { "math_id": 26, "text": "\\tau_0=\\tau_1(1-\\epsilon)" }, { "math_id": 27, "text": "\\tau_0" }, { "math_id": 28, "text": "\\epsilon=\\frac{\\eta_2}{\\eta_1}" }, { "math_id": 29, "text": "\\epsilon=1" }, { "math_id": 30, "text": "\\epsilon\\rightarrow0" } ]
https://en.wikipedia.org/wiki?curid=59601600
59609094
Lewis' law
Observed property of epithelial cells Lewis' law gives a relationship between the size and the shape of epithelial cells. It states that the average apical area formula_0 of an epithelial cell is linearly related to its neighbor number formula_1. It is a phenomenological law that was first described in the cucumber epidermis by the morphologist Frederic Thomas Lewis in 1928. The simplest version of Lewis' law can be expressed as formula_2, which reads: the average apical area of a cell with formula_1 neighbors, divided by the average apical area of all cells, increases linearly with the neighbor number. While neighbor number distributions change throughout organogenesis, the average neighbor number of epithelial cells is formula_3, which can be traced back to Euler's formula for polygons. Discovery. Frederic Thomas Lewis noticed that epidermal cells display a patterning similar to froths, which led him to quantify and analyze the sizes and shapes of epidermal cells. Confirmation and mechanism. A variety of empirical studies in different epithelial tissues have confirmed Lewis' law. It has been suggested that the emergence of Lewis' law on the apical surface of epithelia is a result of the concurrence of several competing constraints on cell packing. According to this theory, the observed tissue-specific polygon distributions and Lewis' law arise as a compromise in order to maintain tissue integrity. Importance. In order to understand morphogenetic events, i.e. the growth and shaping of tissues and organs, it is necessary to analyze the packing of cells into tissues. In that context, an analysis of patterning processes can help to identify the underlying mechanisms that drive morphogenesis. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
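The simplest form of the law, formula_2, can be evaluated directly; the snippet below is a minimal illustration rather than an analysis of real segmentation data:

```python
# Lewis' law: normalized average apical area of a cell with n neighbors,
# A_n / A_mean = (n - 2) / 4

def lewis_law(n):
    """Predicted apical area (relative to the tissue mean) for n neighbors."""
    return (n - 2) / 4

# A hexagonal cell (n = 6), the average case, has exactly the mean area:
print(lewis_law(6))  # 1.0
```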
[ { "math_id": 0, "text": "\\bar{A}_n" }, { "math_id": 1, "text": "n" }, { "math_id": 2, "text": "\\frac{\\overline{A_n}}{\\overline{A}}=\\frac{n-2}{4}" }, { "math_id": 3, "text": "\\bar{n}\\approx 6" } ]
https://en.wikipedia.org/wiki?curid=59609094
59611
Ionization
Process by which atoms or molecules acquire charge by gaining or losing electrons Ionization (or ionisation specifically in Britain, Ireland, Australia and New Zealand) is the process by which an atom or a molecule acquires a negative or positive charge by gaining or losing electrons, often in conjunction with other chemical changes. The resulting electrically charged atom or molecule is called an ion. Ionization can result from the loss of an electron after collisions with subatomic particles, collisions with other atoms, molecules, electrons, positrons, protons, antiprotons and ions, or through the interaction with electromagnetic radiation. Heterolytic bond cleavage and heterolytic substitution reactions can result in the formation of ion pairs. Ionization can occur through radioactive decay by the internal conversion process, in which an excited nucleus transfers its energy to one of the inner-shell electrons causing it to be ejected. Uses. Everyday examples of gas ionization occur within a fluorescent lamp or other electrical discharge lamps. It is also used in radiation detectors such as the Geiger-Müller counter or the ionization chamber. The ionization process is widely used in a variety of equipment in fundamental science (e.g., mass spectrometry) and in medical treatment (e.g., radiation therapy). It is also widely used for air purification, though studies have shown harmful effects of this application. Production of ions. Negatively charged ions are produced when a free electron collides with an atom and is subsequently trapped inside the electric potential barrier, releasing any excess energy. The process is known as electron capture ionization. Positively charged ions are produced by transferring an amount of energy to a bound electron in a collision with charged particles (e.g. ions, electrons or positrons) or with photons. The threshold amount of the required energy is known as ionization potential. 
The study of such collisions is of fundamental importance with regard to the few-body problem, which is one of the major unsolved problems in physics. Kinematically complete experiments, i.e. experiments in which the complete momentum vectors of all collision fragments (the scattered projectile, the recoiling target-ion, and the ejected electron) are determined, have contributed to major advances in the theoretical understanding of the few-body problem in recent years. Adiabatic ionization. Adiabatic ionization is a form of ionization in which an electron is removed from or added to an atom or molecule in its lowest energy state to form an ion in its lowest energy state. The Townsend discharge is a good example of the creation of positive ions and free electrons due to ion impact. It is a cascade reaction involving electrons in a region with a sufficiently high electric field in a gaseous medium that can be ionized, such as air. Following an original ionization event, caused by, for instance, ionizing radiation, the positive ion drifts towards the cathode, while the free electron drifts towards the anode of the device. If the electric field is strong enough, the free electron gains sufficient energy to liberate a further electron when it next collides with another molecule. The two free electrons then travel towards the anode and gain sufficient energy from the electric field to cause impact ionization when the next collisions occur; and so on. This is effectively a chain reaction of electron generation, and is dependent on the free electrons gaining sufficient energy between collisions to sustain the avalanche. Ionization efficiency is the ratio of the number of ions formed to the number of electrons or photons used. Ionization energy of atoms. The trend in the ionization energy of atoms is often used to demonstrate the periodic behavior of atoms with respect to the atomic number, as summarized by ordering atoms in Mendeleev's table. 
This is a valuable tool for establishing and understanding the ordering of electrons in atomic orbitals without going into the details of wave functions or the ionization process. An example is presented in the figure to the right. The periodic abrupt decrease in ionization potential after rare gas atoms, for instance, indicates the emergence of a new shell in alkali metals. In addition, the local maxima in the ionization energy plot, moving from left to right in a row, are indicative of s, p, d, and f sub-shells. Semi-classical description of ionization. Classical physics and the Bohr model of the atom can qualitatively explain photoionization and collision-mediated ionization. In these cases, during the ionization process, the energy of the electron exceeds the energy difference of the potential barrier it is trying to pass. The classical description, however, cannot describe tunnel ionization since the process involves the passage of the electron through a classically forbidden potential barrier. Quantum mechanical description of ionization. The interaction of atoms and molecules with sufficiently strong laser pulses or with other charged particles leads to ionization into singly or multiply charged ions. The ionization rate, i.e. the ionization probability in unit time, can be calculated using quantum mechanics. (Classical methods are also available, such as the Classical Trajectory Monte Carlo (CTMC) method, but these are not universally accepted and are often criticized by the community.) Two classes of quantum mechanical methods exist: perturbative methods, and non-perturbative methods such as time-dependent coupled-channel or time-independent close-coupling methods, in which the wave function is expanded in a finite basis set. Numerous basis options are available, e.g. B-splines or Coulomb wave packets. Another non-perturbative method is to solve the corresponding Schrödinger equation fully numerically on a lattice. 
In general, the analytic solutions are not available, and the approximations required for manageable numerical calculations do not provide accurate enough results. However, when the laser intensity is sufficiently high, the detailed structure of the atom or molecule can be ignored and an analytic solution for the ionization rate is possible. Tunnel ionization. Tunnel ionization is ionization due to quantum tunneling. In classical ionization, an electron must have enough energy to make it over the potential barrier, but quantum tunneling allows the electron simply to go through the potential barrier instead of going all the way over it because of the wave nature of the electron. The probability of an electron's tunneling through the barrier drops off exponentially with the width of the potential barrier. Therefore, an electron with a higher energy can make it further up the potential barrier, leaving a much thinner barrier to tunnel through and thus a greater chance to do so. In practice, tunnel ionization is observable when the atom or molecule is interacting with strong near-infrared laser pulses. This process can be understood as one in which a bound electron, through the absorption of more than one photon from the laser field, is ionized. This picture is generally known as multiphoton ionization (MPI). Keldysh modeled the MPI process as a transition of the electron from the ground state of the atom to the Volkov states. In this model the perturbation of the ground state by the laser field is neglected and the details of atomic structure in determining the ionization probability are not taken into account. The major difficulty with Keldysh's model was its neglect of the effects of Coulomb interaction on the final state of the electron. As observed in the figure, the Coulomb field is not very small in magnitude compared to the potential of the laser at larger distances from the nucleus. 
This is in contrast to the approximation made by neglecting the potential of the laser at regions near the nucleus. Perelomov et al. included the Coulomb interaction at larger distances from the nucleus. Their model (which we call the PPT model) was derived for a short-range potential and includes the effect of the long-range Coulomb interaction through the first-order correction in the quasi-classical action. Larochelle et al. have compared the theoretically predicted ion yield versus intensity curves of rare gas atoms interacting with a Ti:Sapphire laser with experimental measurements. They have shown that the total ionization rate predicted by the PPT model fits the experimental ion yields very well for all rare gases in the intermediate regime of the Keldysh parameter. The rate of MPI of an atom with ionization potential formula_0 in a linearly polarized laser field of frequency formula_1 is given by formula_2 where The coefficients formula_7, formula_8 and formula_9 are given by formula_10 The coefficient formula_11 is given by formula_12 where formula_13 Quasi-static tunnel ionization. Quasi-static tunneling (QST) is ionization whose rate can be satisfactorily predicted by the ADK model, i.e. the limit of the PPT model when formula_14 approaches zero. The rate of QST is given by formula_15 Compared to formula_16, the absence of the summation over n, whose terms represent different above-threshold ionization (ATI) peaks, is remarkable. Strong field approximation for the ionization rate. The calculations of PPT are done in the E-gauge, meaning that the laser field is taken as electromagnetic waves. The ionization rate can also be calculated in A-gauge, which emphasizes the particle nature of light (absorbing multiple photons during ionization). This approach was adopted in Krainov's model, based on the earlier works of Faisal and Reiss. The resulting rate is given by formula_17 where: Population trapping. 
In calculating the rate of MPI of atoms, only transitions to the continuum states are considered. Such an approximation is acceptable as long as there is no multiphoton resonance between the ground state and some excited states. However, in the real situation of interaction with pulsed lasers, as the laser intensity evolves, the different Stark shifts of the ground and excited states create the possibility that some excited state will go into multiphoton resonance with the ground state. Within the dressed atom picture, the ground state dressed by formula_27 photons and the resonant state undergo an avoided crossing at the resonance intensity formula_28. The minimum distance, formula_29, at the avoided crossing is proportional to the generalized Rabi frequency, formula_30, coupling the two states. According to Story et al., the probability of remaining in the ground state, formula_31, is given by formula_32 where formula_33 is the time-dependent energy difference between the two dressed states. In interaction with a short pulse, if the dynamic resonance is reached in the rising or the falling part of the pulse, the population practically remains in the ground state and the effect of multiphoton resonances may be neglected. However, if the states go into resonance at the peak of the pulse, where formula_34, then the excited state is populated. After being populated, since the ionization potential of the excited state is small, it is expected that the electron will be instantly ionized. In 1992, de Boer and Muller showed that Xe atoms subjected to short laser pulses could survive in the highly excited states 4f, 5f, and 6f. These states were believed to have been excited by the dynamic Stark shift of the levels into multiphoton resonance with the field during the rising part of the laser pulse. Subsequent evolution of the laser pulse did not completely ionize these states, leaving behind some highly excited atoms. 
We shall refer to this phenomenon as "population trapping". We mention the theoretical result that incomplete ionization occurs whenever there is parallel resonant excitation into a common level with ionization loss. We consider a state such as 6f of Xe, which consists of 7 quasi-degenerate levels in the range of the laser bandwidth. These levels along with the continuum constitute a lambda system. The mechanism of the lambda-type trapping is schematically presented in the figure. At the rising part of the pulse (a) the excited state (with two degenerate levels 1 and 2) is not in multiphoton resonance with the ground state. The electron is ionized through multiphoton coupling with the continuum. As the intensity of the pulse is increased the excited state and the continuum are shifted in energy due to the Stark shift. At the peak of the pulse (b) the excited states go into multiphoton resonance with the ground state. As the intensity starts to decrease (c), the two states are coupled through the continuum and the population is trapped in a coherent superposition of the two states. Under subsequent action of the same pulse, due to interference in the transition amplitudes of the lambda system, the field cannot ionize the population completely and a fraction of the population will be trapped in a coherent superposition of the quasi-degenerate levels. According to this explanation the states with higher angular momentum – with more sublevels – would have a higher probability of trapping the population. In general the strength of the trapping will be determined by the strength of the two-photon coupling between the quasi-degenerate levels via the continuum. In 1996, using a very stable laser and by minimizing the masking effects of the focal region expansion with increasing intensity, Talebpour et al. observed structures on the curves of singly charged ions of Xe, Kr and Ar. These structures were attributed to electron trapping in the strong laser field. 
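The ground-state survival probability quoted earlier (formula_32) has the familiar Landau–Zener form. The sketch below evaluates it for hypothetical values of the avoided-crossing gap W_m and the sweep rate dW/dt (in consistent units):

```python
import math

def ground_state_survival(W_m, dW_dt):
    """P_g = exp(-2*pi*W_m**2 / |dW/dt|): probability of remaining in the
    ground state as the dressed levels sweep through the avoided crossing.
    """
    return math.exp(-2.0 * math.pi * W_m**2 / abs(dW_dt))

# Fast sweep through a weak crossing: population stays in the ground state
print(ground_state_survival(W_m=0.001, dW_dt=1.0))
# Slow sweep through a strong crossing: population is transferred
print(ground_state_survival(W_m=1.0, dW_dt=0.1))
```

The limits match the qualitative discussion above: a resonance crossed quickly on the pulse edge leaves the population in the ground state, while a crossing lingered over near the pulse peak transfers it.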
A more unambiguous demonstration of population trapping has been reported by T. Morishita and C. D. Lin. Non-sequential multiple ionization. The phenomenon of non-sequential ionization (NSI) of atoms exposed to intense laser fields has been a subject of many theoretical and experimental studies since 1983. The pioneering work began with the observation of a "knee" structure on the Xe2+ ion signal versus intensity curve by L’Huillier et al. From the experimental point of view, the NS double ionization refers to processes which somehow enhance the rate of production of doubly charged ions by a huge factor at intensities below the saturation intensity of the singly charged ion. Many, on the other hand, prefer to define the NSI as a process by which two electrons are ionized nearly simultaneously. This definition implies that apart from the sequential channel formula_35 there is another channel formula_36 which is the main contribution to the production of doubly charged ions at lower intensities. The first observation of triple NSI in argon interacting with a 1 μm laser was reported by Augst et al. Later, in a systematic study of the NSI of all rare gas atoms, the quadruple NSI of Xe was observed. The most important conclusion of this study was the observation of the following relation between the rate of NSI to any charge state and the rate of tunnel ionization (predicted by the ADK formula) to the previous charge states: formula_37 where formula_38 is the rate of quasi-static tunneling to the i'th charge state and formula_39 are some constants depending on the wavelength of the laser (but not on the pulse duration). Two models have been proposed to explain non-sequential ionization: the shake-off model and the electron re-scattering model. 
The shake-off (SO) model, first proposed by Fittinghoff et al., is adopted from the field of ionization of atoms by X-rays and electron projectiles, where the SO process is one of the major mechanisms responsible for the multiple ionization of atoms. The SO model describes the NSI process as a mechanism where one electron is ionized by the laser field and the departure of this electron is so rapid that the remaining electrons do not have enough time to adjust themselves to the new energy states. Therefore, there is a certain probability that, after the ionization of the first electron, a second electron is excited to states with higher energy (shake-up) or even ionized (shake-off). We should mention that, until now, there has been no quantitative calculation based on the SO model, and the model is still qualitative. The electron rescattering model was independently developed by Kuchiev, Schafer "et al", Corkum, Becker and Faisal and Faisal and Becker. The principal features of the model can be understood easily from Corkum's version. Corkum's model describes the NS ionization as a process whereby an electron is tunnel ionized. The electron then interacts with the laser field where it is accelerated away from the nuclear core. If the electron has been ionized at an appropriate phase of the field, it will pass by the position of the remaining ion half a cycle later, where it can free an additional electron by electron impact. Only half of the time is the electron released with the appropriate phase; the other half of the time, it never returns to the nuclear core. The maximum kinetic energy that the returning electron can have is 3.17 times the ponderomotive potential (formula_40) of the laser. Corkum's model places a cut-off limit on the minimum intensity (formula_40 is proportional to intensity) where ionization due to re-scattering can occur. The re-scattering model in Kuchiev's version (Kuchiev's model) is quantum mechanical. 
The basic idea of the model is illustrated by Feynman diagrams in figure a. First, both electrons are in the ground state of an atom. The lines marked a and b describe the corresponding atomic states. Then the electron a is ionized. The beginning of the ionization process is shown by the intersection with a sloped dashed line, where the MPI occurs. The propagation of the ionized electron in the laser field, during which it absorbs other photons (ATI), is shown by the full thick line. The collision of this electron with the parent atomic ion is shown by a vertical dotted line representing the Coulomb interaction between the electrons. The state marked with c describes the ion excitation to a discrete or continuum state. Figure b describes the exchange process. Kuchiev's model, contrary to Corkum's model, does not predict any threshold intensity for the occurrence of NS ionization. Kuchiev did not include the Coulomb effects on the dynamics of the ionized electron. This resulted in the underestimation of the double ionization rate by a huge factor. Obviously, in the approach of Becker and Faisal (which is equivalent to Kuchiev's model in spirit), this drawback does not exist. In fact, their model is more exact and does not suffer from the large number of approximations made by Kuchiev. Their calculated results fit the experimental results of Walker et al. very well. Becker and Faisal have been able to fit the experimental results on the multiple NSI of rare gas atoms using their model. As a result, the electron re-scattering can be taken as the main mechanism for the occurrence of the NSI process. Multiphoton ionization of inner-valence electrons and fragmentation of polyatomic molecules. The ionization of inner-valence electrons is responsible for the fragmentation of polyatomic molecules in strong laser fields. 
According to a qualitative model, the dissociation of the molecules occurs through a three-step mechanism. The short-pulse-induced molecular fragmentation may be used as an ion source for high-performance mass spectrometry. The selectivity provided by a short-pulse-based source is superior to that expected when using the conventional electron-ionization-based sources, in particular when the identification of optical isomers is required. Kramers–Henneberger frame. The Kramers–Henneberger frame is the non-inertial frame moving with the free electron under the influence of the harmonic laser pulse, obtained by applying a translation to the laboratory frame equal to the quiver motion of a classical electron in the laboratory frame. In other words, in the Kramers–Henneberger frame the classical electron is at rest. Starting in the lab frame (velocity gauge), we may describe the electron with the Hamiltonian: formula_41 In the dipole approximation, the quiver motion of a classical electron in the laboratory frame for an arbitrary field can be obtained from the vector potential of the electromagnetic field: formula_42 where formula_43 for a monochromatic plane wave. By applying a transformation to the laboratory frame equal to the quiver motion formula_44 one moves to the ‘oscillating’ or ‘Kramers–Henneberger’ frame, in which the classical electron is at rest. By a phase-factor transformation, made for convenience, one obtains the ‘space-translated’ Hamiltonian, which is unitarily equivalent to the lab-frame Hamiltonian and contains the original potential centered on the oscillating point formula_45: formula_46 The utility of the KH frame lies in the fact that in this frame the laser-atom interaction can be reduced to the form of an oscillating potential energy, where the natural parameters describing the electron dynamics are formula_47 and formula_48 (sometimes called the ‘excursion amplitude’, obtained from formula_44). 
From here one can apply Floquet theory to calculate quasi-stationary solutions of the TDSE. In high-frequency Floquet theory, to lowest order in formula_49, the system reduces to the so-called ‘structure equation’, which has the form of a typical energy-eigenvalue Schrödinger equation containing the ‘dressed potential’ formula_50 (the cycle-average of the oscillating potential). The interpretation of the presence of formula_51 is as follows: in the oscillating frame, the nucleus has an oscillatory motion of trajectory formula_45 and formula_52 can be seen as the potential of the smeared-out nuclear charge along its trajectory. The KH frame is thus employed in theoretical studies of strong-field ionization and atomic stabilization (a predicted phenomenon in which the ionization probability of an atom in a high-intensity, high-frequency field actually decreases for intensities above a certain threshold) in conjunction with high-frequency Floquet theory. Dissociation – distinction. A substance may dissociate without necessarily producing ions. As an example, the molecules of table sugar dissociate in water (sugar is dissolved) but exist as intact neutral entities. Another subtle event is the dissociation of sodium chloride (table salt) into sodium and chlorine ions. Although it may seem to be a case of ionization, in reality the ions already exist within the crystal lattice. When salt is dissociated, its constituent ions are simply surrounded by water molecules and their effects are visible (e.g. the solution becomes electrolytic). However, no transfer or displacement of electrons occurs. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": " E_i " }, { "math_id": 1, "text": " \\omega " }, { "math_id": 2, "text": "W_{PPT} =\n \\left|C_{n^* l^*}\\right|^2 \\sqrt{\\frac{6}{\\pi}} f_{lm}\n E_i \\left(\\frac{2}{F} \\left(2E_i\\right)^{\\frac{3}{2}}\\right)^{2n^* - |m|- \\frac{3}{2}}\n \\left(1 + \\gamma^2\\right)^{\\left|\\frac{m}{2}\\right|+ \\frac{3}{4}}\n A_m (\\omega, \\gamma) e^{-\\frac{2}{F}\\left(2E_i\\right)^{\\frac{3}{2}} g\\left(\\gamma\\right)}\n" }, { "math_id": 3, "text": " \\gamma=\\frac{\\omega \\sqrt{2E_i}}{F} " }, { "math_id": 4, "text": " n^*=\\frac{\\sqrt{2E_i}}{Z^2} " }, { "math_id": 5, "text": " F " }, { "math_id": 6, "text": " l^*=n^* - 1 " }, { "math_id": 7, "text": " f_{lm} " }, { "math_id": 8, "text": " g(\\gamma) " }, { "math_id": 9, "text": " C_{n^* l^*} " }, { "math_id": 10, "text": "\\begin{align}\n f_{lm} &= \\frac{(2l + 1)(l + |m|)!}{2^m |m|!(l - |m|)!} \\\\\n g(\\gamma) &= \\frac{3}{2\\gamma} \\left(1 + \\frac{1}{2\\gamma^2} \\sinh^{-1}(\\gamma) - \\frac{\\sqrt{1 + \\gamma^2}}{2\\gamma}\\right) \\\\\n |C_{n^* l^*}|^2 &= \\frac{2^{2n^*}}{n^* \\Gamma(n^* + l^* + 1) \\Gamma(n^* - l^*)}\n\\end{align}" }, { "math_id": 11, "text": " A_m (\\omega, \\gamma)" }, { "math_id": 12, "text": "\n A_m (\\omega, \\gamma) =\n \\frac{4}{3\\pi} \\frac{1}{|m|!}\n \\frac{\\gamma^2}{1 + \\gamma^2}\n \\sum_{n>v}^\\infty e^{-(n - v) \\alpha(\\gamma)}\n w_m \\left(\\sqrt{\\frac{2\\gamma}{\\sqrt{1 + \\gamma^2}} (n - v)}\\right)\n" }, { "math_id": 13, "text": "\\begin{align}\n w_m(x) &= e^{-x^2} \\int_0^x (x^2 - y^2)^m e^{y^2}\\,dy \\\\\n \\alpha(\\gamma) &= 2\\left(\\sinh^{-1}(\\gamma) - \\frac{\\gamma}{\\sqrt{1 + \\gamma^2}}\\right) \\\\\n v &= \\frac{E_i}{\\omega} \\left(1 + \\frac{2}{\\gamma^2}\\right)\n\\end{align}" }, { "math_id": 14, "text": " \\gamma " }, { "math_id": 15, "text": "W_{ADK} =\n \\left|C_{n^* l^*}\\right|^2 \\sqrt{\\frac{6}{\\pi}}\n f_{lm} E_i \\left(\\frac{2}{F} \\left(2E_i\\right)^{\\frac{3}{2}}\\right)^{2n^* - |m|- \\frac{3}{2}}\n e^{-\\frac{2}{3F} 
\\left(2E_i\\right)^{\\frac{3}{2}}}\n" }, { "math_id": 16, "text": "W_{PPT}" }, { "math_id": 17, "text": "W_{KRA} =\n \\sum_{n=N}^{\\infty} 2 \\pi \\omega^2 p \\left(n - n_\\mathrm{osc}\\right)^2 \\int \\mathrm{d}\\Omega\n \\left|FT \\left(I_{KAR} \\Psi \\left(\\mathbf{r}\\right)\\right)\\right|^2\n J_n^2 \\left(n_f, \\frac{n_\\mathrm{osc}}{2}\\right)\n" }, { "math_id": 18, "text": " n_{i} =E_i/\\omega," }, { "math_id": 19, "text": "n_\\mathrm{osc}=U_{p}/ \\omega " }, { "math_id": 20, "text": "U_p" }, { "math_id": 21, "text": "N=[n_i + n_\\mathrm{osc}] " }, { "math_id": 22, "text": "J_{n}(u,v)" }, { "math_id": 23, "text": "p=\\sqrt{ 2 \\omega (n-n_\\mathrm{osc}- n_i)}," }, { "math_id": 24, "text": "n_{f}=2 \\sqrt { n_\\mathrm{osc} / \\omega} p \\cos(\\theta)" }, { "math_id": 25, "text": " \\theta" }, { "math_id": 26, "text": " I_{KAR}=\\left(\\frac {2 Z^2}{n^2 F r}\\right)^n " }, { "math_id": 27, "text": "m" }, { "math_id": 28, "text": "I_r" }, { "math_id": 29, "text": "V_m" }, { "math_id": 30, "text": "\\Gamma(t) =\\Gamma_m I(t)^{m/2}" }, { "math_id": 31, "text": "P_g" }, { "math_id": 32, "text": "P_g = \\exp\\left(-\\frac{2\\pi W_m^2}{\\mathrm{d}W/\\mathrm{d}t}\\right)" }, { "math_id": 33, "text": "W" }, { "math_id": 34, "text": "\\mathrm{d}W/\\mathrm{d}t = 0" }, { "math_id": 35, "text": " A+L -> A^+ + L -> A^{++} " }, { "math_id": 36, "text": " A+L-> A^{++} " }, { "math_id": 37, "text": " W_{NS}(A^{n+})= \\sum_{i=1}^{n-1} \\alpha_n\\left(\\lambda\\right) W_{ADK}\\left(A^{i+}\\right)" }, { "math_id": 38, "text": "W_{ADK}\\left(A^{i+}\\right)" }, { "math_id": 39, "text": "\\alpha_n(\\lambda)" }, { "math_id": 40, "text": " U_p " }, { "math_id": 41, "text": " H_{lab}=\\frac{1}{2}(\\mathbf{P}+\\frac{1}{c}\\mathbf{A}(t))^2 + V(r)" }, { "math_id": 42, "text": " \\mathbf{\\alpha}(t) \\equiv \\frac{1}{c} \\int^{t}_{0}\\mathbf{A}(t')dt' = (\\alpha_0/E_0)\\mathbf{E}(t)" }, { "math_id": 43, "text": "\\alpha_0 \\equiv E_0\\omega^{-2} " }, { "math_id": 44, "text": 
"\\mathbf{\\alpha}(t)" }, { "math_id": 45, "text": "-\\mathbf{\\alpha}(t)" }, { "math_id": 46, "text": "H_{KH}=\\frac{1}{2}\\mathbf{P}^2 + V(\\mathbf{r} + \\mathbf{\\alpha}(t)) " }, { "math_id": 47, "text": "\\omega " }, { "math_id": 48, "text": " \\alpha_0 " }, { "math_id": 49, "text": "\\omega^{-1}" }, { "math_id": 50, "text": " V_0(\\alpha_0,\\mathbf{r}) " }, { "math_id": 51, "text": " V_0 " }, { "math_id": 52, "text": "V_0" } ]
https://en.wikipedia.org/wiki?curid=59611
5961115
Surgery theory
In mathematics, specifically in geometric topology, surgery theory is a collection of techniques used to produce one finite-dimensional manifold from another in a 'controlled' way, introduced by John Milnor (1961). Milnor called this technique "surgery", while Andrew Wallace called it spherical modification. "Surgery" on a differentiable manifold "M" of dimension formula_0 can be described as removing an embedded sphere of dimension "p" from "M". Originally developed for differentiable (or smooth) manifolds, surgery techniques also apply to piecewise linear (PL-) and topological manifolds. Surgery refers to cutting out parts of the manifold and replacing them with a part of another manifold, matching up along the cut or boundary. This is closely related to, but not identical with, handlebody decompositions. More technically, the idea is to start with a well-understood manifold "M" and perform surgery on it to produce a manifold "M"′ having some desired property, in such a way that the effects on the homology, homotopy groups, or other invariants of the manifold are known. A relatively easy argument using Morse theory shows that a manifold can be obtained from another one by a sequence of spherical modifications if and only if those two belong to the same cobordism class. The classification of exotic spheres by Michel Kervaire and Milnor (1963) led to the emergence of surgery theory as a major tool in high-dimensional topology. Surgery on a manifold. A basic observation. If "X", "Y" are manifolds with boundary, then the boundary of the product manifold is formula_1 The basic observation which justifies surgery is that the space formula_2 can be understood either as the boundary of formula_3 or as the boundary of formula_4. 
In symbols, formula_5, where formula_6 is the "q"-dimensional disk, i.e., the set of points in formula_7 that are at distance one or less from a given fixed point (the center of the disk); for example, then, formula_8 is homeomorphic to the unit interval, while formula_9 is a circle together with the points in its interior. Surgery. Now, given a manifold "M" of dimension formula_10 and an embedding formula_11, define another "n"-dimensional manifold formula_12 to be formula_13 Since formula_14, the equation from our basic observation shows that the gluing is justified; then formula_15 One says that the manifold "M"′ is produced by a "surgery" cutting out formula_4 and gluing in formula_3, or by a "p"-"surgery" if one wants to specify the number "p". Strictly speaking, "M"′ is a manifold with corners, but there is a canonical way to smooth them out. Notice that the submanifold that was replaced in "M" was of the same dimension as "M" (it was of codimension 0). Attaching handles and cobordisms. Surgery is closely related to (but not the same as) handle attaching. Given an formula_16-manifold with boundary formula_17 and an embedding formula_18, where formula_10, define another formula_16-manifold with boundary "L"′ by formula_19 The manifold "L"′ is obtained by "attaching a formula_20-handle", with formula_21 obtained from formula_22 by a "p"-surgery formula_23 A surgery on "M" not only produces a new manifold "M"′, but also a cobordism "W" between "M" and "M"′. The "trace" of the surgery is the cobordism formula_24, with formula_25 the formula_16-dimensional manifold with boundary formula_26 obtained from the product formula_27 by attaching a formula_20-handle formula_28. Surgery is symmetric in the sense that the manifold "M" can be re-obtained from "M"′ by a formula_29-surgery, the trace of which coincides with the trace of the original surgery, up to orientation. 
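As a concrete instance of the basic observation above, take "p" = 0 and "q" = 2; the identification follows directly from the product boundary formula:

```latex
\partial\bigl(D^{1}\times S^{1}\bigr)
  \;=\; S^{0}\times S^{1}
  \;=\; \partial\bigl(S^{0}\times D^{2}\bigr)
```

Thus a pair of circles bounds either a cylinder or a pair of disks, and a 0-surgery exchanges one filling for the other.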
In most applications, the manifold "M" comes with additional geometric structure, such as a map to some reference space, or additional bundle data. One then wants the surgery process to endow "M"′ with the same kind of additional structure. For instance, a standard tool in surgery theory is surgery on normal maps: such a process changes a normal map to another normal map within the same bordism class. Effects on homotopy groups, and comparison to cell-attachment. Intuitively, the process of surgery is the manifold analog of attaching a cell to a topological space, where the embedding formula_30 takes the place of the attaching map. A simple attachment of a formula_20-cell to an "n"-manifold would destroy the manifold structure for dimension reasons, so it has to be thickened by crossing with another cell. Up to homotopy, the process of surgery on an embedding formula_31 can be described as the attaching of a formula_20-cell, giving the homotopy type of the trace, and the detaching of a "q"-cell to obtain "M"′. The necessity of the detaching process can be understood as an effect of Poincaré duality. In the same way as a cell can be attached to a space to kill an element in some homotopy group of the space, a "p"-surgery on a manifold "M" can often be used to kill an element formula_32. Two points are important, however. Firstly, the element formula_32 has to be representable by an embedding formula_31 (which means embedding the corresponding sphere with a trivial normal bundle). For instance, it is not possible to perform surgery on an orientation-reversing loop. Secondly, the effect of the detaching process has to be considered, since it might also have an effect on the homotopy group under consideration. Roughly speaking, this second point is only important when "p" is at least of the order of half the dimension of "M". Application to classification of manifolds. 
The origin and main application of surgery theory lies in the classification of manifolds of dimension greater than four. Loosely, the organizing questions of surgery theory are: is a given space actually a manifold, and is a given map between manifolds actually a diffeomorphism? More formally, one asks these questions "up to homotopy": does a given space have the homotopy type of a smooth manifold, and is a given homotopy equivalence formula_33 between two smooth manifolds homotopic to a diffeomorphism? It turns out that the second ("uniqueness") question is a relative version of a question of the first ("existence") type; thus both questions can be treated with the same methods. Note that surgery theory does "not" give a complete set of invariants to these questions. Instead, it is obstruction-theoretic: there is a primary obstruction, and a secondary obstruction called the surgery obstruction which is only defined if the primary obstruction vanishes, and which depends on the choice made in verifying that the primary obstruction vanishes. The surgery approach. In the classical approach, as developed by William Browder, Sergei Novikov, Dennis Sullivan, and C. T. C. Wall, surgery is done on normal maps of degree one. Using surgery, the question "Is the normal map formula_34 of degree one cobordant to a homotopy equivalence?" can be translated (in dimensions greater than four) to an algebraic statement about some element in an L-group of the group ring formula_35. More precisely, the question has a positive answer if and only if the surgery obstruction formula_36 is zero, where "n" is the dimension of "M". For example, consider the case where the dimension "n = 4k" is a multiple of four, and formula_37. It is known that formula_38 is isomorphic to the integers formula_39; under this isomorphism the surgery obstruction of "f" is proportional to the difference of the signatures formula_40 of "X" and "M". Hence a normal map of degree one is cobordant to a homotopy equivalence if and only if the signatures of domain and codomain agree. 
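For the simply connected case just described, the identification of the obstruction group with the integers can be made explicit. The normalization below is the standard one, stated as a sketch (conventions differ by a factor of 8 between authors):

```latex
% n = 4k, \pi_1(X) = 0: the surgery obstruction of a degree-one normal
% map f: M -> X is the signature defect, under the standard
% isomorphism L_{4k}(\mathbb{Z}) \cong \mathbb{Z}:
\sigma(f) \;=\; \tfrac{1}{8}\bigl(\operatorname{sign}(X) - \operatorname{sign}(M)\bigr)
\;\in\; L_{4k}(\mathbb{Z}) \cong \mathbb{Z}.
% Thus f is normally cobordant to a homotopy equivalence
% precisely when the signatures of X and M agree.
```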
Coming back to the "existence" question from above, we see that a space "X" has the homotopy type of a smooth manifold if and only if it receives a normal map of degree one whose surgery obstruction vanishes. This leads to a multi-step obstruction process: In order to speak of normal maps, "X" must satisfy an appropriate version of Poincaré duality which turns it into a Poincaré complex. Supposing that "X" is a Poincaré complex, the Pontryagin–Thom construction shows that a normal map of degree one to "X" exists if and only if the Spivak normal fibration of "X" has a reduction to a stable vector bundle. If normal maps of degree one to "X" exist, their bordism classes (called normal invariants) are classified by the set of homotopy classes formula_41. Each of these normal invariants has a surgery obstruction; "X" has the homotopy type of a smooth manifold if and only if one of these obstructions is zero. Stated differently, this means that there is a choice of normal invariant with zero image under the surgery obstruction map formula_42 Structure sets and surgery exact sequence. The concept of structure set is the unifying framework for both questions of existence and uniqueness. Roughly speaking, the structure set of a space "X" consists of homotopy equivalences "M" → "X" from some manifold to "X", where two maps are identified under a bordism-type relation. A necessary (but not in general sufficient) condition for the structure set of a space "X" to be non-empty is that "X" be an "n"-dimensional Poincaré complex, i.e. that the homology and cohomology groups be related by isomorphisms formula_43 of an "n"-dimensional manifold, for some integer "n". Depending on the precise definition and the category of manifolds (smooth, PL, or topological), there are various versions of structure sets. 
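For a smooth "n"-dimensional Poincaré complex "X" with "n" at least five, these pieces assemble into the surgery exact sequence discussed next. The display below is a sketch of the standard statement (notation varies by author):

```latex
% Surgery exact sequence in the smooth category, n >= 5:
\cdots \longrightarrow L_{n+1}\bigl(\mathbb{Z}[\pi_1(X)]\bigr)
       \longrightarrow \mathcal{S}^{\mathrm{DIFF}}(X)
       \longrightarrow [X,\, G/O]
       \xrightarrow{\;\theta\;} L_{n}\bigl(\mathbb{Z}[\pi_1(X)]\bigr),
% where S^{DIFF}(X) is the smooth structure set, [X, G/O] the set of
% normal invariants, and \theta the surgery obstruction map.
```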
Since, by the s-cobordism theorem, certain bordisms between manifolds are isomorphic (in the respective category) to cylinders, the concept of structure set allows a classification even up to diffeomorphism. The structure set and the surgery obstruction map are brought together in the surgery exact sequence. This sequence allows one to determine the structure set of a Poincaré complex once the surgery obstruction map (and a relative version of it) are understood. In important cases, the smooth or topological structure set can be computed by means of the surgery exact sequence. Examples are the classification of exotic spheres, and the proofs of the Borel conjecture for negatively curved manifolds and manifolds with hyperbolic fundamental group. In the topological category, the surgery exact sequence is the long exact sequence induced by a fibration sequence of spectra. This implies that all the sets involved in the sequence are in fact abelian groups. On the spectrum level, the surgery obstruction map is an assembly map whose fiber is the block structure space of the corresponding manifold. Citations. &lt;templatestyles src="Reflist/styles.css" /&gt; References. &lt;templatestyles src="Refbegin/styles.css" /&gt;
[ { "math_id": 0, "text": "n=p+q+1" }, { "math_id": 1, "text": "\\partial(X \\times Y) = (\\partial X \\times Y) \\cup (X \\times \\partial Y)." }, { "math_id": 2, "text": "S^p \\times S^{q-1}" }, { "math_id": 3, "text": "D^{p+1} \\times S^{q-1}" }, { "math_id": 4, "text": "S^p \\times D^q" }, { "math_id": 5, "text": "\\partial\\left(S^p \\times D^q\\right) = S^p \\times S^{q-1} = \\partial\\left(D^{p+1} \\times S^{q-1}\\right)" }, { "math_id": 6, "text": "D^q" }, { "math_id": 7, "text": "\\R^q" }, { "math_id": 8, "text": "D^1" }, { "math_id": 9, "text": "D^2" }, { "math_id": 10, "text": "n = p+q" }, { "math_id": 11, "text": "\\phi\\colon S^p \\times D^q \\to M" }, { "math_id": 12, "text": "M'" }, { "math_id": 13, "text": "M' := \\left(M \\setminus \\operatorname{int}(\\operatorname{im}(\\phi))\\right) \\; \\cup_{\\phi|_{S^p \\times S^{q-1}}} \\left(D^{p+1} \\times S^{q-1}\\right)." }, { "math_id": 14, "text": "\\operatorname{im}(\\phi)=\\phi(S^p \\times D^q)" }, { "math_id": 15, "text": "\\phi\\left(\\partial\\left(S^p \\times D^q\\right)\\right) = \\phi\\left(S^p \\times S^{q-1}\\right)." }, { "math_id": 16, "text": "(n+1)" }, { "math_id": 17, "text": "(L, \\partial L)" }, { "math_id": 18, "text": "\\phi\\colon S^p\\times D^q \\to \\partial L" }, { "math_id": 19, "text": "L' := L\\; \\cup_\\phi \\left(D^{p+1} \\times D^q\\right)." }, { "math_id": 20, "text": "(p+1)" }, { "math_id": 21, "text": "\\partial L'" }, { "math_id": 22, "text": "\\partial L" }, { "math_id": 23, "text": "\\partial L' = (\\partial L \\setminus \\operatorname{int} ( \\operatorname{im}(\\phi)) ) \\; \\cup_{\\phi|_{S^p \\times S^{q-1}}} \\left(D^{p+1} \\times S^{q-1}\\right)." 
}, { "math_id": 24, "text": "(W; M, M')" }, { "math_id": 25, "text": "W := (M \\times I)\\; \\cup_{\\phi \\times \\{1\\}} \\left(D^{p+1} \\times D^q\\right)" }, { "math_id": 26, "text": "\\partial W = M\\cup M'" }, { "math_id": 27, "text": "M\\times I" }, { "math_id": 28, "text": "D^{p+1} \\times D^q" }, { "math_id": 29, "text": "(q-1)" }, { "math_id": 30, "text": "\\phi" }, { "math_id": 31, "text": "\\phi\\colon S^p\\times D^q \\to M" }, { "math_id": 32, "text": "\\alpha\\in\\pi_p(M)" }, { "math_id": 33, "text": "f\\colon M\\to N" }, { "math_id": 34, "text": "f\\colon M\\to X" }, { "math_id": 35, "text": "\\Z[\\pi_1(X)]" }, { "math_id": 36, "text": "\\sigma(f)\\in L_n(\\Z[\\pi_1(X)])" }, { "math_id": 37, "text": "\\pi_1(X) = 0" }, { "math_id": 38, "text": "L_{4k}(\\Z)" }, { "math_id": 39, "text": "\\Z" }, { "math_id": 40, "text": "\\sigma(X) - \\sigma(M)" }, { "math_id": 41, "text": "[X, G/O]" }, { "math_id": 42, "text": "[X, G/O] \\to L_n\\left(\\Z\\left[\\pi_1(X)\\right]\\right)." }, { "math_id": 43, "text": "H^*(X) \\cong H_{n-*}(X)" } ]
https://en.wikipedia.org/wiki?curid=5961115
59613
Ionization energy
Energy needed to remove an electron In physics and chemistry, ionization energy (IE) is the minimum energy required to remove the most loosely bound electron of an isolated gaseous atom, positive ion, or molecule. The first ionization energy is quantitatively expressed as X(g) + energy ⟶ X+(g) + e− where X is any atom or molecule, X+ is the resultant ion when the original atom was stripped of a single electron, and e− is the removed electron. Ionization energy is positive for neutral atoms, meaning that the ionization is an endothermic process. Roughly speaking, the closer the outermost electrons are to the nucleus of the atom, the higher the atom's ionization energy. In physics, ionization energy is usually expressed in electronvolts (eV) or joules (J). In chemistry, it is expressed as the energy to ionize a mole of atoms or molecules, usually as kilojoules per mole (kJ/mol) or kilocalories per mole (kcal/mol). Comparison of ionization energies of atoms in the periodic table reveals two periodic trends which follow the rules of Coulombic attraction: ionization energy generally increases from left to right within a period, and generally decreases from top to bottom within a group. The latter trend results from the outer electron shell being progressively farther from the nucleus, with the addition of one inner shell per row as one moves down the column. The "n"th ionization energy refers to the amount of energy required to remove the most loosely bound electron from the species having a positive charge of ("n" − 1). 
For example, the first three ionization energies are defined as follows: 1st ionization energy is the energy that enables the reaction X ⟶ X+ + e− 2nd ionization energy is the energy that enables the reaction X+ ⟶ X2+ + e− 3rd ionization energy is the energy that enables the reaction X2+ ⟶ X3+ + e− The most notable influences that determine ionization energy include the charge of the nucleus, the number of electron shells, the effective nuclear charge experienced by the outermost electron, and the type of orbital from which the electron is removed. Minor influences include relativistic effects, which are most significant for heavy elements, and the lanthanide and actinide contraction. The term "ionization potential" is an older and obsolete term for ionization energy, because the oldest method of measuring ionization energy was based on ionizing a sample and accelerating the electron removed using an electrostatic potential. Determination of ionization energies. The ionization energy of atoms, denoted "E"i, is measured by finding the minimal energy of light quanta (photons) or electrons accelerated to a known energy that will kick out the least bound atomic electrons. The measurement is performed in the gas phase on single atoms. While only noble gases occur as monatomic gases, other gases can be split into single atoms. Also, many solid elements can be heated and vaporized into single atoms. Monatomic vapor is contained in a previously evacuated tube that has two parallel electrodes connected to a voltage source. The ionizing excitation is introduced through the walls of the tube or produced within. When ultraviolet light is used, the wavelength is swept down the ultraviolet range. At a certain wavelength (λ) and frequency of light (ν=c/λ, where c is the speed of light), the light quanta, whose energy is proportional to the frequency, will have energy high enough to dislodge the least bound electrons. These electrons will be attracted to the positive electrode, and the positive ions remaining after the photoionization will get attracted to the negatively charged electrode. These electrons and ions will establish a current through the tube. 
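The photon-threshold condition at the heart of this measurement can be sketched numerically. The following is an illustrative calculation (not from the source); hydrogen's ionization energy of 13.6 eV is used as the example value:

```python
# Photoionization threshold: a photon ionizes only if its energy
# h*c/lambda meets or exceeds the ionization energy E_i, so the
# longest ionizing wavelength is lambda_max = h*c / E_i.
PLANCK = 6.62607015e-34      # Planck constant, J*s
LIGHT_SPEED = 2.99792458e8   # speed of light, m/s
EV_TO_J = 1.602176634e-19    # joules per electronvolt

def threshold_wavelength_nm(ionization_energy_ev: float) -> float:
    """Longest photon wavelength (in nm) able to ionize the species."""
    energy_j = ionization_energy_ev * EV_TO_J
    return PLANCK * LIGHT_SPEED / energy_j * 1e9

# Hydrogen (13.6 eV): threshold near 91 nm, in the far ultraviolet,
# consistent with sweeping the lamp down through the ultraviolet range.
print(f"{threshold_wavelength_nm(13.6):.1f} nm")
```

Sweeping the lamp to wavelengths below this threshold is what produces the onset of current described above.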
The ionization energy will be the energy of photons "hν"i ("h" is the Planck constant) that caused a steep rise in the current: "E"i = "hν"i. When high-velocity electrons are used to ionize the atoms, they are produced by an electron gun inside a similar evacuated tube. The energy of the electron beam can be controlled by the acceleration voltages. The energy of these electrons that gives rise to a sharp onset of the current of ions and freed electrons through the tube will match the ionization energy of the atoms. Atoms: values and trends. Generally, the ("N"+1)th ionization energy of a particular element is larger than the "N"th ionization energy (it may also be noted that the ionization energy of an anion is generally less than that of the cations and the neutral atom of the same element). When the next ionization energy involves removing an electron from the same electron shell, the increase in ionization energy is primarily due to the increased net charge of the ion from which the electron is being removed. Electrons removed from more highly charged ions experience greater forces of electrostatic attraction; thus, their removal requires more energy. In addition, when the next ionization energy involves removing an electron from a lower electron shell, the greatly decreased distance between the nucleus and the electron also increases both the electrostatic force and the distance over which that force must be overcome to remove the electron. Both of these factors further increase the ionization energy. Some values for elements of the third period are given in the following table: Large jumps in the successive molar ionization energies occur when passing noble gas configurations. For example, as can be seen in the table above, the first two molar ionization energies of magnesium (stripping the two 3s electrons from a magnesium atom) are much smaller than the third, which requires stripping off a 2p electron from the neon configuration of Mg2+. 
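These magnesium numbers make the jump easy to see. A small sketch follows; the kJ/mol values are assumed from standard reference tables, not taken from this article:

```python
# Successive molar ionization energies of magnesium in kJ/mol
# (assumed values from standard reference tables): IE1, IE2, IE3.
MG_IE_KJ_PER_MOL = [737.7, 1450.7, 7732.7]

# Ratio of each ionization energy to the one before it: a modest step
# while the two 3s electrons are removed, then a large jump once the
# next electron must come from the neon-like 2p shell of Mg2+.
ratios = [
    later / earlier
    for earlier, later in zip(MG_IE_KJ_PER_MOL, MG_IE_KJ_PER_MOL[1:])
]
for step, ratio in enumerate(ratios, start=2):
    print(f"IE{step} / IE{step - 1} = {ratio:.2f}")
```

The second ratio is several times the first, which is the numerical signature of breaking into a noble-gas core.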
That 2p electron is much closer to the nucleus than the 3s electrons removed previously. Ionization energy is also a periodic trend within the periodic table. Moving left to right within a period, or upward within a group, the first ionization energy generally increases, with exceptions such as aluminium and sulfur in the table above. As the nuclear charge increases across the period, the electrostatic attraction increases between electrons and protons, hence the atomic radius decreases, and the electron cloud comes closer to the nucleus because the electrons, especially the outermost one, are held more tightly by the higher effective nuclear charge. On moving downward within a given group, the electrons are held in higher-energy shells with higher principal quantum number n, further from the nucleus and therefore are more loosely bound so that the ionization energy decreases. The effective nuclear charge increases only slowly so that its effect is outweighed by the increase in n. Exceptions in ionization energies. There are exceptions to the general trend of rising ionization energies within a period. For example, the value decreases from beryllium (Be: 9.3 eV) to boron (B: 8.3 eV), and from nitrogen (N: 14.5 eV) to oxygen (O: 13.6 eV). These dips can be explained in terms of electron configurations. Boron has its last electron in a 2p orbital, which has its electron density further away from the nucleus on average than the 2s electrons in the same shell. The 2s electrons then shield the 2p electron from the nucleus to some extent, and it is easier to remove the 2p electron from boron than to remove a 2s electron from beryllium, resulting in a lower ionization energy for B. In oxygen, the last electron shares a doubly occupied p-orbital with an electron of opposing spin. 
The two electrons in the same orbital are closer together on average than two electrons in different orbitals, so that they shield each other from the nucleus more effectively and it is easier to remove one electron, resulting in a lower ionization energy. Furthermore, after every noble gas element, the ionization energy drastically drops. This occurs because the outer electron in the alkali metals requires a much lower amount of energy to be removed from the atom than the inner shells. This also gives rise to low electronegativity values for the alkali metals. The trends and exceptions are summarized in the following subsections: Ionization energy anomalies in groups. Ionization energy values tend to decrease on going to heavier elements within a group as shielding is provided by more electrons and, overall, the valence shells experience a weaker attraction from the nucleus, attributed to the larger covalent radius, which increases on going down a group. Nonetheless, this is not always the case. As one exception, in Group 10 palladium (Pd: 8.34 eV) has a higher ionization energy than nickel (Ni: 7.64 eV), contrary to the general decrease for the elements from technetium Tc to xenon Xe. Such anomalies are summarized below: Bohr model for hydrogen atom. The ionization energy of the hydrogen atom (formula_0) can be evaluated in the Bohr model, which predicts that the atomic energy level formula_1 has energy formula_2 where RH is the Rydberg constant for the hydrogen atom. For hydrogen in the ground state formula_3 and formula_4 so that the energy of the atom before ionization is simply formula_5 After ionization, the energy is zero for a motionless electron infinitely far from the proton, so that the ionization energy is formula_6. This agrees with the experimental value for the hydrogen atom. Quantum-mechanical explanation. 
According to the more complete theory of quantum mechanics, the location of an electron is best described as a probability distribution within an electron cloud, i.e. atomic orbital. The energy can be calculated by integrating over this cloud. The cloud's underlying mathematical representation is the wavefunction, which is built from Slater determinants consisting of molecular spin orbitals. These are related by Pauli's exclusion principle to the antisymmetrized products of the atomic or molecular orbitals. There are two main ways in which ionization energy is calculated. In general, the computation for the "N"th ionization energy requires calculating the energies of formula_7 and formula_8 electron systems. Calculating these energies exactly is not possible except for the simplest systems (i.e. hydrogen and hydrogen-like elements), primarily because of difficulties in integrating the electron correlation terms. Therefore, approximation methods are routinely employed, with different methods varying in complexity (computational time) and accuracy compared to empirical data. This has become a well-studied problem and is routinely done in computational chemistry. The second way of calculating ionization energies is mainly used at the lowest level of approximation, where the ionization energy is provided by Koopmans' theorem, which involves the highest occupied molecular orbital or "HOMO" and the lowest unoccupied molecular orbital or "LUMO", and states that the ionization energy of an atom or molecule is equal to the negative value of energy of the orbital from which the electron is ejected. This means that the ionization energy is equal to the negative of HOMO energy, which in a formal equation can be written as: formula_9 Molecules: vertical and adiabatic ionization energy. Ionization of molecules often leads to changes in molecular geometry, and two types of (first) ionization energy are defined – "adiabatic" and "vertical". Adiabatic ionization energy. 
The adiabatic ionization energy of a molecule is the "minimum" amount of energy required to remove an electron from a neutral molecule, i.e. the difference between the energy of the vibrational ground state of the neutral species (v" = 0 level) and that of the positive ion (v' = 0). The specific equilibrium geometry of each species does not affect this value. Vertical ionization energy. Due to the possible changes in molecular geometry that may result from ionization, additional transitions may exist between the vibrational ground state of the neutral species and vibrational excited states of the positive ion. In other words, ionization is accompanied by vibrational excitation. The intensity of such transitions is explained by the Franck–Condon principle, which predicts that the most probable and intense transition corresponds to the vibrationally excited state of the positive ion that has the same geometry as the neutral molecule. This transition is referred to as the "vertical" ionization energy since it is represented by a completely vertical line on a potential energy diagram (see Figure). For a diatomic molecule, the geometry is defined by the length of a single bond. The removal of an electron from a bonding molecular orbital weakens the bond and increases the bond length. In Figure 1, the lower potential energy curve is for the neutral molecule and the upper surface is for the positive ion. Both curves plot the potential energy as a function of bond length. The horizontal lines correspond to vibrational levels with their associated vibrational wave functions. Since the ion has a weaker bond, it will have a longer bond length. This effect is represented by shifting the minimum of the potential energy curve to the right of the neutral species. The adiabatic ionization is the diagonal transition to the vibrational ground state of the ion. Vertical ionization may involve vibrational excitation of the ionic state and therefore requires greater energy. 
In many circumstances, the adiabatic ionization energy is the more interesting physical quantity since it describes the difference in energy between the two potential energy surfaces. However, due to experimental limitations, the adiabatic ionization energy is often difficult to determine, whereas the vertical detachment energy is easily identifiable and measurable. Analogs of ionization energy to other systems. While the term ionization energy is largely used only for gas-phase atomic, cationic, or molecular species, there are a number of analogous quantities that consider the amount of energy required to remove an electron from other physical systems. Electron binding energy. Electron binding energy is a generic term for the minimum energy needed to remove an electron from a particular electron shell for an atom or ion, due to these negatively charged electrons being held in place by the electrostatic pull of the positively charged nucleus. For example, the electron binding energy for removing a 3p3/2 electron from the chloride ion is the minimum amount of energy required to remove an electron from the chlorine atom when it has a charge of −1. In this particular example, the electron binding energy has the same magnitude as the electron affinity for the neutral chlorine atom. In another example, the electron binding energy refers to the minimum amount of energy required to remove an electron from the dicarboxylate dianion −O2C(CH2)8CO2−. The graph to the right shows the binding energy for electrons in different shells in neutral atoms. The ionization energy is the lowest binding energy for a particular atom (although these are not all shown in the graph). Solid surfaces: work function. 
Work function is the minimum amount of energy required to remove an electron from a solid surface, where the work function "W" for a given surface is defined by the difference formula_10 where −"e" is the charge of an electron, "ϕ" is the electrostatic potential in the vacuum nearby the surface, and "E"F is the Fermi level (electrochemical potential of electrons) inside the material. Note. &lt;templatestyles src="Reflist/styles.css" /&gt; References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "Z = 1" }, { "math_id": 1, "text": "n" }, { "math_id": 2, "text": " E = - \\frac{1}{n^2} \\frac{Z^2e^2}{2a_0} = - \\frac{Z^2 R_H}{n^2} = - \\frac{Z^213.6\\ \\mathrm{eV}}{n^2}" }, { "math_id": 3, "text": "Z=1" }, { "math_id": 4, "text": "n=1" }, { "math_id": 5, "text": " E = - 13.6\\ \\mathrm{eV}" }, { "math_id": 6, "text": " I = E(\\mathrm{H}^+) - E(\\mathrm{H}) = +13.6\\ \\mathrm{eV}" }, { "math_id": 7, "text": "Z-N+1" }, { "math_id": 8, "text": "Z-N" }, { "math_id": 9, "text": "I_i=-E_i" }, { "math_id": 10, "text": "W = -e\\phi - E_{\\rm F}, " } ]
https://en.wikipedia.org/wiki?curid=59613
59615
Electric potential
Line integral of the electric field Electric potential (also called the "electric field potential", potential drop, the electrostatic potential) is defined as the amount of work/energy needed per unit of electric charge to move the charge from a reference point to a specific point in an electric field. More precisely, the electric potential is the energy per unit charge for a test charge that is so small that the disturbance of the field under consideration is negligible. The motion across the field is supposed to proceed with negligible acceleration, so as to avoid the test charge acquiring kinetic energy or producing radiation. By definition, the electric potential at the reference point is zero. Typically, the reference point is the Earth or a point at infinity, although any point can be used. In classical electrostatics, the electrostatic field is a vector quantity expressed as the negative gradient of the electrostatic potential, which is a scalar quantity denoted by "V" or occasionally "φ", equal to the electric potential energy of any charged particle at any location (measured in joules) divided by the charge of that particle (measured in coulombs). By dividing out the charge on the particle a quotient is obtained that is a property of the electric field itself. In short, an electric potential is the electric potential energy per unit charge. This value can be calculated in either a static (time-invariant) or a dynamic (time-varying) electric field at a specific time with the unit joules per coulomb (J⋅C−1) or volt (V). The electric potential at infinity is assumed to be zero. In electrodynamics, when time-varying fields are present, the electric field cannot be expressed only as a scalar potential. Instead, the electric field can be expressed as both the scalar electric potential and the magnetic vector potential. 
The electric potential and the magnetic vector potential together form a four-vector, so that the two kinds of potential are mixed under Lorentz transformations. Practically, the electric potential is a continuous function in all space, because a spatial derivative of a discontinuous electric potential yields an electric field of impossibly infinite magnitude. Notably, the electric potential due to an idealized point charge (proportional to 1 ⁄ "r", with "r" the distance from the point charge) is continuous in all space except at the location of the point charge. Though the electric field is not continuous across an idealized surface charge, it is not infinite at any point. Therefore, the electric potential is continuous across an idealized surface charge. Additionally, the electric potential of an idealized line of charge (proportional to ln("r"), with "r" the radial distance from the line of charge) is continuous everywhere except on the line of charge. Introduction. Classical mechanics explores concepts such as force, energy, and potential. Force and potential energy are directly related. A net force acting on any object will cause it to accelerate. As an object moves in the direction of a force acting on it, its potential energy decreases. For example, the gravitational potential energy of a cannonball at the top of a hill is greater than at the base of the hill. As it rolls downhill, its potential energy decreases and is converted into motion – kinetic energy. It is possible to define the potential of certain force fields so that the potential energy of an object in that field depends only on the position of the object with respect to the field. Two such force fields are a gravitational field and an electric field (in the absence of time-varying magnetic fields). Such fields affect objects because of the intrinsic properties (e.g., mass or charge) and positions of the objects. An object may possess a property known as electric charge. 
Since an electric field exerts force on a charged object, if the object has a positive charge, the force will be in the direction of the electric field vector at the location of the charge; if the charge is negative, the force will be in the opposite direction. The magnitude of force is given by the quantity of the charge multiplied by the magnitude of the electric field vector, formula_0 Electrostatics. An electric potential at a point r in a static electric field E is given by the line integral formula_1 where C is an arbitrary path from some fixed reference point to r; it is uniquely determined up to a constant that is added or subtracted from the integral. In electrostatics, the Maxwell–Faraday equation reveals that the curl formula_2 is zero, making the electric field conservative. Thus, the line integral above does not depend on the specific path C chosen but only on its endpoints, making formula_3 well-defined everywhere. The gradient theorem then allows us to write: formula_4 This states that the electric field points "downhill" towards lower voltages. By Gauss's law, the potential can also be found to satisfy Poisson's equation: formula_5 where ρ is the total charge density and formula_6 denotes the divergence. The concept of electric potential is closely linked with potential energy. A test charge, "q", has an electric potential energy, "U"E, given by formula_7 The potential energy and hence, also the electric potential, is only defined up to an additive constant: one must arbitrarily choose a position where the potential energy and the electric potential are zero. These equations cannot be used if formula_8, i.e., in the case of a "non-conservative electric field" (caused by a changing magnetic field; see Maxwell's equations). The generalization of electric potential to this case is described in the section on electrodynamics below. Electric potential due to a point charge. 
The electric potential arising from a point charge, "Q", at a distance, "r", from the location of "Q" is observed to be formula_9 where "ε"0 is the permittivity of vacuum, and "V"E is known as the Coulomb potential. Note that, in contrast to the magnitude of an electric field due to a point charge, the electric potential scales with the reciprocal of the radius, rather than with the reciprocal of the radius squared. The electric potential at any location, r, in a system of point charges is equal to the sum of the individual electric potentials due to every point charge in the system. This fact simplifies calculations significantly, because addition of potential (scalar) fields is much easier than addition of the electric (vector) fields. Specifically, the potential of a set of discrete point charges qi at points r"i" becomes formula_10 and the potential of a continuous charge distribution "ρ"(r) becomes formula_11 The equations given above for the electric potential (and all the equations used here) are in the forms required by SI units. In some other (less common) systems of units, such as CGS-Gaussian, many of these equations would be altered. Generalization to electrodynamics. When time-varying magnetic fields are present (which is true whenever there are time-varying electric fields and vice versa), it is not possible to describe the electric field simply as a scalar potential "V" because the electric field is no longer conservative: formula_12 is path-dependent because formula_13 (due to the Maxwell–Faraday equation). Instead, one can still define a scalar potential by also including the magnetic vector potential A. In particular, A is defined to satisfy: formula_14 where B is the magnetic field. By the fundamental theorem of vector calculus, such an A can always be found, since the divergence of the magnetic field is always zero due to the absence of magnetic monopoles. 
Now, the quantity formula_15 "is" a conservative field, since the curl of formula_16 is canceled by the curl of formula_17 according to the Maxwell–Faraday equation. One can therefore write formula_18 where "V" is the scalar potential defined by the conservative field F. The electrostatic potential is simply the special case of this definition where A is time-invariant. On the other hand, for time-varying fields, formula_19 unlike electrostatics. Gauge freedom. The electrostatic potential could have any constant added to it without affecting the electric field. In electrodynamics, the electric potential has infinitely many degrees of freedom. For any (possibly time-varying or space-varying) scalar field, 𝜓, we can perform the following gauge transformation to find a new set of potentials that produce exactly the same electric and magnetic fields: formula_20 Given different choices of gauge, the electric potential could have quite different properties. In the Coulomb gauge, the electric potential is given by Poisson's equation formula_21 just like in electrostatics. However, in the Lorenz gauge, the electric potential is a retarded potential that propagates at the speed of light and is the solution to an inhomogeneous wave equation: formula_22 Units. The SI derived unit of electric potential is the volt (in honor of Alessandro Volta), denoted as V, which is why the electric potential difference between two points in space is known as a voltage. Older units are rarely used today. Variants of the centimetre–gram–second system of units included a number of different units for electric potential, including the abvolt and the statvolt. Galvani potential versus electrochemical potential. Inside metals (and other solids and liquids), the energy of an electron is affected not only by the electric potential, but also by the specific atomic environment that it is in. 
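The gauge transformation given above (V′ = V − ∂ψ/∂t, A′ = A + ∇ψ) leaves the electric field unchanged. A one-dimensional finite-difference check, starting from V = 0 and A = 0 (so E = 0) with an arbitrarily chosen ψ — the setup and names are my own toy example:

```python
# Gauge invariance check in 1D: after transforming with psi(x, t) = sin(x) * t,
# the field E' = -dV'/dx - dA'/dt should still be (numerically) zero.
import math

h = 1e-5  # central-difference step

def psi(x, t):
    return math.sin(x) * t

def V_prime(x, t):
    # V' = V - d(psi)/dt, with V = 0
    return -(psi(x, t + h) - psi(x, t - h)) / (2 * h)

def A_prime(x, t):
    # A' = A + d(psi)/dx, with A = 0
    return (psi(x + h, t) - psi(x - h, t)) / (2 * h)

def E_prime(x, t):
    dVdx = (V_prime(x + h, t) - V_prime(x - h, t)) / (2 * h)
    dAdt = (A_prime(x, t + h) - A_prime(x, t - h)) / (2 * h)
    return -dVdx - dAdt

for x, t in [(0.3, 1.2), (1.0, 0.5)]:
    assert abs(E_prime(x, t)) < 1e-3  # zero up to finite-difference error
```

Analytically, V′ = −sin(x) and A′ = t·cos(x), whose contributions to E′ cancel exactly, mirroring the curl cancellation in the Maxwell–Faraday argument above.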
When a voltmeter is connected between two different types of metal, it measures the potential difference corrected for the different atomic environments. The quantity measured by a voltmeter is called the electrochemical potential or Fermi level, while the pure unadjusted electric potential, "V", is sometimes called the Galvani potential, ϕ. The terms "voltage" and "electric potential" are somewhat ambiguous: depending on context, either may refer to either of these quantities. References. &lt;templatestyles src="Reflist/styles.css" /&gt; Further reading. &lt;templatestyles src="Refbegin/styles.css" /&gt;
[ { "math_id": 0, "text": "|\\mathbf{F}| = q |\\mathbf{E}|." }, { "math_id": 1, "text": "V_\\mathbf{E} = - \\int_{\\mathcal{C} } \\mathbf{E} \\cdot \\mathrm{d} \\boldsymbol{\\ell}\\," }, { "math_id": 2, "text": "\\nabla\\times\\mathbf{E}" }, { "math_id": 3, "text": "V_\\mathbf{E}" }, { "math_id": 4, "text": "\\mathbf{E} = - \\mathbf{\\nabla} V_\\mathbf{E}\\," }, { "math_id": 5, "text": "\\mathbf{\\nabla} \\cdot \\mathbf{E} = \\mathbf{\\nabla} \\cdot \\left (- \\mathbf{\\nabla} V_\\mathbf{E} \\right ) = -\\nabla^2 V_\\mathbf{E} = \\rho / \\varepsilon_0 " }, { "math_id": 6, "text": "\\mathbf{\\nabla}\\cdot" }, { "math_id": 7, "text": "U_ \\mathbf{E} = q\\,V." }, { "math_id": 8, "text": "\\nabla\\times\\mathbf{E}\\neq\\mathbf{0}" }, { "math_id": 9, "text": " V_\\mathbf{E} = \\frac{1}{4 \\pi \\varepsilon_0} \\frac{Q}{r}, " }, { "math_id": 10, "text": " V_\\mathbf{E}(\\mathbf{r}) = \\frac{1}{4\\pi\\varepsilon_0} \\sum_{i=1}^n\\frac{q_i}{|\\mathbf{r}-\\mathbf{r}_i|}\\," }, { "math_id": 11, "text": " V_\\mathbf{E}(\\mathbf{r}) = \\frac{1}{4\\pi\\varepsilon_0} \\int_R \\frac{\\rho(\\mathbf{r}')}{|\\mathbf{r}-\\mathbf{r}'|} \\mathrm{d}^3 r'\\,," }, { "math_id": 12, "text": "\\textstyle\\int_C \\mathbf{E}\\cdot \\mathrm{d}\\boldsymbol{\\ell}" }, { "math_id": 13, "text": "\\mathbf{\\nabla} \\times \\mathbf{E} \\neq \\mathbf{0} " }, { "math_id": 14, "text": "\\mathbf{B} = \\mathbf{\\nabla} \\times \\mathbf{A} " }, { "math_id": 15, "text": "\\mathbf{F} = \\mathbf{E} + \\frac{\\partial\\mathbf{A}}{\\partial t}" }, { "math_id": 16, "text": "\\mathbf{E}" }, { "math_id": 17, "text": "\\frac{\\partial\\mathbf{A}}{\\partial t}" }, { "math_id": 18, "text": "\\mathbf{E} = -\\mathbf{\\nabla}V - \\frac{\\partial\\mathbf{A}}{\\partial t} ," }, { "math_id": 19, "text": "-\\int_a^b \\mathbf{E} \\cdot \\mathrm{d}\\boldsymbol{\\ell} \\neq V_{(b)} - V_{(a)} " }, { "math_id": 20, "text": "\\begin{align}\nV^\\prime &= V - \\frac{\\partial\\psi}{\\partial t} \\\\\n\\mathbf{A}^\\prime &= 
\\mathbf{A} + \\nabla\\psi\n\\end{align}" }, { "math_id": 21, "text": "\\nabla^2 V=-\\frac{\\rho}{\\varepsilon_0} " }, { "math_id": 22, "text": "\\nabla^2 V - \\frac{1}{c^2}\\frac{\\partial^2 V}{\\partial t^2} = -\\frac{\\rho}{\\varepsilon_0} " } ]
https://en.wikipedia.org/wiki?curid=59615
59617028
The spider and the fly problem
Recreational geodesics problem The spider and the fly problem is a recreational mathematics problem with an unintuitive solution, asking for a shortest path or geodesic between two points on the surface of a cuboid. It was originally posed by Henry Dudeney. Problem. In the typical version of the puzzle, an otherwise empty cuboid room 30 feet long, 12 feet wide and 12 feet high contains a spider and a fly. The spider is 1 foot below the ceiling and horizontally centred on one 12′×12′ wall. The fly is 1 foot above the floor and horizontally centred on the opposite wall. The problem is to find the minimum distance the spider must crawl along the walls, ceiling and/or floor to reach the fly, which remains stationary. Solutions. A naive solution is for the spider to remain horizontally centred, and crawl up to the ceiling, across it and down to the fly, giving a distance of 42 feet. Instead, the shortest path, 40 feet long, spirals around five of the six faces of the cuboid. Alternatively, it can be described by unfolding the cuboid into a net and finding a shortest path (a line segment) on the resulting unfolded system of six rectangles in the plane. Different nets produce different segments with different lengths, and the question becomes one of finding a net whose segment length is minimum. Another path, of intermediate length formula_0, crosses diagonally through four faces instead of five. For a room of length "l", width "w" and height "h", with the spider a distance "b" below the ceiling and the fly a distance "a" above the floor, the length of the spiral path is formula_1 while the naive solution has length formula_2. Depending on the dimensions of the cuboid, and on the initial positions of the spider and fly, one or another of these paths, or of four other paths, may be the optimal solution. However, there is no rectangular cuboid and pair of points on it for which the shortest path passes through all six faces of the cuboid. 
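The path lengths quoted above can be verified directly for the classic room (a small Python sketch; variable names are mine):

```python
# Path lengths for the classic 30 x 12 x 12 room: spider 1 ft below the ceiling,
# fly 1 ft above the floor, both horizontally centred on opposite end walls.
import math

l, w, h = 30, 12, 12  # room length, width, height (feet)
b, a = 1, 1           # spider below ceiling, fly above floor (feet)

naive = l + h - abs(b - a)                            # up, across the ceiling, down
spiral = math.sqrt((w + h) ** 2 + (b + l + a) ** 2)   # unfolds across five faces
four_face = math.sqrt(1658)                           # intermediate four-face path

assert naive == 42
assert spiral == 40.0          # sqrt(24^2 + 32^2) = sqrt(1600)
assert 40.7 < four_face < 40.8
```

The spiral length comes from unfolding the five faces into a plane, where the path becomes the hypotenuse of a right triangle with legs w + h = 24 and b + l + a = 32 feet.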
A different lateral thinking solution, beyond the stated rules of the puzzle, involves the spider attaching dragline silk to the wall to lower itself to the floor, and crawling 30 feet across it and 1 foot up the opposite wall, giving a crawl distance of 31 feet. Similarly, it can climb to the ceiling, cross it, then attach the silk to lower itself 11 feet, also a 31-foot crawl. History. The problem was originally posed by Henry Dudeney in the English newspaper "Weekly Dispatch" on 14 June 1903 and collected in "The Canterbury Puzzles" (1907). Martin Gardner calls it "Dudeney's best-known brain-teaser". A version of the problem was recorded by Adolf Hurwitz in his diary in 1908. Hurwitz stated that he heard it from L. Gustave du Pasquier, who in turn had heard it from Richard von Mises. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "\\sqrt{1658}\\approx 40.7" }, { "math_id": 1, "text": "\\sqrt{(w + h)^2 + (b + l + a)^2}" }, { "math_id": 2, "text": "l + h - | b - a |" } ]
https://en.wikipedia.org/wiki?curid=59617028
59617340
Implant resistance welding
Method of welding used to join thermoplastics Implant resistance welding is a method used in welding to join thermoplastics and thermoplastic composites. Resistive heating of a conductive material implanted in the thermoplastic melts the thermoplastic while a pressure is applied in order to fuse two parts together. The process settings such as current and weld time are important, because they affect the strength of the joint. The quality of a joint made using implant resistance welding is determined using destructive strength testing of specimens. Applications. Implant resistance welding is used to join thermoplastic composite components in the aerospace industry. For example, PEEK and PEI laminate components for use in U.S. Air Force aircraft and a GF-PPS component on the Airbus A380 are joined using implant resistance welding. Electrofusion welding is a specific type of implant resistance welding used to join pipes. Process. During the implant resistance welding process, current is applied to a heating element implanted in the joint. This current flowing through the implant produces heat through electrical resistance, which melts the matrix. Pressure is applied to push the parts together and molecular diffusion occurs at the melted surfaces of the parts, creating a joint. Implants. Implants serve as the source of heat to melt the thermoplastic. The heat is created through resistive heating as a current is applied to the implant. Two common types of implants are carbon fiber and stainless-steel mesh. Carbon Fiber. The carbon fiber type implants can be further separated into unidirectional and fabric type implants. Unidirectional carbon fibers do not transfer heat across the fibers easily; therefore, carbon fiber fabric works better to heat the entire surface evenly. 
This difference affects the performance of the resulting weld: welded joints using carbon fiber fabric can have 69% higher shear strength and 179% more interlaminar fracture toughness, when compared to unidirectional carbon fibers. For carbon fiber reinforced thermoplastics, the carbon fiber heating element matches the reinforcing material, avoiding the introduction of a new material. Stainless Steel Mesh. Welded joints with stainless steel mesh implants tend to have higher strength than welds using carbon fiber implants and result in less air trapped in the joint. Stainless steel wire can be placed in between two layers of resin, to avoid leaving spaces in the holes of the mesh. However, there are reasons to avoid using stainless steel in favor of carbon fiber, including increased weight, contamination from the metal, the possibility of stress concentrations, and the possibility of corrosion. Energy Input. The amount of energy input into the system (E) depends on the resistance of the heating elements (R), the current applied to the heating elements (I), and the amount of time the current is applied (t). Alternating current (AC) and direct current (DC) both work in this process. The energy produced is calculated using the following equation: formula_0 Research has shown the input variable with the most impact on the performance of the resulting joint is the current. The same amount of energy can be input into the part by applying a low current for a long period of time or by applying a high current for a short amount of time. In general, a higher shear strength of the joint is achieved using the method with a higher current for a shorter time. Longer heating times at lower currents do not heat the joint surface as evenly, which can cause the fiber reinforcement to move within the melted matrix. If the current is too high, however, it can result in residual stresses and warpage. 
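The energy relation above, E = I²Rt, also shows how two different current/time schedules can deliver the same total energy. A minimal sketch (the numbers are illustrative, not from the article):

```python
# Joule heating energy delivered to the implant: E = I^2 * R * t
def weld_energy_joules(current_a, resistance_ohm, time_s):
    return current_a ** 2 * resistance_ohm * time_s

# The same total energy from a high-current/short-time schedule and a
# low-current/long-time schedule (the article notes the former tends to
# give higher joint shear strength because it heats the surface more evenly):
e_fast = weld_energy_joules(20.0, 0.5, 10.0)  # 20 A through 0.5 ohm for 10 s
e_slow = weld_energy_joules(10.0, 0.5, 40.0)  # 10 A through 0.5 ohm for 40 s
assert e_fast == e_slow == 2000.0  # joules
```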
For a given constant electrical power, the temperature of the material surrounding the implants is directly dependent on the weld time: a longer weld time yields a higher temperature. The lap shear strength and the weld time are also correlated. Initially, there is a positive correlation between weld time and strength. However, the strength peaks at a certain weld time, and beyond this optimal weld time, the strength decreases. Pressure. Pressure is applied to the joining surfaces to prevent deconsolidation, allow intermolecular diffusion, and push air out of the joint. The pressure can be applied using displacement or pressure control. Pressure also ensures good contact between the implant and the bulk material, in order to increase electrical resistance. The pressure on the implant must create good contact without being so high that it severs the implant. This is achieved with pressures of 4 to 20 MPa for carbon fiber and 2 MPa for stainless steel mesh heating elements. Strength Testing. Lap shear strength (LSS) testing, in accordance with ASTM D 1002, is a method of destructive testing used to determine the strength of electrofusion welds of thermoplastic composite materials. For this test, two rectangular samples of the composite are lapped at the ends and joined at the lap interface using resistance implant welding. Then, a tensile strength test is performed on the welded sample: a load frame machine pulls the sample until failure, loading the joint surface in pure shear, and measures the maximum load. The lap shear strength is the maximum tensile load imparted on the sample by the machine divided by the lapped area. Failure Modes. Interfacial failure or tearing is when the resin or laminate in immediate contact with the heating element on either side is pulled away, leaving the mesh or fabric heating element exposed. This type of failure is associated with low LSS of the sample and can occur as a result of inadequate heat input into the weld. 
Another failure mode associated with low LSS is cohesive failure, which is a failure of the welded material, either the melted base material or resin surrounding the mesh. Cohesive failure is observed in samples with too much heat input during welding, which deteriorates the thermoplastic. Samples with high LSS generally fail due to debonding of the reinforcing fiber-matrix surface or other base material failure, known as intralaminar failure.
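The lap-shear-strength calculation described under Strength Testing reduces to dividing the failure load by the lapped area. A minimal sketch with illustrative numbers (my own, not taken from the standard):

```python
# LSS = maximum tensile load at failure / lapped area, here in MPa (N/mm^2).
def lap_shear_strength(max_load_n, lap_width_mm, lap_length_mm):
    return max_load_n / (lap_width_mm * lap_length_mm)

# Hypothetical example: a 25.4 mm x 12.7 mm overlap failing at 6450 N
lss = lap_shear_strength(6450, 25.4, 12.7)
assert abs(lss - 20.0) < 0.1  # about 20 MPa
```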
[ { "math_id": 0, "text": "E=I^2Rt" } ]
https://en.wikipedia.org/wiki?curid=59617340
596179
Chowla–Selberg formula
Evaluates a certain product of values of the Gamma function at rational values In mathematics, the Chowla–Selberg formula is the evaluation of a certain product of values of the gamma function at rational values in terms of values of the Dedekind eta function at imaginary quadratic irrational numbers. The result was essentially found by Lerch (1897) and rediscovered by Chowla and Selberg (1949, 1967). Statement. In logarithmic form, the Chowla–Selberg formula states that in certain cases the sum formula_0 can be evaluated using the Kronecker limit formula. Here χ is the quadratic residue symbol modulo "D", where "−D" is the discriminant of an imaginary quadratic field. The sum is taken over 0 &lt; "r" &lt; "D", with the usual convention χ("r") = 0 if "r" and "D" have a common factor. The function η is the Dedekind eta function, "h" is the class number, and "w" is the number of roots of unity. Origin and applications. The origin of such formulae is now seen to be in the theory of complex multiplication, and in particular in the theory of periods of an abelian variety of CM-type. This has led to much research and generalization. In particular there is an analog of the Chowla–Selberg formula for p-adic numbers, involving a p-adic gamma function, called the Gross–Koblitz formula. The Chowla–Selberg formula gives a formula for a finite product of values of the eta functions. By combining this with the theory of complex multiplication, one can give a formula for the individual absolute values of the eta function as formula_1 for some algebraic number α. Examples. Using Euler's reflection formula for the gamma function gives: formula_2 See also. &lt;templatestyles src="Reflist/styles.css" /&gt;
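The example value from the formula list, η(i) = Γ(1/4)/(2π^{3/4}), can be checked numerically from the q-product definition of the Dedekind eta function, η(τ) = q^{1/24}∏_{n≥1}(1 − qⁿ) with q = e^{2πiτ}. A sketch (truncating the product at 50 factors is my choice; q = e^{−2π} ≈ 0.00187 makes convergence extremely fast):

```python
# Numerical check of eta(i) = Gamma(1/4) / (2 * pi^(3/4)).
import math

q = math.exp(-2 * math.pi)  # q = e^{2*pi*i*tau} is real for tau = i
eta = q ** (1 / 24)
for n in range(1, 50):
    eta *= 1 - q ** n       # q-product for the Dedekind eta function

rhs = math.gamma(0.25) / (2 * math.pi ** 0.75)
assert abs(eta - rhs) < 1e-12  # both sides are about 0.76823
```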
[ { "math_id": 0, "text": " \\frac{w}{4}\\sum_r \\chi(r)\\log \\Gamma\\left( \\frac{r}{D} \\right) = \\frac{h}{2}\\log(4\\pi\\sqrt{|D|})\n+\\sum_\\tau\\log\\left(\\sqrt{\\Im(\\tau)}|\\eta(\\tau)|^2\\right)\n" }, { "math_id": 1, "text": "\\Im(\\tau)|\\eta(\\tau)|^4 = \\frac{\\alpha}{4\\pi\\sqrt{|D|}} \\prod_r\\Gamma(r/|D|)^{\\chi(r)\\frac{w}{2h}}" }, { "math_id": 2, "text": "\\eta(i) = 2^{-1}\\pi^{-3/4}\\Gamma(\\tfrac{1}{4})" } ]
https://en.wikipedia.org/wiki?curid=596179
5962
Comet
Natural object in space that releases gas A comet is an icy, small Solar System body that warms and begins to release gases when passing close to the Sun, a process called outgassing. This produces an extended, gravitationally unbound atmosphere or coma surrounding the nucleus, and sometimes a tail of gas and dust blown out from the coma. These phenomena are due to the effects of solar radiation and the outstreaming solar wind plasma acting upon the nucleus of the comet. Comet nuclei range from a few hundred meters to tens of kilometers across and are composed of loose collections of ice, dust, and small rocky particles. The coma may be up to 15 times Earth's diameter, while the tail may stretch beyond one astronomical unit. If sufficiently close and bright, a comet may be seen from Earth without the aid of a telescope and can subtend an arc of up to 30° (60 Moons) across the sky. Comets have been observed and recorded since ancient times by many cultures and religions. Comets usually have highly eccentric elliptical orbits, and they have a wide range of orbital periods, ranging from several years to potentially several millions of years. Short-period comets originate in the Kuiper belt or its associated scattered disc, which lie beyond the orbit of Neptune. Long-period comets are thought to originate in the Oort cloud, a spherical cloud of icy bodies extending from outside the Kuiper belt to halfway to the nearest star. Long-period comets are set in motion towards the Sun by gravitational perturbations from passing stars and the galactic tide. Hyperbolic comets may pass once through the inner Solar System before being flung to interstellar space. The appearance of a comet is called an apparition. Extinct comets that have passed close to the Sun many times have lost nearly all of their volatile ices and dust and may come to resemble small asteroids. 
Asteroids are thought to have a different origin from comets, having formed inside the orbit of Jupiter rather than in the outer Solar System. However, the discovery of main-belt comets and active centaur minor planets has blurred the distinction between asteroids and comets. In the early 21st century, some minor bodies were discovered with long-period comet orbits but the characteristics of inner Solar System asteroids; these were called Manx comets. They are still classified as comets, such as C/2014 S3 (PANSTARRS). Twenty-seven Manx comets were found from 2013 to 2017. As of 2021, there are 4,584 known comets. However, this represents a very small fraction of the total potential comet population, as the reservoir of comet-like bodies in the outer Solar System (in the Oort cloud) is about one trillion. Roughly one comet per year is visible to the naked eye, though many of those are faint and unspectacular. Particularly bright examples are called "great comets". Comets have been visited by uncrewed probes such as NASA's "Deep Impact", which blasted a crater on Comet Tempel 1 to study its interior, and the European Space Agency's "Rosetta", which became the first to land a robotic spacecraft on a comet. &lt;templatestyles src="Template:TOC limit/styles.css" /&gt; Etymology. The word "comet" derives from the Old English from the Latin or . That, in turn, is a romanization of the Greek 'wearing long hair', and the "Oxford English Dictionary" notes that the term () already meant 'long-haired star, comet' in Greek. was derived from () 'to wear the hair long', which was itself derived from () 'the hair of the head' and was used to mean 'the tail of a comet'. The astronomical symbol for comets (represented in Unicode) is , consisting of a small disc with three hairlike extensions. Physical characteristics. Nucleus. The solid, core structure of a comet is known as the nucleus. 
Cometary nuclei are composed of an amalgamation of rock, dust, water ice, and frozen carbon dioxide, carbon monoxide, methane, and ammonia. As such, they are popularly described as "dirty snowballs" after Fred Whipple's model. Comets with a higher dust content have been called "icy dirtballs". The term "icy dirtballs" arose after observation of the collision of Comet 9P/Tempel 1 with an "impactor" probe sent by NASA's Deep Impact mission in July 2005. Research conducted in 2014 suggests that comets are like "deep fried ice cream", in that their surfaces are formed of dense crystalline ice mixed with organic compounds, while the interior ice is colder and less dense. The surface of the nucleus is generally dry, dusty or rocky, suggesting that the ices are hidden beneath a surface crust several metres thick. The nucleus contains a variety of organic compounds, which may include methanol, hydrogen cyanide, formaldehyde, ethanol, ethane, and perhaps more complex molecules such as long-chain hydrocarbons and amino acids. In 2009, it was confirmed that the amino acid glycine had been found in the comet dust recovered by NASA's Stardust mission. In August 2011, a report, based on NASA studies of meteorites found on Earth, was published suggesting DNA and RNA components (adenine, guanine, and related organic molecules) may have been formed on asteroids and comets. The outer surfaces of cometary nuclei have a very low albedo, making them among the least reflective objects found in the Solar System. The Giotto space probe found that the nucleus of Halley's Comet (1P/Halley) reflects about four percent of the light that falls on it, and Deep Space 1 discovered that Comet Borrelly's surface reflects less than 3.0%; by comparison, asphalt reflects seven percent. The dark surface material of the nucleus may consist of complex organic compounds. Solar heating drives off lighter volatile compounds, leaving behind larger organic compounds that tend to be very dark, like tar or crude oil. 
The low reflectivity of cometary surfaces causes them to absorb the heat that drives their outgassing processes. Comet nuclei with radii of up to have been observed, but ascertaining their exact size is difficult. The nucleus of 322P/SOHO is probably only in diameter. A lack of smaller comets being detected despite the increased sensitivity of instruments has led some to suggest that there is a real lack of comets smaller than across. Known comets have been estimated to have an average density of . Because of their low mass, comet nuclei do not become spherical under their own gravity and therefore have irregular shapes. Roughly six percent of the near-Earth asteroids are thought to be the extinct nuclei of comets that no longer experience outgassing, including 14827 Hypnos and 3552 Don Quixote. Results from the "Rosetta" and "Philae" spacecraft show that the nucleus of 67P/Churyumov–Gerasimenko has no magnetic field, which suggests that magnetism may not have played a role in the early formation of planetesimals. Further, the ALICE spectrograph on "Rosetta" determined that electrons (within above the comet nucleus) produced from photoionization of water molecules by solar radiation, and not photons from the Sun as thought earlier, are responsible for the degradation of water and carbon dioxide molecules released from the comet nucleus into its coma. Instruments on the "Philae" lander found at least sixteen organic compounds at the comet's surface, four of which (acetamide, acetone, methyl isocyanate and propionaldehyde) have been detected for the first time on a comet. Coma. The streams of dust and gas thus released form a huge and extremely thin atmosphere around the comet called the "coma". The force exerted on the coma by the Sun's radiation pressure and solar wind cause an enormous "tail" to form pointing away from the Sun. 
The coma is generally made of water and dust, with water making up to 90% of the volatiles that outflow from the nucleus when the comet is within 3 to 4 astronomical units (450,000,000 to 600,000,000 km; 280,000,000 to 370,000,000 mi) of the Sun. The parent molecule is destroyed primarily through photodissociation and to a much smaller extent photoionization, with the solar wind playing a minor role in the destruction of water compared to photochemistry. Larger dust particles are left along the comet's orbital path whereas smaller particles are pushed away from the Sun into the comet's tail by light pressure. Although the solid nucleus of comets is generally less than across, the coma may be thousands or millions of kilometers across, sometimes becoming larger than the Sun. For example, about a month after an outburst in October 2007, comet 17P/Holmes briefly had a tenuous dust atmosphere larger than the Sun. The Great Comet of 1811 had a coma roughly the diameter of the Sun. Even though the coma can become quite large, its size can decrease about the time it crosses the orbit of Mars around from the Sun. At this distance the solar wind becomes strong enough to blow the gas and dust away from the coma, and in doing so enlarging the tail. Ion tails have been observed to extend one astronomical unit (150 million km) or more. Both the coma and tail are illuminated by the Sun and may become visible when a comet passes through the inner Solar System, the dust reflects sunlight directly while the gases glow from ionisation. Most comets are too faint to be visible without the aid of a telescope, but a few each decade become bright enough to be visible to the naked eye. Occasionally a comet may experience a huge and sudden outburst of gas and dust, during which the size of the coma greatly increases for a period of time. This happened in 2007 to Comet Holmes. In 1996, comets were found to emit X-rays. 
This greatly surprised astronomers because X-ray emission is usually associated with very high-temperature bodies. The X-rays are generated by the interaction between comets and the solar wind: when highly charged solar wind ions fly through a cometary atmosphere, they collide with cometary atoms and molecules, "stealing" one or more electrons from the atom in a process called "charge exchange". This exchange or transfer of an electron to the solar wind ion is followed by its de-excitation into the ground state of the ion by the emission of X-rays and far ultraviolet photons. Bow shock. Bow shocks form as a result of the interaction between the solar wind and the cometary ionosphere, which is created by the ionization of gases in the coma. As the comet approaches the Sun, increasing outgassing rates cause the coma to expand, and the sunlight ionizes gases in the coma. When the solar wind passes through this ion coma, the bow shock appears. The first observations were made in the 1980s and 1990s as several spacecraft flew by comets 21P/Giacobini–Zinner, 1P/Halley, and 26P/Grigg–Skjellerup. It was then found that the bow shocks at comets are wider and more gradual than the sharp planetary bow shocks seen at, for example, Earth. These observations were all made near perihelion when the bow shocks already were fully developed. The "Rosetta" spacecraft observed the bow shock at comet 67P/Churyumov–Gerasimenko at an early stage of bow shock development when the outgassing increased during the comet's journey toward the Sun. This young bow shock was called the "infant bow shock". The infant bow shock is asymmetric and, relative to the distance to the nucleus, wider than fully developed bow shocks. Tails. In the outer Solar System, comets remain frozen and inactive and are extremely difficult or impossible to detect from Earth due to their small size. 
Statistical detections of inactive comet nuclei in the Kuiper belt have been reported from observations by the Hubble Space Telescope but these detections have been questioned. As a comet approaches the inner Solar System, solar radiation causes the volatile materials within the comet to vaporize and stream out of the nucleus, carrying dust away with them. The streams of dust and gas each form their own distinct tail, pointing in slightly different directions. The tail of dust is left behind in the comet's orbit in such a manner that it often forms a curved tail called the type II or dust tail. At the same time, the ion or type I tail, made of gases, always points directly away from the Sun because this gas is more strongly affected by the solar wind than is dust, following magnetic field lines rather than an orbital trajectory. On occasion, such as when Earth passes through a comet's orbital plane, the antitail, pointing in the opposite direction to the ion and dust tails, may be seen. The observation of antitails contributed significantly to the discovery of the solar wind. The ion tail is formed as a result of the ionization by solar ultra-violet radiation of particles in the coma. Once the particles have been ionized, they attain a net positive electrical charge, which in turn gives rise to an "induced magnetosphere" around the comet. The comet and its induced magnetic field form an obstacle to outward flowing solar wind particles. Because the relative orbital speed of the comet and the solar wind is supersonic, a bow shock is formed upstream of the comet in the flow direction of the solar wind. In this bow shock, large concentrations of cometary ions (called "pick-up ions") congregate and act to "load" the solar magnetic field with plasma, such that the field lines "drape" around the comet forming the ion tail. 
If the ion tail loading is sufficient, the magnetic field lines are squeezed together to the point where, at some distance along the ion tail, magnetic reconnection occurs. This leads to a "tail disconnection event". This has been observed on a number of occasions, one notable event being recorded on 20 April 2007, when the ion tail of Encke's Comet was completely severed while the comet passed through a coronal mass ejection. This event was observed by the STEREO space probe. In 2013, ESA scientists reported that the ionosphere of the planet Venus streams outwards in a manner similar to the ion tail seen streaming from a comet under similar conditions. Jets. Uneven heating can cause newly generated gases to break out of a weak spot on the surface of the comet's nucleus, like a geyser. These streams of gas and dust can cause the nucleus to spin, and even split apart. In 2010 it was revealed that dry ice (frozen carbon dioxide) can power jets of material flowing out of a comet nucleus. Infrared imaging of Hartley 2 shows such jets exiting and carrying dust grains with them into the coma. Orbital characteristics. Most comets are small Solar System bodies with elongated elliptical orbits that take them close to the Sun for a part of their orbit and then out into the further reaches of the Solar System for the remainder. Comets are often classified according to the length of their orbital periods: the longer the period, the more elongated the ellipse. Short period. Periodic comets or short-period comets are generally defined as those having orbital periods of less than 200 years. They usually orbit more-or-less in the ecliptic plane in the same direction as the planets. Their orbits typically take them out to the region of the outer planets (Jupiter and beyond) at aphelion; for example, the aphelion of Halley's Comet is a little beyond the orbit of Neptune. Comets whose aphelia are near a major planet's orbit are called its "family". 
Such families are thought to arise from the planet capturing formerly long-period comets into shorter orbits. At the shorter orbital period extreme, Encke's Comet has an orbit that does not reach the orbit of Jupiter, and is known as an Encke-type comet. Short-period comets with orbital periods less than 20 years and low inclinations (up to 30 degrees) to the ecliptic are called traditional Jupiter-family comets (JFCs). Those like Halley, with orbital periods of between 20 and 200 years and inclinations extending from zero to more than 90 degrees, are called Halley-type comets (HTCs). As of 2023, 70 Encke-type comets, 100 HTCs, and 755 JFCs have been reported. Recently discovered main-belt comets form a distinct class, orbiting in more circular orbits within the asteroid belt. Because their elliptical orbits frequently take them close to the giant planets, comets are subject to further gravitational perturbations. Short-period comets have a tendency for their aphelia to coincide with a giant planet's semi-major axis, with the JFCs being the largest group. It is clear that comets coming in from the Oort cloud often have their orbits strongly influenced by the gravity of giant planets as a result of a close encounter. Jupiter is the source of the greatest perturbations, being more than twice as massive as all the other planets combined. These perturbations can deflect long-period comets into shorter orbital periods. Based on their orbital characteristics, short-period comets are thought to originate from the centaurs and the Kuiper belt/scattered disc, a disk of objects in the trans-Neptunian region, whereas the source of long-period comets is thought to be the far more distant spherical Oort cloud (after the Dutch astronomer Jan Hendrik Oort, who hypothesized its existence). Vast swarms of comet-like bodies are thought to orbit the Sun in these distant regions in roughly circular orbits. 
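The family definitions above (JFCs: orbital period under 20 years with inclination up to 30 degrees; HTCs: period between 20 and 200 years) can be written as a small toy classifier; the function and its exact boundary handling are my own illustration, not an official taxonomy:

```python
# Classify a short-period comet into the families described in the text.
def comet_family(period_years, inclination_deg):
    if period_years >= 200:
        return "long-period"
    if period_years < 20 and inclination_deg <= 30:
        return "Jupiter-family (JFC)"
    if 20 <= period_years <= 200:
        return "Halley-type (HTC)"
    return "other short-period"  # e.g. short period but high inclination

assert comet_family(6.4, 12) == "Jupiter-family (JFC)"  # a typical JFC-like orbit
assert comet_family(76, 162) == "Halley-type (HTC)"     # Halley's Comet itself
```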
Occasionally the gravitational influence of the outer planets (in the case of Kuiper belt objects) or nearby stars (in the case of Oort cloud objects) may throw one of these bodies into an elliptical orbit that takes it inwards toward the Sun to form a visible comet. Unlike the return of periodic comets, whose orbits have been established by previous observations, the appearance of new comets by this mechanism is unpredictable. Once flung into an orbit that repeatedly carries them near the Sun, comets are continually stripped of matter, which strongly influences their lifetimes: the more material lost, the shorter they survive. Long period. Long-period comets have highly eccentric orbits and periods ranging from 200 years to thousands or even millions of years. An eccentricity greater than 1 when near perihelion does not necessarily mean that a comet will leave the Solar System. For example, Comet McNaught had a heliocentric osculating eccentricity of 1.000019 near its perihelion passage epoch in January 2007 but is bound to the Sun with roughly a 92,600-year orbit because the eccentricity drops below 1 as it moves farther from the Sun. The future orbit of a long-period comet is properly obtained when the osculating orbit is computed at an epoch after leaving the planetary region and is calculated with respect to the center of mass of the Solar System. By definition long-period comets remain gravitationally bound to the Sun; those comets that are ejected from the Solar System due to close passes by major planets are no longer properly considered as having "periods". The orbits of long-period comets take them far beyond the outer planets at aphelia, and the plane of their orbits need not lie near the ecliptic. Long-period comets such as C/1999 F1 and C/2017 T2 (PANSTARRS) can have aphelion distances of nearly with orbital periods estimated around 6 million years.
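Kepler's third law (P² = a³, with P in years and a in astronomical units for heliocentric orbits) connects the quoted period of Comet McNaught to an orbital size. A minimal Python sketch; the derived semi-major axis is our arithmetic, not a figure from the source text:

```python
# Kepler's third law for heliocentric orbits: P**2 = a**3, with P in years
# and a in astronomical units.  Applying it to the ~92,600-year period
# quoted above for Comet McNaught gives the implied semi-major axis; the
# derived value is our arithmetic, not a figure from the source.

def semi_major_axis_au(period_years: float) -> float:
    """Semi-major axis (AU) of a solar orbit with the given period (years)."""
    return period_years ** (2 / 3)

def period_years(a_au: float) -> float:
    """Orbital period (years) of a solar orbit with the given semi-major axis (AU)."""
    return a_au ** 1.5

print(round(semi_major_axis_au(92_600)))  # roughly 2,000 AU
```

The same two functions run in either direction, which is why orbital periods and aphelion distances are quoted almost interchangeably for long-period comets.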
Single-apparition or non-periodic comets are similar to long-period comets because they have parabolic or slightly hyperbolic trajectories when near perihelion in the inner Solar System. However, gravitational perturbations from giant planets cause their orbits to change. Single-apparition comets have a hyperbolic or parabolic osculating orbit which allows them to permanently exit the Solar System after a single pass of the Sun. The Sun's Hill sphere has an unstable maximum boundary of . Only a few hundred comets have been observed on a hyperbolic orbit (e > 1) when near perihelion; for these, an unperturbed two-body heliocentric fit suggests they may escape the Solar System. As of 2022, only two objects have been discovered with an eccentricity significantly greater than one: 1I/ʻOumuamua and 2I/Borisov, indicating an origin outside the Solar System. While ʻOumuamua, with an eccentricity of about 1.2, showed no optical signs of cometary activity during its passage through the inner Solar System in October 2017, changes to its trajectory—which suggest outgassing—indicate that it is probably a comet. On the other hand, 2I/Borisov, with an estimated eccentricity of about 3.36, has been observed to have the coma feature of comets, and is considered the first detected interstellar comet. Comet C/1980 E1 had an orbital period of roughly 7.1 million years before the 1982 perihelion passage, but a 1980 encounter with Jupiter accelerated the comet, giving it the largest eccentricity (1.057) of any known solar comet with a reasonable observation arc. Comets not expected to return to the inner Solar System include C/1980 E1, C/2000 U5, C/2001 Q4 (NEAT), C/2009 R1, C/1956 R1, and C/2007 F1 (LONEOS). Some authorities use the term "periodic comet" to refer to any comet with a periodic orbit (that is, all short-period comets plus all long-period comets), whereas others use it to mean exclusively short-period comets.
Similarly, although the literal meaning of "non-periodic comet" is the same as "single-apparition comet", some use it to mean all comets that are not "periodic" in the second sense (that is, to include all comets with a period greater than 200 years). Early observations have revealed a few genuinely hyperbolic (i.e. non-periodic) trajectories, but no more than could be accounted for by perturbations from Jupiter. Comets from interstellar space are moving with velocities of the same order as the relative velocities of stars near the Sun (a few tens of km per second). When such objects enter the Solar System, they have a positive specific orbital energy resulting in a positive velocity at infinity (v∞) and have notably hyperbolic trajectories. A rough calculation shows that there might be four hyperbolic comets per century within Jupiter's orbit, give or take one and perhaps two orders of magnitude. Oort cloud and Hills cloud. The Oort cloud is thought to occupy a vast space starting from between to as far as from the Sun. This cloud surrounds the rest of the Solar System, from the Sun out past the limits of the Kuiper belt. The Oort cloud consists of material of the kind from which celestial bodies are built. The Solar System's planets exist only because of the planetesimals (chunks of leftover material that assisted in the creation of planets) that were condensed and formed by the gravity of the Sun. The eccentric orbits into which some of these planetesimals were scattered are why the Oort cloud exists at all. Some estimates place the outer edge at between . The region can be subdivided into a spherical outer Oort cloud of , and a doughnut-shaped inner cloud, the Hills cloud, of . The outer cloud is only weakly bound to the Sun and supplies the long-period (and possibly Halley-type) comets that fall to inside the orbit of Neptune. The inner Oort cloud is also known as the Hills cloud, named after Jack G.
Hills, who proposed its existence in 1981. Models predict that the inner cloud should have tens or hundreds of times as many cometary nuclei as the outer halo; it is seen as a possible source of new comets that resupply the relatively tenuous outer cloud as the latter's numbers are gradually depleted. The Hills cloud explains the continued existence of the Oort cloud after billions of years. Exocomets. Exocomets beyond the Solar System have been detected and may be common in the Milky Way. The first exocomet system detected was around Beta Pictoris, a very young A-type main-sequence star, in 1987. A total of 11 such exocomet systems have been identified as of 2013, using the absorption spectrum caused by the large clouds of gas emitted by comets when passing close to their star. For ten years the Kepler space telescope was responsible for searching for planets and other objects outside of the Solar System. The first transiting exocomets were found in February 2018 by a group consisting of professional astronomers and citizen scientists in light curves recorded by the Kepler space telescope. After the Kepler space telescope retired in October 2018, the Transiting Exoplanet Survey Satellite (TESS) took over its mission. Since the launch of TESS, astronomers have discovered the transits of comets around the star Beta Pictoris using a light curve from TESS. Since TESS has taken over, astronomers have been able to better distinguish exocomets with the spectroscopic method. New planets are detected by the transit light-curve method, in which a transit appears as a symmetrical dip in the readings when a planet passes in front of its parent star. However, after further evaluation of these light curves, it has been discovered that asymmetrical patterns in some dips are caused by the tail of a comet or of hundreds of comets. Effects of comets. Connection to meteor showers.
As a comet is heated during close passes to the Sun, outgassing of its icy components releases solid debris too large to be swept away by radiation pressure and the solar wind. If Earth's orbit sends it through that trail of debris, which is composed mostly of fine grains of rocky material, there is likely to be a meteor shower as Earth passes through. Denser trails of debris produce quick but intense meteor showers and less dense trails create longer but less intense showers. Typically, the density of the debris trail is related to how long ago the parent comet released the material. The Perseid meteor shower, for example, occurs every year between 9 and 13 August, when Earth passes through the orbit of Comet Swift–Tuttle. Halley's Comet is the source of the Orionid shower in October. Comets and impact on life. Many comets and asteroids collided with Earth in its early stages. Many scientists think that comets bombarding the young Earth about 4 billion years ago brought the vast quantities of water that now fill Earth's oceans, or at least a significant portion of that water. Others have cast doubt on this idea. The detection of organic molecules, including polycyclic aromatic hydrocarbons, in significant quantities in comets has led to speculation that comets or meteorites may have brought the precursors of life—or even life itself—to Earth. In 2013 it was suggested that impacts between rocky and icy surfaces, such as comets, had the potential to create the amino acids that make up proteins through shock synthesis. The speed at which the comets entered the atmosphere, combined with the magnitude of energy created after initial contact, allowed smaller molecules to condense into the larger macromolecules that served as the foundation for life. In 2015, scientists found significant amounts of molecular oxygen in the outgassings of comet 67P, suggesting that the molecule may occur more often than had been thought, and thus be a less reliable indicator of life than had been supposed.
It is suspected that comet impacts have, over long timescales, delivered significant quantities of water to Earth's Moon, some of which may have survived as lunar ice. Comet and meteoroid impacts are thought to be responsible for the existence of tektites and australites. Fear of comets. Fear of comets as acts of God and signs of impending doom was highest in Europe from AD 1200 to 1650. The year after the Great Comet of 1618, for example, Gotthard Arthusius published a pamphlet stating that it was a sign that the Day of Judgment was near. He listed ten pages of comet-related disasters, including "earthquakes, floods, changes in river courses, hail storms, hot and dry weather, poor harvests, epidemics, war and treason and high prices". By 1700 most scholars concluded that such events occurred whether a comet was seen or not. Using Edmond Halley's records of comet sightings, however, William Whiston in 1711 wrote that the Great Comet of 1680 had a periodicity of 574 years and was responsible for the worldwide flood in the Book of Genesis, by pouring water on Earth. His announcement revived for another century fear of comets, now as direct threats to the world instead of signs of disasters. Spectroscopic analysis in 1910 found the toxic gas cyanogen in the tail of Halley's Comet, causing panicked buying of gas masks and quack "anti-comet pills" and "anti-comet umbrellas" by the public. Fate of comets. Departure (ejection) from Solar System. If a comet is traveling fast enough, it may leave the Solar System. Such comets follow the open path of a hyperbola, and as such, they are called hyperbolic comets. Solar comets are only known to be ejected by interacting with another object in the Solar System, such as Jupiter. An example of this is Comet C/1980 E1, which was shifted from an orbit of 7.1 million years around the Sun, to a hyperbolic trajectory, after a 1980 close pass by the planet Jupiter. 
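The energy bookkeeping behind such ejections can be sketched briefly: a comet's specific orbital energy is ε = v²/2 − GM☉/r, and if ε > 0 the orbit is hyperbolic, leaving a residual speed v∞ = √(2ε) far from the Sun. A minimal Python sketch; the constants are standard values, and the 50 km/s state at 1 AU is an illustrative example, not a real comet:

```python
import math

# Specific orbital energy and hyperbolic excess speed for a heliocentric
# state.  GM_SUN is the standard solar gravitational parameter; the sample
# state below (50 km/s at 1 AU) is illustrative, not a real object.

GM_SUN = 1.32712440018e20  # m^3 s^-2
AU = 1.495978707e11        # m

def v_infinity(speed_m_s: float, r_m: float) -> float:
    """Residual speed at infinity; raises ValueError for a bound orbit."""
    energy = speed_m_s ** 2 / 2 - GM_SUN / r_m  # specific orbital energy
    if energy <= 0:
        raise ValueError("orbit is bound (elliptical), no residual speed")
    return math.sqrt(2 * energy)

# Escape speed at 1 AU is about 42 km/s, so an object crossing 1 AU at
# 50 km/s is unbound and keeps roughly 27 km/s at infinity.
print(v_infinity(50e3, AU) / 1e3)
```

The same check run at 30 km/s (about Earth's orbital speed) raises, since that state is bound; this is the sense in which interstellar comets, arriving with stellar-scale relative velocities, necessarily follow notably hyperbolic trajectories.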
Interstellar comets such as 1I/ʻOumuamua and 2I/Borisov never orbited the Sun and therefore do not require a third-body interaction to be ejected from the Solar System. Extinction. Jupiter-family comets and long-period comets appear to follow very different fading laws. The JFCs are active over a lifetime of about 10,000 years or ~1,000 orbits whereas long-period comets fade much faster. Only 10% of the long-period comets survive more than 50 passages to small perihelion and only 1% of them survive more than 2,000 passages. Eventually most of the volatile material contained in a comet nucleus evaporates, and the comet becomes a small, dark, inert lump of rock or rubble that can resemble an asteroid. Some asteroids in elliptical orbits are now identified as extinct comets. Roughly six percent of the near-Earth asteroids are thought to be extinct comet nuclei. Breakup and collisions. The nucleus of some comets may be fragile, a conclusion supported by the observation of comets splitting apart. A significant cometary disruption was that of Comet Shoemaker–Levy 9, which was discovered in 1993. A close encounter in July 1992 had broken it into pieces, and over a period of six days in July 1994, these pieces fell into Jupiter's atmosphere—the first time astronomers had observed a collision between two objects in the Solar System. Other splitting comets include 3D/Biela in 1846 and 73P/Schwassmann–Wachmann from 1995 to 2006. The Greek historian Ephorus reported that a comet split apart as far back as the winter of 372–373 BC. Comets are suspected of splitting due to thermal stress, internal gas pressure, or impact. Comets 42P/Neujmin and 53P/Van Biesbroeck appear to be fragments of a parent comet. Numerical integrations have shown that both comets had a rather close approach to Jupiter in January 1850, and that, before 1850, the two orbits were nearly identical.
Another group of comets that is the result of fragmentation episodes is the Liller comet family made of C/1988 A1 (Liller), C/1996 Q1 (Tabur), C/2015 F3 (SWAN), C/2019 Y1 (ATLAS), and C/2023 V5 (Leonard). Some comets have been observed to break up during their perihelion passage, including great comets West and Ikeya–Seki. Biela's Comet was one significant example, breaking into two pieces during its perihelion passage in 1846. The two fragments were seen separately in 1852, but never again afterward. Instead, spectacular meteor showers were seen in 1872 and 1885 when the comet should have been visible. A minor meteor shower, the Andromedids, occurs annually in November, and it is caused when Earth crosses the orbit of Biela's Comet. Some comets meet a more spectacular end – either falling into the Sun or smashing into a planet or other body. Collisions between comets and planets or moons were common in the early Solar System: some of the many craters on the Moon, for example, may have been caused by comets. A collision of a comet with a planet occurred in July 1994 when Comet Shoemaker–Levy 9 broke up into pieces and collided with Jupiter. Nomenclature. The names given to comets have followed several different conventions over the past two centuries. Prior to the early 20th century, most comets were referred to by the year when they appeared, sometimes with additional adjectives for particularly bright comets; thus, the "Great Comet of 1680", the "Great Comet of 1882", and the "Great January Comet of 1910". After Edmond Halley demonstrated that the comets of 1531, 1607, and 1682 were the same body and successfully predicted its return in 1759 by calculating its orbit, that comet became known as Halley's Comet. Similarly, the second and third known periodic comets, Encke's Comet and Biela's Comet, were named after the astronomers who calculated their orbits rather than their original discoverers.
Later, periodic comets were usually named after their discoverers, but comets that had appeared only once continued to be referred to by the year of their appearance. In the early 20th century, the convention of naming comets after their discoverers became common, and this remains so today. A comet can be named after its discoverers or an instrument or program that helped to find it. For example, in 2019, astronomer Gennadiy Borisov observed a comet that appeared to have originated outside of the solar system; the comet was named 2I/Borisov after him. History of study. Early observations and thought. From ancient sources, such as Chinese oracle bones, it is known that comets have been noticed by humans for millennia. Until the sixteenth century, comets were usually considered bad omens of deaths of kings or noble men, or coming catastrophes, or even interpreted as attacks by heavenly beings against terrestrial inhabitants. Aristotle (384–322 BC) was the first known scientist to use various theories and observational facts to employ a consistent, structured cosmological theory of comets. He believed that comets were atmospheric phenomena, due to the fact that they could appear outside of the zodiac and vary in brightness over the course of a few days. Aristotle's cometary theory arose from his observations and cosmological theory that everything in the cosmos is arranged in a distinct configuration. Part of this configuration was a clear separation between the celestial and terrestrial, believing comets to be strictly associated with the latter. According to Aristotle, comets must be within the sphere of the moon and clearly separated from the heavens. Also in the 4th century BC, Apollonius of Myndus supported the idea that comets moved like the planets. Aristotelian theory on comets continued to be widely accepted throughout the Middle Ages, despite several discoveries from various individuals challenging aspects of it. 
In the 1st century AD, Seneca the Younger questioned Aristotle's logic concerning comets. He argued that, because of their regular movement and imperviousness to wind, comets cannot be atmospheric, and that they are more permanent than suggested by their brief flashes across the sky. He pointed out that only the tails are transparent and thus cloudlike, and argued that there is no reason to confine their orbits to the zodiac. In criticizing Apollonius of Myndus, Seneca argues, "A comet cuts through the upper regions of the universe and then finally becomes visible when it reaches the lowest point of its orbit." While Seneca did not author a substantial theory of his own, his arguments would spark much debate among Aristotle's critics in the 16th and 17th centuries. In the 1st century AD, Pliny the Elder believed that comets were connected with political unrest and death. Pliny observed comets as "human like", often describing their tails with "long hair" or "long beard". His system for classifying comets according to their color and shape was used for centuries. In India, by the 6th century AD astronomers believed that comets were apparitions that re-appeared periodically. This was the view expressed in the 6th century by the astronomers Varāhamihira and Bhadrabahu, and the 10th-century astronomer Bhaṭṭotpala listed the names and estimated periods of certain comets, but it is not known how these figures were calculated or how accurate they were. There is a claim that an Arab scholar in 1258 noted several recurrent appearances of a comet (or a type of comet), and though it is not clear whether he considered it to be a single periodic comet, it might have been a comet with a period of around 63 years. In 1301, the Italian painter Giotto was the first person to accurately and anatomically portray a comet.
In his work "Adoration of the Magi," Giotto's depiction of Halley's Comet in the place of the Star of Bethlehem would go unmatched in accuracy until the 19th century and be bested only with the invention of photography. Astrological interpretations of comets continued to take precedence well into the 15th century, even as modern scientific astronomy began to take root. Comets continued to forewarn of disaster, as seen in the "Luzerner Schilling" chronicles and in the warnings of Pope Callixtus III. In 1578, German Lutheran bishop Andreas Celichius defined comets as "the thick smoke of human sins ... kindled by the hot and fiery anger of the Supreme Heavenly Judge". The next year, Andreas Dudith stated that "If comets were caused by the sins of mortals, they would never be absent from the sky." Scientific approach. Crude attempts at a parallax measurement of Halley's Comet were made in 1456, but were erroneous. Regiomontanus was the first to attempt to calculate diurnal parallax by observing the Great Comet of 1472. His predictions were not very accurate, but they were conducted in the hopes of estimating the distance of a comet from Earth. In the 16th century, Tycho Brahe and Michael Maestlin demonstrated that comets must exist outside of Earth's atmosphere by measuring the parallax of the Great Comet of 1577. Within the precision of the measurements, this implied the comet must be at least four times more distant than from Earth to the Moon. Based on observations in 1664, Giovanni Borelli recorded the longitudes and latitudes of comets that he observed, and suggested that cometary orbits may be parabolic. Despite being a skilled astronomer, in his 1623 book "The Assayer", Galileo Galilei rejected Brahe's theories on the parallax of comets and claimed that they may be a mere optical illusion, despite little personal observation. In 1625, Maestlin's student Johannes Kepler upheld that Brahe's view of cometary parallax was correct.
Additionally, mathematician Jacob Bernoulli published a treatise on comets in 1682. During the early modern period comets were studied for their astrological significance in medical disciplines. Many healers of this time considered medicine and astronomy to be inter-disciplinary and employed their knowledge of comets and other astrological signs for diagnosing and treating patients. Isaac Newton, in his "Principia Mathematica" of 1687, proved that an object moving under the influence of gravity by an inverse square law must trace out an orbit shaped like one of the conic sections, and he demonstrated how to fit a comet's path through the sky to a parabolic orbit, using the comet of 1680 as an example. He describes comets as compact and durable solid bodies moving in oblique orbit and their tails as thin streams of vapor emitted by their nuclei, ignited or heated by the Sun. He suspected that comets were the origin of the life-supporting component of air. He pointed out that comets usually appear near the Sun, and therefore most likely orbit it. On their luminosity, he stated, "The comets shine by the Sun's light, which they reflect," with their tails illuminated by "the Sun's light reflected by a smoke arising from [the coma]". In 1705, Edmond Halley (1656–1742) applied Newton's method to 23 cometary apparitions that had occurred between 1337 and 1698. He noted that three of these, the comets of 1531, 1607, and 1682, had very similar orbital elements, and he was further able to account for the slight differences in their orbits in terms of gravitational perturbation caused by Jupiter and Saturn. Confident that these three apparitions had been three appearances of the same comet, he predicted that it would appear again in 1758–59. Halley's predicted return date was later refined by a team of three French mathematicians: Alexis Clairaut, Joseph Lalande, and Nicole-Reine Lepaute, who predicted the date of the comet's 1759 perihelion to within one month's accuracy. 
When the comet returned as predicted, it became known as Halley's Comet.

From his huge vapouring train perhaps to shake
Reviving moisture on the numerous orbs,
Thro' which his long ellipsis winds; perhaps
To lend new fuel to declining suns,
To light up worlds, and feed th' ethereal fire.

James Thomson, "The Seasons" (1730; 1748)

As early as the 18th century, some scientists had made correct hypotheses as to comets' physical composition. In 1755, Immanuel Kant hypothesized in his "Universal Natural History" that comets were condensed from "primitive matter" beyond the known planets, which is "feebly moved" by gravity, then orbit at arbitrary inclinations, and are partially vaporized by the Sun's heat as they near perihelion. In 1836, the German mathematician Friedrich Wilhelm Bessel, after observing streams of vapor during the appearance of Halley's Comet in 1835, proposed that the jet forces of evaporating material could be great enough to significantly alter a comet's orbit, and he argued that the non-gravitational movements of Encke's Comet resulted from this phenomenon. In the 19th century, the Astronomical Observatory of Padova was an epicenter in the observational study of comets. Led by Giovanni Santini (1787–1877) and followed by Giuseppe Lorenzoni (1843–1914), this observatory was devoted to classical astronomy, mainly to the calculation of the orbits of new comets and planets, with the goal of compiling a catalog of almost ten thousand stars. Situated in the northern portion of Italy, observations from this observatory were key in establishing important geodetic, geographic, and astronomical calculations, such as the difference of longitude between Milan and Padua as well as Padua to Fiume. Correspondence within the observatory, particularly between Santini and another astronomer Giuseppe Toaldo, mentioned the importance of comet and planetary orbital observations.
In 1950, Fred Lawrence Whipple proposed that rather than being rocky objects containing some ice, comets were icy objects containing some dust and rock. This "dirty snowball" model soon became accepted and appeared to be supported by the observations of an armada of spacecraft (including the European Space Agency's "Giotto" probe and the Soviet Union's "Vega 1" and "Vega 2") that flew through the coma of Halley's Comet in 1986, photographed the nucleus, and observed jets of evaporating material. On 22 January 2014, ESA scientists reported the detection, for the first definitive time, of water vapor on the dwarf planet Ceres, the largest object in the asteroid belt. The detection was made by using the far-infrared abilities of the Herschel Space Observatory. The finding is unexpected because comets, not asteroids, are typically considered to "sprout jets and plumes". According to one of the scientists, "The lines are becoming more and more blurred between comets and asteroids." On 11 August 2014, astronomers released studies, using the Atacama Large Millimeter/Submillimeter Array (ALMA) for the first time, that detailed the distribution of HCN, HNC, , and dust inside the comae of comets C/2012 F6 (Lemmon) and C/2012 S1 (ISON). Classification. Great comets. Approximately once a decade, a comet becomes bright enough to be noticed by a casual observer, leading such comets to be designated as great comets. Predicting whether a comet will become a great comet is notoriously difficult, as many factors may cause a comet's brightness to depart drastically from predictions. Broadly speaking, if a comet has a large and active nucleus, will pass close to the Sun, and is not obscured by the Sun as seen from Earth when at its brightest, it has a chance of becoming a great comet. However, Comet Kohoutek in 1973 fulfilled all the criteria and was expected to become spectacular but failed to do so. 
Comet West, which appeared three years later, had much lower expectations but became an extremely impressive comet. The Great Comet of 1577 is a well-known example of a great comet. It passed near Earth as a non-periodic comet and was seen by many, including well-known astronomers Tycho Brahe and Taqi ad-Din. Observations of this comet led to several significant findings regarding cometary science, especially for Brahe. The late 20th century saw a lengthy gap without the appearance of any great comets, followed by the arrival of two in quick succession—Comet Hyakutake in 1996, followed by Hale–Bopp, which reached maximum brightness in 1997 having been discovered two years earlier. The first great comet of the 21st century was C/2006 P1 (McNaught), which became visible to naked eye observers in January 2007. It was the brightest in over 40 years. Sungrazing comets. A sungrazing comet is a comet that passes extremely close to the Sun at perihelion, generally within a few million kilometers. Although small sungrazers can be completely evaporated during such a close approach to the Sun, larger sungrazers can survive many perihelion passages. However, the strong tidal forces they experience often lead to their fragmentation. About 90% of the sungrazers observed with SOHO are members of the Kreutz group, which all originate from one giant comet that broke up into many smaller comets during its first passage through the inner Solar System. The remainder contains some sporadic sungrazers, but four other related groups of comets have been identified among them: the Kracht, Kracht 2a, Marsden, and Meyer groups. The Marsden and Kracht groups both appear to be related to Comet 96P/Machholz, which is the parent of two meteor streams, the Quadrantids and the Arietids. Unusual comets. Of the thousands of known comets, some exhibit unusual properties. 
Comet Encke (2P/Encke) orbits from outside the asteroid belt to just inside the orbit of the planet Mercury, whereas Comet 29P/Schwassmann–Wachmann currently travels in a nearly circular orbit entirely between the orbits of Jupiter and Saturn. 2060 Chiron, whose unstable orbit is between Saturn and Uranus, was originally classified as an asteroid until a faint coma was noticed. Similarly, Comet Shoemaker–Levy 2 was originally designated asteroid 1990 UL3. Largest. The largest known periodic comet is 95P/Chiron, at 200 km in diameter, which comes to perihelion every 50 years just inside of Saturn's orbit at 8 AU. The largest known Oort cloud comet is suspected of being Comet Bernardinelli–Bernstein, at ≈150 km, which will not come to perihelion until January 2031, just outside of Saturn's orbit at 11 AU. The Comet of 1729 is estimated to have been ≈100 km in diameter and came to perihelion inside of Jupiter's orbit at 4 AU. Centaurs. Centaurs typically behave with characteristics of both asteroids and comets. Centaurs can be classified as comets such as 60558 Echeclus, and 166P/NEAT. 166P/NEAT was discovered while it exhibited a coma, and so is classified as a comet despite its orbit, and 60558 Echeclus was discovered without a coma but later became active, and was then classified as both a comet and an asteroid (174P/Echeclus). One plan for "Cassini" involved sending it to a centaur, but NASA decided to destroy it instead. Observation. A comet may be discovered photographically using a wide-field telescope or visually with binoculars. However, even without access to optical equipment, it is still possible for the amateur astronomer to discover a sungrazing comet online by downloading images accumulated by some satellite observatories such as SOHO. SOHO's 2000th comet was discovered by Polish amateur astronomer Michał Kusiak on 26 December 2010 and both discoverers of Hale–Bopp used amateur equipment (although Hale was not an amateur). Lost.
A number of periodic comets discovered in earlier decades or previous centuries are now lost comets. Either their orbits were never known well enough to predict future appearances, or the comets have disintegrated. However, occasionally a "new" comet is discovered, and calculation of its orbit shows it to be an old "lost" comet. An example is Comet 11P/Tempel–Swift–LINEAR, discovered in 1869 but unobservable after 1908 because of perturbations by Jupiter. It was not found again until accidentally rediscovered by LINEAR in 2001. There are at least 18 comets that fit this category. In popular culture. The depiction of comets in popular culture is firmly rooted in the long Western tradition of seeing comets as harbingers of doom and as omens of world-altering change. Halley's Comet alone has caused a slew of sensationalist publications of all sorts at each of its reappearances. It was especially noted that the birth and death of some notable persons coincided with separate appearances of the comet, such as with writers Mark Twain (who correctly speculated that he'd "go out with the comet" in 1910) and Eudora Welty, to whose life Mary Chapin Carpenter dedicated the song "Halley Came to Jackson". In times past, bright comets often inspired panic and hysteria in the general population, being thought of as bad omens. More recently, during the passage of Halley's Comet in 1910, Earth passed through the comet's tail, and erroneous newspaper reports inspired a fear that cyanogen in the tail might poison millions, whereas the appearance of Comet Hale–Bopp in 1997 triggered the mass suicide of the Heaven's Gate cult. In science fiction, the impact of comets has been depicted as a threat overcome by technology and heroism (as in the 1998 films "Deep Impact" and "Armageddon"), or as a trigger of global apocalypse ("Lucifer's Hammer", 1979) or zombies ("Night of the Comet", 1984).
In Jules Verne's "Off on a Comet", a group of people are stranded on a comet orbiting the Sun, while a large crewed space expedition visits Halley's Comet in Sir Arthur C. Clarke's novel "". In literature. The long-period comet first recorded by Pons in Florence on 15 July 1825 inspired Lydia Sigourney's humorous poem, in which all the celestial bodies argue over the comet's appearance and purpose. References. Footnotes. &lt;templatestyles src="Reflist/styles.css" /&gt; Citations. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "v_{\\infty}\\!" } ]
https://en.wikipedia.org/wiki?curid=5962
59623
Endomorphism ring
Endomorphism algebra of an abelian group In mathematics, the endomorphisms of an abelian group "X" form a ring. This ring is called the endomorphism ring of "X", denoted by End("X"), and consists of all homomorphisms of "X" into itself. Addition of endomorphisms arises naturally in a pointwise manner and multiplication via endomorphism composition. Using these operations, the set of endomorphisms of an abelian group forms a (unital) ring, with the zero map formula_0 as additive identity and the identity map formula_1 as multiplicative identity. The functions involved are restricted to what is defined as a homomorphism in the context, which depends upon the category of the object under consideration. The endomorphism ring consequently encodes several internal properties of the object. As the endomorphism ring is often an algebra over some ring "R," this may also be called the endomorphism algebra. An abelian group is the same thing as a module over the ring of integers, which is the initial object in the category of rings. In a similar fashion, if "R" is any commutative ring, the endomorphisms of an "R"-module form an algebra over "R" by the same axioms and derivation. In particular, if "R" is a field, its modules "M" are vector spaces and the endomorphism ring of each is an algebra over the field "R". Description. Let ("A", +) be an abelian group and consider the group homomorphisms from "A" into "A". Then addition of two such homomorphisms may be defined pointwise to produce another group homomorphism. Explicitly, given two such homomorphisms "f" and "g", the sum of "f" and "g" is the homomorphism "f" + "g" : "x" ↦ "f"("x") + "g"("x"). Under this operation End("A") is an abelian group. With the additional operation of composition of homomorphisms, End("A") is a ring with multiplicative identity. This composition is explicitly "fg" : "x" ↦ "f"("g"("x")). The multiplicative identity is the identity homomorphism on "A". The additive inverses are the pointwise inverses. 
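The pointwise addition and composition just described can be illustrated with a small sketch for the group Z × Z; the endomorphisms `f` and `g` below (and the helper names) are hypothetical examples, not notation from the text:

```python
# Endomorphisms of the abelian group Z x Z, written as plain functions.
# Addition is pointwise and multiplication is composition, as in the text.

def f(v):
    x, y = v
    return (x + y, y)          # corresponds to the matrix [[1, 1], [0, 1]]

def g(v):
    x, y = v
    return (x, x + y)          # corresponds to the matrix [[1, 0], [1, 1]]

def add(f, g):
    """(f + g)(x) = f(x) + g(x), taken componentwise in Z x Z."""
    return lambda v: tuple(a + b for a, b in zip(f(v), g(v)))

def compose(f, g):
    """(fg)(x) = f(g(x))."""
    return lambda v: f(g(v))

v = (2, 3)
print(add(f, g)(v))                        # → (7, 8)
print(compose(f, g)(v), compose(g, f)(v))  # → (7, 5) (5, 8): not commutative
```

The differing results of the two compositions show concretely that End(Z × Z) is a non-commutative ring.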
If the set "A" does not form an "abelian" group, then the above construction is not necessarily well-defined, as then the sum of two homomorphisms need not be a homomorphism. However, the closure of the set of endomorphisms under the above operations is a canonical example of a near-ring that is not a ring. For any abelian group formula_2 and natural number "n", there is a ring isomorphism formula_3, where an element of the matrix ring formula_4 acts on a column vector of elements of formula_5 by formula_6 One can use this isomorphism to construct many non-commutative endomorphism rings. For example: formula_7, since formula_8. Also, when formula_9 is a field, there is a canonical isomorphism formula_10, so formula_11, that is, the endomorphism ring of a formula_12-vector space is identified with the ring of "n"-by-"n" matrices with entries in formula_12. More generally, the endomorphism algebra of the free module formula_13 is naturally formula_14-by-formula_14 matrices with entries in the ring formula_15. References. &lt;templatestyles src="Refbegin/styles.css" /&gt;
[ { "math_id": 0, "text": "0: x \\mapsto 0" }, { "math_id": 1, "text": "1: x \\mapsto x" }, { "math_id": 2, "text": "A" }, { "math_id": 3, "text": "\\mathrm{M}_n(\\operatorname{End}(A))\\cong \\operatorname{End}(A^n)" }, { "math_id": 4, "text": "\\mathrm{M}_n(\\operatorname{End}(A))" }, { "math_id": 5, "text": "A^n" }, { "math_id": 6, "text": "\\begin{pmatrix}\\varphi_{11}&\\cdots &\\varphi_{1n}\\\\ \\vdots& &\\vdots \\\\ \\varphi_{n1}&\\cdots& \\varphi_{nn} \\end{pmatrix}\\begin{pmatrix}a_1\\\\\\vdots\\\\a_n\\end{pmatrix}=\\begin{pmatrix}\\sum_{i=1}^n\\varphi_{1i}(a_i)\\\\\\vdots\\\\\\sum_{i=1}^n\\varphi_{ni}(a_i) \\end{pmatrix}. " }, { "math_id": 7, "text": "\\operatorname{End}(\\mathbb{Z}\\times \\mathbb{Z})\\cong \\mathrm{M}_2(\\mathbb{Z})" }, { "math_id": 8, "text": "\\operatorname{End}(\\mathbb{Z})\\cong \\mathbb{Z}" }, { "math_id": 9, "text": "R=K" }, { "math_id": 10, "text": "\\operatorname{End}(K)\\cong K" }, { "math_id": 11, "text": "\\operatorname{End}(K^n)\\cong \\mathrm{M}_n(K)" }, { "math_id": 12, "text": "K" }, { "math_id": 13, "text": "M = R^n" }, { "math_id": 14, "text": "n" }, { "math_id": 15, "text": "R" } ]
https://en.wikipedia.org/wiki?curid=59623
59625279
JPEG XS
Low-latency video compression standard JPEG XS (ISO/IEC 21122) is an interoperable, visually lossless, low-latency and lightweight image and video coding system used in professional applications. Applications of the standard include streaming high-quality content for virtual reality, drones, autonomous vehicles using cameras, gaming, and broadcasting (SMPTE ST 2022 and ST 2110). It was the first ISO codec ever designed for this specific purpose. JPEG XS, built on core technology from both intoPIX and Fraunhofer IIS, is formally standardized as ISO/IEC 21122 by the Joint Photographic Experts Group with the first edition published in 2019. Although not official, the XS acronym was chosen to highlight the "eXtra Small" and "eXtra Speed" characteristics of the codec. Today, the JPEG committee is still actively working on further improvements to XS, with the second edition scheduled for publication (beginning of 2022) and initial efforts being launched towards a third edition. Features of JPEG XS. Three main features are key to JPEG XS: Relying on these key features, JPEG XS is suitable to be used in any application where uncompressed content is now the norm, yet still allowing for significant savings in the required bandwidth usage, preserving quality and low latency. Among the targeted use cases are video transport over professional video links (like SDI and Ethernet/IP), real-time video storage, memory buffers, omnidirectional video capture and rendering, and image sensor compression (for example in cameras and in the automotive industry). Typical compression ratios go up to 10:1 but can also be higher depending on the nature of the image or the requirements of the targeted application. JPEG XS favors visually lossless quality in combination with low latency and low complexity, over crude compression performance. Hence, it is not a direct competitor to alternative image codecs like JPEG 2000 and JPEG XL or video codecs like AV1, AVC/H.264 and HEVC/H.265. 
Other important features are: Application domains. This section lists the main application domains where JPEG XS is actively used. Other application domains may be added in the future, for example, framebuffer compression or AR/VR applications. Transport over video links and IP networks. Video bandwidth requirements are growing continuously, as video resolutions, frame rates, bit depths, and the number of video streams are constantly increasing. Likewise, the capacities of video links and communication channels are also growing, yet at a slower pace than what is needed to address the huge video bandwidth growth. In addition, the investments to upgrade the capacity of links and channels are significant and need to be amortized over several years. Moreover, both the broadcast and pro-AV markets are shifting towards AV-over-IP-based infrastructure, with a preference going to 1 Gigabit Ethernet links for remote production or 10G Ethernet networks for in-house facilities. 1G, 2.5G, and 10G Ethernet are cheap and ubiquitous, while 25G or better links are usually not yet affordable. Given the available bandwidth and infrastructure cost, relying on uncompressed video is therefore no longer an option, as 4K, 8K, increased bit depths (for HDR), and higher framerates need to be supported. JPEG XS is a lightweight compression scheme that preserves visual quality compared to an uncompressed stream, at a low cost, targeting compression ratios of up to 10:1. With XS, it is for example possible to repurpose existing SDI cables to transport 4K60 over a single 3G-SDI (at 4:1), and even over a single HD-SDI (at 8:1). Similar scenarios can be used to transport 8K60 content over various SDI cable types (e.g. 6G-SDI and 12G-SDI). Alternatively, XS enables transporting 4K60 content over 1G Ethernet and 8K60 over 5G or 10G Ethernet, which would be impossible without compression. 
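As a rough illustration of the bandwidth arithmetic behind these scenarios, the following sketch assumes 10-bit 4:2:2 video (about 20 bits per pixel on average) and ignores blanking and protocol overheads, so the figures are approximate:

```python
# Back-of-the-envelope bandwidth check for the transport scenarios above.
# The 10-bit 4:2:2 format (20 bits per pixel on average) is an assumption
# for illustration; link payload capacities and overheads are simplified.

def uncompressed_gbps(width, height, fps, bits_per_pixel=20):
    """Raw video bit rate in Gbit/s."""
    return width * height * fps * bits_per_pixel / 1e9

rate_4k60 = uncompressed_gbps(3840, 2160, 60)
print(f"4K60 uncompressed:     {rate_4k60:.2f} Gbit/s")   # ~9.95 Gbit/s
print(f"ratio for 1G Ethernet: {rate_4k60 / 1.0:.0f}:1")  # ~10:1
print(f"ratio for 3G-SDI:      {rate_4k60 / 3.0:.1f}:1")  # ~3.3:1
```

The ~10:1 figure for 4K60 over 1G Ethernet matches the upper end of the compression range quoted above; the 3G-SDI case lands near the 4:1 example.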
The following table shows some expected compression ranges for typical use cases. Real-time video storage and playout. Related to the transport of video streams is the storage and retrieval of high-resolution streams, where bandwidth limitations similarly apply. For instance, video cameras use internal storage like SSD drives or SD cards to hold large streams of images, yet the maximum data rates of such storage devices are limited and well below the uncompressed video throughput. Sensor compression. As stated, JPEG XS has built-in support for the direct compression of RAW Bayer/CFA images using the Star-Tetrix Color Transform. This transform takes a RAW Bayer pattern image and decorrelates the samples into a 4-component image with each component having only a quarter of the resolution. This means that the total amount of samples to further process and compress remains the same, yet the values are decorrelated similarly to a classical Multiple Component Transform. Avoiding in-camera demosaicing prevents information loss and allows this processing step to be done outside of the camera. This is advantageous because it makes it possible to defer demosaicing the Bayer content from the moment of capture to the production phase, where choices regarding artistic intent and various settings can be better made. Recall that the demosaicing process is irreversible and requires certain choices, like the choice of interpolation algorithm or the level of noise reduction, to be made upfront. Moreover, the demosaicing process can be power-hungry and will also introduce extra latency and complexity. Pushing this step out of the camera is possible with JPEG XS and allows more advanced algorithms to be used, resulting in better quality in the end. Standards. JPEG XS (ISO/IEC 21122). The JPEG XS coding system is an ISO/IEC suite of standards that consists of the following parts: Part 1, formally designated as ISO/IEC 21122-1, describes the core coding system of JPEG XS. 
This standard defines the syntax and, similarly to other JPEG and MPEG image codecs, the decompression process to reconstruct a continuous-tone digital image from its encoded codestream. Part 1 also provides some guidelines for the inverse process, the encoding process, which compresses a digital image into a codestream, but leaves implementation-specific optimizations and choices to the implementers. Part 2 (ISO/IEC 21122-2) builds on top of Part 1 to segregate different applications and uses of JPEG XS into reduced coding tool subsets with tighter constraints. The definition of profiles, levels, and sublevels allows for reducing the complexity of implementations in particular application use cases, while also safeguarding interoperability. Recall that lower complexity typically means less power consumption, lower production costs, easier constraints, etc. Profiles represent interoperability subsets of the codestream syntax specified in Part 1. In addition, levels and sublevels provide limits on the maximum throughput in the encoded (codestream) and the decoded (spatial pixel) image domains, respectively. Part 2 also specifies a buffer model, consisting of a decoder model and a transmission channel model, to enable guaranteeing low-latency requirements down to a fraction of the frame size. Part 3 (ISO/IEC 21122-3) specifies transport and container formats for JPEG XS codestreams. It defines the carriage of important metadata, like color spaces, mastering display metadata (MDM), and EXIF, to facilitate transport, editing, and presentation. Furthermore, this part defines the XS-specific ISOBMFF boxes, an Internet Media Type registration, and additional syntax to allow embedding XS in formats like MP4, MPEG-2 TS, or the HEIF image file format. Part 4 (ISO/IEC 21122-4) is a supporting standard of JPEG XS that provides conformance testing and buffer model verification. 
This standard is crucial to implementers of XS and to conformance testing. Finally, Part 5 (ISO/IEC 21122-5) represents a reference software implementation (written in ISO C11) of the JPEG XS Part 1 decoder, conforming to the Part 2 profiles, levels and sublevels, as well as an exemplary encoder implementation. A second edition of all five parts is in preparation and will be published by early 2022. It provides additional coding tools, profiles and levels, and new reference software to add support for efficient compression of 4:2:0 content, RAW Bayer/CFA content, and mathematically lossless compression. RFC 9134 - RTP Payload Format for ISO/IEC 21122 (JPEG XS). RFC 9134 describes a payload format for the Real-Time Transport Protocol (RTP, RFC 3550) to carry JPEG XS encoded video. In addition, the RFC also contains the official Media Type Registration for JPEG XS video as , along with its mapping of all parameters into the Session Description Protocol (SDP). The RTP Payload Format for JPEG XS in turn enables using JPEG XS in SMPTE ST 2110 environments using SMPTE ST 2110-22 for CBR compressed video transport. MPEG-TS for JPEG XS. ISO/IEC 13818-1:2022, known as MPEG-TS 8th edition, specifies carriage support for JPEG XS in MPEG Transport Streams. See also MPEG-2. Note that AMD1 (Carriage of LCEVC and other improvements) of ISO/IEC 13818-1:2022 contains some additional corrections, improvements, and clarifications regarding embedding JPEG XS in MPEG-TS. VSF TR-07 and TR-08. See VSF TR-07 and TR-08, published by the Video Services Forum. NMOS with JPEG XS. The Networked Media Open Specifications (NMOS) enable registration, discovery, and connection management of JPEG XS endpoints using the AMWA IS-04 and IS-05 NMOS Specifications. See AMWA BCP-006-01, published by the Advanced Media Workflow Association. JPEG XS in IPMX. 
Internet Protocol Media Experience (IPMX) is a proposed set of open standards and specifications to enable the carriage of compressed and uncompressed video, audio, and data over IP networks for the pro AV market. JPEG XS is supported under IPMX via VSF TR-10-8 and TR-10-11. History. The JPEG committee started the standardization activity in 2016 with an open call for a high-performance, low-complexity image coding standard. The best-performing candidates formed the basis for the new standard. First implementations were demonstrated in April 2018 at the NAB Show and later that year at the International Broadcasting Convention. The format was developed by a team led by JPEG chairman Touradj Ebrahimi. XS was also presented at CES in 2019. Technical overview. Core coding. The JPEG XS standard is a classical wavelet-based still-image codec without any frame buffer. While the standard defines JPEG XS based on a hypothetical reference coder, JPEG XS is easier to explain through the steps a typical encoder performs: Component up-scaling and optional component decorrelation: In the first step, the DC offset of the input data is removed and it is upscaled to a bit-precision of 20 bits. Optionally, a multi-component transformation, identical to the JPEG 2000 RCT, is applied. This transformation is a lossless approximation of an RGB to YUV conversion, generating one luma and two chroma channels. Wavelet transformation: Input data is spatially decorrelated by a 5/3 Daubechies wavelet filter. While a five-stage transformation is performed in the horizontal direction, only 0 to 2 transformations are run in the vertical direction. The reason for this asymmetrical filter is to minimize latency. Prequantization: The output of the wavelet filter is converted to a sign-magnitude representation and pre-quantized by a dead zone quantizer to 16-bit precision. 
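For illustration, one 1-D level of the reversible 5/3 lifting transform can be sketched as follows; this is a generic textbook implementation with simple symmetric boundary extension, not the exact filter chain or fixed-point arithmetic mandated by the standard:

```python
def _d_ext(d, i):
    """Symmetric extension of the detail signal at the boundaries."""
    if i < 0:
        return d[0]
    if i >= len(d):
        return d[-1]
    return d[i]

def fwd_53(x):
    """One 1-D level of the reversible 5/3 lifting transform (even length)."""
    n = len(x)
    assert n % 2 == 0 and n >= 2
    def x_ext(i):                       # symmetric extension of the input
        return x[2 * n - 2 - i] if i >= n else x[i]
    # predict step: detail (high-pass) coefficients from the odd samples
    d = [x[2*i + 1] - (x_ext(2*i) + x_ext(2*i + 2)) // 2 for i in range(n // 2)]
    # update step: approximation (low-pass) coefficients from the even samples
    s = [x[2*i] + (_d_ext(d, i - 1) + _d_ext(d, i) + 2) // 4 for i in range(n // 2)]
    return s, d

def inv_53(s, d):
    """Exact inverse: undo the update step, then the predict step."""
    n = 2 * len(s)
    x = [0] * n
    for i in range(len(s)):
        x[2*i] = s[i] - (_d_ext(d, i - 1) + _d_ext(d, i) + 2) // 4
    for i in range(len(d)):
        right = x[2*i + 2] if 2*i + 2 < n else x[2*i]   # mirror at the edge
        x[2*i + 1] = d[i] + (x[2*i] + right) // 2
    return x

samples = [57, 60, 58, 55, 70, 90, 85, 81]
lo, hi = fwd_53(samples)
assert inv_53(lo, hi) == samples        # integer lifting: perfect reconstruction
```

Because the lifting steps use only integer additions and floor divisions, the inverse reproduces the input exactly, which is what makes the transform usable for lossless coding.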
Rate control and quantization: The encoder determines by a non-normative process the rate of each possible quantization setting and then quantizes data by either a dead zone quantizer or a data-dependent uniform quantizer. Entropy coding: JPEG XS uses minimalistic entropy coding for the quantized data, which proceeds in up to four passes over horizontal lines of quantized wavelet coefficients. The steps are: Codestream packing: All entropy-coded data are packed into a linear stream of bits (grouped in byte multiples) along with all of the required image metadata. This sequence of bytes is called the codestream and its high-level syntax is based on the typical JPEG markers and marker segments syntax. Profiles, levels and sublevels. JPEG XS defines profiles (in ISO/IEC 21122-2) that define subsets of coding tools that conforming decoders shall support, by limiting the permitted parameter values and allowed markers. The following table represents an overview of all the profiles along with their most important properties. Please refer to the standard for a complete specification of each profile. In addition, JPEG XS defines levels to represent a lower bound on the required throughput that conforming decoders need to support in the decoded image domain (also called the spatial domain). The following table lists the levels as defined by JPEG XS. The maximums are given in the context of the sampling grid, so they refer to a per-pixel value where each pixel represents one or more component values. However, in the context of Bayer data, JPEG XS internally interprets the Bayer pattern as an interleaved grid of four components. This means that the number of sampling grid points required to represent a Bayer image is four times smaller than the total number of Bayer sample points. Each group of 2×2 (four) Bayer values gets interpreted as one sampling grid point with four components. 
Thus, the total sensor resolution should be divided by four (the width and height each by two) to calculate the corresponding number of sampling grid points. For this reason, all levels also bear double names. Please refer to the standard for a complete specification of each level. Similarly to the concept of levels, JPEG XS defines sublevels to represent a lower bound on the required throughput that conforming decoders need to support in the encoded image domain. Each sublevel is defined by a nominal bit-per-pixel (Nbpp) value that indicates the maximum number of bits per pixel for an encoded image of the maximum permissible number of sampling grid points according to the selected conformance level. Thus, decoders conforming to a particular level and sublevel shall conform to the following constraints derived from Nbpp: The following table lists the existing sublevels and their respective nominal bpp values. Please refer to the standard for a complete specification of each sublevel. Patents and RAND. JPEG XS contains patented technology which is made available for licensing via the JPEG XS Patent Portfolio License (JPEG XS PPL). This license pool covers essential patents owned by Licensors for implementing the ISO/IEC 21122 JPEG XS video coding standard and is available under RAND terms. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "S_{sl,max}=\\bigg\\lfloor \\frac{L_{max}\\times N_{bpp}}{8} \\bigg\\rfloor" }, { "math_id": 1, "text": "R_{t,max}=R_{s,max} \\times N_{bpp}" } ]
https://en.wikipedia.org/wiki?curid=59625279
596267
Rotations and reflections in two dimensions
Mathematical concept In Euclidean geometry, two-dimensional rotations and reflections are two kinds of Euclidean plane isometries which are related to one another. Process. A rotation in the plane can be formed by composing a pair of reflections. First reflect a point P to its image P′ on the other side of line "L"1. Then reflect P′ to its image P′′ on the other side of line "L"2. If lines "L"1 and "L"2 make an angle θ with one another, then points P and P′′ will make an angle 2"θ" around point O, the intersection of "L"1 and "L"2. I.e., angle ∠ "POP′′" will measure 2"θ". A pair of rotations about the same point O will be equivalent to another rotation about point O. On the other hand, the composition of a reflection and a rotation, or of a rotation and a reflection (composition is not commutative), will be equivalent to a reflection. Mathematical expression. The statements above can be expressed more mathematically. Let a rotation about the origin O by an angle θ be denoted as Rot("θ"). Let a reflection about a line L through the origin which makes an angle θ with the x-axis be denoted as Ref("θ"). Let these rotations and reflections operate on all points on the plane, and let these points be represented by position vectors. Then a rotation can be represented as a matrix, formula_0 and likewise for a reflection, formula_1 With these definitions of coordinate rotation and reflection, the following four identities hold: formula_2 Proof. These equations can be proved through straightforward matrix multiplication and application of trigonometric identities, specifically the sum and difference identities. The set of all reflections in lines through the origin and rotations about the origin, together with the operation of composition of reflections and rotations, forms a group. The group has an identity: Rot(0). Every rotation Rot("φ") has an inverse Rot(−"φ"). Every reflection Ref("θ") is its own inverse. 
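The four identities can indeed be checked by straightforward matrix multiplication; the following sketch verifies them numerically for sample angles:

```python
from math import cos, sin, isclose

def Rot(t):
    """Rotation about the origin by angle t."""
    return [[cos(t), -sin(t)], [sin(t), cos(t)]]

def Ref(t):
    """Reflection in the line through the origin at angle t to the x-axis."""
    return [[cos(2*t), sin(2*t)], [sin(2*t), -cos(2*t)]]

def mul(A, B):
    """2x2 matrix product."""
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def same(A, B):
    """Entrywise comparison up to floating-point rounding."""
    return all(isclose(a, b, abs_tol=1e-12)
               for ra, rb in zip(A, B) for a, b in zip(ra, rb))

t, p = 0.7, 0.3
assert same(mul(Rot(t), Rot(p)), Rot(t + p))
assert same(mul(Ref(t), Ref(p)), Rot(2*t - 2*p))
assert same(mul(Rot(t), Ref(p)), Ref(p + t/2))
assert same(mul(Ref(p), Rot(t)), Ref(p - t/2))
```

The second identity also makes the earlier geometric claim concrete: two reflections in lines at angle θ − φ compose to a rotation by twice that angle.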
Composition has closure and is associative, since matrix multiplication is associative. Notice that both Ref("θ") and Rot("θ") have been represented with orthogonal matrices. These matrices all have a determinant whose absolute value is unity. Rotation matrices have a determinant of +1, and reflection matrices have a determinant of −1. The set of all orthogonal two-dimensional matrices together with matrix multiplication forms the orthogonal group "O"(2). The following table gives examples of rotation and reflection matrices: References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": " \\operatorname{Rot}(\\theta) = \\begin{bmatrix}\n \\cos \\theta & -\\sin \\theta \\\\\n \\sin \\theta & \\cos \\theta\n\\end{bmatrix}, " }, { "math_id": 1, "text": " \\operatorname{Ref}(\\theta) = \\begin{bmatrix}\n \\cos 2 \\theta & \\sin 2 \\theta \\\\\n \\sin 2 \\theta & -\\cos 2 \\theta\n\\end{bmatrix}. " }, { "math_id": 2, "text": "\\begin{align}\n \\operatorname{Rot}(\\theta) \\, \\operatorname{Rot}(\\phi) &= \\operatorname{Rot}(\\theta + \\phi), \\\\[4pt]\n \\operatorname{Ref}(\\theta) \\, \\operatorname{Ref}(\\phi) &= \\operatorname{Rot}(2\\theta - 2\\phi), \\\\[2pt]\n \\operatorname{Rot}(\\theta) \\, \\operatorname{Ref}(\\phi) &= \\operatorname{Ref}(\\phi + \\tfrac{1}{2}\\theta), \\\\[2pt]\n \\operatorname{Ref}(\\phi) \\, \\operatorname{Rot}(\\theta) &= \\operatorname{Ref}(\\phi - \\tfrac{1}{2}\\theta).\n\\end{align}" } ]
https://en.wikipedia.org/wiki?curid=596267
596282
Class number problem
Finding a complete list of imaginary quadratic fields having a given class number In mathematics, the Gauss class number problem (for imaginary quadratic fields), as usually understood, is to provide for each "n" ≥ 1 a complete list of imaginary quadratic fields formula_0 (for negative integers "d") having class number "n". It is named after Carl Friedrich Gauss. It can also be stated in terms of discriminants. There are related questions for real quadratic fields and for the behavior as formula_1. The difficulty is in effective computation of bounds: for a given discriminant, it is easy to compute the class number, and there are several ineffective lower bounds on class number (meaning that they involve a constant that is not computed), but effective bounds (and explicit proofs of completeness of lists) are harder. Gauss's original conjectures. The problems are posed in Gauss's Disquisitiones Arithmeticae of 1801 (Section V, Articles 303 and 304). Gauss discusses imaginary quadratic fields in Article 303, stating the first two conjectures, and discusses real quadratic fields in Article 304, stating the third conjecture. The original Gauss class number problem for imaginary quadratic fields is significantly different and easier than the modern statement: he restricted to even discriminants, and allowed non-fundamental discriminants. Class number 2: solved, Baker (1971), Stark (1971) Class number 3: solved, Oesterlé (1985) Class numbers h up to 100: solved, Watkins 2004 Lists of discriminants of class number 1. For imaginary quadratic number fields, the (fundamental) discriminants of class number 1 are: formula_3 The non-fundamental discriminants of class number 1 are: formula_4 Thus, the even discriminants of class number 1, fundamental and non-fundamental (Gauss's original question) are: formula_5 Modern developments. In 1934, Hans Heilbronn proved the Gauss conjecture. 
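The class-number-1 lists above can be checked directly by counting reduced primitive binary quadratic forms of each discriminant; the following brute-force sketch illustrates the computation (it is not how the completeness results were proved):

```python
from math import gcd, isqrt

def class_number(D):
    """Class number of a negative discriminant D, computed by counting
    reduced primitive forms ax^2 + bxy + cy^2 with b^2 - 4ac = D."""
    assert D < 0 and D % 4 in (0, 1)
    h = 0
    b = D % 2                              # b has the same parity as D
    while 3 * b * b <= -D:                 # reduction |b| <= a <= c forces 3b^2 <= |D|
        ac = (b * b - D) // 4              # a*c is determined by b and D
        for a in range(max(b, 1), isqrt(ac) + 1):
            if ac % a == 0:
                c = ac // a
                if gcd(gcd(a, b), c) == 1:     # primitive form
                    # (a, b, c) and (a, -b, c) are inequivalent reduced
                    # forms unless b = 0, a = b, or a = c
                    h += 1 if b == 0 or a == b or a == c else 2
        b += 2
    return h

# the nine fundamental discriminants of class number 1 listed above
assert all(class_number(D) == 1 for D in
           (-3, -4, -7, -8, -11, -19, -43, -67, -163))
print(class_number(-23))  # → 3
```

This is easy for any single discriminant, in line with the remark that computing the class number of a given discriminant is the easy direction; the hard part is proving that a list for a given class number is complete.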
Equivalently, for any given class number, there are only finitely many imaginary quadratic number fields with that class number. Also in 1934, Heilbronn and Edward Linfoot showed that there were at most 10 imaginary quadratic number fields with class number 1 (the 9 known ones, and at most one further). The result was ineffective (see effective results in number theory): it did not give bounds on the size of the remaining field. In later developments, the case "n" = 1 was first discussed by Kurt Heegner, using modular forms and modular equations to show that no further such field could exist. This work was not initially accepted; only with later work of Harold Stark and Bryan Birch (e.g. on the Stark–Heegner theorem and Heegner number) was the position clarified and Heegner's work understood. Practically simultaneously, Alan Baker proved what we now know as Baker's theorem on linear forms in logarithms of algebraic numbers, which resolved the problem by a completely different method. The case "n" = 2 was tackled shortly afterwards, at least in principle, as an application of Baker's work. The complete list of imaginary quadratic fields with class number 1 is formula_6 where "d" is one of formula_7 The general case awaited the discovery of Dorian Goldfeld in 1976 that the class number problem could be connected to the "L"-functions of elliptic curves. This effectively reduced the question of effective determination to one about establishing the existence of a multiple zero of such an "L"-function. With the proof of the Gross–Zagier theorem in 1986, a complete list of imaginary quadratic fields with a given class number could be specified by a finite calculation. All cases up to "n" = 100 were computed by Watkins in 2004. The class number of formula_8 for "d" = 1, 2, 3, ... is formula_9 (sequence in the OEIS). Real quadratic fields. The contrasting case of "real" quadratic fields is very different, and much less is known. 
That is because what enters the analytic formula for the class number is not "h", the class number, on its own — but "h" log "ε", where "ε" is a fundamental unit. This extra factor is hard to control. It may well be the case that class number 1 for real quadratic fields occurs infinitely often. The Cohen–Lenstra heuristics are a set of more precise conjectures about the structure of class groups of quadratic fields. For real fields they predict that about 75.45% of the fields obtained by adjoining the square root of a prime will have class number 1, a result that agrees with computations. Notes. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "\\mathbb{Q}(\\sqrt{d})" }, { "math_id": 1, "text": "d \\to -\\infty" }, { "math_id": 2, "text": "h(d) \\to \\infty\\text{ as }d\\to -\\infty." }, { "math_id": 3, "text": "d=-3,-4,-7,-8,-11,-19,-43,-67,-163." }, { "math_id": 4, "text": "d=-12,-16,-27,-28." }, { "math_id": 5, "text": "d=-4,-8,-12,-16,-28." }, { "math_id": 6, "text": "\\mathbf{Q}(\\sqrt{d})" }, { "math_id": 7, "text": "-1, -2, -3, -7, -11, -19, -43, -67, -163." }, { "math_id": 8, "text": "\\mathbf{Q}(\\sqrt{-d})" }, { "math_id": 9, "text": "1, 1, 1, 1, 2, 2, 1, 1, 1, 2, 1, 1, 2, 4, 2, 1, 4, 1, 1, 2, 4, 2, 3, 2, 1, 6, 1, 1, 6, 4, 3, 1, ..." } ]
https://en.wikipedia.org/wiki?curid=596282
5963160
Mash ingredients
Essential ingredients for brewing Mash ingredients, mash bill, mashbill, or grain bill are the materials that brewers use to produce the wort that they then ferment into alcohol. Mashing is the act of creating and extracting fermentable and non-fermentable sugars and flavor components from grain by steeping it in hot water, and then letting it rest at specific temperature ranges to activate naturally occurring enzymes in the grain that convert starches to sugars. The sugars separate from the mash ingredients, and then yeast in the brewing process converts them to alcohol and other fermentation products. A typical primary mash ingredient is grain that has been malted. Modern-day malt recipes generally consist of a large percentage of a light malt and, optionally, smaller percentages of more flavorful or highly colored types of malt. The former is called "base malt"; the latter is known as "specialty malts". The grain bill of a beer or whisky may vary widely in the number and proportion of ingredients. For example, in beer-making, a simple pale ale might contain a single malted grain, while a complex porter may contain a dozen or more ingredients. In whisky production, Bourbon uses a mash made primarily from maize (often mixed with rye or wheat and a small amount of malted barley), and single malt Scotch exclusively uses malted barley. Variables. Each particular ingredient has its own flavor that contributes to the final character of the beverage. In addition, different ingredients carry other characteristics, not directly relating to the flavor, which may dictate some of the choices made in brewing: nitrogen content, diastatic power, color, modification, and conversion. Nitrogen content. 
The nitrogen content of a grain relates to the mass fraction of the grain that is made up of protein, and is usually expressed as a percentage; this fraction is further refined by distinguishing what fraction of the protein is water-soluble, also usually expressed as a percentage; 40% is typical for most beermaking grains. Generally, brewers favor lower-nitrogen grains, while distillers favor high-nitrogen grains. In most beermaking, an average nitrogen content in the grains of at most 10% is sought; higher protein content, especially the presence of high-mass proteins, causes "chill haze", a cloudy visual quality to the beer. However, this is mostly a cosmetic desire dating from the mass production of glassware for presenting serving beverages; traditional styles such as sahti, saison, and bière de garde, as well as several Belgian styles, make no special effort to create a clear product. The quantity of high-mass proteins can be reduced during the mash by making use of a protease rest. In Britain, preferred brewers' grains are often obtained from winter harvests and grown in low-nitrogen soil; in central Europe, no special changes are made for the grain-growing conditions and multi-step decoction mashing is favored instead. Distillers, by contrast, are not as constrained by the amount of protein in their mash as the non-volatile nature of proteins means that none is included in the final distilled product. Therefore, distillers seek out higher-nitrogen grains to ensure a more efficiently made product. Higher-protein grains generally have more diastatic power. Diastatic power. Diastatic power (DP), also called the "diastatic activity" or "enzymatic power", is a property of malts (grains that have begun to germinate) that refers to the malt's ability to break down starches into simpler fermentable sugars during the mashing process. 
Germination produces a number of enzymes, such as amylase, that can convert the starch naturally present in barley and other grains into sugar. The mashing process activates these enzymes by soaking the grain in water at a controlled temperature. In general, the hotter a grain is kilned, the less its diastatic activity. As a consequence, only lightly colored grains can be used as base malts, with Munich malt being the darkest base malt generally available. Diastatic activity can also be provided by diastatic malt extract or by inclusion of separately-prepared brewing enzymes. Diastatic power for a grain is measured in degrees Lintner (°Lintner or °L, although the latter can conflict with the symbol °L for Lovibond color); or in Europe by Windisch-Kolbach units (°WK). The two measures are related by formula_0 formula_1. A malt with enough power to self-convert has a diastatic power near 35 °Lintner (94 °WK). Until recently, the most active, so-called "hottest", malts available were American six-row pale barley malts, which have a diastatic power of up to 160 °Lintner (544 °WK). Wheat malts have begun to appear on the market with diastatic power of up to 200 °Lintner. However, because huskless wheat is somewhat difficult to work with, wheat malt is usually used in conjunction with barley, or as an addition to add high diastatic power to a mash. Color. In brewing, the color of a grain or product is evaluated by the Standard Reference Method (SRM), Lovibond (°L), American Society of Brewing Chemists (ASBC) or European Brewery Convention (EBC) standards. While SRM and ASBC originate in North America and EBC in Europe, all three systems can be found in use throughout the world; degrees Lovibond has fallen out of industry use but has remained in use in homebrewing circles as the easiest to implement without a spectrophotometer.
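The unit relations above can be sketched as small conversion helpers. This is an illustrative sketch, not from the source text; the function names are invented, and the SRM-to-EBC factor of roughly 1.97 is a commonly cited approximation that the article itself does not state.

```python
def wk_to_lintner(wk):
    """Convert Windisch-Kolbach units to degrees Lintner: °Lintner = (°WK + 16) / 3.5."""
    return (wk + 16) / 3.5

def lintner_to_wk(lintner):
    """Convert degrees Lintner to Windisch-Kolbach units: °WK = 3.5 x °Lintner - 16."""
    return 3.5 * lintner - 16

def srm_to_ebc(srm):
    """Approximate EBC color from SRM, using the common rule of thumb EBC ~= 1.97 x SRM."""
    return 1.97 * srm

# A six-row pale barley malt of 160 °Lintner corresponds to 544 °WK:
print(lintner_to_wk(160))  # 544.0
```

Note that the two diastatic-power formulas are exact inverses of each other, so round-tripping a value through both returns the original number.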
The darkness of grains ranges from as light as less than 2 SRM/4 EBC for Pilsener malt to as dark as 700 SRM/1600 EBC for black malt and roasted barley. Modification. The quality of starches in a grain is variable with the strain of grain used and its growing conditions. "Modification" refers specifically to the extent to which starch molecules in the grain consist of simple chains of starch molecules versus branched chains; a fully modified grain contains only simple-chain starch molecules. A grain that is not fully modified requires mashing in multiple steps rather than simply at one temperature, as the starches must be de-branched before amylase can work on them. One indicator of the degree of modification of a grain is that grain's nitrogen ratio; that is, the amount of soluble nitrogen (or protein) in a grain vs. the total amount of nitrogen (or protein). This number is also referred to as the "Kolbach Index", and a malt with a Kolbach index between 36% and 42% is considered a highly modified malt suitable for single-infusion mashing. Maltsters use the length of the acrospire vs. the length of the grain to determine when the appropriate degree of modification has been reached before drying or kilning. Conversion. Conversion is the extent to which starches in the grain have been enzymatically broken down into sugars. A caramel or crystal malt is fully converted before it goes into the mash; most malted grains have little conversion; unmalted grains, meanwhile, have little or no conversion. Unconverted starch becomes sugar during the last steps of mashing, through the action of alpha and beta amylases. Malts. The oldest and most predominant ingredient in brewing is barley, which has been used in beer-making for thousands of years. Modern brewing predominantly uses malted barley for its enzymatic power, but ancient Babylonian recipes indicate that, without the ability to malt grain in a controlled fashion, baked bread was simply soaked in water.
Malted barley dried at a sufficiently low temperature contains enzymes such as amylase, which convert starch into sugar. Therefore, sugars can be extracted from the barley's own starches simply by soaking the grain in water at a controlled temperature; this is mashing. Pilsner malt. Pilsner malt, the basis of pale lager, is quite pale and strongly flavored. Invented in the 1840s, Pilsner malt is the lightest-colored generally available malt, and also carries a strong, sweet malt flavor. Usually a pale lager's grain bill consists entirely of this malt, which has enough enzymatic power to be used as a base malt. The commercial desirability of light-colored beers has also led to some British brewers adopting Pilsner malt (sometimes described simply as "lager malt" in Britain) in creating golden ales. In Germany, Pilsner malt is also used in some interpretations of the Kölsch style. ASBC 1-2/EBC 3–4, DP 60 °Lintner. Pale malt. Pale malt is the basis of pale ale and bitter, and the precursor in production of most other British beer malts. Dried at temperatures sufficiently low to preserve all the brewing enzymes in the grain, it is light in color and, today, the cheapest barley malt available due to mass production. It can be used as a "base malt"—that is, as the malt constituting the majority of the grist—in many styles of beer. Typically, English pale malts are kilned at 95–105 °C. Color ASBC 2-3/EBC 5–7. Diastatic power (DP) 45 °Lintner. Mild malt. Mild malt is often used as the base malt for mild ale, and is similar in color to pale malt. Mild malt is kilned at slightly higher temperatures than pale malt to provide a less neutral, rounder flavor generally described as "nutty". ASBC 3/EBC 6. Amber malt. 
Amber malt is a more toasted form of pale malt, kilned at temperatures of 150–160 °C, and is used in brown porter; older formulations of brown porter use amber malt as a base malt (though this was diastatic and produced in different conditions from a modern amber malt). Amber malt has a bitter flavor that mellows on aging, and can be quite intensely flavored. In addition to its use in porter, it also appears in a diverse range of British beer recipes. ASBC 50-70/EBC 100–140; amber malt has no diastatic power. Stout malt. Stout malt is sometimes seen as a base malt for stout beer; light in color, it is prepared so as to maximize diastatic power in order to better convert the large quantities of dark malts and unmalted grain used in stouts. In practice, however, most stout recipes make use of pale malt for its much greater availability. ASBC 2-3/EBC 4–6, DP 60–70 °Lintner. Brown malt. Brown malt is a darker form of pale malt, and is used typically in brown ale as well as in porter and stout. Like amber malt, it can be prepared from pale malt at home by baking a thin layer of pale malt in an oven until the desired color is achieved. 50–70 °L, no enzymes. Chocolate malt. Chocolate malt is similar to pale and amber malts but kilned at even higher temperatures. Producing complex chocolate and cocoa flavours, it is used in porters and sweet stouts as well as dark mild ales. It contains no enzymes. ASBC 450-500/EBC 1100–1300. Black malt. Black malt, also called patent malt or black patent malt, is barley malt that has been kilned to the point of carbonizing, around 200 °C. The term "patent malt" comes from its invention in England in 1817, late enough that the inventor of the process for its manufacture, Daniel Wheeler, was awarded a patent. Black malt provides the colour and some of the flavour in black porter, contributing an acrid, ashy undertone to the taste. 
In small quantities, black malt can also be used to darken beer to a desired color, sometimes as a substitute for caramel colour. Due to its high kilning temperature, it contains no enzymes. ASBC 500-600/EBC &gt;1300. Crystal malt. Crystal malts, or caramel malts, are prepared separately from pale malts. They are high-nitrogen malts that are wetted and roasted in a rotating drum before kilning. They produce strongly sweet toffee-like flavors and are sufficiently converted that they can be steeped without mashing to extract their flavor. Crystal malts are available in a range of colors, with darker-colored crystal malts kilned at higher temperatures producing stronger, more caramel-like overtones. Some of the sugars in crystal malts caramelize during kilning and become unfermentable. Hence, adding crystal malt increases the final sweetness of a beer. They contain no enzymes. ASBC 50-165/EBC 90–320; the typical British crystal malt used in pale ale and bitter is around ASBC 70–80. Distiller's malt. Standard distiller's malt or pot still malt is quite light and low in nitrogen compared to beer malts; these malts usually require a nitrogen content below 1.45%. These malts are used in the production of whiskey/whisky and generally originate from northern Scotland. Peated malt. Peated malt is distiller's malt that has been smoked over burning peat, which imparts the aroma and flavor characteristics of Islay whisky and some Irish whiskey. Recently, some brewers have also included peated malt in interpretations of Scotch ales, although this is generally ahistorical. When peat is used in large amounts for beer making, the resulting beer tends to have a very strong earthy and smoky flavor that most mainstream beer drinkers would find unusual. Vienna malt. Vienna malt or Helles malt is the characteristic grain of Vienna lager and Märzen; although it generally takes up only ten to fifteen percent of the grain bill in a beer, it can be used as a base malt.
It has sufficient enzymatic power to self-convert, and it is somewhat darker and kilned at a higher temperature than Pilsner malt. ASBC 3-4/EBC 7–10, DP 50 °Lintner. Munich malt. Munich malt is used as the base malt of the bock beer style, especially doppelbock, and appears in dunkel lager and Märzens in smaller quantities. While a darker grain than pale malt, it has sufficient diastatic power to self-convert, despite being kilned at temperatures around 115 °C. It imparts "malty", although not necessarily sweet, characteristics, depending on mashing temperatures. ASBC 4-6/EBC 10–15, DP 40 °Lintner. Rauchmalz. Rauchmalz is a German malt that is prepared by being dried over an open flame rather than via kiln. The grain has a smoky aroma and is an essential ingredient in Bamberg Rauchbier. Acid malt. Acid malt, also known as acidulated malt, whose grains contain lactic acid, can be used as a continental analog to Burtonization. Acid malt lowers the mash pH and provides a rounder, fuller character to the beer, enhancing the flavor of Pilseners and other light lagers. Lowering the pH also helps prevent beer spoilage through oxidation. Other malts. Honey malt is an intensely flavored, lightly colored malt. 18–20 °L. Melanoidin malt, a malt like the Belgian Aromatic malt, adds roundness and malt flavor to a beer with a comparably small addition in the grain bill. It also stabilizes the flavor. Unmalted barley. Unmalted barley kernels are used in mashes for some Irish whiskey. Roast barley consists of unmalted barley kernels toasted in an oven until almost black. Roast barley is, after base malt, usually the most-used grain in stout beers, contributing the majority of the flavor and the characteristic dark-brown color; undertones of chocolate and coffee are common. ASBC 500-600/EBC &gt;1300, no diastatic activity. Black barley is like roast barley except even darker, and may be used in stouts. It has a strong, astringent flavor and contains no enzymes.
Flaked barley is unmalted, dried barley rolled into flat flakes. It imparts a rich, grainy flavor to beer and is used in many stouts, especially Guinness stout; it also improves head formation and retention. Torrefied barley is barley kernels that have been heated until they pop like popcorn. Other grains. Wheat. Wheat malt. Beer brewed in the German Hefeweizen style relies heavily on malted wheat as a grain. Under the Reinheitsgebot, wheat was treated separately from barley, as it was the more expensive grain. Torrefied wheat. Torrefied wheat is used in British brewing to increase the size and retention of a head in beer. Generally it is used as an enhancer rather than for its flavor. Raw wheat. Belgian witbier and Lambic make heavy use of raw wheat in their grist. It provides the distinctive taste and clouded appearance in a witbier and the more complex carbohydrates needed for the wild yeast and bacteria that make a lambic. Wheat flour. Until the general availability of torrefied wheat, wheat flour was often used for similar purposes in brewing. Brewer's flour is only rarely available today, and is more coarsely ground than baker's flour. Oats. Oats, in rolled or steel-cut form, are used as mash ingredients in Oatmeal Stout. Rye. The use of rye in a beer typifies the rye beer style, especially the German "Roggenbier". Rye is also used in the Slavic kvass and Finnish sahti farmhouse styles, as readily available grains in eastern Europe. However, the use of rye in brewing is considered difficult, as rye lacks a hull (like wheat) and contains large quantities of beta-glucans compared to other grains; these long-chain sugars can leach out during a mash, creating a sticky gelatinous gum in the mash tun, and as a result brewing with rye requires a long, thorough beta-glucanase rest. Rye is said to impart a spicy, dry flavor to beer. Sorghum and millet. Sorghum and millet are often used in African brewing.
As gluten-free grains, they have gained popularity in the Northern Hemisphere as base materials for beers suitable for people with Celiac disease. Sorghum produces a dark, hazy beer. However, sorghum malt is difficult to prepare and rarely commercially available outside certain African countries. Millet is an ingredient in chhaang and pomba, and both grains together are used in oshikundu. Rice and maize. In the US, rice and maize (corn) are often used by commercial breweries as a means of adding fermentable sugars to a beer cheaply, due to the ready availability and low price of the grains. Maize is also the base grain in chicha and some cauim, as well as Bourbon whiskey and Tennessee whiskey, while rice is the base grain of happoshu and various mostly Asian fermented beverages often referred to as "rice wines", such as sake and makgeolli; maize is also used as an ingredient in some Belgian beers such as Rodenbach to lighten the body. Maize was originally introduced into the brewing of American lagers because of the high protein content of the six-row barley; adding maize, which is high in sugar but low in protein, helped thin out the body of the resulting beer. Increased amounts of maize use over time led to the development of the American pale lager style. Maize is generally not malted (although it is in some whiskey recipes) but instead introduced into the mash as flaked, dried kernels. Prior to a brew, rice and maize are cooked to allow the starch to gelatinize and thereby render it convertible. Non-cereal grains. Buckwheat and quinoa, while not cereal grasses, are whole grains that contain high levels of available starch and protein but no gluten. Therefore, some breweries use these plants in the production of beer suitable for people with Celiac disease, either alone or in combination with sorghum. Syrups and extracts.
Another way of adding sugar or flavoring to a malt beverage is the addition of natural or artificial sugar products such as honey, white sugar, dextrose and/or malt extract. While these ingredients can be added during the mash, the enzymes in the mash do not act on them. Such ingredients can be added during the boil of the wort rather than the mash, and as such, are also known as "copper sugars". One syrup commonly used in mash, however, is dry or dried malt extract or DME. DME is prepared by mashing malt in the normal fashion, then concentrating and spray drying the resulting wort. DME is used extensively in homebrewing as a substitute for base malt. It typically has no diastatic power because the enzymes are denatured in the production process. Fruit beers, such as kriek lambic or framboise, are made using fruit. Regional differences. Britain. British brewing makes use of a wide variety of malts, with considerable stylistic freedom for the brewer to blend them. Many British malts were developed only as recently as the Industrial Revolution, as improvements in temperature-controlled kilning allowed finer control over the drying and toasting of the malted grains. The typical British brewer's malt is a well-modified, low-nitrogen barley grown in the east of England or southeast of Scotland. In England, the best-known brewer's malt is made from the Maris Otter strain of barley; other common strains are Halcyon, Pipkin, Chariot, and Fanfare. Most malts in current use in Britain are derived from pale malt and were invented no earlier than the reign of Queen Anne. Brewing malt production in Britain is thoroughly industrialized, with barley grown on dedicated land and malts prepared in bulk in large, purpose-built maltings and distributed to brewers around the country to order. Continental Europe. Before controlled-temperature kilning became available, malted grains were dried over wood fires; Rauchmalz is malt dried using this traditional process.
In Germany, beech is often used as the wood for the fire, imparting a strongly smoky flavor to the malt. This malt is then used as the primary component of rauchbier; alder-smoked malt is used in Alaskan smoked porters. Rauchmalz comes in several varieties, generally named for and corresponding to standard kilned varieties (e.g. Rauchpilsener to Pilsener); color and diastatic power are comparable to those for an equivalent kilned grain. Similarly to crystal malts in Britain, central Europe makes use of caramel malts, which are moistened and kilned at temperatures around 55–65 °C in a rotating drum before being heated to higher temperatures for browning. The lower-temperature moistened kilning causes conversion and mashing to take place in the oven, resulting in a grain's starches becoming mostly or entirely converted to sugar before darkening. Caramel malts are produced in color grades analogous to other lager malts: carapils for pilsener malt, caravienne or carahell for Vienna malt, and caramunch for Munich malt. Color and final kilning temperature are comparable to non-caramel analog malts; there is no diastatic activity. Carapils malt is sometimes also called dextrin malt. 10–120 °L. United States. American brewing combines British and Central European heritages, and as such uses all the above forms of beer malt; Belgian-style brewing is less common but its popularity is growing. In addition, America also makes use of some specialized malts: 6-row pale malt is a pale malt made from a different species of barley. Quite high in nitrogen, 6-row malt is used as a "hot" base malt for rapid, thorough conversion in a mash, as well as for extra body and fullness; the flavor is more neutral than 2-row malt. 1.8 °L, 160 °Lintner. Victory malt is a specialized lightly roasted 2-row malt that provides biscuity, caramel flavors to a beer. Similar in color to amber and brown malt, it is often an addition to American brown ale. 25 °L, no diastatic power. 
Other notable American barley malts include Special Roast and coffee malt. Special Roast is akin to a darker variety of victory malt. Belgium. Belgian brewing makes use of the same grains as central European brewing. In general, though, Belgian malts are slightly darker and sweeter than their central European counterparts. In addition, Belgian brewing uses some local malts: Pale malt in Belgium is generally darker than British pale malt. Kilning takes place at temperatures 5–10 °C lower than for British pale malt, but for longer periods; diastatic power is comparable to that of British pale malt. ASBC 4/EBC 7. Special B is a dark, intensely sweet crystal malt providing a strong malt flavor. Biscuit malt is a lightly flavored roasted malt used to darken some Belgian beers. 45–50 EBC/25 °L. Aromatic malt, by contrast, provides an intensely malty flavor. Kilned at 115 °C, it retains enough diastatic power to self-convert. 50–55 EBC/20 °L. References. Notes. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "{}^\\circ\\mbox{Lintner} = \\frac{{}^\\circ\\mbox{WK} + 16}{3.5}" }, { "math_id": 1, "text": "{}^\\circ\\mbox{WK} = \\left ( 3.5 \\times {}^\\circ\\mbox{Lintner} \\right ) - 16" } ]
https://en.wikipedia.org/wiki?curid=5963160
59632528
Second neighborhood problem
Unsolved problem about oriented graphs In mathematics, the second neighborhood problem is an unsolved problem about oriented graphs posed by Paul Seymour. Intuitively, it suggests that in a social network described by such a graph, someone will have at least as many friends-of-friends as friends. The problem is also known as the second neighborhood conjecture or Seymour’s distance two conjecture. Statement. An oriented graph is a finite directed graph obtained from a simple undirected graph by assigning an orientation to each edge. Equivalently, it is a directed graph that has no self-loops, no parallel edges, and no two-edge cycles. The first neighborhood of a vertex formula_0 (also called its open neighborhood) consists of all vertices at distance one from formula_0, and the second neighborhood of formula_0 consists of all vertices at distance two from formula_0. These two neighborhoods form disjoint sets, neither of which contains formula_0 itself. In 1990, Paul Seymour conjectured that, in every oriented graph, there always exists at least one vertex formula_0 whose second neighborhood is at least as large as its first neighborhood. Equivalently, in the square of the graph, the degree of formula_0 is at least doubled. The problem was first published by Nathaniel Dean and Brenda J. Latka in 1995, in a paper that studied the problem on a restricted class of oriented graphs, the tournaments (orientations of complete graphs). Dean had previously conjectured that every tournament obeys the second neighborhood conjecture, and this special case became known as Dean's conjecture. &lt;templatestyles src="Unsolved/styles.css" /&gt; Unsolved problem in mathematics: Does every oriented graph contain a Seymour vertex? A vertex in a directed graph whose second neighborhood is at least as large as its first neighborhood is called a Seymour vertex. 
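The definition of a Seymour vertex lends itself to a direct computational check. The following sketch is illustrative rather than from the source; the function name and the out-neighbor-map encoding of the graph are invented for the example.

```python
def seymour_vertices(adj):
    """Return the Seymour vertices of an oriented graph.

    adj maps each vertex to the set of its out-neighbors.
    A Seymour vertex is one whose second neighborhood (vertices at
    directed distance exactly two) is at least as large as its first.
    """
    found = []
    for v, first in adj.items():
        second = set()
        for w in first:
            second |= adj.get(w, set())
        second -= first      # keep only vertices at distance exactly two
        second.discard(v)    # the neighborhoods never contain v itself
        if len(second) >= len(first):
            found.append(v)
    return found

# A directed 3-cycle: every vertex has |first| = |second| = 1,
# so every vertex is a Seymour vertex.
print(seymour_vertices({"a": {"b"}, "b": {"c"}, "c": {"a"}}))  # ['a', 'b', 'c']
```

Note how a sink (out-degree zero) is always reported, since both of its neighborhoods are empty, matching the observation in the partial results.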
In the second neighborhood conjecture, the condition that the graph have no two-edge cycles is necessary, for in graphs that have such cycles (for instance the complete directed graph) all second neighborhoods may be empty or small. Partial results. Dean's conjecture, the special case of the second neighborhood problem for tournaments, has been proved. For some graphs, a vertex of minimum out-degree will be a Seymour vertex. For instance, if a directed graph has a sink, a vertex of out-degree zero, then the sink is automatically a Seymour vertex, because its first and second neighborhoods both have size zero. In a graph without sinks, a vertex of out-degree one is always a Seymour vertex. In the orientations of triangle-free graphs, any vertex formula_0 of minimum out-degree is again a Seymour vertex, because for any edge from formula_0 to another vertex formula_1, the out-neighbors of formula_1 all belong to the second neighborhood of formula_0. For arbitrary graphs with higher vertex degrees, the vertices of minimum degree might not be Seymour vertices, but the existence of a low-degree vertex can still lead to the existence of a nearby Seymour vertex. Using this sort of reasoning, the second neighborhood conjecture has been proven to be true for any oriented graph that contains at least one vertex of out-degree ≤ 6. Random tournaments and some random directed graphs have many Seymour vertices with high probability. Every oriented graph has a vertex whose second neighborhood is at least formula_2 times as big as the first neighborhood, where formula_3 is the real root of the polynomial formula_4. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "v" }, { "math_id": 1, "text": "w" }, { "math_id": 2, "text": "\\gamma" }, { "math_id": 3, "text": "\\gamma=\\frac{1}{6}\\left(-1+\\sqrt[3]{53-6\\sqrt{78}}+\\sqrt[3]{53+6\\sqrt{78}}\\right) \\approx 0.657" }, { "math_id": 4, "text": "2x^3+x^2-1" } ]
https://en.wikipedia.org/wiki?curid=59632528
596405
Collider
Type of particle accelerator that performs particle collisions A collider is a type of particle accelerator that brings two opposing particle beams together such that the particles collide. Colliders may either be ring accelerators or linear accelerators. Colliders are used as a research tool in particle physics by accelerating particles to very high kinetic energy and letting them impact other particles. Analysis of the byproducts of these collisions gives scientists good evidence of the structure of the subatomic world and the laws of nature governing it. These may become apparent only at high energies and for extremely short periods of time, and therefore may be hard or impossible to study in other ways. Explanation. In particle physics one gains knowledge about elementary particles by accelerating particles to very high kinetic energy and guiding them to collide with other particles. For sufficiently high energy, a reaction occurs that transforms the particles into other particles. Detecting these products gives insight into the physics involved. To do such experiments there are two possible setups: the fixed-target setup, in which a beam of particles strikes a stationary target, and the collider setup, in which two beams of particles collide head-on. The collider setup is harder to construct but has the great advantage that, according to special relativity, the energy of an inelastic collision between two particles approaching each other with a given velocity is not just 4 times as high as in the case of one particle resting (as it would be in non-relativistic physics); it can be orders of magnitude higher if the collision velocity is near the speed of light. In the case of a collider where the collision point is at rest in the laboratory frame (i.e. formula_0), the center of mass energy formula_1 (the energy available for producing new particles in the collision) is simply formula_2, where formula_3 and formula_4 are the total energies of a particle from each beam. For a fixed target experiment where particle 2 is at rest, formula_5. History.
The first serious proposal for a collider originated with a group at the Midwestern Universities Research Association (MURA). This group proposed building two tangent radial-sector FFAG accelerator rings. Tihiro Ohkawa, one of the authors of the first paper, went on to develop a radial-sector FFAG accelerator design that could accelerate two counterrotating particle beams within a single ring of magnets. The third FFAG prototype built by the MURA group was a 50 MeV electron machine built in 1961 to demonstrate the feasibility of this concept. Gerard K. O'Neill proposed using a single accelerator to inject particles into a pair of tangent storage rings. As in the original MURA proposal, collisions would occur in the tangent section. The benefit of storage rings is that the storage ring can accumulate a high beam flux from an injection accelerator that achieves a much lower flux. The first electron-positron colliders were built in the late 1950s and early 1960s in Italy, at the Istituto Nazionale di Fisica Nucleare in Frascati near Rome, by the Austrian-Italian physicist Bruno Touschek, and in the US by the Stanford-Princeton team that included William C. Barber, Bernard Gittelman, Gerry O'Neill, and Burton Richter. Around the same time, the "VEP-1" electron-electron collider was independently developed and built under the supervision of Gersh Budker at the Institute of Nuclear Physics in Novosibirsk, USSR. The first observations of particle reactions in the colliding beams were reported almost simultaneously by the three teams in mid-1964 to early 1965. In 1966, work began on the Intersecting Storage Rings at CERN, and in 1971, this collider was operational. The ISR was a pair of storage rings that accumulated and collided protons injected by the CERN Proton Synchrotron. This was the first hadron collider, as all of the earlier efforts had worked with electrons or with electrons and positrons.
In 1968, construction began on the highest-energy proton accelerator complex at Fermilab. It was eventually upgraded to become the Tevatron collider, and in October 1985 the first proton-antiproton collisions were recorded at a center of mass energy of 1.6 TeV, making it the highest-energy collider in the world at the time. The energy later reached 1.96 TeV, and by the end of operation in 2011 the collider's luminosity exceeded 430 times its original design goal. Since 2009, the highest-energy collider in the world has been the Large Hadron Collider (LHC) at CERN. It currently operates at 13 TeV center of mass energy in proton-proton collisions. More than a dozen future particle collider projects of various types - circular and linear, colliding hadrons (proton-proton or ion-ion), leptons (electron-positron or muon-muon), or electrons and ions/protons - are currently under consideration for detailed exploration of Higgs/electroweak physics and discoveries at the post-LHC energy frontier. Operating colliders. Sources: Information was taken from the website Particle Data Group. See also. &lt;templatestyles src="Div col/styles.css"/&gt; References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": " \\vec p_1 = -\\vec p_2 " }, { "math_id": 1, "text": "E_\\mathrm{cm}" }, { "math_id": 2, "text": "E_\\mathrm{cm} = E_1 + E_2" }, { "math_id": 3, "text": "E_1" }, { "math_id": 4, "text": "E_2" }, { "math_id": 5, "text": "E_\\mathrm{cm}^2 = m_1^2 + m_2^2 + 2 m_2 E_1 " } ]
https://en.wikipedia.org/wiki?curid=596405