61446733
Relativistic quantum cryptography
Relativistic quantum cryptography is a sub-field of quantum cryptography, in which in addition to exploiting the principles of quantum physics, the no-superluminal signalling principle of relativity theory stating that information cannot travel faster than light is exploited too. Technically speaking, relativistic quantum cryptography is a sub-field of relativistic cryptography, in which cryptographic protocols exploit the no-superluminal signalling principle, independently of whether quantum properties are used or not. However, in practice, the term relativistic quantum cryptography is used for relativistic cryptography too. History. In 1997 and 1998, some important tasks in mistrustful cryptography were shown to be impossible to achieve with unconditional security. Mayers and Lo and Chau showed that unconditionally secure quantum bit commitment was impossible. Lo showed that oblivious transfer and a broad class of secure computations were also impossible to achieve with unconditional security in quantum cryptography. Moreover, Lo and Chau showed that unconditionally secure ideal quantum coin tossing was impossible too. In this context, Kent provided in 1999 the first relativistic cryptographic protocols, for bit commitment and ideal coin tossing, which overcome the assumptions made by Mayers, Lo and Chau, and achieve unconditional security. Since then, other unconditionally secure relativistic protocols for bit commitment have been found by Kent and others, and other cryptographic tasks have been investigated in the setting of relativistic quantum cryptography. Basics. No-signalling and no-superluminal signalling. The no-signalling principle of quantum theory states that information cannot be communicated between two distinct locations "L0" and "L1" without the transmission of any physical systems, despite any quantum entanglement shared between "L0" and "L1". This implies, in particular, that without the transmission of any physical systems between "L0" and "L1", quantum correlation between "L0" and "L1" cannot be used to transmit information between "L0" and "L1", even if they are non-locally causal and violate Bell inequalities. According to relativity theory, physical systems cannot travel faster than the speed of light. Thus, it follows from the no-signalling principle that information cannot travel faster than the speed of light. This is called the no-superluminal signalling principle. The principle of no-superluminal signalling is the key physical principle exploited in relativistic cryptography. It guarantees that the outcome "x" of a random variable "X" obtained at some spacetime point "P" cannot influence the probability that a random variable "Y" takes some value "y" at a spacelike separated spacetime point "Q". Thus, for example, if two parties Alice and Bob have each two agents, with the first agent of Bob sending a secret message "x" to a first agent of Alice at the spacetime point "P", and with the second agent of Alice sending a secret message "y" to the second agent of Bob at the spacetime point "Q", with "P" and "Q" spacelike separated, then Bob can be guaranteed that the message "y" received from Alice was chosen independently of the message "x" that he gave Alice, and vice versa. This is a useful mathematical property that is exploited to prove the security of cryptographic protocols in relativistic cryptography. The setting. 
It is a fundamental requirement in relativistic cryptography that the parties implementing the cryptographic task have a good description of spacetime, at least within the region of spacetime where the task is implemented. For example, in protocols implemented near the Earth's surface, it can be assumed that spacetime is close to Minkowski. Importantly, this means that, near the Earth's surface, physical systems and information cannot travel faster than the speed of light through vacuum, which is approximately 300,000 km/s. In principle, relativistic cryptography can be applied with more general spacetimes, as long as the parties can guarantee that there are no mechanisms allowing instant communication, like wormholes. Another requirement is that the parties have access to a common reference frame, so that they can guarantee that some communication events are spacelike separated. In relativistic cryptography, it is assumed that each party participating in the cryptographic task has various trusted agents that collaborate to implement the task. The agents implement the protocol by performing different actions at various points in spacetime. The agents of the same party may communicate via authenticated and secure channels, which can be implemented with previously shared secure keys, for example using one-time pads. Many of the tasks investigated in relativistic cryptography are tasks of mistrustful cryptography, in which two or more mistrustful parties must collaborate to implement a cryptographic task while at the same time being guaranteed that other parties do not cheat. Examples of tasks in mistrustful cryptography are bit commitment, coin tossing, oblivious transfer and secure computations. Key distribution does not belong to mistrustful cryptography, because in this case the parties distributing the key trust each other. In relativistic cryptography, each participating party has various trusted agents, who collaborate with each other by performing different actions at various spacetime points. For example, Alice and Bob can be two companies with offices and laboratories at various locations on Earth. Alice's offices and laboratories work in collaboration and trust each other. Similarly, Bob's offices and laboratories work in collaboration and trust each other. But Alice and Bob do not trust each other. Tasks investigated in relativistic cryptography. Bit commitment. Bit commitment is an important cryptographic task that has been widely investigated in relativistic cryptography. In bit commitment, Alice commits to a bit "b" at some time "t", and at some later time "t' > t" Alice unveils her committed bit "b" to Bob. A bit commitment is said to be "hiding" if Bob cannot know "b" before Alice unveils. It is said to be "binding" if, after the commitment time "t", Alice cannot choose the value of "b" and successfully unveil "b" to Bob. A bit commitment protocol is "secure" if it is hiding and binding. The Mayers-Lo-Chau no-go theorem states that unconditionally secure bit commitment is impossible based only on the laws of quantum physics. It was shown by Kent that the Mayers-Lo-Chau theorem is not general enough, because its assumptions exclude protocols that exploit the principle of no-superluminal signalling. Kent provided the first unconditionally secure bit commitment protocol in the setting of relativistic cryptography. Various protocols for bit commitment have been devised by Kent and others. Experimental demonstrations of relativistic bit commitment have been implemented. Coin tossing.
In strong coin tossing, Alice and Bob are at different locations and they wish to toss a coin in such a way that Alice is guaranteed that Bob cannot bias the outcome, and Bob is guaranteed that Alice cannot bias the outcome either. It was shown by Lo and Chau that ideal strong coin tossing is impossible to achieve with unconditional security based only on the laws of quantum physics. However, Kent overcame this no-go theorem by providing a relativistic protocol for strong coin tossing that is unconditionally secure. This protocol is conceptually very simple and is illustrated here as an example of a protocol in relativistic cryptography. In Kent's coin tossing protocol, Alice has two agents "A0" and "A1", and Bob has two agents "B0" and "B1". "Ai" and "Bi" are at location "Li", for formula_0. Let "L0" and "L1" be separated by a distance "D". Let us assume that spacetime is Minkowski. Thus, the minimum time that light takes to travel between "L0" and "L1" is "t = D/c", where "c" is the speed of light through vacuum. "A0" generates a random bit formula_1 in a secure laboratory and gives it to "B0" at a time "t0". "B1" generates a random bit "b" in a secure laboratory and gives it to "A1" at a time "t1". "B0" and "B1" communicate formula_1 and "b" through a secure and authenticated channel. Similarly, "A0" and "A1" communicate formula_1 and "b" through a secure and authenticated channel. Alice and Bob agree that the output of the toss "d" is the XOR of the bits formula_1 and "b", formula_2. Alice and Bob agree in advance on the values of "t0" and "t1" in a common reference frame, in such a way that "|t0 - t1| < t". Thus, from the principle of no-superluminal signalling, upon receiving formula_1 from "A0", "B0" cannot send any signal that arrives at "B1" before "B1" gives "b" to "A1". Therefore, Alice is guaranteed that the bit "b" is chosen by Bob independently of the bit formula_1 chosen by her. Since Alice chooses formula_1 randomly, and since "b" is independent of formula_1, Alice is guaranteed that the bit formula_3 is random. With similar arguments, Bob is also guaranteed that the bit "d" is random. Variations of coin tossing have been investigated in relativistic cryptography by Colbeck and Kent. Oblivious transfer and secure computations. Lo showed that oblivious transfer and other secure computations cannot be achieved with unconditional security based only on the laws of quantum physics. This impossibility result by Lo extends to the more general setting of relativistic quantum cryptography. Colbeck showed that various secure computations are impossible to achieve with unconditional security in relativistic quantum cryptography. Position-based quantum cryptography. Position-based quantum cryptography consists of cryptographic tasks whose security exploits the location of a party, the principle of no-superluminal signalling and the laws of quantum physics. For example, in the problem of quantum location authentication, a prover wants to demonstrate his location "L" to a set of verifiers using quantum systems. A protocol for quantum location authentication works as follows. A set of verifiers at various locations that surround the location "L" send classical messages and quantum states towards the location "L". If the prover is at the location "L" then he can receive the signals at specific times and reply to the verifiers with requested classical messages and/or quantum states, which must be received by the verifiers at specific times.
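As an illustration of Kent's coin-tossing protocol described above, the following is a minimal simulation sketch. The distances, hand-over times and function names are illustrative assumptions rather than values from the cited protocols; the sketch only checks the spacelike-separation condition |t0 - t1| < D/c and forms the outcome d as the XOR of the two bits.

```python
# A minimal sketch (illustrative parameter values, not from the protocol's references)
# of the timing check and outcome computation in Kent's relativistic coin-tossing protocol.
import random

C = 299_792_458.0  # speed of light in vacuum, m/s

def spacelike_separated(t0: float, t1: float, distance_m: float) -> bool:
    """True if the two hand-over events (at separation `distance_m`) are spacelike
    separated, i.e. light cannot travel between them within |t0 - t1| seconds."""
    return abs(t0 - t1) < distance_m / C

def toss(distance_m: float, t0: float, t1: float) -> int:
    """Simulate one run: A0 hands bit `a` to B0 at time t0, B1 hands bit `b` to A1
    at time t1; the agreed outcome is d = a XOR b."""
    if not spacelike_separated(t0, t1, distance_m):
        raise ValueError("events are not spacelike separated; the no-signalling argument fails")
    a = random.randint(0, 1)   # chosen by Alice's agent A0
    b = random.randint(0, 1)   # chosen by Bob's agent B1, necessarily independent of a
    return a ^ b

# Example: locations 6,000 km apart, hand-overs 5 ms apart (< ~20 ms light travel time).
print(toss(distance_m=6_000_000, t0=0.000, t1=0.005))
```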
Quantum location authentication was first investigated by Kent in 2002, which he called ‘quantum tagging’, resulting in a filed US patent by Kent et al. in 2007, and a publication in the academic literature in 2010, after a paper on position-based quantum cryptography was published by Buhrman et al. There is a no-go theorem for quantum location authentication proved by Buhrman et al. stating that it is impossible for a set of verifiers to authenticate the location of a prover with unconditional security. This is because for any quantum location authentication protocol, a set of dishonest provers sharing a sufficient amount of entanglement and positioned between the verifiers and the location "L" can intercept all communications from the verifiers, including all transmitted quantum states, and then apply a non-local quantum operation which allows them to reply correctly and at the correct times to the verifiers. Since the dishonest provers do not need to be at the location "L" to do this, the quantum location authentication protocol is insecure. This no-go theorem assumes that the location "L" of the honest prover is his only credential. Kent showed that if the prover shares secret keys with the verifiers then location authentication can be implemented securely. References. <templatestyles src="Reflist/styles.css" />
[ { "math_id": 0, "text": "i\\in\\{0,1\\}" }, { "math_id": 1, "text": "a" }, { "math_id": 2, "text": "d =a \\oplus b " }, { "math_id": 3, "text": "d = a\\oplus b" } ]
https://en.wikipedia.org/wiki?curid=61446733
6144674
Amie Street
Online music store from 2006-2010 Amie Street was an indie online music store and social network service created in 2006 by Brown University seniors Elliott Breece, Elias Roman, and Joshua Boltuch, in Providence, Rhode Island. The site was notable for its demand-based pricing. The company was later moved to Long Island City in Queens, New York. In late 2010, the site was sold to Amazon who redirected customers to their own website. History. Founded in early 2006, Amie Street opened to the public with a pre-alpha version on July 4, 2006, and was quickly scooped by Michael Arrington of TechCrunch. It grew and creating partnerships with various record labels including CD Baby, The Orchard, Nettwerk Music Group, and Daptone Records. A beta version was launched on October 4, 2006. On December 11, 2007, Amie Street Japan launched in partnership with Turbolinux. On August 5, 2007, Amie Street announced a site redesign and, led by Amazon.com, closed their Series A round of venture capital funding. Notable angel investors include Robin Richards, former president of MP3.com and David Hirsch, director of Google's B2B vertical markets group. In an email to Amie Street members on September 8, 2010, the website announced it would be redirecting all customers to Amazon.com starting on September 22, 2010, and ceasing to operate as amiestreet.com. Amie Street members had until September 22 to spend any credit they currently have with Amie Street, as it did not transfer to Amazon. Amazon shut down Amie Street soon after. Website features. Artists could upload their music directly to the site in MP3 format at whatever quality bit rate they choose, but when a record label or music distributor requires Amie Street to encode the music, they strive to achieve an average bit rate of 256 kbit/s using a variable bitrate. (Other formats such as AAC, FLAC, and Ogg are "on the way.") As users buy songs, the artist is credited quarterly. Artists keep 70% of the proceeds after US$5 in sales for each song. Albums were priced at the current total cost for each individual song on the album, capped at US$8.98 in most cases. PayPal, Payoneer prepaid MasterCards, or checks were used to make payments to artists. Amie Street used an algorithm to determine song prices based on demand. The price for a track started at zero when a song was uploaded, then rose according to demand and purchases of the song. The maximum price was 98¢. A ringtone service was announced on September 17, 2007, with variable pricing. On May 15, 2007, the web series lonelygirl15 teamed up with Amie Street to sell music featured in episodes of the show. RECs. Non-artist users could earn credit as well. They did this by RECing a song. When a user finds a song they believe will be a hit, they can REC it. If the song price increased from the moment they REC it, they will receive compensation based on the price increase. For example, if one RECs a song currently at 5¢ and it rises to 95¢, the user will cash out half the spread: 45¢ formula_0, just for RECing the song. If a user RECs a song when it is free, they are compensated with the full spread. RECing differentiates more popular music from less, as songs that are believed to be good will be RECed more often. Users get approximately 1 REC for every US$1 of Amie Street credit they purchase. Social networking. Users could connect with other users through the "friend" feature. RECs were sent out to friends to make it easier to find new music in a music "news feed". 
There was also a "fan" feature so users can easily connect with bands on the site. All of this information was available as separate RSS web feeds. Third party integration. Amie Street launched a Facebook Application in October 2007 called Fantasy Record Label. This application allowed Facebook users to create a "record label" with a collection of songs that were linked with an Amie Street account, and post the label on their Facebook profile page. Songs were ranked and as their score changed, each user's label would gain or lose points. These points could be converted into Amie Street credit and could be used to purchase music. Labels were also ranked and users were able to compete against each other for bragging rights. Download To Make A Difference campaign. On July 16, 2008, Amie Street launched "Download To Make A Difference", a benefit media campaign, donating money to charities for certain downloads. On July 29, 2008, Amie Street expanded the program with the exclusive release of "You & Me" an album by NYC indie rock band The Walkmen, giving proceeds to Memorial Sloan-Kettering Cancer Center. During its first week of sales, the album charted at #29 on Billboard's Top Digital Albums. Press. Amie Street was mentioned in several notable media organizations. These include "Rolling Stone", "The Wall Street Journal", "BusinessWeek", NPR, "The Washington Post", "Los Angeles Times", "Entertainment Weekly", TechCrunch, Boing Boing, Ars Technica, and "Wired". Ashley Alexandra Dupré. In March 2008, the site received additional attention because of the availability of two singles by Ashley Alexandra Dupré, the call girl at the center of the Eliot Spitzer prostitution scandal. An unsigned singer, her single "Move Ya Body" set a record for how fast it commanded the top price on the site following Dupré's identity as the call girl "Kristen" being revealed by "The New York Times" on March 12, 2008. While some speculated that she may have earned as much as US$300,000 - US$1.4 million from download sales of her singles on Amie Street, others estimated her earnings to be as low as US$13,720. Official sales numbers have not been released. References. <templatestyles src="Reflist/styles.css" />
[ { "math_id": 0, "text": "=(\\tfrac{95-5}{2})" } ]
https://en.wikipedia.org/wiki?curid=6144674
6144801
Critical speed
In solid mechanics, in the field of rotordynamics, the critical speed is the theoretical angular velocity that excites the natural frequency of a rotating object, such as a shaft, propeller, leadscrew, or gear. As the speed of rotation approaches the object's natural frequency, the object begins to resonate, which dramatically increases system vibration. The resulting resonance occurs regardless of orientation. When the rotational speed is equal to the natural frequency, then that speed is referred to as a critical speed. Critical speed of shafts. All rotating shafts, even in the absence of external load, will deflect during rotation. The unbalanced mass of the rotating object causes deflection that will create resonant vibration at certain speeds, known as the critical speeds. The magnitude of deflection depends upon the stiffness of the shaft and its supports, the total mass of the shaft and its attached parts, the unbalance of the mass with respect to the axis of rotation, and the amount of damping in the system. In general, it is necessary to calculate the critical speed of a rotating shaft, such as a fan shaft, in order to avoid issues with noise and vibration. Critical speed equation. Like vibrating strings and other elastic structures, shafts and beams can vibrate in different mode shapes, with corresponding natural frequencies. The first vibrational mode corresponds to the lowest natural frequency. Higher modes of vibration correspond to higher natural frequencies. Often when considering rotating shafts, only the first natural frequency is needed. There are two main methods used to calculate critical speed: the Rayleigh–Ritz method and Dunkerley's method. Both calculate an approximation of the first natural frequency of vibration, which is assumed to be nearly equal to the critical speed of rotation. The Rayleigh–Ritz method is discussed here. For a shaft that is divided into "n" segments, the first natural frequency for a given beam, in "rad/s", can be approximated as: formula_0 where "g" is the acceleration due to gravity, the formula_1 are the weights of each segment, and the formula_2 are the static deflections (under gravitational loading only) of the center of each segment. Generally speaking, if "n" is 2 or higher, this method tends to slightly overestimate the first natural frequency, with the estimate becoming better the higher "n" is. If "n" is only 1, this method tends to underestimate the first natural frequency, but the equation simplifies to: formula_3 where formula_4 is the maximum static deflection of the shaft. These speeds are in "rad/s", but can be converted to "RPM" by multiplying by formula_5. If a beam has multiple types of loading, deflections can be found for each, and then summed. If the shaft diameter changes along its length, deflection calculations become much more difficult. The static deflection expresses the relationship between rigidity of the shaft and inertial forces; it includes all the loads applied to the shaft when placed horizontally. However, the relationship is valid no matter what the orientation of the shaft is. A system's critical speeds depend upon the magnitude, location, and relative phase of shaft unbalance, the shaft's geometry and mechanical properties, and the stiffness and mass properties of the support structure. Many practical applications suggest as good practice that the maximum operating speed should not exceed 75% of the critical speed; however, some systems operate above the first critical speed, or "supercritically". In such cases, it is important to accelerate the shaft through the first natural frequency quickly so that large deflections don't develop. References. <templatestyles src="Reflist/styles.css" />
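As a worked illustration of the Rayleigh–Ritz estimate above, the following sketch computes the first critical speed for a hypothetical shaft and applies the rad/s-to-RPM conversion; the segment weights and deflections are made-up values for illustration only.

```python
# A minimal sketch (hypothetical segment data) of the Rayleigh-Ritz estimate for the
# first natural frequency described above, with the rad/s -> RPM conversion.
import math

def first_natural_frequency(weights, deflections, g=9.81):
    """omega_1 ~ sqrt( g * sum(w_i * y_i) / sum(w_i * y_i^2) ), in rad/s.
    weights: segment weights (N); deflections: static deflections y_i (m)."""
    num = g * sum(w * y for w, y in zip(weights, deflections))
    den = sum(w * y * y for w, y in zip(weights, deflections))
    return math.sqrt(num / den)

def rad_per_s_to_rpm(omega):
    return omega * 60.0 / (2.0 * math.pi)

# Hypothetical three-segment shaft: weights in newtons, static deflections in metres.
w = [120.0, 180.0, 120.0]
y = [0.0004, 0.0007, 0.0004]
omega1 = first_natural_frequency(w, y)
print(f"first critical speed ~ {omega1:.1f} rad/s ~ {rad_per_s_to_rpm(omega1):.0f} RPM")

# Single-segment simplification: omega_1 ~ sqrt(g / y_max)
y_max = max(y)
print(f"n = 1 estimate ~ {math.sqrt(9.81 / y_max):.1f} rad/s")
```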
[ { "math_id": 0, "text": "\omega_{1} \approx \sqrt{\frac {g \sum_ {i = 1}^n {w_ {i} y_ {i}}} {\sum_ {i = 1}^n {w_ {i} y_ {i}^2}}}" }, { "math_id": 1, "text": "w_i" }, { "math_id": 2, "text": "y_i" }, { "math_id": 3, "text": "\omega_{1} \approx \sqrt{\frac {g} {y_{max}}}" }, { "math_id": 4, "text": "y_{max}" }, { "math_id": 5, "text": "\frac {60} {2\pi}" } ]
https://en.wikipedia.org/wiki?curid=6144801
6144888
Tunnell's theorem
On the congruent number problem: which integers are the area of a rational right triangle In number theory, Tunnell's theorem gives a partial resolution to the congruent number problem, and under the Birch and Swinnerton-Dyer conjecture, a full resolution. Congruent number problem. The congruent number problem asks which positive integers can be the area of a right triangle with all three sides rational. Tunnell's theorem relates this to the number of integral solutions of a few fairly simple Diophantine equations. Theorem. For a given square-free integer "n", define formula_0 Tunnell's theorem states that if "n" is a congruent number, then 2"A""n" = "B""n" if "n" is odd, and 2"C""n" = "D""n" if "n" is even. Conversely, if the Birch and Swinnerton-Dyer conjecture holds true for elliptic curves of the form formula_1, these equalities are sufficient to conclude that "n" is a congruent number. History. The theorem is named for Jerrold B. Tunnell, a number theorist at Rutgers University, who proved it in 1983. Importance. The importance of Tunnell's theorem is that the criterion it gives is testable by a finite calculation. For instance, for a given formula_2, the numbers formula_3 can be calculated by exhaustively searching through formula_4 in the range formula_5.
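Because the criterion is a finite computation, it can be illustrated directly. The following sketch counts the representations A_n, B_n, C_n, D_n by exhaustive search and evaluates Tunnell's criterion for a few small square-free values of "n"; the function names are illustrative.

```python
# A minimal sketch of the finite check described above: count the representations
# A_n, B_n, C_n, D_n by exhaustive search and test Tunnell's criterion. For square-free n,
# the criterion is necessary for n to be congruent, and sufficient under the
# Birch and Swinnerton-Dyer conjecture.
from math import isqrt

def count(n, a, b, c):
    """Number of integer triples (x, y, z) with n = a*x^2 + b*y^2 + c*z^2."""
    total = 0
    bound = isqrt(n)  # |x|, |y|, |z| are at most sqrt(n / coefficient) <= sqrt(n)
    for x in range(-bound, bound + 1):
        for y in range(-bound, bound + 1):
            for z in range(-bound, bound + 1):
                if a * x * x + b * y * y + c * z * z == n:
                    total += 1
    return total

def tunnell_criterion(n):
    if n % 2 == 1:
        return 2 * count(n, 2, 1, 32) == count(n, 2, 1, 8)    # 2*A_n == B_n
    else:
        return 2 * count(n, 8, 2, 64) == count(n, 8, 2, 16)   # 2*C_n == D_n

# 1, 2, 3 are not congruent numbers; 5, 6, 7 are.
for n in (1, 2, 3, 5, 6, 7):
    print(n, tunnell_criterion(n))
```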
[ { "math_id": 0, "text": "\\begin{align}\nA_n & = \\#\\{ (x,y,z) \\in \\mathbb{Z}^3 \\mid n = 2x^2 + y^2 + 32z^2 \\}, \\\\\nB_n & = \\#\\{ (x,y,z) \\in \\mathbb{Z}^3 \\mid n = 2x^2 + y^2 + 8z^2 \\}, \\\\\nC_n & = \\#\\{ (x,y,z) \\in \\mathbb{Z}^3 \\mid n = 8x^2 + 2y^2 + 64z^2 \\}, \\\\\nD_n & = \\#\\{ (x,y,z) \\in \\mathbb{Z}^3 \\mid n = 8x^2 + 2y^2 + 16z^2 \\}.\n\\end{align}" }, { "math_id": 1, "text": "y^2 = x^3 - n^2x" }, { "math_id": 2, "text": "n" }, { "math_id": 3, "text": "A_n,B_n,C_n,D_n" }, { "math_id": 4, "text": "x,y,z" }, { "math_id": 5, "text": "-\\sqrt{n},\\ldots,\\sqrt{n}" } ]
https://en.wikipedia.org/wiki?curid=6144888
6146906
Bias tee
Electrical circuit A bias tee is a three-port network used for setting the DC bias point of some electronic components without disturbing other components. The bias tee is a diplexer. The low-frequency port is used to set the bias; the high-frequency port passes the radio-frequency signals but blocks the biasing levels; the combined port connects to the device, which sees both the bias and RF. It is called a "tee" because the 3 ports are often arranged in the shape of a T. Design. Conceptually, the bias tee can be viewed as an ideal capacitor that allows AC through but blocks the DC bias and an ideal inductor that blocks AC but allows DC. Although some bias tees can be made with a simple inductor and capacitor, wideband bias tees are considerably more complicated because practical components have parasitic elements. Bias tees are designed for transmission-line environments. Typically, the characteristic impedance Zo will be 50 Ohms or 75 Ohms. The impedance of the capacitor (XC) is chosen to be much less than Zo, and the impedance of the inductor (XL) is chosen to be much greater than Zo: formula_0 where ω is the angular frequency (in radians per second) and f is the frequency (in Hertz). Bias tees are designed to operate over a range of signal frequencies. The reactances are chosen to have minimal impact at the lowest frequency. For wide-range bias tees, the inductive reactance must be large in value, even at the lowest frequency, hence the dimensions of the inductor must be large in size. A large inductor will have a stray capacitance (which creates its self-resonant frequency). At a high enough frequency, the stray capacitance presents a low-impedance shunt path for the RF signal, and the bias tee becomes ineffective. Practical wide-band bias tees must use elaborate circuit topologies to avoid the shunt path. Instead of one inductor, there will be a string of inductors in series, each with its own high resonant frequency, in addition to lower composite resonances shared between them. Additional resistors and capacitors will be inserted to prevent resonances. For example, a Picosecond Pulse Labs model 5580 bias tee works from 10 kHz to 15 GHz.(p3) Consequently, the simple design would need an inductance of at least 800 μH (XL about j 50 ohms at 10 kHz), and that inductor must still look like an inductor at 15 GHz. However, a typical commercial 820 μH inductor has a self-resonant frequency near 1.8 MHz – four orders of magnitude too low. Johnson gives an example of a wideband microstrip bias tee covering 50 kHz to 1 GHz using four inductors (330 nH, 910 nH, 18 μH, and 470 μH) in series. His design cribbed from a commercial bias tee. He modeled parasitic element values, simulated results, and optimized component selection. To show the advantage of additional components, Johnson provided a simulation of a bias tee that used just inductors and capacitors without Q suppression. Johnson provides both simulated and actual performance details. Girardi duplicated and improved on Johnson's design and points out some additional construction issues. Application. A bias tee is used to insert DC power into an AC signal to power remote antenna amplifiers or other devices. It is usually positioned at the receiving end of the coaxial cable to pass DC power from an external source to the coaxial cable running to powered device. A bias "T" consists of a feed inductor to deliver DC to a connector on the device side and a blocking capacitor to keep DC from passing through to the receiver. 
The RF signal is connected directly from one connector to the other with only the blocking capacitor in series. The internal blocking diode prevents damage to the bias "T" if reverse supply voltage is applied. Bias tees are used in a variety of applications, but are generally used to provide an RF signal and (DC) power to a remote device where running two separate cables would not be advantageous. Biasing is often used with photodiodes (vacuum and solid state), microchannel plate detectors, transistors, and triodes, so that high frequencies from the signal do not leak into a common power supply rail. Conversely, noise from the power supply does not appear on the signal line. Other examples include: Power over Ethernet, active antennas, low-noise amplifiers, and down converters. The telephone line for plain old telephone service and some early microphones use a bias tee circuit, often with a gyrator replacing the inductor; this enables a thin cable with only 2 conductors to send power from the system to the device, and send audio from the device back to the system. Modern microphones often use 3 conductors in a phantom power circuit very similar to a bias tee circuit. Construction. There are several bias tee designs. A particular construction. The construction of the horizontal bar of the T is based on rigid coaxial cable with air as the dielectric. The radius is chosen to be as large as possible without allowing higher modes. The design of a bias "T" is based upon power going out to the remote device, but not being seen by the base station or receiver. It does this by using a capacitor on the RF output terminal, effectively creating an open circuit for the DC current. The incoming RF signal, or the one from the antenna, is the output for the DC power. This front-end of a bias "T" typically consists of a bandpass filter, a low noise amplifier, and a mixer coupled to a local oscillator. Capacitor. At one point a small slice is cut out of the center conductor, forming a capacitor that blocks low frequencies. This kind of capacitor has the advantage that it is nearly invisible to higher frequencies. To pass frequencies down to 1 MHz the capacitance has to be increased. A dielectric like NP0 multiplies the capacitance by a factor of 65. The thickness of the capacitor has to be minimal without leading to electric breakdown in the dielectric; this means avoiding any peaks in the electric field, which requires smooth electrodes with rounded edges and a dielectric protruding between the electrodes (doorknob design). A stack of capacitors can be used, but every capacitor needs access to the surface of the inner conductor, because if it's hidden behind another capacitor the high frequencies won't see it: the electric field needs a long time to travel through a dielectric with a high dielectric constant. Coil. A small coil made of fine wire with an air core or MnFeZn core connects the inner conductor on one side of the capacitor with a port in the outer conductor leading down the T. Frequencies above 1 GHz hit the coil from the side and apply an equal electric field to the whole coil. Therefore, no higher modes are excited within the coil. Because of the inductance of the coil, almost no current leaks from the center conductor to the port. Frequencies between 1 MHz and 1 GHz do leak into this port, so there is a second coil with a cone-shaped core outside of the outer conductor, but inside of a housing to avoid interference with other components.
This cone acts like a tapered transmission line transformer. It starts with a high impedance, so a lot of power will be reflected, but the rest will travel down the coil and there is some leakage into the low frequency port. Oscillations. Any oscillations in the capacitor or the coil or the composed LC circuit are damped by the dielectric and the core. Also the small coil should have about 10 ohm resistance to further damp oscillations and avoid ripple on the transmitted spectrum. Footnotes. <templatestyles src="Reflist/styles.css" /> References. <templatestyles src="Reflist/styles.css" />
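As a rough numerical check of the reactance criteria in the Design section above, the following sketch computes the smallest inductance and capacitance that keep XL well above and XC well below Zo at the lowest signal frequency; the factor-of-ten margin and the function name are assumptions for illustration, not design rules from the cited references.

```python
# A rough sizing sketch (illustrative margins, not a production design rule) for the
# reactance criteria in the Design section above: at the lowest signal frequency the
# inductor's reactance should be well above Z0 and the capacitor's well below Z0.
import math

def bias_tee_sizing(f_low_hz, z0=50.0, margin=10.0):
    """Return (minimum inductance in henries, minimum capacitance in farads) so that
    X_L = 2*pi*f*L >= margin*Z0 and X_C = 1/(2*pi*f*C) <= Z0/margin at f_low_hz."""
    w = 2.0 * math.pi * f_low_hz
    l_min = margin * z0 / w          # from w*L >= margin*Z0
    c_min = margin / (w * z0)        # from 1/(w*C) <= Z0/margin
    return l_min, c_min

# Example from the text: a bias tee specified down to 10 kHz in a 50-ohm system.
L, C = bias_tee_sizing(10e3, z0=50.0, margin=10.0)
print(f"L >= {L*1e6:.0f} uH, C >= {C*1e6:.1f} uF")
# With margin=1 (X_L merely comparable to Z0), L comes out near 800 uH, matching the
# 10 kHz figure quoted above; a single real inductor that large is self-resonant far
# below 15 GHz, which is why wideband designs cascade several inductors.
```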
[ { "math_id": 0, "text": "\\begin{align}\nX_\\text{C} ~&=~ \\frac{1}{\\omega C} ~=~ \\frac{1}{2\\pi f C} ~\\ll~ Z_\\text{o} ~, \\\\\n\\\\\nX_\\text{L} ~&=~~ \\omega L ~~=~~ 2 \\pi f L ~~\\gg~ Z_\\text{o} ~,\\\\\n\\end{align}" } ]
https://en.wikipedia.org/wiki?curid=6146906
6147487
Neural coding
Method by which information is represented in the brain Neural coding (or neural representation) is a neuroscience field concerned with characterising the hypothetical relationship between the stimulus and the neuronal responses, and the relationship among the electrical activities of the neurons in the ensemble. Based on the theory that sensory and other information is represented in the brain by networks of neurons, it is believed that neurons can encode both digital and analog information. Overview. Neurons have an ability uncommon among the cells of the body to propagate signals rapidly over large distances by generating characteristic electrical pulses called action potentials: voltage spikes that can travel down axons. Sensory neurons change their activities by firing sequences of action potentials in various temporal patterns, with the presence of external sensory stimuli, such as light, sound, taste, smell and touch. Information about the stimulus is encoded in this pattern of action potentials and transmitted into and around the brain. Beyond this, specialized neurons, such as those of the retina, can communicate more information through graded potentials. These differ from action potentials because information about the strength of a stimulus directly correlates with the strength of the neurons' output. The signal decays much faster for graded potentials, necessitating short inter-neuron distances and high neuronal density. The advantage of graded potentials are higher information rates capable of encoding more states (i.e. higher fidelity) than spiking neurons. Although action potentials can vary somewhat in duration, amplitude and shape, they are typically treated as identical stereotyped events in neural coding studies. If the brief duration of an action potential (about 1 ms) is ignored, an action potential sequence, or spike train, can be characterized simply by a series of all-or-none point events in time. The lengths of interspike intervals (ISIs) between two successive spikes in a spike train often vary, apparently randomly. The study of neural coding involves measuring and characterizing how stimulus attributes, such as light or sound intensity, or motor actions, such as the direction of an arm movement, are represented by neuron action potentials or spikes. In order to describe and analyze neuronal firing, statistical methods and methods of probability theory and stochastic point processes have been widely applied. With the development of large-scale neural recording and decoding technologies, researchers have begun to crack the neural code and have already provided the first glimpse into the real-time neural code as memory is formed and recalled in the hippocampus, a brain region known to be central for memory formation. Neuroscientists have initiated several large-scale brain decoding projects. Encoding and decoding. The link between stimulus and response can be studied from two opposite points of view. Neural encoding refers to the map from stimulus to response. The main focus is to understand how neurons respond to a wide variety of stimuli, and to construct models that attempt to predict responses to other stimuli. Neural decoding refers to the reverse map, from response to stimulus, and the challenge is to reconstruct a stimulus, or certain aspects of that stimulus, from the spike sequences it evokes. Hypothesized coding schemes. A sequence, or 'train', of spikes may contain information based on different coding schemes. 
In some neurons the strength with which an postsynaptic partner responds may depend solely on the 'firing rate', the average number of spikes per unit time (a 'rate code'). At the other end, a complex 'temporal code' is based on the precise timing of single spikes. They may be locked to an external stimulus such as in the visual and auditory system or be generated intrinsically by the neural circuitry. Whether neurons use rate coding or temporal coding is a topic of intense debate within the neuroscience community, even though there is no clear definition of what these terms mean. Traditional View: Rate Code. The rate coding model of neuronal firing communication states that as the intensity of a stimulus increases, the frequency or rate of action potentials, or "spike firing", increases. Rate coding is sometimes called frequency coding. Rate coding is a traditional coding scheme, assuming that most, if not all, information about the stimulus is contained in the firing rate of the neuron. Because the sequence of action potentials generated by a given stimulus varies from trial to trial, neuronal responses are typically treated statistically or probabilistically. They may be characterized by firing rates, rather than as specific spike sequences. In most sensory systems, the firing rate increases, generally non-linearly, with increasing stimulus intensity. Under a rate coding assumption, any information possibly encoded in the temporal structure of the spike train is ignored. Consequently, rate coding is inefficient but highly robust with respect to the ISI 'noise'. During rate coding, precisely calculating firing rate is very important. In fact, the term "firing rate" has a few different definitions, which refer to different averaging procedures, such as an average over time (rate as a single-neuron spike count) or an average over several repetitions (rate of PSTH) of experiment. In rate coding, learning is based on activity-dependent synaptic weight modifications. Rate coding was originally shown by Edgar Adrian and Yngve Zotterman in 1926. In this simple experiment different weights were hung from a muscle. As the weight of the stimulus increased, the number of spikes recorded from sensory nerves innervating the muscle also increased. From these original experiments, Adrian and Zotterman concluded that action potentials were unitary events, and that the frequency of events, and not individual event magnitude, was the basis for most inter-neuronal communication. In the following decades, measurement of firing rates became a standard tool for describing the properties of all types of sensory or cortical neurons, partly due to the relative ease of measuring rates experimentally. However, this approach neglects all the information possibly contained in the exact timing of the spikes. During recent years, more and more experimental evidence has suggested that a straightforward firing rate concept based on temporal averaging may be too simplistic to describe brain activity. Spike-count rate (average over time). The spike-count rate, also referred to as temporal average, is obtained by counting the number of spikes that appear during a trial and dividing by the duration of trial. The length T of the time window is set by the experimenter and depends on the type of neuron recorded from and to the stimulus. In practice, to get sensible averages, several spikes should occur within the time window. 
Typical values are T = 100 ms or T = 500 ms, but the duration may also be longer or shorter (Chapter 1.5 in the textbook 'Spiking Neuron Models' ). The spike-count rate can be determined from a single trial, but at the expense of losing all temporal resolution about variations in neural response during the course of the trial. Temporal averaging can work well in cases where the stimulus is constant or slowly varying and does not require a fast reaction of the organism — and this is the situation usually encountered in experimental protocols. Real-world input, however, is hardly stationary, but often changing on a fast time scale. For example, even when viewing a static image, humans perform saccades, rapid changes of the direction of gaze. The image projected onto the retinal photoreceptors changes therefore every few hundred milliseconds (Chapter 1.5 in ) Despite its shortcomings, the concept of a spike-count rate code is widely used not only in experiments, but also in models of neural networks. It has led to the idea that a neuron transforms information about a single input variable (the stimulus strength) into a single continuous output variable (the firing rate). There is a growing body of evidence that in Purkinje neurons, at least, information is not simply encoded in firing but also in the timing and duration of non-firing, quiescent periods. There is also evidence from retinal cells, that information is encoded not only in the firing rate but also in spike timing. More generally, whenever a rapid response of an organism is required a firing rate defined as a spike-count over a few hundred milliseconds is simply too slow. Time-dependent firing rate (averaging over several trials). The time-dependent firing rate is defined as the average number of spikes (averaged over trials) appearing during a short interval between times t and t+Δt, divided by the duration of the interval. It works for stationary as well as for time-dependent stimuli. To experimentally measure the time-dependent firing rate, the experimenter records from a neuron while stimulating with some input sequence. The same stimulation sequence is repeated several times and the neuronal response is reported in a Peri-Stimulus-Time Histogram (PSTH). The time t is measured with respect to the start of the stimulation sequence. The Δt must be large enough (typically in the range of one or a few milliseconds) so that there is a sufficient number of spikes within the interval to obtain a reliable estimate of the average. The number of occurrences of spikes nK(t;t+Δt) summed over all repetitions of the experiment divided by the number K of repetitions is a measure of the typical activity of the neuron between time t and t+Δt. A further division by the interval length Δt yields time-dependent firing rate r(t) of the neuron, which is equivalent to the spike density of PSTH (Chapter 1.5 in ). For sufficiently small Δt, r(t)Δt is the average number of spikes occurring between times t and t+Δt over multiple trials. If Δt is small, there will never be more than one spike within the interval between t and t+Δt on any given trial. This means that r(t)Δt is also the fraction of trials on which a spike occurred between those times. Equivalently, r(t)Δt is the probability that a spike occurs during this time interval. As an experimental procedure, the time-dependent firing rate measure is a useful method to evaluate neuronal activity, in particular in the case of time-dependent stimuli. 
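The two averaging procedures just described can be made concrete with a small sketch; the synthetic spike trains, bin sizes and rate values below are illustrative assumptions, not experimental data.

```python
# A minimal sketch (synthetic spike times, not experimental data) of the two averaging
# procedures described above: a spike-count rate over a single trial, and a trial-averaged
# time-dependent firing rate r(t) estimated from a PSTH.
import numpy as np

rng = np.random.default_rng(0)

def spike_count_rate(spike_times, t_window):
    """Average over time: number of spikes in [0, t_window) divided by the window length."""
    spike_times = np.asarray(spike_times)
    return np.sum(spike_times < t_window) / t_window

def psth_rate(trials, t_window, dt):
    """Average over trials: r(t) = n_K(t; t+dt) / (K * dt) for each bin of width dt."""
    edges = np.arange(0.0, t_window + dt, dt)
    counts = np.zeros(len(edges) - 1)
    for spikes in trials:
        counts += np.histogram(spikes, bins=edges)[0]
    return counts / (len(trials) * dt)   # spikes per second in each bin

# Synthetic example: 50 repeated trials of a 0.5 s stimulus whose underlying rate
# steps from 5 Hz to 40 Hz at t = 0.25 s (approximate inhomogeneous Poisson spiking).
def simulate_trial(t_window=0.5, dt=0.001):
    t = np.arange(0, t_window, dt)
    rate = np.where(t < 0.25, 5.0, 40.0)
    return t[rng.random(len(t)) < rate * dt]

trials = [simulate_trial() for _ in range(50)]
print("trial 0 spike-count rate:", spike_count_rate(trials[0], 0.5), "Hz")
print("PSTH-based r(t), 50 ms bins:", np.round(psth_rate(trials, 0.5, 0.05), 1))
```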
The obvious problem with this approach is that it cannot be the coding scheme used by neurons in the brain. Neurons cannot wait for a stimulus to be presented repeatedly in exactly the same manner before generating a response. Nevertheless, the experimental time-dependent firing rate measure can make sense, if there are large populations of independent neurons that receive the same stimulus. Instead of recording from a population of N neurons in a single run, it is experimentally easier to record from a single neuron and average over N repeated runs. Thus, the time-dependent firing rate coding relies on the implicit assumption that there are always populations of neurons. Temporal coding. When precise spike timing or high-frequency firing-rate fluctuations are found to carry information, the neural code is often identified as a temporal code. A number of studies have found that the temporal resolution of the neural code is on a millisecond time scale, indicating that precise spike timing is a significant element in neural coding. Such codes, which communicate via the time between spikes, are also referred to as interpulse interval codes, and have been supported by recent studies. Neurons exhibit high-frequency fluctuations of firing rates which could be noise or could carry information. Rate coding models suggest that these irregularities are noise, while temporal coding models suggest that they encode information. If the nervous system only used rate codes to convey information, a more consistent, regular firing rate would have been evolutionarily advantageous, and neurons would have utilized this code over other less robust options. Temporal coding supplies an alternate explanation for the "noise", suggesting that it actually encodes information and affects neural processing. To model this idea, binary symbols can be used to mark the spikes: 1 for a spike, 0 for no spike. Temporal coding allows the sequence 000111000111 to mean something different from 001100110011, even though the mean firing rate is the same for both sequences, at 6 spikes/10 ms. Until recently, scientists had put the most emphasis on rate encoding as an explanation for post-synaptic potential patterns. However, functions of the brain are more temporally precise than the use of only rate encoding seems to allow. In other words, essential information could be lost due to the inability of the rate code to capture all the available information of the spike train. In addition, responses are different enough between similar (but not identical) stimuli to suggest that the distinct patterns of spikes contain a higher volume of information than is possible to include in a rate code. Temporal codes (also called spike codes) employ those features of the spiking activity that cannot be described by the firing rate. For example, time-to-first-spike after the stimulus onset, phase-of-firing with respect to background oscillations, characteristics based on the second and higher statistical moments of the ISI probability distribution, spike randomness, or precisely timed groups of spikes (temporal patterns) are candidates for temporal codes. As there is no absolute time reference in the nervous system, the information is carried either in terms of the relative timing of spikes in a population of neurons (temporal patterns) or with respect to an ongoing brain oscillation (phase of firing).
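Returning to the binary-string example above, a short sketch can show why a rate code cannot distinguish two trains that a temporal code can; the 1 ms bin width assumed below is an illustrative choice.

```python
# A small illustration (toy binary spike trains from the example above) that a rate code
# cannot distinguish two trains that a temporal code can: both have the same mean rate,
# but different spike-timing patterns and hence different interspike intervals.
def mean_rate(train, bin_ms=1.0):
    """Spikes per second for a binary train with one symbol per `bin_ms` milliseconds."""
    return 1000.0 * sum(train) / (len(train) * bin_ms)

def interspike_intervals(train):
    times = [i for i, s in enumerate(train) if s == 1]
    return [t1 - t0 for t0, t1 in zip(times, times[1:])]

a = [0, 0, 0, 1, 1, 1, 0, 0, 0, 1, 1, 1]
b = [0, 0, 1, 1, 0, 0, 1, 1, 0, 0, 1, 1]

print(mean_rate(a), mean_rate(b))                        # identical mean firing rates
print(interspike_intervals(a), interspike_intervals(b))  # different temporal structure
```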
One way in which temporal codes are decoded, in presence of neural oscillations, is that spikes occurring at specific phases of an oscillatory cycle are more effective in depolarizing the post-synaptic neuron. The temporal structure of a spike train or firing rate evoked by a stimulus is determined both by the dynamics of the stimulus and by the nature of the neural encoding process. Stimuli that change rapidly tend to generate precisely timed spikes (and rapidly changing firing rates in PSTHs) no matter what neural coding strategy is being used. Temporal coding in the narrow sense refers to temporal precision in the response that does not arise solely from the dynamics of the stimulus, but that nevertheless relates to properties of the stimulus. The interplay between stimulus and encoding dynamics makes the identification of a temporal code difficult. In temporal coding, learning can be explained by activity-dependent synaptic delay modifications. The modifications can themselves depend not only on spike rates (rate coding) but also on spike timing patterns (temporal coding), i.e., can be a special case of spike-timing-dependent plasticity. The issue of temporal coding is distinct and independent from the issue of independent-spike coding. If each spike is independent of all the other spikes in the train, the temporal character of the neural code is determined by the behavior of time-dependent firing rate r(t). If r(t) varies slowly with time, the code is typically called a rate code, and if it varies rapidly, the code is called temporal. Temporal coding in sensory systems. For very brief stimuli, a neuron's maximum firing rate may not be fast enough to produce more than a single spike. Due to the density of information about the abbreviated stimulus contained in this single spike, it would seem that the timing of the spike itself would have to convey more information than simply the average frequency of action potentials over a given period of time. This model is especially important for sound localization, which occurs within the brain on the order of milliseconds. The brain must obtain a large quantity of information based on a relatively short neural response. Additionally, if low firing rates on the order of ten spikes per second must be distinguished from arbitrarily close rate coding for different stimuli, then a neuron trying to discriminate these two stimuli may need to wait for a second or more to accumulate enough information. This is not consistent with numerous organisms which are able to discriminate between stimuli in the time frame of milliseconds, suggesting that a rate code is not the only model at work. To account for the fast encoding of visual stimuli, it has been suggested that neurons of the retina encode visual information in the latency time between stimulus onset and first action potential, also called latency to first spike or time-to-first-spike. This type of temporal coding has been shown also in the auditory and somato-sensory system. The main drawback of such a coding scheme is its sensitivity to intrinsic neuronal fluctuations. In the primary visual cortex of macaques, the timing of the first spike relative to the start of the stimulus was found to provide more information than the interval between spikes. However, the interspike interval could be used to encode additional information, which is especially important when the spike rate reaches its limit, as in high-contrast situations. 
For this reason, temporal coding may play a part in coding defined edges rather than gradual transitions. The mammalian gustatory system is useful for studying temporal coding because of its fairly distinct stimuli and the easily discernible responses of the organism. Temporally encoded information may help an organism discriminate between different tastants of the same category (sweet, bitter, sour, salty, umami) that elicit very similar responses in terms of spike count. The temporal component of the pattern elicited by each tastant may be used to determine its identity (e.g., the difference between two bitter tastants, such as quinine and denatonium). In this way, both rate coding and temporal coding may be used in the gustatory system – rate for basic tastant type, temporal for more specific differentiation. Research on mammalian gustatory system has shown that there is an abundance of information present in temporal patterns across populations of neurons, and this information is different from that which is determined by rate coding schemes. Groups of neurons may synchronize in response to a stimulus. In studies dealing with the front cortical portion of the brain in primates, precise patterns with short time scales only a few milliseconds in length were found across small populations of neurons which correlated with certain information processing behaviors. However, little information could be determined from the patterns; one possible theory is they represented the higher-order processing taking place in the brain. As with the visual system, in mitral/tufted cells in the olfactory bulb of mice, first-spike latency relative to the start of a sniffing action seemed to encode much of the information about an odor. This strategy of using spike latency allows for rapid identification of and reaction to an odorant. In addition, some mitral/tufted cells have specific firing patterns for given odorants. This type of extra information could help in recognizing a certain odor, but is not completely necessary, as average spike count over the course of the animal's sniffing was also a good identifier. Along the same lines, experiments done with the olfactory system of rabbits showed distinct patterns which correlated with different subsets of odorants, and a similar result was obtained in experiments with the locust olfactory system. Temporal coding applications. The specificity of temporal coding requires highly refined technology to measure informative, reliable, experimental data. Advances made in optogenetics allow neurologists to control spikes in individual neurons, offering electrical and spatial single-cell resolution. For example, blue light causes the light-gated ion channel channelrhodopsin to open, depolarizing the cell and producing a spike. When blue light is not sensed by the cell, the channel closes, and the neuron ceases to spike. The pattern of the spikes matches the pattern of the blue light stimuli. By inserting channelrhodopsin gene sequences into mouse DNA, researchers can control spikes and therefore certain behaviors of the mouse (e.g., making the mouse turn left). Researchers, through optogenetics, have the tools to effect different temporal codes in a neuron while maintaining the same mean firing rate, and thereby can test whether or not temporal coding occurs in specific neural circuits. Optogenetic technology also has the potential to enable the correction of spike abnormalities at the root of several neurological and psychological disorders. 
If neurons do encode information in individual spike timing patterns, key signals could be missed by attempting to crack the code while looking only at mean firing rates. Understanding any temporally encoded aspects of the neural code and replicating these sequences in neurons could allow for greater control and treatment of neurological disorders such as depression, schizophrenia, and Parkinson's disease. Regulation of spike intervals in single cells more precisely controls brain activity than the addition of pharmacological agents intravenously. Phase-of-firing code. Phase-of-firing code is a neural coding scheme that combines the spike count code with a time reference based on oscillations. This type of code takes into account a time label for each spike according to a time reference based on phase of local ongoing oscillations at low or high frequencies. It has been shown that neurons in some cortical sensory areas encode rich naturalistic stimuli in terms of their spike times relative to the phase of ongoing network oscillatory fluctuations, rather than only in terms of their spike count. The local field potential signals reflect population (network) oscillations. The phase-of-firing code is often categorized as a temporal code although the time label used for spikes (i.e. the network oscillation phase) is a low-resolution (coarse-grained) reference for time. As a result, often only four discrete values for the phase are enough to represent all the information content in this kind of code with respect to the phase of oscillations in low frequencies. Phase-of-firing code is loosely based on the phase precession phenomena observed in place cells of the hippocampus. Another feature of this code is that neurons adhere to a preferred order of spiking between a group of sensory neurons, resulting in firing sequence. Phase code has been shown in visual cortex to involve also high-frequency oscillations. Within a cycle of gamma oscillation, each neuron has its own preferred relative firing time. As a result, an entire population of neurons generates a firing sequence that has a duration of up to about 15 ms. Population coding. Population coding is a method to represent stimuli by using the joint activities of a number of neurons. In population coding, each neuron has a distribution of responses over some set of inputs, and the responses of many neurons may be combined to determine some value about the inputs. From the theoretical point of view, population coding is one of a few mathematically well-formulated problems in neuroscience. It grasps the essential features of neural coding and yet is simple enough for theoretic analysis. Experimental studies have revealed that this coding paradigm is widely used in the sensory and motor areas of the brain. For example, in the visual area medial temporal (MT), neurons are tuned to the direction of object motion. In response to an object moving in a particular direction, many neurons in MT fire with a noise-corrupted and bell-shaped activity pattern across the population. The moving direction of the object is retrieved from the population activity, to be immune from the fluctuation existing in a single neuron's signal. When monkeys are trained to move a joystick towards a lit target, a single neuron will fire for multiple target directions. However it fires the fastest for one direction and more slowly depending on how close the target was to the neuron's "preferred" direction. 
If each neuron represents movement in its preferred direction, and the vector sum of all neurons is calculated (each neuron has a firing rate and a preferred direction), the sum points in the direction of motion. In this manner, the population of neurons codes the signal for the motion. This particular population code is referred to as population vector coding. Place-time population codes, termed the averaged-localized-synchronized-response (ALSR) code, have been derived for neural representation of auditory acoustic stimuli. This exploits both the place or tuning within the auditory nerve, as well as the phase-locking within each nerve fiber auditory nerve. The first ALSR representation was for steady-state vowels; ALSR representations of pitch and formant frequencies in complex, non-steady state stimuli were later demonstrated for voiced-pitch, and formant representations in consonant-vowel syllables. The advantage of such representations is that global features such as pitch or formant transition profiles can be represented as global features across the entire nerve simultaneously via both rate and place coding. Population coding has a number of other advantages as well, including reduction of uncertainty due to neuronal variability and the ability to represent a number of different stimulus attributes simultaneously. Population coding is also much faster than rate coding and can reflect changes in the stimulus conditions nearly instantaneously. Individual neurons in such a population typically have different but overlapping selectivities, so that many neurons, but not necessarily all, respond to a given stimulus. Typically an encoding function has a peak value such that activity of the neuron is greatest if the perceptual value is close to the peak value, and becomes reduced accordingly for values less close to the peak value. It follows that the actual perceived value can be reconstructed from the overall pattern of activity in the set of neurons. Vector coding is an example of simple averaging. A more sophisticated mathematical technique for performing such a reconstruction is the method of maximum likelihood based on a multivariate distribution of the neuronal responses. These models can assume independence, second order correlations, or even more detailed dependencies such as higher order maximum entropy models, or copulas. Correlation coding. The correlation coding model of neuronal firing claims that correlations between action potentials, or "spikes", within a spike train may carry additional information above and beyond the simple timing of the spikes. Early work suggested that correlation between spike trains can only reduce, and never increase, the total mutual information present in the two spike trains about a stimulus feature. However, this was later demonstrated to be incorrect. Correlation structure can increase information content if noise and signal correlations are of opposite sign. Correlations can also carry information not present in the average firing rate of two pairs of neurons. A good example of this exists in the pentobarbital-anesthetized marmoset auditory cortex, in which a pure tone causes an increase in the number of correlated spikes, but not an increase in the mean firing rate, of pairs of neurons. Independent-spike coding. The independent-spike coding model of neuronal firing claims that each individual action potential, or "spike", is independent of each other spike within the spike train. Position coding. 
A typical population code involves neurons with a Gaussian tuning curve whose means vary linearly with the stimulus intensity, meaning that the neuron responds most strongly (in terms of spikes per second) to a stimulus near the mean. The actual intensity could be recovered as the stimulus level corresponding to the mean of the neuron with the greatest response. However, the noise inherent in neural responses means that a maximum likelihood estimation function is more accurate. This type of code is used to encode continuous variables such as joint position, eye position, color, or sound frequency. Any individual neuron is too noisy to faithfully encode the variable using rate coding, but an entire population ensures greater fidelity and precision. For a population of unimodal tuning curves, i.e. with a single peak, the precision typically scales linearly with the number of neurons. Hence, for half the precision, half as many neurons are required. In contrast, when the tuning curves have multiple peaks, as in grid cells that represent space, the precision of the population can scale exponentially with the number of neurons. This greatly reduces the number of neurons required for the same precision. Sparse coding. The sparse code is when each item is encoded by the strong activation of a relatively small set of neurons. For each item to be encoded, this is a different subset of all available neurons. In contrast to sensor-sparse coding, sensor-dense coding implies that all information from possible sensor locations is known. As a consequence, sparseness may be focused on temporal sparseness ("a relatively small number of time periods are active") or on the sparseness in an activated population of neurons. In this latter case, this may be defined in one time period as the number of activated neurons relative to the total number of neurons in the population. This seems to be a hallmark of neural computations since compared to traditional computers, information is massively distributed across neurons. Sparse coding of natural images produces wavelet-like oriented filters that resemble the receptive fields of simple cells in the visual cortex. The capacity of sparse codes may be increased by simultaneous use of temporal coding, as found in the locust olfactory system. Given a potentially large set of input patterns, sparse coding algorithms (e.g. sparse autoencoder) attempt to automatically find a small number of representative patterns which, when combined in the right proportions, reproduce the original input patterns. The sparse coding for the input then consists of those representative patterns. For example, the very large set of English sentences can be encoded by a small number of symbols (i.e. letters, numbers, punctuation, and spaces) combined in a particular order for a particular sentence, and so a sparse coding for English would be those symbols. Linear generative model. Most models of sparse coding are based on the linear generative model. In this model, the symbols are combined in a linear fashion to approximate the input. More formally, given a k-dimensional set of real-numbered input vectors formula_0, the goal of sparse coding is to determine n k-dimensional basis vectors formula_1, corresponding to neuronal receptive fields, along with a sparse n-dimensional vector of weights or coefficients formula_2 for each input vector, so that a linear combination of the basis vectors with proportions given by the coefficients results in a close approximation to the input vector: formula_3. 
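To make the linear generative model concrete, the following minimal sketch (not drawn from any particular published algorithm) recovers a sparse coefficient vector for one input by ℓ1-regularised least squares using iterative soft-thresholding; the random dictionary, sparsity weight and iteration count are illustrative assumptions only.

```python
import numpy as np

rng = np.random.default_rng(0)

k, n = 16, 64                      # input dimension k, number of basis vectors n (overcomplete)
B = rng.normal(size=(k, n))        # columns play the role of basis vectors b_1 ... b_n
B /= np.linalg.norm(B, axis=0)     # normalise each basis vector

s_true = np.zeros(n)
s_true[rng.choice(n, 4, replace=False)] = rng.normal(size=4)   # a sparse "cause"
xi = B @ s_true                    # input vector built from only a few basis vectors

def sparse_code(xi, B, lam=0.05, steps=500):
    """ISTA: gradient step on the reconstruction error, then soft-threshold for sparsity."""
    L = np.linalg.norm(B, 2) ** 2              # Lipschitz constant of the gradient
    s = np.zeros(B.shape[1])
    for _ in range(steps):
        s = s + (B.T @ (xi - B @ s)) / L       # reduce ||xi - B s||^2
        s = np.sign(s) * np.maximum(np.abs(s) - lam / L, 0.0)   # soft threshold -> sparse s
    return s

s = sparse_code(xi, B)
print("nonzero coefficients (|s| > 1e-3):", int(np.sum(np.abs(s) > 1e-3)))
print("reconstruction error:", np.linalg.norm(xi - B @ s))
```

In a real sparse-coding study the basis vectors themselves would be learned from data rather than drawn at random, but the reconstruction of the input as a sparse linear combination proceeds as above.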
The codings generated by algorithms implementing a linear generative model can be classified into codings with "soft sparseness" and those with "hard sparseness". These refer to the distribution of basis vector coefficients for typical inputs. A coding with soft sparseness has a smooth Gaussian-like distribution, but peakier than Gaussian, with many zero values, some small absolute values, fewer larger absolute values, and very few very large absolute values. Thus, many of the basis vectors are active. Hard sparseness, on the other hand, indicates that there are many zero values, "no" or "hardly any" small absolute values, fewer larger absolute values, and very few very large absolute values, and thus few of the basis vectors are active. This is appealing from a metabolic perspective: less energy is used when fewer neurons are firing. Another measure of coding is whether it is "critically complete" or "overcomplete". If the number of basis vectors n is equal to the dimensionality k of the input set, the coding is said to be critically complete. In this case, smooth changes in the input vector result in abrupt changes in the coefficients, and the coding is not able to gracefully handle small scalings, small translations, or noise in the inputs. If, however, the number of basis vectors is larger than the dimensionality of the input set, the coding is "overcomplete". Overcomplete codings smoothly interpolate between input vectors and are robust under input noise. The human primary visual cortex is estimated to be overcomplete by a factor of 500, so that, for example, a 14 x 14 patch of input (a 196-dimensional space) is coded by roughly 100,000 neurons. Other models are based on matching pursuit, a sparse approximation algorithm which finds the "best matching" projections of multidimensional data, and dictionary learning, a representation learning method which aims to find a sparse matrix representation of the input data in the form of a linear combination of basic elements as well as those basic elements themselves. Biological evidence. Sparse coding may be a general strategy of neural systems to augment memory capacity. To adapt to their environments, animals must learn which stimuli are associated with rewards or punishments and distinguish these reinforced stimuli from similar but irrelevant ones. Such tasks require implementing stimulus-specific associative memories in which only a few neurons out of a population respond to any given stimulus and each neuron responds to only a few stimuli out of all possible stimuli. Theoretical work on sparse distributed memory has suggested that sparse coding increases the capacity of associative memory by reducing overlap between representations. Experimentally, sparse representations of sensory information have been observed in many systems, including vision, audition, touch, and olfaction. However, despite the accumulating evidence for widespread sparse coding and theoretical arguments for its importance, a demonstration that sparse coding improves the stimulus-specificity of associative memory has been difficult to obtain. In the "Drosophila" olfactory system, sparse odor coding by the Kenyon cells of the mushroom body is thought to generate a large number of precisely addressable locations for the storage of odor-specific memories. Sparseness is controlled by a negative feedback circuit between Kenyon cells and GABAergic anterior paired lateral (APL) neurons. 
Systematic activation and blockade of each leg of this feedback circuit shows that Kenyon cells activate APL neurons and APL neurons inhibit Kenyon cells. Disrupting the Kenyon cell–APL feedback loop decreases the sparseness of Kenyon cell odor responses, increases inter-odor correlations, and prevents flies from learning to discriminate similar, but not dissimilar, odors. These results suggest that feedback inhibition suppresses Kenyon cell activity to maintain sparse, decorrelated odor coding and thus the odor-specificity of memories. References. <templatestyles src="Reflist/styles.css" />
[ { "math_id": 0, "text": "\\vec{\\xi }\\in \\mathbb{R}^{k}" }, { "math_id": 1, "text": "\\vec{b_1}, \\ldots, \\vec{b_n} \\in \\mathbb{R}^{k}" }, { "math_id": 2, "text": "\\vec{s} \\in \\mathbb{R}^{n}" }, { "math_id": 3, "text": "\\vec{\\xi} \\approx \\sum_{j=1}^{n} s_{j}\\vec{b}_{j}" } ]
https://en.wikipedia.org/wiki?curid=6147487
61476
Radius of convergence
Domain of convergence of power series In mathematics, the radius of convergence of a power series is the radius of the largest disk at the center of the series in which the series converges. It is either a non-negative real number or formula_0. When it is positive, the power series converges absolutely and uniformly on compact sets inside the open disk of radius equal to the radius of convergence, and it is the Taylor series of the analytic function to which it converges. In case of multiple singularities of a function (singularities are those values of the argument for which the function is not defined), the radius of convergence is the shortest or minimum of all the respective distances (which are all non-negative numbers) calculated from the center of the disk of convergence to the respective singularities of the function. Definition. For a power series "f" defined as: formula_1 where The radius of convergence "r" is a nonnegative real number or formula_0 such that the series converges if formula_2 and diverges if formula_3 Some may prefer an alternative definition, as existence is obvious: formula_4 On the boundary, that is, where |"z" − "a"| = "r", the behavior of the power series may be complicated, and the series may converge for some values of "z" and diverge for others. The radius of convergence is infinite if the series converges for all complex numbers "z". Finding the radius of convergence. Two cases arise: Theoretical radius. The radius of convergence can be found by applying the root test to the terms of the series. The root test uses the number formula_6 "lim sup" denotes the limit superior. The root test states that the series converges if "C" < 1 and diverges if "C" > 1. It follows that the power series converges if the distance from "z" to the center "a" is less than formula_7 and diverges if the distance exceeds that number; this statement is the Cauchy–Hadamard theorem. Note that "r" = 1/0 is interpreted as an infinite radius, meaning that "f" is an entire function. The limit involved in the ratio test is usually easier to compute, and when that limit exists, it shows that the radius of convergence is finite. formula_8 This is shown as follows. The ratio test says the series converges if formula_9 That is equivalent to formula_10 Practical estimation of radius in the case of real coefficients. Usually, in scientific applications, only a finite number of coefficients formula_5 are known. Typically, as formula_11 increases, these coefficients settle into a regular behavior determined by the nearest radius-limiting singularity. In this case, two main techniques have been developed, based on the fact that the coefficients of a Taylor series are roughly exponential with ratio formula_12 where "r" is the radius of convergence. Radius of convergence in complex analysis. A power series with a positive radius of convergence can be made into a holomorphic function by taking its argument to be a complex variable. The radius of convergence can be characterized by the following theorem: The radius of convergence of a power series "f" centered on a point "a" is equal to the distance from "a" to the nearest point where "f" cannot be defined in a way that makes it holomorphic. The set of all points whose distance to "a" is strictly less than the radius of convergence is called the "disk of convergence". "The nearest point" means the nearest point in the complex plane, not necessarily on the real line, even if the center and all coefficients are real. 
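As a rough illustration of the formulas above, and of the practical situation in which only finitely many coefficients are known, the following sketch estimates the radius of convergence directly from Taylor coefficients via the ratio and root tests; the example series and truncation orders are arbitrary choices, not taken from the article.

```python
from math import factorial

def ratio_estimate(c, n):
    # r is approximated by |c_n / c_{n+1}| for large n (when the limit exists)
    return abs(c(n) / c(n + 1))

def root_estimate(c, n):
    # r is approximated by |c_n|**(-1/n) for large n (Cauchy-Hadamard)
    return abs(c(n)) ** (-1.0 / n)

c_log = lambda n: 1.0 / ((n + 1) * 2 ** n)   # coefficients of -2*ln(1 - z/2)/z, true radius 2
c_exp = lambda n: 1.0 / factorial(n)         # coefficients of exp(z), infinite radius

for n in (5, 20, 80):
    print(n, ratio_estimate(c_log, n), root_estimate(c_log, n), ratio_estimate(c_exp, n))
# the first two columns approach 2; the last grows without bound, signalling an entire function
```

These purely coefficient-based estimates complement the complex-analytic characterization above, which the following example illustrates.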
For example, the function formula_29 has no singularities on the real line, since formula_30 has no real roots. Its Taylor series about 0 is given by formula_31 The root test shows that its radius of convergence is 1. In accordance with this, the function "f"("z") has singularities at ±"i", which are at a distance 1 from 0. For a proof of this theorem, see analyticity of holomorphic functions. A simple example. The arctangent function of trigonometry can be expanded in a power series: formula_32 It is easy to apply the root test in this case to find that the radius of convergence is 1. A more complicated example. Consider this power series: formula_33 where the rational numbers "B""n" are the Bernoulli numbers. It may be cumbersome to try to apply the ratio test to find the radius of convergence of this series. But the theorem of complex analysis stated above quickly solves the problem. At "z" = 0, there is in effect no singularity since the singularity is removable. The only non-removable singularities are therefore located at the "other" points where the denominator is zero. We solve formula_34 by recalling that if "z" = "x" + "iy" and "e""iy" = cos("y") + "i" sin("y") then formula_35 and then take "x" and "y" to be real. Since "y" is real, the absolute value of cos("y") + "i" sin("y") is necessarily 1. Therefore, the absolute value of "e""z" can be 1 only if "e""x" is 1; since "x" is real, that happens only if "x" = 0. Therefore "z" is purely imaginary and cos("y") + "i" sin("y") = 1. Since "y" is real, that happens only if cos("y") = 1 and sin("y") = 0, so that "y" is an integer multiple of 2π. Consequently the singular points of this function occur at "z" = a nonzero integer multiple of 2π"i". The singularities nearest 0, which is the center of the power series expansion, are at ±2π"i". The distance from the center to either of those points is 2π, so the radius of convergence is 2π. Convergence on the boundary. If the power series is expanded around the point "a" and the radius of convergence is "r", then the set of all points "z" such that |"z" − "a"| = "r" is a circle called the "boundary" of the disk of convergence. A power series may diverge at every point on the boundary, or diverge on some points and converge at other points, or converge at all the points on the boundary. Furthermore, even if the series converges everywhere on the boundary (even uniformly), it does not necessarily converge absolutely. Example 1: The power series for the function "f"("z") = 1/(1 − "z"), expanded around "z" = 0, which is simply formula_36 has radius of convergence 1 and diverges at every point on the boundary. Example 2: The power series for "g"("z") = −ln(1 − "z"), expanded around "z" = 0, which is formula_37 has radius of convergence 1, and diverges for "z" = 1 but converges for all other points on the boundary. The function "f"("z") of Example 1 is the derivative of "g"("z"). Example 3: The power series formula_38 has radius of convergence 1 and converges everywhere on the boundary absolutely. If "h" is the function represented by this series on the unit disk, then the derivative of "h"("z") is equal to "g"("z")/"z" with "g" of Example 2. It turns out that "h"("z") is the dilogarithm function. Example 4: The power series formula_39 has radius of convergence 1 and converges uniformly on the entire boundary |"z"| = 1, but does not converge absolutely on the boundary. Rate of convergence. 
If we expand the function formula_40 around the point "x" = 0, we find that the radius of convergence of this series is formula_0, meaning that this series converges for all complex numbers. However, in applications, one is often interested in the precision of a numerical answer. Both the number of terms and the value at which the series is to be evaluated affect the accuracy of the answer. For example, if we want to calculate sin(0.1) accurately to five decimal places, we need only the first two terms of the series. However, if we want the same precision for "x" = 1, we must evaluate and sum the first five terms of the series. For sin(10), one requires the first 18 terms of the series, and for sin(100) the first 141 terms. So for these particular values the power series expansion converges fastest at the center, and as one moves away from the center of convergence the rate of convergence slows down, until the boundary (if it exists) is reached and crossed, beyond which the series diverges. Abscissa of convergence of a Dirichlet series. An analogous concept is the abscissa of convergence of a Dirichlet series formula_41 Such a series converges if the real part of "s" is greater than a particular number depending on the coefficients "a""n": the abscissa of convergence. Notes. <templatestyles src="Reflist/styles.css" />
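As a quick numerical check of the term counts quoted in the discussion of the rate of convergence above, the sketch below sums the sine series in exact rational arithmetic (plain floating point would suffer catastrophic cancellation at "x" = 100) and counts the terms needed to come within 10^−6 of the true value; the exact counts depend on how "five decimal places" is operationalized.

```python
from fractions import Fraction
from math import sin, factorial

def terms_needed(x, tol=1e-6):
    """Count terms of sum (-1)^n x^(2n+1)/(2n+1)! needed to get within tol of sin(x)."""
    target = sin(float(x))
    partial, n = Fraction(0), 0
    while True:
        term = Fraction((-1) ** n) * Fraction(x) ** (2 * n + 1) / factorial(2 * n + 1)
        partial += term
        n += 1
        if abs(float(partial) - target) < tol:
            return n

for x in (Fraction(1, 10), 1, 10, 100):
    print(float(x), terms_needed(x))   # should land close to the 2, 5, 18 and 141 quoted above
```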
[ { "math_id": 0, "text": "\\infty" }, { "math_id": 1, "text": "f(z) = \\sum_{n=0}^\\infty c_n (z-a)^n, " }, { "math_id": 2, "text": "|z-a| < r" }, { "math_id": 3, "text": "|z-a| > r." }, { "math_id": 4, "text": "r=\\sup \\left\\{ |z-a|\\ \\left|\\ \\sum_{n=0}^\\infty c_n(z-a)^n\\ \\text{ converges } \\right.\\right\\} " }, { "math_id": 5, "text": "c_n" }, { "math_id": 6, "text": "C = \\limsup_{n\\to\\infty}\\sqrt[n]{|c_n(z-a)^n|} = \\limsup_{n\\to\\infty} \\left(\\sqrt[n]{|c_n|}\\right) |z-a|" }, { "math_id": 7, "text": "r = \\frac{1}{\\limsup_{n\\to\\infty}\\sqrt[n]{|c_n|}}" }, { "math_id": 8, "text": "r = \\lim_{n\\to\\infty} \\left| \\frac{c_{n}}{c_{n+1}} \\right|." }, { "math_id": 9, "text": " \\lim_{n\\to\\infty} \\frac{|c_{n+1}(z-a)^{n+1}|}{|c_n(z-a)^n|} < 1. " }, { "math_id": 10, "text": " |z - a| < \\frac{1}{\\lim_{n\\to\\infty} \\frac{|c_{n+1}|}{|c_n|}} = \\lim_{n\\to\\infty} \\left|\\frac{c_n}{c_{n+1}}\\right|. " }, { "math_id": 11, "text": "n" }, { "math_id": 12, "text": "1/r" }, { "math_id": 13, "text": "\\lim_{n\\to \\infty} {c_n / c_{n-1}}" }, { "math_id": 14, "text": "1/r = \\lim_{n \\to \\infty} {c_n / c_{n-1}}" }, { "math_id": 15, "text": "r" }, { "math_id": 16, "text": "c_n/c_{n-1}" }, { "math_id": 17, "text": "1/n" }, { "math_id": 18, "text": "1/n=0" }, { "math_id": 19, "text": "n=\\infty" }, { "math_id": 20, "text": "b_n^2=\\frac{c_{n+1}c_{n-1} - c_n^2}{c_n c_{n-2} - c_{n-1}^2} \\quad n=3,4,5,\\ldots." }, { "math_id": 21, "text": "b_n" }, { "math_id": 22, "text": "p" }, { "math_id": 23, "text": "\\pm\\theta" }, { "math_id": 24, "text": "-(p+1)/r" }, { "math_id": 25, "text": "\\frac{1}{2} \\left(\\frac{c_{n-1}b_n}{c_n} + \\frac{c_{n+1}}{c_n b_n}\\right)" }, { "math_id": 26, "text": "1/n^2" }, { "math_id": 27, "text": "1/n^2=0" }, { "math_id": 28, "text": "\\cos\\theta" }, { "math_id": 29, "text": "f(z)=\\frac 1 {1+z^2}" }, { "math_id": 30, "text": "1+z^2" }, { "math_id": 31, "text": "\\sum_{n=0}^\\infty (-1)^n z^{2n}." }, { "math_id": 32, "text": "\\arctan(z)=z-\\frac{z^3} 3 + \\frac{z^5} 5 -\\frac{z^7} 7 +\\cdots ." }, { "math_id": 33, "text": "\\frac z {e^z-1}=\\sum_{n=0}^\\infty \\frac{B_n}{n!} z^n " }, { "math_id": 34, "text": "e^z - 1 = 0" }, { "math_id": 35, "text": "e^z = e^x e^{iy} = e^x(\\cos(y)+i\\sin(y))," }, { "math_id": 36, "text": " \\sum_{n=0}^\\infty z^n," }, { "math_id": 37, "text": " \\sum_{n=1}^\\infty \\frac{1}{n} z^n," }, { "math_id": 38, "text": " \\sum_{n=1}^\\infty \\frac 1 {n^2} z^n " }, { "math_id": 39, "text": "\\sum_{i=1}^\\infty a_i z^i \\text{ where } a_i = \\frac{(-1)^{n-1}}{2^nn}\\text{ for } n = \\lfloor\\log_2(i)\\rfloor+1\\text{, the unique integer with }2^{n-1}\\le i < 2^n," }, { "math_id": 40, "text": "\\sin x = \\sum^{\\infty}_{n=0} \\frac{(-1)^n}{(2n+1)!} x^{2n+1} = x - \\frac{x^3}{3!} + \\frac{x^5}{5!} - \\cdots\\text{ for all } x" }, { "math_id": 41, "text": "\\sum_{n=1}^\\infty \\frac{a_n}{n^s}." } ]
https://en.wikipedia.org/wiki?curid=61476
614763
Stark effect
Spectral line splitting in electrical field The Stark effect is the shifting and splitting of spectral lines of atoms and molecules due to the presence of an external electric field. It is the electric-field analogue of the Zeeman effect, where a spectral line is split into several components due to the presence of the magnetic field. Although initially coined for the static case, it is also used in the wider context to describe the effect of time-dependent electric fields. In particular, the Stark effect is responsible for the pressure broadening (Stark broadening) of spectral lines by charged particles in plasmas. For most spectral lines, the Stark effect is either linear (proportional to the applied electric field) or quadratic with a high accuracy. The Stark effect can be observed both for emission and absorption lines. The latter is sometimes called the inverse Stark effect, but this term is no longer used in the modern literature. History. The effect is named after the German physicist Johannes Stark, who discovered it in 1913. It was independently discovered in the same year by the Italian physicist Antonino Lo Surdo, and in Italy it is thus sometimes called the Stark–Lo Surdo effect. The discovery of this effect contributed importantly to the development of quantum theory and Stark was awarded with the Nobel Prize in Physics in the year 1919. Inspired by the magnetic Zeeman effect, and especially by Hendrik Lorentz's explanation of it, Woldemar Voigt performed classical mechanical calculations of quasi-elastically bound electrons in an electric field. By using experimental indices of refraction he gave an estimate of the Stark splittings. This estimate was a few orders of magnitude too low. Not deterred by this prediction, Stark undertook measurements on excited states of the hydrogen atom and succeeded in observing splittings. By the use of the Bohr–Sommerfeld ("old") quantum theory, Paul Epstein and Karl Schwarzschild were independently able to derive equations for the linear and quadratic Stark effect in hydrogen. Four years later, Hendrik Kramers derived formulas for intensities of spectral transitions. Kramers also included the effect of fine structure, with corrections for relativistic kinetic energy and coupling between electron spin and orbital motion. The first quantum mechanical treatment (in the framework of Werner Heisenberg's matrix mechanics) was by Wolfgang Pauli. Erwin Schrödinger discussed at length the Stark effect in his third paper on quantum theory (in which he introduced his perturbation theory), once in the manner of the 1916 work of Epstein (but generalized from the old to the new quantum theory) and once by his (first-order) perturbation approach. Finally, Epstein reconsidered the linear and quadratic Stark effect from the point of view of the new quantum theory. He derived equations for the line intensities which were a decided improvement over Kramers's results obtained by the old quantum theory. While the first-order-perturbation (linear) Stark effect in hydrogen is in agreement with both the old Bohr–Sommerfeld model and the quantum-mechanical theory of the atom, higher-order corrections are not. Measurements of the Stark effect under high field strengths confirmed the correctness of the new quantum theory. Mechanism. Overview. An electric field pointing from left to right, for example, tends to pull nuclei to the right and electrons to the left. 
In another way of viewing it, if an electronic state has its electron disproportionately to the left, its energy is lowered, while if it has the electron disproportionately to the right, its energy is raised. Other things being equal, the effect of the electric field is greater for outer electron shells, because the electron is more distant from the nucleus, so it travels farther left and farther right. The Stark effect can lead to splitting of degenerate energy levels. For example, in the Bohr model, an electron has the same energy whether it is in the 2s state or any of the 2p states. However, in an electric field, there will be hybrid orbitals (also called quantum superpositions) of the 2s and 2p states where the electron tends to be to the left, which will acquire a lower energy, and other hybrid orbitals where the electron tends to be to the right, which will acquire a higher energy. Therefore, the formerly degenerate energy levels will split into slightly lower and slightly higher energy levels. Multipole expansion. The Stark effect originates from the interaction between a charge distribution (atom or molecule) and an external electric field. The interaction energy of a continuous charge distribution formula_0, confined within a finite volume formula_1, with an external electrostatic potential formula_2 is formula_3 This expression is valid classically and quantum-mechanically alike. If the potential varies weakly over the charge distribution, the multipole expansion converges fast, so only a few first terms give an accurate approximation. Namely, keeping only the zero- and first-order terms, formula_4 where we introduced the electric field formula_5 and assumed the origin 0 to be somewhere within formula_1. Therefore, the interaction becomes formula_6 where formula_7 and formula_8 are, respectively, the total charge (zero moment) and the dipole moment of the charge distribution. Classical macroscopic objects are usually neutral or quasi-neutral (formula_9), so the first, monopole, term in the expression above is identically zero. This is also the case for a neutral atom or molecule. However, for an ion this is no longer true. Nevertheless, it is often justified to omit it in this case, too. Indeed, the Stark effect is observed in spectral lines, which are emitted when an electron "jumps" between two bound states. Since such a transition only alters the internal degrees of freedom of the radiator but not its charge, the effects of the monopole interaction on the initial and final states exactly cancel each other. Perturbation theory. Turning now to quantum mechanics an atom or a molecule can be thought of as a collection of point charges (electrons and nuclei), so that the second definition of the dipole applies. The interaction of atom or molecule with a uniform external field is described by the operator formula_10 This operator is used as a perturbation in first- and second-order perturbation theory to account for the first- and second-order Stark effect. First order. Let the unperturbed atom or molecule be in a "g"-fold degenerate state with orthonormal zeroth-order state functions formula_11. (Non-degeneracy is the special case "g" = 1). 
According to perturbation theory the first-order energies are the eigenvalues of the "g" × "g" matrix with general element formula_12 If "g" = 1 (as is often the case for electronic states of molecules) the first-order energy becomes proportional to the expectation (average) value of the dipole operator formula_13, formula_14 Because the electric dipole moment is a vector (tensor of the first rank), the diagonal elements of the perturbation matrix Vint vanish between states with a certain parity. Atoms and molecules possessing inversion symmetry do not have a (permanent) dipole moment and hence do not show a linear Stark effect. In order to obtain a non-zero matrix Vint for systems with an inversion center it is necessary that some of the unperturbed functions formula_15 have opposite parity (obtain plus and minus under inversion), because only functions of opposite parity give non-vanishing matrix elements. Degenerate zeroth-order states of opposite parity occur for excited hydrogen-like (one-electron) atoms or Rydberg states. Neglecting fine-structure effects, such a state with the principal quantum number "n" is "n"2-fold degenerate and formula_16 where formula_17 is the azimuthal (angular momentum) quantum number. For instance, the excited "n" = 4 state contains the following formula_18 states, formula_19 The one-electron states with even formula_18 are even under parity, while those with odd formula_18 are odd under parity. Hence hydrogen-like atoms with "n"&gt;1 show first-order Stark effect. The first-order Stark effect occurs in rotational transitions of symmetric top molecules (but not for linear and asymmetric molecules). In first approximation a molecule may be seen as a rigid rotor. A symmetric top rigid rotor has the unperturbed eigenstates formula_20 with 2(2"J"+1)-fold degenerate energy for |K| &gt; 0 and (2"J"+1)-fold degenerate energy for K=0. Here "D""J""MK" is an element of the Wigner D-matrix. The first-order perturbation matrix on basis of the unperturbed rigid rotor function is non-zero and can be diagonalized. This gives shifts and splittings in the rotational spectrum. Quantitative analysis of these Stark shift yields the permanent electric dipole moment of the symmetric top molecule. Second order. As stated, the quadratic Stark effect is described by second-order perturbation theory. The zeroth-order eigenproblem formula_21 is assumed to be solved. The perturbation theory gives formula_22 with the components of the polarizability tensor α defined by formula_23 The energy "E"(2) gives the quadratic Stark effect. Neglecting the hyperfine structure (which is often justified — unless extremely weak electric fields are considered), the polarizability tensor of atoms is isotropic, formula_24 For some molecules this expression is a reasonable approximation, too. For the ground state formula_25 is "always" positive, i.e., the quadratic Stark shift is always negative. Problems. The perturbative treatment of the Stark effect has some problems. In the presence of an electric field, states of atoms and molecules that were previously bound (square-integrable), become formally (non-square-integrable) resonances of finite width. These resonances may decay in finite time via field ionization. For low lying states and not too strong fields the decay times are so long, however, that for all practical purposes the system can be regarded as bound. For highly excited states and/or very strong fields ionization may have to be accounted for. (See also the article on the Rydberg atom). 
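As a concrete illustration of the first-order treatment above, the "n" = 2 level of hydrogen can be handled by diagonalizing the perturbation matrix in the degenerate {2s, 2p0, 2p+1, 2p−1} basis. The sketch below assumes the textbook matrix element ⟨2s|"z"|2p0⟩ = −3"a"0 (the only non-vanishing one, by parity and the Δ"m" = 0 selection rule) and expresses energies in units of "ea"0"F"; it illustrates the diagonalization step only, not a full atomic-structure calculation.

```python
import numpy as np

# Basis ordering: |2s>, |2p, m=0>, |2p, m=+1>, |2p, m=-1>.
# Perturbation V = e*F*z; by parity and the Delta m = 0 selection rule the only
# non-zero element is <2s| z |2p, m=0>, taken here as -3*a0 (textbook value).
# Working in units of e*a0*F:
V = np.array([[ 0.0, -3.0, 0.0, 0.0],
              [-3.0,  0.0, 0.0, 0.0],
              [ 0.0,  0.0, 0.0, 0.0],
              [ 0.0,  0.0, 0.0, 0.0]])

shifts, states = np.linalg.eigh(V)
print("first-order shifts (units of e*a0*F):", shifts)           # [-3.  0.  0.  3.]
print("state shifted by -3 e*a0*F:", np.round(states[:, 0], 3))  # equal mix of |2s> and |2p0>, up to sign
```

The two "m" = ±1 states are unshifted at first order, so the "n" = 2 level splits symmetrically into three components with shifts −3"ea"0"F", 0 and +3"ea"0"F", the classic linear Stark pattern.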
Applications. The Stark effect is the basis of the spectral shift measured for voltage-sensitive dyes used for imaging the firing activity of neurons. References. <templatestyles src="Reflist/styles.css" />
[ { "math_id": 0, "text": "\\rho(\\mathbf{r})" }, { "math_id": 1, "text": "\\mathcal{V}" }, { "math_id": 2, "text": "\\phi(\\mathbf{r})" }, { "math_id": 3, "text": " V_{\\mathrm{int}} = \\int_\\mathcal{V} \\rho(\\mathbf{r}) \\phi(\\mathbf{r}) \\, d^3 \\mathbf r." }, { "math_id": 4, "text": "\\phi(\\mathbf{r}) \\approx \\phi(\\mathbf{0}) - \\sum_{i=1}^3 r_i F_i," }, { "math_id": 5, "text": "F_i \\equiv - \\left. \\left(\\frac{\\partial \\phi}{\\partial r_i} \\right)\\right|_{\\mathbf{0}}" }, { "math_id": 6, "text": "V_{\\mathrm{int}} \\approx \\phi(\\mathbf{0}) \\int_\\mathcal{V} \\rho(\\mathbf{r}) d^3r - \\sum_{i=1}^3 F_i \\int_\\mathcal{V} \\rho(\\mathbf{r}) r_i d^3r \\equiv q \\phi(\\mathbf{0}) - \\sum_{i=1}^3 \\mu_i F_i = q \\phi(\\mathbf{0}) - \\boldsymbol{\\mu} \\cdot \\mathbf{F} , " }, { "math_id": 7, "text": "q" }, { "math_id": 8, "text": "\\mathbf{\\mu}" }, { "math_id": 9, "text": "q = 0" }, { "math_id": 10, "text": " V_{\\mathrm{int}} = - \\mathbf{F}\\cdot \\boldsymbol{\\mu}." }, { "math_id": 11, "text": " \\psi^0_1, \\ldots, \\psi^0_g " }, { "math_id": 12, "text": "\n(\\mathbf{V}_{\\mathrm{int}})_{kl} = \\langle \\psi^0_k | V_{\\mathrm{int}} | \\psi^0_l \\rangle =\n-\\mathbf{F}\\cdot \\langle \\psi^0_k | \\boldsymbol{\\mu} | \\psi^0_l \\rangle,\n\\qquad k,l=1,\\ldots, g.\n" }, { "math_id": 13, "text": "\\boldsymbol{\\mu}" }, { "math_id": 14, "text": "\nE^{(1)} = -\\mathbf{F}\\cdot \\langle \\psi^0_1 | \\boldsymbol{\\mu} | \\psi^0_1 \\rangle =\n-\\mathbf{F}\\cdot \\langle \\boldsymbol{\\mu} \\rangle.\n" }, { "math_id": 15, "text": " \\psi^0_i" }, { "math_id": 16, "text": "n^2 = \\sum_{\\ell=0}^{n-1} (2 \\ell + 1)," }, { "math_id": 17, "text": "\\ell " }, { "math_id": 18, "text": "\\ell" }, { "math_id": 19, "text": "16 = 1 + 3 + 5 +7 \\;\\; \\Longrightarrow\\;\\; n=4\\;\\text{contains}\\; s\\oplus p\\oplus d\\oplus f." }, { "math_id": 20, "text": "|JKM \\rangle = (D^J_{MK})^* \\quad\\text{with}\\quad M,K= -J,-J+1,\\dots,J" }, { "math_id": 21, "text": "H^{(0)} \\psi^0_k = E^{(0)}_k \\psi^0_k, \\quad k=0,1, \\ldots, \\quad E^{(0)}_0 < E^{(0)}_1 \\le E^{(0)}_2, \\dots " }, { "math_id": 22, "text": "\nE^{(2)}_k = \\sum_{k' \\neq k} \\frac{\\langle \\psi^0_k | V_\\mathrm{int} | \\psi^0_{k^\\prime} \\rangle \\langle \\psi^0_{k'} | V_\\mathrm{int} | \\psi^0_k \\rangle}{E^{(0)}_k - E^{(0)}_{k'}} \\equiv -\\frac{1}{2} \\sum_{i,j=1}^3 \\alpha_{ij} F_i F_j\n" }, { "math_id": 23, "text": "\n\\alpha_{ij} = -2\\sum_{k' \\neq k} \\frac{\\langle \\psi^0_k | \\mu_i | \\psi^0_{k'} \\rangle \\langle \\psi^0_{k'} | \\mu_j | \\psi^0_k \\rangle}{E^{(0)}_k - E^{(0)}_{k'}}.\n" }, { "math_id": 24, "text": "\\alpha_{ij} \\equiv \\alpha_0 \\delta_{ij} \\Longrightarrow E^{(2)} = -\\frac{1}{2} \\alpha_0 F^2." }, { "math_id": 25, "text": "\\alpha_0" } ]
https://en.wikipedia.org/wiki?curid=614763
61478
Analytic function
Type of function in mathematics In mathematics, an analytic function is a function that is locally given by a convergent power series. There exist both real analytic functions and complex analytic functions. Functions of each type are infinitely differentiable, but complex analytic functions exhibit properties that do not generally hold for real analytic functions. A function is analytic if and only if its Taylor series about formula_0 converges to the function in some neighborhood for every formula_0 in its domain. It is important to note that it is a neighborhood and not just at some point formula_0, since every differentiable function has at least a tangent line at every point, which is its Taylor series of order 1. So just having a polynomial expansion at singular points is not enough, and the Taylor series must also converge to the function on points adjacent to formula_0 to be considered an analytic function. As a counterexample see the Weierstrass function or the Fabius function. Definitions. Formally, a function formula_1 is "real analytic" on an open set formula_2 in the real line if for any formula_3 one can write formula_4 in which the coefficients formula_5 are real numbers and the series is convergent to formula_6 for formula_7 in a neighborhood of formula_8. Alternatively, a real analytic function is an infinitely differentiable function such that the Taylor series at any point formula_8 in its domain formula_9 converges to formula_6 for formula_7 in a neighborhood of formula_8 pointwise. The set of all real analytic functions on a given set formula_2 is often denoted by formula_10, or just by formula_11 if the domain is understood. A function formula_1 defined on some subset of the real line is said to be real analytic at a point formula_7 if there is a neighborhood formula_2 of formula_7 on which formula_1 is real analytic. The definition of a "complex analytic function" is obtained by replacing, in the definitions above, "real" with "complex" and "real line" with "complex plane". A function is complex analytic if and only if it is holomorphic i.e. it is complex differentiable. For this reason the terms "holomorphic" and "analytic" are often used interchangeably for such functions. Examples. Typical examples of analytic functions are Typical examples of functions that are not analytic are Alternative characterizations. The following conditions are equivalent: Complex analytic functions are exactly equivalent to holomorphic functions, and are thus much more easily characterized. For the case of an analytic function with several variables (see below), the real analyticity can be characterized using the Fourier–Bros–Iagolnitzer transform. In the multivariable case, real analytic functions satisfy a direct generalization of the third characterization. Let formula_21 be an open set, and let formula_22. Then formula_1 is real analytic on formula_23 if and only if formula_24 and for every compact formula_25 there exists a constant formula_17 such that for every multi-index formula_26 the following bound holds formula_27 Properties of analytic functions. A polynomial cannot be zero at too many points unless it is the zero polynomial (more precisely, the number of zeros is at most the degree of the polynomial). A similar but weaker statement holds for analytic functions. If the set of zeros of an analytic function ƒ has an accumulation point inside its domain, then ƒ is zero everywhere on the connected component containing the accumulation point. 
In other words, if ("r""n") is a sequence of distinct numbers such that ƒ("r""n") = 0 for all "n" and this sequence converges to a point "r" in the domain of "D", then ƒ is identically zero on the connected component of "D" containing "r". This is known as the identity theorem. Also, if all the derivatives of an analytic function at a point are zero, the function is constant on the corresponding connected component. These statements imply that while analytic functions do have more degrees of freedom than polynomials, they are still quite rigid. Analyticity and differentiability. As noted above, any analytic function (real or complex) is infinitely differentiable (also known as smooth, or formula_31). (Note that this differentiability is in the sense of real variables; compare complex derivatives below.) There exist smooth real functions that are not analytic: see non-analytic smooth function. In fact there are many such functions. The situation is quite different when one considers complex analytic functions and complex derivatives. It can be proved that any complex function differentiable (in the complex sense) in an open set is analytic. Consequently, in complex analysis, the term "analytic function" is synonymous with "holomorphic function". Real versus complex analytic functions. Real and complex analytic functions have important differences (this can already be seen from their different relationships with differentiability). Analyticity of complex functions is a more restrictive property, as it has more restrictive necessary conditions, and complex analytic functions have more structure than their real-line counterparts. According to Liouville's theorem, any bounded complex analytic function defined on the whole complex plane is constant. The corresponding statement for real analytic functions, with the complex plane replaced by the real line, is clearly false; this is illustrated by formula_32 Also, if a complex analytic function is defined in an open ball around a point "x"0, its power series expansion at "x"0 is convergent in the whole open ball (holomorphic functions are analytic). This statement for real analytic functions (with open ball meaning an open interval of the real line rather than an open disk of the complex plane) is not true in general; the function of the example above gives a counterexample for "x"0 = 0 and a ball of radius exceeding 1, since the power series 1 − "x"^2 + "x"^4 − "x"^6 + ⋯ diverges for |"x"| ≥ 1. Any real analytic function on some open set on the real line can be extended to a complex analytic function on some open set of the complex plane. However, not every real analytic function defined on the whole real line can be extended to a complex function defined on the whole complex plane. The function ƒ("x") defined in the paragraph above is a counterexample, as it is not defined for "x" = ±"i". This explains why the Taylor series of ƒ("x") diverges for |"x"| > 1, i.e., the radius of convergence is 1, because the complexified function has a pole at distance 1 from the evaluation point 0 and no further poles within the open disc of radius 1 around the evaluation point. Analytic functions of several variables. One can define analytic functions in several variables by means of power series in those variables (see power series). Analytic functions of several variables have some of the same properties as analytic functions of one variable. However, especially for complex analytic functions, new and interesting phenomena show up in two or more complex dimensions. Notes. 
<templatestyles src="Reflist/styles.css" />
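To see numerically the gap between smoothness and analyticity discussed above, the following sketch examines the standard non-analytic smooth function that equals exp(−1/"x") for "x" > 0 and 0 for "x" ≤ 0: every one-sided derivative at 0 vanishes, so its Taylor series at 0 is identically zero and cannot converge to the function on any neighbourhood of 0. SymPy is assumed to be available; the number of derivatives checked is an arbitrary choice.

```python
import sympy as sp

x = sp.symbols('x', positive=True)
f = sp.exp(-1 / x)          # the branch of the classic example for x > 0; f is defined as 0 for x <= 0

# Every one-sided derivative at 0 vanishes, so the Taylor series of f at 0 is identically 0 ...
for n in range(6):
    dn = sp.diff(f, x, n)
    print(n, sp.limit(dn, x, 0, dir='+'))   # each limit evaluates to 0

# ... yet f itself is strictly positive for x > 0, so the zero series does not converge to f
# near 0: the function is smooth at 0 but not analytic there.
print(sp.N(f.subs(x, sp.Rational(1, 2))))   # f(1/2) = exp(-2), clearly not 0
```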
[ { "math_id": 0, "text": " x_0 " }, { "math_id": 1, "text": "f" }, { "math_id": 2, "text": "D" }, { "math_id": 3, "text": "x_0\\in D" }, { "math_id": 4, "text": "\nf(x) = \\sum_{n=0}^\\infty a_{n} \\left( x-x_0 \\right)^{n} = a_0 + a_1 (x-x_0) + a_2 (x-x_0)^2 + \\cdots\n" }, { "math_id": 5, "text": "a_0, a_1, \\dots" }, { "math_id": 6, "text": "f(x)" }, { "math_id": 7, "text": "x" }, { "math_id": 8, "text": "x_0" }, { "math_id": 9, "text": " T(x) = \\sum_{n=0}^{\\infty} \\frac{f^{(n)}(x_0)}{n!} (x-x_0)^{n}" }, { "math_id": 10, "text": "\\mathcal{C}^{\\,\\omega}(D)" }, { "math_id": 11, "text": "\\mathcal{C}^{\\,\\omega}" }, { "math_id": 12, "text": "\\mathbb{R}^{2}" }, { "math_id": 13, "text": "f \\in \\mathcal{C}^\\infty_0(\\R^n)" }, { "math_id": 14, "text": "\\R^n" }, { "math_id": 15, "text": "G \\subset \\mathbb{C}" }, { "math_id": 16, "text": "K \\subset D" }, { "math_id": 17, "text": "C" }, { "math_id": 18, "text": "x \\in K" }, { "math_id": 19, "text": "k" }, { "math_id": 20, "text": " \\left| \\frac{d^k f}{dx^k}(x) \\right| \\leq C^{k+1} k!" }, { "math_id": 21, "text": "U \\subset \\R^n" }, { "math_id": 22, "text": "f: U \\to \\R" }, { "math_id": 23, "text": "U" }, { "math_id": 24, "text": "f \\in C^\\infty(U)" }, { "math_id": 25, "text": "K \\subseteq U" }, { "math_id": 26, "text": "\\alpha \\in \\Z_{\\geq 0}^n" }, { "math_id": 27, "text": " \\sup_{x \\in K} \\left | \\frac{\\partial^\\alpha f}{\\partial x^\\alpha}(x) \\right | \\leq C^{|\\alpha|+1}\\alpha!" }, { "math_id": 28, "text": "\\Omega \\subseteq \\mathbb{C}" }, { "math_id": 29, "text": "u:\\Omega \\to \\mathbb{C}" }, { "math_id": 30, "text": "A_\\infty(\\Omega)" }, { "math_id": 31, "text": "\\mathcal{C}^{\\infty}" }, { "math_id": 32, "text": "f(x)=\\frac{1}{x^2+1}." } ]
https://en.wikipedia.org/wiki?curid=61478
61479806
Stable count distribution
Probability distribution In probability theory, the stable count distribution is the conjugate prior of a one-sided stable distribution. This distribution was discovered by Stephen Lihn (Chinese: 藺鴻圖) in his 2017 study of daily distributions of the S&amp;P 500 and the VIX. The stable distribution family is also sometimes referred to as the Lévy alpha-stable distribution, after Paul Lévy, the first mathematician to have studied it. Of the three parameters defining the distribution, the stability parameter formula_0 is most important. Stable count distributions have formula_1. The known analytical case of formula_2 is related to the VIX distribution (See Section 7 of ). All the moments are finite for the distribution. Definition. Its standard distribution is defined as formula_3 where formula_4 and formula_5 Its location-scale family is defined as formula_6 where formula_7, formula_8, and formula_5 In the above expression, formula_9 is a one-sided stable distribution, which is defined as following. Let formula_10 be a standard stable random variable whose distribution is characterized by formula_11, then we have formula_12 where formula_1. Consider the Lévy sum formula_13 where formula_14, then formula_15 has the density formula_16 where formula_17. Set formula_18, we arrive at formula_19 without the normalization constant. The reason why this distribution is called "stable count" can be understood by the relation formula_20. Note that formula_21 is the "count" of the Lévy sum. Given a fixed formula_0, this distribution gives the probability of taking formula_21 steps to travel one unit of distance. Integral form. Based on the integral form of formula_9 and formula_22, we have the integral form of formula_23 as formula_24 Based on the double-sine integral above, it leads to the integral form of the standard CDF: formula_25 where formula_26 is the sine integral function. The Wright representation. In "Series representation", it is shown that the stable count distribution is a special case of the Wright function (See Section 4 of ): formula_27 This leads to the Hankel integral: (based on (1.4.3) of ) formula_28where Ha represents a Hankel contour. Alternative derivation – lambda decomposition. Another approach to derive the stable count distribution is to use the Laplace transform of the one-sided stable distribution, (Section 2.4 of ) formula_29where formula_1. Let formula_30, and one can decompose the integral on the left hand side as a product distribution of a standard Laplace distribution and a standard stable count distribution, formula_31 where formula_32. This is called the "lambda decomposition" (See Section 4 of ) since the LHS was named as "symmetric lambda distribution" in Lihn's former works. However, it has several more popular names such as "exponential power distribution", or the "generalized error/normal distribution", often referred to when formula_33. It is also the Weibull survival function in Reliability engineering. Lambda decomposition is the foundation of Lihn's framework of asset returns under the stable law. The LHS is the distribution of asset returns. On the RHS, the Laplace distribution represents the lepkurtotic noise, and the stable count distribution represents the volatility. Stable Vol distribution. A variant of the stable count distribution is called the stable vol distribution formula_34. The Laplace transform of formula_35 can be re-expressed in terms of a Gaussian mixture of formula_34 (See Section 6 of ). 
It is derived from the lambda decomposition above by a change of variable such that formula_36 where formula_37 This transformation is named generalized Gauss transmutation since it generalizes the Gauss-Laplace transmutation, which is equivalent to formula_38. Connection to Gamma and Poisson distributions. The shape parameter of the Gamma and Poisson Distributions is connected to the inverse of Lévy's stability parameter formula_39. The upper regularized gamma function formula_40 can be expressed as an incomplete integral of formula_41 as formula_42 By replacing formula_41 with the decomposition and carrying out one integral, we have: formula_43 Reverting formula_44 back to formula_45, we arrive at the decomposition of formula_40 in terms of a stable count: formula_46 Differentiate formula_40 by formula_47, we arrive at the desired formula: formula_48 This is in the form of a product distribution. The term formula_49 in the RHS is associated with a Weibull distribution of shape formula_50. Hence, this formula connects the stable count distribution to the probability density function of a Gamma distribution (here) and the probability mass function of a Poisson distribution (here, formula_51). And the shape parameter formula_50 can be regarded as inverse of Lévy's stability parameter formula_39. Connection to Chi and Chi-squared distributions. The degrees of freedom formula_52 in the chi and chi-squared Distributions can be shown to be related to formula_53. Hence, the original idea of viewing formula_54 as an integer index in the lambda decomposition is justified here. For the chi-squared distribution, it is straightforward since the chi-squared distribution is a special case of the gamma distribution, in that formula_55. And from above, the shape parameter of a gamma distribution is formula_39. For the chi distribution, we begin with its CDF formula_56, where formula_57. Differentiate formula_56 by formula_47 , we have its density function as formula_58 This formula connects formula_59 with formula_0 through the formula_60 term. Connection to generalized Gamma distributions. The generalized gamma distribution is a probability distribution with two shape parameters, and is the super set of the gamma distribution, the Weibull distribution, the exponential distribution, and the half-normal distribution. Its CDF is in the form of formula_61. Differentiate formula_63 by formula_47, we arrive at the product-distribution formula: formula_64 where formula_65 denotes the PDF of a generalized gamma distribution, whose CDF is parametrized as formula_63. This formula connects formula_66 with formula_0 through the formula_67 term. The formula_68 term is an exponent representing the second degree of freedom in the shape-parameter space. This formula is singular for the case of a Weibull distribution since formula_50 must be one for formula_69; but for formula_70 to exist, formula_50 must be greater than one. When formula_71, formula_70 is a delta function and this formula becomes trivial. The Weibull distribution has its distinct way of decomposition as following. Connection to Weibull distribution. For a Weibull distribution whose CDF is formula_72, its shape parameter formula_73 is equivalent to Lévy's stability parameter formula_74. A similar expression of product distribution can be derived, such that the kernel is either a one-sided Laplace distribution formula_75 or a Rayleigh distribution formula_76. 
It begins with the complementary CDF, which comes from Lambda decomposition: formula_77 By taking derivative on formula_47, we obtain the product distribution form of a Weibull distribution PDF formula_78 as formula_79 where formula_80 and formula_81. it is clear that formula_82 from the formula_83 and formula_84 terms. Asymptotic properties. For stable distribution family, it is essential to understand its asymptotic behaviors. From, for small formula_85, formula_86 This confirms formula_87. For large formula_85, formula_88 This shows that the tail of formula_89 decays exponentially at infinity. The larger formula_90 is, the stronger the decay. This tail is in the form of a generalized gamma distribution, where in its formula_91 parametrization, formula_92, formula_93, and formula_94. Hence, it is equivalent to formula_95, whose CDF is parametrized as formula_96. Moments. The "n"-th moment formula_97 of formula_19 is the formula_98-th moment of formula_9. All positive moments are finite. This in a way solves the thorny issue of diverging moments in the stable distribution. (See Section 2.4 of ) formula_99 The analytic solution of moments is obtained through the Wright function: formula_100 where formula_101(See (1.4.28) of ) Thus, the mean of formula_19 is formula_102 The variance is formula_103 And the lowest moment is formula_104 by applying formula_105 when formula_106. The "n"-th moment of the stable vol distribution formula_107 is formula_108 Moment generating function. The MGF can be expressed by a Fox-Wright function or Fox H-function: formula_109 As a verification, at formula_110, formula_111 (see below) can be Taylor-expanded to formula_112 via formula_113. Known analytical case – quartic stable count. When formula_114, formula_115 is the Lévy distribution which is an inverse gamma distribution. Thus formula_116 is a shifted gamma distribution of shape 3/2 and scale formula_117, formula_118 where formula_7, formula_8. Its mean is formula_119 and its standard deviation is formula_120. This called "quartic stable count distribution". The word "quartic" comes from Lihn's former work on the lambda distribution where formula_121. At this setting, many facets of stable count distribution have elegant analytical solutions. The "p"-th central moments are formula_122. The CDF is formula_123 where formula_124 is the lower incomplete gamma function. And the MGF is formula_125. (See Section 3 of ) Special case when α → 1. As formula_126 becomes larger, the peak of the distribution becomes sharper. A special case of formula_19 is when formula_127. The distribution behaves like a Dirac delta function, formula_128 where formula_129, and formula_130. Likewise, the stable vol distribution at formula_131 also becomes a delta function, formula_132 Series representation. Based on the series representation of the one-sided stable distribution, we have: formula_133. This series representation has two interpretations: formula_138 The proof is obtained by the reflection formula of the Gamma function: formula_139, which admits the mapping: formula_140 in formula_137. The Wright representation leads to analytical solutions for many statistical properties of the stable count distribution and establish another connection to fractional calculus. Applications. Stable count distribution can represent the daily distribution of VIX quite well. It is hypothesized that VIX is distributed like formula_141 with formula_142 and formula_143 (See Section 7 of ). 
Thus the stable count distribution is the first-order marginal distribution of a volatility process. In this context, formula_144 is called the "floor volatility". In practice, VIX rarely drops below 10. This phenomenon justifies the concept of "floor volatility". A sample of the fit is shown below: One form of mean-reverting SDE for formula_145 is based on a modified Cox–Ingersoll–Ross (CIR) model. Assume formula_146 is the volatility process, we have formula_147 where formula_148 is the so-called "vol of vol". The "vol of vol" for VIX is called VVIX, which has a typical value of about 85. This SDE is analytically tractable and satisfies the Feller condition, thus formula_146 would never go below formula_144. But there is a subtle issue between theory and practice. There has been about 0.6% probability that VIX did go below formula_144. This is called "spillover". To address it, one can replace the square root term with formula_149, where formula_150 provides a small leakage channel for formula_146 to drift slightly below formula_144. Extremely low VIX reading indicates a very complacent market. Thus the spillover condition, formula_151, carries a certain significance - When it occurs, it usually indicates the calm before the storm in the business cycle. Generation of Random Variables. As the modified CIR model above shows, it takes another input parameter formula_152 to simulate sequences of stable count random variables. The mean-reverting stochastic process takes the form of formula_153 which should produce formula_154 that distributes like formula_155 as formula_156. And formula_152 is a user-specified preference for how fast formula_157 should change. By solving the Fokker-Planck equation, the solution for formula_158 in terms of formula_159 is formula_160 It can also be written as a ratio of two Wright functions, formula_161 When formula_162, this process is reduced to the modified CIR model where formula_163. This is the only special case where formula_164 is a straight line. Likewise, if the asymptotic distribution is formula_34 as formula_156, the formula_164 solution, denoted as formula_165 below, is formula_166 When formula_167, it is reduced to a quadratic polynomial: formula_168. Stable Extension of the CIR Model. By relaxing the rigid relation between the formula_169 term and the formula_152 term above, the stable extension of the CIR model can be constructed as formula_170 which is reduced to the original CIR model at formula_162: formula_171. Hence, the parameter formula_62 controls the mean-reverting speed, the location parameter formula_172 sets where the mean is, formula_152 is the volatility parameter, and formula_0 is the shape parameter for the stable law. By solving the Fokker-Planck equation, the solution for the PDF formula_173 at formula_174 is formula_175 To make sense of this solution, consider asymptotically for large formula_47, formula_173's tail is still in the form of a generalized gamma distribution, where in its formula_176 parametrization, formula_92, formula_177, and formula_178. It is reduced to the original CIR model at formula_162 where formula_179 with formula_180 and formula_181; hence formula_182. Fractional calculus. Relation to Mittag-Leffler function. 
From Section 4 of, the inverse Laplace transform formula_183 of the Mittag-Leffler function formula_184 is (formula_185) formula_186 On the other hand, the following relation was given by Pollard (1948), formula_187 Thus by formula_188, we obtain the relation between stable count distribution and Mittag-Leffter function: formula_189 This relation can be verified quickly at formula_114 where formula_190 and formula_191. This leads to the well-known quartic stable count result: formula_192 Relation to time-fractional Fokker-Planck equation. The ordinary Fokker-Planck equation (FPE) is formula_193, where formula_194 is the Fokker-Planck space operator, formula_195 is the diffusion coefficient, formula_196 is the temperature, and formula_197 is the external field. The time-fractional FPE introduces the additional fractional derivative formula_198 such that formula_199, where formula_200 is the fractional diffusion coefficient. Let formula_201 in formula_183, we obtain the kernel for the time-fractional FPE (Eq (16) of ) formula_202 from which the fractional density formula_203 can be calculated from an ordinary solution formula_204 via formula_205 Since formula_206 via change of variable formula_207, the above integral becomes the product distribution with formula_208, similar to the "lambda decomposition" concept, and scaling of time formula_209: formula_210 Here formula_211 is interpreted as the distribution of impurity, expressed in the unit of formula_212, that causes the anomalous diffusion. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
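As a small numerical sanity check of the lambda decomposition in the analytically known quartic case α = 1/2, where the standard stable count density reduces to a gamma density of shape 3/2 and scale 4, the sketch below compares the mixture integral with the closed form (1/4)exp(−√|"z"|); SciPy is assumed to be available and the grid of "z" values is arbitrary.

```python
import numpy as np
from scipy import stats
from scipy.integrate import quad

# N_{1/2}(nu): for alpha = 1/2 the standard stable count density is a gamma density,
# shape 3/2 and scale 4 (the quartic stable count case discussed above).
stable_count_half = stats.gamma(a=1.5, scale=4.0)

def mixture(z):
    """Right-hand side of the lambda decomposition: Laplace kernel mixed over N_{1/2}(nu)."""
    integrand = lambda nu: (0.5 * np.exp(-abs(z) / nu) / nu) * stable_count_half.pdf(nu)
    val, _ = quad(integrand, 0, np.inf)
    return val

def closed_form(z):
    """Left-hand side: exp(-|z|^alpha) / (2 * Gamma(1/alpha + 1)), with Gamma(3) = 2."""
    return 0.25 * np.exp(-np.sqrt(abs(z)))

for z in (0.1, 1.0, 4.0):
    print(z, mixture(z), closed_form(z))   # the two columns should agree to quadrature accuracy
```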
[ { "math_id": 0, "text": "\\alpha" }, { "math_id": 1, "text": "0<\\alpha<1" }, { "math_id": 2, "text": "\\alpha=1/2" }, { "math_id": 3, "text": "\\mathfrak{N}_\\alpha(\\nu)=\\frac{1}{\\Gamma(\\frac{1}{\\alpha}+1)} \\frac{1}{\\nu} L_\\alpha\\left(\\frac{1}{\\nu}\\right),\n" }, { "math_id": 4, "text": "\\nu>0" }, { "math_id": 5, "text": "0<\\alpha<1." }, { "math_id": 6, "text": "\\mathfrak{N}_\\alpha(\\nu;\\nu_0,\\theta)= \\frac{1}{\\Gamma(\\frac{1}{\\alpha}+1)} \n\\frac{1}{\\nu-\\nu_0} L_\\alpha\\left(\\frac{\\theta}{\\nu-\\nu_0}\\right),\n" }, { "math_id": 7, "text": "\\nu>\\nu_0" }, { "math_id": 8, "text": "\\theta>0" }, { "math_id": 9, "text": "L_\\alpha(x)" }, { "math_id": 10, "text": "X" }, { "math_id": 11, "text": "f(x;\\alpha,\\beta,c,\\mu)" }, { "math_id": 12, "text": "L_\\alpha(x)=f(x;\\alpha,1,\\cos\\left(\\frac{\\pi\\alpha}{2}\\right)^{1/\\alpha},0)," }, { "math_id": 13, "text": "Y = \\sum_{i=1}^N X_i" }, { "math_id": 14, "text": "X_i\\sim L_\\alpha(x)" }, { "math_id": 15, "text": "Y\n" }, { "math_id": 16, "text": "\\frac{1}{\\nu} L_\\alpha\\left(\\frac{x}{\\nu}\\right)" }, { "math_id": 17, "text": "\\nu=N^{1/\\alpha}" }, { "math_id": 18, "text": "x=1" }, { "math_id": 19, "text": "\\mathfrak{N}_\\alpha(\\nu)\n" }, { "math_id": 20, "text": "\\nu=N^{1/\\alpha}" }, { "math_id": 21, "text": "N" }, { "math_id": 22, "text": " q=\\exp(-i\\alpha\\pi/2)\n" }, { "math_id": 23, "text": "\\mathfrak{N}_\\alpha(\\nu)" }, { "math_id": 24, "text": " \\begin{align} \\mathfrak{N}_\\alpha(\\nu)\n& = \\frac{2}{\\pi\\Gamma(\\frac{1}{\\alpha}+1)} \\int_0^\\infty e^{-\\text{Re}(q)\\,t^\\alpha} \n \\frac{1}{\\nu} \\sin(\\frac{t}{\\nu})\\sin(-\\text{Im}(q)\\,t^\\alpha) \\,dt, \\text{ or } \n\\\\ & = \\frac{2}{\\pi\\Gamma(\\frac{1}{\\alpha}+1)} \\int_0^\\infty e^{-\\text{Re}(q)\\,t^\\alpha} \n \\frac{1}{\\nu} \\cos(\\frac{t}{\\nu})\\cos(\\text{Im}(q)\\,t^\\alpha) \\,dt .\n\\\\ \\end{align}\n" }, { "math_id": 25, "text": " \\begin{align} \\Phi_\\alpha(x)\n& = \\frac{2}{\\pi\\Gamma(\\frac{1}{\\alpha}+1)} \\int_0^x \\int_0^\\infty e^{-\\text{Re}(q)\\,t^\\alpha} \n \\frac{1}{\\nu} \\sin(\\frac{t}{\\nu})\\sin(-\\text{Im}(q)\\,t^\\alpha) \\,dt\\,d\\nu\n\\\\ & = 1- \\frac{2}{\\pi\\Gamma(\\frac{1}{\\alpha}+1)} \\int_0^\\infty e^{-\\text{Re}(q)\\,t^\\alpha} \n \\sin(-\\text{Im}(q)\\,t^\\alpha) \\,\\text{Si}(\\frac{t}{x}) \\,dt,\n\\\\ \\end{align}\n" }, { "math_id": 26, "text": " \\text{Si}(x)=\\int_0^x \\frac{\\sin(x)}{x}\\,dx" }, { "math_id": 27, "text": "\\mathfrak{N}_\\alpha(\\nu) = \n\\frac{1}{\\Gamma\\left( \\frac{1}{\\alpha}+1 \\right)} \nW_{-\\alpha,0}(-\\nu^\\alpha)\n, \\, \\text{where} \\,\\, \nW_{\\lambda,\\mu}(z) = \\sum_{n=0}^\\infty \n\\frac{z^n}{n!\\,\\Gamma(\\lambda n+\\mu)}. 
" }, { "math_id": 28, "text": "\\mathfrak{N}_\\alpha(\\nu) = \n\\frac{1}{\\Gamma\\left( \\frac{1}{\\alpha}+1 \\right)}\n\\frac{1}{2 \\pi i}\n\\int_{Ha} e^{t-(\\nu t)^\\alpha} \\, dt,\n\\, \n" }, { "math_id": 29, "text": "\\int_0^\\infty e^{-z x} L_\\alpha(x) \\, dx = e^{-z^\\alpha}," }, { "math_id": 30, "text": "x=1/\\nu" }, { "math_id": 31, "text": "\\frac{1}{2} \\frac{1}{\\Gamma(\\frac{1}{\\alpha}+1)} e^{-|z|^\\alpha} =\n\\int_0^\\infty \\frac{1}{\\nu} \\left( \\frac{1}{2} e^{-|z|/\\nu} \\right)\n\\left(\\frac{1}{\\Gamma(\\frac{1}{\\alpha}+1)} \\frac{1}{\\nu} L_\\alpha \\left( \\frac{1}{\\nu} \\right) \\right) \\, d\\nu \n=\n\\int_0^\\infty \\frac{1}{\\nu} \\left( \\frac{1}{2} e^{-|z|/\\nu} \\right)\n\\mathfrak{N}_\\alpha(\\nu) \\, d\\nu \n," }, { "math_id": 32, "text": "z \\in \\mathsf{R}" }, { "math_id": 33, "text": "\\alpha>1" }, { "math_id": 34, "text": "V_{\\alpha}(s)" }, { "math_id": 35, "text": "e^{-|z|^\\alpha}" }, { "math_id": 36, "text": "\\frac{1}{2} \\frac{1}{\\Gamma(\\frac{1}{\\alpha}+1)} e^{-|z|^\\alpha} =\n\\frac{1}{2} \\frac{1}{\\Gamma(\\frac{1}{\\alpha}+1)} e^{-(z^2)^{\\alpha/2}} =\n\\int_0^\\infty \\frac{1}{s} \\left( \\frac{1}{\\sqrt{2 \\pi}} e^{-\\frac{1}{2} (z/s)^2} \\right) V_{\\alpha}(s) \\, ds\n," }, { "math_id": 37, "text": "\n\\begin{align}\n\nV_{\\alpha}(s) &= \\displaystyle\n\\frac{\\sqrt{2 \\pi} \\,\\Gamma(\\frac{2}{\\alpha}+1)}{\\Gamma(\\frac{1}{\\alpha}+1)} \\,\n\\mathfrak{N}_{\\frac{\\alpha}{2}}(2 s^2), \\,\\, 0 < \\alpha \\leq 2\n\\\\\n &= \\displaystyle\n \\frac{ \\sqrt{2\\pi} }{ \\Gamma(\\frac{1}{\\alpha}+1) }\n \\, W_{-\\frac{\\alpha}{2},0} \\left( -{(\\sqrt{2} s)}^\\alpha \\right)\n\\end{align}\n" }, { "math_id": 38, "text": "V_{1}(s) = 2 \\sqrt{2 \\pi} \\, \\mathfrak{N}_{\\frac{1}{2}}(2 s^2) = s \\, e^{-s^2/2}" }, { "math_id": 39, "text": "1/\\alpha" }, { "math_id": 40, "text": "Q(s,x)" }, { "math_id": 41, "text": "e^{-{u^\\alpha}}" }, { "math_id": 42, "text": "\n Q(\\frac{1}{\\alpha}, z^\\alpha) =\n \\frac{1}{\\Gamma(\\frac{1}{\\alpha}+1)}\n \\displaystyle\\int_z^\\infty \n e^{-{u^\\alpha}} \\, du.\n" }, { "math_id": 43, "text": "\n Q(\\frac{1}{\\alpha}, z^\\alpha) =\n \\displaystyle\\int_z^\\infty \\, du\n \\displaystyle\\int_0^\\infty \n \\frac{1}{\\nu} \\left( e^{-u/\\nu} \\right)\n \\, \\mathfrak{N}_{\\alpha}\\left(\\nu\\right) \\, d\\nu\n = \\displaystyle\\int_0^\\infty \n \\left( e^{-z/\\nu} \\right)\n \\, \\mathfrak{N}_{\\alpha}\\left(\\nu\\right) \\, d\\nu.\n\n" }, { "math_id": 44, "text": "(\\frac{1}{\\alpha}, z^\\alpha)" }, { "math_id": 45, "text": "(s,x)" }, { "math_id": 46, "text": "\n Q(s,x) =\n \\displaystyle\\int_0^\\infty e^{\\left( -{x^s}/{\\nu} \\right)}\n \\, \\mathfrak{N}_{{1}/{s}}\\left(\\nu\\right) \\, d\\nu. 
\\,\\, (s > 1)\n" }, { "math_id": 47, "text": "x" }, { "math_id": 48, "text": "\n\\begin{align}\n \\frac{1}{\\Gamma(s)} x^{s-1} e^{-x} & =\n \\displaystyle\\int_0^\\infty \n \\frac{1}{\\nu} \\left[ s\\, x^{s-1} e^{\\left( -{x^s}/{\\nu} \\right)} \\right]\n \\, \\mathfrak{N}_{{1}/{s}}\\left(\\nu\\right) \\, d\\nu\n\\\\\n& =\n \\displaystyle\\int_0^\\infty \n \\frac{1}{t} \\left[ s\\, {\\left( \\frac{x}{t} \\right)}^{s-1} e^{-{\\left( x/t \\right)}^s} \\right]\n \\, \\left[ \\mathfrak{N}_{{1}/{s}}\\left(t^s\\right) \\, s \\, t^{s-1} \\right] \\, dt\n \\,\\,\\, (\\nu = t^s)\n\\\\\n& =\n \\displaystyle\\int_0^\\infty \n \\frac{1}{t} \\, \\text{Weibull}\\left( \\frac{x}{t}; s\\right)\n \\, \\left[ \\mathfrak{N}_{{1}/{s}}\\left(t^s\\right) \\, s \\, t^{s-1} \\right] \\, dt\n\\end{align}\n" }, { "math_id": 49, "text": "\\left[ s\\, {\\left( \\frac{x}{t} \\right)}^{s-1} e^{-{\\left( x/t \\right)}^s} \\right]" }, { "math_id": 50, "text": "s" }, { "math_id": 51, "text": "s \\rightarrow s+1" }, { "math_id": 52, "text": "k" }, { "math_id": 53, "text": "2/\\alpha" }, { "math_id": 54, "text": "\\lambda = 2/\\alpha" }, { "math_id": 55, "text": "\\chi^2_k \\sim \\text{Gamma} \\left(\\frac{k}{2}, \\theta=2 \\right)" }, { "math_id": 56, "text": "P \\left( \\frac{k}2, \\frac{x^2}2 \\right)" }, { "math_id": 57, "text": "P(s,x) = 1 - Q(s,x)" }, { "math_id": 58, "text": "\n\\begin{align}\n \\chi_k(x) =\n \\frac{x^{k-1} e^{-x^2/2}} \n {2^{\\frac{k}2-1} \\Gamma \\left( \\frac{k}2 \\right)} \n & =\n \\displaystyle\\int_0^\\infty \n \\frac{1}{\\nu} \\left[ 2^{-\\frac{k}2} \\,k \\, x^{k-1} e^{\\left( -2^{-\\frac{k}2} \\, {x^k}/{ \\nu} \\right)} \\right]\n \\, \\mathfrak{N}_{\\frac{2}{k}}\\left(\\nu\\right) \\, d\\nu\n\\\\\n& =\n \\displaystyle\\int_0^\\infty \n \\frac{1}{t} \\left[ k\\, {\\left( \\frac{x}{t} \\right)}^{k-1} e^{-{\\left( x/t \\right)}^k} \\right]\n \\, \\left[ \\mathfrak{N}_{\\frac{2}{k}}\\left( 2^{-\\frac{k}2} t^k \\right) \\, 2^{-\\frac{k}2} \\, k \\, t^{k-1} \\right] \\, dt,\n \\,\\,\\, (\\nu = 2^{-\\frac{k}2} t^k)\n\\\\\n& =\n \\displaystyle\\int_0^\\infty \n \\frac{1}{t} \\, \\text{Weibull}\\left( \\frac{x}{t}; k\\right)\n \\, \\left[ \\mathfrak{N}_{\\frac{2}{k}}\\left( 2^{-\\frac{k}2} t^k \\right) \\, 2^{-\\frac{k}2} \\, k \\, t^{k-1} \\right] \\, dt\n\n\\end{align}\n" }, { "math_id": 59, "text": "2/k" }, { "math_id": 60, "text": "\n\\mathfrak{N}_{\\frac{2}{k}}\\left( \\cdot \\right)\n" }, { "math_id": 61, "text": "P(s, x^c) = 1 - Q(s, x^c)" }, { "math_id": 62, "text": "a" }, { "math_id": 63, "text": "P(s,x^c)" }, { "math_id": 64, "text": "\n\\begin{align}\n \\text{GenGamma}(x; s, c) & =\n \\displaystyle\\int_0^\\infty \n \\frac{1}{t} \\, \\text{Weibull}\\left( \\frac{x}{t}; sc\\right)\n \\, \\left[ \\mathfrak{N}_{\\frac{1}{s}}\\left(t^{sc}\\right) \\, sc \\, t^{sc-1} \\right] \\, dt\n \\,\\, (s \\geq 1)\n\\end{align}\n" }, { "math_id": 65, "text": "\\text{GenGamma}(x; s, c)" }, { "math_id": 66, "text": "1/s" }, { "math_id": 67, "text": "\n\\mathfrak{N}_{\\frac{1}{s}}\\left( \\cdot \\right)\n" }, { "math_id": 68, "text": "sc" }, { "math_id": 69, "text": "\\text{GenGamma}(x; 1, c) = \\text{Weibull}(x; c)" }, { "math_id": 70, "text": "\\mathfrak{N}_{\\frac{1}{s}}\\left(\\nu\\right)" }, { "math_id": 71, "text": "s\\rightarrow 1" }, { "math_id": 72, "text": "F(x;k,\\lambda) = 1 - e^{-(x/\\lambda)^k} \\,\\, (x>0)" }, { "math_id": 73, "text": " k " }, { "math_id": 74, "text": " \\alpha " }, { "math_id": 75, "text": "F(x;1,\\sigma)" }, { "math_id": 76, "text": "F(x;2,\\sqrt{2} \\sigma)" }, { 
"math_id": 77, "text": "\n 1-F(x;k,1) =\n \\begin{cases}\n \\displaystyle\\int_0^\\infty \\frac{1}{\\nu} \\, (1-F(x;1,\\nu)) \n \\left[ \\Gamma \\left( \\frac{1}{k}+1 \\right) \\mathfrak{N}_k(\\nu) \\right] \\, d\\nu ,\n & 1 \\geq k > 0; \\text{or } \\\\\n \\displaystyle\\int_0^\\infty \\frac{1}{s} \\, (1-F(x;2,\\sqrt{2} s)) \n \\left[ \\sqrt{\\frac{2}{\\pi}} \\, \\Gamma \\left( \\frac{1}{k}+1 \\right) V_k(s) \\right] \\, ds ,\n & 2 \\geq k > 0.\n \\end{cases}\n" }, { "math_id": 78, "text": "\\text{Weibull}(x;k)" }, { "math_id": 79, "text": "\n \\text{Weibull}(x;k) =\n \\begin{cases}\n \\displaystyle\\int_0^\\infty \\frac{1}{\\nu} \\, \\text{Laplace}(\\frac{x}{\\nu}) \n \\left[ \\Gamma \\left( \\frac{1}{k}+1 \\right) \\frac{1}{\\nu} \\mathfrak{N}_k(\\nu) \\right] \\, d\\nu ,\n & 1 \\geq k > 0; \\text{or } \\\\\n \\displaystyle\\int_0^\\infty \\frac{1}{s} \\, \\text{Rayleigh}(\\frac{x}{s}) \n \\left[ \\sqrt{\\frac{2}{\\pi}} \\, \\Gamma \\left( \\frac{1}{k}+1 \\right) \\frac{1}{s} V_k(s) \\right] \\, ds ,\n & 2 \\geq k > 0.\n \\end{cases}\n" }, { "math_id": 80, "text": "\\text{Laplace}(x) = e^{-x}" }, { "math_id": 81, "text": "\\text{Rayleigh}(x) = x e^{-x^2/2} " }, { "math_id": 82, "text": " k = \\alpha " }, { "math_id": 83, "text": " \\mathfrak{N}_k(\\nu) " }, { "math_id": 84, "text": " V_k(s) " }, { "math_id": 85, "text": "\\nu" }, { "math_id": 86, "text": " \\begin{align} \\mathfrak{N}_\\alpha(\\nu)\n& \\rightarrow B(\\alpha) \\,\\nu^{\\alpha}, \\text{ for } \\nu \\rightarrow 0 \\text{ and } B(\\alpha)>0.\n\\\\ \\end{align}\n" }, { "math_id": 87, "text": " \\mathfrak{N}_\\alpha(0)=0\n" }, { "math_id": 88, "text": " \\begin{align} \\mathfrak{N}_\\alpha(\\nu)\n& \\rightarrow \\nu^{\\frac{\\alpha}{2(1-\\alpha)}} e^{-A(\\alpha) \\,\\nu^{\\frac{\\alpha}{1-\\alpha}}}, \n \\text{ for } \\nu \\rightarrow \\infty \\text{ and } A(\\alpha)>0.\n\\\\ \\end{align}\n" }, { "math_id": 89, "text": " \\mathfrak{N}_\\alpha(\\nu)\n" }, { "math_id": 90, "text": " \\alpha\n" }, { "math_id": 91, "text": "f(x; a, d, p)" }, { "math_id": 92, "text": "p = \\frac{\\alpha}{1-\\alpha}" }, { "math_id": 93, "text": "a = A(\\alpha)^{-1/p}" }, { "math_id": 94, "text": "d = 1 + \\frac{p}{2}" }, { "math_id": 95, "text": "\\text{GenGamma}(\\frac{x}{a}; s = \\frac{1}{\\alpha} -\\frac{1}{2}, c = p)" }, { "math_id": 96, "text": "P\\left( s,\\left( \\frac{x}{a} \\right)^c \\right)" }, { "math_id": 97, "text": "m_n " }, { "math_id": 98, "text": "-(n+1)" }, { "math_id": 99, "text": "\n\n\\begin{align} m_n & \n= \\int_0^\\infty \\nu^n \\mathfrak{N}_\\alpha(\\nu) d\\nu\n= \\frac{1}{\\Gamma(\\frac{1}{\\alpha}+1)} \\int_0^\\infty \\frac{1}{t^{n+1}} L_\\alpha(t) \\, dt.\n\\\\ \\end{align}\n " }, { "math_id": 100, "text": "\n\n\\begin{align} m_n & \n= \\frac{1}{\\Gamma(\\frac{1}{\\alpha}+1)} \\int_0^\\infty \\nu^{n} W_{-\\alpha,0}(-\\nu^\\alpha) \\, d\\nu \n\\\\ & \n= \\frac{\\Gamma(\\frac{n+1}{\\alpha})}{\\Gamma(n+1)\\Gamma(\\frac{1}{\\alpha})}, \\, n \\geq -1.\n\\\\ \\end{align}\n " }, { "math_id": 101, "text": "\\int_0^\\infty r^\\delta W_{-\\nu,\\mu}(-r)\\,dr =\n\\frac{\\Gamma(\\delta+1)}{\\Gamma(\\nu\\delta+\\nu+\\mu)}\n, \\, \\delta>-1,0<\\nu<1,\\mu>0. 
" }, { "math_id": 102, "text": "m_1=\\frac{\\Gamma(\\frac{2}{\\alpha})}{\\Gamma(\\frac{1}{\\alpha})} " }, { "math_id": 103, "text": "\\sigma^2= \n\\frac{\\Gamma(\\frac{3}{\\alpha})}{2\\Gamma(\\frac{1}{\\alpha})}\n- \\left[ \\frac{\\Gamma(\\frac{2}{\\alpha})}{\\Gamma(\\frac{1}{\\alpha})} \\right]^2 " }, { "math_id": 104, "text": " m_{-1} = \\frac{1}{\\Gamma(\\frac{1}{\\alpha} + 1)} " }, { "math_id": 105, "text": " \\Gamma(\\frac{x}{y}) \\to y\\Gamma(x)" }, { "math_id": 106, "text": " x \\to 0 " }, { "math_id": 107, "text": "V_\\alpha(s)" }, { "math_id": 108, "text": "\n\\begin{align} \nm_n(V_\\alpha) & = 2^{-\\frac{n}{2}} \\sqrt{\\pi} \\,\n\\frac{\\Gamma(\\frac{n+1}{\\alpha})}{\\Gamma(\\frac{n+1}{2}) \\Gamma(\\frac{1}{\\alpha})}, \\, n \\geq -1.\n\\end{align}\n " }, { "math_id": 109, "text": "\\begin{align} M_\\alpha(s) \n& = \\sum_{n=0}^\\infty \\frac{m_n\\,s^n}{n!}\n = \\frac{1}{\\Gamma(\\frac{1}{\\alpha})} \\sum_{n=0}^\\infty \n \\frac{\\Gamma(\\frac{n+1}{\\alpha})\\,s^n}{\\Gamma(n+1)^2}\n\\\\ & = \\frac{1}{\\Gamma(\\frac{1}{\\alpha})} {}_1\\Psi_1\\left[(\\frac{1}{\\alpha},\\frac{1}{\\alpha});(1,1); s\\right]\n ,\\,\\,\\text{or}\n\\\\ & = \\frac{1}{\\Gamma(\\frac{1}{\\alpha})} H^{1,1}_{1,2}\\left[-s \\bigl|\n \\begin{matrix} (1-\\frac{1}{\\alpha}, \\frac{1}{\\alpha}) \\\\ (0,1);(0,1) \\end{matrix}\n \\right]\n\\\\ \\end{align} " }, { "math_id": 110, "text": "\\alpha=\\frac{1}{2} " }, { "math_id": 111, "text": "M_{\\frac{1}{2}}(s) = (1-4s)^{-\\frac{3}{2}} " }, { "math_id": 112, "text": "{}_1\\Psi_1\\left[(2,2);(1,1); s\\right] \n=\\sum_{n=0}^\\infty \\frac{\\Gamma(2n+2)\\,s^n}{\\Gamma(n+1)^2}\n " }, { "math_id": 113, "text": "\\Gamma(\\frac{1}{2}-n) = \\sqrt{\\pi} \\frac{(-4)^n n!}{(2n)!} " }, { "math_id": 114, "text": "\\alpha=\\frac{1}{2}" }, { "math_id": 115, "text": "L_{1/2}(x)" }, { "math_id": 116, "text": "\\mathfrak{N}_{1/2}(\\nu;\\nu_0,\\theta) " }, { "math_id": 117, "text": "4\\theta " }, { "math_id": 118, "text": "\\mathfrak{N}_{\\frac{1}{2}}(\\nu;\\nu_0,\\theta) = \\frac{1}{4\\sqrt{\\pi}\\theta^{3/2}} (\\nu-\\nu_0)^{1/2} e^{-(\\nu-\\nu_0)/4\\theta},\n" }, { "math_id": 119, "text": "\\nu_0+6\\theta " }, { "math_id": 120, "text": "\\sqrt{24}\\theta " }, { "math_id": 121, "text": "\\lambda=2/\\alpha=4" }, { "math_id": 122, "text": "\\frac{2 \\Gamma(p+3/2)}{\\Gamma(3/2)} 4^p\\theta^p" }, { "math_id": 123, "text": "\\frac{2}{\\sqrt{\\pi}} \\gamma\\left(\\frac{3}{2}, \\frac{\\nu-\\nu_0}{4\\theta} \\right)" }, { "math_id": 124, "text": "\\gamma(s,x)" }, { "math_id": 125, "text": "M_{\\frac{1}{2}}(s) = e^{s\\nu_0}(1-4s\\theta)^{-\\frac{3}{2}}" }, { "math_id": 126, "text": "\\alpha\n" }, { "math_id": 127, "text": "\\alpha\\rightarrow1" }, { "math_id": 128, "text": "\\mathfrak{N}_{\\alpha\\to 1}(\\nu) \\to \\delta(\\nu-1),\n" }, { "math_id": 129, "text": "\\delta(x) = \\begin{cases} \\infty, & \\text{if }x=0 \\\\ 0, & \\text{if }x\\neq 0 \\end{cases}\n" }, { "math_id": 130, "text": "\\int_{0_-}^{0_+} \\delta(x) dx = 1\n" }, { "math_id": 131, "text": "\\alpha \\to 2" }, { "math_id": 132, "text": "V_{\\alpha\\to 2}(s) \\to \\delta(s- \\frac{1}{\\sqrt{2}}).\n" }, { "math_id": 133, "text": "\\begin{align} \\mathfrak{N}_\\alpha(x) \n& =\n\\frac{1}{\\pi\\Gamma(\\frac{1}{\\alpha}+1)}\n\\sum_{n=1}^\\infty\\frac{-\\sin(n(\\alpha+1)\\pi)}{n!}{x}^{\\alpha n}\\Gamma(\\alpha n+1)\n\\\\ & = \n\\frac{1}{\\pi\\Gamma(\\frac{1}{\\alpha}+1)}\n\\sum_{n=1}^\\infty\\frac{(-1)^{n+1} \\sin(n\\alpha\\pi)}{n!}{x}^{\\alpha n}\\Gamma(\\alpha n+1)\n\\\\ \\end{align}\n " }, { "math_id": 134, "text": 
"\\mathfrak{N}_\\alpha(x) = \n\\frac{\\alpha^2 x^\\alpha}{\\Gamma \\left(\\frac{1}{\\alpha}\\right)} \nH_\\alpha(x^\\alpha), " }, { "math_id": 135, "text": "H_\\alpha(k) " }, { "math_id": 136, "text": "E_\\alpha(-x) " }, { "math_id": 137, "text": "W_{\\lambda,\\mu}(z) " }, { "math_id": 138, "text": "\\begin{align} \\mathfrak{N}_\\alpha(x) \n& =\n\\frac{1}{\\pi\\Gamma(\\frac{1}{\\alpha}+1)}\n\\sum_{n=1}^\\infty\\frac{(-1)^{n} {x}^{\\alpha n}}{n!}\\,\n\\sin((\\alpha n+1)\\pi)\\Gamma(\\alpha n+1)\n\\\\ & = \n\\frac{1}{\\Gamma \\left(\\frac{1}{\\alpha}+1\\right)} \nW_{-\\alpha,0}(-x^\\alpha), \\, \\text{where} \\,\\, \nW_{\\lambda,\\mu}(z) = \\sum_{n=0}^\\infty \n\\frac{z^n}{n!\\,\\Gamma(\\lambda n+\\mu)}, \n\\lambda>-1.\n\\\\ \\end{align} " }, { "math_id": 139, "text": "\\sin((\\alpha n+1)\\pi)\\Gamma(\\alpha n+1) = \\pi/\\Gamma(-\\alpha n) " }, { "math_id": 140, "text": "\\lambda=-\\alpha,\\mu=0,z=-x^\\alpha\n " }, { "math_id": 141, "text": "\\mathfrak{N}_{\\frac{1}{2}}(\\nu;\\nu_0,\\theta)\n" }, { "math_id": 142, "text": "\\nu_0=10.4\n" }, { "math_id": 143, "text": "\\theta=1.6\n" }, { "math_id": 144, "text": "\\nu_0\n" }, { "math_id": 145, "text": "\\mathfrak{N}_{\\frac{1}{2}}(\\nu;\\nu_0,\\theta) " }, { "math_id": 146, "text": "S_t " }, { "math_id": 147, "text": " dS_t = \\frac{\\sigma^2}{8\\theta} (6\\theta+\\nu_0-S_t) \\, dt + \\sigma \\sqrt{S_t-\\nu_0} \\, dW, " }, { "math_id": 148, "text": "\\sigma\n" }, { "math_id": 149, "text": "\\sqrt{\\max(S_t-\\nu_0,\\delta\\nu_0)}" }, { "math_id": 150, "text": "\\delta\\nu_0\\approx 0.01 \\, \\nu_0" }, { "math_id": 151, "text": "S_t<\\nu_0" }, { "math_id": 152, "text": "\\sigma" }, { "math_id": 153, "text": " dS_t = \\sigma^2 \\mu_{\\alpha}\\left( \\frac{S_t}{\\theta} \\right) \\, dt + \\sigma \\sqrt{S_t} \\, dW, " }, { "math_id": 154, "text": "\\{S_t\\}" }, { "math_id": 155, "text": "\\mathfrak{N}_{\\alpha}(\\nu;\\theta)" }, { "math_id": 156, "text": "t \\rightarrow \\infty" }, { "math_id": 157, "text": "S_t" }, { "math_id": 158, "text": "\\mu_{\\alpha}(x)" }, { "math_id": 159, "text": "\\mathfrak{N}_{\\alpha}(x)" }, { "math_id": 160, "text": " \\begin{array}{lcl}\n\\mu_\\alpha(x) & = & \\displaystyle \n \\frac{1}{2} \\frac{\\left( x {d \\over dx} +1 \\right) \\mathfrak{N}_{\\alpha}(x)}{\\mathfrak{N}_{\\alpha}(x)}\n\\\\\n & = & \\displaystyle \n \\frac{1}{2} \\left[ x {d \\over dx} \\left( \\log \\mathfrak{N}_{\\alpha}(x) \\right) +1 \\right]\n\\end{array} " }, { "math_id": 161, "text": " \\begin{array}{lcl}\n\\mu_\\alpha(x) & = & \\displaystyle \n -\\frac{1}{2} \\frac{W_{-\\alpha,-1}(-x^\\alpha)}{\\Gamma(\\frac{1}{\\alpha}+1) \\, \\mathfrak{N}_{\\alpha}(x)}\n\\\\\n & = & \\displaystyle \n -\\frac{1}{2} \\frac{W_{-\\alpha,-1}(-x^\\alpha)}{W_{-\\alpha,0}(-x^\\alpha)}\n\\end{array} " }, { "math_id": 162, "text": "\\alpha = 1/2" }, { "math_id": 163, "text": "\\mu_{1/2}(x) = \\frac{1}{8} (6-x)" }, { "math_id": 164, "text": "\\mu_\\alpha(x)" }, { "math_id": 165, "text": "\\mu(x; V_{\\alpha})" }, { "math_id": 166, "text": " \\begin{array}{lcl}\n\\mu(x; V_{\\alpha}) & = & \\displaystyle \n - \\frac{ W_{-\\frac{\\alpha}{2},-1}(-{(\\sqrt{2} x)}^\\alpha) }{ W_{-\\frac{\\alpha}{2},0}(-{(\\sqrt{2} x)}^\\alpha)}\n -\\frac{1}{2} \n\\end{array} " }, { "math_id": 167, "text": "\\alpha = 1" }, { "math_id": 168, "text": "\\mu(x;V_1) = 1 - \\frac{x^2}{2}" }, { "math_id": 169, "text": "\\sigma^2" }, { "math_id": 170, "text": " dr_t = a \\, \\left[ \\frac{8b}{6} \\,\n\\mu_{\\alpha}\\left( \\frac{6}{b} r_t \\right) \\right] \\, dt + \\sigma \\sqrt{r_t} \\, dW, \n" 
}, { "math_id": 171, "text": " dr_t = a \\left( b - r_t \\right) dt + \\sigma \\sqrt{r_t} \\, dW" }, { "math_id": 172, "text": "b" }, { "math_id": 173, "text": "p(x)" }, { "math_id": 174, "text": "r_\\infty" }, { "math_id": 175, "text": " \\begin{array}{lcl}\np(x) & \\propto & \\displaystyle \n \\exp \\left[ \\int^{x} \\frac{dx}{x} \\left(\n 2 D \\, \\mu_{\\alpha}\\left( \\frac{6}{b} x \\right) - 1\n \\right) \\right]\n , \\text{ where } D = \\frac{4ab}{3 \\sigma^2}\n\\\\\n & = & \\displaystyle \n \\mathfrak{N}_{\\alpha}\\left( \\frac{6}{b} x \\right) ^D \\, x^{D-1}\n\\end{array} " }, { "math_id": 176, "text": "f(x; a', d, p)" }, { "math_id": 177, "text": "a' = \\frac{b}{6} (D\\,A(\\alpha))^{-1/p}" }, { "math_id": 178, "text": "d = D \\left( 1 + \\frac{p}{2} \\right)" }, { "math_id": 179, "text": " p(x) \\propto x^{d-1}e^{-x/a'} " }, { "math_id": 180, "text": " d = \\frac{2ab}{\\sigma^2} " }, { "math_id": 181, "text": " A(\\alpha) = \\frac{1}{4} " }, { "math_id": 182, "text": " \\frac{1}{a'} = \\frac{6}{b} \\left(\\frac{D}{4}\\right) = \\frac{2a}{\\sigma^2} " }, { "math_id": 183, "text": "H_\\alpha(k)" }, { "math_id": 184, "text": "E_\\alpha(-x)" }, { "math_id": 185, "text": "k>0 " }, { "math_id": 186, "text": "H_\\alpha(k)= \\mathcal{L}^{-1}\\{E_\\alpha(-x)\\}(k) \n= \\frac{2}{\\pi} \\int_0^\\infty E_{2\\alpha}(-t^2) \\cos(kt) \\,dt. " }, { "math_id": 187, "text": "H_\\alpha(k) = \n\\frac{1}{\\alpha} \\frac{1}{k^{1+1/\\alpha}} \nL_\\alpha \\left( \\frac{1}{k^{1/\\alpha}} \\right). " }, { "math_id": 188, "text": "k=\\nu^\\alpha " }, { "math_id": 189, "text": "\\mathfrak{N}_\\alpha(\\nu) = \n\\frac{\\alpha^2 \\nu^\\alpha}{\\Gamma \\left(\\frac{1}{\\alpha}\\right)} \nH_\\alpha(\\nu^\\alpha). " }, { "math_id": 190, "text": " H_{\\frac{1}{2}}(k)=\\frac{1}{\\sqrt{\\pi}} \\,e^{-k^2/4} " }, { "math_id": 191, "text": "k^2=\\nu " }, { "math_id": 192, "text": "\\mathfrak{N}_{\\frac{1}{2}}(\\nu) = \n\\frac{\\nu^{1/2}}{4\\,\\Gamma (2)} \\times\n\\frac{1}{\\sqrt{\\pi}} \\,e^{-\\nu/4}\n= \n\\frac{1}{4\\,\\sqrt{\\pi}} \\nu^{1/2}\\,e^{-\\nu/4}. 
" }, { "math_id": 193, "text": "\\frac{\\partial P_1(x,t)}{\\partial t} = K_1\\, \\tilde{L}_{FP} P_1(x,t)\n " }, { "math_id": 194, "text": "\\tilde{L}_{FP} = \\frac{\\partial}{\\partial x} \\frac{F(x)}{T} + \\frac{\\partial^2}{\\partial x^2} \n " }, { "math_id": 195, "text": "K_1 \n " }, { "math_id": 196, "text": "T\n " }, { "math_id": 197, "text": "F(x)\n " }, { "math_id": 198, "text": "\\,_0D_t^{1-\\alpha}\n " }, { "math_id": 199, "text": "\\frac{\\partial P_\\alpha(x,t)}{\\partial t} = K_\\alpha \\,_0D_t^{1-\\alpha} \\tilde{L}_{FP} P_\\alpha(x,t)\n " }, { "math_id": 200, "text": "K_\\alpha\n " }, { "math_id": 201, "text": "k=s/t^\\alpha" }, { "math_id": 202, "text": "n(s,t) = \n\\frac{1}{\\alpha} \\frac{t}{s^{1+1/\\alpha}} \nL_\\alpha \\left( \\frac{t}{s^{1/\\alpha}} \\right) " }, { "math_id": 203, "text": "P_\\alpha(x,t) " }, { "math_id": 204, "text": "P_1(x,t) " }, { "math_id": 205, "text": "P_\\alpha(x,t) = \\int_0^\\infty \nn\\left( \\frac{s}{K},t\\right) \\,P_1(x,s) \\,ds, \\text{ where } K=\\frac{K_\\alpha}{K_1}.\n " }, { "math_id": 206, "text": "n(\\frac{s}{K},t)\\,ds = \\Gamma \\left(\\frac{1}{\\alpha}+1\\right)\n\\frac{1}{\\nu}\\, \n\\mathfrak{N}_\\alpha(\\nu; \\theta=K^{1/\\alpha}) \\,d\\nu\n " }, { "math_id": 207, "text": "\\nu t = s^{1/\\alpha} " }, { "math_id": 208, "text": "\\mathfrak{N}_\\alpha(\\nu)\n " }, { "math_id": 209, "text": "t \\Rightarrow (\\nu t)^\\alpha\n " }, { "math_id": 210, "text": "P_\\alpha(x,t) = \n\\Gamma \\left(\\frac{1}{\\alpha}+1\\right)\n\\int_0^\\infty \n\\frac{1}{\\nu}\\, \n\\mathfrak{N}_\\alpha(\\nu; \\theta=K^{1/\\alpha}) \n \\,P_1(x,(\\nu t)^\\alpha) \\,d\\nu.\n " }, { "math_id": 211, "text": "\\mathfrak{N}_\\alpha(\\nu; \\theta=K^{1/\\alpha})\n " }, { "math_id": 212, "text": "K^{1/\\alpha} " } ]
https://en.wikipedia.org/wiki?curid=61479806
61480071
2 Chronicles 29
Second Book of Chronicles, chapter 29 2 Chronicles 29 is the twenty-ninth chapter of the Second Book of Chronicles of the Old Testament in the Christian Bible or of the second part of the Books of Chronicles in the Hebrew Bible. The book is compiled from older sources by an unknown person or group, designated by modern scholars as "the Chronicler", and had its final shape established in the late fifth or fourth century BCE. This chapter belongs to the section focusing on the kingdom of Judah until its destruction by the Babylonians under Nebuchadnezzar and the beginning of restoration under Cyrus the Great of Persia (2 Chronicles 10 to 36). The focus of this chapter is the reign of Hezekiah, king of Judah. Text. This chapter was originally written in the Hebrew language and is divided into 36 verses. Textual witnesses. Some early manuscripts containing the text of this chapter in Hebrew are of the Masoretic Text tradition, which includes the Codex Leningradensis (1008). Fragments containing parts of this chapter were found among the Dead Sea Scrolls, that is, 4Q118 (4QChr; 50–25 BCE) with extant verses 1–3. There is also a translation into Koine Greek known as the Septuagint, made in the last few centuries BCE. Extant ancient manuscripts of the Septuagint version include Codex Vaticanus (B; formula_0B; 4th century), and Codex Alexandrinus (A; formula_0A; 5th century). Hezekiah, king of Judah (29:1–19). Among the kings of Judah (after David and Solomon), the Chronicles record Hezekiah most extensively (2 Chronicles 29–32), focusing on his reform and restoration of the Temple and worship. Verses 1–19 parallel 2 Kings 18:1–12, with the addition of material on the temple cleansing. In the first month of his (sole) reign, Hezekiah opened and repaired the doors of the temple (verse 3), which had been shut by his father (2 Chronicles 28:24), but because the main building was still unclean, Hezekiah held a meeting with the priests and Levites at the square on the east of the temple (verse 4). Hezekiah's speech (verses 5, 11) encouraged the Levites (and priests) to resume their traditional tasks, recalling the sins of the ancestors and their effects (verses 6–10). Verses 12–19 report the cleansing of the temple, involving two sons each of seven Levite families (the number "seven" is a keyword in this chapter, cf. e.g. verses 17, 21); the last three of them (Asaph, Heman, and Jeduthun) are the families of singers ascribed to the Levites (cf. 1 Chronicles 15:5–8 for the first three names). After sanctifying themselves, the priests removed unholy things from the temple, and the Levites then carried them from the forecourt into the Kidron valley (verses 15–17). The sanctification of the temple was completed on the sixteenth day of the first month, forcing a delay in the Passover feast (which should fall on the fifteenth day of the first month). The report to Hezekiah explicitly mentioned Ahaz, who had removed the temple utensils from Solomon's temple, which continued to be used in the second temple in post-exilic times, after being returned by the Persians to those who resettled Jerusalem. "Hezekiah began to reign when he was twenty-five years old, and he reigned twenty-nine years in Jerusalem. His mother's name was Abijah the daughter of Zechariah." Hezekiah restores Temple worship (29:20–36).
The feast after the consecration of the temple here was without parallel in the Hebrew Bible, a mixture of rituals in the book of Leviticus and Numbers for sanctifying altars (Numbers 7:88), consisting of three parts: preparations, sacrifices made by the princes and the sacrifices made by the people, which were given voluntarily and joyfully. It also reflects the consecration of the second temple (Ezra 6:17; 8:35; cf. Ezekiel 43–45). The people's voluntary offerings here are not mentioned elsewhere in the Chronicles. "So they killed the bulls, and the priests received the blood and sprinkled it on the altar. Likewise they killed the rams and sprinkled the blood on the altar. They also killed the lambs and sprinkled the blood on the altar." Notes. &lt;templatestyles src="Reflist/styles.css" /&gt; References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": " \\mathfrak{G}" } ]
https://en.wikipedia.org/wiki?curid=61480071
61480073
2 Chronicles 30
Second Book of Chronicles, chapter 30 2 Chronicles 30 is the thirtieth chapter of the Second Book of Chronicles of the Old Testament in the Christian Bible or of the second part of the Books of Chronicles in the Hebrew Bible. The book is compiled from older sources by an unknown person or group, designated by modern scholars as "the Chronicler", and had its final shape established in the late fifth or fourth century BCE. This chapter belongs to the section focusing on the kingdom of Judah until its destruction by the Babylonians under Nebuchadnezzar and the beginning of restoration under Cyrus the Great of Persia (2 Chronicles 10 to 36). The focus of this chapter is the reign of Hezekiah, king of Judah. Text. This chapter was originally written in the Hebrew language and is divided into 27 verses. Textual witnesses. Some early manuscripts containing the text of this chapter in Hebrew are of the Masoretic Text tradition, which includes the Codex Leningradensis (1008). There is also a translation into Koine Greek known as the Septuagint, made in the last few centuries BCE. Extant ancient manuscripts of the Septuagint version include Codex Vaticanus (B; formula_0B; 4th century), and Codex Alexandrinus (A; formula_0A; 5th century). Hezekiah proclaims a Passover (30:1–12). This section of the chapter records the preparations for the Passover feast. Hezekiah took all the significant measures to establish the Passover feast in the first year of his reign, because at that time the northern kingdom had already fallen, and Hezekiah had to make a swift, final attempt to restore the unity of YHWH worship in the whole area of the former united kingdom of Israel. In line with the Chronicler's 'democratic convictions', the king, the princes, and the people made a collective decision to celebrate Passover (verses 1–7); to allow proper preparation for such a feast (verse 3), they also decided to delay it until the second month, for which there was precedent (Numbers 9:6–13). Following this decision, Hezekiah sent messengers to all parts of the country and particularly to the northern kingdom, reciting a message (using a play on words): "return to YHWH, he will return to you". The demise of the northern kingdom (unreported in the Chronicles) may have caused various reactions to the invitation: most (remaining) inhabitants of the northern kingdom reacted with 'derision and scorn', although some accepted by 'humbling themselves', whereas in the sovereign southern kingdom the Judeans reacted with 'one heart' (verse 12), which is attributed by the Chronicler to stem from God's actions (cf. 1 Chronicles 29). "And Hezekiah sent to all Israel and Judah, and wrote letters also to Ephraim and Manasseh, that they should come to the house of the Lord at Jerusalem, to keep the passover unto the Lord God of Israel." Hezekiah celebrates the Passover (30:13–27). This section reports the celebration of Passover and contains the first description of a complete pilgrimage in the Hebrew Bible. Once the temple was cleansed (chapter 29), Jerusalem was also liberated from all foreign influence, so only legitimate worship of YHWH took place. The national celebration of the Passover recalls the 'heady days of the United Monarchy'.
Many participants (especially from the northern tribes of Ephraim, Manasseh, Issachar and Zebulun), the officials, the priests and the Levites had not properly cleansed themselves, but they were trying hard to show their change of heart, and that is what seems most important, so Hezekiah called upon YHWH to pardon those people and YHWH granted the petition (verse 20). According to the law, any person making the sacrifice should be responsible for its slaughter, but since many of them had not been cleansed properly, the Levites assumed this role (verse 16). The important character of the feast (typical in the Chronicles) is the voluntary action with a great sense of unity (involving priests, Levites, and laymen), joy, and generosity prevalent amongst the king and the notables, and many animals are sacrificed. The Chronicler notes that such scenes had not happened since the days of Solomon, referring not just to the celebration alone, but also to an impression of possible reunification since the kingdom fell apart after Solomon's death. "And the children of Israel that were present at Jerusalem kept the feast of unleavened bread seven days with great gladness: and the Levites and the priests praised the LORD day by day, singing with loud instruments unto the LORD." Notes. &lt;templatestyles src="Reflist/styles.css" /&gt; References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": " \\mathfrak{G}" } ]
https://en.wikipedia.org/wiki?curid=61480073
61480074
2 Chronicles 31
Second Book of Chronicles, chapter 31 2 Chronicles 31 is the thirty-first chapter of the Second Book of Chronicles of the Old Testament in the Christian Bible or of the second part of the Books of Chronicles in the Hebrew Bible. The book is compiled from older sources by an unknown person or group, designated by modern scholars as "the Chronicler", and had its final shape established in the late fifth or fourth century BCE. This chapter belongs to the section focusing on the kingdom of Judah until its destruction by the Babylonians under Nebuchadnezzar and the beginning of restoration under Cyrus the Great of Persia (2 Chronicles 10 to 36). The focus of this chapter is the reign of Hezekiah, king of Judah. Text. This chapter was originally written in the Hebrew language and is divided into 21 verses. Textual witnesses. Some early manuscripts containing the text of this chapter in Hebrew are of the Masoretic Text tradition, which includes the Codex Leningradensis (1008). There is also a translation into Koine Greek known as the Septuagint, made in the last few centuries BCE. Extant ancient manuscripts of the Septuagint version include Codex Vaticanus (B; formula_0B; 4th century), and Codex Alexandrinus (A; formula_0A; 5th century). The reform of Hezekiah (31:1–10). Verse 1 parallels 2 Kings 18:4, summarizing Hezekiah's reforming measures. Once the temple service was reinstalled, Hezekiah reinstated the priestly and Levitical divisions as set out by David (1 Chronicles 23–26; 1 Chronicles 28:13, 21) and implemented by Solomon (cf. 2 Chronicles 8:14; 23:18–19), and then organized the tithes. The king also contributed to the support of the Temple and its workers, as David had done (1 Chronicles 22:14–16; 29:2–5), and the people likewise responded with their generosity (cf. 1 Chronicles 29:6–9). The chief priest in Hezekiah's time was Azariah, who bears the same name as the chief priest under Solomon, both from the lineage of Zadok; this portrays Hezekiah in a similar light to David and Solomon, although their names are not explicitly mentioned in this chapter. "In the third month they began to lay the foundation of the heaps, and finished them in the seventh month." Hezekiah organizes the priests (31:11–21). The abundance of tithes needed to be stored, so King Hezekiah ordered more chambers to be built or renovated for the purpose. Two Levites (Cononiah and Shimei) administered "the offerings and the tithes and the dedicated things", with the assistance of ten 'overseers' (Jehiel, Azaziah, Nahath, Asahel, Jerimoth, Jozabad, Eliel, Ismachiah, Mahath, and Benaiah), leading to the 'round sum of twelve' (verses 11–13). Verses 14–19 detail the distribution of the offerings as the financial support for the priests and Levites on the basis of the number and size of the family (cf. ), both in Jerusalem and in the other cities. Verses 20–21 parallel to in the assessment of Hezekiah's reign, praising Hezekiah more than any other king of Judah, describing him with a specific threefold praise: "good and right and faithful before the LORD". "Also of the sons of Aaron the priests, which were in the fields of the suburbs of their cities, in every several city, the men that were expressed by name, to give portions to all the males among the priests, and to all that were reckoned by genealogies among the Levites." Notes. &lt;templatestyles src="Reflist/styles.css" /&gt; References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": " \\mathfrak{G}" } ]
https://en.wikipedia.org/wiki?curid=61480074
614840
Hendecagon
Shape with eleven sides In geometry, a hendecagon (also undecagon or endecagon) or 11-gon is an eleven-sided polygon. (The name "hendecagon", from Greek "hendeka" "eleven" and "–gon" "corner", is often preferred to the hybrid "undecagon", whose first part is formed from Latin "undecim" "eleven".) Regular hendecagon. A "regular hendecagon" is represented by Schläfli symbol {11}. A regular hendecagon has internal angles of 147.27 degrees (=147 formula_0 degrees). The area of a regular hendecagon with side length "a" is given by formula_1 As 11 is not a Fermat prime, the regular hendecagon is not constructible with compass and straightedge. Because 11 is not a Pierpont prime, construction of a regular hendecagon is still impossible even with the usage of an angle trisector. Close approximations to the regular hendecagon can be constructed. For instance, the ancient Greek mathematicians approximated the side length of a hendecagon inscribed in a unit circle as being 14/25 units long. The hendecagon can be constructed exactly via neusis construction and also via two-fold origami. Approximate construction. The following construction description is given by T. Drummond from 1800: "Draw the radius A B, bisect it in C—with an opening of the compasses equal to half the radius, upon A and C as centres describe the arcs C D I and A D—with the distance I D upon I describe the arc D O and draw the line C O, which will be the extent of one side of a hendecagon sufficiently exact for practice." On a unit circle, the side length obtained from this construction is formula_2, whereas the exact side length is formula_3, giving an absolute error of formula_4. Symmetry. The "regular hendecagon" has Dih11 symmetry, order 22. Since 11 is a prime number, there is one subgroup with dihedral symmetry: Dih1, and 2 cyclic group symmetries: Z11, and Z1. These 4 symmetries can be seen in 4 distinct symmetries on the hendecagon. John Conway labels these by a letter and group order. Full symmetry of the regular form is r22 and no symmetry is labeled a1. The dihedral symmetries are divided depending on whether they pass through vertices (d for diagonal) or edges (p for perpendiculars), and i when reflection lines pass through both edges and vertices. Cyclic symmetries in the middle column are labeled as g for their central gyration orders. Each subgroup symmetry allows one or more degrees of freedom for irregular forms. Only the g11 subgroup has no degrees of freedom but can be seen as directed edges. Use in coinage. The Canadian dollar coin, the loonie, is similar to, but not exactly, a regular hendecagonal prism, as are the Indian 2-rupee coin and several other lesser-used coins of other nations. The cross-section of a loonie is actually a Reuleaux hendecagon. The United States Susan B. Anthony dollar has a hendecagonal outline along the inside of its edges. Related figures. The hendecagon shares the same set of 11 vertices with four regular hendecagrams: References. &lt;templatestyles src="Reflist/styles.css" /&gt;
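As a quick numerical check of the side-length approximation and the area formula quoted above, the following short Python sketch recomputes the exact inscribed side length, compares it with the ancient 14/25 estimate, and evaluates the area for unit side length. It is illustrative only and introduces no values beyond those already stated in the text.

```python
import math

# Exact side length of a regular hendecagon inscribed in a unit circle
exact = 2 * math.sin(math.pi / 11)       # 0.563465...
greek_approx = 14 / 25                   # ancient approximation, 0.56
print(exact, greek_approx, abs(exact - greek_approx))

# Area of a regular hendecagon with side length a: (11/4) * a^2 * cot(pi/11)
def hendecagon_area(a: float) -> float:
    return 11.0 / 4.0 * a * a / math.tan(math.pi / 11)

print(hendecagon_area(1.0))              # about 9.36564
```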
[ { "math_id": 0, "text": "\\tfrac{3}{11}" }, { "math_id": 1, "text": "A = \\frac{11}{4}a^2 \\cot \\frac{\\pi}{11} \\simeq 9.36564\\,a^2." }, { "math_id": 2, "text": "b=0.563692\\ldots" }, { "math_id": 3, "text": "a=2\\sin(\\frac{\\pi}{11})=0.563465\\ldots" }, { "math_id": 4, "text": "\\delta=b-a=2.27\\ldots\\cdot10^{-4}" } ]
https://en.wikipedia.org/wiki?curid=614840
61491450
Perovskite nanocrystal
Class of semiconductor nanocrystals Perovskite nanocrystals are a class of semiconductor nanocrystals, which exhibit unique characteristics that separate them from traditional quantum dots. Perovskite nanocrystals have an ABX3 composition where A = cesium, methylammonium (MA), or formamidinium (FA); B = lead or tin; and X = chloride, bromide, or iodide. Their unique qualities largely involve their unusual band structure, which renders these materials effectively defect tolerant, or able to emit brightly without surface passivation. This is in contrast to other quantum dots such as CdSe, which must be passivated with an epitaxially matched shell to be bright emitters. In addition to this, lead-halide perovskite nanocrystals remain bright emitters when the size of the nanocrystal imposes only weak quantum confinement. This enables the production of nanocrystals that exhibit narrow emission linewidths regardless of their polydispersity. The combination of these attributes and their easy-to-perform synthesis has resulted in numerous articles demonstrating the use of perovskite nanocrystals as both classical and quantum light sources with considerable commercial interest. Perovskite nanocrystals have been applied to numerous other optoelectronic applications such as light emitting diodes, lasers, visible communication, scintillators, solar cells, and photodetectors. Physical properties. Perovskite nanocrystals possess numerous unique attributes: defect tolerance, high quantum yield, fast rates of radiative decay and narrow emission line width in weak confinement, which make them ideal candidates for a variety of optoelectronic applications. Bulk vs. nano. The intriguing optoelectronic properties of lead halide perovskites were first studied in single crystals and thin films. From these reports, it was discovered that these materials possess high carrier mobility, long carrier lifetimes, long carrier diffusion lengths, and small effective carrier masses. Unlike their nanocrystal counterparts, bulk ABX3 materials are non-luminescent at room temperature, but they do exhibit bright photoluminescence once cooled to cryogenic temperatures. Defect-tolerance. Contrary to the characteristics of other colloidal quantum dots such as CdSe, ABX3 QDs have been shown to be bright, high-quantum-yield (above 80%), stable emitters with narrow linewidths without surface passivation. In II-VI systems, the presence of dangling bonds on the surface results in photoluminescence quenching and photoluminescent intermittence or blinking. The lack of sensitivity to the surface can be rationalized from the electronic band structure and density of states calculations for these materials. Unlike conventional II-VI semiconductors where the band gap is formed by bonding and antibonding orbitals, the frontier orbitals in ABX3 QDs are formed by antibonding orbitals composed of Pb 6s 6p and X np orbitals (n is the principal quantum number for the corresponding halogen atom). As a result, dangling bonds (under-coordinated atoms) result in intraband states or shallow traps instead of deep mid-gap states (as in CdSe QDs). This observation was corroborated by computational studies which demonstrated that the electronic structure of CsPbX3 materials exhibits a trap-free band gap. Furthermore, band structure calculations performed by various groups have demonstrated that these are direct band gap materials at their R-point (a critical point of the Brillouin zone) with composition-dependent band gaps. Photoluminescence.
It was discovered in 2015 that the photoluminescence of perovskite nanocrystals can be post-synthetically tuned across the visible spectral range through halide substitution to obtain , , , , and ; there was no evidence of . The change in band-gap with composition can be described by Vegard's Law, which describes the change in lattice parameter as a function of the change in composition for a solid solution. However, the change in lattice parameter can be rewritten to describe the change in band gap for many semiconductors. The change in band gap directly affects the energy or wavelength of light that can be absorbed by the material and therefore its color. Furthermore, this directly alters the energy of emitted light according to the Stokes shift of the material. This quick, post-synthetic anion-tunability is in contrast to other quantum dot systems where emission wavelength is primarily tuned through particle size by altering the degree of quantum confinement. Aside from tuning the absorption edge and emission wavelength by anion substitution, it was also observed that the A-site cation also affects both properties. This occurs as a result of the distortion of the perovskite structure and the tilting of octahedra due to the size of the A-cation. Cs, which yields a Goldschmidt tolerance factor of less than one, results in a distorted, orthorhombic structure at room temperature. This results in reduced orbital overlap between the halide and lead atoms and blue shifts the absorption and emission spectra. On the other hand, FA yields a cubic structure and results in FAPbX3 having red shifted absorption and emission spectra as compared to both Cs and MA. Of these three cations, MA is intermediate size between Cs and FA and therefore results in absorption and emission spectra intermediate between those of Cs and FA. Through the combination of both anionic and cationic tuning, the whole spectrum ranging from near-UV to near-IR can be covered. Absorption Coefficient. Recent studies have demonstrated that CsPbBr3 nanocrystals have an absorption coefficient of 2x105 cm−1 at 335 nm and 8x104 cm−1 at 400 nm. Single Dot Spectroscopy of Perovskite Nanocrystals. Blinking and Spectral diffusion. Spectroscopic studies of individual nanocrystals have revealed blinking-free emission and very low spectral diffusion without a passivating shell around the NCs. Studies have also demonstrated blinking-free emission at room temperature with a strongly reduced Auger recombination rate at room temperature (CsPbI3 NCs). Exciton fine-structure and the Rashba effect. It was observed that emission from perovskite nanocrystals may be the result of a bright (optically active) triplet state. Several effects have been suggested to play a role on the exciton fine structure such as electron-hole exchange interactions, crystal field and shape anisotropy, as well as the Rashba effect. Recent reports have described the presence of the Rashba effect within bulk- and nano- and . While it has been reported that the Rashba effect contributes to the existence of a lowest energy triplet state , recent work on has indicated the presence of a lower lying dark state, which can be activated with the application of a magnetic field. Coherent emission. Numerous quantum optical technologies require coherent light sources. Perovskite nanocrystals have been demonstrated as sources of such light as well as suitable materials for the generation of single photons with high coherence. Self-assembly and Superfluorescence. 
Monodisperse perovskite nanocrystals can be assembled into cubic superlattices, which can range from a few hundreds of nanometers to tens of microns in size and show tunable photoluminescence by changing nanocrystal composition via anion exchange (for example, from green-emitting CsPbBr3 nanocrystal superlattices to yellow and orange emitting CsPb(I1−xBrx)3 nanocrystal superlattices to red-emitting CsPbI3 ones). These superlattices have been reported to exhibit a very high degree of structural order and unusual optical phenomena such as superfluorescence. In the case of these superlattices, it was reported that the dipoles of the individual nanocrystals can become aligned and then simultaneously emit several pulses of light. Chemical properties. Synthesis. Early attempts were made to prepare MAPbX3 perovskites as nanocrystals in 2014 by non-template synthesis. It was not until 2015 that CsPbX3 nanocrystals were prepared by the Kovalenko research group at ETH Zurich by a hot-injection synthesis. Since then, numerous other synthetic routes towards the successful preparation of ABX3 NCs have been demonstrated. Hot-injection. The majority of papers reporting on ABX3 NCs make use of a hot injection procedure in which one of the reagents is swiftly injected into a hot solution containing the other reagents and ligands. The combination of high temperature and rapid addition of the reagent leads to a fast reaction in which supersaturation and nucleation occur over a very short period of time, producing a large number of nuclei. After a short period of time, the reaction is quenched by quickly cooling to room temperature. Since 2015, several articles detailing improvements to this approach with zwitterionic ligands, branched ligands and post-synthetic treatments have been reported. Recently, soy-lecithin was demonstrated to be a ligand system for these nanocrystals that could stabilize concentrations from several ng/mL up to 400 mg/mL. Co-precipitation. A second, popular method for the preparation of ABX3 NCs relies on the ionic nature of APbX3 materials. Briefly, a polar, aprotic solvent such as DMF or DMSO is used to dissolve the starting reagents such as PbBr2, CsBr, oleic acid, and an amine. The subsequent addition of this solution into a non-polar solvent reduces the polarity of the solution and causes precipitation of the ABX3 phase. Microfluidics. Microfluidics has also been used to synthesize CsPbX3 NCs and to screen and study synthetic parameters. Recently, a modular microfluidic platform has been developed at North Carolina State University to further optimize the synthesis and composition of these materials. Other routes. Outside of the traditional synthetic routes, several papers have reported that CsPbX3 NCs could be prepared on supports or within porous structures even without ligands. Dirin et al. first demonstrated that bright NCs of CsPbX3 could be prepared without organic ligands within the pores of mesoporous silica. By using mesoporous silica as a template, the size of CsPbX3 nanodomains is restricted to the pore size. This allows for greater control over emission wavelength via quantum confinement and illustrates the defect tolerant nature of these materials. This concept was later extended to the preparation of ligand-free APbX3 NCs on alkali-halide supports that could be shelled with NaBr without deteriorating their optical properties and protecting the nanocrystals against a number of polar solvents.
As a result of the low melting point and ionic nature of ABX3 materials, several studies have demonstrated that bright ABX3 nanocrystals can also be prepared by ball-milling. With NCs, the composition can be tuned via ion exchange, i.e. the ability to post-synthetically exchange the ions in the lattice for those added. This has been shown to be possible for both anions and cations. Anion exchange. The anions in the lead halide perovskites are highly mobile. The mobility arises from the diffusion of halide vacancies throughout the lattice, with an activation barrier of 0.29 eV and 0.25 eV for CsPbCl3 and CsPbBr3, respectively (see: physical properties). This was used by Nedelcu et al. and Akkerman et al. to demonstrate that the composition of cesium lead halide perovskite nanocrystals could be tuned continuously from CsPbCl3 to CsPbBr3 and from CsPbBr3 to CsPbI3 to obtain emission across the entire visible spectrum. While this was first observed in a colloidal suspension, this was also shown in solid pellets of alkali halide salts pressed with previously synthesized nanocrystals. This same phenomenon has also been observed for MAPbX3 and FAPbX3 NCs. Cation exchange and doping. Although several reports showed that CsPbX3 NCs could be doped with Mn2+, they accomplished this through the addition of the Mn precursor during the synthesis, and not through cation exchange. Cation exchange can be used to partially exchange Pb2+ with Sn2+, Zn2+, or Cd2+ over the course of several hours. In addition to these cations, gold was also shown to be a suitable candidate for cation exchange, yielding a mixed-valent, and distorted, perovskite with the composition Cs2Au(I)Au(III)Br6. A-site cation exchange has also been shown to be a viable route for the transformation of CsPbBr3 to MAPbBr3 and from CsPbI3 to FAPbI3. Ligand-assisted reprecipitation (LARP). The ligand-assisted reprecipitation method is commonly used for the preparation of perovskite nanoplatelets (NPls). In this method, the precursors, dissolved in different solvents, whether polar (such as dimethylformamide and dimethyl sulfoxide) or non-polar (such as toluene and hexane), are combined in the presence of the ligands to form the perovskite NPls through supersaturation. The NPls thickness obtained from this method depends on the concentration of the ligands as well as the chain length of the organic ligands. Therefore, the thickness can be controlled by the ratio between A-cation-oleate and lead-halide precursors in the reaction medium. By adjusting the toluene and acetone during the synthesis, the NPls are crystallized and precipitated at room temperature with these two solvents, respectively. Morphology. Nanomaterials can be prepared with various morphologies that range from spherical particles/quantum wells (0D) to wires (1D) and platelets or sheets (2D), and this has been previously demonstrated for QDs such as CdSe. While the initial report of lead halide perovskite NCs covered cubic particles, subsequent reports demonstrated that these materials could also be prepared as both platelets (2D) and wires (1D). Due to the varying degrees of quantum confinement present in these different shapes, the optical properties (emission spectrum and mean lifetime) change. As an example of the effect of morphology, cubic nanocrystals of CsPbBr3 can emit from 470 nm to 520 nm based on their size (470 nm emission requires nanocrystals with an average diameter of less than 4 nm).
Within this same composition (CsPbBr3), nanoplatelets exhibit emission that is blue shifted from that of cubes, with the wavelength depending on the number of monolayers contained within the platelet (from 440 nm for three monolayers to 460 nm for 5 monolayers). Nanowires of CsPbBr3, on the other hand, emit from 473 nm to 524 nm depending on the width of the prepared wire, with lifetimes in the range of 2.5 ns – 20.6 ns. Similarly to CsPbBr3, MAPbBr3 NCs also exhibit morphology-dependent optical properties, with nanocrystals of MAPbBr3 emitting from 475 nm to 520 nm and exhibiting average lifetimes on the order of 240 ns depending on their composition. Nanoplatelets and nanowires have been reported to emit at 465 nm and 532 nm, respectively. Structure and composition. Perovskite nanocrystals all have the general composition ABX3 in which A is a large, central cation (typically MA, FA, or Cs) that sits in a cavity surrounded by corner-sharing BX6 octahedra (B = Pb, Sn; X = Cl, Br, I). Depending on the composition, the crystal structure can vary from orthorhombic to cubic, and the stability of a given composition can be qualitatively predicted by its Goldschmidt tolerance factor formula_0 where t is the calculated tolerance factor and the r terms are the ionic radii of the A, B, and X ions, respectively. Structures with tolerance factors between 0.8 and 1 are expected to have cubic symmetry and form three-dimensional perovskite structures such as those observed in CaTiO3 (a small numerical illustration is given at the end of this section). Furthermore, tolerance factors of t > 1 yield hexagonal structures (CsNiBr3 type), and t < 0.8 result in NH4CdCl3 type structures. If the A-site cation is too large (t > 1), but packs efficiently, 2D perovskites can be formed. Distortions and Phase transitions. The corner-sharing BX6 octahedra form a three-dimensional framework through bridging halides. The angle (Φ) formed by B-X-B (metal-halide-metal) can be used to judge the closeness of a given structure to that of an ideal perovskite. Although these octahedra are interconnected and form a framework, the individual octahedra are able to tilt with respect to one another. This tilting is affected by the size of the "A" cation as well as external stimuli such as temperature or pressure. If the B-X-B angle deviates too far from 180°, phase transitions towards non-luminescent or altogether non-perovskite phases can occur. If the B-X-B angle does not deviate very far from 180°, the overall structure of the perovskite remains as a 3D network of interconnected octahedra, but the optical properties may change. This distortion increases the band gap of the material as the overlap between Pb- and X-based orbitals is reduced. For example, changing the A cation from Cs to MA or FA alters the tolerance factor and decreases the band gap as the B-X-B bond angle approaches 180° and the orbital overlap between the lead and halide atoms increases. These distortions can further manifest themselves as deviations in the band gap from that expected by Vegard's Law for solid solutions. Crystal structure and twinning in nanocrystals. The room temperature crystal structures of the various bulk lead-halide perovskites have been extensively studied and have been reported for the APbX3 perovskites. The average crystal structures of the nanocrystals tend to agree with those of the bulk. Studies have, however, shown that these structures are dynamic and deviate from the predicted structures due to the presence of twinned nanodomains.
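As a numerical illustration of the Goldschmidt tolerance-factor classification described under Structure and composition, the following Python sketch evaluates t for a few APbBr3 compositions. The ionic radii used are approximate literature values supplied here only as assumptions for illustration, not values taken from the source.

```python
import math

# Goldschmidt tolerance factor: t = (r_A + r_X) / (sqrt(2) * (r_B + r_X))
def tolerance_factor(r_a: float, r_b: float, r_x: float) -> float:
    return (r_a + r_x) / (math.sqrt(2) * (r_b + r_x))

# Approximate ionic radii in angstroms (illustrative assumptions)
radii_a = {"Cs": 1.88, "MA": 2.17, "FA": 2.53}   # A-site cations
r_pb, r_br = 1.19, 1.96                           # B-site cation and halide

for name, r_a in radii_a.items():
    t = tolerance_factor(r_a, r_pb, r_br)
    print(f"{name}PbBr3: t = {t:.2f}")
# Expected output: roughly 0.86 for Cs, 0.93 for MA, and about 1.0 for FA --
# all in or near the 0.8-1.0 window associated with three-dimensional
# perovskites, with FA sitting right at the upper edge.
```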
Surface chemistry. Calculations as well as empirical observations have demonstrated that perovskite nanocrystals are defect-tolerant semiconductor materials. As a result, they do not require epitaxial shelling or surface passivation since they are insensitive to surface defect states. In general, the perovskite nanocrystal surface is considered to be both ionic and highly dynamic. However, this ionic character makes perovskite nanocrystals unstable in humid conditions, and the degradation process can be accelerated by photoirradiation, which can alter the electronic properties of the nanocrystals. Initial reports utilized dynamically bound oleylammonium and oleate ligands that exhibited an equilibrium between bound and unbound states. This resulted in severe instability with respect to purification and washing, which was improved in 2018 with the introduction of zwitterionic ligands. The stability and quality of these colloidal materials were further improved in 2019 when it was demonstrated that deep traps could be generated by the partial destruction of the lead-halide octahedra, and that they could also be subsequently repaired to restore the quantum yield of nanocrystals. Applications and Devices. Light-emitting Diodes. Perovskite NCs are promising materials for the emitting layer of light-emitting diodes (LEDs) as they offer potential advantages over organic LEDs (OLEDs) such as the elimination of precious metals (Ir, Pt) and simpler syntheses. The first report of green electroluminescence (EL) was from MAPbBr3 NCs, although no efficiency values were reported. It was later observed that MAPbBr3 NCs could form in a polymer matrix when the precursors for MAPbBr3 thin films were mixed with an aromatic polyimide precursor. The authors of this study obtained green EL with an external quantum efficiency (EQE) of up to 1.2%. The first LEDs based on colloidal CsPbX3 NCs demonstrated blue, green and orange EL with sub-1% EQE. Since then, efficiencies have reached above 8% for green LEDs (CsPbBr3 NCs), above 7% for red LEDs (CsPbI3 NCs), and above 1% for blue LEDs. Lasers. Perovskite MAPbX3 thin films have been shown to be promising materials for optical gain applications such as lasers and optical amplifiers. Afterwards, the lasing properties of colloidal perovskite NCs such as CsPbX3 nanocubes, MAPbBr3 nanoplatelets and FAPbX3 nanocubes were also demonstrated. Thresholds as low as 2 μJ cm−2 have been reported for colloidal NCs (CsPbX3) and 220 nJ cm−2 for MAPbI3 nanowires. Interestingly, perovskite NCs show efficient optical gain properties not only under resonant excitation, but also under two-photon excitation where the excitation light falls into the transparent range of the active material. While the nature of optical gain in perovskites is not yet clearly understood, the dominant hypothesis is that the population inversion of excited states required for gain appears to be due to bi-excitonic states in the perovskite. Photocatalysis. Perovskite nanocrystals have also been investigated as potential photocatalysts. Security. Perovskite nanocrystals doped with large cations such as ethylene diamine (en) were demonstrated to exhibit hypsochromaticity concomitantly with lengthened photoluminescence lifetimes relative to their undoped counterparts. This phenomenon was utilized by researchers to generate single-color luminescent QR codes that could only be deciphered by measuring the photoluminescence lifetime.
The lifetime measurements were carried out utilizing both time-correlated single photon counting equipment and a prototype time-of-flight fluorescence imaging device developed by CSEM. Other phases. Ternary cesium lead halides have multiple stable phases that can be formed; these include CsPbX3 (perovskite), Cs4PbX6 (so called "zero-dimensional" phase due to disconnected [PbX6]4- octahedra), and CsPb2X5. All three phases have been prepared colloidally either by a direct synthesis or via nanocrystal transformations. A rising research interest in these compounds created a disagreement within the community around the zero-dimensional Cs4PbBr6 phase. Two contradictory claims exist regarding the optical properties of this material: i) the phase exhibits high photoluminescent quantum yield emission at 510-530 nm and ii) the phase is non-luminescent in the visible spectrum. It was later demonstrated that pure Cs4PbBr6 NCs were non-luminescent, and that these could be converted to luminescent CsPbX3 NCs and vice versa. A similar debate had occurred regarding the CsPb2Br5 phase, which was also reported as being strongly luminescent. This phase, like the Cs4PbBr6 phase, is a wide gap semiconductor (~3.1 eV), but it is also an indirect-gap semiconductor and is non-luminescent. The non-luminescent nature of this phase was further demonstrated in NH4Pb2Br5. Lead-free perovskite nanocrystals. Given the toxicity of lead, there is ongoing research into the discovery of lead-free perovskites for optoelectronics. Several lead-free perovskites have been prepared colloidally: Cs3Bi2I9, Cs2PdX6, CsSnX3. CsSnX3 NCs, although the closest lead-free analogue to the highly luminescent CsPbX3 NCs, do not exhibit high quantum yields (&lt;1% PLQY). CsSnX3 NCs are also sensitive to O2, which causes oxidation of Sn(II) to Sn(IV) and renders the NCs non-luminescent. Another approach to this problem relies on the replacement of the Pb(II) cation with the combination of a monovalent and a trivalent cation i.e. B(II) replaced with B(I) and B(III). Double perovskite nanocrystals such as Cs2AgBiX6 (X = Cl, Br, I), Cs2AgInCl6 (including Mn-doped variant), and Cs2AgInxBi1−xCl6 (including Na-doped variant) have been studied as potential alternatives to lead-halide perovskites, although none exhibit narrow, high PLQY emission. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": " t={(r_a+r_x) \\over \\sqrt{2} (r_b+r_x)} " } ]
https://en.wikipedia.org/wiki?curid=61491450
6149634
Naccache–Stern cryptosystem
The Naccache–Stern cryptosystem is a homomorphic public-key cryptosystem whose security rests on the higher residuosity problem. The Naccache–Stern cryptosystem was discovered by David Naccache and Jacques Stern in 1998. Scheme Definition. Like many public key cryptosystems, this scheme works in the group formula_0 where "n" is a product of two large primes. This scheme is homomorphic and hence malleable. Key Generation. Pick "k" small distinct odd primes "p"1,...,"p""k" and form the products formula_1, formula_2 and formula_3. Choose large primes "a" and "b" such that both "p" = 2"au"+1 and "q" = 2"bv"+1 are prime, set "n" = "pq", and choose an element "g" of large multiplicative order modulo "n" (of order φ("n")/4 in the original description). The public key is the numbers σ,"n","g" and the private key is the pair "p","q". When "k"=1 this is essentially the Benaloh cryptosystem. Message Encryption. This system allows encryption of a message "m" in the group formula_4. To encrypt, pick a random formula_5 and compute formula_6. Then "E(m)" is an encryption of the message "m". Message Decryption. To decrypt, we first find "m" mod "p""i" for each "i", and then we apply the Chinese remainder theorem to calculate "m" mod formula_7. Given a ciphertext "c", for each "i" we compute formula_8 and observe that formula_9 where formula_10. Since each "p""i" is small, "m""i" can be recovered by comparing formula_11 against the powers formula_12 for "j" = 0, ..., "p""i" − 1. Security. The semantic security of the Naccache–Stern cryptosystem rests on an extension of the quadratic residuosity problem known as the higher residuosity problem.
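A toy, insecure sketch of the scheme with tiny parameters may help fix ideas; the parameter sizes, the way g is selected, and the use of the sympy library are simplifying assumptions for illustration only, not part of the original specification.

```python
# Educational toy implementation of (probabilistic) Naccache-Stern with tiny,
# insecure parameters.  Prime sizes, the helper library (sympy) and the way g
# is chosen are simplifying assumptions for illustration only.
from math import prod, gcd
from random import randrange
from sympy import isprime, nextprime
from sympy.ntheory.modular import crt

small_primes = [3, 5, 7, 11]                 # p_1 .. p_k with k = 4
u = prod(small_primes[:2])                   # u = 3 * 5
v = prod(small_primes[2:])                   # v = 7 * 11
sigma = u * v                                # messages live in Z/sigma

def find_aux(factor, start):
    """Find a prime a such that 2*a*factor + 1 is also prime."""
    a = nextprime(start)
    while not isprime(2 * a * factor + 1):
        a = nextprime(a)
    return a

a, b = find_aux(u, 100), find_aux(v, 200)
p, q = 2 * a * u + 1, 2 * b * v + 1
n, phi = p * q, (p - 1) * (q - 1)

# pick g coprime to n so that g^(phi/p_i) has order exactly p_i mod n for every
# small prime; the real scheme imposes stronger conditions on the order of g.
g = next(g for g in iter(lambda: randrange(2, n), None)
         if gcd(g, n) == 1 and all(pow(g, phi // pi, n) != 1 for pi in small_primes))

def encrypt(m):
    x = next(x for x in iter(lambda: randrange(2, n), None) if gcd(x, n) == 1)
    return (pow(x, sigma, n) * pow(g, m, n)) % n          # E(m) = x^sigma g^m mod n

def decrypt(c):
    residues = []
    for pi in small_primes:
        ci = pow(c, phi // pi, n)                          # c_i = c^(phi/p_i) mod n
        mi = next(j for j in range(pi) if pow(g, j * (phi // pi), n) == ci)
        residues.append(mi)                                # m_i = m mod p_i
    return crt(small_primes, residues)[0]                  # Chinese remainder theorem

m = 1000                                                   # must satisfy m < sigma = 1155
assert decrypt(encrypt(m)) == m
```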
[ { "math_id": 0, "text": "(\\mathbb{Z}/n\\mathbb{Z})^*" }, { "math_id": 1, "text": "u = \\prod_{i=1}^{k/2} p_i" }, { "math_id": 2, "text": "v = \\prod_{k/2+1}^k p_i" }, { "math_id": 3, "text": "\\sigma = uv = \\prod_{i=1}^k p_i" }, { "math_id": 4, "text": "\\mathbb{Z}/\\sigma\\mathbb{Z}" }, { "math_id": 5, "text": "x \\in \\mathbb{Z}/n\\mathbb{Z}" }, { "math_id": 6, "text": "E(m) = x^\\sigma g^m \\mod n" }, { "math_id": 7, "text": "\\sigma" }, { "math_id": 8, "text": "c_i \\equiv c^{\\phi(n)/p_i} \\mod n" }, { "math_id": 9, "text": " \\begin{matrix} c^{\\phi(n)/p_i} &\\equiv& x^{\\sigma \\phi(n)/p_i} g^{m\\phi(n)/p_i} \\mod n\\\\ &\\equiv& g^{(m_i + y_ip_i)\\phi(n)/p_i} \\mod n \\\\ &\\equiv& g^{m_i\\phi(n)/p_i} \\mod n \\end{matrix}" }, { "math_id": 10, "text": "m_i \\equiv m \\mod p_i" }, { "math_id": 11, "text": "c_i" }, { "math_id": 12, "text": "g^{j\\phi(n)/p_i}" } ]
https://en.wikipedia.org/wiki?curid=6149634
614988
Alfréd Rényi
Hungarian mathematician (1921–1970) Alfréd Rényi (20 March 1921 – 1 February 1970) was a Hungarian mathematician known for his work in probability theory, though he also made contributions in combinatorics, graph theory, and number theory. Life. Rényi was born in Budapest to Artúr Rényi and Borbála Alexander; his father was a mechanical engineer, while his mother was the daughter of philosopher and literary critic Bernhard Alexander; his uncle was Franz Alexander, a Hungarian-American psychoanalyst and physician. He was prevented from enrolling in university in 1939 due to the anti-Jewish laws then in force, but enrolled at the University of Budapest in 1940 and finished his studies in 1944. At this point, he was drafted to forced labour service, from which he managed to escape during transportation of his company. He was in hiding with false documents for six months. Biographers tell an incredible story about Rényi: after half of a year in hiding, he managed to get hold of a soldier's uniform and march his parents out of the Budapest Ghetto, where they were captive. That mission required enormous courage and planning skills. Rényi then completed his PhD in 1947 at the University of Szeged, under the advisement of Frigyes Riesz. He did his postgraduate in Moscow and Leningrad, where he collaborated with a prominent Soviet mathematician Yuri Linnik. Rényi married Katalin Schulhof (who used Kató Rényi as her married name), herself a mathematician, in 1946; their daughter Zsuzsanna was born in 1948. After a brief assistant professorship at Budapest, he was appointed Professor Extraordinary at the University of Debrecen in 1949. In 1950, he founded the Mathematics Research Institute of the Hungarian Academy of Sciences, now bearing his name, and directed it until his early death. He also headed the Department of Probability and Mathematical Statistics of the Eötvös Loránd University, from 1952. He was elected a corresponding member (1949), then full member (1956), of the Hungarian Academy of Sciences. Work. Rényi proved, using the large sieve, that there is a number formula_0 such that every even number is the sum of a prime number and a number that can be written as the product of at most formula_0 primes. Chen's theorem, a strengthening of this result, shows that the theorem is true for "K" = 2, for all sufficiently large even numbers. The case "K" = 1 is the still-unproven Goldbach conjecture. In information theory, he introduced the spectrum of Rényi entropies of order "α", giving an important generalisation of the Shannon entropy and the Kullback–Leibler divergence. The Rényi entropies give a spectrum of useful diversity indices, and lead to a spectrum of fractal dimensions. The Rényi–Ulam game is a guessing game where some of the answers may be wrong. In probability theory, he is also known for his parking constants, which characterize the solution to the following problem: given a street of some length and cars of unit length parking on a random free position on the street, what is the mean density of cars when there are no more free positions? The solution to that problem is asymptotically equal to 0.7475979 (sequence in the OEIS). Thus, random parking is 25.2% less efficient than optimal packing. He wrote 32 joint papers with Paul Erdős, the most well-known of which are his papers introducing the Erdős–Rényi model of random graphs. The corpus of his bibliography was compiled by the mathematician Pál Medgyessy. Quotations. 
Rényi, who was addicted to coffee, is the source of the quote: "A mathematician is a device for turning coffee into theorems", which is often ascribed to Erdős. It has been suggested that this sentence was originally formulated in German, where it can be interpreted as a double entendre on the meaning of the word "Satz" (theorem or coffee residue), but it is more likely that the original formulation was in Hungarian. He is also famous for having said, "If I feel unhappy, I do mathematics to become happy. If I am happy, I do mathematics to keep happy." Remembrance. The Alfréd Rényi Prize, awarded by the Hungarian Academy of Science, was established in his honor. In 1950 Rényi founded the Mathematics Research Institute of the Hungarian Academy of Sciences. It was renamed the Alfréd Rényi Institute of Mathematics in July 1999. Books. "Foundations of Probability" and "Probability Theory" have both been reprinted by Dover Publications. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "K" } ]
https://en.wikipedia.org/wiki?curid=614988
614998
Pál Turán
Hungarian mathematician Pál Turán (; 18 August 1910 – 26 September 1976) also known as Paul Turán, was a Hungarian mathematician who worked primarily in extremal combinatorics. In 1940, because of his Jewish origins, he was arrested by the Nazis and sent to a labour camp in Transylvania, later being transferred several times to other camps. While imprisoned, Turán came up with some of his best theories, which he was able to publish after the war. Turán had a long collaboration with fellow Hungarian mathematician Paul Erdős, lasting 46 years and resulting in 28 joint papers. Biography. Early years. Turán was born into a Hungarian Jewish family in Budapest on 18 August 1910. Pál's outstanding mathematical abilities showed early, already in secondary school he was the best student. At the same period of time, Turán and Pál Erdős were famous answerers in the journal "KöMaL". On 1 September 1930, at a mathematical seminar at the University of Budapest, Turan met Erdős. They would collaborate for 46 years and produce 28 scientific papers together. Turán received a teaching degree at the University of Budapest in 1933. In the same year he published two major scientific papers in the journals of the American and London Mathematical Societies. He got the PhD degree under Lipót Fejér in 1935 at Eötvös Loránd University. As a Jew, he fell victim to numerus clausus, and could not get a stable job for several years. He made a living as a tutor, preparing applicants and students for exams. It was not until 1938 that he got a job at a rabbinical training school in Budapest as a teacher's assistant, by which time he had already had 16 major scientific publications and an international reputation as one of Hungary's leading mathematicians. He married Edit (Klein) Kóbor in 1939; they had one son, Róbert. In World War II. In September 1940 Turán was interned in labour service. As he recalled later, his five years in labour camps eventually saved his life: they saved him from ending up in a concentration camp, where 550,000 of the 770,000 Hungarian Jews were murdered during World War II. In 1940 Turán ended up in Transylvania for railway construction. Turán said that one day while working another prisoner addressed him by his surname, saying that he was working extremely clumsily: "An officer was standing nearby, watching us work. When he heard my name, he asked the comrade whether I was a mathematician. It turned out, that the officer, Joshef Winkler, was an engineer. In his youth, he had placed in a mathematical competition; in civilian life he was a proof-reader at the print shop where the periodical of the Third Class of the Academy (Mathematical and Natural sciences) was printed. There he had seen some of my manuscripts." Winkler wanted to help Turán and managed to get him transferred to an easier job. Turán was sent to the sawmill's warehouse, where he had to show the carriers the right-sized timbers. During this period, Turán composed and was partly able to record a long paper on the Riemann zeta function. Turán was subsequently transferred several times to other camps. As he later recalled, the only way he was able to keep his sanity was through mathematics, solving problems in his head and thinking through problems. In July 1944 Turán worked on a brick factory near Budapest. His and the other prisoners' task was to carry the brick cars from the kilns to the warehouses on rails that crossed at several points with other tracks. 
At these crossings the trolleys would "bounce" and some of the bricks would fall out, causing a lot of problems for the workers. This situation led Turan to consider how to achieve the minimum number of crossings for m kilns and n warehouses. It was only after the war, in 1952, that he was able to work seriously on this problem. Turán was liberated in 1944, after which he was able to return to work at the rabbinical school in Budapest. After WWII. Turán became associate professor at the University of Budapest in 1945 and full professor in 1949. In the early post-war years, the streets were patrolled by soldiers. On occasion, random people were seized and sent to penal camps in Siberia. Once such a patrol stopped Turan, who was on his way home from university. The soldiers questioned the mathematician and then forced him to show them the contents of his briefcase. Seeing a reprint of an article from a pre-War Soviet magazine among the papers, the soldiers immediately let the mathematician go. The only thing Turán said about that day in his correspondence with Erdös was that he had "come across an extremely interesting way of applying number theory..." In 1952 he married again, the second marriage was to Vera Sós, a mathematician. They had a son, György, in 1953. The couple published several papers together. One of his students said Turán was a very passionate and active man - in the summer he held maths seminars by the pool in between his swimming and rowing training. In 1960 he celebrated his 50th birthday and the birth of his third son, Tamás, by swimming across the Danube. Turán was a member of the editorial boards of leading mathematical journals, he worked as a visiting professor at many of the top universities in the world. He was a member of the Polish, American and Austrian Mathematical Societies. In 1970, he was invited to serve on the committee of the Fields Prize. Turán also founded and served as the president of the János Bolyai Mathematical Society. Death. Around 1970 Turán was diagnosed with leukaemia, but the diagnosis was revealed only to his wife Vera Sós, who decided not to tell him about his illness. In 1976 she told Erdős. Sós was sure that Turán was ‘too much in love with life’ and would have fallen into despair at the news of his fatal illness, and would not have been able to work properly. Erdős said that Turán did not lose his spirit even in the Nazi camps and did brilliant work there. Erdős regretted that Turán had been kept unaware of his illness because he had put off certain works and books 'for later', hoping that he would soon feel better, and in the end was never able to finish them. Turán died in Budapest on 26 September 1976 of leukemia, aged 66. Work. Turán worked primarily in number theory, but also did much work in analysis and graph theory. Number theory. In 1934, Turán used the Turán sieve to give a new and very simple proof of a 1917 result of G. H. Hardy and Ramanujan on the normal order of the number of distinct prime divisors of a number "n", namely that it is very close to formula_0. In probabilistic terms he estimated the variance from formula_0. Halász says "Its true significance lies in the fact that it was the starting point of probabilistic number theory". The Turán–Kubilius inequality is a generalization of this work. Turán was very interested in the distribution of primes in arithmetic progressions, and he coined the term "prime number race" for irregularities in the distribution of prime numbers among residue classes. 
With his coauthor Knapowski he proved results concerning Chebyshev's bias. The Erdős–Turán conjecture makes a statement about primes in arithmetic progression. Much of Turán's number theory work dealt with the Riemann hypothesis and he developed the power sum method (see below) to help with this. Erdős said "Turán was an 'unbeliever,' in fact, a 'pagan': he did not believe in the truth of Riemann's hypothesis." Analysis. Much of Turán's work in analysis was tied to his number theory work. Outside of this he proved Turán's inequalities relating the values of the Legendre polynomials for different indices, and, together with Paul Erdős, the Erdős–Turán equidistribution inequality. Graph theory. Erdős wrote of Turán, "In 1940–1941 he created the area of extremal problems in graph theory which is now one of the fastest-growing subjects in combinatorics." The field is known more briefly today as extremal graph theory. Turán's best-known result in this area is Turán's graph theorem, that gives an upper bound on the number of edges in a graph that does not contain the complete graph "Kr" as a subgraph. He invented the Turán graph, a generalization of the complete bipartite graph, to prove his theorem. He is also known for the Kővári–Sós–Turán theorem bounding the number of edges that can exist in a bipartite graph with certain forbidden subgraphs, and for raising Turán's brick factory problem, namely of determining the crossing number of a complete bipartite graph. Power sum method. Turán developed the power sum method to work on the Riemann hypothesis. The method deals with inequalities giving lower bounds for sums of the form formula_1 hence the name "power sum". Aside from its applications in analytic number theory, it has been used in complex analysis, numerical analysis, differential equations, transcendental number theory, and estimating the number of zeroes of a function in a disk. Notes. &lt;templatestyles src="Reflist/styles.css" /&gt; &lt;templatestyles src="Reflist/styles.css" /&gt;
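As a concrete illustration of Turán's graph theorem described above, the sketch below constructs the Turán graph T(n, r), the complete r-partite graph with parts as equal as possible, and compares its edge count with the bound (1 − 1/r)n²/2; the specific values of n and r and the use of Python are illustrative choices.

```python
# Turán's graph theorem: the Turán graph T(n, r) is the K_{r+1}-free graph on
# n vertices with the most edges, and that number never exceeds (1 - 1/r) n^2 / 2.
from itertools import combinations

def turan_graph_edges(n, r):
    # split vertices 0..n-1 into r nearly equal parts
    parts = [list(range(n))[i::r] for i in range(r)]
    part_of = {v: i for i, part in enumerate(parts) for v in part}
    # edges join every pair of vertices lying in different parts
    return [(u, v) for u, v in combinations(range(n), 2) if part_of[u] != part_of[v]]

n, r = 10, 3
edges = turan_graph_edges(n, r)
print(len(edges), (1 - 1 / r) * n**2 / 2)   # 33 <= 33.33...
```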
[ { "math_id": 0, "text": "\\ln \\ln n" }, { "math_id": 1, "text": " \\max_{\\nu=m+1,\\dots,m+n} \\left | \\sum_{j=1}^n b_j z_j^\\nu \\right |, " } ]
https://en.wikipedia.org/wiki?curid=614998
6150200
Small area estimation
Statistical techniques involving the estimation of parameters for small sub-populations Small area estimation is any of several statistical techniques involving the estimation of parameters for small sub-populations, generally used when the sub-population of interest is included in a larger survey. The term "small area" in this context generally refers to a small geographical area such as a county. It may also refer to a "small domain", i.e. a particular demographic within an area. If a survey has been carried out for the population as a whole (for example, a nation or statewide survey), the sample size within any particular small area may be too small to generate accurate estimates from the data. To deal with this problem, it may be possible to use additional data (such as census records) that exists for these small areas in order to obtain estimates. One of the more common small area models in use today is the 'nested area unit level regression model', first used in 1988 to model corn and soybean crop areas in Iowa. The initial survey data, in which farmers reported the area they had growing either corn or soybeans, was compared to estimates obtained from satellite mapping of the farms. The final model resulting from this for unit/farm 'j' in county 'i' is formula_0, where 'y' denotes the reported crop area, formula_1 is the regression coefficient, 'x' is the farm-level estimate for either corn or soybean usage from the satellite data and formula_2 represents the county-level effect of any area characteristics unaccounted for. The Fay-Herriot model, a random effects model, has been used to make estimates for small domains when the sample from each domain is too small for fixed effects.
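A minimal simulation-and-fit sketch of such a unit-level model is shown below, using the statsmodels mixed-effects routine as a stand-in for specialised small-area estimators; the variable names, parameter values, and choice of library are illustrative assumptions.

```python
# Minimal sketch of fitting a nested-error (unit-level) small area model
# y_ij = x_ij * beta + u_i + e_ij with a random county effect u_i, using
# simulated data.  All parameter values and names here are illustrative.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
counties, units = 12, 25
county = np.repeat(np.arange(counties), units)
x = rng.normal(10, 2, size=counties * units)              # e.g. satellite-based estimate
u = rng.normal(0, 1.0, size=counties)[county]              # county-level random effect
y = 2.0 + 0.8 * x + u + rng.normal(0, 0.5, size=x.size)    # reported crop area

data = pd.DataFrame({"y": y, "x": x, "county": county})
model = smf.mixedlm("y ~ x", data, groups=data["county"]).fit()
print(model.params["x"])          # estimate of beta, close to the true 0.8
# county-level predictions combine the fixed effects with the estimated
# random effects (an empirical best linear unbiased predictor, EBLUP)
county_effects = model.random_effects
```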
[ { "math_id": 0, "text": "y_{ij} = x_{ij}'\\beta +\\mu_i +\\epsilon_{ij} \\," }, { "math_id": 1, "text": "\\beta \\," }, { "math_id": 2, "text": "\\mu \\, " } ]
https://en.wikipedia.org/wiki?curid=6150200
61502277
Coate–Loury model
Model of affirmative action The Coate–Loury model of affirmative action was developed by Stephen Coate and Glenn Loury in 1993. The model seeks to answer the question of whether, by mandating expanded opportunities for minorities in the present, these policies are rendered unnecessary in the future. Affirmative action may lead to one of two outcomes, and Coate and Loury concluded that either equilibrium is possible under certain assumptions. Model framework. The exposition of the Coate–Loury model follows the notes of David Autor. The authors make three assumptions as a starting point for their model: Employers are able to observe a worker's identity formula_0, where the fraction of the population that is formula_1 is formula_2, and a noisy signal of the worker's qualification level formula_3. Employers can assign workers to either Task 0 or Task 1, with only qualified workers being successful at Task 1. Employers get a net return formula_4 from assigning a worker to Task 1 of the form: formula_5 The ratio of net gain to loss is formula_6. The distribution of formula_7 depends on whether or not the worker is qualified, which is assumed to not differ between formula_8 and formula_1. Let formula_9 be the probability that the signal does not exceed formula_7, given that the worker is qualified; formula_10 is the probability that the signal does not exceed formula_7, given that the worker is unqualified. The corresponding probability density functions are formula_11 and formula_12. Let formula_13 be the likelihood ratio, and assume that it is non-increasing on formula_3. This implies that: formula_14 Therefore, higher values of the signal are more likely if the worker is qualified. This implies that formula_15 has the monotone likelihood ratio (MLR) property. Employers' decision rule. For a worker from group formula_8 or formula_1, the fraction of qualified workers in the group is formula_16. Using Bayes' rule, the employer's posterior probability that the worker is qualified, given the worker's signal, is: formula_17 The expected benefit of assigning a worker to Task 1 is: formula_18 Then the employer will assign a worker to Task 1 if the return is positive, which implies that: formula_19 Based on the MLR assumption, there exists a threshold standard formula_20 that depends on group membership, so that workers with formula_21 are placed in Task 1: formula_22 This implies that a higher qualification rate of a group will lead to a lower threshold hiring standard formula_23. Workers' investment decision. The expected gross benefit to obtaining appropriate qualification for a worker is: formula_24 where formula_25 is the gross benefit of being assigned to Task 1 and formula_26 is the passing standard. Given the assumption that employers have rational expectations, only the "true" probability that a worker is qualified should matter, not the employer's beliefs about the probability. Note that formula_27 is a single-peaked function with formula_28, since there would be no point to investing if all workers were assigned to Task 1 or no workers were assigned to Task 1. This implies that the gross benefit to investing will rise so long as the marginal probability of being assigned to Task 1 is increasing in formula_26. To see this, note that the derivative of the gross benefit with respect to formula_26 is: formula_29 This is only positive if formula_30. Since the boundary points are equal to zero, it follows that formula_31 must sometimes be above 1 and sometimes below 1 in the interval.
Workers will invest if formula_32, so the share of workers investing will be formula_33. If formula_34 is continuous and formula_35, it will have the property that when the gross benefit is rising in formula_26, the net benefit should also be rising. Equilibrium. An equilibrium is a fixed point of the aforementioned hiring and investment policies where beliefs are self-confirming, such that: formula_36 A discriminatory equilibrium formula_37 can occur whenever the equilibrium equation has multiple solutions. In this case, it is possible that employers will believe that members of formula_8 are less qualified than members of formula_1, which will be confirmed by the investment behavior of members of formula_8. Proposition 1 (p. 1226) proves that, under reasonable conditions, if a solution exists to the equilibrium condition, then at least two solutions will exist. Affirmative action. Under the assumption that a discriminatory equilibrium exists, with the further assumption of no differences in skill distributions, an affirmative action policy can be easily rationalized. Coate and Loury consider the policy where the rate of assignment for formula_8 and formula_1 workers to Task 1 is equalized. Let formula_2 be the proportion of formula_1 in the population. Let formula_38 be the ex ante probability that a worker is assigned to Task 1: formula_39 And let formula_40 be the expected payoff from hiring this worker: formula_41 Under affirmative action, the employers' optimization problem is to solve: formula_42 where the equality constraint on the ex ante probabilities is the affirmative action constraint. The equivalent Lagrangian formula_43 is: formula_44 where formula_45 is the Lagrange multiplier. Proposition 2 (p. 1229) develops a condition for the existence of a nondiscriminatory equilibrium under affirmative action. In particular, if any group of workers facing standard formula_26 invest so that the fraction formula_33 is qualified, then all equilibria are self-confirming: formula_46 In this case, the affirmative action policy would equate employers' beliefs about members of each group. Patronizing equilibrium. However, it is not in general true that affirmative action under the model's assumptions leads to the nondiscriminatory equilibrium. If at formula_47 the employer lowered the threshold formula_48, then the fraction of workers investing would fall, and the employers' beliefs about the fraction who are qualified would not be satisfied. Therefore, a policy that lowered formula_47 would not be self-enforcing. Coate and Loury define an equilibrium in which the affirmative action constraint is permanently binding as a "patronizing equilibrium", where employers are compelled to lower their hiring standards for members of formula_8, relative to members of formula_1. Therefore, the following conditions hold in a patronizing equilibrium: formula_49 Being trapped in a patronizing equilibrium can have several negative effects on members of formula_8. Recalling the Lagrangian that was developed earlier, we may consider the first-order optimality conditions. Computing formula_50 and rearranging terms gives us: formula_51 where the ratios of net gain to loss for each group are: formula_52 Given a shadow price of equality formula_45, employers act as if they must pay the tax of formula_53 for each formula_1 assigned to Task 1 instead of Task 0, while receiving the subsidy formula_54 for each formula_8 put into Task 1 rather than Task 0.
Therefore, employers generally respond to the affirmative action constraint by lowering the standard for formula_8 and raising it for formula_1. Proposition 4 (p. 1234) shows that, under reasonable assumptions, the marginal productivity of formula_8 and formula_1 hires is not equated.
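The self-confirming-beliefs condition can be explored numerically. The sketch below picks illustrative functional forms (Beta signal densities satisfying the MLR property, a uniform cost distribution, and r = 1) and scans beliefs for fixed points of the map from beliefs to investment behavior; these choices are assumptions made only to show how the number of equilibria can be read off for a given parameterization, not the specification used by Coate and Loury.

```python
# Numerical sketch of the Coate-Loury self-confirming condition
# pi = G(beta(s*(pi))).  The signal densities, cost distribution, payoff
# ratio r and wage omega below are illustrative assumptions only.
import numpy as np
from scipy import stats

f_q = stats.beta(5, 2)        # signal density when qualified
f_u = stats.beta(2, 5)        # signal density when unqualified (MLR holds)
r, omega = 1.0, 1.0           # employer gain/loss ratio x_q/x_u and worker benefit
G = lambda b: np.clip(b / 0.4, 0.0, 1.0)   # cost CDF: c ~ Uniform(0, 0.4)

thetas = np.linspace(1e-3, 1 - 1e-3, 4000)

def s_star(pi):
    # lowest threshold at which r >= ((1 - pi)/pi) * f_u(theta)/f_q(theta)
    ok = r >= ((1 - pi) / pi) * f_u.pdf(thetas) / f_q.pdf(thetas)
    return thetas[ok][0] if ok.any() else 1.0

def beta_gross(s):
    # gross benefit of investing: omega * (F_u(s) - F_q(s))
    return omega * (f_u.cdf(s) - f_q.cdf(s))

pis = np.linspace(0.005, 0.995, 199)
updated = np.array([G(beta_gross(s_star(pi))) for pi in pis])
# equilibria are where the updated belief crosses the 45-degree line; counting
# sign changes of (updated - pis) shows how many there are for this parameterization
sign_changes = np.where(np.diff(np.sign(updated - pis)) != 0)[0]
print(pis[sign_changes])
```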
[ { "math_id": 0, "text": "\\mathcal{I} \\in \\{ B,W \\}" }, { "math_id": 1, "text": "W" }, { "math_id": 2, "text": "\\lambda" }, { "math_id": 3, "text": "\\theta \\in [0,1]" }, { "math_id": 4, "text": "x" }, { "math_id": 5, "text": "x = \\begin{cases} x_{q} > 0, \\quad &(\\text{Worker Qualified}) \\\\ -x_{u} <0, \\quad &(\\text{Worker Unqualified}) \\end{cases}" }, { "math_id": 6, "text": "r = x_{q}/x_{u}" }, { "math_id": 7, "text": "\\theta" }, { "math_id": 8, "text": "B" }, { "math_id": 9, "text": "F_{q}(\\theta)" }, { "math_id": 10, "text": "F_{u}(\\theta)" }, { "math_id": 11, "text": "f_{q}(\\theta)" }, { "math_id": 12, "text": "f_{u}(\\theta)" }, { "math_id": 13, "text": "\\varphi(\\theta) = f_{u}(\\theta)/f_{q}(\\theta)" }, { "math_id": 14, "text": "F_{q}(\\theta) \\leq F_{u}(\\theta), \\quad \\forall \\theta \\in [0,1]" }, { "math_id": 15, "text": "\\varphi(\\theta)" }, { "math_id": 16, "text": "\\pi" }, { "math_id": 17, "text": "\\begin{aligned}\n\\xi(\\pi,\\theta) &= {\\pi f_{q}(\\theta)\\over{\\pi f_{q}(\\theta) + (1 - \\pi )f_{u}(\\theta)}} \\\\\n&= {1\\over{1 + {1-\\pi\\over{\\pi}} \\varphi(\\theta) }}\n\\end{aligned}" }, { "math_id": 18, "text": "\\xi(\\pi,\\theta)x_{q} - \\left[1 - \\xi(\\pi,\\theta) \\right]x_{u}" }, { "math_id": 19, "text": "\\begin{aligned}\nr &\\geq {1-\\xi(\\pi,\\theta)\\over{\\xi(\\pi,\\theta)}} \\\\\n&\\geq \\left({1-\\pi\\over{\\pi}}\\right)\\varphi(\\theta)\n\\end{aligned}" }, { "math_id": 20, "text": "s^{*}(\\pi)" }, { "math_id": 21, "text": "\\theta > s^{*}" }, { "math_id": 22, "text": "s^{*}(\\pi) = \\min \\{ \\theta \\in [0,1], \\quad r\\geq [(1-\\pi)/\\pi]\\varphi(\\theta) \\}" }, { "math_id": 23, "text": "s^{*}" }, { "math_id": 24, "text": "\\begin{aligned}\n\\beta(s) &= \\omega\\left\\{[1-F_{q}(s)] - [1-F_{u}(s)] \\right\\} \\\\\n&= \\omega [F_{u}(s)-F_{q}(s)]\n\\end{aligned}" }, { "math_id": 25, "text": "\\omega" }, { "math_id": 26, "text": "s" }, { "math_id": 27, "text": "\\beta(s)" }, { "math_id": 28, "text": "\\beta(0) = \\beta(1) = 0" }, { "math_id": 29, "text": "{\\partial\\beta\\over{\\partial s}} = \\omega[f_{u}(s)-f_{q}(s)]" }, { "math_id": 30, "text": "\\varphi(s) > 1" }, { "math_id": 31, "text": "\\varphi(s)" }, { "math_id": 32, "text": "\\beta(s) \\geq c" }, { "math_id": 33, "text": "G[\\beta(s)]" }, { "math_id": 34, "text": "G(\\cdot)" }, { "math_id": 35, "text": "G(0)=0" }, { "math_id": 36, "text": "\\pi_{i} = G\\{ \\beta[s^{*}(\\pi_{i})] \\}, \\quad i \\in \\{B,W\\}" }, { "math_id": 37, "text": "(\\pi_{B} < \\pi_{W})" }, { "math_id": 38, "text": "\\rho(s,\\pi)" }, { "math_id": 39, "text": "\\rho(s,\\pi) = \\pi[1-F_{q}(s)] + (1-\\pi)[1-F_{u}(s)]" }, { "math_id": 40, "text": "P(s,\\pi)" }, { "math_id": 41, "text": "P(s,\\pi) = \\pi[1-F_{q}(s)]x_{q} - (1-\\pi)[1-F_{u}(s)]x_{u}" }, { "math_id": 42, "text": "\\max_{s_{w},s_{b}} \\; (1-\\lambda)P(s_{b},\\pi_{b}) + \\lambda P(s_{w},\\pi_{w}), \\quad \\text{s.t.} \\; \\rho(s_{b},\\pi_{b}) = \\rho(s_{w},\\pi_{w})" }, { "math_id": 43, "text": "\\mathcal{L}" }, { "math_id": 44, "text": "\\mathcal{L}(s_{b},s_{w},\\gamma;\\pi_{b},\\pi_{w}) = (1-\\lambda)P(s_{b},\\pi_{b}) + \\lambda P(s_{w},\\pi_{w}) + \\gamma \\left[ \\rho(s_{b},\\pi_{b}) - \\rho(s_{w},\\pi_{w})\\right]" }, { "math_id": 45, "text": "\\gamma" }, { "math_id": 46, "text": "\\widehat{\\rho}(s) = \\rho\\left\\{s,G[\\beta(s)]\\right\\}" }, { "math_id": 47, "text": "s_{w}" }, { "math_id": 48, "text": "s' < s_{w}" }, { "math_id": 49, "text": "s_{b}^{*} < s_{w}^{*}, \\quad \\pi_{b} < \\pi_{w}" }, { "math_id": 50, "text": 
"\\partial\\mathcal{L}/\\partial s_{i}" }, { "math_id": 51, "text": "\\begin{aligned}\nr_{b}(\\gamma) &= {1-\\pi_{b}\\over{\\pi_{b}}}\\varphi(s_{b}) \\\\\nr_{w}(\\gamma) &= {1-\\pi_{w}\\over{\\pi_{w}}}\\varphi(s_{w})\n\\end{aligned}" }, { "math_id": 52, "text": "r_{b}(\\gamma) = {x_{q} + \\gamma/(1-\\lambda)\\over{x_{u}-\\gamma/(1-\\lambda)}}, \\quad r_{w}(\\gamma) = {x_{q} + \\gamma/\\lambda\\over{x_{u}-\\gamma/\\lambda}}" }, { "math_id": 53, "text": "\\gamma/\\lambda" }, { "math_id": 54, "text": "\\gamma/(1-\\lambda)" } ]
https://en.wikipedia.org/wiki?curid=61502277
615108
Poynting–Robertson effect
Process in which solar radiation causes a dust grain orbiting a star to lose angular momentum The Poynting–Robertson effect, also known as Poynting–Robertson drag, named after John Henry Poynting and Howard P. Robertson, is a process by which solar radiation causes a dust grain orbiting a star to lose angular momentum relative to its orbit around the star. This is related to radiation pressure tangential to the grain's motion. This causes dust that is small enough to be affected by this drag, but too large to be blown away from the star by radiation pressure, to spiral slowly into the star. In the case of the Solar System, this can be thought of as affecting dust grains from to in diameter. Larger dust is likely to collide with another object long before such drag can have an effect. Poynting initially gave a description of the effect in 1903 based on the luminiferous aether theory, which was superseded by the theories of relativity in 1905–1915. In 1937 Robertson described the effect in terms of general relativity. History. Robertson considered dust motion in a beam of radiation emanating from a point source. A. W. Guess later considered the problem for a spherical source of radiation and found that for particles far from the source the resultant forces are in agreement with those concluded by Poynting. Source of the effect. The effect can be understood in two ways, depending on the reference frame chosen. From the perspective of the grain of dust circling a star (panel (a) of the figure), the star's radiation appears to be coming from a slightly forward direction (aberration of light). Therefore the absorption of this radiation leads to a force with a component against the direction of movement. The angle of aberration is extremely small since the radiation is moving at the speed of light while the dust grain is moving many orders of magnitude slower than that. From the perspective of the star (panel (b) of the figure), the dust grain absorbs sunlight entirely in a radial direction, thus the grain's angular momentum is not affected by it. But the "re-emission" of photons, which is isotropic in the frame of the grain (a), is no longer isotropic in the frame of the star (b). This anisotropic emission causes the photons to carry away angular momentum from the dust grain. Note that this anisotropic emission does not imply that an isolated radiating body in motion would decelerate (which would violate the principle of relativity). In this case, there would still be a net deceleration force (i.e. a decrease in momentum over time), but since the body's mass decreases as energy is radiated away, its velocity can remain constant. The Poynting–Robertson drag can be understood as an effective force opposite the direction of the dust grain's orbital motion, leading to a drop in the grain's angular momentum. While the dust grain thus spirals slowly into the star, its orbital speed increases continuously. The Poynting–Robertson force is equal to: formula_0 where "v" is the grain's velocity, "c" is the speed of light, "W" is the power of the incoming radiation, "r" the grain's radius, "G" is the universal gravitational constant, "M"s the Sun's mass, "L"s is the solar luminosity and "R" the grain's orbital radius. Relation to other forces. The Poynting–Robertson effect is more pronounced for smaller objects. Gravitational force varies with mass, which is formula_1 (where formula_2 is the radius of the dust), while the power it receives and radiates varies with surface area (formula_3). 
So for large objects the effect is negligible. The effect is also stronger closer to the Sun. Gravity varies as formula_4 (where R is the radius of the orbit) whereas the Poynting–Robertson force varies as formula_5, so the effect also gets relatively stronger as the object approaches the Sun. This tends to reduce the eccentricity of the object's orbit in addition to dragging it in. In addition, as the size of the particle increases, the surface temperature is no longer approximately constant, and the radiation pressure is no longer isotropic in the particle's reference frame. If the particle rotates slowly, the radiation pressure may contribute to the change in angular momentum, either positively or negatively. Radiation pressure affects the effective force of gravity on the particle: it is felt more strongly by smaller particles, and blows very small particles away from the Sun. It is characterized by the dimensionless dust parameter formula_6, the ratio of the force due to radiation pressure to the force of gravity on the particle: formula_7 where formula_8 is the Mie scattering coefficient, and formula_9 is the density and formula_10 is the size (the radius) of the dust grain. Impact of the effect on dust orbits. Particles with formula_11 have radiation pressure at least half as strong as gravity, and will pass out of the Solar System on hyperbolic orbits if their initial velocities were Keplerian. For rocky dust particles, this corresponds to a diameter of less than 1 μm. Particles with formula_12 may spiral inwards or outwards depending on their size and initial velocity vector; they tend to stay in eccentric orbits. Particles with formula_13 take around 10,000 years to spiral into the Sun from a circular orbit at 1 AU. In this regime, inspiraling time and particle diameter are both roughly formula_14. Note that, if the initial grain velocity was not Keplerian, then circular or any confined orbit is possible for formula_15. It has been theorized that the slowing down of the rotation of Sun's outer layer may be caused by a similar effect. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
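For a sense of scale, the short sketch below plugs round solar values into the drag-force and β expressions quoted above for a single micrometre-sized grain at 1 AU; the grain radius, density, and Q_PR = 1 are illustrative assumptions.

```python
# Order-of-magnitude sketch: Poynting-Robertson force and the dust parameter
# beta for a spherical grain, using the formulas quoted above.  The grain
# size, density, and Q_PR = 1 are illustrative assumptions.
import math

G   = 6.674e-11        # m^3 kg^-1 s^-2
c   = 2.998e8          # m/s
M_s = 1.989e30         # kg, solar mass
L_s = 3.828e26         # W, solar luminosity
AU  = 1.496e11         # m

r    = 1e-6            # grain radius: 1 micrometre (assumed)
rho  = 3000.0          # grain density in kg/m^3 (rocky dust, assumed)
R    = 1.0 * AU        # orbital radius
Q_pr = 1.0             # Mie scattering coefficient (assumed ideal absorber)

# Poynting-Robertson drag force: F = r^2 L_s / (4 c^2) * sqrt(G M_s / R^5)
F_pr = r**2 * L_s / (4 * c**2) * math.sqrt(G * M_s / R**5)

# dust parameter: ratio of radiation pressure to gravity
beta = 3 * L_s * Q_pr / (16 * math.pi * G * M_s * c * rho * r)

print(f"F_PR ~ {F_pr:.2e} N")
print(f"beta ~ {beta:.2f}")   # ~0.19 here, inside the 0.1 < beta < 0.5 range discussed above
```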
[ { "math_id": 0, "text": "F_{\\rm PR} = \\frac{v}{c^2}W = \\frac{r^2 L_{\\rm s}}{4 c^2}\\sqrt{\\frac{G M_{\\rm s}}{R^5}}" }, { "math_id": 1, "text": " \\propto r^3 " }, { "math_id": 2, "text": "r" }, { "math_id": 3, "text": " \\propto r^2 " }, { "math_id": 4, "text": "\\frac{1}{R^2}" }, { "math_id": 5, "text": "\\frac{1}{R^{2.5}}" }, { "math_id": 6, "text": " \\beta " }, { "math_id": 7, "text": "\n\\beta = { F_{\\rm r} \\over F_{\\rm g} } \n= { 3L Q_{\\rm PR} \\over { 16 \\pi GMc \\rho s } }\n" }, { "math_id": 8, "text": "Q_{\\rm PR} " }, { "math_id": 9, "text": " \\rho " }, { "math_id": 10, "text": "s" }, { "math_id": 11, "text": "\\beta \\geq 0.5 " }, { "math_id": 12, "text": "0.1 < \\beta < 0.5 " }, { "math_id": 13, "text": "\\beta \\approx 0.1" }, { "math_id": 14, "text": "\\propto {1 \\over \\beta}" }, { "math_id": 15, "text": "\\beta < 1 " } ]
https://en.wikipedia.org/wiki?curid=615108
61512316
Bailout embedding
Type of dynamical system In the theory of dynamical systems, a bailout embedding is a system defined as formula_0 Here the function "k"("x") &lt; 0 on a set of unwanted orbits; otherwise "k"("x") &gt; 0. The trajectories of the full system of a bailout embedding "bail out"—that is, detach—from the embedding, into a larger space, in which they move around. If, after some time these orbits arrive at a stable neighbourhood of the embedding, "k"("x") &gt; 0, they collapse once more onto the embedding; that is, onto the original dynamics. The bailout embedding forms in this way an enlarged version of the dynamical system, one in which particular sets of orbits are cut from the asymptotic or limit set, while maintaining the dynamics of a different set of orbits—the wanted set—as attractors of the larger dynamical system. With a choice of "k"("x") = −("γ" + ∇"f"), these dynamics are seen to detach from unstable regions such as saddle points in conservative systems. One important application of the bailout embedding concept is to divergence-free flows; the most important class of these are Hamiltonian systems. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
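A minimal numerical sketch of the definition is given below for a two-dimensional divergence-free (cellular) flow, integrating the embedded system with SciPy; the choice of flow, the use of a constant k &gt; 0 (so there are no unwanted orbits and every trajectory collapses back onto the embedding), and all parameter values are illustrative assumptions rather than the construction used in the original work.

```python
# Minimal numerical sketch of a bailout embedding of a 2-D incompressible flow
# f(x, y) (a cellular flow here).  The flow, the constant k, and all parameter
# values are illustrative assumptions.
import numpy as np
from scipy.integrate import solve_ivp

def f(x, y):
    # divergence-free cellular flow with stream function psi = sin(x) * sin(y)
    return np.array([np.sin(x) * np.cos(y), -np.cos(x) * np.sin(y)])

def jac_f(x, y):
    return np.array([[np.cos(x) * np.cos(y), -np.sin(x) * np.sin(y)],
                     [np.sin(x) * np.sin(y), -np.cos(x) * np.cos(y)]])

k = 0.3   # k > 0 everywhere: trajectories collapse back onto u = f(x)

def rhs(t, state):
    x, y, ux, uy = state
    u = np.array([ux, uy])
    # d/dt (u - f(x)) = -k (u - f(x))   =>   du/dt = J_f(x) u - k (u - f(x))
    du = jac_f(x, y) @ u - k * (u - f(x, y))
    return [ux, uy, du[0], du[1]]

x0 = [0.3, 0.4]
u0 = f(*x0) + np.array([0.2, -0.1])     # start slightly detached from the flow
sol = solve_ivp(rhs, (0.0, 50.0), [*x0, *u0], max_step=0.01)

detachment = np.hypot(sol.y[2] - np.sin(sol.y[0]) * np.cos(sol.y[1]),
                      sol.y[3] + np.cos(sol.y[0]) * np.sin(sol.y[1]))
print(detachment[-1])    # ~0: with k > 0 the orbit collapses back onto u = f(x)
```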
[ { "math_id": 0, "text": "\n\\begin{align}\n&\\frac{d}{dt} (u-f(x)) = -k(x) (u-f(x)), \\\\[8pt]\n&\\frac{dx}{dt} = u.\n\\end{align}\n" } ]
https://en.wikipedia.org/wiki?curid=61512316
61514054
AES-GCM-SIV
Authenticated encryption mode with resistance against nonce reuse AES-GCM-SIV is a mode of operation for the Advanced Encryption Standard which provides similar (but slightly worse) performance to Galois/Counter Mode as well as misuse resistance in the event of the reuse of a cryptographic nonce. The construction is defined in RFC 8452. About. AES-GCM-SIV is designed to preserve both privacy and integrity even if nonces are repeated. To accomplish this, encryption is a function of a nonce, the plaintext message, and optional additional associated data (AAD). In the event a nonce is misused (i.e. used more than once), nothing is revealed except in the case that the same message is encrypted multiple times with the same nonce. When that happens, an attacker is able to observe repeat encryptions, since encryption is a deterministic function of the nonce and message. However, beyond that, no additional information is revealed to the attacker. For this reason, AES-GCM-SIV is an ideal choice in cases where unique nonces cannot be guaranteed, such as multiple servers or network devices encrypting messages under the same key without coordination. Operation. Like Galois/Counter Mode, AES-GCM-SIV combines the well-known counter mode of encryption with the Galois mode of authentication. The key feature is the use of a synthetic initialization vector which is computed with Galois field multiplication using a construction called POLYVAL (a little-endian variant of Galois/Counter Mode's GHASH). POLYVAL is run over the combination of nonce, plaintext, and additional data, so that the IV is different for each combination. POLYVAL is defined over GF(2128) by the polynomial: formula_0 Note that GHASH is defined over the "reverse" polynomial: formula_1 This change provides efficiency benefits on little-endian architectures. References. &lt;templatestyles src="Reflist/styles.css" /&gt; External links. Implementations. Implementations of AES-GCM-SIV are available in a number of programming languages.
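As a usage sketch, the snippet below exercises the nonce-reuse behaviour described above through the Python cryptography package; the availability of the AESGCMSIV class (it requires a recent release of the library with OpenSSL support for this mode) is an assumption, and the key, nonce, and messages are arbitrary examples.

```python
# Illustrative use of AES-GCM-SIV via the `cryptography` package (AESGCMSIV
# class assumed available in a recent release).  With a repeated nonce,
# identical messages give identical ciphertexts, but distinct messages remain
# protected, as described above.
from cryptography.hazmat.primitives.ciphers.aead import AESGCMSIV

key = AESGCMSIV.generate_key(bit_length=256)
aead = AESGCMSIV(key)

nonce = b"\x00" * 12           # deliberately reused below
aad = b"header"

ct1 = aead.encrypt(nonce, b"attack at dawn", aad)
ct2 = aead.encrypt(nonce, b"attack at dawn", aad)
ct3 = aead.encrypt(nonce, b"attack at dusk", aad)

print(ct1 == ct2)   # True: encryption is deterministic in (key, nonce, aad, message)
print(ct1 == ct3)   # False: only message equality is revealed under nonce reuse
assert aead.decrypt(nonce, ct1, aad) == b"attack at dawn"
```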
[ { "math_id": 0, "text": "x^{128} + x^{127} + x^{126} + x^{121} + 1" }, { "math_id": 1, "text": "x^{128} + x^7 + x^2 + x + 1" } ]
https://en.wikipedia.org/wiki?curid=61514054
61516900
Gurzhi effect
The Gurzhi effect was theoretically predicted by Radii Gurzhi in 1963. It consists of a decrease of the electric resistance formula_0 of a finite-size conductor with increasing temperature formula_1 (i.e. the situation formula_2 over some temperature interval). The Gurzhi effect is usually considered as evidence of electron hydrodynamic transport in conducting media. The mechanism of the Gurzhi effect is the following. The resistance of the conductor is inversely proportional to formula_3 — the mean free path corresponding to momentum loss from the electrons+phonons system, formula_4 where formula_5 is the average distance which an electron passes between two consecutive interactions with a boundary, and formula_6 is the mean free path corresponding to other channels of momentum loss. The electron reflection from the boundary is assumed to be diffusive. When the temperature is low we have ballistic transport with formula_7, formula_8, where formula_9 is the width of the conductor and formula_10 is the mean free path corresponding to effective normal electron-electron collisions (i.e. collisions without loss of the total electrons+phonons momentum). At low temperatures a phonon emitted by an electron quickly interacts with another electron without loss of the total electron+phonons momentum, so formula_11, where formula_12 is the mean free path corresponding to electron-phonon collisions. We also assume formula_13. Thus the resistance at the lowest temperatures is a constant, formula_14. The Gurzhi effect appears when the temperature is increased so that formula_15. In this regime the electron diffusive length between two consecutive interactions with the boundary can be considered as the momentum-loss free path, formula_16, and the resistance is proportional to formula_17, so we have a negative derivative formula_2. Therefore, the Gurzhi effect can be observed when formula_18. The Gurzhi effect corresponds to the unusual situation in which the electrical resistance depends on the frequency of normal collisions. As one can see, the effect appears due to the presence of boundaries with a finite characteristic size formula_9. Later, Gurzhi's group discovered a special role of electron hydrodynamics in spin transport. In that case a magnetic inhomogeneity plays the role of a "boundary", with the spin-diffusion length as the characteristic size instead of formula_9. The magnetic inhomogeneity stops electrons of one spin component, which then become effective scatterers for electrons of the other spin component. In this case the magnetoresistance of a conductor depends on the frequency of normal electron-electron collisions, just as in the Gurzhi effect. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
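A toy numerical sketch of the resulting resistance curve is given below; the temperature scalings, prefactors, and the max/min crossover interpolation are illustrative assumptions chosen only to reproduce the three regimes described above, not a quantitative model.

```python
# Toy illustration of the Gurzhi effect: R(T) ~ 1 / l_lost with
# l_lost = min(l_boundary, l_V), where l_boundary crosses over from the
# ballistic value d to the hydrodynamic value d^2 / l_ee once l_ee << d.
# The temperature scalings and prefactors below are illustrative assumptions.
import numpy as np

d = 1.0                                   # conductor width (arbitrary units)
T = np.linspace(0.2, 5.0, 200)
l_ee = 50.0 * T**-5                       # normal e-e (phonon-mediated) mean free path
l_V  = 5.0e4 * T**-5                      # momentum-losing bulk mean free path (much longer)

l_boundary = np.maximum(d, d**2 / l_ee)   # ballistic (d) -> hydrodynamic (d^2 / l_ee)
l_lost = np.minimum(l_boundary, l_V)
R = 1.0 / l_lost                          # resistance up to a constant factor

# the sign of dR/dT separates the regimes: flat (ballistic), negative
# (Gurzhi regime), then positive once bulk momentum relaxation takes over
dRdT = np.gradient(R, T)
print(T[dRdT < -1e-9].min(), T[dRdT < -1e-9].max())   # temperature window with dR/dT < 0
```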
[ { "math_id": 0, "text": "R" }, { "math_id": 1, "text": "T" }, { "math_id": 2, "text": " dR/dT < 0 " }, { "math_id": 3, "text": "l_{lost}=\\min\\{l_{boundary}, l_V\\}" }, { "math_id": 4, "text": "R\\propto \\frac{1}{l_{lost}}," }, { "math_id": 5, "text": "l_{boundary}" }, { "math_id": 6, "text": "l_{V}" }, { "math_id": 7, "text": "l_{ee} \\gg d" }, { "math_id": 8, "text": "l_{lost} \\approx l_{boundary} \\approx d" }, { "math_id": 9, "text": "d" }, { "math_id": 10, "text": "l_{ee}" }, { "math_id": 11, "text": "l_{ee}\\approx l_{ep}" }, { "math_id": 12, "text": "l_{ep}\\propto T^{-5}" }, { "math_id": 13, "text": "d \\ll l_V" }, { "math_id": 14, "text": "R \\propto d^{-1}" }, { "math_id": 15, "text": "l_{ee} \\ll d" }, { "math_id": 16, "text": "l_{lost}\\approx l_{boundary} \\approx d^2/l_{ee}" }, { "math_id": 17, "text": "R \\propto l_{ee}(T)/d^2 \\propto T^{-5}d^{-2}" }, { "math_id": 18, "text": " l_{ee}\\ll d \\ll d^2/l_{ee} \\ll l_V " } ]
https://en.wikipedia.org/wiki?curid=61516900
61519107
The Case Against Education
2018 book by Bryan Caplan The Case Against Education: Why the Education System Is a Waste of Time and Money is a book written by libertarian economist Bryan Caplan and published in 2018 by Princeton University Press. Drawing on the economic concept of job market signaling and research in educational psychology, the book argues that much of higher education is very inefficient and has only a small effect in improving human capital, contrary to the conventional consensus in labor economics. Caplan argues that the primary function of education is not to enhance students' skills but to certify their intelligence, conscientiousness, and conformity—attributes that are valued by employers. He ultimately estimates that approximately 80% of individuals' return to education is the result of signaling, with the remainder due to human capital accumulation. Summary. Human capital model. The foundation of the drive to increase educational attainment across the board is the human capital model of education, which began with the research of Gary Becker. The model suggests that increasing educational attainment causes increased prosperity by endowing students with increased skills. As a consequence, subsidies to education are seen as a positive investment that increases economic growth and creates spillover effects by improving civic engagement, happiness, health, etc. Caplan argues against the model due to several contradictions, though he does not dispute that higher educational attainment is strongly correlated with increased individual income. He highlights that most adults remember little of what they were taught in school that is not related to their career, apart from English and math, and that even skills in those two subjects are often inadequate. He also analyzes the sheepskin effect, where the largest increases in income from higher educational attainment occur after attaining an academic degree, but not for those who dropped out of college despite usually having completed some courses. He finally criticizes educational inflation, the increasing educational requirements for occupations that do not require them, as indicating educational attainment is relative and not nearly as beneficial for society as portrayed. Present value of learning, adjusted for forgetting. The simple human capital model tends to assume that knowledge is retained indefinitely, while a ubiquitous theme in educational interventions is that "fadeout" (i.e., forgetting) reliably occurs. To take a simple example, we may compute the present value of a marginal fact formula_0 that increases a person's productivity by formula_1 as: formula_2 where formula_3 is the discount rate used to compute the present value. If formula_4 is $100 and formula_3 is 5%, then the present value of learning formula_0 is $2,000. But this is at odds with the concept of fadeout. To correct for this, assume that the probability density function for retaining formula_0 follows an exponential distribution—with the corresponding survival function formula_5. Then the present value of learning formula_0, accounting for fadeout, is given by: formula_6 Since the expected value of an exponential distribution is formula_7, we may tune this parameter based on assumptions about how long formula_0 is retained. The present value can then be evaluated for different assumed retention times (see the worked example below). Regardless of the retention time assumption, the present value of learning formula_0 is significantly reduced.
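The following short calculation evaluates the two formulas above for V = $100 and r = 5% under a few assumed retention times; the specific retention times chosen are illustrative, not the ones tabulated in the book.

```python
# Present value of a marginal fact worth V = $100/year at discount rate r = 5%,
# without fadeout (V/r) and with exponential forgetting (V/(r + lambda)),
# where 1/lambda is the expected retention time in years.  The retention
# times below are illustrative choices.
V, r = 100.0, 0.05

print("no fadeout:", V / r)                       # $2000
for retention_years in (1, 2, 5, 10):
    lam = 1.0 / retention_years                   # exponential fadeout rate
    print(f"retained ~{retention_years} yr:", round(V / (r + lam), 2))
# e.g. a fact retained for about 2 years on average is worth ~$182, not $2000
```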
Signaling model. The main alternative to the human capital model of education is the signaling model of education. The idea of job market signaling through educational attainment goes back to the work of Michael Spence. The model Spence developed suggested that, even if a student did not gain any skills through an educational program, the program could still be useful so long as the signal from completing the program is correlated with traits that predict job performance. Throughout the book, Caplan details a series of observations that suggest a significant role for signaling in the return to education. Given these signs of signaling, Caplan argues in ch. 5–6 that the selfish return to education is greater than the social return to education, suggesting that greater educational attainment creates a negative externality (p. 198). In other words, status is zero-sum; skill is not (p. 229). Cost-benefit analysis of going to college. For many students, Caplan argues that most of the negative social return to pursuing further education comes from incurring student debt and from lost employment opportunities for students who are unlikely to complete college (pp. 210–211, ch. 8). He suggests that these students would be better served by vocational education. Policy recommendations. Caplan advocates two major policy responses to the problem of signaling in education. The first recommendation is that government should sharply cut education funding, since public education spending in the United States across all levels tops $1 trillion annually. The second recommendation is to encourage greater vocational education, because students who are unlikely to succeed in college should develop practical skills to function in the labor market. Caplan argues for an increased emphasis on vocational education that is similar in nature to the systems in Germany and Switzerland. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "F" }, { "math_id": 1, "text": "V" }, { "math_id": 2, "text": "PV(F) = \\int_{0}^{\\infty}e^{-rt}Vdt = {V\\over{r}}" }, { "math_id": 3, "text": "r" }, { "math_id": 4, "text": "V " }, { "math_id": 5, "text": "S(t) = e^{-\\lambda t} " }, { "math_id": 6, "text": "PV(F) = \\int_{0}^{\\infty}e^{-rt}S(t)Vdt = \\int_{0}^{\\infty}e^{-(r+\\lambda)t}Vdt = {V\\over{r + \\lambda}}" }, { "math_id": 7, "text": "\\lambda^{-1}" } ]
https://en.wikipedia.org/wiki?curid=61519107
6152185
Softmax function
Smooth approximation of one-hot arg max &lt;templatestyles src="Machine learning/styles.css"/&gt; The softmax function, also known as softargmax or normalized exponential function, converts a vector of K real numbers into a probability distribution of K possible outcomes. It is a generalization of the logistic function to multiple dimensions, and used in multinomial logistic regression. The softmax function is often used as the last activation function of a neural network to normalize the output of a network to a probability distribution over predicted output classes. Definition. The softmax function takes as input a vector z of K real numbers, and normalizes it into a probability distribution consisting of K probabilities proportional to the exponentials of the input numbers. That is, prior to applying softmax, some vector components could be negative, or greater than one; and might not sum to 1; but after applying softmax, each component will be in the interval formula_0, and the components will add up to 1, so that they can be interpreted as probabilities. Furthermore, the larger input components will correspond to larger probabilities. Formally, the standard (unit) softmax function formula_1, where formula_2, takes a vector formula_3 and computes each component of vector formula_4 with formula_5 In words, the softmax applies the standard exponential function to each element formula_6 of the input vector formula_7 (consisting of formula_8 real numbers), and normalizes these values by dividing by the sum of all these exponentials. The normalization ensures that the sum of the components of the output vector formula_9 is 1. The term "softmax" derives from the amplifying effects of the exponential on any maxima in the input vector. For example, the standard softmax of formula_10 is approximately formula_11, which amounts to assigning almost all of the total unit weight in the result to the position of the vector's maximal element (of 8). In general, instead of e a different base b &gt; 0 can be used. As above, if b &gt; 1 then larger input components will result in larger output probabilities, and increasing the value of b will create probability distributions that are more concentrated around the positions of the largest input values. Conversely, if 0 &lt; b &lt; 1 then smaller input components will result in larger output probabilities, and decreasing the value of b will create probability distributions that are more concentrated around the positions of the smallest input values. Writing formula_12 or formula_13 (for real β) yields the expressions: formula_14 A value proportional to the reciprocal of β is sometimes referred to as the "temperature": formula_15, where k is typically 1 or the Boltzmann constant and T is the temperature. A higher temperature results in a more uniform output distribution (i.e. with higher entropy; it is "more random"), while a lower temperature results in a sharper output distribution, with one value dominating. In some fields, the base is fixed, corresponding to a fixed scale, while in others the parameter β (or T) is varied. Interpretations. Smooth arg max. The Softmax function is a smooth approximation to the arg max function: the function whose value is the "index" of a vector's largest element. The name "softmax" may be misleading. Softmax is not a smooth maximum (that is, a smooth approximation to the maximum function). The term "softmax" is also used for the closely related LogSumExp function, which is a smooth maximum. 
For this reason, some prefer the more accurate term "softargmax", though the term "softmax" is conventional in machine learning. This section uses the term "softargmax" for clarity. Formally, instead of considering the arg max as a function with categorical output formula_16 (corresponding to the index), consider the arg max function with one-hot representation of the output (assuming there is a unique maximum arg): formula_17 where the output coordinate formula_18 if and only if formula_19 is the arg max of formula_20, meaning formula_6 is the unique maximum value of formula_21. For example, in this encoding formula_22 since the third argument is the maximum. This can be generalized to multiple arg max values (multiple equal formula_6 being the maximum) by dividing the 1 between all max args; formally 1/k where k is the number of arguments assuming the maximum. For example, formula_23 since the second and third arguments are both the maximum. In case all arguments are equal, this is simply formula_24. Points z with multiple arg max values are singular points (or singularities, and form the singular set) – these are the points where arg max is discontinuous (with a jump discontinuity) – while points with a single arg max are known as non-singular or regular points. With the last expression given in the introduction, softargmax is now a smooth approximation of arg max: as β → ∞, softargmax converges to arg max. There are various notions of convergence of a function; softargmax converges to arg max pointwise, meaning that for each fixed input z, as β → ∞, formula_25 However, softargmax does not converge uniformly to arg max, meaning intuitively that different points converge at different rates, and may converge arbitrarily slowly. In fact, softargmax is continuous, but arg max is not continuous at the singular set where two coordinates are equal, while the uniform limit of continuous functions is continuous. The reason it fails to converge uniformly is that for inputs where two coordinates are almost equal (and one is the maximum), the arg max is the index of one or the other, so a small change in input yields a large change in output. For example, formula_26 but formula_27 and formula_28 for all inputs: the closer the points are to the singular set formula_29, the slower they converge. However, softargmax does converge compactly on the non-singular set. Conversely, as β → −∞, softargmax converges to arg min in the same way, where here the singular set is points with two arg "min" values. In the language of tropical analysis, the softmax is a deformation or "quantization" of arg max and arg min, corresponding to using the log semiring instead of the max-plus semiring (respectively min-plus semiring), and recovering the arg max or arg min by taking the limit is called "tropicalization" or "dequantization". It is also the case that, for any fixed β, if one input is much larger than the others "relative" to the temperature, formula_30, the output is approximately the arg max. For example, a difference of 10 is large relative to a temperature of 1: formula_31 However, if the difference is small relative to the temperature, the value is not close to the arg max.
For example, a difference of 10 is small relative to a temperature of 100: formula_32 As &amp;NoBreak;&amp;NoBreak;, temperature goes to zero, formula_33, so eventually all differences become large (relative to a shrinking temperature), which gives another interpretation for the limit behavior. Probability theory. In probability theory, the output of the softargmax function can be used to represent a categorical distribution – that is, a probability distribution over K different possible outcomes. Statistical mechanics. In statistical mechanics, the softargmax function is known as the Boltzmann distribution (or Gibbs distribution): the index set formula_34 are the microstates of the system; the inputs formula_6 are the energies of that state; the denominator is known as the partition function, often denoted by Z; and the factor β is called the coldness (or thermodynamic beta, or inverse temperature). Applications. The softmax function is used in various multiclass classification methods, such as multinomial logistic regression (also known as softmax regression), multiclass linear discriminant analysis, naive Bayes classifiers, and artificial neural networks. Specifically, in multinomial logistic regression and linear discriminant analysis, the input to the function is the result of K distinct linear functions, and the predicted probability for the jth class given a sample vector x and a weighting vector w is: formula_35 This can be seen as the composition of K linear functions formula_36 and the softmax function (where formula_37 denotes the inner product of formula_38 and formula_39). The operation is equivalent to applying a linear operator defined by formula_39 to vectors formula_38, thus transforming the original, probably highly-dimensional, input to vectors in a K-dimensional space formula_40. Neural networks. The standard softmax function is often used in the final layer of a neural network-based classifier. Such networks are commonly trained under a log loss (or cross-entropy) regime, giving a non-linear variant of multinomial logistic regression. Since the function maps a vector and a specific index formula_19 to a real value, the derivative needs to take the index into account: formula_41 This expression is symmetrical in the indexes formula_42 and thus may also be expressed as formula_43 Here, the Kronecker delta is used for simplicity (cf. the derivative of a sigmoid function, being expressed via the function itself). To ensure stable numerical computations subtracting the maximum value from the input vector is common. This approach, while not altering the output or the derivative theoretically, enhances stability by directly controlling the maximum exponent value computed. If the function is scaled with the parameter formula_44, then these expressions must be multiplied by formula_44. See multinomial logit for a probability model which uses the softmax activation function. Reinforcement learning. In the field of reinforcement learning, a softmax function can be used to convert values into action probabilities. The function commonly used is: formula_45 where the action value formula_46 corresponds to the expected reward of following action a and formula_47 is called a temperature parameter (in allusion to statistical mechanics). For high temperatures (formula_48), all actions have nearly the same probability and the lower the temperature, the more expected rewards affect the probability. 
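A rough sketch of this action-selection rule (the function name and the three action values below are invented for the example):

import numpy as np

def action_probabilities(q, tau):
    # softmax over action values q with temperature tau
    z = np.asarray(q, dtype=float) / tau
    w = np.exp(z - z.max())        # shift by the maximum for numerical stability
    return w / w.sum()

q = [1.0, 2.0, 3.0]                # hypothetical expected rewards for three actions
print(action_probabilities(q, tau=100.0))   # ~ [0.330, 0.333, 0.337], nearly uniform
print(action_probabilities(q, tau=0.1))     # ~ [0.000, 0.000, 1.000], nearly greedy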
For a low temperature (formula_49), the probability of the action with the highest expected reward tends to 1. Computational complexity and remedies. In neural network applications, the number K of possible outcomes is often large, e.g. in case of neural language models that predict the most likely outcome out of a vocabulary which might contain millions of possible words. This can make the calculations for the softmax layer (i.e. the matrix multiplications to determine the formula_6, followed by the application of the softmax function itself) computationally expensive. What's more, the gradient descent backpropagation method for training such a neural network involves calculating the softmax for every training example, and the number of training examples can also become large. The computational effort for the softmax became a major limiting factor in the development of larger neural language models, motivating various remedies to reduce training times. Approaches that reorganize the softmax layer for more efficient calculation include the hierarchical softmax and the differentiated softmax. The hierarchical softmax (introduced by Morin and Bengio in 2005) uses a binary tree structure where the outcomes (vocabulary words) are the leaves and the intermediate nodes are suitably selected "classes" of outcomes, forming latent variables. The desired probability (softmax value) of a leaf (outcome) can then be calculated as the product of the probabilities of all nodes on the path from the root to that leaf. Ideally, when the tree is balanced, this would reduce the computational complexity from formula_50 to formula_51. In practice, results depend on choosing a good strategy for clustering the outcomes into classes. A Huffman tree was used for this in Google's word2vec models (introduced in 2013) to achieve scalability. A second kind of remedies is based on approximating the softmax (during training) with modified loss functions that avoid the calculation of the full normalization factor. These include methods that restrict the normalization sum to a sample of outcomes (e.g. Importance Sampling, Target Sampling). Mathematical properties. Geometrically the softmax function maps the vector space formula_40 to the boundary of the standard formula_52-simplex, cutting the dimension by one (the range is a formula_53-dimensional simplex in formula_8-dimensional space), due to the linear constraint that all output sum to 1 meaning it lies on a hyperplane. Along the main diagonal formula_54 softmax is just the uniform distribution on outputs, formula_55: equal scores yield equal probabilities. More generally, softmax is invariant under translation by the same value in each coordinate: adding formula_56 to the inputs formula_57 yields formula_58, because it multiplies each exponent by the same factor, formula_59 (because formula_60), so the ratios do not change: formula_61 Geometrically, softmax is constant along diagonals: this is the dimension that is eliminated, and corresponds to the softmax output being independent of a translation in the input scores (a choice of 0 score). One can normalize input scores by assuming that the sum is zero (subtract the average: formula_62 where formula_63), and then the softmax takes the hyperplane of points that sum to zero, formula_64, to the open simplex of positive values that sum to 1formula_65, analogously to how the exponent takes 0 to 1, formula_66 and is positive. By contrast, softmax is not invariant under scaling. 
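The invariance under translation is also what justifies the numerical-stability trick mentioned earlier (subtracting the maximum input before exponentiating), while rescaling the inputs genuinely changes the result. A small check, with illustrative helper names:

import numpy as np

def softmax(z):
    w = np.exp(np.asarray(z, dtype=float))
    return w / w.sum()

def softmax_stable(z):
    z = np.asarray(z, dtype=float)
    w = np.exp(z - z.max())        # translation by -max(z) leaves the output unchanged
    return w / w.sum()

z = np.array([1.0, 2.0, 8.0])
print(np.allclose(softmax(z), softmax(z + 100.0)))                  # True: shift invariance
print(np.allclose(softmax_stable(z + 1000.0), softmax_stable(z)))   # True, with no risk of overflow
print(np.allclose(softmax(2.0 * z), softmax(z)))                    # False: scaling changes the output

The last line is the scaling statement in numerical form.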
For instance, formula_67 but formula_68 The standard logistic function is the special case for a 1-dimensional axis in 2-dimensional space, say the "x"-axis in the (x, y) plane. One variable is fixed at 0 (say formula_69), so formula_66, and the other variable can vary, denote it formula_70, so formula_71 the standard logistic function, and formula_72 its complement (meaning they add up to 1). The 1-dimensional input could alternatively be expressed as the line formula_73, with outputs formula_74 and formula_75 The softmax function is also the gradient of the LogSumExp function, a smooth maximum: formula_76 where the LogSumExp function is defined as formula_77. History. The softmax function was used in statistical mechanics as the Boltzmann distribution in the foundational paper , formalized and popularized in the influential textbook . The use of the softmax in decision theory is credited to R. Duncan Luce, who used the axiom of independence of irrelevant alternatives in rational choice theory to deduce the softmax in Luce's choice axiom for relative preferences. In machine learning, the term "softmax" is credited to John S. Bridle in two 1989 conference papers, : and : &lt;templatestyles src="Template:Blockquote/styles.css" /&gt;We are concerned with feed-forward non-linear networks (multi-layer perceptrons, or MLPs) with multiple outputs. We wish to treat the outputs of the network as probabilities of alternatives ("e.g." pattern classes), conditioned on the inputs. We look for appropriate output non-linearities and for appropriate criteria for adaptation of the parameters of the network ("e.g." weights). We explain two modifications: probability scoring, which is an alternative to squared error minimisation, and a normalised exponential (softmax) multi-input generalisation of the logistic non-linearity. &lt;templatestyles src="Template:Blockquote/styles.css" /&gt; Example. With an input of (1, 2, 3, 4, 1, 2, 3), the softmax is approximately (0.024, 0.064, 0.175, 0.475, 0.024, 0.064, 0.175). The output has most of its weight where the "4" was in the original input. This is what the function is normally used for: to highlight the largest values and suppress values which are significantly below the maximum value. But note: a change of "temperature" changes the output. When the temperature is multiplied by 10, the inputs are effectively (0.1, 0.2, 0.3, 0.4, 0.1, 0.2, 0.3) and the softmax is approximately (0.125, 0.138, 0.153, 0.169, 0.125, 0.138, 0.153). This shows that high temperatures de-emphasize the maximum value. Computation of this example using Python code: »&gt; import numpy as np »&gt; z = np.array([1.0, 2.0, 3.0, 4.0, 1.0, 2.0, 3.0]) »&gt; beta = 1.0 »&gt; np.exp(beta * z) / np.sum(np.exp(beta * z)) array([0.02364054, 0.06426166, 0.1746813, 0.474833, 0.02364054, 0.06426166, 0.1746813]) Alternatives. The softmax function generates probability predictions densely distributed over its support. Other functions like sparsemax or α-entmax can be used when sparse probability predictions are desired. Notes. &lt;templatestyles src="Reflist/styles.css" /&gt; References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "(0, 1)" }, { "math_id": 1, "text": "\\sigma\\colon \\R^K \\to (0, 1)^K" }, { "math_id": 2, "text": "K \\ge 1" }, { "math_id": 3, "text": "\\mathbf{z} = (z_1, \\dotsc, z_K) \\in \\R^K" }, { "math_id": 4, "text": "\\sigma(\\mathbf{z}) \\in (0, 1)^K" }, { "math_id": 5, "text": "\\sigma(\\mathbf{z})_i = \\frac{e^{z_i}}{\\sum_{j=1}^K e^{z_j}}\\,." }, { "math_id": 6, "text": "z_i" }, { "math_id": 7, "text": "\\mathbf z" }, { "math_id": 8, "text": "K" }, { "math_id": 9, "text": "\\sigma(\\mathbf z)" }, { "math_id": 10, "text": "(1,2,8)" }, { "math_id": 11, "text": "(0.001,0.002,0.997)" }, { "math_id": 12, "text": "b = e^\\beta" }, { "math_id": 13, "text": "b = e^{-\\beta}" }, { "math_id": 14, "text": "\\sigma(\\mathbf{z})_i = \\frac{e^{\\beta z_i}}{\\sum_{j=1}^K e^{\\beta z_j}} \\text{ or } \\sigma(\\mathbf{z})_i = \\frac{e^{-\\beta z_i}}{\\sum_{j=1}^K e^{-\\beta z_j}} \\text{ for } i = 1, \\dotsc , K ." }, { "math_id": 15, "text": "\\beta = 1 / kT" }, { "math_id": 16, "text": "1, \\dots, n" }, { "math_id": 17, "text": "\\operatorname{arg\\,max}(z_1,\\, \\dots,\\, z_n) = (y_1,\\, \\dots,\\, y_n) = (0,\\, \\dots,\\, 0,\\, 1,\\, 0,\\, \\dots,\\, 0)," }, { "math_id": 18, "text": "y_i = 1" }, { "math_id": 19, "text": "i" }, { "math_id": 20, "text": "(z_1, \\dots, z_n)" }, { "math_id": 21, "text": "(z_1,\\, \\dots,\\, z_n)" }, { "math_id": 22, "text": "\\operatorname{arg\\,max}(1, 5, 10) = (0, 0, 1)," }, { "math_id": 23, "text": "\\operatorname{arg\\,max}(1,\\, 5,\\, 5) = (0,\\, 1/2,\\, 1/2)," }, { "math_id": 24, "text": "\\operatorname{arg\\,max}(z, \\dots, z) = (1/n, \\dots, 1/n)." }, { "math_id": 25, "text": "\\sigma_\\beta(\\mathbf{z}) \\to \\operatorname{arg\\,max}(\\mathbf{z})." }, { "math_id": 26, "text": "\\sigma_\\beta(1,\\, 1.0001) \\to (0, 1)," }, { "math_id": 27, "text": "\\sigma_\\beta(1,\\, 0.9999) \\to (1,\\, 0)," }, { "math_id": 28, "text": "\\sigma_\\beta(1,\\, 1) = 1/2" }, { "math_id": 29, "text": "(x, x)" }, { "math_id": 30, "text": "T = 1/\\beta" }, { "math_id": 31, "text": "\\sigma(0,\\, 10) := \\sigma_1(0,\\, 10) = \\left(1/\\left(1 + e^{10}\\right),\\, e^{10}/\\left(1 + e^{10}\\right)\\right) \\approx (0.00005,\\, 0.99995)" }, { "math_id": 32, "text": "\\sigma_{1/100}(0,\\, 10) = \\left(1/\\left(1 + e^{1/10}\\right),\\, e^{1/10}/\\left(1 + e^{1/10}\\right)\\right) \\approx (0.475,\\, 0.525)." }, { "math_id": 33, "text": "T = 1/\\beta \\to 0" }, { "math_id": 34, "text": "{1,\\, \\dots,\\, k}" }, { "math_id": 35, "text": "P(y=j\\mid \\mathbf{x}) = \\frac{e^{\\mathbf{x}^\\mathsf{T}\\mathbf{w}_j}}{\\sum_{k=1}^K e^{\\mathbf{x}^\\mathsf{T}\\mathbf{w}_k}}" }, { "math_id": 36, "text": "\\mathbf{x} \\mapsto \\mathbf{x}^\\mathsf{T}\\mathbf{w}_1, \\ldots, \\mathbf{x} \\mapsto \\mathbf{x}^\\mathsf{T}\\mathbf{w}_K" }, { "math_id": 37, "text": "\\mathbf{x}^\\mathsf{T}\\mathbf{w}" }, { "math_id": 38, "text": "\\mathbf{x}" }, { "math_id": 39, "text": "\\mathbf{w}" }, { "math_id": 40, "text": "\\mathbb{R}^K" }, { "math_id": 41, "text": "\\frac{\\partial}{\\partial q_k}\\sigma(\\textbf{q}, i) = \\sigma(\\textbf{q}, i)(\\delta_{ik} - \\sigma(\\textbf{q}, k))." }, { "math_id": 42, "text": "i, k" }, { "math_id": 43, "text": " \\frac{\\partial}{\\partial q_k}\\sigma(\\textbf{q}, i) = \\sigma(\\textbf{q}, k)(\\delta_{ik} - \\sigma(\\textbf{q}, i))." 
}, { "math_id": 44, "text": "\\beta" }, { "math_id": 45, "text": "P_t(a) = \\frac{\\exp(q_t(a)/\\tau)}{\\sum_{i=1}^n\\exp(q_t(i)/\\tau)} \\text{,}" }, { "math_id": 46, "text": "q_t(a)" }, { "math_id": 47, "text": "\\tau" }, { "math_id": 48, "text": "\\tau \\to \\infty" }, { "math_id": 49, "text": "\\tau \\to 0^+" }, { "math_id": 50, "text": "O(K)" }, { "math_id": 51, "text": "O(\\log_2 K)" }, { "math_id": 52, "text": "(K-1)" }, { "math_id": 53, "text": "(K - 1)" }, { "math_id": 54, "text": "(x,\\, x,\\, \\dots,\\, x)," }, { "math_id": 55, "text": "(1/n, \\dots, 1/n)" }, { "math_id": 56, "text": "\\mathbf{c} = (c,\\, \\dots,\\, c)" }, { "math_id": 57, "text": "\\mathbf{z}" }, { "math_id": 58, "text": "\\sigma(\\mathbf{z} + \\mathbf{c}) = \\sigma(\\mathbf{z})" }, { "math_id": 59, "text": "e^c" }, { "math_id": 60, "text": "e^{z_i + c} = e^{z_i} \\cdot e^c" }, { "math_id": 61, "text": "\\sigma(\\mathbf{z} + \\mathbf{c})_j = \\frac{e^{z_j + c}}{\\sum_{k=1}^K e^{z_k + c}} = \\frac{e^{z_j} \\cdot e^c}{\\sum_{k=1}^K e^{z_k} \\cdot e^c} = \\sigma(\\mathbf{z})_j." }, { "math_id": 62, "text": "\\mathbf{c}" }, { "math_id": 63, "text": "c = \\frac{1}{n} \\sum z_i" }, { "math_id": 64, "text": "\\sum z_i = 0" }, { "math_id": 65, "text": "\\sum \\sigma(\\mathbf{z})_i = 1" }, { "math_id": 66, "text": "e^0 = 1" }, { "math_id": 67, "text": "\\sigma\\bigl((0,\\, 1)\\bigr) = \\bigl(1/(1 + e),\\, e/(1 + e)\\bigr)" }, { "math_id": 68, "text": "\\sigma\\bigl((0, 2)\\bigr) = \\bigl(1/\\left(1 + e^2\\right),\\, e^2/\\left(1 + e^2\\right)\\bigr)." }, { "math_id": 69, "text": "z_2 = 0" }, { "math_id": 70, "text": "z_1 = x" }, { "math_id": 71, "text": "e^{z_1}/\\sum_{k=1}^2 e^{z_k} = e^x/\\left(e^x + 1\\right)," }, { "math_id": 72, "text": "e^{z_2}/\\sum_{k=1}^2 e^{z_k} = 1/\\left(e^x + 1\\right)," }, { "math_id": 73, "text": "(x/2,\\, -x/2)" }, { "math_id": 74, "text": "e^{x/2}/\\left(e^{x/2} + e^{-x/2}\\right) = e^x/\\left(e^x + 1\\right)" }, { "math_id": 75, "text": "e^{-x/2}/\\left(e^{x/2} + e^{-x/2}\\right) = 1/\\left(e^x + 1\\right)." }, { "math_id": 76, "text": "\\frac{\\partial}{\\partial z_i} \\operatorname{LSE}(\\mathbf{z}) = \\frac{\\exp z_i}{\\sum_{j=1}^{K} \\exp z_j} = \\sigma(\\mathbf{z})_i, \\quad \\text{ for } i = 1, \\dotsc , K, \\quad \\mathbf{z} = (z_1,\\, \\dotsc,\\, z_K) \\in\\R^K," }, { "math_id": 77, "text": "\\operatorname{LSE}(z_1,\\, \\dots,\\, z_n) = \\log\\left(\\exp(z_1) + \\cdots + \\exp(z_n)\\right)" } ]
https://en.wikipedia.org/wiki?curid=6152185
615222
Multivariable calculus
Calculus of functions of several variables Multivariable calculus (also known as multivariate calculus) is the extension of calculus in one variable to calculus with functions of several variables: the differentiation and integration of functions involving multiple variables ("multivariate"), rather than just one. Multivariable calculus may be thought of as an elementary part of advanced calculus. For advanced calculus, see calculus on Euclidean space. The special case of calculus in three dimensional space is often called "vector calculus". Introduction. In single-variable calculus, operations like differentiation and integration are made to functions of a single variable. In multivariate calculus, it is required to generalize these to multiple variables, and the domain is therefore multi-dimensional. Care is therefore required in these generalizations, because of two key differences between 1D and higher dimensional spaces: The consequence of the first difference is the difference in the definition of the limit and differentiation. Directional limits and derivatives define the limit and differential along a 1D parametrized curve, reducing the problem to the 1D case. Further higher-dimensional objects can be constructed from these operators. The consequence of the second difference is the existence of multiple types of integration, including line integrals, surface integrals and volume integrals. Due to the non-uniqueness of these integrals, an antiderivative or indefinite integral cannot be properly defined. Limits. A study of limits and continuity in multivariable calculus yields many counterintuitive results not demonstrated by single-variable functions. A limit along a path may be defined by considering a parametrised path formula_0 in n-dimensional Euclidean space. Any function formula_1 can then be projected on the path as a 1D function formula_2. The limit of formula_3 to the point formula_4 along the path formula_5 can hence be defined as Note that the value of this limit can be dependent on the form of formula_5, i.e. the path chosen, not just the point which the limit approaches. For example, consider the function formula_6 If the point formula_7 is approached through the line formula_8, or in parametric form: Then the limit along the path will be: On the other hand, if the path formula_9 (or parametrically, formula_10) is chosen, then the limit becomes: Since taking different paths towards the same point yields different values, a general limit at the point formula_7 cannot be defined for the function. A general limit can be defined if the limits to a point along all possible paths converge to the same value, i.e. we say for a function formula_11 that the limit of formula_3 to some point formula_12 is L, if and only if for all continuous functions formula_0 such that formula_13. Continuity. From the concept of limit along a path, we can then derive the definition for multivariate continuity in the same manner, that is: we say for a function formula_11 that formula_3 is continuous at the point formula_14, if and only if for all continuous functions formula_0 such that formula_13. As with limits, being continuous along "one" path formula_5 does not imply multivariate continuity. Continuity in each argument not being sufficient for multivariate continuity can also be seen from the following example. 
For example, for a real-valued function formula_15 with two real-valued parameters, formula_16, continuity of formula_3 in formula_17 for fixed formula_18 and continuity of formula_3 in formula_18 for fixed formula_17 does not imply continuity of formula_3. Consider formula_19 It is easy to verify that this function is zero by definition on the boundary and outside of the quadrangle formula_20. Furthermore, the functions defined for constant formula_17 and formula_18 and formula_21 by formula_22 and formula_23 are continuous. Specifically, formula_24 for all x and y. Therefore, formula_25 and moreover, along the coordinate axes, formula_26 and formula_27. Therefore the function is continuous along both individual arguments. However, consider the parametric path formula_28. The parametric function becomes Therefore, It is hence clear that the function is not multivariate continuous, despite being continuous in both coordinates. Differentiation. Directional derivative. The derivative of a single-variable function is defined as Using the extension of limits discussed above, one can then extend the definition of the derivative to a scalar-valued function formula_32 along some path formula_0: Unlike limits, for which the value depends on the exact form of the path formula_5, it can be shown that the derivative along the path depends only on the tangent vector of the path at formula_4, i.e. formula_39, provided that formula_3 is Lipschitz continuous at formula_4, and that the limit exits for at least one such path. It is therefore possible to generate the definition of the directional derivative as follows: The directional derivative of a scalar-valued function formula_40 along the unit vector formula_41 at some point formula_12 is or, when expressed in terms of ordinary differentiation, which is a well defined expression because formula_42 is a scalar function with one variable in formula_38. It is not possible to define a unique scalar derivative without a direction; it is clear for example that formula_43. It is also possible for directional derivatives to exist for some directions but not for others. Partial derivative. The partial derivative generalizes the notion of the derivative to higher dimensions. A partial derivative of a multivariable function is a derivative with respect to one variable with all other variables held constant. A partial derivative may be thought of as the directional derivative of the function along a coordinate axis. Partial derivatives may be combined in interesting ways to create more complicated expressions of the derivative. In vector calculus, the del operator (formula_44) is used to define the concepts of gradient, divergence, and curl in terms of partial derivatives. A matrix of partial derivatives, the Jacobian matrix, may be used to represent the derivative of a function between two spaces of arbitrary dimension. The derivative can thus be understood as a linear transformation which directly varies from point to point in the domain of the function. Differential equations containing partial derivatives are called partial differential equations or PDEs. These equations are generally more difficult to solve than ordinary differential equations, which contain derivatives with respect to only one variable. Multiple integration. The multiple integral extends the concept of the integral to functions of any number of variables. Double and triple integrals may be used to calculate areas and volumes of regions in the plane and in space. 
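As a small numerical illustration of iterated integration (the integrand f(x, y) = x*y**2, the rectangle [0, 1] x [0, 2] and the use of SciPy's quadrature routines are chosen only for this sketch), a double integral over a rectangle can be computed by nesting two one-dimensional integrals:

from scipy import integrate

def f(x, y):
    return x * y**2

# inner integral over y for a fixed x, then an outer integral over x
inner = lambda x: integrate.quad(lambda y: f(x, y), 0.0, 2.0)[0]
outer, _ = integrate.quad(inner, 0.0, 1.0)
print(outer)   # ~ 1.3333, and the exact value is 4/3

# the same value from SciPy's double-integral routine, which expects func(y, x)
value, _ = integrate.dblquad(lambda y, x: f(x, y), 0.0, 1.0, 0.0, 2.0)
print(value)

Integrating first over x and then over y gives the same value here.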
Fubini's theorem guarantees that a multiple integral may be evaluated as a "repeated integral" or "iterated integral" as long as the integrand is continuous throughout the domain of integration. The surface integral and the line integral are used to integrate over curved manifolds such as surfaces and curves. Fundamental theorem of calculus in multiple dimensions. In single-variable calculus, the fundamental theorem of calculus establishes a link between the derivative and the integral. The link between the derivative and the integral in multivariable calculus is embodied by the integral theorems of vector calculus: the gradient theorem, Stokes' theorem, the divergence theorem, and Green's theorem. In a more advanced study of multivariable calculus, it is seen that these four theorems are specific incarnations of a more general theorem, the generalized Stokes' theorem, which applies to the integration of differential forms over manifolds. Applications and uses. Techniques of multivariable calculus are used to study many objects of interest in the material world. In particular, multivariable calculus can be applied to analyze deterministic systems that have multiple degrees of freedom. Functions with independent variables corresponding to each of the degrees of freedom are often used to model these systems, and multivariable calculus provides tools for characterizing the system dynamics. Multivariate calculus is used in the optimal control of continuous-time dynamic systems. It is used in regression analysis to derive formulas for estimating relationships among various sets of empirical data. Multivariable calculus is used in many fields of natural and social science and engineering to model and study high-dimensional systems that exhibit deterministic behavior. In economics, for example, consumer choice over a variety of goods, and producer choice over various inputs to use and outputs to produce, are modeled with multivariate calculus. Non-deterministic, or stochastic, systems can be studied using a different kind of mathematics, such as stochastic calculus. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "s(t): \\mathbb{R} \\to \\mathbb{R}^n" }, { "math_id": 1, "text": "f(\\overrightarrow{x}): \\mathbb{R}^n \\to \\mathbb{R}^m" }, { "math_id": 2, "text": "f(s(t))" }, { "math_id": 3, "text": "f" }, { "math_id": 4, "text": "s(t_0)" }, { "math_id": 5, "text": "s(t)" }, { "math_id": 6, "text": "f(x,y) = \\frac{x^2y}{x^4+y^2}." }, { "math_id": 7, "text": "(0,0)" }, { "math_id": 8, "text": "y=kx" }, { "math_id": 9, "text": "y=\\pm x^2" }, { "math_id": 10, "text": "x(t)=t,\\, y(t)=\\pm t^2" }, { "math_id": 11, "text": "f: \\mathbb{R}^n \\to \\mathbb{R}^m" }, { "math_id": 12, "text": "x_0 \\in \\mathbb{R}^n" }, { "math_id": 13, "text": "s(t_0)=x_0" }, { "math_id": 14, "text": "x_0" }, { "math_id": 15, "text": "f: \\mathbb{R}^2 \\to \\mathbb{R}" }, { "math_id": 16, "text": "f(x,y)" }, { "math_id": 17, "text": "x" }, { "math_id": 18, "text": "y" }, { "math_id": 19, "text": "\nf(x,y)=\n\\begin{cases}\n\\frac{y}{x}-y & \\text{if}\\quad 0 \\leq y < x \\leq 1 \\\\\n\\frac{x}{y}-x & \\text{if}\\quad 0 \\leq x < y \\leq 1 \\\\\n1-x & \\text{if}\\quad 0 < x=y \\\\\n0 & \\text{everywhere else}.\n\\end{cases}\n" }, { "math_id": 20, "text": "(0,1)\\times (0,1)" }, { "math_id": 21, "text": "0 \\le a \\le 1" }, { "math_id": 22, "text": "g_a(x) = f(x,a)\\quad" }, { "math_id": 23, "text": "\\quad h_a(y) = f(a,y)\\quad" }, { "math_id": 24, "text": "g_0(x) = f(x,0) = h_0(0,y) = f(0,y) = 0" }, { "math_id": 25, "text": "f(0,0)=0" }, { "math_id": 26, "text": "\\lim_{x \\to 0} f(x,0) = 0" }, { "math_id": 27, "text": "\\lim_{y \\to 0} f(0,y) = 0" }, { "math_id": 28, "text": "x(t) = t,\\, y(t) = t" }, { "math_id": 29, "text": "g: \\mathbb{R}^m \\to \\mathbb{R}^p" }, { "math_id": 30, "text": "f(x_0) \\in \\mathbb{R}^m" }, { "math_id": 31, "text": "g \\circ f: \\mathbb{R}^n \\to \\mathbb{R}^p" }, { "math_id": 32, "text": "f: \\mathbb{R}^n \\to \\mathbb{R}" }, { "math_id": 33, "text": "g: \\mathbb{R}^n \\to \\mathbb{R}" }, { "math_id": 34, "text": "fg: \\mathbb{R}^n \\to \\mathbb{R}" }, { "math_id": 35, "text": "f/g : \\mathbb{R}^n \\to \\mathbb{R}" }, { "math_id": 36, "text": "g(x_0) \\neq 0" }, { "math_id": 37, "text": "|f|" }, { "math_id": 38, "text": "t" }, { "math_id": 39, "text": "s'(t_0)" }, { "math_id": 40, "text": "f:\\mathbb{R}^n \\to \\mathbb{R}" }, { "math_id": 41, "text": "\\hat{\\bold{u}}" }, { "math_id": 42, "text": "f(x_0+\\hat{\\bold{u}}t)" }, { "math_id": 43, "text": "\\nabla_{\\hat{\\bold{u}}}f(x_0) = - \\nabla_{-\\hat{\\bold{u}}}f(x_0)" }, { "math_id": 44, "text": "\\nabla" } ]
https://en.wikipedia.org/wiki?curid=615222
6152753
Landau set
Generalized Condorcet set In the study of electoral systems, the uncovered set (also called the Landau set or the Fishburn set) is a set of candidates that generalizes the notion of a Condorcet winner whenever there is a Condorcet paradox. The Landau set can be thought of as the Pareto frontier for a set of candidates, when the frontier is determined by pairwise victories. The Landau set is a nonempty subset of the Smith set. It was first discovered by Nicholas Miller. Definition. The Landau set consists of all "undominated" or "uncovered" candidates. One candidate (the "Fishburn winner") covers another (the "Fishburn loser") if the Fishburn winner would win any matchup that the Fishburn loser would win. Thus, the Fishburn winner has all the pairwise victories of the Fishburn loser, as well as at least one other pairwise victory. In set-theoretic notation, an uncovered candidate formula_0 is one such that for every other candidate formula_1, there is some candidate formula_2 (possibly the same as formula_0 or formula_1) such that formula_2 is not preferred to formula_0 and formula_1 is not preferred to formula_2. The covering relation can be checked directly from a table of pairwise results, as in the sketch below. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
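The covering relation lends itself to a short computation. In the sketch below everything is invented for illustration: four hypothetical candidates and a made-up table of pairwise victories; a candidate is kept if no other candidate covers it in the sense defined above.

# Pairwise victories among four hypothetical candidates: beats[x] is the set of
# candidates that x defeats head to head. The table is invented for illustration.
beats = {
    "A": {"B", "C"},
    "B": {"C", "D"},
    "C": {"D"},
    "D": {"A"},
}

def covers(winner, loser):
    # winner covers loser when it has every pairwise victory of loser, plus at least one more
    return beats[loser].issubset(beats[winner]) and beats[loser] != beats[winner]

uncovered = {x for x in beats if not any(covers(y, x) for y in beats if y != x)}
print(sorted(uncovered))   # ['A', 'B', 'D']: C is covered by B, which beats C and everything C beats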
[ { "math_id": 0, "text": "x" }, { "math_id": 1, "text": "z" }, { "math_id": 2, "text": "y" } ]
https://en.wikipedia.org/wiki?curid=6152753
61528407
Alírio Rodrigues
Portuguese chemical engineer Alírio Rodrigues is a Portuguese chemical engineer. He is emeritus Professor of Chemical Engineering at Universidade do Porto and Director of the Laboratory of Separation and Reaction Engineering. His research interests are in the fields of chemical engineering, bioengineering and materials engineering. He is the author of over 600 articles on catalysis and reaction engineering, a number of books, and six patents. He is among the most cited chemical engineers according to the Shanghai Academic Ranking of World Universities. He is on the Editorial Board of Chemical Engineering Journal. The press has taken an interest in his research on perfumed clothing. Rodrigues equation. The Rodrigues equation is an extension of the Van Deemter equation used to describe the efficiency of a bed of permeable (large-pore) particles. The equation is: formula_0 where HETP is the height equivalent to a theoretical plate, the function f is defined by formula_1, and formula_2 is the intraparticle Péclet number. A numerical sketch of the equation is given below. Awards and honours. In 2008 he and Viviana Silva were awarded the Sustainability Award of the Institution of Chemical Engineers (IChemE). Rodrigues received the 2019 medal for excellence in I+D+i of the Spanish association of chemists and chemical engineers in "recognition of an extraordinary research career in the area of separation processes and chemical reaction". References. &lt;templatestyles src="Reflist/styles.css" /&gt;
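The equation is simple to evaluate numerically. In the sketch below all numbers are illustrative and are not taken from Rodrigues' publications; the code only implements the two formulas above:

import numpy as np

def f(lam):
    # f(lambda) = (3 / lambda) * (1 / tanh(lambda) - 1 / lambda)
    return (3.0 / lam) * (1.0 / np.tanh(lam) - 1.0 / lam)

def hetp(u, lam, A, B, C):
    # Rodrigues equation: HETP = A + B / u + C * f(lambda) * u
    return A + B / u + C * f(lam) * u

print(f(0.1), f(10.0))   # f approaches 1 for small lambda and roughly 3/lambda for large lambda
print(hetp(u=0.5, lam=1.0, A=1.0, B=0.5, C=0.8))

For small values of the Péclet number, f(λ) approaches 1 and the expression reduces to the classical Van Deemter form A + B/u + Cu.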
[ { "math_id": 0, "text": " HETP = A + \\frac{B}{u} + C \\cdot f( \\lambda ) \\cdot u " }, { "math_id": 1, "text": " f( \\lambda ) = \\frac{3}{ \\lambda } \\left [ \\frac{1}{\\tanh( \\lambda )} - \\frac{1}{ \\lambda } \\right ] " }, { "math_id": 2, "text": "\\lambda" } ]
https://en.wikipedia.org/wiki?curid=61528407
61530151
Adrian Kent
British theoretical physicist Adrian Kent is a British theoretical physicist, Professor of Quantum Physics at the University of Cambridge, member of the Centre for Quantum Information and Foundations, and Distinguished Visiting Research Chair at the Perimeter Institute for Theoretical Physics. His research areas are the foundations of quantum theory, quantum information science and quantum cryptography. He is known as the inventor of relativistic quantum cryptography. In 1999 he published the first unconditionally secure protocols for bit commitment and coin tossing, which were also the first relativistic cryptographic protocols. He is a co-inventor of quantum tagging, or quantum position authentication, providing the first schemes for position-based quantum cryptography. In 2005 he published with Lucien Hardy and Jonathan Barrett the first security proof of quantum key distribution based on the no-signalling principle. Work. Field theory. Kent's early contributions to physics were on topics related to conformal field theory. Together with Peter Goddard and David Olive, he devised the coset construction that classifies the unitary highest weight representations of the Virasoro algebra, and he described the Virasoro algebra's singular vectors. In addition, he investigated the representation theory of formula_0 superconformal algebras. Quantum cryptography. Kent is inventor of the field of relativistic quantum cryptography, where security of the cryptographic tasks is guaranteed from the properties of quantum information and from the relativistic physical principle stating that information cannot travel faster than the speed of light (no-signalling). In 1999 he published the first unconditionally secure protocols for bit commitment and strong coin tossing, relativistic protocols that evade no-go theorem by Mayers, Lo and Chau, and by Lo and Chau, respectively. He is a co-inventor of quantum tagging, or quantum position authentication, where the properties of quantum information and the no-signalling principle are used to authenticate the location of an object. He published with Lucien Hardy and Jonathan Barrett the first security proof for quantum key distribution based on the no-signalling principle, where two parties can generate a secure secret key even if their devices are not trusted and they are not described by quantum theory, as long as they satisfy the no-signalling principle. With Roger Colbeck, he invented quantum randomness expansion, a task where an initial private random string is expanded into a larger private random string. Quantum foundations. Kent is a critic of the many-worlds interpretation of quantum mechanics, as well as the consistent histories interpretation. He has outlined a solution to the quantum reality problem, also called the quantum measurement problem, that is consistent with relativistic quantum theory, proposing that physical reality is described by a randomly chosen configuration of physical quantities (or beables) like the stress–energy tensor, whose sample space is mathematically well defined and respects Lorentzian symmetry. He has proposed Causal Quantum Theory as an extension of quantum theory, according to which local causality holds and the reduction of the quantum state is a well-defined physical process, claiming that current Bell-type experiments have not completely ruled out this theory. He discovered the no-summoning theorem, which extends the no-cloning theorem of quantum information to Minkowski spacetime. Other work. 
Kent is a member of the advisory panel for the Cambridge Centre for the Study of Existential Risk. He has discussed the mathematics of risk assessments for global catastrophes. He has proposed a solution to Fermi’s paradox, hypothesizing that various intelligent extra-terrestrial civilizations have existed, interacted and competed for resources, and have evolved to avoid advertising their existence. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "N=2" } ]
https://en.wikipedia.org/wiki?curid=61530151
61532
Absolute convergence
Mode of convergence of an infinite series In mathematics, an infinite series of numbers is said to converge absolutely (or to be absolutely convergent) if the sum of the absolute values of the summands is finite. More precisely, a real or complex series formula_0 is said to converge absolutely if formula_1 for some real number formula_2 Similarly, an improper integral of a function, formula_3 is said to converge absolutely if the integral of the absolute value of the integrand is finite—that is, if formula_4 A convergent series that is not absolutely convergent is called conditionally convergent. Absolute convergence is important for the study of infinite series, because its definition guarantees that a series will have some "nice" behaviors of finite sums that not all convergent series possess. For instance, rearrangements do not change the value of the sum, which is not necessarily true for conditionally convergent series. Background. When adding a finite number of terms, addition is both associative and commutative, meaning that grouping and rearrangment do not alter the final sum. For instance, formula_5 is equal to both formula_6 and formula_7. However, associativity and commutativity do not necessarily hold for infinite sums. One example is the alternating harmonic series formula_8 whose terms are fractions that alternate in sign. This series is convergent and can be evaluated using the Maclaurin series for the function formula_9, which converges for all formula_10 satisfying formula_11: formula_12 Substituting formula_13 reveals that the original sum is equal to formula_14. The sum can also be rearranged as follows: formula_15 In this rearrangement, the reciprocal of each odd number is grouped with the reciprocal of twice its value, while the reciprocals of every multiple of 4 are evaluated separately. However, evaluating the terms inside the parentheses yields formula_16 or half the original series. The violation of the associativity and commutativity of addition reveals that the alternating harmonic series is conditionally convergent. Indeed, the sum of the absolute values of each term is formula_17, or the divergent harmonic series. According to the Riemann series theorem, any conditionally convergent series can be permuted so that its sum is any finite real number or so that it diverges. When an absolutely convergent series is rearranged, its sum is always preserved. Definition for real and complex numbers. A sum of real numbers or complex numbers formula_18 is absolutely convergent if the sum of the absolute values of the terms formula_19 converges. Sums of more general elements. The same definition can be used for series formula_18 whose terms formula_20 are not numbers but rather elements of an arbitrary abelian topological group. In that case, instead of using the absolute value, the definition requires the group to have a norm, which is a positive real-valued function formula_21 on an abelian group formula_22 (written additively, with identity element 0) such that: In this case, the function formula_30 induces the structure of a metric space (a type of topology) on formula_31 Then, a formula_22-valued series is absolutely convergent if formula_32 In particular, these statements apply using the norm formula_33 (absolute value) in the space of real numbers or complex numbers. In topological vector spaces. 
If formula_34 is a topological vector space (TVS) and formula_35 is a (possibly uncountable) family in formula_34 then this family is absolutely summable if If formula_34 is a normable space and if formula_35 is an absolutely summable family in formula_38 then necessarily all but a countable collection of formula_46's are 0. Absolutely summable families play an important role in the theory of nuclear spaces. Relation to convergence. If formula_22 is complete with respect to the metric formula_47 then every absolutely convergent series is convergent. The proof is the same as for complex-valued series: use the completeness to derive the Cauchy criterion for convergence—a series is convergent if and only if its tails can be made arbitrarily small in norm—and apply the triangle inequality. In particular, for series with values in any Banach space, absolute convergence implies convergence. The converse is also true: if absolute convergence implies convergence in a normed space, then the space is a Banach space. If a series is convergent but not absolutely convergent, it is called conditionally convergent. An example of a conditionally convergent series is the alternating harmonic series. Many standard tests for divergence and convergence, most notably including the ratio test and the root test, demonstrate absolute convergence. This is because a power series is absolutely convergent on the interior of its disk of convergence. Proof that any absolutely convergent series of complex numbers is convergent. Suppose that formula_48 is convergent. Then equivalently, formula_49 is convergent, which implies that formula_50 and formula_51 converge by termwise comparison of non-negative terms. It suffices to show that the convergence of these series implies the convergence of formula_52 and formula_53 for then, the convergence of formula_54 would follow, by the definition of the convergence of complex-valued series. The preceding discussion shows that we need only prove that convergence of formula_55 implies the convergence of formula_56 Let formula_55 be convergent. Since formula_57 we have formula_58 Since formula_59 is convergent, formula_60 is a bounded monotonic sequence of partial sums, and formula_61 must also converge. Noting that formula_62 is the difference of convergent series, we conclude that it too is a convergent series, as desired. Alternative proof using the Cauchy criterion and triangle inequality. By applying the Cauchy criterion for the convergence of a complex series, we can also prove this fact as a simple implication of the triangle inequality. By the Cauchy criterion, formula_63 converges if and only if for any formula_64 there exists formula_65 such that formula_66 for any formula_67 But the triangle inequality implies that formula_68 so that formula_69 for any formula_70 which is exactly the Cauchy criterion for formula_71 Proof that any absolutely convergent series in a Banach space is convergent. The above result can be easily generalized to every Banach space formula_72 Let formula_73 be an absolutely convergent series in formula_74 As formula_75 is a Cauchy sequence of real numbers, for any formula_76 and large enough natural numbers formula_77 it holds: formula_78 By the triangle inequality for the norm ǁ⋅ǁ, one immediately gets: formula_79 which means that formula_80 is a Cauchy sequence in formula_38 hence the series is convergent in formula_74 Rearrangements and unconditional convergence. Real and complex numbers. 
When a series of real or complex numbers is absolutely convergent, any rearrangement or reordering of that series' terms will still converge to the same value. This fact is one reason absolutely convergent series are useful: showing a series is absolutely convergent allows terms to be paired or rearranged in convenient ways without changing the sum's value. The Riemann rearrangement theorem shows that the converse is also true: every real or complex-valued series whose terms cannot be reordered to give a different value is absolutely convergent. Series with coefficients in more general space. The term unconditional convergence is used to refer to a series where any rearrangement of its terms still converges to the same value. For any series with values in a normed abelian group formula_22, as long as formula_22 is complete, every series which converges absolutely also converges unconditionally. Stated more formally: &lt;templatestyles src="Math_theorem/styles.css" /&gt; Theorem — Let formula_22 be a normed abelian group. Suppose formula_81 If formula_82 is any permutation, then formula_83 For series with more general coefficients, the converse is more complicated. As stated in the previous section, for real-valued and complex-valued series, unconditional convergence always implies absolute convergence. However, in the more general case of a series with values in any normed abelian group formula_22, the converse does not always hold: there can exist series which are not absolutely convergent, yet unconditionally convergent. For example, in the Banach space ℓ∞, one series which is unconditionally convergent but not absolutely convergent is: formula_84 where formula_85 is an orthonormal basis. A theorem of A. Dvoretzky and C. A. Rogers asserts that every infinite-dimensional Banach space has an unconditionally convergent series that is not absolutely convergent. Proof of the theorem. For any formula_64 we can choose some formula_86 such that: formula_87 Let formula_88 where formula_89 so that formula_90 is the smallest natural number such that the list formula_91 includes all of the terms formula_92 (and possibly others). Finally for any integer formula_93 let formula_94 so that formula_95 and thus formula_96 This shows that formula_97 that is: formula_98 Q.E.D. Products of series. The Cauchy product of two series converges to the product of the sums if at least one of the series converges absolutely. That is, suppose that formula_99 The Cauchy product is defined as the sum of terms formula_100 where: formula_101 If either the formula_20 or formula_102 sum converges absolutely then formula_103 Absolute convergence over sets. A generalization of the absolute convergence of a series, is the absolute convergence of a sum of a function over a set. We can first consider a countable set formula_34 and a function formula_104 We will give a definition below of the sum of formula_105 over formula_38 written as formula_106 First note that because no particular enumeration (or "indexing") of formula_34 has yet been specified, the series formula_107 cannot be understood by the more basic definition of a series. In fact, for certain examples of formula_34 and formula_108 the sum of formula_105 over formula_34 may not be defined at all, since some indexing may produce a conditionally convergent series. Therefore we define formula_109 only in the case where there exists some bijection formula_110 such that formula_111 is absolutely convergent. 
Note that here, "absolutely convergent" uses the more basic definition, applied to an indexed series. In this case, the value of the sum of formula_105 over formula_34 is defined by formula_112 Note that because the series is absolutely convergent, then every rearrangement is identical to a different choice of bijection formula_113 Since all of these sums have the same value, then the sum of formula_105 over formula_34 is well-defined. Even more generally we may define the sum of formula_105 over formula_34 when formula_34 is uncountable. But first we define what it means for the sum to be convergent. Let formula_34 be any set, countable or uncountable, and formula_114 a function. We say that the sum of formula_105 over formula_34 converges absolutely if formula_115 There is a theorem which states that, if the sum of formula_105 over formula_34 is absolutely convergent, then formula_105 takes non-zero values on a set that is at most countable. Therefore, the following is a consistent definition of the sum of formula_105 over formula_34 when the sum is absolutely convergent. formula_116 Note that the final series uses the definition of a series over a countable set. Some authors define an iterated sum formula_117 to be absolutely convergent if the iterated series formula_118 This is in fact equivalent to the absolute convergence of formula_119 That is to say, if the sum of formula_105 over formula_38 formula_120 converges absolutely, as defined above, then the iterated sum formula_117 converges absolutely, and vice versa. Absolute convergence of integrals. The integral formula_121 of a real or complex-valued function is said to converge absolutely if formula_122 One also says that formula_105 is absolutely integrable. The issue of absolute integrability is intricate and depends on whether the Riemann, Lebesgue, or Kurzweil-Henstock (gauge) integral is considered; for the Riemann integral, it also depends on whether we only consider integrability in its proper sense (formula_105 and formula_40 both bounded), or permit the more general case of improper integrals. As a standard property of the Riemann integral, when formula_123 is a bounded interval, every continuous function is bounded and (Riemann) integrable, and since formula_105 continuous implies formula_124 continuous, every continuous function is absolutely integrable. In fact, since formula_125 is Riemann integrable on formula_126 if formula_105 is (properly) integrable and formula_127 is continuous, it follows that formula_128 is properly Riemann integrable if formula_105 is. However, this implication does not hold in the case of improper integrals. For instance, the function formula_129 is improperly Riemann integrable on its unbounded domain, but it is not absolutely integrable: formula_130 Indeed, more generally, given any series formula_131 one can consider the associated step function formula_132 defined by formula_133 Then formula_134 converges absolutely, converges conditionally or diverges according to the corresponding behavior of formula_135 The situation is different for the Lebesgue integral, which does not handle bounded and unbounded domains of integration separately ("see below"). The fact that the integral of formula_124 is unbounded in the examples above implies that formula_105 is also not integrable in the Lebesgue sense. In fact, in the Lebesgue theory of integration, given that formula_105 is measurable, formula_105 is (Lebesgue) integrable if and only if formula_124 is (Lebesgue) integrable. 
However, the hypothesis that formula_105 is measurable is crucial; it is not generally true that absolutely integrable functions on formula_126 are integrable (simply because they may fail to be measurable): let formula_136 be a nonmeasurable subset and consider formula_137 where formula_138 is the characteristic function of formula_139 Then formula_105 is not Lebesgue measurable and thus not integrable, but formula_140 is a constant function and clearly integrable. On the other hand, a function formula_105 may be Kurzweil-Henstock integrable (gauge integrable) while formula_124 is not. This includes the case of improperly Riemann integrable functions. In a general sense, on any measure space formula_141 the Lebesgue integral of a real-valued function is defined in terms of its positive and negative parts, so the facts: are essentially built into the definition of the Lebesgue integral. In particular, applying the theory to the counting measure on a set formula_142 one recovers the notion of unordered summation of series developed by Moore–Smith using (what are now called) nets. When formula_143 is the set of natural numbers, Lebesgue integrability, unordered summability and absolute convergence all coincide. Finally, all of the above holds for integrals with values in a Banach space. The definition of a Banach-valued Riemann integral is an evident modification of the usual one. For the Lebesgue integral one needs to circumvent the decomposition into positive and negative parts with Daniell's more functional analytic approach, obtaining the Bochner integral. Notes. &lt;templatestyles src="Reflist/styles.css" /&gt; &lt;templatestyles src="Reflist/styles.css" /&gt; References. &lt;templatestyles src="Reflist/styles.css" /&gt;
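The rearrangement phenomenon described in the background discussion above is easy to observe numerically. The following sketch (illustrative only) sums a large number of terms of the alternating harmonic series in its natural order and in the rearranged order 1 - 1/2 - 1/4 + 1/3 - 1/6 - 1/8 + ..., which approach ln 2 and (ln 2)/2 respectively:

import numpy as np

def alternating_harmonic(n):
    # partial sum of 1 - 1/2 + 1/3 - 1/4 + ... with n terms
    k = np.arange(1, n + 1)
    return np.sum((-1.0) ** (k + 1) / k)

def rearranged(n_blocks):
    # partial sum of the rearrangement 1 - 1/2 - 1/4 + 1/3 - 1/6 - 1/8 + ...
    total = 0.0
    for k in range(1, n_blocks + 1):
        odd = 2 * k - 1
        total += 1.0 / odd - 1.0 / (2 * odd) - 1.0 / (4 * k)
    return total

print(alternating_harmonic(10**6))      # ~ 0.69315 ~ ln 2
print(rearranged(333333))               # ~ 0.34657 ~ (ln 2) / 2
print(np.log(2.0), np.log(2.0) / 2.0)

An absolutely convergent series, by contrast, gives the same value for every ordering of its terms.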
[ { "math_id": 0, "text": "\\textstyle\\sum_{n=0}^\\infty a_n" }, { "math_id": 1, "text": "\\textstyle\\sum_{n=0}^\\infty \\left|a_n\\right| = L" }, { "math_id": 2, "text": "\\textstyle L." }, { "math_id": 3, "text": "\\textstyle\\int_0^\\infty f(x)\\,dx," }, { "math_id": 4, "text": "\\textstyle\\int_0^\\infty |f(x)|dx = L." }, { "math_id": 5, "text": "(1+2)+3" }, { "math_id": 6, "text": "1+(2+3)" }, { "math_id": 7, "text": "(3+2)+1" }, { "math_id": 8, "text": "S = \\sum_{n=1}^\\infty \\frac{(-1)^{n+1}}{n}=1-\\frac12+\\frac13-\\frac14+\\frac15-\\frac16+\\cdots " }, { "math_id": 9, "text": "\\ln(1+x) " }, { "math_id": 10, "text": "x " }, { "math_id": 11, "text": "-1<x\\leq1 " }, { "math_id": 12, "text": "\\ln(1+x)=\\sum_{n=1}^\\infty \\frac{(-1)^{n+1} x^n}{n}=x-\\frac{x^2}{2}+\\frac{x^3}{3}-\\frac{x^4}{4}+\\cdots " }, { "math_id": 13, "text": "x=1 " }, { "math_id": 14, "text": "\\ln2 " }, { "math_id": 15, "text": "S=\\left(1-\\frac12\\right)-\\frac14+\\left(\\frac13-\\frac{1}{6}\\right)-\\frac{1}{8}+\\left(\\frac15-\\frac{1}{10}\\right)-\\frac{1}{12}+\\cdots " }, { "math_id": 16, "text": "S=\\frac12-\\frac14+\\frac16-\\frac18+\\frac{1}{10}-\\frac{1}{12}+\\cdots " }, { "math_id": 17, "text": "1+\\frac12+\\frac13+\\frac14+\\cdots " }, { "math_id": 18, "text": "\\sum_{n=0}^{\\infty} a_n" }, { "math_id": 19, "text": "\\sum_{n=0}^{\\infty} |a_n|" }, { "math_id": 20, "text": "a_n" }, { "math_id": 21, "text": "\\|\\cdot\\|: G \\to \\R_+" }, { "math_id": 22, "text": "G" }, { "math_id": 23, "text": "\\|0\\| = 0." }, { "math_id": 24, "text": "x \\in G," }, { "math_id": 25, "text": "\\|x\\| = 0" }, { "math_id": 26, "text": "x = 0." }, { "math_id": 27, "text": "\\|-x\\| = \\|x\\|." }, { "math_id": 28, "text": "x, y \\in G," }, { "math_id": 29, "text": "\\|x+y\\| \\leq \\|x\\| + \\|y\\|." }, { "math_id": 30, "text": "d(x,y) = \\|x-y\\|" }, { "math_id": 31, "text": "G." }, { "math_id": 32, "text": "\\sum_{n=0}^{\\infty} \\|a_n\\| < \\infty." }, { "math_id": 33, "text": "|x|" }, { "math_id": 34, "text": "X" }, { "math_id": 35, "text": "\\left(x_\\alpha\\right)_{\\alpha \\in A}" }, { "math_id": 36, "text": "\\lim_{H \\in \\mathcal{F}(A)} x_H" }, { "math_id": 37, "text": "\\left(x_H\\right)_{H \\in \\mathcal{F}(A)}" }, { "math_id": 38, "text": "X," }, { "math_id": 39, "text": "\\mathcal{F}(A)" }, { "math_id": 40, "text": "A" }, { "math_id": 41, "text": "\\subseteq" }, { "math_id": 42, "text": "x_H := \\sum_{i \\in H} x_i" }, { "math_id": 43, "text": "p" }, { "math_id": 44, "text": "\\left(p \\left(x_\\alpha\\right)\\right)_{\\alpha \\in A}" }, { "math_id": 45, "text": "\\R." }, { "math_id": 46, "text": "x_\\alpha" }, { "math_id": 47, "text": "d," }, { "math_id": 48, "text": "\\sum \\left|a_k\\right|, a_k \\in \\Complex" }, { "math_id": 49, "text": "\\sum \\left[ \\operatorname{Re}\\left(a_k\\right)^2 + \\operatorname{Im}\\left(a_k\\right)^2 \\right]^{1/2}" }, { "math_id": 50, "text": "\\sum \\left|\\operatorname{Re}\\left(a_k\\right)\\right|" }, { "math_id": 51, "text": "\\sum\\left|\\operatorname{Im}\\left(a_k\\right)\\right|" }, { "math_id": 52, "text": "\\sum \\operatorname{Re}\\left(a_k\\right)" }, { "math_id": 53, "text": "\\sum \\operatorname{Im}\\left(a_k\\right)," }, { "math_id": 54, "text": "\\sum a_k=\\sum \\operatorname{Re}\\left(a_k\\right) + i \\sum \\operatorname{Im}\\left(a_k\\right)" }, { "math_id": 55, "text": "\\sum \\left|a_k\\right|, a_k\\in\\R" }, { "math_id": 56, "text": "\\sum a_k." 
}, { "math_id": 57, "text": "0 \\leq a_k + \\left|a_k\\right| \\leq 2\\left|a_k\\right|," }, { "math_id": 58, "text": "0 \\leq \\sum_{k = 1}^n (a_k + \\left|a_k\\right|) \\leq \\sum_{k = 1}^n 2\\left|a_k\\right|." }, { "math_id": 59, "text": "\\sum 2\\left|a_k\\right|" }, { "math_id": 60, "text": "s_n=\\sum_{k = 1}^n \\left(a_k + \\left|a_k\\right|\\right)" }, { "math_id": 61, "text": "\\sum \\left(a_k + \\left|a_k\\right|\\right)" }, { "math_id": 62, "text": "\\sum a_k = \\sum \\left(a_k + \\left|a_k\\right|\\right) - \\sum \\left|a_k\\right|" }, { "math_id": 63, "text": "\\sum |a_i|" }, { "math_id": 64, "text": "\\varepsilon > 0," }, { "math_id": 65, "text": "N" }, { "math_id": 66, "text": "\\left|\\sum_{i=m}^n \\left|a_i\\right| \\right| = \\sum_{i=m}^n |a_i| < \\varepsilon" }, { "math_id": 67, "text": "n > m \\geq N." }, { "math_id": 68, "text": "\\big|\\sum_{i=m}^n a_i\\big| \\leq \\sum_{i=m}^n |a_i|," }, { "math_id": 69, "text": "\\left|\\sum_{i=m}^n a_i\\right| < \\varepsilon" }, { "math_id": 70, "text": "n > m \\geq N," }, { "math_id": 71, "text": "\\sum a_i." }, { "math_id": 72, "text": "(X, \\|\\,\\cdot\\,\\|)." }, { "math_id": 73, "text": "\\sum x_n" }, { "math_id": 74, "text": "X." }, { "math_id": 75, "text": "\\sum_{k=1}^n\\|x_k\\|" }, { "math_id": 76, "text": "\\varepsilon > 0" }, { "math_id": 77, "text": "m > n" }, { "math_id": 78, "text": "\\left| \\sum_{k=1}^m \\|x_k\\| - \\sum_{k=1}^n \\|x_k\\| \\right| = \\sum_{k=n+1}^m \\|x_k\\| < \\varepsilon." }, { "math_id": 79, "text": "\\left\\|\\sum_{k=1}^m x_k - \\sum_{k=1}^n x_k\\right\\| = \\left\\|\\sum_{k=n+1}^m x_k\\right\\| \\leq \\sum_{k=n+1}^m \\|x_k\\| < \\varepsilon," }, { "math_id": 80, "text": "\\sum_{k=1}^n x_k" }, { "math_id": 81, "text": "\\sum_{i=1}^\\infty a_i = A \\in G, \\quad \\sum_{i=1}^\\infty \\|a_i\\|<\\infty." }, { "math_id": 82, "text": "\\sigma : \\N \\to \\N" }, { "math_id": 83, "text": "\\sum_{i=1}^\\infty a_{\\sigma(i)}=A." 
}, { "math_id": 84, "text": "\\sum_{n=1}^\\infty \\tfrac{1}{n} e_n," }, { "math_id": 85, "text": "\\{e_n\\}_{n=1}^{\\infty}" }, { "math_id": 86, "text": "\\kappa_\\varepsilon, \\lambda_\\varepsilon \\in \\N," }, { "math_id": 87, "text": "\\begin{align}\n\\text{ for all } N > \\kappa_\\varepsilon &\\quad \\sum_{n=N}^\\infty \\|a_n\\| < \\tfrac{\\varepsilon}{2} \\\\ \n\\text{ for all } N > \\lambda_\\varepsilon &\\quad \\left\\|\\sum_{n=1}^N a_n - A\\right\\| < \\tfrac{\\varepsilon}{2}\n\\end{align}" }, { "math_id": 88, "text": "\\begin{align}\nN_\\varepsilon &=\\max \\left\\{\\kappa_\\varepsilon, \\lambda_\\varepsilon \\right\\} \\\\\nM_{\\sigma,\\varepsilon} &= \\max \\left\\{\\sigma^{-1}\\left(\\left\\{ 1, \\ldots, N_\\varepsilon \\right\\}\\right)\\right\\}\n\\end{align}" }, { "math_id": 89, "text": "\\sigma^{-1}\\left(\\left\\{1, \\ldots, N_\\varepsilon\\right\\}\\right) = \\left\\{\\sigma^{-1}(1), \\ldots, \\sigma^{-1}\\left(N_\\varepsilon\\right)\\right\\}" }, { "math_id": 90, "text": "M_{\\sigma,\\varepsilon}" }, { "math_id": 91, "text": "a_{\\sigma(1)}, \\ldots, a_{\\sigma\\left(M_{\\sigma,\\varepsilon}\\right)}" }, { "math_id": 92, "text": "a_1, \\ldots, a_{N_\\varepsilon}" }, { "math_id": 93, "text": " N > M_{\\sigma,\\varepsilon}" }, { "math_id": 94, "text": "\\begin{align}\nI_{\\sigma,\\varepsilon} &= \\left\\{ 1,\\ldots,N \\right\\}\\setminus \\sigma^{-1}\\left(\\left \\{ 1, \\ldots, N_\\varepsilon \\right \\}\\right) \\\\\nS_{\\sigma,\\varepsilon} &= \\min \\sigma\\left(I_{\\sigma,\\varepsilon}\\right) = \\min \\left\\{\\sigma(k) \\ : \\ k \\in I_{\\sigma,\\varepsilon}\\right\\} \\\\\nL_{\\sigma,\\varepsilon} &= \\max \\sigma\\left(I_{\\sigma,\\varepsilon}\\right) = \\max \\left\\{\\sigma(k) \\ : \\ k \\in I_{\\sigma,\\varepsilon}\\right\\} \\\\\n\\end{align}" }, { "math_id": 95, "text": "\\begin{align}\n\\left\\|\\sum_{i\\in I_{\\sigma,\\varepsilon}} a_{\\sigma(i)}\\right\\| \n&\\leq \\sum_{i \\in I_{\\sigma,\\varepsilon}} \\left\\|a_{\\sigma(i)}\\right\\| \\\\\n&\\leq \\sum_{j = S_{\\sigma,\\varepsilon}}^{L_{\\sigma,\\varepsilon}} \\left\\|a_j\\right\\| && \\text{ since } I_{\\sigma,\\varepsilon} \\subseteq \\left\\{S_{\\sigma,\\varepsilon}, S_{\\sigma,\\varepsilon} + 1, \\ldots, L_{\\sigma,\\varepsilon}\\right\\} \\\\\n&\\leq \\sum_{j = N_\\varepsilon + 1}^{\\infty} \\left\\|a_j\\right\\| && \\text{ since } S_{\\sigma,\\varepsilon} \\geq N_{\\varepsilon} + 1 \\\\\n&< \\frac{\\varepsilon}{2}\n\\end{align}" }, { "math_id": 96, "text": "\\begin{align}\n\\left\\|\\sum_{i=1}^N a_{\\sigma(i)}-A \\right\\| &= \\left\\| \\sum_{i \\in \\sigma^{-1}\\left(\\{ 1,\\dots,N_\\varepsilon \\}\\right)} a_{\\sigma(i)} - A + \n\\sum_{i\\in I_{\\sigma,\\varepsilon}} a_{\\sigma(i)} \\right\\| \\\\\n&\\leq \\left\\|\\sum_{j=1}^{N_\\varepsilon} a_j - A \\right\\| + \\left\\|\\sum_{i\\in I_{\\sigma,\\varepsilon}} a_{\\sigma(i)} \\right\\| \\\\\n&< \\left\\|\\sum_{j=1}^{N_\\varepsilon} a_j - A \\right\\| + \\frac{\\varepsilon}{2}\\\\\n&< \\varepsilon\n\\end{align}" }, { "math_id": 97, "text": "\\text{ for all } \\varepsilon > 0, \\text{ there exists } M_{\\sigma,\\varepsilon}, \\text{ for all } N > M_{\\sigma,\\varepsilon} \\quad \\left\\|\\sum_{i=1}^N a_{\\sigma(i)} - A\\right\\| < \\varepsilon," }, { "math_id": 98, "text": "\\sum_{i=1}^\\infty a_{\\sigma(i)} = A." }, { "math_id": 99, "text": "\\sum_{n=0}^\\infty a_n = A \\quad \\text{ and } \\quad \\sum_{n=0}^\\infty b_n = B." }, { "math_id": 100, "text": "c_n" }, { "math_id": 101, "text": "c_n = \\sum_{k=0}^n a_k b_{n-k}." 
}, { "math_id": 102, "text": "b_n" }, { "math_id": 103, "text": "\\sum_{n=0}^\\infty c_n = A B." }, { "math_id": 104, "text": "f : X \\to \\R." }, { "math_id": 105, "text": "f" }, { "math_id": 106, "text": "\\sum_{x \\in X} f(x)." }, { "math_id": 107, "text": "\\sum_{x \\in X}f(x)" }, { "math_id": 108, "text": "f," }, { "math_id": 109, "text": "\\sum_{x \\in X} f(x)" }, { "math_id": 110, "text": "g : \\Z^+ \\to X" }, { "math_id": 111, "text": "\\sum_{n=1}^\\infty f(g(n))" }, { "math_id": 112, "text": "\\sum_{x \\in X}f(x) := \\sum_{n=1}^\\infty f(g(n))" }, { "math_id": 113, "text": "g." }, { "math_id": 114, "text": "f : X \\to \\R" }, { "math_id": 115, "text": "\\sup\\left\\{\\sum_{x \\in A} |f(x)|: A\\subseteq X, A \\text{ is finite }\\right\\} < \\infty." }, { "math_id": 116, "text": "\\sum_{x \\in X} f(x) := \\sum_{x \\in X : f(x) \\neq 0} f(x)." }, { "math_id": 117, "text": "\\sum_{m=1}^\\infty \\sum_{n=1}^\\infty a_{m,n}" }, { "math_id": 118, "text": "\\sum_{m=1}^\\infty \\sum_{n=1}^\\infty |a_{m,n}| < \\infty." }, { "math_id": 119, "text": "\\sum_{(m,n) \\in \\N \\times \\N} a_{m,n}." }, { "math_id": 120, "text": "\\sum_{(m,n) \\in \\N \\times \\N} a_{m,n}," }, { "math_id": 121, "text": "\\int_A f(x)\\,dx" }, { "math_id": 122, "text": "\\int_A \\left|f(x)\\right|\\,dx < \\infty." }, { "math_id": 123, "text": "A=[a,b]" }, { "math_id": 124, "text": "|f|" }, { "math_id": 125, "text": "g\\circ f" }, { "math_id": 126, "text": "[a,b]" }, { "math_id": 127, "text": "g" }, { "math_id": 128, "text": "|f|=|\\cdot|\\circ f" }, { "math_id": 129, "text": "f:[1,\\infty) \\to \\R : x \\mapsto \\frac{\\sin x}{x}" }, { "math_id": 130, "text": "\\int_1^\\infty \\frac{\\sin x}{x}\\,dx = \\frac{1}{2}\\bigl[\\pi - 2\\,\\mathrm{Si}(1)\\bigr] \\approx 0.62, \\text{ but } \\int_1^\\infty \\left|\\frac{\\sin x}{x}\\right| dx = \\infty." }, { "math_id": 131, "text": "\\sum_{n=0}^\\infty a_n" }, { "math_id": 132, "text": "f_a: [0,\\infty) \\to \\R" }, { "math_id": 133, "text": "f_a([n,n+1)) = a_n." }, { "math_id": 134, "text": "\\int_0^\\infty f_a \\, dx" }, { "math_id": 135, "text": "\\sum_{n=0}^\\infty a_n." }, { "math_id": 136, "text": "S \\subset [a,b]" }, { "math_id": 137, "text": "f = \\chi_S - 1/2," }, { "math_id": 138, "text": "\\chi_S" }, { "math_id": 139, "text": "S." }, { "math_id": 140, "text": "|f| \\equiv 1/2" }, { "math_id": 141, "text": "A," }, { "math_id": 142, "text": "S," }, { "math_id": 143, "text": "S = \\N" } ]
https://en.wikipedia.org/wiki?curid=61532
6153368
Flow chemistry
Chemical reaction run in a continuous stream In flow chemistry, also called reactor engineering, a chemical reaction is run in a continuously flowing stream rather than in batch production. In other words, pumps move fluid into a reactor, and where tubes join one another, the fluids contact one another. If these fluids are reactive, a reaction takes place. Flow chemistry is a well-established technique for use at a large scale when manufacturing large quantities of a given material. However, the term has only recently been coined for its application on a laboratory scale by chemists, and describes small pilot plants and lab-scale continuous plants. Often, microreactors are used. Running flow reactions. Choosing to run a chemical reaction using flow chemistry, either in a microreactor or another mixing device, offers a variety of pros and cons. Advantages. Typical drivers are higher yields/selectivities, reduced manpower requirements, or a higher safety level. Disadvantages. The drawbacks have been discussed by Pashkova and Greiner in view of establishing small-scale continuous production processes. Continuous flow reactors. Continuous reactors are typically tube-like and manufactured from non-reactive materials such as stainless steel, glass, and polymers. Mixing methods include diffusion alone (if the diameter of the reactor is small, e.g. <1 mm, such as in microreactors) and static mixers. Continuous flow reactors allow good control over reaction conditions including heat transfer, time, and mixing. The residence time of the reagents in the reactor (i.e. the amount of time that the reaction is heated or cooled) is calculated from the volume of the reactor and the flow rate through it: formula_0 Therefore, to achieve a longer residence time, reagents can be pumped more slowly and/or a larger volume reactor used. Production rates can vary from nanoliters to liters per minute. Some examples of flow reactors are spinning disk reactors; spinning tube reactors; multi-cell flow reactors; oscillatory flow reactors; microreactors; hex reactors; and 'aspirator reactors'. In an aspirator reactor a pump propels one reagent, which causes a reactant to be sucked in. This type of reactor was patented around 1941 by the Nobel company for the production of nitroglycerin. Flow reactor scale. The smaller scale of microflow reactors or microreactors can make them ideal for process development experiments. Although it is possible to operate flow processes at a ton scale, synthetic efficiency benefits from improved thermal and mass transfer, as well as mass transport. Key application areas. Use of gases in flow. Laboratory scale flow reactors are ideal systems for using gases, particularly those that are toxic or associated with other hazards. The gas reactions that have been most successfully adapted to flow are hydrogenation and carbonylation, although work has also been performed using other gases, e.g. ethylene and ozone. Flow systems are well suited to handling such hazardous gases for several reasons. Photochemistry in combination with flow chemistry. Continuous flow photochemistry offers multiple advantages over batch photochemistry. Photochemical reactions are driven by the number of photons that are able to activate molecules causing the desired reaction. The large surface area to volume ratio of a microreactor maximizes the illumination, and at the same time allows for efficient cooling, which decreases the thermal side products.
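As a rough numerical illustration of the residence-time relation given above (formula_0) and of the surface-area-to-volume advantage of narrow channels, the short Python sketch below uses arbitrary example dimensions and flow rates; none of the numbers are taken from any specific reactor.

```python
# Residence time = reactor volume / flow rate (the relation given above).
reactor_volume_ml = 10.0        # illustrative tubular reactor volume
flow_rate_ml_per_min = 2.0      # illustrative total flow rate
residence_time_min = reactor_volume_ml / flow_rate_ml_per_min
print(f"residence time: {residence_time_min:.1f} min")        # 5.0 min

# Surface-area-to-volume ratio of a cylindrical channel: SA/V = 2/r (wall area only).
def surface_to_volume(radius_m):
    return 2.0 / radius_m       # m^2 of wall per m^3 of reaction volume

micro_channel = surface_to_volume(0.5e-3)   # 1 mm diameter microreactor channel
batch_vessel = surface_to_volume(5.0e-2)    # ~10 cm diameter vessel, treated as a cylinder
print(f"microreactor SA/V: {micro_channel:.0f} m^-1")          # 4000 m^-1
print(f"batch vessel SA/V: {batch_vessel:.0f} m^-1")           # 40 m^-1
```

Pumping more slowly or using a larger reactor volume lengthens the residence time, while narrowing the channel raises the surface-to-volume ratio that benefits both illumination and cooling.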
Electrochemistry in combination with flow chemistry. Continuous flow electrochemistry, like continuous photochemistry, offers many advantages over analogous batch conditions. Electrochemical reactions, like photochemical reactions, can be considered 'reagent-less' reactions. In an electrochemical reaction, the reaction is driven by the electrons that activate the molecules, causing the desired reaction. Continuous electrochemistry apparatus reduces the distance between the electrodes, allowing better control of the number of electrons transferred to the reaction medium and thus better control and selectivity. Recent developments in electrochemical flow systems have enabled the combination of reaction-oriented electrochemical flow systems with species-focused spectroscopy, which allows a complete analysis of reactions involving multiple electron transfer steps, as well as unstable intermediates. These systems, referred to as spectroelectrochemistry systems, enable the use of UV-vis as well as more complex methods such as electrochemiluminescence. Furthermore, using electrochemistry allows another degree of flexibility, since the user has control not only over the flow parameters and the nature of the electrochemical measurement itself but also over the geometry or nature of the electrode (or electrodes in the case of an electrode array). Process development. Process development changes from a serial approach to a parallel approach. In batch mode the chemist works first, followed by the chemical engineer. In flow chemistry this changes to a parallel approach, where chemist and chemical engineer work interactively. Typically there is a plant setup in the lab, which is a tool for both. This setup can be either commercial or noncommercial. The development scale can be small (ml/hour) for idea verification using a chip system and in the range of a couple of liters per hour for scalable systems like the flow miniplant technology. Chip systems are mainly used for liquid-liquid applications, while flow miniplant systems can deal with solids or viscous material. Scale up of microwave reactions. Microwave reactors are frequently used for small-scale batch chemistry. However, due to the extremes of temperature and pressure reached in a microwave, it is often difficult to transfer these reactions to conventional non-microwave apparatus for subsequent development, leading to difficulties with scaling studies. A flow reactor with suitable high-temperature ability and pressure control can directly and accurately mimic the conditions created in a microwave reactor. This eases the synthesis of larger quantities by extending the reaction time. Manufacturing scale solutions. Flow systems can be scaled to the tons per hour scale. Plant redesign (converting an existing batch plant to continuous operation), unit operation (exchanging only one reaction step) and modular multi-purpose plants (cutting a continuous plant into modular units) are typical solutions for implementing flow processes. Other uses of flow. It is possible to run experiments in flow using more sophisticated techniques, such as solid phase chemistries. Solid phase reagents, catalysts or scavengers can be packed into glass columns through which the reaction solution is pumped; an example is the synthesis of the alkaloid natural product oxomaritidine using solid phase chemistries. There is increasing interest in polymerization as a continuous flow process, for example Reversible Addition-Fragmentation chain Transfer (RAFT) polymerization. Continuous flow techniques have also been used for the controlled generation of nanoparticles.
The very rapid mixing and excellent temperature control of microreactors are able to give consistent and narrow particle size distributions of nanoparticles. Segmented flow chemistry. As discussed above, running experiments in continuous flow systems is difficult, especially when one is developing new chemical reactions, which requires screening of multiple components, varying stoichiometry, temperature, and residence time. In continuous flow, experiments are performed serially, which means only one experimental condition can be tested at a time. Experimental throughput is limited, as typically five times the residence time is needed to reach steady state. For temperature variation, the thermal mass of the reactor as well as of peripherals such as fluid baths needs to be considered. More often than not, the analysis time also needs to be considered. Segmented flow is an approach that improves upon the speed at which screening, optimization, and libraries can be conducted in flow chemistry. Segmented flow uses a "Plug Flow" approach where specific volumetric experimental mixtures are created and then injected into a high-pressure flow reactor. Diffusion of the segment (reaction mixture) is minimized by using an immiscible solvent on the leading and rear ends of the segment. One of the primary benefits of segmented flow chemistry is the ability to run experiments in a serial/parallel manner where experiments that share the same residence time and temperature can be repeatedly created and injected. In addition, the volume of each experiment is independent of the volume of the flow tube, thereby saving a significant amount of reactant per experiment. When performing reaction screening and libraries, segment composition is typically varied by the composition of matter. When performing reaction optimization, segments vary by stoichiometry. Segmented flow is also used with online LCMS, both analytical and preparative, where the segments are detected on exiting the reactor by UV and subsequently diluted for analytical LCMS or injected directly for preparative LCMS. References.
[ { "math_id": 0, "text": "\\text{Residence time} = \\frac{\\text{Reactor Volume}}{\\text{Flow Rate}}" } ]
https://en.wikipedia.org/wiki?curid=6153368
6153416
Inductive sensor
Device which uses electromagnetic induction to sense objects An inductive sensor is a device that uses the principle of electromagnetic induction to detect or measure objects. An inductor develops a magnetic field when an electric current flows through it; alternatively, a current will flow through a circuit containing an inductor when the magnetic field through it changes. This effect can be used to detect metallic objects that interact with a magnetic field. Non-metallic substances, such as liquids or some kinds of dirt, do not interact with the magnetic field, so an inductive sensor can operate in wet or dirty conditions. Principle. The inductive sensor is based on Faraday's law of induction. The temporal variations of the magnetic flux Φ through a coil with N turns will induce a voltage e which follows: formula_0 which can be expressed in a simpler way: formula_1 by assuming that the induced magnetic field B is homogeneous over a section S (the magnetic flux will be expressed formula_2). One form of inductive sensor drives a coil with an oscillator. A metallic object approaching the coil will alter the inductance of the coil, producing a change in frequency or a change in the current in the coil. These changes can be detected, amplified, compared to a threshold and used to switch an external circuit. The coil may have a ferromagnetic core to make the magnetic field more intense and to increase the sensitivity of the device. A coil with no ferromagnetic core ("air core") can also be used, especially if the oscillator coil must cover a large area. Another form of inductive sensor uses one coil to produce a changing magnetic field, and a second coil (or other device) to sense the changes in the magnetic field produced by an object, for example, due to eddy currents induced in a metal object. Applications. Search coil magnetometer. Inductive sensors constitute the main element to build a search coil magnetometer, also known as a search coil. These are used in many fields of research: magnetotellurics, electromagnetic waves measurement, space magnetometers to investigate electromagnetic waves in space plasma as well as natural electromagnetic waves observations on Earth. Inductive proximity sensor (proximity switch). An inductive proximity sensor is a non-contact electronic proximity sensor. It is used for positioning and detection of metal objects. The sensing range of an inductive switch is dependent on the type of metal being detected. Ferrous metals, such as iron and steel, allow for a longer sensing range, while nonferrous metals, such as aluminum and copper, may reduce the sensing range by up to 60 percent. Since the output of an inductive sensor has two possible states, an inductive sensor is sometimes referred to as an inductive proximity switch. The sensor consists of an induction loop or detector coil. Most often this is physically a number of turns of insulated magnet wire wound around a high magnetic permeability core, such as a ferrite ceramic rod or coil form, and the winding may or may not have a feedback tap some number of turns from one end of the total winding. It is connected to a capacitance to form a tuned frequency oscillator tank circuit. In conjunction with a voltage or current gain device like a transistor or operational amplifier, this forms a tuned frequency oscillator. 
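To make the behaviour of the oscillator tank circuit concrete, the following short Python sketch computes the resonant frequency of an LC tank and shows how a small target-induced drop in coil inductance shifts that frequency upwards; the component values and the 5 % inductance change are arbitrary illustrative figures, not data for any particular sensor.

```python
import math

def resonant_frequency(L_henry, C_farad):
    """Resonant frequency of an ideal LC tank: f = 1 / (2*pi*sqrt(L*C))."""
    return 1.0 / (2.0 * math.pi * math.sqrt(L_henry * C_farad))

L0 = 100e-6   # 100 uH detector coil (illustrative value)
C = 1e-9      # 1 nF tank capacitor (illustrative value)
f0 = resonant_frequency(L0, C)

# Eddy currents in a nearby non-magnetic conductor lower the effective
# inductance, so the oscillation frequency rises; 5 % is only an example.
L_with_target = 0.95 * L0
f_with_target = resonant_frequency(L_with_target, C)

print(f"free-running frequency: {f0 / 1e3:.1f} kHz")             # about 503 kHz
print(f"with target present:    {f_with_target / 1e3:.1f} kHz")  # about 516 kHz
```

A highly permeable (ferromagnetic) target raises the coil inductance instead, lowering the frequency, as the next paragraph describes.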
When power is applied, the resulting oscillation is a high frequency alternating electric current in the coil that has a constantly changing magnetic field able to induce eddy currents in proximal (target) conductors. The closer the target is and the greater its conductivity (metals are good conductors, for example), the greater the induced eddy currents are and the more effect their resulting opposing magnetic fields have on the magnitude and frequency of the oscillation. The oscillation's magnitude is reduced as the loading by a non-magnetic conductor like aluminum increases, because the induced field in the target opposes the source induction field, lowering the net inductive impedance and therefore simultaneously tuning the oscillation frequency higher. But that magnitude is less affected if the target is a highly magnetically permeable material, like iron, as the high permeability increases the coil inductance, lowering the frequency of oscillation. A change in oscillation magnitude may be detected with a simple amplitude modulation detector, such as a diode that passes the peak voltage to a small filter to produce a DC voltage reflecting the oscillation amplitude, while a frequency change may be detected by one of several kinds of frequency discriminator circuits, such as a phase-locked loop detector, to see in what direction and by how much the frequency shifts. Either the magnitude change or the amount of frequency change can serve to define a proximity distance at which the sensor goes from on to off, or vice versa. Common applications of inductive sensors include metal detectors, traffic lights, car washes, and a host of automated industrial processes. Because the sensor does not require physical contact, it is particularly useful for applications where access presents challenges or where dirt is prevalent. Traffic sensor. To control traffic signals at an intersection of roads, an induction loop can be buried in the pavement. A circuit connected to the loop can detect the change in its inductance when a vehicle passes over or stops on the loop. This can be used to detect vehicles and adjust the timing of traffic signals or provide a turning signal at a busy intersection. Nuclear magnetic resonance. Inductive sensors, also referred to (in this area) as "NMR coils" or "radiofrequency coils", are used to detect the magnetic component of the electromagnetic field associated with the nuclear spin precession in nuclear magnetic resonance. References.
[ { "math_id": 0, "text": " e = - N\\frac{d \\Phi}{dt}" }, { "math_id": 1, "text": " e = - N\\times S\\frac{dB}{dt}" }, { "math_id": 2, "text": " \\Phi = B\\times S " } ]
https://en.wikipedia.org/wiki?curid=6153416
615354
Bombe
Codebreaking device created at Bletchley Park (United Kingdom) The bombe was an electro-mechanical device used by British cryptologists to help decipher German Enigma-machine-encrypted secret messages during World War II. The US Navy and US Army later produced their own machines to the same functional specification, albeit engineered differently both from each other and from Polish and British bombes. The British bombe was developed from a device known as the "bomba", which had been designed in Poland at the Biuro Szyfrów (Cipher Bureau) by cryptologist Marian Rejewski, who had been breaking German Enigma messages for the previous seven years, using it and earlier machines. The initial design of the British bombe was produced in 1939 at the UK Government Code and Cypher School (GC&CS) at Bletchley Park by Alan Turing, with an important refinement devised in 1940 by Gordon Welchman. The engineering design and construction was the work of Harold Keen of the British Tabulating Machine Company. The first bombe, code-named "Victory", was installed in March 1940, while the second version, "Agnus Dei" or "Agnes", incorporating Welchman's new design, was working by August 1940. The bombe was designed to discover some of the daily settings of the Enigma machines on the various German military networks: specifically, the set of rotors in use and their positions in the machine; the rotor core start positions for the message—the message key—and one of the wirings of the plugboard. The Enigma machine. The Enigma is an electro-mechanical rotor machine used for the encryption and decryption of secret messages. It was developed in Germany in the 1920s. The repeated changes of the electrical pathway from the keyboard to the lampboard implement a polyalphabetic substitution cipher, which turns plaintext into ciphertext and back again. The Enigma's scrambler contains rotors with 26 electrical contacts on each side, whose wiring diverts the current to a different position on the two sides. When a key is pressed on the keyboard, an electric current flows through an entry drum at the right-hand end of the scrambler, then through the set of rotors to a reflecting drum (or reflector) which turns it back through the rotors and entry drum, and out to illuminate one of the lamps on the lampboard. At each key depression, the right-hand or "fast" rotor advances one position, which causes the encipherment to change. In addition, once per rotation, the right-hand rotor causes the middle rotor to advance; the middle rotor similarly causes the left-hand (or "slow") rotor to advance. Each rotor's position is indicated by a letter of the alphabet showing through a window. The Enigma operator rotates the wheels by hand to set the start position for enciphering or deciphering a message. The three-letter sequence indicating the start position of the rotors is the "message key". There are 26³ = 17,576 different message keys and the same number of different positions of the set of three rotors. By opening the lid of the machine and releasing a compression bar, the set of three rotors on their spindle can be removed from the machine and their sequence (called the "wheel order" at Bletchley Park) altered. Multiplying 17,576 by the six possible wheel orders gives 105,456 different ways that the scrambler can be set up. Although 105,456 is a large number, it does not guarantee security. A brute-force attack is possible: one could imagine using 100 code clerks who each tried to decode a message using 1000 distinct rotor settings.
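These counting arguments can be checked with a few lines of Python. The sketch below reproduces the figures quoted in the text: 17,576 message keys, 105,456 scrambler set-ups for six wheel orders, and (further on in the article) the number of ways of plugging up the plugboard with a given number of leads.

```python
from math import comb, factorial

# 26 positions for each of the three rotors, and 3! orderings of the rotors.
message_keys = 26 ** 3                       # 17,576
wheel_orders = factorial(3)                  # 6
scrambler_setups = message_keys * wheel_orders
print(message_keys, scrambler_setups)        # 17576 105456

def plugboard_ways(leads, alphabet=26):
    """Ways of choosing `leads` unordered letter pairs on the plugboard."""
    ways = 1
    for i in range(leads):
        ways *= comb(alphabet - 2 * i, 2)    # choose each pair in turn
    return ways // factorial(leads)          # the order of the pairs is irrelevant

print(plugboard_ways(6))    # 100,391,791,500 with six leads, as quoted below
print(plugboard_ways(10))   # 150,738,274,937,250 with ten leads
```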
The Poles developed card catalogs so they could easily find rotor positions; Britain built "EINS" (the German word for one) catalogs. Less intensive methods were also possible. If all message traffic for a day used the same rotor starting position, then frequency analysis for each position could recover the polyalphabetic substitutions. If different rotor starting positions were used, then overlapping portions of a message could be found using the index of coincidence. Many major powers (including the Germans) could break Enigma traffic if they knew the rotor wiring. The German military knew the Enigma was weak. In 1930, the German army introduced an additional security feature, a plugboard ("Steckerbrett" in German; each plug is a "Stecker", and the British cryptologists also used the word) that further scrambled the letters, both before and after they passed through the rotor-reflector system. The Enigma encryption is a self-inverse function, meaning that it substitutes letters reciprocally: if one letter is transformed into a second, then that second letter is transformed into the first. The plugboard transformation maintained the self-inverse quality, but the plugboard wiring, unlike the rotor positions, does not change during the encryption. This regularity was exploited by Welchman's "diagonal board" enhancement to the bombe, which vastly increased its efficiency. With six plug leads in use (leaving 14 letters "unsteckered"), there were 100,391,791,500 possible ways of setting up the plugboard. An important feature of the machine from a cryptanalyst's point of view, and indeed Enigma's Achilles' heel, was that the reflector in the scrambler prevented a letter from being enciphered as itself. Any putative solution that gave, for any location, the same letter in the proposed plaintext and the ciphertext could therefore be eliminated. In the lead-up to World War II, the Germans made successive improvements to their military Enigma machines. By January 1939, additional rotors had been introduced so that three rotors were chosen from a set of five (hence there were now 60 possible wheel orders) for the army and air force Enigmas, and three out of eight (making 336 possible wheel orders) for the navy machines. In addition, ten leads were used on the plugboard, leaving only six letters unsteckered. This meant that the air force and army Enigmas could be set up in 1.5×10¹⁹ ways. In 1941 the German navy introduced a version of Enigma with a rotatable reflector (the M4 or Four-rotor Enigma) for communicating with its U-boats. This could be set up in 1.8×10²⁰ different ways. Four-rotor Enigma. By late 1941 a change in German Navy fortunes in the Battle of the Atlantic, combined with intelligence reports, convinced Admiral Karl Dönitz that the Allies were able to read the German Navy's coded communications, and a fourth rotor with unknown wiring was added to German Navy Enigmas used for U-boat communications, producing the "Triton" system, known at Bletchley Park as "Shark". This was coupled with a thinner reflector design to make room for the extra rotor. The Triton was designed in such a way that it remained compatible with three-rotor machines when necessary: one of the extra 'fourth' rotors, the 'beta', was designed so that when it was paired with the thin 'B' reflector, and the rotor and ring were set to 'A', the pair acted as a 'B' reflector coupled with three rotors.
Fortunately for the Allies, in December 1941, before the machine went into official service, a submarine accidentally sent a message with the fourth rotor in the wrong position, and then retransmitted the message with the rotor in the correct position to emulate the three-rotor machine. In February 1942 the change in the number of rotors used became official, and the Allies' ability to read German submarines' messages ceased until a snatch from a captured U-boat revealed not only the four-rotor machine's ability to emulate a three-rotor machine, but also that the fourth rotor did not move during a message. This, along with the aforementioned retransmission, eventually allowed the code breakers to figure out the wiring of both the 'beta' and 'gamma' fourth rotors. The first half of 1942 was the "Second Happy Time" for the German U-boats, with renewed success in attacking Allied shipping, as the US had just entered the war unprepared for the onslaught, lacking in anti-submarine warfare (ASW) aircraft, ships, personnel, doctrine and organization. Also, the security of the new Enigma and the Germans' ability to read Allied convoy messages sent in Naval Cipher No. 3 contributed to their success. Between January and March 1942, German submarines sank 216 ships off the US east coast. In May 1942 the US began using the convoy system and requiring a blackout of coastal cities so that ships would not be silhouetted against their lights, but this yielded only slightly improved security for Allied shipping. The Allies' failure to change their cipher for three months, together with the fact that Allied messages never contained any raw Enigma decrypts (or even mentioned that they were decrypting messages), helped convince the Germans that their messages were secure. Conversely, the Allies learned that the Germans had broken the naval cipher almost immediately from Enigma decrypts, but lost many ships due to the delay in changing the cipher. The principle of the bombe. The following settings of the Enigma machine had to be discovered to decipher German military Enigma messages. Once these were known, all the messages for that network for that day (or pair of days in the case of the German navy) could be decrypted. They fell into two groups: the internal settings (which required the lid of the Enigma machine to be opened) and the external settings (which could be changed without opening the Enigma machine). The bombe identified possible initial positions of the rotor cores and the "stecker partner" of a specified letter for a set of wheel orders. Manual techniques were then used to complete the decryption process. In the words of Gordon Welchman, "... the task of the bombe was simply to reduce the assumptions of wheel order and scrambler positions that required 'further analysis' to a manageable number". Structure. The bombe was an electro-mechanical device that replicated the action of several Enigma machines wired together. A standard German Enigma employed, at any one time, a set of three rotors, each of which could be set in any of 26 positions. The standard British bombe contained 36 Enigma equivalents, each with three drums wired to produce the same scrambling effect as the Enigma rotors. A bombe could run two or three jobs simultaneously. Each job would have a 'menu' that had to be run against a number of different wheel orders. If the menu contained 12 or fewer letters, three different wheel orders could be run on one bombe; if more than 12 letters, only two.
In order to simulate Enigma rotors, each rotor drum of the bombe had two complete sets of contacts, one for input towards the reflector and the other for output from the reflector, so that the reflected signal could pass back through a separate set of contacts. Each drum had 104 wire brushes, which made contact with the plate onto which they were loaded. The brushes and the corresponding set of contacts on the plate were arranged in four concentric circles of 26. The outer pair of circles (input and output) were equivalent to the current in an Enigma passing in one direction through the scrambler, and the inner pair equivalent to the current flowing in the opposite direction. The interconnections within the drums between the two sets of input and output contacts were both identical to those of the relevant Enigma rotor. There was permanent wiring between the inner two sets of contacts of the three input/output plates. From there, the circuit continued to a plugboard located on the left-hand end panel, which was wired to imitate an Enigma reflector and then back through the outer pair of contacts. At each end of the "double-ended Enigma", there were sockets on the back of the machine, into which 26-way cables could be plugged. The bombe drums were arranged with the top one of the three simulating the left-hand rotor of the Enigma scrambler, the middle one the middle rotor, and the bottom one the right-hand rotor. The top drums were all driven in synchrony by an electric motor. For each full rotation of the top drums, the middle drums were incremented by one position, and likewise for the middle and bottom drums, giving the total of 26 × 26 × 26 = 17,576 positions of the 3-rotor Enigma scrambler. The drums were colour-coded according to which Enigma rotor they emulated: I red; II maroon; III green; IV yellow; V brown; VI cobalt (blue); VII jet (black); VIII silver. At each position of the rotors, an electric current would or would not flow in each of the 26 wires, and this would be tested in the bombe's comparator unit. For a large number of positions, the test would lead to a logical contradiction, ruling out that setting. If the test did not lead to a contradiction, the machine would stop. The operator would then find the point at which the test passed, record the candidate solution by reading the positions of the indicator drums and the indicator unit on the Bombe's right-hand end panel. The operator then restarted the run. The candidate solutions, "stops" as they were called, were processed further to eliminate as many false stops as possible. Typically, there were many false bombe stops before the correct one was found. The candidate solutions for the set of wheel orders were subject to extensive further cryptanalytical work. This progressively eliminated the false stops, built up the set of plugboard connections and established the positions of the rotor alphabet rings. Eventually, the result would be tested on a Typex machine that had been modified to replicate an Enigma, to see whether that decryption produced German language. Bombe menu. A bombe run involved a cryptanalyst first obtaining a "crib" — a section of plaintext that was thought to correspond to the ciphertext. Finding cribs was not at all straightforward; it required considerable familiarity with German military jargon and the communication habits of the operators. However, the codebreakers were aided by the fact that the Enigma would never encrypt a letter to itself.
This helped in testing a possible crib against the ciphertext, as it could rule out a number of cribs and positions, where the same letter occurred in the same position in both the plaintext and the ciphertext. This was termed a "crash" at Bletchley Park. Once a suitable crib had been decided upon, the cryptanalyst would produce a "menu" for wiring up the bombe to test the crib against the ciphertext. The following is a simplified explanation of the process of constructing a menu. Suppose that the crib is ATTACKATDAWN to be tested against a certain stretch of ciphertext, say, WSNPNLKLSTCS. The letters of the crib and the ciphertext were compared to establish pairings between the ciphertext and the crib plaintext. These were then graphed as in the diagram. It should be borne in mind that the relationships are reciprocal so that A in the plaintext associated with W in the ciphertext is the same as W in the plaintext associated with A in the ciphertext. At position 1 of the plaintext-ciphertext comparison, the letter A is associated with W, but A is also associated with P at position 4, K at position 7 and T at position 10. Building up these relationships into such a diagram provided the menu from which the bombe connections and drum start positions would be set up. In the illustration, there are three sequences of letters which form loops (or "cycles" or "closures"), ATLK, TNS and TAWCN. The more loops in the menu, the more candidate rotor settings the bombe could reject, and hence the fewer false stops. Alan Turing conducted a very substantial analysis (without any electronic aids) to estimate how many bombe stops would be expected according to the number of letters in the menu and the number of loops. Some of his results are given in the following table. Recent bombe simulations have shown similar results. Stecker values. The German military Enigma included a plugboard ("Steckerbrett" in German) which swapped letters (indicated here by "P") before and after the main scrambler's change (indicated by "S"). The plugboard connections were known to the cryptanalysts as Stecker values. If there had been no plugboard, it would have been relatively straightforward to test a rotor setting; a Typex machine modified to replicate Enigma could be set up and the crib letter A encrypted on it, and compared with the ciphertext, W. If they matched, the next letter would be tried, checking that T encrypted to S and so on for the entire length of the crib. If at any point the letters failed to match, the initial rotor setting would be rejected; most incorrect settings would be ruled out after testing just two letters. This test could be readily mechanised and applied to all settings of the rotors. However, with the plugboard, it was much harder to perform trial encryptions because it was unknown what the crib and ciphertext letters were transformed to by the plugboard. For example, in the first position, "P"(A) and "P"(W) were unknown because the plugboard settings were unknown. Turing's solution to working out the stecker values (plugboard connections) was to note that, even though the values for, say, "P"(A) or "P"(W), were unknown, the crib still provided known relationships amongst these values; that is, the values after the plugboard transformation. Using these relationships, a cryptanalyst could reason from one to another and, potentially, derive a logical contradiction, in which case the rotor setting under consideration could be ruled out. 
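The crib-alignment and menu-building steps described above can be sketched in a few lines of Python. The crib and ciphertext are the ones from the example above; the sketch only performs the "crash" test and collects the letter pairings from which the loops (ATLK, TNS and TAWCN) can be read off. The stecker-value reasoning itself is worked through in prose below.

```python
crib = "ATTACKATDAWN"
ciphertext = "WSNPNLKLSTCS"
assert len(crib) == len(ciphertext)

# "Crash" test: Enigma never enciphers a letter to itself, so any position
# where crib and ciphertext agree rules this alignment out.
crashes = [i for i, (p, c) in enumerate(zip(crib, ciphertext), start=1) if p == c]
print("crashes at positions:", crashes if crashes else "none")   # none here

# Build the menu as a graph of letter pairings, remembering the scrambler
# position at which each pairing occurs.
menu = {}
for pos, (p, c) in enumerate(zip(crib, ciphertext), start=1):
    menu.setdefault(p, set()).add((c, pos))
    menu.setdefault(c, set()).add((p, pos))

# A, for example, is linked to W, P, K and T at positions 1, 4, 7 and 10,
# exactly as described in the text.
print(sorted(menu["A"]))
```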
A worked example of such reasoning might go as follows: a cryptanalyst might suppose that "P"(A) = Y. Looking at position 10 of the crib:ciphertext comparison, we observe that A encrypts to T, or, expressed as a formula: T = "P"("S"10("P"(A))) Due to the function "P" being its own inverse, we can apply it to both sides of the equation and obtain the following: "P"(T) = "S"10("P"(A)) This gives us a relationship between "P"(A) and "P"(T). If "P"(A) = Y, and for the rotor setting under consideration "S"10(Y) = Q (say), we can deduce that "P"(T) = "S"10("P"(A)) = "S"10(Y) = Q While the crib does not allow us to determine what the values after the plugboard are, it does provide a constraint between them. In this case, it shows how "P"(T) is completely determined if "P"(A) is known. Likewise, we can also observe that T encrypts to L at position 8. Using "S"8, we can deduce the steckered value for L as well using a similar argument, to get, say, "P"(L) = "S"8("P"(T)) = "S"8(Q) = G Similarly, in position 6, K encrypts to L. As the Enigma machine is self-reciprocal, this means that at the same position L would also encrypt to K. Knowing this, we can apply the argument once more to deduce a value for "P"(K), which might be: "P"(K) = "S"6("P"(L)) = "S"6(G) = F And again, the same sort of reasoning applies at position 7 to get: "P"(A) = "S"7("P"(K)) = "S"7(F) = N However, in this case, we have derived a contradiction, since, by hypothesis, we assumed that "P"(A) = Y at the outset. This means that the initial assumption must have been incorrect, and so that (for this rotor setting) "P"(A) ≠ Y (this type of argument is termed "reductio ad absurdum" or "proof by contradiction"). The cryptanalyst hypothesised one plugboard interconnection for the bombe to test. The other stecker values and the ring settings were worked out by hand methods. Automated deduction. To automate these logical deductions, the bombe took the form of an electrical circuit. Current flowed around the circuit near-instantaneously, and represented all the possible logical deductions which could be made at that position. To form this circuit, the bombe used several sets of Enigma rotor stacks wired up together according to the instructions given on a menu, derived from a crib. Because each Enigma machine had 26 inputs and outputs, the replica Enigma stacks were connected to each other using 26-way cables. In addition, each Enigma stack's rotor setting was offset a number of places determined by its position in the crib; for example, an Enigma stack corresponding to the fifth letter in the crib would be four places further on than that corresponding to the first letter. In practice. Practical bombes used several stacks of rotors spinning together to test multiple hypotheses about possible setups of the Enigma machine, such as the order of the rotors in the stack. While Turing's bombe worked in theory, it required impractically long cribs to rule out sufficiently large numbers of settings. Gordon Welchman came up with a way of using the symmetry of the Enigma stecker to increase the power of the bombe. His suggestion was an attachment called the "diagonal board" that further improved the bombe's effectiveness. The British Bombe. The Polish cryptologic "bomba" (Polish: "bomba kryptologiczna"; plural "bomby") had been useful only as long as three conditions were met.
First, the form of the indicator had to include the repetition of the message key; second, the number of rotors available had to be limited to three, giving six different "wheel orders" (the three rotors and their order within the machine); and third, the number of plug-board leads had to remain relatively small so that the majority of letters were "unsteckered". Six machines were built, one for each possible rotor order. The "bomby" were delivered in November 1938, but barely a month later the Germans introduced two additional rotors for loading into the Enigma scrambler, increasing the number of wheel orders by a factor of ten. Building another 54 "bomby" was beyond the Poles' resources. Also, on 1 January 1939, the number of plug-board leads was increased to ten. The Poles therefore had to return to manual methods, the Zygalski sheets. Alan Turing designed the British bombe on a more general principle, the assumption of the presence of text, called a "crib", that cryptanalysts could predict was likely to be present at a defined point in the message. This technique is termed a "known plaintext attack" and had been used to a limited extent by the Poles, e.g., the Germans' use of "ANX" — "AN", German for "To", followed by "X" as a spacer. A £100,000 budget for the construction of Turing's machine was acquired and the contract to build the bombes was awarded to the British Tabulating Machine Company (BTM) at Letchworth. BTM placed the project under the direction of Harold 'Doc' Keen. Each machine was about wide, tall, deep and weighed about a ton. On the front of each bombe were 108 places where drums could be mounted. The drums were in three groups of 12 triplets. Each triplet, arranged vertically, corresponded to the three rotors of an Enigma scrambler. The bombe drums' input and output contacts went to cable connectors, allowing the bombe to be wired up according to the menu. The 'fast' drum rotated at a speed of 50.4 rpm in the first models and 120 rpm in later ones, when the time to set up and run through all 17,576 possible positions for one rotor order was about 20 minutes. The first bombe was named "Victory". It was installed in "Hut 1" at Bletchley Park on 18 March 1940. It was based on Turing's original design and so lacked a diagonal board. On 26 April 1940, a German trawler ("Schiff 26", the "Polares") flying a Dutch flag was captured; included in the capture were some Enigma keys for 23 to 26 April. Bletchley retrospectively attacked some messages sent during this period using the captured material and an ingenious Bombe menu where the Enigma fast rotors were all in the same position. In May and June 1940, Bletchley succeeded in breaking six days of naval traffic, 22–27 April 1940. Those messages were the first breaks of "Kriegsmarine" messages of the war, "[b]ut though this success expanded Naval Section's knowledge of the Kriegsmarines's signals organization, it neither affected naval operations nor made further naval Enigma solutions possible." The second bombe, named "Agnus dei", later shortened to "Agnes", or "Aggie", was equipped with Welchman's diagonal board, and was installed on 8 August 1940; "Victory" was later returned to Letchworth to have a diagonal board fitted. The bombes were later moved from "Hut 1" to "Hut 11". The bombe was referred to by Group Captain Winterbotham as a "Bronze Goddess" because of its colour. The devices were more prosaically described by operators as being "like great big metal bookcases".
During 1940, 178 messages were broken on the two machines, nearly all successfully. Because of the danger of bombes at Bletchley Park being lost if there were to be a bombing raid, bombe outstations were established at Adstock, Gayhurst and Wavendon, all in Buckinghamshire. In June–August 1941 there were 4 to 6 bombes at Bletchley Park, and when Wavendon was completed, Bletchley, Adstock and Wavendon had a total of 24 to 30 bombes. When Gayhurst became operational there were a total of 40 to 46 bombes, and it was expected that the total would increase to about 70 bombes run by some 700 Wrens (Women's Royal Naval Service). But in 1942, with the introduction of the naval four-rotor Enigma, "far more than seventy bombes" would be needed. New outstations were established at Stanmore and Eastcote, and the Wavendon and Adstock bombes were moved to them, though the Gayhurst site was retained. The few bombes left at Bletchley Park were used for demonstration and training purposes only. Production of bombes by BTM at Letchworth in wartime conditions was nowhere near as rapid as the Americans later achieved at NCR in Dayton, Ohio. Sergeant Jones was given overall responsibility for bombe maintenance by Edward Travis. Later a Squadron Leader, and not to be confused with Eric Jones, he was one of the original bombe maintenance engineers and was experienced in BTM techniques. Welchman said that later in the war, when other people tried to maintain them, they realised how lucky they were to have him. About 15 million delicate wire brushes on the drums had to make reliable contact with the terminals on the template. There were 104 brushes per drum, 720 drums per bombe, and ultimately around 200 bombes. After World War II, some fifty bombes were retained at RAF Eastcote, while the rest were destroyed. The surviving bombes were put to work, possibly on Eastern bloc ciphers. Smith cites the official history of the bombe as saying that "some of these machines were to be stored away but others were required to run new jobs and sixteen machines were kept comparatively busy on menus." and "It is interesting to note that most of the jobs came up and the operating, checking and other times maintained were faster than the best times during the war periods." Response to the four-rotor Enigma. A program was initiated by Bletchley Park to design much faster bombes that could decrypt the four-rotor system in a reasonable time. There were two streams of development. One, code-named Cobra, with an electronic sensing unit, was produced by Charles Wynn-Williams of the Telecommunications Research Establishment (TRE) at Malvern and Tommy Flowers of the General Post Office (GPO). The other, code-named Mammoth, was designed by Harold Keen at BTM, Letchworth. Initial delivery was scheduled for August or September 1942. The dual development projects created considerable tension between the two teams, both of which cast doubt on the viability of the opposing team's machine. After considerable internal rivalry and dispute, Gordon Welchman (by then, Bletchley Park's Assistant Director for mechanisation) was forced to step in to resolve the situation. Ultimately, Cobra proved unreliable and Mammoth went into full-scale production. Unlike the situation at Bletchley Park, the United States armed services did not share a combined cryptanalytical service. Indeed, there was considerable rivalry between the US Army's facility, the Signals Intelligence Service (SIS), and that of the US Navy known as OP-20-G.
Before the US joined the war, there was collaboration with Britain, albeit with a considerable amount of caution on Britain's side because of the extreme importance of Germany and her allies not learning that their codes were being broken. Despite some worthwhile collaboration amongst the cryptanalysts, their superiors took some time to achieve a trusting relationship in which both British and American bombes were used to mutual benefit. In February 1941, Captain Abe Sinkov and Lieutenant Leo Rosen of the US Army, and US Naval Lieutenants Robert Weeks and Prescott Currier, arrived at Bletchley Park bringing, amongst other things, a replica of the 'Purple' cipher machine for Bletchley Park's Japanese section in Hut 7. The four returned to America after ten weeks, with a naval radio direction finding unit and many documents including a 'paper Enigma'. Currier later wrote: "There was complete cooperation. We went everywhere, including Hut 6. We watched the entire operation and had all the techniques explained in great detail. We were thoroughly briefed on the latest techniques in the solution of Enigma and the operations of the bombes. We had ample opportunity to take as many notes as we wanted and to watch first hand all operations involved." The main response to the Four-rotor Enigma was the US Navy bombe, which was manufactured in much less constrained facilities than were available in wartime Britain. US Navy Bombe. Colonel John Tiltman, who later became Deputy Director at Bletchley Park, visited the US Navy cryptanalysis office (OP-20-G) in April 1942 and recognised America's vital interest in deciphering U-boat traffic. The urgent need, together with doubts about the British engineering workload and slow progress, prompted the US to start investigating designs for a Navy bombe, based on the full blueprints and wiring diagrams received by US Naval Lieutenants Robert Ely and Joseph Eachus at Bletchley Park in July 1942. Funding for a full, $2 million, navy development effort was requested on 3 September 1942 and approved the following day. Commander Edward Travis, Deputy Director, and Frank Birch, Head of the German Naval Section, travelled from Bletchley Park to Washington in September 1942. With Carl Frederick Holden, US Director of Naval Communications, they established, on 2 October 1942, a UK:US accord which may have "a stronger claim than BRUSA to being the forerunner of the UKUSA Agreement," being the first agreement "to establish the special Sigint relationship between the two countries," and "it set the pattern for UKUSA, in that the United States was very much the senior partner in the alliance." It established a relationship of "full collaboration" between Bletchley Park and OP-20-G. An all-electronic solution to the problem of a fast bombe was considered, but rejected for pragmatic reasons, and a contract was let with the National Cash Register Corporation (NCR) in Dayton, Ohio. This established the United States Naval Computing Machine Laboratory. Engineering development was led by NCR's Joseph Desch. Alan Turing, who had written a memorandum to OP-20-G (probably in 1941), was seconded to the British Joint Staff Mission in Washington in December 1942, because of his exceptionally wide knowledge about the bombes and the methods of their use. He was asked to look at the bombes that were being built by NCR and at the security of certain speech cipher equipment under development at Bell Labs.
He visited OP-20-G, and went to NCR in Dayton on 21 December. He was able to show that it was not necessary to build 336 Bombes, one for each possible rotor order, by utilising techniques such as Banburismus. The initial order was scaled down to 96 machines. The US Navy bombes used drums for the Enigma rotors in much the same way as the British bombes. They had eight Enigma-equivalents on the front and eight on the back. The fast drum rotated at 1,725 rpm, 34 times the speed of the early British bombes. 'Stops' were detected electronically using thermionic valves (vacuum tubes)—mostly thyratrons—for the high-speed circuits. When a 'stop' was found the machine over-ran as it slowed, reversed to the position found and printed it out before restarting. The running time for a 4-rotor run was about 20 minutes, and for a 3-rotor run, about 50 seconds. Each machine was wide, high, deep and weighed 2.5 tons. The first machine was completed and tested on 3 May 1943. By 22 June, the first two machines, called 'Adam' and 'Eve', broke a particularly difficult German naval cipher, the "Offizier" settings for 9 and 10 June. A P Mahon, who had joined the Naval Section in Hut 8 in 1941, reported in his official 1945 "History of Hut Eight 1939-1945": The American bombe was in its essentials the same as the English bombe though it functioned rather better as they were not handicapped by having to make it, as Keen was forced to do owing to production difficulties, on the framework of a 3 wheel machine. By late autumn [1943] new American machines were coming into action at the rate of about 2 a week, the ultimate total being in the region of 125. These bombes were faster, and soon more available, than the British bombes at Bletchley Park and its outstations. Consequently, they were put to use for Hut 6 as well as Hut 8 work. In Alexander's "Cryptographic History of Work on German Naval Enigma", he wrote as follows. When the Americans began to turn out bombes in large numbers there was a constant interchange of signal - cribs, keys, message texts, cryptographic chat and so on. This all went by cable being first encyphered on the combined Anglo-American cypher machine, C.C.M. Most of the cribs being of operational urgency rapid and efficient communication was essential and a high standard was reached on this; an emergency priority signal consisting of a long crib with crib and message text repeated as a safeguard against corruption would take under an hour from the time we began to write the signal out in Hut 8 to the completion of its decyphering in Op. 20 G. As a result of this we were able to use the Op. 20 G bombes almost as conveniently as if they had been at one of our outstations 20 or 30 miles away. Ch. VIII para. 11 Production was stopped in September 1944 after 121 bombes had been made. The last-manufactured US Navy bombe is on display at the US National Cryptologic Museum. Jack Ingram, former Curator of the museum, describes being told of the existence of a second bombe and searching for it but not finding it whole. Whether it remains in storage in pieces, waiting to be discovered, or no longer exists, is unknown.
It was known as "003" or "Madame X". It did not use drums to represent the Enigma rotors, using instead telephone-type relays. It could, however, handle one problem that the bombes with drums could not. The set of ten bombes consisted of a total of 144 Enigma-equivalents, each mounted on a rack approximately long high and wide. There were 12 control stations which could allocate any of the Enigma-equivalents into the desired configuration by means of plugboards. Rotor order changes did not require the mechanical process of changing drums, but was achieved in about half a minute by means of push buttons. A 3-rotor run took about 10 minutes. Bombe rebuild. In 1994 a group led by John Harper of the BCS Computer Conservation Society started a project to build a working replica of a bombe. The project required detailed research, and took thirteen years of effort before the replica was completed, which was then put on display at the Bletchley Park museum. In March 2009 it won an Engineering Heritage Award. The Bombe rebuild was relocated to The National Museum of Computing on Bletchley Park in May 2018, the new gallery officially re-opening on 23 June 2018. Notes. &lt;templatestyles src="Reflist/styles.css" /&gt; References. &lt;templatestyles src="Refbegin/styles.css" /&gt;
[ { "math_id": 0, "text": "\\frac{{26 \\choose 2} \\cdot {24 \\choose 2} \\cdot {22 \\choose 2} \\cdot \\dots \\cdot {8 \\choose 2}}{10!} = 150\\,738\\,274\\,937\\,250" } ]
https://en.wikipedia.org/wiki?curid=615354
6153844
Reaction calorimeter
Apparatus for measuring reaction energy A reaction calorimeter is a calorimeter that measures the amount of energy released (in exothermic reactions) or absorbed (in endothermic reactions) by a chemical reaction. It does this by measuring the total change in temperature of an exact amount of water in a vessel. Methods. Heat flow calorimetry. Heat flow calorimetry measures the heat flowing across the reactor wall and quantifies this in relation to other energy flows within the reactor. formula_0 where: formula_1 → process heating (or cooling) power (W) formula_2 → overall heat transfer coefficient (W/(m2K)) formula_3 → heat transfer area (m2) formula_4 → process temperature (K) formula_5 → jacket temperature (K) Heat flow calorimetry allows the user to measure heat while the process temperature remains under control. While the driving force "Tr − Tj" is measured with a relatively high resolution, the overall heat transfer coefficient "U" or the calibration factor "UA" is determined by calibration before and after the reaction takes place. These factors are affected by the product composition, process temperature, agitation rate, viscosity, and liquid level. Heat balance calorimetry. In heat balance calorimetry, the cooling/heating jacket controls the temperature of the process. Heat is measured by monitoring the heat gained or lost by the heat transfer fluid. formula_6 where: formula_1 → process heating (or cooling) power (W) formula_7 → mass flow of heat transfer fluid (kg/s) formula_8 → specific heat of heat transfer fluid (J/(kg K)) formula_9 → inlet temperature of heat transfer fluid (K) formula_10 → outlet temperature of heat transfer fluid (K) Heat balance calorimetry is considered an effective method for measuring heat, as it involves quantifying the heat entering and leaving the system through the heating/cooling jacket using the heat transfer fluid, whose properties are well known. This method effectively measures heat loss or gain, circumventing many calibration issues associated with heat flow and power compensation calorimetry. However, it is less effective in traditional batch vessels, where significant heat shifts in the cooling/heating jacket can obscure the process's heat signal. Power compensation calorimetry. Power compensation calorimetry is a variation of the heat flow technique. This method utilizes a cooling jacket operating at constant flow and temperature. The process temperature is regulated by adjusting the power of an electrical heater. At the start of the experiment, the electrical heat and cooling power are balanced. As the process's heat load changes, the electrical power is adjusted to maintain the desired process temperature. The heat liberated or absorbed by the process is determined from the difference between the initial electrical power and the electrical power required at the time of measurement. While power compensation calorimetry requires less preparation than heat flow calorimetry, it faces similar limitations. Changes in product composition, liquid level, process temperature, agitation, or viscosity can impact the instrument's calibration. Additionally, the presence of an electrical heating element is not optimal for process operations. Another limitation of this method is that the maximum heat it can measure is equal to the initial electrical power applied to the heater.
formula_11 where: formula_12 is the current supplied to the heater formula_13 is the voltage supplied to the heater formula_14 is the current supplied to the heater at equilibrium (assuming constant voltage / resistance) Constant flux calorimetry. Constant flux heating and cooling jackets use variable geometry cooling jackets and can operate with cooling jackets at a substantially constant temperature. These reaction calorimeters are simpler to use and are much more tolerant of changes in the process conditions. Constant flux calorimetry is an advanced temperature control mechanism used to generate accurate calorimetry. It operates by controlling the jacket area of a laboratory reactor while maintaining a constant inlet temperature of the thermal fluid. This method allows for precise temperature control, even during strongly exothermic or endothermic events, as additional cooling can be achieved by increasing the area over which heat is exchanged. This system is generally more accurate than heat balance calorimetry, as changes in the delta temperature (Tout - Tin) are magnified by keeping the fluid flow as low as possible. One of the main advantages of constant flux calorimetry is the ability to dynamically measure heat transfer coefficient (U). According to the heat balance equation: formula_15 From the heat flow equation that formula_16 These equations can be rearranged to: formula_17 This allows for the monitoring of U as a function of time. Instrumentation. In traditional heat flow calorimeters, one reactant is added continuously in small amounts, similar to a semi-batch process, to achieve complete conversion of the reaction. Unlike in a tubular reactor, this approach results in longer residence times, different substance concentrations, and flatter temperature profiles. Consequently, the selectivity of less well-defined reactions can be affected, potentially leading to the formation of by-products or secondary products. These by-products can alter the measured heat of reaction as different chemical bonds are formed. The amount of by-product or secondary product can be determined by calculating the yield of the desired product. A continuous reaction calorimeter is a similar instrument used to obtain thermodynamic information on continuous processes in tubular reactors. It can record an axial temperature profile along the tubular reactor, allowing the determination of the specific heat of reaction through heat balances and segmental dynamic parameters. The system typically includes a tubular reactor, dosing systems, preheaters, temperature sensors, and flow meters. If the heat of reaction measured in the heat flow calorimeter (HFC) differs from that measured in a plug flow reactor (PFR) calorimeter, it is likely that side reactions have occurred. These discrepancies may arise due to differences in temperatures and residence times. The total measured heat (Qr) comprises partially overlapped reaction enthalpies (ΔHr) of both main and side reactions, depending on their degrees of conversion (U). Continuous reaction calorimeters offer significant advantages in the study of continuous processes, particularly in industrial applications where consistent and reproducible reaction conditions are critical. These instruments can provide detailed insights into the thermodynamics and kinetics of reactions under steady-state conditions. 
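Returning to the balances above: the heat-flow, heat-balance and constant-flux relations can all be evaluated directly from logged temperatures and flows. The following Python sketch shows how they combine; every numerical value is hypothetical, and the log-mean temperature difference (LMTD) is assumed here to be taken between the process temperature and the jacket inlet and outlet temperatures, which is only one possible convention.

import math

# Hypothetical snapshot of logged data (all values illustrative only)
U_cal = 400.0      # heat transfer coefficient from calibration, W/(m^2 K)
A = 0.10           # heat transfer area, m^2
T_r = 338.15       # process temperature, K
T_j = 328.15       # mean jacket temperature, K
m_dot = 0.05       # mass flow of heat transfer fluid, kg/s
c_p = 1800.0       # specific heat of heat transfer fluid, J/(kg K)
T_in = 325.95      # jacket inlet temperature, K
T_out = 330.35     # jacket outlet temperature, K

# Heat flow calorimetry: Q = U A (T_r - T_j), with U known from calibration
q_heat_flow = U_cal * A * (T_r - T_j)

# Heat balance calorimetry: heat picked up by the heat transfer fluid,
# equal in magnitude to the m_s C_ps (T_i - T_o) term above
q_heat_balance = m_dot * c_p * (T_out - T_in)

# Constant flux calorimetry: monitor U from the same jacket-side data,
# U = m_f C_p (T_in - T_out) / (A * LMTD)
dT1, dT2 = T_r - T_in, T_r - T_out
lmtd = (dT1 - dT2) / math.log(dT1 / dT2) if dT1 != dT2 else dT1
U_monitored = q_heat_balance / (A * lmtd)

print(f"Q from heat flow:    {q_heat_flow:6.1f} W")
print(f"Q from heat balance: {q_heat_balance:6.1f} W")
print(f"monitored U:         {U_monitored:6.1f} W/(m^2 K)")

With self-consistent inputs the two heat estimates agree closely and the monitored U recovers the calibrated value; in a real reactor the differences between them reflect the other energy flows mentioned above, which is why calibration is needed.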
The use of precise dosing systems ensures accurate control over reactant flow rates, while preheaters can stabilize the temperature of incoming reactants, minimizing temperature fluctuations that could affect the reaction rate and selectivity. Moreover, advanced temperature sensors and flow meters enable real-time monitoring and control of the process parameters, facilitating the optimization of reaction conditions. By recording axial temperature profiles, researchers can identify temperature gradients within the reactor, which can inform adjustments to improve heat transfer efficiency and reaction uniformity. This capability is particularly important in scaling up reactions from laboratory to industrial scale, where maintaining consistent product quality and yield is essential. Continuous reaction calorimeters also allow for the study of reaction mechanisms and the identification of intermediate species. By analyzing the heat flow data in conjunction with other analytical techniques, such as spectroscopy or chromatography, researchers can gain a comprehensive understanding of the reaction pathways and the factors influencing selectivity and yield. This information is invaluable for developing efficient and sustainable chemical processes, reducing waste, and minimizing energy consumption. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": " Q = U A (T_r-T_j)" }, { "math_id": 1, "text": "Q" }, { "math_id": 2, "text": "U" }, { "math_id": 3, "text": "A" }, { "math_id": 4, "text": "T_r" }, { "math_id": 5, "text": "T_j" }, { "math_id": 6, "text": "Q = m_s C_{ps}(T_i - T_o)" }, { "math_id": 7, "text": "m_s" }, { "math_id": 8, "text": "C_{ps}" }, { "math_id": 9, "text": "T_i" }, { "math_id": 10, "text": "T_o" }, { "math_id": 11, "text": "Q = IV\\,\\,\\,\\,\\,\\mathrm {or}\\,\\,\\,\\,\\,\\,(I - I_0)V" }, { "math_id": 12, "text": "I" }, { "math_id": 13, "text": "V" }, { "math_id": 14, "text": "I_0" }, { "math_id": 15, "text": "Q=m_f\\;C_p\\;(T_{in}-T_{out})" }, { "math_id": 16, "text": "Q=U\\;A\\;LMTD" }, { "math_id": 17, "text": "U=\\frac{m_f\\;C_p\\;(T_{in}-T_{out})}{A\\;LMTD}" } ]
https://en.wikipedia.org/wiki?curid=6153844
61539873
Diamond norm
In quantum information, the diamond norm, also known as completely bounded trace norm, is a norm on the space of quantum operations, or more generally on any linear map that acts on complex matrices. Its main application is to measure the "single use distinguishability" of two quantum channels. If an agent is randomly given one of two quantum channels, permitted to pass one state through the unknown channel, and then measures the state in an attempt to determine which operation they were given, then their maximal probability of success is determined by the diamond norm of the difference of the two channels. Although the diamond norm can be efficiently computed via semidefinite programming, it is in general difficult to obtain analytical expressions and those are known only for a few particular cases. Definition. The diamond norm is the trace norm of the output of a trivial extension of a linear map, maximized over all possible inputs with trace norm at most one. More precisely, let formula_0 be a linear transformation, where formula_1 denotes the formula_2 complex matrices, let formula_3 be the identity map on formula_2 matrices, and formula_4. Then the diamond norm of formula_5 is given by formula_6 where formula_7 denotes the trace norm. The diamond norm induces the diamond distance, which in the particular case of completely positive, trace non-increasing maps formula_8 is given by formula_9 where the maximization is done over all density matrices formula_10 of dimension formula_11. Discrimination of quantum channels. In the task of single-shot discrimination of quantum channels, an agent is given one of the channels formula_8 with probabilities "p" and "1-p", respectively, and attempts to guess which channel they received by preparing a state formula_10, passing it through the unknown channel, and making a measurement on the resulting state. The maximal probability that the agent guesses correctly is given by formula_12 Semidefinite programming formulation. The diamond norm can be efficiently calculated via semidefinite programming. Let formula_13 be a linear map, as before, and formula_14 its Choi state, defined as formula_15. The diamond norm of formula_5 is then given by the solution of the following semidefinite programming problem: formula_16 where formula_17 and formula_18 are Hermitian matrices. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
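For concreteness, the Choi state used by this program can be assembled directly from its definition. The sketch below (Python/NumPy) assumes the channel is supplied as a list of Kraus operators; the single-qubit bit-flip mixture at the end is purely illustrative.

import numpy as np

def apply_channel(kraus_ops, rho):
    # Phi(rho) = sum_k K_k rho K_k^dagger
    return sum(K @ rho @ K.conj().T for K in kraus_ops)

def choi_matrix(kraus_ops, dim_in):
    # J(Phi) = sum_ij |i><j| (tensor) Phi(|i><j|)
    dim_out = kraus_ops[0].shape[0]
    J = np.zeros((dim_in * dim_out, dim_in * dim_out), dtype=complex)
    for i in range(dim_in):
        for j in range(dim_in):
            E_ij = np.zeros((dim_in, dim_in), dtype=complex)
            E_ij[i, j] = 1.0
            J += np.kron(E_ij, apply_channel(kraus_ops, E_ij))
    return J

# Illustrative example: a qubit channel mixing the identity with a bit flip
X = np.array([[0, 1], [1, 0]], dtype=complex)
p = 0.25
kraus = [np.sqrt(1 - p) * np.eye(2), np.sqrt(p) * X]
print(np.round(choi_matrix(kraus, dim_in=2).real, 3))

The matrix returned here is the J(Phi) that appears in the semidefinite program above.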
[ { "math_id": 0, "text": "\\Phi: M_n(\\mathbb{C}) \\to M_m(\\mathbb{C})" }, { "math_id": 1, "text": " M_n(\\mathbb{C})" }, { "math_id": 2, "text": "n \\times n " }, { "math_id": 3, "text": "\\mathbb{1}_n: M_n(\\mathbb{C}) \\to M_n(\\mathbb{C})" }, { "math_id": 4, "text": " X \\in M_{n^2}(\\mathbb{C})" }, { "math_id": 5, "text": "\\Phi" }, { "math_id": 6, "text": "\\|\\Phi\\|_\\diamond := \\max_{X;\\|X\\|_1 \\le 1}\\|(\\Phi \\otimes \\mathbb{1}_n)X\\|_1, " }, { "math_id": 7, "text": "\\|\\cdot\\|_1" }, { "math_id": 8, "text": "\\mathcal E ,\\mathcal F " }, { "math_id": 9, "text": "d_\\diamond(\\mathcal E,\\mathcal F) := \\|\\mathcal E - \\mathcal F\\|_\\diamond = \\max_{\\rho}\\|(\\mathcal{E} \\otimes \\mathbb{1}_n)\\rho - (\\mathcal{F} \\otimes \\mathbb{1}_n)\\rho\\|_1, " }, { "math_id": 10, "text": "\\rho" }, { "math_id": 11, "text": "n^2" }, { "math_id": 12, "text": "p_\\text{succ} = \\frac12 + \\frac12 \\|p \\mathcal E - (1-p)\\mathcal F\\|_\\diamond " }, { "math_id": 13, "text": "\\Phi : A \\to B" }, { "math_id": 14, "text": " J(\\Phi) \\in A \\otimes B " }, { "math_id": 15, "text": " J(\\Phi) := \\sum_{ij} |i\\rangle\\langle j| \\otimes \\Phi(|i\\rangle\\langle j|) " }, { "math_id": 16, "text": "\n\\begin{align}\n\\min_{Y,\\sigma} \\quad& \\operatorname{tr}(Y J(\\Phi)) \\\\\n\\text{subject to} \\quad &-\\sigma \\otimes I \\preceq Y \\preceq\\sigma \\otimes I \\\\\n& \\quad\\quad\\quad \\operatorname{tr}(\\sigma) = 1\n\\end{align}\n" }, { "math_id": 17, "text": " Y \\in A \\otimes B" }, { "math_id": 18, "text": " \\sigma \\in A " } ]
https://en.wikipedia.org/wiki?curid=61539873
615402
Ball valve
Flow control device A ball valve is a flow control device which uses a hollow, perforated, and pivoting ball to control fluid flowing through it. It is open when the hole through the middle of the ball is in line with the flow inlet, and closed when it is pivoted 90 degrees by the valve handle, blocking the flow. The handle lies flat in alignment with the flow when open, and is perpendicular to it when closed, making for easy visual confirmation of the valve's status. The shut position 1/4 turn could be in either clockwise or counter-clockwise direction. Ball valves are durable, performing well after many cycles, and reliable, closing securely even after long periods of disuse. These qualities make them an excellent choice for shutoff and control applications, where they are often preferred to gates and globe valves, but they lack the fine control of those alternatives, in throttling applications. The ball valve's ease of operation, repair, and versatility lend it to extensive industrial use, supporting pressures up to and temperatures up to , depending on design and materials used. Sizes typically range from 0.2 to 48 in (5 to 1200 mm). Valve bodies are made of metal, plastic, or metal with a ceramic; floating balls are often chrome plated for durability. One disadvantage of a ball valve is that when used for controlling water flow, they trap water in the center cavity while in the closed position. In the event of ambient temperatures falling below freezing point, the sides can crack due to the expansion associated with ice formation. Some means of insulation or heat tape in this situation will usually prevent damage. Another option for cold climates is the "freeze tolerant ball valve". This style of ball valve incorporates a freeze plug in the side so in the event of a freeze-up, the freeze plug ruptures, acting as a 'sacrificial' fail point, allowing an easier repair. Instead of replacing the whole valve, all that is required is the fitting of a new freeze plug. For cryogenic fluids, or product that may expand inside of the ball, there is a vent drilled into the upstream side of the valve. This is referred to as a vented ball. A ball valve should not be confused with a "ball-check valve", a type of check valve that uses a solid ball to prevent undesired backflow. Other types of quarter-turn valves include the butterfly valve and plug valve and freeze proof ball valve. Types. There are five general body styles of ball valves: "single body", "three-piece body", "split body", "top entry", and "welded". The difference is based on how the pieces of the valve—especially the casing that contains the ball itself—are manufactured and assembled. The valve operation is the same in each case. The one-piece bodies provide a very rigid construction, in some versions the ball is removable from the valve without taking the entire valve out of the line however multi-piece bodies offer greater scope for ingenuity of design. In addition, there are different styles related to the bore of the ball mechanism itself. And depending on the working pressure, the ball valves are categorized as low-pressure ball valves and high-pressure ball valves. In most industries, the ball valves with working pressures higher than are considered high-pressure ball valves. Usually the maximum working pressure for the high-pressure ball valves is and depends on the structure, sizes and sealing materials. The maximum working pressure of high-pressure ball valves can be up to . 
High-pressure ball valves are often used in hydraulic systems, so they are also known as hydraulic ball valves. Ball valves in sizes up to generally come in a single piece, two or three-piece designs. One-piece ball valves are almost always reduced bore, are relatively inexpensive, and are generally replaced instead of repaired. Two-piece ball valves generally have a slightly reduced (or standard) bore, and can be either throw-away or repairable. The three-piece design allows for the center part of the valve containing the ball, stem and seats to be easily removed from the pipeline. This facilitates efficient cleaning of deposited sediments, replacement of seats and gland packings, polishing out of small scratches on the ball, all this without removing the pipes from the valve body. The design concept of a three-piece valve is for it to be repairable. Each valve is heated to a certain degree, while the excess material is trimmed from the body. Full bore. A full bore (sometimes full port) ball valve has an oversized ball so that the hole in the ball is the same size as the pipeline resulting in lower friction loss. Flow is unrestricted but the valve is larger and more expensive so this is only used where free flow is required, for example in pipelines that require pigging. Reduced bore, or reduced port. In reduced bore (sometimes reduced port) ball valves, flow through the valve is one pipe size smaller than the valve's pipe size resulting in the flow area being smaller than the pipe. As the flow discharge remains constant and is equal to the area of flow (A) times velocity (V), formula_0 the velocity increases with reduced area of flow. V port. A V port ball valve has either a 'v' shaped ball or a 'v' shaped seat. This allows for linear and even equal percentage flow characteristics. When the valve is in the closed position and opening is commenced the small end of the 'v' is opened first allowing stable flow control during this stage. This type of design requires a generally more robust construction due to higher velocities of the fluids, which might damage a standard valve. When machined correctly these are excellent control valves, offering superior leakage performance. Cavity filler. Many industries encounter problems with residues in the ball valve. Where the fluid is meant for human consumption, residues may also be a health hazard, and where the fluid changes from time to time contamination of one fluid with another may occur. Residues arise because in the half-open position of the ball valve a gap is created between the ball bore and the body in which fluid can be trapped. To avoid the fluid getting into this cavity, the cavity has to be plugged, which can be done by extending the seats in such a manner that it is always in contact with the ball. This type of ball valve is known as Cavity Filler Ball Valve. There are a few types of ball valves related to the attachment and lateral movement of the ball: Trunnion, floating and actuated. A trunnion ball valve has additional mechanical anchoring of the ball at the top and the bottom, suitable for larger and higher pressure valves (generally above and ). A floating ball valve is one where the ball is not held in place by a trunnion. In normal operation, this will cause the ball to float downstream slightly. This causes the seating mechanism to compress under the ball pressing against it. 
Furthermore, in some types, in the event of some force causing the seat mechanism to dissipate (such as extreme heat from fire outside the valve), the ball will float all the way to metal body which is designed to seal against the ball providing a somewhat failsafe design. Manually operated ball valves can be closed quickly and thus there is a danger of water hammer. Some ball valves are equipped with an actuator that may be pneumatically, hydraulically or motor operated. These valves can be used either for on/off or flow control. A pneumatic flow control valve is also equipped with a positioner which transforms the control signal into actuator position and valve opening accordingly. Materials of construction. Body materials may include, but are not limited to, any of these materials: &lt;templatestyles src="Div col/styles.css"/&gt; There are many different types of seats and seals that are used in ball valves as well. Valves are usually manufactured with different materials, each with specific applications they are good for due to their chemical compatibility, pressures, and temperatures. Some of the commonly used materials include brass, stainless steel, bronze etc. These material choices ensure that valves are suitable for their respective functions, providing efficient and reliable performance in various industries and applications. &lt;templatestyles src="Div col/styles.css"/&gt; See also. &lt;templatestyles src="Div col/styles.css"/&gt; References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "A_1 V_1 = A_2 V_2" } ]
https://en.wikipedia.org/wiki?curid=615402
61547718
Mathematics of artificial neural networks
An artificial neural network (ANN) combines biological principles with advanced statistics to solve problems in domains such as pattern recognition and game-play. ANNs adopt the basic model of neuron analogues connected to each other in a variety of ways. Structure. Neuron. A neuron with label formula_0 receiving an input formula_1 from predecessor neurons consists of the following components: formula_6 formula_8 Often the output function is simply the identity function. An "input neuron" has no predecessor but serves as input interface for the whole network. Similarly an "output neuron" has no successor and thus serves as output interface of the whole network. Propagation function. The "propagation function" computes the "input" formula_1 to the neuron formula_0 from the outputs formula_9and typically has the form formula_10 Bias. A bias term can be added, changing the form to the following: formula_11 where formula_12 is a bias. Neural networks as functions. Neural network models can be viewed as defining a function that takes an input (observation) and produces an output (decision) formula_13 or a distribution over formula_14 or both formula_14 and formula_15. Sometimes models are intimately associated with a particular learning rule. A common use of the phrase "ANN model" is really the definition of a "class" of such functions (where members of the class are obtained by varying parameters, connection weights, or specifics of the architecture such as the number of neurons, number of layers or their connectivity). Mathematically, a neuron's network function formula_16 is defined as a composition of other functions formula_17, that can further be decomposed into other functions. This can be conveniently represented as a network structure, with arrows depicting the dependencies between functions. A widely used type of composition is the "nonlinear weighted sum", where formula_18, where formula_19 (commonly referred to as the activation function) is some predefined function, such as the hyperbolic tangent, sigmoid function, softmax function, or rectifier function. The important characteristic of the activation function is that it provides a smooth transition as input values change, i.e. a small change in input produces a small change in output. The following refers to a collection of functions formula_20 as a vector formula_21. This figure depicts such a decomposition of formula_22, with dependencies between variables indicated by arrows. These can be interpreted in two ways. The first view is the functional view: the input formula_23 is transformed into a 3-dimensional vector formula_24, which is then transformed into a 2-dimensional vector formula_25, which is finally transformed into formula_22. This view is most commonly encountered in the context of optimization. The second view is the probabilistic view: the random variable formula_26 depends upon the random variable formula_27, which depends upon formula_28, which depends upon the random variable formula_14. This view is most commonly encountered in the context of graphical models. The two views are largely equivalent. In either case, for this particular architecture, the components of individual layers are independent of each other (e.g., the components of formula_25 are independent of each other given their input formula_24). This naturally enables a degree of parallelism in the implementation. Networks such as the previous one are commonly called feedforward, because their graph is a directed acyclic graph. 
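As a concrete illustration of the propagation function and the nonlinear weighted sum described above, the following Python sketch evaluates one layer of such a network; the weights, bias values and the choice of a logistic activation are arbitrary placeholders rather than anything prescribed by the model.

import numpy as np

def sigmoid(x):
    # one common choice for the activation function K
    return 1.0 / (1.0 + np.exp(-x))

def layer_output(outputs_prev, weights, bias):
    """One layer: p_j = sum_i o_i w_ij + w_0j, followed by o_j = K(p_j)."""
    p = outputs_prev @ weights + bias   # propagation function with bias term
    return sigmoid(p)                   # outputs of all neurons in the layer

# Toy example: 3 predecessor neurons feeding a layer of 2 neurons
o_prev = np.array([0.5, -1.0, 2.0])
W = np.array([[0.1, -0.3],
              [0.8,  0.4],
              [-0.5, 0.2]])
b = np.array([0.05, -0.1])
print(layer_output(o_prev, W, b))

Stacking such layers, each feeding its outputs to the next, realizes the kind of nested composition of nonlinear weighted sums discussed above.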
Networks with cycles are commonly called recurrent. Such networks are commonly depicted in the manner shown at the top of the figure, where formula_22 is shown as dependent upon itself. However, an implied temporal dependence is not shown. Backpropagation. Backpropagation training algorithms fall into three categories: Algorithm. Let formula_29 be a network with formula_30 connections, formula_31 inputs and formula_32 outputs. Below, formula_33 denote vectors in formula_34, formula_35 vectors in formula_36, and formula_37 vectors in formula_38. These are called "inputs", "outputs" and "weights", respectively. The network corresponds to a function formula_39 which, given a weight formula_40, maps an input formula_41 to an output formula_42. In supervised learning, a sequence of "training examples" formula_43 produces a sequence of weights formula_44 starting from some initial weight formula_45, usually chosen at random. These weights are computed in turn: first compute formula_46 using only formula_47 for formula_48. The output of the algorithm is then formula_49, giving a new function formula_50. The computation is the same in each step, hence only the case formula_51 is described. formula_52 is calculated from formula_53 by considering a variable weight formula_40 and applying gradient descent to the function formula_54 to find a local minimum, starting at formula_55. This makes formula_52 the minimizing weight found by gradient descent. Learning pseudocode. To implement the algorithm above, explicit formulas are required for the gradient of the function formula_56 where the function is formula_57. The learning algorithm can be divided into two phases: propagation and weight update. Propagation. Propagation involves the following steps: Weight update. For each weight: The "learning rate" is the ratio (percentage) that influences the speed and quality of learning. The greater the ratio, the faster the neuron trains, but the lower the ratio, the more accurate the training. The sign of the gradient of a weight indicates whether the error varies directly with or inversely to the weight. Therefore, the weight must be updated in the opposite direction, "descending" the gradient. Learning is repeated (on new batches) until the network performs adequately. Pseudocode. Pseudocode for a stochastic gradient descent algorithm for training a three-layer network (one hidden layer): initialize network weights (often small random values) do for each training example named ex do prediction = neural-net-output(network, ex) "// forward pass" actual = teacher-output(ex) compute error (prediction - actual) at the output units compute formula_58 for all weights from hidden layer to output layer "// backward pass" compute formula_59 for all weights from input layer to hidden layer "// backward pass continued" update network weights "// input layer not modified by error estimate" until error rate becomes acceptably low return the network The lines labeled "backward pass" can be implemented using the backpropagation algorithm, which calculates the gradient of the error of the network regarding the network's modifiable weights. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
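The pseudocode above translates into a small, self-contained NumPy program. This is only a sketch of one possible realization: the logistic activation, squared-error loss, layer sizes, learning rate and the XOR toy data are arbitrary choices, not part of the algorithm itself.

import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def train(inputs, targets, hidden=4, rate=0.5, epochs=5000):
    """Stochastic gradient descent for a three-layer network (one hidden layer)."""
    n_in, n_out = inputs.shape[1], targets.shape[1]
    # initialize network weights (often small random values)
    W1 = rng.normal(scale=0.5, size=(n_in, hidden))
    b1 = np.zeros(hidden)
    W2 = rng.normal(scale=0.5, size=(hidden, n_out))
    b2 = np.zeros(n_out)
    for _ in range(epochs):
        for x, t in zip(inputs, targets):            # one training example at a time
            # forward pass
            h = sigmoid(x @ W1 + b1)                 # hidden layer activations
            y = sigmoid(h @ W2 + b2)                 # network prediction
            # backward pass: error times the derivative of the logistic function
            delta_out = (y - t) * y * (1.0 - y)              # output layer
            delta_hid = (delta_out @ W2.T) * h * (1.0 - h)   # hidden layer
            # weight update: step in the direction opposite to the gradient
            W2 -= rate * np.outer(h, delta_out)
            b2 -= rate * delta_out
            W1 -= rate * np.outer(x, delta_hid)
            b1 -= rate * delta_hid
    return W1, b1, W2, b2

# XOR as a toy training set (purely illustrative)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
T = np.array([[0], [1], [1], [0]], dtype=float)
W1, b1, W2, b2 = train(X, T)
print(np.round(sigmoid(sigmoid(X @ W1 + b1) @ W2 + b2), 2))

The two "delta" lines are the backward pass of the pseudocode, and the four update lines implement the "descending the gradient" rule described above, with the learning rate controlling the step size.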
[ { "math_id": 0, "text": "j" }, { "math_id": 1, "text": "p_j(t)" }, { "math_id": 2, "text": "a_j(t)" }, { "math_id": 3, "text": "\\theta_j" }, { "math_id": 4, "text": "f" }, { "math_id": 5, "text": "t+1" }, { "math_id": 6, "text": " a_j(t+1) = f(a_j(t), p_j(t), \\theta_j), " }, { "math_id": 7, "text": "f_\\text{out}" }, { "math_id": 8, "text": " o_j(t) = f_\\text{out}(a_j(t)). " }, { "math_id": 9, "text": "o_i(t)" }, { "math_id": 10, "text": " p_j(t) = \\sum_i o_i(t) w_{ij}. " }, { "math_id": 11, "text": " p_j(t) = \\sum_i o_i(t) w_{ij}+ w_{0j}, " }, { "math_id": 12, "text": "w_{0j}" }, { "math_id": 13, "text": "\\textstyle f : X \\rightarrow Y " }, { "math_id": 14, "text": "\\textstyle X" }, { "math_id": 15, "text": "\\textstyle Y" }, { "math_id": 16, "text": "\\textstyle f(x)" }, { "math_id": 17, "text": "\\textstyle g_i(x)" }, { "math_id": 18, "text": "\\textstyle f (x) = K \\left(\\sum_i w_i g_i(x)\\right) " }, { "math_id": 19, "text": "\\textstyle K" }, { "math_id": 20, "text": "\\textstyle g_i" }, { "math_id": 21, "text": "\\textstyle g = (g_1, g_2, \\ldots, g_n)" }, { "math_id": 22, "text": "\\textstyle f" }, { "math_id": 23, "text": "\\textstyle x" }, { "math_id": 24, "text": "\\textstyle h" }, { "math_id": 25, "text": "\\textstyle g" }, { "math_id": 26, "text": "\\textstyle F = f(G) " }, { "math_id": 27, "text": "\\textstyle G = g(H)" }, { "math_id": 28, "text": "\\textstyle H=h(X)" }, { "math_id": 29, "text": "N" }, { "math_id": 30, "text": "e" }, { "math_id": 31, "text": "m" }, { "math_id": 32, "text": "n" }, { "math_id": 33, "text": "x_1,x_2,\\dots" }, { "math_id": 34, "text": "\\mathbb{R}^m" }, { "math_id": 35, "text": "y_1,y_2,\\dots" }, { "math_id": 36, "text": "\\mathbb{R}^n" }, { "math_id": 37, "text": "w_0, w_1, w_2, \\ldots" }, { "math_id": 38, "text": "\\mathbb{R}^e" }, { "math_id": 39, "text": "y = f_N(w, x)" }, { "math_id": 40, "text": "w" }, { "math_id": 41, "text": "x" }, { "math_id": 42, "text": "y" }, { "math_id": 43, "text": "(x_1,y_1), \\dots, (x_p, y_p)" }, { "math_id": 44, "text": "w_0, w_1, \\dots, w_p" }, { "math_id": 45, "text": "w_0" }, { "math_id": 46, "text": "w_i" }, { "math_id": 47, "text": "(x_i, y_i, w_{i-1})" }, { "math_id": 48, "text": "i = 1, \\dots, p" }, { "math_id": 49, "text": "w_p" }, { "math_id": 50, "text": "x \\mapsto f_N(w_p, x)" }, { "math_id": 51, "text": "i = 1" }, { "math_id": 52, "text": "w_1" }, { "math_id": 53, "text": "(x_1, y_1, w_0)" }, { "math_id": 54, "text": "w\\mapsto E(f_N(w, x_1), y_1)" }, { "math_id": 55, "text": "w = w_0" }, { "math_id": 56, "text": "w \\mapsto E(f_N(w, x), y)" }, { "math_id": 57, "text": "E(y,y')= |y-y'|^2" }, { "math_id": 58, "text": "\\Delta w_h" }, { "math_id": 59, "text": "\\Delta w_i" } ]
https://en.wikipedia.org/wiki?curid=61547718
615491
Timeline of scientific discoveries
The timeline below shows the date of publication of possible major scientific breakthroughs, theories and discoveries, along with the discoverer. This article discounts mere speculation as discovery, although imperfect reasoned arguments, arguments based on elegance/simplicity, and numerically/experimentally verified conjectures qualify (as otherwise no scientific discovery before the late 19th century would count). The timeline begins at the Bronze Age, as it is difficult to give even estimates for the timing of events prior to this, such as of the discovery of counting, natural numbers and arithmetic. To avoid overlap with timeline of historic inventions, the timeline does not list examples of documentation for manufactured substances and devices unless they reveal a more fundamental leap in the theoretical ideas in a field. Bronze Age. Many early innovations of the Bronze Age were prompted by the increase in trade, and this also applies to the scientific advances of this period. For context, the major civilizations of this period are Egypt, Mesopotamia, and the Indus Valley, with Greece rising in importance towards the end of the third millennium BC. The Indus Valley script remains undeciphered and there are very little surviving fragments of its writing, thus any inference about scientific discoveries in that region must be made based only on archaeological digs. The following dates are approximations. Iron Age. The following dates are approximations. 500 BC – 1 BC. The following dates are approximations. 1 AD – 500 AD. Mathematics and astronomy flourish during the Golden Age of India (4th to 6th centuries AD) under the Gupta Empire. Meanwhile, Greece and its colonies have entered the Roman period in the last few decades of the preceding millennium, and Greek science is negatively impacted by the Fall of the Western Roman Empire and the economic decline that follows. 500 AD – 1000 AD. The Golden Age of Indian mathematics and astronomy continues after the end of the Gupta empire, especially in Southern India during the era of the Rashtrakuta, Western Chalukya and Vijayanagara empires of Karnataka, which variously patronised Hindu and Jain mathematicians. In addition, the Middle East enters the Islamic Golden Age through contact with other civilisations, and China enters a golden period during the Tang and Song dynasties. 16th century. The Scientific Revolution occurs in Europe around this period, greatly accelerating the progress of science and contributing to the rationalization of the natural sciences. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "\\tbinom{n}{r}=\\tbinom{n-1}{r}+\\tbinom{n-1}{r-1}" }, { "math_id": 1, "text": "\\tbinom{n}{r}=\\tfrac{n!}{r!(n-r)!}" }, { "math_id": 2, "text": "x^3 = px + q" } ]
https://en.wikipedia.org/wiki?curid=615491
61551787
Moffatt eddies
Moffatt eddies are sequences of eddies that develop in corners bounded by plane walls (or sometimes between a wall and a free surface) due to an arbitrary disturbance acting at asymptotically large distances from the corner. Although the source of motion is the arbitrary disturbance at large distances, the eddies develop quite independently, and thus the solution for these eddies emerges from an eigenvalue problem, a self-similar solution of the second kind. The eddies are named after Keith Moffatt, who discovered them in 1964, although some of the results had already been obtained by William Reginald Dean and P. E. Montagnon in 1949. Lord Rayleigh also studied the problem of flow near the corner with homogeneous boundary conditions in 1911. Moffatt eddies inside cones were solved by P. N. Shankar. Flow description. Near the corner, the flow can be assumed to be Stokes flow. Describing the two-dimensional planar problem by the cylindrical coordinates formula_0 with velocity components formula_1 defined by a stream function such that formula_2 the governing equation can be shown to be simply the biharmonic equation formula_3. The equation has to be solved with homogeneous boundary conditions (conditions taken for two walls separated by angle formula_4) formula_5 The Taylor scraping flow is similar to this problem but driven by an inhomogeneous boundary condition. The solution is obtained by the eigenfunction expansion, formula_6 where formula_7 are constants and the real parts of the eigenvalues are always greater than unity. The eigenvalues formula_8 will be a function of the angle formula_9, but regardless the eigenfunctions can be written down for any formula_10, formula_11 For the antisymmetric solution, the eigenfunction is even and hence formula_12, and the boundary conditions demand formula_13. The equation admits no real roots when formula_14°. These complex eigenvalues indeed correspond to the Moffatt eddies. The complex eigenvalues are given by formula_15 where formula_16 Here formula_17. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
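A minimal way to explore these eigenvalues numerically is to solve the relation sin 2(λ − 1)α = −(λ − 1) sin 2α directly for complex λ. In the Python sketch below the corner angle and the initial guess are arbitrary choices and only a single root is sought; a systematic study would trace the whole sequence of eigenvalues.

import numpy as np
from scipy.optimize import fsolve

alpha = np.deg2rad(30.0)                  # half-angle, so the corner angle 2*alpha is 60 degrees
k = np.sin(2.0 * alpha) / (2.0 * alpha)   # the constant k defined above

def residual(v):
    # real and imaginary parts of sin(z) + k*z, where z = 2*alpha*(lambda - 1)
    z = v[0] + 1j * v[1]
    r = np.sin(z) + k * z
    return [r.real, r.imag]

# initial guess for z (roughly where the first nontrivial complex root sits)
z0 = [4.2, 2.3]
z = fsolve(residual, z0)
lam = 1.0 + (z[0] + 1j * z[1]) / (2.0 * alpha)
print("one complex eigenvalue:", lam)
print("residual at the root:", residual(z))

Because the eigenvalue is complex, the corresponding term of the stream-function expansion changes sign infinitely often as the corner is approached, which is the analytical signature of the infinite sequence of eddies.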
[ { "math_id": 0, "text": "(r,\\theta)" }, { "math_id": 1, "text": "(u_r,u_\\theta)" }, { "math_id": 2, "text": "u_r = \\frac{1}{r}\\frac{\\partial\\psi}{\\partial\\theta}, \\quad u_\\theta=-\\frac{\\partial\\psi}{\\partial r}" }, { "math_id": 3, "text": "\\nabla^4\\psi=0" }, { "math_id": 4, "text": "2\\alpha" }, { "math_id": 5, "text": "\n\\begin{align}\nr>0,\\ \\theta =-\\alpha: &\\quad u_r = 0, \\ u_\\theta= 0\\\\\nr>0,\\ \\theta =\\alpha: &\\quad u_r = 0, \\ u_\\theta= 0.\n\\end{align}\n" }, { "math_id": 6, "text": "\\psi = \\sum_{n=1}^\\infty A_n r^{\\lambda_n} f_{\\lambda_n}(\\theta)" }, { "math_id": 7, "text": "A_n" }, { "math_id": 8, "text": "\\lambda_n" }, { "math_id": 9, "text": "\\alpha" }, { "math_id": 10, "text": "\\lambda" }, { "math_id": 11, "text": "\n\\begin{align}\nf_0 &= A + B\\theta + C\\theta^2 + D\\theta^3,\\\\\nf_1 &= A\\cos\\theta + B \\sin\\theta + C\\theta\\cos\\theta + D\\theta\\sin\\theta,\\\\\nf_2 &= A\\cos 2\\theta + B \\sin 2\\theta + C\\theta + D,\\\\\nf_\\lambda &= A\\cos\\lambda\\theta + B\\sin\\lambda\\theta + C\\cos(\\lambda-2)\\theta + D\\sin(\\lambda-2)\\theta, \\quad \\lambda\\geq 2.\n\\end{align}\n" }, { "math_id": 12, "text": "B=D=0" }, { "math_id": 13, "text": "\\sin 2(\\lambda-1)\\alpha = -(\\lambda-1) \\sin 2\\alpha" }, { "math_id": 14, "text": "2\\alpha<146" }, { "math_id": 15, "text": "\\lambda_n = 1+(2\\alpha)^{-1}(\\xi_n+i\\eta_n)" }, { "math_id": 16, "text": "\n\\begin{align}\n\\sin\\xi \\cosh\\eta &= - k\\xi,\\\\\n\\cos\\xi \\sinh \\xi &= - k\\eta.\n\\end{align}\n" }, { "math_id": 17, "text": "k=\\sin 2\\alpha/2\\alpha" } ]
https://en.wikipedia.org/wiki?curid=61551787
61556353
Calculus on finite weighted graphs
Type of discrete calculus In mathematics, calculus on finite weighted graphs is a discrete calculus for functions whose domain is the vertex set of a graph with a finite number of vertices and weights associated to the edges. This involves formulating discrete operators on graphs which are analogous to differential operators in calculus, such as graph Laplacians (or discrete Laplace operators) as discrete versions of the Laplacian, and using these operators to formulate differential equations, difference equations, or variational models on graphs which can be interpreted as discrete versions of partial differential equations or continuum variational models. Such equations and models are important tools to mathematically model, analyze, and process discrete information in many different research fields, e.g., image processing, machine learning, and network analysis. In applications, finite weighted graphs represent a finite number of entities by the graph's vertices, any pairwise relationships between these entities by graph edges, and the significance of a relationship by an edge weight function. Differential equations or difference equations on such graphs can be employed to leverage the graph's structure for tasks such as image segmentation (where the vertices represent pixels and the weighted edges encode pixel similarity based on comparisons of Moore neighborhoods or larger windows), data clustering, data classification, or community detection in a social network (where the vertices represent users of the network, the edges represent links between users, and the weight function indicates the strength of interactions between users). The main advantage of finite weighted graphs is that by not being restricted to highly regular structures such as discrete regular grids, lattice graphs, or meshes, they can be applied to represent abstract data with irregular interrelationships. If a finite weighted graph is geometrically embedded in a Euclidean space, i.e., the graph vertices represent points of this space, then it can be interpreted as a discrete approximation of a related nonlocal operator in the continuum setting. Basic definitions. A finite weighted graph formula_0 is defined as a triple formula_1 for which In a directed graph, each edge formula_5 has a start node formula_6 and an end node formula_7. In an undirected graph for every edge formula_8 there exists an edge formula_9 and the weight function is required to be symmetric, i.e., formula_10.[#endnote_] On the remainder of this page, the graphs will be assumed to be undirected, unless specifically stated otherwise. Many of the ideas presented on this page can be generalized to directed graphs. The "edge weight function" formula_11 associates to every edge formula_12 a real value formula_13. For both mathematical and application specific reasons, the weight function on the edges is often required to be strictly positive and on this page it will be assumed to be so unless specifically stated otherwise. Generalizations of many of the ideas presented on this page to include negatively weighted edges are possible. Sometimes an extension of the domain of the edge weight function to formula_14 is considered (with the resulting function still being called the "edge weight function") by setting formula_15 whenever formula_16. In applications each "graph vertex" formula_17 usually represents a single entity in the given data, e.g., elements of a finite data set, pixels in an image, or users in a social network. 
A "graph edge" represents a relationship between two entities, e.g. pairwise interactions or similarity based on comparisons of geometric neighborhoods (for example of pixels in images) or of another feature, with the edge weight encoding the strength of this relationship. Most commonly used weight functions are normalized to map to values between 0 and 1, i.e., formula_18. In the following it is assumed that the considered graphs are connected without self-loops or multiple edges between vertices. These assumptions are mostly harmless as in many applications each connected component of a disconnected graph can be treated as a graph in its own right, each appearance of formula_19 (which would be nonzero in the presence of self-loops) appears in the presence of another factor which disappears when formula_20 (see the section on differential graph operators below), and edge weights can encode similar information as multiple edges could. Neighborhood. A node formula_21 is a neighbor of the node formula_22 if there exists an edge formula_12. In terms of notation this relationship can be abbreviated by formula_23, which should be read as "formula_24 is a neighbor of formula_25". Otherwise, if formula_24 is not a neighbor of formula_25 one writes formula_26. The neighborhood formula_27 of a vertex formula_22 is simply the set of neighbors formula_28. The degree of a vertex formula_22 is the weighted size of its neighborhood: formula_29 Note that in the special case where formula_30 on formula_31 (i.e. the graph is unweighted) we have formula_32. Space of real vertex functions. Let formula_33 be the space of (real) vertex functions. Since formula_34 is a finite set, any vertex function formula_35 can be represented as a formula_36-dimensional vector formula_37 (where formula_38) and hence the space of vertex functions formula_39 can be identified with an formula_36-dimensional Hilbert space: formula_40. The inner product of formula_39 is defined as: formula_41 Furthermore, for any vertex function formula_42 the formula_43-norm and formula_44-norm of formula_45 are defined as: formula_46 The formula_47-norm is induced by the inner product. In applications vertex functions are useful to label the vertices of the nodes. For example, in graph-based data clustering, each node represents a data point and a vertex function is used to identify cluster membership of the nodes. Space of real edge functions. Analogously to real vertex functions, one can introduce the space of real edge functions formula_48. As any edge function formula_49 is defined on a finite set of edges formula_31, it can be represented as a formula_50-dimensional vector formula_51, where formula_52. Hence, the space of edge functions formula_53 can be identified as a formula_50-dimensional Hilbert space, i.e., formula_54. One special case of an edge function is the "normalized edge weight function" formula_55 introduced above in the section on basic definitions. Similar to that function, any edge function formula_49 can be trivially extended to formula_56 by setting formula_57 if formula_58. The space of those extended edge functions is still denoted by formula_53 and can be identified with formula_59, where now formula_60. The inner product of formula_53 is defined as: formula_61 Additionally, for any edge function formula_62 the formula_43-norm and formula_44-norm of formula_49 are defined as: formula_63 The formula_47-norm is induced by the inner product. 
If one extends the edge set formula_31 in a way such that formula_64 then it becomes clear that formula_65 because formula_40. This means that each edge function can be identified with a linear matrix operator. Differential graph operators. An important ingredient in the calculus on finite weighted graphs is the mimicking of standard differential operators from the continuum setting in the discrete setting of finite weighted graphs. This allows one to translate well-studied tools from mathematics, such as partial differential equations and variational methods, and make them usable in applications which can best be modeled by a graph. The fundamental concept which makes this translation possible is the graph gradient, a first-order difference operator on graphs. Based on this one can derive higher-order difference operators, e.g., the graph Laplacian. First-order differential operators. Weighted differences. Let formula_66 be a finite weighted graph and let formula_35 be a vertex function. Then the weighted difference (or weighted graph derivative) of formula_45 along a directed edge formula_12 is formula_67 For any weighted difference the following properties hold: formula_68 formula_69 formula_70 Weighted gradient. Based on the notion of weighted differences one defines the weighted gradient operator on graphs formula_71 as formula_72 This is a linear operator. To measure the "local variation" of a vertex function formula_45 at a vertex formula_22 one can restrict the gradient formula_73 of formula_45 to all directed edges starting in formula_25 and take the formula_43-norm of this edge function, i.e., formula_74 Weighted divergence. The adjoint operator formula_75 of the weighted gradient operator is a linear operator defined by formula_76 For undirected graphs with a symmetric weight function formula_77 the adjoint operator formula_78 of a function formula_79 at a vertex formula_6 has the following form: formula_80 One can then define the weighted divergence operator on graphs via the adjoint operator as formula_81. The divergence on a graph measures the net outflow of an edge function at each vertex of the graph. Second-order differential operators. Graph Laplace operator. The weighted graph Laplacian formula_82 is a well-studied operator in the graph setting. Mimicking the relationship formula_83 of the Laplace operator in the continuum setting, the weighted graph Laplacian can be derived for any vertex formula_22 as: formula_84 Note that one has to assume that the graph formula_0 is undirected and has a symmetric weight function formula_85 for this representation. Graph p-Laplace operators. The continuous formula_86-Laplace operator is a second-order differential operator that can be well-translated to finite weighted graphs. It allows the translation of various partial differential equations, e.g., the heat equation, to the graph setting. Based on the first-order partial difference operators on graphs one can formally derive a family of weighted graph formula_86-Laplace operators formula_87 for formula_88 by minimization of the discrete formula_86-Dirichlet energy functional formula_89 The necessary optimality conditions for a minimizer of the energy functional formula_31 lead to the following definition of the graph formula_86-Laplacian: formula_90 Note that the graph Laplace operator is a special case of the graph formula_86-Laplace operator for formula_91, i.e., formula_92
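These operators can be explored directly by storing the symmetric weight function as a matrix and applying the formulas above. In the following Python/NumPy sketch the small example graph, its weights and the vertex function are arbitrary; it computes vertex degrees, the weighted graph Laplacian and the graph p-Laplacian of a vertex function.

import numpy as np

# Symmetric weight matrix of a small undirected graph on 4 vertices:
# W[i, j] = w(x_i, x_j), and a zero entry means "no edge".
W = np.array([[0.0, 0.8, 0.5, 0.0],
              [0.8, 0.0, 1.0, 0.3],
              [0.5, 1.0, 0.0, 0.0],
              [0.0, 0.3, 0.0, 0.0]])

f = np.array([1.0, 0.0, 2.0, -1.0])   # an arbitrary vertex function

degrees = W.sum(axis=1)               # deg(x_i) = sum_j w(x_i, x_j)

def graph_laplacian(W, f):
    # (Delta_w f)(x_i) = sum_{x_j ~ x_i} w(x_i, x_j) * (f(x_j) - f(x_i))
    return np.array([np.sum(W[i] * (f - f[i])) for i in range(len(f))])

def graph_p_laplacian(W, f, p):
    # (Delta_{w,p} f)(x_i) = sum_j w_ij^(p/2) |f_j - f_i|^(p-2) (f_j - f_i)
    out = np.zeros(len(f))
    for i in range(len(f)):
        diff = f - f[i]
        nbrs = W[i] > 0               # only neighbours contribute to the sum
        out[i] = np.sum(W[i, nbrs] ** (p / 2)
                        * np.abs(diff[nbrs]) ** (p - 2) * diff[nbrs])
    return out

print("degrees:        ", degrees)
print("graph Laplacian:", graph_laplacian(W, f))
print("2-Laplacian:    ", graph_p_laplacian(W, f, p=2))   # coincides with the Laplacian
print("3-Laplacian:    ", graph_p_laplacian(W, f, p=3))

Identifying edge functions with matrices as noted above, the same weight matrix also serves as the data for the gradient and divergence operators; the Laplacian computed here equals the composition of those two operators derived above.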
Applications. Calculus on finite weighted graphs is used in a wide range of applications from different fields such as image processing, machine learning, and network analysis. A non-exhaustive list of tasks in which finite weighted graphs have been employed is: 1.^ Note that a slightly different definition of "undirected graph" is also in use, which considers an undirected edge to be a two-set (set with two distinct elements) formula_93 instead of a pair of ordered pairs formula_8 and formula_9. Here the latter description is needed, as it is required to allow edge functions in formula_53 (see the section about the space of edge functions) to take different values on formula_8 and formula_9.
[ { "math_id": 0, "text": "G" }, { "math_id": 1, "text": "G = (V, E, w)" }, { "math_id": 2, "text": "V = \\{x_1, \\dots, x_n\\}, n\\in\\mathbb{N}" }, { "math_id": 3, "text": "E \\subset V \\times V" }, { "math_id": 4, "text": "w \\colon E \\rightarrow \\mathbb{R}" }, { "math_id": 5, "text": "(x_i,x_j)\\in E" }, { "math_id": 6, "text": "x_i\\in V" }, { "math_id": 7, "text": "x_j\\in V" }, { "math_id": 8, "text": "(x_i,x_j)" }, { "math_id": 9, "text": "(x_j,x_i)" }, { "math_id": 10, "text": "w(x_i,x_j) = w(x_j,x_i)" }, { "math_id": 11, "text": "w" }, { "math_id": 12, "text": "(x_i,x_j) \\in E" }, { "math_id": 13, "text": "w(x_i,x_j) > 0" }, { "math_id": 14, "text": "V\\times V" }, { "math_id": 15, "text": "w(x_i,x_j)=0" }, { "math_id": 16, "text": "(x_i,x_j) \\not\\in E" }, { "math_id": 17, "text": "x \\in V" }, { "math_id": 18, "text": " w : E \\rightarrow (0,1] " }, { "math_id": 19, "text": "w(x_i,x_i)" }, { "math_id": 20, "text": "i=j" }, { "math_id": 21, "text": "x_j \\in V" }, { "math_id": 22, "text": "x_i \\in V" }, { "math_id": 23, "text": "x_j \\sim x_i" }, { "math_id": 24, "text": "x_j" }, { "math_id": 25, "text": "x_i" }, { "math_id": 26, "text": "x_j\\not\\sim x_i" }, { "math_id": 27, "text": "\\mathcal N(x_i)" }, { "math_id": 28, "text": "\\mathcal{N}(x_i) := \\{ x_j\\in V \\colon x_j \\sim x_i \\}" }, { "math_id": 29, "text": " \\deg(x_i) := \\sum_{j\\,:\\,x_j \\sim x_i} w(x_i,x_j)." }, { "math_id": 30, "text": " w\\equiv 1" }, { "math_id": 31, "text": "E" }, { "math_id": 32, "text": " \\deg(x_i) := |\\mathcal{N}(x_i)|" }, { "math_id": 33, "text": "\\mathcal{H}(V) := \\{f : V \\rightarrow \\mathbb{R} \\}" }, { "math_id": 34, "text": "V" }, { "math_id": 35, "text": "f\\in \\mathcal{H}(V)" }, { "math_id": 36, "text": "n" }, { "math_id": 37, "text": "f \\in \\mathbb{R}^n" }, { "math_id": 38, "text": " n:= |V|" }, { "math_id": 39, "text": "\\mathcal{H}(V)" }, { "math_id": 40, "text": "\\mathcal{H}(V) \\cong \\mathbb{R}^n" }, { "math_id": 41, "text": "\\langle f, g \\rangle_{\\mathcal{H}(V)} := \\sum_{x_i \\in V} f(x_i) g(x_i), \\quad \\forall f, g \\in \\mathcal{H}(V)." }, { "math_id": 42, "text": "f \\in \\mathcal{H}(V)" }, { "math_id": 43, "text": "\\ell_p" }, { "math_id": 44, "text": "\\ell_\\infty" }, { "math_id": 45, "text": "f" }, { "math_id": 46, "text": "\\|f\\|_p = \\begin{cases}\n \\left( \\sum_{x_i\\in V} |f(x_i)|^p \\right)^\\frac{1}{p}, &\\text{ for } 1 \\leqslant p < \\infty \\ ,\\\\\n \\max_{x_i\\in V}|f(x_i)|, &\\text{ for } p = \\infty \\ .\n\\end{cases}" }, { "math_id": 47, "text": "\\ell_2" }, { "math_id": 48, "text": "\\mathcal{H}(E) := \\{F:E \\rightarrow \\mathbb{R} \\}" }, { "math_id": 49, "text": "F" }, { "math_id": 50, "text": "m" }, { "math_id": 51, "text": "F \\in \\mathbb{R}^{m}" }, { "math_id": 52, "text": "m := |E|" }, { "math_id": 53, "text": "\\mathcal{H}(E)" }, { "math_id": 54, "text": "\\mathcal{H}(E) \\cong \\mathbb{R}^{m}" }, { "math_id": 55, "text": "w \\colon E \\rightarrow (0, 1]" }, { "math_id": 56, "text": "V \\times V" }, { "math_id": 57, "text": "F(x_i, x_j) := 0 " }, { "math_id": 58, "text": "(x_i, x_j) \\not\\in E" }, { "math_id": 59, "text": "\\mathbb{R}^m" }, { "math_id": 60, "text": "m := |V|^2" }, { "math_id": 61, "text": " \\langle F, G \\rangle_{\\mathcal{H}(E)} := \\sum_{(x_i, x_j) \\in E} F(x_i, x_j) G(x_i, x_j), \\quad \\forall F, G \\in \\mathcal{H}(E)." 
}, { "math_id": 62, "text": "F \\in \\mathcal{H}(V)" }, { "math_id": 63, "text": "\\|F\\|_p = \\begin{cases}\n \\left(\\sum_{(x_i, x_j)\\in E} |F(x_i, x_j)|^p\\right)^\\frac{1}{p} &\\text{ for } 1 \\leqslant p < \\infty \\ ,\\\\\n \\max_{(x_i, x_j) \\in E}|F(x_i, x_j)|, &\\text{ for } p = \\infty \\ .\n\\end{cases}\n" }, { "math_id": 64, "text": "E = V \\times V" }, { "math_id": 65, "text": "\\mathcal{H}(E) \\cong \\mathbb{R}^{n \\times n}" }, { "math_id": 66, "text": "G = (V,E,w)" }, { "math_id": 67, "text": "\n \\partial_{x_j}f(x_i) \\ := \\ \\sqrt{w(x_i, x_j)}\\left(f(x_j) - f(x_i)\\right).\n" }, { "math_id": 68, "text": "\\partial_{x_i}f(x_j) = -\\partial_{x_j}f(x_i)," }, { "math_id": 69, "text": "\\partial_{x_i}f(x_i) = 0," }, { "math_id": 70, "text": "f(x_i) = f(x_j) \\Rightarrow \\partial_{x_j}f(x_i) = 0." }, { "math_id": 71, "text": "\\nabla_w: \\mathcal{H}(V) \\rightarrow \\mathcal{H}(E)" }, { "math_id": 72, "text": "\n (\\nabla_w f)(x_i, x_j) \\ = \\ \\partial_{x_j}f(x_i).\n" }, { "math_id": 73, "text": "\\nabla_w f" }, { "math_id": 74, "text": "\\|(\\nabla_w f)(x_i,\\cdot)\\|_{\\ell_p} = \\begin{cases}\n \\left(\\sum_{x_j \\sim x_i}w(x_i, x_j)^\\frac{p}{2}|f(x_j) - f(x_i)|^p\\right)^\\frac{1}{p} &\\text{ for } 1 \\leq p < \\infty,\\\\\n \\max_{x_j \\sim x_i} \\sqrt{w(x_i, x_j)}|f(x_j) - f(x_i)| &\\text{ for } p = \\infty.\n\\end{cases}" }, { "math_id": 75, "text": "\\nabla_w^*\\colon\\mathcal{H}(E)\\rightarrow \\mathcal{H}(V)" }, { "math_id": 76, "text": "\n\\langle \\nabla_wf,G\\rangle_{\\mathcal{H}(E)} = \\langle f,\\nabla_w^*G\\rangle_{\\mathcal{H}(V)} \\quad \\text{ for all } f\\in \\mathcal{H}(V), G\\in \\mathcal{H}(E).\n" }, { "math_id": 77, "text": "w \\in \\mathcal{H}(E)" }, { "math_id": 78, "text": "\\nabla_w^*" }, { "math_id": 79, "text": "F\\in \\mathcal{H}(E)" }, { "math_id": 80, "text": "\n \\left(\\nabla_w^*F\\right)(x_i) \\ = \\ \\frac{1}{2}\\sum_{x_j\\sim x_i}{\\sqrt{w(x_i, x_j)}(F(x_j, x_i) - F(x_i, x_j))}.\n" }, { "math_id": 81, "text": "\\operatorname{div}_w := -\\nabla_w^*" }, { "math_id": 82, "text": "\\Delta_w : \\mathcal{H}(V) \\rightarrow \\mathcal{H}(V)" }, { "math_id": 83, "text": "\\operatorname{div}(\\nabla f) = \\Delta f" }, { "math_id": 84, "text": "\\begin{align}\n (\\operatorname{div}_w(\\nabla_w f))(x_i)\n &= \\frac{1}{2}\\sum_{x_j \\sim x_i}\\sqrt{w(x_i, x_j)}(\\nabla_w f(x_j, x_i) - \\nabla_w f(x_i, x_j))\\\\\n &= \\frac{1}{2}\\sum_{x_j \\sim x_i}\\sqrt{w(x_i, x_j)}\\left(\\sqrt{w(x_i, x_j)}(f(x_j) - f(x_i)) - \\sqrt{w(x_j, x_i)}(f(x_i) - f(x_j))\\right)\\\\\n &= \\frac{1}{2}\\sum_{x_j \\sim x_i}w(x_i, x_j)(2f(x_j) - 2f(x_i))\\\\\n &= \\sum_{x_j \\sim x_i}w(x_i, x_j)(f(x_j) - f(x_i)) \\ =: \\ (\\Delta_w f)(x_i).\n\\end{align}" }, { "math_id": 85, "text": "w(x_i, x_j) = w(x_j, x_i)" }, { "math_id": 86, "text": "p" }, { "math_id": 87, "text": "\\Delta_{w,p} \\colon \\mathcal{H}(V) \\rightarrow \\mathcal{H}(V)" }, { "math_id": 88, "text": "1 \\leq p < \\infty " }, { "math_id": 89, "text": "\n E(f) \\ := \\ \\frac{1}{p} \\sum_{x_i \\in V} \\|\\nabla_w f(x_i,\\cdot)\\|_{\\ell_p}^p.\n" }, { "math_id": 90, "text": "\n (\\Delta_{w,p} f)(x_i) \\ := \\ \\sum_{x_j\\sim x_i} w(x_i, x_j)^\\frac{p}{2} |f(x_j) - f(x_i)|^{p-2} (f(x_j) - f(x_i)).\n" }, { "math_id": 91, "text": "p = 2" }, { "math_id": 92, "text": "\n (\\Delta_{w,2} f)(x_i) \\ = \\ (\\Delta_w f)(x_i) \\ = \\ \\sum_{x_j\\sim x_i} w(x_i, x_j) (f(x_j) - f(x_i)).\n" }, { "math_id": 93, "text": "\\{x_i,x_j\\}" } ]
https://en.wikipedia.org/wiki?curid=61556353
61559
Archimedean spiral
Spiral with constant distance from itself The Archimedean spiral (also known as Archimedes' spiral, the arithmetic spiral) is a spiral named after the 3rd-century BC Greek mathematician Archimedes. The term "Archimedean spiral" is sometimes used to refer to the more general class of spirals of this type (see below), in contrast to "Archimedes' spiral" (the specific arithmetic spiral of Archimedes). It is the locus corresponding to the locations over time of a point moving away from a fixed point with a constant speed along a line that rotates with constant angular velocity. Equivalently, in polar coordinates ("r", "θ") it can be described by the equation formula_0 with real number b. Changing the parameter b controls the distance between loops. From the above equation, it follows that the position of the particle, measured from the starting point, is proportional to the angle θ as time elapses. Archimedes described such a spiral in his book "On Spirals". Conon of Samos was a friend of his and Pappus states that this spiral was discovered by Conon. Derivation of general equation of spiral. A physical approach is used below to understand the notion of Archimedean spirals. Suppose a point object moves in the Cartesian system with a constant velocity v directed parallel to the x-axis, with respect to the xy-plane. Suppose that at time "t" = 0 the object was at an arbitrary point ("c", 0, 0). If the xy plane rotates with a constant angular velocity ω about the z-axis, then the velocity of the point with respect to the z-axis may be written as: formula_1 As shown in the figure alongside, we have "vt" + "c" representing the modulus of the position vector of the particle at any time t, with vx and vy as the velocity components along the x and y axes, respectively. formula_2 The above equations can be integrated by applying integration by parts, leading to the following parametric equations: formula_3 Squaring the two equations and then adding (and some small alterations) results in the Cartesian equation formula_4 (using the fact that "ωt" = "θ" and "θ" = arctan("y"/"x")) or formula_5 Its polar form is formula_6 Arc length and curvature. Given the parametrization in Cartesian coordinates formula_7 the arc length from "θ"1 to "θ"2 is formula_8 or, equivalently: formula_9 The total length from "θ"1 = 0 to "θ"2 = "θ" is therefore formula_10 The curvature is given by formula_11 Characteristics. The Archimedean spiral has the property that any ray from the origin intersects successive turnings of the spiral in points with a constant separation distance (equal to 2"πb" if θ is measured in radians), hence the name "arithmetic spiral". In contrast to this, in a logarithmic spiral these distances, as well as the distances of the intersection points measured from the origin, form a geometric progression. The Archimedean spiral has two arms, one for "θ" > 0 and one for "θ" < 0. The two arms are smoothly connected at the origin. Only one arm is shown on the accompanying graph. Taking the mirror image of this arm across the y-axis will yield the other arm. For large θ a point moves with well-approximated uniform acceleration along the Archimedean spiral while the spiral corresponds to the locations over time of a point moving away from a fixed point with a constant speed along a line which rotates with constant angular velocity (see contribution from Mikhail Gaichenkov). As the Archimedean spiral grows, its evolute asymptotically approaches a circle with radius .
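The closed-form arc length above is straightforward to check numerically. In the Python sketch below the spacing parameter b and the range of θ are arbitrary; the closed form is compared against a direct numerical integration of the polar arc-length element for r = bθ.

import numpy as np
from scipy.integrate import quad

b = 0.5                       # arbitrary spacing parameter in r = b*theta
theta_end = 6.0 * np.pi       # three full turns

def arc_length_closed_form(theta, b):
    # s(theta) = (b/2) * (theta*sqrt(1 + theta^2) + ln(theta + sqrt(1 + theta^2)))
    root = np.sqrt(1.0 + theta**2)
    return 0.5 * b * (theta * root + np.log(theta + root))

def arc_length_numeric(theta, b):
    # ds = sqrt(r^2 + (dr/dtheta)^2) dtheta = b * sqrt(theta^2 + 1) dtheta
    value, _ = quad(lambda t: b * np.sqrt(t * t + 1.0), 0.0, theta)
    return value

print(arc_length_closed_form(theta_end, b))   # the two printed values agree
print(arc_length_numeric(theta_end, b))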
Sometimes the term "Archimedean spiral" is used for the more general group of spirals formula_12 The normal Archimedean spiral occurs when "c" 1. Other spirals falling into this group include the hyperbolic spiral ("c" −1), Fermat's spiral ("c" 2), and the lituus ("c" −2). Applications. One method of squaring the circle, due to Archimedes, makes use of an Archimedean spiral. Archimedes also showed how the spiral can be used to trisect an angle. Both approaches relax the traditional limitations on the use of straightedge and compass in ancient Greek geometric proofs. The Archimedean spiral has a variety of real-world applications. Scroll compressors, used for compressing gases, have rotors that can be made from two interleaved Archimedean spirals, involutes of a circle of the same size that almost resemble Archimedean spirals, or hybrid curves. Archimedean spirals can be found in spiral antenna, which can be operated over a wide range of frequencies. The coils of watch balance springs and the grooves of very early gramophone records form Archimedean spirals, making the grooves evenly spaced (although variable track spacing was later introduced to maximize the amount of music that could be cut onto a record). Asking for a patient to draw an Archimedean spiral is a way of quantifying human tremor; this information helps in diagnosing neurological diseases. Archimedean spirals are also used in digital light processing (DLP) projection systems to minimize the "rainbow effect", making it look as if multiple colors are displayed at the same time, when in reality red, green, and blue are being cycled extremely quickly. Additionally, Archimedean spirals are used in food microbiology to quantify bacterial concentration through a spiral platter. They are also used to model the pattern that occurs in a roll of paper or tape of constant thickness wrapped around a cylinder. Many dynamic spirals (such as the Parker spiral of the solar wind, or the pattern made by a Catherine's wheel) are Archimedean. For instance, the star LL Pegasi shows an approximate Archimedean spiral in the dust clouds surrounding it, thought to be ejected matter from the star that has been shepherded into a spiral by another companion star as part of a double star system. Construction methods. The Archimedean Spiral cannot be constructed precisely by traditional compass and straightedge methods, since the arithmetic spiral requires the radius of the curve to be incremented constantly as the angle at the origin is incremented. But an arithmetic spiral can be constructed approximately, to varying degrees of precision, by various manual drawing methods. One such method uses compass and straightedge; another method uses a modified string compass. The common traditional construction uses compass and straightedge to approximate the arithmetic spiral. First, a large circle is constructed and its circumference is subdivided by 12 diameters into 12 arcs (of 30 degrees each; see regular dodecagon). Next, the radius of this circle is itself subdivided into 12 unit segments (radial units), and a series of concentric circles is constructed, each with radius incremented by one radial unit. Starting with the horizontal diameter and the innermost concentric circle, the point is marked where its radius intersects its circumference; one then moves to the next concentric circle and to the next diameter (moving up to construct a counterclockwise spiral, or down for clockwise) to mark the next point. 
After all points have been marked, successive points are connected by a line approximating the arithmetic spiral (or by a smooth curve of some sort; see French curve). Depending on the desired degree of precision, this method can be improved by increasing the size of the large outer circle, making more subdivisions of both its circumference and radius, and increasing the number of concentric circles (see Polygonal Spiral). Approximating the Archimedean spiral by this method is of course reminiscent of Archimedes' famous method of approximating π by doubling the sides of successive polygons (see Polygon approximation of π). Compass and straightedge construction of the spiral of Theodorus is another simple method to approximate the Archimedean spiral. A mechanical method for constructing the arithmetic spiral uses a modified string compass, in which the string winds (or unwinds) around a fixed central pin that does not pivot, so that the effective radius (the free length of string) grows or shrinks as the angle changes. Such a method is a simple way to create an arithmetic spiral, arising naturally from the use of a string compass with a winding pin rather than the loose pivot of a common string compass. The string compass drawing tool has various modifications and designs, and this construction method is reminiscent of the string-based method of drawing an ellipse with two fixed pins. Yet another mechanical method is a variant of the previous string compass method, providing greater precision and more flexibility. Instead of the central pin and string of the string compass, this device uses a non-rotating shaft (column) with helical threads (a screw; see Archimedes' screw) to which are attached two slotted arms: one horizontal arm is affixed to (travels up) the screw threads of the vertical shaft at one end, and holds a drawing tool at the other end; another sloped arm is affixed at one end to the top of the screw shaft, and is joined by a pin loosely fitted in its slot to the slot of the horizontal arm. The two arms rotate together and work in concert to produce the arithmetic spiral: as the horizontal arm gradually climbs the screw, that arm's slotted attachment to the sloped arm gradually shortens the drawing radius. The angle of the sloped arm remains constant throughout (it traces a cone), and setting a different angle varies the pitch of the spiral. This device provides a high degree of precision, depending on the precision with which the device is machined (machining a precise helical screw thread is a related challenge). And of course the use of a screw shaft in this mechanism is reminiscent of Archimedes' screw. See also. References.
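The closed-form expressions above are easy to check numerically. The following Python sketch is purely illustrative (the parameter b, the number of turns and the sample count are arbitrary choices, not taken from any source); it samples the spiral r = bθ, confirms that successive turnings along a ray are separated by 2πb, and compares a polyline approximation of the arc length with the closed-form total length.

```python
import math

b = 0.5                    # spacing parameter of the spiral r = b * theta
theta_max = 6 * math.pi    # three full turns

def point(theta):
    """Cartesian coordinates of the point at angle theta on r = b * theta."""
    r = b * theta
    return (r * math.cos(theta), r * math.sin(theta))

# Successive turnings along the positive x-axis occur at theta = 2*pi*k;
# their radial separation should equal 2*pi*b.
radii = [b * 2 * math.pi * k for k in range(1, 5)]
print("turn spacing:", [round(radii[i + 1] - radii[i], 6) for i in range(3)],
      "expected:", round(2 * math.pi * b, 6))

# Polyline approximation of the arc length from theta = 0 to theta_max,
# compared with (b/2) * [theta*sqrt(1+theta^2) + ln(theta + sqrt(1+theta^2))].
n = 100_000
length = 0.0
prev = point(0.0)
for i in range(1, n + 1):
    cur = point(theta_max * i / n)
    length += math.dist(prev, cur)
    prev = cur

t = theta_max
closed_form = (b / 2) * (t * math.sqrt(1 + t * t) + math.log(t + math.sqrt(1 + t * t)))
print("arc length (numeric):", round(length, 6))
print("arc length (closed) :", round(closed_form, 6))
```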
[ { "math_id": 0, "text": "r = b\\cdot\\theta" }, { "math_id": 1, "text": "\\begin{align}\n|v_0|&=\\sqrt{v^2+\\omega^2(vt+c)^2} \\\\\nv_x&=v \\cos \\omega t - \\omega (vt+c) \\sin \\omega t \\\\\nv_y&=v \\sin \\omega t + \\omega (vt+c) \\cos \\omega t\n\\end{align}" }, { "math_id": 2, "text": "\\begin{align}\n\\int v_x \\,dt &=x \\\\\n\\int v_y \\,dt &=y\n\\end{align}" }, { "math_id": 3, "text": "\\begin{align}\nx&=(vt + c) \\cos \\omega t \\\\\ny&=(vt+c) \\sin \\omega t\n\\end{align}" }, { "math_id": 4, "text": "\\sqrt{x^2+y^2}=\\frac{v}{\\omega}\\cdot \\arctan \\frac{y}{x} +c" }, { "math_id": 5, "text": "\\tan \\left(\\left(\\sqrt{x^2+y^2}-c\\right)\\cdot\\frac{\\omega}{v}\\right) = \\frac{y}{x}" }, { "math_id": 6, "text": "r= \\frac{v}{\\omega}\\cdot \\theta +c." }, { "math_id": 7, "text": "f\\colon\\theta\\mapsto (r\\,\\cos \\theta, r\\,\\sin \\theta) = (b\\, \\theta\\,\\cos \\theta,b\\, \\theta\\,\\sin\\theta)" }, { "math_id": 8, "text": "\\frac{b}{2}\\left[\\theta\\,\\sqrt{1+\\theta^2}+\\ln\\left(\\theta+\\sqrt{1+\\theta^2}\\right)\\right]_{\\theta_1}^{\\theta_2}" }, { "math_id": 9, "text": "\\frac{b}{2}\\left[\\theta\\,\\sqrt{1+\\theta^2}+\\operatorname{arsinh}\\theta\\right]_{\\theta_1}^{\\theta_2}." }, { "math_id": 10, "text": "\\frac{b}{2}\\left[\\theta\\,\\sqrt{1+\\theta^2}+\\ln \\left(\\theta+\\sqrt{1+\\theta^2} \\right)\\right]." }, { "math_id": 11, "text": "\\kappa=\\frac{\\theta^2+2}{b\\left(\\theta^2+1\\right)^\\frac{3}{2}}" }, { "math_id": 12, "text": "r = a + b\\cdot\\theta^\\frac{1}{c}." } ]
https://en.wikipedia.org/wiki?curid=61559
61563
Spiral
Curve that winds around a central point In mathematics, a spiral is a curve which emanates from a point, moving farther away as it revolves around the point. It is a subtype of whorled patterns, a broad group that also includes concentric objects. Helices. The American Heritage Dictionary gives two major definitions of "spiral". The first definition describes a planar curve that extends in both of the perpendicular directions within its plane; the groove on one side of a gramophone record closely approximates a plane spiral (and it is by the finite width and depth of the groove, but "not" by the wider spacing between than within tracks, that it falls short of being a perfect example); note that successive loops "differ" in diameter. In another example, the "center lines" of the arms of a spiral galaxy trace logarithmic spirals. The second definition includes two kinds of 3-dimensional relatives of spirals. In the side picture, the black curve at the bottom is an Archimedean spiral, while the green curve is a helix. The curve shown in red is a conical spiral. Two-dimensional. A two-dimensional, or plane, spiral may be described most easily using polar coordinates, where the radius formula_0 is a monotonic continuous function of angle formula_1: formula_2 The circle would be regarded as a degenerate case (the function not being strictly monotonic, but rather constant). In "formula_3-formula_4-coordinates" the curve has the parametric representation: formula_5 Examples. Some of the most important sorts of two-dimensional spirals include the Archimedean spiral formula_6, the hyperbolic spiral formula_7, Fermat's spiral formula_8, the lituus formula_9, and the logarithmic spiral formula_10. An "Archimedean spiral" is, for example, generated while coiling a carpet. A "hyperbolic spiral" appears as the image of a helix under a special central projection (see diagram). A hyperbolic spiral is sometimes called a "reciprocal" spiral, because it is the image of an Archimedean spiral under a circle inversion (see below). The name "logarithmic spiral" is due to the equation formula_11. Approximations of this are found in nature. Some spirals do not fit into the scheme of the first five examples: a "Cornu spiral" has two asymptotic points; the "spiral of Theodorus" is a polygon; the "Fibonacci spiral" consists of a sequence of circle arcs; and the "involute of a circle" looks like an Archimedean spiral, but is not (see Involute#Examples). Geometric properties. The following considerations deal with spirals that can be described by a polar equation formula_12, especially the cases formula_13 (Archimedean, hyperbolic, Fermat's, and lituus spirals) and the logarithmic spiral formula_10. The angle formula_14 between the spiral tangent and the corresponding polar circle (see diagram) is called the "angle of the polar slope", and formula_15 the "polar slope". From vector calculus in polar coordinates one gets the formula formula_16 Hence the slope of the spiral formula_17 is formula_18 In the case of an "Archimedean spiral" (formula_19) the polar slope is formula_20 In a "logarithmic spiral", formula_21 is constant. The curvature formula_22 of a curve with polar equation formula_12 is formula_23 For a spiral with formula_24 one gets formula_25 In the case of formula_19 (Archimedean spiral) this gives formula_26. Only for formula_27 does the spiral have an "inflection point". 
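The polar-slope and curvature formulas just given can be verified numerically for a concrete power spiral. The Python sketch below is illustrative only (the values of a, n and φ are arbitrary); it evaluates tan α = r′/r against n/φ, and the general polar curvature formula against the closed form stated for r = aφⁿ.

```python
import math

a, n = 2.0, 0.5          # Fermat-type spiral r = a * phi**n (arbitrary example values)
phi = 3.0                # evaluation angle in radians

r  = a * phi**n                          # r(phi)
r1 = a * n * phi**(n - 1)                # r'(phi)
r2 = a * n * (n - 1) * phi**(n - 2)      # r''(phi)

# Polar slope: tan(alpha) = r'/r, which should equal n/phi for r = a*phi**n.
print("tan(alpha) from r'/r :", r1 / r)
print("tan(alpha) from n/phi:", n / phi)

# Curvature from the general polar formula ...
kappa_general = (r**2 + 2 * r1**2 - r * r2) / (r**2 + r1**2) ** 1.5
# ... and from the closed form stated for r = a*phi**n.
kappa_closed = (phi**2 + n**2 + n) / (a * phi**(n - 1) * (phi**2 + n**2) ** 1.5)
print("curvature (general):", kappa_general)
print("curvature (closed) :", kappa_closed)
```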
The curvature of a "logarithmic spiral" formula_28 is formula_29 The area of a sector of a curve (see diagram) with polar equation formula_12 is formula_30 For a spiral with equation formula_31 one gets formula_32 formula_33 The formula for a "logarithmic spiral" formula_28 is formula_34 The length of an arc of a curve with polar equation formula_12 is formula_35 For the spiral formula_31 the length is formula_36 Not all these integrals can be solved with the help of a suitable table. In the case of a Fermat's spiral, the integral can be expressed by elliptic integrals only. The arc length of a "logarithmic spiral" formula_28 is formula_37 The inversion at the unit circle has in polar coordinates the simple description formula_38. The spiral formula_39 is mapped onto the spiral formula_40, and a logarithmic spiral formula_28 is mapped onto the logarithmic spiral formula_41 Bounded spirals. The function formula_42 of a spiral is usually strictly monotonic, continuous and unbounded. For the standard spirals formula_42 is either a power function or an exponential function. If one chooses for formula_42 a "bounded" function, the spiral is bounded, too. A suitable bounded function is the arctan function. Setting formula_43 with the choice formula_44 gives a spiral that starts at the origin (like an Archimedean spiral) and approaches the circle with radius formula_45 (diagram, left). For formula_46 and formula_47 one gets a spiral that approaches the origin (like a hyperbolic spiral) and approaches the circle with radius formula_48 (diagram, right). Three-dimensional. Two well-known spiral space curves are "conical spirals" and "spherical spirals", defined below. Another instance of space spirals is the "toroidal spiral". A spiral wound around a helix, also known as a "double-twisted helix", represents objects such as coiled coil filaments. Conical spirals. If in the formula_3-formula_4-plane a spiral with parametric representation formula_49 is given, then a third coordinate formula_50 can be added, such that the resulting space curve lies on the cone with equation formula_51: formula_52 Spirals based on this procedure are called conical spirals. Starting with an "Archimedean spiral" formula_53 one gets the conical spiral (see diagram) formula_54 Spherical spirals. Any cylindrical map projection can be used as the basis for a spherical spiral: draw a straight line on the map and find its inverse projection on the sphere, a kind of spherical curve. One of the most basic families of spherical spirals is the Clelia curves, which project to straight lines on an equirectangular projection. These are curves for which longitude and colatitude are in a linear relationship, analogous to Archimedean spirals in the plane; under the azimuthal equidistant projection a Clelia curve projects to a planar Archimedean spiral. If one represents a unit sphere by spherical coordinates formula_55 then setting the linear dependency formula_56 for the angle coordinates gives a parametric curve in terms of the parameter "θ", formula_57 Another family of spherical spirals is the rhumb lines or loxodromes, which project to straight lines on the Mercator projection. These are the trajectories traced by a ship traveling with constant bearing. Any loxodrome (except for the meridians and parallels) spirals infinitely around either pole, closer and closer each time, unlike a Clelia curve which maintains uniform spacing in colatitude. Under stereographic projection, a loxodrome projects to a logarithmic spiral in the plane. In nature. The study of spirals in nature has a long history. 
Christopher Wren observed that many shells form a logarithmic spiral; Jan Swammerdam observed the common mathematical characteristics of a wide range of shells from "Helix" to "Spirula"; and Henry Nottidge Moseley described the mathematics of univalve shells. D’Arcy Wentworth Thompson's "On Growth and Form" gives extensive treatment to these spirals. He describes how shells are formed by rotating a closed curve around a fixed axis: the shape of the curve remains fixed but its size grows in a geometric progression. In some shells, such as "Nautilus" and ammonites, the generating curve revolves in a plane perpendicular to the axis and the shell will form a planar discoid shape. In others it follows a skew path forming a helico-spiral pattern. Thompson also studied spirals occurring in horns, teeth, claws and plants. A model for the pattern of florets in the head of a sunflower was proposed by H. Vogel. This has the form formula_58 where "n" is the index number of the floret and "c" is a constant scaling factor, and is a form of Fermat's spiral. The angle 137.5° is the golden angle which is related to the golden ratio and gives a close packing of florets. Spirals in plants and animals are frequently described as whorls. This is also the name given to spiral shaped fingerprints. As a symbol. A spiral like form has been found in Mezine, Ukraine, as part of a decorative object dated to 10,000 BCE. Spiral and triple spiral motifs served as Neolithic symbols in Europe (Megalithic Temples of Malta). The Celtic triple-spiral is in fact a pre-Celtic symbol. It is carved into the rock of a stone lozenge near the main entrance of the prehistoric Newgrange monument in County Meath, Ireland. Newgrange was built around 3200 BCE, predating the Celts; triple spirals were carved at least 2,500 years before the Celts reached Ireland but have long since become part of Celtic culture. The triskelion symbol, consisting of three interlocked spirals or three bent human legs, appears in many early cultures: examples include Mycenaean vessels, coinage from Lycia, staters of Pamphylia (at Aspendos, 370–333 BC) and Pisidia, as well as the heraldic emblem on warriors' shields depicted on Greek pottery. Spirals occur commonly in pre-Columbian art in Latin and Central America. The more than 1,400 petroglyphs (rock engravings) in Las Plazuelas, Guanajuato Mexico, dating 750-1200 AD, predominantly depict spirals, dot figures and scale models. In Colombia, monkeys, frog and lizard-like figures depicted in petroglyphs or as gold offering-figures frequently include spirals, for example on the palms of hands. In Lower Central America spirals along with circles, wavy lines, crosses and points are universal petroglyph characters. Spirals also appear among the Nazca Lines in the coastal desert of Peru, dating from 200 BC to 500 AD. The geoglyphs number in the thousands and depict animals, plants and geometric motifs, including spirals. Spiral shapes, including the swastika, triskele, etc., have often been interpreted as solar symbols. Roof tiles dating back to the Tang dynasty with this symbol have been found west of the ancient city of Chang'an (modern-day Xi'an). Spirals are also a symbol of hypnosis, stemming from the cliché of people and cartoon characters being hypnotized by staring into a spinning spiral (one example being Kaa in Disney's "The Jungle Book"). 
They are also used as a symbol of dizziness, where the eyes of a cartoon character, especially in anime and manga, will turn into spirals to suggest that they are dizzy or dazed. The spiral is also found in structures as small as the double helix of DNA and as large as a galaxy. Because of this frequent natural occurrence, the spiral is the official symbol of the World Pantheist Movement. The spiral is also a symbol of the dialectic process and of Dialectical monism. The spiral is a frequent symbol for spiritual purification, both within Christianity and beyond (one thinks of the spiral as the neo-Platonist symbol for prayer and contemplation, circling around a subject and ascending at the same time, and as a Buddhist symbol for the gradual process on the Path to Enlightenment). [...] while a helix is repetitive, a spiral expands and thus epitomizes growth - conceptually "ad infinitum". In art. The spiral has inspired artists throughout the ages. Among the most famous of spiral-inspired art is Robert Smithson's earthwork, "Spiral Jetty", at the Great Salt Lake in Utah. The spiral theme is also present in David Wood's Spiral Resonance Field at the Balloon Museum in Albuquerque, as well as in the critically acclaimed Nine Inch Nails 1994 concept album "The Downward Spiral". The Spiral is also a prominent theme in the anime "Gurren Lagann", where it represents a philosophy and way of life. It also central in Mario Merz and Andy Goldsworthy's work. The spiral is the central theme of the horror manga "Uzumaki" by Junji Ito, where a small coastal town is afflicted by a curse involving spirals. "2012 A Piece of Mind By Wayne A Beale" also depicts a large spiral in this book of dreams and images. The coiled spiral is a central image in Australian artist Tanja Stark's Suburban Gothic iconography, that incorporates spiral electric stove top elements as symbols of domestic alchemy and spirituality. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
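The floret model of H. Vogel quoted in the "In nature" section above (θ = n × 137.5°, r = c√n, a form of Fermat's spiral) is simple to generate. The sketch below is illustrative only; the scaling constant c, the number of florets and the sampled indices are arbitrary choices. It shows that nearest-neighbour distances stay roughly constant as the pattern grows, which is what makes the packing of florets tight.

```python
import math

GOLDEN_ANGLE_DEG = 137.5   # the golden angle, related to the golden ratio
c = 1.0                    # arbitrary scaling constant

def floret(k):
    """Position of the k-th floret: theta = k * 137.5 degrees, r = c * sqrt(k)."""
    theta = math.radians(k * GOLDEN_ANGLE_DEG)
    r = c * math.sqrt(k)
    return (r * math.cos(theta), r * math.sin(theta))

points = [floret(k) for k in range(1, 301)]
for k in (10, 100, 300):
    here = points[k - 1]
    nearest = min(math.dist(here, p) for i, p in enumerate(points) if i != k - 1)
    print(f"floret {k:3d}: distance from centre = {math.hypot(*here):6.2f}, "
          f"nearest neighbour = {nearest:.2f}")
```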
[ { "math_id": 0, "text": "r" }, { "math_id": 1, "text": "\\varphi" }, { "math_id": 2, "text": "r=r(\\varphi)\\; ." }, { "math_id": 3, "text": "x" }, { "math_id": 4, "text": "y" }, { "math_id": 5, "text": "x=r(\\varphi)\\cos\\varphi \\ ,\\qquad y=r(\\varphi)\\sin\\varphi\\; ." }, { "math_id": 6, "text": "r=a \\varphi " }, { "math_id": 7, "text": "r = a/ \\varphi" }, { "math_id": 8, "text": "r= a\\varphi^{1/2}" }, { "math_id": 9, "text": "r = a\\varphi^{-1/2}" }, { "math_id": 10, "text": "r=ae^{k\\varphi}" }, { "math_id": 11, "text": "\\varphi=\\tfrac{1}{k}\\cdot \\ln \\tfrac{r}{a}" }, { "math_id": 12, "text": "r=r(\\varphi)" }, { "math_id": 13, "text": "r(\\varphi)=a\\varphi^n" }, { "math_id": 14, "text": "\\alpha" }, { "math_id": 15, "text": "\\tan \\alpha" }, { "math_id": 16, "text": "\\tan\\alpha=\\frac{r'}{r}\\ ." }, { "math_id": 17, "text": "\\;r=a\\varphi^n \\;" }, { "math_id": 18, "text": "\\tan\\alpha=\\frac{n}{\\varphi}\\ ." }, { "math_id": 19, "text": "n=1" }, { "math_id": 20, "text": "\\; \\tan\\alpha=\\tfrac{1}{\\varphi}\\ ." }, { "math_id": 21, "text": "\\ \\tan\\alpha=k\\ " }, { "math_id": 22, "text": "\\kappa" }, { "math_id": 23, "text": "\\kappa = \\frac{r^2 + 2(r')^2 - r\\; r''}{(r^2+(r')^2)^{3/2}}\\ ." }, { "math_id": 24, "text": "r=a\\varphi^n" }, { "math_id": 25, "text": "\\kappa = \\dotsb = \\frac{1}{a\\varphi^{n-1}}\\frac{\\varphi^2+n^2+n}{(\\varphi^2+n^2)^{3/2}}\\ ." }, { "math_id": 26, "text": "\\kappa=\\tfrac{\\varphi^2+2}{a(\\varphi^2+1)^{3/2}}" }, { "math_id": 27, "text": "-1<n<0 " }, { "math_id": 28, "text": "\\; r=a e^{k\\varphi} \\;" }, { "math_id": 29, "text": "\\; \\kappa=\\tfrac{1}{r\\sqrt{1+k^2}} \\; ." }, { "math_id": 30, "text": "A=\\frac{1}{2}\\int_{\\varphi_1}^{\\varphi_2} r(\\varphi)^2\\; d\\varphi\\ ." }, { "math_id": 31, "text": "r=a\\varphi^n\\; " }, { "math_id": 32, "text": "A=\\frac{1}{2}\\int_{\\varphi_1}^{\\varphi_2} a^2\\varphi^{2n}\\; d\\varphi\n=\\frac{a^2}{2(2n+1)}\\big(\\varphi_2^{2n+1}- \\varphi_1^{2n+1}\\big)\\ , \\quad \\text{if}\\quad n\\ne-\\frac{1}{2},\n" }, { "math_id": 33, "text": "A=\\frac{1}{2}\\int_{\\varphi_1}^{\\varphi_2} \\frac{a^2}{\\varphi}\\; d\\varphi\n=\\frac{a^2}{2}(\\ln\\varphi_2-\\ln\\varphi_1)\\ ,\\quad \\text{if} \\quad n=-\\frac{1}{2}\\ ." }, { "math_id": 34, "text": "\\ A=\\tfrac{r(\\varphi_2)^2-r(\\varphi_1)^2)}{4k}\\ ." }, { "math_id": 35, "text": "L=\\int\\limits_{\\varphi_1}^{\\varphi_2}\\sqrt{\\left(r^\\prime(\\varphi)\\right)^2+r^2(\\varphi)}\\,\\mathrm{d}\\varphi \\ ." }, { "math_id": 36, "text": "L=\\int_{\\varphi_1}^{\\varphi_2} \\sqrt{\\frac{n^2r^2}{\\varphi^2} +r^2}\\; d\\varphi\n= a\\int\\limits_{\\varphi_1}^{\\varphi_2}\\varphi^{n-1}\\sqrt{n^2+\\varphi^2}d\\varphi\n\\ ." }, { "math_id": 37, "text": "\\ L=\\tfrac{\\sqrt{k^2+1}}{k}\\big(r(\\varphi_2)-r(\\varphi_1)\\big) \\ ." }, { "math_id": 38, "text": "\\ (r,\\varphi) \\mapsto (\\tfrac{1}{r},\\varphi)\\ " }, { "math_id": 39, "text": "\\ r= a\\varphi^n\\ " }, { "math_id": 40, "text": "\\ r= \\tfrac{1}{a}\\varphi^{-n}\\ " }, { "math_id": 41, "text": "\\; r=\\tfrac{1}{a} e^{-k\\varphi} \\; ." 
}, { "math_id": 42, "text": "r(\\varphi)" }, { "math_id": 43, "text": "\\;r=a \\arctan(k\\varphi)\\;" }, { "math_id": 44, "text": "\\;k=0.1, a=4, \\;\\varphi\\ge 0\\;" }, { "math_id": 45, "text": "\\;r=a\\pi/2\\;" }, { "math_id": 46, "text": "\\;r=a (\\arctan(k\\varphi)+\\pi/2)\\;" }, { "math_id": 47, "text": "\\;k=0.2, a=2,\\; -\\infty<\\varphi<\\infty\\;" }, { "math_id": 48, "text": "\\;r=a\\pi\\;" }, { "math_id": 49, "text": "x=r(\\varphi)\\cos\\varphi \\ ,\\qquad y=r(\\varphi)\\sin\\varphi" }, { "math_id": 50, "text": "z(\\varphi)" }, { "math_id": 51, "text": "\\;m(x^2+y^2)=(z-z_0)^2\\ ,\\ m>0\\;" }, { "math_id": 52, "text": "x=r(\\varphi)\\cos\\varphi \\ ,\\qquad y=r(\\varphi)\\sin\\varphi\\ , \\qquad \\color{red}{z=z_0 + mr(\\varphi)} \\ ." }, { "math_id": 53, "text": "\\;r(\\varphi)=a\\varphi\\;" }, { "math_id": 54, "text": "x=a\\varphi\\cos\\varphi \\ ,\\qquad y=a\\varphi\\sin\\varphi\\ , \\qquad z=z_0 + ma\\varphi \\ ,\\quad \\varphi \\ge 0 \\ ." }, { "math_id": 55, "text": "\nx = \\sin \\theta \\, \\cos \\varphi, \\quad\ny = \\sin \\theta \\, \\sin \\varphi, \\quad\nz = \\cos \\theta,\n" }, { "math_id": 56, "text": " \\varphi=c\\theta" }, { "math_id": 57, "text": "\n\\bigl( \\sin \\theta\\, \\cos c\\theta,\\, \\sin \\theta\\, \\sin c\\theta,\\, \\cos \\theta \\,\\bigr).\n" }, { "math_id": 58, "text": "\\theta = n \\times 137.5^{\\circ},\\ r = c \\sqrt{n}" } ]
https://en.wikipedia.org/wiki?curid=61563
61563184
Nonlocal operator
Mathematical mapping In mathematics, a nonlocal operator is a mapping which maps functions on a topological space to functions, in such a way that the value of the output function at a given point cannot be determined solely from the values of the input function in any neighbourhood of any point. An example of a nonlocal operator is the Fourier transform. Formal definition. Let formula_0 be a topological space, formula_1 a set, formula_2 a function space containing functions with domain formula_0, and formula_3 a function space containing functions with domain formula_1. Two functions formula_4 and formula_5 in formula_2 are called equivalent at formula_6 if there exists a neighbourhood formula_7 of formula_8 such that formula_9 for all formula_10. An operator formula_11 is said to be local if for every formula_12 there exists an formula_6 such that formula_13 for all functions formula_4 and formula_5 in formula_2 which are equivalent at formula_8. A nonlocal operator is an operator which is not local. For a local operator it is possible (in principle) to compute the value formula_14 using only knowledge of the values of formula_4 in an arbitrarily small neighbourhood of a point formula_8. For a nonlocal operator this is not possible. Examples. Differential operators are examples of local operators. A large class of (linear) nonlocal operators is given by the integral transforms, such as the Fourier transform and the Laplace transform. For an integral transform of the form formula_15 where formula_16 is some kernel function, it is necessary to know the values of formula_4 almost everywhere on the support of formula_17 in order to compute the value of formula_18 at formula_19. An example of a singular integral operator is the fractional Laplacian formula_20 The prefactor formula_21 involves the Gamma function and serves as a normalizing factor. The fractional Laplacian plays a role in, for example, the study of nonlocal minimal surfaces. Applications. Some examples of applications of nonlocal operators are: References. &lt;templatestyles src="Reflist/styles.css" /&gt;
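The distinction in the formal definition can be made concrete with a small numerical experiment. The Python sketch below is illustrative only (the grid, the kernel K(x, y) = exp(−|x − y|) and the perturbation are arbitrary choices): a finite-difference second derivative, a local operator, is unaffected when the input function is changed far from the evaluation point, whereas an integral transform of the form (Au)(y) = ∫ u(x) K(x, y) dx does change.

```python
import math

N = 200
h = 1.0 / N
xs = [i * h for i in range(N + 1)]      # grid on [0, 1]

def u(x):
    return math.sin(2 * math.pi * x)

def u_perturbed(x):
    # identical to u except far away from the evaluation point y0 = 0.25
    return u(x) + (0.5 if x > 0.9 else 0.0)

def second_derivative(f, x):
    # local operator: uses only values of f in a small neighbourhood of x
    return (f(x + h) - 2.0 * f(x) + f(x - h)) / h**2

def integral_transform(f, y):
    # nonlocal operator: (A f)(y) = integral of f(x) * K(x, y) dx with K(x, y) = exp(-|x - y|)
    return sum(f(x) * math.exp(-abs(x - y)) * h for x in xs)

y0 = 0.25
print("local, original    :", second_derivative(u, y0))
print("local, perturbed   :", second_derivative(u_perturbed, y0))    # unchanged
print("nonlocal, original :", integral_transform(u, y0))
print("nonlocal, perturbed:", integral_transform(u_perturbed, y0))   # changed
```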
[ { "math_id": 0, "text": "X" }, { "math_id": 1, "text": "Y" }, { "math_id": 2, "text": "F(X)" }, { "math_id": 3, "text": "G(Y)" }, { "math_id": 4, "text": "u" }, { "math_id": 5, "text": "v" }, { "math_id": 6, "text": "x\\in X" }, { "math_id": 7, "text": "N" }, { "math_id": 8, "text": "x" }, { "math_id": 9, "text": "u(x')=v(x')" }, { "math_id": 10, "text": "x'\\in N" }, { "math_id": 11, "text": "A: F(X) \\to G(Y)" }, { "math_id": 12, "text": "y\\in Y" }, { "math_id": 13, "text": "Au(y) = Av(y)" }, { "math_id": 14, "text": "Au(y)" }, { "math_id": 15, "text": " (Au)(y) = \\int \\limits_X u(x)\\, K(x, y)\\, dx," }, { "math_id": 16, "text": "K" }, { "math_id": 17, "text": "K(\\cdot, y)" }, { "math_id": 18, "text": "Au" }, { "math_id": 19, "text": "y" }, { "math_id": 20, "text": " (-\\Delta)^sf(x) = c_{d,s} \\int\\limits_{\\mathbb{R}^d} \\frac{f(x)-f(y)}{|x-y|^{d+2s}}\\,dy. " }, { "math_id": 21, "text": " c_{d,s} := \\frac{4^s\\Gamma(d/2+s)}{\\pi^{d/2}|\\Gamma(-s)|}" } ]
https://en.wikipedia.org/wiki?curid=61563184
6157535
Naccache–Stern knapsack cryptosystem
The Naccache–Stern Knapsack cryptosystem is an atypical public-key cryptosystem developed by David Naccache and Jacques Stern in 1997. This cryptosystem is deterministic, and hence is not semantically secure. While unbroken to date, this system also lacks provable security. System overview. This system is based on a type of knapsack problem. Specifically, the underlying problem is this: given integers "c", "n", "p" and "v"0, ..., "v""n", find a vector formula_0 such that formula_1 The idea here is that when the "v""i" are relatively prime and much smaller than the modulus "p" this problem can be solved easily. It is this observation which allows decryption. Key generation. To generate a public/private key pair: pick a large prime "p"; choose small primes "p"0, ..., "p""n" such that formula_2; pick a secret integer "s" with gcd("s", "p" − 1) = 1; and compute formula_3. The public key is then "p", "n" and "v"0, ..., "v""n". The private key is "s". Encryption. To encrypt an "n"-bit long message "m", calculate formula_4 where "m""i" is the "i"th bit of the message "m". Decryption. To decrypt a message "c", calculate formula_5 This works because the fraction formula_6 is 0 or 1 depending on whether "p""i" divides "c""s" mod "p". Security. The security of the trapdoor function relies on the difficulty of the following multiplicative knapsack problem: given formula_7 recover the formula_8. Unlike additive knapsack-based cryptosystems, such as Merkle–Hellman, techniques like Euclidean lattice reduction do not apply to this problem. The best known generic attack consists of solving the discrete logarithm problem to recover formula_9 from formula_10, which is considered difficult for a classical computer. However, Shor's quantum algorithm solves this problem efficiently. Furthermore, currently (2023), there is no proof that the Naccache–Stern knapsack reduces to the discrete logarithm problem. The best known specific attack (as of 2018) uses the birthday theorem to partially invert the function without knowing the trapdoor, assuming that the message has a very low Hamming weight. References.
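The scheme is small enough to run end to end with toy parameters. The following Python sketch is illustrative only: the small primes, the modulus p and the secret s are arbitrary and far too small to offer any security. It follows the key generation, encryption and decryption described above, reading off each message bit with a gcd test, which is equivalent to the stated fraction being 0 or 1.

```python
from math import gcd, prod

def is_prime(x):
    if x < 2:
        return False
    d = 2
    while d * d <= x:
        if x % d == 0:
            return False
        d += 1
    return True

# --- Toy key generation (parameters far too small to be secure) ---
small_primes = [2, 3, 5, 7, 11, 13, 17, 19]      # p_0 .. p_n, one per message bit
n = len(small_primes)
p = prod(small_primes) + 1
while not is_prime(p):                            # smallest prime exceeding the product
    p += 1
s = 3
while gcd(s, p - 1) != 1:                         # secret exponent, coprime to p - 1
    s += 2
s_inv = pow(s, -1, p - 1)
v = [pow(pi, s_inv, p) for pi in small_primes]    # public v_i: the s-th root of p_i mod p

# --- Encryption: c = product of v_i**m_i mod p ---
def encrypt(m):
    c = 1
    for i in range(n):
        if (m >> i) & 1:
            c = c * v[i] % p
    return c

# --- Decryption: read off each bit from whether p_i divides c**s mod p ---
def decrypt(c):
    cs = pow(c, s, p)
    return sum((1 << i) for i, pi in enumerate(small_primes) if gcd(pi, cs) > 1)

msg = 0b10110101
assert decrypt(encrypt(msg)) == msg
print(f"p = {p}, ciphertext = {encrypt(msg)}, recovered = {bin(decrypt(encrypt(msg)))}")
```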
[ { "math_id": 0, "text": "x \\in \\{0,1\\}^n" }, { "math_id": 1, "text": "c \\equiv \\prod_{i=0}^n v_i^{x_i} \\mod p" }, { "math_id": 2, "text": "\\prod_{i=0}^np_i < p" }, { "math_id": 3, "text": "v_i = \\sqrt[s]{p_i} \\mod p" }, { "math_id": 4, "text": "c = \\prod_{i=0}^n v_i^{m_i} \\mod p" }, { "math_id": 5, "text": "m = \\sum_{i=0}^n \\frac{2^i}{p_i-1} \\times \\left( \\gcd(p_i,c^s \\mod p) -1 \\right)" }, { "math_id": 6, "text": "\\frac{ \\gcd(p_i,c^s \\mod p) - 1 }{p_i - 1}" }, { "math_id": 7, "text": "c = \\prod_{i=0}^n \nv_i^{m_i}\\pmod p," }, { "math_id": 8, "text": "m_i" }, { "math_id": 9, "text": "s" }, { "math_id": 10, "text": "p, p_i, v_i" } ]
https://en.wikipedia.org/wiki?curid=6157535
6157683
Theory of everything (philosophy)
The ultimate nature of reality In philosophy, a theory of everything (ToE) is an ultimate, all-encompassing explanation or description of nature or reality. Adopting the term from physics, where the search for a theory of everything is ongoing, philosophers have discussed the viability of the concept and analyzed its properties and implications. Among the questions to be addressed by a philosophical theory of everything are: "Why is reality understandable?" – "Why are the laws of nature as they are?" – "Why is there anything at all?" A philosophical theory of everything, would need to, as much as is possible or makes sense, unify analytic and continental philosophy. Questions such as "Why is there anything at all?" are arguably metaphysics questions and not so much related to a philosophical ToE. Comprehensive philosophical systems. The "system building" style of metaphysics attempts to answer "all" the important questions in a coherent way, providing a complete picture of the world. The philosophies of Plato and Aristotle could be said to be early examples of comprehensive systems. In the early modern period (17th and 18th centuries), the system-building "scope" of philosophy is often linked to the rationalist "method" of philosophy, that is the technique of deducing the nature of the world by pure "a priori" reason. Examples from the early modern period include Leibniz's monadology, Descartes's dualism, and Spinoza's monism. Hegel's absolute idealism and Whitehead's process philosophy were later systems. At present, work is underway on the structural-systematic philosophy (SSP), to which the following books are devoted: Lorenz B. Puntel, "Structure and Being" (2008; translation of "Struktur und Sein", 2006) and "Being and God" (2011; translation of "Sein und Gott", 2010) and Alan White, "Toward a Philosophical Theory of Everything" (2014). The SSP makes no claims to finality; it aims to be the best systematic philosophy currently available. Other philosophers do not believe philosophy should aim so high. Some scientists think a more mathematical approach than philosophy is needed for a ToE, for instance Stephen Hawking wrote in "A Brief History of Time" that even if we had a ToE, it would necessarily be a set of equations. He wrote, "What is it that breathes fire into the equations and makes a universe for them to describe?" Nicholas Rescher. Properties and impasse of self-substantiation. In "The Price of an Ultimate Theory", originally published in 2000, Nicholas Rescher specifies what he sees as the principal properties of a Theory of Everything and describes an apparent impasse on the road to such a theory. Properties. Principle of sufficient reason. First, he takes as a presupposition the principle of sufficient reason, which in his formulation states that every fact "t" has an explanation "t"': formula_0 where E predicates explanation, so that "t'" E "t" denotes ""t"' explains "t"". Comprehensiveness. Next, he asserts that the most direct and natural construction of a Theory of Everything "T*" would confer upon it two crucial features: comprehensiveness and finality. Comprehensiveness says that wherever there is a fact "t", "T*" affords its explanation: formula_1 Finality. Finality says that as an "ultimate theory", "T*" has no deeper explanation: formula_2 so that the only conceivable explanation of "T*" is "T*" itself. Noncircularity. 
Rescher notes that it is obviously problematic to deploy a theory for its own explanation; at the heart of the traditional conception of explanatory adequacy, he says, is a principle of noncircularity stating that no fact can explain itself: formula_3 Impasse. The impasse is then that the two critical aspects of a Theory of Everything, comprehensiveness and finality, conflict with the fundamental principle of noncircularity. A comprehensive theory which explains everything must explain itself, and a final theory which has no deeper explanation must, by the principle of sufficient reason, have "some" explanation; consequently it too must be self-explanatory. Rescher concludes that any Theorist of Everything committed to comprehensiveness and finality is bound to regard noncircularity as "something that has to be jettisoned". But how, he asks, can a theory adequately substantiate itself? Ways forward. Rescher's proposal in "The Price of an Ultimate Theory" is to dualize the concept of explanation so that a fact can be explained either "derivationally", by the premises which lead to it, or "systemically", by the consequences which follow from it. With derivational explanation, a fact "t" is explained when it is subsumed by some prior, more fundamental fact "t"'. With systemic explanation, "t" is explained when it is a "best fit" for its consequences, where fitness is measured by uniformity, simplicity, connectedness, and other criteria conducive to systemic integration. Rescher concludes that while a theory of everything cannot be explained derivationally (since no deeper explanation can subsume it), it can be explained systemically by its capacity to integrate its consequences. In his 1996 book "The Conscious Mind", David Chalmers argues that a theory of everything must explain consciousness, that consciousness does not logically supervene on the physical, and that therefore a fundamental theory in physics would not be a theory of everything. A truly final theory, he argues, needs not just physical properties and laws, but phenomenal or protophenomenal properties and psychophysical laws explaining the relationship between physical processes and conscious experience. He concludes that "[o]nce we have a fundamental theory of consciousness to accompany a fundamental theory in physics, we may truly have a theory of everything." Developing such a theory will not be straightforward, he says, but "it ought to be possible in principle." In "Prolegomena to Any Future Philosophy", a 2002 essay in the "Journal of Evolution and Technology", Mark Alan Walker discusses modern responses to the question of how to reconcile "the apparent finitude of humans" with what he calls "the traditional telos of philosophy—the attempt to unite thought and Being, to arrive at absolute knowledge, at a final theory of everything." He contrasts two ways of closing this "gap between the ambitions of philosophy, and the abilities of human philosophers": a "deflationary" approach in which philosophy is "scaled down into something more human" and the attempt to achieve a theory of everything is abandoned, and an "inflationary", transhumanist approach in which philosophers are "scaled up" by advanced technology into "super-intelligent beings" better able to pursue such a theory. Criticism. In "Holistic Explanation and the Idea of a Grand Unified Theory", originally presented as a lecture in 1998, Rescher identifies two negative reactions to the idea of a unified, overarching theory: reductionism and rejectionism. 
Reductionism holds that large-scale philosophical issues can be meaningfully addressed only when divided into lesser components, while rejectionism holds that questions about such issues are illegitimate and unanswerable. Against reductionism, Rescher argues that explaining individual parts does not explain the coordinating structure of the whole, so that a collectivized approach is required. Against rejectionism, he argues that the question of the "reason" – the "why" – behind existence is pressing, important, and not obviously meaningless. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
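Rescher's impasse can also be illustrated mechanically. The Python sketch below is a toy formalization, not drawn from Rescher's text: it enumerates every possible explanation relation on a three-element universe of facts containing a candidate ultimate theory T*, and confirms that no relation satisfies the principle of sufficient reason, comprehensiveness, finality and noncircularity at once, since comprehensiveness forces T* to explain itself.

```python
from itertools import product

facts = ["T_star", "t1", "t2"]                   # toy universe including the candidate theory
pairs = [(a, b) for a in facts for b in facts]   # all possible "a explains b" pairs

def satisfies_all(E):
    explains = lambda a, b: (a, b) in E
    sufficient_reason = all(any(explains(a, t) for a in facts) for t in facts)
    comprehensiveness = all(explains("T_star", t) for t in facts)
    finality = all(a == "T_star" for (a, b) in E if b == "T_star")
    noncircularity = not any(explains(t, t) for t in facts)
    return sufficient_reason and comprehensiveness and finality and noncircularity

count = 0
for mask in product([False, True], repeat=len(pairs)):   # all 2**9 possible relations
    E = {pair for pair, used in zip(pairs, mask) if used}
    if satisfies_all(E):
        count += 1
print("relations satisfying all four principles:", count)  # prints 0: the principles conflict
```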
[ { "math_id": 0, "text": "\\forall t \\, \\exists t' \\, (t'\\ E\\ t)" }, { "math_id": 1, "text": "\\forall t \\, (T^*\\ E\\ t)" }, { "math_id": 2, "text": "\\forall t \\, ((t\\ E\\ T^*) \\to (t = T^*))" }, { "math_id": 3, "text": "\\nexists t \\, (t\\ E\\ t)" } ]
https://en.wikipedia.org/wiki?curid=6157683
61577
Electrical resistance and conductance
Opposition to the passage of an electric current The electrical resistance of an object is a measure of its opposition to the flow of electric current. Its reciprocal quantity is &lt;templatestyles src="Template:Visible anchor/styles.css" /&gt;electrical conductance, measuring the ease with which an electric current passes. Electrical resistance shares some conceptual parallels with mechanical friction. The SI unit of electrical resistance is the ohm (Ω), while electrical conductance is measured in siemens (S) (formerly called the 'mho' and then represented by ℧). The resistance of an object depends in large part on the material it is made of. Objects made of electrical insulators like rubber tend to have very high resistance and low conductance, while objects made of electrical conductors like metals tend to have very low resistance and high conductance. This relationship is quantified by resistivity or conductivity. The nature of a material is not the only factor in resistance and conductance, however; it also depends on the size and shape of an object because these properties are extensive rather than intensive. For example, a wire's resistance is higher if it is long and thin, and lower if it is short and thick. All objects resist electrical current, except for superconductors, which have a resistance of zero. The resistance R of an object is defined as the ratio of voltage V across it to current I through it, while the conductance G is the reciprocal: formula_0 For a wide variety of materials and conditions, V and I are directly proportional to each other, and therefore R and G are constants (although they will depend on the size and shape of the object, the material it is made of, and other factors like temperature or strain). This proportionality is called Ohm's law, and materials that satisfy it are called "ohmic" materials. In other cases, such as a transformer, diode or battery, V and I are not directly proportional. The ratio is sometimes still useful, and is referred to as a "chordal resistance" or "static resistance", since it corresponds to the inverse slope of a chord between the origin and an I–V curve. In other situations, the derivative formula_1 may be most useful; this is called the "differential resistance". Introduction. In the hydraulic analogy, current flowing through a wire (or resistor) is like water flowing through a pipe, and the voltage drop across the wire is like the pressure drop that pushes water through the pipe. Conductance is proportional to how much flow occurs for a given pressure, and resistance is proportional to how much pressure is required to achieve a given flow. The voltage drop (i.e., difference between voltages on one side of the resistor and the other), not the voltage itself, provides the driving force pushing current through a resistor. In hydraulics, it is similar: the pressure difference between two sides of a pipe, not the pressure itself, determines the flow through it. For example, there may be a large water pressure above the pipe, which tries to push water down through the pipe. But there may be an equally large water pressure below the pipe, which tries to push water back up through the pipe. If these pressures are equal, no water flows. (In the image at right, the water pressure below the pipe is zero.) The resistance and conductance of a wire, resistor, or other element is mostly determined by two properties: Geometry is important because it is more difficult to push water through a long, narrow pipe than a wide, short pipe. 
In the same way, a long, thin copper wire has higher resistance (lower conductance) than a short, thick copper wire. Materials are important as well. A pipe filled with hair restricts the flow of water more than a clean pipe of the same shape and size. Similarly, electrons can flow freely and easily through a copper wire, but cannot flow as easily through a steel wire of the same shape and size, and they essentially cannot flow at all through an insulator like rubber, regardless of its shape. The difference between copper, steel, and rubber is related to their microscopic structure and electron configuration, and is quantified by a property called resistivity. In addition to geometry and material, there are various other factors that influence resistance and conductance, such as temperature; see below. Conductors and resistors. Substances in which electricity can flow are called conductors. A piece of conducting material of a particular resistance meant for use in a circuit is called a resistor. Conductors are made of high-conductivity materials such as metals, in particular copper and aluminium. Resistors, on the other hand, are made of a wide variety of materials depending on factors such as the desired resistance, amount of energy that it needs to dissipate, precision, and costs. Ohm's law. For many materials, the current I through the material is proportional to the voltage V applied across it: formula_2 over a wide range of voltages and currents. Therefore, the resistance and conductance of objects or electronic components made of these materials is constant. This relationship is called Ohm's law, and materials which obey it are called "ohmic" materials. Examples of ohmic components are wires and resistors. The current–voltage graph of an ohmic device consists of a straight line through the origin with positive slope. Other components and materials used in electronics do not obey Ohm's law; the current is not proportional to the voltage, so the resistance varies with the voltage and current through them. These are called "nonlinear" or "non-ohmic". Examples include diodes and fluorescent lamps. Relation to resistivity and conductivity. The resistance of a given object depends primarily on two factors: what material it is made of, and its shape. For a given material, the resistance is inversely proportional to the cross-sectional area; for example, a thick copper wire has lower resistance than an otherwise-identical thin copper wire. Also, for a given material, the resistance is proportional to the length; for example, a long copper wire has higher resistance than an otherwise-identical short copper wire. The resistance R and conductance G of a conductor of uniform cross section, therefore, can be computed as formula_3 where formula_4 is the length of the conductor, measured in metres (m), A is the cross-sectional area of the conductor measured in square metres (m2), σ (sigma) is the electrical conductivity measured in siemens per meter (S·m−1), and ρ (rho) is the electrical resistivity (also called "specific electrical resistance") of the material, measured in ohm-metres (Ω·m). The resistivity and conductivity are proportionality constants, and therefore depend only on the material the wire is made of, not the geometry of the wire. Resistivity and conductivity are reciprocals: formula_5. Resistivity is a measure of the material's ability to oppose electric current. 
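As a numerical illustration of the relation R = ρℓ/A (and G = σA/ℓ) just described, the Python sketch below computes the resistance of copper wires of equal length but different thickness. It is illustrative only: the resistivity value is a commonly quoted room-temperature figure, rounded, and the geometries are arbitrary.

```python
import math

RHO_COPPER = 1.68e-8        # ohm-metre; commonly quoted room-temperature value (rounded)
L = 10.0                    # wire length in metres
diameters_mm = [0.5, 1.0, 2.0]

for d_mm in diameters_mm:
    area = math.pi * (d_mm * 1e-3 / 2) ** 2      # cross-sectional area in m^2
    R = RHO_COPPER * L / area                    # resistance grows with length, shrinks with area
    G = 1.0 / R                                  # conductance in siemens
    print(f"d = {d_mm:3.1f} mm: R = {R * 1000:6.1f} milliohm, G = {G:7.1f} S")
```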
This formula is not exact, as it assumes the current density is totally uniform in the conductor, which is not always true in practical situations. However, this formula still provides a good approximation for long thin conductors such as wires. Another situation for which this formula is not exact is with alternating current (AC), because the skin effect inhibits current flow near the center of the conductor. For this reason, the "geometrical" cross-section is different from the "effective" cross-section in which current actually flows, so resistance is higher than expected. Similarly, if two conductors near each other carry AC current, their resistances increase due to the proximity effect. At commercial power frequency, these effects are significant for large conductors carrying large currents, such as busbars in an electrical substation, or large power cables carrying more than a few hundred amperes. The resistivity of different materials varies by an enormous amount: For example, the conductivity of teflon is about 1030 times lower than the conductivity of copper. Loosely speaking, this is because metals have large numbers of "delocalized" electrons that are not stuck in any one place, so they are free to move across large distances. In an insulator, such as Teflon, each electron is tightly bound to a single molecule so a great force is required to pull it away. Semiconductors lie between these two extremes. More details can be found in the article: Electrical resistivity and conductivity. For the case of electrolyte solutions, see the article: Conductivity (electrolytic). Resistivity varies with temperature. In semiconductors, resistivity also changes when exposed to light. See below. Measurement. An instrument for measuring resistance is called an ohmmeter. Simple ohmmeters cannot measure low resistances accurately because the resistance of their measuring leads causes a voltage drop that interferes with the measurement, so more accurate devices use four-terminal sensing. Static and differential resistance. Many electrical elements, such as diodes and batteries do not satisfy Ohm's law. These are called "non-ohmic" or "non-linear", and their current–voltage curves are not straight lines through the origin. Resistance and conductance can still be defined for non-ohmic elements. However, unlike ohmic resistance, non-linear resistance is not constant but varies with the voltage or current through the device; i.e., its operating point. There are two types of resistance: &lt;templatestyles src="Glossary/styles.css" /&gt; AC circuits. Impedance and admittance. When an alternating current flows through a circuit, the relation between current and voltage across a circuit element is characterized not only by the ratio of their magnitudes, but also the difference in their phases. For example, in an ideal resistor, the moment when the voltage reaches its maximum, the current also reaches its maximum (current and voltage are oscillating in phase). But for a capacitor or inductor, the maximum current flow occurs as the voltage passes through zero and vice versa (current and voltage are oscillating 90° out of phase, see image below). Complex numbers are used to keep track of both the phase and magnitude of current and voltage: formula_6 where: The impedance and admittance may be expressed as complex numbers that can be broken into real and imaginary parts: formula_11 where R is resistance, G is conductance, X is reactance, and B is susceptance. 
These lead to the complex number identities formula_12 which are true in all cases, whereas formula_13 is only true in the special cases of either DC or reactance-free current. The complex angle formula_14 is the phase difference between the voltage and current passing through a component with impedance Z. For capacitors and inductors, this angle is exactly -90° or +90°, respectively, and X and B are nonzero. Ideal resistors have an angle of 0°, since X is zero (and hence B also), and Z and Y reduce to R and G respectively. In general, AC systems are designed to keep the phase angle close to 0° as much as possible, since it reduces the reactive power, which does no useful work at a load. In a simple case with an inductive load (causing the phase to increase), a capacitor may be added for compensation at one frequency, since the capacitor's phase shift is negative, bringing the total impedance phase closer to 0° again. Y is the reciprocal of Z (formula_15) for all circuits, just as formula_16 for DC circuits containing only resistors, or AC circuits for which either the reactance or susceptance happens to be zero (X or "B" 0, respectively) (if one is zero, then for realistic systems both must be zero). Frequency dependence. A key feature of AC circuits is that the resistance and conductance can be frequency-dependent, a phenomenon known as the universal dielectric response. One reason, mentioned above is the skin effect (and the related proximity effect). Another reason is that the resistivity itself may depend on frequency (see Drude model, deep-level traps, resonant frequency, Kramers–Kronig relations, etc.) Energy dissipation and Joule heating. Resistors (and other elements with resistance) oppose the flow of electric current; therefore, electrical energy is required to push current through the resistance. This electrical energy is dissipated, heating the resistor in the process. This is called "Joule heating" (after James Prescott Joule), also called "ohmic heating" or "resistive heating". The dissipation of electrical energy is often undesired, particularly in the case of transmission losses in power lines. High voltage transmission helps reduce the losses by reducing the current for a given power. On the other hand, Joule heating is sometimes useful, for example in electric stoves and other electric heaters (also called "resistive heaters"). As another example, incandescent lamps rely on Joule heating: the filament is heated to such a high temperature that it glows "white hot" with thermal radiation (also called incandescence). The formula for Joule heating is: formula_17 where P is the power (energy per unit time) converted from electrical energy to thermal energy, R is the resistance, and I is the current through the resistor. Dependence on other conditions. Temperature dependence. Near room temperature, the resistivity of metals typically increases as temperature is increased, while the resistivity of semiconductors typically decreases as temperature is increased. The resistivity of insulators and electrolytes may increase or decrease depending on the system. For the detailed behavior and explanation, see Electrical resistivity and conductivity. As a consequence, the resistance of wires, resistors, and other components often change with temperature. This effect may be undesired, causing an electronic circuit to malfunction at extreme temperatures. In some cases, however, the effect is put to good use. 
When temperature-dependent resistance of a component is used purposefully, the component is called a resistance thermometer or thermistor. (A resistance thermometer is made of metal, usually platinum, while a thermistor is made of ceramic or polymer.) Resistance thermometers and thermistors are generally used in two ways. First, they can be used as thermometers: by measuring the resistance, the temperature of the environment can be inferred. Second, they can be used in conjunction with Joule heating (also called self-heating): if a large current is running through the resistor, the resistor's temperature rises and therefore its resistance changes. Therefore, these components can be used in a circuit-protection role similar to fuses, or for feedback in circuits, or for many other purposes. In general, self-heating can turn a resistor into a nonlinear and hysteretic circuit element. For more details see Thermistor#Self-heating effects. If the temperature T does not vary too much, a linear approximation is typically used: formula_18 where formula_19 is called the "temperature coefficient of resistance", formula_20 is a fixed reference temperature (usually room temperature), and formula_21 is the resistance at temperature formula_20. The parameter formula_19 is an empirical parameter fitted from measurement data. Because the linear approximation is only an approximation, formula_19 is different for different reference temperatures. For this reason it is usual to specify the temperature that formula_19 was measured at with a suffix, such as formula_22, and the relationship only holds in a range of temperatures around the reference. The temperature coefficient formula_19 is typically to for metals near room temperature. It is usually negative for semiconductors and insulators, with highly variable magnitude. Strain dependence. Just as the resistance of a conductor depends upon temperature, the resistance of a conductor depends upon strain. By placing a conductor under tension (a form of stress that leads to strain in the form of stretching of the conductor), the length of the section of conductor under tension increases and its cross-sectional area decreases. Both these effects contribute to increasing the resistance of the strained section of conductor. Under compression (strain in the opposite direction), the resistance of the strained section of conductor decreases. See the discussion on strain gauges for details about devices constructed to take advantage of this effect. Light illumination dependence. Some resistors, particularly those made from semiconductors, exhibit "photoconductivity", meaning that their resistance changes when light is shining on them. Therefore, they are called "photoresistors" (or "light dependent resistors"). These are a common type of light detector. Superconductivity. Superconductors are materials that have exactly zero resistance and infinite conductance, because they can have "V" = 0 and "I" ≠ 0. This also means there is no joule heating, or in other words no dissipation of electrical energy. Therefore, if superconductive wire is made into a closed loop, current flows around the loop forever. Superconductors require cooling to temperatures near with liquid helium for most metallic superconductors like niobium–tin alloys, or cooling to temperatures near with liquid nitrogen for the expensive, brittle and delicate ceramic high temperature superconductors. 
Nevertheless, there are many technological applications of superconductivity, including superconducting magnets. Footnotes. References.
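As a numerical illustration of the complex-impedance relations in the AC circuits section above, the Python sketch below uses an arbitrary example impedance (a 50 Ω resistance in series with a +30 Ω inductive reactance) to confirm the identities relating (R, X) and (G, B), the fact that R = 1/G holds only when the reactance is zero, and the phase angle of Z.

```python
import cmath

# A series RL branch at one frequency: Z = R + jX (values are arbitrary examples).
Z = complex(50.0, 30.0)          # 50 ohm resistance, +30 ohm inductive reactance
Y = 1 / Z                        # admittance G + jB
R, X = Z.real, Z.imag
G, B = Y.real, Y.imag

# The identities relating (R, X) and (G, B) hold for any nonzero Z:
print("R from G, B:", G / (G**2 + B**2), "vs", R)
print("X from G, B:", -B / (G**2 + B**2), "vs", X)

# Note that R != 1/G unless the reactance is zero:
print("1/G =", 1 / G, " (not equal to R =", R, ")")

# Phase difference between voltage and current across this impedance:
print("phase angle (degrees):", cmath.phase(Z) * 180 / cmath.pi)
```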
[ { "math_id": 0, "text": "R = \\frac{V}{I}, \\qquad G = \\frac{I}{V} = \\frac{1}{R}." }, { "math_id": 1, "text": " \\frac{\\mathrm{d}V}{\\mathrm{d}I} " }, { "math_id": 2, "text": "I \\propto V" }, { "math_id": 3, "text": "\\begin{align}\n R &= \\rho \\frac{\\ell}{A}, \\\\[5pt]\n G &= \\sigma \\frac{A}{\\ell} \\,.\n\\end{align}" }, { "math_id": 4, "text": "\\ell" }, { "math_id": 5, "text": "\\rho=1/\\sigma" }, { "math_id": 6, "text": "\\begin{array}{cl}\nu(t) &= \\operatorname\\mathcal{R_e} \\left( U_0 \\cdot e^{j\\omega t}\\right) \\\\\ni(t) &= \\operatorname\\mathcal{R_e} \\left( I_0 \\cdot e^{j(\\omega t + \\varphi)}\\right) \\\\\nZ &= \\frac{U}{\\ I\\ } \\\\\nY &= \\frac{\\ 1\\ }{Z} = \\frac{\\ I\\ }{U}\n\\end{array}" }, { "math_id": 7, "text": "\\omega" }, { "math_id": 8, "text": "\\varphi" }, { "math_id": 9, "text": "\\mathcal{R_e}" }, { "math_id": 10, "text": "j \\equiv \\sqrt{-1\\ }" }, { "math_id": 11, "text": "\\begin{align}\nZ &= R + jX \\\\\nY &= G + jB ~.\n\\end{align}" }, { "math_id": 12, "text": "\\begin{align}\nR &= \\frac{G}{\\ G^2 + B^2\\ }\\ , \\qquad & X = \\frac{-B~}{\\ G^2 + B^2\\ }\\ , \\\\\nG &= \\frac{R}{\\ R^2 + X^2\\ }\\ , \\qquad & B = \\frac{-X~}{\\ R^2 + X^2\\ }\\ ,\n\\end{align}" }, { "math_id": 13, "text": "\\ R = 1/G\\ " }, { "math_id": 14, "text": "\\ \\theta = \\arg(Z) = -\\arg(Y)\\ " }, { "math_id": 15, "text": "\\ Z = 1/Y\\ " }, { "math_id": 16, "text": "R = 1/G" }, { "math_id": 17, "text": "P=I^2R" }, { "math_id": 18, "text": "R(T) = R_0[1+\\alpha (T - T_0)]" }, { "math_id": 19, "text": "\\alpha" }, { "math_id": 20, "text": "T_0" }, { "math_id": 21, "text": "R_0" }, { "math_id": 22, "text": "\\alpha_{15}" } ]
https://en.wikipedia.org/wiki?curid=61577
6157978
Micro heat exchanger
Micro heat exchangers, Micro-scale heat exchangers, or microstructured heat exchangers are heat exchangers in which (at least one) fluid flows in lateral confinements with typical dimensions below 1 mm. The most typical such confinement are microchannels, which are channels with a hydraulic diameter below 1 mm. Microchannel heat exchangers can be made from metal or ceramic. Microchannel heat exchangers can be used for many applications including: Background. Investigation of microscale thermal devices is motivated by the single phase internal flow correlation for convective heat transfer: formula_0 Where formula_1 is the heat transfer coefficient, formula_2 is the Nusselt number, formula_3 is the thermal conductivity of the fluid and formula_4 is the hydraulic diameter of the channel or duct. In internal laminar flows, the Nusselt number becomes a constant. This is a result which can be arrived at analytically: For the case of a constant wall temperature, formula_5 and for the case of constant heat flux formula_6 for round tubes. The last value is increased to 140/17 = 8.23 for flat parallel plates. As Reynolds number is proportional to hydraulic diameter, fluid flow in channels of small hydraulic diameter will predominantly be laminar in character. This correlation therefore indicates that the heat transfer coefficient increases as channel diameter decreases. Should the hydraulic diameter in forced convection be on the order of tens or hundreds of micrometres, an extremely high heat transfer coefficient should result. This hypothesis was initially investigated by Tuckerman and Pease. Their positive results led to further research ranging from classical investigations of single channel heat transfer to more applied investigations in parallel micro-channel and micro scale plate fin heat exchangers. Recent work in the field has focused on the potential of two-phase flows at the micro-scale. Classification. Just like "conventional" or "macro scale" heat exchangers, micro heat exchangers have one, two or even three fluidic flows. In the case of one fluidic flow, heat can be transferred to the fluid (each of the fluids can be a gas, a liquid, or a multiphase flow) from electrically powered heater cartridges, or removed from the fluid by electrically powered elements like Peltier chillers. In the case of two fluidic flows, micro heat exchangers are usually classified by the orientation of the fluidic flows to another as "cross flow" or "counter flow" devices. If a chemical reaction is conducted inside a micro heat exchanger, the latter is also called a microreactor. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
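The motivating correlation h = Nu·k/d is easy to tabulate. The Python sketch below is illustrative only: it assumes the constant laminar Nusselt number 3.657 quoted above (constant wall temperature, round tube) and a rounded thermal conductivity for liquid water, and shows how the heat transfer coefficient grows as the hydraulic diameter shrinks toward the micro scale.

```python
NU_LAMINAR = 3.657     # Nusselt number, laminar flow, constant wall temperature (round tube)
K_WATER = 0.6          # W/(m*K); approximate thermal conductivity of liquid water (assumed)

for d in (10e-3, 1e-3, 100e-6, 10e-6):        # hydraulic diameters in metres
    h = NU_LAMINAR * K_WATER / d              # heat transfer coefficient, W/(m^2*K)
    print(f"d = {d * 1e6:8.0f} um  ->  h = {h:9.0f} W/(m^2*K)")
```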
[ { "math_id": 0, "text": "h=\\mathit{Nu}_c \\frac{k}{d}" }, { "math_id": 1, "text": "h" }, { "math_id": 2, "text": "\\mathit{Nu}_c" }, { "math_id": 3, "text": "k" }, { "math_id": 4, "text": "d" }, { "math_id": 5, "text": "\\mathit{Nu}_c=3.657" }, { "math_id": 6, "text": "\\mathit{Nu}_c=4.364" } ]
https://en.wikipedia.org/wiki?curid=6157978
61580
Electrical resistivity and conductivity
Measure of a substance's ability to resist or conduct electric current
Electrical resistivity (also called volume resistivity or specific electrical resistance) is a fundamental specific property of a material that measures its electrical resistance or how strongly it resists electric current. A low resistivity indicates a material that readily allows electric current. Resistivity is commonly represented by the Greek letter ρ (rho). The SI unit of electrical resistivity is the ohm-metre (Ω⋅m). For example, if a solid cube of material measuring 1 m on each side has sheet contacts on two opposite faces, and the resistance between these contacts is 1 Ω, then the resistivity of the material is 1 Ω⋅m. Electrical conductivity (or specific conductance) is the reciprocal of electrical resistivity. It represents a material's ability to conduct electric current. It is commonly signified by the Greek letter σ (sigma), but κ (kappa) (especially in electrical engineering) and γ (gamma) are sometimes used. The SI unit of electrical conductivity is siemens per metre (S/m). Resistivity and conductivity are intensive properties of materials, giving the opposition of a standard cube of material to current. Electrical resistance and conductance are corresponding extensive properties that give the opposition of a specific object to electric current. Definition. Ideal case. In an ideal case, cross-section and physical composition of the examined material are uniform across the sample, and the electric field and current density are both parallel and constant everywhere. Many resistors and conductors do in fact have a uniform cross section with a uniform flow of electric current, and are made of a single material, so that this is a good model. (See the adjacent diagram.) When this is the case, the resistance of the conductor is directly proportional to its length and inversely proportional to its cross-sectional area, where the electrical resistivity ρ (Greek: rho) is the constant of proportionality. This is written as: formula_0 formula_1 where formula_2 is the electrical resistance of a uniform specimen of the material, formula_3 is the length of the specimen, and formula_4 is the cross-sectional area of the specimen. The resistivity can be expressed using the SI unit ohm metre (Ω⋅m) — i.e. ohms multiplied by square metres (for the cross-sectional area) then divided by metres (for the length). Both "resistance" and "resistivity" describe how difficult it is to make electrical current flow through a material, but unlike resistance, resistivity is an intrinsic property and does not depend on geometric properties of a material. This means that all pure copper (Cu) wires (which have not been subjected to distortion of their crystalline structure etc.), irrespective of their shape and size, have the same resistivity, but a long, thin copper wire has a much larger resistance than a thick, short copper wire. Every material has its own characteristic resistivity. For example, rubber has a far larger resistivity than copper. In a hydraulic analogy, passing current through a high-resistivity material is like pushing water through a pipe full of sand, while passing current through a low-resistivity material is like pushing water through an empty pipe. If the pipes are the same size and shape, the pipe full of sand has higher resistance to flow. Resistance, however, is not solely determined by the presence or absence of sand. It also depends on the length and width of the pipe: short or wide pipes have lower resistance than narrow or long pipes.
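As a quick numerical illustration of formula_1, the following Python sketch computes the resistance of a copper wire; the resistivity used (about 1.68 × 10⁻⁸ Ω⋅m near room temperature) and the wire dimensions are assumed example values, not figures quoted from this article.

```python
import math

RHO_COPPER = 1.68e-8  # resistivity of copper near 20 C, ohm*m (assumed typical value)

def wire_resistance(rho, length_m, diameter_m):
    """R = rho * l / A for a wire with a circular cross-section."""
    area = math.pi * (diameter_m / 2.0) ** 2
    return rho * length_m / area

# A 10 m length of 1 mm diameter copper wire: roughly 0.21 ohm.
print(f"R = {wire_resistance(RHO_COPPER, 10.0, 1e-3):.3f} ohm")
```

Doubling the length doubles the resistance, while doubling the diameter cuts it by a factor of four, exactly as the ideal-case formula predicts.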
The resistivity equation above can be transposed to get Pouillet's law (named after Claude Pouillet): formula_5 The resistance of a given element is proportional to the length, but inversely proportional to the cross-sectional area. For example, if A = 1 m², formula_3 = 1 m (forming a cube with perfectly conductive contacts on opposite faces), then the resistance of this element in ohms is numerically equal to the resistivity of the material it is made of in Ω⋅m. Conductivity, σ, is the inverse of resistivity: formula_6 Conductivity has SI units of siemens per metre (S/m). General scalar quantities. If the geometry is more complicated, or if the resistivity varies from point to point within the material, the current and electric field will be functions of position. Then it is necessary to use a more general expression in which the resistivity at a particular point is defined as the ratio of the electric field to the density of the current it creates at that point: formula_7 where formula_8 is the resistivity of the conductor material at the point formula_9, formula_10 is the electric field at that point, and formula_11 is the current density at that point. The current density is parallel to the electric field by necessity. Conductivity is the inverse (reciprocal) of resistivity. Here, it is given by: formula_12 For example, rubber is a material with large ρ and small σ — because even a very large electric field in rubber makes almost no current flow through it. On the other hand, copper is a material with small ρ and large σ — because even a small electric field pulls a lot of current through it. This expression simplifies to the formula given above under "ideal case" when the resistivity is constant in the material and the geometry has a uniform cross-section. In this case, the electric field and current density are constant and parallel. Tensor resistivity. When the resistivity of a material has a directional component, the most general definition of resistivity must be used. It starts from the tensor-vector form of Ohm's law, which relates the electric field inside a material to the electric current flow. This equation is completely general, meaning it is valid in all cases, including those mentioned above. However, this definition is the most complicated, so it is only directly used in anisotropic cases, where the simpler definitions cannot be applied. If the material is not anisotropic, it is safe to ignore the tensor-vector definition, and use a simpler expression instead. Here, anisotropic means that the material has different properties in different directions. For example, a crystal of graphite consists microscopically of a stack of sheets, and current flows very easily through each sheet, but much less easily from one sheet to the adjacent one. In such cases, the current does not flow in exactly the same direction as the electric field. Thus, the appropriate equations are generalized to the three-dimensional tensor form: formula_13 where the conductivity σ and resistivity ρ are rank-2 tensors, and electric field E and current density J are vectors. These tensors can be represented by 3×3 matrices, the vectors with 3×1 matrices, with matrix multiplication used on the right side of these equations.
In matrix form, the resistivity relation is given by: formula_14 where formula_15 is the electric field vector, with components ("E""x", "E""y", "E""z"); formula_16 is the resistivity tensor, represented here by a 3×3 matrix; and formula_17 is the electric current density vector, with components ("J""x", "J""y", "J""z"). Equivalently, resistivity can be given in the more compact Einstein notation: formula_18 In either case, the resulting expression for each electric field component is: formula_19 Since the choice of the coordinate system is free, the usual convention is to simplify the expression by choosing an x-axis parallel to the current direction, so "J""y" = "J""z" = 0. This leaves: formula_20 Conductivity is defined similarly: formula_21 or formula_22 both resulting in: formula_23 Looking at the two expressions, formula_16 and formula_24 are the matrix inverse of each other. However, in the most general case, the individual matrix elements are not necessarily reciprocals of one another; for example, "σxx" may not be equal to 1/"ρxx". This can be seen in the Hall effect, where formula_25 is nonzero. In the Hall effect, due to rotational invariance about the z-axis, formula_26 and formula_27, so the relation between resistivity and conductivity simplifies to: formula_28 If the electric field is parallel to the applied current, formula_25 and formula_29 are zero. When they are zero, one number, formula_30, is enough to describe the electrical resistivity. It is then written as simply formula_31, and this reduces to the simpler expression. Conductivity and current carriers. Relation between current density and electric current velocity. Electric current is the ordered movement of electric charges. Causes of conductivity. Band theory simplified. According to elementary quantum mechanics, an electron in an atom or crystal can only have certain precise energy levels; energies between these levels are impossible. When a large number of such allowed levels have close-spaced energy values – i.e. have energies that differ only minutely – those close energy levels in combination are called an "energy band". There can be many such energy bands in a material, depending on the atomic number of the constituent atoms and their distribution within the crystal. The material's electrons seek to minimize the total energy in the material by settling into low energy states; however, the Pauli exclusion principle means that only one can exist in each such state. So the electrons "fill up" the band structure starting from the bottom. The characteristic energy level up to which the electrons have filled is called the Fermi level. The position of the Fermi level with respect to the band structure is very important for electrical conduction: only electrons in energy levels near or above the Fermi level are free to move within the broader material structure, since the electrons can easily jump among the partially occupied states in that region. In contrast, the low energy states are completely filled with a fixed limit on the number of electrons at all times, and the high energy states are empty of electrons at all times. Electric current consists of a flow of electrons. In metals there are many electron energy levels near the Fermi level, so there are many electrons available to move. This is what causes the high electronic conductivity of metals. An important part of band theory is that there may be forbidden bands of energy: energy intervals that contain no energy levels.
In insulators and semiconductors, the number of electrons is just the right amount to fill a certain integer number of low energy bands, exactly to the boundary. In this case, the Fermi level falls within a band gap. Since there are no available states near the Fermi level, and the electrons are not freely movable, the electronic conductivity is very low. In metals. A metal consists of a lattice of atoms, each with an outer shell of electrons that freely dissociate from their parent atoms and travel through the lattice. This is also known as a positive ionic lattice. This 'sea' of dissociable electrons allows the metal to conduct electric current. When an electrical potential difference (a voltage) is applied across the metal, the resulting electric field causes electrons to drift towards the positive terminal. The actual drift velocity of electrons is typically small, on the order of magnitude of metres per hour. However, due to the sheer number of moving electrons, even a slow drift velocity results in a large current density. The mechanism is similar to transfer of momentum of balls in a Newton's cradle but the rapid propagation of an electric energy along a wire is not due to the mechanical forces, but the propagation of an energy-carrying electromagnetic field guided by the wire. Most metals have electrical resistance. In simpler models (non quantum mechanical models) this can be explained by replacing electrons and the crystal lattice by a wave-like structure. When the electron wave travels through the lattice, the waves interfere, which causes resistance. The more regular the lattice is, the less disturbance happens and thus the less resistance. The amount of resistance is thus mainly caused by two factors. First, it is caused by the temperature and thus amount of vibration of the crystal lattice. Higher temperatures cause bigger vibrations, which act as irregularities in the lattice. Second, the purity of the metal is relevant as a mixture of different ions is also an irregularity. The small decrease in conductivity on melting of pure metals is due to the loss of long range crystalline order. The short range order remains and strong correlation between positions of ions results in coherence between waves diffracted by adjacent ions. In semiconductors and insulators. In metals, the Fermi level lies in the conduction band (see Band Theory, above) giving rise to free conduction electrons. However, in semiconductors the position of the Fermi level is within the band gap, about halfway between the conduction band minimum (the bottom of the first band of unfilled electron energy levels) and the valence band maximum (the top of the band below the conduction band, of filled electron energy levels). That applies for intrinsic (undoped) semiconductors. This means that at absolute zero temperature, there would be no free conduction electrons, and the resistance is infinite. However, the resistance decreases as the charge carrier density (i.e., without introducing further complications, the density of electrons) in the conduction band increases. In extrinsic (doped) semiconductors, dopant atoms increase the majority charge carrier concentration by donating electrons to the conduction band or producing holes in the valence band. (A "hole" is a position where an electron is missing; such holes can behave in a similar way to electrons.) For both types of donor or acceptor atoms, increasing dopant density reduces resistance. Hence, highly doped semiconductors behave metallically. 
At very high temperatures, the contribution of thermally generated carriers dominates over the contribution from dopant atoms, and the resistance decreases exponentially with temperature. In ionic liquids/electrolytes. In electrolytes, electrical conduction happens not by band electrons or holes, but by full atomic species (ions) traveling, each carrying an electrical charge. The resistivity of ionic solutions (electrolytes) varies tremendously with concentration – while distilled water is almost an insulator, salt water is a reasonable electrical conductor. Conduction in ionic liquids is also controlled by the movement of ions, but here we are talking about molten salts rather than solvated ions. In biological membranes, currents are carried by ionic salts. Small holes in cell membranes, called ion channels, are selective to specific ions and determine the membrane resistance. The concentration of ions in a liquid (e.g., in an aqueous solution) depends on the degree of dissociation of the dissolved substance, characterized by a dissociation coefficient formula_32, which is the ratio of the concentration of ions formula_33 to the concentration of molecules of the dissolved substance formula_34: formula_35 The specific electrical conductivity (formula_36) of a solution is equal to: formula_37 where formula_38: module of the ion charge, formula_39 and formula_40: mobility of positively and negatively charged ions, formula_34: concentration of molecules of the dissolved substance, formula_32: the coefficient of dissociation. Superconductivity. The electrical resistivity of a metallic conductor decreases gradually as temperature is lowered. In normal (that is, non-superconducting) conductors, such as copper or silver, this decrease is limited by impurities and other defects. Even near absolute zero, a real sample of a normal conductor shows some resistance. In a superconductor, the resistance drops abruptly to zero when the material is cooled below its critical temperature. In a normal conductor, the current is driven by a voltage gradient, whereas in a superconductor, there is no voltage gradient and the current is instead related to the phase gradient of the superconducting order parameter. A consequence of this is that an electric current flowing in a loop of superconducting wire can persist indefinitely with no power source. In a class of superconductors known as type II superconductors, including all known high-temperature superconductors, an extremely low but nonzero resistivity appears at temperatures not too far below the nominal superconducting transition when an electric current is applied in conjunction with a strong magnetic field, which may be caused by the electric current. This is due to the motion of magnetic vortices in the electronic superfluid, which dissipates some of the energy carried by the current. The resistance due to this effect is tiny compared with that of non-superconducting materials, but must be taken into account in sensitive experiments. However, as the temperature decreases far enough below the nominal superconducting transition, these vortices can become frozen so that the resistance of the material becomes truly zero. Plasma. Plasmas are very good conductors and electric potentials play an important role. The potential as it exists on average in the space between charged particles, independent of the question of how it can be measured, is called the "plasma potential", or "space potential". 
If an electrode is inserted into a plasma, its potential generally lies considerably below the plasma potential, due to what is termed a Debye sheath. The good electrical conductivity of plasmas makes their electric fields very small. This results in the important concept of "quasineutrality", which says the density of negative charges is approximately equal to the density of positive charges over large volumes of the plasma ("n"e ≈ ⟨Z⟩"n"i), but on the scale of the Debye length there can be charge imbalance. In the special case that "double layers" are formed, the charge separation can extend some tens of Debye lengths. The magnitude of the potentials and electric fields must be determined by means other than simply finding the net charge density. A common example is to assume that the electrons satisfy the Boltzmann relation: formula_41 Differentiating this relation provides a means to calculate the electric field from the density: formula_42 It is possible to produce a plasma that is not quasineutral. An electron beam, for example, has only negative charges. The density of a non-neutral plasma must generally be very low, or the plasma itself must be very small. Otherwise, the repulsive electrostatic force dissipates it. In astrophysical plasmas, Debye screening prevents electric fields from directly affecting the plasma over large distances, i.e., greater than the Debye length. However, the existence of charged particles causes the plasma to generate, and be affected by, magnetic fields. This can and does cause extremely complex behavior, such as the generation of plasma double layers, objects that separate charge over a few tens of Debye lengths. The dynamics of plasmas interacting with external and self-generated magnetic fields are studied in the academic discipline of magnetohydrodynamics. Plasma is often called the "fourth state of matter" after solids, liquids and gases. It is distinct from these and other lower-energy states of matter. Although it is closely related to the gas phase in that it also has no definite form or volume, it differs from the gas phase in a number of ways. Resistivity and conductivity of various materials. The degree of doping in semiconductors makes a large difference in conductivity. To a point, more doping leads to higher conductivity. The conductivity of a water/aqueous solution is highly dependent on its concentration of dissolved salts, and other chemical species that ionize in the solution. Electrical conductivity of water samples is used as an indicator of how salt-free, ion-free, or impurity-free the sample is; the purer the water, the lower the conductivity (the higher the resistivity). Conductivity measurements in water are often reported as "specific conductance", relative to the conductivity of pure water at 25 °C. An EC meter is normally used to measure conductivity in a solution. A rough summary is as follows: This table shows the resistivity (ρ), conductivity and temperature coefficient of various materials at 20 °C. The effective temperature coefficient varies with temperature and purity level of the material. The 20 °C value is only an approximation when used at other temperatures. For example, the coefficient becomes lower at higher temperatures for copper, and the value 0.00427 is commonly specified at 0 °C. The extremely low resistivity (high conductivity) of silver is characteristic of metals.
George Gamow tidily summed up the nature of the metals' dealings with electrons in his popular science book "One, Two, Three...Infinity" (1947): &lt;templatestyles src="Template:Blockquote/styles.css" /&gt;The metallic substances differ from all other materials by the fact that the outer shells of their atoms are bound rather loosely, and often let one of their electrons go free. Thus the interior of a metal is filled up with a large number of unattached electrons that travel aimlessly around like a crowd of displaced persons. When a metal wire is subjected to electric force applied on its opposite ends, these free electrons rush in the direction of the force, thus forming what we call an electric current. More technically, the free electron model gives a basic description of electron flow in metals. Wood is widely regarded as an extremely good insulator, but its resistivity is sensitively dependent on moisture content, with damp wood being a far poorer insulator, by many orders of magnitude, than oven-dry wood. In any case, a sufficiently high voltage – such as that in lightning strikes or some high-tension power lines – can lead to insulation breakdown and electrocution risk even with apparently dry wood. Temperature dependence. Linear approximation. The electrical resistivity of most materials changes with temperature. If the temperature T does not vary too much, a linear approximation is typically used: formula_43 where formula_32 is called the "temperature coefficient of resistivity", formula_44 is a fixed reference temperature (usually room temperature), and formula_45 is the resistivity at temperature formula_44. The parameter formula_32 is an empirical parameter fitted from measurement data. Because the linear approximation is only an approximation, formula_32 is different for different reference temperatures. For this reason it is usual to specify the temperature that formula_32 was measured at with a suffix, such as formula_47, and the relationship only holds in a range of temperatures around the reference. When the temperature varies over a large temperature range, the linear approximation is inadequate and a more detailed analysis and understanding should be used. Metals. In general, the electrical resistivity of metals increases with temperature. Electron–phonon interactions can play a key role. At high temperatures, the resistance of a metal increases linearly with temperature. As the temperature of a metal is reduced, the temperature dependence of resistivity follows a power law function of temperature. Mathematically the temperature dependence of the resistivity ρ of a metal can be approximated through the Bloch–Grüneisen formula: formula_48 where formula_49 is the residual resistivity due to defect scattering, and A is a constant that depends on the velocity of electrons at the Fermi surface, the Debye radius and the number density of electrons in the metal. formula_50 is the Debye temperature as obtained from resistivity measurements and matches very closely with the values of Debye temperature obtained from specific heat measurements. n is an integer that depends upon the nature of the interaction. The Bloch–Grüneisen formula is an approximation obtained assuming that the studied metal has a spherical Fermi surface inscribed within the first Brillouin zone and a Debye phonon spectrum.
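To make the linear approximation above concrete, the sketch below rescales a room-temperature resistivity to other temperatures; the copper values used (formula_45 ≈ 1.68 × 10⁻⁸ Ω⋅m and formula_32 ≈ 0.0039 K⁻¹, both referenced to 20 °C) are typical handbook numbers assumed purely for illustration.

```python
def resistivity_linear(rho_0, alpha, T, T_0=20.0):
    """Linear approximation rho(T) = rho_0 * (1 + alpha * (T - T_0))."""
    return rho_0 * (1.0 + alpha * (T - T_0))

# Assumed copper values referenced to T_0 = 20 C:
rho_20 = 1.68e-8    # ohm*m
alpha_20 = 0.0039   # 1/K

for T in (-40.0, 20.0, 100.0):
    rho = resistivity_linear(rho_20, alpha_20, T)
    print(f"T = {T:6.1f} C  ->  rho = {rho:.3e} ohm*m")
```

Because formula_32 itself varies with temperature, the same sample would need a different coefficient if the reference temperature were moved, which is why suffix notation such as formula_47 is used.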
If more than one source of scattering is simultaneously present, Matthiessen's rule (first formulated by Augustus Matthiessen in the 1860s) states that the total resistance can be approximated by adding up several different terms, each with the appropriate value of n. As the temperature of the metal is sufficiently reduced (so as to 'freeze' all the phonons), the resistivity usually reaches a constant value, known as the residual resistivity. This value depends not only on the type of metal, but on its purity and thermal history. The value of the residual resistivity of a metal is decided by its impurity concentration. Some materials lose all electrical resistivity at sufficiently low temperatures, due to an effect known as superconductivity. An investigation of the low-temperature resistivity of metals was the motivation for Heike Kamerlingh Onnes's experiments that led in 1911 to the discovery of superconductivity. For details see History of superconductivity. Wiedemann–Franz law. The Wiedemann–Franz law states that for materials where heat and charge transport is dominated by electrons, the ratio of thermal to electrical conductivity is proportional to the temperature: formula_51 where formula_46 is the thermal conductivity, formula_52 is the Boltzmann constant, formula_53 is the electron charge, formula_54 is temperature, and formula_36 is the electric conductivity. The constant of proportionality, equal to the ratio of formula_46 to the product of formula_36 and formula_54, is called the Lorenz number. Semiconductors. In general, intrinsic semiconductor resistivity decreases with increasing temperature. The electrons are bumped to the conduction energy band by thermal energy, where they flow freely, and in doing so leave behind holes in the valence band, which also flow freely. The electric resistance of a typical intrinsic (undoped) semiconductor decreases exponentially with temperature following an Arrhenius model: formula_55 An even better approximation of the temperature dependence of the resistivity of a semiconductor is given by the Steinhart–Hart equation: formula_56 where A, B and C are the so-called Steinhart–Hart coefficients. This equation is used to calibrate thermistors. Extrinsic (doped) semiconductors have a far more complicated temperature profile. As the temperature increases starting from absolute zero, their resistance first decreases steeply as the carriers leave the donors or acceptors. After most of the donors or acceptors have lost their carriers, the resistance starts to increase again slightly due to the reducing mobility of carriers (much as in a metal). At higher temperatures, they behave like intrinsic semiconductors as the carriers from the donors/acceptors become insignificant compared to the thermally generated carriers. In non-crystalline semiconductors, conduction can occur by charges quantum tunnelling from one localised site to another. This is known as variable range hopping and has the characteristic form of formula_57 where n = 2, 3, 4, depending on the dimensionality of the system. Kondo insulators. Kondo insulators are materials where the resistivity follows the formula formula_58 where formula_59, formula_60, formula_61 and formula_62 are constant parameters, formula_45 is the residual resistivity, formula_63 is the Fermi liquid contribution, formula_64 is a lattice vibrations term and formula_65 is the Kondo effect. Complex resistivity and conductivity.
When analyzing the response of materials to alternating electric fields (dielectric spectroscopy), in applications such as electrical impedance tomography, it is convenient to replace resistivity with a complex quantity called impedivity (in analogy to electrical impedance). Impedivity is the sum of a real component, the resistivity, and an imaginary component, the reactivity (in analogy to reactance). The magnitude of impedivity is the square root of sum of squares of magnitudes of resistivity and reactivity. Conversely, in such cases the conductivity must be expressed as a complex number (or even as a matrix of complex numbers, in the case of anisotropic materials) called the "admittivity". Admittivity is the sum of a real component called the conductivity and an imaginary component called the susceptivity. An alternative description of the response to alternating currents uses a real (but frequency-dependent) conductivity, along with a real permittivity. The larger the conductivity is, the more quickly the alternating-current signal is absorbed by the material (i.e., the more opaque the material is). For details, see Mathematical descriptions of opacity. Resistance versus resistivity in complicated geometries. Even if the material's resistivity is known, calculating the resistance of something made from it may, in some cases, be much more complicated than the formula formula_66 above. One example is spreading resistance profiling, where the material is inhomogeneous (different resistivity in different places), and the exact paths of current flow are not obvious. In cases like this, the formulas formula_67 must be replaced with formula_68 where E and J are now vector fields. This equation, along with the continuity equation for J and the Poisson's equation for E, form a set of partial differential equations. In special cases, an exact or approximate solution to these equations can be worked out by hand, but for very accurate answers in complex cases, computer methods like finite element analysis may be required. Resistivity-density product. In some applications where the weight of an item is very important, the product of resistivity and density is more important than absolute low resistivity – it is often possible to make the conductor thicker to make up for a higher resistivity; and then a low-resistivity-density-product material (or equivalently a high conductivity-to-density ratio) is desirable. For example, for long-distance overhead power lines, aluminium is frequently used rather than copper (Cu) because it is lighter for the same conductance. Silver, although it is the least resistive metal known, has a high density and performs similarly to copper by this measure, but is much more expensive. Calcium and the alkali metals have the best resistivity-density products, but are rarely used for conductors due to their high reactivity with water and oxygen (and lack of physical strength). Aluminium is far more stable. Toxicity excludes the choice of beryllium. (Pure beryllium is also brittle.) Thus, aluminium is usually the metal of choice when the weight or cost of a conductor is the driving consideration. History. John Walsh and the conductivity of a vacuum. In a 1774 letter to Dutch-born British scientist Jan Ingenhousz, Benjamin Franklin relates an experiment by another British scientist, John Walsh, that purportedly showed this astonishing fact: Although rarified air conducts electricity better than common air, a vacuum does not conduct electricity at all. 
&lt;templatestyles src="Template:Blockquote/styles.css" /&gt;Mr. Walsh ... has just made a curious Discovery in Electricity. You know we find that in rarify’d Air it would pass more freely, and leap thro’ greater Spaces than in dense Air; and thence it was concluded that in a perfect Vacuum it would pass any distance without the least Obstruction. But having made a perfect Vacuum by means of boil’d Mercury in a long Torricellian bent Tube, its Ends immers’d in Cups full of Mercury, he finds that the Vacuum will not conduct at all, but resists the Passage of the Electric Fluid absolutely. However, to this statement a note (based on modern knowledge) was added by the editors—at the American Philosophical Society and Yale University—of the webpage hosting the letter: &lt;templatestyles src="Template:Blockquote/styles.css" /&gt;We can only assume that something was wrong with Walsh’s findings. ... Although the conductivity of a gas, as it approaches a vacuum, increases up to a point and then decreases, that point is far beyond what the technique described might have been expected to reach. Boiling replaced the air with mercury vapor, which as it cooled created a vacuum that could scarcely have been complete enough to decrease, let alone eliminate, the vapor’s conductivity. Notes. &lt;templatestyles src="Reflist/styles.css" /&gt; References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "R \\propto \\frac\\ell A\n" }, { "math_id": 1, "text": "\\begin{align}\n R &= \\rho \\frac\\ell A \\\\[3pt]\n {}\\Leftrightarrow \\rho &= R \\frac A \\ell,\n\\end{align}" }, { "math_id": 2, "text": "R" }, { "math_id": 3, "text": "\\ell" }, { "math_id": 4, "text": "A" }, { "math_id": 5, "text": "R = \\rho \\frac{\\ell}{A}." }, { "math_id": 6, "text": "\\sigma = \\frac{1}{\\rho}." }, { "math_id": 7, "text": "\\rho(x) = \\frac{E(x)}{J(x)}," }, { "math_id": 8, "text": "\\rho(x)" }, { "math_id": 9, "text": "x" }, { "math_id": 10, "text": "E(x)" }, { "math_id": 11, "text": "J(x)" }, { "math_id": 12, "text": "\\sigma(x) = \\frac{1}{\\rho(x)} = \\frac{J(x)}{E(x)}." }, { "math_id": 13, "text": "\n \\mathbf{J} = \\boldsymbol\\sigma \\mathbf{E} \\,\\,\\rightleftharpoons\\,\\, \n \\mathbf{E} = \\boldsymbol\\rho \\mathbf{J},\n" }, { "math_id": 14, "text": "\n \\begin{bmatrix} E_x \\\\ E_y \\\\ E_z \\end{bmatrix} = \\begin{bmatrix}\n \\rho_{xx} & \\rho_{xy} & \\rho_{xz} \\\\\n \\rho_{yx} & \\rho_{yy} & \\rho_{yz} \\\\\n \\rho_{zx} & \\rho_{zy} & \\rho_{zz}\n \\end{bmatrix}\\begin{bmatrix} J_x \\\\ J_y \\\\ J_z \\end{bmatrix},\n" }, { "math_id": 15, "text": "\\mathbf{E}" }, { "math_id": 16, "text": "\\boldsymbol{\\rho}" }, { "math_id": 17, "text": "\\mathbf{J}" }, { "math_id": 18, "text": "\\mathbf{E}_i = \\boldsymbol\\rho_{ij} \\mathbf{J}_j ~." }, { "math_id": 19, "text": "\\begin{align}\nE_x &= \\rho_{xx} J_x + \\rho_{xy} J_y + \\rho_{xz} J_z, \\\\\nE_y &= \\rho_{yx} J_x + \\rho_{yy} J_y + \\rho_{yz} J_z, \\\\\nE_z &= \\rho_{zx} J_x + \\rho_{zy} J_y + \\rho_{zz} J_z.\n\\end{align}" }, { "math_id": 20, "text": "\n \\rho_{xx} = \\frac{E_x}{J_x}, \\quad\n \\rho_{yx} = \\frac{E_y}{J_x}, \\text{ and }\n \\rho_{zx} = \\frac{E_z}{J_x}.\n" }, { "math_id": 21, "text": "\n \\begin{bmatrix} J_x \\\\ J_y \\\\ J_z \\end{bmatrix} =\n \\begin{bmatrix}\n \\sigma_{xx} & \\sigma_{xy} & \\sigma_{xz} \\\\\n \\sigma_{yx} & \\sigma_{yy} & \\sigma_{yz} \\\\\n \\sigma_{zx} & \\sigma_{zy} & \\sigma_{zz}\n \\end{bmatrix}\\begin{bmatrix} E_x \\\\ E_y \\\\ E_z \\end{bmatrix}\n" }, { "math_id": 22, "text": "\\mathbf{J}_i = \\boldsymbol{\\sigma}_{ij} \\mathbf{E}_{j}," }, { "math_id": 23, "text": "\\begin{align}\nJ_x &= \\sigma_{xx} E_x + \\sigma_{xy} E_y + \\sigma_{xz} E_z \\\\\nJ_y &= \\sigma_{yx} E_x + \\sigma_{yy} E_y + \\sigma_{yz} E_z \\\\\nJ_z &= \\sigma_{zx} E_x + \\sigma_{zy} E_y + \\sigma_{zz} E_z\n\\end{align}." }, { "math_id": 24, "text": "\\boldsymbol{\\sigma}" }, { "math_id": 25, "text": "\\rho_{xy}" }, { "math_id": 26, "text": " \\rho_{yy}=\\rho_{xx} " }, { "math_id": 27, "text": " \\rho_{yx}=-\\rho_{xy}" }, { "math_id": 28, "text": "\n \\sigma_{xx} = \\frac{ \\rho_{xx}}{\\rho_{xx}^2 + \\rho_{xy}^2}, \\quad\n \\sigma_{xy} = \\frac{-\\rho_{xy}}{\\rho_{xx}^2 + \\rho_{xy}^2}.\n" }, { "math_id": 29, "text": "\\rho_{xz}" }, { "math_id": 30, "text": "\\rho_{xx}" }, { "math_id": 31, "text": "\\rho" }, { "math_id": 32, "text": "\\alpha" }, { "math_id": 33, "text": "N" }, { "math_id": 34, "text": "N_0" }, { "math_id": 35, "text": "N = \\alpha N_0 ~." }, { "math_id": 36, "text": "\\sigma" }, { "math_id": 37, "text": " \\sigma = q\\left(b^+ + b^-\\right)\\alpha N_0 ~," }, { "math_id": 38, "text": "q" }, { "math_id": 39, "text": "b^+" }, { "math_id": 40, "text": "b^-" }, { "math_id": 41, "text": "n_\\text{e} \\propto \\exp\\left(e\\Phi/k_\\text{B} T_\\text{e}\\right)." }, { "math_id": 42, "text": "\\mathbf{E} = -\\frac{k_\\text{B} T_\\text{e}}{e}\\frac{\\nabla n_\\text{e}}{n_\\text{e}}." 
}, { "math_id": 43, "text": "\\rho(T) = \\rho_0[1 + \\alpha (T - T_0)]," }, { "math_id": 44, "text": "T_0" }, { "math_id": 45, "text": "\\rho_0" }, { "math_id": 46, "text": "\\kappa" }, { "math_id": 47, "text": "\\alpha_{15}" }, { "math_id": 48, "text": "\\rho(T) = \\rho(0) + A\\left(\\frac{T}{\\Theta_R}\\right)^n \\int_0^{\\Theta_R/T} \\frac{x^n}{(e^x - 1)(1 - e^{-x})} \\, dx ," }, { "math_id": 49, "text": "\\rho(0)" }, { "math_id": 50, "text": "\\Theta_R" }, { "math_id": 51, "text": "\n{\\kappa \\over \\sigma} = {\\pi^2 \\over 3} \\left(\\frac{k}{e}\\right)^2 T," }, { "math_id": 52, "text": "k" }, { "math_id": 53, "text": "e" }, { "math_id": 54, "text": "T" }, { "math_id": 55, "text": "\\rho = \\rho_0 e^{\\frac{E_A}{k_B T}}." }, { "math_id": 56, "text": "\\frac{1}{T} = A + B \\ln\\rho + C (\\ln\\rho)^3," }, { "math_id": 57, "text": "\\rho = A\\exp\\left(T^{-1/n}\\right)," }, { "math_id": 58, "text": "\\rho(T) = \\rho_0 + aT^2 + bT^5 + c_m \\ln\\frac{\\mu}{T}" }, { "math_id": 59, "text": "a" }, { "math_id": 60, "text": "b" }, { "math_id": 61, "text": "c_m" }, { "math_id": 62, "text": "\\mu" }, { "math_id": 63, "text": "T^2" }, { "math_id": 64, "text": "T^5" }, { "math_id": 65, "text": "\\ln\\frac{1}{T}" }, { "math_id": 66, "text": "R = \\rho \\ell /A " }, { "math_id": 67, "text": "J = \\sigma E \\,\\, \\rightleftharpoons \\,\\, E = \\rho J" }, { "math_id": 68, "text": "\\mathbf{J}(\\mathbf{r}) = \\sigma(\\mathbf{r}) \\mathbf{E}(\\mathbf{r}) \\,\\, \\rightleftharpoons \\,\\, \\mathbf{E}(\\mathbf{r}) = \\rho(\\mathbf{r}) \\mathbf{J}(\\mathbf{r})," } ]
https://en.wikipedia.org/wiki?curid=61580
6158162
ITU model for indoor attenuation
The ITU indoor propagation model, also known as "ITU model for indoor attenuation", is a radio propagation model that estimates the path loss inside a room or a closed area inside a building delimited by walls of any form. Suitable for appliances designed for indoor use, this model approximates the total path loss an indoor link may experience. Applicable to/under conditions. This model is applicable only to indoor environments. Typically, such appliances use the lower microwave bands around 2.4 GHz. However, the model applies to a much wider range. Coverage. Frequency: 900 MHz to 5.2 GHz. Floors: 1 to 3. Mathematical formulations. The model. The ITU indoor path loss model is formally expressed as: formula_0 where "L" = the total path loss. Unit: decibel (dB). "f" = frequency of transmission. Unit: megahertz (MHz). "d" = distance. Unit: meter (m). "N" = the distance power loss coefficient. "n" = number of floors between the transmitter and receiver. "P"f(n) = the floor loss penetration factor. Calculation of distance power loss coefficient. The distance power loss coefficient, "N", is the quantity that expresses the loss of signal power with distance. This coefficient is an empirical one. Some values are provided in Table 1. Calculation of floor penetration loss factor. The floor penetration loss factor is an empirical constant dependent on the number of floors the waves need to penetrate. Some values are tabulated in Table 2.
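The formula translates directly into code. In the example call below, the distance power loss coefficient and the floor penetration factor (N = 30 and a factor of 15 + 4(n − 1) dB, numbers of the kind quoted for office environments around 2 GHz) are illustrative assumptions standing in for the values of Table 1 and Table 2.

```python
import math

def itu_indoor_path_loss(f_mhz, d_m, N, P_f_db):
    """Total indoor path loss in dB: L = 20*log10(f) + N*log10(d) + P_f(n) - 28."""
    return 20 * math.log10(f_mhz) + N * math.log10(d_m) + P_f_db - 28

# Example: a 2.4 GHz link over 20 m crossing 2 floors in an office-like setting.
n_floors = 2
P_f = 15 + 4 * (n_floors - 1)   # assumed office-style floor penetration factor, dB
print(f"L = {itu_indoor_path_loss(2400, 20, N=30, P_f_db=P_f):.1f} dB")   # about 98 dB
```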
[ { "math_id": 0, "text": "L \\; = \\; 20 \\; \\log_{10} f \\; + \\; N \\; \\log_{10} d \\; + \\; P_f(n) \\; - \\; 28" } ]
https://en.wikipedia.org/wiki?curid=6158162
6158220
Log-distance path loss model
The log-distance path loss model is a radio propagation model that predicts the path loss a signal encounters inside a building or densely populated areas over distance. Mathematical formulation. Model. The log-distance path loss model is formally expressed as: formula_0 where formula_1 is the total path loss in decibels (dB); formula_2 is the transmitted power level in dBm, with formula_3 the transmitted power in watts; formula_4 is the received power level in dBm, with formula_5 the received power in watts; formula_6 is the path loss in dB at the reference distance formula_9; formula_8 is the length of the path; formula_10 is the path loss exponent; and formula_11 is a zero-mean Gaussian random variable (in dB) with standard deviation formula_12, reflecting the attenuation caused by flat fading (in the absence of fading this variable is zero). Corresponding non-logarithmic model. This corresponds to the following non-logarithmic gain model: formula_14 where formula_15 is the average multiplicative gain at the reference distance formula_7 from the transmitter. This gain depends on factors such as carrier frequency, antenna heights and antenna gain, for example due to directional antennas; and formula_13 is a stochastic process that reflects flat fading. In case of only slow fading (shadowing), it may have log-normal distribution with parameter formula_12 dB. In case of only fast fading due to multipath propagation, its amplitude may have Rayleigh distribution or Ricean distribution. This can be convenient, because power is proportional to the square of amplitude. Squaring a Rayleigh-distributed random variable produces an exponentially distributed random variable. In many cases, exponential distributions are computationally convenient and allow direct closed-form calculations in many more situations than a Rayleigh (or even a Gaussian). Empirical coefficient values for indoor propagation. Empirical measurements of coefficients formula_10 and formula_12 in dB have shown the following values for a number of indoor wave propagation cases. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
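The model is equally simple to evaluate in code. In the sketch below, the reference loss formula_6, the path loss exponent formula_10 and the shadowing spread formula_12 are assumed example values of the kind reported for indoor office links, not figures taken from the measurements referenced above.

```python
import math
import random

def log_distance_path_loss(d_m, L0_db, gamma, d0_m=1.0, sigma_db=0.0):
    """L = L0 + 10*gamma*log10(d/d0) + X_g, with X_g ~ N(0, sigma_db^2) shadowing."""
    X_g = random.gauss(0.0, sigma_db) if sigma_db > 0 else 0.0
    return L0_db + 10.0 * gamma * math.log10(d_m / d0_m) + X_g

# Assumed example parameters for an indoor office link:
random.seed(0)
for d in (5, 20, 50):
    L = log_distance_path_loss(d, L0_db=40.0, gamma=3.0, sigma_db=7.0)
    print(f"d = {d:3d} m  ->  L = {L:5.1f} dB")
```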
[ { "math_id": 0, "text": "L=L_\\text{Tx}-L_\\text{Rx}=L_0+10\\gamma\\log_{10}\\frac{d}{d_0}+X_\\text{g}" }, { "math_id": 1, "text": "{L}" }, { "math_id": 2, "text": "L_\\text{Tx}=10\\log_{10}\\frac{P_\\text{Tx}}{\\mathrm{1~mW}} \\mathrm{~dBm}" }, { "math_id": 3, "text": "P_\\text{Tx}" }, { "math_id": 4, "text": "L_\\text{Rx}=10\\log_{10}\\frac{P_\\text{Rx}}{\\mathrm{1~mW}} \\mathrm{~dBm}" }, { "math_id": 5, "text": "{P_\\text{Rx}}" }, { "math_id": 6, "text": "L_0" }, { "math_id": 7, "text": "d_0" }, { "math_id": 8, "text": "{d}" }, { "math_id": 9, "text": "{d_0}" }, { "math_id": 10, "text": "\\gamma" }, { "math_id": 11, "text": "X_\\text{g}" }, { "math_id": 12, "text": "\\sigma" }, { "math_id": 13, "text": "F_\\text{g}=10^{-X_\\text{g}/10}" }, { "math_id": 14, "text": "\\frac{P_\\text{Rx}}{P_\\text{Tx}}=\\frac{c_0 F_\\text{g}}{d^{\\gamma}} ," }, { "math_id": 15, "text": "c_0={d_0^{\\gamma}}10^{-L_0/10}" } ]
https://en.wikipedia.org/wiki?curid=6158220
6158260
Higher residuosity problem
In cryptography, most public key cryptosystems are founded on problems that are believed to be intractable. The higher residuosity problem (also called the "n"th-residuosity problem) is one such problem. This problem is "easier" to solve than integer factorization, so the assumption that this problem is hard to solve is "stronger" than the assumption that integer factorization is hard. Mathematical background. If "n" is an integer, then the integers modulo "n" form a ring. If "n" = "pq" where "p" and "q" are primes, then the Chinese remainder theorem tells us that formula_0 The units of any ring form a group under multiplication, and the group of units in formula_1 is traditionally denoted formula_2. From the ring isomorphism above, we have formula_3 as an isomorphism of groups. Since "p" and "q" were assumed to be prime, the groups formula_4 and formula_5 are cyclic of orders "p"−1 and "q"−1 respectively. If "d" is a divisor of "p"−1, then the set of "d"th powers in formula_6 forms a subgroup of index "d". If gcd("d","q"−1) = 1, then "every" element in formula_5 is a "d"th power, so the set of "d"th powers in formula_7 is also a subgroup of index "d". In general, if gcd("d","q"−1) = "g", then there are ("q"−1)/"g" "d"th powers in formula_5, so the set of "d"th powers in formula_7 has index "dg". This is most commonly seen when "d" = 2, where we are considering the subgroup of quadratic residues: it is well known that exactly one quarter of the elements in formula_7 are quadratic residues (when "n" is the product of two primes, as it is here). The important point is that for any divisor "d" of "p"−1 (or "q"−1) the set of "d"th powers forms a subgroup of formula_8 Problem statement. Given an integer "n" = "pq" where "p" and "q" are unknown, an integer "d" such that "d" divides "p"−1, and an integer "x" &lt; "n", it is infeasible to determine whether "x" is a "d"th power (equivalently, a "d"th residue) modulo "n". Notice that if "p" and "q" are known, it is easy to determine whether "x" is a "d"th residue modulo "n", because "x" will be a "d"th residue modulo "p" if and only if formula_9 When "d" = 2, this is called the quadratic residuosity problem. Applications. The semantic security of the Benaloh cryptosystem and the Naccache–Stern cryptosystem rests on the intractability of this problem. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
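The remark in the problem statement that knowledge of the factorisation makes the decision easy can be demonstrated in a few lines of code. The sketch below uses deliberately tiny, insecure toy parameters chosen only so that the stated conditions hold (d divides p − 1 and gcd(d, q − 1) = 1).

```python
def is_dth_residue(x, d, p, q):
    """Decide whether x is a d-th residue mod n = p*q, given the factorisation.

    Assumes d divides p - 1 and gcd(d, q - 1) = 1, as in the problem statement,
    so the test reduces to checking x^((p - 1) // d) == 1 (mod p).
    """
    return pow(x, (p - 1) // d, p) == 1

# Toy parameters: p = 31, q = 13, d = 5 (5 divides 30, gcd(5, 12) = 1), n = 403.
p, q, d = 31, 13, 5
n = p * q
y = pow(7, d, n)                      # a d-th power by construction
print(is_dth_residue(y, d, p, q))     # True
print(is_dth_residue(3, d, p, q))     # False: 3 is not a 5th residue mod 31
```

Without p and q, no comparably efficient test is known, which is exactly the gap the hardness assumption captures.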
[ { "math_id": 0, "text": "\\mathbb{Z}/n\\mathbb{Z} \\simeq \\mathbb{Z}/p\\mathbb{Z} \\times \\mathbb{Z}/q\\mathbb{Z}" }, { "math_id": 1, "text": "\\mathbb{Z}/n\\mathbb{Z}" }, { "math_id": 2, "text": "(\\mathbb{Z}/n\\mathbb{Z})\n^{\\times}" }, { "math_id": 3, "text": "(\\mathbb{Z}/n\\mathbb{Z})^{\\times} \\simeq (\\mathbb{Z}/p\\mathbb{Z})^{\\times} \\times (\\mathbb{Z}/q\\mathbb{Z})^{\\times}" }, { "math_id": 4, "text": "(\\mathbb{Z}/p\\mathbb{Z})^{\\times}" }, { "math_id": 5, "text": "(\\mathbb{Z}/q\\mathbb{Z})^{\\times}" }, { "math_id": 6, "text": "(\\mathbb{Z}/p\\mathbb{Z})^*" }, { "math_id": 7, "text": "(\\mathbb{Z}/n\\mathbb{Z})^{\\times}" }, { "math_id": 8, "text": "(\\mathbb{Z}/n\\mathbb{Z})^{\\times}." }, { "math_id": 9, "text": "x^{(p-1)/d} \\equiv 1 \\pmod p" } ]
https://en.wikipedia.org/wiki?curid=6158260
6158371
Hata model
The Hata model is a radio propagation model for predicting the path loss of cellular transmissions in exterior environments, valid for microwave frequencies from 150 to 1500 MHz. It is an empirical formulation based on the data from the Okumura model, and is thus also commonly referred to as the Okumura–Hata model. The model incorporates the graphical information from the Okumura model and develops it further to realize the effects of diffraction, reflection and scattering caused by city structures. Additionally, the Hata model applies corrections for applications in suburban and rural environments. Model description. Though based on the Okumura model, the Hata model does not provide coverage to the whole range of frequencies covered by the Okumura model: the Hata model does not go beyond 1500 MHz, while the Okumura model provides support for frequencies up to 1920 MHz. The model is suited for both point-to-point and broadcast communications, and covers mobile station antenna heights of 1–10 m, base station antenna heights of 30–200 m, and link distances from 1–10 km. Urban environments. The Hata model for urban environments is the basic formulation, since it was based on Okumura's measurements made in the built-up areas of Tokyo. It is formulated as follows: formula_0 For a small or medium-sized city, formula_1 and for large cities, formula_2 where LU = Path loss in urban areas. Unit: decibel (dB) hB = Height of base station antenna. Unit: meter (m) hM = Height of mobile station antenna. Unit: meter (m) f = Frequency of transmission. Unit: megahertz (MHz) CH = Antenna height correction factor d = Distance between the base and mobile stations. Unit: kilometer (km). Suburban environments. The Hata model for suburban environments is applicable to transmissions just outside cities and in rural areas where man-made structures exist but are not as high or dense as in cities. To be more precise, this model is suitable where buildings exist, but the mobile station does not have a significant variation of its height. It is formulated as: formula_3 where LSU = Path loss in suburban areas. Unit: decibel (dB) LU = Path loss from the small city version of the model (above). Unit: decibel (dB) f = Frequency of transmission. Unit: megahertz (MHz). Open environments. The Hata model for rural environments is applicable to transmissions in open areas where no obstructions block the transmission link. It is formulated as: formula_4 where LO = Path loss in open areas. Unit: decibel (dB) LU = Average path loss from the small city version of the model (above). Unit: decibel (dB) f = Frequency of transmission. Unit: megahertz (MHz). Derivative models. There are more specific models for special uses. For example, the COST Hata model, an urban Hata model, was developed by the European Cooperation in Science and Technology. In turn, the ITU-R P.1546 model is an enhancement to the COST-231 model. PCS is another extension of the Hata model. The Walfisch and Bertoni model is further advanced. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
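As a worked example of the urban formulation above, the sketch below codes the basic equation together with the two antenna-height correction factors; the parameter values in the example call (a 900 MHz small-city link with a 50 m base station antenna, a 1.5 m mobile antenna and a 5 km range) are arbitrary illustrative choices.

```python
import math

def hata_urban_loss(f_mhz, h_b_m, h_m_m, d_km, large_city=False):
    """Okumura-Hata path loss L_U in dB for urban areas (150-1500 MHz)."""
    if large_city:
        if f_mhz <= 200:
            c_h = 8.29 * (math.log10(1.54 * h_m_m)) ** 2 - 1.1
        else:
            c_h = 3.2 * (math.log10(11.75 * h_m_m)) ** 2 - 4.97
    else:
        c_h = 0.8 + (1.1 * math.log10(f_mhz) - 0.7) * h_m_m - 1.56 * math.log10(f_mhz)
    return (69.55 + 26.16 * math.log10(f_mhz) - 13.82 * math.log10(h_b_m)
            - c_h + (44.9 - 6.55 * math.log10(h_b_m)) * math.log10(d_km))

# Example: 900 MHz, 50 m base station, 1.5 m mobile, 5 km, small/medium city.
print(f"L_U = {hata_urban_loss(900, 50, 1.5, 5):.1f} dB")   # roughly 147 dB
```

The suburban and open-area losses then follow by subtracting the frequency-dependent corrections given above from this urban value.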
[ { "math_id": 0, "text": "L_U \\; = \\; 69.55 \\; + \\; 26.16 \\; \\log_{10} f \\; - \\; 13.82 \\; \\log_{10} h_B \\; - \\; C_H \\; + \\; [44.9 \\; - \\; 6.55 \\; \\log_{10} h_B] \\; \\log_{10} d" }, { "math_id": 1, "text": "C_H \\; = \\; 0.8 \\; + \\; (\\; 1.1 \\; \\log_{10} f \\; - \\; 0.7 \\; ) \\; h_M \\; - \\; 1.56 \\; \\log_{10} f " }, { "math_id": 2, "text": "C_H \\; = \\begin{cases}\\;8.29 \\; (\\; \\log_{10} ({1.54 h_M}))^2 \\; - \\; 1.1 \\; \\mbox{ , if } 150 \\le f \\le 200 \\\\ \\; 3.2 \\; (\\log_{10} ({11.75 h_M}))^2 \\; - \\; 4.97 \\; \\mbox{ , if }200 < f \\le 1500 \\end{cases}" }, { "math_id": 3, "text": "L_{SU}\\; = \\; L_U \\; - \\; 2 \\big( \\log_{10} {\\frac{f}{28}}\\big)^2 \\; - \\;5.4 " }, { "math_id": 4, "text": "L_{O}\\; = \\; L_U \\; - \\; 4.78 \\big( \\log_{10} {f}\\big)^2 \\; + \\; 18.33 \\big( \\log_{10} {f}\\big)- \\;40.94 " } ]
https://en.wikipedia.org/wiki?curid=6158371
6158383
Computational hardness assumption
Hypothesis in computational complexity theory
In computational complexity theory, a computational hardness assumption is the hypothesis that a particular problem cannot be solved efficiently (where "efficiently" typically means "in polynomial time"). It is not known how to prove (unconditional) hardness for essentially any useful problem. Instead, computer scientists rely on reductions to formally relate the hardness of a new or complicated problem to a computational hardness assumption about a problem that is better understood. Computational hardness assumptions are of particular importance in cryptography. A major goal in cryptography is to create cryptographic primitives with provable security. In some cases, cryptographic protocols are found to have information theoretic security; the one-time pad is a common example. However, information theoretic security cannot always be achieved; in such cases, cryptographers fall back to computational security. Roughly speaking, this means that these systems are secure "assuming that any adversaries are computationally limited", as all adversaries are in practice. Computational hardness assumptions are also useful for guiding algorithm designers: a simple algorithm is unlikely to refute a well-studied computational hardness assumption such as P ≠ NP. Comparing hardness assumptions. Computer scientists have different ways of assessing which hardness assumptions are more reliable. Strength of hardness assumptions. We say that assumption formula_0 is "stronger" than assumption formula_1 when formula_0 implies formula_1 (and the converse is false or not known). In other words, even if assumption formula_0 were false, assumption formula_1 may still be true, and cryptographic protocols based on assumption formula_1 may still be safe to use. Thus when devising cryptographic protocols, one hopes to be able to prove security using the "weakest" possible assumptions. Average-case vs. worst-case assumptions. An average-case assumption says that a specific problem is hard on most instances from some explicit distribution, whereas a worst-case assumption only says that the problem is hard on "some" instances. For a given problem, average-case hardness implies worst-case hardness, so an average-case hardness assumption is stronger than a worst-case hardness assumption for the same problem. Furthermore, even for incomparable problems, an assumption like the Exponential Time Hypothesis is often considered preferable to an average-case assumption like the planted clique conjecture. However, for cryptographic applications, knowing that a problem has some hard instance (the problem is hard in the worst-case) is useless because it does not provide us with a way of generating hard instances. Fortunately, many average-case assumptions used in cryptography (including RSA, discrete log, and some lattice problems) can be based on worst-case assumptions via worst-case-to-average-case reductions. Falsifiability. A desired characteristic of a computational hardness assumption is falsifiability, i.e. that if the assumption were false, then it would be possible to prove it. In particular, Naor introduced a formal notion of cryptographic falsifiability. Roughly, a computational hardness assumption is said to be falsifiable if it can be formulated in terms of a challenge: an interactive protocol between an adversary and an efficient verifier, where an efficient adversary can convince the verifier to accept if and only if the assumption is false.
Common cryptographic hardness assumptions. There are many cryptographic hardness assumptions in use. This is a list of some of the most common ones, and some cryptographic protocols that use them. Integer factorization. Given a composite integer formula_2, and in particular one which is the product of two large primes formula_3, the integer factorization problem is to find formula_4 and formula_5 (more generally, find primes formula_6 such that formula_7). It is a major open problem to find an algorithm for integer factorization that runs in time polynomial in the size of representation (formula_8). The security of many cryptographic protocols relies on the assumption that integer factorization is hard (i.e. cannot be solved in polynomial time). Cryptosystems whose security is equivalent to this assumption include the Rabin cryptosystem and the Okamoto–Uchiyama cryptosystem. Many more cryptosystems rely on stronger assumptions such as RSA, residuosity problems, and Phi-hiding. RSA problem. Given a composite number formula_2, exponent formula_9 and number formula_10, the RSA problem is to find formula_11. The problem is conjectured to be hard, but becomes easy given the factorization of formula_2. In the RSA cryptosystem, formula_12 is the public key, formula_13 is the encryption of message formula_11, and the factorization of formula_2 is the secret key used for decryption. Residuosity problems. Given a composite number formula_2 and integers formula_14, the residuosity problem is to determine whether there exists (alternatively, find an) formula_15 such that formula_16 Important special cases include the Quadratic residuosity problem and the Decisional composite residuosity problem. As in the case of RSA, this problem (and its special cases) are conjectured to be hard, but become easy given the factorization of formula_2. A number of cryptosystems rely on the hardness of residuosity problems. Phi-hiding assumption. For a composite number formula_11, it is not known how to efficiently compute its Euler's totient function formula_17. The Phi-hiding assumption postulates that it is hard to compute formula_17, and furthermore even computing any prime factors of formula_17 is hard. This assumption is used in the Cachin–Micali–Stadler PIR protocol. Discrete log problem (DLP). Given elements formula_18 and formula_19 from a group formula_20, the discrete log problem asks for an integer formula_21 such that formula_22. The discrete log problem is not known to be comparable to integer factorization, but their computational complexities are closely related. Most cryptographic protocols related to the discrete log problem actually rely on the stronger Diffie–Hellman assumption: given group elements formula_23, where formula_24 is a generator and formula_25 are random integers, it is hard to find formula_26. Examples of protocols that use this assumption include the original Diffie–Hellman key exchange, as well as the ElGamal encryption (which relies on the yet stronger Decisional Diffie–Hellman (DDH) variant). Multilinear maps. A multilinear map is a function formula_27 (where formula_28 are groups) such that for any formula_29 and formula_30, formula_31. For cryptographic applications, one would like to construct groups formula_32 and a map formula_9 such that the map and the group operations on formula_32 can be computed efficiently, but the discrete log problem on formula_33 is still hard. Some applications require stronger assumptions, e.g. multilinear analogs of Diffie-Hellman assumptions.
For the special case of formula_34, bilinear maps with believable security have been constructed using Weil pairing and Tate pairing. For formula_35, many constructions have been proposed in recent years, but many of them have also been broken, and currently there is no consensus about a safe candidate. Several proposed cryptosystems rely on multilinear hardness assumptions. Lattice problems. The most fundamental computational problem on lattices is the shortest vector problem (SVP): given a lattice formula_36, find the shortest non-zero vector formula_37. Most cryptosystems require stronger assumptions on variants of SVP, such as the shortest independent vectors problem (SIVP), GapSVP, or Unique-SVP. The most useful lattice hardness assumption in cryptography is for the learning with errors (LWE) problem: given sample pairs formula_38, where formula_39 for some linear function formula_40, it is easy to learn formula_40 using linear algebra. In the LWE problem, the input to the algorithm has errors, i.e. for each pair, formula_41 with some small probability. The errors are believed to make the problem intractable (for appropriate parameters); in particular, there are known worst-case to average-case reductions from variants of SVP. For quantum computers, the factoring and discrete log problems are easy, but lattice problems are conjectured to be hard. This makes some lattice-based cryptosystems candidates for post-quantum cryptography. A number of cryptosystems rely on the hardness of lattice problems. Non-cryptographic hardness assumptions. As well as their cryptographic applications, hardness assumptions are used in computational complexity theory to provide evidence for mathematical statements that are difficult to prove unconditionally. In these applications, one proves that the hardness assumption implies some desired complexity-theoretic statement, instead of proving that the statement is itself true. The best-known assumption of this type is the assumption that P ≠ NP, but others include the exponential time hypothesis, the planted clique conjecture, and the unique games conjecture. "C"-hard problems. Many worst-case computational problems are known to be hard or even complete for some complexity class formula_42, in particular NP-hard (but often also PSPACE-hard, PPAD-hard, etc.). This means that they are at least as hard as any problem in the class formula_42. If a problem is formula_42-hard (with respect to polynomial time reductions), then it cannot be solved by a polynomial-time algorithm unless the computational hardness assumption formula_43 is false. Exponential Time Hypothesis (ETH) and variants. The Exponential Time Hypothesis (ETH) is a strengthening of the formula_44 hardness assumption, which conjectures that not only does the Boolean satisfiability problem (SAT) not have a polynomial time algorithm, it furthermore requires exponential time (formula_45). An even stronger assumption, known as the Strong Exponential Time Hypothesis (SETH), conjectures that formula_21-SAT requires formula_46 time, where formula_47. ETH, SETH, and related computational hardness assumptions allow for deducing fine-grained complexity results, e.g. results that distinguish polynomial time and quasi-polynomial time, or even formula_48 versus formula_49. Such assumptions are also useful in parametrized complexity. Average-case hardness assumptions. Some computational problems are assumed to be hard on average over a particular distribution of instances.
For example, in the planted clique problem, the input is a random graph sampled by first sampling an Erdős–Rényi random graph and then "planting" a random formula_21-clique, i.e. connecting formula_21 uniformly random nodes (where formula_50), and the goal is to find the planted formula_21-clique (which is unique with high probability). Another important example is Feige's Hypothesis, which is a computational hardness assumption about random instances of 3-SAT (sampled to maintain a specific ratio of clauses to variables). Average-case computational hardness assumptions are useful for proving average-case hardness in applications like statistics, where there is a natural distribution over inputs. Additionally, the planted clique hardness assumption has also been used to distinguish between polynomial and quasi-polynomial worst-case time complexity of other problems, similarly to the Exponential Time Hypothesis. Unique Games. The Unique Label Cover problem is a constraint satisfaction problem, where each constraint formula_42 involves two variables formula_51, and for each value of formula_15 there is a "unique" value of formula_52 that satisfies formula_42. Determining whether all the constraints can be satisfied is easy, but the Unique Games Conjecture (UGC) postulates that determining whether almost all the constraints (formula_53-fraction, for any constant formula_54) can be satisfied or almost none of them (formula_55-fraction) can be satisfied is NP-hard. Approximation problems are often known to be NP-hard assuming UGC; such problems are referred to as UG-hard. In particular, assuming UGC there is a semidefinite programming algorithm that achieves optimal approximation guarantees for many important problems. Small Set Expansion. Closely related to the Unique Label Cover problem is the Small Set Expansion (SSE) problem: Given a graph formula_56, find a small set of vertices (of size formula_57) whose edge expansion is minimal. It is known that if SSE is hard to approximate, then so is Unique Label Cover. Hence, the "Small Set Expansion Hypothesis", which postulates that SSE is hard to approximate, is a stronger (but closely related) assumption than the Unique Games Conjecture. Some approximation problems are known to be SSE-hard (i.e. at least as hard as approximating SSE). The 3SUM Conjecture. Given a set of formula_2 numbers, the 3SUM problem asks whether there is a triplet of numbers whose sum is zero. There is a quadratic-time algorithm for 3SUM, and it has been conjectured that no algorithm can solve 3SUM in "truly sub-quadratic time": the 3SUM Conjecture is the computational hardness assumption that there are no formula_58-time algorithms for 3SUM (for any constant formula_59). This conjecture is useful for proving near-quadratic lower bounds for several problems, mostly from computational geometry.
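The quadratic-time algorithm for 3SUM mentioned above can be sketched in a few lines of Python (a minimal illustration; the function name and test data are made up for this example): sort the input, then for each element scan the remainder with two pointers.

def has_3sum(nums):
    """Return True if some triple of entries sums to zero (O(n^2) time)."""
    a = sorted(nums)
    n = len(a)
    for i in range(n - 2):
        lo, hi = i + 1, n - 1
        while lo < hi:
            s = a[i] + a[lo] + a[hi]
            if s == 0:
                return True
            if s < 0:
                lo += 1
            else:
                hi -= 1
    return False

print(has_3sum([-5, 1, 4, 7, -2]))   # True: (-5) + 1 + 4 == 0
print(has_3sum([1, 2, 3]))           # False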
[ { "math_id": 0, "text": "A" }, { "math_id": 1, "text": "B" }, { "math_id": 2, "text": "n" }, { "math_id": 3, "text": "n = p\\cdot q" }, { "math_id": 4, "text": "p" }, { "math_id": 5, "text": "q" }, { "math_id": 6, "text": "p_1,\\dots,p_k" }, { "math_id": 7, "text": "n = \\prod_i p_i" }, { "math_id": 8, "text": "\\log n" }, { "math_id": 9, "text": "e" }, { "math_id": 10, "text": "c := m^e (\\mathrm{mod}\\; n)" }, { "math_id": 11, "text": "m" }, { "math_id": 12, "text": "(n,e)" }, { "math_id": 13, "text": "c" }, { "math_id": 14, "text": "y,d" }, { "math_id": 15, "text": "x" }, { "math_id": 16, "text": " x^d \\equiv y \\pmod{n}." }, { "math_id": 17, "text": "\\phi(m)" }, { "math_id": 18, "text": "a" }, { "math_id": 19, "text": "b" }, { "math_id": 20, "text": "G" }, { "math_id": 21, "text": "k" }, { "math_id": 22, "text": "a = b^k" }, { "math_id": 23, "text": "g, g^a, g^b" }, { "math_id": 24, "text": "g" }, { "math_id": 25, "text": "a,b" }, { "math_id": 26, "text": "g^{a\\cdot b}" }, { "math_id": 27, "text": "e: G_1 ,\\dots,G_n \\rightarrow G_T" }, { "math_id": 28, "text": "G_1 ,\\dots,G_n,G_T" }, { "math_id": 29, "text": "g_1, \\dots, g_n \\in G_1, \\dots G_n" }, { "math_id": 30, "text": "a_1, \\dots, a_n" }, { "math_id": 31, "text": "e(g_1^{a_1},\\dots,g_n^{a_n}) = e(g_1,\\dots,g_n)^{a_1\\cdots a_n}" }, { "math_id": 32, "text": "G_1,\\dots,G_n,G_T" }, { "math_id": 33, "text": "G_1,\\dots,G_n" }, { "math_id": 34, "text": "n=2" }, { "math_id": 35, "text": "n>2" }, { "math_id": 36, "text": "L" }, { "math_id": 37, "text": "v \\in L" }, { "math_id": 38, "text": "(x,y)" }, { "math_id": 39, "text": "y = f(x)" }, { "math_id": 40, "text": "f(\\cdot)" }, { "math_id": 41, "text": "y\\neq f(x)" }, { "math_id": 42, "text": "C" }, { "math_id": 43, "text": "P \\neq C" }, { "math_id": 44, "text": "P \\neq NP" }, { "math_id": 45, "text": "2^{\\Omega(n)}" }, { "math_id": 46, "text": "2^{(1-\\varepsilon_k)n}" }, { "math_id": 47, "text": "\\lim_{k \\rightarrow \\infty} \\varepsilon_k = 0" }, { "math_id": 48, "text": "n^{1.99}" }, { "math_id": 49, "text": "n^2" }, { "math_id": 50, "text": "2\\log_2 n \\ll k \\ll \\sqrt n" }, { "math_id": 51, "text": "x,y" }, { "math_id": 52, "text": "y" }, { "math_id": 53, "text": "(1-\\varepsilon)" }, { "math_id": 54, "text": "\\varepsilon>0" }, { "math_id": 55, "text": "\\varepsilon" }, { "math_id": 56, "text": "G = (V,E)" }, { "math_id": 57, "text": "n/\\log(n)" }, { "math_id": 58, "text": "O(n^{2-\\varepsilon})" }, { "math_id": 59, "text": "\\varepsilon > 0" } ]
https://en.wikipedia.org/wiki?curid=6158383
6158953
Modal μ-calculus
Extension of propositional modal logic In theoretical computer science, the modal μ-calculus (Lμ, sometimes just μ-calculus, although this can have a more general meaning) is an extension of propositional modal logic (with many modalities) by adding the least fixed point operator μ and the greatest fixed point operator ν, thus a fixed-point logic. The (propositional, modal) μ-calculus originates with Dana Scott and Jaco de Bakker, and was further developed by Dexter Kozen into the version most used nowadays. It is used to describe properties of labelled transition systems and for verifying these properties. Many temporal logics can be encoded in the μ-calculus, including CTL* and its widely used fragments—linear temporal logic and computational tree logic. An algebraic view is to see it as an algebra of monotonic functions over a complete lattice, with operators consisting of functional composition plus the least and greatest fixed point operators; from this viewpoint, the modal μ-calculus is over the lattice of a power set algebra. The game semantics of μ-calculus is related to two-player games with perfect information, particularly infinite parity games. Syntax. Let "P" (propositions) and "A" (actions) be two finite sets of symbols, and let Var be a countably infinite set of variables. The set of formulas of (propositional, modal) μ-calculus is defined as follows: each proposition and each variable is a formula; if formula_0 and formula_1 are formulas, then formula_2 is a formula; if formula_0 is a formula, then formula_3 is a formula; if formula_0 is a formula and formula_4 is an action, then formula_5 is a formula; and if formula_6 is a formula and formula_7 is a variable, then formula_8 is a formula, provided that every free occurrence of formula_7 in formula_6 occurs positively, i.e. within the scope of an even number of negations. Given the above definitions, we can enrich the syntax with: formula_10, meaning formula_11; formula_12, meaning formula_13; and formula_14, meaning formula_15, where formula_16 denotes the formula obtained by substituting formula_17 for every free occurrence of formula_7 in formula_0. The first two formulas are the familiar ones from the classical propositional calculus and respectively the minimal multimodal logic K. The notation formula_14 (and its dual) are inspired from the lambda calculus; the intent is to denote the least (and respectively greatest) fixed point of the expression formula_0 where the "minimization" (and respectively "maximization") are in the variable formula_7, much like in lambda calculus formula_18 is a function with formula formula_0 in bound variable formula_7; see the denotational semantics below for details. Denotational semantics. Models of (propositional) μ-calculus are given as labelled transition systems formula_19 where: formula_20 is a set of states; formula_21 maps each action formula_4 to a binary relation on formula_20; and formula_22 maps each proposition formula_23 to the set of states where the proposition is true. Given a labelled transition system formula_19 and an interpretation formula_24 of the variables formula_7 of the formula_25-calculus, formula_26, is the function defined by the following rules: formula_27; formula_28; formula_29; formula_30; formula_31; and formula_32, where formula_33 maps formula_7 to formula_34 while preserving the mappings of formula_24 everywhere else. By duality, the interpretation of the other basic formulas is: formula_35; formula_36; and formula_37. Less formally, this means that, for a given transition system formula_19: formula_38 holds in the set of states formula_39; formula_5 holds in a state formula_40 if every formula_4-transition leaving formula_40 leads to a state where formula_0 holds; formula_41 holds in a state formula_40 if some formula_4-transition leaving formula_40 leads to a state where formula_0 holds; and for the fixed-point operators, formula_43 is the greatest fixed point of the monotone map formula_44, while formula_45 is its least fixed point. The interpretations of formula_5 and formula_41 are in fact the "classical" ones from dynamic logic. Additionally, the operator formula_25 can be interpreted as liveness ("something good eventually happens") and formula_9 as safety ("nothing bad ever happens") in Leslie Lamport's informal classification. For example, formula_46 expresses that formula_0 holds at every state reachable along formula_4-transitions ("invariantly formula_0"), formula_47 expresses that some sequence of formula_4-transitions leads to a state where formula_0 holds ("eventually formula_0"), and formula_48 expresses absence of deadlock, i.e. that along every path some transition always remains possible. Decision problems. Satisfiability of a modal μ-calculus formula is EXPTIME-complete. Like for linear temporal logic, the model checking, satisfiability and validity problems of linear modal μ-calculus are PSPACE-complete.
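As an illustration of the denotational semantics, the greatest fixed point formula_46 can be computed on a finite transition system by iterating the map formula_44 downwards from the full state set until it stabilizes. The following Python sketch does this for a tiny hand-made system (the state numbers, transition relation, and labelling are invented for the example, not taken from the article).

# States, one action 'a', and a labelling of the proposition phi.
states = {0, 1, 2, 3}
R_a = {(0, 1), (1, 0), (1, 2), (2, 3), (3, 3)}     # a-transitions
phi = {0, 1, 3}                                    # states where phi holds

def box_a(T):
    """[a]T: states all of whose a-successors lie in T."""
    return {s for s in states if all(t in T for (x, t) in R_a if x == s)}

# nu Z. (phi and [a]Z): start from all states and shrink to the greatest fixed point.
Z = set(states)
while True:
    new_Z = phi & box_a(Z)
    if new_Z == Z:
        break
    Z = new_Z
print(Z)   # {3}: the only state from which phi holds along every a-path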
[ { "math_id": 0, "text": "\\phi" }, { "math_id": 1, "text": "\\psi" }, { "math_id": 2, "text": "\\phi \\wedge \\psi" }, { "math_id": 3, "text": "\\neg \\phi" }, { "math_id": 4, "text": "a" }, { "math_id": 5, "text": "[a] \\phi" }, { "math_id": 6, "text": "\\phi " }, { "math_id": 7, "text": "Z" }, { "math_id": 8, "text": "\\nu Z. \\phi" }, { "math_id": 9, "text": "\\nu" }, { "math_id": 10, "text": "\\phi \\lor \\psi" }, { "math_id": 11, "text": "\\neg (\\neg \\phi \\land \\neg \\psi)" }, { "math_id": 12, "text": "\\langle a \\rangle \\phi" }, { "math_id": 13, "text": "\\neg [a] \\neg \\phi" }, { "math_id": 14, "text": "\\mu Z. \\phi" }, { "math_id": 15, "text": "\\neg \\nu Z. \\neg \\phi [Z:=\\neg Z]" }, { "math_id": 16, "text": "\\phi [Z:=\\neg Z]" }, { "math_id": 17, "text": "\\neg Z" }, { "math_id": 18, "text": "\\lambda Z. \\phi" }, { "math_id": 19, "text": "(S, R, V)" }, { "math_id": 20, "text": "S" }, { "math_id": 21, "text": "R" }, { "math_id": 22, "text": " V : P \\to 2^S" }, { "math_id": 23, "text": "p \\in P" }, { "math_id": 24, "text": "i" }, { "math_id": 25, "text": "\\mu" }, { "math_id": 26, "text": "[\\![\\cdot]\\!]_i:\\phi \\to 2^S" }, { "math_id": 27, "text": "[\\![p]\\!]_i = V(p)" }, { "math_id": 28, "text": "[\\![Z]\\!]_i = i(Z)" }, { "math_id": 29, "text": "[\\![\\phi \\wedge \\psi]\\!]_i = [\\![\\phi]\\!]_i \\cap [\\![\\psi]\\!]_i" }, { "math_id": 30, "text": "[\\![\\neg \\phi]\\!]_i = S \\smallsetminus [\\![\\phi]\\!]_i" }, { "math_id": 31, "text": "[\\![[a] \\phi]\\!]_i = \\{s \\in S \\mid \\forall t \\in S, (s, t) \\in R_a \\rightarrow t \\in [\\![\\phi]\\!]_i\\}" }, { "math_id": 32, "text": "[\\![\\nu Z. \\phi]\\!]_i = \\bigcup \\{T \\subseteq S \\mid T \\subseteq [\\![\\phi]\\!]_{i[Z := T]}\\}" }, { "math_id": 33, "text": "i[Z := T]" }, { "math_id": 34, "text": "T" }, { "math_id": 35, "text": "[\\![\\phi \\vee \\psi]\\!]_i = [\\![\\phi]\\!]_i \\cup [\\![\\psi]\\!]_i" }, { "math_id": 36, "text": "[\\![\\langle a \\rangle \\phi]\\!]_i = \\{s \\in S \\mid \\exists t \\in S, (s, t) \\in R_a \\wedge t \\in [\\![\\phi]\\!]_i\\}" }, { "math_id": 37, "text": "[\\![\\mu Z. \\phi]\\!]_i = \\bigcap \\{T \\subseteq S \\mid [\\![\\phi]\\!]_{i[Z := T]} \\subseteq T \\}" }, { "math_id": 38, "text": "p" }, { "math_id": 39, "text": "V(p)" }, { "math_id": 40, "text": "s" }, { "math_id": 41, "text": "\\langle a\\rangle \\phi" }, { "math_id": 42, "text": "\\nu Z.\\phi" }, { "math_id": 43, "text": "[\\![\\nu Z.\\phi]\\!]_i" }, { "math_id": 44, "text": "T\\mapsto[\\![\\phi]\\!]_{i[Z := T]}" }, { "math_id": 45, "text": "[\\![\\mu Z. \\phi]\\!]_i" }, { "math_id": 46, "text": "\\nu Z.\\phi \\wedge [a]Z" }, { "math_id": 47, "text": "\\mu Z.\\phi \\vee \\langle a \\rangle Z" }, { "math_id": 48, "text": "\\nu Z.\\left (\\bigvee_{a\\in A}\\langle a\\rangle\\top\\wedge \\bigwedge_{a\\in A}[a]Z \\right)" } ]
https://en.wikipedia.org/wiki?curid=6158953
61590966
Conditional symmetric instability
Conditional symmetric instability, or CSI, is a form of convective instability in a fluid subject to temperature differences in a uniform rotation frame of reference while it is thermally stable in the vertical and dynamically stable in the horizontal (inertial stability). The instability in this case develops only in a plane inclined with respect to the two axes mentioned, which is why it can give rise to so-called "slantwise convection" if the air parcel is almost saturated and displaced laterally and vertically in a CSI area. This concept is mainly used in meteorology to explain the mesoscale formation of intense precipitation bands in an otherwise stable region, such as in front of a warm front. The same phenomenon is also applicable to oceanography. Principle. Hydrostatic stability. An air particle at a certain altitude will be stable if its adiabatically modified temperature during an ascent is equal to or cooler than that of the environment. Similarly, it is stable if its temperature during a descent is equal to or warmer than that of the environment. In the case where the temperature is equal, the particle will remain at the new altitude, while in the other cases, it will return to its initial level. In the diagram on the right, the yellow line represents a raised particle whose temperature remains at first under that of the environment (stable air), which entails no convection. Then in the animation, there is surface warming and the raised particle remains warmer than the environment (unstable air). A measure of hydrostatic stability is the variation of the equivalent potential temperature (formula_0) with altitude: * If formula_0 decreases with altitude, the airmass is unstable * If formula_0 remains the same with altitude, the airmass is neutral * If formula_0 increases with altitude, the airmass is stable. Inertial Stability. In the same way, a lateral displacement of an air particle changes its absolute vorticity formula_1. This is given by the sum of the planetary vorticity, formula_2, and formula_3, the geostrophic (or relative) vorticity of the parcel: formula_4 where formula_5 and formula_6 are the wind components in the formula_8 and formula_7 directions, respectively. formula_1 can be positive, zero or negative depending on the conditions in which the displacement is made. As the absolute vorticity is almost always positive on the synoptic scale, one can consider that the atmosphere is generally stable for lateral movement. Inertial stability is low only when formula_1 is close to zero. Since formula_2 is always positive, formula_9 can be satisfied only on the anticyclonic side of a strong jet stream maximum or in a barometric ridge at altitude, where the velocity derivatives in the direction of displacement in the equation give a significantly negative value. The variation of the angular momentum indicates the stability: Slantwise movement. Under certain stable hydrostatic and inertial conditions, slantwise displacement may, however, be unstable when the particle changes air mass or wind regime. The figure on the right shows such a situation. The displacement of the air particle occurs with respect to lines of angular momentum (formula_13), which increase from left to right, and of equivalent potential temperature (formula_14), which increases with height. Horizontal accelerations (to the left or right of a surface formula_15) are due to an increase/decrease in the formula_15 of the environment in which the particle moves. In these cases, the particle accelerates or slows down to adjust to its new environment.
Particle A undergoes a horizontal acceleration that gives it positive buoyancy as it moves to colder air and decelerates as it moves to a region of smaller formula_15. The particle rises and eventually becomes colder than its new environment. At this point, it has negative buoyancy and begins to descend. In doing so, formula_15 increases and the particle returns to its original position. Vertical movements in this case result in negative buoyancy as the particle encounters warmer air (formula_16 increases with height) and horizontal acceleration as it moves to larger surfaces formula_15. As the particle goes down, its formula_15 decreases to fit the environment and the particle returns to B. Only case C is unstable. Horizontal acceleration combines with a vertical upward disturbance and allows oblique displacement. Indeed, the formula_16 of the particle is larger than the formula_16 of the environment, while the momentum of the particle is less than that of the environment. An oblique displacement thus produces a positive buoyancy and an acceleration in the direction of the oblique displacement, which reinforces it. The condition for having conditional symmetric instability in an otherwise stable situation is therefore that the surfaces of constant formula_16 be more steeply sloped than the surfaces of constant formula_15. Potential effects. CSI is usually embedded in large areas of vertical upward motion. The ideal situation is a geostrophic flow from the South with wind speeds that increase with height. The environment is well mixed and close to saturation. Since the flow is unidirectional, the u component of the wind can be set equal to zero, which establishes a symmetrical flow perpendicular to the temperature gradient in the air mass. This type of flow is typically found in baroclinic atmospheres with cold air to the west. The image to the right shows such a situation in winter with CSI associated with negative equivalent potential vorticity (formula_9) near a warm front. Banded snow forms along the front, near the low pressure area and the CSI. Slantwise convection. If a particle is climbing in a CSI zone, it will cool down and the water vapor will condense upon saturation, giving clouds and precipitation by slantwise convection. For example, in front of a warm front, the air mass is stable because mild air overlies a cold air mass. Geostrophic equilibrium brings any particle moving perpendicularly away from the center of the depression back towards it. However, an obliquely upward displacement driven by synoptic-scale upward acceleration in a CSI layer produces parallel bands of heavy rainfall. Conditional symmetric instability affects a layer that can be thin or very deep in the vertical, similar to hydrostatic convection. The thickness of the layer determines the enhancement of convective precipitation within a region of otherwise stratiform clouds. As the motion is in an area near saturation, the particle remains very close to the moist adiabatic lapse rate, which gives it a limited convective available potential energy (CAPE). The rate of climb in a slantwise convection zone ranges from a few tens of centimeters per second to a few meters per second. This is usually below the ascent rate of about 5 m/s needed in a cumulonimbus to produce lightning, which limits the occurrence of lightning with CSI. It is however possible in: Slantwise convection bands have several characteristics: Subsidence. Conversely, if the particle slides downward, it will warm up and become relatively less saturated, dissipating clouds.
The snow produced at higher altitude by the slantwise convection will also sublimate in the descending flow, cooling it and accelerating the descent. This can give a descent speed reaching 20 m/s. This effect is associated with the descent to the ground of the "sting jet".
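To make the inertial-stability criterion of the "Inertial Stability" section concrete, the following Python sketch evaluates the absolute vorticity formula_4 from finite differences of a small gridded wind field and flags where formula_9 holds (the grid spacing, wind values, and the mid-latitude Coriolis parameter are invented for the illustration).

# Absolute vorticity eta = dv/dx - du/dy + f at one grid point, by centred differences.
f = 1.0e-4            # Coriolis parameter (s^-1), a typical mid-latitude value
dx = dy = 100_000.0   # grid spacing (m)

# Wind components at the four neighbours of the point (m/s), invented values:
u_north, u_south = 30.0, 10.0     # u at y+dy and y-dy (strong anticyclonic shear)
v_east, v_west = -5.0, 5.0        # v at x+dx and x-dx

zeta = (v_east - v_west) / (2 * dx) - (u_north - u_south) / (2 * dy)  # relative vorticity
eta = zeta + f                                                        # absolute vorticity
print(f"zeta = {zeta:.2e} s^-1, eta = {eta:.2e} s^-1")
print("inertially unstable (eta <= 0)?", eta <= 0)    # True for these values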
[ { "math_id": 0, "text": "\\theta_e" }, { "math_id": 1, "text": "\\eta" }, { "math_id": 2, "text": "f" }, { "math_id": 3, "text": "\\zeta" }, { "math_id": 4, "text": "\\eta= \\left[ \\frac{\\partial v}{\\partial x} - \\frac{\\partial u}{\\partial y} \\right ] + f = \\zeta + f \\qquad \\qquad " }, { "math_id": 5, "text": "v" }, { "math_id": 6, "text": "u" }, { "math_id": 7, "text": "x" }, { "math_id": 8, "text": "y" }, { "math_id": 9, "text": "\\eta \\le 0 " }, { "math_id": 10, "text": "\\Delta M_g = 0 " }, { "math_id": 11, "text": "\\Delta M_g > 0 " }, { "math_id": 12, "text": "\\Delta M_g < 0 " }, { "math_id": 13, "text": "\\scriptstyle M_g" }, { "math_id": 14, "text": "\\scriptstyle \\theta_e" }, { "math_id": 15, "text": " \\scriptstyle M_g " }, { "math_id": 16, "text": " \\scriptstyle \\theta_e " } ]
https://en.wikipedia.org/wiki?curid=61590966
6159218
Conversion rate optimization
Concept in digital marketing Conversion rate optimization (CRO) is the process of increasing the percentage of users or website visitors who take a desired action (such as buying a product or leaving contact details). History. Online conversion rate optimization (or website optimization) was born out of the need of e-commerce marketers to improve their website's performance in the aftermath of the dot-com bubble, when technology companies started to be more aware of their spending, investing more in website analytics. After the burst, with website creation being more accessible, many pages with poor user experience were created. As competition grew on the web during the early 2000s, website analysis tools became available, and awareness of website usability grew, internet marketers were prompted to produce measurables for their tactics and improve their website's user experience. In 2004, new tools enabled internet marketers to experiment with website design and content variations to determine which layouts, copy text, offers, and images perform best. Testing became more accessible and better known. This form of optimization accelerated in 2007 with the introduction of the free tool Google Website Optimizer. Today, optimization and conversion are key aspects of many digital marketing campaigns. A research study conducted among internet marketers in 2017, for example, showed that 50% of respondents thought that CRO was "crucial to their overall digital marketing strategy". Conversion rate optimization shares many principles with direct response marketing – a marketing approach that emphasizes tracking, testing, and on-going improvement. Direct marketing was popularized in the early twentieth century and supported by the formation of industry groups such as the Direct Marketing Association, which was formed in 1917, later renamed the Data & Marketing Association, and was acquired by the Association of National Advertisers following the announcement on May 31, 2018. Like modern-day conversion rate optimization, direct response marketers also practice A/B split-testing, response tracking, and audience testing to optimize mail, radio, and print campaigns. Methodology. Conversion rate optimization seeks to increase the percentage of website visitors that take a specific action (often submitting a web form, making a purchase, signing up for a trial, etc.) by methodically testing alternate versions of a page or process, and through removing impediments to user experience and improving page loading speeds. In doing so, businesses are able to generate more leads or sales without investing more money in website traffic, hence increasing their marketing return on investment and overall profitability. Statistical significance helps ensure that the result of a test is not due merely to chance. There are several approaches to conversion optimization with two main schools of thought prevailing in the last few years. One school is more focused on testing to discover the best way to increase website, campaign, or landing page conversion rates. The other school is focused on the pretesting stage of the optimization process. In this second approach, the optimization company will invest a considerable amount of time in understanding the audience and then creating a targeted message that appeals to that particular audience. Only then would it be willing to deploy testing mechanisms to increase conversion rates. Calculation of conversion rate.
A conversion rate is defined as the percentage of visitors who complete a goal, as set by the site owner. It is calculated as the total number of conversions, divided by the total number of people who visited the website. formula_0 For example: a website receives 100 visitors in a day and 15 visitors sign up for an email newsletter (the chosen conversion to measure). The conversion rate would be 15% for that day.
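The calculation above is a single division; a minimal Python helper (the function name is illustrative) reproduces the worked example.

def conversion_rate(conversions, visitors):
    """Conversion rate as a percentage of visitors."""
    return 100.0 * conversions / visitors

print(conversion_rate(15, 100))   # 15.0 (% for that day)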
[ { "math_id": 0, "text": "\\mathrm{Conversion\\ rate} = \\frac{\\mathrm{Conversions}}{\\mathrm{Number\\ of\\ visitors}}" } ]
https://en.wikipedia.org/wiki?curid=6159218
61595639
Runs Per Wicket Ratio
Runs Per Wicket Ratio (RpW ratio, also called the Quotient) is a method of ranking cricket teams in league tables that are equal on other criteria, including points. Usage. It was used in the 2019–21 ICC World Test Championship, under the name RpW Ratio. It is used in Australia's Sheffield Shield, under the name Quotient. It was used to determine the champions in 1910–11, and a number of times after that. It has not been required to determine final league positions since fractional bonus points for each run scored and wicket taken were introduced (in 2014–15). It is used in India's Ranji Trophy and Duleep Trophy, under the name Quotient. Calculation. It is the number of runs scored per wicket lost, divided by the number of runs conceded per wicket taken: formula_0 Mathematically, this is equivalent to: formula_1 For example, if in one match Sri Lanka scored a total of 535 runs for the loss of 14 wickets, then they scored 38.214 runs per wicket. If they conceded 534 runs while taking 20 wickets, then they conceded 26.7 runs per wicket. So their RpW ratio = 38.214 ÷ 26.7 = 1.431. The RpW ratio for their opponent (say New Zealand) is the inverse of this: 26.7 ÷ 38.214 = 0.699. So if two teams have played only each other, their two RpW ratio figures are reciprocals. As the units are the same either side of the division (runs/wickets), they cancel out, so RpW ratio is a dimensionless quantity. Interpretation. It is the average number of runs scored per wicket, for each run per wicket conceded; a ratio greater than 1 therefore means a team scores more runs per wicket than it concedes. RpW ratio is not to be confused with the ratio of runs conceded to wickets taken by individual bowlers, known as bowling average, or with batting average. Note: the RpW ratio alone does not determine the winner of a particular match, as it carries insufficient information about the actual result of the match.
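The definition and the Sri Lanka example above can be checked with a few lines of Python (function and variable names are illustrative).

def rpw_ratio(runs_scored, wickets_lost, runs_conceded, wickets_taken):
    """Runs-per-wicket ratio: (runs scored / wickets lost) / (runs conceded / wickets taken)."""
    return (runs_scored / wickets_lost) / (runs_conceded / wickets_taken)

sri_lanka = rpw_ratio(535, 14, 534, 20)
new_zealand = rpw_ratio(534, 20, 535, 14)
print(round(sri_lanka, 3), round(new_zealand, 3))   # 1.431 0.699 (reciprocals of each other)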
[ { "math_id": 0, "text": "\\text{RpW ratio }=\\frac{\\text{total runs scored}}{\\text{total wickets lost}} \\div \\frac{\\text{total runs conceded}}{\\text{total wickets taken}}." }, { "math_id": 1, "text": "\\text{RpW ratio} = \\frac{ {\\text{total runs scored}} \\times {\\text{total wickets taken}} } { {\\text{total runs conceded}} \\times {\\text{total wickets lost}} }." } ]
https://en.wikipedia.org/wiki?curid=61595639
615976
Residual value
Future value of a good in terms of absolute value Residual value is one of the constituents of a leasing calculus or operation. It describes the future value of a good in terms of absolute value in monetary terms, and it is sometimes expressed as a percentage of the initial price of the item when new. Example: A car is sold at a list price of $20,000 today. After a usage of 36 months and 50,000 miles (ca. 80,467 km) its value is contractually defined as $10,000 or 50%. The credited amount, on which the interest is applied, thus is $20,000 present value minus the present value of $10,000 future value. Residual values are contractually dealt with either in terms of closed contracts or open contracts. In accounting, residual value is another name for salvage value, the remaining value of an asset after it has been fully depreciated, or after deteriorating beyond further use. The residual value derives its calculation from a base price, calculated after depreciation. Residual values are calculated using a number of factors; generally a vehicle's market value for the term and mileage required is the starting point for the calculation, followed by seasonality, monthly adjustment, lifecycle, and disposal performance. The leasing company setting the residual values (RVs) uses its own historical information to insert the adjustment factors within the calculation and set the end value, which is the residual value. In accounting, the residual value could be defined as an estimated amount that an entity can obtain when disposing of an asset after its useful life has ended. When doing this, the estimated costs of disposing of the asset should be deducted. The calculation of residual value can be seen in the following example: A company owns a machine which was bought for €20,000. This machine has a useful life of five years, which has just ended. The company knows that if it sells the machine now, it will be able to recover 10% of the price of acquisition. Therefore, the residual value would be: formula_0
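A minimal Python sketch of the accounting example above (function and parameter names are illustrative): the residual value is the expected recovery fraction applied to the acquisition price, less any estimated disposal costs.

def residual_value(acquisition_price, recovery_rate, disposal_costs=0.0):
    """Estimated amount recoverable when the asset is disposed of at the end of its useful life."""
    return acquisition_price * recovery_rate - disposal_costs

print(residual_value(20_000, 0.10))   # 2000.0 euros, matching the worked example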
[ { "math_id": 0, "text": "\\text{Residual value} = 10\\% \\times (20{,}000) = 2{,}000" } ]
https://en.wikipedia.org/wiki?curid=615976
616019
Distributivity (order theory)
In the mathematical area of order theory, there are various notions of the common concept of distributivity, applied to the formation of suprema and infima. Most of these apply to partially ordered sets that are at least lattices, but the concept can in fact reasonably be generalized to semilattices as well. Distributive lattices. Probably the most common type of distributivity is the one defined for lattices, where the formation of binary suprema and infima provides the total operations of join (formula_0) and meet (formula_1). Distributivity of these two operations is then expressed by requiring that the identity formula_2 hold for all elements "x", "y", and "z". This distributivity law defines the class of distributive lattices. Note that this requirement can be rephrased by saying that binary meets preserve binary joins. The above statement is known to be equivalent to its order dual formula_3, so that either of these properties suffices to define distributivity for lattices. Typical examples of distributive lattices are totally ordered sets, Boolean algebras, and Heyting algebras. Every finite distributive lattice is isomorphic to a lattice of sets, ordered by inclusion (Birkhoff's representation theorem). Distributivity for semilattices. A semilattice is a partially ordered set with only one of the two lattice operations, either a meet- or a join-semilattice. Given that there is only one binary operation, distributivity obviously cannot be defined in the standard way. Nevertheless, because of the interaction of the single operation with the given order, the following definition of distributivity remains possible. A meet-semilattice is distributive, if for all "a", "b", and "x": If "a" ∧ "b" ≤ "x" then there exist "a"′ and "b"′ such that "a" ≤ "a"′, "b" ≤ "b' " and "x" = "a"′ ∧ "b' ". Distributive join-semilattices are defined dually: a join-semilattice is distributive, if for all "a", "b", and "x": If "x" ≤ "a" ∨ "b" then there exist "a"′ and "b"′ such that "a"′ ≤ "a", "b"′ ≤ "b" and "x" = "a"′ ∨ "b' ". In either case, a' and b' need not be unique. These definitions are justified by the fact that given any lattice "L", the following statements are all equivalent: "L" is distributive as a meet-semilattice; "L" is distributive as a join-semilattice; "L" is a distributive lattice. Thus any distributive meet-semilattice in which binary joins exist is a distributive lattice. A join-semilattice is distributive if and only if the lattice of its ideals (under inclusion) is distributive. This definition of distributivity allows generalizing some statements about distributive lattices to distributive semilattices. Distributivity laws for complete lattices. For a complete lattice, arbitrary subsets have both infima and suprema and thus infinitary meet and join operations are available. Several extended notions of distributivity can thus be described. For example, for the infinite distributive law, finite meets may distribute over arbitrary joins, i.e. formula_4 may hold for all elements "x" and all subsets "S" of the lattice. Complete lattices with this property are called frames, locales or complete Heyting algebras. They arise in connection with pointless topology and Stone duality. This distributive law "is not equivalent" to its dual statement formula_5, which defines the class of dual frames or complete co-Heyting algebras. Now one can go even further and define orders where arbitrary joins distribute over arbitrary meets. Such structures are called completely distributive lattices. However, expressing this requires formulations that are a little more technical.
Consider a doubly indexed family {"x""j","k" | "j" in "J", "k" in "K"("j")} of elements of a complete lattice, and let "F" be the set of choice functions "f" choosing for each index "j" of "J" some index "f"("j") in "K"("j"). A complete lattice is completely distributive if for all such data the following statement holds: formula_6 Complete distributivity is again a self-dual property, i.e. dualizing the above statement yields the same class of complete lattices. Completely distributive complete lattices (also called "completely distributive lattices" for short) are indeed highly special structures. See the article on completely distributive lattices. Distributive elements in arbitrary lattices. In an arbitrary lattice, an element "x" is called a "distributive element" if ∀"y","z": "x" ∨ ("y" ∧ "z") = ("x" ∨ "y") ∧ ("x" ∨ "z"). An element "x" is called a "dual distributive element" if ∀"y","z": "x" ∧ ("y" ∨ "z") = ("x" ∧ "y") ∨ ("x" ∧ "z"). In a distributive lattice, every element is of course both distributive and dual distributive. In a non-distributive lattice, there may be elements that are distributive, but not dual distributive (and vice versa). For example, in the depicted pentagon lattice "N"5, the element "x" is distributive, but not dual distributive, since "x" ∧ ("y" ∨ "z") = "x" ∧ 1 = "x" ≠ "z" = 0 ∨ "z" = ("x" ∧ "y") ∨ ("x" ∧ "z"). In an arbitrary lattice "L", the following are equivalent: In an arbitrary lattice, if "x"1 and "x"2 are distributive elements, then so is "x"1 ∨ "x"2. Literature. "Distributivity is a basic concept that is treated in any textbook on lattice and order theory. See the literature given for the articles on order theory and lattice theory. More specific literature includes:"
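The failure of distributivity in the pentagon lattice "N"5 mentioned above can be verified mechanically. The following Python sketch encodes "N"5 by its order relation (with the labelling 0 ≤ z ≤ x ≤ 1 and y incomparable to x and z, matching the example), derives meets and joins from the order, and tests the distributive identity on the triple from the text. The encoding and helper names are invented for this illustration.

# N5: 0 <= z <= x <= 1, with y incomparable to x and z.
elements = ["0", "z", "x", "y", "1"]
below = {                                   # below[a] = set of elements <= a
    "0": {"0"},
    "z": {"0", "z"},
    "x": {"0", "z", "x"},
    "y": {"0", "y"},
    "1": {"0", "z", "x", "y", "1"},
}
above = {a: {b for b in elements if a in below[b]} for a in elements}

def meet(a, b):   # greatest common lower bound
    common = below[a] & below[b]
    return max(common, key=lambda c: len(below[c]))

def join(a, b):   # least common upper bound
    common = above[a] & above[b]
    return min(common, key=lambda c: len(below[c]))

violations = [(a, b, c) for a in elements for b in elements for c in elements
              if meet(a, join(b, c)) != join(meet(a, b), meet(a, c))]
print(("x", "y", "z") in violations)   # True: x ∧ (y ∨ z) = x but (x ∧ y) ∨ (x ∧ z) = z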
[ { "math_id": 0, "text": "\\vee" }, { "math_id": 1, "text": "\\wedge" }, { "math_id": 2, "text": "x \\wedge (y \\vee z) = (x \\wedge y) \\vee (x \\wedge z)" }, { "math_id": 3, "text": "x \\vee (y \\wedge z) = (x \\vee y) \\wedge (x \\vee z)" }, { "math_id": 4, "text": "x \\wedge \\bigvee S = \\bigvee \\{ x \\wedge s \\mid s \\in S \\}" }, { "math_id": 5, "text": "x \\vee \\bigwedge S = \\bigwedge \\{ x \\vee s \\mid s \\in S \\}" }, { "math_id": 6, "text": " \\bigwedge_{j\\in J}\\bigvee_{k\\in K(j)} x_{j,k} = \n \\bigvee_{f\\in F}\\bigwedge_{j\\in J} x_{j,f(j)}\n " } ]
https://en.wikipedia.org/wiki?curid=616019
61603971
Transformer (deep learning architecture)
Machine learning algorithm used for natural-language processing A transformer is a deep learning architecture developed by researchers at Google and based on the multi-head attention mechanism, proposed in a 2017 paper "Attention Is All You Need". Text is converted to numerical representations called tokens, and each token is converted into a vector by looking it up in a word embedding table. At each layer, each token is then contextualized within the scope of the context window with other (unmasked) tokens via a parallel multi-head attention mechanism allowing the signal for key tokens to be amplified and less important tokens to be diminished. Transformers have the advantage of having no recurrent units, and therefore require less training time than earlier recurrent neural architectures (RNNs) such as long short-term memory (LSTM). Later variations have been widely adopted for training large language models (LLM) on large (language) datasets, such as the Wikipedia corpus and Common Crawl. Transformers were first developed as an improvement over previous architectures for machine translation, but have found many applications since then. They are used in large-scale natural language processing, computer vision (vision transformers), reinforcement learning, audio, multi-modal processing, robotics, and even playing chess. The architecture has also led to the development of pre-trained systems, such as generative pre-trained transformers (GPTs) and BERT (Bidirectional Encoder Representations from Transformers). History. Predecessors. For many years, sequence modelling and generation were done using plain recurrent neural networks (RNNs). A well-cited early example was the Elman network (1990). In theory, the information from one token can propagate arbitrarily far down the sequence, but in practice the vanishing-gradient problem leaves the model's state at the end of a long sentence without precise, extractable information about preceding tokens. A key breakthrough was LSTM (1995), an RNN which used various innovations to overcome the vanishing gradient problem, allowing efficient learning of long-sequence modelling. One key innovation was the use of an attention mechanism which used neurons that multiply the outputs of other neurons, so-called "multiplicative units". Neural networks using multiplicative units were called "sigma-pi networks" or "higher-order networks", but they faced high computational complexity. LSTM became the standard architecture for long sequence modelling until the 2017 publication of Transformers. However, LSTM still used sequential processing, like most other RNNs. Specifically, RNNs operate one token at a time from first to last; they cannot operate in parallel over all tokens in a sequence. An early attempt to overcome this was the fast weight controller (1992) which computed the weight matrix for further processing depending on the input. It used the fast weights architecture (1987), where one neural network outputs the weights of another neural network. It was later shown to be equivalent to the linear Transformer without normalization. Attention with seq2seq. The idea of encoder-decoder sequence transduction had been developed in the early 2010s. The papers most commonly cited as the originators that produced seq2seq are two concurrently published papers from 2014.
(Sutskever et al., 2014) was a 380M-parameter model for machine translation using two long short-term memory (LSTM) networks. The architecture consists of two parts. The "encoder" is an LSTM that takes in a sequence of tokens and turns it into a vector. The "decoder" is another LSTM that converts the vector into a sequence of tokens. Similarly, (Cho et al., 2014) was a 130M-parameter model that used gated recurrent units (GRU) instead of LSTM. Later research showed that GRUs are neither better nor worse than LSTMs for seq2seq. These early seq2seq models had no attention mechanism, and the state vector is accessible only after the "last" word of the source text is processed. Although in theory such a vector retains the information about the whole original sentence, in practice the information is poorly preserved, since the input is processed sequentially by one recurrent network into a "fixed"-size output vector, which was then processed by another recurrent network into an output. If the input is long, then the output vector would not be able to contain all relevant information, and the output quality degrades. As evidence, reversing the input sentence improved seq2seq translation. (Bahdanau et al., 2014) introduced an attention mechanism to seq2seq for machine translation to solve the bottleneck problem, allowing the model to process long-distance dependencies more easily. They called their model "RNNsearch", as it "emulates searching through a source sentence during decoding a translation". (Luong et al., 2015) compared the relative performance of global (that of (Bahdanau et al., 2014)) and local (sliding window) attention model architectures for machine translation, and found that a mixed attention architecture had higher quality than global attention, while the use of a local attention architecture reduced translation time. In 2016, Google Translate was revamped to Google Neural Machine Translation, which replaced the previous model based on statistical machine translation. The new model was a seq2seq model where the encoder and the decoder were both 8 layers of bidirectional LSTM. It took nine months to develop, and it achieved a higher level of performance than the statistical approach, which took ten years to develop. In the same year, self-attention "avant la lettre", originally called "intra-attention" or "intra-sentence attention", was proposed for LSTMs. Parallelizing attention. Seq2seq models with attention (including self-attention) still suffered from the same issue with recurrent networks, which is that they are hard to parallelize, which prevented them from being accelerated on GPUs. In 2016, "decomposable attention" applied a self-attention mechanism to feedforward networks, which are easy to parallelize, and achieved state-of-the-art results in textual entailment with an order of magnitude fewer parameters than LSTMs. One of its authors, Jakob Uszkoreit, suspected that attention "without" recurrence is sufficient for language translation, thus the title "attention is "all" you need". That hypothesis was against the conventional wisdom of the time, and even his father, a well-known computational linguist, was skeptical. In 2017, the original (100M-sized) encoder-decoder transformer model was proposed in the "Attention is all you need" paper. At the time, the focus of the research was on improving seq2seq for machine translation, by removing its recurrence to process all tokens in parallel, but preserving its dot-product attention mechanism to keep its text processing performance.
Its parallelizability was an important factor in its widespread use in large neural networks. AI boom era. Already in spring 2017, even before the "Attention is all you need" preprint was published, one of the co-authors applied the "decoder-only" variation of the architecture to generate fictitious Wikipedia articles. Transformer architecture is now used in many generative models that contribute to the ongoing AI boom. In language modelling, ELMo (2018) was a bi-directional LSTM that produces contextualized word embeddings, improving upon the line of research from bag of words and word2vec. It was followed by BERT (2018), an encoder-only Transformer model. In October 2019, Google started using BERT to process search queries. In 2020, Google Translate replaced the previous RNN-encoder–RNN-decoder model by a Transformer-encoder–RNN-decoder model. Starting in 2018, the OpenAI GPT series of decoder-only Transformers became state of the art in natural language generation. In 2022, a chatbot based on GPT-3, ChatGPT, became unexpectedly popular, triggering a boom around large language models. Since 2020, Transformers have been applied in modalities beyond text, including the vision transformer, speech recognition, robotics, and multimodal processing. The vision transformer, in turn, stimulated new developments in convolutional neural networks. Image and video generators like DALL-E (2021), Stable Diffusion 3 (2024), and Sora (2024) are based on the Transformer architecture. Training. Methods for stabilizing training. The plain transformer architecture had difficulty converging. In the original paper the authors recommended using learning rate warmup. That is, the learning rate should linearly scale up from 0 to its maximal value for the first part of the training (usually recommended to be 2% of the total number of training steps), before decaying again. A 2020 paper found that using layer normalization "before" (instead of after) multiheaded attention and feedforward layers stabilizes training, not requiring learning rate warmup. Pretrain-finetune. Transformers typically are first pretrained by self-supervised learning on a large generic dataset, followed by supervised fine-tuning on a small task-specific dataset. The pretrain dataset is typically an unlabeled large corpus, such as The Pile. Tasks for pretraining and fine-tuning commonly include: language modelling, next-sentence prediction, question answering, reading comprehension, sentiment analysis, and paraphrasing. The T5 transformer report documents a large number of natural language pretraining tasks. Some examples are: restoring corrupted spans of text, translation between natural languages, and judging the grammatical acceptability of a sentence. Note that while each of these tasks is trivial or obvious for human native speakers of the language (or languages), they have typically proved challenging for previous generations of machine learning architecture. Tasks. In general, there are 3 classes of language modelling tasks: "masked", "autoregressive", and "prefixLM". These classes are independent of a specific modeling architecture such as Transformer, but they are often discussed in the context of Transformer. In a masked task, one or more of the tokens is masked out, and the model would produce a probability distribution predicting what the masked-out tokens are based on the context. The loss function for the task is typically the sum of log-perplexities for the masked-out tokens: formula_0 and the model is trained to minimize this loss function. The BERT series of models are trained for masked token prediction and another task. In an autoregressive task, the entire sequence is masked at first, and the model produces a probability distribution for the first token.
Then the first token is revealed and the model predicts the second token, and so on. The loss function for the task is still typically the same. The GPT series of models are trained by autoregressive tasks. In a prefixLM task, the sequence is divided into two parts. The first part is presented as context, and the model predicts the first token of the second part. Then that would be revealed, and the model predicts the second token, and so on. The loss function for the task is still typically the same. The T5 series of models are trained by prefixLM tasks. Note that "masked" as in "masked language modelling" is not "masked" as in "masked attention", and "prefixLM" (prefix language modeling) is not "prefixLM" (prefix language model). Architecture. All transformers have the same primary components: a tokenizer, which converts text into tokens; an embedding layer, which converts tokens and positions of the tokens into vector representations; transformer layers, which carry out repeated transformations on the vector representations; and an un-embedding layer, which converts the final vector representations back into a probability distribution over the tokens. The following description follows exactly the Transformer as described in the original paper. There are variants, described in the following section. By convention, we write all vectors as row vectors. This, for example, means that pushing a vector through a linear layer means multiplying it by a weight matrix on the right, as formula_1. Tokenization. As the Transformer architecture natively processes numerical data, not text, there must be a translation between text and tokens. A token is an integer that represents a character, or a short segment of characters. On the input side, the input text is parsed into a token sequence. Similarly, on the output side, the output tokens are parsed back to text. The module doing the conversion between token sequences and texts is a tokenizer. The set of all tokens is the vocabulary of the tokenizer, and its size is the "vocabulary size" formula_2. When faced with tokens outside the vocabulary, typically a special token is used, written as "[UNK]" for "unknown". Some commonly used tokenizers are byte pair encoding, WordPiece, and SentencePiece. Embedding. Each token is converted into an embedding vector via a lookup table. Equivalently stated, it multiplies a one-hot representation of the token by an embedding matrix formula_3. For example, if the input token is formula_4, then the one-hot representation is formula_5, and its embedding vector is formula_6. The token embedding vectors are added to their respective positional encoding vectors (see below), producing the sequence of input vectors. The number of dimensions in an embedding vector is called "hidden size" or "embedding size" and written as formula_7. Un-embedding. An un-embedding layer is almost the reverse of an embedding layer. Whereas an embedding layer converts a token into a vector, an un-embedding layer converts a vector into a probability distribution over tokens. The un-embedding layer is a linear-softmax layer: formula_8 The matrix has shape formula_9. Positional encoding. A positional encoding is a fixed-size vector representation of the relative positions of tokens within a sequence: it provides the transformer model with information about "where" the words are in the input sequence. Without positional encoding, the model would be unable to process an input sequence as more than a bag of words, as for example, both "man bites dog" and "dog bites man" would be processed exactly the same way. The positional encoding is defined as a function of type formula_10, where formula_11 is a positive even integer. The full positional encoding defined in the original paper is: formula_12 where formula_13.
Here, formula_14 is a free parameter that should be significantly larger than the biggest formula_15 that would be input into the positional encoding function. The original paper uses formula_16. The function is in a simpler form when written as a complex function of type formula_17: formula_18 where formula_19. The main reason for using this positional encoding function is that using it, shifts are linear transformations: formula_20 where formula_21 is the distance one wishes to shift. This allows the transformer to take any encoded position, and find the encoding of the position n-steps-ahead or n-steps-behind, by a matrix multiplication. By taking a linear sum, any convolution can also be implemented as linear transformations: formula_22 for any constants formula_23. This allows the transformer to take any encoded position and find a linear sum of the encoded locations of its neighbors. This sum of encoded positions, when fed into the attention mechanism, would create attention weights on its neighbors, much like what happens in a convolutional neural network language model. In the author's words, "we hypothesized it would allow the model to easily learn to attend by relative position." In typical implementations, all operations are done over the real numbers, not the complex numbers, but since complex multiplication can be implemented as real 2-by-2 matrix multiplication, this is a mere notational difference. Encoder-decoder (overview). Like earlier seq2seq models, the original transformer model used an encoder-decoder architecture. The encoder consists of encoding layers that process all the input tokens together one layer after another, while the decoder consists of decoding layers that iteratively process the encoder's output and the decoder's output tokens so far. The purpose of each encoder layer is to create contextualized representations of the tokens, where each representation corresponds to a token that "mixes" information from other input tokens via a self-attention mechanism. Each decoder layer contains two attention sublayers: (1) cross-attention for incorporating the output of the encoder (contextualized input token representations), and (2) self-attention for "mixing" information among the input tokens to the decoder (i.e. the tokens generated so far during inference time). Both the encoder and decoder layers have a feed-forward neural network for additional processing of their outputs and contain residual connections and layer normalization steps. These feed-forward layers contain most of the parameters in a Transformer model. Feedforward network. The feedforward network (FFN) modules in a Transformer are 2-layered multilayer perceptrons: formula_24 where formula_25 is its activation function. The original Transformer used ReLU activation. The number of neurons in the middle layer is called "intermediate size" (GPT), "filter size" (BERT), or "feedforward size" (BERT). It is typically larger than the embedding size. For example, in both the GPT-2 series and the BERT series, the intermediate size of a model is 4 times its embedding size: formula_26. Scaled dot-product attention. Attention head. The attention mechanisms used in the Transformer architecture are scaled dot-product attention units. For each unit, the transformer model learns three weight matrices: the query weights formula_27, the key weights formula_28, and the value weights formula_29. The module takes three sequences, a query sequence, a key sequence, and a value sequence.
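As a concrete illustration of the sinusoidal positional encoding defined above (before the attention-head description continues below), the following NumPy sketch builds the encoding matrix for a short sequence; the sequence length, dimension, and the default N = 10000 are illustrative choices, and the function name is invented for this sketch.

import numpy as np

def positional_encoding(seq_len, d_model, N=10_000):
    """Sinusoidal positional encoding: even columns use sin, odd columns use cos."""
    pos = np.arange(seq_len)[:, None]                # token positions t
    k = np.arange(d_model // 2)[None, :]             # frequency index
    angles = pos / (N ** (2 * k / d_model))
    pe = np.zeros((seq_len, d_model))
    pe[:, 0::2] = np.sin(angles)
    pe[:, 1::2] = np.cos(angles)
    return pe

pe = positional_encoding(seq_len=8, d_model=16)
print(pe.shape)   # (8, 16); row t is added to the embedding of the token at position t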
The query sequence is a sequence of length formula_30, and each entry is a vector of dimension formula_31. Similarly for the key and value sequences. For each vector formula_32 in the query sequence, it is multiplied by a matrix formula_27 to produce a query vector formula_33. The matrix of all query vectors is the query matrix: formula_34 Similarly, we construct the key matrix formula_35 and the value matrix formula_36. It is usually the case that all formula_37 are square matrices, meaning formula_38, etc. Attention weights are calculated using the query and key vectors: the attention weight formula_39 from token formula_40 to token formula_41 is the dot product between formula_42 and formula_43. The attention weights are divided by the square root of the dimension of the key vectors, formula_44, which stabilizes gradients during training, and passed through a softmax which normalizes the weights. The fact that formula_27 and formula_28 are different matrices allows attention to be non-symmetric: if token formula_40 attends to token formula_41 (i.e. formula_45 is large), this does not necessarily mean that token formula_41 will attend to token formula_40 (i.e. formula_46 could be small). The output of the attention unit for token formula_40 is the weighted sum of the value vectors of all tokens, weighted by formula_39, the attention from token formula_40 to each token. The attention calculation for all tokens can be expressed as one large matrix calculation using the softmax function, which is useful for training because highly optimized matrix-multiplication routines compute it quickly. The matrices formula_47, formula_48 and formula_49 are defined as the matrices where the formula_40th rows are vectors formula_42, formula_50, and formula_51 respectively. Then we can represent the attention as formula_52 where the softmax is applied over each of the rows of the matrix. The number of dimensions in a query vector is the "query size" formula_53, and similarly for the "key size" formula_54 and "value size" formula_55. The output dimension of an attention head is its "head dimension" formula_56. The attention mechanism requires the following three equalities to hold: formula_57 but is otherwise unconstrained. If the attention head is used in a self-attention fashion, then formula_58. If the attention head is used in a cross-attention fashion, then usually formula_59. It is theoretically possible for all three to be different, but that is rarely the case in practice. Multiheaded attention. One set of formula_60 matrices is called an "attention head", and each layer in a transformer model has multiple attention heads. While each attention head attends to the tokens that are relevant to each token, multiple attention heads allow the model to do this for different definitions of "relevance". In addition, the influence field representing relevance can become progressively dilated in successive layers. Many transformer attention heads encode relevance relations that are meaningful to humans. For example, some attention heads can attend mostly to the next word, while others mainly attend from verbs to their direct objects. The computations for each attention head can be performed in parallel, which allows for fast processing. The outputs for the attention layer are concatenated to pass into the feed-forward neural network layers.
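A minimal NumPy sketch of a single scaled dot-product attention head as described above (the shapes, random toy data, and function names are illustrative; the concrete multi-head formula used by the architecture follows below).

import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attention_head(X_q, X_kv, W_q, W_k, W_v, mask=None):
    """softmax(Q K^T / sqrt(d_k) + mask) V, with inputs as row vectors."""
    Q, K, V = X_q @ W_q, X_kv @ W_k, X_kv @ W_v
    scores = Q @ K.T / np.sqrt(K.shape[-1])
    if mask is not None:
        scores = scores + mask           # mask entries are 0 (kept) or -inf (cut)
    return softmax(scores) @ V

rng = np.random.default_rng(0)
d_model, d_head, n = 8, 4, 5             # embedding size, head dimension, sequence length
X = rng.normal(size=(n, d_model))        # a toy sequence of 5 token vectors
W_q, W_k, W_v = (rng.normal(size=(d_model, d_head)) for _ in range(3))
print(attention_head(X, X, W_q, W_k, W_v).shape)   # (5, 4): one output vector per query token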
Concretely, let the multiple attention heads be indexed by formula_40, then we have formula_61 where the matrix formula_62 is the concatenation of word embeddings, and the matrices formula_63 are "projection matrices" owned by the individual attention head formula_40, and formula_64 is a final projection matrix owned by the whole multi-headed attention module. It is theoretically possible for each attention head to have a different head dimension formula_56, but that is rarely the case in practice. As an example, in the smallest GPT-2 model, there are only self-attention mechanisms. It has the following dimensions: formula_65 Since formula_66, its projection matrix formula_67 is a square matrix. Masked attention. It may be necessary to cut out attention links between some word-pairs. For example, the decoder, when decoding for the token position formula_68, should not have access to the token at position formula_69. This may be accomplished before the softmax stage by adding a mask matrix formula_3 that is formula_70 at entries where the attention link must be cut, and formula_71 at other places: formula_72 A non-masked attention module can be thought of as a masked attention module where the mask has all entries zero. For example, the following matrix is commonly used in decoder self-attention modules, called "causal masking": formula_73 In words, it means that each token can pay attention to itself, and every token before it, but not any after it. As an example of an uncommon use of a mask matrix, XLNet considers all masks of the form formula_74, where formula_75 is a random permutation matrix. Encoder. An encoder consists of an embedding layer, followed by multiple encoder layers. Each encoder layer consists of two major components: a self-attention mechanism and a feed-forward layer. It takes as input a sequence of input vectors, applies the self-attention mechanism to produce an intermediate sequence of vectors, then applies the feed-forward layer to each vector individually. Schematically, we have: formula_76 where formula_77 stands for "feed-forward network". We can more succinctly write it as formula_78 with the implicit convention that the formula_77 is applied to each row of the matrix individually. The encoder layers are stacked. The first encoder layer takes the sequence of input vectors from the embedding layer, producing a sequence of vectors. This sequence of vectors is processed by the second encoder layer, and so on. The output from the final encoder layer is then used by the decoder. As the encoder processes the entire input all at once, every token can attend to every other token (all-to-all attention), so there is no need for causal masking. Decoder. A decoder consists of an embedding layer, followed by multiple decoder layers, followed by an un-embedding layer. Each decoder layer consists of three major components: a causally masked self-attention mechanism, a cross-attention mechanism, and a feed-forward neural network. The decoder functions in a similar fashion to the encoder, but an additional attention mechanism is inserted which instead draws relevant information from the encodings generated by the encoders. This mechanism can also be called the "encoder-decoder attention". Like the first encoder, the first decoder takes positional information and embeddings of the output sequence as its input, rather than encodings. The transformer must not use the current or future output to predict an output, so the output sequence must be partially masked to prevent this reverse information flow.
This allows for autoregressive text generation. For decoding, all-to-all attention is inappropriate, because a token cannot attend to tokens not yet generated. Thus, the self-attention module in the decoder is causally masked. In contrast, the cross-attention mechanism attends to the output vectors of the encoder, which are computed before the decoder starts decoding. Consequently, there is no need for masking in the cross-attention mechanism. Schematically, we have:formula_79where formula_80 is the matrix with rows being the output vectors from the encoder. The last decoder layer is followed by a final un-embedding layer to produce the output probabilities over the vocabulary. Then, one of the tokens is sampled according to these probabilities, and the decoder can be run again to produce the next token, and so on, autoregressively generating output text. Full transformer architecture. Sublayers. Each encoder layer contains 2 sublayers: the self-attention and the feedforward network. Each decoder layer contains 3 sublayers: the causally masked self-attention, the cross-attention, and the feedforward network. The final points of detail are the residual connections and layer normalization (LayerNorm, or LN), which, while conceptually unnecessary, are needed for numerical stability and convergence. Similarly to how the feedforward network modules are applied individually to each vector, the LayerNorm is also applied individually to each vector. There are two common conventions in use: the "post-LN" and the "pre-LN" convention. In the post-LN convention, the output of each sublayer is formula_81where formula_82 is the function implemented by the sublayer itself. In the pre-LN convention, the output of each sublayer isformula_83The original 2017 Transformer used the post-LN convention. It was difficult to train and required careful hyperparameter tuning and a "warm-up" in learning rate, in which the learning rate starts small and is gradually increased. The pre-LN convention, developed in 2020, was found to be easier to train, requiring no warm-up and leading to faster convergence.
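The difference between the two conventions can be illustrated with a short sketch. The following is a minimal NumPy example of one sublayer written in both the post-LN and the pre-LN form; the stand-in sublayer function f and the layer_norm helper are assumptions made for the illustration.

import numpy as np

def layer_norm(x, eps=1e-5):
    # normalize each vector (row) to zero mean and unit variance
    mean = x.mean(axis=-1, keepdims=True)
    var = x.var(axis=-1, keepdims=True)
    return (x - mean) / np.sqrt(var + eps)

def post_ln_sublayer(x, f):
    # post-LN: LayerNorm(x + Sublayer(x))
    return layer_norm(x + f(x))

def pre_ln_sublayer(x, f):
    # pre-LN: x + Sublayer(LayerNorm(x))
    return x + f(layer_norm(x))

# toy usage with an arbitrary stand-in "sublayer" f
f = lambda x: np.tanh(x)
x = np.random.default_rng(1).normal(size=(4, 8))
y_post, y_pre = post_ln_sublayer(x, f), pre_ln_sublayer(x, f)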
Pseudocode. The following is pseudocode for a standard pre-LN encoder-decoder Transformer.

input: Encoder input t_e
       Decoder input t_d
output: Array of probability distributions, with shape (decoder vocabulary size x length(decoder output sequence))

/* encoder */
z_e ← encoder.tokenizer(t_e)
for each t in 1:length(z_e) do z_e[t] ← encoder.embedding(z_e[t]) + encoder.positional_embedding(t)
for each l in 1:length(encoder.layers) do
    layer ← encoder.layers[l]
    /* first sublayer */
    z_e_copy ← copy(z_e)
    for each t in 1:length(z_e) do z_e[t] ← layer.layer_norm(z_e[t])
    z_e ← layer.multiheaded_attention(z_e, z_e, z_e)
    for each t in 1:length(z_e) do z_e[t] ← z_e[t] + z_e_copy[t]
    /* second sublayer */
    z_e_copy ← copy(z_e)
    for each t in 1:length(z_e) do z_e[t] ← layer.layer_norm(z_e[t])
    z_e ← layer.feedforward(z_e)
    for each t in 1:length(z_e) do z_e[t] ← z_e[t] + z_e_copy[t]
for each t in 1:length(z_e) do z_e[t] ← encoder.final_layer_norm(z_e[t])

/* decoder */
z_d ← decoder.tokenizer(t_d)
for each t in 1:length(z_d) do z_d[t] ← decoder.embedding(z_d[t]) + decoder.positional_embedding(t)
for each l in 1:length(decoder.layers) do
    layer ← decoder.layers[l]
    /* first sublayer */
    z_d_copy ← copy(z_d)
    for each t in 1:length(z_d) do z_d[t] ← layer.layer_norm(z_d[t])
    z_d ← layer.masked_multiheaded_attention(z_d, z_d, z_d)
    for each t in 1:length(z_d) do z_d[t] ← z_d[t] + z_d_copy[t]
    /* second sublayer */
    z_d_copy ← copy(z_d)
    for each t in 1:length(z_d) do z_d[t] ← layer.layer_norm(z_d[t])
    z_d ← layer.multiheaded_attention(z_d, z_e, z_e)
    for each t in 1:length(z_d) do z_d[t] ← z_d[t] + z_d_copy[t]
    /* third sublayer */
    z_d_copy ← copy(z_d)
    for each t in 1:length(z_d) do z_d[t] ← layer.layer_norm(z_d[t])
    z_d ← layer.feedforward(z_d)
    for each t in 1:length(z_d) do z_d[t] ← z_d[t] + z_d_copy[t]
z_d ← decoder.final_layer_norm(z_d)

output_distributions ← []
for each t in 1:length(z_d) do output_distributions.append(decoder.unembed(z_d[t]))
return output_distributions

Terminology. The Transformer architecture, being modular, allows variations. Several common variations are described here. An "encoder-only" Transformer applies the encoder to map an input text into a sequence of vectors that represent the input text. This is usually used for text embedding and representation learning for downstream applications. BERT is encoder-only. Encoder-only models are less often used currently, as they were found not to be significantly better than training an encoder-decoder Transformer and then taking just the encoder. A "decoder-only" Transformer is not literally decoder-only, since without an encoder, the cross-attention mechanism has nothing to attend to. Thus, the decoder layers in a decoder-only Transformer are composed of just two sublayers: the causally masked self-attention and the feedforward network. This is usually used for text generation and instruction following. The models in the GPT series and Chinchilla series are decoder-only. An "encoder-decoder" Transformer is generally the same as the original Transformer, with 2 sublayers per encoder layer and 3 sublayers per decoder layer, etc. It might have minor architectural improvements, such as alternative activation functions, a different location of normalization, etc. This is also usually used for text generation and instruction following. The models in the T5 series are encoder-decoder. A "prefixLM" (prefix language model) is a decoder-only architecture, but with prefix masking, which is different from causal masking.
Specifically, it has a mask of the formformula_84where the first columns correspond to the "prefix", and the subsequent columns correspond to the autoregressively generated text based on the prefix. They resemble encoder-decoder models, but have less "sparsity". Such models are rarely used, though they are cited as theoretical possibilities and in benchmark comparisons. There are also mixed seq2seq models. For example, in 2020, Google Translate replaced the previous RNN-encoder–RNN-decoder model with a Transformer-encoder–RNN-decoder model, on the argument that an RNN-decoder runs much faster than a Transformer-decoder when run autoregressively. Subsequent work. Alternative activation functions. The original transformer uses the ReLU activation function. Other activation functions were developed. The Llama series used SwiGLU; both GPT-1 and BERT used GELU. Alternative normalizations. The normalization used in the Transformer can be different from LayerNorm. One example is RMSNorm, which is used in the Llama series. Other examples include ScaleNorm and FixNorm. Alternative positional encodings. Transformers may use positional encoding methods other than sinusoidal. The original Transformer paper reported using a learned positional encoding, but finding it not superior to the sinusoidal one. Later work found that causal masking itself provides enough signal to a Transformer decoder that it can learn to implicitly perform absolute positional encoding without the positional encoding module. RoPE. RoPE (rotary positional embedding) is best explained by considering a list of 2-dimensional vectors formula_85. Now pick some angle formula_86. Then the RoPE encoding isformula_87Equivalently, if we write the 2-dimensional vectors as complex numbers formula_88, then the RoPE encoding is just multiplication by a unit complex number, i.e. a rotation:formula_89For a list of formula_90-dimensional vectors, a RoPE encoder is defined by a sequence of angles formula_91. Then the RoPE encoding is applied to each pair of coordinates. The benefit of RoPE is that the dot-product between two vectors depends on their relative location only:formula_92 for any integer formula_15. ALiBi. ALiBi (Attention with Linear Biases) is not a "replacement" for the positional encoder of the original transformer. Instead, it is an "additional" positional encoder that is directly plugged into the attention mechanism. Specifically, the ALiBi attention mechanism isformula_93Here, formula_94 is a real number ("scalar"), and formula_95 is the "linear bias" matrix defined byformula_96in other words, formula_97. The idea is that the linear bias matrix is a softened mask. Just as formula_71 represents full attention paid and formula_70 represents no attention paid, the linear bias matrix increases attention paid in one direction and decreases attention paid in the other direction. ALiBi allows pretraining on short context windows, then finetuning on longer context windows. Since it is directly plugged into the attention mechanism, it can be combined with any positional encoder that is plugged into the "bottom" of the entire network (which is where the sinusoidal encoder of the original transformer, as well as RoPE and many others, are located). Relative Position Encodings. Relative position encodings are similar to ALiBi, but more generic:formula_98where formula_95 is a Toeplitz matrix, that is, formula_99 whenever formula_100. This is contrasted with the original sinusoidal positional encoding, which is an "absolute positional encoding".
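As a concrete illustration of the rotary encoding described above, the following is a minimal NumPy sketch that applies RoPE to a sequence of even-dimensional vectors; the choice of one angle per coordinate pair following an inverse-power schedule is an assumption of this sketch rather than something fixed by the text. The final assertion checks the relative-position property stated above.

import numpy as np

def rope(x):
    # x: array of shape (sequence length, d), with d even
    seq_len, d = x.shape
    # one angle per coordinate pair (assumed schedule for this sketch)
    theta = 10000.0 ** (-np.arange(0, d, 2) / d)        # shape (d/2,)
    m = np.arange(seq_len)[:, None]                     # positions, shape (seq_len, 1)
    cos, sin = np.cos(m * theta), np.sin(m * theta)     # shape (seq_len, d/2)
    x1, x2 = x[:, 0::2], x[:, 1::2]                     # the two coordinates of each pair
    out = np.empty_like(x)
    out[:, 0::2] = x1 * cos - x2 * sin                  # rotate each pair by m * theta
    out[:, 1::2] = x1 * sin + x2 * cos
    return out

# the dot product of two encoded copies of the same vector depends only on the
# difference of their positions (here, a relative offset of 3 in both cases)
q = rope(np.ones((16, 8)))
assert np.allclose(q[2] @ q[5], q[7] @ q[10])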
Efficient implementation. The transformer model has been implemented in standard deep learning frameworks such as TensorFlow and PyTorch. "Transformers" is a library produced by Hugging Face that supplies transformer-based architectures and pretrained models. FlashAttention. FlashAttention is an algorithm that implements the transformer attention mechanism efficiently on a GPU. It performs matrix multiplications in blocks, such that each block fits within the cache of a GPU, and by careful management of the blocks it minimizes data copying between GPU caches (as data movement is slow). An improved version, FlashAttention-2, was developed to cater to the rising demand for language models capable of handling longer context lengths. It offers enhancements in work partitioning and parallelism, enabling it to achieve up to 230 TFLOPs/s on A100 GPUs (FP16/BF16), a 2x speed increase over the original FlashAttention. Key advancements in FlashAttention-2 include the reduction of non-matmul FLOPs, improved parallelism over the sequence length dimension, better work partitioning between GPU warps, and added support for head dimensions up to 256 and multi-query attention (MQA) and grouped-query attention (GQA). Benchmarks revealed FlashAttention-2 to be up to 2x faster than FlashAttention and up to 9x faster than a standard attention implementation in PyTorch. Future developments include optimization for new hardware like H100 GPUs and new data types like FP8. Multi-Query Attention. Multi-Query Attention changes the multiheaded attention mechanism. Whereas normally we haveformula_101with Multi-Query Attention there is just one formula_102, thus:formula_103 This has a neutral effect on model quality and training speed, but increases inference speed. Caching. When an autoregressive transformer is used for inference, such as generating text, the query vector is different at each step, but the already-computed key and value vectors are always the same. The KV caching method saves the computed key and value vectors at each attention block, so that they are not recomputed at each new token. PagedAttention applies memory paging to KV caching. If a transformer is used with a baked-in prompt, such as ["You are a customer support agent..."], then the key and value vectors can be computed for the prompt and saved on disk. The saving in compute is significant when the model is used for many short interactions, such as in online chatbots. Speculative decoding. Transformers are used in large language models for autoregressive sequence generation: generating a stream of text, one token at a time. However, in most settings, decoding from language models is memory-bound, meaning that we have spare compute power available. Speculative decoding uses this spare compute power by computing several tokens in parallel. Similarly to speculative execution in CPUs, future tokens are computed concurrently, by speculating on the value of previous tokens, and are later discarded if it turns out the speculation was incorrect. Specifically, consider a transformer model like GPT-3 with a context window size of 512. To generate an entire context window autoregressively with greedy decoding, it must be run 512 times, each time generating one of the tokens formula_104. However, if we had some educated guess for the values of these tokens, we could verify all of them in parallel, in one run of the model, by checking that each formula_105 is indeed the token with the largest log-likelihood in the formula_68-th output.
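The greedy verification step just described can be sketched in a few lines. The following is an illustrative Python example only; it assumes a hypothetical function target_logits(tokens) that runs the large model once over a list of token ids and returns one row of logits per position, which is a made-up interface for this sketch.

import numpy as np

def verify_draft(prefix, draft_tokens, target_logits):
    # Run the target model once on prefix + draft and greedily check each draft token.
    # Returns the accepted draft tokens plus the target model's correction/next token.
    logits = target_logits(prefix + draft_tokens)   # shape (len(prefix + draft), vocab size)
    accepted = []
    pos = len(prefix) - 1                           # logits[pos] predicts the next token
    for tok in draft_tokens:
        best = int(np.argmax(logits[pos]))
        if best != tok:                             # first mismatch: replace it and stop
            return accepted, best
        accepted.append(tok)
        pos += 1
    # every draft token matched; the same run also yields one extra token for free
    return accepted, int(np.argmax(logits[pos]))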
In speculative decoding, a smaller model or some other simple heuristic is used to generate a few speculative tokens that are subsequently verified by the larger model. For example, suppose a small model generated four speculative tokens: formula_106. These tokens are run through the larger model, and only formula_107 and formula_108 are accepted. The same run of the large model already generated a new token formula_109 to replace formula_110, and formula_111 is completely discarded. The process then repeats (starting from the 4th token) until all tokens are generated. For non-greedy decoding, similar ideas apply, except the speculative tokens are accepted or rejected stochastically, in a way that guarantees the final output distribution is the same as if speculative decoding were not used. Sub-quadratic transformers. Training transformer-based architectures can be expensive, especially for long inputs. Many methods have been developed to address the issue. "Long Range Arena" (2020) is a standard benchmark for comparing the behavior of transformer architectures over long inputs. Alternative attention graphs. The standard attention graph is either all-to-all or causal, both of which scale as formula_112 where formula_14 is the number of tokens in a sequence. Reformer (2020) reduces the computational load from formula_112 to formula_113 by using locality-sensitive hashing and reversible layers. Sparse attention uses attention graphs that grow more slowly than formula_112. For example, BigBird (2020) uses random small-world networks which grow as formula_114. Ordinary transformers require a memory size that is quadratic in the size of the context window. Attention-free transformers reduce this to a linear dependence while still retaining the advantages of a transformer by linking the key to the value. Random Feature Attention. Random Feature Attention (2021) uses Fourier random features:formula_115where formula_116 are independent samples from the normal distribution formula_117. This choice of parameters satisfies formula_118, or formula_119Consequently, the one-headed attention, with one query, can be written as formula_120where formula_121. Similarly for multiple queries, and for multiheaded attention. This approximation can be computed in linear time, as we can compute the matrix formula_122 first, then multiply it with the query. In essence, we have managed to obtain a more precise version of formula_123Performer (2022) uses the same Random Feature Attention, but formula_116 are first independently sampled from the normal distribution formula_117, then they are Gram-Schmidt processed. Multimodality. Transformers can also be used or adapted for modalities (input or output) beyond just text, usually by finding a way to "tokenize" the modality. Multimodal models can be trained either from scratch or by finetuning. A 2022 study found that Transformers pretrained only on natural language can be finetuned on only 0.03% of parameters and become competitive with LSTMs on a variety of logical and visual tasks, demonstrating transfer learning. LLaVA is a vision-language model composed of a language model (Vicuna-13B) and a vision model (ViT-L/14), connected by a linear layer. Only the linear layer is finetuned. Vision transformers adapt the transformer to computer vision by breaking down input images into a series of patches, turning them into vectors, and treating them like tokens in a standard transformer.
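The patch-and-tokenize step used by vision transformers can be made concrete with a short sketch. The following illustrative NumPy example cuts an image into non-overlapping patches and flattens each patch into a vector; the image size, patch size, and the random linear projection are assumptions chosen for this example.

import numpy as np

def image_to_patch_tokens(image, patch=16):
    # image: array of shape (H, W, C); H and W are assumed divisible by the patch size
    H, W, C = image.shape
    patches = image.reshape(H // patch, patch, W // patch, patch, C)
    patches = patches.transpose(0, 2, 1, 3, 4)          # (H/p, W/p, p, p, C)
    return patches.reshape(-1, patch * patch * C)       # one flattened vector per patch

rng = np.random.default_rng(0)
img = rng.random((224, 224, 3))
tokens = image_to_patch_tokens(img)                     # shape (196, 768)
W_proj = rng.normal(size=(768, 512))                    # assumed projection to model width 512
embedded = tokens @ W_proj                              # these rows are treated like text tokens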
Conformer and later Whisper follow the same pattern for speech recognition, first turning the speech signal into a spectrogram, which is then treated like an image, i.e., broken down into a series of patches, turned into vectors and treated like tokens in a standard transformer. Perceivers are a variant of Transformers designed for multimodality. For image generation, two notable architectures are DALL-E 1 (2021) and Parti (2022). Unlike later models, DALL-E is not a diffusion model. Instead, it uses a decoder-only Transformer that autoregressively generates text, followed by the token representation of an image, which is then converted by a variational autoencoder to an image. Parti is an encoder-decoder Transformer, where the encoder processes a text prompt, and the decoder generates a token representation of an image. Applications. The transformer has had great success in natural language processing (NLP). Many large language models such as GPT-2, GPT-3, GPT-4, AlbertAGPT, Claude, BERT, XLNet, RoBERTa and ChatGPT demonstrate the ability of transformers to perform a wide variety of NLP-related subtasks and their related real-world or practical applications. Beyond traditional NLP, the transformer architecture has had success in other applications as well. Notes. References. Further reading.
[ { "math_id": 0, "text": "\\text{Loss} = -\\sum_{t\\in\\text{masked tokens}}\\ln(\\text{probability of }t\\text{ conditional on its context}) " }, { "math_id": 1, "text": "xW" }, { "math_id": 2, "text": "n_{\\text{vocabulary}}" }, { "math_id": 3, "text": "M" }, { "math_id": 4, "text": "3" }, { "math_id": 5, "text": "[0, 0, 0, 1, 0, 0, \\dots]" }, { "math_id": 6, "text": "\\mathrm{Embed}(3) = [0, 0, 0, 1, 0, 0, \\dots]M" }, { "math_id": 7, "text": "d_{\\text{emb}}" }, { "math_id": 8, "text": "\\mathrm{UnEmbed}(x) = \\mathrm{softmax}(xW + b)" }, { "math_id": 9, "text": "(d_{\\text{emb}}, n_{\\text{vocabulary}})" }, { "math_id": 10, "text": "f: \\R \\to \\R^d; d \\in \\mathbb{Z}, d > 0" }, { "math_id": 11, "text": "d" }, { "math_id": 12, "text": "(f(t)_{2k}, f(t)_{2k+1}) = (\\sin(\\theta), \\cos(\\theta)) \\quad \\forall k \\in \\{0, 1, \\ldots, d/2 - 1\\}" }, { "math_id": 13, "text": "\\theta = \\frac{t}{r^k}, r = N^{2/d}" }, { "math_id": 14, "text": "N" }, { "math_id": 15, "text": "k" }, { "math_id": 16, "text": "N=10000" }, { "math_id": 17, "text": "f: \\R \\to \\mathbb C^{d/2}" }, { "math_id": 18, "text": "f(t) = \\left(e^{it/r^k}\\right)_{k=0, 1, \\ldots, \\frac d 2 - 1}" }, { "math_id": 19, "text": "r = N^{2/d}" }, { "math_id": 20, "text": "f(t + \\Delta t) = \\mathrm{diag}(f(\\Delta t)) f(t)" }, { "math_id": 21, "text": "\\Delta t \\in \\R" }, { "math_id": 22, "text": "\\sum_j c_j f(t + \\Delta t_j) = \\left(\\sum_j c_j \\,\\mathrm{diag}(f(\\Delta t_j))\\right) f(t)" }, { "math_id": 23, "text": "c_j" }, { "math_id": 24, "text": "\\mathrm{FFN}(x) = \\phi(xW^{(1)} + b^{(1)})W^{(2)} + b^{(2)}" }, { "math_id": 25, "text": "\\phi" }, { "math_id": 26, "text": "d_{\\text{ffn}} = 4 d_{\\text{emb}}" }, { "math_id": 27, "text": "W^Q" }, { "math_id": 28, "text": "W^K" }, { "math_id": 29, "text": "W^V" }, { "math_id": 30, "text": "\\ell_{\\text{seq, query}}" }, { "math_id": 31, "text": "d_{\\text{emb, query}}" }, { "math_id": 32, "text": "x_{i, \\text{query}}" }, { "math_id": 33, "text": "q_i = x_{i, \\text{query}} W^Q" }, { "math_id": 34, "text": "Q = X_{\\text{query}} W^Q" }, { "math_id": 35, "text": "K = X_{\\text{key}} W^K" }, { "math_id": 36, "text": "V = X_{\\text{value}} W^V" }, { "math_id": 37, "text": "W^Q, W^K, W^V " }, { "math_id": 38, "text": "d_{\\text{emb, query}}= d_{\\text{query}}" }, { "math_id": 39, "text": "a_{ij}" }, { "math_id": 40, "text": "i" }, { "math_id": 41, "text": "j" }, { "math_id": 42, "text": "q_i" }, { "math_id": 43, "text": "k_j" }, { "math_id": 44, "text": "\\sqrt{d_k}" }, { "math_id": 45, "text": "q_i\\cdot k_j" }, { "math_id": 46, "text": "q_j\\cdot k_i" }, { "math_id": 47, "text": "Q" }, { "math_id": 48, "text": "K" }, { "math_id": 49, "text": "V" }, { "math_id": 50, "text": "k_i" }, { "math_id": 51, "text": "v_i" }, { "math_id": 52, "text": "\\begin{align}\n\\text{Attention}(Q, K, V) = \\text{softmax}\\left(\\frac{QK^\\mathrm{T}}{\\sqrt{d_k}}\\right)V\n\\end{align}" }, { "math_id": 53, "text": "d_{\\text{query}}" }, { "math_id": 54, "text": "d_{\\text{key}}" }, { "math_id": 55, "text": "d_{\\text{value}}" }, { "math_id": 56, "text": "d_{\\text{head}}" }, { "math_id": 57, "text": "\\ell_{\\text {seq, key}}=\\ell_{\\text {seq, value}}, \\;d_{\\text {query}}=d_{\\text {key}}, \\; d_{\\text {value}}=d_{\\text {head}}\n" }, { "math_id": 58, "text": "X_{\\text{query}} = X_{\\text{key}} = X_{\\text{value}} " }, { "math_id": 59, "text": "X_{\\text{query}} \\neq X_{\\text{key}} = X_{\\text{value}} " }, { "math_id": 60, "text": "\\left( W^Q, W^K, W^V \\right)" }, { 
"math_id": 61, "text": "\\text{MultiheadedAttention}(Q, K, V) = \\text{Concat}_{i \\in [n_{\\text{heads}}]}(\\text{Attention}(XW^Q_i, XW^K_i, XW^V_i)) W^O" }, { "math_id": 62, "text": "X" }, { "math_id": 63, "text": "W^Q_i, W^K_i, W^V_i" }, { "math_id": 64, "text": "W^O" }, { "math_id": 65, "text": "d_{\\text{emb}} = 768, n_{\\text{head}} = 12, d_{\\text{head}} = 64" }, { "math_id": 66, "text": "12 \\times 64 = 768" }, { "math_id": 67, "text": "W^O \\in \\R^{(64\\times 12) \\times 768}" }, { "math_id": 68, "text": "t" }, { "math_id": 69, "text": "t+1" }, { "math_id": 70, "text": "-\\infty" }, { "math_id": 71, "text": "0" }, { "math_id": 72, "text": "\\begin{align}\n\\text{MaskedAttention}(Q, K, V) = \\text{softmax}\\left(M + \\frac{QK^\\mathrm{T}}{\\sqrt{d_k}}\\right)V\n\\end{align}" }, { "math_id": 73, "text": "M_{\\text{causal}} = \\begin{bmatrix}\n0 & -\\infty & -\\infty & \\dots & -\\infty \\\\\n0 & 0 & -\\infty & \\dots & -\\infty \\\\\n0 & 0 & 0 & \\dots & -\\infty \\\\\n\\vdots & \\vdots & \\vdots & \\ddots & \\vdots \\\\\n0 & 0 & 0 & \\dots & 0\n\\end{bmatrix}\n" }, { "math_id": 74, "text": "P M_{\\text{causal}} P^{-1}\n" }, { "math_id": 75, "text": "P\n" }, { "math_id": 76, "text": "\\begin{aligned}\n\\text{given input vectors } & h_0, h_1, \\dots\\\\\n\\text{combine them into a matrix } H &= \\begin{bmatrix} h_0 \\\\ h_1 \\\\ \\vdots \\end{bmatrix} \\\\\n\\text{EncoderLayer}(H) &= \\begin{bmatrix} \\text{FFN}(\\text{MultiheadedAttention}(H, H, H)_0) \\\\ \\text{FFN}(\\text{MultiheadedAttention}(H, H, H)_1) \\\\ \\vdots \\end{bmatrix} \\\\\n\n\\end{aligned}" }, { "math_id": 77, "text": "\\text{FFN}" }, { "math_id": 78, "text": "\\text{EncoderLayer}(H) = \\text{FFN}(\\text{MultiheadedAttention}(H, H, H))\n" }, { "math_id": 79, "text": "\\begin{aligned}\nH' &= \\text{MaskedMultiheadedAttention}(H, H, H) \\\\\n\\text{DecoderLayer}(H) &=\\text{FFN}(\\text{MultiheadedAttention}(H', H^E, H^E))\n\\end{aligned}\n" }, { "math_id": 80, "text": "H^E\n" }, { "math_id": 81, "text": "\\mathrm{LayerNorm}(x + \\mathrm{Sublayer}(x))" }, { "math_id": 82, "text": "\\mathrm{Sublayer}(x)" }, { "math_id": 83, "text": "x + \\mathrm{Sublayer}(\\mathrm{LayerNorm}(x))" }, { "math_id": 84, "text": "M_{\\text{prefixLM}} = \\begin{bmatrix}\n\\mathbf{0} & 0, -\\infty \\\\\n\\mathbf{0} & M_{\\text{causal}} \n\\end{bmatrix}\n" }, { "math_id": 85, "text": "[(x^{(1)}_1, x^{(2)}_1), (x^{(1)}_2, x^{(2)}_2), (x^{(1)}_3, x^{(2)}_3), ...]" }, { "math_id": 86, "text": "\\theta" }, { "math_id": 87, "text": "\\text{RoPE}\\big(x^{(1)}_m, x^{(2)}_m, m\\big) =\n\\begin{pmatrix} \\cos m \\theta & - \\sin m \\theta \\\\\n\\sin m \\theta & \\cos m \\theta \\end{pmatrix}\n\\begin{pmatrix} x^{(1)}_m \\\\ x^{(2)}_m \\\\ \\end{pmatrix} = \\begin{pmatrix} x^{(1)}_m \\cos m\\theta - x^{(2)}_m \\sin m \\theta \\\\ x^{(2)}_m \\cos m\\theta + x^{(1)}_m \\sin m \\theta \\\\ \\end{pmatrix}\n" }, { "math_id": 88, "text": "z_m := x^{(1)}_m + i x^{(2)}_m" }, { "math_id": 89, "text": "\\text{RoPE}\\big(z_m, m\\big) = e^{i m\\theta} z_m\n" }, { "math_id": 90, "text": "2n" }, { "math_id": 91, "text": "\\theta^{(1)}, ..., \\theta^{(n)}" }, { "math_id": 92, "text": "\n\\text{RoPE}\\big(x, m\\big)^T\\text{RoPE}\\big(y, n\\big)\n=\n\\text{RoPE}\\big(x, m+k\\big)^T\\text{RoPE}\\big(y, n+k\\big)\n" }, { "math_id": 93, "text": "\\begin{align}\n\\text{Attention}(Q, K, V) = \\text{softmax}\\left(\\frac{QK^\\mathrm{T}}{\\sqrt{d_k}} + s B\\right)V\n\\end{align}" }, { "math_id": 94, "text": "s" }, { "math_id": 95, "text": "B" }, { "math_id": 96, "text": "B 
= \\begin{pmatrix}\n0 & 1 & 2 & 3 & \\cdots \\\\\n-1 & 0 & 1 & 2 & \\cdots \\\\\n-2 & -1 & 0 & 1 & \\cdots \\\\\n-3 & -2 & -1 & 0 & \\cdots \\\\\n\\vdots & \\vdots & \\vdots & \\vdots & \\ddots \\\\\n\\end{pmatrix}\n" }, { "math_id": 97, "text": "B_{i, j} = j - i" }, { "math_id": 98, "text": "\\begin{align}\n\\text{Attention}(Q, K, V) = \\text{softmax}\\left(\\frac{QK^\\mathrm{T}}{\\sqrt{d_k}} + B\\right)V\n\\end{align}" }, { "math_id": 99, "text": "B_{i, j} = B_{i', j'}" }, { "math_id": 100, "text": "i-j = i'-j'" }, { "math_id": 101, "text": "\\text{MultiheadedAttention}(Q, K, V) = \\text{Concat}_{i \\in [n_{\\text{heads}}]}\\left(\\text{Attention}(XW^Q_i, XW^K_i, XW^V_i)\\right) W^O" }, { "math_id": 102, "text": "W^K, W^V" }, { "math_id": 103, "text": "\\text{MultiQueryAttention}(Q, K, V) = \\text{Concat}_{i \\in [n_{\\text{heads}}]}\\left(\\text{Attention}(XW^Q_i, XW^K, XW^V)\\right) W^O" }, { "math_id": 104, "text": "x_1, x_2, ..., x_{512}" }, { "math_id": 105, "text": "x_t" }, { "math_id": 106, "text": "\\tilde{x_1}, \\tilde{x_2}, \\tilde{x_3}, \\tilde{x_4}" }, { "math_id": 107, "text": "\\tilde{x_1}" }, { "math_id": 108, "text": "\\tilde{x_2}" }, { "math_id": 109, "text": "x_3" }, { "math_id": 110, "text": "\\tilde{x_3}" }, { "math_id": 111, "text": "\\tilde{x_4}" }, { "math_id": 112, "text": "O(N^2)" }, { "math_id": 113, "text": "O(N\\ln N)" }, { "math_id": 114, "text": "O(N)" }, { "math_id": 115, "text": "\\varphi(x) = \\frac{1}{\\sqrt D}[\\cos\\langle w_1, x\\rangle, \\sin\\langle w_1, x\\rangle, \\cdots \\cos\\langle w_D, x\\rangle, \\sin\\langle w_D, x\\rangle]^T" }, { "math_id": 116, "text": "w_1, ..., w_D" }, { "math_id": 117, "text": "N(0, \\sigma^2 I)" }, { "math_id": 118, "text": "\\mathbb E[\\langle \\varphi(x), \\varphi(y)\\rangle] = e^{-\\frac{\\|x-y\\|^2}{2\\sigma^2}}" }, { "math_id": 119, "text": "e^{\\langle x, y\\rangle/\\sigma^2} = \\mathbb E[\\langle e^{\\|x\\|^2/2\\sigma^2} \\varphi(x), e^{\\|y\\|^2/2\\sigma^2}\\varphi(y)\\rangle] \\approx \\langle e^{\\|x\\|^2/2\\sigma^2} \\varphi(x), e^{\\|y\\|^2/2\\sigma^2}\\varphi(y)\\rangle " }, { "math_id": 120, "text": "\n\\text{Attention}(q, K, V) = \\text{softmax}\\left(\\frac{qK^\\mathrm{T}}{\\sqrt{d_k}}\\right)V\n\n\\approx \\frac{\\varphi(q)^T \\sum_i e^{\\|k_i\\|^2/2\\sigma^2}\\varphi(k_i) v_i^T}{\\varphi(q)^T \\sum_i e^{\\|k_i\\|^2/2\\sigma^2}\\varphi(k_i)}" }, { "math_id": 121, "text": "\\sigma = d_K^{1/4}" }, { "math_id": 122, "text": "\\varphi(k_i) v_i^T" }, { "math_id": 123, "text": "\\text{Attention}(Q, K, V) = \\text{softmax}\\left(\\frac{QK^\\mathrm{T}}{\\sqrt{d_k}}\\right)V \\approx Q(K^TV/\\sqrt{d_k})\n" } ]
https://en.wikipedia.org/wiki?curid=61603971
6160804
Polyharmonic spline
In applied mathematics, polyharmonic splines are used for function approximation and data interpolation. They are very useful for interpolating and fitting scattered data in many dimensions. Special cases include thin plate splines and natural cubic splines in one dimension. Definition. A polyharmonic spline is a linear combination of polyharmonic radial basis functions (RBFs) denoted by formula_0 plus a polynomial term: where The polynomial with the coefficients formula_10 improves fitting accuracy for polyharmonic smoothing splines and also improves extrapolation away from the centers formula_11 See figure below for comparison of splines with polynomial term and without polynomial term. The polyharmonic RBFs are of the form: formula_12 Other values of the exponent formula_13 are not useful (such as formula_14), because a solution of the interpolation problem might not exist. To avoid problems at formula_15 (since formula_16), the polyharmonic RBFs with the natural logarithm might be implemented as: formula_17 or, more simply adding a continuity extension in formula_15 formula_18 The weights formula_19 and formula_20 are determined such that the function interpolates formula_6 given points formula_21 (for formula_22) and fulfills the formula_9 orthogonality conditions formula_23 All together, these constraints are equivalent to the symmetric linear system of equations where formula_24 In order for this system of equations to have a unique solution, formula_25 must be full rank. formula_25 is full rank for very mild conditions on the input data. For example, in two dimensions, three centers forming a non-degenerate triangle ensure that formula_25 is full rank, and in three dimensions, four centers forming a non-degenerate tetrahedron ensure that B is full rank. As explained later, the linear transformation resulting from the restriction of the domain of the linear transformation formula_26 to the null space of formula_27 is positive definite. This means that if formula_25 is full rank, the system of equations (2) always has a unique solution and it can be solved using a linear solver specialised for symmetric matrices. The computed weights allow evaluation of the spline for any formula_28 using equation (1). Many practical details of implementing and using polyharmonic splines are explained in Fasshauer. In Iske polyharmonic splines are treated as special cases of other multiresolution methods in scattered data modelling. Discussion. The main advantage of polyharmonic spline interpolation is that usually very good interpolation results are obtained for scattered data without performing any "tuning", so automatic interpolation is feasible. This is not the case for other radial basis functions. For example, the Gaussian function formula_29 needs to be tuned, so that formula_13 is selected according to the underlying grid of the independent variables. If this grid is non-uniform, a proper selection of formula_13 to achieve a good interpolation result is difficult or impossible. Main disadvantages are: Fast construction and evaluation methods. One straightforward approach to speeding up model construction and evaluation is to use a subset of formula_13 nearest interpolation nodes to build a local model every time we evaluate the spline. As a result, the total time needed for model construction and evaluation at formula_32 points changes from formula_35 to formula_36. This can yield better timings if formula_13 is much less than formula_6. 
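This local variant can be sketched briefly. The following is an illustrative NumPy example for 2-D data with the thin plate RBF and a linear polynomial term, solving the small symmetric system described above for the formula_13 nearest centers at each evaluation point; with formula_13 equal to the total number of centers it reduces to the usual global spline. The helper names are assumptions made for this sketch.

import numpy as np

def phi(r):
    # thin plate spline RBF r^2 ln(r), with value 0 at r = 0
    return np.where(r > 0, r**2 * np.log(np.maximum(r, 1e-300)), 0.0)

def local_tps_eval(x, centers, values, k=20):
    # evaluate a polyharmonic (thin plate) spline at x using the k nearest centers
    d = np.linalg.norm(centers - x, axis=1)
    idx = np.argsort(d)[:min(k, len(centers))]
    c, f = centers[idx], values[idx]
    n = len(c)
    A = phi(np.linalg.norm(c[:, None, :] - c[None, :, :], axis=2))   # RBF block
    B = np.hstack([np.ones((n, 1)), c])                              # polynomial block [1, x, y]
    # symmetric saddle-point system for the weights w and polynomial coefficients v
    M = np.block([[A, B], [B.T, np.zeros((3, 3))]])
    rhs = np.concatenate([f, np.zeros(3)])
    wv = np.linalg.solve(M, rhs)
    w, v = wv[:n], wv[n:]
    return w @ phi(np.linalg.norm(c - x, axis=1)) + v[0] + v[1:] @ x

# toy usage: scattered 2-D data
rng = np.random.default_rng(0)
centers = rng.random((200, 2))
values = np.sin(centers[:, 0] * 6) * np.cos(centers[:, 1] * 6)
print(local_tps_eval(np.array([0.5, 0.5]), centers, values, k=30))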
Such an approach is advocated by some software libraries, the most notable being scipy.interpolate.RBFInterpolator. The main drawback is that it introduces small discontinuities in the spline and requires problem-specific tuning: a proper choice of the neighbors count, formula_13. Recently, methods have been developed to overcome the aforementioned difficulties without sacrificing main advantages of polyharmonic splines. First, a bunch of methods for fast formula_37 evaluation were proposed: Second, an accelerated model construction by applying an iterative solver to an ACBF-preconditioned linear system was proposed by Brown et al. This approach reduces running time from formula_40 to formula_30, and further to formula_41 when combined with accelerated evaluation techniques. The approaches above are often employed by commercial geospatial data analysis libraries and by some open source implementations (e.g. ALGLIB). Sometimes domain decomposition methods are used to improve asymptotic behavior, reducing memory requirements from formula_30 to formula_42, thus making polyharmonic splines suitable for datasets with more than 1.000.000 points. Reason for the name "polyharmonic". A polyharmonic equation is a partial differential equation of the form formula_43 for any natural number formula_44, where formula_45 is the Laplace operator. For example, the biharmonic equation is formula_46 and the triharmonic equation is formula_47. All the polyharmonic radial basis functions are solutions of a polyharmonic equation (or more accurately, a modified polyharmonic equation with a Dirac delta function on the right hand side instead of 0). For example, the thin plate radial basis function is a solution of the modified 2-dimensional biharmonic equation. Applying the 2D Laplace operator (formula_48) to the thin plate radial basis function formula_49 either by hand or using a computer algebra system shows that formula_50. Applying the Laplace operator to formula_51 (this is formula_52) yields 0. But 0 is not exactly correct. To see this, replace formula_53 with formula_54 (where formula_55 is some small number tending to 0). The Laplace operator applied to formula_56 yields formula_57. For formula_58 the right hand side of this equation approaches infinity as formula_55 approaches 0. For any other formula_59, the right hand side approaches 0 as formula_55 approaches 0. This indicates that the right hand side is a Dirac delta function. A computer algebra system will show that formula_60 So the thin plate radial basis function is a solution of the equation formula_61. Applying the 3D Laplacian (formula_62) to the biharmonic RBF formula_63 yields formula_64 and applying the 3D formula_65 operator to the triharmonic RBF formula_66 yields formula_67. Letting formula_68 and computing formula_69 again indicates that the right hand side of the PDEs for the biharmonic and triharmonic RBFs are Dirac delta functions. Since formula_70 the exact PDEs satisfied by the biharmonic and triharmonic RBFs are formula_71 and formula_72. Polyharmonic smoothing splines. Polyharmonic splines minimize where formula_73 is some box in formula_74 containing a neighborhood of all the centers, formula_75 is some positive constant, and formula_76 is the vector of all formula_44th order partial derivatives of formula_77 For example, in 2D formula_78 and formula_79 and in 3D formula_80. In 2D formula_81 making the integral the simplified thin plate energy functional. 
To show that polyharmonic splines minimize equation (3), the fitting term must be transformed into an integral using the definition of the Dirac delta function: formula_82 So equation (3) can be written as the functional formula_83 where formula_84 is a multi-index that ranges over all partial derivatives of order formula_44 for formula_85 In order to apply the Euler–Lagrange equation for a single function of multiple variables and higher order derivatives, the quantities formula_86 and formula_87 are needed. Inserting these quantities into the E−L equation shows that A weak solution formula_88 of (4) satisfies for all smooth test functions formula_89 that vanish outside of formula_90 A weak solution of equation (4) will still minimize (3) while getting rid of the delta function through integration. Let formula_91 be a polyharmonic spline as defined by equation (1). The following calculations will show that formula_91 satisfies (5). Applying the formula_92 operator to equation (1) yields formula_93 where formula_94 formula_95 and formula_96 So (5) is equivalent to The only possible solution to (6) for all test functions formula_89 is (which implies interpolation if formula_97). Combining the definition of formula_98 in equation (1) with equation (7) results in almost the same linear system as equation (2) except that the matrix formula_99 is replaced with formula_100 where formula_101 is the formula_102 identity matrix. For example, for the 3D triharmonic RBFs, formula_26 is replaced with formula_103 Explanation of additional constraints. In (2), the bottom half of the system of equations (formula_104) is given without explanation. The explanation first requires deriving a simplified form of formula_105 when formula_73 is all of formula_85 First, require that formula_106 This ensures that all derivatives of order formula_44 and higher of formula_107 vanish at infinity. For example, let formula_108 and formula_109 and formula_0 be the triharmonic RBF. Then formula_110 (considering formula_0 as a mapping from formula_111 to formula_112). For a given center formula_113 formula_114 On a line formula_115 for arbitrary point formula_116 and unit vector formula_117 formula_118 Dividing both numerator and denominator of this by formula_119 shows that formula_120 a quantity independent of the center formula_121 So on the given line, formula_122 It is not quite enough to require that formula_123 because in what follows it is necessary for formula_124 to vanish at infinity, where formula_125 and formula_126 are multi-indices such that formula_127 For triharmonic formula_128 formula_129 (where formula_130 and formula_131 are the weights and centers of formula_89) is always a sum of total degree 5 polynomials in formula_132 formula_133 and formula_134 divided by the square root of a total degree 8 polynomial. Consider the behavior of these terms on the line formula_115 as formula_135 approaches infinity. The numerator is a degree 5 polynomial in formula_136 Dividing numerator and denominator by formula_137 leaves the degree 4 and 5 terms in the numerator and a function of formula_138 only in the denominator. A degree 5 term divided by formula_137 is a product of five formula_139 coordinates and formula_136 The formula_140 (and formula_141) constraint makes this vanish everywhere on the line. 
A degree 4 term divided by formula_137 is either a product of four formula_139 coordinates and an formula_142 coordinate or a product of four formula_139 coordinates and a single formula_143 or formula_144 coordinate. The formula_140 constraint makes the first type of term vanish everywhere on the line. The additional constraints formula_145 will make the second type of term vanish. Now define the inner product of two functions formula_146 defined as a linear combination of polyharmonic RBFs formula_147 with formula_148 and formula_149 as formula_150 Integration by parts shows that For example, let formula_151 and formula_152 Then Integrating the first term of this by parts once yields formula_153 since formula_154 vanishes at infinity. Integrating by parts again results in formula_155 So integrating by parts twice for each term of (9) yields formula_156 Since formula_157 (8) shows that formula_158 So if formula_159 and formula_160 Now the origin of the constraints formula_104 can be explained. Here formula_25 is a generalization of the formula_25 defined above to possibly include monomials up to degree formula_161 In other words, formula_162 where formula_163 is a column vector of all degree formula_164 monomials of the coordinates of formula_11 The top half of (2) is equivalent to formula_165 So to obtain a smoothing spline, one should minimize the scalar field formula_166 defined by formula_167 The equations formula_168 and formula_169 (where formula_170 denotes row formula_171 of formula_26) are equivalent to the two systems of linear equations formula_172 and formula_173 Since formula_26 is invertible, the first system is equivalent to formula_174 So the first system implies the second system is equivalent to formula_175 Just as in the previous smoothing spline coefficient derivation, the top half of (2) becomes formula_176 This derivation of the polyharmonic smoothing spline equation system did not assume the constraints necessary to guarantee that formula_177 But the constraints necessary to guarantee this, formula_178 and formula_179 are a subset of formula_180 which is true for the critical point formula_181 of formula_182 So formula_183 is true for the formula_91 formed from the solution of the polyharmonic smoothing spline equation system. Because the integral is positive for all formula_184 the linear transformation resulting from the restriction of the domain of linear transformation formula_26 to formula_181 such that formula_185 must be positive definite. This fact enables transforming the polyharmonic smoothing spline equation system to a symmetric positive definite system of equations that can be solved twice as fast using the Cholesky decomposition. Examples. The next figure shows the interpolation through four points (marked by "circles") using different types of polyharmonic splines. The "curvature" of the interpolated curves grows with the order of the spline and the extrapolation at the left boundary ("x" &lt; 0) is reasonable. The figure also includes the radial basis functions "φ" = exp(−"r"2) which gives a good interpolation as well. Finally, the figure includes also the non-polyharmonic spline phi = r2 to demonstrate, that this radial basis function is not able to pass through the predefined points (the linear equation has no solution and is solved in a least squares sense). 
The next figure shows the same interpolation as in the first figure, with the only exception that the points to be interpolated are scaled by a factor of 100 (and the case "φ" = "r"2 is no longer included). Since "φ" = (scale·"r")"k" = (scale"k")·"r""k", the factor (scale"k") can be extracted from the matrix A of the linear equation system and therefore the solution is not influenced by the scaling. This is different for the logarithmic form of the spline, although the scaling does not have much influence. This analysis is reflected in the figure, where the interpolation shows little difference. Note that for other radial basis functions, such as "φ" = exp(−"kr"2) with "k" = 1, the interpolation is no longer reasonable and it would be necessary to adapt "k". The next figure shows the same interpolation as in the first figure, with the only exception that the polynomial term of the function is not taken into account (and the case "φ" = "r"2 is no longer included). As can be seen from the figure, the extrapolation for "x" < 0 is no longer as "natural" as in the first figure for some of the basis functions. This indicates that the polynomial term is useful if extrapolation occurs. References. External links. Computer Code
[ { "math_id": 0, "text": "\\varphi" }, { "math_id": 1, "text": "\\mathbf{x} = [x_1 \\ x_2 \\ \\cdots \\ x_{d}]^{\\textrm{T}}" }, { "math_id": 2, "text": "\\textrm{T}" }, { "math_id": 3, "text": "\\mathbf{x}" }, { "math_id": 4, "text": "d" }, { "math_id": 5, "text": "\\mathbf{c}_i = [c_{i,1} \\ c_{i,2} \\ \\cdots \\ c_{i,d}]^{\\textrm{T}}" }, { "math_id": 6, "text": "N" }, { "math_id": 7, "text": "\\mathbf{w} = [w_1 \\ w_2 \\ \\cdots \\ w_N]^{\\textrm{T}}" }, { "math_id": 8, "text": "\\mathbf{v} = [v_1 \\ v_2 \\ \\cdots \\ v_{d+1}]^{\\textrm{T}}" }, { "math_id": 9, "text": "d+1" }, { "math_id": 10, "text": "\\mathbf{v}" }, { "math_id": 11, "text": "\\mathbf{c}_i." }, { "math_id": 12, "text": " \n\\begin{align}\n \\varphi(r) &= \\begin{cases}\n r^k & \\text{with } k=1,3,5,\\ldots, \\\\\n r^k \\ln(r) & \\text{with } k=2,4,6,\\ldots\n \\end{cases} \\\\[5mm]\n r &= |\\mathbf{x} - \\mathbf{c}_i| \n = \\sqrt{ (\\mathbf{x} - \\mathbf{c}_i)^\\mathrm{T} \\, (\\mathbf{x} - \\mathbf{c}_i) }.\n \\end{align} \n" }, { "math_id": 13, "text": "k" }, { "math_id": 14, "text": " \\varphi(r) = r^2 " }, { "math_id": 15, "text": "r=0" }, { "math_id": 16, "text": "\\log(0) = -\\infty" }, { "math_id": 17, "text": "\n\\varphi(r) = \\begin{cases}\n r^{k-1} \\ln(r^r) & \\text{for } r < 1, \\quad \\text {(this works because }0^0\\text{ is defined)} \\\\\n r^k \\ln(r) & \\text{for } r \\ge 1.\n \\end{cases}\n" }, { "math_id": 18, "text": "\n\\varphi(r) = \\begin{cases}\n 0 & \\text{for } r< \\epsilon, \\quad \\text{(for some very small value of }\\epsilon\\text{, e.g. if using floating point numbers in double precisions, } \\epsilon=10^{-200}\\text{)}\\\\\n r^k \\ln(r) & \\text{for } r \\ge \\epsilon.\n \\end{cases}\n" }, { "math_id": 19, "text": "w_i" }, { "math_id": 20, "text": "v_j" }, { "math_id": 21, "text": "(\\mathbf{c}_i, f_i)" }, { "math_id": 22, "text": "i=1,2,\\ldots,N" }, { "math_id": 23, "text": " \n \\sum_{i=1}^N w_i=0, \\;\\; \\sum_{i=1}^N w_i \\mathbf{c}_i=\\mathbf{0}.\n" }, { "math_id": 24, "text": " \n A_{i,j} = \\varphi(|\\mathbf{c}_i - \\mathbf{c}_j|), \\quad\n B = \n \\begin{bmatrix}\n 1 & 1 & \\cdots & 1 \\\\\n \\mathbf{c}_1 & \\mathbf{c}_2 & \\cdots & \\mathbf{c}_N\n \\end{bmatrix}^{\\textrm{T}}, \\quad\n \\mathbf{f} = [f_1, f_2, \\ldots, f_N]^{\\textrm{T}}.\n" }, { "math_id": 25, "text": "B" }, { "math_id": 26, "text": "A" }, { "math_id": 27, "text": " B^{\\textrm{T}}" }, { "math_id": 28, "text": "\\mathbf{x}\\in\\mathbb{R}^d" }, { "math_id": 29, "text": "e^{-k\\cdot r^2}" }, { "math_id": 30, "text": "O(N^2)" }, { "math_id": 31, "text": "O(N^3)." 
}, { "math_id": 32, "text": "M" }, { "math_id": 33, "text": "O(MN)" }, { "math_id": 34, "text": "N," }, { "math_id": 35, "text": "O(N^3+MN)" }, { "math_id": 36, "text": "O(k^3*M)" }, { "math_id": 37, "text": "O(\\log N)" }, { "math_id": 38, "text": "r^{2k-1}" }, { "math_id": 39, "text": "r^{2k}\\log r" }, { "math_id": 40, "text": "O(N^3)" }, { "math_id": 41, "text": "O(N\\log N)" }, { "math_id": 42, "text": "O(N)" }, { "math_id": 43, "text": "\\Delta^m f = 0" }, { "math_id": 44, "text": "m" }, { "math_id": 45, "text": "\\Delta" }, { "math_id": 46, "text": "\\Delta^2 f = 0" }, { "math_id": 47, "text": "\\Delta^3 f = 0" }, { "math_id": 48, "text": "\\Delta = \\partial_{xx} + \\partial_{yy}" }, { "math_id": 49, "text": "f_{\\text{tp}}(x,y) = (x^2+y^2) \\log \\sqrt{x^2+y^2}" }, { "math_id": 50, "text": "\\Delta f_{\\text{tp}} = 4 + 4\\log r" }, { "math_id": 51, "text": "\\Delta f_{\\text{tp}}" }, { "math_id": 52, "text": "\\Delta^2 f_{\\text{tp}}" }, { "math_id": 53, "text": "r^2=x^2+y^2" }, { "math_id": 54, "text": "\\rho^2 = x^2+y^2+h^2" }, { "math_id": 55, "text": "h" }, { "math_id": 56, "text": "4 \\log \\rho" }, { "math_id": 57, "text": "\\Delta^2 f_{\\text{tp}} = 8h^2 / \\rho^4" }, { "math_id": 58, "text": "(x,y)=(0,0)," }, { "math_id": 59, "text": "(x,y)" }, { "math_id": 60, "text": "\\lim_{h \\to 0}\\int_{-\\infty}^\\infty \\int_{-\\infty}^\\infty 8h^2/(x^2+y^2+h^2)^2 \\,dx \\,dy = 8\\pi." }, { "math_id": 61, "text": "\\Delta^2 f_{\\text{tp}} = 8\\pi\\delta(x,y)" }, { "math_id": 62, "text": "\\Delta = \\partial_{xx} + \\partial_{yy} + \\partial_{zz}" }, { "math_id": 63, "text": "f_{\\text{bi}}(x,y,z)=\\sqrt{x^2+y^2+z^2}" }, { "math_id": 64, "text": "\\Delta f_{\\text{bi}} = 2/r" }, { "math_id": 65, "text": "\\Delta^2" }, { "math_id": 66, "text": "f_{\\text{tri}}(x,y,z) = (x^2+y^2+z^2)^{3/2}" }, { "math_id": 67, "text": "\\Delta^2 f_{\\text{tri}} = 24/r" }, { "math_id": 68, "text": "\\rho^2 = x^2+y^2+z^2+h^2" }, { "math_id": 69, "text": "\\Delta(1/\\rho) = -3h^2 / \\rho^5" }, { "math_id": 70, "text": "\\lim_{h \\to 0}\\int_{-\\infty}^\\infty\\int_{-\\infty}^\\infty\\int_{-\\infty}^\\infty -3h^2/(x^2+y^2+z^2+h^2)^{5/2} \\,dx \\,dy \\,dz = -4\\pi," }, { "math_id": 71, "text": "\\Delta^2 f_{\\text{bi}} = -8\\pi\\delta(x,y,z)" }, { "math_id": 72, "text": "\\Delta^3 f_{\\text{tri}} = -96\\pi\\delta(x,y,z)" }, { "math_id": 73, "text": "\\mathcal{B}" }, { "math_id": 74, "text": "\\mathbb{R}^d" }, { "math_id": 75, "text": "\\lambda" }, { "math_id": 76, "text": "\\nabla^m f" }, { "math_id": 77, "text": "f." }, { "math_id": 78, "text": "\\nabla^1 f = (f_x\\ f_y)" }, { "math_id": 79, "text": "\\nabla^2 f = (f_{xx} \\ f_{xy} \\ f_{yx} \\ f_{yy})" }, { "math_id": 80, "text": "\\nabla^2 f = (f_{xx} \\ f_{xy} \\ f_{xz} \\ f_{yx} \\ f_{yy} \\ f_{yz} \\ f_{zx} \\ f_{zy} \\ f_{zz})" }, { "math_id": 81, "text": "|\\nabla^2 f|^2 = f_{xx}^2 + 2f_{xy}^2 + f_{yy}^2," }, { "math_id": 82, "text": "\\sum_{i=1}^N (f(\\mathbf{c}_i) - f_i)^2 = \\int_{\\mathcal{B}}\\sum_{i=1}^N (f(\\mathbf{x}) - f_i)^2 \\delta(\\mathbf{x} - \\mathbf{c}_i) \\,d\\mathbf{x}." }, { "math_id": 83, "text": "J[f] = \\int_{\\mathcal{B}} F(\\mathbf{x},f, \\partial^{\\alpha_1}f, \\partial^{\\alpha_2}f, \\ldots, \\partial^{\\alpha_n}f ) \\,d\\mathbf{x} = \\int_{\\mathcal{B}} \\left[ \\sum_{i=1}^N (f(\\mathbf{x}) - f_i)^2 \\delta(\\mathbf{x} - \\mathbf{c}_i) + \\lambda |\\nabla^m f|^2 \\right]\\,d\\mathbf{x}." }, { "math_id": 84, "text": "\\alpha_i" }, { "math_id": 85, "text": "\\mathbb{R}^d." 
}, { "math_id": 86, "text": "{\\partial F \\over\\partial f} = 2\\sum_{i=1}^N (f(\\mathbf{x}) - f_i) \\delta(\\mathbf{x} - x_i)" }, { "math_id": 87, "text": "\\sum_{i=1}^n \\partial^{\\alpha_i} {\\partial F \\over\\partial (\\partial^{\\alpha_i}f)} = 2\\lambda \\Delta^m f" }, { "math_id": 88, "text": "f(\\mathbf{x})" }, { "math_id": 89, "text": "g" }, { "math_id": 90, "text": "\\mathcal{B}." }, { "math_id": 91, "text": "f" }, { "math_id": 92, "text": "\\Delta^m" }, { "math_id": 93, "text": " \\Delta^m f = \\sum_{i=1}^M w_i C_{m,d} \\delta(\\mathbf{x} - \\mathbf{c}_i)" }, { "math_id": 94, "text": "C_{2,2} = 8\\pi, " }, { "math_id": 95, "text": "C_{2,3}=-8\\pi," }, { "math_id": 96, "text": "C_{3,3}=-96\\pi." }, { "math_id": 97, "text": "\\lambda=0" }, { "math_id": 98, "text": " f " }, { "math_id": 99, "text": " A " }, { "math_id": 100, "text": " A + (-1)^m C_{m,d}\\lambda I " }, { "math_id": 101, "text": " I " }, { "math_id": 102, "text": " N\\times N " }, { "math_id": 103, "text": "A + 96\\pi\\lambda I." }, { "math_id": 104, "text": "B^{\\textrm{T}}\\mathbf{w} = 0" }, { "math_id": 105, "text": " \\int_{\\mathcal{B}} |\\nabla^m f|^2 \\,d\\mathbf{x}" }, { "math_id": 106, "text": " \\sum_{i=1}^N w_i =0. " }, { "math_id": 107, "text": " f(\\mathbf{x}) = \\sum_{i=1}^N w_i \\varphi(|\\mathbf{x} - \\mathbf{c}_i|) " }, { "math_id": 108, "text": "m=3" }, { "math_id": 109, "text": "d=3" }, { "math_id": 110, "text": "\\varphi_{zzy} = 3y(x^2+y^2) / (x^2+y^2+z^2)^{3/2}" }, { "math_id": 111, "text": "\\mathbb{R}^3" }, { "math_id": 112, "text": "\\mathbb{R}" }, { "math_id": 113, "text": "\\mathbf{P} = (P_1,P_2,P_3)," }, { "math_id": 114, "text": "\\varphi_{zzy}(\\mathbf{x} - \\mathbf{P}) = \\frac{3(y-P_2)((y-P_2)^2 + (x-P_1)^2)}{((x-P_1)^2 + (y-P_2)^2 + (z-P_3)^2)^{3/2}}." }, { "math_id": 115, "text": "\\mathbf{x} = \\mathbf{a} + t\\mathbf{b}" }, { "math_id": 116, "text": "\\mathbf{a}" }, { "math_id": 117, "text": "\\mathbf{b}," }, { "math_id": 118, "text": "\\varphi_{zzy}(\\mathbf{x} - \\mathbf{P}) = \\frac{3(a_2+b_2t - P_2)((a_2+b_2t-P_2)^2 + (a_1+b_1t-P_1)^2)}{((a_1+b_1t-P_1)^2 + (a_2+b_2t-P_2)^2 + (a_3+b_3t-P_3)^2)^{3/2}}." }, { "math_id": 119, "text": "t^3" }, { "math_id": 120, "text": " \\lim_{t \\to \\infty} \\varphi_{zyy}(\\mathbf{x}-\\mathbf{P}) = 3b_2(b_2^2 + b_1^2) / (b_1^2 + b_2^2 + b_3^2)^{3/2}," }, { "math_id": 121, "text": "\\mathbf{P}." }, { "math_id": 122, "text": " \\lim_{t\\to\\infty} f_{zyy}(\\mathbf{x}) = \\lim_{t\\to\\infty}\\sum_{i=1}^N w_i \\varphi_{zyy}(\\mathbf{x} - \\mathbf{c}_i) = \\left(\\sum_{i=1}^N w_i\\right)3b_2(b_2^2 + b_1^2) / (b_1^2 + b_2^2 + b_3^2)^{3/2} = 0. " }, { "math_id": 123, "text": "\\sum_{i=1}^N w_i =0," }, { "math_id": 124, "text": "f_{\\alpha}g_{\\beta}" }, { "math_id": 125, "text": "\\alpha" }, { "math_id": 126, "text": "\\beta" }, { "math_id": 127, "text": "|\\alpha|+|\\beta|=2m-1." }, { "math_id": 128, "text": "\\varphi," }, { "math_id": 129, "text": "w_i u_j\\varphi_\\alpha(\\mathbf{x}-\\mathbf{c}_i) \\varphi_\\beta(\\mathbf{x} - \\mathbf{d}_j)" }, { "math_id": 130, "text": "u_j" }, { "math_id": 131, "text": "\\mathbf{d}_j" }, { "math_id": 132, "text": "x," }, { "math_id": 133, "text": "y," }, { "math_id": 134, "text": "z" }, { "math_id": 135, "text": "t" }, { "math_id": 136, "text": "t." 
}, { "math_id": 137, "text": "t^4" }, { "math_id": 138, "text": "\\mathbf{b}" }, { "math_id": 139, "text": "b" }, { "math_id": 140, "text": "\\sum w = 0" }, { "math_id": 141, "text": "\\sum u=0" }, { "math_id": 142, "text": "a" }, { "math_id": 143, "text": "c_i" }, { "math_id": 144, "text": "d_j" }, { "math_id": 145, "text": " \\sum_{i=1}^N w_i \\mathbf{c}_i = 0 " }, { "math_id": 146, "text": "f,g:\\mathbb{R}^d \\to \\mathbb{R}" }, { "math_id": 147, "text": "\\varphi_{m,d}" }, { "math_id": 148, "text": "\\sum w = 0" }, { "math_id": 149, "text": " \\sum w \\mathbf{c}=0" }, { "math_id": 150, "text": "\\langle f, g \\rangle = \\int_{\\mathbb{R}^d} (\\nabla^m f) \\cdot (\\nabla^m g) \\, d\\mathbf{x}." }, { "math_id": 151, "text": " m=2 " }, { "math_id": 152, "text": " d=2. " }, { "math_id": 153, "text": "\\int_{-\\infty}^\\infty \\int_{-\\infty}^\\infty f_{xx}g_{xx} \\,dx \\,dy = \\int_{-\\infty}^\\infty f_x g_{xx}\\big|_{-\\infty}^{\\infty} \\,dy - \\int_{-\\infty}^{\\infty}\\int_{-\\infty}^{\\infty}f_x g_{xxx} \\,dx \\,dy = - \\int_{-\\infty}^\\infty \\int_{-\\infty}^\\infty f_x g_{xxx} \\,dx \\,dy" }, { "math_id": 154, "text": "f_x g_{xx}" }, { "math_id": 155, "text": "\\int_{-\\infty}^\\infty \\int_{-\\infty}^\\infty f g_{xxxx} \\,dx \\,dy." }, { "math_id": 156, "text": " \\langle f,g\\rangle = \\int_{-\\infty}^\\infty \\int_{-\\infty}^\\infty f (g_{xxxx} + 2g_{xxyy} + g_{yyyy}) \\,dx \\,dy = \\int_{-\\infty}^\\infty \\int_{-\\infty}^\\infty f (\\Delta^2 g) \\,dx \\,dy." }, { "math_id": 157, "text": "(\\Delta^m f)(\\mathbf{x}) = \\sum_{i=1}^N w_i C_{m,d}\\delta(\\mathbf{x - \\mathbf{c}_i})," }, { "math_id": 158, "text": " \n\\begin{align}\n\\langle f,f\\rangle &= (-1)^m \\int_{\\mathbb{R}^d} f(\\mathbf{x}) \\sum_{i=1}^N w_i (-1)^m C_{m,d}\\delta(\\mathbf{x - \\mathbf{c}_i}) \\,d\\mathbf{x}\n = (-1)^m C_{m,d} \\sum_{i=1}^N w_i f(\\mathbf{c}_i) \\\\\n &= (-1)^m C_{m,d} \\sum_{i=1}^N \\sum_{j=1}^N w_i w_j \\varphi(\\mathbf{c}_i - \\mathbf{c}_j) = (-1)^m C_{m,d} \\mathbf{w}^{\\textrm{T}} A \\mathbf{w}. \n\\end{align}\n" }, { "math_id": 159, "text": " \\sum w = 0 " }, { "math_id": 160, "text": " \\sum w\\mathbf{c} = 0 , " }, { "math_id": 161, "text": "m-1." }, { "math_id": 162, "text": "B=\\begin{bmatrix} 1 & 1 & \\dots & 1\\\\ \\mathbf{c}_1 & \\mathbf{c}_2 & \\dots & \\mathbf{c}_N \\\\ \n\\vdots & \\vdots & \\dots & \\vdots \\\\\n\\mathbf{c}_1^{m-1} & \\mathbf{c}_2^{m-1} & \\dots & \\mathbf{c}_N^{m-1}\n\\end{bmatrix} ^ {\\textrm{T}}" }, { "math_id": 163, "text": "\\mathbf{c}_i^j" }, { "math_id": 164, "text": "j" }, { "math_id": 165, "text": "A\\mathbf{w} + B\\mathbf{v} - \\mathbf{f} = 0." }, { "math_id": 166, "text": "F:\\mathbb{R}^{N+d+1}\\rightarrow \\mathbb{R}" }, { "math_id": 167, "text": "\nF(\\mathbf{w}, \\mathbf{v}) = |A\\mathbf{w} + B\\mathbf{v} - \\mathbf{f}|^2 + \\lambda C \\mathbf{w}^{\\textrm{T}} A \\mathbf{w}.\n" }, { "math_id": 168, "text": "\n\\frac{\\partial F}{\\partial w_i} = 2 A_{i*} (A\\mathbf{w} + B\\mathbf{v} - \\mathbf{f}) + 2\\lambda C A_{i*}\\mathbf{w}=0 \\quad \n\\textrm{for} \\ i=1,2,\\ldots,N\n" }, { "math_id": 169, "text": "\n\\frac{\\partial F}{\\partial v_i} = 2 B^{\\textrm{T}}_{i*} (A\\mathbf{w} + B\\mathbf{v} - \\mathbf{f})=0 \\quad\n\\textrm{for} \\ i=1,2,\\ldots,d+1\n" }, { "math_id": 170, "text": "A_{i*}" }, { "math_id": 171, "text": "i" }, { "math_id": 172, "text": " A(A\\mathbf{w} + B\\mathbf{v} - \\mathbf{f} +\\lambda C \\mathbf{w}) = 0 " }, { "math_id": 173, "text": " B^{\\textrm{T}}(A\\mathbf{w} + B\\mathbf{v} - \\mathbf{f}) = 0. 
" }, { "math_id": 174, "text": " A\\mathbf{w} + B\\mathbf{v} - \\mathbf{f} +\\lambda C \\mathbf{w} = 0. " }, { "math_id": 175, "text": "B^{\\textrm{T}}\\mathbf{w} = 0." }, { "math_id": 176, "text": "(A+\\lambda C I)\\mathbf{w} + B\\mathbf{v} = \\mathbf{f}." }, { "math_id": 177, "text": " \\int_{\\mathcal{\\mathbb{R}}^d} |\\nabla^m f|^2 \\,d\\mathbf{x} = C w^{\\textrm{T}}Aw." }, { "math_id": 178, "text": " \\sum w = 0" }, { "math_id": 179, "text": " \\sum w \\mathbf{c} = 0 ," }, { "math_id": 180, "text": "B^{\\textrm{T}}w=0 " }, { "math_id": 181, "text": "w" }, { "math_id": 182, "text": "F." }, { "math_id": 183, "text": " \\int_{\\mathcal{\\mathbb{R}}^d} |\\nabla^m f|^2 \\,d\\mathbf{x} = C w^{\\textrm{T}}Aw" }, { "math_id": 184, "text": "w\\neq 0," }, { "math_id": 185, "text": "B^T w = 0" } ]
https://en.wikipedia.org/wiki?curid=6160804
61608254
Hamiltonian cycle polynomial
In mathematics, the Hamiltonian cycle polynomial of an "n"×"n"-matrix is a polynomial in its entries, defined as formula_0 where formula_1 is the set of "n"-permutations having exactly one cycle. It is an algebraic tool that is useful, in a number of cases, for determining whether a directed graph contains a Hamiltonian cycle. It generalizes the number of Hamiltonian cycles of a digraph: for a weighted digraph with arc weights taken from a given commutative ring, it is the sum, over the digraph's Hamiltonian cycles, of the products of the arc weights, and it reduces to the number of Hamiltonian cycles when all the weights equal unity. Likewise, for an undirected weighted graph the sum of the products of the edge weights of its Hamiltonian cycles containing any fixed edge ("i","j") can be expressed as the product of the weight of ("i","j") and the Hamiltonian cycle polynomial of a matrix obtained from its weighted adjacency matrix by subjecting its rows and columns to any permutation mapping "i" to "1" and "j" to "2" and then removing its "1"-st row and "2"-nd column. In () it was shown that formula_2 where formula_3 is the submatrix of formula_4 induced by the rows and columns of formula_4 indexed by formula_5, and formula_6 is the complement of formula_5 in formula_7, while the determinant of the empty submatrix is defined to be 1. Due to this and Borchardt's identities, for a non-singular "n"×"n" Cauchy matrix formula_8 formula_9 where formula_10 are diagonal matrices that make formula_11 unitary (in a real field or a field of a finite characteristic, or orthogonal in the field of complex numbers), formula_12 is the Hadamard (entry-wise) square of formula_8, and formula_13 is the identity "n"×"n"-matrix with the entry of indexes 1,1 replaced by 0. In a field of characteristic 2 the equality formula_2 turns into formula_14, which therefore provides an opportunity to calculate in polynomial time the Hamiltonian cycle polynomial of any unitary matrix formula_15 (i.e. such that formula_16, where formula_17 is the identity "n"×"n"-matrix), because in such a field each minor of a unitary matrix coincides with its algebraic complement: formula_18 where formula_13 is the identity "n"×"n"-matrix with the entry of indexes 1,1 replaced by 0. Hence if it is possible to assign, in polynomial time, weights from a field of characteristic 2 to a digraph's arcs that make its weighted adjacency matrix unitary and give it a non-zero Hamiltonian cycle polynomial, then the digraph is Hamiltonian. Therefore the Hamiltonian cycle problem is computable on such graphs in polynomial time. In characteristic 2, the Hamiltonian cycle polynomial of an "n"×"n"-matrix is zero if "n" > 2"k", where "k" is its rank, or if the matrix is involutory and "n" > 2. Besides, in an arbitrary ring formula_19 whose characteristic isn't an even natural number, for any skew-symmetric "n"×"n"-matrix formula_4 there exists a power series in a formal variable formula_20, formula_21, such that it is a unitary "n"×"n"-matrix over formula_22 and formula_23, formula_24, while for any formula_25 satisfying these conditions formula_26 equals the coefficient of the formula_27-th power of formula_20 in the power series formula_28. And for any ring formula_19 of an even characteristic, the same is true when the diagonal of formula_29 is a multiple of 2.
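To make the definition at the beginning of this article concrete, the following is a small, illustrative Python sketch that evaluates the Hamiltonian cycle polynomial by brute force, summing over all permutations with exactly one cycle; it runs in exponential time and is meant only to spell out the definition, not to be an efficient algorithm.

from itertools import permutations

def ham(A):
    # sum over all n-permutations p with exactly one cycle of the product
    # A[0][p(0)] * A[1][p(1)] * ... * A[n-1][p(n-1)]
    n = len(A)
    total = 0
    # every single n-cycle sends vertex 0 to some arrangement of the remaining vertices
    for rest in permutations(range(1, n)):
        cycle = (0,) + rest                          # visit order of one Hamiltonian cycle
        prod = 1
        for i in range(n):
            prod *= A[cycle[i]][cycle[(i + 1) % n]]
        total += prod
    return total

# a digraph on 3 vertices with all arc weights 1 has two Hamiltonian cycles
A = [[0, 1, 1], [1, 0, 1], [1, 1, 0]]
print(ham(A))   # 2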
The power-series characterization above implies that computing, up to the formula_27-th power of formula_20, the Hamiltonian cycle polynomial of a unitary "n"×"n"-matrix over the infinite extension of any ring of characteristic q (not necessarily prime) by the formal variable formula_20 is a #formula_30P-complete problem if formula_31 is not 2, and that computing the Hamiltonian cycle polynomial of a formula_32-semi-unitary matrix (i.e. an "n"×"n"-matrix formula_33 such that formula_34) over such an extension of any ring of characteristic 2 is a #formula_35P-complete problem for any formula_32 &gt; 0 (because any formula_32-semi-unitary matrix can be obtained from a unitary matrix by removing formula_32 rows and formula_32 columns). For formula_36 the latter statement can be re-formulated as the #formula_35P-completeness of computing, for a given unitary "n"×"n"-matrix formula_15 over a field of characteristic 2, the "n"×"n"-matrix formula_37 whose "i","j"-th entry is the Hamiltonian cycle polynomial of the matrix obtained from formula_15 by subjecting its rows and columns to any permutation mapping "j" to "1" and "i" to "2" and then removing its "1"-st row and "2"-nd column (i.e. the sum of the products of the arc weights of the corresponding weighted digraph's Hamiltonian paths from vertex "i" to vertex "j") for "i" ≠ "j", and zero for "i" = "j". This matrix satisfies the matrix equation formula_38, while formula_39 where formula_40 is an arbitrary n-vector (which can be interpreted as the polynomial-time computability of the Hamiltonian cycle polynomial of any 1-semi-unitary "m"×"m"-matrix formula_4 such that formula_41, where formula_42 is the formula_43-th column of formula_4 with its formula_43-th entry replaced by 0 and formula_17 is the identity "m"×"m"-matrix). Moreover, in characteristic 2 the Hamiltonian cycle polynomial possesses invariant matrix compressions (partly analogous to the Gaussian elimination steps that leave the determinant invariant), which rely on the fact that formula_44 for any "t"×"t"-matrix formula_45 having three equal rows or, if formula_46 &gt; 2, a pair of indexes i,j such that its i-th and j-th rows are identical and its i-th and j-th columns are identical too. Hence if a matrix has two equal rows with indexes "i" and "j", then adding one of them to any third row does not change this polynomial in characteristic 2, which allows one to eliminate, Gaussian-style, all the entries of its "i"-th column except the "i","i"-th and "j","i"-th ones (provided they are not zero) and then remove its "i"-th column and "j"-th row (in the manner described above); the Hamiltonian cycle polynomial of the initial matrix then equals the polynomial of the new one multiplied by the initial "j","i"-th entry. Also, in characteristic 2 and for matrices with more than two rows, the Hamiltonian cycle polynomial is not changed by adding the "i"-th column to the "j"-th one in a matrix whose "i"-th and "j"-th rows are identical, which, in particular, yields the identity formula_47 for an "n"×"n"-matrix formula_15, "m"×"m"-matrices formula_33 and diagonal formula_48, "m"×"n"-matrix formula_4 and "n"×"m"-matrix formula_49. 
Restricting this identity to the case when formula_15 is unitary, formula_50 and formula_51, where formula_17 is the identity "m"×"m"-matrix, makes the (2"m"+"n")×(2"m"+"n")-matrix on the equality's right side unitary and its Hamiltonian cycle polynomial therefore computable in polynomial time, which generalizes the above-given formula for the Hamiltonian cycle polynomial of a unitary matrix. Besides, in characteristic 2, for square matrices X, Y, formula_52 is the square of the sum, over all pairs of non-equal indexes i,j, of the i,j-th entry of Y multiplied by the Hamiltonian cycle polynomial of the matrix obtained from X+Y by removing its "i"-th row and "j"-th column (in the manner described above). Hence, upon putting A = B and U = V in the above equality, we obtain another extension of the class of unitary matrices for which the Hamiltonian cycle polynomial is computable in polynomial time. Apart from the above-mentioned compression transformations, in characteristic 2 the following relation is also valid for the Hamiltonian cycle polynomials of a matrix and its partial inverse (for formula_53 and formula_54 square and formula_53 invertible): formula_55 and, because the Hamiltonian cycle polynomial does not depend on the matrix's diagonal entries, adding an arbitrary diagonal matrix does not change this polynomial either. These two types of transformation do not compress the matrix, but keep its size unchanged. However, in a number of cases their application allows the matrix's size to be reduced by some of the above-mentioned compression operators. Hence there is a variety of matrix compression operators, performed in polynomial time and preserving the Hamiltonian cycle polynomial in characteristic 2, whose sequential application, together with the transpose transformation (utilized each time the operators leave the matrix intact), has, for each matrix, a certain limit that can be defined as the compression-closure operator. When applied to classes of matrices, that operator thus maps one class to another. As proven in (), if the compression-closure of the class of unitary matrices contains a subset where computing this polynomial is #formula_35P-complete, then the Hamiltonian cycle polynomial is computable in polynomial time over any field of this characteristic, which would imply the equality "RP" = "NP". References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": " \\operatorname{ham}(A)=\\sum_{\\sigma\\in H_n}\\prod_{i=1}^n a_{i,\\sigma(i)}" }, { "math_id": 1, "text": "H_n" }, { "math_id": 2, "text": "\\operatorname{ham} (A) = \\sum_{J\\subseteq \\{2,\\dots,n\\}} \\det(-A_J)\\operatorname{per}(A_{\\bar J})" }, { "math_id": 3, "text": "A_J" }, { "math_id": 4, "text": "A" }, { "math_id": 5, "text": "J" }, { "math_id": 6, "text": "\\bar J" }, { "math_id": 7, "text": "\\{1,\\dots,n\\}" }, { "math_id": 8, "text": " C(x,y) " }, { "math_id": 9, "text": " \\operatorname{ham}(C(x,y)) = {\\det}(- D_{1}^2 C^{*2}(x,y) D_{2}^2 + I_{/1}) \\operatorname{det} (C(x,y)) " }, { "math_id": 10, "text": " D_1, D_2 " }, { "math_id": 11, "text": " D_1 C(x,y) D_2 " }, { "math_id": 12, "text": " C^{*2}(x,y)" }, { "math_id": 13, "text": " I_{/1} " }, { "math_id": 14, "text": "\\operatorname{ham} (A) = \\sum_{J\\subseteq \\{2,\\dots,n\\}} \\det(A_J)\\det(A_{\\bar J})" }, { "math_id": 15, "text": "U" }, { "math_id": 16, "text": "U^{T} U = I" }, { "math_id": 17, "text": "I" }, { "math_id": 18, "text": " \\operatorname{ham} (U) = {\\det}^2(U + I_{/1}) " }, { "math_id": 19, "text": "R" }, { "math_id": 20, "text": "\\varepsilon" }, { "math_id": 21, "text": " U(\\varepsilon) =\\sum_{n=0}^\\infty U_n \\varepsilon^n " }, { "math_id": 22, "text": "R\\left(\\varepsilon\\right)" }, { "math_id": 23, "text": "U_0 = I" }, { "math_id": 24, "text": "U_1 = A" }, { "math_id": 25, "text": " U(\\varepsilon)" }, { "math_id": 26, "text": "\\operatorname{ham} (A)" }, { "math_id": 27, "text": "n" }, { "math_id": 28, "text": "\\operatorname{ham} (U(\\varepsilon))" }, { "math_id": 29, "text": "AA^{T}" }, { "math_id": 30, "text": "_q" }, { "math_id": 31, "text": "q" }, { "math_id": 32, "text": "k" }, { "math_id": 33, "text": "V" }, { "math_id": 34, "text": "\\operatorname{rank}(V^T V - I \\,) = k" }, { "math_id": 35, "text": "_2" }, { "math_id": 36, "text": "k = 1" }, { "math_id": 37, "text": "H(U)" }, { "math_id": 38, "text": "U(H(U))^T = H(U)U^T" }, { "math_id": 39, "text": " \\operatorname{ham} \\left ( \\begin{matrix}U & {Ua} \\\\a^{T} & 1 \\end{matrix} \\right ) = (a_{1}^{2} +...+ a_{n}^{2}) \\operatorname{ham} (U)" }, { "math_id": 40, "text": " a " }, { "math_id": 41, "text": "AA^{T} = I + bb^{T} " }, { "math_id": 42, "text": "b" }, { "math_id": 43, "text": "m" }, { "math_id": 44, "text": "\\operatorname{ham} (X) = 0" }, { "math_id": 45, "text": "X" }, { "math_id": 46, "text": "t" }, { "math_id": 47, "text": "\\det(D+D^{-1}) \\operatorname{ham} \\left ( \\begin{matrix}V & A \\\\B & U \\end{matrix} \\right ) = \\operatorname{ham} \\left ( \\begin{matrix}V & V+D & A\\\\ V+D^{-1} & V+D^{-1}+D & A\\\\ B & B & U\\end{matrix} \\right )" }, { "math_id": 48, "text": "D" }, { "math_id": 49, "text": "B" }, { "math_id": 50, "text": " VD + DV^{T}+AA^{T}=I+D^{2} " }, { "math_id": 51, "text": "BD=UA^{T}" }, { "math_id": 52, "text": " \\operatorname{ham}\\left ( \\begin{matrix}X & Y\\\\Y & X \\end{matrix} \\right ) " }, { "math_id": 53, "text": "A_{11}" }, { "math_id": 54, "text": "A_{22}" }, { "math_id": 55, "text": "\\operatorname{ham}\\left ( \\begin{matrix}A_{11} & A_{12}\\\\ A_{21} & A_{22}\\end{matrix} \\right) = {\\det}^2\\left ( A_{11} \\right ) \\operatorname{ham}\\left ( \\begin{matrix}A_{11}^{-1} & A_{11}^{-1}A_{12}\\\\ A_{21}A_{11}^{-1} & A_{22} + A_{21} A_{11}^{-1} A_{12} \\end{matrix} \\right ) " } ]
https://en.wikipedia.org/wiki?curid=61608254
6161274
Poisson–Boltzmann equation
Equation used for physiological interfaces, polymer science, and semiconductors. The Poisson–Boltzmann equation describes the distribution of the electric potential in solution in the direction normal to a charged surface. This distribution is important to determine how the electrostatic interactions will affect the molecules in solution. The Poisson–Boltzmann equation is derived via mean-field assumptions. From the Poisson–Boltzmann equation many other equations have been derived with a number of different assumptions. Origins. Background and derivation. The Poisson–Boltzmann equation describes a model proposed independently by Louis Georges Gouy and David Leonard Chapman in 1910 and 1913, respectively. In the Gouy–Chapman model, a charged solid comes into contact with an ionic solution, creating a layer of surface charges and counter-ions, or double layer. Due to thermal motion of ions, the layer of counter-ions is a diffuse layer and is more extended than a single molecular layer, as previously proposed by Hermann Helmholtz in the Helmholtz model. The Stern layer model goes a step further and takes into account the finite ion size. The Gouy–Chapman model explains the capacitance-like qualities of the electric double layer. A simple planar case with a negatively charged surface can be seen in the figure below. As expected, the concentration of counter-ions is higher near the surface than in the bulk solution. The Poisson–Boltzmann equation describes the electrochemical potential of ions in the diffuse layer. The three-dimensional potential distribution can be described by the Poisson equation formula_0 where formula_1 is the electric charge density, formula_2 is the dielectric constant (relative permittivity) of the solvent, formula_3 is the permittivity of free space, and ψ is the electric potential. The freedom of movement of ions in solution can be accounted for by Boltzmann statistics. The Boltzmann equation is used to calculate the local ion density such that formula_4 where formula_5 is the ion concentration in the bulk, formula_6 is the work required to bring the ion to its position from infinitely far away, formula_7 is the Boltzmann constant, and formula_8 is the absolute temperature. The equation for local ion density can be substituted into the Poisson equation under the assumptions that the work being done is only electric work, that our solution is composed of a 1:1 salt (e.g., NaCl), and that the concentration of salt is much higher than the concentration of ions. The electric work to bring a cation or an anion to a surface with potential ψ can be represented by formula_9 and formula_10 respectively. These work equations can be substituted into the Boltzmann equation, producing two expressions formula_11 and formula_12, where "e" is the elementary charge, 1.602×10⁻¹⁹ coulombs. Substituting these Boltzmann relations into the local electric charge density expression, the following expression can be obtained formula_13 Finally the charge density can be substituted into the Poisson equation to produce the Poisson–Boltzmann equation. Related theories. The Poisson–Boltzmann equation can take many forms throughout various scientific fields. In biophysics and certain surface chemistry applications, it is known simply as the Poisson–Boltzmann equation. It is also known in electrochemistry as Gouy–Chapman theory; in solution chemistry as Debye–Hückel theory; and in colloid chemistry as Derjaguin–Landau–Verwey–Overbeek (DLVO) theory. Only minor modifications are necessary to apply the Poisson–Boltzmann equation to various interfacial models, making it a highly useful tool in determining electrostatic potential at surfaces. Solving analytically. Because the Poisson–Boltzmann equation is a second-order partial differential equation, it is commonly solved numerically; however, with certain geometries, it can be solved analytically. Geometries. 
The geometry that most easily facilitates this is a planar surface. In the case of an infinitely extended planar surface, there are two dimensions in which the potential cannot change because of symmetry. Assuming these dimensions are the y and z dimensions, only the x dimension is left. Below is the Poisson–Boltzmann equation solved analytically in terms of a second order derivative with respect to x. formula_14 Analytical solutions have also been found for axial and spherical cases in a particular study. The equation is in the form of a logarithm of a power series and it is as follows: formula_15 It uses a dimensionless potential formula_16 and the lengths are measured in units of the Debye electron radius in the region of zero potential formula_17 (where formula_18 denotes the number density of negative ions in the zero potential region). For the spherical case L=2, for the axial case L=1, and for the planar case L=0. Low-potential vs high-potential cases. When using the Poisson–Boltzmann equation, it is important to determine whether the specific case is low or high potential. The high-potential case becomes more complex, so if applicable, use the low-potential equation. In the low-potential condition, the linearized version of the Poisson–Boltzmann equation (shown below) is valid, and it is commonly used because it is simpler and spans a wide variety of cases. formula_19 Low-potential case conditions. Strictly, low potential means that formula_20; however, the results that the equation yields are valid for a wider range of potentials, from 50–80 mV. Nevertheless, at room temperature, formula_21, and that is generally the standard. Some boundary conditions that apply in low potential cases are that: at the surface, the potential must be equal to the surface potential, and at large distances from the surface the potential approaches a zero value. The characteristic decay length of the potential is the Debye length formula_22, given by the following equations. formula_23 formula_24 As salt concentration increases, the Debye length decreases due to the ions in solution screening the surface charge. A special instance of this equation is the case of formula_25 water with a monovalent salt. The Debye length equation is then: formula_26 where formula_27 is the salt concentration in mol/L. These equations all require 1:1 salt concentration cases, but if ions of higher valence are present, the following form is used. formula_28 High-potential case. The high-potential case is referred to as the "full one-dimensional case". In order to obtain the equation, the general solution to the Poisson–Boltzmann equation is used and the restriction to low potentials is dropped. The equation is solved with a dimensionless parameter formula_29, which is not to be confused with the spatial coordinate symbol y. Employing several trigonometric identities and the boundary conditions that at large distances from the surface the dimensionless potential and its derivative are zero, the high potential equation is obtained. formula_30 This equation solved for formula_31 is shown below. formula_32 In order to obtain a more useful equation that facilitates graphing high potential distributions, take the natural logarithm of both sides and solve for the dimensionless potential y. formula_33 Knowing that formula_29, substitute this for y in the previous equation and solve for formula_34. The following equation is obtained. formula_35 formula_36 Conditions. In low potential cases, the high potential equation may be used and will still yield accurate results. 
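The low-potential and high-potential solutions above can be compared numerically. The following Python sketch is an illustration only: it computes the Debye length for a 1:1 salt from the general expression and evaluates both the linearized profile and the full planar solution at half a Debye length from the surface; the physical constants and the relative permittivity of water (≈78.5 at 25 °C) are standard assumed values, and the 80 mM concentration follows the figure description in the next paragraph.

```python
import numpy as np

# Assumed physical constants (SI units)
e    = 1.602e-19        # elementary charge, C
kB   = 1.381e-23        # Boltzmann constant, J/K
eps0 = 8.854e-12        # vacuum permittivity, F/m
epsr = 78.5             # relative permittivity of water at 25 C (assumed)
T    = 298.15           # temperature, K
NA   = 6.022e23         # Avogadro's number, 1/mol

def kappa(c0_molar):
    """Inverse Debye length for a 1:1 salt: kappa = sqrt(2 c0 e^2 / (eps_r eps_0 kB T))."""
    c0 = c0_molar * 1e3 * NA                 # mol/L -> ions per m^3
    return np.sqrt(2.0 * c0 * e**2 / (epsr * eps0 * kB * T))

def psi_linear(x, psi0, k):
    """Linearized (low-potential) solution: psi = psi0 * exp(-kappa x)."""
    return psi0 * np.exp(-k * x)

def psi_full(x, psi0, k):
    """Full planar solution in the logarithmic form quoted above."""
    y0 = e * psi0 / (kB * T)
    g = (np.exp(y0 / 2) - 1) / (np.exp(y0 / 2) + 1)
    return (2 * kB * T / e) * np.log((1 + g * np.exp(-k * x)) /
                                     (1 - g * np.exp(-k * x)))

k = kappa(0.080)                             # 80 mM NaCl
print("Debye length: %.3g nm" % (1e9 / k))   # ~1.1 nm, consistent with 0.304 nm / sqrt(0.08)
x = 0.5 / k                                  # half a Debye length from the surface
for psi0 in (0.050, 0.200):                  # 50 mV and 200 mV surface potentials
    print(psi0, psi_linear(x, psi0, k), psi_full(x, psi0, k))
```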
As the potential rises, the low-potential, linear case overestimates the potential as a function of distance from the surface. This overestimation is visible at distances less than half the Debye length, where the decay is steeper than exponential decay. The following figure employs the linearized equation and the high-potential graphing equation derived above. It is a potential-versus-distance graph for varying surface potentials of 50, 100, 150, and 200 mV. The equations employed in this figure assume an 80 mM NaCl solution. General applications. The Poisson–Boltzmann equation can be applied in a variety of fields, mainly as a modeling tool to make approximations for applications such as charged biomolecular interactions, dynamics of electrons in semiconductors or plasma, etc. Most applications of this equation are as models used to gain further insight into electrostatics. Physiological applications. The Poisson–Boltzmann equation can be applied to biomolecular systems. One example is the binding of electrolytes to biomolecules in a solution. This process is dependent upon the electrostatic field generated by the molecule, the electrostatic potential on the surface of the molecule, as well as the electrostatic free energy. The linearized Poisson–Boltzmann equation can be used to calculate the electrostatic potential and free energy of highly charged molecules such as tRNA in an ionic solution with different numbers of bound ions at varying physiological ionic strengths. It is shown that the electrostatic potential depends on the charge of the molecule, while the electrostatic free energy takes into account the net charge of the system. Another example of utilizing the Poisson–Boltzmann equation is the determination of an electric potential profile at points perpendicular to the phospholipid bilayer of an erythrocyte. This takes into account both the glycocalyx and spectrin layers of the erythrocyte membrane. This information is useful for many reasons, including the study of the mechanical stability of the erythrocyte membrane. Electrostatic free energy. The Poisson–Boltzmann equation can also be used to calculate the electrostatic free energy for hypothetically charging a sphere using the following charging integral: formula_37 where formula_38 is the final charge on the sphere. The electrostatic free energy can also be expressed by considering the process of charging the system. The following expression utilizes the chemical potential of solute molecules and implements the Poisson–Boltzmann equation with the Euler–Lagrange functional: formula_39 Note that the free energy is independent of the charging pathway [5c]. The above expression can be rewritten into separate free energy terms based on different contributions to the total free energy formula_40 where formula_41 is the contribution of the fixed charges, formula_42 is the contribution of the mobile charges, formula_43 is the entropic contribution from mixing of the mobile species, and formula_44 is the contribution associated with the solvent. Finally, combining the last three terms gives the following equation representing the outer space contribution to the free energy density integral formula_45 These equations can act as simple geometry models for biological systems such as proteins, nucleic acids, and membranes. This involves the equations being solved with simple boundary conditions such as constant surface potential. These approximations are useful in fields such as colloid chemistry. Materials science. An analytical solution to the Poisson–Boltzmann equation can be used to describe an electron-electron interaction in a metal-insulator-semiconductor (MIS) structure. This can be used to describe both time and position dependence of dissipative systems such as a mesoscopic system. 
This is done by solving the Poisson–Boltzmann equation analytically in the three-dimensional case. Solving this results in expressions of the distribution function for the Boltzmann equation and self-consistent average potential for the Poisson equation. These expressions are useful for analyzing quantum transport in a mesoscopic system. In metal-insulator semiconductor tunneling junctions, the electrons can build up close to the interface between layers and as a result the quantum transport of the system will be affected by the electron-electron interactions. Certain transport properties such as electric current and electronic density can be known by solving for self-consistent Coulombic average potential from the electron-electron interactions, which is related to electronic distribution. Therefore, it is essential to analytically solve the Poisson–Boltzmann equation in order to obtain the analytical quantities in the MIS tunneling junctions. Applying the following analytical solution of the Poisson–Boltzmann equation (see section 2) to MIS tunneling junctions, the following expression can be formed to express electronic transport quantities such as electronic density and electric current formula_46 Applying the equation above to the MIS tunneling junction, electronic transport can be analyzed along the z-axis, which is referenced perpendicular to the plane of the layers. An n-type junction is chosen in this case with a bias V applied along the z-axis. The self-consistent average potential of the system can be found using formula_47 where λ is called the Debye length. The electronic density and electric current can be found by manipulation to equation 16 above as functions of position z. These electronic transport quantities can be used to help understand various transport properties in the system. Limitations. As with any approximate model, the Poisson–Boltzmann equation is an approximation rather than an exact representation. Several assumptions were made to approximate the potential of the diffuse layer. The finite size of the ions was considered negligible and ions were treated as individual point charges, where ions were assumed to interact with the average electrostatic field of all their neighbors rather than each neighbor individually. In addition, non-Coulombic interactions were not considered and certain interactions were unaccounted for, such as the overlap of ion hydration spheres in an aqueous system. The permittivity of the solvent was assumed to be constant, resulting in a rough approximation as polar molecules are prevented from freely moving when they encounter the strong electric field at the solid surface. Though the model faces certain limitations, it describes electric double layers very well. The errors resulting from the previously mentioned assumptions cancel each other for the most part. Accounting for non-Coulombic interactions increases the ion concentration at the surface and leads to a reduced surface potential. On the other hand, including the finite size of the ions causes the opposite effect. The Poisson–Boltzmann equation is most appropriate for approximating the electrostatic potential at the surface for aqueous solutions of univalent salts at concentrations smaller than 0.2 M and potentials not exceeding 50–80 mV. In the limit of strong electrostatic interactions, a strong coupling theory is more applicable than the weak coupling assumed in deriving the Poisson-Boltzmann theory. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "\\nabla^2\\psi = \\frac{\\partial^2\\psi}{\\partial x^2} + \\frac{\\partial^2\\psi}{\\partial y^2} + \\frac{\\partial^2\\psi}{\\partial z^2} = -\\frac{\\rho_e}{\\varepsilon_r\\varepsilon_0}, " }, { "math_id": 1, "text": " \\rho_e " }, { "math_id": 2, "text": " \\varepsilon_r " }, { "math_id": 3, "text": " \\varepsilon_0 " }, { "math_id": 4, "text": "c_i = c^0_i \\cdot e^\\frac{-W_i}{k_\\mathrm{B}T}," }, { "math_id": 5, "text": " c^0_i " }, { "math_id": 6, "text": " W_i " }, { "math_id": 7, "text": " k_\\mathrm{B} " }, { "math_id": 8, "text": " T " }, { "math_id": 9, "text": " W^+ = e\\psi " }, { "math_id": 10, "text": " W^- = -e\\psi " }, { "math_id": 11, "text": " c^- = c_0 \\cdot e^\\frac{e\\psi(x,y,z)}{k_BT}" }, { "math_id": 12, "text": " c^+ = c_0 \\cdot e^\\frac{-e\\psi(x,y,z)}{k_BT}" }, { "math_id": 13, "text": " \\rho_e = e{(c^+ - c^-)} = c_0e \\cdot \\left[e^\\frac{-e\\psi(x,y,z)}{k_BT} - e^\\frac{e\\psi(x,y,z)}{k_BT}\\right]. " }, { "math_id": 14, "text": "\\frac{d^2\\psi}{dx^2} = \\frac{c_0e}{\\varepsilon\\varepsilon_0} \\cdot \\left[e^\\frac{e\\psi(x)}{k_\\mathrm{B} T}- e^\\frac{-e\\psi(x)}{k_\\mathrm{B} T}\\right]" }, { "math_id": 15, "text": "\\frac{d^2\\psi}{dr^2} + \\frac{L}{r}\\frac{d\\psi}{dr} = e^\\psi-\\delta e^{-\\psi}" }, { "math_id": 16, "text": "\\psi = \\frac{e\\Phi}{kT}" }, { "math_id": 17, "text": "R_{eD} = \\sqrt{\\frac{kT}{4\\pi e^2 n_{e0}}}" }, { "math_id": 18, "text": "n_{e0}" }, { "math_id": 19, "text": "\\psi=\\psi_0e^{-\\Kappa x}" }, { "math_id": 20, "text": "e\\left\\vert \\psi \\right\\vert \\ll k_\\mathrm{B}T" }, { "math_id": 21, "text": "\\psi\\le \\mathrm{25 mV}" }, { "math_id": 22, "text": "\\lambda_D" }, { "math_id": 23, "text": "\\Kappa=\\sqrt{\\frac{2c_0e^2}{\\varepsilon\\varepsilon_0 k_\\mathrm{B} T}}" }, { "math_id": 24, "text": "\\lambda_D=\\Kappa^{-1}" }, { "math_id": 25, "text": "25^\\circ C" }, { "math_id": 26, "text": "\\lambda_D=\\frac{\\mathrm{0.304nm}}{\\sqrt{c_0}}" }, { "math_id": 27, "text": "c_0" }, { "math_id": 28, "text": "\\Kappa=\\sqrt{\\frac{e^2}{\\varepsilon\\varepsilon_0 k_\\mathrm{B} T} \\sum c_i{Z_i}^2}" }, { "math_id": 29, "text": "y\\equiv\\frac{e\\psi}{k_BT}" }, { "math_id": 30, "text": "e^{-\\Kappa x}=\\frac{(e^{y/2}-1) (e^{y_0/2}+1)}{(e^{y/2}+1 )(e^{y_0/2}-1)}" }, { "math_id": 31, "text": "e^{y/2}" }, { "math_id": 32, "text": "e^{y/2}=\\frac{e^{y_0/2}+1+(e^{y_0/2}-1) \\cdot e^{-\\Kappa x}}{e^{y_0/2}+1-(e^{y_0/2}-1) \\cdot e^{-\\Kappa x}}" }, { "math_id": 33, "text": "y=2\\ln \\frac{e^{y_0/2}+1+(e^{y_0/2}-1) \\cdot e^{-\\Kappa x}}{e^{y_0/2}+1-(e^{y_0/2}-1) \\cdot e^{-\\Kappa x}}" }, { "math_id": 34, "text": "\\psi" }, { "math_id": 35, "text": "\\psi=\\frac{2k_BT}{e} \\cdot \\ln \\frac{e^{y_0/2}+1+(e^{y_0/2}-1) \\cdot e^{-\\Kappa x}}{e^{y_0/2}+1-(e^{y_0/2}-1) \\cdot e^{-\\Kappa x}}" }, { "math_id": 36, "text": "y_0=\\frac{e\\psi_0}{k_BT}" }, { "math_id": 37, "text": "\\Delta G^\\text{el} = \\int^\\tau qU (\\tau')\\,d\\tau'" }, { "math_id": 38, "text": "\\tau q" }, { "math_id": 39, "text": "\\Delta G^\\text{el} = \\int_V \\left(kT\\sum_i c_i^\\infty\\left[1 - \\exp\\left(\\frac{-z_i qU}{kT}\\right)\\right] + p^f U - \\frac{-\\varepsilon(\\boldsymbol\\nabla U)^2}{8 \\pi}\\right)dV" }, { "math_id": 40, "text": "\\Delta G^\\text{el} = \\Delta G^\\text{ef} + \\Delta G^\\text{em} + \\Delta G^\\text{mob} + \\Delta G^\\text{solv}" }, { "math_id": 41, "text": "\\Delta G^\\text{ef} = \\int_V \\frac{p^f U}{2} dV" }, { "math_id": 42, "text": "\\Delta G^\\text{em} = \\int_V \\frac{\\sum_i c_i z_i qU}{2} dV" }, 
{ "math_id": 43, "text": "\\Delta G^\\text{mob} = kT\\int_V c_i \\ln\\frac{c_i}{c_i^{\\infty}} dV" }, { "math_id": 44, "text": "\\Delta G^\\text{solv} = kT\\int_V \\sum_i c_i^{\\infty}\\left[1 - \\exp\\left(\\frac{-z_i qU}{kT}\\right)\\right]dV" }, { "math_id": 45, "text": "\\Delta G^\\text{out} = \\Delta G^\\text{em} + \\Delta G^\\text{mob} + \\Delta G^\\text{solv}" }, { "math_id": 46, "text": "f_1f^0 - f_0 + \\frac{e E_z\\tau_0}{m} \\frac{\\partial f_0}{\\partial v_z}\\left(1 - e^{\\frac{-\\tau}{\\tau_0}}\\right) - \\int_0^t \\frac{e}{m} e{^\\frac{t-\\tau'}{\\tau_0}} \\nabla \\rho [r - v(t - t')] \\times \\frac{\\partial f_0}{\\partial v} dt'" }, { "math_id": 47, "text": "\\rho\\rho_1 + \\rho_2" }, { "math_id": 48, "text": "\\rho_1 \\approx \\frac{a E_z}{2\\lambda_{D1}} e^{-\\lambda_{D1}z}" }, { "math_id": 49, "text": "\\rho_2 \\approx \\frac{ne\\sqrt{\\pi}G(i\\lambda_{D1})e^{\\frac{-t}{\\tau_0} - \\lambda_{D1}z}}{3\\sqrt{3} \\varepsilon_0 \\varepsilon_r \\lambda_{D1}} \\left(1 - e^{1 - \\sqrt{\\frac{2ne^2t^2}{m \\varepsilon_0 \\varepsilon_r}}}\\right)" } ]
https://en.wikipedia.org/wiki?curid=6161274
61621079
Kristen DeAngelis
American microbiologist, environmental activist. Kristen M. DeAngelis is a professor in the Department of Microbiology at the University of Massachusetts, where she studies soil microbes in relation to climate change. Early life and education. DeAngelis is originally from Watertown, Massachusetts. She graduated from Harvard University with a degree in biology in 1997. DeAngelis received her Ph.D. in Microbiology from the University of California, Berkeley in 2006. She subsequently worked as a Seaborg Postdoctoral Fellow at Lawrence Berkeley National Laboratory and in the Deconstruction Division at the Joint BioEnergy Institute. Career and research. She is currently a lead researcher at the University of Massachusetts on soil microbes and their connection to global warming. Specifically, DeAngelis focuses on the adaptability of soil microbes and their responses to climate change in order to better understand soil ecology and its role as a carbon sink. Using plots of ground that have been artificially heated to formula_0 warmer than the surrounding earth, DeAngelis, along with other researchers, is attempting to simulate climate change and its potential effects on Earth. This is part of a long-term study at the Harvard Forest in Massachusetts, which has been running since 1991. Her participation has been key in uncovering unexpected results: the scientists have observed changes in the composition and functional potential of soil bacterial communities that are correlated with alternating periods of accelerated and stationary CO2 release from the soils. Her current work evaluates the hypothesis that there is an evolutionary component to the soil bacterial response to long-term warming at this site. Some of her past research includes studying the responses of microbes in the Arctic to thawing permafrost to better understand their role in the Earth's natural carbon cycle, and the potential production of biofuels by microbes. She was also involved in the development of a new technology to investigate microbes in more depth. She has offered paid internships at her lab at UMass to students of many ages, including high school seniors. Her interns' research results include information that may affect the availability of biofuels as well as the understanding of potential ecological responses to climate change. Publications. One of DeAngelis' most cited articles pertains to relationships between functionality in the rhizosphere and oat root growth, specifically focusing on the effects of microbial organisms in this layer. Other significant publications include: Memberships. DeAngelis has served the Ecological Society of America as Chair of the Microbiology Section (2015–16), Vice President (2014–15), and Secretary (2013–14). Public engagement and activism. DeAngelis has participated in climate activist groups and marches, such as one that took place in Massachusetts in early 2019. During this march, she encouraged hundreds of people to register to vote in their counties in the United States in an attempt to increase political advocacy around laws that affect the environment. She is also active within the 500 Women Scientists Pod in Amherst, Massachusetts, where the group encourages scientific engagement from women around the world. Awards and honors. Funding for her most recent research was awarded by the federal government in 2018; she received two grants that will be disbursed over a period of 5 years, totaling around $2.5 million. 
This award was given to DeAngelis by the NSF (National Science Foundation) as a CAREER award (Faculty Early Career Development Program). References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "5^\\circ C" } ]
https://en.wikipedia.org/wiki?curid=61621079
61622110
Localized Chern class
Concept in geometry. In algebraic geometry, a localized Chern class is a variant of a Chern class that is defined for a chain complex of vector bundles as opposed to a single vector bundle. It was originally introduced in Fulton's "Intersection Theory", as an algebraic counterpart of the similar construction in algebraic topology. The notion is used in particular in Riemann–Roch-type theorems. S. Bloch later generalized the notion in the context of arithmetic schemes (schemes over a Dedekind domain) for the purpose of giving Bloch's conductor formula (see below), which computes the non-constancy of the Euler characteristic of a degenerating family of algebraic varieties (in the mixed characteristic case). Definitions. Let "Y" be a pure-dimensional regular scheme of finite type over a field or discrete valuation ring and "X" a closed subscheme. Let formula_0 denote a complex of vector bundles on "Y" formula_1 that is exact on formula_2. The localized Chern class of this complex is a class in the bivariant Chow group of formula_3 defined as follows. Let formula_4 denote the tautological bundle of the Grassmann bundle formula_5 of rank formula_6 sub-bundles of formula_7. Let formula_8. Then the "i"-th localized Chern class formula_9 is defined by the formula: formula_10 where formula_11 is the projection and formula_12 is a cycle obtained from formula_13 by the so-called graph construction. Example: localized Euler class. Let formula_14 be as in the definitions above. If "S" is smooth over a field, then the localized Chern class coincides with the class formula_15 where, roughly, formula_16 is the section determined by the differential of "f" and (thus) formula_17 is the class of the singular locus of "f". Consider an infinite-dimensional bundle E over an infinite-dimensional manifold M with a section s whose derivative is Fredholm. In practice this situation occurs whenever we have a system of PDEs which are elliptic when considered modulo some gauge group action. The zero set Z(s) is then the moduli space of solutions modulo gauge, and the index of the derivative is the virtual dimension. The localized Euler class of the pair (E,s) is a homology class with closed support on the zero set of the section. Its dimension is the index of the derivative. When the section is transversal, the class is just the fundamental class of the zero set with the proper orientation. The class is well behaved in one-parameter families and therefore defines the “right” fundamental cycle even if the section is no longer transversal. Bloch's conductor formula. This formula enables us to compute the conductor, which measures the wild ramification, by using the sheaf of differential 1-forms. S. Bloch conjectures a formula for the Artin conductor of the ℓ-adic étale cohomology of a regular model of a variety over a local field and proves it for a curve. The deepest result about the Bloch conductor is its equality with the Artin conductor, defined in terms of the l-adic cohomology of X, in certain cases. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "E_{\\bullet}" }, { "math_id": 1, "text": "0 = E_{n-1} \\to E_n \\to \\dots \\to E_m \\to E_{m-1} = 0" }, { "math_id": 2, "text": "Y - X" }, { "math_id": 3, "text": "X \\subset Y" }, { "math_id": 4, "text": "\\xi_i" }, { "math_id": 5, "text": "G_i" }, { "math_id": 6, "text": "\\operatorname{rk} E_i" }, { "math_id": 7, "text": "E_i \\otimes E_{i-1}" }, { "math_id": 8, "text": "\\xi = \\prod (-1)^i \\operatorname{pr}_i^*(\\xi_i)" }, { "math_id": 9, "text": "c_{i, X}^Y(E_{\\bullet})" }, { "math_id": 10, "text": "c_{i, X}^Y(E_{\\bullet}) \\cap \\alpha = \\eta_*(c_i(\\xi) \\cap \\gamma)" }, { "math_id": 11, "text": "\\eta: G_n \\times_Y \\dots \\times_Y G_m \\to X" }, { "math_id": 12, "text": "\\gamma" }, { "math_id": 13, "text": "\\alpha" }, { "math_id": 14, "text": "f: X \\to S" }, { "math_id": 15, "text": "(-1)^{\\dim X} \\mathbf{Z}(s_f)" }, { "math_id": 16, "text": "s_f" }, { "math_id": 17, "text": "\\mathbf{Z}(s_f)" } ]
https://en.wikipedia.org/wiki?curid=61622110
61622763
One Clean Qubit
The One Clean Qubit model of computation is performed on an formula_1-qubit system with one qubit in a pure state and formula_2 qubits in the maximally mixed state. This model was motivated by the highly mixed states that are prevalent in nuclear magnetic resonance quantum computers. It is described by the density matrix formula_3, where I is the identity matrix. In computational complexity theory, DQC1, also known as deterministic quantum computation with one clean qubit, is the class of decision problems solvable by a one-clean-qubit machine in polynomial time, upon measuring the first qubit, with an error probability of at most 1/poly(n) for all instances. Error Bounds and Composability. The most standard definition of DQC1 requires that measuring the output qubit correctly accepts or rejects the input, with error at most formula_4 for some specified polynomial "q", given a gap in acceptance probabilities of formula_5 for NO instances and formula_6 for YES instances. Most probabilistic classes, such as BPP, BQP, and RP, are agnostic to the precise probability gap, because any polynomial acceptance gap can be "amplified" to a fixed gap such as (1/3,2/3). A notable outlier is PP, which permits exponentially small gaps. DQC1 does not admit an obvious notion of parallel composability or amplification: there is no clear construction to transform a circuit with, say, a (2/5,3/5) acceptance gap into a more accurate (1/5,4/5) acceptance gap. It is known that DQC1 offers composability in the sense that the "one" clean qubit can be upgraded to "two" clean qubits, or even formula_7 many clean qubits, without modifying the class (Shepherd, "Computation with Unitaries and One Pure Qubit"). It is also not strengthened by measuring all of these clean qubits (as opposed to just the first clean qubit). Relation to other classes. Because as many as formula_8 clean qubits are permitted, DQC1 contains all logspace computations. It is closed under formula_9L reductions as well. It is not known to contain BPP or even P. It is contained in BQP, and it is conjectured that this containment is strict. It is known that simulating the sampling problem even for 3 output qubits is classically hard, in the sense that an efficient classical simulation would imply a collapse of the polynomial hierarchy (PH). The term DQC1 has also been used to refer to decision problems solved by a polynomial-time classical circuit that adaptively makes queries to polynomially many DQC1 circuits. In this sense of use, the class naturally contains all of BPP, and the power of the class is focused on the "inherently quantum" power. Trace Estimation. Trace estimation is complete for DQC1. Let formula_0 be a unitary formula_10 matrix. Given a state formula_11, the Hadamard test can be used to estimate formula_12: the probability that the measured clean qubit is 0 is formula_13. The maximally mixed input formula_14 can be simulated by letting formula_11 be chosen uniformly at random from the formula_15 computational basis states. When measured, the probability that the final result is 0 is formula_16 To estimate the imaginary part of formula_17, the clean qubit is initialized to formula_18 instead of formula_19. DQC1-complete Problems. In addition to unitary trace estimation, estimating a coefficient in the Pauli decomposition of a unitary and approximating the Jones polynomial at a fifth root of unity are also DQC1-complete. In fact, trace estimation is a special case of Pauli decomposition coefficient estimation. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
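The measurement statistics of the trace-estimation procedure described above can be illustrated with a small classical simulation (a sketch only: it samples the Hadamard-test acceptance probabilities rather than simulating an actual one-clean-qubit device, and the function and variable names are ad hoc).

```python
import numpy as np

def estimate_trace_real(U, shots=200_000, rng=np.random.default_rng(1)):
    """One-clean-qubit style estimate of Re(Tr(U)) / 2^n.

    Following the text, the maximally mixed register is simulated by drawing
    computational basis states |x> uniformly at random; for each, the Hadamard
    test accepts ("0") with probability 1/2 + Re(<x|U|x>)/2.
    """
    dim = U.shape[0]
    zeros = 0
    for _ in range(shots):
        x = rng.integers(dim)                  # random basis state of the mixed register
        p0 = 0.5 + 0.5 * np.real(U[x, x])      # <x|U|x> is a diagonal entry of U
        zeros += rng.random() < p0             # simulate the measurement outcome
    return 2.0 * zeros / shots - 1.0           # estimate of Re(Tr U) / 2^n

# Example: a random 3-qubit unitary obtained from a QR decomposition.
n = 3
rng = np.random.default_rng(0)
G = rng.normal(size=(2**n, 2**n)) + 1j * rng.normal(size=(2**n, 2**n))
U, _ = np.linalg.qr(G)
print("exact  :", np.real(np.trace(U)) / 2**n)
print("sampled:", estimate_trace_real(U))
```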
[ { "math_id": 0, "text": "U" }, { "math_id": 1, "text": "n" }, { "math_id": 2, "text": "n-1" }, { "math_id": 3, "text": " \\rho = \\left|0\\right\\rangle \\langle 0| \\otimes \\frac{I}{2}" }, { "math_id": 4, "text": "1/q(n)" }, { "math_id": 5, "text": "[0,1/2-1/q(n)]" }, { "math_id": 6, "text": "[1/2+1/q(n),1]" }, { "math_id": 7, "text": "log(n)" }, { "math_id": 8, "text": "O(\\log(n))" }, { "math_id": 9, "text": "\\oplus" }, { "math_id": 10, "text": "2^n \\times 2^n" }, { "math_id": 11, "text": "|\\psi\\rangle" }, { "math_id": 12, "text": "\\left\\langle\\psi\\right| U \\left|\\psi\\right\\rangle" }, { "math_id": 13, "text": "\\frac{1}{2} + \\frac{1}{2}\\mathcal{Re}(\\left\\langle\\psi\\right| U \\left|\\psi\\right\\rangle)" }, { "math_id": 14, "text": "I/2^n" }, { "math_id": 15, "text": "2^{n}" }, { "math_id": 16, "text": "\\frac{1}{2^n} \\sum_{x \\subset \\{0, 1\\}^n} \\frac{1+\\mathcal{Re}\\left\\langle x\\right| U \\left| x\\right\\rangle}{2} = \\frac{1}{2} + \\frac{1}{2} \\frac{\\mathcal{Re}(Tr(U))}{2^n} ." }, { "math_id": 17, "text": "Tr(U)" }, { "math_id": 18, "text": "\\frac{1}{\\sqrt{2}}\\left(\\left|0\\right\\rangle - i\\left|1\\right\\rangle \\right)" }, { "math_id": 19, "text": "\\frac{1}{\\sqrt{2}}\\left(\\left|0\\right\\rangle +\\left|1\\right\\rangle \\right)" } ]
https://en.wikipedia.org/wiki?curid=61622763
61623050
Eckert-Greifendorff projection
Map projection by Max Eckert-Greifendorff. The Eckert-Greifendorff projection is an equal-area map projection described by Max Eckert-Greifendorff in 1935. Unlike his previous six projections, it is not pseudocylindrical. Development. Directly inspired by the Hammer projection, Eckert-Greifendorff suggested the use of the equatorial form of the Lambert azimuthal equal-area projection instead of Aitoff's use of the azimuthal equidistant projection: formula_0 where laea"x" and laea"y" are the "x" and "y" components of the equatorial Lambert azimuthal equal-area projection. Written out explicitly: formula_1 The inverse is calculated with the intermediate variable formula_2 The longitude and latitude can then be calculated by formula_3 where "λ" is the longitude from the central meridian and "φ" is the latitude. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
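The written-out forward formulas and their inverse can be sketched directly in code. The following Python snippet is an illustration only (it follows the explicit forward form above; note that the intermediate variable uses x/8, the factor that makes the inverse consistent with that forward form), with a round-trip check on one sample point.

```python
import numpy as np

def eckert_greifendorff(lon, lat):
    """Forward projection, following the written-out formulas above (angles in radians)."""
    c = np.cos(lat) * np.cos(lon / 4)
    d = np.sqrt(1 + c)
    x = 4 * np.sqrt(2) * np.cos(lat) * np.sin(lon / 4) / d
    y = np.sqrt(2) * np.sin(lat) / d
    return x, y

def eckert_greifendorff_inverse(x, y):
    """Inverse of the forward mapping above."""
    z = np.sqrt(1 - (x / 8) ** 2 - (y / 2) ** 2)     # intermediate variable
    lon = 4 * np.arctan(z * x / (4 * (2 * z ** 2 - 1)))
    lat = np.arcsin(z * y)
    return lon, lat

# Round-trip check on a sample point (90 deg E, 30 deg N).
lon0, lat0 = np.radians(90.0), np.radians(30.0)
x, y = eckert_greifendorff(lon0, lat0)
lon1, lat1 = eckert_greifendorff_inverse(x, y)
print(np.degrees([lon1, lat1]))   # should print approximately [90. 30.]
```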
[ { "math_id": 0, "text": "\\begin{align} x &= 2\\operatorname{laea}_x\\left(\\frac{\\lambda}{4}, \\varphi\\right) \\\\\ny &= \\tfrac12 \\operatorname{laea}_y\\left(\\frac{\\lambda}{4}, \\varphi\\right) \\end{align}" }, { "math_id": 1, "text": "\\begin{align} x &= \\frac{4 \\sqrt 2 \\cos \\varphi \\sin \\frac{\\lambda}{4}}{\\sqrt{1 + \\cos \\varphi \\cos \\frac{\\lambda}{4}}} \\\\\ny &= \\frac{\\sqrt 2\\sin \\varphi}{\\sqrt{1 + \\cos \\varphi \\cos \\frac{\\lambda}{4}}} \\end{align}" }, { "math_id": 2, "text": "z \\equiv \\sqrt{1 - \\left(\\tfrac1{16} x\\right)^2 - \\left(\\tfrac12 y\\right)^2}" }, { "math_id": 3, "text": "\\begin{align}\n\\lambda &= 4 \\arctan \\frac{zx}{4\\left(2z^2 - 1\\right)} \\\\\n\\varphi &= \\arcsin zy\n\\end{align}" } ]
https://en.wikipedia.org/wiki?curid=61623050
61623589
Asuman Aksoy
Turkish-American mathematician Asuman Güven Aksoy is a Turkish-American mathematician whose research concerns topics in functional analysis, metric geometry, and operator theory including Banach spaces, measures of non-compactness, fixed points, Birnbaum–Orlicz spaces, real trees, injective metric spaces, and tight spans. She works at Claremont McKenna College, where she is Crown Professor of Mathematics and George R. Roberts Fellow. Education. Aksoy studied mathematics and physics at Ankara University, graduating with a bachelor's degree in 1976. She earned a master's degree in mathematics at Middle East Technical University in 1978, with a thesis "Subspaces of Nuclear Fréchet Spaces" supervised by Tosun Terzioğlu. She moved to the United States in 1978 for additional graduate study at the University of Michigan, and eventually became a US citizen. She completed her doctorate at the University of Michigan in 1984. Her dissertation, "Approximation Schemes, Related formula_0-Numbers, and Applications", was supervised by Melapalayam S. Ramanujan. Career. After completing her doctorate, Aksoy joined the faculty of Oakland University in 1984, and was tenured there in 1987. She moved to Claremont McKenna in 1990, and chaired the mathematics department there from 1997 to 2000 and again from 2007 to 2009. She was given the Crown Professorship and Roberts Fellowship in 2009. Books. With Mohamed Amine Khamsi, Aksoy is the author of two books: Recognition. In 2006 the Southern California–Nevada Section of the Mathematical Association of America gave Aksoy their annual Award for Distinguished College or University Teaching of Mathematics. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "s" } ]
https://en.wikipedia.org/wiki?curid=61623589
61630330
Guy David (mathematician)
French mathematician. Guy David (born 1957) is a French mathematician, specializing in analysis. Biography. David studied from 1976 to 1981 at the École normale supérieure, graduating with the "Agrégation" and a "Diplôme d'études approfondies" (DEA). At the University of Paris-Sud (Paris XI) he received his doctoral degree ("Thèse du 3ème cycle") in 1981 and his higher doctorate ("Thèse d'État") in 1986, with the thesis "Noyau de Cauchy et opérateurs de Calderón-Zygmund" supervised by Yves Meyer. From 1982 to 1989 David was an "attaché de recherches" (research associate) at the "Centre de mathématiques Laurent Schwartz" of the CNRS. At the University of Paris-Sud he was a professor from 1989 to 1991, a professor first class from 1991 to 2001, and has been a professor of the "classe exceptionnelle" since 2001. David is known for his research on Hardy spaces and on singular integral equations using the methods of Alberto Calderón. In 1998 David solved a special case of a problem of Vitushkin. Among other topics, David has done research on Painlevé's problem of geometrically characterizing removable singularities for bounded functions; Xavier Tolsa's solution of Painlevé's problem is based upon David's methods. With Jean-Lin Journé he proved the T(1) Theorem in 1984, for which they jointly received the Salem Prize. The T(1) Theorem is of fundamental importance for the theory of singular integral operators of Calderón-Zygmund type. David also did research on the conjecture of David Mumford and Jayant Shah in image processing and made contributions to the theory of Hardy spaces; these contributions were important for Jones' traveling salesman theorem in formula_0. David has written several books in collaboration with Stephen Semmes.
[ { "math_id": 0, "text": "\\mathbb{R}^2" } ]
https://en.wikipedia.org/wiki?curid=61630330
616348
Molniya (satellite)
Soviet military surveillance and communications satellites. The Molniya (Russian: Молния, "Lightning") series satellites were military and communications satellites launched by the Soviet Union from 1965 to 1991, and by the Russian Federation from 1991 to 2004. These satellites used highly eccentric elliptical orbits known as Molniya orbits, which have a long dwell time over high latitudes. They are suited for communications purposes in polar regions, in the same way that geostationary satellites are used for equatorial regions. There were 164 Molniya satellites launched, all in Molniya orbits with the exception of Molniya 1S, which was launched into geostationary orbit for testing purposes. History. In the early 1960s, when Europe and America were establishing geostationary communication satellites, the Russians found these orbits unsuitable. They were limited in the amount of rocket power available, and it is extremely energy intensive both to launch a satellite to 40,000 km and to change its inclination to place it over the equator, especially when launching from Russian latitudes. Additionally, geostationary satellites give poor coverage in polar regions. A large portion of Russian territory consists of polar regions, making this arrangement even less favorable to Russian interests. As a result, OKB-1 sought a better-suited orbit for the satellite. Studies found that this could be achieved using a large elliptical orbit with an apogee over Russian territory. The satellite's name, "quick as lightning", refers to the speed with which it passes through its perigee. Molniya series satellites were succeeded by the Meridian series, with the first launch in 2006. As of 2023, there are 36 Molniya satellites left in orbit. Molniya 1. The Molniya programme was authorized on 30 October 1961 and design was handled by OKB-1. The satellites were based on the KAUR-2 satellite bus, with design finishing in 1963. The first launch took place on 4 June 1964 and ended in failure when the 8K78 booster core stage lost thrust 287 seconds into launch due to a jammed servo motor. The next attempt, on 22 August 1964, reached orbit successfully, but the parabolic communications antennas did not properly deploy due to a design flaw in the release mechanism. Publicly referred to as Kosmos 41, it nonetheless operated for nine months. The first operational satellite, Molniya 1-1, was successfully launched on 23 April 1965. By 30 May 1966, the third Molniya 1 had taken the first images of the whole Earth in history. The early Molniya-1 satellites were designed for television, telegraph and telephone service across Russia, but they were also fitted with cameras used for weather monitoring, and possibly for assessing clear areas for Zenit spy satellites. The system was operational by 1967, with the construction of the Orbita ground stations. The satellites had a lifespan of approximately 1.5 years, as their orbits were disrupted by perturbations and their solar arrays deteriorated, so they had to be constantly replaced. By the 1970s, the Molniya 1 series (and the upgraded Molniya 1T) was mostly used for military communications, with civilian communications moving to Molniya 2. In total 94 Molniya 1 series satellites were launched, with the last going up in 2004. Molniya 2. The first Molniya 2 satellites were tested from 1971, with the first operational satellite launching in 1974 from Plesetsk. 
They used the same satellite bus and basic design as later-model Molniya 1 satellites, but served an expanded number of users under the military's Unified System of Satellite Communications (YeSSS) program. Development was difficult because the final satellite bus was unpressurized, which changed the selection of radios. These satellites were used in the Soviet national Orbita television network, which had been established a few years earlier, in 1967. Only seventeen Molniya 2 series satellites were launched, as they were soon superseded by the Molniya 3. Molniya 3. Originally called the Molniya-2M, their development began in 1972, with launches from 1974. They were also based on the KAUR-2 bus, launching solely from Plesetsk. Earlier models were used for civilian communications in a similar orbit to, but with a different purpose from, the military-only Molniya-1 satellites. From the 1980s they were used by the military, and by the 1990s they were operated in the same manner as the Molniya 1 satellites. A total of 53 Molniya 3 series satellites were launched, with the last one going up in 2003. Orbital Properties. A typical Molniya series satellite has the following characteristics, discussed below: an inclination of 63.4°, a nodal period of half a sidereal day, and an eccentricity of approximately 0.737. Inclination. In general, the oblateness of the Earth perturbs the argument of perigee (formula_0), so that even if the apogee started near the north pole, it would gradually move unless constantly corrected with station-keeping thruster burns. Keeping the dwell point over Russia, where it is useful for communications, without excessive fuel use meant that the satellites needed an inclination of 63.4°, for which these perturbations are zero. Period. Similarly, to ensure that the ground track repeats every 24 hours, the nodal period needed to be half a sidereal day. Eccentricity. To maximise the dwell time, the eccentricity, which sets the difference in altitude between apogee and perigee, had to be large. However, the perigee needed to be far enough above the atmosphere to avoid drag, and the orbital period needed to be approximately half a sidereal day. These two factors constrained the eccentricity to approximately 0.737. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
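The orbital figures quoted above fit together, and can be checked with a short calculation. The following Python sketch is an illustration only; the gravitational parameter, Earth radius, and sidereal-day length are standard assumed values, not taken from this article.

```python
import numpy as np

# Assumed constants
MU_EARTH = 3.986004418e14     # Earth's gravitational parameter, m^3/s^2
R_EARTH  = 6371e3             # mean Earth radius, m
SIDEREAL_DAY = 86164.1        # seconds

T = SIDEREAL_DAY / 2.0                          # nodal period: half a sidereal day
a = (MU_EARTH * T**2 / (4 * np.pi**2))**(1/3)   # semi-major axis from Kepler's third law
e = 0.737                                       # eccentricity quoted above

r_perigee = a * (1 - e)
r_apogee  = a * (1 + e)
i_critical = np.degrees(np.arccos(1 / np.sqrt(5)))  # inclination with zero apsidal drift

print(f"semi-major axis     : {a/1e3:8.0f} km")
print(f"perigee altitude    : {(r_perigee - R_EARTH)/1e3:8.0f} km")  # ~600 km, above the atmosphere
print(f"apogee altitude     : {(r_apogee  - R_EARTH)/1e3:8.0f} km")  # ~40,000 km over the dwell region
print(f"critical inclination: {i_critical:.1f} deg")                 # ~63.4 deg
```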
[ { "math_id": 0, "text": "\\omega" } ]
https://en.wikipedia.org/wiki?curid=616348
61634820
Mutation bias
Mutation bias is a pattern in which some type of mutation occurs more often than expected under uniformity. The types are most often defined by the molecular nature of the mutational change, but sometimes they are based on downstream effects, e.g., Ostrow, et al. Scientific context. The concept of mutation bias appears in several scientific contexts, most commonly in molecular studies of evolution, where mutation biases may be invoked to account for such phenomena as systematic differences in codon usage or genome composition between species. The short tandem repeat (STR) loci used in forensic identification may show biased patterns of gain and loss of repeats. In cancer research, some types of tumors have distinctive mutational signatures that reflect differences in the contributions of mutational pathways. Mutational signatures have proved useful in both detection and treatment. Recent studies of the emergence of resistance to anti-microbials and anti-cancer drugs show that mutation biases are an important determinant of the prevalence for different types of resistant strains or tumors. Thus, a knowledge of mutation bias can be used to design more evolution-resistant therapies. When mutation bias is invoked as a possible cause of some pattern in evolution, this is generally an application of the theory of arrival biases, and the alternative hypotheses may include selection, biased gene conversion, and demographic factors. In the past, due to the technical difficulty of detecting rare mutations, most attempts to characterize the mutation spectrum were based on reporter gene systems, or based on patterns of presumptively neutral change in pseudogenes. More recently, there has been an effort to use the MA (mutation accumulation) method and high-throughput sequencing (e.g., ). Status in evolution. Cases of mutation bias are cited by mutationism advocates of the extended evolutionary synthesis who have argued that mutation bias is an entirely novel evolutionary principle. This viewpoint has been criticized by Erik Svensson. A 2019 review by Svensson and David Berger concluded that "we find little support for mutation bias as an independent force in adaptive evolution, although it can interact with selection under conditions of small population size and when standing genetic variation is limited, entirely consistent with standard evolutionary theory." In contrast to Svensson and Berger a 2023 review by Arlin Stoltzfus and colleagues concluded that there is strong empirical evidence and theoretical arguments that mutation bias has predictable effects on genetic changes fixed in adaptation. Types. Transition-transversion bias. The canonical DNA nucleotides include 2 purines (A and G) and 2 pyrimidines (T and C). In the molecular evolution literature, the term transition is used for nucleotide changes within a chemical class, and transversion for changes from one chemical class to the other. Each nucleotide is subject to one transition (e.g., T to C) and 2 transversions (e.g., T to A or T to G). Because a site (or a sequence) is subject to twice as many transversions as transitions, the total rate of transversions for a sequence may be higher even when the rate of transitions is higher on a per-path basis. In the molecular evolution literature, the per-path rate bias is typically denoted by "κ" (kappa), so that, if the rate of each transversion is "u", the rate of each transition is "κu". Then, the aggregate rate ratio (transitions to transversions) is "R = (1 * κu) / (2 * u) = κ / 2". 
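This arithmetic can be sketched in a few lines of Python (an illustration only; the helper names are ad hoc, and the yeast and E. coli values quoted in the next sentence are used as examples).

```python
def aggregate_ti_tv_ratio(kappa):
    """Aggregate transition:transversion ratio R for per-path bias kappa.

    Each site has 1 possible transition (rate kappa*u) and 2 possible
    transversions (rate u each), so R = kappa*u / (2*u) = kappa / 2.
    """
    return kappa / 2.0

def expected_transition_fraction(kappa):
    """Fraction of mutations expected to be transitions under bias kappa."""
    return kappa / (kappa + 2.0)

for name, kappa in [("yeast", 1.2), ("E. coli", 4.0), ("no bias", 1.0)]:
    print(name, aggregate_ti_tv_ratio(kappa),
          round(expected_transition_fraction(kappa), 2))
```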
For instance, in yeast, "κ ~ 1.2", therefore the aggregate bias is "R = 1.2 / 2 = 0.6", whereas in E. coli, "κ ~ 4" so that "R ~ 2". In a variety of organisms, transition mutations occur several-fold more frequently than expected under uniformity. The bias in animal viruses is sometimes much more extreme, e.g., 31 of 34 nucleotide mutations in a recent study in HIV were transitions. As noted above, the bias toward transitions is weak in yeast, and appears to be absent in the grasshopper "Podisma pedestris". Male mutation bias. Definition. Male mutation bias is also called "male-driven evolution". The rate of male germline mutations is generally higher than in females. The phenomenon of male mutation bias has been observed in many species. Origin. In 1935, the British-Indian scientist J.B.S. Haldane found that in hemophilia, a blood clotting disorder linked to the X chromosome, the causal mutations arise predominantly in the father's germline. He then proposed the hypothesis that the male germline contributes disproportionately more mutations to succeeding generations than the female germline. Evidence. In 1987, Takashi Miyata et al. designed an approach to test Haldane's hypothesis. If α is the ratio of the male mutation rate to the female mutation rate, and Y and X denote the mutation rates of Y-linked and X-linked sequences, then the ratio of the Y-linked to the X-linked sequence mutation rate is: formula_0 The mean Y/X ratio is 2.25 in higher primates. Using this equation, the ratio of male to female mutation rates can be estimated as α ≈ 6. In some organisms with a shorter generation time than humans, the mutation rate in males is also larger than in females, although the number of cell divisions in males is usually not that large; the ratio of the number of germ cell divisions from one generation to the next in males to that in females is smaller than in humans. There are also other hypotheses that seek to explain male mutation bias: that the mutation rate in Y-linked sequences is higher than in X-linked sequences, that the male germline genome is heavily methylated and more inclined to mutate than the female germline, or that X chromosomes experience stronger purifying selection against mutations because they are hemizygous in males. To test these hypotheses, researchers have studied mutation rates in birds. In contrast to humans, male birds are the homogametic sex (ZZ) and females are the heterogametic sex (ZW). Studies found that the bird male-to-female ratio in mutation rates ranges from 4 to 7, supporting the conclusion that the mutation bias results mostly from a larger number of male germline mutations than female ones. Explanation. A mutation is a heritable variation in the genetic information of a short region of a DNA sequence. Mutations can be categorized into replication-dependent and replication-independent mutations. Accordingly, there are two kinds of mutational mechanisms proposed to explain the phenomenon of male mutation bias. Replication-dependent mechanism. The number of germ cell divisions in females is constant and much smaller than in males. In females, most primary oocytes are formed at birth. The number of cell divisions that occur in the production of a mature ovum is constant. In males, many more cell divisions are required during the process of spermatogenesis. Moreover, the cycle of spermatogenesis is never-ending: spermatogonia continue to divide throughout the whole reproductive life of the male. 
The number of male germline cell divisions in the production of sperm is not only higher than the number of female germline cell divisions but also increases as the male ages. One might therefore expect the male mutation rate to scale with the number of male germline cell divisions, but only a few species conform to this expectation. Even in these species, the male-to-female ratio of mutation rates is lower than the male-to-female ratio of the number of germline cell divisions. Replication-independent mechanism. These skewed estimates of the male-to-female mutation rate ratio point to another important mechanism that strongly influences male mutation bias. Mutations at CpG sites result in C-to-T transitions, which occur 10-50 times faster than substitutions at other sites in DNA sequences and arise in both the male and female germlines. Because CpG mutations are largely independent of replication, they show little sex bias and effectively lower the male-to-female mutation rate ratio. In addition, neighbor-dependent mutations can also bias mutation rates and may be unrelated to DNA replication; for example, mutations caused by mutagens such as exposure to UV light show weak male mutation bias. GC-AT bias. A GC-AT bias is a bias with a net effect on GC content. For instance, if G and C sites are simply more mutable than A and T sites, other things being equal, this would result in a net downward pressure on GC content. Mutation-accumulation studies indicate a strong many-fold bias toward AT in mitochondria of "D. melanogaster", and a more modest 2-fold bias toward AT in yeast. A common idea in the literature of molecular evolution is that codon usage and genome composition reflect the effects of mutation bias, e.g., codon usage has been treated with a mutation-selection-drift model combining mutation biases, selection for translationally preferred codons, and drift. To the extent that mutation bias prevails under this model, mutation bias toward GC is responsible for genomes with high GC content, and likewise the opposite bias is responsible for genomes with low GC content. Starting in the 1990s, it became clear that GC-biased gene conversion was a major and previously unanticipated factor affecting GC content in diploid organisms such as mammals. Similarly, although it may be the case that bacterial genome composition strongly reflects GC and AT biases, the proposed mutational biases have not been demonstrated to exist. Indeed, Hershberg and Petrov suggest that mutation in most bacterial genomes is biased toward AT, even when the genome is not AT-rich. Related concepts. The concept of mutation bias, as defined above, does not imply foresight, design, or even a specially evolved tendency, e.g., the bias may emerge simply as a side-effect of DNA repair processes. Currently there is no established terminology for mutation-generating systems that tend to produce useful mutations. The term "directed mutation" or adaptive mutation is sometimes used with the implication of a process of mutation that senses and responds to conditions directly. When the sense is simply that the mutation system is tuned to enhance the production of helpful mutations under certain conditions, the terminology of "mutation strategies" or "natural genetic engineering" has been suggested, but these terms are not widely used. As argued in Ch. 
5 of Stoltzfus 2021, various mechanisms of mutation in pathogenic microbes, e.g., mechanisms for phase variation and antigenic variation, appear to have evolved so as to enhance lineage survival, and these mechanisms are routinely described as strategies or adaptations in the microbial genetics literature, such as by Foley 2015. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "Y/X = \\frac{3\\alpha}{2+\\alpha}" } ]
https://en.wikipedia.org/wiki?curid=61634820
616351
Molniya orbit
Type of high-latitude satellite orbit A Molniya orbit (Russian for "lightning") is a type of satellite orbit designed to provide communications and remote sensing coverage over high latitudes. It is a highly elliptical orbit with an inclination of 63.4 degrees, an argument of perigee of 270 degrees, and an orbital period of approximately half a sidereal day. The name comes from the "Molniya" satellites, a series of Soviet/Russian civilian and military communications satellites which have used this type of orbit since the mid-1960s. A variation on the Molniya orbit is the so-called Three Apogee (TAP) orbit, whose period is a third of a sidereal day. The Molniya orbit has a long dwell time over the hemisphere of interest, while moving very quickly over the other. In practice, this places it over either Russia or Canada for the majority of its orbit, providing a high angle of view to communications and monitoring satellites covering these high-latitude areas. Geostationary orbits, which necessarily lie in the plane of the equator, can only view these regions from a low angle, hampering performance. In practice, a satellite in a Molniya orbit serves the same purpose for high latitudes as a geostationary satellite does for equatorial regions, except that multiple satellites are required for continuous coverage. Satellites placed in Molniya orbits have been used for television broadcasting, telecommunications, military communications, relaying, weather monitoring, early warning systems and some classified purposes. History. The Molniya orbit was discovered by Soviet scientists in the 1960s as a high-latitude communications alternative to geostationary orbits, which require large launch energies to achieve a high perigee and to change inclination to orbit over the equator (especially when launched from Russian latitudes). As a result, OKB-1 sought a less energy-demanding orbit. Studies found that this could be achieved using a highly elliptical orbit with an apogee over Russian territory. The orbit's name refers to the "lightning" speed with which the satellite passes through the perigee. The first use of the Molniya orbit was by the communications satellite series of the same name. After two launch failures, and one satellite failure in 1964, the first successful satellite to use this orbit, Molniya 1-1, launched on 23 April 1965. The early Molniya-1 satellites were used for civilian television, telecommunication and long-range military communications, but they were also fitted with cameras used for weather monitoring, and possibly for assessing clear areas for Zenit spy satellites. The original Molniya satellites had a lifespan of approximately 1.5 years, as their orbits were disrupted by perturbations, and they had to be constantly replaced. The succeeding series, the Molniya-2, provided both military and civilian broadcasting and was used to create the Orbita television network, spanning the Soviet Union. These were in turn replaced by the Molniya-3 design. A satellite called Mayak was designed to supplement and replace the Molniya satellites in 1997, but the project was cancelled, and the Molniya-3 was replaced by the Meridian satellites, the first of which launched in 2006. The Soviet US-K early warning satellites, which watch for American rocket launches, were launched in Molniya orbits from 1967, as part of the Oko system. 
From 1971, the American Jumpseat and Trumpet military satellites were launched into Molniya orbits (and possibly used to intercept Soviet communications from the Molniya satellites). Detailed information about both projects remains classified as of 2019. This was followed by the American SDS constellation, which operates with a mixture of Molniya and geostationary orbits. These satellites are used to relay signals from lower flying satellites back to ground stations in the United States and have been active in some capacity since 1976. A Russian satellite constellation called Tyulpan was designed in 1994 to support communications at high latitudes, but it did not progress past the planning phase. In 2015 and 2017 Russia launched two "Tundra" satellites into a Molniya orbit, despite their name, as part of its EKS early warning system. Uses. Much of the area of the former Soviet Union, and Russia in particular, is located at high northern latitudes. To broadcast to these latitudes from a geostationary orbit (above the Earth's equator) requires considerable power due to the low elevation angles, and the extra distance and atmospheric attenuation that comes with it. Sites located above 81° latitude are unable to view geostationary satellites at all, and as a rule of thumb, elevation angles of less than 10° can cause problems, depending on the communications frequency. A satellite in a Molniya orbit is better suited to communications in these regions, because it looks more directly down on them during large portions of its orbit. With an apogee altitude as high as and an apogee sub-satellite point of 63.4 degrees north, it spends a considerable portion of its orbit with excellent visibility in the northern hemisphere, from Russia as well as from northern Europe, Greenland and Canada. While satellites in Molniya orbits require considerably less launch energy than those in geostationary orbits (especially launching from high latitudes), their ground stations need steerable antennas to track the spacecraft, links must be switched between satellites in a constellation and range changes cause variations in signal amplitude. Additionally, there is a greater need for station-keeping, and the spacecraft will pass through the Van Allen radiation belt four times per day. Southern hemisphere proposals. Similar orbits with an argument of perigee of 90° could allow high-latitude coverage in the southern hemisphere. A proposed constellation, the "Antarctic Broadband Program", would have used satellites in an inverted Molniya orbit to provide broadband internet service to facilities in Antarctica. Initially funded by the now defunct Australian Space Research Program, it did not progress beyond initial development. Molniya constellations. Permanent high-latitude coverage of a large area of Earth (like the whole of Russia, where the southern parts are about 45°N) requires a constellation of at least three spacecraft in Molniya orbits. If three spacecraft are used, then each spacecraft will be active for a period of eight hours per orbit, centered around apogee, as illustrated in figure 4. Figure 5 shows the satellite's field of view around the apogee. The Earth completes half a rotation in twelve hours, so the apogees of successive Molniya orbits will alternate between one half of the northern hemisphere and the other. For the original Molniya orbit, the apogees were placed over Russia and North America, but by changing the right ascension of the ascending node this can be varied. 
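A rough numerical check of these dwell-time claims can be scripted. The sketch below assumes standard textbook values for Earth's gravitational parameter and radius and the nominal eccentricity of about 0.737 discussed under Properties below; it derives the semi-major axis from the half-sidereal-day period and uses Kepler's equation to estimate how little of each revolution is spent in the perigee-centred, southern half of the orbit.

```python
import math

MU = 3.986004418e14          # Earth's gravitational parameter, m^3/s^2 (standard value)
R_EARTH = 6378.137e3         # equatorial radius, m
T = 86164.1 / 2.0            # half a sidereal day, s
ECC = 0.737                  # nominal Molniya eccentricity

# Semi-major axis from Kepler's third law: T = 2*pi*sqrt(a^3/MU)
a = (MU * (T / (2 * math.pi)) ** 2) ** (1.0 / 3.0)
perigee_alt = a * (1 - ECC) - R_EARTH
apogee_alt = a * (1 + ECC) - R_EARTH
print(f"a = {a/1e3:.0f} km, perigee = {perigee_alt/1e3:.0f} km, apogee = {apogee_alt/1e3:.0f} km")

# With the argument of perigee at 270 degrees, the satellite is north of the
# equator for true anomalies between 90 and 270 degrees, the half of the orbit
# centred on apogee.  Convert true anomaly 90 degrees to mean anomaly via the
# eccentric anomaly and Kepler's equation to get the time from perigee.
nu = math.radians(90.0)
E = 2 * math.atan(math.sqrt((1 - ECC) / (1 + ECC)) * math.tan(nu / 2))
M = E - ECC * math.sin(E)                    # mean anomaly at nu = 90 degrees
t_south = 2 * (M / (2 * math.pi)) * T        # time spent in the perigee half
print(f"time south of the equator per orbit: {t_south/60:.0f} min "
      f"({100 * t_south / T:.1f}% of the period)")
```

With these inputs the script gives a perigee near 600 km, an apogee near 40,000 km, and only about an hour per revolution south of the equator, consistent with the long apogee dwell that makes an eight-hour active window per satellite possible.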
The coverage from a satellite in a Molniya orbit over Russia is shown in figures 6 to 8, and over North America in figures 9 to 11. The orbits of the three spacecraft should then have the same orbital parameters, but different right ascensions of the ascending nodes, with their passes over the apogees separated by 7.97 hours. Since each satellite has an operational period of approximately eight hours, when one spacecraft travels four hours after its apogee passage (see figure 8 or figure 11), then the next satellite will enter its operational period, with the view of the earth shown in figure 6 (or figure 9), and the switch-over can take place. Note that the two spacecraft at the time of switch-over are separated by about , so that the ground stations only have to move their antennas a few degrees to acquire the new spacecraft. Properties. A typical Molniya orbit has the following properties: Argument of perigee. The argument of perigee is set at 270°, causing the satellite to experience apogee at the most northerly point of its orbit. For any future applications over the southern hemisphere, it would instead be set at 90°. Orbital inclination. In general, the oblateness of the Earth perturbs the argument of perigee (formula_0), so that it gradually changes with time. If we only consider the first-order coefficient formula_1, the perigee will change according to equation 1, unless it is constantly corrected with station-keeping thruster burns. where formula_2 is the orbital inclination, formula_3 is the eccentricity, formula_4 is mean motion in degrees per day, formula_1 is the perturbing factor, formula_5 is the radius of the earth, formula_6 is the semimajor axis, and formula_7 is in degrees per day. To avoid this expenditure of fuel, the Molniya orbit uses an inclination of 63.4°, for which the factor formula_8 is zero, so that there is no change in the position of perigee over time. An orbit designed in this manner is called a frozen orbit. Orbital period. To ensure the geometry relative to the ground stations repeats every 24 hours, the period should be about half a sidereal day, keeping the longitudes of the apogees constant. However, the oblateness of the Earth also perturbs the right ascension of the ascending node (formula_9), changing the nodal period and causing the ground track to drift over time at the rate shown in equation 2. where formula_10 is in degrees per day. Since the inclination of a Molniya orbit is fixed (as above), this perturbation is formula_11 degrees per day. To compensate, the orbital period is adjusted so that the longitude of the apogee changes enough to cancel out this effect. Eccentricity. The eccentricity of the orbit is based on the differences in altitudes of its apogee and perigee. To maximise the amount of time that the satellite spends over the apogee, the eccentricity should be set as high as possible. However, the perigee needs to be high enough to keep the satellite substantially above the atmosphere to minimize drag (~600km), and the orbital period needs to be kept to approximately half a sidereal day (as above). These two factors constrain the eccentricity, which becomes approximately 0.737. Semi-major axis. The exact height of a satellite in a Molniya orbit varies between missions, but a typical orbit will have a perigee altitude of approximately and an apogee altitude of , for a semi-major axis of . Modelling. 
To track satellites using Molniya orbits, scientists use the SDP4 simplified perturbations model, which calculates the location of a satellite based on orbital shape, drag, radiation, gravitation effects from the sun and moon, and earth resonance terms. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "\\omega" }, { "math_id": 1, "text": "J_2" }, { "math_id": 2, "text": "i" }, { "math_id": 3, "text": "e" }, { "math_id": 4, "text": "n" }, { "math_id": 5, "text": "R_E" }, { "math_id": 6, "text": "a" }, { "math_id": 7, "text": "\\dot{\\omega}" }, { "math_id": 8, "text": "4-5\\sin^2{i}" }, { "math_id": 9, "text": "\\Omega" }, { "math_id": 10, "text": "\\dot{\\Omega}" }, { "math_id": 11, "text": "\\dot{\\Omega} = -0.3" } ]
https://en.wikipedia.org/wiki?curid=616351
61638365
Entanglement monotone
In quantum information and quantum computation, an entanglement monotone or entanglement measure is a function that quantifies the amount of entanglement present in a quantum state. Any entanglement monotone is a nonnegative function whose value does not increase under local operations and classical communication. Definition. Let formula_0 be the space of all states, i.e., Hermitian positive semi-definite operators with trace one, over the bipartite Hilbert space formula_1. An entanglement measure is a function formula_2 such that: (1) formula_3 if formula_4 is separable; and (2) the function does not increase on average under local operations and classical communication, that is, for Kraus operators formula_5 describing an operation in the class formula_6, with outcome probabilities formula_7 and post-measurement states formula_8, the function formula_9 satisfies formula_10; some formulations instead impose the condition formula_11. Some authors also add the condition that formula_12 over the maximally entangled state formula_13. If the nonnegative function only satisfies condition 2 of the above, then it is called an entanglement monotone. Various entanglement monotones exist for bipartite systems as well as for multipartite systems. Common entanglement monotones are the entropy of entanglement, concurrence, negativity, squashed entanglement, entanglement of formation and tangle. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
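As a concrete illustration of one of the monotones listed above, the sketch below computes the negativity of a two-qubit state from the partial transpose of its density matrix; the Werner-state family used here is only an illustrative choice.

```python
import numpy as np

def partial_transpose(rho, dims=(2, 2)):
    """Partial transpose over the second subsystem of a bipartite state."""
    dA, dB = dims
    r = rho.reshape(dA, dB, dA, dB)
    return r.transpose(0, 3, 2, 1).reshape(dA * dB, dA * dB)

def negativity(rho, dims=(2, 2)):
    """Negativity N(rho) = (||rho^{T_B}||_1 - 1) / 2, an entanglement monotone."""
    eigs = np.linalg.eigvalsh(partial_transpose(rho, dims))
    return (np.sum(np.abs(eigs)) - 1.0) / 2.0

# Example: Werner state p*|Phi+><Phi+| + (1-p)*I/4 for two qubits.
phi_plus = np.array([1, 0, 0, 1]) / np.sqrt(2)
proj = np.outer(phi_plus, phi_plus)
for p in (0.0, 1/3, 0.6, 1.0):
    rho = p * proj + (1 - p) * np.eye(4) / 4
    print(f"p = {p:.2f}  negativity = {negativity(rho):.3f}")
# The state is separable (negativity 0) for p <= 1/3 and entangled above it,
# reaching negativity 1/2 for the maximally entangled state at p = 1.
```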
[ { "math_id": 0, "text": "\\mathcal{S}(\\mathcal{H}_A\\otimes\\mathcal{H}_B)" }, { "math_id": 1, "text": "\\mathcal{H}_A\\otimes\\mathcal{H}_B" }, { "math_id": 2, "text": "\\mu:{\\displaystyle {\\mathcal {S}}({\\mathcal {H}}_{A}\\otimes {\\mathcal {H}}_{B})}\\to \\mathbb{R}_{\\geq 0}" }, { "math_id": 3, "text": "\\mu(\\rho)=0" }, { "math_id": 4, "text": "\\rho" }, { "math_id": 5, "text": "E_i\\otimes F_i" }, { "math_id": 6, "text": "\\mathcal{E}_{LOCC}" }, { "math_id": 7, "text": "p_i=\\mathrm{Tr}[(E_i\\otimes F_i)\\rho (E_i\\otimes F_i)^{\\dagger}]" }, { "math_id": 8, "text": "\\rho_i=(E_i\\otimes F_i)\\rho (E_i\\otimes F_i)^{\\dagger}/\\mathrm{Tr}[(E_i\\otimes F_i)\\rho (E_i\\otimes F_i)^{\\dagger}]" }, { "math_id": 9, "text": "\\mu" }, { "math_id": 10, "text": "\\mu(\\rho)\\geq \\sum_i p_i\\mu(\\rho_i)" }, { "math_id": 11, "text": "\\mu(\\rho)\\geq \\sum_i \\mu(p_i\\rho_i)" }, { "math_id": 12, "text": "\\mu(\\varrho)=1" }, { "math_id": 13, "text": "\\varrho" } ]
https://en.wikipedia.org/wiki?curid=61638365
61641996
Maxwell's theorem (geometry)
Given a triangle and a point, constructs a second triangle with a special point Maxwell's theorem is the following statement about triangles in the plane. &lt;templatestyles src="Template:Blockquote/styles.css" /&gt;For a given triangle formula_1 and a point formula_3 not on the sides of that triangle construct a second triangle formula_0, such that the side formula_4 is parallel to the line segment formula_5, the side formula_6 is parallel to the line segment formula_7 and the side formula_8 is parallel to the line segment formula_9. Then the parallel to formula_10 through formula_11, the parallel to formula_12 through formula_13 and the parallel to formula_14 through formula_15 intersect in a common point formula_2. The theorem is named after the physicist James Clerk Maxwell (1831–1879), who proved it in his work on reciprocal figures, which are of importance in statics.
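The statement can be checked numerically. The sketch below uses an arbitrarily chosen triangle and point (illustrative values only), builds a second triangle with the prescribed side directions, and verifies that the three parallels meet in a single point.

```python
import numpy as np

def intersect(p, d, q, e):
    """Intersection of the lines p + s*d and q + t*e in the plane."""
    s, _ = np.linalg.solve(np.column_stack([d, -e]), q - p)
    return p + s * d

# Triangle ABC and a point V (arbitrary example values).
A, B, C = np.array([0.0, 0.0]), np.array([4.0, 0.0]), np.array([1.0, 3.0])
V = np.array([2.0, 1.0])

# Construct A'B'C' with A'B' parallel to CV, A'C' parallel to BV, B'C' parallel to AV.
A2 = np.array([10.0, -2.0])           # A' may be placed anywhere
B2 = A2 + (V - C)                     # any point on the line through A' parallel to CV
C2 = intersect(A2, V - B, B2, V - A)  # closes the triangle with the required directions

# Maxwell's theorem: the parallel to BC through A', to CA through B',
# and to AB through C' are concurrent at a common point V'.
P1 = intersect(A2, C - B, B2, A - C)
P2 = intersect(A2, C - B, C2, B - A)
print(P1, P2, np.allclose(P1, P2))    # both intersections coincide
```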
[ { "math_id": 0, "text": "A'B'C'" }, { "math_id": 1, "text": "ABC" }, { "math_id": 2, "text": "V'" }, { "math_id": 3, "text": "V" }, { "math_id": 4, "text": "A'B'" }, { "math_id": 5, "text": "CV" }, { "math_id": 6, "text": "A'C'" }, { "math_id": 7, "text": "BV" }, { "math_id": 8, "text": "B'C'" }, { "math_id": 9, "text": "AV" }, { "math_id": 10, "text": "AB" }, { "math_id": 11, "text": "C'" }, { "math_id": 12, "text": "BC" }, { "math_id": 13, "text": "A'" }, { "math_id": 14, "text": "AC" }, { "math_id": 15, "text": "B'" } ]
https://en.wikipedia.org/wiki?curid=61641996
61643903
Ecclesiastes 4
Fourth chapter of the biblical book Ecclesiastes Ecclesiastes 4 is the fourth chapter of the Book of Ecclesiastes in the Hebrew Bible or the Old Testament of the Christian Bible. The book contains philosophical speeches by a character called '(the) Qoheleth' ("the Teacher"), composed probably between the fifth and second centuries BCE. Peshitta, Targum, and Talmud attribute the authorship of the book to King Solomon. This chapter discusses life's hardship and life's companions. Text. The original text was written in Hebrew. This chapter is divided into 16 or 17 verses. Textual witnesses. Some early manuscripts containing the text of this chapter in Hebrew are of the Masoretic Text, which includes Codex Leningradensis (1008). There is also a translation into Koine Greek known as the Septuagint, made in the last few centuries BCE. Extant ancient manuscripts of the Septuagint version include Codex Vaticanus (B; formula_0B; 4th century), Codex Sinaiticus (S; BHK: formula_0S; 4th century), and Codex Alexandrinus (A; formula_0A; 5th century). The Greek text is probably derived from the work of Aquila of Sinope or his followers. Structure. The New King James Version divides this chapter into four sections: Oppression without comfort (4:1–3). Following the reaction to the apparently perverted nature of the world in chapter 3, Qoheleth states that being dead or unborn is better than living to experience the horror of oppression, with little solace and no comforter for the oppressed. A solution may have been hinted at before (2:26; 3:22), but none is suggested here. "Then I returned and considered all the oppression that is done under the sun:" "And look! The tears of the oppressed," "But they have no comforter—" "On the side of their oppressors there is power," "But they have no comforter." Verse 1. Qoheleth "returns" to his thoughts in . August Hahn suggests instead, "And anew, I saw..." 'All their Toil' (4:4–12). Qoheleth uses the oppressed's loneliness to switch his focus to competition as the sole motive for work and skill, which he also views as "vanity". It may lead on the one hand to envy and rivalry, or on the other hand to withdrawal from life altogether, the folding of the hands that elsewhere is associated with sloth (cf. Proverbs 6:10; 24:33). This brings the argument that co-operation is better than competition, as made in verse 9, illustrated in verses 10–12a and restated in 12b. The saying about the strength of a plaited cord in verse 12b resembles an ancient proverb found in the Epic of Gilgamesh. "Again, I considered all travail, and every right work, that for this a man is envied of his neighbour. This is also vanity and vexation of spirit." An isolated leader: Wise Youth, Foolish King (4:13–16). In this part Qoheleth is again comparing one idea against another: a wise youth is better than an elderly king who becomes foolish, but the popularity of that youth will not last, for this person will soon be replaced by another youth and ultimately forgotten as the cycle keeps repeating. Two universally true points are that isolation is a part of the pain in human experience and that the next generation will not solve the problem of the previous one (cf. ). In any case, Qoheleth seems to be drawing out an implication of 1:11, that the true and important things at one moment will only be forgotten and swallowed up in repetition of time. "Better is a poor and a wise child than an old and foolish king, who will no more be admonished." Verse 13. 
The elderly king was once wise (suggested by "no more"), but then lost his wisdom. Notes. &lt;templatestyles src="Reflist/styles.css" /&gt; References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": " \\mathfrak{G}" } ]
https://en.wikipedia.org/wiki?curid=61643903
616448
Wilson loop
Gauge field loop operator In quantum field theory, Wilson loops are gauge invariant operators arising from the parallel transport of gauge variables around closed loops. They encode all gauge information of the theory, allowing for the construction of loop representations which fully describe gauge theories in terms of these loops. In pure gauge theory they play the role of order operators for confinement, where they satisfy what is known as the area law. Originally formulated by Kenneth G. Wilson in 1974, they were used to construct links and plaquettes which are the fundamental parameters in lattice gauge theory. Wilson loops fall into the broader class of loop operators, with some other notable examples being 't Hooft loops, which are magnetic duals to Wilson loops, and Polyakov loops, which are the thermal version of Wilson loops. Definition. To properly define Wilson loops in gauge theory requires considering the fiber bundle formulation of gauge theories. Here for each point in the formula_1-dimensional spacetime formula_0 there is a copy of the gauge group formula_2 forming what's known as a fiber of the fibre bundle. These fiber bundles are called principal bundles. Locally the resulting space looks like formula_3 although globally it can have some twisted structure depending on how different fibers are glued together. The issue that Wilson lines resolve is how to compare points on fibers at two different spacetime points. This is analogous to parallel transport in general relativity which compares tangent vectors that live in the tangent spaces at different points. For principal bundles there is a natural way to compare different fiber points through the introduction of a connection, which is equivalent to introducing a gauge field. This is because a connection is a way to separate out the tangent space of the principal bundle into two subspaces known as the vertical and horizontal subspaces. The former consists of all vectors pointing along the fiber formula_2 while the latter consists of vectors that are perpendicular to the fiber. This allows for the comparison of fiber values at different spacetime points by connecting them with curves in the principal bundle whose tangent vectors always live in the horizontal subspace, so the curve is always perpendicular to any given fiber. If the starting fiber is at coordinate formula_4 with a starting point of the identity formula_5, then to see how this changes when moving to another spacetime coordinate formula_6, one needs to consider some spacetime curve formula_7 between formula_4 and formula_6. The corresponding curve in the principal bundle, known as the horizontal lift of formula_8, is the curve formula_9 such that formula_10 and that its tangent vectors always lie in the horizontal subspace. The fiber bundle formulation of gauge theory reveals that the Lie-algebra valued gauge field formula_11 is equivalent to the connection that defines the horizontal subspace, so this leads to a differential equation for the horizontal lift formula_12 This has a unique formal solution called the Wilson line between the two points formula_13 where formula_14 is the path-ordering operator, which is unnecessary for abelian theories. The horizontal lift starting at some initial fiber point other than the identity merely requires multiplication by the initial element of the original horizontal lift. More generally, it holds that if formula_15 then formula_16 for all formula_17. 
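Numerically, the path-ordered exponential can be approximated by an ordered product of exponentials over small segments of the path. The sketch below is a minimal illustration with a made-up su(2)-valued gauge field in two dimensions; the sign and ordering conventions follow the formula above, though conventions differ between references.

```python
import numpy as np
from scipy.linalg import expm

# Pauli matrices as generators of su(2) in the fundamental representation.
sigma = [np.array([[0, 1], [1, 0]], dtype=complex),
         np.array([[0, -1j], [1j, 0]], dtype=complex),
         np.array([[1, 0], [0, -1]], dtype=complex)]

def gauge_field(x):
    """Toy Lie-algebra-valued field A_mu(x); purely an illustrative example."""
    A_0 = 0.8 * np.cos(x[1]) * sigma[0]   # component along direction 0
    A_1 = 0.5 * x[0] * sigma[2]           # component along direction 1
    return [A_0, A_1]

def wilson_line(x_i, x_f, n_steps=2000):
    """Approximate the path-ordered exponential of i * integral A_mu dx^mu
    along the straight path from x_i to x_f."""
    x_i, x_f = np.asarray(x_i), np.asarray(x_f)
    dx = (x_f - x_i) / n_steps
    W = np.eye(2, dtype=complex)
    for k in range(n_steps):
        x = x_i + (k + 0.5) * dx                       # midpoint of the segment
        A = gauge_field(x)
        step = expm(1j * (A[0] * dx[0] + A[1] * dx[1]))
        W = step @ W                                   # path ordering: later points act on the left
    return W

W = wilson_line([0.0, 0.0], [1.0, 2.0])
print(np.allclose(W.conj().T @ W, np.eye(2)))   # True: each factor is unitary, so W lies in the group
```

Because the su(2) matrices at different points do not commute, the order of the factors matters; this is exactly what the path-ordering operator encodes.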
Under a local gauge transformation formula_18 the Wilson line transforms as formula_19 This gauge transformation property is often used to directly introduce the Wilson line in the presence of matter fields formula_20 transforming in the fundamental representation of the gauge group, where the Wilson line is an operator that makes the combination formula_21 gauge invariant. It allows for the comparison of the matter field at different points in a gauge invariant way. Alternatively, the Wilson lines can also be introduced by adding an infinitely heavy test particle charged under the gauge group. Its charge forms a quantized internal Hilbert space, which can be integrated out, yielding the Wilson line as the world-line of the test particle. This works in quantum field theory whether or not there actually is any matter content in the theory. However, the swampland conjecture known as the completeness conjecture claims that in a consistent theory of quantum gravity, every Wilson line and 't Hooft line of a particular charge consistent with the Dirac quantization condition must have a corresponding particle of that charge be present in the theory. Decoupling these particles by taking the infinite mass limit no longer works since this would form black holes. The trace of closed Wilson lines is a gauge invariant quantity known as the Wilson loop formula_22 Mathematically the term within the trace is known as the holonomy, which describes a mapping of the fiber into itself upon horizontal lift along a closed loop. The set of all holonomies itself forms a group, which for principal bundles must be a subgroup of the gauge group. Wilson loops satisfy the reconstruction property where knowing the set of Wilson loops for all possible loops allows for the reconstruction of all gauge invariant information about the gauge connection. Formally the set of all Wilson loops forms an overcomplete basis of solutions to the Gauss' law constraint. The set of all Wilson lines is in one-to-one correspondence with the representations of the gauge group. This can be reformulated in terms of Lie algebra language using the weight lattice of the gauge group formula_23. In this case the types of Wilson loops are in one-to-one correspondence with formula_24 where formula_25 is the Weyl group. Hilbert space operators. An alternative view of Wilson loops is to consider them as operators acting on the Hilbert space of states in Minkowski signature. Since the Hilbert space lives on a single time slice, the only Wilson loops that can act as operators on this space are ones formed using spacelike loops. Such operators formula_26 create a closed loop of electric flux, which can be seen by noting that the electric field operator formula_27 is nonzero on the loop formula_28 but it vanishes everywhere else. Using Stokes theorem it follows that the spatial loop measures the magnetic flux through the loop. Order operator. Since temporal Wilson lines correspond to the configuration created by infinitely heavy stationary quarks, Wilson loop associated with a rectangular loop formula_29 with two temporal components of length formula_30 and two spatial components of length formula_31, can be interpreted as a quark-antiquark pair at fixed separation. Over large times the vacuum expectation value of the Wilson loop projects out the state with the minimum energy, which is the potential formula_32 between the quarks. 
The excited states with energy formula_33 are exponentially suppressed with time, and so the expectation value goes as formula_34, making the Wilson loop useful for calculating the potential between quark pairs. This potential must necessarily be a monotonically increasing and concave function of the quark separation. Since spacelike Wilson loops are not fundamentally different from the temporal ones, the quark potential is really directly related to the pure Yang–Mills theory structure and is a phenomenon independent of the matter content. Elitzur's theorem ensures that local non-gauge invariant operators cannot have non-zero expectation values. Instead one must use non-local gauge invariant operators as order parameters for confinement. The Wilson loop is exactly such an order parameter in pure Yang–Mills theory, where in the confining phase its expectation value follows the area law formula_35 for a loop that encloses an area formula_36. This is motivated by the potential between infinitely heavy test quarks, which in the confinement phase is expected to grow linearly as formula_37, where formula_38 is known as the string tension. Meanwhile, in the Higgs phase the expectation value follows the perimeter law formula_39 where formula_40 is the perimeter length of the loop and formula_41 is some constant. The area law of Wilson loops can be used to demonstrate confinement in certain low dimensional theories directly, such as the Schwinger model, whose confinement is driven by instantons. Lattice formulation. In lattice field theory, Wilson lines and loops play a fundamental role in formulating gauge fields on the lattice. The smallest Wilson lines on the lattice, those between two adjacent lattice points, are known as links, with a single link starting from a lattice point formula_42 going in the formula_43 direction denoted by formula_44. Four links around a single square are known as a plaquette, with their trace forming the smallest Wilson loop. It is these plaquettes that are used to construct the lattice gauge action known as the Wilson action. Larger Wilson loops are expressed as products of link variables along some loop formula_29, denoted by formula_45 These Wilson loops are used to study confinement and quark potentials numerically. Linear combinations of Wilson loops are also used as interpolating operators that give rise to glueball states. The glueball masses can then be extracted from the correlation function between these interpolators. The lattice formulation of the Wilson loops also allows for an analytic demonstration of confinement in the strongly coupled phase, assuming the quenched approximation where quark loops are neglected. This is done by expanding out the Wilson action as a power series of traces of plaquettes, where the first non-vanishing term in the expectation value of the Wilson loop in an formula_46 gauge theory gives rise to an area law with a string tension of the form formula_47 where formula_48 is the inverse coupling constant and formula_49 is the lattice spacing. While this argument holds for both the abelian and non-abelian case, compact electrodynamics only exhibits confinement at strong coupling, with there being a phase transition to the Coulomb phase at formula_50, leaving the theory deconfined at weak coupling. Such a phase transition is not believed to exist for formula_51 gauge theories at zero temperature; instead, such theories exhibit confinement at all values of the coupling constant. Properties. Makeenko–Migdal loop equation. 
Similarly to the functional derivative which acts on functions of functions, functions of loops admit two types of derivatives called the area derivative and the perimeter derivative. To define the former, consider a contour formula_29 and another contour formula_52 which is the same contour but with an extra small loop at formula_53 in the formula_43-formula_54 plane with area formula_55. Then the area derivative of the loop functional formula_56 is defined through the same idea as the usual derivative, as the normalized difference between the functional of the two loops formula_57 The perimeter derivative is similarly defined whereby now formula_58 is a slight deformation of the contour formula_29 which at position formula_53 has a small extruding loop of length formula_59 in the formula_43 direction and of zero area. The perimeter derivative formula_60 of the loop functional is then defined as formula_61 In the large N-limit, the Wilson loop vacuum expectation value satisfies a closed functional form equation called the Makeenko–Migdal equation formula_62 Here formula_63 with formula_64 being a line that does not close from formula_53 to formula_65, with the two points however close to each other. The equation can also be written for finite formula_66, but in this case it does not factorize and instead leads to expectation values of products of Wilson loops, rather than the product of their expectation values. This gives rise to an infinite chain of coupled equations for different Wilson loop expectation values, analogous to the Schwinger–Dyson equations. The Makeenko–Migdal equation has been solved exactly in two dimensional formula_67 theory. Mandelstam identities. Gauge groups that admit fundamental representations in terms of formula_68 matrices have Wilson loops that satisfy a set of identities called the Mandelstam identities, with these identities reflecting the particular properties of the underlying gauge group. The identities apply to loops formed from two or more subloops, with formula_69 being a loop formed by first going around formula_70 and then going around formula_71. The Mandelstam identity of the first kind states that formula_72, with this holding for any gauge group in any dimension. Mandelstam identities of the second kind are acquired by noting that in formula_66 dimensions, any object with formula_73 totally antisymmetric indices vanishes, meaning that formula_74. In the fundamental representation, the holonomies used to form the Wilson loops are formula_68 matrix representations of the gauge groups. Contracting formula_73 holonomies with the delta functions yields a set of identities between Wilson loops. These can be written in terms the objects formula_75 defined iteratively so that formula_76 and formula_77 In this notation the Mandelstam identities of the second kind are formula_78 For example, for a formula_79 gauge group this gives formula_80. If the fundamental representation are matrices of unit determinant, then it also holds that formula_81. For example, applying this identity to formula_82 gives formula_83 Fundamental representations consisting of unitary matrices satisfy formula_84. Furthermore, while the equality formula_85 holds for all gauge groups in the fundamental representations, for unitary groups it moreover holds that formula_86. Renormalization. 
Since Wilson loops are operators of the gauge fields, the regularization and renormalization of the underlying Yang–Mills theory fields and couplings do not prevent the Wilson loops from requiring additional renormalization corrections. In a renormalized Yang–Mills theory, the particular way that the Wilson loops get renormalized depends on the geometry of the loop under consideration. The main features are that Wilson loops along smooth contours only require an additional multiplicative renormalization associated with a divergence proportional to the length of the contour, that contours with cusps acquire a further multiplicative renormalization factor formula_87 depending on the cusp angle formula_88, and that contours with self-intersections mix with other Wilson loops under renormalization. Additional applications. Scattering amplitudes. Wilson loops play a role in the theory of scattering amplitudes where a set of dualities between them and special types of scattering amplitudes has been found. These were first suggested at strong coupling using the AdS/CFT correspondence. For example, in formula_89 supersymmetric Yang–Mills theory maximally helicity violating amplitudes factorize into a tree-level component and a loop level correction. This loop level correction does not depend on the helicities of the particles, but it was found to be dual to certain polygonal Wilson loops in the large formula_66 limit, up to finite terms. While this duality was initially only suggested in the maximally helicity violating case, there are arguments that it can be extended to all helicity configurations by defining appropriate supersymmetric generalizations of the Wilson loop. String theory compactifications. In compactified theories, zero mode gauge field states that are locally pure gauge configurations but are globally inequivalent to the vacuum are parameterized by closed Wilson lines in the compact direction. The presence of these in a compactified open string theory is equivalent under T-duality to a theory with non-coincident D-branes, whose separations are determined by the Wilson lines. Wilson lines also play a role in orbifold compactifications where their presence leads to greater control of gauge symmetry breaking, giving a better handle on the final unbroken gauge group and also providing a mechanism for controlling the number of matter multiplets left after compactification. These properties make Wilson lines important in compactifications of superstring theories. Topological field theory. In a topological field theory, the expectation value of Wilson loops does not change under smooth deformations of the loop since the field theory does not depend on the metric. For this reason, Wilson loops are key observables in these theories and are used to calculate global properties of the manifold. In formula_90 dimensions they are closely related to knot theory with the expectation value of a product of loops depending only on the manifold structure and on how the loops are tied together. This led to the famous connection made by Edward Witten where he used Wilson loops in Chern–Simons theory to relate their expectation values to the Jones polynomials of knot theory. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
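In the fundamental representation the SU(2) Mandelstam identity quoted under Properties reduces to the matrix identity tr(U) tr(V) = tr(UV) + tr(UV^-1) for SU(2) matrices U and V, which follows from the Cayley–Hamilton theorem for unit-determinant 2x2 matrices. The sketch below, using randomly generated group elements purely as an illustration, checks it numerically.

```python
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(0)

# Pauli matrices as generators of su(2).
sigma = [np.array([[0, 1], [1, 0]], dtype=complex),
         np.array([[0, -1j], [1j, 0]], dtype=complex),
         np.array([[1, 0], [0, -1]], dtype=complex)]

def random_su2():
    """A random SU(2) element, exp(i a.sigma) for random real coefficients a."""
    a = rng.normal(size=3)
    return expm(1j * sum(c * s for c, s in zip(a, sigma)))

# Holonomies of two loops sharing a base point are SU(2) matrices U and V;
# composing the loops corresponds to multiplying the holonomies.
U, V = random_su2(), random_su2()
lhs = np.trace(U) * np.trace(V)
rhs = np.trace(U @ V) + np.trace(U @ np.linalg.inv(V))
print(np.isclose(lhs, rhs))   # True: tr(U)tr(V) = tr(UV) + tr(UV^-1) for SU(2)
```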
[ { "math_id": 0, "text": "M" }, { "math_id": 1, "text": "d" }, { "math_id": 2, "text": "G" }, { "math_id": 3, "text": "\\mathbb R^d \\times G" }, { "math_id": 4, "text": "x_i" }, { "math_id": 5, "text": "g_i=e" }, { "math_id": 6, "text": "x_f" }, { "math_id": 7, "text": "\\gamma:[0,1]\\rightarrow M" }, { "math_id": 8, "text": "\\gamma(t)" }, { "math_id": 9, "text": "\\tilde \\gamma(t)" }, { "math_id": 10, "text": "\\tilde \\gamma(0) = g_i" }, { "math_id": 11, "text": "A_\\mu(x) = A^a_\\mu(x)T^a" }, { "math_id": 12, "text": "\ni\\frac{dg(t)}{dt} = A_\\mu(x)\\frac{dx^\\mu}{dt} g(t).\n" }, { "math_id": 13, "text": "\ng_f(t_f) = W[x_i, x_f] = \\mathcal P\\exp\\bigg( i \\int_{x_i}^{x_f}A_\\mu dx^\\mu \\bigg),\n" }, { "math_id": 14, "text": "\\mathcal P" }, { "math_id": 15, "text": "\\tilde \\gamma'(0) = \\tilde \\gamma(0)g" }, { "math_id": 16, "text": "\\tilde \\gamma'(t) = \\tilde \\gamma(t)g" }, { "math_id": 17, "text": "t\\geq0" }, { "math_id": 18, "text": "g(x)" }, { "math_id": 19, "text": "\nW[x_i, x_f] \\rightarrow g(x_f) W[x_i, x_f] g^{-1}(x_i).\n" }, { "math_id": 20, "text": "\\phi(x)" }, { "math_id": 21, "text": "\\phi(x_i)^\\dagger W[x_i,x_f]\\phi(x_f)" }, { "math_id": 22, "text": "W[\\gamma] = \\text{tr}\\bigg[\\mathcal P \\exp\\bigg( i \\oint_\\gamma A_\\mu dx^\\mu\\bigg)\\bigg]." }, { "math_id": 23, "text": "\\Lambda_w" }, { "math_id": 24, "text": "\\Lambda_w/W" }, { "math_id": 25, "text": "W" }, { "math_id": 26, "text": "W[\\gamma]" }, { "math_id": 27, "text": "E^i" }, { "math_id": 28, "text": "E^iW[\\gamma]|0\\rangle \\neq 0" }, { "math_id": 29, "text": "\\gamma" }, { "math_id": 30, "text": "T" }, { "math_id": 31, "text": "r" }, { "math_id": 32, "text": "V(r)" }, { "math_id": 33, "text": "V(r)+\\Delta E" }, { "math_id": 34, "text": "\n\\langle W[\\gamma]\\rangle \\sim e^{-TV(r)}(1+\\mathcal O(e^{-T\\Delta E})),\n" }, { "math_id": 35, "text": "\n\\langle W[\\gamma]\\rangle \\sim e^{-aA[\\gamma]}\n" }, { "math_id": 36, "text": "A[\\gamma]" }, { "math_id": 37, "text": "V(r) \\sim \\sigma r" }, { "math_id": 38, "text": "\\sigma" }, { "math_id": 39, "text": "\n\\langle W[\\gamma]\\rangle \\sim e^{-bL[\\gamma]},\n" }, { "math_id": 40, "text": "L[\\gamma]" }, { "math_id": 41, "text": "b" }, { "math_id": 42, "text": "n" }, { "math_id": 43, "text": "\\mu" }, { "math_id": 44, "text": "U_\\mu(n)" }, { "math_id": 45, "text": "\nL[U] = \\text{tr} \\bigg[\\prod_{n \\in \\gamma} U_\\mu(n)\\bigg].\n" }, { "math_id": 46, "text": "\\text{SU}(3)" }, { "math_id": 47, "text": "\n\\sigma = - \\frac{1}{a^2}\\ln \\bigg(\\frac{\\beta}{18}\\bigg)(1+\\mathcal O(\\beta)),\n" }, { "math_id": 48, "text": "\\beta =6/g^2" }, { "math_id": 49, "text": "a" }, { "math_id": 50, "text": "\\beta \\sim 1.01" }, { "math_id": 51, "text": "\\text{SU}(N)" }, { "math_id": 52, "text": "\\gamma_{\\delta \\sigma_{\\mu\\nu}}" }, { "math_id": 53, "text": "x" }, { "math_id": 54, "text": "\\nu" }, { "math_id": 55, "text": "\\delta \\sigma_{\\mu\\nu}=dx_\\mu \\wedge dx_\\nu" }, { "math_id": 56, "text": "F[\\gamma]" }, { "math_id": 57, "text": "\n\\frac{\\delta F[\\gamma]}{\\delta \\sigma_{\\mu\\nu}(x)} = \\frac{1}{\\delta \\sigma_{\\mu\\nu}(x)}[F[\\gamma_{\\delta \\sigma_{\\mu\\nu}}]-F[\\gamma]].\n" }, { "math_id": 58, "text": "\\gamma_{\\delta x_\\mu}" }, { "math_id": 59, "text": "\\delta x_\\mu" }, { "math_id": 60, "text": "\\partial_\\mu^x" }, { "math_id": 61, "text": "\n\\partial_\\mu^x F[\\gamma] = \\frac{1}{\\delta x_\\mu}[F[\\gamma_{\\delta x_\\mu}]-F[\\gamma]].\n" }, { "math_id": 62, "text": "\n\\partial^x_\\mu 
\\frac{\\delta}{\\delta \\sigma_{\\mu\\nu}(x)}\\langle W[\\gamma]\\rangle = g^2 N \\oint_\\gamma dy_\\nu \\delta^{(D)}(x-y) \\langle W[\\gamma_{yx}]\\rangle \\langle W[\\gamma_{xy}]\\rangle.\n" }, { "math_id": 63, "text": "\\gamma = \\gamma_{xy}\\cup \\gamma_{yx}" }, { "math_id": 64, "text": "\\gamma_{xy}" }, { "math_id": 65, "text": "y" }, { "math_id": 66, "text": "N" }, { "math_id": 67, "text": "\\text{U}(\\infty)" }, { "math_id": 68, "text": "N\\times N" }, { "math_id": 69, "text": "\\gamma = \\gamma_2 \\circ \\gamma_1" }, { "math_id": 70, "text": "\\gamma_1" }, { "math_id": 71, "text": "\\gamma_2" }, { "math_id": 72, "text": "W[\\gamma_1\\circ \\gamma_2] = W[\\gamma_2 \\circ \\gamma_1]" }, { "math_id": 73, "text": "N+1" }, { "math_id": 74, "text": "\\delta^{a_1}_{[b_1}\\delta^{a_2}_{b_2}\\cdots \\delta^{a_{N+1}}_{b_{N+1}]} = 0" }, { "math_id": 75, "text": "M_K" }, { "math_id": 76, "text": "M_1[\\gamma] = W[\\gamma]" }, { "math_id": 77, "text": "\n(K+1)M_{K+1}[\\gamma_1, \\dots, \\gamma_{K+1}] = W[\\gamma_{K+1}]M_K[\\gamma_1,\\dots, \\gamma_K] - M_K[\\gamma_1 \\circ \\gamma_{K+1},\\gamma_2, \\dots, \\gamma_K] -\\cdots - M_K[\\gamma_1, \\gamma_2, \\dots, \\gamma_K\\circ \\gamma_{K+1}].\n" }, { "math_id": 78, "text": "\nM_{N+1}[\\gamma_1, \\dots, \\gamma_{N+1}] = 0.\n" }, { "math_id": 79, "text": "\\text{U}(1)" }, { "math_id": 80, "text": "W[\\gamma_1]W[\\gamma_2] = W[\\gamma_1\\circ \\gamma_2]" }, { "math_id": 81, "text": "M_N(\\gamma, \\dots, \\gamma)=1" }, { "math_id": 82, "text": "\\text{SU}(2)" }, { "math_id": 83, "text": "\nW[\\gamma_1]W[\\gamma_2] = W[\\gamma_1\\circ \\gamma_2^{-1}]+W[\\gamma_1\\circ \\gamma_2].\n" }, { "math_id": 84, "text": "W[\\gamma] = W^*[\\gamma^{-1}]" }, { "math_id": 85, "text": "W[I] = N" }, { "math_id": 86, "text": "|W[\\gamma]|\\leq N" }, { "math_id": 87, "text": "Z[\\phi]" }, { "math_id": 88, "text": "\\phi" }, { "math_id": 89, "text": "\\mathcal N=4" }, { "math_id": 90, "text": "2+1" } ]
https://en.wikipedia.org/wiki?curid=616448
61647741
Ecclesiastes 5
Fifth chapter of the biblical book Ecclesiastes Ecclesiastes 5 is the fifth chapter of the Book of Ecclesiastes in the Hebrew Bible or the Old Testament of the Christian Bible. The book contains philosophical speeches by a character called '(the) Qoheleth' ("the Teacher"), composed probably between the fifth and second centuries BCE. Peshitta, Targum, and Talmud attribute the authorship of the book to King Solomon. This chapter contains advice on how to approach God in the right way, and a discussion about poverty and wealth. Text. The original text was written in Hebrew. This chapter is divided into 19 or 20 verses. Textual witnesses. Some early manuscripts containing the text of this chapter in Hebrew are of the Masoretic Text, which includes Codex Leningradensis (1008). Fragments containing parts of this chapter were found among the Dead Sea Scrolls 4QQoha (4Q109; 175-150 BCE; extant verses 13–17). There is also a translation into Koine Greek known as the Septuagint, made in the last few centuries BCE. Extant ancient manuscripts of the Septuagint version include Codex Vaticanus (B; formula_0B; 4th century), Codex Sinaiticus (S; BHK: formula_0S; 4th century), and Codex Alexandrinus (A; formula_0A; 5th century). The Greek text is probably derived from the work of Aquila of Sinope or his followers. Structure. Stuart Weeks treats verses 1-9 as a section dealing with "fear of God", and connects verses 10-20 and 6:1-12 as a section dealing with "the problems and inadequacies of wealth". The New King James Version has two sections: The New International Version divides the verses in a similar pattern. In Hebrew texts, verse 1 is numbered 4:17, and verses 2-20 are numbered as 5:1-19. The approach to God (5:1–7). The lack of solution in chapter 4 on the theme of companionship leads the reader to consider whether God is the answer, and this part shows how to approach God in the right way. "For in the multitude of dreams and many words there is also vanity. But fear God." The poor under oppressive bureaucracy (5:8–9). Weeks suggests these two verses are "notoriously difficult". Qoheleth points to the frustration of the poor facing delays to obtain justice due to oppressive bureaucracy and tiers of hierarchy. "He that is higher than the highest regardeth" (KJV), from Hebrew "ḡā-ḇō-ah mê-‘al ḡā-ḇō-ah shō-mêr", is also translated as "the high official is watched by a higher" (NRSV), not referring to an official of any government bureaucracy, but rather to ambitious people who strive to be higher than others. However, there will always be people higher than these, ultimately pointing to God as the highest one. The problems and inadequacies of wealth (5:10–20). This section starts by stating three concise points about financial greed (verses 10–12): the lover of money never has enough; increasing resources will result in bigger financial commitments; more wealth causes less peace. The rich man cannot pass any of the wealth to his offspring (verse 14) nor take anything to the next life (verse 15). The whole passage of 5:8 to 6:9 is arranged such that the outer sections point to the center at verse 20. Qoheleth reiterates his earlier conclusion that God's gift is the ability to find enjoyment in the wealth, which keeps humans occupied in life, so that "they should not much call to mind the days of their lives". "For he will not much remember the days of his life because God keeps him occupied with joy in his heart." Verse 20. 
As God gives joy to people as the antidote to their obsessions, people should not think too far ahead but should enjoy the present. That capacity for enjoyment of God's other gifts is also a gift of God, a theme which Qoheleth develops in chapter 6. E. H. Plumptre offers "a satisfying meaning":&lt;templatestyles src="Template:Blockquote/styles.css" /&gt;The man who has learnt the secret of enjoyment is not anxious about the days of his life, does not brood even over its transitoriness, but takes each day tranquilly, as it comes, as God’s gift to him. Notes. &lt;templatestyles src="Reflist/styles.css" /&gt; References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": " \\mathfrak{G}" } ]
https://en.wikipedia.org/wiki?curid=61647741
61652290
Ecclesiastes 6
Sixth chapter of the biblical book Ecclesiastes Ecclesiastes 6 is the sixth chapter of the Book of Ecclesiastes in the Hebrew Bible or the Old Testament of the Christian Bible. The book contains philosophical speeches by a character called '(the) Qoheleth' ("the Teacher"), composed probably between the 5th and 2nd centuries BC. Peshitta, Targum, and Talmud attribute the authorship of the book to King Solomon. This chapter deals with wealth and insatiability. Text. The original text was written in Hebrew. This chapter is divided into 12 verses. The Latin Vulgate translation has 11 verses, with the final verse carried over to the start of chapter 7. Textual witnesses. Some early manuscripts containing the text of this chapter in Hebrew are of the Masoretic Text, which includes Codex Leningradensis (1008). Fragments containing parts of this chapter were found among the Dead Sea Scrolls 4QQoha (4Q109; 175-150 BCE; extant verses 1?, 3–8, 12). There is also a translation into Koine Greek known as the Septuagint, made in the last few centuries BCE. Extant ancient manuscripts of the Septuagint version include Codex Vaticanus (B; formula_0B; 4th century), Codex Sinaiticus (S; BHK: formula_0S; 4th century), and Codex Alexandrinus (A; formula_0A; 5th century). The Greek text is probably derived from the work of Aquila of Sinope or his followers. Wealth and its insecurity (6:1–6). This section focuses on the theme of wealth, to whom God gives it and its trappings, that the rich may live long and have much, but may die unsatisfied and unmourned, while someone else would ultimately enjoy the riches; therefore, they are worse off than a stillborn child, which at least finds rest (verse 6). "If a man begets a hundred children and lives many years, so that the days of his years are many, but his soul is not satisfied with goodness, or indeed he has no burial, I say that a stillborn child is better than he." Verse 3. The Jerusalem Bible (1966) and The Living Bible (1971) both suggest "as many daughters" as preferable translations in place of "lives many years". Retired Baptist minister Peter Pett suggests that, although Psalm 127 celebrates the blessing of many children, the number 100 should be interpreted as "a great many children, more than the norm". Insatiable longing (6:7–9). While people work to earn a living, Qoheleth argues that they are left unsatisfied because of the unanswered important questions: what is the point of wisdom, and what reason is there for the poor to find a role in life to improve one's lot? Verse 9 advises for contentment, because the desire to find answers beyond the obvious is seen as "vanity" ("hebel"). "All the labour of man is for his mouth, and yet the appetite is not filled." An impasse (6:10–12). This part starts a passage that ends in 7:13–14, insisting that 'no one knows what is good for humanity'. Everything has been defined, so humans are inherently incapable of contesting a cause with the mightier one (verse 10). "Whatever happens, it has already been given a name," "and it is known what man is;" "he cannot contend with Him who is stronger than he." Notes. &lt;templatestyles src="Reflist/styles.css" /&gt; References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": " \\mathfrak{G}" } ]
https://en.wikipedia.org/wiki?curid=61652290
61652598
Ecclesiastes 7
Seventh chapter of the biblical book Ecclesiastes Ecclesiastes 7 is the seventh chapter of the Book of Ecclesiastes in the Hebrew Bible or the Old Testament of the Christian Bible. The book contains philosophical speeches by a character called '(the) Qoheleth' ("the Teacher"), composed probably between the 5th and 2nd centuries BC. Peshitta, Targum, and Talmud attribute the authorship of the book to King Solomon. This chapter deals with suffering and sin. The style of the first half (verses 1–14) is similar to that of the 'sentence literature' collections (such as ) and, as in such collections, the sayings are linked by catchwords and thematic ties with the previous ones, with a series of "better ... than" presenting dialectic pairs of issues. The second half exposes the 'crookedness of life' (verse 13) that moves to the 'crookedness of humanity' (verse 29). Text. The original text was written in Hebrew. This chapter is divided into 29 verses. The Latin Vulgate has 30 verses, as it includes as verse 7:1. Textual witnesses. Some early manuscripts containing the text of this chapter in Hebrew are of the Masoretic Text, which includes Codex Leningradensis (1008). Fragments containing parts of this chapter were found among the Dead Sea Scrolls 4QQoha (4Q109; 175-150 BC; extant verses 1–10, 19–20). There is also a translation into Koine Greek known as the Septuagint, made in the last few centuries BC. Extant ancient manuscripts of the Septuagint version include Codex Vaticanus (B; formula_0B; 4th century), Codex Sinaiticus (S; BHK: formula_0S; 4th century), and Codex Alexandrinus (A; formula_0A; 5th century). The Greek text is probably derived from the work of Aquila of Sinope or his followers. Structure. In the Jerusalem Bible, this chapter opens Part Two of the book, and verses 1-7 are presented as a "prologue" comparable to the opening prologue in Ecclesiastes 1:4-11. E. H. Plumptre sees this chapter as an interruption to the "sequence of thought" being developed in chapter 6. The New American Bible (Revised Edition) divides the chapter into three parts, with verses 1-14 providing a "critique of [the] sages on the Day of Adversity", verses 15 a "critique of [the] sages on Justice and Wickedness", and verses 26-29 a "critique of [the] advice of women". Instructive suffering (7:1–6). This section gives the first instruction about suffering, to be followed by the exposition about its dangers in hindering wisdom (verses 7–10). As a funeral may cause one to think about life, whereas a party probably not, visits to 'house of mourning' may bring more valuable lessons for inner character, enabling true resolutions in one's life. "A good name is better than precious ointment," "And the day of death than the day of one’s birth. Verse 1. This verse opens a series of maxims continuing to verse 5. There is an alliteration in the Hebrew is hidden in English translation: "A good name (shem) is better than good ointment (shemen)". Four dangers (7:7–10). The four dangers to hinder wisdom are: corruption (verse 7), impatience (verse 8), bitterness (verse 9) and nostalgia (verse 10). To deal with these one needs to take 'a long-term view of life when reacting to adversity'. The need of wisdom (7:11–12). Wisdom, like a land which could be given as an inheritance, belongs to God but is granted to his people, and it has a deeper level of protective power than wealth. Life under God (7:13–14). 
Both good times and bad times are God-ordained and purposeful, so people should accept good when accessible and face adversity when it becomes reality. Neither too wise nor too foolish (7:15–22). This part states the 'inadvisability of extreme righteousness and wisdom', with the advice to adopt only some parts of wisdom, 'neither to be too wise and righteous, nor too foolish and wicked', but 'a bit of both', as those who fear God will succeed in both or 'escape the consequences of doing neither'. The Search for integrity (7:23–29). The section suggests that it is not so easy to find wisdom nor any 'definitive explanation of the world'. It closes with Qoheleth's ironic commentary in . "And I find something more bitter than death: the woman whose heart is snares and nets, and whose hands are fetters. He who pleases God escapes her, but the sinner is taken by her." Verse 26. The statements in this verse are not a polemic against women, but an allegorical warning against "Folly", described as an evilly seductive woman (cf. Proverbs 2:16-19; 5:20; 6:24-35; 7:5-27; 23:27-28), who is on a hunt to catch sinful people. Notes. &lt;templatestyles src="Reflist/styles.css" /&gt; References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": " \\mathfrak{G}" } ]
https://en.wikipedia.org/wiki?curid=61652598
61656252
Ecclesiastes 8
Eighth chapter of the biblical book Ecclesiastes Ecclesiastes 8 is the eighth chapter of the Book of Ecclesiastes in the Hebrew Bible or the Old Testament of the Christian Bible. The book contains philosophical speeches by a character called '(the) Qoheleth' ("the Teacher"), composed probably between the 5th and 2nd centuries BCE. Peshitta, Targum, and Talmud attribute the authorship of the book to King Solomon. This chapter concerns human and divine authority with the advice that fearing God is the wisest course. Text. The original text was written in Hebrew. This chapter is divided into 17 verses. Textual witnesses. Some early manuscripts containing the text of this chapter in Hebrew are of the Masoretic Text, which includes Codex Leningradensis (1008). There is also a translation into Koine Greek known as the Septuagint, made in the last few centuries BCE. Extant ancient manuscripts of the Septuagint version include Codex Vaticanus (B; formula_0B; 4th century), Codex Sinaiticus (S; BHK: formula_0S; 4th century), and Codex Alexandrinus (A; formula_0A; 5th century). The Greek text is probably derived from the work of Aquila of Sinope or his followers. Structure. The New King James Version has two sections within this chapter: Royal authority (8:1–9). Verse 1 closes the theme from chapter 7 and may be read as part of that section, leaving verses 2-9 as a group dealing with authority. Qoheleth then uses the previous observation of human authority to form a basis for understanding the divine authority, but Weeks notes that verses 2 and 3 present "several difficulties", and their sentence division is unclear. "Who is like the wise?" "And who knows the interpretation of a thing?" "A man's wisdom makes his face shine," "and the hardness of his face is changed." Fearing God is the wisest course (8:10–17). Echoing the idea in Ecclesiastes 3:16–17, Qoheleth affirms that it is still 'safer to stand in fear before God', even as the righteous are sometimes regarded as wicked and the punishment of the wicked seems lacking. "So I commended enjoyment, because a man has nothing better under the sun than to eat, drink, and be merry; for this will remain with him in his labor all the days of his life which God gives him under the sun." Verse 15. "To eat, drink, and be merry" refers to Ecclesiastes 2:23-24; 3:13; 5:18-20; 9:7, as the remedy that Qoheleth offers, that is, to accept God's gift and place oneself in his hands. Notes. &lt;templatestyles src="Reflist/styles.css" /&gt; References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": " \\mathfrak{G}" } ]
https://en.wikipedia.org/wiki?curid=61656252
6165632
Seymour Narrows
Section of Discovery Passage, British Columbia, Canada Seymour Narrows is a section of the Discovery Passage in British Columbia known for strong tidal currents. Discovery Passage lies between Vancouver Island at Menzies Bay, British Columbia and Quadra Island except at its northern end where the eastern shoreline is Sonora Island. The section known as Seymour Narrows begins about from the south end of Discovery Passage where it enters the Georgia Strait near Campbell River. For most of the length of the narrows, the channel is about wide. Through this narrow channel, currents can reach . Etymology. The narrows were named for Rear Admiral Sir George Francis Seymour who commanded the Pacific Station from 1844 to 1848. Seymour Narrows was described by Captain George Vancouver as "one of the vilest stretches of water in the world." Even after Ripple Rock was removed, it remains a challenging route. In March 1981, the freighter "Star Philippine" ran aground in the narrows. Hydrology. Seymour Narrows is notable also because the flowing current can be sufficiently turbulent to realize a Reynolds number of about formula_0, i.e. one hundred million, which is possibly the largest Reynolds number regularly attained in natural water channels on Earth (the current speed is about , the nominal depth about ). Turbulence develops usually around a Reynolds number of 2000, depending on the geometric structure of the channel. Seymour Narrows delineates part of the northern extent of the Salish Sea. Ripple Rock. Ripple Rock was a submerged twin-peak mountain that lay just nine feet beneath the surface of Seymour Narrows. It was a serious hazard to shipping, sinking 119 vessels and taking 114 lives. The gunboat USS "Saranac" was one of the rock's first recorded victims. On April 5, 1958, after twenty-seven months of tunneling and engineering work, Ripple Rock was blown up with 1,375 tons of Nitramex 2H explosive making it the largest commercial, non-nuclear blast in North America. The Halifax Explosion in 1917 was larger but it was not a deliberate act. The event was broadcast live on Canadian television. The footage is also regularly screened at the Campbell River Museum. Vancouver band The Evaporators wrote a song about the event and released it on their 2004 album "Ripple Rock". References. &lt;templatestyles src="Reflist/styles.css" /&gt; Canadian Tide and Current Tables, Vol. 6, 2003
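A rough check of the Reynolds number quoted above can be made with the standard relation Re = UL/ν, where U is the current speed, L a characteristic length such as the channel depth, and ν the kinematic viscosity of seawater (about 10⁻⁶ m²/s). The Python sketch below is illustrative only; the speed and depth values are assumptions chosen for the example, not figures taken from this article.

```python
def reynolds_number(speed_m_per_s, length_m, kinematic_viscosity=1.0e-6):
    """Re = U * L / nu; nu ~ 1e-6 m^2/s is typical for cold seawater."""
    return speed_m_per_s * length_m / kinematic_viscosity

# Illustrative values (assumed, not from the article): a current of a few
# metres per second over a depth of a few tens of metres already gives Re ~ 1e8.
print(f"Re ~ {reynolds_number(5.0, 20.0):.0e}")  # Re ~ 1e+08
```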
[ { "math_id": 0, "text": "10^8" } ]
https://en.wikipedia.org/wiki?curid=6165632
61660335
Discrete calculus
Discrete (i.e., incremental) version of infinitesimal calculus Discrete calculus, or the calculus of discrete functions, is the mathematical study of "incremental" change, in the same way that geometry is the study of shape and algebra is the study of generalizations of arithmetic operations. The word "calculus" is a Latin word, meaning originally "small pebble"; as such pebbles were used for calculation, the meaning of the word has evolved and today usually means a method of computation. Meanwhile, calculus, originally called infinitesimal calculus or "the calculus of infinitesimals", is the study of "continuous" change. Discrete calculus has two entry points, differential calculus and integral calculus. Differential calculus concerns incremental rates of change and the slopes of piece-wise linear curves. Integral calculus concerns accumulation of quantities and the areas under piece-wise constant curves. These two points of view are related to each other by the fundamental theorem of discrete calculus. The study of the concepts of change starts with their discrete form. The development is dependent on a parameter, the increment formula_0 of the independent variable. If we so choose, we can make the increment smaller and smaller and find the continuous counterparts of these concepts as "limits". Informally, the limit of discrete calculus as formula_1 is infinitesimal calculus. Even though it serves as a discrete underpinning of calculus, the main value of discrete calculus is in applications. Two initial constructions. Discrete differential calculus is the study of the definition, properties, and applications of the difference quotient of a function. The process of finding the difference quotient is called "differentiation". Given a function defined at several points of the real line, the difference quotient at a point is a way of encoding the small-scale (i.e., from that point to the next) behavior of the function. By finding the difference quotient of a function at every pair of consecutive points in its domain, it is possible to produce a new function, called the "difference quotient function" or just the "difference quotient" of the original function. In formal terms, the difference quotient is a linear operator which takes a function as its input and produces a second function as its output. This is more abstract than many of the processes studied in elementary algebra, where functions usually input a number and output another number. For example, if the doubling function is given the input three, then it outputs six, and if the squaring function is given the input three, then it outputs nine. The derivative, however, can take the squaring function as an input. This means that the derivative takes all the information of the squaring function—such as that two is sent to four, three is sent to nine, four is sent to sixteen, and so on—and uses this information to produce another function. The function produced by differentiating the squaring function turns out to be something close to the doubling function. Suppose the functions are defined at points separated by an increment formula_2: formula_3 The "doubling function" may be denoted by formula_4 and the "squaring function" by formula_5. 
The "difference quotient" is the rate of change of the function over one of the intervals formula_6 defined by the formula: formula_7 It takes the function formula_8 as an input, that is, all the information—such as that two is sent to four, three is sent to nine, four is sent to sixteen, and so on—and uses this information to output another function, the function formula_9, as it turns out. As a matter of convenience, the new function may be defined at the middle points of the above intervals: formula_10 As the rate of change is that for the whole interval formula_6, any point within it can be used as such a reference or, even better, the whole interval, which makes the difference quotient a formula_11-cochain. The most common notation for the difference quotient is: formula_12 If the input of the function represents time, then the difference quotient represents change with respect to time. For example, if formula_8 is a function that takes a time as input and gives the position of a ball at that time as output, then the difference quotient of formula_8 is how the position is changing in time, that is, it is the velocity of the ball. If a function is linear (that is, if the points of the graph of the function lie on a straight line), then the function can be written as formula_13, where formula_14 is the independent variable, formula_15 is the dependent variable, formula_16 is the formula_15-intercept, and: formula_17 This gives an exact value for the slope of a straight line. If the function is not linear, however, then the change in formula_15 divided by the change in formula_14 varies. The difference quotient gives an exact meaning to the notion of change in output with respect to change in input. To be concrete, let formula_8 be a function, and fix a point formula_14 in the domain of formula_8. formula_18 is a point on the graph of the function. If formula_19 is the increment of formula_14, then formula_20 is the next value of formula_14. Therefore, formula_21 is the increment of formula_18. The slope of the line between these two points is formula_22 So formula_23 is the slope of the line between formula_18 and formula_21. Here is a particular example, the difference quotient of the squaring function. Let formula_5 be the squaring function. Then: formula_24 The difference quotient of the difference quotient is called the "second difference quotient" and it is defined at formula_25 and so on. Discrete integral calculus is the study of the definitions, properties, and applications of the Riemann sums. The process of finding the value of a sum is called "integration". In technical language, integral calculus studies a certain linear operator. The "Riemann sum" inputs a function and outputs a function, which gives the algebraic sum of areas between the part of the graph of the input and the x-axis. A motivating example is the distance traveled in a given time. formula_26 If the speed is constant, only multiplication is needed, but if the speed changes, we evaluate the distance traveled by breaking up the time into many short intervals of time, then multiplying the time elapsed in each interval by one of the speeds in that interval, and then taking the sum (a Riemann sum) of the distance traveled in each interval. When velocity is constant, the total distance traveled over the given time interval can be computed by multiplying velocity and time. For example, travelling a steady 50 mph for 3 hours results in a total distance of 150 miles. 
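The calculations above are easy to reproduce numerically. The following Python sketch (the helper names are chosen here for illustration, not taken from the article) computes the difference quotient of the squaring function on a uniform grid, checks that its value on the interval starting at x equals 2x + h, and recovers the 150-mile distance as a Riemann sum over short time intervals.

```python
def difference_quotient(f, a, h, n):
    """Values of (f(x + h) - f(x)) / h on the intervals [a + i*h, a + (i+1)*h], i = 0..n-1."""
    return [(f(a + (i + 1) * h) - f(a + i * h)) / h for i in range(n)]

def riemann_sum(f, a, h, n):
    """Sum of f evaluated at interval midpoints, times the increment h."""
    return sum(f(a + i * h + h / 2) * h for i in range(n))

square = lambda x: x * x
a, h, n = 0.0, 0.5, 6
dq = difference_quotient(square, a, h, n)
# Each entry equals 2*x + h, where x = a + i*h is the left end of its interval.
assert all(abs(dq[i] - (2 * (a + i * h) + h)) < 1e-12 for i in range(n))

# Constant speed of 50 mph over 3 hours, split into n short intervals: 150 miles.
speed = lambda t: 50.0
print(riemann_sum(speed, 0.0, 3.0 / n, n))  # 150.0
```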
In the diagram on the left, when constant velocity and time are graphed, these two values form a rectangle with height equal to the velocity and width equal to the time elapsed. Therefore, the product of velocity and time also calculates the rectangular area under the (constant) velocity curve. This connection between the area under a curve and distance traveled can be extended to "any" irregularly shaped region exhibiting an incrementally varying velocity over a given time period. If the bars in the diagram on the right represent speed as it varies from one interval to the next, the distance traveled (between the times represented by formula_27 and formula_16) is the area of the shaded region formula_28. So, the interval between formula_27 and formula_16 is divided into a number of equal segments, the length of each segment represented by the symbol formula_0. For each small segment, we have one value of the function formula_29. Call that value formula_30. Then the area of the rectangle with base formula_0 and height formula_30 gives the distance (time formula_0 multiplied by speed formula_30) traveled in that segment. Associated with each segment is the value of the function above it, formula_31. The sum of all such rectangles gives the area between the axis and the piece-wise constant curve, which is the total distance traveled. Suppose a function is defined at the mid-points of the intervals of equal length formula_2: formula_32 Then the Riemann sum from formula_27 to formula_33 in sigma notation is: formula_34 As this computation is carried out for each formula_35, the new function is defined at the points: formula_3 The fundamental theorem of calculus states that differentiation and integration are inverse operations. More precisely, it relates the difference quotients to the Riemann sums. It can also be interpreted as a precise statement of the fact that differentiation is the inverse of integration. The fundamental theorem of calculus: If a function formula_8 is defined on a partition of the interval formula_36, formula_33, and if formula_37 is a function whose difference quotient is formula_8, then we have: formula_38 Furthermore, for every formula_39, we have: formula_40 This is also a prototype solution of a difference equation. Difference equations relate an unknown function to its difference or difference quotient, and are ubiquitous in the sciences. History. The early history of discrete calculus is the history of calculus. Such basic ideas as the difference quotients and the Riemann sums appear implicitly or explicitly in definitions and proofs. After the limit is taken, however, they are never to be seen again. However, Kirchhoff's voltage law (1847) can be expressed in terms of the one-dimensional discrete exterior derivative. During the 20th century, discrete calculus remains interlinked with infinitesimal calculus, especially differential forms, but also starts to draw from algebraic topology as both develop. The main contributions come from the following individuals: The recent development of discrete calculus, starting with Whitney, has been driven by the needs of applied modeling. Applications. Discrete calculus is used for modeling either directly or indirectly as a discretization of infinitesimal calculus in every branch of the physical sciences, actuarial science, computer science, statistics, engineering, economics, business, medicine, demography, and in other fields wherever a problem can be mathematically modeled. 
It allows one to go from (non-constant) rates of change to the total change or vice versa, and many times in studying a problem we know one and are trying to find the other. Physics makes particular use of calculus; all discrete concepts in classical mechanics and electromagnetism are related through discrete calculus. The mass of an object of known density that varies incrementally, the moment of inertia of such objects, as well as the total energy of an object within a discrete conservative field can be found by the use of discrete calculus. An example of the use of discrete calculus in mechanics is Newton's second law of motion: historically stated, it expressly uses the term "change of motion", which implies the difference quotient, saying "The change of momentum of a body is equal to the resultant force acting on the body and is in the same direction." Commonly expressed today as Force = Mass × Acceleration, it invokes discrete calculus when the change is incremental because acceleration is the difference quotient of velocity with respect to time or the second difference quotient of the spatial position. Starting from knowing how an object is accelerating, we use the Riemann sums to derive its path. Maxwell's theory of electromagnetism and Einstein's theory of general relativity have been expressed in the language of discrete calculus. Chemistry uses calculus in determining reaction rates and radioactive decay (exponential decay). In biology, population dynamics starts with reproduction and death rates to model population changes (population modeling). In engineering, difference equations are used to plot a course of a spacecraft within zero gravity environments, to model heat transfer, diffusion, and wave propagation. The discrete analogue of Green's theorem is applied in an instrument known as a planimeter, which is used to calculate the area of a flat surface on a drawing. For example, it can be used to calculate the amount of area taken up by an irregularly shaped flower bed or swimming pool when designing the layout of a piece of property. It can be used to efficiently calculate sums of rectangular domains in images, to rapidly extract features and detect objects; another algorithm that could be used is the summed area table. In the realm of medicine, calculus can be used to find the optimal branching angle of a blood vessel so as to maximize flow. From the decay laws for a particular drug's elimination from the body, it is used to derive dosing laws. In nuclear medicine, it is used to build models of radiation transport in targeted tumor therapies. In economics, calculus allows for the determination of maximal profit by calculating both marginal cost and marginal revenue, as well as modeling of markets. In signal processing and machine learning, discrete calculus allows for appropriate definitions of operators (e.g., convolution), level set optimization and other key functions for neural network analysis on graph structures. Discrete calculus can be used in conjunction with other mathematical disciplines. For example, it can be used in probability theory to determine the probability of a discrete random variable from an assumed density function. Calculus of differences and sums. Suppose a function (a formula_41-cochain) formula_8 is defined at points separated by an increment formula_2: formula_3 The "difference" (or the exterior derivative, or the coboundary operator) of the function is given by: formula_42 It is defined at each of the above intervals; it is a formula_11-cochain. 
Suppose a formula_11-cochain formula_43 is defined at each of the above intervals. Then its "sum" is a function (a formula_41-cochain) defined at each of the points by: formula_44 These are their properties: formula_46 formula_47 formula_48 formula_49 formula_50 The definitions are applied to graphs as follows. If a function (a formula_41-cochain) formula_8 is defined at the nodes of a graph: formula_51 then its "exterior derivative" (or the differential) is the difference, i.e., the following function defined on the edges of the graph (formula_11-cochain): formula_52 If formula_43 is a formula_11-cochain, then its "integral" over a sequence of edges formula_53 of the graph is the sum of its values over all edges of formula_53 ("path integral"): formula_54 These are the properties: formula_55 formula_56 formula_57 formula_59 formula_60 where a formula_11-chain formula_53 consists of formula_61 for some fixed formula_62, then formula_63 See references. Chains of simplices and cubes. A simplicial complex formula_64 is a set of simplices that satisfies the following conditions: 1. Every face of a simplex from formula_64 is also in formula_64. 2. The non-empty intersection of any two simplices formula_65 is a face of both formula_66 and formula_67. By definition, an orientation of a "k"-simplex is given by an ordering of the vertices, written as formula_68, with the rule that two orderings define the same orientation if and only if they differ by an even permutation. Thus every simplex has exactly two orientations, and switching the order of two vertices changes an orientation to the opposite orientation. For example, choosing an orientation of a 1-simplex amounts to choosing one of the two possible directions, and choosing an orientation of a 2-simplex amounts to choosing what "counterclockwise" should mean. Let formula_64 be a simplicial complex. A simplicial "k"-chain is a finite formal sum formula_69 where each "c""i" is an integer and σ"i" is an oriented "k"-simplex. In this definition, we declare that each oriented simplex is equal to the negative of the simplex with the opposite orientation. For example, formula_70 The vector space of "k"-chains on formula_64 is written formula_71. It has a basis in one-to-one correspondence with the set of "k"-simplices in formula_64. To define a basis explicitly, one has to choose an orientation of each simplex. One standard way to do this is to choose an ordering of all the vertices and give each simplex the orientation corresponding to the induced ordering of its vertices. Let formula_72 be an oriented "k"-simplex, viewed as a basis element of formula_71. The boundary operator formula_73 is the linear operator defined by: formula_74 where the oriented simplex formula_75 is the formula_76th face of formula_53, obtained by deleting its formula_76th vertex. In formula_71, elements of the subgroup formula_77 are referred to as cycles, and the subgroup formula_78 is said to consist of boundaries. A direct computation shows that formula_79. In geometric terms, this says that the boundary of anything has no boundary. Equivalently, the vector spaces formula_80 form a chain complex. Another equivalent statement is that formula_81 is contained in formula_82. A cubical complex is a set composed of points, line segments, squares, cubes, and their "n"-dimensional counterparts. They are used analogously to simplices to form complexes. An elementary interval is a subset formula_83 of the form formula_84 for some formula_85. 
An elementary cube formula_86 is the finite product of elementary intervals, i.e. formula_87 where formula_88 are elementary intervals. Equivalently, an elementary cube is any translate of a unit cube formula_89 embedded in Euclidean space formula_90 (for some formula_91 with formula_92). A set formula_93 is a cubical complex if it can be written as a union of elementary cubes (or possibly, is homeomorphic to such a set) and it contains all of the faces of all of its cubes. The boundary operator and the chain complex are defined similarly to those for simplicial complexes. More general are cell complexes. A chain complex formula_94 is a sequence of vector spaces formula_95 connected by linear operators (called boundary operators) formula_96, such that the composition of any two consecutive maps is the zero map. Explicitly, the boundary operators satisfy formula_97, or with indices suppressed, formula_98. The complex may be written out as follows. formula_99 A simplicial map is a map between simplicial complexes with the property that the images of the vertices of a simplex always span a simplex (therefore, vertices have vertices for images). A simplicial map formula_8 from a simplicial complex formula_64 to another formula_100 is a function from the vertex set of formula_64 to the vertex set of formula_100 such that the image of each simplex in formula_64 (viewed as a set of vertices) is a simplex in formula_100. It generates a linear map, called a chain map, from the chain complex of formula_64 to the chain complex of formula_100. Explicitly, it is given on formula_101-chains by formula_102 if formula_103 are all distinct, and otherwise it is set equal to formula_41. A chain map formula_8 between two chain complexes formula_104 and formula_105 is a sequence formula_106 of homomorphisms formula_107 for each formula_35 that commutes with the boundary operators on the two chain complexes, so formula_108. This is written out in the following commutative diagram: A chain map sends cycles to cycles and boundaries to boundaries. See references. Discrete differential forms: cochains. For each vector space "Ci" in the chain complex we consider its dual space formula_109 and formula_110 is its dual linear operator formula_111 This has the effect of "reversing all the arrows" of the original complex, leaving a cochain complex formula_112 The cochain complex formula_113 is the dual notion to a chain complex. It consists of a sequence of vector spaces formula_114 connected by linear operators formula_115 satisfying formula_116. The cochain complex may be written out in a similar fashion to the chain complex. formula_117 The index formula_35 in either formula_118 or formula_119 is referred to as the degree (or dimension). The difference between chain and cochain complexes is that, in chain complexes, the differentials decrease dimension, whereas in cochain complexes they increase dimension. The elements of the individual vector spaces of a (co)chain complex are called cochains. The elements in the kernel of formula_120 are called cocycles (or closed elements), and the elements in the image of formula_120 are called coboundaries (or exact elements). Right from the definition of the differential, all boundaries are cycles. The Poincaré lemma states that if formula_121 is an open ball in formula_122, any closed formula_123-form formula_124 defined on formula_121 is exact, for any integer formula_123 with formula_125. 
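The statement that "the boundary of anything has no boundary" can be checked by hand on the smallest interesting example. The Python sketch below is an illustration added here (not part of the original text): it writes the boundary operators of a single filled triangle with vertices 0, 1, 2 as matrices and verifies that their composition vanishes; transposing the matrices gives the coboundary operators on cochains, so the composition in the other direction vanishes as well.

```python
# Chain complex of a single filled triangle with vertices 0, 1, 2.
# Basis of 1-chains: edges (0,1), (0,2), (1,2); basis of 2-chains: the simplex (0,1,2).
bd1 = [[-1, -1,  0],   # boundary of each edge, written in the vertex basis 0, 1, 2
       [ 1,  0, -1],
       [ 0,  1,  1]]
bd2 = [[ 1],           # boundary of (0,1,2) = (1,2) - (0,2) + (0,1)
       [-1],
       [ 1]]

def matmul(a, b):
    return [[sum(a[i][k] * b[k][j] for k in range(len(b)))
             for j in range(len(b[0]))] for i in range(len(a))]

print(matmul(bd1, bd2))  # [[0], [0], [0]] -- the boundary of a boundary is zero
```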
When we refer to cochains as discrete (differential) forms, we refer to formula_120 as the exterior derivative. We also use the calculus notation for the values of the forms: formula_126 Stokes' theorem is a statement about the discrete differential forms on manifolds, which generalizes the fundamental theorem of discrete calculus for a partition of an interval: formula_127 Stokes' theorem says that the sum of a form formula_124 over the boundary of some orientable manifold formula_128 is equal to the sum of its exterior derivative formula_129 over the whole of formula_128, i.e., formula_130 It is worthwhile to examine the underlying principle by considering an example for formula_131 dimensions. The essential idea can be understood by the diagram on the left, which shows that, in an oriented tiling of a manifold, the interior paths are traversed in opposite directions; their contributions to the path integral thus cancel each other pairwise. As a consequence, only the contribution from the boundary remains. See references. The wedge product of forms. In discrete calculus, this is a construction that creates from forms higher order forms: adjoining two cochains of degree formula_123 and formula_132 to form a composite cochain of degree formula_133. For cubical complexes, the wedge product is defined on every cube seen as a vector space of the same dimension. For simplicial complexes, the wedge product is implemented as the cup product: if formula_134 is a formula_123-cochain and formula_135 is a formula_132-cochain, then formula_136 where formula_53 is a formula_137 -simplex and formula_138, is the simplex spanned by formula_64 into the formula_139-simplex whose vertices are indexed by formula_140. So, formula_141 is the formula_123-th "front face" and formula_142 is the formula_132-th "back face" of formula_53, respectively. The coboundary of the cup product of cochains formula_134 and formula_135 is given by formula_143 The cup product of two cocycles is again a cocycle, and the product of a coboundary with a cocycle (in either order) is a coboundary. The cup product operation satisfies the identity formula_144 In other words, the corresponding multiplication is graded-commutative. See references. Laplace operator. The Laplace operator formula_145 of a function formula_8 at a vertex formula_123, is (up to a factor) the rate at which the average value of formula_8 over a cellular neighborhood of formula_123 deviates from formula_146. The Laplace operator represents the flux density of the gradient flow of a function. For instance, the net rate at which a chemical dissolved in a fluid moves toward or away from some point is proportional to the Laplace operator of the chemical concentration at that point; expressed symbolically, the resulting equation is the diffusion equation. For these reasons, it is extensively used in the sciences for modelling various physical phenomena. The codifferential formula_147 is an operator defined on formula_101-forms by: formula_148 where formula_120 is the exterior derivative or differential and formula_149 is the Hodge star operator. The codifferential is the adjoint of the exterior derivative according to Stokes' theorem: formula_150 Since the differential satisfies formula_151, the codifferential has the corresponding property formula_152 The Laplace operator is defined by: formula_153 See references. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "\\Delta x" }, { "math_id": 1, "text": "\\Delta x\\to 0" }, { "math_id": 2, "text": "\\Delta x=h>0" }, { "math_id": 3, "text": "a, a+h, a+2h, \\ldots, a+nh,\\ldots" }, { "math_id": 4, "text": "g(x)=2x" }, { "math_id": 5, "text": "f(x)=x^2" }, { "math_id": 6, "text": "[x,x+h]" }, { "math_id": 7, "text": "\\frac{f(x+h)-f(x)}{h}." }, { "math_id": 8, "text": "f" }, { "math_id": 9, "text": "g(x)=2x+h" }, { "math_id": 10, "text": "a+h/2, a+h+h/2, a+2h+h/2,..., a+nh+h/2,..." }, { "math_id": 11, "text": "1" }, { "math_id": 12, "text": "\\frac{\\Delta f}{\\Delta x}(x+h/2)=\\frac{f(x+h)-f(x)}{h}." }, { "math_id": 13, "text": "y=mx + b" }, { "math_id": 14, "text": "x" }, { "math_id": 15, "text": "y" }, { "math_id": 16, "text": "b" }, { "math_id": 17, "text": "m= \\frac{\\text{rise}}{\\text{run}}= \\frac{\\text{change in } y}{\\text{change in } x} = \\frac{\\Delta y}{\\Delta x}." }, { "math_id": 18, "text": "(x, f(x))" }, { "math_id": 19, "text": "h" }, { "math_id": 20, "text": "x + h" }, { "math_id": 21, "text": "(x+h, f(x+h))" }, { "math_id": 22, "text": "m = \\frac{f(x+h) - f(x)}{(x+h) - x} = \\frac{f(x+h) - f(x)}{h}." }, { "math_id": 23, "text": "m" }, { "math_id": 24, "text": "\\begin{align}\\frac{\\Delta f}{\\Delta x}(x) &={(x+h)^2 - x^2\\over{h}} \\\\\n&={x^2 + 2hx + h^2 - x^2\\over{h}} \\\\\n&={2hx + h^2\\over{h}} \\\\\n&= 2x + h .\n\\end{align}\n" }, { "math_id": 25, "text": "a+h, a+2h, a+3h, \\ldots, a+nh,\\ldots" }, { "math_id": 26, "text": "\\text{distance} = \\text{speed} \\cdot \\text{time}" }, { "math_id": 27, "text": "a" }, { "math_id": 28, "text": "s" }, { "math_id": 29, "text": "f(x)" }, { "math_id": 30, "text": "v" }, { "math_id": 31, "text": "f(x) = v" }, { "math_id": 32, "text": "a+h/2, a+h+h/2, a+2h+h/2,\\ldots, a+nh-h/2,\\ldots" }, { "math_id": 33, "text": "b=a+nh" }, { "math_id": 34, "text": "\\sum_{i=1}^n f(a+ih)\\, \\Delta x." }, { "math_id": 35, "text": "n" }, { "math_id": 36, "text": "[a, b]" }, { "math_id": 37, "text": "F" }, { "math_id": 38, "text": "\\sum_{i=0}^{n-1} f(a+ih+h/2)\\, \\Delta x = F(b) - F(a)." }, { "math_id": 39, "text": "m=0,1,2,\\ldots,n-1" }, { "math_id": 40, "text": "\\frac{\\Delta}{\\Delta x}\\sum_{i=0}^m f(a+ih+h/2)\\, \\Delta x = f(a+mh+h/2)." }, { "math_id": 41, "text": "0" }, { "math_id": 42, "text": "\\big(\\Delta f\\big)\\big([x,x+h]\\big)=f(x+h)-f(x)." }, { "math_id": 43, "text": "g" }, { "math_id": 44, "text": "\\left(\\sum g\\right)\\!(a+nh) = \\sum_{i=1}^{n} g\\big([a+(i-1)h,a+ih]\\big)." }, { "math_id": 45, "text": "c" }, { "math_id": 46, "text": "\\Delta c = 0" }, { "math_id": 47, "text": "\\Delta (a f + b g) = a \\,\\Delta f + b \\,\\Delta g,\\quad \\sum (a f + b g) = a \\,\\sum f + b \\,\\sum g" }, { "math_id": 48, "text": " \\Delta (f g) = f \\,\\Delta g + g \\,\\Delta f + \\Delta f \\,\\Delta g " }, { "math_id": 49, "text": " \\left( \\sum \\Delta f\\right)\\!(a+nh) = f(a+nh)-f(a) " }, { "math_id": 50, "text": " \\Delta\\!\\left(\\sum g\\right) = g " }, { "math_id": 51, "text": "a, b, c, \\ldots " }, { "math_id": 52, "text": "\\left(df\\right)\\!\\big([a,b]\\big) = f(b)-f(a)." }, { "math_id": 53, "text": "\\sigma" }, { "math_id": 54, "text": "\\int_\\sigma g = \\sum_{\\sigma} g\\big([a,b]\\big)." 
}, { "math_id": 55, "text": "dc = 0" }, { "math_id": 56, "text": "d(a f + b g) = a \\,df + b \\,dg,\\quad \\int_\\sigma (a f + b g) = a \\,\\int_\\sigma f + b \\,\\int_\\sigma g" }, { "math_id": 57, "text": "d(f g) = f \\,dg + g \\,df + df \\,dg" }, { "math_id": 58, "text": "[a_0,a_1],[a_1,a_2],...,[a_{n-1},a_n]" }, { "math_id": 59, "text": "\\int_\\sigma df = f(a_n)-f(a_0)" }, { "math_id": 60, "text": "f(x) = \\int_\\sigma g" }, { "math_id": 61, "text": "[a_0,a_1],[a_1,a_2],...,[a_{n-1},x]" }, { "math_id": 62, "text": "a_0" }, { "math_id": 63, "text": "df = g" }, { "math_id": 64, "text": "S" }, { "math_id": 65, "text": "\\sigma_1, \\sigma_2 \\in S" }, { "math_id": 66, "text": "\\sigma_1" }, { "math_id": 67, "text": "\\sigma_2" }, { "math_id": 68, "text": "(v_0,...,v_k)" }, { "math_id": 69, "text": "\\sum_{i=1}^N c_i \\sigma_i, \\," }, { "math_id": 70, "text": " (v_0,v_1) = -(v_1,v_0)." }, { "math_id": 71, "text": "C_k" }, { "math_id": 72, "text": "\\sigma = (v_0,...,v_k)" }, { "math_id": 73, "text": "\\partial_k: C_k \\rightarrow C_{k-1}" }, { "math_id": 74, "text": "\\partial_k(\\sigma)=\\sum_{i=0}^k (-1)^i (v_0 , \\dots , \\widehat{v_i} , \\dots ,v_k)," }, { "math_id": 75, "text": "(v_0 , \\dots , \\widehat{v_i} , \\dots ,v_k)" }, { "math_id": 76, "text": "i" }, { "math_id": 77, "text": "Z_k = \\ker \\partial_k" }, { "math_id": 78, "text": "B_k = \\operatorname{im} \\partial_{k+1}" }, { "math_id": 79, "text": "\\partial^2= 0" }, { "math_id": 80, "text": "(C_k, \\partial_k)" }, { "math_id": 81, "text": "B_k" }, { "math_id": 82, "text": "Z_k" }, { "math_id": 83, "text": "I\\subset\\mathbf{R}" }, { "math_id": 84, "text": "I = [\\ell, \\ell+1]\\quad\\text{or}\\quad I=[\\ell, \\ell]" }, { "math_id": 85, "text": "\\ell\\in\\mathbf{Z}" }, { "math_id": 86, "text": "Q" }, { "math_id": 87, "text": "Q=I_1\\times I_2\\times \\cdots\\times I_d\\subset \\mathbf{R}^d" }, { "math_id": 88, "text": "I_1,I_2,\\ldots,I_d" }, { "math_id": 89, "text": "[0,1]^n" }, { "math_id": 90, "text": "\\mathbf{R}^d" }, { "math_id": 91, "text": "n,d\\in\\mathbf{N}\\cup\\{0\\}" }, { "math_id": 92, "text": "n\\leq d" }, { "math_id": 93, "text": "X\\subseteq\\mathbf{R}^d" }, { "math_id": 94, "text": "(C_*, \\partial_*)" }, { "math_id": 95, "text": "\\ldots,C_0, C_1, C_2, C_3, C_4, \\ldots" }, { "math_id": 96, "text": "\\partial_n : C_n \\to C_{n-1}" }, { "math_id": 97, "text": "\\partial_n \\circ \\partial_{n+1} = 0" }, { "math_id": 98, "text": "\\partial^2 = 0" }, { "math_id": 99, "text": "\n\\cdots\n\\xleftarrow{\\partial_0} C_0\n\\xleftarrow{\\partial_1} C_1\n\\xleftarrow{\\partial_2} C_2\n\\xleftarrow{\\partial_3} C_3\n\\xleftarrow{\\partial_4} C_4\n\\xleftarrow{\\partial_5}\n\\cdots\n" }, { "math_id": 100, "text": "T" }, { "math_id": 101, "text": "k" }, { "math_id": 102, "text": "f((v_0, \\ldots, v_k)) = (f(v_0),\\ldots,f(v_k))" }, { "math_id": 103, "text": "f(v_0), ..., f(v_k)" }, { "math_id": 104, "text": "(A_*, d_{A,*})" }, { "math_id": 105, "text": "(B_*, d_{B,*})" }, { "math_id": 106, "text": "f_*" }, { "math_id": 107, "text": "f_n : A_n \\rightarrow B_n" }, { "math_id": 108, "text": " d_{B,n} \\circ f_n = f_{n-1} \\circ d_{A,n}" }, { "math_id": 109, "text": "C_i^* := \\mathrm{Hom}(C_i,{\\bf R})," }, { "math_id": 110, "text": "d^i=\\partial^*_i" }, { "math_id": 111, "text": "d^{i-1}: C_{i-1}^* \\to C_{i}^*." 
}, { "math_id": 112, "text": "\\cdots \\leftarrow C_{i+1}^* \\stackrel{\\partial^*_i}{\\leftarrow}\\ C_{i}^* \\stackrel{\\partial^*_{i-1}}{\\leftarrow} C_{i-1}^* \\leftarrow \\cdots " }, { "math_id": 113, "text": "(C^*, d^*)" }, { "math_id": 114, "text": "...,C_0, C_1, C_2, C_3, C_4, ..." }, { "math_id": 115, "text": "d^n: C^n\\to C^{n+1}" }, { "math_id": 116, "text": "d^{n+1}\\circ d^n = 0" }, { "math_id": 117, "text": "\n\\cdots\n\\xrightarrow{d^{-1}}\nC^0 \\xrightarrow{d^0}\nC^1 \\xrightarrow{d^1}\nC^2 \\xrightarrow{d^2}\nC^3 \\xrightarrow{d^3}\nC^4 \\xrightarrow{d^4}\n\\cdots\n" }, { "math_id": 118, "text": "C_n" }, { "math_id": 119, "text": "C^n" }, { "math_id": 120, "text": "d" }, { "math_id": 121, "text": "B" }, { "math_id": 122, "text": "{\\bf R}^n" }, { "math_id": 123, "text": "p" }, { "math_id": 124, "text": "\\omega" }, { "math_id": 125, "text": "1 \\le p\\le n" }, { "math_id": 126, "text": "\\omega (s)=\\int_s\\omega." }, { "math_id": 127, "text": "\\sum_{i=0}^{n-1} \\frac{\\Delta F}{\\Delta x}(a+ih+h/2) \\, \\Delta x = F(b) - F(a)." }, { "math_id": 128, "text": "\\Omega" }, { "math_id": 129, "text": "d\\omega" }, { "math_id": 130, "text": "\\int_\\Omega d\\omega=\\int_{\\partial \\Omega}\\omega\\,." }, { "math_id": 131, "text": "d=2" }, { "math_id": 132, "text": "q" }, { "math_id": 133, "text": "p + q" }, { "math_id": 134, "text": "f^p" }, { "math_id": 135, "text": "g^q" }, { "math_id": 136, "text": "(f^p \\smile g^q)(\\sigma) = f^p(\\sigma_{0,1, ..., p}) \\cdot g^q(\\sigma_{p, p+1 ,..., p + q})" }, { "math_id": 137, "text": "(p + q)" }, { "math_id": 138, "text": "\\sigma_S,\\ S \\subset \\{0,1,...,p+q \\}" }, { "math_id": 139, "text": "(p+q)" }, { "math_id": 140, "text": "\\{0,...,p+q \\}" }, { "math_id": 141, "text": "\\sigma_{0,1, ..., p}" }, { "math_id": 142, "text": "\\sigma_{p, p+1, ..., p + q}" }, { "math_id": 143, "text": "d(f^p \\smile g^q) = d{f^p} \\smile g^q + (-1)^p(f^p \\smile d{g^q})." }, { "math_id": 144, "text": "\\alpha^p \\smile \\beta^q = (-1)^{pq}(\\beta^q \\smile \\alpha^p)." }, { "math_id": 145, "text": "\\Delta f" }, { "math_id": 146, "text": "f(p)" }, { "math_id": 147, "text": "\\delta:C^k\\to C^{k-1}" }, { "math_id": 148, "text": "\\delta = (-1)^{n(k-1) + 1} {\\star} d {\\star} = (-1)^{k}\\, {\\star}^{-1} d {\\star} ," }, { "math_id": 149, "text": "\\star" }, { "math_id": 150, "text": " (\\eta,\\delta \\zeta) = (d\\eta,\\zeta). " }, { "math_id": 151, "text": "d^2=0" }, { "math_id": 152, "text": "\\delta^2 = {\\star} d {\\star} {\\star} d {\\star} = (-1)^{k(n-k)} {\\star} d^2 {\\star} = 0 ." }, { "math_id": 153, "text": "\\Delta = (\\delta + d)^2 = \\delta d + d\\delta ." } ]
https://en.wikipedia.org/wiki?curid=61660335
61662653
Leslie Cheng
American mathematician Leslie C. Cheng is an American mathematician specializing in harmonic analysis. She holds the Rachel C. Hale Chair in Mathematics at Bryn Mawr College. Cheng did poorly in calculus in high school, and began her undergraduate studies at Bryn Mawr in 1988 intending to study the humanities. However, she was converted to mathematics by faculty member Rhonda Hughes and graduated with a bachelor's degree in mathematics in 1992. She went to the University of Pittsburgh for graduate study, completing her Ph.D. there in 1998. Her dissertation, "formula_0 Estimates for Oscillatory Integral Operators", was supervised by Yibiao Pan. After completing her doctorate, despite having offers for tenure-track faculty positions elsewhere, Cheng worked in temporary positions until getting an offer to return to Bryn Mawr in 2002. She was given the Rachel C. Hale Chair at Bryn Mawr in 2018 in recognition of her "commitment to her students, teaching excellence, and scholarship". Since June 2015, she has also been a mentor for the "Class of 2019 Boston S.T.E.M. Posse". References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "L^p" } ]
https://en.wikipedia.org/wiki?curid=61662653
61664297
Theorem of absolute purity
Mathematical theorem In algebraic geometry, the theorem of absolute (cohomological) purity is an important theorem in the theory of étale cohomology. It states: given a closed immersion formula_0 of regular schemes of pure codimension r, an integer n invertible on X, and a locally constant étale sheaf formula_1 of formula_2-modules, for each integer formula_3, the map formula_4 is bijective, where the map is induced by cup product with formula_5. The theorem was introduced in SGA 5 Exposé I, § 3.1.4. as an open problem. Later, Thomason proved it for large "n" and Gabber in general. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "i: Z \\to X" }, { "math_id": 1, "text": "\\mathcal{F}" }, { "math_id": 2, "text": "\\mathbb{Z}/n\\mathbb{Z}" }, { "math_id": 3, "text": "m \\ge 0" }, { "math_id": 4, "text": "\\operatorname{H}^m(Z_{\\text{ét}}; \\mathcal{F}) \\to \\operatorname{H}^{m+2r}_Z(X_{\\text{ét}}; \\mathcal{F}(r))" }, { "math_id": 5, "text": "c_r(Z)" } ]
https://en.wikipedia.org/wiki?curid=61664297
61664510
Margit Rösler
German mathematician Margit Rösler (born 1962) is a German mathematician known for her research in harmonic analysis, special functions, and Dunkl operators. She is a professor of mathematics at Paderborn University. Rösler earned a diploma in mathematics with distinction from the Technical University of Munich in 1988. She completed her PhD at the same university in 1992. Her dissertation, "Durch orthogonale trigonometrische Systeme auf dem Einheitskreis induzierte Faltungsstrukturen auf formula_0", was jointly supervised by Rupert Lasser and Elmar Thoma. She remained at TU Munich as a postdoctoral researcher and assistant professor, earning a habilitation in 1999. Her habilitation thesis was "Contributions to the theory of Dunkl operators". She was a lecturer at the University of Göttingen from 2000 until 2004. Then, after short-term positions at the University of Amsterdam and Technische Universität Darmstadt, and a professorship at the Clausthal University of Technology, she took her present position at Paderborn University in 2012. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "\\mathbb{Z}" } ]
https://en.wikipedia.org/wiki?curid=61664510
61665635
Character module
In mathematics, especially in the area of abstract algebra, every module has an associated character module. Using the associated character module, it is possible to investigate the properties of the original module. One of the main results discovered by Joachim Lambek shows that a module is flat if and only if the associated character module is injective. Definition. The group formula_0, the group of rational numbers modulo formula_1, can be considered as a formula_2-module in the natural way. Let formula_3 be an additive group which is also considered as a formula_2-module. Then the group formula_4 of formula_2-homomorphisms from formula_3 to formula_5 is called the "character group associated to formula_3". The elements in this group are called "characters". If formula_3 is a left formula_6-module over a ring formula_6, then the character group formula_7 is a right formula_6-module and is called the "character module associated to" formula_3. The module action in the character module for formula_8 and formula_9 is defined by formula_10 for all formula_11. The character module can also be defined in the same way for right formula_6-modules. In the literature, the notations formula_12 and formula_13 are also used for character modules. Let formula_14 be left formula_6-modules and formula_15 an formula_6-homomorphism. Then the mapping formula_16 defined by formula_17 for all formula_18 is a right formula_6-homomorphism. Character module formation is a contravariant functor from the category of left formula_6-modules to the category of right formula_6-modules. Motivation. The abelian group formula_19 is divisible and therefore an injective formula_20-module. Furthermore, it has the following important property: Let formula_21 be an abelian group and formula_22 nonzero. Then there exists a group homomorphism formula_23 with formula_24. This says that formula_19 is a cogenerator. With these properties, one can show the main theorem of the theory of character modules: Theorem (Lambek): A left module formula_3 over a ring formula_6 is flat if and only if the character module formula_7 is an injective right formula_6-module. Properties. Let formula_3 be a left module over a ring formula_6 and formula_7 the associated character module.
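A small concrete case may help: for the cyclic group Z/nZ viewed as a Z-module, a homomorphism into Q/Z is determined by the image of 1, which must be annihilated by n, so the characters correspond to the classes 0, 1/n, ..., (n-1)/n, and the character group is again cyclic of order n. The Python sketch below makes this explicit; it is an illustration added here (the function name is hypothetical), not part of the original text.

```python
from fractions import Fraction

def characters_mod_n(n):
    """Characters of Z/nZ, i.e. group homomorphisms Z/nZ -> Q/Z.

    A homomorphism is determined by the image x of the generator 1, and
    n*x must vanish in Q/Z, so x is one of 0, 1/n, ..., (n-1)/n.
    """
    return [Fraction(j, n) for j in range(n)]

chars = characters_mod_n(4)
print(chars)  # [Fraction(0, 1), Fraction(1, 4), Fraction(1, 2), Fraction(3, 4)]

# The character sending 1 to 1/4 generates all of them, so the character
# group of Z/4Z is again cyclic of order 4.
generator = Fraction(1, 4)
print(sorted((k * generator) % 1 for k in range(4)) == sorted(chars))  # True
```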
[ { "math_id": 0, "text": "(\\mathbb{Q}/\\mathbb{Z}, +)" }, { "math_id": 1, "text": "1" }, { "math_id": 2, "text": "\\mathbb{Z}" }, { "math_id": 3, "text": "M" }, { "math_id": 4, "text": "M^* = \\operatorname{Hom}_\\mathbb{Z} (M, \\mathbb{Q} / \\mathbb{Z})" }, { "math_id": 5, "text": "\\mathbb{Q} / \\mathbb{Z}" }, { "math_id": 6, "text": "R" }, { "math_id": 7, "text": "M^*" }, { "math_id": 8, "text": "f \\in \\operatorname{Hom}_\\mathbb{Z} (M, \\mathbb{Q} / \\mathbb{Z}) " }, { "math_id": 9, "text": "r \\in R " }, { "math_id": 10, "text": "(f r)(m) = f(rm) " }, { "math_id": 11, "text": "m \\in M" }, { "math_id": 12, "text": "M', M^0" }, { "math_id": 13, "text": "M^+" }, { "math_id": 14, "text": "M,N" }, { "math_id": 15, "text": "f \\colon M \\to N" }, { "math_id": 16, "text": "f^* \\colon N^* \\to M^*" }, { "math_id": 17, "text": "f^* (h) = h \\circ f" }, { "math_id": 18, "text": "h \\in N^*" }, { "math_id": 19, "text": "\\Q / \\Z" }, { "math_id": 20, "text": "\\Z" }, { "math_id": 21, "text": "G" }, { "math_id": 22, "text": "g \\in G" }, { "math_id": 23, "text": "f \\colon G \\to \\Q / \\Z" }, { "math_id": 24, "text": "f(g) \\neq 0" }, { "math_id": 25, "text": "R^*" }, { "math_id": 26, "text": "N" }, { "math_id": 27, "text": "(M/N)^*" }, { "math_id": 28, "text": "\\operatorname{Hom}_R (N, M^*)" }, { "math_id": 29, "text": "(N \\otimes_R M)^*" } ]
https://en.wikipedia.org/wiki?curid=61665635
61671668
Ecclesiastes 9
Ninth chapter of the biblical book Ecclesiastes Ecclesiastes 9 is the ninth chapter of the Book of Ecclesiastes in the Hebrew Bible or the Old Testament of the Christian Bible. The book contains the philosophical and theological reflections of a character known as Qoheleth, a title literally meaning "the assembler" but traditionally translated as "the Teacher" or "The Preacher". The identity of Qoheleth is unknown. In traditional Jewish texts such as the Peshitta, Targum, and Talmud, authorship of Ecclesiastes is attributed to King Solomon, due to the statement in Ecclesiastes 1:1 which identifies Qoheleth as the "son of David, king in Jerusalem". However, it is generally agreed upon by contemporary scholars that the book could not have been written in the 10th century during the time of Solomon. It is now thought to be one of the latest books in the Old Testament to be written, likely sometime between the 5th and 3rd centuries BCE. This chapter brings together some of the book's major themes, namely the shared fate of death, the importance of enjoyment in the midst of an unpredictable world, and the value of wisdom. Textual Witnesses. The original text was written in Hebrew. This chapter is divided into 18 verses. Some early manuscripts containing the text of this chapter in Hebrew are of the Masoretic Text, which includes Codex Leningradensis (1008). There is also a translation into Koine Greek known as the Septuagint, made in the last few centuries BCE. Extant ancient manuscripts of the Septuagint version include Codex Vaticanus (B; formula_0B; 4th century), Codex Sinaiticus (S; BHK: formula_0S; 4th century), and Codex Alexandrinus (A; formula_0A; 5th century). The Greek text is probably derived from the work of Aquila of Sinope or his followers. Structure. The following structure has been provided by biblical scholar Michael V. Fox. Ignorance, Death, and Pleasure (9:1–10). The central theme of this section is that death is the fate that ultimately awaits all people. Though traditional wisdom might suggest that one's fate should be determined by how righteously one lived, this does not turn out to be true. Death is the great equalizer. However, for Qoheleth this grim reality is not a reason to fall into nihilism. He instead emphasizes that life is always preferable to death. While the living may know that they are going to die, the dead know nothing at all. Therefore, Qoheleth exhorts his audience to live fully while they still can, finding joy in every moment. Such pleasures will no longer be possible in the realm of Sheol. Although Sheol has often been mistakenly equated with the hell of later Judaism and Christianity, it is more accurately described as a "place of non-being where all consciousness and all passions have ceased." Enjoyment passages like verses 7-10 are strategically placed throughout Ecclesiastes. Though some have claimed that these exhortations of joy are hedonistic or naïve, they are better understood as recognitions of life's possibilities even in the midst of its uncertainties and inexplicable contradictions. To experience joy is not to deny the pain and confusion of life but to appreciate the small pleasures within it. "For I considered all this in my heart, so that I could declare it all: that the righteous and the wise and their works are in the hand of God. People know neither love nor hatred by anything they see before them." Verse 1. This verse mentions "the righteous", but taken with verse 2 it is clear that "the wicked" are also in the hands of God. 
Methodist writer Joseph Benson suggests that they are mentioned "not exclusively, ... but eminently because, by the course of God's providence toward them, they might seem to be quite neglected by God". Time and Contingency (9:11–12). In this section, Qoheleth presents his observations about the unpredictability of life, making the claim that people cannot know what will happen to them. In the words of Michael V. Fox, "everyone is subject to the vagaries of chance and fortune, which can nullify the advantages of talents and efforts." There is no reliable connection between one's efforts and what one receives, and this absurdity characterizes life. Qoheleth, therefore, breaks down all of the assurances of success to which people cling. Time and chance ruin any certainty that one will get the outcome one deserves. Wisdom and Folly (9:13–18). In this section, Qoheleth returns to the topic of wisdom (a discussion that continues through 11:6). Verses 13-16 tell the story of a poor, wise man who saved a city from being besieged by a powerful king but was not recognized or remembered for his work. The final two verses serve as Qoheleth's response to this story, in which he seems to be quoting traditional wisdom sayings. Here he holds two seemingly contradictory statements in tension with one another: wisdom is both vulnerable and powerful. Though wisdom is ultimately of tremendous importance, one cannot expect that wisdom will be met with reward. As is characteristic of Qoheleth's reflections throughout Ecclesiastes, realism rather than untenable optimism or pessimism is encouraged. Notes. &lt;templatestyles src="Reflist/styles.css" /&gt; References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": " \\mathfrak{G}" } ]
https://en.wikipedia.org/wiki?curid=61671668
616723
Pitch space
Model for relationships between pitches In music theory, pitch spaces model relationships between pitches. These models typically use distance to model the degree of relatedness, with closely related pitches placed near one another, and less closely related pitches farther apart. Depending on the complexity of the relationships under consideration, the models may be multidimensional. Models of pitch space are often graphs, groups, lattices, or geometrical figures such as helixes. Pitch spaces distinguish octave-related pitches. When octave-related pitches are not distinguished, we have instead pitch class spaces, which represent relationships between pitch classes. (Some of these models are discussed in the entry on modulatory space, though readers should be advised that the term "modulatory space" is not a standard music-theoretical term.) Chordal spaces model relationships between chords. Linear and helical pitch space. The simplest pitch space model is the real line. A fundamental frequency "f" is mapped to a real number "p" according to the equation formula_0 This creates a linear space in which octaves have size 12, semitones (the distance between adjacent keys on the piano keyboard) have size 1, and middle C is assigned the number 60, as it is in MIDI. 440 Hz is the standard frequency of 'concert A', which is the note 9 semitones above 'middle C'. Distance in this space corresponds to physical distance on keyboard instruments, orthographical distance in Western musical notation, and psychological distance as measured in psychological experiments and conceived by musicians. The system is flexible enough to include "microtones" not found on standard piano keyboards. For example, the pitch halfway between C (60) and C# (61) can be labeled 60.5. One problem with linear pitch space is that it does not model the special relationship between octave-related pitches, or pitches sharing the same pitch class. This has led theorists such as Moritz Wilhelm Drobisch (1846) and Roger Shepard (1982) to model pitch relations using a helix. In these models, linear pitch space is wrapped around a cylinder so that all octave-related pitches lie along a single line. Care must be taken when interpreting these models, as it is not clear how to interpret "distance" in the three-dimensional space containing the helix; nor is it clear how to interpret points in the three-dimensional space not contained on the helix itself. Higher-dimensional pitch spaces. Other theorists, such as Leonhard Euler (1739), Hermann von Helmholtz (1863/1885), Arthur von Oettingen (1866), Hugo Riemann (not to be confused with mathematician Bernhard Riemann), and Christopher Longuet-Higgins (1978) have modeled pitch relationships using two-dimensional (or higher-dimensional) lattices, under the name of Tonnetz. In these models, one dimension typically corresponds to acoustically pure perfect fifths while the other corresponds to major thirds. (Variations are possible in which one axis corresponds to acoustically pure minor thirds.) Additional dimensions can be used to represent additional intervals including—most typically—the octave. All these models attempt to capture the fact that intervals separated by acoustically pure intervals such as octaves, perfect fifths, and major thirds are thought to be perceptually closely related. 
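Both the linear model and the lattice models can be made concrete with a short numerical sketch. The Python code below is illustrative (the helper names and sample values are assumptions, not taken from the article): the first function implements the frequency-to-pitch-number mapping given above, and the second places pitches on a two-dimensional just-intonation lattice with one axis of acoustically pure fifths and one of pure major thirds.

```python
import math

def pitch_number(frequency_hz):
    """Linear pitch space: p = 69 + 12 * log2(f / 440)."""
    return 69 + 12 * math.log2(frequency_hz / 440.0)

print(pitch_number(440.0))          # 69.0, concert A
print(round(pitch_number(261.63)))  # 60, middle C

# A two-dimensional just-intonation lattice: x steps along acoustically pure
# fifths (3/2) and y steps along pure major thirds (5/4) away from a reference.
def lattice_frequency(reference_hz, fifths, thirds):
    return reference_hz * (3 / 2) ** fifths * (5 / 4) ** thirds

print(lattice_frequency(440.0, 1, 0))   # 660.0, a pure fifth above A
print(lattice_frequency(440.0, 0, 1))   # 550.0, a pure major third above A
```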
But proximity in these spaces need not represent physical proximity on musical instruments: by moving one's hands a very short distance on a violin string, one can move arbitrarily far in these multiple-dimensional models. For this reason, it is hard to assess the psychological relevance of distance as measured by these lattices. History of pitch space. The idea of pitch space goes back at least as far as the ancient Greek music theorists known as the Harmonists. To quote one of them, Bacchius: "And what is a diagram? A representation of a musical system. And we use a diagram so that, for students of the subject, matters which are hard to grasp with the hearing may appear before their eyes" (Bacchius, in Franklin, "Diatonic Music in Ancient Greece"). The Harmonists drew geometrical pictures so that the intervals of various scales could be compared visually; they thereby located the intervals in a pitch space. Higher-dimensional pitch spaces have also long been investigated. The use of a lattice was proposed by Euler (1739) to model just intonation using an axis of perfect fifths and another of major thirds. Similar models were the subject of intense investigation in the 19th century, chiefly by theorists such as Oettingen and Riemann (Cohn 1997). Contemporary theorists such as James Tenney (1983) and W.A. Mathieu (1997) carry on this tradition. Moritz Wilhelm Drobisch (1846) was the first to suggest a helix (i.e. the spiral of fifths) to represent octave equivalence and recurrence (Lerdahl, 2001), and hence to give a model of pitch space. Roger Shepard (1982) regularizes Drobisch's helix and extends it to a double helix of two whole-tone scales over a circle of fifths, which he calls the "melodic map" (Lerdahl, 2001). Michael Tenzer suggests its use for Balinese gamelan music since the octaves are not 2:1 and thus there is even less octave equivalence than in western tonal music (Tenzer, 2000). See also chromatic circle. Instrument design. Since the 19th century, there have been many attempts to design isomorphic keyboards based on pitch spaces. The only ones to have caught on so far are several accordion layouts. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "\np = 69 + 12\\cdot\\log_2 {(f/440)}\n\\," } ]
https://en.wikipedia.org/wiki?curid=616723
61675634
Ecclesiastes 10
Tenth chapter of the biblical book Ecclesiastes Ecclesiastes 10 is the tenth chapter of the Book of Ecclesiastes in the Hebrew Bible or the Old Testament of the Christian Bible. The book contains philosophical speeches by a character called '(the) Qoheleth' ("the Teacher"), composed probably between the 5th and 2nd centuries BCE. Peshitta, Targum, and Talmud attribute the authorship of the book to King Solomon. This chapter focuses on foolishness in persons, in high places, in action, in words and even in national life. Text. The original text was written in Hebrew. This chapter is divided into 20 verses. Textual witnesses. Some early manuscripts containing the text of this chapter in Hebrew are of the Masoretic Text, which includes Codex Leningradensis (1008). There is also a translation into Koine Greek known as the Septuagint, made in the last few centuries BCE. Extant ancient manuscripts of the Septuagint version include Codex Vaticanus (B; formula_0B; 4th century), Codex Sinaiticus (S; BHK: formula_0S; 4th century), and Codex Alexandrinus (A; formula_0A; 5th century). The Greek text is probably derived from the work of Aquila of Sinope or his followers. Foolishness in personal life (10:1–3). This section speaks of foolishness in the invisible side of one's life, contrasted to face (cf. : "a sad countenance"), hands (Ecclesiastes 7:26: "[a woman's] hands are fetters") or body (cf. : "put away evil from your flesh"). In verse 1 Qoheleth observes the impact of "a little folly" on one otherwise respected for "a great weight of wisdom". Foolishness in high places (10:4–7). Folly can be found within the leadership of a nation (verse 5) and results in odd reversals of position and prestige, in a society which Stuart Weeks describes as "topsy-turvy". "If a ruler’s anger rises against you, do not leave your post; calmness can lay great offenses to rest." Verse 4. For "calmness", the New King James Version notes that "healing" or "health" translate the Hebrew more literally. Weeks notes that this verse "probably commends 'soothing' rather than 'calmness'". Foolishness in action (10:8–11). Michael Eaton summarises this section as "vindictiveness has its built-in penalties", and "slackness may nullify inherent skill". Precautionary measures using wisdom can avert accidents, but are of no use when it is too late (as with snakes that were not charmed before). "He who quarries stones may be hurt by them," "And he who splits wood may be endangered by it." Foolishness in words (10:12–15). Spoken words can be a test of wisdom, as the ones from the wise can be helpful, but the foolish ones originate from the foolishness of the heart. Foolishness in national life (10:16–20). Qoheleth contrasts the way of disaster (verse 16) and the way of safety (verse 17) in national level. Eating and drinking early on a day indicate self-centered indulgence. Qoheleth does not despise laughter, wine or money, but the point is that "the pleasures of life should not be its total outlook" (verse 19), and one needs "to take life day by day from the hand of God". Notes. &lt;templatestyles src="Reflist/styles.css" /&gt; References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": " \\mathfrak{G}" } ]
https://en.wikipedia.org/wiki?curid=61675634
61679626
Rotating wall technique
The rotating wall technique (RW technique) is a method used to compress a single-component plasma (a cold dense gas of charged particles) confined in an electromagnetic trap. It is one of many scientific and technological applications that rely on storing charged particles in vacuum. This technique has found extensive use in improving the quality of these traps and in tailoring of both positron and antiproton (i.e. antiparticle) plasmas for a variety of end uses. Overview. Single-component plasmas (SCP), which are a type of nonneutral plasma, have many uses, including studying a variety of plasma physics phenomena and for the accumulation, storage and delivery of antiparticles. Applications include the creation and study of antihydrogen, beams to study the interaction of positrons with ordinary matter and to create dense gases of positronium (Ps) atoms, and the creation of Ps-atom beams. The “rotating wall (RW) technique” uses rotating electric fields to compress SCP in PM traps radially to increase the plasma density and/or to counteract the tendency of plasma to diffuse radially out of the trap. It has proven crucial in improving the quality and hence utility of trapped plasmas and trap-based beams. Principles of operation. For this application, a plasma is stored in a Penning–Malmberg (PM) trap in a uniform magnetic field, "B". The charge cloud is typically cylindrical in shape with dimension along "B" large compared to the radius. This charge produces a radial electric field which would tend to push the plasma outward. To counteract this, the plasma spins about the axis of symmetry producing a Lorentz force to balance that due to the electric field, and the plasma takes the form of a spinning charged rod. Such cold, single-component plasmas in PM traps can come to thermal equilibrium and rotate as a rigid body at frequency formula_0, where "n" is the plasma density. As illustrated in Fig. 1, the RW technique uses an azimuthally segmented cylindrical electrode covering a portion of a plasma. Phased, sinusoidal voltages at frequency fRW are applied to the segments. The result is a rotating electric field perpendicular to the axis of symmetry of the plasma. This field induces an electric dipole moment in the plasma and hence a torque. Rotation of the field in the direction of, and faster than the natural rotation of the plasma acts to spin the plasma faster, thereby increasing the Lorentz force and producing plasma compression (cf. Figs. 2 and 3). An important requirement for plasma compression using the RW technique is good coupling between the plasma and the rotating field. This is necessary to overcome asymmetry-induced transport which acts as a drag on the plasma and tends to oppose the RW torque. For high quality PM traps with little asymmetry induced transport, one can access a so-called “strong drive regime." In this case, application of a rotating electric field at frequency results in the plasma spinning up to the applied frequency, namely fE = fRW (cf. Fig. 3). This has proven enormously useful as a way to fix plasma density simply by adjusting fRW. History. The RW technique was first developed by Huang et al., to compress a magnetized Mg+ plasma. The technique was soon thereafter applied to electron plasmas, where a segmented electrode, such as that described above, was used to couple to waves (Trivelpiece-Gould modes) in the plasma. The technique was also used to phase-lock the rotation frequency of laser cooled single-component ion crystals. 
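To give a feel for the magnitudes involved in the rigid-body rotation frequency quoted above, here is a minimal Python sketch; the density and magnetic field values are illustrative only, not taken from any particular experiment:

```python
import math

# Physical constants (SI units)
E_CHARGE = 1.602176634e-19    # elementary charge e, in C
EPSILON_0 = 8.8541878128e-12  # vacuum permittivity, in F/m

def rigid_body_rotation_frequency(n_per_m3: float, b_tesla: float) -> float:
    """Rotation frequency f_E = e*n / (4*pi*eps0*B) of a single-component
    plasma in thermal equilibrium in a Penning-Malmberg trap."""
    return E_CHARGE * n_per_m3 / (4 * math.pi * EPSILON_0 * b_tesla)

# Illustrative values: a positron plasma at n = 1e15 m^-3 in a 1 T field
f_E = rigid_body_rotation_frequency(1e15, 1.0)
print(f"f_E = {f_E:.3e} Hz")  # on the order of a megahertz

# Compressing the plasma (raising n) raises f_E; in the strong drive regime
# the applied rotating-wall frequency f_RW sets f_E directly.
```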
The first use of the RW technique for antimatter was done using small positron plasmas without coupling to modes. The strong drive regime, which was discovered somewhat later using electron plasmas, has proven to be more useful in that tuning to (and tracking) plasma modes is unnecessary. A related technique has been developed to compress single-component charged gases in PM traps (i.e., charge clouds not in the plasma regime). Uses. The RW technique has found extensive use in manipulating antiparticles in Penning–Malmberg traps. One important application is the creation of specially tailored antiparticle beams for atomic physics experiments. Frequently one would like a beam with a large current density. In this case, one compresses the plasma with the RW technique before delivery. This has been crucial in experiments to study dense gases of positronium (Ps) atoms and formation of the Ps2 molecule (e+e−e+e−) [5-7]. It has also been important in the creation of high-quality Ps-atom beams. The RW technique is used in three ways in the creation of low-energy antihydrogen atoms. Antiprotons are compressed radially by sympathetic compression with electrons co-loaded in the trap. The technique has also been used to fix the positron density before the positrons and antiprotons are combined. Recently it was discovered that one could set all of the important parameters of the electron and positron plasmas for antihydrogen production using the RW to fix the plasma density and evaporative cooling to cool the plasma and fix the on-axis space charge potential. The result was greatly increased reproducibility for antihydrogen production. In particular, this technique, dubbed SDREVC (strong drive regime evaporative cooling), was successful to the extent that it increased the number of trappable antihydrogen by an order of magnitude. This is particularly important in that, while copious amounts of antihydrogen can be produced, the vast majority are at high temperature and cannot be trapped in the small well depth of the minimum-magnetic field atom traps. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "f_{E} =\\frac{ \\mathrm{e} n}{ \\mathrm{4\\pi \\varepsilon _{0}} B}" } ]
https://en.wikipedia.org/wiki?curid=61679626
61683958
Ecclesiastes 11
Eleventh chapter of the biblical book Ecclesiastes Ecclesiastes 11 is the eleventh chapter of the Book of Ecclesiastes in the Hebrew Bible or the Old Testament of the Christian Bible. The book contains philosophical speeches by a character called '(the) Qoheleth' ("the Teacher"), composed probably between the 5th and 2nd centuries BCE. Peshitta, Targum, and Talmud attribute the authorship of the book to King Solomon. Michael Eaton notes that this chapter and the next are characterized by the encouragement to make decision and the need to act speedily. Text. The original text was written in Hebrew. This chapter is divided into 10 verses: by number of verses it is the shortest chapter in Ecclesiastes. Textual witnesses. Some early manuscripts containing the text of this chapter in Hebrew are of the Masoretic Text, which includes Codex Leningradensis (1008). There is also a translation into Koine Greek known as the Septuagint, made in the last few centuries BCE. Extant ancient manuscripts of the Septuagint version include Codex Vaticanus (B; formula_0B; 4th century), Codex Sinaiticus (S; BHK: formula_0S; 4th century), and Codex Alexandrinus (A; formula_0A; 5th century). The Greek text is probably derived from the work of Aquila of Sinope or his followers. Structure. The New King James Version divides this chapter into two sections: The venture of faith (11:1–6). The keyword for this section is "faith" or considerable trust, so the ominous outlook or the unexpected happenings will not ruin the joy of life. "Cast your bread upon the waters," "for you will find it after many days." "If the clouds are full of rain," "they empty themselves on the earth," "and if a tree falls to the south or to the north," "in the place where the tree falls, there it will lie." Verse 3. has been interpreted as having an eschatological dimension "because death will shortly cut us down". The life of joy (11:7–10). Qoheleth ends his long monologue with a summary of advice: "life is good and to be enjoyed", especially best when one is young, but against that enjoyment, one must remember that "darkness is to follow, and that deeds will be judged", as "to remember one's creator is also to remember one's judge". "Rejoice, O young man, in thy youth; and let thy heart cheer thee in the days of thy youth, and walk in the ways of thine heart, and in the sight of thine eyes: but know thou, that for all these things God will bring thee into judgment." Verse 9. This is not to imply that enjoyment is contrary to God's will, because Qoheleth states multiple times that enjoyment is God's gift (2:24-26; 3:10-15; 5:18-20; 9:7-9), so it is more to mean that God will judge people for the failure to accept the gift of enjoyment. Notes. &lt;templatestyles src="Reflist/styles.css" /&gt; References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": " \\mathfrak{G}" } ]
https://en.wikipedia.org/wiki?curid=61683958
61684410
Ecclesiastes 12
Twelfth chapter of the biblical book Ecclesiastes Ecclesiastes 12 is the twelfth (and the final) chapter of the Book of Ecclesiastes in the Hebrew Bible or the Old Testament of the Christian Bible. The book contains philosophical speeches by a character called 'Qoheleth' ("the Teacher"), composed probably between the 5th and 2nd centuries BCE. Peshitta, Targum, and Talmud attribute the authorship of the book to King Solomon. This chapter continues the previous one in the sustained encouragement to make decision and the need to act speedily. Text. The original text was written in Hebrew. This chapter is divided into 14 verses. Textual witnesses. Some early manuscripts containing the text of this chapter in Hebrew are of the Masoretic Text, which includes Codex Leningradensis (1008). There is also a translation into Koine Greek known as the Septuagint, made in the last few centuries BCE. Extant ancient manuscripts of the Septuagint version include Codex Vaticanus (B; formula_0B; 4th century), Codex Sinaiticus (S; BHK: formula_0S; 4th century), and Codex Alexandrinus (A; formula_0A; 5th century). The Greek text is probably derived from the work of Aquila of Sinope or his followers. Structure. The New King James Version divides this chapter into two parts: Weeks and Eaton see verses 1-8 as a continuation of chapter 11. Verses 9-14 constitute an epilogue to the whole book. The urgency of decision (12:1–8). This part reminds the people to look not only to the life of joy but also to its Creator. Verses 2–7 are considered to contain metaphorical and picturesque descriptions of the physical degeneration accompanying old age, until the end of life, depicted as the end of the world, all in one Hebrew sentence (verses 1-7). There is a Sumerian poem applying the same style to the same topic. "Remember also your Creator in the days of your youth, before the evil days come and the years draw near of which you will say, "I have no pleasure in them";" Verse 6. "before the silver cord is snapped, or the golden bowl is broken, or the pitcher is broken at the fountain, or the wheel broken at the cistern" "Then shall the dust return to the earth as it was: and the spirit shall return unto God who gave it." Verse 7. This verse certainly depicts death, echoing ; . "Vanity of vanities, saith the preacher; all is vanity." Verse 8. This verse and with the same words form an "inclusio", together framing the body of the book. Epilogue (12:9–14). This section provides an assessment and apology for Qoheleth and the book. The final admonition:&lt;templatestyles src="Template:Blockquote/styles.css" /&gt;13 Fear God and keep His commandments, For this is man's all. 14 For God will bring every work into judgment, Including every secret thing, Whether good or evil points to the conclusion in a formula establishing authority of the text, that one should fear God and obey God's commandments, as there will be judgment. Notes. &lt;templatestyles src="Reflist/styles.css" /&gt; References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": " \\mathfrak{G}" } ]
https://en.wikipedia.org/wiki?curid=61684410
61690057
Kummer's transformation of series
Mathematical method In mathematics, specifically in the field of numerical analysis, Kummer's transformation of series is a method used to accelerate the convergence of an infinite series. The method was first suggested by Ernst Kummer in 1837. Technique. Let formula_0 be an infinite sum whose value we wish to compute, and let formula_1 be an infinite sum with comparable terms whose value is known. If the limit formula_2 exists, then formula_3 is also a sequence tending to zero, and the series given by the difference, formula_4, converges. If formula_5, this new series differs from the original formula_6 and, under broad conditions, converges more rapidly. We may then compute formula_7 as formula_8, where formula_9 is a constant. Where formula_10, the terms can be written as the product formula_11. If formula_10 for all formula_12, the sum is over a component-wise product of two sequences going to zero, formula_13. Example. Consider the Leibniz formula for π: formula_14 We group terms in pairs as formula_15 formula_16 where we identify formula_17. We apply Kummer's method to accelerate formula_7, which will give an accelerated sum for computing formula_18. Let formula_19 formula_20 This is a telescoping series with sum value 1⁄2. In this case formula_21 and so Kummer's transformation formula above gives formula_22 formula_23 which converges much faster than the original series. Coming back to the Leibniz formula, we obtain a representation of formula_24 that separates formula_25 and involves a rapidly converging sum over just the squared even numbers formula_26, formula_18 formula_27 formula_28 References. &lt;templatestyles src="Reflist/styles.css" /&gt;
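As a numerical check on the acceleration described above, the following Python sketch (illustrative code, not from any library) compares partial sums of the plain Leibniz series with the transformed series π = 3 + 6·Σ 1/((16n²−1)(4n²−1)):

```python
import math

def leibniz_pi(terms: int) -> float:
    """Partial sum of the plain Leibniz series 4*(1 - 1/3 + 1/5 - ...)."""
    return 4 * sum((-1) ** n / (2 * n + 1) for n in range(terms))

def kummer_pi(terms: int) -> float:
    """Partial sum of the transformed series pi = 3 + 6*sum 1/((16n^2-1)(4n^2-1))."""
    return 3 + 6 * sum(1 / ((16 * n * n - 1) * (4 * n * n - 1))
                       for n in range(1, terms + 1))

for k in (10, 100, 1000):
    print(k, abs(leibniz_pi(k) - math.pi), abs(kummer_pi(k) - math.pi))

# The error of the plain series falls off roughly like 1/k, while the
# transformed series converges roughly like 1/k^3 for the same number of terms.
```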
[ { "math_id": 0, "text": "A=\\sum_{n=1}^\\infty a_n" }, { "math_id": 1, "text": "B=\\sum_{n=1}^\\infty b_n" }, { "math_id": 2, "text": "\\gamma:=\\lim_{n\\to \\infty} \\frac{a_n}{b_n}" }, { "math_id": 3, "text": "a_n-\\gamma \\,b_n" }, { "math_id": 4, "text": "\\sum_{n=1}^\\infty (a_n-\\gamma\\, b_n)" }, { "math_id": 5, "text": "\\gamma\\neq 0" }, { "math_id": 6, "text": "\\sum_{n=1}^\\infty a_n" }, { "math_id": 7, "text": "A" }, { "math_id": 8, "text": "A=\\gamma\\,B + \\sum_{n=1}^\\infty (a_n-\\gamma\\,b_n)" }, { "math_id": 9, "text": "\\gamma B" }, { "math_id": 10, "text": "a_n\\neq 0" }, { "math_id": 11, "text": "(1-\\gamma\\,b_n/a_n)\\,a_n" }, { "math_id": 12, "text": "n" }, { "math_id": 13, "text": "A=\\gamma\\,B + \\sum_{n=1}^\\infty (1-\\gamma\\,b_n/a_n)\\,a_n" }, { "math_id": 14, "text": "1 \\,-\\, \\frac{1}{3} \\,+\\, \\frac{1}{5} \\,-\\, \\frac{1}{7} \\,+\\, \\frac{1}{9} \\,-\\, \\cdots \\,=\\, \\frac{\\pi}{4}." }, { "math_id": 15, "text": "1 - \\left(\\frac{1}{3} - \\frac{1}{5}\\right) - \\left(\\frac{1}{7} - \\frac{1}{9}\\right) + \\cdots" }, { "math_id": 16, "text": "\\, = 1 - 2\\left(\\frac{1}{15} + \\frac{1}{63} + \\cdots \\right) = 1-2A" }, { "math_id": 17, "text": "A = \\sum_{n=1}^\\infty \\frac{1}{16n^2-1}" }, { "math_id": 18, "text": "\\pi=4-8A" }, { "math_id": 19, "text": "B = \\sum_{n=1}^\\infty \\frac{1}{4n^2-1} = \\frac{1}{3} + \\frac{1}{15} + \\cdots" }, { "math_id": 20, "text": "\\, = \\frac{1}{2} - \\frac{1}{6} + \\frac{1}{6} - \\frac{1}{10} + \\cdots" }, { "math_id": 21, "text": "\\gamma := \\lim_{n\\to \\infty} \\frac{\\frac{1}{16n^2-1}}{\\frac{1}{4n^2-1}} = \\lim_{n\\to \\infty} \\frac{4n^2-1}{16n^2-1} = \\frac{1}{4} " }, { "math_id": 22, "text": "A=\\frac{1}{4} \\cdot \\frac{1}{2} + \\sum_{n=1}^\\infty \\left ( 1-\\frac{1}{4} \\frac{\\frac{1}{4n^2-1}}{\\frac{1}{16n^2-1}} \\right ) \\frac{1}{16n^2-1}" }, { "math_id": 23, "text": " = \\frac{1}{8} - \\frac{3}{4} \\sum_{n=1}^\\infty \\frac{1}{16n^2-1}\\frac{1}{4n^2-1}" }, { "math_id": 24, "text": "\\pi" }, { "math_id": 25, "text": "3" }, { "math_id": 26, "text": "(2n)^2" }, { "math_id": 27, "text": "=3+6\\cdot\\sum_{n=1}^\\infty \\frac{1}{(4(2n)^2-1)((2n)^2-1)}" }, { "math_id": 28, "text": "=3 + \\frac{2}{15} + \\frac{2}{315} + \\frac{6}{5005} + \\cdots" } ]
https://en.wikipedia.org/wiki?curid=61690057
61698432
Morley–Wang–Xu element
In applied mathematics, the Morley–Wang–Xu (MWX) element is a canonical construction of a family of piecewise polynomial elements of minimal degree for formula_0-th order elliptic and parabolic equations in any spatial dimension formula_1 for formula_2. The MWX element provides a consistent approximation of the Sobolev space formula_3 in formula_1. Morley–Wang–Xu element. The Morley–Wang–Xu element formula_4 is described as follows. formula_5 is a simplex and formula_6. The set of degrees of freedom will be given next. Given an formula_7-simplex formula_5 with vertices formula_8, for formula_9, let formula_10 be the set consisting of all formula_11-dimensional subsimplices of formula_5. For any formula_12, let formula_13 denote its measure, and let formula_14 be its unit outer normals, which are linearly independent. For formula_15, any formula_11-dimensional subsimplex formula_16 and formula_17 with formula_18, define formula_19 The degrees of freedom are depicted in Table 1. For formula_20, we obtain the well-known conforming linear element. For formula_21 and formula_22, we obtain the well-known nonconforming Crouzeix–Raviart element. For formula_23, we recover the well-known Morley element for formula_24 and its generalization to formula_22. For formula_25, we obtain a new cubic element on a simplex that has 20 degrees of freedom. Generalizations. There are two generalizations of the Morley–Wang–Xu element (which requires formula_2). formula_26: Nonconforming element. As a nontrivial generalization of Morley–Wang–Xu elements, Wu and Xu propose a universal construction for the more difficult case in which formula_26. Table 1 depicts the degrees of freedom for the case that formula_27. The shape function space is formula_28, where formula_29 is the volume bubble function. This new family of finite element methods provides practical discretization methods for, say, a sixth order elliptic equation in 2D (which has only 12 local degrees of freedom). In addition, Wu and Xu propose an formula_30 nonconforming finite element that is robust for sixth order singularly perturbed problems in 2D. formula_31: Interior penalty nonconforming FEMs. An alternative generalization when formula_32 is developed by combining the interior penalty and nonconforming methods by Wu and Xu. This family of finite element spaces consists of piecewise polynomials of degree not greater than formula_33. The degrees of freedom are carefully designed to preserve weak continuity as much as possible. For the case in which formula_34, the corresponding interior penalty terms are applied to obtain the convergence property. As a simple example, the proposed method for the case in which formula_35 is to find formula_36, such that formula_37 where the nonconforming element is depicted in Figure 1. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
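The degree-of-freedom count described above can be tallied with a short Python sketch. This is only an illustration of the counting, written under the assumption that, for each k, the element uses all (n−k)-dimensional subsimplices (of which there are C(n+1, k)) and all normal-derivative multi-indices β of length k with |β| = m − k (of which there are C(m−1, k−1)); it reproduces the counts quoted in the text, e.g. 20 for m = n = 3:

```python
from math import comb

def mwx_dof_count(m: int, n: int) -> int:
    """Number of degrees of freedom of the Morley-Wang-Xu element on an
    n-simplex (1 <= m <= n): for each k = 1..m, C(n+1, k) subsimplices of
    dimension n-k times C(m-1, k-1) multi-indices beta with |beta| = m-k."""
    return sum(comb(n + 1, k) * comb(m - 1, k - 1) for k in range(1, m + 1))

print(mwx_dof_count(1, 1))  # 2  -- conforming linear element on an interval
print(mwx_dof_count(1, 2))  # 3  -- Crouzeix-Raviart element
print(mwx_dof_count(2, 2))  # 6  -- Morley element
print(mwx_dof_count(3, 3))  # 20 -- the cubic element mentioned above

# Consistency check: the count equals dim P_m(R^n) = C(n+m, n), so the
# degrees of freedom match the dimension of the polynomial space P_m.
assert all(mwx_dof_count(m, n) == comb(n + m, n)
           for n in range(1, 6) for m in range(1, n + 1))
```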
[ { "math_id": 0, "text": "2m" }, { "math_id": 1, "text": "\\mathbb{R}^n" }, { "math_id": 2, "text": "1\\leq m \\leq n" }, { "math_id": 3, "text": "H^m" }, { "math_id": 4, "text": "(T,P_T,D_T)" }, { "math_id": 5, "text": "T" }, { "math_id": 6, "text": "P_T = P_m(T) " }, { "math_id": 7, "text": "n" }, { "math_id": 8, "text": "a_i" }, { "math_id": 9, "text": "1\\leq k\\leq n" }, { "math_id": 10, "text": "\\mathcal{F}_{T,k}" }, { "math_id": 11, "text": "(n-k)" }, { "math_id": 12, "text": "F \\in \\mathcal{F}_{T,k}" }, { "math_id": 13, "text": "|F|" }, { "math_id": 14, "text": "\\nu_{F,1}, \\cdots, \\nu_{F,k} " }, { "math_id": 15, "text": "1\\leq k\\leq m" }, { "math_id": 16, "text": "F\\in \\mathcal{F}_{T,k}" }, { "math_id": 17, "text": "\\beta\\in A_k" }, { "math_id": 18, "text": "|\\beta|=m-k" }, { "math_id": 19, "text": "\nd_{T,F,\\beta}(v) = \\frac{1}{|F|}\\int_F \\frac{\\partial^{|\\beta|v}}{\\partial \\nu_{F,1}^{\\beta_1} \\cdots \\nu_{F,k}^{\\beta_k}}.\n" }, { "math_id": 20, "text": "m=n=1" }, { "math_id": 21, "text": "m=1" }, { "math_id": 22, "text": "n\\geq 2" }, { "math_id": 23, "text": "m=2" }, { "math_id": 24, "text": "n=2" }, { "math_id": 25, "text": "m=n=3" }, { "math_id": 26, "text": "m=n+1" }, { "math_id": 27, "text": "n\\leq3, m\\leq n+1" }, { "math_id": 28, "text": "\\mathcal{P}_{n+1}(T)+q_T\\mathcal{P}_1(T)" }, { "math_id": 29, "text": "q_T = \\lambda_1\\lambda_2\\cdots\\lambda_n+1" }, { "math_id": 30, "text": "H^3" }, { "math_id": 31, "text": "m,n \\geq 1" }, { "math_id": 32, "text": "m > n" }, { "math_id": 33, "text": "m" }, { "math_id": 34, "text": "m>n" }, { "math_id": 35, "text": "m = 3, n = 2" }, { "math_id": 36, "text": "u_h\\in V_h" }, { "math_id": 37, "text": "\n(\\nabla^3_h u_h, \\nabla^3_h v_h) + \\eta \\sum_{F\\in \\mathcal{F}_h} h_F^{-5}\\int_F [u_h][v_h] = (f,v_h) \\quad \\forall v_h \\in V_h,\n" } ]
https://en.wikipedia.org/wiki?curid=61698432
61701
Venn diagram
Diagram that shows all possible logical relations between a collection of sets A Venn diagram is a widely used diagram style that shows the logical relation between sets, popularized by John Venn (1834–1923) in the 1880s. The diagrams are used to teach elementary set theory, and to illustrate simple set relationships in probability, logic, statistics, linguistics and computer science. A Venn diagram uses simple closed curves drawn on a plane to represent sets. Very often, these curves are circles or ellipses. Similar ideas had been proposed before Venn such as by Christian Weise in 1712 ("Nucleus Logicoe Wiesianoe") and Leonhard Euler ("Letters to a German Princess") in 1768. The idea was popularised by Venn in "Symbolic Logic", Chapter V "Diagrammatic Representation", published in 1881. Details. A Venn diagram, also called a "set diagram" or "logic diagram", shows "all" possible logical relations between a finite collection of different sets. These diagrams depict elements as points in the plane, and sets as regions inside closed curves. A Venn diagram consists of multiple overlapping closed curves, usually circles, each representing a set. The points inside a curve labelled "S" represent elements of the set "S", while points outside the boundary represent elements not in the set "S". This lends itself to intuitive visualizations; for example, the set of all elements that are members of both sets "S" and "T", denoted "S" ∩ "T" and read "the intersection of "S" and "T"", is represented visually by the area of overlap of the regions "S" and "T". In Venn diagrams, the curves are overlapped in every possible way, showing all possible relations between the sets. They are thus a special case of Euler diagrams, which do not necessarily show all relations. Venn diagrams were conceived around 1880 by John Venn. They are used to teach elementary set theory, as well as illustrate simple set relationships in probability, logic, statistics, linguistics, and computer science. A Venn diagram in which the area of each shape is proportional to the number of elements it contains is called an area-proportional (or scaled) Venn diagram. Example. This example involves two sets of creatures, represented here as colored circles. The orange circle represents all types of creatures that have two legs. The blue circle represents creatures that can fly. Each separate type of creature can be imagined as a point somewhere in the diagram. Living creatures that have two legs "and" can fly—for example, parrots—are then in both sets, so they correspond to points in the region where the blue and orange circles overlap. This overlapping region would only contain those elements (in this example, creatures) that are members of both the orange set (two-legged creatures) and the blue set (flying creatures). Humans and penguins are bipedal, and so are in the orange circle, but since they cannot fly, they appear in the left part of the orange circle, where it does not overlap with the blue circle. Mosquitoes can fly, but have six, not two, legs, so the point for mosquitoes is in the part of the blue circle that does not overlap with the orange one. Creatures that are neither two-legged nor able to fly (for example, whales and spiders) would all be represented by points outside both circles. The combined region of the two sets is called their "union", denoted by A ∪ B, where A is the orange circle and B the blue. The union in this case contains all living creatures that either are two-legged or can fly (or both). 
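The same example can be phrased directly in terms of set operations (a small Python sketch; the creature lists are just the ones used above, and the intersection computed here is the operation defined formally in the next paragraph):

```python
two_legged = {"human", "penguin", "parrot"}  # the orange circle
can_fly    = {"parrot", "mosquito"}          # the blue circle

union = two_legged | can_fly    # either two-legged or able to fly (or both)
overlap = two_legged & can_fly  # both two-legged and able to fly

print(union)    # the four creatures mentioned above (in some order)
print(overlap)  # {'parrot'}
# Whales and spiders belong to neither set, so they lie outside both circles.
```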
The region included in both A and B, where the two sets overlap, is called the "intersection" of A and B, denoted by A ∩ B. History. Venn diagrams were introduced in 1880 by John Venn in a paper entitled "On the Diagrammatic and Mechanical Representation of Propositions and Reasonings" in the "Philosophical Magazine and Journal of Science", about the different ways to represent propositions by diagrams. The use of these types of diagrams in formal logic, according to Frank Ruskey and Mark Weston, predates Venn but are "rightly associated" with him as he "comprehensively surveyed and formalized their usage, and was the first to generalize them". Diagrams of overlapping circles representing unions and intersections were introduced by Catalan philosopher Ramon Llull (c. 1232–1315/1316) in the 13th century, who used them to illustrate combinations of basic principles. Gottfried Wilhelm Leibniz (1646–1716) produced similar diagrams in the 17th century (though much of this work was unpublished), as did Johann Christian Lange in a work from 1712 describing Christian Weise's contributions to logic. Euler diagrams, which are similar to Venn diagrams but don't necessarily contain all possible unions and intersections, were first made prominent by mathematician Leonhard Euler in the 18th century. Venn did not use the term "Venn diagram" and referred to the concept as "Eulerian Circles". He became acquainted with Euler diagrams in 1862 and wrote that Venn diagrams did not occur to him "till much later", while attempting to adapt Euler diagrams to Boolean logic. In the opening sentence of his 1880 article Venn wrote that Euler diagrams were the only diagrammatic representation of logic to gain "any general acceptance". Venn viewed his diagrams as a pedagogical tool, analogous to verification of physical concepts through experiment. As an example of their applications, he noted that a three-set diagram could show the syllogism: 'All "A" is some "B". No "B" is any "C". Hence, no "A" is any "C".' Charles L. Dodgson (Lewis Carroll) includes "Venn's Method of Diagrams" as well as "Euler's Method of Diagrams" in an "Appendix, Addressed to Teachers" of his book "Symbolic Logic" (4th edition published in 1896). The term "Venn diagram" was later used by Clarence Irving Lewis in 1918, in his book "A Survey of Symbolic Logic". In the 20th century, Venn diagrams were further developed. David Wilson Henderson showed, in 1963, that the existence of an "n"-Venn diagram with "n"-fold rotational symmetry implied that "n" was a prime number. He also showed that such symmetric Venn diagrams exist when "n" is five or seven. In 2002, Peter Hamburger found symmetric Venn diagrams for "n" = 11 and in 2003, Griggs, Killian, and Savage showed that symmetric Venn diagrams exist for all other primes. These combined results show that rotationally symmetric Venn diagrams exist, if and only if "n" is a prime number. Venn diagrams and Euler diagrams were incorporated as part of instruction in set theory, as part of the new math movement in the 1960s. Since then, they have also been adopted in the curriculum of other fields such as reading. Overview. A Venn diagram is constructed with a collection of simple closed curves drawn in a plane. According to Lewis, the "principle of these diagrams is that classes [or "sets"] be represented by regions in such relation to one another that all the possible logical relations of these classes can be indicated in the same diagram. 
That is, the diagram initially leaves room for any possible relation of the classes, and the actual or given relation, can then be specified by indicating that some particular region is null or is not-null". Venn diagrams normally comprise overlapping circles. The interior of the circle symbolically represents the elements of the set, while the exterior represents elements that are not members of the set. For instance, in a two-set Venn diagram, one circle may represent the group of all wooden objects, while the other circle may represent the set of all tables. The overlapping region, or "intersection", would then represent the set of all wooden tables. Shapes other than circles can be employed as shown below by Venn's own higher set diagrams. Venn diagrams do not generally contain information on the relative or absolute sizes (cardinality) of sets. That is, they are schematic diagrams generally not drawn to scale. Venn diagrams are similar to Euler diagrams. However, a Venn diagram for "n" component sets must contain all 2"n" hypothetically possible zones, that correspond to some combination of inclusion or exclusion in each of the component sets. Euler diagrams contain only the actually possible zones in a given context. In Venn diagrams, a shaded zone may represent an empty zone, whereas in an Euler diagram, the corresponding zone is missing from the diagram. For example, if one set represents "dairy products" and another "cheeses", the Venn diagram contains a zone for cheeses that are not dairy products. Assuming that in the context "cheese" means some type of dairy product, the Euler diagram has the cheese zone entirely contained within the dairy-product zone—there is no zone for (non-existent) non-dairy cheese. This means that as the number of contours increases, Euler diagrams are typically less visually complex than the equivalent Venn diagram, particularly if the number of non-empty intersections is small. The difference between Euler and Venn diagrams can be seen in the following example. Take the three sets: The Euler and the Venn diagram of those sets are: Extensions to higher numbers of sets. Venn diagrams typically represent two or three sets, but there are forms that allow for higher numbers. Shown below, four intersecting spheres form the highest order Venn diagram that has the symmetry of a simplex and can be visually represented. The 16 intersections correspond to the vertices of a tesseract (or the cells of a 16-cell, respectively). For higher numbers of sets, some loss of symmetry in the diagrams is unavoidable. Venn was keen to find "symmetrical figures ... elegant in themselves," that represented higher numbers of sets, and he devised an "elegant" four-set diagram using ellipses (see below). He also gave a construction for Venn diagrams for "any" number of sets, where each successive curve that delimits a set interleaves with previous curves, starting with the three-circle diagram. Edwards–Venn diagrams. Anthony William Fairbank Edwards constructed a series of Venn diagrams for higher numbers of sets by segmenting the surface of a sphere, which became known as Edwards–Venn diagrams. For example, three sets can be easily represented by taking three hemispheres of the sphere at right angles ("x" = 0, "y" = 0 and "z" = 0). A fourth set can be added to the representation, by taking a curve similar to the seam on a tennis ball, which winds up and down around the equator, and so on. 
The resulting sets can then be projected back to a plane, to give "cogwheel" diagrams with increasing numbers of teeth—as shown here. These diagrams were devised while designing a stained-glass window in memory of Venn. Other diagrams. Edwards–Venn diagrams are topologically equivalent to diagrams devised by Branko Grünbaum, which were based around intersecting polygons with increasing numbers of sides. They are also two-dimensional representations of hypercubes. Henry John Stephen Smith devised similar "n"-set diagrams using sine curves with the series of equations formula_3 Charles Lutwidge Dodgson (also known as Lewis Carroll) devised a five-set diagram known as Carroll's square. Joaquin and Boyles, on the other hand, proposed supplemental rules for the standard Venn diagram, in order to account for certain problem cases. For instance, regarding the issue of representing singular statements, they suggest to consider the Venn diagram circle as a representation of a set of things, and use first-order logic and set theory to treat categorical statements as statements about sets. Additionally, they propose to treat singular statements as statements about set membership. So, for example, to represent the statement "a is F" in this retooled Venn diagram, a small letter "a" may be placed inside the circle that represents the set F. Related concepts. Venn diagrams correspond to truth tables for the propositions formula_4, formula_5, etc., in the sense that each region of Venn diagram corresponds to one row of the truth table. This type is also known as Johnston diagram. Another way of representing sets is with John F. Randolph's R-diagrams. Notes. &lt;templatestyles src="Reflist/styles.css" /&gt; References. &lt;templatestyles src="Reflist/styles.css" /&gt; Further reading. &lt;templatestyles src="Refbegin/styles.css" /&gt;
[ { "math_id": 0, "text": "A = \\{1,\\, 2,\\, 5\\}" }, { "math_id": 1, "text": "B = \\{1,\\, 6\\}" }, { "math_id": 2, "text": "C = \\{4,\\, 7\\}" }, { "math_id": 3, "text": "y_i = \\frac{\\sin\\left(2^i x\\right)}{2^i} \\text{ where } 0 \\leq i \\leq n-1 \\text{ and } i \\in \\mathbb{N}. " }, { "math_id": 4, "text": "x\\in A" }, { "math_id": 5, "text": "x\\in B" } ]
https://en.wikipedia.org/wiki?curid=61701
6170575
Dynamic light scattering
Technique for determining size distribution of particles Dynamic light scattering (DLS) is a technique in physics that can be used to determine the size distribution profile of small particles in suspension or polymers in solution. In the scope of DLS, temporal fluctuations are usually analyzed using the intensity or photon autocorrelation function (also known as photon correlation spectroscopy – PCS or quasi-elastic light scattering – QELS). In the time domain analysis, the autocorrelation function (ACF) usually decays starting from zero delay time, and faster dynamics due to smaller particles lead to faster decorrelation of scattered intensity trace. It has been shown that the intensity ACF is the Fourier transform of the power spectrum, and therefore the DLS measurements can be equally well performed in the spectral domain. DLS can also be used to probe the behavior of complex fluids such as concentrated polymer solutions. Setup. A monochromatic light source, usually a laser, is shot through a polarizer and into a sample. The scattered light then goes through a second polarizer where it is collected by a photomultiplier and the resulting image is projected onto a screen. This is known as a speckle pattern (Figure 1). All of the molecules in the solution are being hit with the light and all of the molecules diffract the light in all directions. The diffracted light from all of the molecules can either interfere constructively (light regions) or destructively (dark regions). This process is repeated at short time intervals and the resulting set of speckle patterns is analyzed by an autocorrelator that compares the intensity of light at each spot over time. The polarizers can be set up in two geometrical configurations. One is a vertical/vertical (VV) geometry, where the second polarizer allows light through that is in the same direction as the primary polarizer. In vertical/horizontal (VH) geometry the second polarizer allows light that is not in the same direction as the incident light. Description. When light hits small particles, the light scatters in all directions (Rayleigh scattering) as long as the particles are small compared to the wavelength (below 250 nm). Even if the light source is a laser, and thus is monochromatic and coherent, the scattering intensity fluctuates over time. This fluctuation is due to small particles in suspension undergoing Brownian motion, and so the distance between the scatterers in the solution is constantly changing with time. This scattered light then undergoes either constructive or destructive interference by the surrounding particles, and within this intensity fluctuation, information is contained about the time scale of movement of the scatterers. Sample preparation either by filtration or centrifugation is critical to remove dust and artifacts from the solution. The dynamic information of the particles is derived from the autocorrelation of the intensity trace recorded during the experiment. The second order autocorrelation curve is generated from the intensity trace as follows: formula_0 where g2(q;τ) is the autocorrelation function at a particular wave vector, q, and delay time, τ, and I is the intensity. The angular brackets formula_1 denote the expected value operator, which in some texts is denoted by a capital E. At short time delays, the correlation is high because the particles do not have a chance to move to a great extent from the initial state that they were in. 
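A minimal numerical illustration of this definition (an illustrative Python sketch, not instrument software) estimates g2 by comparing the recorded intensity trace with a delayed copy of itself:

```python
import numpy as np

def g2(intensity: np.ndarray, max_lag: int) -> np.ndarray:
    """Estimate the normalized second-order autocorrelation
    g2(tau) = <I(t) I(t+tau)> / <I(t)>^2 from an intensity trace."""
    mean_sq = intensity.mean() ** 2
    return np.array([
        np.mean(intensity[: len(intensity) - lag] * intensity[lag:]) / mean_sq
        for lag in range(1, max_lag + 1)
    ])

# Synthetic trace: exponentially correlated noise standing in for speckle
# fluctuations (purely illustrative parameters).
rng = np.random.default_rng(0)
n, rho = 100_000, 0.99
noise = rng.normal(size=n)
signal = np.empty(n)
signal[0] = noise[0]
for t in range(1, n):
    signal[t] = rho * signal[t - 1] + noise[t]
trace = (signal - signal.min()) + 1.0  # shift positive, like a real intensity

curve = g2(trace, max_lag=1000)
# The curve decays from its short-lag value toward 1 as the delay grows;
# faster decay would correspond to smaller, faster-diffusing particles.
```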
The two signals are thus essentially unchanged when compared after only a very short time interval. As the time delays become longer, the correlation decays exponentially, meaning that, after a long period has elapsed, there is no correlation between the scattered intensity of the initial and final states. This exponential decay is related to the motion of the particles, specifically to the diffusion coefficient. To fit the decay (i.e., the autocorrelation function), numerical methods are used, based on calculations of assumed distributions. If the sample is monodisperse (uniform) then the decay is simply a single exponential. The Siegert equation relates the second-order autocorrelation function with the first-order autocorrelation function g1(q;τ) as follows: formula_2 where the first term of the sum is related to the baseline value (≈1) and the parameter β is a correction factor that depends on the geometry and alignment of the laser beam in the light scattering setup. It is roughly equal to the inverse of the number of speckle (see Speckle pattern) from which light is collected. A smaller focus of the laser beam yields a coarser speckle pattern, a lower number of speckle on the detector, and thus a larger second-order autocorrelation. The most important use of the autocorrelation function is its use for size determination. Multiple scattering. Dynamic light scattering provides insight into the dynamic properties of soft materials by measuring single scattering events, meaning that each detected photon has been scattered by the sample exactly once. In principle, the DLS measurements can be performed with the detector positioned at any angle. The choice of the best angle depends on the sample properties, such as turbidity and particle size. Back scattering detection (e.g., 173° or 175°) is particularly interesting for turbid and highly concentrated samples, which contain large particles. Side scattering detection (90°) is recommended for weakly scattering samples, including small particles and transparent samples. Finally, forward scattering detection (e.g., 13° or 15°) is suitable for detection of samples containing small particles with few large particles. Some DLS instruments in the market also allow automatic angle selection based on a continuous transmittance measurement. At the lower end of the turbidity range, the Cavity Amplified Scattering Spectroscopy method makes use of an integrating cavity to elongate photon paths through quasi non-scattering samples. As opposed to conventional DLS instruments, this method is angle independent as it probes samples isotropically from all directions. Even though the DLS measurement using a single-angle detection has been the most diffuse technique, the application to many systems of scientific and industrial relevance has been limited due to often-encountered multiple scattering, wherein photons are scattered multiple times by the sample before being detected. Accurate interpretation becomes exceedingly difficult for systems with non-negligible contributions from multiple scattering. Especially for larger particles and those with high refractive index contrast, this limits the technique to very low particle concentrations, and a large variety of systems are, therefore, excluded from investigations with dynamic light scattering. However, as shown by Schaetzel, it is possible to suppress multiple scattering in dynamic light scattering experiments via a cross-correlation approach. 
The general idea is to isolate singly scattered light and suppress undesired contributions from multiple scattering in a dynamic light scattering experiment. Different implementations of cross-correlation light scattering have been developed and applied. Currently, the most widely used scheme is the so-called 3D-dynamic light scattering method. The same method can also be used to correct static light scattering data for multiple scattering contributions. Alternatively, in the limit of strong multiple scattering, a variant of dynamic light scattering called diffusing-wave spectroscopy can be applied. Data analysis. Introduction. Once the autocorrelation data have been generated, different mathematical approaches can be employed to obtain 'information' from it. Analysis of the scattering is facilitated when particles do not interact through collisions or electrostatic forces between ions. Particle-particle collisions can be suppressed by dilution, and charge effects are reduced by the use of salts to collapse the electrical double layer. The simplest approach is to treat the first-order autocorrelation function as a single exponential decay. This is appropriate for a monodisperse population. formula_3 where Γ is the decay rate. The translational diffusion coefficient Dt may be derived at a single angle or at a range of angles depending on the wave vector q. formula_4 with formula_5 where λ is the incident laser wavelength, n0 is the solvent refractive index and θ is the angle at which the detector is located with respect to the sample cell. The refractive index of the solvent plays a crucial role in light scattering and is important to calculate the Stokes radius from the Stokes-Einstein equation. Therefore, previous refractive index data from the scattering medium should be evaluated with dedicated instruments, known as refractometers. Alternatively, DLS instruments containing a refractive index measurement module allow a good estimative for this important parameter within ±0.5%, which is the accuracy defined by ISO 22412:2017 for refractive index values required for DLS. Besides the refractive index of the medium, the refractive index of the particles is only necessary when analyzing larger particle size (usually above 100 nm) and volume- or number-weighted size distributions are needed. In these cases, prior knowledge of the refractive index and absorbance of the material is required in order to apply the Mie scattering. Depending on the anisotropy and polydispersity of the system, a resulting plot of (Γ/q2) vs. q2 may or may not show an angular dependence. Small spherical particles will show no angular dependence, hence no anisotropy. A plot of (Γ/q2) vs. q2 will result in a horizontal line. Particles with a shape other than a sphere will show anisotropy and thus an angular dependence when plotting (Γ/q2) vs. q2. The intercept will be in any case the Dt. Thus, there is an optimum angle of detection θ for each particle size. A high-quality analysis should always be performed at several scattering angles (multiangle DLS). This becomes even more important in a polydisperse sample with an unknown particle size distribution. At certain angles the scattering intensity of some particles will completely overwhelm the weak scattering signal of other particles, thus making them invisible to the data analysis at this angle. DLS instruments which only work at a fixed angle can only deliver good results for some particles. 
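The chain from a measured decay rate to a particle size sketched above can be written out in a few lines of Python (the 173° angle, 633 nm wavelength, water viscosity and decay rate below are illustrative numbers only):

```python
import math

K_B = 1.380649e-23  # Boltzmann constant, J/K

def scattering_vector(wavelength_m: float, n0: float, theta_deg: float) -> float:
    """q = (4*pi*n0/lambda) * sin(theta/2)."""
    return 4 * math.pi * n0 / wavelength_m * math.sin(math.radians(theta_deg) / 2)

def diffusion_coefficient(gamma_per_s: float, q: float) -> float:
    """Translational diffusion coefficient from the decay rate, Dt = Gamma / q^2."""
    return gamma_per_s / q ** 2

def hydrodynamic_radius(d_t: float, temp_k: float, viscosity_pa_s: float) -> float:
    """Stokes-Einstein relation: R_h = k_B T / (6 * pi * eta * D_t)."""
    return K_B * temp_k / (6 * math.pi * viscosity_pa_s * d_t)

# Illustrative numbers: 633 nm laser, water (n0 = 1.33, eta = 0.89 mPa s),
# 25 C, backscattering at 173 degrees, fitted decay rate 5000 s^-1.
q = scattering_vector(633e-9, 1.33, 173.0)
d_t = diffusion_coefficient(5.0e3, q)
r_h = hydrodynamic_radius(d_t, 298.15, 0.89e-3)
print(f"q = {q:.3e} 1/m, Dt = {d_t:.3e} m^2/s, R_h = {r_h * 1e9:.1f} nm")
```

Note that everything downstream of q depends on the chosen detection angle.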
Thus, the indicated precision of a DLS instrument with only one detection angle is only ever true for certain particles. Dt is often used to calculate the hydrodynamic radius of a sphere through the Stokes–Einstein equation. It is important to note that the size determined by dynamic light scattering is the size of a sphere that moves in the same manner as the scatterer. So, for example, if the scatterer is a random coil polymer, the determined size is not the same as the radius of gyration determined by static light scattering. It is also useful to point out that the obtained size will include any other molecules or solvent molecules that move with the particle. So, for example, colloidal gold with a layer of surfactant will appear larger by dynamic light scattering (which includes the surfactant layer) than by transmission electron microscopy (which does not "see" the layer due to poor contrast). In most cases, samples are polydisperse. Thus, the autocorrelation function is a sum of the exponential decays corresponding to each of the species in the population. formula_6 It is tempting to obtain data for g1(q;τ) and attempt to invert the above to extract G(Γ). Since G(Γ) is proportional to the relative scattering from each species, it contains information on the distribution of sizes. However, this is known as an ill-posed problem. The methods described below (and others) have been developed to extract as much useful information as possible from an autocorrelation function. Cumulant method. One of the most common methods is the cumulant method, from which, in addition to the sum of the exponentials above, more information can be derived about the variance of the system as follows: formula_7 where formula_9 is the average decay rate and the second order polydispersity index (or an indication of the variance) is given by μ2 divided by the square of the average decay rate. A third-order polydispersity index may also be derived but this is necessary only if the particles of the system are highly polydisperse. The z-averaged translational diffusion coefficient Dz may be derived at a single angle or at a range of angles depending on the wave vector q. formula_8 One must note that the cumulant method is valid for small τ and sufficiently narrow G(Γ). One should seldom use parameters beyond μ3, because overfitting data with many parameters in a power-series expansion will render all the parameters, including formula_9 and μ2, less precise. The cumulant method is far less affected by experimental noise than the methods below. Size-distribution function. The particle size distribution can also be obtained using the autocorrelation function. However, polydisperse samples are not well resolved by the cumulant fit analysis. Thus, the combination of non-negative least squares (NNLS) algorithms with regularization methods, such as the Tikhonov regularization, can be used to resolve multimodal samples. An important feature of the NNLS optimization is the regularization term used to identify specific solutions and minimize the deviation between the measured data and the fit. There is no ideal regularization term that is suitable for all samples. The shape of this term can determine if the solution will represent a general broad distribution with a small number of peaks or if narrow and discrete populations will be fit. Alternatively, the calculation of the particle size distribution is performed using the CONTIN algorithm. CONTIN algorithm.
An alternative method for analyzing the autocorrelation function can be achieved through an inverse Laplace transform known as CONTIN developed by Steven Provencher. The CONTIN analysis is ideal for heterodisperse, polydisperse, and multimodal systems that cannot be resolved with the cumulant method. The resolution for separating two different particle populations is approximately a factor of five or higher and the difference in relative intensities between two different populations should be less than 1:10−5. Maximum entropy method. The Maximum entropy method is an analysis method that has great developmental potential. The method is also used for the quantification of sedimentation velocity data from analytical ultracentrifugation. The maximum entropy method involves several iterative steps to minimize the deviation of the fitted data from the experimental data and subsequently reduce the χ2 of the fitted data. Scattering of non-spherical particles. If the particle in question is not spherical, the rotational motion must be considered as well because the scattering of the light will be different depending on orientation. According to Pecora, rotational Brownian motion will affect the scattering when a particle fulfills two conditions; they must be both optically and geometrically anisotropic. Rod-shaped molecules fulfill these requirements, so a rotational diffusion coefficient must be considered in addition to a translational diffusion coefficient. In its most succinct form, the equation appears as formula_10 Where A/B is the ratio of the two relaxation modes (translational and rotational), Mp contains information about the axis perpendicular to the central axis of the particle, and Ml contains information about the axis parallel to the central axis. In 2007, Peter R. Lang and his team decided to use dynamic light scattering to determine the particle length and aspect ratio of short gold nanorods. They chose this method since it does not destroy the sample and it has a relatively easy setup. Both relaxation states were observed in VV geometry and the diffusion coefficients of both motions were used to calculate the aspect ratios of the gold nanoparticles. Applications. DLS is used to characterize the size of various particles including proteins, polymers, micelles, Protein cages and virus-like particles, vesicles, carbohydrates, nanoparticles, biological cells, and gels. If the system is not disperse in size, the mean effective diameter of the particles can be determined. This measurement depends on the size of the particle core, the size of surface structures, particle concentration, and the type of ions in the medium. Since DLS essentially measures fluctuations in scattered light intensity due to diffusing particles, the diffusion coefficient of the particles can be determined. DLS software of commercial instruments typically displays the particle population at different diameters. If the system is monodisperse, there should only be one population, whereas a polydisperse system would show multiple particle populations. If there is more than one size population present in a sample then either the CONTIN analysis should be applied for photon correlation spectroscopy instruments, or the power spectrum method should be applied for Doppler shift instruments. Stability studies can be done conveniently using DLS. Periodical DLS measurements of a sample can show whether the particles aggregate over time by seeing whether the hydrodynamic radius of the particle increases. 
If particles aggregate, there will be a larger population of particles with a larger radius. In some DLS machines, stability depending on temperature can be analyzed by controlling the temperature "in situ". See also. &lt;templatestyles src="Div col/styles.css"/&gt; References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "g^2(q;\\tau) = \\frac{\\langle I(t)I(t+\\tau)\\rangle}{\\langle I(t)\\rangle^2}" }, { "math_id": 1, "text": "\\langle\\cdot\\rangle" }, { "math_id": 2, "text": "g^2(q;\\tau)= 1+\\beta\\left[g^1(q;\\tau)\\right]^2" }, { "math_id": 3, "text": "\\ g^1(q;\\tau)= \\exp(-\\Gamma\\tau) \\, " }, { "math_id": 4, "text": "\\ \\Gamma=q^2D_t\\," }, { "math_id": 5, "text": "\\ q = \\frac{4\\pi n_0}{\\lambda}\\sin\\left(\\frac{\\theta}{2}\\right)" }, { "math_id": 6, "text": "g^1(q;\\tau)= \\sum_{i=1}^n G_i(\\Gamma_i)\\exp(-\\Gamma_i\\tau) = \\int G(\\Gamma)\\exp(-\\Gamma\\tau)\\,d\\Gamma." }, { "math_id": 7, "text": "\\ g^1(q;\\tau) = \\exp\\left(-\\bar{\\Gamma}\\left(\\tau - \\frac{\\mu_2}{2!}\\tau^2 + \\frac{\\mu_3}{3!}\\tau^3 + \\cdots\\right)\\right)" }, { "math_id": 8, "text": "\\ \\bar{\\Gamma}=q^2D_z\\," }, { "math_id": 9, "text": "\\scriptstyle \\bar{\\Gamma}" }, { "math_id": 10, "text": "\\frac{A}{B} = \\frac{5}{4} \\frac{4\\Mu_p+2\\Nu\\Mu_l\\Mu_p+\\Mu_l}{\\Mu_p-\\Nu+\\Mu_l}" } ]
https://en.wikipedia.org/wiki?curid=6170575
617121
Game semantics
Game semantics (, translated as "dialogical logic") is an approach to formal semantics that grounds the concepts of truth or validity on game-theoretic concepts, such as the existence of a winning strategy for a player, somewhat resembling Socratic dialogues or medieval theory of Obligationes. History. In the late 1950s Paul Lorenzen was the first to introduce a game semantics for logic, and it was further developed by Kuno Lorenz. At almost the same time as Lorenzen, Jaakko Hintikka developed a model-theoretical approach known in the literature as "GTS" (game-theoretical semantics). Since then, a number of different game semantics have been studied in logic. Shahid Rahman (Lille III) and collaborators developed dialogical logic into a general framework for the study of logical and philosophical issues related to logical pluralism. Beginning 1994 this triggered a kind of renaissance with lasting consequences. This new philosophical impulse experienced a parallel renewal in the fields of theoretical computer science, computational linguistics, artificial intelligence, and the formal semantics of programming languages, for instance the work of Johan van Benthem and collaborators in Amsterdam who looked thoroughly at the interface between logic and games, and Hanno Nickau who addressed the full abstraction problem in programming languages by means of games. New results in linear logic by Jean-Yves Girard in the interfaces between mathematical game theory and logic on one hand and argumentation theory and logic on the other hand resulted in the work of many others, including S. Abramsky, J. van Benthem, A. Blass, D. Gabbay, M. Hyland, W. Hodges, R. Jagadeesan, G. Japaridze, E. Krabbe, L. Ong, H. Prakken, G. Sandu, D. Walton, and J. Woods, who placed game semantics at the center of a new concept in logic in which logic is understood as a dynamic instrument of inference. There has also been an alternative perspective on proof theory and meaning theory, advocating that Wittgenstein's "meaning as use" paradigm as understood in the context of proof theory, where the so-called reduction rules (showing the effect of elimination rules on the result of introduction rules) should be seen as appropriate to formalise the explanation of the (immediate) consequences one can draw from a proposition, thus showing the function/purpose/usefulness of its main connective in the calculus of language (, , , , , ). Classical logic. The simplest application of game semantics is to propositional logic. Each formula of this language is interpreted as a game between two players, known as the "Verifier" and the "Falsifier". The Verifier is given "ownership" of all the disjunctions in the formula, and the Falsifier is likewise given ownership of all the conjunctions. Each move of the game consists of allowing the owner of the principal connective to pick one of its branches; play will then continue in that subformula, with whichever player controls its principal connective making the next move. Play ends when a primitive proposition has been so chosen by the two players; at this point the Verifier is deemed the winner if the resulting proposition is true, and the Falsifier is deemed the winner if it is false. The original formula will be considered true precisely when the Verifier has a winning strategy, while it will be false whenever the Falsifier has the winning strategy. If the formula contains negations or implications, other, more complicated, techniques may be used. 
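As an illustration of this game-theoretic reading, the following Python sketch (illustrative code, not standard terminology or a library) decides whether the Verifier has a winning strategy for a formula built from atoms, conjunctions and disjunctions; negation, which swaps the players' roles, is discussed next.

```python
# Formulas are nested tuples: ("atom", name), ("and", f, g) or ("or", f, g).
# `valuation` assigns True/False to atom names.

def verifier_wins(formula, valuation) -> bool:
    """True iff the Verifier has a winning strategy.

    The Verifier owns disjunctions (one winning branch suffices); the
    Falsifier owns conjunctions (he picks the branch, so the Verifier must
    win in every branch). Play ends at an atom, where the Verifier wins
    exactly when the atom is true."""
    kind = formula[0]
    if kind == "atom":
        return valuation[formula[1]]
    if kind == "or":   # Verifier to move
        return verifier_wins(formula[1], valuation) or verifier_wins(formula[2], valuation)
    if kind == "and":  # Falsifier to move
        return verifier_wins(formula[1], valuation) and verifier_wins(formula[2], valuation)
    raise ValueError(f"unknown connective: {kind}")

# (p or q) and q, with p true and q false: the Falsifier picks the right
# conjunct and reaches the false atom q, so the Verifier cannot win.
f = ("and", ("or", ("atom", "p"), ("atom", "q")), ("atom", "q"))
print(verifier_wins(f, {"p": True, "q": False}))  # False
```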
For example, a negation should be true if the thing negated is false, so it must have the effect of interchanging the roles of the two players. More generally, game semantics may be applied to predicate logic; the new rules allow a principal quantifier to be removed by its "owner" (the Verifier for existential quantifiers and the Falsifier for universal quantifiers) and its bound variable replaced at all occurrences by an object of the owner's choosing, drawn from the domain of quantification. Note that a single counterexample falsifies a universally quantified statement, and a single example suffices to verify an existentially quantified one. Assuming the axiom of choice, the game-theoretical semantics for classical first-order logic agree with the usual model-based (Tarskian) semantics. For classical first-order logic the winning strategy for the Verifier essentially consists of finding adequate Skolem functions and witnesses. For example, if "S" denotes formula_0 then an equisatisfiable statement for "S" is formula_1. The Skolem function "f" (if it exists) codifies a winning strategy for the Verifier of "S" by returning a witness for the existential sub-formula for every choice of "x" the Falsifier might make. The above definition was first formulated by Jaakko Hintikka as part of his GTS interpretation. The original version of game semantics for classical (and intuitionistic) logic due to Paul Lorenzen and Kuno Lorenz was defined not in terms of models but in terms of winning strategies over "formal dialogues" (P. Lorenzen, K. Lorenz 1978, S. Rahman and L. Keiff 2005). Shahid Rahman and Tero Tulenheimo developed an algorithm to convert GTS winning strategies for classical logic into dialogical winning strategies and vice versa. Formal dialogues and GTS games may be infinite and use end-of-play rules rather than letting players decide when to stop playing. Reaching this decision by standard means for strategic inferences (iterated elimination of dominated strategies, or IEDS) would, in GTS and formal dialogues, be equivalent to solving the halting problem and would exceed the reasoning abilities of human agents. GTS avoids this with a rule that tests formulas against an underlying model; logical dialogues avoid it with a non-repetition rule (similar to the threefold-repetition rule in chess). Genot and Jacot (2017) proved that players with severely bounded rationality can reason to terminate a play without IEDS. For most common logics, including the ones above, the games that arise from them have perfect information; that is, the two players always know the truth values of each primitive and are aware of all preceding moves in the game. However, with the advent of game semantics, logics with a natural semantics in terms of games of imperfect information, such as the independence-friendly logic of Hintikka and Sandu, have been proposed. Intuitionistic logic, denotational semantics, linear logic, logical pluralism. The primary motivation for Lorenzen and Kuno Lorenz was to find a game-theoretic (their term was "dialogical", in German "Dialogische Logik") semantics for intuitionistic logic. Andreas Blass was the first to point out connections between game semantics and linear logic. This line was further developed by Samson Abramsky, Radhakrishnan Jagadeesan, Pasquale Malacaria and, independently, Martin Hyland and Luke Ong, who placed special emphasis on compositionality, i.e. the definition of strategies inductively on the syntax.
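Returning to the evaluation games for classical propositional logic described earlier, the recursion over which player owns each connective is simple enough to sketch in code. The following Python fragment is only an illustrative sketch (the formula encoding and function name are ad hoc assumptions, not taken from any library): it decides whether the Verifier has a winning strategy, which for these finite, determined games coincides with classical truth under the given valuation of the atoms.

# A minimal sketch of the propositional evaluation game described above.
# Formulas are nested tuples, e.g. ("and", ("or", ("atom", "p"), ("atom", "q")), ("atom", "r")).
def verifier_wins(formula, valuation):
    """Return True iff the Verifier has a winning strategy for `formula`
    under the truth assignment `valuation` for the atoms."""
    kind = formula[0]
    if kind == "atom":
        # Play ends at a primitive proposition; the Verifier wins iff it is true.
        return valuation[formula[1]]
    if kind == "not":
        # Negation interchanges the roles of the players: in these finite, determined
        # games the Verifier wins ~A exactly when she has no winning strategy for A.
        return not verifier_wins(formula[1], valuation)
    if kind == "or":
        # The Verifier owns disjunctions: she picks some branch she can win.
        return any(verifier_wins(sub, valuation) for sub in formula[1:])
    if kind == "and":
        # The Falsifier owns conjunctions: the Verifier must win every branch.
        return all(verifier_wins(sub, valuation) for sub in formula[1:])
    raise ValueError(f"unknown connective: {kind!r}")

# Example: (p or q) and (not q or r), with p true, q false, r true: the Verifier wins.
phi = ("and",
       ("or", ("atom", "p"), ("atom", "q")),
       ("or", ("not", ("atom", "q")), ("atom", "r")))
print(verifier_wins(phi, {"p": True, "q": False, "r": True}))  # True

Over a finite domain the same recursion extends to quantifiers by letting the Verifier choose witnesses for existential quantifiers and the Falsifier choose challenges for universal ones, which is the Skolem-function picture sketched above.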
Using game semantics, Abramsky, Jagadeesan, Malacaria, Hyland and Ong solved the long-standing problem of defining a fully abstract model for the programming language PCF. Consequently, game semantics has led to fully abstract semantic models for a variety of programming languages, and to new semantics-directed methods of software verification by software model checking. Shahid Rahman and Helge Rückert extended the dialogical approach to the study of several non-classical logics such as modal logic, relevance logic, free logic and connexive logic. Recently, Rahman and collaborators developed the dialogical approach into a general framework aimed at the discussion of logical pluralism. Quantifiers. Foundational considerations of game semantics have been emphasised most by Jaakko Hintikka and Gabriel Sandu, especially for independence-friendly logic (IF logic, more recently "information"-friendly logic), a logic with branching quantifiers. It was thought that the principle of compositionality fails for these logics, so that a Tarskian truth definition could not provide a suitable semantics. To get around this problem, the quantifiers were given a game-theoretic meaning. Specifically, the approach is the same as in classical propositional logic, except that the players do not always have perfect information about previous moves by the other player. Wilfrid Hodges has proposed a compositional semantics and proved it equivalent to game semantics for IF logics. More recently, Shahid Rahman and the dialogical logic team in Lille implemented dependences and independences within a dialogical framework by means of a dialogical approach to intuitionistic type theory called "immanent reasoning". Computability logic. Japaridze’s computability logic is a game-semantical approach to logic in an extreme sense, treating games as targets to be serviced by logic rather than as technical or foundational means for studying or justifying logic. Its starting philosophical point is that logic is meant to be a universal, general-utility intellectual tool for ‘navigating the real world’ and, as such, it should be construed semantically rather than syntactically, because it is semantics that serves as a bridge between the real world and otherwise meaningless formal systems (syntax). Syntax is thus secondary, interesting only insofar as it serves the underlying semantics. From this standpoint, Japaridze has repeatedly criticized the often-followed practice of adjusting semantics to some already existing target syntactic constructions, with Lorenzen’s approach to intuitionistic logic being an example. This line of thought then proceeds to argue that the semantics, in turn, should be a game semantics, because games “offer the most comprehensive, coherent, natural, adequate and convenient mathematical models for the very essence of all ‘navigational’ activities of agents: their interactions with the surrounding world”. Accordingly, the logic-building paradigm adopted by computability logic is to identify the most natural and basic operations on games, treat those operators as logical operations, and then look for sound and complete axiomatizations of the sets of game-semantically valid formulas. On this path a host of familiar or unfamiliar logical operators have emerged in the open-ended language of computability logic, with several sorts of negations, conjunctions, disjunctions, implications, quantifiers and modalities.
Games are played between two agents: a machine and its environment, where the machine is required to follow only computable strategies. In this way, games are seen as interactive computational problems, and the machine's winning strategies for them as solutions to those problems. It has been established that computability logic is robust with respect to reasonable variations in the complexity of allowed strategies, which can be brought down as low as logarithmic space and polynomial time (one does not imply the other in interactive computations) without affecting the logic. All this explains the name “computability logic” and accounts for its applicability in various areas of computer science. Classical logic, independence-friendly logic and certain extensions of linear and intuitionistic logics turn out to be special fragments of computability logic, obtained merely by disallowing certain groups of operators or atoms. References.
[ { "math_id": 0, "text": "\\forall x \\exists y\\, \\phi(x,y)" }, { "math_id": 1, "text": "\\exists f \\forall x \\, \\phi(x,f(x))" } ]
https://en.wikipedia.org/wiki?curid=617121
61712718
Economic batch quantity
In inventory management, Economic Batch Quantity (EBQ), also known as Optimum Batch Quantity (OBQ), is a measure used to determine the quantity of units that can be produced at the minimum average cost in a given batch or product run. EBQ is basically a refinement of the economic order quantity (EOQ) model to take into account circumstances in which the goods are produced in batches. The goal of calculating the EBQ is that the product is produced in the required quantity and required quality at the lowest cost. The EOQ model was developed by Ford W. Harris in 1913, but R. H. Wilson, a consultant who applied it extensively, and K. Andler are given credit for their in-depth analysis. Aggterleky described the optimal planning planes and the meaning of under- and over-planning, and their influence on the reduction of total cost. Wiendahl used Harris and Andler's equation for the determination of the optimal quantity. Härdler took into account the costs of storage and delivery in determining the optimal batch quantity (EBQ). Muller and Piasecki asserted that inventory management is explained only with the basics of an optimal quantity calculation. Background. There are basically two options for planning the batch quantity: planning a large batch of a product at long intervals, and planning a small batch of a product at short intervals. The advantages of planning a large batch of product are that the price of ordering a large batch, administrative costs, and costs of tests and shipping are lower, and there is a lower risk of interruption of production because of the large stock. The disadvantages of planning a large batch are that more capital is tied up and storage costs of product inventory are higher. The advantages of planning a small batch of product are that less capital is tied up, storage costs of product inventory are low, and there is greater flexibility if quantities required by suppliers and buyers change. The disadvantages of planning a small batch are the costs of frequent ordering and a high risk of interruption of production because of the small product inventory. Somewhere between the large and small batch quantity lies the optimal batch quantity, i.e. the quantity at which the cost per product unit is lowest. Variables and assumptions. In the EOQ model, it is assumed that the orders are received all at once. In the EBQ model, this assumption is relaxed. There are two types of costs: those which increase with the batch size, such as working capital investment in materials and labor, cost of handling and storing materials, insurance and tax charges, interest on capital investment, etc., and those which decrease with the batch size, such as the cost (per unit) of setting up machines, the cost of preparing the paperwork that enters and controls the production of the order, etc. These two types of costs are plotted and added graphically (see figure). The figure graphs the holding cost and ordering cost per year equations. The third line is the sum of these two equations, which gives the total inventory cost per year. The lowest (minimum) point of the total cost curve gives the economic batch quantity, as illustrated in the next section. This graph should give a better understanding of the derivation of the optimal ordering quantity equation, i.e., the EBQ equation.
Thus, variables "Q, R, S, C, I" can be defined, which stand for economic batch quantity, annual requirements, preparation and set-up cost each time a new batch is started, constant cost per piece (material, direct labor and overheads), inventory carrying charge rate per year, respectively. Some assumptions have been made for calculating economic batch quantity. They are: Calculations. If formula_0 is the cost of setting up a batch, formula_1 is the annual demand, formula_2 is the daily rate at which inventory is demanded, formula_3 is the inventory holding cost per unit per annum, and formula_4 is the rate of production per annum, the total cost function formula_5 is calculated as follows: formula_6 formula_7 formula_8 In this case the ordering cost, formula_0 is often the setup cost for production. formula_9 The EBQ is calculated as the point where the total cost is minimum as follows: formula_10 formula_11 Where formula_0 is the cost of setting up a batch, formula_1 is the annual demand, formula_2 is the daily rate at which inventory is demanded, formula_12 is the inventory holding cost per unit per annum, and formula_4 is the rate of production per annum. Compared to the EOQ equation, there is a factor d/p introduced. This is due to the fact that when we produce a component while it is used in downstream production at the same time, inventory levels will not reach the same peak as when we order the components from a supplier and receive the batch at a single point in time. For instance, if we produce two different components (with the same processing time) intermittently then d/p is 0.5. It is evident from this equation that the economic batch quantity increases as the annual requirements or the preparation and setup costs increase that is, they are (not directly) proportional to each other. Similarly, it is also clear that the economic batch quantity decreases as the cost per piece and inventory carrying rate increase. Example. Set-up cost = $20 per set-up, Annual requirements = 1000, Inventory carrying cost = 10% of value/year, Cost per part = $2 In this example, the factor d/p is ignored. formula_13 Therefore, the number of batches to be made for manufacturing the parts are 1000/447 = 2.24. Nearest, 2 batches can be made and therefore the modified EBQ = 1000/2 = 500 parts. This rounding off only makes sense if we produce the item during exactly one year, and we do not carry over stock from one year to the next. See also. Economic Order Quantity Operations Management Inventory Control
[ { "math_id": 0, "text": "C_O" }, { "math_id": 1, "text": "D" }, { "math_id": 2, "text": "d" }, { "math_id": 3, "text": "C_C\n" }, { "math_id": 4, "text": "p" }, { "math_id": 5, "text": "(TC)" }, { "math_id": 6, "text": "\\text{Maximum Inventory Level}=Q-\\frac{Q d}{p}=Q(1-\\frac{d}{p})\n" }, { "math_id": 7, "text": "\\text{Average Inventory}=\\frac{1}{2}\\bigg[Q(1-\\frac{d}{p})\\bigg]=\\frac{Q}{2}\\bigg(1-\\frac{d}{p}\\bigg)" }, { "math_id": 8, "text": "\\text{Total carrying cost}=\\frac{C_CQ}{2}\\bigg(1-\\frac{d}{p}\\bigg)" }, { "math_id": 9, "text": "\\therefore \\text{Total cost TC} = \\bigg(\\frac{C_O D}{Q}\\bigg) + \\bigg[\\frac{C_C Q}{2}(1-\\frac{d}{p})\\bigg]" }, { "math_id": 10, "text": "\\text{For TC to be minimum,}{dT \\over dQ}=0." }, { "math_id": 11, "text": "\\therefore Q = \\sqrt{ \\frac{ 2 \\cdot C_O \\cdot D }{ C_C \\bigg(1-\\frac{d}{p}\\bigg) } }" }, { "math_id": 12, "text": "C_C" }, { "math_id": 13, "text": "EBQ=\\sqrt{ \\frac{ 2\\times1000\\times20 }{2\\times0.1} }=447 \\text{ parts}" } ]
https://en.wikipedia.org/wiki?curid=61712718
61717300
Dorina Mitrea
Dorina Irena-Rita Mitrea (born April 30, 1965) is a Romanian-American mathematician known for her work in harmonic analysis, partial differential equations, and the theory of distributions, and in mathematics education. She is a professor of mathematics and chair of the mathematics department at Baylor University. Education and career. Mitrea earned a master's degree in 1987 from the University of Bucharest. Her thesis, "Riemann’s Theorem for Simply Connected Riemann Surfaces", was supervised by Cabiria Andreian Cazacu. She completed her doctorate in 1996 at the University of Minnesota. Her dissertation, "Layer Potential Operators and Boundary Value Problems for Differential Forms on Lipschitz Domains", was supervised by Eugene Barry Fabes. Mitrea joined the University of Missouri mathematics faculty in 1996, and became M. & R. Houchins Distinguished Professor of Mathematics at the University of Missouri in 2016. She moved to Baylor as professor and chair in 2019. Books. Mitrea is the author or co-author of several books. Recognition. Mitrea was elected as a Fellow of the American Mathematical Society in the 2024 class of fellows. Personal life. She is married to Marius Mitrea, who is also a mathematician and moved with her from Missouri to Baylor. References.
[ { "math_id": 0, "text": "L^p" } ]
https://en.wikipedia.org/wiki?curid=61717300