1450795
Jet propulsion
Thrust produced by ejecting a jet of fluid Jet propulsion is the propulsion of an object in one direction, produced by ejecting a jet of fluid in the opposite direction. By Newton's third law, the moving body is propelled in the opposite direction to the jet. Reaction engines operating on the principle of jet propulsion include the jet engine used for aircraft propulsion, the pump-jet used for marine propulsion, and the rocket engine and plasma thruster used for spacecraft propulsion. Underwater jet propulsion is also used by several marine animals, including cephalopods and salps, with the flying squid even displaying the only known instance of jet-powered aerial flight in the animal kingdom. Physics. Jet propulsion is produced by some reaction engines or animals when thrust is generated by a fast moving jet of fluid in accordance with Newton's laws of motion. It is most effective when the Reynolds number is high—that is, the object being propelled is relatively large and passing through a low-viscosity medium. In animals, the most efficient jets are pulsed, rather than continuous, at least when the Reynolds number is greater than 6. Specific impulse. Specific impulse (usually abbreviated "I"sp) is a measure of how effectively a rocket uses propellant or jet engine uses fuel. By definition, it is the total impulse (or change in momentum) delivered per unit of propellant consumed and is dimensionally equivalent to the generated thrust divided by the propellant mass flow rate or weight flow rate. If mass (kilogram, pound-mass, or slug) is used as the unit of propellant, then specific impulse has units of velocity. If weight (newton or pound-force) is used instead, then specific impulse has units of time (seconds). Multiplying flow rate by the standard gravity ("g"0) converts specific impulse from the mass basis to the weight basis. A propulsion system with a higher specific impulse uses the mass of the propellant more effectively in creating forward thrust and, in the case of a rocket, less propellant needed for a given delta-v, per the Tsiolkovsky rocket equation. In rockets, this means the engine is more effective at gaining altitude, distance, and velocity. This effectiveness is less important in jet engines that employ wings and use outside air for combustion and carry payloads that are much heavier than the propellant. Specific impulse includes the contribution to impulse provided by external air that has been used for combustion and is exhausted with the spent propellant. Jet engines use outside air, and therefore have a much higher specific impulse than rocket engines. The specific impulse in terms of propellant mass spent has units of distance per time, which is an artificial velocity called the "effective exhaust velocity". This is higher than the "actual" exhaust velocity because the mass of the combustion air is not being accounted for. Actual and effective exhaust velocity are the same in rocket engines not utilizing air. Specific impulse is inversely proportional to specific fuel consumption (SFC) by the relationship "I"sp = 1/("go"·SFC) for SFC in kg/(N·s) and "I"sp = 3600/SFC for SFC in lb/(lbf·hr). Thrust. From the definition of specific impulse thrust in SI units is: formula_0 where Ve is the effective exhaust velocity and formula_1 is the propellant flow rate. Types of reaction engine. Reaction engines produce thrust by expelling solid or fluid reaction mass; jet propulsion applies only to engines which use fluid reaction mass. Jet engine. 
A jet engine is a reaction engine which uses ambient air as the working fluid and converts it to a hot, high-pressure gas which is expanded through one or more nozzles. Technically, most jet engines are gas turbines, working on the Brayton Cycle. Two types of jet engines, the turbojet and turbofan, employ axial-flow or centrifugal compressors to raise the pressure before combustion and turbines to drive the compression. Ramjets operate only at high flight speeds because they omit the compressors and turbines, depending instead on the dynamic pressure generated by the high speed (known as ram compression). Pulse jets also omit the compressors and turbines but can generate static thrust and have limited maximum speed. Rocket engine. The rocket is capable of operating in the vacuum of space because it is dependent on the vehicle carrying its own oxidizer instead of using the oxygen in the air, or in the case of a nuclear rocket, heats an inert propellant (such as liquid hydrogen) by forcing it through a nuclear reactor. Plasma engine. Plasma thrusters accelerate a plasma by electromagnetic means. Pump-jet. The pump-jet, used for marine propulsion, uses water as the working fluid, pressurized by a ducted propeller, centrifugal pump, or a combination of the two. Jet-propelled animals. Cephalopods such as squid use jet propulsion for rapid escape from predators; they use other mechanisms for slow swimming. The jet is produced by ejecting water through a siphon, which typically narrows to a small opening to produce the maximum exhalent velocity. The water passes through the gills prior to exhalation, fulfilling the dual purpose of respiration and locomotion. Sea hares (gastropod molluscs) employ a similar method, but without the sophisticated neurological machinery of cephalopods they navigate somewhat more clumsily. Some teleost fish have also developed jet propulsion, passing water through the gills to supplement fin-driven motion. In some dragonfly larvae, jet propulsion is achieved by the expulsion of water from a specialised cavity through the anus. Given the small size of the organism, a great speed is achieved. Scallops and cardiids, siphonophores, tunicates (such as salps), and some jellyfish also employ jet propulsion. The most efficient jet-propelled organisms are the salps, which use an order of magnitude less energy (per kilogram per metre) than squid. References. <templatestyles src="Reflist/styles.css" />
[ { "math_id": 0, "text": "F = \\dot m V_e" }, { "math_id": 1, "text": "\\dot m" } ]
https://en.wikipedia.org/wiki?curid=1450795
1451
APL (programming language)
Functional programming language for arrays APL (named after the book "A Programming Language") is a programming language developed in the 1960s by Kenneth E. Iverson. Its central datatype is the multidimensional array. It uses a large range of special graphic symbols to represent most functions and operators, leading to very concise code. It has been an important influence on the development of concept modeling, spreadsheets, functional programming, and computer math packages. It has also inspired several other programming languages. History. Mathematical notation. A mathematical notation for manipulating arrays was developed by Kenneth E. Iverson, starting in 1957 at Harvard University. In 1960, he began work for IBM where he developed this notation with Adin Falkoff and published it in his book "A Programming Language" in 1962. The preface states its premise: <templatestyles src="Template:Blockquote/styles.css" />Applied mathematics is largely concerned with the design and analysis of explicit procedures for calculating the exact or approximate values of various functions. Such explicit procedures are called algorithms or "programs". Because an effective notation for the description of programs exhibits considerable syntactic structure, it is called a "programming language". This notation was used inside IBM for short research reports on computer systems, such as the Burroughs B5000 and its stack mechanism when stack machines versus register machines were being evaluated by IBM for upcoming computers. Iverson also used his notation in a draft of the chapter "A Programming Language", written for a book he was writing with Fred Brooks, "Automatic Data Processing", which would be published in 1963. In 1979, Iverson received the Turing Award for his work on APL. Development into a computer programming language. As early as 1962, the first attempt to use the notation to describe a complete computer system happened after Falkoff discussed with William C. Carter his work to standardize the instruction set for the machines that later became the IBM System/360 family. In 1963, Herbert Hellerman, working at the IBM Systems Research Institute, implemented a part of the notation on an IBM 1620 computer, and it was used by students in a special high school course on calculating transcendental functions by series summation. Students tested their code in Hellerman's lab. This implementation of a part of the notation was called Personalized Array Translator (PAT). In 1963, Falkoff, Iverson, and Edward H. Sussenguth Jr., all working at IBM, used the notation for a formal description of the IBM System/360 series machine architecture and functionality, which resulted in a paper published in "IBM Systems Journal" in 1964. After this was published, the team turned their attention to an implementation of the notation on a computer system. One of the motivations for this focus of implementation was the interest of John L. Lawrence who had new duties with Science Research Associates, an educational company bought by IBM in 1964. Lawrence asked Iverson and his group to help use the language as a tool to develop and use computers in education. After Lawrence M. Breed and Philip S. Abrams of Stanford University joined the team at IBM Research, they continued their prior work on an implementation programmed in FORTRAN IV for a part of the notation which had been done for the IBM 7090 computer running on the IBSYS operating system. This work was finished in late 1965 and later named IVSYS (for Iverson system). 
The basis of this implementation was described in detail by Abrams in a Stanford University Technical Report, "An Interpreter for Iverson Notation" in 1966. The academic aspect of this was formally supervised by Niklaus Wirth. Like Hellerman's PAT system earlier, this implementation did not include the APL character set but used special English reserved words for functions and operators. The system was later adapted for a time-sharing system and, by November 1966, it had been reprogrammed for the IBM System/360 Model 50 computer running in a time-sharing mode and was used internally at IBM. Hardware. A key development in the ability to use APL effectively, before the wide use of cathode ray tube (CRT) terminals, was the development of a special IBM Selectric typewriter interchangeable typing element with all the special APL characters on it. This was used on paper printing terminal workstations using the Selectric typewriter and typing element mechanism, such as the IBM 1050 and IBM 2741 terminal. Keycaps could be placed over the normal keys to show which APL characters would be entered and typed when that key was struck. For the first time, a programmer could type in and see proper APL characters as used in Iverson's notation and not be forced to use awkward English keyword representations of them. Falkoff and Iverson had the special APL Selectric typing elements, 987 and 988, designed in late 1964, although no APL computer system was available to use them. Iverson cited Falkoff as the inspiration for the idea of using an IBM Selectric typing element for the APL character set. Many APL symbols, even with the APL characters on the Selectric typing element, still had to be typed in by over-striking two extant element characters. An example is the "grade up" character, which had to be made from a "delta" (shift-H) and a "Sheffer stroke" (shift-M). This was necessary because the APL character set was much larger than the 88 characters allowed on the typing element, even when letters were restricted to upper-case (capitals). Commercial availability. The first APL interactive login and creation of an APL workspace was in 1966 by Larry Breed using an IBM 1050 terminal at the IBM Mohansic Labs near Thomas J. Watson Research Center, the home of APL, in Yorktown Heights, New York. IBM was chiefly responsible for introducing APL to the marketplace. The first publicly available version of APL was released in 1968 for the IBM 1130. IBM provided "APL\1130" for free but without liability or support. It would run in as little as 8k 16-bit words of memory, and used a dedicated 1 megabyte hard disk. APL gained its foothold on mainframe timesharing systems from the late 1960s through the early 1980s, in part because it would support multiple users on lower-specification systems that had no dynamic address translation hardware. Additional improvements in performance for selected IBM System/370 mainframe systems included the "APL Assist Microcode" in which some support for APL execution was included in the processor's firmware, as distinct from being implemented entirely by higher-level software. Somewhat later, as suitably performing hardware was finally growing available in the mid- to late-1980s, many users migrated their applications to the personal computer environment. Early IBM APL interpreters for IBM 360 and IBM 370 hardware implemented their own multi-user management instead of relying on the host services, thus they were their own timesharing systems. 
First introduced for use at IBM in 1966, the "APL\360" system was a multi-user interpreter. The ability to programmatically communicate with the operating system for information and setting interpreter system variables was done through special privileged "I-beam" functions, using both monadic and dyadic operations. In 1973, IBM released "APL.SV", which was a continuation of the same product, but which offered shared variables as a means to access facilities outside of the APL system, such as operating system files. In the mid-1970s, the IBM mainframe interpreter was even adapted for use on the IBM 5100 desktop computer, which had a small CRT and an APL keyboard, when most other small computers of the time only offered BASIC. In the 1980s, the "VSAPL" program product enjoyed wide use with Conversational Monitor System (CMS), Time Sharing Option (TSO), VSPC, MUSIC/SP, and CICS users. In 1973–1974, Patrick E. Hagerty directed the implementation of the University of Maryland APL interpreter for the 1100 line of the Sperry UNIVAC 1100/2200 series mainframe computers. In 1974, student Alan Stebbens was assigned the task of implementing an internal function. Xerox APL was available from June 1975 for Xerox 560 and Sigma 6, 7, and 9 mainframes running CP-V and for Honeywell CP-6. In the 1960s and 1970s, several timesharing firms arose that sold APL services using modified versions of the IBM APL\360 interpreter. In North America, the better-known ones were IP Sharp Associates, Scientific Time Sharing Corporation (STSC), Time Sharing Resources (TSR), and The Computer Company (TCC). CompuServe also entered the market in 1978 with an APL Interpreter based on a modified version of Digital Equipment Corp and Carnegie Mellon's, which ran on DEC's KI and KL 36-bit machines. CompuServe's APL was available both to its commercial market and the consumer information service. With the advent first of less expensive mainframes such as the IBM 4300, and later the personal computer, by the mid-1980s, the timesharing industry was all but gone. "Sharp APL" was available from IP Sharp Associates, first as a timesharing service in the 1960s, and later as a program product starting around 1979. "Sharp APL" was an advanced APL implementation with many language extensions, such as "packages" (the ability to put one or more objects into a single variable), a file system, nested arrays, and shared variables. APL interpreters were available from other mainframe and mini-computer manufacturers also, notably Burroughs, Control Data Corporation (CDC), Data General, Digital Equipment Corporation (DEC), Harris, Hewlett-Packard (HP), Siemens, Xerox and others. Garth Foster of Syracuse University sponsored regular meetings of the APL implementers' community at Syracuse's Minnowbrook Conference Center in Blue Mountain Lake, New York. In later years, Eugene McDonnell organized similar meetings at the Asilomar Conference Grounds near Monterey, California, and at Pajaro Dunes near Watsonville, California. The SIGAPL special interest group of the Association for Computing Machinery continues to support the APL community. Microcomputers. On microcomputers, which became available from the mid-1970s onwards, BASIC became the dominant programming language. Nevertheless, some microcomputers provided APL instead – the first being the Intel 8008-based MCM/70 which was released in 1974 and which was primarily used in education. 
Another machine of this time was the VideoBrain Family Computer, released in 1977, which was supplied with its dialect of APL called APL/S. The Commodore SuperPET, introduced in 1981, included an APL interpreter developed by the University of Waterloo. In 1976, Bill Gates claimed in his Open Letter to Hobbyists that Microsoft Corporation was implementing APL for the Intel 8080 and Motorola 6800 but had "very little incentive to make [it] available to hobbyists" because of software piracy. It was never released. APL2. Starting in the early 1980s, IBM APL development, under the leadership of Jim Brown, implemented a new version of the APL language that contained as its primary enhancement the concept of "nested arrays", where an array can contain other arrays, and new language features which facilitated integrating nested arrays into program workflow. Ken Iverson, no longer in control of the development of the APL language, left IBM and joined I. P. Sharp Associates, where one of his major contributions was directing the evolution of Sharp APL to be more in accord with his vision. APL2 was first released for CMS and TSO in 1984. The APL2 Workstation edition (Windows, OS/2, AIX, Linux, and Solaris) followed later. As other vendors were busy developing APL interpreters for new hardware, notably Unix-based microcomputers, APL2 was almost always the standard chosen for new APL interpreter developments. Even today, most APL vendors or their users cite APL2 compatibility as a selling point for those products. IBM cites its use for problem solving, system design, prototyping, engineering and scientific computations, expert systems, for teaching mathematics and other subjects, visualization and database access. Modern implementations. Various implementations of APL by APLX, Dyalog, et al., include extensions for object-oriented programming, support for .NET, XML-array conversion primitives, graphing, operating system interfaces, and lambda calculus expressions. Freeware versions include GNU APL for Linux and NARS2000 for Windows (which runs on Linux under Wine). Both of these are fairly complete versions of APL2 with various language extensions. Derivative languages. APL has formed the basis of, or influenced, the following languages: Language characteristics. Character set. APL has been criticized and praised for its choice of a unique, non-standard character set. In the 1960s and 1970s, few terminal devices or even displays could reproduce the APL character set. The most popular ones employed the IBM Selectric print mechanism used with a special APL type element. One of the early APL line terminals (line-mode operation only, "not" full screen) was the Texas Instruments TI Model 745 (c. 1977) with the full APL character set which featured half and full duplex telecommunications modes, for interacting with an APL time-sharing service or remote mainframe to run a remote computer job, called an RJE. Over time, with the universal use of high-quality graphic displays, printing devices and Unicode support, the APL character font problem has largely been eliminated. However, entering APL characters requires the use of input method editors, keyboard mappings, virtual/on-screen APL symbol sets, or easy-reference printed keyboard cards which can frustrate beginners accustomed to other programming languages. 
With beginners who have no prior experience with other programming languages, a study involving high school students found that typing and using APL characters did not hinder the students in any measurable way. In defense of APL, it requires fewer characters to type, and keyboard mappings become memorized over time. Special APL keyboards are also made and in use today, as are freely downloadable fonts for operating systems such as Microsoft Windows. The reported productivity gains assume that one spends enough time working in the language to make it worthwhile to memorize the symbols, their semantics, and keyboard mappings, not to mention a substantial number of idioms for common tasks. Design. Unlike traditionally structured programming languages, APL code is typically structured as chains of monadic or dyadic functions, and operators acting on arrays. APL has many nonstandard "primitives" (functions and operators) that are indicated by a single symbol or a combination of a few symbols. All primitives are defined to have the same precedence, and always associate to the right. Thus, APL is "read" or best understood from right-to-left. Early APL implementations (c. 1970 or so) had no programming loop-flow control structures, such as codice_0 or codice_1 loops, and codice_2 constructs. Instead, they used array operations, and use of structured programming constructs was often not necessary, since an operation could be performed on a full array in one statement. For example, the codice_3 function (codice_4) can replace for-loop iteration: ιN when applied to a scalar positive integer yields a one-dimensional array (vector), 1 2 3 ... N. More recent implementations of APL generally include comprehensive control structures, so that data structure and program control flow can be clearly and cleanly separated. The APL environment is called a "workspace". In a workspace the user can define programs and data, i.e., the data values exist also outside the programs, and the user can also manipulate the data without having to define a program. In the examples below, the APL interpreter first types six spaces before awaiting the user's input. Its own output starts in column one. The user can save the workspace with all values, programs, and execution status. APL uses a set of non-ASCII symbols, which are an extension of traditional arithmetic and algebraic notation. Having single character names for single instruction, multiple data (SIMD) vector functions is one way that APL enables compact formulation of algorithms for data transformation such as computing Conway's Game of Life in one line of code. In nearly all versions of APL, it is theoretically possible to express any computable function in one expression, that is, in one line of code. Due to the unusual character set, many programmers use special keyboards with APL keytops to write APL code. Although there are various ways to write APL code using only ASCII characters, in practice it is almost never done. (This may be thought to support Iverson's thesis about notation as a tool of thought.) Most if not all modern implementations use standard keyboard layouts, with special mappings or input method editors to access non-ASCII characters. Historically, the APL font has been distinctive, with uppercase italic alphabetic characters and upright numerals and symbols. Most vendors continue to display the APL character set in a custom font. 
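The design point above — that a whole-array primitive such as ιN typically removes the need for an explicit loop — can be illustrated outside APL. A rough Python/NumPy sketch of the same idea (an analogy only, not APL syntax):

```python
import numpy as np

N = 10

# Conventional loop-and-accumulate style: visit each index explicitly.
total = 0
for i in range(1, N + 1):
    total += i * i

# Array-at-once style in the spirit of APL's iota: build 1 2 3 ... N in one step
# and express the whole computation without an explicit loop.
v = np.arange(1, N + 1)          # rough analogue of iota-N
total_array_style = int((v * v).sum())

assert total == total_array_style
print(total_array_style)         # 385
```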
Advocates of APL claim that the examples of so-called "write-only code" (badly written and almost incomprehensible code) are almost invariably examples of poor programming practice or novice mistakes, which can occur in any language. Advocates also claim that they are far more productive with APL than with more conventional computer languages, and that working software can be implemented in far less time and with far fewer programmers than using other technology. They also may claim that because it is compact and terse, APL lends itself well to larger-scale software development and complexity, because the number of lines of code can be reduced greatly. Many APL advocates and practitioners also view standard programming languages such as COBOL and Java as being comparatively tedious. APL is often found where time-to-market is important, such as with trading systems. Terminology. APL makes a clear distinction between "functions" and "operators". Functions take arrays (variables or constants or expressions) as arguments, and return arrays as results. Operators (similar to higher-order functions) take functions or arrays as arguments, and derive related functions. For example, the "sum" function is derived by applying the "reduction" operator to the "addition" function. Applying the same reduction operator to the "maximum" function (which returns the larger of two numbers) derives a function which returns the largest of a group (vector) of numbers. In the J language, Iverson substituted the terms "verb" for "function" and "adverb" or "conjunction" for "operator". APL also identifies those features built into the language, and represented by a symbol, or a fixed combination of symbols, as "primitives". Most primitives are either functions or operators. Coding APL is largely a process of writing non-primitive functions and (in some versions of APL) operators. However a few primitives are considered to be neither functions nor operators, most noticeably assignment. Some words used in APL literature have meanings that differ from those in both mathematics and the generality of computer science. Syntax. APL has explicit representations of functions, operators, and syntax, thus providing a basis for the clear and explicit statement of extended facilities in the language, and tools to experiment on them. Examples. Hello, world. This displays "Hello, world": 'Hello, world' A design theme in APL is to define default actions in some cases that would produce syntax errors in most other programming languages. The 'Hello, world' string constant above displays, because display is the default action on any expression for which no action is specified explicitly (e.g. assignment, function parameter). Exponentiation. Another example of this theme is that exponentiation in APL is written as , which indicates raising 2 to the power 3 (this would be written as or in some languages, or relegated to a function call such as in others). Many languages use to signify multiplication, as in , but APL chooses to use . However, if no base is specified (as with the statement in APL, or in other languages), most programming languages one would see this as a syntax error. APL, however, assumes the missing base to be the natural logarithm constant e, and interprets as . Simple statistics. Suppose that is an array of numbers. Then gives its average. Reading "right-to-left", gives the number of elements in X, and since is a dyadic operator, the term to its left is required as well. 
It is surrounded by parentheses since otherwise X would be taken (so that the summation would be of —each element of X divided by the number of elements in X), and gives the sum of the elements of X. Building on this, the following expression computes standard deviation: Naturally, one would define this expression as a function for repeated use rather than rewriting it each time. Further, since assignment is an operator, it can appear within an expression, so the following would place suitable values into T, AV and SD: "Pick 6" lottery numbers. This following immediate-mode expression generates a typical set of "Pick 6" lottery numbers: six pseudo-random integers ranging from 1 to 40, "guaranteed non-repeating", and displays them sorted in ascending order: x[⍋x←6?40] The above does a lot, concisely, although it may seem complex to a new APLer. It combines the following APL "functions" (also called "primitives" and "glyphs"): Since there is no function to the left of the left-most x to tell APL what to do with the result, it simply outputs it to the display (on a single line, separated by spaces) without needing any explicit instruction to do that. codice_5 also has a monadic equivalent called codice_14, which simply returns one random integer between 1 and its sole operand [to the right of it], inclusive. Thus, a role-playing game program might use the expression codice_15 to roll a twenty-sided die. Prime numbers. The following expression finds all prime numbers from 1 to R. In both time and space, the calculation complexity is formula_0 (in Big O notation). (~R∊R∘.×R)/R←1↓⍳R Executed from right to left, this means: Sorting. The following expression sorts a word list stored in matrix X according to word length: X[⍋X+.≠' ';] Game of Life. The following function "life", written in Dyalog APL, takes a Boolean matrix and calculates the new generation according to Conway's Game of Life. It demonstrates the power of APL to implement a complex algorithm in very little code, but understanding it requires some advanced knowledge of APL (as the same program would in many languages). HTML tags removal. In the following example, also Dyalog, the first line assigns some HTML code to a variable codice_46 and then uses an APL expression to remove all the HTML tags: txt←'<html><body><p>This is <em>emphasized</em> text.</p></body></html>' {⍵ /⍨ ~{⍵∨≠\⍵}⍵∊'<>'} txt This is emphasized text. Naming. APL derives its name from the initials of Iverson's book "A Programming Language", even though the book describes Iverson's mathematical notation, rather than the implemented programming language described in this article. The name is used only for actual implementations, starting with APL\360. Adin Falkoff coined the name in 1966 during the implementation of APL\360 at IBM: "APL" is occasionally re-interpreted as "Array Programming Language" or "Array Processing Language", thereby making "APL" into a backronym. Logo. There has always been cooperation between APL vendors, and joint conferences were held on a regular basis from 1969 until 2010. At such conferences, APL merchandise was often handed out, featuring APL motifs or collection of vendor logos. Common were apples (as a pun on the similarity in pronunciation of "apple" and "APL") and the code snippet which are the symbols produced by the classic APL keyboard layout when holding the APL modifier key and typing "APL". Despite all these community efforts, no universal vendor-agnostic logo for the programming language emerged. 
As popular programming languages increasingly have established recognisable logos, Fortran getting one in 2020, British APL Association launched a campaign in the second half of 2021, to establish such a logo for APL, and after a community election and multiple rounds of feedback, a logo was chosen in May 2022. Use. APL is used for many purposes including financial and insurance applications, artificial intelligence, neural networks and robotics. It has been argued that APL is a calculation tool and not a programming language; its symbolic nature and array capabilities have made it popular with domain experts and data scientists who do not have or require the skills of a computer programmer. APL is well suited to image manipulation and computer animation, where graphic transformations can be encoded as matrix multiplications. One of the first commercial computer graphics houses, Digital Effects, produced an APL graphics product named "Visions", which was used to create television commercials and animation for the 1982 film "Tron". Latterly, the Stormwind boating simulator uses APL to implement its core logic, its interfacing to the rendering pipeline middleware and a major part of its physics engine. Today, APL remains in use in a wide range of commercial and scientific applications, for example investment management, asset management, health care, and DNA profiling. Notable implementations. APL\360. The first implementation of APL using recognizable APL symbols was APL\360 which ran on the IBM System/360, and was completed in November 1966 though at that time remained in use only within IBM. In 1973 its implementors, Larry Breed, Dick Lathwell and Roger Moore, were awarded the Grace Murray Hopper Award from the Association for Computing Machinery (ACM). It was given "for their work in the design and implementation of APL\360, setting new standards in simplicity, efficiency, reliability and response time for interactive systems." In 1975, the IBM 5100 microcomputer offered APL\360 as one of two built-in ROM-based interpreted languages for the computer, complete with a keyboard and display that supported all the special symbols used in the language. Significant developments to APL\360 included CMS/APL, which made use of the virtual storage capabilities of CMS and APLSV, which introduced shared variables, system variables and system functions. It was subsequently ported to the IBM System/370 and VSPC platforms until its final release in 1983, after which it was replaced by APL2. APL\1130. In 1968, APL\1130 became the first publicly available APL system, created by IBM for the IBM 1130. It became the most popular IBM Type-III Library software that IBM released. APL*Plus and Sharp APL. APL*Plus and Sharp APL are versions of APL\360 with added business-oriented extensions such as data formatting and facilities to store APL arrays in external files. They were jointly developed by two companies, employing various members of the original IBM APL\360 development team. The two companies were I. P. Sharp Associates (IPSA), an APL\360 services company formed in 1964 by Ian Sharp, Roger Moore and others, and STSC, a time-sharing and consulting service company formed in 1969 by Lawrence Breed and others. Together the two developed APL*Plus and thereafter continued to work together but develop APL separately as APL*Plus and Sharp APL. 
STSC ported APL*Plus to many platforms with versions being made for the VAX 11, PC and UNIX, whereas IPSA took a different approach to the arrival of the personal computer and made Sharp APL available on this platform using additional PC-XT/360 hardware. In 1993, Soliton Incorporated was formed to support Sharp APL and it developed Sharp APL into SAX (Sharp APL for Unix). As of 2018[ [update]], APL*Plus continues as APL2000 APL+Win. In 1985, Ian Sharp, and Dan Dyer of STSC, jointly received the Kenneth E. Iverson Award for Outstanding Contribution to APL. APL2. APL2 was a significant re-implementation of APL by IBM which was developed from 1971 and first released in 1984. It provides many additions to the language, of which the most notable is nested (non-rectangular) array support. The entire APL2 Products and Services Team was awarded the Iverson Award in 2007. In 2021, IBM sold APL2 to Log-On Software, who develop and sell the product as "Log-On APL2". APLGOL. In 1972, APLGOL was released as an experimental version of APL that added structured programming language constructs to the language framework. New statements were added for interstatement control, conditional statement execution, and statement structuring, as well as statements to clarify the intent of the algorithm. It was implemented for Hewlett-Packard in 1977. Dyalog APL. Dyalog APL was first released by British company Dyalog Ltd. in 1983 and, as of 2018[ [update]], is available for AIX, Linux (including on the Raspberry Pi), macOS and Microsoft Windows platforms. It is based on APL2, with extensions to support object-oriented programming, functional programming, and tacit programming. Licences are free for personal/non-commercial use. In 1995, two of the development team – John Scholes and Peter Donnelly – were awarded the Iverson Award for their work on the interpreter. Gitte Christensen and Morten Kromberg were joint recipients of the Iverson Award in 2016. NARS2000. NARS2000 is an open-source APL interpreter written by Bob Smith, a prominent APL developer and implementor from STSC in the 1970s and 1980s. NARS2000 contains advanced features and new datatypes and runs natively on Microsoft Windows, and other platforms under Wine. It is named after a development tool from the 1980s, NARS (Nested Arrays Research System). APLX. APLX is a cross-platform dialect of APL, based on APL2 and with several extensions, which was first released by British company MicroAPL in 2002. Although no longer in development or on commercial sale it is now available free of charge from Dyalog. York APL. York APL was developed at the York University, Ontario around 1968, running on IBM 360 mainframes. One notable difference between it and APL\360 was that it defined the "shape" (ρ) of a scalar as 1 whereas APL\360 defined it as the more mathematically correct 0 — this made it easier to write functions that acted the same with scalars and vectors. GNU APL. GNU APL is a free implementation of Extended APL as specified in ISO/IEC 13751:2001 and is thus an implementation of APL2. It runs on Linux, macOS, several BSD dialects, and on Windows (either using Cygwin for full support of all its system functions or as a native 64-bit Windows binary with some of its system functions missing). GNU APL uses Unicode internally and can be scripted. It was written by Jürgen Sauermann. Richard Stallman, founder of the GNU Project, was an early adopter of APL, using it to write a text editor as a high school student in the summer of 1969. 
Interpretation and compilation of APL. APL is traditionally an interpreted language, having language characteristics such as weak variable typing not well suited to compilation. However, with arrays as its core data structure it provides opportunities for performance gains through parallelism, parallel computing, massively parallel applications, and very-large-scale integration (VLSI), and from the outset APL has been regarded as a high-performance language – for example, it was noted for the speed with which it could perform complicated matrix operations "because it operates on arrays and performs operations like matrix inversion internally". Nevertheless, APL is rarely purely interpreted and compilation or partial compilation techniques that are, or have been, used include the following: Idiom recognition. Most APL interpreters support idiom recognition and evaluate common idioms as single operations. For example, by evaluating the idiom codice_47 as a single operation (where codice_48 is a Boolean vector and codice_49 is an array), the creation of two intermediate arrays is avoided. Optimised bytecode. Weak typing in APL means that a name may reference an array (of any datatype), a function or an operator. In general, the interpreter cannot know in advance which form it will be and must therefore perform analysis, syntax checking etc. at run-time. However, in certain circumstances, it is possible to deduce in advance what type a name is expected to reference and then generate bytecode which can be executed with reduced run-time overhead. This bytecode can also be optimised using compilation techniques such as constant folding or common subexpression elimination. The interpreter will execute the bytecode when present and when any assumptions which have been made are met. Dyalog APL includes support for optimised bytecode. Compilation. Compilation of APL has been the subject of research and experiment since the language first became available; the first compiler is considered to be the Burroughs APL-700 which was released around 1971. In order to be able to compile APL, language limitations have to be imposed. APEX is a research APL compiler which was written by Robert Bernecky and is available under the GNU General Public License. The STSC APL Compiler is a hybrid of a bytecode optimiser and a compiler – it enables compilation of functions to machine code provided that its sub-functions and globals are declared, but the interpreter is still used as a runtime library and to execute functions which do not meet the compilation requirements. Standards. APL has been standardized by the American National Standards Institute (ANSI) working group X3J10 and International Organization for Standardization (ISO) and International Electrotechnical Commission (IEC), ISO/IEC Joint Technical Committee 1 Subcommittee 22 Working Group 3. The Core APL language is specified in ISO 8485:1989, and the Extended APL language is specified in ISO/IEC 13751:2001. References. <templatestyles src="Reflist/styles.css" />
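As a rough illustration of what idiom recognition buys, take the idiom mentioned above, in which a Boolean vector compresses the index vector of an array to yield the positions of its true elements. Evaluated naively, the full index vector is materialised and then filtered; evaluated as a recognised idiom, the result is produced in one fused step. A Python/NumPy sketch of the difference (NumPy's flatnonzero stands in for the fused primitive; indices here are 0-based, where APL would default to 1-based):

```python
import numpy as np

bv = np.array([True, False, True, True, False])   # the Boolean vector

# Naive two-step evaluation: build the whole index vector (like iota of the shape),
# then compress it with the Boolean vector -- one throwaway intermediate array.
indices_naive = np.arange(bv.size)[bv]

# "Idiom-recognised" evaluation: a single fused operation yielding the indices
# of the true elements without constructing the intermediate index vector.
indices_fused = np.flatnonzero(bv)

assert (indices_naive == indices_fused).all()
print(indices_fused)    # [0 2 3]
```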
[ { "math_id": 0, "text": "O(R^2)\\,\\!" } ]
https://en.wikipedia.org/wiki?curid=1451
14510148
Weibel instability
The Weibel instability is a plasma instability present in homogeneous or nearly homogeneous electromagnetic plasmas which possess an anisotropy in momentum (velocity) space. This anisotropy is most generally understood as two temperatures in different directions. Burton Fried showed that this instability can be understood more simply as the superposition of many counter-streaming beams. In this sense, it is like the two-stream instability except that the perturbations are electromagnetic and result in filamentation as opposed to electrostatic perturbations which would result in charge bunching. In the linear limit the instability causes exponential growth of electromagnetic fields in the plasma which help restore momentum space isotropy. In very extreme cases, the Weibel instability is related to one- or two-dimensional stream instabilities. Consider an electron-ion plasma in which the ions are fixed and the electrons are hotter in the y-direction than in x or z-direction. To see how magnetic field perturbation would grow, suppose a field B = B cos kx spontaneously arises from noise. The Lorentz force then bends the electron trajectories with the result that upward-moving-ev x B electrons congregate at B and downward-moving ones at A. The resulting current formula_0 sheets generate magnetic field that enhances the original field and thus perturbation grows. Weibel instability is also common in astrophysical plasmas, such as collisionless shock formation in supernova remnants and formula_1-ray bursts. A Simple Example of Weibel Instability. As a simple example of Weibel instability, consider an electron beam with density formula_2 and initial velocity formula_3 propagating in a plasma of density formula_4 with velocity formula_5. The analysis below will show how an electromagnetic perturbation in the form of a plane wave gives rise to a Weibel instability in this simple anisotropic plasma system. We assume a non-relativistic plasma for simplicity. We assume there is no background electric or magnetic field i.e. formula_6. The perturbation will be taken as an electromagnetic wave propagating along formula_7 i.e. formula_8. Assume the electric field has the form formula_9 With the assumed spatial and time dependence, we may use formula_10 and formula_11. From Faraday's Law, we may obtain the perturbation magnetic field formula_12 Consider the electron beam. We assume small perturbations, and so linearize the velocity formula_13 and density formula_14. The goal is to find the perturbation electron beam current density formula_15 where second-order terms have been neglected. To do that, we start with the fluid momentum equation for the electron beam formula_16 which can be simplified by noting that formula_17 and neglecting second-order terms. With the plane wave assumption for the derivatives, the momentum equation becomes formula_18 We can decompose the above equations in components, paying attention to the cross product at the far right, and obtain the non-zero components of the beam velocity perturbation: formula_19 formula_20 To find the perturbation density formula_21, we use the fluid continuity equation for the electron beam formula_22 which can again be simplified by noting that formula_23 and neglecting second-order terms. The result is formula_24 Using these results, we may use the equation for the beam perturbation current density given above to find formula_25 formula_26 Analogous expressions can be written for the perturbation current density of the left-moving plasma. 
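For reference, carrying the same linearization through for the left-moving plasma — replace the beam velocity formula_3 by formula_5 and the beam density formula_2 by the plasma density formula_4 — gives, in the same notation,

$$J_{p1x} = + n_{p0} e^2 E_1 \frac{k v_0}{i m \omega^2}, \qquad J_{p1z} = - n_{p0} e^2 E_1 \frac{1}{i m \omega}\left(1 + \frac{k^2 v_0^2}{\omega^2}\right),$$

so the x-component flips sign relative to the beam contribution while the z-component does not; this is the cancellation and reinforcement invoked in the next step.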
By noting that the x-component of the perturbation current density is proportional to formula_27, we see that with our assumptions for the beam and plasma unperturbed densities and velocities the x-component of the net current density will vanish, whereas the z-components, which are proportional to formula_28, will add. The net current density perturbation is therefore formula_29 The dispersion relation can now be found from Maxwell's Equations: formula_30 formula_31 formula_32 where formula_33 is the speed of light in free space. By defining the effective plasma frequency formula_34, the equation above results in formula_35 This bi-quadratic equation may be easily solved to give the dispersion relation formula_36 In the search for instabilities, we look for formula_37 (formula_38 is assumed real). Therefore, we must take the dispersion relation/mode corresponding to the minus sign in the equation above. To gain further insight on the instability, it is useful to harness our non-relativistic assumption formula_39 to simplify the square root term, by noting that formula_40 The resulting dispersion relation is then much simpler formula_41 formula_42 is purely imaginary. Writing formula_43 formula_44 we see that formula_45, indeed corresponding to an instability. The electromagnetic fields then have the form formula_46 formula_47 Therefore, the electric and magnetic fields are formula_48 out of phase, and by noting that formula_49 so we see this is a primarily magnetic perturbation although there is a non-zero electric perturbation. The magnetic field growth results in the characteristic filamentation structure of Weibel instability. Saturation will happen when the growth rate formula_50 is on the order of the electron cyclotron frequency formula_51
[ { "math_id": 0, "text": "j = -en v_e" }, { "math_id": 1, "text": "\\gamma" }, { "math_id": 2, "text": "n_{b0}" }, { "math_id": 3, "text": "v_0 \\mathbf{z}" }, { "math_id": 4, "text": "n_{p0} = n_{b0}" }, { "math_id": 5, "text": "-v_0 \\mathbf{z}" }, { "math_id": 6, "text": "\\mathbf{B_0} = \\mathbf{E_0} = 0" }, { "math_id": 7, "text": " \\mathbf{\\hat{x}} " }, { "math_id": 8, "text": "\\mathbf{k} = k \\mathbf{\\hat{x}}" }, { "math_id": 9, "text": "\\mathbf{E_1} = A e^{i(kx-\\omega t)} \\mathbf{z} " }, { "math_id": 10, "text": " \\frac{\\partial}{\\partial t} \\rightarrow -i \\omega " }, { "math_id": 11, "text": " \\nabla \\rightarrow i k \\mathbf{\\hat{x}} " }, { "math_id": 12, "text": " \\nabla \\times \\mathbf{E_1} = - \\frac{\\partial \\mathbf{B_1}}{\\partial t} \\Rightarrow i \\mathbf{k} \\times \\mathbf{E_1} = i \\omega \\mathbf{B_1} \\Rightarrow \\mathbf{B_1} = \\mathbf{\\hat{y}} \\frac{k}{\\omega} E_1 " }, { "math_id": 13, "text": " \\mathbf{v_b} = \\mathbf{v_{b0}} + \\mathbf{v_{b1}} " }, { "math_id": 14, "text": " n_b = n_{b0} + n_{b1} " }, { "math_id": 15, "text": " \\mathbf{J_{b1}} = - e n_b \\mathbf{v_b} = - e n_{b0} \\mathbf{v_{b1}} - e n_{b1} \\mathbf{v_{b0}} " }, { "math_id": 16, "text": " m(\\frac{\\partial \\mathbf{v_b}}{\\partial t} + (\\mathbf{v_b} \\cdot \\nabla) \\mathbf{v_b}) = -e \\mathbf{E} - e \\mathbf{v_b} \\times \\mathbf{B} " }, { "math_id": 17, "text": "\\frac{\\partial \\mathbf{v_{b0}}}{\\partial t} = \\nabla \\cdot \\mathbf{v_{b0}} = 0 " }, { "math_id": 18, "text": " -i \\omega m \\mathbf{v_{b1}} = -e \\mathbf{E_1} - e \\mathbf{v_{b0}} \\times \\mathbf{B_1} " }, { "math_id": 19, "text": " v_{b1z} = \\frac{e E_1}{m i \\omega } " }, { "math_id": 20, "text": " v_{b1x} = \\frac{e E_1}{m i \\omega} \\frac{k v_{b0}}{\\omega} " }, { "math_id": 21, "text": " n_{b1} " }, { "math_id": 22, "text": " \\frac{\\partial n_b}{\\partial t} + \\nabla \\cdot (n_b \\mathbf{v_b}) = 0 " }, { "math_id": 23, "text": " \\frac{\\partial n_{b0}}{\\partial t} = \\nabla n_{b0} = 0 " }, { "math_id": 24, "text": " n_{b1} = n_{b0} \\frac{k}{\\omega} v_{b1x} " }, { "math_id": 25, "text": " J_{b1x} = - n_{b0} e^2 E_1 \\frac{k v_{b0}}{i m \\omega^2}" }, { "math_id": 26, "text": " J_{b1z} = - n_{b0} e^2 E_1 \\frac{1}{i m \\omega}(1+ \\frac{k^2 v_{b0}^2}{\\omega^2})" }, { "math_id": 27, "text": "v_0" }, { "math_id": 28, "text": "v_0^2" }, { "math_id": 29, "text": " \\mathbf{J_1} = -2 n_{b0} e^2 E_1 \\frac{1}{i m \\omega}(1+ \\frac{k^2 v_{b0}^2}{\\omega^2}) \\mathbf{\\hat{z}} " }, { "math_id": 30, "text": " \\nabla \\times \\mathbf{E_1} = i \\omega \\mathbf{B_1} " }, { "math_id": 31, "text": " \\nabla \\times \\mathbf{B_1} = \\mu_0 \\mathbf{J_1} - i \\omega \\epsilon_0 \\mu_0 \\mathbf{E_1} " }, { "math_id": 32, "text": " \\Rightarrow \\nabla \\times \\nabla \\times \\mathbf{E_1} = -\\nabla^2 \\mathbf{E_1} + \\nabla (\\nabla \\cdot \\mathbf{E_1}) = k^2 \\mathbf{E_1} + i \\mathbf{k} (i \\mathbf{k} \\cdot \\mathbf{E_1}) = k^2 \\mathbf{E_1} = i \\omega \\nabla \\times \\mathbf{B_1} = \\frac{i \\omega}{c^2 \\epsilon_0} \\mathbf{J_1} + \\frac{\\omega^2}{c^2} \\mathbf{E_1} " }, { "math_id": 33, "text": " c = \\frac{1}{\\sqrt{\\epsilon_0 \\mu_0}} " }, { "math_id": 34, "text": " \\omega_p^2 = \\frac{2 n_{b0} e^2}{\\epsilon_0 m} " }, { "math_id": 35, "text": " k^2 - \\frac{\\omega^2}{c^2} = -\\frac{\\omega_p^2}{c^2}(1+\\frac{k^2v_0^2}{\\omega^2}) \\Rightarrow \\omega^4 - \\omega^2 (\\omega_p^2 + k^2 c^2) - \\omega_p^2 k^2 v_0^2 = 0 " }, { "math_id": 36, "text": " \\omega^2 = \\frac{1}{2} (\\omega_p^2 + 
k^2 c^2 \\pm \\sqrt{(\\omega_p^2+k^2 c^2)^2 + 4 \\omega_p^2 k^2 v_0^2} )" }, { "math_id": 37, "text": " Im(\\omega) \\neq 0 " }, { "math_id": 38, "text": "k" }, { "math_id": 39, "text": " v_0 << c " }, { "math_id": 40, "text": " \\sqrt{(\\omega_p^2+k^2 c^2)^2 + 4 \\omega_p^2 k^2 v_0^2} = (\\omega_p^2 + k^2 c^2)(1+ \\frac{4 \\omega_p^2k^2v_0^2} {(\\omega_p^2+k^2c^2)^2})^{1/2} \\approx (\\omega_p^2 + k^2 c^2)(1+ \\frac{2 \\omega_p^2k^2v_0^2}{(\\omega_p^2+k^2c^2)^2}) " }, { "math_id": 41, "text": " \\omega^2 = \\frac{-\\omega_p^2 k^2 v_0^2}{\\omega_p^2 + k^2c^2} < 0 " }, { "math_id": 42, "text": " \\omega " }, { "math_id": 43, "text": " \\omega = i \\gamma " }, { "math_id": 44, "text": " \\gamma = \\frac{\\omega_p k v_0}{(\\omega_p^2+k^2 c^2)^{1/2}} = \\omega_p \\frac{v_0}{c} \\frac{1}{(1+\\frac{\\omega_p^2}{k^2 c^2})^{1/2}} " }, { "math_id": 45, "text": " Im(\\omega) > 0 " }, { "math_id": 46, "text": " \\mathbf{E_1} = A \\mathbf{\\hat{z}} e^{\\gamma t} e^{i k x} " }, { "math_id": 47, "text": " \\mathbf{B_1} = \\mathbf{\\hat{y}} \\frac{k}{\\omega} E_1 = \\mathbf{\\hat{y}} \\frac{k}{i \\gamma} A e^{\\gamma t} e^{i k x} " }, { "math_id": 48, "text": "90^o" }, { "math_id": 49, "text": " \\frac{|B_1|}{|E_1|} = \\frac{k}{\\gamma} \\propto \\frac{c}{v_0} >> 1 " }, { "math_id": 50, "text": " \\gamma " }, { "math_id": 51, "text": " \\gamma \\sim \\omega_p \\frac{v_0}{c} \\sim \\omega_c \\Rightarrow B \\sim \\frac{m}{e} \\omega_p \\frac{v_0}{c} " } ]
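The growth-rate expression derived above, γ = ωp k v0 / √(ωp² + k²c²), is easy to evaluate numerically, and doing so makes the limiting value ωp·v0/c at large k·c/ωp explicit. A small Python sketch; the density and streaming velocity are assumed, purely illustrative values:

```python
import numpy as np

# Physical constants (SI)
e    = 1.602e-19    # electron charge, C
m    = 9.109e-31    # electron mass, kg
eps0 = 8.854e-12    # vacuum permittivity, F/m
c    = 2.998e8      # speed of light, m/s

# Assumed, illustrative parameters (not from any particular experiment)
n_b0 = 1e18         # beam density, m^-3; the effective density in omega_p is 2*n_b0
v0   = 0.05 * c     # counter-streaming speed, kept non-relativistic

omega_p = np.sqrt(2 * n_b0 * e**2 / (eps0 * m))    # effective plasma frequency, rad/s

# gamma(k) = omega_p * k * v0 / sqrt(omega_p^2 + k^2 c^2)
k = np.logspace(-1, 2, 5) * omega_p / c            # wavenumbers around omega_p / c
gamma = omega_p * k * v0 / np.sqrt(omega_p**2 + (k * c)**2)

for ki, gi in zip(k, gamma):
    print(f"k = {ki:9.3e} 1/m    gamma = {gi:9.3e} 1/s")

# For k*c >> omega_p the rate approaches omega_p * v0 / c
print(f"omega_p * v0 / c = {omega_p * v0 / c:.3e} 1/s")
```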
https://en.wikipedia.org/wiki?curid=14510148
14511439
Divisia index
A Divisia index is a theoretical construct to create index number series for continuous-time data on prices and quantities of goods exchanged. The name comes from François Divisia, who first proposed and formally analyzed the indexes in 1926 and discussed them in related 1925 and 1928 works. The Divisia index is designed to incorporate quantity and price changes over time from subcomponents that are measured in different units, such as labor hours, equipment investment, and materials purchases, and to summarize them in a single time series of the changes in quantities and/or prices. The resulting index number series is unitless, like other index numbers. In practice, economic data are not measured in continuous time. Thus, when a series is said to be a Divisia index, it usually means the series follows a procedure that makes a close analogue in discrete time periods, usually the Törnqvist index procedure or the Fisher ideal index procedure. Uses. Divisia-type indices are used, for example, in constructing monetary aggregates (see the Divisia monetary aggregates index) and in aggregating heterogeneous inputs, such as labor hours and capital services, in productivity measurement. Data input. The theory of the Divisia indexes of goods (say, inputs to a production process, or prices for consumer goods) uses these components as data input: prices formula_0 and quantities formula_1 for each good "i" through time, together with base-period levels chosen so that formula_2 Then a price index "P(t)" and quantity index "Q(t)" are the solution to a differential equation and, if "P(0)" and "Q(0)" were chosen suitably, the series summarize all transactions in the sense that for all t: formula_3 Discrete-time approximations. In practice, discrete-time analogues to Divisia indexes are the ones computed and used. To define and compute changes in a discrete-time index closely analogous to a Divisia index, the logarithmic growth rate of the index between periods "t"−1 and "t" is taken to be the weighted average of the logarithmic growth rates of the component quantities (or prices), with each component "j" weighted by its averaged expenditure share formula_4 (See, for example, Divisia monetary aggregates index.) History. Divisia indexes were proposed and analyzed formally by François Divisia in 1926, and discussed in related 1925 and 1928 works.
[ { "math_id": 0, "text": "p_{i}(t)" }, { "math_id": 1, "text": "q_{i}(t)" }, { "math_id": 2, "text": "\\sum_{i} p(0)*q(0) = P(0)Q(0) ." }, { "math_id": 3, "text": "\\sum_{i} p(t)*q(t) = P(t)Q(t)." }, { "math_id": 4, "text": "s_{j,t}^{*}=\\frac{1}{2}(s_{j,t}+s_{j,t-1})" } ]
https://en.wikipedia.org/wiki?curid=14511439
14511671
Interpretation (logic)
Assignment of meaning to the symbols of a formal language An interpretation is an assignment of meaning to the symbols of a formal language. Many formal languages used in mathematics, logic, and theoretical computer science are defined in solely syntactic terms, and as such do not have any meaning until they are given some interpretation. The general study of interpretations of formal languages is called formal semantics. The most commonly studied formal logics are propositional logic, predicate logic and their modal analogs, and for these there are standard ways of presenting an interpretation. In these contexts an interpretation is a function that provides the extension of symbols and strings of symbols of an object language. For example, an interpretation function could take the predicate "T" (for "tall") and assign it the extension {"a"} (for "Abraham Lincoln"). All our interpretation does is assign the extension {a} to the non-logical constant "T", and does not make a claim about whether "T" is to stand for tall and 'a' for Abraham Lincoln. Nor does logical interpretation have anything to say about logical connectives like 'and', 'or' and 'not'. Though "we" may take these symbols to stand for certain things or concepts, this is not determined by the interpretation function. An interpretation often (but not always) provides a way to determine the truth values of sentences in a language. If a given interpretation assigns the value True to a sentence or theory, the interpretation is called a model of that sentence or theory. Formal languages. A formal language consists of a possibly infinite set of "sentences" (variously called "words" or "formulas") built from a fixed set of "letters" or "symbols". The inventory from which these letters are taken is called the "alphabet" over which the language is defined. To distinguish the strings of symbols that are in a formal language from arbitrary strings of symbols, the former are sometimes called "well-formed formulæ" (wff). The essential feature of a formal language is that its syntax can be defined without reference to interpretation. For example, we can determine that ("P" or "Q") is a well-formed formula even without knowing whether it is true or false. Example. A formal language formula_0 can be defined with the alphabet formula_1, and with a word being in formula_0 if it begins with formula_2 and is composed solely of the symbols formula_2 and formula_3. A possible interpretation of formula_0 could assign the decimal digit '1' to formula_2 and '0' to formula_3. Then formula_4 would denote 101 under this interpretation of formula_0. Logical constants. In the specific cases of propositional logic and predicate logic, the formal languages considered have alphabets that are divided into two sets: the logical symbols (logical constants) and the non-logical symbols. The idea behind this terminology is that "logical" symbols have the same meaning regardless of the subject matter being studied, while "non-logical" symbols change in meaning depending on the area of investigation. Logical constants are always given the same meaning by every interpretation of the standard kind, so that only the meanings of the non-logical symbols are changed. Logical constants include quantifier symbols ∀ ("all") and ∃ ("some"), symbols for logical connectives ∧ ("and"), ∨ ("or"), ¬ ("not"), parentheses and other grouping symbols, and (in many treatments) the equality symbol =. General properties of truth-functional interpretations. 
Many of the commonly studied interpretations associate each sentence in a formal language with a single truth value, either True or False. These interpretations are called "truth functional"; they include the usual interpretations of propositional and first-order logic. The sentences that are made true by a particular assignment are said to be "satisfied" by that assignment. In classical logic, no sentence can be made both true and false by the same interpretation, although this is not true of glut logics such as LP. Even in classical logic, however, it is possible that the truth value of the same sentence can be different under different interpretations. A sentence is "consistent" if it is true under at least one interpretation; otherwise it is "inconsistent". A sentence φ is said to be "logically valid" if it is satisfied by every interpretation (if φ is satisfied by every interpretation that satisfies ψ then φ is said to be a "logical consequence" of ψ). Logical connectives. Some of the logical symbols of a language (other than quantifiers) are truth-functional connectives that represent truth functions — functions that take truth values as arguments and return truth values as outputs (in other words, these are operations on truth values of sentences). The truth-functional connectives enable compound sentences to be built up from simpler sentences. In this way, the truth value of the compound sentence is defined as a certain truth function of the truth values of the simpler sentences. The connectives are usually taken to be logical constants, meaning that the meaning of the connectives is always the same, independent of what interpretations are given to the other symbols in a formula. This is how we define logical connectives in propositional logic: So under a given interpretation of all the sentence letters Φ and Ψ (i.e., after assigning a truth-value to each sentence letter), we can determine the truth-values of all formulas that have them as constituents, as a function of the logical connectives. The following table shows how this kind of thing looks. The first two columns show the truth-values of the sentence letters as determined by the four possible interpretations. The other columns show the truth-values of formulas built from these sentence letters, with truth-values determined recursively. Now it is easier to see what makes a formula logically valid. Take the formula "F": (Φ ∨ ¬Φ). If our interpretation function makes Φ True, then ¬Φ is made False by the negation connective. Since the disjunct Φ of "F" is True under that interpretation, "F" is True. Now the only other possible interpretation of Φ makes it False, and if so, ¬Φ is made True by the negation function. That would make "F" True again, since one of "F"s disjuncts, ¬Φ, would be true under this interpretation. Since these two interpretations for "F" are the only possible logical interpretations, and since "F" comes out True for both, we say that it is logically valid or tautologous. Interpretation of a theory. An "interpretation of a theory" is the relationship between a theory and some subject matter when there is a many-to-one correspondence between certain elementary statements of the theory, and certain statements related to the subject matter. If every elementary statement in the theory has a correspondent it is called a "full interpretation", otherwise it is called a "partial interpretation". Interpretations for propositional logic. 
The formal language for propositional logic consists of formulas built up from propositional symbols (also called sentential symbols, sentential variables, propositional variables) and logical connectives. The only non-logical symbols in a formal language for propositional logic are the propositional symbols, which are often denoted by capital letters. To make the formal language precise, a specific set of propositional symbols must be fixed. The standard kind of interpretation in this setting is a function that maps each propositional symbol to one of the truth values true and false. This function is known as a "truth assignment" or "valuation" function. In many presentations, it is literally a truth value that is assigned, but some presentations assign truthbearers instead. For a language with "n" distinct propositional variables there are 2^"n" distinct possible interpretations. For any particular variable "a", for example, there are 2^1 = 2 possible interpretations: 1) "a" is assigned T, or 2) "a" is assigned F. For the pair "a", "b" there are 2^2 = 4 possible interpretations: 1) both are assigned T, 2) both are assigned F, 3) "a" is assigned T and "b" is assigned F, or 4) "a" is assigned F and "b" is assigned T. Given any truth assignment for a set of propositional symbols, there is a unique extension to an interpretation for all the propositional formulas built up from those variables. This extended interpretation is defined inductively, using the truth-table definitions of the logical connectives discussed above. First-order logic. Unlike propositional logic, where every language is the same apart from a choice of a different set of propositional variables, there are many different first-order languages. Each first-order language is defined by a signature. The signature consists of a set of non-logical symbols and an identification of each of these symbols as a constant symbol, a function symbol, or a predicate symbol. In the case of function and predicate symbols, a natural number arity is also assigned. The alphabet for the formal language consists of logical constants, the equality relation symbol =, all the symbols from the signature, and an additional infinite set of symbols known as variables. For example, in the language of rings, there are constant symbols 0 and 1, two binary function symbols + and ·, and no binary relation symbols. (Here the equality relation is taken as a logical constant.) Again, we might define a first-order language L as consisting of individual symbols a, b, and c; predicate symbols F, G, H, I and J; variables x, y, z; no function letters; no sentential symbols. Formal languages for first-order logic. Given a signature σ, the corresponding formal language is known as the set of σ-formulas. Each σ-formula is built up out of atomic formulas by means of logical connectives; atomic formulas are built from terms using predicate symbols. The formal definition of the set of σ-formulas proceeds in the other direction: first, terms are assembled from the constant and function symbols together with the variables. Then, terms can be combined into an atomic formula using a predicate symbol (relation symbol) from the signature or the special predicate symbol "=" for equality (see the section "Interpreting equality" below). Finally, the formulas of the language are assembled from atomic formulas using the logical connectives and quantifiers. Interpretations of a first-order language.
To ascribe meaning to all sentences of a first-order language, the following information is needed: a domain of discourse "D", usually required to be non-empty (see below); for each constant symbol of the signature, an element of "D" as its interpretation; for each "n"-ary function symbol, an "n"-ary function on "D" (that is, a function from "D"^"n" to "D") as its interpretation; and for each "n"-ary predicate symbol, an "n"-ary relation on "D" (that is, a subset of "D"^"n") as its interpretation. An object carrying this information is known as a structure (of signature σ), or σ-structure, or "L"-structure (of language L), or as a "model". The information specified in the interpretation provides enough information to give a truth value to any atomic formula, after each of its free variables, if any, has been replaced by an element of the domain. The truth value of an arbitrary sentence is then defined inductively using the T-schema, which is a definition of first-order semantics developed by Alfred Tarski. The T-schema interprets the logical connectives using truth tables, as discussed above. Thus, for example, φ ∧ ψ is satisfied if and only if both φ and ψ are satisfied. This leaves the issue of how to interpret formulas of the form ∀ "x" φ("x") and ∃ "x" φ("x"). The domain of discourse forms the range for these quantifiers. The idea is that the sentence ∀ "x" φ("x") is true under an interpretation exactly when every substitution instance of φ("x"), where "x" is replaced by some element of the domain, is satisfied. The formula ∃ "x" φ("x") is satisfied if there is at least one element "d" of the domain such that φ("d") is satisfied. Strictly speaking, a substitution instance such as the formula φ("d") mentioned above is not a formula in the original formal language of φ, because "d" is an element of the domain. There are two ways of handling this technical issue. The first is to pass to a larger language in which each element of the domain is named by a constant symbol. The second is to add to the interpretation a function that assigns each variable to an element of the domain. Then the T-schema can quantify over variations of the original interpretation in which this variable assignment function is changed, instead of quantifying over substitution instances. Some authors also admit propositional variables in first-order logic, which must then also be interpreted. A propositional variable can stand on its own as an atomic formula. The interpretation of a propositional variable is one of the two truth values "true" and "false." Because the first-order interpretations described here are defined in set theory, they do not associate each predicate symbol with a property (or relation), but rather with the extension of that property (or relation). In other words, these first-order interpretations are extensional, not intensional. Example of a first-order interpretation. An example of interpretation formula_5 of the language L described above is as follows. In the interpretation formula_5 of L: Non-empty domain requirement. As stated above, a first-order interpretation is usually required to specify a nonempty set as the domain of discourse. The reason for this requirement is to guarantee that equivalences such as formula_6 where "x" is not a free variable of φ, are logically valid. This equivalence holds in every interpretation with a nonempty domain, but does not always hold when empty domains are permitted. For example, the equivalence formula_7 fails in any structure with an empty domain. Thus the proof theory of first-order logic becomes more complicated when empty structures are permitted. However, the gain in allowing them is negligible, as both the intended interpretations and the interesting interpretations of the theories people study have non-empty domains.
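Over a finite domain, the quantifier clauses of the T-schema can be executed directly by running through the domain elements. The sketch below is in Python and uses an invented toy structure (a three-element domain with one constant symbol and one binary predicate, chosen here only for illustration; it is not the article's example interpretation of L):

domain = {0, 1, 2}                                   # non-empty domain of discourse
constants = {"zero": 0}                              # constant symbol -> element of the domain
relations = {"LessThan": {(0, 1), (0, 2), (1, 2)}}   # predicate symbol -> its extension

def atomic(pred, *args):
    """Truth value of an atomic formula: is the tuple of elements in the predicate's extension?"""
    return tuple(args) in relations[pred]

def forall(phi):
    """T-schema clause for the universal quantifier over this domain."""
    return all(phi(d) for d in domain)

def exists(phi):
    """T-schema clause for the existential quantifier over this domain."""
    return any(phi(d) for d in domain)

# ∀x ∃y LessThan(x, y) is false in this structure: nothing lies above 2.
print(forall(lambda x: exists(lambda y: atomic("LessThan", x, y))))      # False

# ∃x ∀y ¬LessThan(y, x) is true, witnessed by the element interpreting "zero".
print(exists(lambda x: forall(lambda y: not atomic("LessThan", y, x))))  # True

Because the quantifier clauses iterate over the domain, the interpretation must supply that domain explicitly, which is why a structure always includes one.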
Empty relations do not cause any problem for first-order interpretations, because there is no similar notion of passing a relation symbol across a logical connective, enlarging its scope in the process. Thus it is acceptable for relation symbols to be interpreted as being identically false. However, the interpretation of a function symbol must always assign a well-defined and total function to the symbol. Interpreting equality. The equality relation is often treated specially in first order logic and other predicate logics. There are two general approaches. The first approach is to treat equality as no different than any other binary relation. In this case, if an equality symbol is included in the signature, it is usually necessary to add various axioms about equality to axiom systems (for example, the substitution axiom saying that if "a" = "b" and "R"("a") holds then "R"("b") holds as well). This approach to equality is most useful when studying signatures that do not include the equality relation, such as the signature for set theory or the signature for second-order arithmetic in which there is only an equality relation for numbers, but not an equality relation for set of numbers. The second approach is to treat the equality relation symbol as a logical constant that must be interpreted by the real equality relation in any interpretation. An interpretation that interprets equality this way is known as a "normal model", so this second approach is the same as only studying interpretations that happen to be normal models. The advantage of this approach is that the axioms related to equality are automatically satisfied by every normal model, and so they do not need to be explicitly included in first-order theories when equality is treated this way. This second approach is sometimes called "first order logic with equality", but many authors adopt it for the general study of first-order logic without comment. There are a few other reasons to restrict study of first-order logic to normal models. First, it is known that any first-order interpretation in which equality is interpreted by an equivalence relation and satisfies the substitution axioms for equality can be cut down to an elementarily equivalent interpretation on a subset of the original domain. Thus there is little additional generality in studying non-normal models. Second, if non-normal models are considered, then every consistent theory has an infinite model; this affects the statements of results such as the Löwenheim–Skolem theorem, which are usually stated under the assumption that only normal models are considered. Many-sorted first-order logic. A generalization of first order logic considers languages with more than one "sort" of variables. The idea is different sorts of variables represent different types of objects. Every sort of variable can be quantified; thus an interpretation for a many-sorted language has a separate domain for each of the sorts of variables to range over (there is an infinite collection of variables of each of the different sorts). Function and relation symbols, in addition to having arities, are specified so that each of their arguments must come from a certain sort. One example of many-sorted logic is for planar Euclidean geometry. There are two sorts; points and lines. There is an equality relation symbol for points, an equality relation symbol for lines, and a binary incidence relation "E" which takes one point variable and one line variable. 
The intended interpretation of this language has the point variables range over all points on the Euclidean plane, the line variable range over all lines on the plane, and the incidence relation "E"("p","l") holds if and only if point "p" is on line "l". Higher-order predicate logics. A formal language for higher-order predicate logic looks much the same as a formal language for first-order logic. The difference is that there are now many different types of variables. Some variables correspond to elements of the domain, as in first-order logic. Other variables correspond to objects of higher type: subsets of the domain, functions from the domain, functions that take a subset of the domain and return a function from the domain to subsets of the domain, etc. All of these types of variables can be quantified. There are two kinds of interpretations commonly employed for higher-order logic. "Full semantics" require that, once the domain of discourse is satisfied, the higher-order variables range over all possible elements of the correct type (all subsets of the domain, all functions from the domain to itself, etc.). Thus the specification of a full interpretation is the same as the specification of a first-order interpretation. "Henkin semantics", which are essentially multi-sorted first-order semantics, require the interpretation to specify a separate domain for each type of higher-order variable to range over. Thus an interpretation in Henkin semantics includes a domain "D", a collection of subsets of "D", a collection of functions from "D" to "D", etc. The relationship between these two semantics is an important topic in higher order logic. Non-classical interpretations. The interpretations of propositional logic and predicate logic described above are not the only possible interpretations. In particular, there are other types of interpretations that are used in the study of non-classical logic (such as intuitionistic logic), and in the study of modal logic. Interpretations used to study non-classical logic include topological models, Boolean-valued models, and Kripke models. Modal logic is also studied using Kripke models. Intended interpretations. Many formal languages are associated with a particular interpretation that is used to motivate them. For example, the first-order signature for set theory includes only one binary relation, ∈, which is intended to represent set membership, and the domain of discourse in a first-order theory of the natural numbers is intended to be the set of natural numbers. The intended interpretation is called the "standard model" (a term introduced by Abraham Robinson in 1960). In the context of Peano arithmetic, it consists of the natural numbers with their ordinary arithmetical operations. All models that are isomorphic to the one just given are also called standard; these models all satisfy the Peano axioms. There are also non-standard models of the (first-order version of the) Peano axioms, which contain elements not correlated with any natural number. While the intended interpretation can have no explicit indication in the strictly formal syntactical rules, it naturally affects the choice of the formation and transformation rules of the syntactical system. 
For example, primitive signs must permit expression of the concepts to be modeled; sentential formulas are chosen so that their counterparts in the intended interpretation are meaningful declarative sentences; primitive sentences need to come out as true sentences in the interpretation; rules of inference must be such that, if the sentence formula_8 is directly derivable from a sentence formula_9, then formula_10 turns out to be a true sentence, with "→" meaning implication, as usual. These requirements ensure that all provable sentences also come out to be true. Most formal systems have many more models than they were intended to have (the existence of non-standard models is an example). When we speak about 'models' in the empirical sciences (that is, when we want reality to be a model of our science), we mean an "intended model". A model in the empirical sciences is an "intended factually-true descriptive interpretation" (or in other contexts: a non-intended arbitrary interpretation used to clarify such an intended factually-true descriptive interpretation.) All models are interpretations that have the same domain of discourse as the intended one, but other assignments for non-logical constants. Example. Given a simple formal system (we shall call this one formula_11) whose alphabet α consists only of three symbols formula_12 and whose formation rule for formulas is: 'Any string of symbols of formula_11 which is at least 6 symbols long, and which is not infinitely long, is a formula of formula_11. Nothing else is a formula of formula_11.' The single axiom schema of formula_11 is: " formula_13 " (where " formula_14 " is a metasyntactic variable standing for a finite string of " formula_15 "s ) A formal proof can be constructed as follows: (1) formula_16 (2) formula_17 (3) formula_18 In this example the theorem produced " formula_18 " can be interpreted as meaning "One plus three equals four." A different interpretation would be to read it backwards as "Four minus three equals one." Other concepts of interpretation. There are other uses of the term "interpretation" that are commonly used, which do not refer to the assignment of meanings to formal languages. In model theory, a structure "A" is said to interpret a structure "B" if there is a definable subset "D" of "A", and definable relations and functions on "D", such that "B" is isomorphic to the structure with domain "D" and these functions and relations. In some settings, it is not the domain "D" that is used, but rather "D" modulo an equivalence relation definable in "A". For additional information, see Interpretation (model theory). A theory "T" is said to interpret another theory "S" if there is a finite extension by definitions "T"′ of "T" such that "S" is contained in "T"′. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "\\mathcal{W}" }, { "math_id": 1, "text": "\\alpha = \\{ \\triangle, \\square \\}" }, { "math_id": 2, "text": "\\triangle" }, { "math_id": 3, "text": "\\square" }, { "math_id": 4, "text": "\\triangle \\square \\triangle" }, { "math_id": 5, "text": "\\mathcal{I}" }, { "math_id": 6, "text": "(\\phi \\lor \\exists x \\psi) \\leftrightarrow \\exists x (\\phi \\lor \\psi)," }, { "math_id": 7, "text": "[\\forall y (y = y) \\lor \\exists x ( x = x)] \\equiv \\exists x [ \\forall y ( y = y) \\lor x = x]" }, { "math_id": 8, "text": "\\mathcal{I}_j" }, { "math_id": 9, "text": "\\mathcal{I}_i" }, { "math_id": 10, "text": "\\mathcal{I}_i \\to \\mathcal{I}_j" }, { "math_id": 11, "text": "\\mathcal{FS'}" }, { "math_id": 12, "text": "\\{ \\blacksquare, \\bigstar, \\blacklozenge \\}" }, { "math_id": 13, "text": "\\blacksquare \\ \\bigstar \\ast \\blacklozenge \\ \\blacksquare \\ast" }, { "math_id": 14, "text": "\\ast" }, { "math_id": 15, "text": "\\blacksquare" }, { "math_id": 16, "text": "\\blacksquare \\ \\bigstar \\ \\blacksquare \\ \\blacklozenge \\ \\blacksquare \\ \\blacksquare" }, { "math_id": 17, "text": "\\blacksquare \\ \\bigstar \\ \\blacksquare \\ \\blacksquare \\ \\blacklozenge \\ \\blacksquare \\ \\blacksquare \\ \\blacksquare" }, { "math_id": 18, "text": "\\blacksquare \\ \\bigstar \\ \\blacksquare \\ \\blacksquare \\ \\blacksquare \\ \\blacklozenge \\ \\blacksquare \\ \\blacksquare \\ \\blacksquare \\ \\blacksquare" } ]
https://en.wikipedia.org/wiki?curid=14511671
1451250
Bidiagonal matrix
In mathematics, a bidiagonal matrix is a banded matrix with non-zero entries along the main diagonal and "either" the diagonal above or the diagonal below. This means there are exactly two non-zero diagonals in the matrix. When the diagonal above the main diagonal has the non-zero entries the matrix is upper bidiagonal. When the diagonal below the main diagonal has the non-zero entries the matrix is lower bidiagonal. For example, the following matrix is upper bidiagonal: formula_0 and the following matrix is lower bidiagonal: formula_1 Usage. One variant of the QR algorithm starts with reducing a general matrix into a bidiagonal one, and the singular value decomposition (SVD) uses this method as well. Bidiagonalization. Bidiagonalization allows guaranteed accuracy when using floating-point arithmetic to compute singular values. References. &lt;templatestyles src="Refbegin/styles.css" /&gt; &lt;templatestyles src="Reflist/styles.css" /&gt;
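The two example matrices above are easy to construct and check numerically. The following sketch uses Python with NumPy (the helper names are choices made here; NumPy itself has no dedicated bidiagonal type): each matrix is assembled from its main diagonal and the single adjacent non-zero diagonal.

import numpy as np

def upper_bidiagonal(main, superdiag):
    """Upper bidiagonal matrix from its main diagonal and the diagonal above it."""
    return np.diag(main) + np.diag(superdiag, k=1)

def lower_bidiagonal(main, subdiag):
    """Lower bidiagonal matrix from its main diagonal and the diagonal below it."""
    return np.diag(main) + np.diag(subdiag, k=-1)

U = upper_bidiagonal([1, 4, 3, 3], [4, 1, 4])   # the upper bidiagonal example above
L = lower_bidiagonal([1, 4, 3, 3], [2, 3, 4])   # the lower bidiagonal example above
print(U)
print(L)

# Every entry outside the main diagonal and the superdiagonal of U is zero.
off_band = np.ones((4, 4)) - np.eye(4) - np.eye(4, k=1)
assert np.all(U[off_band == 1] == 0)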
[ { "math_id": 0, "text": "\\begin{pmatrix}\n1 & 4 & 0 & 0 \\\\\n0 & 4 & 1 & 0 \\\\\n0 & 0 & 3 & 4 \\\\\n0 & 0 & 0 & 3 \\\\\n\\end{pmatrix}" }, { "math_id": 1, "text": "\\begin{pmatrix}\n1 & 0 & 0 & 0 \\\\\n2 & 4 & 0 & 0 \\\\\n0 & 3 & 3 & 0 \\\\\n0 & 0 & 4 & 3 \\\\\n\\end{pmatrix}." } ]
https://en.wikipedia.org/wiki?curid=1451250
145126
Principal ideal
Ring ideal generated by a single element of the ring In mathematics, specifically ring theory, a principal ideal is an ideal formula_0 in a ring formula_1 that is generated by a single element formula_2 of formula_1 through multiplication by every element of formula_3 The term also has another, similar meaning in order theory, where it refers to an (order) ideal in a poset formula_4 generated by a single element formula_5 which is to say the set of all elements less than or equal to formula_6 in formula_7 The remainder of this article addresses the ring-theoretic concept. Definitions. A "left principal ideal" of formula_1 is a subset of formula_1 of the form formula_8; a "right principal ideal" of formula_1 is a subset of the form formula_10; and a "two-sided principal ideal" of formula_1 is a subset of the form formula_11, that is, the set of all finite sums of elements of the form formula_12 While this definition for two-sided principal ideal may seem more complicated than the others, it is necessary to ensure that the ideal remains closed under addition. If formula_1 is a commutative ring with identity, then the above three notions are all the same. In that case, it is common to write the ideal generated by formula_2 as formula_13 or formula_14 Examples of non-principal ideal. Not all ideals are principal. For example, consider the commutative ring formula_15 of all polynomials in two variables formula_6 and formula_16 with complex coefficients. The ideal formula_17 generated by formula_6 and formula_16 which consists of all the polynomials in formula_15 that have zero for the constant term, is not principal. To see this, suppose that formula_18 were a generator for formula_19 Then formula_6 and formula_20 would both be divisible by formula_21 which is impossible unless formula_18 is a nonzero constant. But zero is the only constant in formula_22 so we have a contradiction. In the ring formula_23 the numbers where formula_24 is even form a non-principal ideal. This ideal forms a regular hexagonal lattice in the complex plane. Consider formula_25 and formula_26 These numbers are elements of this ideal with the same norm (each has absolute value 2, hence field norm 4), but because the only units in the ring are formula_27 and formula_28 they are not associates. Related definitions. A ring in which every ideal is principal is called "principal", or a "principal ideal ring". A "principal ideal domain" (PID) is an integral domain in which every ideal is principal. Any PID is a unique factorization domain; the normal proof of unique factorization in the integers (the so-called fundamental theorem of arithmetic) holds in any PID. Examples of principal ideal. The principal ideals in formula_29 are of the form formula_30 In fact, formula_29 is a principal ideal domain, which can be shown as follows. Suppose formula_31 where formula_32 and consider the surjective homomorphisms formula_33 Since formula_34 is finite, for sufficiently large formula_35 we have formula_36 Thus formula_37 which implies formula_0 is always finitely generated. Since the ideal formula_38 generated by any integers formula_2 and formula_39 is exactly formula_40 by induction on the number of generators it follows that formula_0 is principal. However, all rings have principal ideals, namely, any ideal generated by exactly one element. For example, the ideal formula_41 is a principal ideal of formula_42 and formula_43 is a principal ideal of formula_44 In fact, formula_45 and formula_46 are principal ideals of any ring formula_3 Properties. Any Euclidean domain is a PID; the algorithm used to calculate greatest common divisors may be used to find a generator of any ideal. More generally, any two principal ideals in a commutative ring have a greatest common divisor in the sense of ideal multiplication.
In principal ideal domains, this allows us to calculate greatest common divisors of elements of the ring, up to multiplication by a unit; we define formula_47 to be any generator of the ideal formula_48 For a Dedekind domain formula_49 we may also ask, given a non-principal ideal formula_0 of formula_49 whether there is some extension formula_50 of formula_1 such that the ideal of formula_50 generated by formula_0 is principal (said more loosely, formula_0 "becomes principal" in formula_50). This question arose in connection with the study of rings of algebraic integers (which are examples of Dedekind domains) in number theory, and led to the development of class field theory by Teiji Takagi, Emil Artin, David Hilbert, and many others. The principal ideal theorem of class field theory states that every integer ring formula_1 (i.e. the ring of integers of some number field) is contained in a larger integer ring formula_50 which has the property that "every" ideal of formula_1 becomes a principal ideal of formula_51 In this theorem we may take formula_50 to be the ring of integers of the Hilbert class field of formula_1; that is, the maximal unramified abelian extension (that is, Galois extension whose Galois group is abelian) of the fraction field of formula_49 and this is uniquely determined by formula_3 Krull's principal ideal theorem states that if formula_1 is a Noetherian ring and formula_0 is a principal, proper ideal of formula_49 then formula_0 has height at most one.
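The fact used above, that the ideal of the integers generated by two elements formula_2 and formula_39 is the principal ideal generated by their greatest common divisor, is easy to check computationally. The sketch below is in Python (the small search range and the helper name are choices made here for illustration only): the extended Euclidean algorithm exhibits gcd(a, b) as an element of the ideal generated by a and b, and a brute-force enumeration confirms that every combination x*a + y*b in a small range is a multiple of that gcd.

from math import gcd

def extended_gcd(a, b):
    """Return (g, x, y) with g = gcd(a, b) and g = x*a + y*b, so g lies in the ideal generated by a and b."""
    if b == 0:
        return a, 1, 0
    g, x, y = extended_gcd(b, a % b)
    return g, y, x - (a // b) * y

a, b = 12, 33
g, x, y = extended_gcd(a, b)
assert g == gcd(a, b) == x * a + y * b
print(g, x, y)                      # 3 3 -1, since 3 = 3*12 + (-1)*33

# Conversely, every element x*a + y*b of the ideal is a multiple of gcd(a, b),
# so the two-generator ideal and the principal ideal generated by the gcd coincide.
combinations = {u * a + v * b for u in range(-10, 11) for v in range(-10, 11)}
assert all(c % g == 0 for c in combinations)
print(sorted(c for c in combinations if 0 <= c <= 30))   # multiples of 3 in this range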
[ { "math_id": 0, "text": "I" }, { "math_id": 1, "text": "R" }, { "math_id": 2, "text": "a" }, { "math_id": 3, "text": "R." }, { "math_id": 4, "text": "P" }, { "math_id": 5, "text": "x \\in P," }, { "math_id": 6, "text": "x" }, { "math_id": 7, "text": "P." }, { "math_id": 8, "text": "Ra = \\{ra : r \\in R\\}" }, { "math_id": 9, "text": "a," }, { "math_id": 10, "text": "aR = \\{ar : r \\in R\\}" }, { "math_id": 11, "text": "RaR = \\{r_1 a s_1 + \\ldots + r_n a s_n: r_1,s_1, \\ldots, r_n, s_n \\in R\\}" }, { "math_id": 12, "text": "ras." }, { "math_id": 13, "text": "\\langle a \\rangle" }, { "math_id": 14, "text": "(a)." }, { "math_id": 15, "text": "\\mathbb{C}[x, y]" }, { "math_id": 16, "text": "y," }, { "math_id": 17, "text": "\\langle x, y \\rangle" }, { "math_id": 18, "text": "p" }, { "math_id": 19, "text": "\\langle x, y \\rangle." }, { "math_id": 20, "text": "y" }, { "math_id": 21, "text": "p," }, { "math_id": 22, "text": "\\langle x, y \\rangle," }, { "math_id": 23, "text": "\\mathbb{Z}[\\sqrt{-3}] = \\{a + b\\sqrt{-3}: a, b\\in \\mathbb{Z} \\}," }, { "math_id": 24, "text": "a + b" }, { "math_id": 25, "text": "(a,b) = (2,0)" }, { "math_id": 26, "text": "(1,1)." }, { "math_id": 27, "text": "1" }, { "math_id": 28, "text": "-1," }, { "math_id": 29, "text": "\\mathbb{Z}" }, { "math_id": 30, "text": "\\langle n \\rangle = n\\mathbb{Z}." }, { "math_id": 31, "text": "I=\\langle n_1, n_2, \\ldots\\rangle" }, { "math_id": 32, "text": "n_1\\neq 0," }, { "math_id": 33, "text": "\\mathbb{Z}/\\langle n_1\\rangle \\rightarrow \\mathbb{Z}/\\langle n_1, n_2\\rangle \\rightarrow \\mathbb{Z}/\\langle n_1, n_2, n_3\\rangle\\rightarrow \\cdots." }, { "math_id": 34, "text": "\\mathbb{Z}/\\langle n_1\\rangle" }, { "math_id": 35, "text": "k" }, { "math_id": 36, "text": "\\mathbb{Z}/\\langle n_1, n_2, \\ldots, n_k\\rangle = \\mathbb{Z}/\\langle n_1, n_2, \\ldots, n_{k+1}\\rangle = \\cdots." }, { "math_id": 37, "text": "I=\\langle n_1, n_2, \\ldots, n_k\\rangle," }, { "math_id": 38, "text": "\\langle a,b\\rangle" }, { "math_id": 39, "text": "b" }, { "math_id": 40, "text": "\\langle \\mathop{\\mathrm{gcd}}(a,b)\\rangle," }, { "math_id": 41, "text": "\\langle x\\rangle" }, { "math_id": 42, "text": "\\mathbb{C}[x,y]," }, { "math_id": 43, "text": "\\langle \\sqrt{-3} \\rangle" }, { "math_id": 44, "text": "\\mathbb{Z}[\\sqrt{-3}]." }, { "math_id": 45, "text": "\\{0\\} = \\langle 0\\rangle" }, { "math_id": 46, "text": "R=\\langle 1\\rangle" }, { "math_id": 47, "text": "\\gcd(a, b)" }, { "math_id": 48, "text": "\\langle a, b \\rangle." }, { "math_id": 49, "text": "R," }, { "math_id": 50, "text": "S" }, { "math_id": 51, "text": "S." } ]
https://en.wikipedia.org/wiki?curid=145126
145128
Algorithmic efficiency
Property of an algorithm In computer science, algorithmic efficiency is a property of an algorithm which relates to the amount of computational resources used by the algorithm. Algorithmic efficiency can be thought of as analogous to engineering productivity for a repeating or continuous process. For maximum efficiency it is desirable to minimize resource usage. However, different resources such as time and space complexity cannot be compared directly, so which of two algorithms is considered to be more efficient often depends on which measure of efficiency is considered most important. For example, bubble sort and timsort are both algorithms to sort a list of items from smallest to largest. Bubble sort organizes the list in time proportional to the number of elements squared (formula_0, see Big O notation), but only requires a small amount of extra memory which is constant with respect to the length of the list (formula_1). Timsort sorts the list in time linearithmic (proportional to a quantity times its logarithm) in the list's length (formula_2), but has a space requirement linear in the length of the list (formula_3). If large lists must be sorted at high speed for a given application, timsort is a better choice; however, if minimizing the memory footprint of the sorting is more important, bubble sort is a better choice. Background. The importance of efficiency with respect to time was emphasized by Ada Lovelace in 1843 as applied to Charles Babbage's mechanical analytical engine: "In almost every computation a great variety of arrangements for the succession of the processes is possible, and various considerations must influence the selections amongst them for the purposes of a calculating engine. One essential object is to choose that arrangement which shall tend to reduce to a minimum the time necessary for completing the calculation" Early electronic computers had both limited speed and limited random access memory. Therefore, a space–time trade-off occurred. A task could use a fast algorithm using a lot of memory, or it could use a slow algorithm using little memory. The engineering trade-off was therefore to use the fastest algorithm that could fit in the available memory. Modern computers are significantly faster than early computers and have a much larger amount of memory available (gigabytes instead of kilobytes). Nevertheless, Donald Knuth emphasized that efficiency is still an important consideration: "In established engineering disciplines a 12% improvement, easily obtained, is never considered marginal and I believe the same viewpoint should prevail in software engineering" Overview. An algorithm is considered efficient if its resource consumption, also known as computational cost, is at or below some acceptable level. Roughly speaking, 'acceptable' means: it will run in a reasonable amount of time or space on an available computer, typically as a function of the size of the input. Since the 1950s computers have seen dramatic increases in both the available computational power and in the available amount of memory, so current acceptable levels would have been unacceptable even 10 years ago. In fact, thanks to the approximate doubling of computer power every 2 years, tasks that are acceptably efficient on modern smartphones and embedded systems may have been unacceptably inefficient for industrial servers 10 years ago. Computer manufacturers frequently bring out new models, often with higher performance. 
Software costs can be quite high, so in some cases the simplest and cheapest way of getting higher performance might be to just buy a faster computer, provided it is compatible with an existing computer. There are many ways in which the resources used by an algorithm can be measured: the two most common measures are speed and memory usage; other measures could include transmission speed, temporary disk usage, long-term disk usage, power consumption, total cost of ownership, response time to external stimuli, etc. Many of these measures depend on the size of the input to the algorithm, i.e. the amount of data to be processed. They might also depend on the way in which the data is arranged; for example, some sorting algorithms perform poorly on data which is already sorted, or which is sorted in reverse order. In practice, there are other factors which can affect the efficiency of an algorithm, such as requirements for accuracy and/or reliability. As detailed below, the way in which an algorithm is implemented can also have a significant effect on actual efficiency, though many aspects of this relate to optimization issues. Theoretical analysis. In the theoretical analysis of algorithms, the normal practice is to estimate their complexity in the asymptotic sense. The most commonly used notation to describe resource consumption or "complexity" is Big O notation (popularized in computer science by Donald Knuth), representing the complexity of an algorithm as a function of the size of the input formula_4. Big O notation is an asymptotic measure of function complexity, where formula_5 roughly means the time requirement for an algorithm is proportional to formula_6, omitting lower-order terms that contribute less than formula_6 to the growth of the function as formula_4 grows arbitrarily large. This estimate may be misleading when formula_4 is small, but is generally sufficiently accurate when formula_4 is large as the notation is asymptotic. For example, bubble sort may be faster than merge sort when only a few items are to be sorted; however either implementation is likely to meet performance requirements for a small list. Typically, programmers are interested in algorithms that scale efficiently to large input sizes, and merge sort is preferred over bubble sort for lists of the lengths encountered in most data-intensive programs. Some examples of Big O notation applied to algorithms' asymptotic time complexity include constant time formula_1 (e.g., indexing into an array), linear time formula_3 (e.g., a single scan of a list), linearithmic time formula_2 (e.g., efficient comparison sorts such as merge sort), and quadratic time formula_0 (e.g., bubble sort). Measuring performance. For new versions of software or to provide comparisons with competitive systems, benchmarks are sometimes used, which assist with gauging an algorithm's relative performance. If a new sort algorithm is produced, for example, it can be compared with its predecessors to ensure that it is at least as efficient as before with known data, taking into consideration any functional improvements. Benchmarks can be used by customers when comparing various products from alternative suppliers to estimate which product will best suit their specific requirements in terms of functionality and performance. For example, in the mainframe world certain proprietary sort products from independent software companies such as Syncsort compete with products from the major suppliers such as IBM for speed. Some benchmarks provide opportunities for producing an analysis comparing the relative speed of various compiled and interpreted languages; for example, The Computer Language Benchmarks Game compares the performance of implementations of typical programming problems in several programming languages. A small do-it-yourself benchmark illustrating these ideas is sketched below.
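The sketch below is a Python example written for this comparison (the list sizes, the repetition count, and the resulting timings are illustrative only and will vary by machine): it times a quadratic-time bubble sort against the language's built-in sort, an implementation of timsort, on random lists of increasing length.

import random
import time

def bubble_sort(items):
    """Quadratic-time comparison sort, included only for the timing comparison."""
    items = list(items)                  # work on a copy
    n = len(items)
    for i in range(n):
        for j in range(n - 1 - i):
            if items[j] > items[j + 1]:
                items[j], items[j + 1] = items[j + 1], items[j]
    return items

def best_of(func, data, repeats=3):
    """Smallest elapsed wall-clock time over several runs, to reduce noise."""
    times = []
    for _ in range(repeats):
        start = time.perf_counter()
        func(data)
        times.append(time.perf_counter() - start)
    return min(times)

for n in (1000, 2000, 4000):
    data = [random.random() for _ in range(n)]
    print(f"n={n:5d}  bubble_sort={best_of(bubble_sort, data):.4f}s  built-in sorted={best_of(sorted, data):.5f}s")
# Doubling n roughly quadruples the bubble sort time, as expected for an O(n^2)
# algorithm, while the built-in O(n log n) sort grows only slightly faster than linearly.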
Even creating "do it yourself" benchmarks can demonstrate the relative performance of different programming languages, using a variety of user-specified criteria. This is quite simple, as a "Nine language performance roundup" by Christopher W. Cowell-Shah demonstrates by example. Implementation concerns. Implementation issues can also have an effect on efficiency, such as the choice of programming language, or the way in which the algorithm is actually coded, or the choice of a compiler for a particular language, or the compilation options used, or even the operating system being used. In many cases a language implemented by an interpreter may be much slower than a language implemented by a compiler. See the articles on just-in-time compilation and interpreted languages. There are other factors which may affect time or space issues, but which may be outside of a programmer's control; these include data alignment, data granularity, cache locality, cache coherency, garbage collection, instruction-level parallelism, multi-threading (at either a hardware or software level), simultaneous multitasking, and subroutine calls. Some processors have capabilities for vector processing, which allow a single instruction to operate on multiple operands; it may or may not be easy for a programmer or compiler to use these capabilities. Algorithms designed for sequential processing may need to be completely redesigned to make use of parallel processing, or they could be easily reconfigured. As parallel and distributed computing grow in importance in the late 2010s, more investments are being made into efficient high-level APIs for parallel and distributed computing systems such as CUDA, TensorFlow, Hadoop, OpenMP and MPI. Another problem which can arise in programming is that processors compatible with the same instruction set (such as x86-64 or ARM) may implement an instruction in different ways, so that instructions which are relatively fast on some models may be relatively slow on other models. This often presents challenges to optimizing compilers, which must have extensive knowledge of the specific CPU and other hardware available on the compilation target to best optimize a program for performance. In the extreme case, a compiler may be forced to emulate instructions not supported on a compilation target platform, forcing it to generate code or link an external library call to produce a result that is otherwise incomputable on that platform, even if it is natively supported and more efficient in hardware on other platforms. This is often the case in embedded systems with respect to floating-point arithmetic, where small and low-power microcontrollers often lack hardware support for floating-point arithmetic and thus require computationally expensive software routines to produce floating point calculations. Measures of resource usage. Measures are normally expressed as a function of the size of the input formula_7. The two most common measures are the time the algorithm takes to run and the amount of working memory (space) it needs. For computers whose power is supplied by a battery (e.g. laptops and smartphones), or for very long/large calculations (e.g. supercomputers), other measures of interest include direct power consumption and indirect power consumption (such as the power needed for cooling). As of 2018, power consumption is growing as an important metric for computational tasks of all types and at all scales ranging from embedded Internet of things devices to system-on-chip devices to server farms. This trend is often referred to as green computing. Less common measures of computational efficiency may also be relevant in some cases, such as transmission size, external (disk) space, and total cost of ownership. Time. Theory.
Analysis of algorithms, typically using concepts like time complexity, can be used to get an estimate of the running time as a function of the size of the input data. The result is normally expressed using Big O notation. This is useful for comparing algorithms, especially when a large amount of data is to be processed. More detailed estimates are needed to compare algorithm performance when the amount of data is small, although this is likely to be of less importance. Parallel algorithms may be more difficult to analyze. Practice. A benchmark can be used to assess the performance of an algorithm in practice. Many programming languages have an available function which provides CPU time usage. For long-running algorithms the elapsed time could also be of interest. Results should generally be averaged over several tests. Run-based profiling can be very sensitive to hardware configuration and the possibility of other programs or tasks running at the same time in a multi-processing and multi-programming environment. This sort of test also depends heavily on the selection of a particular programming language, compiler, and compiler options, so algorithms being compared must all be implemented under the same conditions. Space. This section is concerned with use of memory resources (registers, cache, RAM, virtual memory, secondary memory) while the algorithm is being executed. As with the time analysis above, the algorithm is analyzed, typically using space complexity analysis, to get an estimate of the run-time memory needed as a function of the size of the input data. The result is normally expressed using Big O notation. There are up to four aspects of memory usage to consider: the memory needed to hold the code of the algorithm, the memory needed for the input data, the memory needed for any output data, and the memory needed as working space during the calculation. Early electronic computers, and early home computers, had relatively small amounts of working memory. For example, the 1949 Electronic Delay Storage Automatic Calculator (EDSAC) had a maximum working memory of 1024 17-bit words, while the 1980 Sinclair ZX80 came initially with 1024 8-bit bytes of working memory. In the late 2010s, it is typical for personal computers to have between 4 and 32 GB of RAM, an increase of millions of times as much memory. Caching and memory hierarchy. Modern computers can have relatively large amounts of memory (possibly gigabytes), so having to squeeze an algorithm into a confined amount of memory is not the kind of problem it used to be. However, the different types of memory and their relative access speeds can be significant: an algorithm whose memory needs will fit in cache memory will be much faster than an algorithm which fits in main memory, which in turn will be very much faster than an algorithm which has to resort to paging. Because of this, cache replacement policies are extremely important to high-performance computing, as are cache-aware programming and data alignment. To further complicate the issue, some systems have up to three levels of cache memory, with varying effective speeds. Different systems will have different amounts of these various types of memory, so the effect of algorithm memory needs can vary greatly from one system to another. In the early days of electronic computing, if an algorithm and its data would not fit in main memory then the algorithm could not be used. Nowadays the use of virtual memory appears to provide much more memory, but at the cost of performance. Much higher speed can be obtained if an algorithm and its data fit in cache memory; in this case minimizing space will also help minimize time.
This is called the principle of locality, and can be subdivided into locality of reference, spatial locality, and temporal locality. An algorithm which will not fit completely in cache memory but which exhibits locality of reference may perform reasonably well. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
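The locality effects described above can be glimpsed even from a high-level language. The following sketch uses Python with NumPy (the array size is arbitrary and the size of the slowdown is machine-dependent, so the timings are illustrative only); it performs the same arithmetic twice on a C-ordered array, once over contiguous rows and once over strided columns.

import time
import numpy as np

n = 4000
a = np.random.rand(n, n)      # C-ordered: each row is contiguous in memory

def total_by_rows(a):
    # Contiguous slices: good spatial locality.
    return sum(float(a[i, :].sum()) for i in range(a.shape[0]))

def total_by_cols(a):
    # Strided slices: each access jumps a whole row ahead, so poor locality.
    return sum(float(a[:, j].sum()) for j in range(a.shape[1]))

for label, func in (("rows (contiguous)", total_by_rows), ("columns (strided)", total_by_cols)):
    start = time.perf_counter()
    result = func(a)
    print(f"{label}: {time.perf_counter() - start:.3f}s, total={result:.1f}")
# Both traversals compute the same sum; the column-wise version is typically
# slower because its memory access pattern defeats the processor caches.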
[ { "math_id": 0, "text": "O(n^2)" }, { "math_id": 1, "text": "O(1)" }, { "math_id": 2, "text": "O(n\\log n)" }, { "math_id": 3, "text": "O(n)" }, { "math_id": 4, "text": "n" }, { "math_id": 5, "text": "f(n) = O\\bigl( g(n)\\bigr)" }, { "math_id": 6, "text": "g(n)" }, { "math_id": 7, "text": "\\scriptstyle {n}" } ]
https://en.wikipedia.org/wiki?curid=145128
14513019
Comparison of programming languages (basic instructions)
This article compares a large number of programming languages by tabulating their data types, their expression, statement, and declaration syntax, and some common operating-system interfaces. Conventions of this article. Generally, "var", var, or var is how variable names or other non-literal values to be interpreted by the reader are represented. The rest is literal code. Guillemets (« and ») enclose optional sections. ⇥ indicates a necessary (whitespace) indentation. Note that the tables are not sorted lexicographically ascending by programming language name by default, and that some languages have entries in some tables but not in others. Functions. See "reflective programming" for calling and declaring functions by strings. Type conversions. Where "string" is a signed decimal number: Execution of commands. &lt;templatestyles src="Citation/styles.css"/&gt;^a Fortran 2008 or newer. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
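The comparison tables themselves are not reproduced here, but the kinds of operations they tabulate can be illustrated with one of the covered languages. The following sketch uses Python purely as an example (other languages in the tables have analogous constructs): it converts a signed decimal string to numeric types and back, and executes an operating-system command.

import subprocess
import sys

# Type conversions, where "string" is a signed decimal number.
s = "-42"
i = int(s)          # string  -> integer
x = float(s)        # string  -> floating point
back = str(i)       # integer -> string
print(i, x, back)   # -42 -42.0 -42

# Execution of a command: run the current interpreter and ask for its version.
result = subprocess.run([sys.executable, "--version"], capture_output=True, text=True)
print(result.returncode, (result.stdout or result.stderr).strip())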
[ { "math_id": 0, "text": "\\forall" }, { "math_id": 1, "text": "\\exists" } ]
https://en.wikipedia.org/wiki?curid=14513019
145132
Dedekind domain
Ring with unique factorization for ideals (mathematics) In abstract algebra, a Dedekind domain or Dedekind ring, named after Richard Dedekind, is an integral domain in which every nonzero proper ideal factors into a product of prime ideals. It can be shown that such a factorization is then necessarily unique up to the order of the factors. There are at least three other characterizations of Dedekind domains that are sometimes taken as the definition: see below. A field is a commutative ring in which there are no nontrivial proper ideals, so that any field is a Dedekind domain, however in a rather vacuous way. Some authors add the requirement that a Dedekind domain not be a field. Many more authors state theorems for Dedekind domains with the implicit proviso that they may require trivial modifications for the case of fields. An immediate consequence of the definition is that every principal ideal domain (PID) is a Dedekind domain. In fact a Dedekind domain is a unique factorization domain (UFD) if and only if it is a PID. The prehistory of Dedekind domains. In the 19th century it became a common technique to gain insight into integer solutions of polynomial equations using rings of algebraic numbers of higher degree. For instance, fix a positive integer formula_0. In the attempt to determine which integers are represented by the quadratic form formula_1, it is natural to factor the quadratic form into formula_2, the factorization taking place in the ring of integers of the quadratic field formula_3. Similarly, for a positive integer formula_4 the polynomial formula_5 (which is relevant for solving the Fermat equation formula_6) can be factored over the ring formula_7, where formula_8 is a primitive "n"-th root of unity. For a few small values of formula_0 and formula_4 these rings of algebraic integers are PIDs, and this can be seen as an explanation of the classical successes of Fermat (formula_9) and Euler (formula_10). By this time a procedure for determining whether the ring of all algebraic integers of a given quadratic field formula_11 is a PID was well known to the quadratic form theorists. Especially, Gauss had looked at the case of imaginary quadratic fields: he found exactly nine values of formula_12 for which the ring of integers is a PID and conjectured that there were no further values. (Gauss's conjecture was proven more than one hundred years later by Kurt Heegner, Alan Baker and Harold Stark.) However, this was understood (only) in the language of equivalence classes of quadratic forms, so that in particular the analogy between quadratic forms and the Fermat equation seems not to have been perceived. In 1847 Gabriel Lamé announced a solution of Fermat's Last Theorem for all formula_13; that is, that the Fermat equation has no solutions in nonzero integers, but it turned out that his solution hinged on the assumption that the cyclotomic ring formula_7 is a UFD. Ernst Kummer had shown three years before that this was not the case already for formula_14 (the full, finite list of values for which formula_7 is a UFD is now known). At the same time, Kummer developed powerful new methods to prove Fermat's Last Theorem at least for a large class of prime exponents formula_4 using what we now recognize as the fact that the ring formula_7 is a Dedekind domain. In fact Kummer worked not with ideals but with "ideal numbers", and the modern definition of an ideal was given by Dedekind. 
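The failure of unique factorization that Kummer confronted, and the way ideals repair it, can be seen in a standard worked example added here for illustration: taking formula_0 = 5 above, the ring \(\mathbb{Z}[\sqrt{-5}]\) (the ring of integers of \(\mathbb{Q}(\sqrt{-5})\)) is a Dedekind domain but not a UFD. In it
\[ 6 = 2 \cdot 3 = (1+\sqrt{-5})(1-\sqrt{-5}), \]
two essentially different factorizations of the same element into irreducibles. Passing to ideals and setting
\[ \mathfrak{p} = (2,\ 1+\sqrt{-5}), \qquad \mathfrak{q} = (3,\ 1+\sqrt{-5}), \qquad \overline{\mathfrak{q}} = (3,\ 1-\sqrt{-5}), \]
one checks that
\[ (2) = \mathfrak{p}^2, \qquad (3) = \mathfrak{q}\,\overline{\mathfrak{q}}, \qquad (1+\sqrt{-5}) = \mathfrak{p}\,\mathfrak{q}, \qquad (1-\sqrt{-5}) = \mathfrak{p}\,\overline{\mathfrak{q}}, \]
so both element factorizations of 6 refine to the single ideal factorization \((6) = \mathfrak{p}^2\,\mathfrak{q}\,\overline{\mathfrak{q}}\). None of these prime ideals is principal, which is precisely the phenomenon that unique factorization of ideals in a Dedekind domain accounts for.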
By the 20th century, algebraists and number theorists had come to realize that the condition of being a PID is rather delicate, whereas the condition of being a Dedekind domain is quite robust. For instance the ring of ordinary integers is a PID, but as seen above the ring formula_15 of algebraic integers in a number field formula_16 need not be a PID. In fact, although Gauss also conjectured that there are infinitely many primes formula_17 such that the ring of integers of formula_18 is a PID, it is not yet known whether there are infinitely many number fields formula_16 (of arbitrary degree) such that formula_15 is a PID. On the other hand, the ring of integers in a number field is always a Dedekind domain. Another illustration of the delicate/robust dichotomy is the fact that being a Dedekind domain is, among Noetherian domains, a local property: a Noetherian domain formula_19 is Dedekind iff for every maximal ideal formula_20 of formula_19 the localization formula_21 is a Dedekind ring. But a local domain is a Dedekind ring iff it is a PID iff it is a discrete valuation ring (DVR), so the same local characterization cannot hold for PIDs: rather, one may say that the concept of a Dedekind ring is the globalization of that of a DVR. Alternative definitions. For an integral domain formula_19 that is not a field, all of the following conditions are equivalent: (DD1) Every nonzero proper ideal factors into primes. (DD2) formula_19 is Noetherian, and the localization at each maximal ideal is a discrete valuation ring. (DD3) Every nonzero fractional ideal of formula_19 is invertible. (DD4) formula_19 is an integrally closed, Noetherian domain with Krull dimension one (that is, every nonzero prime ideal is maximal). (DD5) For any two ideals formula_22 and formula_23 in formula_19, formula_22 is contained in formula_23 if and only if formula_23 divides formula_22 as ideals. That is, there exists an ideal formula_24 such that formula_25. A commutative ring (not necessarily a domain) with unity satisfying this condition is called a containment-division ring (CDR). Thus a Dedekind domain is a domain that either is a field, or satisfies any one, and hence all five, of (DD1) through (DD5). Which of these conditions one takes as the definition is therefore merely a matter of taste. In practice, it is often easiest to verify (DD4). A Krull domain is a higher-dimensional analog of a Dedekind domain: a Dedekind domain that is not a field is a Krull domain of dimension 1. This notion can be used to study the various characterizations of a Dedekind domain. In fact, this is the definition of a Dedekind domain used in Bourbaki's "Commutative algebra". A Dedekind domain can also be characterized in terms of homological algebra: an integral domain is a Dedekind domain if and only if it is a hereditary ring; that is, every submodule of a projective module over it is projective. Similarly, an integral domain is a Dedekind domain if and only if every divisible module over it is injective. Some examples of Dedekind domains. All principal ideal domains and therefore all discrete valuation rings are Dedekind domains. The ring formula_26 of algebraic integers in a number field "K" is Noetherian, integrally closed, and of dimension one: to see the last property, observe that for any nonzero prime ideal "I" of "R", "R"/"I" is a finite set, and recall that a finite integral domain is a field; so by (DD4) "R" is a Dedekind domain. 
As above, this includes all the examples considered by Kummer and Dedekind and was the motivating case for the general definition, and these remain among the most studied examples. The other class of Dedekind rings that is arguably of equal importance comes from geometry: let "C" be a nonsingular geometrically integral affine algebraic curve over a field "k". Then the coordinate ring "k"["C"] of regular functions on "C" is a Dedekind domain. This is largely clear simply from translating geometric terms into algebra: the coordinate ring of any affine variety is, by definition, a finitely generated "k"-algebra, hence Noetherian; moreover "curve" means "dimension one" and "nonsingular" implies (and, in dimension one, is equivalent to) "normal", which by definition means "integrally closed". Both of these constructions can be viewed as special cases of the following basic result: Theorem: Let "R" be a Dedekind domain with fraction field "K". Let "L" be a finite degree field extension of "K" and denote by "S" the integral closure of "R" in "L". Then "S" is itself a Dedekind domain. Applying this theorem when "R" is itself a PID gives us a way of building Dedekind domains out of PIDs. Taking "R" = Z, this construction says precisely that rings of integers of number fields are Dedekind domains. Taking "R" = "k"["t"], one obtains the above case of nonsingular affine curves as branched coverings of the affine line. Zariski and Samuel were sufficiently taken with this construction to ask whether every Dedekind domain arises from it; that is, by starting with a PID and taking the integral closure in a finite degree field extension. A surprisingly simple negative answer was given by L. Claborn. If the situation is as above but the extension "L" of "K" is algebraic of infinite degree, then it is still possible for the integral closure "S" of "R" in "L" to be a Dedekind domain, but it is not guaranteed. For example, take again "R" = Z, "K" = Q and now take "L" to be the field formula_27 of all algebraic numbers. The integral closure is nothing else than the ring formula_28 of all algebraic integers. Since the square root of an algebraic integer is again an algebraic integer, it is not possible to factor any nonzero nonunit algebraic integer into a finite product of irreducible elements, which implies that formula_28 is not even Noetherian! In general, the integral closure of a Dedekind domain in an infinite algebraic extension is a Prüfer domain; it turns out that the ring of algebraic integers is slightly more special than this: it is a Bézout domain. Fractional ideals and the class group. Let "R" be an integral domain with fraction field "K". A fractional ideal is a nonzero "R"-submodule "I" of "K" for which there exists a nonzero "x" in "K" such that formula_29 Given two fractional ideals "I" and "J", one defines their product "IJ" as the set of all finite sums formula_30: the product "IJ" is again a fractional ideal. The set Frac("R") of all fractional ideals endowed with the above product is a commutative semigroup and in fact a monoid: the identity element is the fractional ideal "R". For any fractional ideal "I", one may define the fractional ideal formula_31 One then tautologically has formula_32. In fact one has equality if and only if "I", as an element of the monoid of Frac("R"), is invertible. In other words, if "I" has any inverse, then the inverse must be formula_33. A principal fractional ideal is one of the form formula_34 for some nonzero "x" in "K". 
Note that each principal fractional ideal is invertible, the inverse of formula_34 being simply formula_35. We denote the subgroup of principal fractional ideals by Prin("R"). A domain "R" is a PID if and only if every fractional ideal is principal. In this case, we have Frac("R") = Prin("R") = formula_36, since two principal fractional ideals formula_34 and formula_37 are equal iff formula_38 is a unit in "R". For a general domain "R", it is meaningful to take the quotient of the monoid Frac("R") of all fractional ideals by the submonoid Prin("R") of principal fractional ideals. However this quotient itself is generally only a monoid. In fact it is easy to see that the class of a fractional ideal I in Frac("R")/Prin("R") is invertible if and only if I itself is invertible. Now we can appreciate (DD3): in a Dedekind domain (and only in a Dedekind domain) every fractional ideal is invertible. Thus these are precisely the class of domains for which Frac("R")/Prin("R") forms a group, the ideal class group Cl("R") of "R". This group is trivial if and only if "R" is a PID, so can be viewed as quantifying the obstruction to a general Dedekind domain being a PID. We note that for an arbitrary domain one may define the Picard group Pic("R") as the group of invertible fractional ideals Inv("R") modulo the subgroup of principal fractional ideals. For a Dedekind domain this is of course the same as the ideal class group. However, on a more general class of domains, including Noetherian domains and Krull domains, the ideal class group is constructed in a different way, and there is a canonical homomorphism Pic("R") → Cl("R") which is however generally neither injective nor surjective. This is an affine analogue of the distinction between Cartier divisors and Weil divisors on a singular algebraic variety. A remarkable theorem of L. Claborn (Claborn 1966) asserts that for any abelian group "G" whatsoever, there exists a Dedekind domain "R" whose ideal class group is isomorphic to "G". Later, C.R. Leedham-Green showed that such an "R" may be constructed as the integral closure of a PID in a quadratic field extension (Leedham-Green 1972). In 1976, M. Rosen showed how to realize any countable abelian group as the class group of a Dedekind domain that is a subring of the rational function field of an elliptic curve, and conjectured that such an "elliptic" construction should be possible for a general abelian group (Rosen 1976). Rosen's conjecture was proven in 2008 by P.L. Clark (Clark 2009). In contrast, one of the basic theorems in algebraic number theory asserts that the class group of the ring of integers of a number field is finite; its cardinality is called the class number and it is an important and rather mysterious invariant, notwithstanding the hard work of many leading mathematicians from Gauss to the present day. Finitely generated modules over a Dedekind domain. In view of the well known and exceedingly useful structure theorem for finitely generated modules over a principal ideal domain (PID), it is natural to ask for a corresponding theory for finitely generated modules over a Dedekind domain. Let us briefly recall the structure theory in the case of a finitely generated module formula_20 over a PID formula_19. We define the torsion submodule formula_39 to be the set of elements formula_0 of formula_20 such that formula_40 for some nonzero formula_41 in formula_19. 
Then: (M1) formula_39 can be decomposed into a direct sum of cyclic torsion modules, each of the form formula_42 for some nonzero ideal formula_22 of formula_19. By the Chinese Remainder Theorem, each formula_42 can further be decomposed into a direct sum of submodules of the form formula_43, where formula_44 is a power of a prime ideal. This decomposition need not be unique, but any two decompositions formula_45 differ only in the order of the factors. (M2) The torsion submodule is a direct summand. That is, there exists a complementary submodule formula_46 of formula_20 such that formula_47. (M3PID) formula_46 isomorphic to formula_48 for a uniquely determined non-negative integer formula_4. In particular, formula_46 is a finitely generated free module. Now let formula_20 be a finitely generated module over an arbitrary Dedekind domain formula_19. Then (M1) and (M2) hold verbatim. However, it follows from (M3PID) that a finitely generated torsionfree module formula_46 over a PID is free. In particular, it asserts that all fractional ideals are principal, a statement that is false whenever formula_19 is not a PID. In other words, the nontriviality of the class group formula_49 causes (M3PID) to fail. Remarkably, the additional structure in torsionfree finitely generated modules over an arbitrary Dedekind domain is precisely controlled by the class group, as we now explain. Over an arbitrary Dedekind domain one has (M3DD) formula_46 is isomorphic to a direct sum of rank one projective modules: formula_50. Moreover, for any rank one projective modules formula_51, one has formula_52 if and only if formula_53 and formula_54 Rank one projective modules can be identified with fractional ideals, and the last condition can be rephrased as formula_55 Thus a finitely generated torsionfree module of rank formula_56 can be expressed as formula_57, where formula_22 is a rank one projective module. The Steinitz class for formula_46 over formula_19 is the class formula_58 of formula_22 in formula_49: it is uniquely determined. A consequence of this is: Theorem: Let formula_19 be a Dedekind domain. Then formula_59, where formula_60 is the Grothendieck group of the commutative monoid of finitely generated projective formula_19 modules. These results were established by Ernst Steinitz in 1912. An additional consequence of this structure, which is not implicit in the preceding theorem, is that if the two projective modules over a Dedekind domain have the same class in the Grothendieck group, then they are in fact abstractly isomorphic. Locally Dedekind rings. There exist integral domains formula_19 that are locally but not globally Dedekind: the localization of formula_19 at each maximal ideal is a Dedekind ring (equivalently, a DVR) but formula_19 itself is not Dedekind. As mentioned above, such a ring cannot be Noetherian. It seems that the first examples of such rings were constructed by N. Nakano in 1953. In the literature such rings are sometimes called "proper almost Dedekind rings". Notes. &lt;templatestyles src="Reflist/styles.css" /&gt; References. &lt;templatestyles src="Refbegin/styles.css" /&gt;
[ { "math_id": 0, "text": "m" }, { "math_id": 1, "text": "x^2+my^2" }, { "math_id": 2, "text": "(x+\\sqrt{-m}y)(x-\\sqrt{-m}y)" }, { "math_id": 3, "text": "\\mathbb{Q}(\\sqrt{-m})" }, { "math_id": 4, "text": "n" }, { "math_id": 5, "text": "z^n-y^n" }, { "math_id": 6, "text": "x^n+y^n = z^n" }, { "math_id": 7, "text": "\\mathbb{Z}[\\zeta_n]" }, { "math_id": 8, "text": "\\zeta_n" }, { "math_id": 9, "text": "m = 1, n = 4" }, { "math_id": 10, "text": "m = 2,3, n = 3" }, { "math_id": 11, "text": "\\mathbb{Q}(\\sqrt{D})" }, { "math_id": 12, "text": "D < 0" }, { "math_id": 13, "text": "n > 2" }, { "math_id": 14, "text": "n = 23" }, { "math_id": 15, "text": "\\mathcal{O}_K" }, { "math_id": 16, "text": "K" }, { "math_id": 17, "text": "p" }, { "math_id": 18, "text": "\\mathbb{Q}(\\sqrt{p})" }, { "math_id": 19, "text": "R" }, { "math_id": 20, "text": "M" }, { "math_id": 21, "text": "R_M" }, { "math_id": 22, "text": "I" }, { "math_id": 23, "text": "J" }, { "math_id": 24, "text": "H" }, { "math_id": 25, "text": "I=JH" }, { "math_id": 26, "text": "R = \\mathcal{O}_K" }, { "math_id": 27, "text": "\\overline{\\textbf{Q}}" }, { "math_id": 28, "text": "\\overline{\\textbf{Z}}" }, { "math_id": 29, "text": "xI \\subset R." }, { "math_id": 30, "text": "\\sum_n i_n j_n, \\, i_n \\in I, \\, j_n \\in J" }, { "math_id": 31, "text": "I^* = (R:I) = \\{x \\in K \\mid xI \\subset R\\}." }, { "math_id": 32, "text": "I^*I \\subset R" }, { "math_id": 33, "text": "I^*" }, { "math_id": 34, "text": "xR" }, { "math_id": 35, "text": "\\frac{1}{x}R" }, { "math_id": 36, "text": "K^{\\times}/R^{\\times}" }, { "math_id": 37, "text": "yR" }, { "math_id": 38, "text": "xy^{-1}" }, { "math_id": 39, "text": "T" }, { "math_id": 40, "text": "rm = 0" }, { "math_id": 41, "text": "r" }, { "math_id": 42, "text": "R/I" }, { "math_id": 43, "text": "R/P^i" }, { "math_id": 44, "text": "P^i" }, { "math_id": 45, "text": "T \\cong R/P_1^{a_1} \\oplus \\cdots \\oplus R/P_r^{a_r} \\cong R/Q_1^{b_1} \\oplus \\cdots \\oplus R/Q_s^{b_s} " }, { "math_id": 46, "text": "P" }, { "math_id": 47, "text": "M = T \\oplus P" }, { "math_id": 48, "text": "R^n" }, { "math_id": 49, "text": "Cl(R)" }, { "math_id": 50, "text": "P \\cong I_1 \\oplus \\cdots \\oplus I_r" }, { "math_id": 51, "text": "I_1,\\ldots,I_r,J_1,\\ldots,J_s" }, { "math_id": 52, "text": " I_1 \\oplus \\cdots \\oplus I_r \\cong J_1 \\oplus \\cdots \\oplus J_s" }, { "math_id": 53, "text": "r = s" }, { "math_id": 54, "text": "I_1 \\otimes \\cdots \\otimes I_r \\cong J_1 \\otimes \\cdots \\otimes J_s.\\," }, { "math_id": 55, "text": " [I_1 \\cdots I_r] = [J_1 \\cdots J_s] \\in Cl(R). " }, { "math_id": 56, "text": "n > 0" }, { "math_id": 57, "text": "R^{n-1} \\oplus I" }, { "math_id": 58, "text": "[I]" }, { "math_id": 59, "text": "K_0(R) \\cong \\mathbb{Z} \\oplus Cl(R)" }, { "math_id": 60, "text": "K_0(R)" } ]
https://en.wikipedia.org/wiki?curid=145132
1451352
Block reflector
"A block reflector is an orthogonal, symmetric matrix that reverses a subspace whose dimension may be greater than one." It is built out of many elementary reflectors. It is also referred to as a triangular factor, and is a triangular matrix and they are used in the Householder transformation. A reflector formula_0 belonging to formula_1 can be written in the form : formula_2 where formula_3 is the identity matrix for formula_1, formula_4 is a scalar and formula_5 belongs to formula_6 . LAPACK routines. Here are some of the LAPACK routines that apply to block reflectors References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": " Q " }, { "math_id": 1, "text": "\\mathcal M_n(\\R) " }, { "math_id": 2, "text": " Q = I -auu^T " }, { "math_id": 3, "text": "I" }, { "math_id": 4, "text": "a" }, { "math_id": 5, "text": "u" }, { "math_id": 6, "text": "\\R^n" } ]
https://en.wikipedia.org/wiki?curid=1451352
1451419
Upside potential ratio
The upside-potential ratio is a measure of a return of an investment asset relative to the minimal acceptable return. The measurement allows a firm or individual to choose investments which have had relatively good upside performance, per unit of downside risk. formula_0 where the returns formula_1 have been put into increasing order. Here formula_2 is the probability of the return formula_1 and formula_3 which occurs at formula_4 is the minimal acceptable return. In the secondary formula formula_5 and formula_6. The upside-potential ratio may also be expressed as a ratio of partial moments since formula_7 is the first upper moment and formula_8 is the second lower partial moment. The measure was developed by Frank A. Sortino. Discussion. The upside-potential ratio is a measure of risk-adjusted returns. All such measures are dependent on some measure of risk. In practice, standard deviation is often used, perhaps because it is mathematically easy to manipulate. However, standard deviation treats deviations above the mean (which are desirable, from the investor's perspective) exactly the same as it treats deviations below the mean (which are less desirable, at the very least). In practice, rational investors have a preference for good returns (e.g., deviations above the mean) and an aversion to bad returns (e.g., deviations below the mean). Sortino further found that investors are (or, at least, should be) averse not to deviations below the mean, but to deviations below some "minimal acceptable return" (MAR), which is meaningful to them specifically. Thus, this measure uses deviations above the MAR in the numerator, rewarding performance above the MAR. In the denominator, it has deviations below the MAR, thus penalizing performance below the MAR. Thus, by rewarding desirable results in the numerator and penalizing undesirable results in the denominator, this measure attempts to serve as a pragmatic measure of the goodness of an investment portfolio's returns in a sense that is not just mathematically simple (a primary reason to use standard deviation as a risk measure), but one that considers the realities of investor psychology and behavior. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
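As a small numerical sketch (the sample returns and the minimal acceptable return below are invented for illustration, and each observed return is given equal probability), the ratio can be computed directly from the two partial moments in the formula above.
import math

returns = [0.08, -0.03, 0.12, 0.01, -0.07, 0.05]   # hypothetical period returns
r_min = 0.02                                        # hypothetical minimal acceptable return

# First upper partial moment: average gain above the minimal acceptable return.
upside = sum(max(r - r_min, 0.0) for r in returns) / len(returns)
# Second lower partial moment: average squared shortfall below the minimal acceptable return.
downside = sum(min(r - r_min, 0.0) ** 2 for r in returns) / len(returns)

upside_potential_ratio = upside / math.sqrt(downside)
print(upside_potential_ratio)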
[ { "math_id": 0, "text": "U = {{{\\sum_\\min^{+\\infty} {(R_r - R_\\min}) P_r}} \\over \\sqrt{{{\\sum_{-\\infty}^\\min {(R_r - R_\\min})^2 P_r}}}} = \\frac{\\mathbb{E}[(R_r - R_\\min)_+]}{\\sqrt{\\mathbb{E}[(R_r - R_\\min)_-^2]}}, " }, { "math_id": 1, "text": "R_r" }, { "math_id": 2, "text": "P_r" }, { "math_id": 3, "text": "R_\\min" }, { "math_id": 4, "text": "r=\\min" }, { "math_id": 5, "text": "(X)_+ = \\begin{cases}X &\\text{if }X \\geq 0\\\\ 0 &\\text{else}\\end{cases}" }, { "math_id": 6, "text": "(X)_- = (-X)_+" }, { "math_id": 7, "text": "\\mathbb{E}[(R_r - R_\\min)_+]" }, { "math_id": 8, "text": "\\mathbb{E}[(R_r - R_\\min)_-^2]" } ]
https://en.wikipedia.org/wiki?curid=1451419
1451427
Nick Katz
American mathematician Nicholas Michael Katz (born December 7, 1943) is an American mathematician, working in arithmetic geometry, particularly on "p"-adic methods, monodromy and moduli problems, and number theory. He is currently a professor of Mathematics at Princeton University and an editor of the journal "Annals of Mathematics". Life and work. Katz graduated from Johns Hopkins University (BA 1964) and from Princeton University, where in 1965 he received his master's degree and in 1966 he received his doctorate under the supervision of Bernard Dwork with the thesis "On the Differential Equations Satisfied by Period Matrices". After that, at Princeton, he was an instructor, an assistant professor in 1968, associate professor in 1971 and professor in 1974. From 2002 to 2005 he was the chairman of the faculty there. He was also a visiting scholar at the University of Minnesota, the University of Kyoto, Paris VI, Orsay Faculty of Sciences, the Institute for Advanced Study and the IHES. While in France, he adapted methods of scheme theory and category theory to the theory of modular forms. Subsequently, he has applied geometric methods to various exponential sums. He was a NATO Postdoctoral Fellow from 1968 to 1969, a Sloan Fellow from 1971 to 1972, and a Guggenheim Fellow from 1975 to 1976 and from 1987 to 1988. In 1970 he was an invited speaker at the International Congress of Mathematicians in Nice ("The regularity theorem in algebraic geometry") and in 1978 in Helsinki ("p-adic L functions, Serre-Tate local moduli and ratios of solutions of differential equations"). Since 2003 he has been a member of the American Academy of Arts and Sciences, and since 2004 of the National Academy of Sciences. In 2003 he was awarded, together with Peter Sarnak, the Levi L. Conant Prize of the American Mathematical Society (AMS) for the essay "Zeroes of Zeta Functions and Symmetry" in the "Bulletin of the American Mathematical Society". Since 2004 he has been an editor of the "Annals of Mathematics". In 2023 he received from the AMS the Leroy P. Steele Prize for Lifetime Achievement. He played a significant role as a sounding board for Andrew Wiles when Wiles was developing his proof of Fermat's Last Theorem in secret. Mathematician and cryptographer Neal Koblitz was one of Katz's students. Katz studied, with Sarnak among others, the connection between the eigenvalue distribution of large random matrices from the classical groups and the distribution of the spacings between the zeros of various "L" and zeta functions in algebraic geometry. He also studied trigonometric sums (Gauss sums) with algebro-geometric methods. He introduced the Katz–Lang finiteness theorem. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "L" } ]
https://en.wikipedia.org/wiki?curid=1451427
14514547
Range tree
In computer science, a range tree is an ordered tree data structure to hold a list of points. It allows all points within a given range to be reported efficiently, and is typically used in two or higher dimensions. Range trees were introduced by Jon Louis Bentley in 1979. Similar data structures were discovered independently by Lueker, Lee and Wong, and Willard. The range tree is an alternative to the "k"-d tree. Compared to "k"-d trees, range trees offer faster query times of (in Big O notation) formula_1 but worse storage of formula_2, where "n" is the number of points stored in the tree, "d" is the dimension of each point and "k" is the number of points reported by a given query. In 1990, Bernard Chazelle improved this to query time formula_3 and space complexity formula_4. Data structure. A range tree on a set of 1-dimensional points is a balanced binary search tree on those points. The points stored in the tree are stored in the leaves of the tree; each internal node stores the largest value of its left subtree. A range tree on a set of points in "d"-dimensions is a recursively defined multi-level binary search tree. Each level of the data structure is a binary search tree on one of the "d"-dimensions. The first level is a binary search tree on the first of the "d"-coordinates. Each vertex "v" of this tree contains an associated structure that is a ("d"−1)-dimensional range tree on the last ("d"−1)-coordinates of the points stored in the subtree of "v". Operations. Construction. A 1-dimensional range tree on a set of "n" points is a binary search tree, which can be constructed in formula_5 time. Range trees in higher dimensions are constructed recursively by constructing a balanced binary search tree on the first coordinate of the points, and then, for each vertex "v" in this tree, constructing a ("d"−1)-dimensional range tree on the points contained in the subtree of "v". Constructing a range tree this way would require formula_6 time. This construction time can be improved for 2-dimensional range trees to formula_5. Let "S" be a set of "n" 2-dimensional points. If "S" contains only one point, return a leaf containing that point. Otherwise, construct the associated structure of "S", a 1-dimensional range tree on the "y"-coordinates of the points in "S". Let "x"m be the median "x"-coordinate of the points. Let "S"L be the set of points with "x"-coordinate less than or equal to "x"m and let "S"R be the set of points with "x"-coordinate greater than "x"m. Recursively construct "v"L, a 2-dimensional range tree on "S"L, and "v"R, a 2-dimensional range tree on "S"R. Create a vertex "v" with left-child "v"L and right-child "v"R. If we sort the points by their "y"-coordinates at the start of the algorithm, and maintain this ordering when splitting the points by their "x"-coordinate, we can construct the associated structures of each subtree in linear time. This reduces the time to construct a 2-dimensional range tree to formula_5, and also reduces the time to construct a "d"-dimensional range tree to formula_0. Range queries. A range query on a range tree reports the set of points that lie inside a given interval. To report the points that lie in the interval ["x"1, "x"2], we start by searching for "x"1 and "x"2. At some vertex in the tree, the search paths to "x"1 and "x"2 will diverge. Let "v"split be the last vertex that these two search paths have in common. 
For every vertex "v" in the search path from "v"split to "x"1, if the value stored at "v" is greater than "x"1, report every point in the right-subtree of "v". If "v" is a leaf, report the value stored at "v" if it is inside the query interval. Similarly, report all of the points stored in the left-subtrees of the vertices with values less than "x"2 along the search path from "v"split to "x"2, and report the leaf of this path if it lies within the query interval. Since the range tree is a balanced binary tree, the search paths to "x"1 and "x"2 have length formula_7. Reporting all of the points stored in the subtree of a vertex can be done in linear time using any tree traversal algorithm. It follows that the time to perform a range query is formula_8, where "k" is the number of points in the query interval. Range queries in "d"-dimensions are similar. Instead of reporting all of the points stored in the subtrees of the search paths, perform a ("d"−1)-dimensional range query on the associated structure of each subtree. Eventually, a 1-dimensional range query will be performed and the correct points will be reported. Since a "d"-dimensional query consists of formula_7 ("d"−1)-dimensional range queries, it follows that the time required to perform a "d"-dimensional range query is formula_9, where "k" is the number of points in the query interval. This can be reduced to formula_10 using a variant of fractional cascading. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
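The 1-dimensional case can be sketched in a few lines of Python. This is an illustrative toy (the class and helper names are invented, and the tree is built by splitting a sorted list rather than by rebalancing), but it follows the scheme described above: the points are stored in the leaves, each internal node stores the largest value of its left subtree, and a query descends into a child only when that child can still contain points of the interval, which is a compact way of expressing the "v"split-based search described above.
class Node:
    def __init__(self, key, left=None, right=None, point=None):
        self.key = key      # largest value in the left subtree (internal node) or the point itself (leaf)
        self.left = left
        self.right = right
        self.point = point  # set only for leaves

def build(points):
    # points must be a non-empty list sorted in increasing order
    if len(points) == 1:
        return Node(points[0], point=points[0])
    mid = (len(points) + 1) // 2
    return Node(points[mid - 1], build(points[:mid]), build(points[mid:]))

def query(node, lo, hi, out):
    # report every stored point p with lo <= p <= hi
    if node.point is not None:          # leaf
        if lo <= node.point <= hi:
            out.append(node.point)
        return
    if lo <= node.key:                  # left subtree may still contain points >= lo
        query(node.left, lo, hi, out)
    if hi > node.key:                   # right subtree may still contain points <= hi
        query(node.right, lo, hi, out)

root = build(sorted([3, 10, 19, 23, 30, 37, 59, 62, 70, 80]))
result = []
query(root, 15, 65, result)
print(result)   # [19, 23, 30, 37, 59, 62]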
[ { "math_id": 0, "text": "O(n \\log^{d - 1} n)" }, { "math_id": 1, "text": "O(\\log^dn+k)" }, { "math_id": 2, "text": "O(n\\log^{d-1} n)" }, { "math_id": 3, "text": "O(\\log^{d-1} n + k)" }, { "math_id": 4, "text": "O\\left(n\\left(\\frac{\\log n}{\\log\\log n}\\right)^{d-1}\\right)" }, { "math_id": 5, "text": "O(n \\log n)" }, { "math_id": 6, "text": "O(n \\log ^d n)" }, { "math_id": 7, "text": "O(\\log n)" }, { "math_id": 8, "text": "O(\\log n + k)" }, { "math_id": 9, "text": "O(\\log^{d} n + k)" }, { "math_id": 10, "text": "O(\\log^{d - 1} n + k)" } ]
https://en.wikipedia.org/wiki?curid=14514547
1451476
Band matrix
Matrix with non-zero elements only in a diagonal band In mathematics, particularly matrix theory, a band matrix or banded matrix is a sparse matrix whose non-zero entries are confined to a diagonal "band", comprising the main diagonal and zero or more diagonals on either side. Band matrix. Bandwidth. Formally, consider an "n"×"n" matrix "A"=("a""i,j" ). If all matrix elements are zero outside a diagonally bordered band whose range is determined by constants "k"1 and "k"2: formula_0 then the quantities "k"1 and "k"2 are called the &lt;templatestyles src="Template:Visible anchor/styles.css" /&gt;lower bandwidth and &lt;templatestyles src="Template:Visible anchor/styles.css" /&gt;upper bandwidth, respectively. The &lt;templatestyles src="Template:Visible anchor/styles.css" /&gt;bandwidth of the matrix is the maximum of "k"1 and "k"2; in other words, it is the number "k" such that formula_1 if formula_2. Applications. In numerical analysis, matrices from finite element or finite difference problems are often banded. Such matrices can be viewed as descriptions of the coupling between the problem variables; the banded property corresponds to the fact that variables are not coupled over arbitrarily large distances. Such matrices can be further divided – for instance, banded matrices exist where every element in the band is nonzero. Problems in higher dimensions also lead to banded matrices, in which case the band itself also tends to be sparse. For instance, a partial differential equation on a square domain (using central differences) will yield a matrix with a bandwidth equal to the square root of the matrix dimension, but inside the band only 5 diagonals are nonzero. Unfortunately, applying Gaussian elimination (or equivalently an LU decomposition) to such a matrix results in the band being filled in by many non-zero elements. Band storage. Band matrices are usually stored by storing the diagonals in the band; the rest is implicitly zero. For example, a tridiagonal matrix has bandwidth 1. The 6-by-6 matrix formula_3 is stored as the 6-by-3 matrix formula_4 A further saving is possible when the matrix is symmetric. For example, consider a symmetric 6-by-6 matrix with an upper bandwidth of 2: formula_5 This matrix is stored as the 6-by-3 matrix: formula_6 Band form of sparse matrices. From a computational point of view, working with band matrices is always preferential to working with similarly dimensioned square matrices. A band matrix can be likened in complexity to a rectangular matrix whose row dimension is equal to the bandwidth of the band matrix. Thus the work involved in performing operations such as multiplication falls significantly, often leading to huge savings in terms of calculation time and complexity. As sparse matrices lend themselves to more efficient computation than dense matrices, as well as in more efficient utilization of computer storage, there has been much research focused on finding ways to minimise the bandwidth (or directly minimise the fill-in) by applying permutations to the matrix, or other such equivalence or similarity transformations. The Cuthill–McKee algorithm can be used to reduce the bandwidth of a sparse symmetric matrix. There are, however, matrices for which the reverse Cuthill–McKee algorithm performs better. There are many other methods in use. The problem of finding a representation of a matrix with minimal bandwidth by means of permutations of rows and columns is NP-hard. Notes. &lt;templatestyles src="Reflist/styles.css" /&gt;
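The diagonal-wise band storage described above can be reproduced with a short script. The helper below is an illustrative sketch (its name and its row-oriented layout are assumptions made here; libraries use their own conventions): it packs an n-by-n matrix with kl subdiagonals and ku superdiagonals into an n-by-(kl+ku+1) array, and for a tridiagonal matrix it yields exactly the 6-by-3 layout shown earlier, with rows of the form [B(i,i-1), B(i,i), B(i,i+1)] and zeros where the band runs off the matrix.
import numpy as np

def pack_banded(A, kl, ku):
    # Pack the band of A into a dense n-by-(kl+ku+1) array, one row per matrix row.
    # Column d + kl of the packed array holds the d-th diagonal (d = 0 is the main diagonal).
    n = A.shape[0]
    AP = np.zeros((n, kl + ku + 1))
    for i in range(n):
        for d in range(-kl, ku + 1):
            j = i + d
            if 0 <= j < n:
                AP[i, d + kl] = A[i, j]
    return AP

# A 6-by-6 tridiagonal matrix (bandwidth 1) packs into a 6-by-3 array.
A = np.diag(np.arange(1.0, 7.0)) + np.diag(np.full(5, 0.5), 1) + np.diag(np.full(5, -0.5), -1)
print(pack_banded(A, kl=1, ku=1))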
[ { "math_id": 0, "text": "a_{i,j}=0 \\quad\\mbox{if}\\quad j<i-k_1 \\quad\\mbox{ or }\\quad j>i+k_2; \\quad k_1, k_2 \\ge 0.\\," }, { "math_id": 1, "text": " a_{i,j}=0 " }, { "math_id": 2, "text": " |i-j| > k " }, { "math_id": 3, "text": "\n\\begin{bmatrix}\n B_{11} & B_{12} & 0 & \\cdots & \\cdots & 0 \\\\\n B_{21} & B_{22} & B_{23} & \\ddots & \\ddots & \\vdots \\\\\n 0 & B_{32} & B_{33} & B_{34} & \\ddots & \\vdots \\\\\n \\vdots & \\ddots & B_{43} & B_{44} & B_{45} & 0 \\\\\n \\vdots & \\ddots & \\ddots & B_{54} & B_{55} & B_{56} \\\\\n 0 & \\cdots & \\cdots & 0 & B_{65} & B_{66}\n\\end{bmatrix}\n" }, { "math_id": 4, "text": "\n\\begin{bmatrix}\n 0 & B_{11} & B_{12}\\\\\n B_{21} & B_{22} & B_{23} \\\\\n B_{32} & B_{33} & B_{34} \\\\\n B_{43} & B_{44} & B_{45} \\\\\n B_{54} & B_{55} & B_{56} \\\\\n B_{65} & B_{66} & 0\n\\end{bmatrix}.\n" }, { "math_id": 5, "text": "\n\\begin{bmatrix}\n A_{11} & A_{12} & A_{13} & 0 & \\cdots & 0 \\\\\n & A_{22} & A_{23} & A_{24} & \\ddots & \\vdots \\\\\n & & A_{33} & A_{34} & A_{35} & 0 \\\\\n & & & A_{44} & A_{45} & A_{46} \\\\\n & sym & & & A_{55} & A_{56} \\\\\n & & & & & A_{66}\n\\end{bmatrix}.\n" }, { "math_id": 6, "text": "\n\\begin{bmatrix}\n A_{11} & A_{12} & A_{13} \\\\\n A_{22} & A_{23} & A_{24} \\\\\n A_{33} & A_{34} & A_{35} \\\\\n A_{44} & A_{45} & A_{46} \\\\\n A_{55} & A_{56} & 0 \\\\\n A_{66} & 0 & 0\n\\end{bmatrix}.\n" } ]
https://en.wikipedia.org/wiki?curid=1451476
1451556
Packed storage matrix
Programming term A packed storage matrix, also known as a packed matrix, is a term used in programming for a compact representation of an formula_0 matrix. It is more compact than an m-by-n rectangular array because it exploits a special structure of the matrix. Typical examples of matrices that can take advantage of packed storage include symmetric or Hermitian matrices, triangular matrices, and banded matrices. Code examples (Fortran). Both of the following storage schemes are used extensively in BLAS and LAPACK. An example of packed storage for a Hermitian matrix:
complex :: A(n,n) ! a hermitian matrix
complex :: AP(n*(n+1)/2) ! packed storage for A
! the upper triangle of A is stored column-by-column in AP:
! AP(i + j*(j-1)/2) holds A(i,j) for 1 <= i <= j.
! unpacking the matrix AP to A
do j=1,n
 k = j*(j-1)/2
 A(1:j,j) = AP(1+k:j+k)
 A(j,1:j-1) = conjg(AP(1+k:j-1+k))
end do
An example of packed storage for a banded matrix:
real :: A(m,n) ! a banded matrix with kl subdiagonals and ku superdiagonals
real :: AP(-ku:kl,n) ! packed storage for A
! the band of A is stored column-by-column in AP. Some elements of AP are unused.
! AP(i-j,j) holds A(i,j), so row 0 of AP is the main diagonal.
! unpacking the matrix AP to A
do j = 1, n
 forall(i=max(1,j-ku):min(m,j+kl)) A(i,j) = AP(i-j,j)
end do
print *,AP(0,:) ! the diagonal
[ { "math_id": 0, "text": "m\\times n" } ]
https://en.wikipedia.org/wiki?curid=1451556
145162
Parallel computing
Programming paradigm in which many processes are executed simultaneously Parallel computing is a type of computation in which many calculations or processes are carried out simultaneously. Large problems can often be divided into smaller ones, which can then be solved at the same time. There are several different forms of parallel computing: bit-level, instruction-level, data, and task parallelism. Parallelism has long been employed in high-performance computing, but has gained broader interest due to the physical constraints preventing frequency scaling. As power consumption (and consequently heat generation) by computers has become a concern in recent years, parallel computing has become the dominant paradigm in computer architecture, mainly in the form of multi-core processors. In computer science, parallelism and concurrency are two different things: a parallel program uses multiple CPU cores, each core performing a task independently. On the other hand, concurrency enables a program to deal with multiple tasks even on a single CPU core; the core switches between tasks (i.e. threads) without necessarily completing each one. A program can have both, neither or a combination of parallelism and concurrency characteristics. Parallel computers can be roughly classified according to the level at which the hardware supports parallelism, with multi-core and multi-processor computers having multiple processing elements within a single machine, while clusters, MPPs, and grids use multiple computers to work on the same task. Specialized parallel computer architectures are sometimes used alongside traditional processors, for accelerating specific tasks. In some cases parallelism is transparent to the programmer, such as in bit-level or instruction-level parallelism, but explicitly parallel algorithms, particularly those that use concurrency, are more difficult to write than sequential ones, because concurrency introduces several new classes of potential software bugs, of which race conditions are the most common. Communication and synchronization between the different subtasks are typically some of the greatest obstacles to getting optimal parallel program performance. A theoretical upper bound on the speed-up of a single program as a result of parallelization is given by Amdahl's law, which states that it is limited by the fraction of time for which the parallelization can be utilised. &lt;templatestyles src="Template:TOC limit/styles.css" /&gt; Background. Traditionally, computer software has been written for serial computation. To solve a problem, an algorithm is constructed and implemented as a serial stream of instructions. These instructions are executed on a central processing unit on one computer. Only one instruction may execute at a time—after that instruction is finished, the next one is executed. Parallel computing, on the other hand, uses multiple processing elements simultaneously to solve a problem. This is accomplished by breaking the problem into independent parts so that each processing element can execute its part of the algorithm simultaneously with the others. The processing elements can be diverse and include resources such as a single computer with multiple processors, several networked computers, specialized hardware, or any combination of the above. Historically parallel computing was used for scientific computing and the simulation of scientific problems, particularly in the natural and engineering sciences, such as meteorology. 
This led to the design of parallel hardware and software, as well as high performance computing. Frequency scaling was the dominant reason for improvements in computer performance from the mid-1980s until 2004. The runtime of a program is equal to the number of instructions multiplied by the average time per instruction. Maintaining everything else constant, increasing the clock frequency decreases the average time it takes to execute an instruction. An increase in frequency thus decreases runtime for all compute-bound programs. However, power consumption "P" by a chip is given by the equation "P" = "C" × "V"² × "F", where "C" is the capacitance being switched per clock cycle (proportional to the number of transistors whose inputs change), "V" is voltage, and "F" is the processor frequency (cycles per second). Increases in frequency increase the amount of power used in a processor. Increasing processor power consumption led ultimately to Intel's May 8, 2004 cancellation of its Tejas and Jayhawk processors, which is generally cited as the end of frequency scaling as the dominant computer architecture paradigm. To deal with the problem of power consumption and overheating, the major central processing unit (CPU or processor) manufacturers started to produce power efficient processors with multiple cores. The core is the computing unit of the processor and in multi-core processors each core is independent and can access the same memory concurrently. Multi-core processors have brought parallel computing to desktop computers. Thus parallelization of serial programs has become a mainstream programming task. In 2012 quad-core processors became standard for desktop computers, while servers have 10+ core processors. From Moore's law it can be predicted that the number of cores per processor will double every 18–24 months. This could mean that after 2020 a typical processor will have dozens or hundreds of cores; however, in reality the standard is somewhere in the region of 4 to 16 cores, with some designs having a mix of performance and efficiency cores (such as ARM's big.LITTLE design) due to thermal and design constraints. An operating system can ensure that different tasks and user programs are run in parallel on the available cores. However, for a serial software program to take full advantage of the multi-core architecture the programmer needs to restructure and parallelize the code. A speed-up of application software runtime will no longer be achieved through frequency scaling; instead, programmers will need to parallelize their software code to take advantage of the increasing computing power of multicore architectures. Amdahl's law and Gustafson's law. Optimally, the speedup from parallelization would be linear—doubling the number of processing elements should halve the runtime, and doubling it a second time should again halve the runtime. However, very few parallel algorithms achieve optimal speedup. Most of them have a near-linear speedup for small numbers of processing elements, which flattens out into a constant value for large numbers of processing elements. The potential speedup of an algorithm on a parallel computing platform is given by Amdahl's law formula_0 where "S"latency is the potential speedup in latency of the execution of the whole task, "s" is the speedup in latency of the execution of the parallelizable part of the task, and "p" is the percentage of the execution time of the whole task that the parallelizable part accounted for before parallelization. Since "S"latency < 1/(1 - "p"), it shows that a small part of the program which cannot be parallelized will limit the overall speedup available from parallelization.
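The bound can be checked with a few lines of arithmetic. The sketch below (the function name and sample values are illustrative, not from the text) evaluates the formula above and shows the speedup saturating near 1/(1 - "p"), the ten-fold limit discussed in the next paragraph for "p" = 0.9.
def amdahl_speedup(p, s):
    # p: fraction of the runtime that benefits from parallelization
    # s: speedup of the parallelizable part (for example, the number of processors)
    return 1.0 / ((1.0 - p) + p / s)

for s in (2, 8, 64, 1024):
    print(s, round(amdahl_speedup(0.9, s), 2))
# prints speedups of roughly 1.82, 4.71, 8.77 and 9.91: no matter how large s becomes,
# the overall speedup never exceeds 1 / (1 - 0.9) = 10.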
A program solving a large mathematical or engineering problem will typically consist of several parallelizable parts and several non-parallelizable (serial) parts. If the non-parallelizable part of a program accounts for 10% of the runtime ("p" = 0.9), we can get no more than a 10 times speedup, regardless of how many processors are added. This puts an upper limit on the usefulness of adding more parallel execution units. "When a task cannot be partitioned because of sequential constraints, the application of more effort has no effect on the schedule. The bearing of a child takes nine months, no matter how many women are assigned." Amdahl's law only applies to cases where the problem size is fixed. In practice, as more computing resources become available, they tend to get used on larger problems (larger datasets), and the time spent in the parallelizable part often grows much faster than the inherently serial work. In this case, Gustafson's law gives a less pessimistic and more realistic assessment of parallel performance: formula_1 Both Amdahl's law and Gustafson's law assume that the running time of the serial part of the program is independent of the number of processors. Amdahl's law assumes that the entire problem is of fixed size so that the total amount of work to be done in parallel is also "independent of the number of processors", whereas Gustafson's law assumes that the total amount of work to be done in parallel "varies linearly with the number of processors". Dependencies. Understanding data dependencies is fundamental in implementing parallel algorithms. No program can run more quickly than the longest chain of dependent calculations (known as the critical path), since calculations that depend upon prior calculations in the chain must be executed in order. However, most algorithms do not consist of just a long chain of dependent calculations; there are usually opportunities to execute independent calculations in parallel. Let "P""i" and "P""j" be two program segments. Bernstein's conditions describe when the two are independent and can be executed in parallel. For "P""i", let "I""i" be all of the input variables and "O""i" the output variables, and likewise for "P""j". "P""i" and "P""j" are independent if they satisfy formula_2 formula_3 formula_4 Violation of the first condition introduces a flow dependency, corresponding to the first segment producing a result used by the second segment. The second condition represents an anti-dependency, when the second segment produces a variable needed by the first segment. The third and final condition represents an output dependency: when two segments write to the same location, the result comes from the logically last executed segment. Consider the following functions, which demonstrate several kinds of dependencies: 1: function Dep(a, b) 2: c := a * b 3: d := 3 * c 4: end function In this example, instruction 3 cannot be executed before (or even in parallel with) instruction 2, because instruction 3 uses a result from instruction 2. It violates condition 1, and thus introduces a flow dependency. 1: function NoDep(a, b) 2: c := a * b 3: d := 3 * b 4: e := a + b 5: end function In this example, there are no dependencies between the instructions, so they can all be run in parallel. Bernstein's conditions do not allow memory to be shared between different processes. For that, some means of enforcing an ordering between accesses is necessary, such as semaphores, barriers or some other synchronization method. 
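Bernstein's conditions can be checked mechanically from the read and write sets of two program segments. The following sketch (the helper name and the way segments are encoded as sets of variable names are illustrative assumptions) reports whether two segments may run in parallel, and reproduces the conclusions drawn for the Dep and NoDep examples above.
def independent(inputs_i, outputs_i, inputs_j, outputs_j):
    # Bernstein's conditions: no flow, anti- or output dependency.
    flow = inputs_j & outputs_i      # I_j ∩ O_i
    anti = inputs_i & outputs_j      # I_i ∩ O_j
    output = outputs_i & outputs_j   # O_i ∩ O_j
    return not (flow or anti or output)

# Dep: segment 2 "c := a * b" and segment 3 "d := 3 * c"
print(independent({"a", "b"}, {"c"}, {"c"}, {"d"}))   # False: flow dependency on c

# NoDep: segment 2 "c := a * b" and segment 3 "d := 3 * b"
print(independent({"a", "b"}, {"c"}, {"b"}, {"d"}))   # True: the segments are independent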
Race conditions, mutual exclusion, synchronization, and parallel slowdown. Subtasks in a parallel program are often called threads. Some parallel computer architectures use smaller, lightweight versions of threads known as fibers, while others use bigger versions known as processes. However, "threads" is generally accepted as a generic term for subtasks. Threads will often need synchronized access to an object or other resource, for example when they must update a variable that is shared between them. Without synchronization, the instructions of the two threads may be interleaved in any order. For example, consider the following program, in which two threads each increment a shared variable V:
Thread A: 1A: Read variable V; 2A: Add 1 to variable V; 3A: Write back to variable V.
Thread B: 1B: Read variable V; 2B: Add 1 to variable V; 3B: Write back to variable V.
If instruction 1B is executed between 1A and 3A, or if instruction 1A is executed between 1B and 3B, the program will produce incorrect data. This is known as a race condition. The programmer must use a lock to provide mutual exclusion. A lock is a programming language construct that allows one thread to take control of a variable and prevent other threads from reading or writing it, until that variable is unlocked. The thread holding the lock is free to execute its critical section (the section of a program that requires exclusive access to some variable), and to unlock the data when it is finished. Therefore, to guarantee correct program execution, the above program can be rewritten to use locks:
Thread A: 1A: Lock variable V; 2A: Read variable V; 3A: Add 1 to variable V; 4A: Write back to variable V; 5A: Unlock variable V.
Thread B: 1B: Lock variable V; 2B: Read variable V; 3B: Add 1 to variable V; 4B: Write back to variable V; 5B: Unlock variable V.
One thread will successfully lock variable V, while the other thread will be locked out—unable to proceed until V is unlocked again. This guarantees correct execution of the program. Locks may be necessary to ensure correct program execution when threads must serialize access to resources, but their use can greatly slow a program and may affect its reliability. Locking multiple variables using non-atomic locks introduces the possibility of program deadlock. An atomic lock locks multiple variables all at once. If it cannot lock all of them, it does not lock any of them. If two threads each need to lock the same two variables using non-atomic locks, it is possible that one thread will lock one of them and the second thread will lock the second variable. In such a case, neither thread can complete, and deadlock results. Many parallel programs require that their subtasks act in synchrony. This requires the use of a barrier. Barriers are typically implemented using a lock or a semaphore. One class of algorithms, known as lock-free and wait-free algorithms, altogether avoids the use of locks and barriers. However, this approach is generally difficult to implement and requires correctly designed data structures. Not all parallelization results in speed-up. Generally, as a task is split up into more and more threads, those threads spend an ever-increasing portion of their time communicating with each other or waiting on each other for access to resources. Once the overhead from resource contention or communication dominates the time spent on other computation, further parallelization (that is, splitting the workload over even more threads) increases rather than decreases the amount of time required to finish. This problem, known as parallel slowdown, can be improved in some cases by software analysis and redesign. Fine-grained, coarse-grained, and embarrassing parallelism. Applications are often classified according to how often their subtasks need to synchronize or communicate with each other.
An application exhibits fine-grained parallelism if its subtasks must communicate many times per second; it exhibits coarse-grained parallelism if they do not communicate many times per second, and it exhibits embarrassing parallelism if they rarely or never have to communicate. Embarrassingly parallel applications are considered the easiest to parallelize. Flynn's taxonomy. Michael J. Flynn created one of the earliest classification systems for parallel (and sequential) computers and programs, now known as Flynn's taxonomy. Flynn classified programs and computers by whether they were operating using a single set or multiple sets of instructions, and whether or not those instructions were using a single set or multiple sets of data. The single-instruction-single-data (SISD) classification is equivalent to an entirely sequential program. The single-instruction-multiple-data (SIMD) classification is analogous to doing the same operation repeatedly over a large data set. This is commonly done in signal processing applications. Multiple-instruction-single-data (MISD) is a rarely used classification. While computer architectures to deal with this were devised (such as systolic arrays), few applications that fit this class materialized. Multiple-instruction-multiple-data (MIMD) programs are by far the most common type of parallel programs. According to David A. Patterson and John L. Hennessy, "Some machines are hybrids of these categories, of course, but this classic model has survived because it is simple, easy to understand, and gives a good first approximation. It is also—perhaps because of its understandability—the most widely used scheme." Granularity. Bit-level parallelism. From the advent of very-large-scale integration (VLSI) computer-chip fabrication technology in the 1970s until about 1986, speed-up in computer architecture was driven by doubling computer word size—the amount of information the processor can manipulate per cycle. Increasing the word size reduces the number of instructions the processor must execute to perform an operation on variables whose sizes are greater than the length of the word. For example, where an 8-bit processor must add two 16-bit integers, the processor must first add the 8 lower-order bits from each integer using the standard addition instruction, then add the 8 higher-order bits using an add-with-carry instruction and the carry bit from the lower order addition; thus, an 8-bit processor requires two instructions to complete a single operation, where a 16-bit processor would be able to complete the operation with a single instruction. Historically, 4-bit microprocessors were replaced with 8-bit, then 16-bit, then 32-bit microprocessors. This trend generally came to an end with the introduction of 32-bit processors, which has been a standard in general-purpose computing for two decades. Not until the early 2000s, with the advent of x86-64 architectures, did 64-bit processors become commonplace. Instruction-level parallelism. A computer program is, in essence, a stream of instructions executed by a processor. Without instruction-level parallelism, a processor can only issue less than one instruction per clock cycle (IPC &lt; 1). These processors are known as "subscalar" processors. These instructions can be re-ordered and combined into groups which are then executed in parallel without changing the result of the program. This is known as instruction-level parallelism. 
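Before turning to instruction-level parallelism in more detail, the 8-bit addition example above can be made concrete. The sketch below (the function name and sample values are invented for illustration) emulates a 16-bit addition using two 8-bit additions and a carry bit, which is exactly the extra work a narrow-word processor must perform.
def add_16bit_with_8bit_alu(x, y):
    # Emulate a 16-bit addition using only 8-bit additions plus a carry,
    # as an 8-bit processor would have to do.
    lo = (x & 0xFF) + (y & 0xFF)                      # add the 8 lower-order bits
    carry = lo >> 8                                   # carry out of the low half
    hi = ((x >> 8) & 0xFF) + ((y >> 8) & 0xFF) + carry  # add-with-carry on the high half
    return ((hi & 0xFF) << 8) | (lo & 0xFF)           # result modulo 2**16

print(hex(add_16bit_with_8bit_alu(0x1234, 0x0FCD)))                       # 0x2201
print((0x1234 + 0x0FCD) & 0xFFFF == add_16bit_with_8bit_alu(0x1234, 0x0FCD))  # True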
Advances in instruction-level parallelism dominated computer architecture from the mid-1980s until the mid-1990s. All modern processors have multi-stage instruction pipelines. Each stage in the pipeline corresponds to a different action the processor performs on that instruction in that stage; a processor with an "N"-stage pipeline can have up to "N" different instructions at different stages of completion and thus can issue one instruction per clock cycle (IPC = 1). These processors are known as "scalar" processors. The canonical example of a pipelined processor is a RISC processor, with five stages: instruction fetch (IF), instruction decode (ID), execute (EX), memory access (MEM), and register write back (WB). The Pentium 4 processor had a 35-stage pipeline. Most modern processors also have multiple execution units. They usually combine this feature with pipelining and thus can issue more than one instruction per clock cycle (IPC > 1). These processors are known as "superscalar" processors. Superscalar processors differ from multi-core processors in that the several execution units are not entire processors (i.e. processing units). Instructions can be grouped together only if there is no data dependency between them. Scoreboarding and the Tomasulo algorithm (which is similar to scoreboarding but makes use of register renaming) are two of the most common techniques for implementing out-of-order execution and instruction-level parallelism. Task parallelism. Task parallelism is the characteristic of a parallel program that "entirely different calculations can be performed on either the same or different sets of data". This contrasts with data parallelism, where the same calculation is performed on the same or different sets of data. Task parallelism involves the decomposition of a task into sub-tasks and then allocating each sub-task to a processor for execution. The processors would then execute these sub-tasks concurrently and often cooperatively. Task parallelism does not usually scale with the size of a problem. Superword level parallelism. Superword level parallelism is a vectorization technique based on loop unrolling and basic block vectorization. It is distinct from loop vectorization algorithms in that it can exploit parallelism of inline code, such as code that manipulates coordinates or color channels, or loops that have been unrolled by hand. Hardware. Memory and communication. Main memory in a parallel computer is either shared memory (shared between all processing elements in a single address space), or distributed memory (in which each processing element has its own local address space). Distributed memory refers to the fact that the memory is logically distributed, but often implies that it is physically distributed as well. Distributed shared memory and memory virtualization combine the two approaches, where the processing element has its own local memory and access to the memory on non-local processors. Accesses to local memory are typically faster than accesses to non-local memory. On supercomputers, a distributed shared memory space can be implemented using a programming model such as PGAS. This model allows processes on one compute node to transparently access the remote memory of another compute node.
All compute nodes are also connected to an external shared memory system via high-speed interconnect, such as Infiniband, this external shared memory system is known as burst buffer, which is typically built from arrays of non-volatile memory physically distributed across multiple I/O nodes. Computer architectures in which each element of main memory can be accessed with equal latency and bandwidth are known as uniform memory access (UMA) systems. Typically, that can be achieved only by a shared memory system, in which the memory is not physically distributed. A system that does not have this property is known as a non-uniform memory access (NUMA) architecture. Distributed memory systems have non-uniform memory access. Computer systems make use of caches—small and fast memories located close to the processor which store temporary copies of memory values (nearby in both the physical and logical sense). Parallel computer systems have difficulties with caches that may store the same value in more than one location, with the possibility of incorrect program execution. These computers require a cache coherency system, which keeps track of cached values and strategically purges them, thus ensuring correct program execution. Bus snooping is one of the most common methods for keeping track of which values are being accessed (and thus should be purged). Designing large, high-performance cache coherence systems is a very difficult problem in computer architecture. As a result, shared memory computer architectures do not scale as well as distributed memory systems do. Processor–processor and processor–memory communication can be implemented in hardware in several ways, including via shared (either multiported or multiplexed) memory, a crossbar switch, a shared bus or an interconnect network of a myriad of topologies including star, ring, tree, hypercube, fat hypercube (a hypercube with more than one processor at a node), or n-dimensional mesh. Parallel computers based on interconnected networks need to have some kind of routing to enable the passing of messages between nodes that are not directly connected. The medium used for communication between the processors is likely to be hierarchical in large multiprocessor machines. Classes of parallel computers. Parallel computers can be roughly classified according to the level at which the hardware supports parallelism. This classification is broadly analogous to the distance between basic computing nodes. These are not mutually exclusive; for example, clusters of symmetric multiprocessors are relatively common. Multi-core computing. A multi-core processor is a processor that includes multiple processing units (called "cores") on the same chip. This processor differs from a superscalar processor, which includes multiple execution units and can issue multiple instructions per clock cycle from one instruction stream (thread); in contrast, a multi-core processor can issue multiple instructions per clock cycle from multiple instruction streams. IBM's Cell microprocessor, designed for use in the Sony PlayStation 3, is a prominent multi-core processor. Each core in a multi-core processor can potentially be superscalar as well—that is, on every clock cycle, each core can issue multiple instructions from one thread. Simultaneous multithreading (of which Intel's Hyper-Threading is the best known) was an early form of pseudo-multi-coreism. 
A processor capable of concurrent multithreading includes multiple execution units in the same processing unit—that is it has a superscalar architecture—and can issue multiple instructions per clock cycle from "multiple" threads. Temporal multithreading on the other hand includes a single execution unit in the same processing unit and can issue one instruction at a time from "multiple" threads. Symmetric multiprocessing. A symmetric multiprocessor (SMP) is a computer system with multiple identical processors that share memory and connect via a bus. Bus contention prevents bus architectures from scaling. As a result, SMPs generally do not comprise more than 32 processors. Because of the small size of the processors and the significant reduction in the requirements for bus bandwidth achieved by large caches, such symmetric multiprocessors are extremely cost-effective, provided that a sufficient amount of memory bandwidth exists. Distributed computing. A distributed computer (also known as a distributed memory multiprocessor) is a distributed memory computer system in which the processing elements are connected by a network. Distributed computers are highly scalable. The terms "concurrent computing", "parallel computing", and "distributed computing" have a lot of overlap, and no clear distinction exists between them. The same system may be characterized both as "parallel" and "distributed"; the processors in a typical distributed system run concurrently in parallel. Cluster computing. A cluster is a group of loosely coupled computers that work together closely, so that in some respects they can be regarded as a single computer. Clusters are composed of multiple standalone machines connected by a network. While machines in a cluster do not have to be symmetric, load balancing is more difficult if they are not. The most common type of cluster is the Beowulf cluster, which is a cluster implemented on multiple identical commercial off-the-shelf computers connected with a TCP/IP Ethernet local area network. Beowulf technology was originally developed by Thomas Sterling and Donald Becker. 87% of all Top500 supercomputers are clusters. The remaining are Massively Parallel Processors, explained below. Because grid computing systems (described below) can easily handle embarrassingly parallel problems, modern clusters are typically designed to handle more difficult problems—problems that require nodes to share intermediate results with each other more often. This requires a high bandwidth and, more importantly, a low-latency interconnection network. Many historic and current supercomputers use customized high-performance network hardware specifically designed for cluster computing, such as the Cray Gemini network. As of 2014, most current supercomputers use some off-the-shelf standard network hardware, often Myrinet, InfiniBand, or Gigabit Ethernet. Massively parallel computing. A massively parallel processor (MPP) is a single computer with many networked processors. MPPs have many of the same characteristics as clusters, but MPPs have specialized interconnect networks (whereas clusters use commodity hardware for networking). MPPs also tend to be larger than clusters, typically having "far more" than 100 processors. In an MPP, "each CPU contains its own memory and copy of the operating system and application. Each subsystem communicates with the others via a high-speed interconnect." IBM's Blue Gene/L, the fifth fastest supercomputer in the world according to the June 2009 TOP500 ranking, is an MPP. 
Grid computing. Grid computing is the most distributed form of parallel computing. It makes use of computers communicating over the Internet to work on a given problem. Because of the low bandwidth and extremely high latency available on the Internet, distributed computing typically deals only with embarrassingly parallel problems. Most grid computing applications use middleware (software that sits between the operating system and the application to manage network resources and standardize the software interface). The most common grid computing middleware is the Berkeley Open Infrastructure for Network Computing (BOINC). Often volunteer computing software makes use of "spare cycles", performing computations at times when a computer is idling. Cloud computing. The ubiquity of Internet brought the possibility of large-scale cloud computing. Specialized parallel computers. Within parallel computing, there are specialized parallel devices that remain niche areas of interest. While not domain-specific, they tend to be applicable to only a few classes of parallel problems. Reconfigurable computing with field-programmable gate arrays. Reconfigurable computing is the use of a field-programmable gate array (FPGA) as a co-processor to a general-purpose computer. An FPGA is, in essence, a computer chip that can rewire itself for a given task. FPGAs can be programmed with hardware description languages such as VHDL or Verilog. Several vendors have created C to HDL languages that attempt to emulate the syntax and semantics of the C programming language, with which most programmers are familiar. The best known C to HDL languages are Mitrion-C, Impulse C, and Handel-C. Specific subsets of SystemC based on C++ can also be used for this purpose. AMD's decision to open its HyperTransport technology to third-party vendors has become the enabling technology for high-performance reconfigurable computing. According to Michael R. D'Amour, Chief Operating Officer of DRC Computer Corporation, "when we first walked into AMD, they called us 'the socket stealers.' Now they call us their partners." General-purpose computing on graphics processing units (GPGPU). General-purpose computing on graphics processing units (GPGPU) is a fairly recent trend in computer engineering research. GPUs are co-processors that have been heavily optimized for computer graphics processing. Computer graphics processing is a field dominated by data parallel operations—particularly linear algebra matrix operations. In the early days, GPGPU programs used the normal graphics APIs for executing programs. However, several new programming languages and platforms have been built to do general purpose computation on GPUs with both Nvidia and AMD releasing programming environments with CUDA and Stream SDK respectively. Other GPU programming languages include BrookGPU, PeakStream, and RapidMind. Nvidia has also released specific products for computation in their Tesla series. The technology consortium Khronos Group has released the OpenCL specification, which is a framework for writing programs that execute across platforms consisting of CPUs and GPUs. AMD, Apple, Intel, Nvidia and others are supporting OpenCL. Application-specific integrated circuits. Several application-specific integrated circuit (ASIC) approaches have been devised for dealing with parallel applications. Because an ASIC is (by definition) specific to a given application, it can be fully optimized for that application. 
As a result, for a given application, an ASIC tends to outperform a general-purpose computer. However, ASICs are created by UV photolithography. This process requires a mask set, which can be extremely expensive. A mask set can cost over a million US dollars. (The smaller the transistors required for the chip, the more expensive the mask will be.) Meanwhile, performance increases in general-purpose computing over time (as described by Moore's law) tend to wipe out these gains in only one or two chip generations. High initial cost, and the tendency to be overtaken by Moore's-law-driven general-purpose computing, has rendered ASICs unfeasible for most parallel computing applications. However, some have been built. One example is the PFLOPS RIKEN MDGRAPE-3 machine which uses custom ASICs for molecular dynamics simulation. Vector processors. A vector processor is a CPU or computer system that can execute the same instruction on large sets of data. Vector processors have high-level operations that work on linear arrays of numbers or vectors. An example vector operation is "A" = "B" × "C", where "A", "B", and "C" are each 64-element vectors of 64-bit floating-point numbers. They are closely related to Flynn's SIMD classification. Cray computers became famous for their vector-processing computers in the 1970s and 1980s. However, vector processors—both as CPUs and as full computer systems—have generally disappeared. Modern processor instruction sets do include some vector processing instructions, such as with Freescale Semiconductor's AltiVec and Intel's Streaming SIMD Extensions (SSE). Software. Parallel programming languages. Concurrent programming languages, libraries, APIs, and parallel programming models (such as algorithmic skeletons) have been created for programming parallel computers. These can generally be divided into classes based on the assumptions they make about the underlying memory architecture—shared memory, distributed memory, or shared distributed memory. Shared memory programming languages communicate by manipulating shared memory variables. Distributed memory uses message passing. POSIX Threads and OpenMP are two of the most widely used shared memory APIs, whereas Message Passing Interface (MPI) is the most widely used message-passing system API. One concept used in programming parallel programs is the future concept, where one part of a program promises to deliver a required datum to another part of a program at some future time. Efforts to standardize parallel programming include an open standard called OpenHMPP for hybrid multi-core parallel programming. The OpenHMPP directive-based programming model offers a syntax to efficiently offload computations on hardware accelerators and to optimize data movement to/from the hardware memory using remote procedure calls. The rise of consumer GPUs has led to support for compute kernels, either in graphics APIs (referred to as compute shaders), in dedicated APIs (such as OpenCL), or in other language extensions. Automatic parallelization. Automatic parallelization of a sequential program by a compiler is the "holy grail" of parallel computing, especially with the aforementioned limit of processor frequency. Despite decades of work by compiler researchers, automatic parallelization has had only limited success. Mainstream parallel programming languages remain either explicitly parallel or (at best) partially implicit, in which a programmer gives the compiler directives for parallelization. 
A few fully implicit parallel programming languages exist—SISAL, Parallel Haskell, SequenceL, System C (for FPGAs), Mitrion-C, VHDL, and Verilog. Application checkpointing. As a computer system grows in complexity, the mean time between failures usually decreases. Application checkpointing is a technique whereby the computer system takes a "snapshot" of the application—a record of all current resource allocations and variable states, akin to a core dump—; this information can be used to restore the program if the computer should fail. Application checkpointing means that the program has to restart from only its last checkpoint rather than the beginning. While checkpointing provides benefits in a variety of situations, it is especially useful in highly parallel systems with a large number of processors used in high performance computing. Algorithmic methods. As parallel computers become larger and faster, we are now able to solve problems that had previously taken too long to run. Fields as varied as bioinformatics (for protein folding and sequence analysis) and economics have taken advantage of parallel computing. Common types of problems in parallel computing applications include: Fault tolerance. Parallel computing can also be applied to the design of fault-tolerant computer systems, particularly via lockstep systems performing the same operation in parallel. This provides redundancy in case one component fails, and also allows automatic error detection and error correction if the results differ. These methods can be used to help prevent single-event upsets caused by transient errors. Although additional measures may be required in embedded or specialized systems, this method can provide a cost-effective approach to achieve n-modular redundancy in commercial off-the-shelf systems. History. The origins of true (MIMD) parallelism go back to Luigi Federico Menabrea and his "Sketch of the Analytic Engine Invented by Charles Babbage". In 1957, Compagnie des Machines Bull announced the first computer architecture specifically designed for parallelism, the Gamma 60. It utilized a fork-join model and a "Program Distributor" to dispatch and collect data to and from independent processing units connected to a central memory. In April 1958, Stanley Gill (Ferranti) discussed parallel programming and the need for branching and waiting. Also in 1958, IBM researchers John Cocke and Daniel Slotnick discussed the use of parallelism in numerical calculations for the first time. Burroughs Corporation introduced the D825 in 1962, a four-processor computer that accessed up to 16 memory modules through a crossbar switch. In 1967, Amdahl and Slotnick published a debate about the feasibility of parallel processing at American Federation of Information Processing Societies Conference. It was during this debate that Amdahl's law was coined to define the limit of speed-up due to parallelism. In 1969, Honeywell introduced its first Multics system, a symmetric multiprocessor system capable of running up to eight processors in parallel. C.mmp, a multi-processor project at Carnegie Mellon University in the 1970s, was among the first multiprocessors with more than a few processors. The first bus-connected multiprocessor with snooping caches was the Synapse N+1 in 1984. SIMD parallel computers can be traced back to the 1970s. The motivation behind early SIMD computers was to amortize the gate delay of the processor's control unit over multiple instructions. 
In 1964, Slotnick had proposed building a massively parallel computer for the Lawrence Livermore National Laboratory. His design, ILLIAC IV, was funded by the US Air Force and was the earliest SIMD parallel-computing effort. The key to its design was a fairly high parallelism, with up to 256 processors, which allowed the machine to work on large datasets in what would later be known as vector processing. However, ILLIAC IV was called "the most infamous of supercomputers", because the project was only one-fourth completed, but took 11 years and cost almost four times the original estimate. When it was finally ready to run its first real application in 1976, it was outperformed by existing commercial supercomputers such as the Cray-1. Biological brain as massively parallel computer. In the early 1970s, at the MIT Computer Science and Artificial Intelligence Laboratory, Marvin Minsky and Seymour Papert started developing the "Society of Mind" theory, which views the biological brain as a massively parallel computer. In 1986, Minsky published "The Society of Mind", which claims that "mind is formed from many little agents, each mindless by itself". The theory attempts to explain how what we call intelligence could be a product of the interaction of non-intelligent parts. Minsky says that the biggest source of ideas about the theory came from his work in trying to create a machine that uses a robotic arm, a video camera, and a computer to build with children's blocks. Similar models (which also view the biological brain as a massively parallel computer, i.e., the brain is made up of a constellation of independent or semi-independent agents) were also described by several other researchers. See also. &lt;templatestyles src="Div col/styles.css"/&gt; References. &lt;templatestyles src="Reflist/styles.css" /&gt;
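The future concept mentioned in the Software section above can be illustrated with a short, self-contained Python sketch; it uses the standard-library concurrent.futures module purely as an example, and the function slow_square is a hypothetical stand-in for an expensive, independent unit of work.
from concurrent.futures import ProcessPoolExecutor

def slow_square(x):
    # stand-in for an expensive, independent unit of work
    return x * x

if __name__ == "__main__":
    with ProcessPoolExecutor() as pool:
        # Each submit() returns a future: a promise to deliver a required datum
        # (here, a squared number) at some future time, computed in parallel.
        futures = [pool.submit(slow_square, n) for n in range(8)]
        # Asking a future for its result blocks only until that particular datum is ready.
        print([f.result() for f in futures])  # [0, 1, 4, 9, 16, 25, 36, 49]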
[ { "math_id": 0, "text": "S_\\text{latency}(s) = \\frac{1}{1 - p + \\frac{p}{s}} = \\frac{s}{s + p(1-s)}" }, { "math_id": 1, "text": "S_\\text{latency}(s) = 1 - p + sp." }, { "math_id": 2, "text": "I_j \\cap O_i = \\varnothing," }, { "math_id": 3, "text": "I_i \\cap O_j = \\varnothing," }, { "math_id": 4, "text": "O_i \\cap O_j = \\varnothing." } ]
https://en.wikipedia.org/wiki?curid=145162
14517330
Sieve of Sundaram
In mathematics, the sieve of Sundaram is a variant of the sieve of Eratosthenes, a simple deterministic algorithm for finding all the prime numbers up to a specified integer. It was discovered by Indian student S. P. Sundaram in 1934. Algorithm. Start with a list of the integers from 1 to "n". From this list, remove all numbers of the form "i" + "j" + 2"ij" where formula_0 and formula_1. The remaining numbers are doubled and incremented by one, giving a list of the odd prime numbers (i.e., all primes except 2) below 2"n" + 2. The sieve of Sundaram sieves out the composite numbers just as the sieve of Eratosthenes does, but even numbers are not considered; the work of "crossing out" the multiples of 2 is done by the final double-and-increment step. Whenever Eratosthenes' method would cross out "k" different multiples of a prime 2"i" + 1, Sundaram's method crosses out "i" + "j"(2"i" + 1) for formula_2. Correctness. If we start with integers from 1 to "n", the final list contains only odd integers from 3 to 2"n" + 1. From this final list, some odd integers have been excluded; we must show these are precisely the "composite" odd integers less than 2"n" + 2. Let "q" be an odd integer of the form 2"k" + 1. Then, "q" is excluded if and only if "k" is of the form "i" + "j" + 2"ij", that is "q" = 2("i" + "j" + 2"ij") + 1. Then we have: formula_3 So, an odd integer is excluded from the final list if and only if it has a factorization of the form (2"i" + 1)(2"j" + 1) with 1 ≤ "i" ≤ "j" — which is to say, if it has a non-trivial odd factor. Therefore the list must be composed of exactly the set of odd "prime" numbers less than or equal to 2"n" + 1.
def sieve_of_Sundaram(n):
    """The sieve of Sundaram is a simple deterministic algorithm for
    finding all the prime numbers up to a specified integer."""
    k = (n - 2) // 2
    integers_list = [True] * (k + 1)
    for i in range(1, k + 1):
        j = i
        while i + j + 2 * i * j <= k:
            integers_list[i + j + 2 * i * j] = False
            j += 1
    if n > 2:
        print(2, end=' ')
    for i in range(1, k + 1):
        if integers_list[i]:
            print(2 * i + 1, end=' ')
Asymptotic complexity.
The above obscure but as commonly implemented Python version of the Sieve of Sundaram hides the true complexity of the algorithm due to the following reasons: The following Python code in the same style resolves the above three issues, as well as converting the code to a prime-counting function that also displays the total number of composite-culling operations:
import math

def sieve_of_Sundaram(n):
    """The sieve of Sundaram is a simple deterministic algorithm for
    finding all the prime numbers up to a specified integer."""
    if n < 3:
        if n < 2:
            return 0
        else:
            return 1
    k = (n - 3) // 2 + 1
    integers_list = [True for i in range(k)]
    ops = 0
    for i in range((int(math.sqrt(n)) - 3) // 2 + 1):
        # if integers_list[i]:  # uncommenting this test (and indenting the loop body
        #                       # under it) restricts the culling bases to the odd primes
        p = 2 * i + 3
        s = (p * p - 3) // 2  # compute cull start
        for j in range(s, k, p):
            integers_list[j] = False
            ops += 1
    print("Total operations: ", ops, ";", sep='')
    count = 1
    for i in range(k):
        if integers_list[i]:
            count += 1
    print("Found ", count, " primes to ", n, ".", sep='')
Note the commented-out line, which is all that is necessary to convert the Sieve of Sundaram to the Odds-Only (wheel factorized by the only even prime of two) Sieve of Eratosthenes; this clarifies that the only difference between these two algorithms is that the Sieve of Sundaram culls composite numbers using all odd numbers as the base values, whereas the Odds-Only Sieve of Eratosthenes uses only the odd primes as base values, with both ranges of base values bounded to the square root of the range. When run for various ranges, it is immediately clear that while, of course, the resulting count of primes for a given range is identical between the two algorithms, the number of culling operations is much higher for the Sieve of Sundaram and also grows much more quickly with increasing range. From the above implementation, it is clear that the amount of work done is given by the following: formula_4 or formula_5 where the "a" to "b" range actually starts at the square of the odd base values (but this difference is negligible for large ranges). As the integral of the reciprocal of "x" is exactly formula_6, and as the lower value for "a" is relatively very small (close to one, which has a "log" value of zero), this is about as follows: formula_7 or formula_8 or formula_9. Ignoring the constant factor of one eighth, the asymptotic complexity in Big O notation is clearly formula_10. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "i,j\\in\\mathbb{N},\\ 1 \\le i \\le j" }, { "math_id": 1, "text": "i + j + 2ij \\le n" }, { "math_id": 2, "text": "1\\le j\\le \\lfloor k/2\\rfloor" }, { "math_id": 3, "text": "\\begin{align}\nq &= 2(i + j + 2ij) + 1 \\\\\n&= 2i + 2j + 4ij + 1 \\\\\n&= (2i + 1)(2j + 1).\n\\end{align}" }, { "math_id": 4, "text": "\\int_{a}^{b} \\frac{n}{2x} \\,dx." }, { "math_id": 5, "text": "\\frac{n}{2} \\int_{a}^{b} \\frac{1}{x} \\,dx." }, { "math_id": 6, "text": "\\log{x}" }, { "math_id": 7, "text": "\\frac{n}{4} \\log{\\sqrt{n}}" }, { "math_id": 8, "text": "\\frac{n}{4} \\frac{1}{2} \\log{n}" }, { "math_id": 9, "text": "\\frac{n}{8} \\log{n}" }, { "math_id": 10, "text": "O({n} \\log{n})" } ]
https://en.wikipedia.org/wiki?curid=14517330
145243
Limnology
Science of inland aquatic ecosystems Limnology (from Ancient Greek λίμνη (límnē) 'lake' and λόγος (lógos) 'study of') is the study of inland aquatic ecosystems. The study of limnology includes aspects of the biological, chemical, physical, and geological characteristics of fresh and saline, natural and man-made bodies of water. This includes the study of lakes, reservoirs, ponds, rivers, springs, streams, wetlands, and groundwater. Water systems are often categorized as either running (lotic) or standing (lentic). Limnology includes the study of the drainage basin, movement of water through the basin, and biogeochemical changes that occur en route. A more recent sub-discipline of limnology, termed landscape limnology, studies, manages, and seeks to conserve these ecosystems using a landscape perspective, by explicitly examining connections between an aquatic ecosystem and its drainage basin. Recently, the need to understand global inland waters as part of the Earth system created a sub-discipline called global limnology. This approach considers processes in inland waters on a global scale, like the role of inland aquatic ecosystems in global biogeochemical cycles. Limnology is closely related to aquatic ecology and hydrobiology, which study aquatic organisms and their interactions with the abiotic (non-living) environment. While limnology has substantial overlap with freshwater-focused disciplines (e.g., freshwater biology), it also includes the study of inland salt lakes. History. The term limnology was coined by François-Alphonse Forel (1841–1912), who established the field with his studies of Lake Geneva. Interest in the discipline rapidly expanded, and in 1922 August Thienemann (a German zoologist) and Einar Naumann (a Swedish botanist) co-founded the International Society of Limnology (SIL, from Societas Internationalis Limnologiae). Forel's original definition of limnology, "the oceanography of lakes", was expanded to encompass the study of all inland waters, and influenced Benedykt Dybowski's work on Lake Baikal. Prominent early American limnologists included G. Evelyn Hutchinson and Ed Deevey. At the University of Wisconsin-Madison, Edward A. Birge, Chancey Juday, Charles R. Goldman, and Arthur D. Hasler contributed to the development of the Center for Limnology. General limnology. Physical properties. Physical properties of aquatic ecosystems are determined by a combination of heat, currents, waves and other seasonal distributions of environmental conditions. The morphometry of a body of water depends on the type of feature (such as a lake, river, stream, wetland, estuary etc.) and the structure of the earth surrounding the body of water. Lakes, for instance, are classified by their formation, and zones of lakes are defined by water depth. River and stream system morphometry is driven by underlying geology of the area as well as the general velocity of the water. Stream morphometry is also influenced by topography (especially slope) as well as precipitation patterns and other factors such as vegetation and land development. Connectivity between streams and lakes relates to the landscape drainage density, lake surface area and lake shape. Other types of aquatic systems which fall within the study of limnology are estuaries. Estuaries are bodies of water classified by the interaction of a river and the ocean or sea.
Wetlands vary in size, shape, and pattern; however, the most common types—marshes, bogs, and swamps—often fluctuate between containing shallow freshwater and being dry, depending on the time of year. The volume and quality of water in underground aquifers rely on the vegetation cover, which fosters recharge and aids in maintaining water quality. Light interactions. Light zonation is the concept of how the amount of sunlight penetration into water influences the structure of a body of water. These zones define various levels of productivity within an aquatic ecosystem, such as a lake. For instance, the depth of the water column that sunlight is able to penetrate, and where most plant life is able to grow, is known as the photic or euphotic zone. The rest of the water column, which is deeper and does not receive sufficient amounts of sunlight for plant growth, is known as the aphotic zone. The amount of solar energy present underwater and the spectral quality of the light present at various depths have a significant impact on the behavior of many aquatic organisms. For example, zooplankton's vertical migration is influenced by solar energy levels. Thermal stratification. Similar to light zonation, thermal stratification or thermal zonation is a way of grouping parts of the water body within an aquatic system based on the temperature of different lake layers. The less turbid the water, the more light is able to penetrate, and thus heat is conveyed deeper in the water. Heating declines exponentially with depth in the water column, so the water will be warmest near the surface but progressively cooler moving downwards. There are three main sections that define thermal stratification in a lake. The epilimnion is closest to the water surface and absorbs long- and shortwave radiation to warm the water surface. During cooler months, wind shear can contribute to cooling of the water surface. The thermocline is an area within the water column where water temperatures rapidly decrease. The bottom layer is the hypolimnion, which tends to have the coldest water because its depth restricts sunlight from reaching it. In temperate lakes, fall-season cooling of surface water results in turnover of the water column, where the thermocline is disrupted, and the lake temperature profile becomes more uniform. In cold climates, when water cools below 4 °C (the temperature of maximum density), many lakes can experience an inverse thermal stratification in winter. These lakes are often dimictic, with a brief spring overturn in addition to the longer fall overturn. The relative thermal resistance is the energy needed to mix these strata of different temperatures. Lake Heat Budget. An annual heat budget, also shown as θa, is the total amount of heat needed to raise the water from its minimum winter temperature to its maximum summer temperature. This can be calculated by integrating the area of the lake at each depth interval (Az) multiplied by the difference between the summer (θsz) and winter (θwz) temperatures, that is, by evaluating formula_0 Az(θsz − θwz) dz over the full depth of the lake. Chemical properties. The chemical composition of water in aquatic ecosystems is influenced by natural characteristics and processes including precipitation, underlying soil and bedrock in the drainage basin, erosion, evaporation, and sedimentation. All bodies of water have a certain composition of both organic and inorganic elements and compounds. Biological reactions also affect the chemical properties of water.
In addition to natural processes, human activities strongly influence the chemical composition of aquatic systems and their water quality. "Allochthonous" sources of carbon or nutrients come from outside the aquatic system (such as plant and soil material). Carbon sources from within the system, such as algae and the microbial breakdown of aquatic particulate organic carbon, are "autochthonous". In aquatic food webs, the portion of biomass derived from allochthonous material is then named "allochthony". In streams and small lakes, allochthonous sources of carbon are dominant, while in large lakes and the ocean, autochthonous sources dominate. Oxygen and carbon dioxide. Dissolved oxygen and dissolved carbon dioxide are often discussed together due to their coupled role in respiration and photosynthesis. Dissolved oxygen concentrations can be altered by physical, chemical, and biological processes and reactions. Physical processes including wind mixing can increase dissolved oxygen concentrations, particularly in surface waters of aquatic ecosystems. Because dissolved oxygen solubility is linked to water temperatures, changes in temperature affect dissolved oxygen concentrations, as warmer water has a lower capacity to "hold" oxygen than colder water. Biologically, both photosynthesis and aerobic respiration affect dissolved oxygen concentrations. Photosynthesis by autotrophic organisms, such as phytoplankton and aquatic algae, increases dissolved oxygen concentrations while simultaneously reducing carbon dioxide concentrations, since carbon dioxide is taken up during photosynthesis. All aerobic organisms in the aquatic environment take up dissolved oxygen during aerobic respiration, while carbon dioxide is released as a byproduct of this reaction. Because photosynthesis is light-limited, both photosynthesis and respiration occur during the daylight hours, while only respiration occurs during dark hours or in dark portions of an ecosystem. The balance between dissolved oxygen production and consumption is calculated as the aquatic metabolism rate. Vertical changes in the concentrations of dissolved oxygen are affected by both wind mixing of surface waters and the balance between photosynthesis and respiration of organic matter. These vertical changes, known as profiles, are based on similar principles as thermal stratification and light penetration. As light availability decreases deeper in the water column, photosynthesis rates also decrease, and less dissolved oxygen is produced. This means that dissolved oxygen concentrations generally decrease as you move deeper into the body of water, because photosynthesis is not replenishing the dissolved oxygen that is being taken up through respiration. During periods of thermal stratification, water density gradients prevent oxygen-rich surface waters from mixing with deeper waters. Prolonged periods of stratification can result in the depletion of bottom-water dissolved oxygen; when dissolved oxygen concentrations are below 2 milligrams per liter, waters are considered hypoxic. When dissolved oxygen concentrations are approximately 0 milligrams per liter, conditions are anoxic. Both hypoxic and anoxic waters reduce available habitat for organisms that respire oxygen, and contribute to changes in other chemical reactions in the water. Nitrogen and phosphorus. Nitrogen and phosphorus are ecologically significant nutrients in aquatic systems.
Nitrogen is generally present as a gas in aquatic ecosystems; however, most water quality studies tend to focus on nitrate, nitrite, and ammonia levels. Most of these dissolved nitrogen compounds follow a seasonal pattern, with greater concentrations in the fall and winter months compared to the spring and summer. Phosphorus has a different role in aquatic ecosystems, as it is a limiting factor in the growth of phytoplankton because of its generally low concentrations in the water. Dissolved phosphorus is also crucial to all living things, is often very limiting to primary productivity in freshwater, and has its own distinctive ecosystem cycling. Biological properties. Role in ecology. Lakes "are relatively easy to sample, because they have clear-cut boundaries (compared to terrestrial ecosystems) and because field experiments are relatively easy to perform", which makes them especially useful for ecologists who try to understand ecological dynamics. Lake trophic classification. One way to classify lakes (or other bodies of water) is with the trophic state index. An oligotrophic lake is characterized by relatively low levels of primary production and low levels of nutrients. A eutrophic lake has high levels of primary productivity due to very high nutrient levels. Eutrophication of a lake can lead to algal blooms. Dystrophic lakes have high levels of humic matter and typically have yellow-brown, tea-coloured waters. These categories do not have rigid specifications; the classification system can be seen as more of a spectrum encompassing the various levels of aquatic productivity. Tropical limnology. Tropical limnology is a unique and important subfield of limnology that focuses on the distinct physical, chemical, biological, and cultural aspects of freshwater systems in tropical regions. The physical and chemical properties of tropical aquatic environments are different from those in temperate regions, with warmer and more stable temperatures, higher nutrient levels, and more complex ecological interactions. Moreover, the biodiversity of tropical freshwater systems is typically higher, human impacts are often more severe, and there are important cultural and socioeconomic factors that influence the use and management of these systems. Professional organizations. People who study limnology are called limnologists. These scientists largely study the characteristics of inland fresh-water systems such as lakes, rivers, streams, ponds and wetlands. They may also study non-oceanic bodies of salt water, such as the Great Salt Lake. There are many professional organizations related to limnology and other aspects of aquatic science, including the Association for the Sciences of Limnology and Oceanography, the International Society of Limnology, the Polish Limnological Society, the Society of Canadian Limnologists, and the Freshwater Biological Association. See also. &lt;templatestyles src="Div col/styles.css"/&gt; References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "\\displaystyle \\int" } ]
https://en.wikipedia.org/wiki?curid=145243
14526742
Canopy clustering algorithm
The canopy clustering algorithm is an unsupervised pre-clustering algorithm introduced by Andrew McCallum, Kamal Nigam and Lyle Ungar in 2000. It is often used as a preprocessing step for the K-means algorithm or the hierarchical clustering algorithm. It is intended to speed up clustering operations on large data sets, where using another algorithm directly may be impractical due to the size of the data set. Description. The algorithm proceeds as follows, using two thresholds formula_0 (the loose distance) and formula_1 (the tight distance), where formula_2: 1. Begin with the set of data points to be clustered. 2. Remove a point from the set, beginning a new "canopy" containing this point. 3. For each point left in the set, assign it to the new canopy if its distance to the first point of the canopy is less than the loose distance formula_0. 4. If the distance of the point is additionally less than the tight distance formula_1, remove it from the original set. 5. Repeat from step 2 until there are no more data points in the set to cluster. (A minimal example implementation is sketched below.) An important note is that individual data points may be part of several canopies. As an additional speed-up, an approximate and fast distance metric can be used for step 3, whereas a more accurate and slow distance metric can be used for step 4. Applicability. Since the algorithm uses distance functions and requires the specification of distance thresholds, its applicability for high-dimensional data is limited by the curse of dimensionality. Only when a cheap and approximate – low-dimensional – distance function is available will the produced canopies preserve the clusters produced by K-means. Its benefits include reducing the number of instances of data that must be compared at each step of the more expensive main clustering algorithm, and there is some evidence that the resulting clusters can be improved. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
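The example implementation referred to above is a minimal sketch only: it assumes Euclidean distance and NumPy, and the function name and parameters are illustrative rather than taken from the original paper.
import numpy as np

def canopy_clustering(points, t1, t2):
    # t1 is the loose distance and t2 the tight distance, with t1 > t2.
    # Returns a list of canopies, each a list of indices into `points`.
    assert t1 > t2
    points = np.asarray(points, dtype=float)
    remaining = list(range(len(points)))
    canopies = []
    while remaining:
        center = remaining.pop(0)          # step 2: start a new canopy from an arbitrary point
        canopy = [center]
        kept = []
        for idx in remaining:
            d = np.linalg.norm(points[idx] - points[center])  # a cheap, approximate metric in practice
            if d < t1:                     # step 3: loose assignment; a point may join several canopies
                canopy.append(idx)
            if d >= t2:                    # step 4: only points within the tight distance leave the pool
                kept.append(idx)
        remaining = kept
        canopies.append(canopy)
    return canopies

# Example: point 2 ends up in two canopies, illustrating overlapping membership.
data = [[0.0, 0.0], [0.2, 0.1], [0.9, 1.0], [5.0, 5.0], [5.1, 4.9]]
print(canopy_clustering(data, t1=2.0, t2=0.5))  # [[0, 1, 2], [2], [3, 4]]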
[ { "math_id": 0, "text": "T_1" }, { "math_id": 1, "text": "T_2" }, { "math_id": 2, "text": "T_1 > T_2" } ]
https://en.wikipedia.org/wiki?curid=14526742
14526859
Installment sales method
The installment sales method is one of several approaches used to recognize revenue under the US GAAP, specifically when revenue and expense are recognized at the time of cash collection rather than at the time of sale. Under the US GAAP, it is the principal method of revenue recognition when the recognition occurs subsequently to the sale. Installment sales method. The installment sales method is used to recognize revenue after the sale has occurred and when sales are stipulated under very extended cash collection terms. In general, when the risk of not being able to collect is reasonably high and when there is no reasonable basis for estimating the proportion of installment accounts, revenue recognition is deferred, and the installment sales method is used. The installment sales method is typically used to account for sales of consumer durables, retail land sales, and retirement property. Under the cost recovery method, another method to recognize income after the sale is made, no profit is recognized until all the costs are recovered. Calculation under the installment sales method. The installment sales method recognizes revenue and income proportionately as cash is collected. The amount recognized in any period is thus based on two factors: the gross profit percentage on the sale (formula_0) and the amount of cash collected during the period. Below is an example of the calculation of installment sales for the years 2009 and 2010. Assume that installment sales made in 2009 carry a total gross profit of $360,000 and a gross profit percentage of 30%, and that $300,000 of cash is collected on these sales during 2009. The income recognized in 2009 equals cash collections in 2009 multiplied by the gross profit percentage in 2009 and is calculated as follows: $300,000 × 30% = $90,000 Such income is shown on the 2009 income statement as 2009 income from installment sales. The deferred gross profit is an A/R contra-account and is the difference between gross profit and recognized income and is calculated as follows: $360,000 − $90,000 = $270,000 The deferred gross profit is thus deferred and recognized in income in subsequent periods, i.e. when the installment receivables are collected in cash. A more comprehensive table would clearly show gross profit and deferred income recognized for each year: 2009 and 2010. Installment sales and the related costs of goods sold must be tracked by individual year in order to compute the gross profit percentage that applies to each year. Furthermore, the accounting system must correctly match the cash collections with the specific sales year so that the correct gross profit percentage is applied. On the balance sheet, "the accounts receivable - installment sales" is classified as a current asset if it is due within 12 months of the balance sheet. Otherwise, it is classified as a long-term asset. Under the GAAP, the interest component of the periodic cash proceeds is computed separately. In fact, interest payments are not considered when the recognized gross profit is computed on installment sales. Certain procedures differentiate between principal and interest payments on customer receivables. Comparison to the cash and accrual method. Cash method – The cash method requires that an amount be included in gross income when it is actually or constructively received. The installment method allows greater deferral when the payment is received in the form of a negotiable note. The cash method does not allow for differentiating between cost recovery and gain. Accrual method – The accrual method requires income to be recognized as soon as the taxpayer has a right to the income regardless of when the payment is actually received.
As such, the taxpayer would have to recognize the full amount of the sale despite the fact that the purchase price may not be paid in full for years.
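To make the arithmetic of the 2009 example above concrete, the following short Python sketch reproduces the recognized and deferred amounts; the function name and the 2010 collection figure are hypothetical illustrations rather than part of the source example.
def installment_income(gross_profit_pct, collections_by_year):
    # Installment sales method: recognize gross profit in proportion to cash collected each year.
    return {year: cash * gross_profit_pct for year, cash in collections_by_year.items()}

# 2009 installment sales carry $360,000 of total gross profit at a 30% gross profit percentage.
collections = {2009: 300_000, 2010: 400_000}  # the 2010 figure is hypothetical
income = installment_income(0.30, collections)
print(income[2009])               # 90000.0 recognized as 2009 income from installment sales
print(360_000 - income[2009])     # 270000.0 deferred gross profit at the end of 2009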
[ { "math_id": 0, "text": "\\frac{Gross Profit}{Sales}" } ]
https://en.wikipedia.org/wiki?curid=14526859
14527587
Average treatment effect
Measure used to compare treatments in randomised trials The average treatment effect (ATE) is a measure used to compare treatments (or interventions) in randomized experiments, evaluation of policy interventions, and medical trials. The ATE measures the difference in mean (average) outcomes between units assigned to the treatment and units assigned to the control. In a randomized trial (i.e., an experimental study), the average treatment effect can be estimated from a sample using a comparison in mean outcomes for treated and untreated units. However, the ATE is generally understood as a causal parameter (i.e., an estimate or property of a population) that a researcher desires to know, defined without reference to the study design or estimation procedure. Both observational studies and experimental study designs with random assignment may enable one to estimate an ATE in a variety of ways. The average treatment effect is, under some conditions, directly related to the partial dependence plot. General definition. Originating from early statistical analysis in the fields of agriculture and medicine, the term "treatment" is now applied, more generally, to other fields of natural and social science, especially psychology, political science, and economics (for example, in the evaluation of the impact of public policies). The nature of a treatment or outcome is relatively unimportant in the estimation of the ATE—that is to say, calculation of the ATE requires that a treatment be applied to some units and not others, but the nature of that treatment (e.g., a pharmaceutical, an incentive payment, a political advertisement) is irrelevant to the definition and estimation of the ATE. The expression "treatment effect" refers to the causal effect of a given treatment or intervention (for example, the administering of a drug) on an outcome variable of interest (for example, the health of the patient). In the Neyman-Rubin "potential outcomes framework" of causality a treatment effect is defined for each individual unit in terms of two "potential outcomes." Each unit has one outcome that would manifest if the unit were exposed to the treatment and another outcome that would manifest if the unit were exposed to the control. The "treatment effect" is the difference between these two potential outcomes. However, this individual-level treatment effect is unobservable because individual units can only receive the treatment or the control, but not both. Random assignment to treatment ensures that units assigned to the treatment and units assigned to the control are identical (over a large number of iterations of the experiment). Indeed, units in both groups have identical distributions of covariates and potential outcomes. Thus the average outcome among the treatment units serves as a counterfactual for the average outcome among the control units. The difference between these two averages is the ATE, which is an estimate of the central tendency of the distribution of unobservable individual-level treatment effects. If a sample is randomly constituted from a population, the sample ATE (abbreviated SATE) is also an estimate of the population ATE (abbreviated PATE). While an experiment ensures, in expectation, that potential outcomes (and all covariates) are equivalently distributed in the treatment and control groups, this is not the case in an observational study.
In an observational study, units are not assigned to treatment and control randomly, so their assignment to treatment may depend on unobserved or unobservable factors. Observed factors can be statistically controlled (e.g., through regression or matching), but any estimate of the ATE could be confounded by unobservable factors that influenced which units received the treatment versus the control. Formal definition. In order to define the ATE formally, we define two potential outcomes: formula_0 is the value of the outcome variable for individual formula_1 if they are not treated, and formula_2 is the value of the outcome variable for individual formula_1 if they are treated. For example, formula_0 is the health status of the individual if they are not administered the drug under study and formula_2 is the health status if they are administered the drug. The treatment effect for individual formula_1 is given by formula_3. In the general case, there is no reason to expect this effect to be constant across individuals. The average treatment effect is given by formula_4 and can be estimated (if a law of large numbers holds) as formula_5 where the summation occurs over all formula_6 individuals in the population. If we could observe, for each individual, formula_2 and formula_0 among a large representative sample of the population, we could estimate the ATE simply by taking the average value of formula_7 across the sample. However, we cannot observe both formula_2 and formula_0 for each individual since an individual cannot be both treated and not treated. For example, in the drug example, we can only observe formula_2 for individuals who have received the drug and formula_0 for those who did not receive it. This is the main problem faced by scientists in the evaluation of treatment effects and has triggered a large body of estimation techniques. Estimation. Depending on the data and its underlying circumstances, many methods can be used to estimate the ATE. The most common approaches rely either on random assignment, as in an experiment, or on statistically controlling for observed confounders in observational data, for example through regression or matching. An example. Consider an example where all units are unemployed individuals, and some experience a policy intervention (the treatment group), while others do not (the control group). The causal effect of interest is the impact a job search monitoring policy (the treatment) has on the length of an unemployment spell: On average, how much shorter would one's unemployment be if they experienced the intervention? The ATE, in this case, is the difference in expected values (means) of the treatment and control groups' length of unemployment. A positive ATE, in this example, would suggest that the job policy increased the length of unemployment. A negative ATE would suggest that the job policy decreased the length of unemployment. An ATE estimate equal to zero would suggest that there was no advantage or disadvantage to providing the treatment in terms of the length of unemployment. Determining whether an ATE estimate is distinguishable from zero (either positively or negatively) requires statistical inference. Because the ATE is an estimate of the average effect of the treatment, a positive or negative ATE does not indicate that any particular individual would benefit or be harmed by the treatment. Thus the average treatment effect neglects the distribution of the treatment effect. Some parts of the population might be worse off with the treatment even if the mean effect is positive. Heterogeneous treatment effects.
Some researchers call a treatment effect "heterogeneous" if it affects different individuals differently (heterogeneously). For example, perhaps the above treatment of a job search monitoring policy affected men and women differently, or people who live in different states differently. The ATE requires a strong assumption known as the stable unit treatment value assumption (SUTVA), which requires that the value of the potential outcome formula_8 be unaffected by the mechanism used to assign the treatment and by the treatment exposure of all other individuals. Let formula_9 be the treatment; the treatment effect for individual formula_1 is then given by formula_10. The SUTVA assumption allows us to declare formula_11. One way to look for heterogeneous treatment effects is to divide the study data into subgroups (e.g., men and women, or by state), and see if the average treatment effects are different by subgroup. If the average treatment effects are different, SUTVA is violated. A per-subgroup ATE is called a "conditional average treatment effect" (CATE), i.e. the ATE conditioned on membership in the subgroup. CATE can be used as an estimate if SUTVA does not hold. A challenge with this approach is that each subgroup may have substantially less data than the study as a whole, so if the study has been powered to detect the main effects without subgroup analysis, there may not be enough data to properly judge the effects on subgroups. There is some work on detecting heterogeneous treatment effects using random forests as well as detecting heterogeneous subpopulations using cluster analysis. Recently, metalearning approaches have been developed that use arbitrary regression frameworks as base learners to infer the CATE. Representation learning can be used to further improve the performance of these methods. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
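As an illustration of the difference-in-means estimator and the per-subgroup CATE discussed above, the following is a minimal Python sketch on simulated data; the function names, the data-generating process, and the figures are hypothetical.
import numpy as np

def ate_difference_in_means(y, treated):
    # Difference in mean outcomes between treated and control units (randomized assignment assumed).
    y, treated = np.asarray(y, dtype=float), np.asarray(treated, dtype=bool)
    return y[treated].mean() - y[~treated].mean()

def cate_by_subgroup(y, treated, group):
    # Conditional ATE: the same estimator applied within each subgroup.
    y, treated, group = np.asarray(y), np.asarray(treated), np.asarray(group)
    return {g: ate_difference_in_means(y[group == g], treated[group == g])
            for g in np.unique(group)}

# Simulated example: weeks of unemployment, random assignment, and one covariate.
rng = np.random.default_rng(0)
treated = rng.integers(0, 2, 1000).astype(bool)
group = rng.choice(["men", "women"], 1000)
y = 20 - 3 * treated - 1 * (treated & (group == "women")) + rng.normal(0, 2, 1000)
print(ate_difference_in_means(y, treated))   # roughly -3.5: the treatment shortens unemployment
print(cate_by_subgroup(y, treated, group))   # subgroup effects near -3 (men) and -4 (women)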
[ { "math_id": 0, "text": "y_{0}(i)" }, { "math_id": 1, "text": "i" }, { "math_id": 2, "text": "y_{1}(i)" }, { "math_id": 3, "text": "y_{1}(i)-y_{0}(i)=\\beta(i)" }, { "math_id": 4, "text": "\\text{ATE} = \\mathbb{E}[y_{1}-y_{0}]" }, { "math_id": 5, "text": "\\widehat{ATE} = \\frac{1}{N}\\sum_i (y_{1}(i)-y_{0}(i))" }, { "math_id": 6, "text": "N" }, { "math_id": 7, "text": "y_{1}(i)-y_{0}(i)" }, { "math_id": 8, "text": "y(i)" }, { "math_id": 9, "text": "d" }, { "math_id": 10, "text": "y_{1}(i,d)-y_{0}(i,d)" }, { "math_id": 11, "text": "y_{1}(i,d) = y_{1}(i), y_{0}(i,d)=y_{0}(i)" } ]
https://en.wikipedia.org/wiki?curid=14527587
14529261
Rademacher complexity
Measure of complexity of real-valued functions In computational learning theory (machine learning and theory of computation), Rademacher complexity, named after Hans Rademacher, measures the richness of a class of sets with respect to a probability distribution. The concept can also be extended to real-valued functions. Definitions. Rademacher complexity of a set. Given a set formula_0, the Rademacher complexity of "A" is defined as follows: formula_1 where formula_2 are independent random variables drawn from the Rademacher distribution, i.e. formula_3 for formula_4, and formula_5. Some authors take the absolute value of the sum before taking the supremum, but if formula_6 is symmetric this makes no difference. Rademacher complexity of a function class. Let formula_7 be a sample of points and consider a function class formula_8 of real-valued functions over formula_9. Then, the empirical Rademacher complexity of formula_8 given formula_10 is defined as: formula_11 This can also be written using the previous definition: formula_12 where formula_13 denotes function composition, i.e.: formula_14 Let formula_15 be a probability distribution over formula_9. The Rademacher complexity of the function class formula_8 with respect to formula_15 for sample size formula_16 is: formula_17 where the above expectation is taken over an independent and identically distributed (i.i.d.) sample formula_18 generated according to formula_15. Intuition. The Rademacher complexity is typically applied to a function class of models that are used for classification, with the goal of measuring their ability to classify points drawn from a probability space under arbitrary labellings. When the function class is rich enough, it contains functions that can appropriately adapt for each arrangement of labels, simulated by the random draw of formula_19 under the expectation, so that this quantity in the sum is maximised. Examples. 1. formula_6 contains a single vector, e.g., formula_20. Then: formula_21 The same is true for every singleton hypothesis class. 2. formula_6 contains two vectors, e.g., formula_22. Then: formula_23 Using the Rademacher complexity. The Rademacher complexity can be used to derive data-dependent upper bounds on the learnability of function classes. Intuitively, a function class with smaller Rademacher complexity is easier to learn. Bounding the representativeness. In machine learning, it is desired to have a training set that represents the true distribution of some sample data formula_10. This can be quantified using the notion of representativeness. Denote by formula_15 the probability distribution from which the samples are drawn. Denote by formula_24 the set of hypotheses (potential classifiers) and denote by formula_25 the corresponding set of error functions, i.e., for every hypothesis formula_26, there is a function formula_27 that maps each training sample (features, label) to the error of the classifier formula_28 (note that in this case hypothesis and classifier are used interchangeably). For example, in the case that formula_28 represents a binary classifier, the error function is a 0–1 loss function, i.e. the error function formula_29 returns 0 if formula_28 correctly classifies a sample and 1 otherwise. We omit the index and write formula_30 instead of formula_29 when the underlying hypothesis is irrelevant.
Define: formula_31 – the expected error of some error function formula_32 on the real distribution formula_15; formula_33 – the estimated error of some error function formula_32 on the sample formula_10. The representativeness of the sample formula_10, with respect to formula_15 and formula_25, is defined as: formula_34 Smaller representativeness is better, since it provides a way to avoid overfitting: it means that the true error of a classifier is not much higher than its estimated error, and so selecting a classifier that has low estimated error will ensure that the true error is also low. Note, however, that the concept of representativeness is relative and hence cannot be compared across distinct samples. The expected representativeness of a sample can be bounded above by the Rademacher complexity of the function class: formula_35 Bounding the generalization error. When the Rademacher complexity is small, it is possible to learn the hypothesis class H using empirical risk minimization. For example (with a binary error function), for every formula_36, with probability at least formula_37, for every hypothesis formula_26: formula_38 Bounding the Rademacher complexity. Since smaller Rademacher complexity is better, it is useful to have upper bounds on the Rademacher complexity of various function sets. The following rules can be used to upper-bound the Rademacher complexity of a set formula_39. 1. If all vectors in formula_6 are translated by a constant vector formula_40, then Rad("A") does not change. 2. If all vectors in formula_6 are multiplied by a scalar formula_41, then Rad("A") is multiplied by formula_42. 3. formula_43. 4. (Kakade & Tewari Lemma) If all vectors in formula_6 are operated on by a Lipschitz function, then Rad("A") is (at most) multiplied by the Lipschitz constant of the function. In particular, if all vectors in formula_6 are operated on by a contraction mapping, then Rad("A") strictly decreases. 5. The Rademacher complexity of the convex hull of formula_6 equals Rad("A"). 6. (Massart Lemma) The Rademacher complexity of a finite set grows logarithmically with the set size. Formally, let formula_6 be a set of formula_44 vectors in formula_45, and let formula_46 be the mean of the vectors in formula_6. Then: formula_47 In particular, if formula_6 is a set of binary vectors, the norm is at most formula_48, so: formula_49 Bounds related to the VC dimension. Let formula_24 be a set family whose VC dimension is formula_50. It is known that the growth function of formula_24 is bounded as: for all formula_51: formula_52 This means that, for every set formula_28 with at most formula_16 elements, formula_53. The set family formula_54 can be considered as a set of binary vectors over formula_45. Substituting this in Massart's lemma gives: formula_55 With more advanced techniques (Dudley's entropy bound and Haussler's upper bound) one can show, for example, that there exists a constant formula_56, such that any class of formula_57-indicator functions with Vapnik–Chervonenkis dimension formula_50 has Rademacher complexity upper-bounded by formula_58. Bounds related to linear classes. The following bounds are related to linear operations on formula_10 – a constant set of formula_16 vectors in formula_59. 1. Define formula_60 the set of dot-products of the vectors in formula_10 with vectors in the unit ball. Then: formula_61 2. Define formula_62 the set of dot-products of the vectors in formula_10 with vectors in the unit ball of the 1-norm.
Then: formula_63 Bounds related to covering numbers. The following bound relates the Rademacher complexity of a set formula_6 to its external covering number – the number of balls of a given radius formula_64 whose union contains formula_6. The bound is attributed to Dudley. Suppose formula_65 is a set of vectors whose length (norm) is at most formula_66. Then, for every integer formula_67: formula_68 In particular, if formula_6 lies in a "d"-dimensional subspace of formula_45, then: formula_69 Substituting this in the previous bound gives the following bound on the Rademacher complexity: formula_70 Gaussian complexity. Gaussian complexity is a similar complexity with similar physical meanings, and can be obtained from the Rademacher complexity using the random variables formula_71 instead of formula_19, where formula_71 are Gaussian i.i.d. random variables with zero mean and variance 1, i.e. formula_72. Gaussian and Rademacher complexities are known to be equivalent up to logarithmic factors. Equivalence of Rademacher and Gaussian complexity. Given a set formula_73, it holds that: formula_74 where formula_75 is the Gaussian complexity of "A". As an example, consider the Rademacher and Gaussian complexities of the L1 ball. The Rademacher complexity is given by exactly 1, whereas the Gaussian complexity is on the order of formula_76 (which can be shown by applying known properties of suprema of a set of subgaussian random variables). References. &lt;templatestyles src="Reflist/styles.css" /&gt;
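The definition above can also be checked numerically. The following is a small Monte Carlo sketch (the function name is illustrative, and NumPy is assumed) that reproduces the value 1/4 obtained analytically for the two-vector example in the Examples section.
import numpy as np

def rademacher_complexity(A, n_draws=200_000, seed=0):
    # Monte Carlo estimate of Rad(A) = E_sigma[ sup_{a in A} (1/m) sum_i sigma_i a_i ]
    # for a finite set A of vectors in R^m.
    A = np.asarray(A, dtype=float)                        # shape (number of vectors, m)
    m = A.shape[1]
    rng = np.random.default_rng(seed)
    sigma = rng.choice([-1.0, 1.0], size=(n_draws, m))    # i.i.d. Rademacher signs
    sups = (sigma @ A.T).max(axis=1)                      # sup over a in A of <sigma, a>
    return sups.mean() / m

# Example 2 above: A = {(1, 1), (1, 2)} has Rad(A) = 1/4.
print(rademacher_complexity([[1, 1], [1, 2]]))            # approximately 0.25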
[ { "math_id": 0, "text": "A\\subseteq \\mathbb{R}^m" }, { "math_id": 1, "text": "\n\\operatorname{Rad}(A)\n:= \n\\frac{1}{m}\n \\mathbb{E}_\\sigma \\left[\n \\sup_{a \\in A}\n \\sum_{i=1}^m \\sigma_i a_i\n\\right]\n" }, { "math_id": 2, "text": "\\sigma_1, \\sigma_2, \\dots, \\sigma_m" }, { "math_id": 3, "text": "\\Pr(\\sigma_i = +1) = \\Pr(\\sigma_i = -1) = 1/2" }, { "math_id": 4, "text": "i=1,2,\\dots,m" }, { "math_id": 5, "text": " a=(a_1, \\ldots, a_m)" }, { "math_id": 6, "text": "A" }, { "math_id": 7, "text": "S=\\{z_1, z_2, \\dots, z_m\\} \\subset Z" }, { "math_id": 8, "text": "\\mathcal{F}" }, { "math_id": 9, "text": "Z" }, { "math_id": 10, "text": "S" }, { "math_id": 11, "text": "\n\\operatorname{Rad}_S(\\mathcal{F}) \n= \n\\frac{1}{m}\n \\mathbb{E}_\\sigma \\left[\n \\sup_{f \\in \\mathcal{F}}\n \\sum_{i=1}^m \\sigma_i f(z_i) \n\\right]\n" }, { "math_id": 12, "text": "\\operatorname{Rad}_S(\\mathcal{F}) = \\operatorname{Rad}(\\mathcal{F} \\circ S) " }, { "math_id": 13, "text": "\\mathcal{F} \\circ S" }, { "math_id": 14, "text": "\\mathcal{F} \\circ S := \\{ (f(z_1),\\ldots,f(z_m))\\mid f\\in \\mathcal{F}\\}" }, { "math_id": 15, "text": "P" }, { "math_id": 16, "text": "m" }, { "math_id": 17, "text": "\n\\operatorname{Rad}_{P,m}(\\mathcal{F}) \n:= \n\\mathbb{E}_{S\\sim P^m} \\left[ \\operatorname{Rad}_S(\\mathcal{F}) \\right]\n" }, { "math_id": 18, "text": "S=(z_1, z_2, \\dots, z_m)" }, { "math_id": 19, "text": "\\sigma_i" }, { "math_id": 20, "text": "A = \\{(a,b)\\} \\subset \\mathbb{R}^2" }, { "math_id": 21, "text": "\\operatorname{Rad}(A) = {1\\over 2}\\cdot \\left({1\\over 4}\\cdot(a+b) + {1\\over 4}\\cdot(a-b) + {1\\over 4}\\cdot(-a+b) + {1\\over 4}\\cdot(-a-b)\\right) = 0" }, { "math_id": 22, "text": "A = \\{(1,1),(1,2)\\} \\subset \\mathbb{R}^2" }, { "math_id": 23, "text": "\n\\begin{align}\n\\operatorname{Rad}(A) & = {1\\over 2}\\cdot \\left({1\\over 4}\\cdot\\max(1+1, 1+2) + {1\\over 4}\\cdot\\max(1-1, 1-2) + {1\\over 4} \\cdot \\max(-1+1, -1+2) + {1\\over 4}\\cdot\\max(-1-1, -1-2)\\right) \\\\[5pt]\n& = {1\\over 8}(3+0+1-2) = {1\\over 4}\n\\end{align}\n" }, { "math_id": 24, "text": "H" }, { "math_id": 25, "text": "F" }, { "math_id": 26, "text": "h\\in H" }, { "math_id": 27, "text": "f_h\\in F" }, { "math_id": 28, "text": "h" }, { "math_id": 29, "text": "f_h" }, { "math_id": 30, "text": "f" }, { "math_id": 31, "text": "L_P(f) := \\mathbb E_{z\\sim P}[f(z)]" }, { "math_id": 32, "text": "f\\in F" }, { "math_id": 33, "text": "L_S(f) := {1\\over m} \\sum_{i=1}^m f(z_i)" }, { "math_id": 34, "text": " \\operatorname{Rep}_P(F,S) := \\sup_{f\\in F} (L_P(f) - L_S(f))" }, { "math_id": 35, "text": " \\mathbb E_{S\\sim P^m} [\\operatorname{Rep}_P(F,S)] \\leq 2 \\cdot \\mathbb E_{S\\sim P^m} [\\operatorname{Rad}(F\\circ S)]" }, { "math_id": 36, "text": "\\delta>0" }, { "math_id": 37, "text": "1-\\delta" }, { "math_id": 38, "text": "L_P(h) - L_S(h) \\leq 2 \\operatorname{Rad}(F\\circ S) + 4 \\sqrt{2\\ln(4/\\delta)\\over m}" }, { "math_id": 39, "text": "A \\subset \\mathbb{R}^m" }, { "math_id": 40, "text": "a_0 \\in \\mathbb{R}^m" }, { "math_id": 41, "text": "c\\in \\mathbb{R}" }, { "math_id": 42, "text": "|c|" }, { "math_id": 43, "text": " \\operatorname{Rad}(A+B) = \\operatorname{Rad}(A) + \\operatorname{Rad}(B)" }, { "math_id": 44, "text": "N" }, { "math_id": 45, "text": "\\mathbb{R}^m" }, { "math_id": 46, "text": "\\bar{a}" }, { "math_id": 47, "text": "\\operatorname{Rad}(A) \\leq \\max_{a\\in A} \\|a-\\bar{a}\\| \\cdot {\\sqrt{2\\log N}\\over m}" }, { "math_id": 48, "text": 
"\\sqrt{m}" }, { "math_id": 49, "text": "\\operatorname{Rad}(A) \\leq \\sqrt{2\\log N \\over m} " }, { "math_id": 50, "text": "d" }, { "math_id": 51, "text": "m>d+1" }, { "math_id": 52, "text": "\\operatorname{Growth}(H,m)\\leq (em/d)^d" }, { "math_id": 53, "text": "|H\\cap h|\\leq (em/d)^d" }, { "math_id": 54, "text": "H\\cap h" }, { "math_id": 55, "text": "\\operatorname{Rad}(H\\cap h) \\leq {\\sqrt{2 d \\log(em/d) \\over m}}" }, { "math_id": 56, "text": "C" }, { "math_id": 57, "text": "\\{0,1\\}" }, { "math_id": 58, "text": "C\\sqrt{\\frac{d}{m}}" }, { "math_id": 59, "text": "\\mathbb{R}^n" }, { "math_id": 60, "text": "A_2 = \\{(w\\cdot x_1,\\ldots,w\\cdot x_m) \\mid \\|w\\|_2\\leq 1\\} = " }, { "math_id": 61, "text": "\\operatorname{Rad}(A_2) \\leq {\\max_i\\|x_i\\|_2 \\over \\sqrt{m}}" }, { "math_id": 62, "text": "A_1 = \\{(w\\cdot x_1,\\ldots,w\\cdot x_m) \\mid \\|w\\|_1\\leq 1\\} = " }, { "math_id": 63, "text": "\\operatorname{Rad}(A_1) \\leq \\max_i\\|x_i\\|_\\infty\\cdot \\sqrt{2\\log(2n) \\over m}" }, { "math_id": 64, "text": "r" }, { "math_id": 65, "text": "A\\subset \\mathbb{R}^m" }, { "math_id": 66, "text": "c" }, { "math_id": 67, "text": "M>0" }, { "math_id": 68, "text": "\n\\operatorname{Rad}(A) \\leq \n{c\\cdot 2^{-M}\\over \\sqrt{m}}\n+\n{6c \\over m}\\cdot\n\\sum_{i=1}^M 2^{-i}\\sqrt{\\log\\left(N^{\\text{ext}}_{c\\cdot 2^{-i}}(A)\\right)} \n" }, { "math_id": 69, "text": "\\forall r>0: N^{\\text{ext}}_r(A) \\leq (2 c \\sqrt{d}/r)^d" }, { "math_id": 70, "text": "\n\\operatorname{Rad}(A) \\leq \n{6c \\over m}\\cdot\n\\bigg(\\sqrt{d\\log(2\\sqrt{d})} + 2\\sqrt{d}\\bigg)\n=\nO\\bigg({c\\sqrt{d\\log(d)}\\over m}\\bigg)\n" }, { "math_id": 71, "text": "g_i" }, { "math_id": 72, "text": "g_i \\sim \\mathcal{N}(0,1)" }, { "math_id": 73, "text": "A\\subseteq\\mathbb{R}^n" }, { "math_id": 74, "text": "\\frac{G(A)}{2\\sqrt{\\log{n}}} \\leq \\text{Rad}(A) \\leq \\sqrt{\\frac{\\pi}{2}}G(A)" }, { "math_id": 75, "text": "G(A)" }, { "math_id": 76, "text": "\\sqrt {\\log d}" } ]
https://en.wikipedia.org/wiki?curid=14529261
1452960
Hankel transform
Mathematical operation In mathematics, the Hankel transform expresses any given function "f"("r") as the weighted sum of an infinite number of Bessel functions of the first kind "Jν"("kr"). The Bessel functions in the sum are all of the same order ν, but differ in a scaling factor "k" along the "r" axis. The necessary coefficient "Fν" of each Bessel function in the sum, as a function of the scaling factor "k" constitutes the transformed function. The Hankel transform is an integral transform and was first developed by the mathematician Hermann Hankel. It is also known as the Fourier–Bessel transform. Just as the Fourier transform for an infinite interval is related to the Fourier series over a finite interval, so the Hankel transform over an infinite interval is related to the Fourier–Bessel series over a finite interval. Definition. The Hankel transform of order formula_0 of a function "f"("r") is given by formula_1 where formula_2 is the Bessel function of the first kind of order formula_0 with formula_3. The inverse Hankel transform of "Fν"("k") is defined as formula_4 which can be readily verified using the orthogonality relationship described below. Domain of definition. Inverting a Hankel transform of a function "f"("r") is valid at every point at which "f"("r") is continuous, provided that the function is defined in (0, ∞), is piecewise continuous and of bounded variation in every finite subinterval in (0, ∞), and formula_5 However, like the Fourier transform, the domain can be extended by a density argument to include some functions whose above integral is not finite, for example formula_6. Alternative definition. An alternative definition says that the Hankel transform of "g"("r") is formula_7 The two definitions are related: If formula_8, then formula_9 This means that, as with the previous definition, the Hankel transform defined this way is also its own inverse: formula_10 The obvious domain now has the condition formula_11 but this can be extended. According to the reference given above, we can take the integral as the limit as the upper limit goes to infinity (an improper integral rather than a Lebesgue integral), and in this way the Hankel transform and its inverse work for all functions in L2(0, ∞). Transforming Laplace's equation. The Hankel transform can be used to transform and solve Laplace's equation expressed in cylindrical coordinates. Under the Hankel transform, the Bessel operator becomes a multiplication by formula_12. In the axisymmetric case, the partial differential equation is transformed as formula_13 where formula_14. Therefore, the Laplacian in cylindrical coordinates becomes an ordinary differential equation in the transformed function formula_15. Orthogonality. The Bessel functions form an orthogonal basis with respect to the weighting factor "r": formula_16 The Plancherel theorem and Parseval's theorem. If "f"("r") and "g"("r") are such that their Hankel transforms "Fν"("k") and "Gν"("k") are well defined, then the Plancherel theorem states formula_17 Parseval's theorem, which states formula_18 is a special case of the Plancherel theorem. These theorems can be proven using the orthogonality property. Relation to the multidimensional Fourier transform. The Hankel transform appears when one writes the multidimensional Fourier transform in hyperspherical coordinates, which is the reason why the Hankel transform often appears in physical problems with cylindrical or spherical symmetry. Consider a function formula_19 of a formula_20-dimensional vector r. 
Its formula_20-dimensional Fourier transform is defined asformula_21To rewrite it in hyperspherical coordinates, we can use the decomposition of a plane wave into formula_20-dimensional hyperspherical harmonics formula_22:formula_23where formula_24 and formula_25 are the sets of all hyperspherical angles in the formula_26-space and formula_27-space. This gives the following expression for the formula_20-dimensional Fourier transform in hyperspherical coordinates:formula_28If we expand formula_19 and formula_29 in hyperspherical harmonics:formula_30the Fourier transform in hyperspherical coordinates simplifies toformula_31This means that functions with angular dependence in form of a hyperspherical harmonic retain it upon the multidimensional Fourier transform, while the radial part undergoes the Hankel transform (up to some extra factors like formula_32). Special cases. Fourier transform in two dimensions. If a two-dimensional function "f"(r) is expanded in a multipole series, formula_33 then its two-dimensional Fourier transform is given byformula_34whereformula_35is the formula_36-th order Hankel transform of formula_37 (in this case formula_36 plays the role of the angular momentum, which was denoted by formula_38 in the previous section). Fourier transform in three dimensions. If a three-dimensional function "f"(r) is expanded in a multipole series over spherical harmonics, formula_39 then its three-dimensional Fourier transform is given byformula_40whereformula_41is the Hankel transform of formula_42 of order formula_43. This kind of Hankel transform of half-integer order is also known as the spherical Bessel transform. Fourier transform in "d" dimensions (radially symmetric case). If a "d"-dimensional function "f"("r") does not depend on angular coordinates, then its "d"-dimensional Fourier transform "F"("k") also does not depend on angular coordinates and is given byformula_44which is the Hankel transform of formula_45 of order formula_46 up to a factor of formula_47. 2D functions inside a limited radius. If a two-dimensional function "f"(r) is expanded in a multipole series and the expansion coefficients "fm" are sufficiently smooth near the origin and zero outside a radius R, the radial part "f"("r")/"rm" may be expanded into a power series of 1 − ("r"/"R")^2: formula_48 such that the two-dimensional Fourier transform of "f"(r) becomes formula_49 where the last equality follows from §6.567.1 of. The expansion coefficients "fm,t" are accessible with discrete Fourier transform techniques: if the radial distance is scaled with formula_50 the Fourier-Chebyshev series coefficients "g" emerge as formula_51 Using the re-expansion formula_52 yields "f""m,t" expressed as sums of "g""m,j". This is one flavor of fast Hankel transform techniques. Relation to the Fourier and Abel transforms. The Hankel transform is one member of the FHA cycle of integral operators. In two dimensions, if we define A as the Abel transform operator, F as the Fourier transform operator, and H as the zeroth-order Hankel transform operator, then the special case of the projection-slice theorem for circularly symmetric functions states that formula_53 In other words, applying the Abel transform to a 1-dimensional function and then applying the Fourier transform to that result is the same as applying the Hankel transform to that function. This concept can be extended to higher dimensions. Numerical evaluation. 
A simple and efficient approach to the numerical evaluation of the Hankel transform is based on the observation that it can be cast in the form of a convolution by a logarithmic change of variables formula_54 In these new variables, the Hankel transform reads formula_55 where formula_56 formula_57 formula_58 Now the integral can be calculated numerically with formula_59 complexity using the fast Fourier transform. The algorithm can be further simplified by using a known analytical expression for the Fourier transform of formula_60: formula_61 The optimal choice of parameters formula_62 depends on the properties of formula_63 in particular its asymptotic behavior at formula_64 and formula_65 This algorithm is known as the "quasi-fast Hankel transform", or simply "fast Hankel transform". Since it is based on the fast Fourier transform in logarithmic variables, formula_66 has to be defined on a logarithmic grid. For functions defined on a uniform grid, a number of other algorithms exist, including straightforward quadrature, methods based on the projection-slice theorem, and methods using the asymptotic expansion of Bessel functions. Some Hankel transform pairs. "Kn"("z") is a modified Bessel function of the second kind. "K"("z") is the complete elliptic integral of the first kind. The expression formula_67 coincides with the expression for the Laplace operator in polar coordinates ("k", "θ") applied to a spherically symmetric function "F"0("k"). The Hankel transforms of Zernike polynomials are essentially Bessel functions (Noll 1976): formula_68 for even "n" − "m" ≥ 0. References. &lt;templatestyles src="Reflist/styles.css" /&gt; &lt;templatestyles src="Div col/styles.css"/&gt;
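As a complement to the description above, the following is a straightforward-quadrature sketch of the defining integral for functions sampled on a uniform grid; it is not the quasi-fast algorithm, it assumes NumPy and SciPy, and the function name is illustrative. It is checked against the known zeroth-order pair in which exp(-r^2/2) transforms to exp(-k^2/2).
import numpy as np
from scipy.integrate import trapezoid
from scipy.special import jv   # Bessel function of the first kind, J_nu(x)

def hankel_transform(fr, r, k, nu=0):
    # Direct-quadrature sketch of F_nu(k) = integral_0^inf f(r) J_nu(k r) r dr,
    # truncated to the finite grid r (f must have decayed before r.max()).
    fr, r = np.asarray(fr, dtype=float), np.asarray(r, dtype=float)
    return np.array([trapezoid(fr * jv(nu, kk * r) * r, r) for kk in k])

# Sanity check: the order-0 Hankel transform of exp(-r^2/2) is exp(-k^2/2).
r = np.linspace(0.0, 20.0, 4000)
k = np.array([0.5, 1.0, 2.0])
print(hankel_transform(np.exp(-r**2 / 2), r, k))
print(np.exp(-k**2 / 2))   # the two printed arrays should agree to several digits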
[ { "math_id": 0, "text": "\\nu" }, { "math_id": 1, "text": "F_\\nu(k) = \\int_0^\\infty f(r) J_\\nu(kr) \\,r\\,\\mathrm{d}r," }, { "math_id": 2, "text": "J_\\nu" }, { "math_id": 3, "text": "\\nu \\geq -1/2" }, { "math_id": 4, "text": "f(r) = \\int_0^\\infty F_\\nu(k) J_\\nu(kr) \\,k\\,\\mathrm{d}k," }, { "math_id": 5, "text": "\\int_0^\\infty |f(r)|\\,r^{\\frac{1}{2}} \\,\\mathrm{d}r < \\infty." }, { "math_id": 6, "text": "f(r) = (1 + r)^{-3/2}" }, { "math_id": 7, "text": "h_\\nu(k) = \\int_0^\\infty g(r) J_\\nu(kr) \\,\\sqrt{kr}\\,\\mathrm{d}r." }, { "math_id": 8, "text": "g(r) = f(r) \\sqrt r" }, { "math_id": 9, "text": "h_\\nu(k) = F_\\nu(k) \\sqrt k." }, { "math_id": 10, "text": "g(r) = \\int_0^\\infty h_\\nu(k) J_\\nu(kr) \\,\\sqrt{kr}\\,\\mathrm{d}k." }, { "math_id": 11, "text": "\\int_0^\\infty |g(r)| \\,\\mathrm{d}r < \\infty," }, { "math_id": 12, "text": "-k^2" }, { "math_id": 13, "text": "\\mathcal{H}_0 \\left\\{ \\frac{\\partial^2 u}{\\partial r^2} + \\frac 1 r \\frac{\\partial u}{\\partial r} \n+ \\frac{\\partial ^2 u}{\\partial z^2} \\right\\} = -k^2 U + \\frac{\\partial^2}{\\partial z^2} U," }, { "math_id": 14, "text": "U = \\mathcal{H}_0 u" }, { "math_id": 15, "text": "U" }, { "math_id": 16, "text": "\\int_0^\\infty J_\\nu(kr) J_\\nu(k'r) \\,r\\,\\mathrm{d}r = \\frac{\\delta(k - k')}{k}, \\quad k, k' > 0." }, { "math_id": 17, "text": "\\int_0^\\infty f(r) g(r) \\,r\\,\\mathrm{d}r = \\int_0^\\infty F_\\nu(k) G_\\nu(k) \\,k\\,\\mathrm{d}k." }, { "math_id": 18, "text": "\\int_0^\\infty |f(r)|^2 \\,r\\,\\mathrm{d}r = \\int_0^\\infty |F_\\nu(k)|^2 \\,k\\,\\mathrm{d}k," }, { "math_id": 19, "text": "f(\\mathbf{r})" }, { "math_id": 20, "text": "d" }, { "math_id": 21, "text": "F(\\mathbf{k}) = \\int_{\\R^d} f(\\mathbf{r}) e^{-i\\mathbf{k} \\cdot \\mathbf{r}} \\,\\mathrm{d}\\mathbf{r}." }, { "math_id": 22, "text": "Y_{l,m}" }, { "math_id": 23, "text": "e^{-i\\mathbf{k} \\cdot \\mathbf{r}} = (2 \\pi)^{d/2} (kr)^{1-d/2}\\sum_{l = 0}^{+\\infty}\n(-i)^{l} J_{d/2-1+l}(kr)\\sum_{m}\nY_{l,m}(\\Omega_{\\mathbf{k}}) Y^{*}_{l,m}(\\Omega_{\\mathbf{r}})," }, { "math_id": 24, "text": "\\Omega_{\\mathbf{r}}" }, { "math_id": 25, "text": "\\Omega_{\\mathbf{k}}" }, { "math_id": 26, "text": "\\mathbf{r}" }, { "math_id": 27, "text": "\\mathbf{k}" }, { "math_id": 28, "text": "F(\\mathbf{k}) = (2 \\pi)^{d/2} k^{1-d/2} \\sum_{l = 0}^{+\\infty} (-i)^{l} \\sum_{m}Y_{l,m}(\\Omega_{\\mathbf{k}})\n\\int_{0}^{+\\infty}J_{d/2-1+l}(kr)r^{d/2}\\mathrm{d}r \\int f(\\mathbf{r}) Y_{l,m}^{*}(\\Omega_{\\mathbf{r}}) \\mathrm{d}\\Omega_{\\mathbf{r}}. " }, { "math_id": 29, "text": "F(\\mathbf{k})" }, { "math_id": 30, "text": "f(\\mathbf{r}) = \\sum_{l = 0}^{+\\infty} \\sum_{m}f_{l,m}(r)Y_{l,m}(\\Omega_{\\mathbf{r}}),\\quad F(\\mathbf{k}) = \\sum_{l = 0}^{+\\infty} \\sum_{m} F_{l,m}(k) Y_{l,m}(\\Omega_{\\mathbf{k}}), " }, { "math_id": 31, "text": "k^{d/2-1}F_{l,m}(k) = (2 \\pi)^{d/2} (-i)^{l} \n\\int_{0}^{+\\infty}r^{d/2-1}f_{l,m}(r)J_{d/2-1+l}(kr)r\\mathrm{d}r. 
" }, { "math_id": 32, "text": "r^{d/2-1}" }, { "math_id": 33, "text": "f(r, \\theta) = \\sum_{m=-\\infty}^\\infty f_m(r) e^{im\\theta_{\\mathbf{r}}}," }, { "math_id": 34, "text": "F(\\mathbf k) = 2\\pi \\sum_m i^{-m} e^{im\\theta_{\\mathbf{k}}} F_m(k)," }, { "math_id": 35, "text": "F_m(k) = \\int_0^\\infty f_m(r) J_m(kr) \\,r\\,\\mathrm{d}r" }, { "math_id": 36, "text": "m" }, { "math_id": 37, "text": "f_m(r)" }, { "math_id": 38, "text": "l" }, { "math_id": 39, "text": "f(r,\\theta_{\\mathbf{r}},\\varphi_{\\mathbf{r}}) = \\sum_{l = 0}^{+\\infty} \\sum_{m=-l}^{+l}f_{l,m}(r)Y_{l,m}(\\theta_{\\mathbf{r}},\\varphi_{\\mathbf{r}})," }, { "math_id": 40, "text": "F(k,\\theta_{\\mathbf{k}},\\varphi_{\\mathbf{k}}) = (2 \\pi)^{3/2} \\sum_{l = 0}^{+\\infty} (-i)^{l} \\sum_{m=-l}^{+l} F_{l,m}(k) Y_{l,m}(\\theta_{\\mathbf{k}},\\varphi_{\\mathbf{k}})," }, { "math_id": 41, "text": "\\sqrt{k} F_{l,m}(k) = \n\\int_{0}^{+\\infty}\\sqrt{r} f_{l,m}(r)J_{l+1/2}(kr)r\\mathrm{d}r." }, { "math_id": 42, "text": "\\sqrt{r} f_{l,m}(r)" }, { "math_id": 43, "text": "(l+1/2)" }, { "math_id": 44, "text": "k^{d/2-1}F(k) = (2 \\pi)^{d/2}\n\\int_{0}^{+\\infty}r^{d/2-1}f(r)J_{d/2-1}(kr)r\\mathrm{d}r." }, { "math_id": 45, "text": "r^{d/2-1}f(r)" }, { "math_id": 46, "text": "(d/2-1)" }, { "math_id": 47, "text": "(2 \\pi)^{d/2} " }, { "math_id": 48, "text": "f_m(r)= r^m \\sum_{t \\ge 0} f_{m,t} \\left(1 - \\left(\\tfrac{r}{R}\\right)^2 \\right)^t, \\quad 0 \\le r \\le R," }, { "math_id": 49, "text": "\\begin{align}\n F(\\mathbf k)\n &= 2\\pi\\sum_m i^{-m} e^{i m\\theta_k} \\sum_t f_{m,t} \\int_0^R r^m \\left(1 - \\left(\\tfrac{r}{R}\\right)^2 \\right)^t J_m(kr) r\\,\\mathrm{d}r && \\\\\n &= 2\\pi\\sum_m i^{-m} e^{i m\\theta_k} R^{m+2} \\sum_t f_{m,t} \\int_0^1 x^{m+1} (1-x^2)^t J_m(kxR) \\,\\mathrm{d}x && (x = \\tfrac{r}{R})\\\\\n &= 2\\pi\\sum_m i^{-m} e^{i m\\theta_k} R^{m+2} \\sum_t f_{m,t} \\frac{t!2^t}{(kR)^{1+t}} J_{m+t+1}(kR),\n\\end{align}" }, { "math_id": 50, "text": "r/R\\equiv \\sin\\theta,\\quad 1-(r/R)^2 = \\cos^2\\theta," }, { "math_id": 51, "text": "f(r)\\equiv r^m \\sum_j g_{m,j} \\cos(j\\theta)= r^m\\sum_jg_{m,j} T_j(\\cos\\theta)." }, { "math_id": 52, "text": "\n\\cos(j\\theta) = 2^{j-1}\\cos^j\\theta-\\frac{j}{1}2^{j-3}\\cos^{j-2}\\theta +\\frac{j}{2}\\binom{j-3}{1}2^{j-5}\\cos^{j-4}\\theta - \\frac{j}{3}\\binom{j-4}{2}2^{j-7}\\cos^{j-6}\\theta + \\cdots\n" }, { "math_id": 53, "text": "FA = H." }, { "math_id": 54, "text": "r = r_0 e^{-\\rho}, \\quad k = k_0 \\, e^{\\kappa}." }, { "math_id": 55, "text": "\\tilde F_\\nu(\\kappa) = \\int_{-\\infty}^\\infty \\tilde f(\\rho) \\tilde J_\\nu(\\kappa - \\rho) \\,\\mathrm{d}\\rho," }, { "math_id": 56, "text": "\\tilde f(\\rho) = \\left(r_0 \\, e^{-\\rho} \\right)^{1-n} \\, f(r_0 e^{-\\rho})," }, { "math_id": 57, "text": "\\tilde F_\\nu(\\kappa) = \\left(k_0 \\, e^{\\kappa} \\right)^{1+n} \\, F_\\nu(k_0 e^\\kappa)," }, { "math_id": 58, "text": "\\tilde J_\\nu(\\kappa-\\rho) = \\left(k_0 \\, r_0 \\, e^{\\kappa-\\rho} \\right)^{1+n} \\, J_\\nu(k_0 r_0 e^{\\kappa-\\rho})." }, { "math_id": 59, "text": "O(N \\log N)" }, { "math_id": 60, "text": "\\tilde J_\\nu" }, { "math_id": 61, "text": "\n \\int_{-\\infty}^{+\\infty} \\tilde J_\\nu(x) e^{-i q x} \\,\\mathrm{d}x =\n \\frac{\\Gamma\\left(\\frac{\\nu + 1 + n - iq}{2} \\right)}{\\Gamma\\left(\\frac{\\nu + 1 - n + iq}{2}\\right)} \\, 2^{n - iq}e^{iq \\ln(k_0 r_0)}." }, { "math_id": 62, "text": "r_0, k_0, n" }, { "math_id": 63, "text": "f(r)," }, { "math_id": 64, "text": "r \\to 0" }, { "math_id": 65, "text": "r \\to \\infty." 
}, { "math_id": 66, "text": "f(r)" }, { "math_id": 67, "text": "\\frac{\\, \\mathrm{d}^2 F_0 \\,}{\\mathrm{d}k^2} + \\frac{1}{k} \\frac{\\, \\mathrm{d} F_0 \\,}{\\mathrm{d}k}" }, { "math_id": 68, "text": "R_n^m(r) = (-1)^{\\frac{n-m}{2}} \\int_0^\\infty J_{n+1}(k) J_m(kr) \\,\\mathrm{d}k" } ]
https://en.wikipedia.org/wiki?curid=1452960
1452979
Znám's problem
On divisibility among sets of integers In number theory, Znám's problem asks which sets of integers have the property that each integer in the set is a proper divisor of the product of the other integers in the set, plus 1. Znám's problem is named after the Slovak mathematician Štefan Znám, who suggested it in 1972, although other mathematicians had considered similar problems around the same time. The initial terms of Sylvester's sequence almost solve this problem, except that the last chosen term equals one plus the product of the others, rather than being a proper divisor. Sun showed that there is at least one solution to the (proper) Znám problem for each formula_0. Sun's solution is based on a recurrence similar to that for Sylvester's sequence, but with a different set of initial values. The Znám problem is closely related to Egyptian fractions. It is known that there are only finitely many solutions for any fixed formula_1. It is unknown whether there are any solutions to Znám's problem using only odd numbers, and there remain several other open questions. The problem. Znám's problem asks which sets of integers have the property that each integer in the set is a proper divisor of the product of the other integers in the set, plus 1. That is, given formula_1, what sets of integers formula_2 are there such that, for each formula_3, formula_4 divides but is not equal to formula_5 A closely related problem concerns sets of integers in which each integer in the set is a divisor, but not necessarily a proper divisor, of one plus the product of the other integers in the set. This problem does not seem to have been named in the literature, and will be referred to as the improper Znám problem. Any solution to Znám's problem is also a solution to the improper Znám problem, but not necessarily vice versa. History. Znám's problem is named after the Slovak mathematician Štefan Znám, who suggested it in 1972. The improper Znám problem had been posed earlier for formula_6, and all solutions to the improper problem for formula_7 were found independently of Znám. It was also shown that Znám's problem is unsolvable for formula_8, and J. Janák is credited with finding the solution formula_9 for formula_10. Examples. Sylvester's sequence is an integer sequence in which each term is one plus the product of the previous terms. The first few terms of the sequence are 2, 3, 7, 43, and 1807. Stopping the sequence early produces a set like formula_11 that almost meets the conditions of Znám's problem, except that the largest value equals one plus the product of the other terms, rather than being a proper divisor. Thus, it is a solution to the improper Znám problem, but not a solution to Znám's problem as it is usually defined. One solution to the proper Znám problem, for formula_10, is formula_12. A few calculations will show that each member of this set is a proper divisor of one plus the product of the other members, as required; a computational check is sketched below. Connection to Egyptian fractions. Any solution to the improper Znám problem is equivalent (via division by the product of the values formula_13) to a solution to the equation formula_14 where formula_15 as well as each formula_13 must be an integer, and conversely any such solution corresponds to a solution to the improper Znám problem. However, all known solutions have formula_16, so they satisfy the equation formula_17 That is, they lead to an Egyptian fraction representation of the number one as a sum of unit fractions. Several of the cited papers on Znám's problem also study the solutions to this equation.
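Both conditions are easy to check by machine for small solutions such as the formula_12 example quoted above. The following sketch is a minimal illustration (it uses only the Python standard library; the particular set is the formula_10 solution from the Examples section) verifying the proper-divisor condition and the corresponding Egyptian fraction identity with formula_16:

```python
# Check that {2, 3, 7, 47, 395} solves Znám's problem for k = 5, and that it yields
# the Egyptian-fraction identity 1/2 + 1/3 + 1/7 + 1/47 + 1/395 + 1/(2*3*7*47*395) = 1.
from fractions import Fraction
from math import prod

s = [2, 3, 7, 47, 395]
p = prod(s)

for n in s:
    others_plus_one = p // n + 1                                # product of the other elements, plus 1
    assert others_plus_one % n == 0 and others_plus_one != n    # n is a proper divisor

assert sum(Fraction(1, n) for n in s) + Fraction(1, p) == 1     # exact rational arithmetic
print("{2, 3, 7, 47, 395} is a solution for k = 5")
```

Replacing the list with any other candidate set tests it in the same way.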
Applications of the equation have been described in topology, to the classification of singularities on surfaces, and in the theory of nondeterministic finite automata. Number of solutions. The number of solutions to Znám's problem for any formula_1 is finite, so it makes sense to count the total number of solutions for each formula_1. Sun showed that there is at least one solution to the (proper) Znám problem for each formula_0; Sun's solution is based on a recurrence similar to that for Sylvester's sequence, but with a different set of initial values. The number of solutions for small values of formula_1, starting with formula_10, forms the sequence 2, 5, 18, 96 (sequence in the OEIS). Presently, a few solutions are known for formula_18 and formula_19, but it is unclear how many solutions remain undiscovered for those values of formula_1. However, there are infinitely many solutions if formula_1 is not fixed: it has been shown that there are at least 39 solutions for each formula_20, improving earlier results proving the existence of fewer solutions, and it has been conjectured that the number of solutions for each value of formula_1 grows monotonically with formula_1. It is unknown whether there are any solutions to Znám's problem using only odd numbers. With one exception, all known solutions start with 2. If all numbers in a solution to Znám's problem or the improper Znám problem are prime, their product is a primary pseudoperfect number; it is unknown whether infinitely many solutions of this type exist. References. Notes. &lt;templatestyles src="Reflist/styles.css" /&gt; Sources. &lt;templatestyles src="Refbegin/styles.css" /&gt;
[ { "math_id": 0, "text": "k\\ge 5" }, { "math_id": 1, "text": "k" }, { "math_id": 2, "text": "\\{n_1, \\ldots, n_k\\}" }, { "math_id": 3, "text": "i" }, { "math_id": 4, "text": "n_i" }, { "math_id": 5, "text": "\\Bigl(\\prod_{j \\ne i}^n n_j\\Bigr) + 1 ?" }, { "math_id": 6, "text": "k=3" }, { "math_id": 7, "text": "k\\le 5" }, { "math_id": 8, "text": "k<5" }, { "math_id": 9, "text": "\\{2, 3, 11, 23, 31\\}" }, { "math_id": 10, "text": "k=5" }, { "math_id": 11, "text": "\\{2, 3, 7, 43\\}" }, { "math_id": 12, "text": "\\{2, 3, 7, 47, 395\\}" }, { "math_id": 13, "text": "x_i" }, { "math_id": 14, "text": "\\sum\\frac1{x_i} + \\prod\\frac1{x_i}=y," }, { "math_id": 15, "text": "y" }, { "math_id": 16, "text": "y=1" }, { "math_id": 17, "text": "\\sum\\frac1{x_i} + \\prod\\frac1{x_i}=1." }, { "math_id": 18, "text": "k=9" }, { "math_id": 19, "text": "k=10" }, { "math_id": 20, "text": "k\\ge 12" } ]
https://en.wikipedia.org/wiki?curid=1452979
14530635
Tail value at risk
Risk measure In financial mathematics, tail value at risk (TVaR), also known as tail conditional expectation (TCE) or conditional tail expectation (CTE), is a risk measure associated with the more general value at risk. It quantifies the expected value of the loss given that an event outside a given probability level has occurred. Background. There are a number of related, but subtly different, formulations for TVaR in the literature. A common case in literature is to define TVaR and average value at risk as the same measure. Under some formulations, it is only equivalent to expected shortfall when the underlying distribution function is continuous at formula_0, the value at risk of level formula_1. Under some other settings, TVaR is the conditional expectation of loss above a given value, whereas the expected shortfall is the product of this value with the probability of it occurring. The former definition may not be a coherent risk measure in general, however it is coherent if the underlying distribution is continuous. The latter definition is a coherent risk measure. TVaR accounts for the severity of the failure, not only the chance of failure. The TVaR is a measure of the expectation only in the tail of the distribution. Mathematical definition. The canonical tail value at risk is the left-tail (large negative values) in some disciplines and the right-tail (large positive values) in other, such as actuarial science. This is usually due to the differing conventions of treating losses as large negative or positive values. Using the negative value convention, Artzner and others define the tail value at risk as: Given a random variable formula_2 which is the payoff of a portfolio at some future time and given a parameter formula_3 then the tail value at risk is defined by formula_4 where formula_5 is the upper formula_1-quantile given by formula_6. Typically the payoff random variable formula_2 is in some Lp-space where formula_7 to guarantee the existence of the expectation. The typical values for formula_1 are 5% and 1%. Formulas for continuous probability distributions. Closed-form formulas exist for calculating TVaR when the payoff of a portfolio formula_2 or a corresponding loss formula_8 follows a specific continuous distribution. If formula_2 follows some probability distribution with the probability density function (p.d.f.) formula_9 and the cumulative distribution function (c.d.f.) formula_10, the left-tail TVaR can be represented as formula_11 For engineering or actuarial applications it is more common to consider the distribution of losses formula_12, in this case the right-tail TVaR is considered (typically for formula_1 95% or 99%): formula_13 Since some formulas below were derived for the left-tail case and some for the right-tail case, the following reconciliations can be useful: formula_14 and formula_15 Normal distribution. If the payoff of a portfolio formula_2 follows normal (Gaussian) distribution with the p.d.f. formula_16 then the left-tail TVaR is equal to formula_17 where formula_18 is the standard normal p.d.f., formula_19 is the standard normal c.d.f., so formula_20 is the standard normal quantile. If the loss of a portfolio formula_21 follows normal distribution, the right-tail TVaR is equal to formula_22 Generalized Student's t-distribution. If the payoff of a portfolio formula_2 follows generalized Student's t-distribution with the p.d.f. 
formula_23 then the left-tail TVaR is equal to formula_24 where formula_25 is the standard t-distribution p.d.f., formula_26 is the standard t-distribution c.d.f., so formula_27 is the standard t-distribution quantile. If the loss of a portfolio formula_21 follows generalized Student's t-distribution, the right-tail TVaR is equal to formula_28 Laplace distribution. If the payoff of a portfolio formula_2 follows Laplace distribution with the p.d.f. formula_29 and the c.d.f. formula_30 then the left-tail TVaR is equal to formula_31 for formula_32. If the loss of a portfolio formula_21 follows Laplace distribution, the right-tail TVaR is equal to formula_33 Logistic distribution. If the payoff of a portfolio formula_2 follows logistic distribution with the p.d.f. formula_34 and the c.d.f. formula_35 then the left-tail TVaR is equal to formula_36 If the loss of a portfolio formula_21 follows logistic distribution, the right-tail TVaR is equal to formula_37 Exponential distribution. If the loss of a portfolio formula_21 follows exponential distribution with the p.d.f. formula_38 and the c.d.f. formula_39 then the right-tail TVaR is equal to formula_40 Pareto distribution. If the loss of a portfolio formula_21 follows Pareto distribution with the p.d.f. formula_41 and the c.d.f. formula_42 then the right-tail TVaR is equal to formula_43 Generalized Pareto distribution (GPD). If the loss of a portfolio formula_21 follows GPD with the p.d.f. formula_44 and the c.d.f. formula_45 then the right-tail TVaR is equal to formula_46 and the VaR is equal to formula_47 Weibull distribution. If the loss of a portfolio formula_21 follows Weibull distribution with the p.d.f. formula_48 and the c.d.f. formula_49 then the right-tail TVaR is equal to formula_50 where formula_51 is the upper incomplete gamma function. Generalized extreme value distribution (GEV). If the payoff of a portfolio formula_2 follows GEV with the p.d.f. formula_52 and the c.d.f. formula_53 then the left-tail TVaR is equal to formula_54 and the VaR is equal to formula_55 where formula_51 is the upper incomplete gamma function, formula_56 is the logarithmic integral function. If the loss of a portfolio formula_21 follows GEV, then the right-tail TVaR is equal to formula_57 where formula_58 is the lower incomplete gamma function, formula_59 is the Euler-Mascheroni constant. Generalized hyperbolic secant (GHS) distribution. If the payoff of a portfolio formula_2 follows GHS distribution with the p.d.f. formula_60and the c.d.f. formula_61 then the left-tail TVaR is equal to formula_62 where formula_63 is the dilogarithm and formula_64 is the imaginary unit. Johnson's SU-distribution. If the payoff of a portfolio formula_2 follows Johnson's SU-distribution with the c.d.f. formula_65 then the left-tail TVaR is equal to formula_66 where formula_67 is the c.d.f. of the standard normal distribution. Burr type XII distribution. If the payoff of a portfolio formula_2 follows the Burr type XII distribution with the p.d.f. formula_68 and the c.d.f. formula_69 the left-tail TVaR is equal to formula_70 where formula_71 is the hypergeometric function. Alternatively, formula_72 Dagum distribution. If the payoff of a portfolio formula_2 follows the Dagum distribution with the p.d.f. formula_73 and the c.d.f. formula_74 the left-tail TVaR is equal to formula_75 where formula_71 is the hypergeometric function. Lognormal distribution. If the payoff of a portfolio formula_2 follows lognormal distribution, i.e. 
the random variable formula_76 follows normal distribution with the p.d.f. formula_77 then the left-tail TVaR is equal to formula_78 where formula_19 is the standard normal c.d.f., so formula_20 is the standard normal quantile. Log-logistic distribution. If the payoff of a portfolio formula_2 follows log-logistic distribution, i.e. the random variable formula_76 follows logistic distribution with the p.d.f. formula_79 then the left-tail TVaR is equal to formula_80 where formula_81 is the regularized incomplete beta function, formula_82. As the incomplete beta function is defined only for positive arguments, for a more generic case the left-tail TVaR can be expressed with the hypergeometric function: formula_83 If the loss of a portfolio formula_21 follows log-logistic distribution with p.d.f. formula_84 and c.d.f. formula_85 then the right-tail TVaR is equal to formula_86 where formula_87 is the incomplete beta function. Log-Laplace distribution. If the payoff of a portfolio formula_2 follows log-Laplace distribution, i.e. the random variable formula_76 follows Laplace distribution the p.d.f. formula_88 then the left-tail TVaR is equal to formula_89 Log-generalized hyperbolic secant (log-GHS) distribution. If the payoff of a portfolio formula_2 follows log-GHS distribution, i.e. the random variable formula_76 follows GHS distribution with the p.d.f. formula_90 then the left-tail TVaR is equal to formula_91 where formula_71 is the hypergeometric function. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "\\operatorname{VaR}_{\\alpha}(X)" }, { "math_id": 1, "text": "\\alpha" }, { "math_id": 2, "text": "X" }, { "math_id": 3, "text": "0 < \\alpha < 1" }, { "math_id": 4, "text": "\\operatorname{TVaR}_{\\alpha}(X) = \\operatorname{E} [-X|X \\leq -\\operatorname{VaR}_{\\alpha}(X)] = \\operatorname{E} [-X | X \\leq x^{\\alpha}] ," }, { "math_id": 5, "text": "x^{\\alpha}" }, { "math_id": 6, "text": "x^{\\alpha} = \\inf\\{x \\in \\mathbb{R}: \\Pr(X \\leq x) > \\alpha\\}" }, { "math_id": 7, "text": "p \\geq 1" }, { "math_id": 8, "text": "L = -X" }, { "math_id": 9, "text": "f" }, { "math_id": 10, "text": "F" }, { "math_id": 11, "text": "\\operatorname{TVaR}_{\\alpha}(X) = \\operatorname{E} [-X|X \\leq -\\operatorname{VaR}_{\\alpha}(X)] = -\\frac{1}{\\alpha} \\int_0^\\alpha \\operatorname{VaR}_\\gamma(X)d\\gamma = -\\frac{1}{\\alpha}\\int_{-\\infty}^{F^{-1}(\\alpha)}xf(x)dx." }, { "math_id": 12, "text": "L=-X" }, { "math_id": 13, "text": "\\operatorname{TVaR}^\\text{right}_\\alpha(L) = E[L\\mid L \\geq \\operatorname{VaR}_{\\alpha}(L)] = \\frac{1}{1-\\alpha} \\int^1_\\alpha \\operatorname{VaR}_\\gamma(L)d\\gamma = \\frac{1}{1-\\alpha}\\int^{+\\infty}_{F^{-1}(\\alpha)}yf(y)dy." }, { "math_id": 14, "text": "\\operatorname{TVaR}_{\\alpha}(X) = -\\frac{1}{\\alpha}E[X]+\\frac{1-\\alpha}{\\alpha}\\operatorname{TVaR}^\\text{right}_\\alpha(L)" }, { "math_id": 15, "text": "\\operatorname{TVaR}^\\text{right}_\\alpha(L) = \\frac{1}{1-\\alpha}E[L]+\\frac{\\alpha}{1-\\alpha}\\operatorname{TVaR}_{\\alpha}(X)." }, { "math_id": 16, "text": "f(x) = \\frac{1}{\\sqrt{2\\pi}\\sigma}e^{-\\frac{(x-\\mu)^2}{2\\sigma^2}}" }, { "math_id": 17, "text": "\\operatorname{TVaR}_{\\alpha}(X) = -\\mu+\\sigma\\frac{\\phi(\\Phi^{-1}(\\alpha))}{\\alpha}," }, { "math_id": 18, "text": "\\phi(x) = \\frac{1}{\\sqrt{2\\pi}}e^{-{x^2}/{2}}" }, { "math_id": 19, "text": "\\Phi(x)" }, { "math_id": 20, "text": "\\Phi^{-1}(\\alpha)" }, { "math_id": 21, "text": "L" }, { "math_id": 22, "text": "\\operatorname{TVaR}^\\text{right}_\\alpha(L) = \\mu+\\sigma\\frac{\\phi(\\Phi^{-1}(\\alpha))}{1-\\alpha}." }, { "math_id": 23, "text": "f(x) = \\frac{\\Gamma\\left(\\frac{\\nu+1}{2}\\right)}{\\Gamma\\left(\\frac{\\nu}{2}\\right)\\sqrt{\\pi\\nu}\\sigma}\\left(1+\\frac{1}{\\nu}\\left(\\frac{x-\\mu}{\\sigma}\\right)^2\\right)^{-\\frac{\\nu+1}{2}}" }, { "math_id": 24, "text": "\\operatorname{TVaR}_{\\alpha}(X) = -\\mu+\\sigma\\frac{\\nu+(\\Tau^{-1}(\\alpha))^2}{\\nu-1}\\frac{\\tau(\\Tau^{-1}(\\alpha))}{\\alpha}," }, { "math_id": 25, "text": "\\tau(x)=\\frac{\\Gamma\\left(\\frac{\\nu+1}{2}\\right)}{\\Gamma\\left(\\frac{\\nu}{2}\\right)\\sqrt{\\pi\\nu}}\\left(1+\\frac{x^2}{\\nu}\\right)^{-\\frac{\\nu+1}{2}}" }, { "math_id": 26, "text": "\\Tau(x)" }, { "math_id": 27, "text": "\\Tau^{-1}(\\alpha)" }, { "math_id": 28, "text": "\\operatorname{TVaR}^\\text{right}_\\alpha(L) = \\mu+\\sigma\\frac{\\nu+(\\Tau^{-1}(\\alpha))^2}{\\nu-1}\\frac{\\tau(\\Tau^{-1}(\\alpha))}{1-\\alpha}." 
}, { "math_id": 29, "text": "f(x) = \\frac{1}{2b}e^{-\\frac{|x-\\mu|}{b}}" }, { "math_id": 30, "text": "F(x) = \\begin{cases}1 - \\frac{1}{2} e^{-\\frac{x-\\mu}{b}} & \\text{if }x \\geq \\mu,\\\\ \\frac{1}{2} e^\\frac{x-\\mu}{b} & \\text{if }x < \\mu.\\end{cases}" }, { "math_id": 31, "text": "\\operatorname{TVaR}_{\\alpha}(X) = -\\mu+b(1-\\ln2\\alpha)" }, { "math_id": 32, "text": "\\alpha \\le 0.5" }, { "math_id": 33, "text": "\\operatorname{TVaR}^\\text{right}_\\alpha(L) = \\begin{cases}\n\\mu + b \\frac{\\alpha}{1-\\alpha} (1-\\ln2\\alpha) & \\text{if }\\alpha < 0.5,\\\\[1ex]\n\\mu + b[1 - \\ln(2(1-\\alpha))] & \\text{if }\\alpha \\ge 0.5.\n\\end{cases}" }, { "math_id": 34, "text": "f(x) = \\frac{1}{s}e^{-\\frac{x-\\mu}{s}}\\left(1+e^{-\\frac{x-\\mu}{s}}\\right)^{-2}" }, { "math_id": 35, "text": "F(x) = \\left(1+e^{-\\frac{x-\\mu}{s}}\\right)^{-1}" }, { "math_id": 36, "text": "\\operatorname{TVaR}_{\\alpha}(X) = -\\mu+s\\ln\\frac{(1-\\alpha)^{1-\\frac{1}{\\alpha}}}{\\alpha}." }, { "math_id": 37, "text": "\\operatorname{TVaR}^\\text{right}_\\alpha(L) = \\mu + s\\frac{-\\alpha\\ln\\alpha-(1-\\alpha)\\ln(1-\\alpha)}{1-\\alpha}." }, { "math_id": 38, "text": "f(x) = \\begin{cases}\\lambda e^{-\\lambda x} & \\text{if }x \\geq 0,\\\\ 0 & \\text{if }x < 0.\\end{cases}" }, { "math_id": 39, "text": "F(x) = \\begin{cases}1 - e^{-\\lambda x} & \\text{if }x \\geq 0,\\\\ 0 & \\text{if }x < 0.\\end{cases}" }, { "math_id": 40, "text": "\\operatorname{TVaR}^\\text{right}_\\alpha(L) = \\frac{-\\ln(1-\\alpha)+1}{\\lambda}." }, { "math_id": 41, "text": "f(x) = \\begin{cases}\\frac{a x_m^a}{x^{a+1}} & \\text{if }x \\geq x_m,\\\\ 0 & \\text{if }x < x_m.\\end{cases}" }, { "math_id": 42, "text": "F(x) = \\begin{cases}1 - (x_m/x)^a & \\text{if }x \\geq x_m,\\\\ 0 & \\text{if }x < x_m.\\end{cases}" }, { "math_id": 43, "text": "\\operatorname{TVaR}^\\text{right}_\\alpha(L) = \\frac{x_m a}{(1-\\alpha)^{1/a}(a-1)}." }, { "math_id": 44, "text": "f(x) = \\frac{1}{s} \\left( 1+\\frac{\\xi (x-\\mu)}{s} \\right)^{\\left(-\\frac{1}{\\xi}-1\\right)}" }, { "math_id": 45, "text": "F(x) = \\begin{cases}1 - \\left(1+\\frac{\\xi(x-\\mu)}{s}\\right)^{-\\frac{1}{\\xi}} & \\text{if }\\xi \\ne 0,\\\\ 1-\\exp \\left( -\\frac{x-\\mu}{s} \\right) & \\text{if }\\xi = 0.\\end{cases}" }, { "math_id": 46, "text": "\\operatorname{TVaR}^\\text{right}_\\alpha(L) = \\begin{cases}\\mu + s \\left[ \\frac{(1-\\alpha)^{-\\xi}}{1-\\xi}+\\frac{(1-\\alpha)^{-\\xi}-1}{\\xi} \\right] & \\text{if }\\xi \\ne 0,\\\\ \\mu + s[1 - \\ln(1-\\alpha)] & \\text{if }\\xi = 0.\\end{cases}" }, { "math_id": 47, "text": "\\mathrm{VaR}_\\alpha(L) = \\begin{cases}\n\\mu + s \\frac{(1-\\alpha)^{-\\xi}-1}{\\xi} & \\text{if }\\xi \\ne 0,\\\\\n\\mu - s \\ln(1-\\alpha) & \\text{if }\\xi = 0. 
\\end{cases}" }, { "math_id": 48, "text": "f(x) = \\begin{cases}\\frac{k}{\\lambda} \\left(\\frac{x}{\\lambda}\\right)^{k-1} e^{-(x/\\lambda)^k} & \\text{if }x \\geq 0,\\\\ 0 & \\text{if }x < 0.\\end{cases}" }, { "math_id": 49, "text": "F(x) = \\begin{cases}1 - e^{-(x/\\lambda)^k} & \\text{if }x \\geq 0,\\\\ 0 & \\text{if }x < 0.\\end{cases}" }, { "math_id": 50, "text": "\\operatorname{TVaR}^\\text{right}_\\alpha(L) = \\frac{\\lambda}{1-\\alpha} \\Gamma\\left(1+\\frac{1}{k},-\\ln(1-\\alpha)\\right)," }, { "math_id": 51, "text": "\\Gamma(s,x)" }, { "math_id": 52, "text": "f(x) = \\begin{cases} \\frac{1}{\\sigma} \\left( 1+\\xi \\frac{ x-\\mu}{\\sigma} \\right)^{-\\frac{1}{\\xi}-1} \\exp\\left[-\\left( 1+\\xi \\frac{x-\\mu}{\\sigma} \\right)^{-\\frac{1}{\\xi}}\\right] & \\text{if } \\xi \\ne 0,\\\\ \\frac{1}{\\sigma}e^{-\\frac{x-\\mu}{\\sigma}}e^{-e^{-\\frac{x-\\mu}{\\sigma}}} & \\text{if } \\xi = 0. \\end{cases}" }, { "math_id": 53, "text": "F(x) = \\begin{cases}\n\\exp\\left(-\\left(1+\\xi\\frac{x-\\mu}{\\sigma}\\right)^{-\\frac{1}{\\xi}}\\right) & \\text{if } \\xi \\ne 0,\\\\\n\\exp\\left(-e^{-\\frac{x-\\mu}{\\sigma}}\\right) & \\text{if }\\xi = 0.\n\\end{cases}" }, { "math_id": 54, "text": "\\operatorname{TVaR}_{\\alpha}(X) = \\begin{cases}-\\mu - \\frac{\\sigma}{\\alpha \\xi} \\left[ \\Gamma(1-\\xi,-\\ln\\alpha)-\\alpha \\right] & \\text{if }\\xi \\ne 0,\\\\ -\\mu - \\frac{\\sigma}{\\alpha} \\left[ \\text{li}(\\alpha) - \\alpha \\ln(-\\ln \\alpha) \\right] & \\text{if }\\xi = 0.\\end{cases}" }, { "math_id": 55, "text": "\\mathrm{VaR}_\\alpha(X) = \\begin{cases}-\\mu - \\frac{\\sigma}{\\xi} \\left[(-\\ln \\alpha)^{-\\xi}-1 \\right] & \\text{if }\\xi \\ne 0,\\\\ -\\mu + \\sigma \\ln(-\\ln\\alpha) & \\text{if }\\xi = 0.\\end{cases}" }, { "math_id": 56, "text": "\\text{li}(x)=\\int \\frac{dx}{\\ln x}" }, { "math_id": 57, "text": "\\operatorname{TVaR}_{\\alpha}(X) = \\begin{cases}\\mu + \\frac{\\sigma}{(1-\\alpha) \\xi} \\left[ \\gamma(1-\\xi,-\\ln\\alpha)-(1-\\alpha) \\right] & \\text{if }\\xi \\ne 0,\\\\ \\mu + \\frac{\\sigma}{1-\\alpha} \\left[y - \\text{li}(\\alpha) + \\alpha \\ln(-\\ln \\alpha) \\right] & \\text{if }\\xi = 0.\\end{cases}" }, { "math_id": 58, "text": "\\gamma(s,x)" }, { "math_id": 59, "text": "y" }, { "math_id": 60, "text": "f(x) = \\frac{1}{2 \\sigma} \\operatorname{sech}\\left(\\frac{\\pi}{2}\\frac{x-\\mu}{\\sigma}\\right)" }, { "math_id": 61, "text": "F(x) = \\frac{2}{\\pi}\\arctan\\left[\\exp\\left(\\frac{\\pi}{2}\\frac{x-\\mu}{\\sigma}\\right)\\right]" }, { "math_id": 62, "text": "\\operatorname{TVaR}_{\\alpha}(X) = -\\mu - \\frac{2\\sigma}{\\pi} \\ln\\left( \\tan \\frac{\\pi\\alpha}{2} \\right) - \\frac{2\\sigma}{\\pi^2\\alpha}i\\left[\\text{Li}_2\\left(-i\\tan\\frac{\\pi\\alpha}{2}\\right)-\\text{Li}_2\\left(i\\tan\\frac{\\pi\\alpha}{2}\\right)\\right]," }, { "math_id": 63, "text": "\\text{Li}_2" }, { "math_id": 64, "text": "i=\\sqrt{-1}" }, { "math_id": 65, "text": "F(x) = \\Phi\\left[\\gamma+\\delta\\sinh^{-1}\\left(\\frac{x-\\xi}{\\lambda}\\right)\\right]" }, { "math_id": 66, "text": "\\operatorname{TVaR}_{\\alpha}(X) = -\\xi - \\frac{\\lambda}{2\\alpha} \\left[ \\exp\\left(\\frac{1-2\\gamma\\delta}{2\\delta^2}\\right) \\Phi\\left(\\Phi^{-1}(\\alpha)-\\frac{1}{\\delta}\\right) - \\exp\\left(\\frac{1+2\\gamma\\delta}{2\\delta^2}\\right)\\Phi\\left(\\Phi^{-1}(\\alpha)+\\frac{1}{\\delta}\\right) \\right]," }, { "math_id": 67, "text": "\\Phi" }, { "math_id": 68, "text": "f(x) = 
\\frac{ck}{\\beta}\\left(\\frac{x-\\gamma}{\\beta}\\right)^{c-1}\\left[1+\\left(\\frac{x-\\gamma}{\\beta}\\right)^c\\right]^{-k-1}" }, { "math_id": 69, "text": "F(x) = 1-\\left[1+\\left(\\frac{x-\\gamma}{\\beta}\\right)^c\\right]^{-k}," }, { "math_id": 70, "text": "\\operatorname{TVaR}_{\\alpha}(X) = -\\gamma -\\frac{\\beta}{\\alpha}\\left( (1-\\alpha)^{-1/k}-1 \\right)^{1/c} \\left[ \\alpha -1+{_2F_1}\\left(\\frac{1}{c},k;1+\\frac{1}{c};1-(1-\\alpha)^{-1/k}\\right) \\right]," }, { "math_id": 71, "text": "_2F_1" }, { "math_id": 72, "text": "\\operatorname{TVaR}_{\\alpha}(X) = -\\gamma -\\frac{\\beta}{\\alpha}\\frac{ck}{c+1}\\left( (1-\\alpha)^{-1/k}-1 \\right)^{1+\\frac{1}{c}} {_2F_1}\\left(1+\\frac{1}{c},k+1;2+\\frac{1}{c};1-(1-\\alpha)^{-1/k}\\right). " }, { "math_id": 73, "text": "f(x) = \\frac{ck}{\\beta}\\left(\\frac{x-\\gamma}{\\beta}\\right)^{ck-1}\\left[1+\\left(\\frac{x-\\gamma}{\\beta}\\right)^c\\right]^{-k-1}" }, { "math_id": 74, "text": "F(x) = \\left[1+\\left(\\frac{x-\\gamma}{\\beta}\\right)^{-c}\\right]^{-k}," }, { "math_id": 75, "text": "\\operatorname{TVaR}_{\\alpha}(X) = -\\gamma -\\frac{\\beta}{\\alpha}\\frac{ck}{ck+1}\\left( \\alpha^{-1/k}-1 \\right)^{-k-\\frac{1}{c}} {_2F_1}\\left(k+1,k+\\frac{1}{c};k+1+\\frac{1}{c};-\\frac{1}{\\alpha^{-1/k}-1}\\right), " }, { "math_id": 76, "text": "\\ln(1+X)" }, { "math_id": 77, "text": "f(x) = \\frac{1}{\\sqrt{2\\pi}\\sigma}e^{-\\frac{(x-\\mu)^2}{2\\sigma^2}}," }, { "math_id": 78, "text": "\\operatorname{TVaR}_{\\alpha}(X) = 1-\\exp\\left(\\mu+\\frac{\\sigma^2}{2}\\right) \\frac{\\Phi(\\Phi^{-1}(\\alpha)-\\sigma)}{\\alpha}," }, { "math_id": 79, "text": "f(x) = \\frac{1}{s}e^{-\\frac{x-\\mu}{s}}\\left(1+e^{-\\frac{x-\\mu}{s}}\\right)^{-2}," }, { "math_id": 80, "text": "\\operatorname{TVaR}_{\\alpha}(X) = 1-\\frac{e^\\mu}{\\alpha}I_\\alpha(1+s,1-s)\\frac{\\pi s}{\\sin\\pi s}," }, { "math_id": 81, "text": "I_\\alpha" }, { "math_id": 82, "text": "I_\\alpha(a,b)=\\frac{\\Beta_\\alpha(a,b)}{\\Beta(a,b)}" }, { "math_id": 83, "text": "\\operatorname{TVaR}_{\\alpha}(X) = 1-\\frac{e^\\mu \\alpha^s}{s+1} {_2F_1}(s,s+1;s+2;\\alpha)." }, { "math_id": 84, "text": "f(x) = \\frac{\\frac{b}{a}(x/a)^{b-1}}{(1+(x/a)^b)^2}" }, { "math_id": 85, "text": "F(x) = \\frac{1}{1+(x/a)^{-b}}," }, { "math_id": 86, "text": "\\operatorname{TVaR}^\\text{right}_\\alpha(L) = \\frac{a}{1-\\alpha}\\left[\\frac{\\pi}{b}\\csc\n\\left(\\frac{\\pi}{b}\\right)-\\Beta_\\alpha\\left(\\frac{1}{b}+1,1-\\frac{1}{b}\\right)\\right]," }, { "math_id": 87, "text": "B_\\alpha" }, { "math_id": 88, "text": "f(x) = \\frac{1}{2b}e^{-\\frac{|x-\\mu|}{b}}," }, { "math_id": 89, "text": "\\operatorname{TVaR}_{\\alpha}(X) = \\begin{cases}1 - \\frac{e^\\mu (2\\alpha)^b}{b+1} & \\text{if }\\alpha \\le 0.5,\\\\ 1 - \\frac{e^\\mu 2^{-b}}{\\alpha(b-1)}\\left[(1-\\alpha)^{(1-b)}-1\\right] & \\text{if }\\alpha > 0.5.\\end{cases}" }, { "math_id": 90, "text": "f(x) = \\frac{1}{2 \\sigma} \\operatorname{sech}\\left(\\frac{\\pi}{2}\\frac{x-\\mu}{\\sigma}\\right)," }, { "math_id": 91, "text": "\\operatorname{TVaR}_{\\alpha}(X) = 1-\\frac{1}{\\alpha(\\sigma+{\\pi/2})} \\left(\\tan\\frac{\\pi \\alpha}{2}\\exp\\frac{\\pi \\mu}{2\\sigma}\\right)^{2\\sigma/\\pi} \\tan\\frac{\\pi \\alpha}{2} {_2F_1}\\left(1,\\frac{1}{2}+\\frac{\\sigma}{\\pi};\\frac{3}{2}+\\frac{\\sigma}{\\pi};-\\tan\\left(\\frac{\\pi \\alpha}{2}\\right)^2\\right)," } ]
https://en.wikipedia.org/wiki?curid=14530635
14531126
Berlin Gold Hat
Late Bronze Age headdress artefact made of thin gold leaf The Berlin Gold Hat or Berlin Golden Hat (German: "Berliner Goldhut") is a Late Bronze Age artefact made of thin gold leaf. It served as the external covering on a long conical brimmed headdress, probably of an organic material. It is now in the Neues Museum on Museum Island in Berlin, in a room by itself with an elaborate explanatory display. The Berlin Gold Hat is the best preserved specimen among the four known conical golden hats from Bronze Age Europe so far. Of the three others, two were found in southern Germany, and one in the west of France. All were found in the 19th and 20th centuries. It is generally assumed that the hats served as the insignia of deities or priests in the context of a sun cult that appears to have been widespread in Central Europe at the time. The hats are also suggested to have served astronomical/calendrical functions. The Berlin Gold Hat was acquired in 1996 by the Berlin "Museum für Vor- und Frühgeschichte" as a single find without provenance. A comparative study of the ornaments and techniques in conjunction with dateable finds suggests that it was made in the Late Bronze Age, roughly around 1000 to 800 BC. Description. The Berlin gold hat is a 490 g (15.75 troy ounces) gold hat with a long and slender conical shaft and a differentiated convex foot, decorated all over with repousse traced motifs, applied with small stamps and wheels. Its composition is very similar to the previously known Golden Cone of Ezelsdorf-Buch. At the bottom of the cone, the sheet gold of the Berlin hat is reinforced by a 10 mm wide ring of sheet bronze. The external edge of the brim is strengthened by a twisted square-sectioned wire, around which the gold leaf is turned upwards. The overall height is 745 mm. The hat was hammered from a gold alloy of 87.7% Gold, 9.8% Silver, 0.4% Copper and 0.1% Tin. It was made of a single piece; its average thickness is 0.6 mm. The cone is ornamented with 21 zones of horizontal bands and rows of symbols along all of its length. Fourteen different stamps and three decorated wheels or cylindrical stamps were used. The horizontal bands were decorated systematically with repeated similar patterns. The individual ornamental bands were optically separated traced ribs and bulges, mostly achieved with the use of cylindrical stamps. The bands of ornaments contain mostly buckle and circle motifs, most with a circular central buckle surrounded by up to six concentric circles. One of the bands is distinctive: It is decorated with a row of recumbent crescents, each atop an almond- or eye-shaped symbol. The point of the cone is embellished with an eight-spoke star on a background of decorative punches. An overview of the type and number of stamps used in the ornamental zones is shown on the right. The meeting of the shaft with the foot is taken up by a wide vertically ribbed band. The foot is decorated with similar motifs to the cone itself. Near the reinforcing bronze band, it turns into a brim, also decorated with disk-shaped symbols. Calendar. Modern scholarship has demonstrated that the ornamentation of the gold leaf cones of the Schifferstadt type, to which the Berlin example belongs, represent systematic sequences in terms of number and types of ornaments per band. A detailed study of the Berlin example, which is the only one fully preserved, showed that the symbols probably represent a lunisolar calendar. 
The object would have permitted the determination of dates or periods in both lunar and solar calendars. The functions discovered so far would permit the counting of temporal units of up to 57 months. A simple multiplication of such values would also permit the calculation of longer periods, such as metonic cycles. Each symbol, or each ring of a symbol, represents a single day. Apart from ornament bands incorporating differing numbers of rings there are special symbols and zones in intercalary areas, which would have had to be added to or subtracted from the periods in question. The system of this mathematical function incorporated into the artistic ornamentation has not been fully deciphered so far, but a schematic understanding of the Berlin Golden Hat and the periods it delimits has been achieved. In principle, starting with zone formula_0, a sum is achieved by adding a relevant contiguous number of neighbouring sections: formula_1. To reach the equivalent lunar or solar value, from this initial sum must be subtracted the sum of symbols from the intercalary zone(s) within the area counted. The illustration depicts the solar representation on the left and the lunar one on the right. The red or blue fields in Zones 5, 7, 16, and 17 are intercalary zones. The values in the individual fields are reached by multiplying the number of symbols per zone with the number of rings or circles incorporated in each predominant symbol. The special symbols in Zone 5 are assigned the value of 38, as indicated by their number. "Zone 12 is dominated by 20 repetitions of punched symbol No. 14, a circular disc symbol surrounded by 5 concentric circles." "Thus, the symbol has the value of 20 × 5 = 100." "The smaller ring symbols placed between the larger repetitions of No. 14 are considered as mere ornaments and thus not counted." Through this system, the Hats can be used to calculate a lunisolar calendrical system, i.e. a direct reading in either lunar or solar dates, as well as the conversion between them. The table can be used in the same way as the original Golden Hats. To determine the number of days in a specific time period (yellow fields), the values of the coloured fields above are added, reaching an intermediate sum. If any of the red intercalary zones are included, their sum has to be subtracted. This allows the calculation of 12, 24, 36, 48, 54, and 57 synodic months in the lunar system and of 12, 18, 24, 36, 48, 54, and 57 solar months (twelfths of a tropical year). "To determine a 54 month cycle in the lunar system, the numerical values of the green or blue Zones 3 to 21 are added, reaching a sum of 1,739 days. From this, the values of the red intercalary fields 5, 16, and 17 are subtracted, The result is 1739 − 142 = 1597 days, exactly 54 synodic months of 29.5305 days each." The overall discrepancy of 2 days to the astronomically accurate value is probably the result of a slight imprecision in the Bronze Age observation of synodic and solar month. Provenance and find history. The Berlin Gold Hat was put on sale in the international arts trade in 1995. In 1996, the Berlin Museum für Vor- und Frühgeschichte bought it as an important Bronze Age artefact. The seller claimed that the object came from an anonymous Swiss private collection that had been assembled in the 1950s and 1960s. It is assumed that the object was found in Southern Germany or Switzerland. No further details are known. 
The good preservation of the cone suggests that, like the Schifferstadt example, it must have been carefully filled with soil or ashes and then buried upright in relatively fine soil. Manufacture. The Berlin Gold Hat is made of a gold alloy of 87.7% gold, 9.8% silver, 0.4% copper and 0.1% tin. It is hammered seamlessly from a single piece. The amount of gold used would form a cube with sides of only 3 cm. The average thickness is 0.6 mm. Because of the tribological characteristics of the material, it tends to harden with increasing deformation (see ductility), increasing its potential to crack. To avoid cracking, an extremely even deformation was necessary. Additionally, the material had to be softened by repeatedly heating it to a temperature of at least 750 °C. Since the gold alloy has a relatively low melting point of "circa" 960 °C, very careful temperature control and an isothermal heating process were required, so as to avoid melting any of the surface. For this, the Bronze Age artisans used a charcoal fire or oven similar to those used for pottery. The temperature could only be controlled through the addition of oxygen, using a bellows. Considering the tribological conditions and the technical means available at the time, the production even of an undecorated golden hat would represent an immense technical achievement. In the course of its further manufacture, the Berlin Hat was embellished with rows of radial ornamental bands, chased into the metal. To make this possible, it was probably filled with a putty or pitch based on tree resin and wax; in the Schifferstadt specimen, traces of this survived. The thin gold leaf was structured by chasing: stamp-like tools or moulds depicting the individual symbols were repeatedly pressed into (or rolled along) the exterior of the gold. At least 17 separate tools (14 stamps and 3 cylindrical stamps) were used. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "Z_i" }, { "math_id": 1, "text": "Z_i\\, ...\\, Z_{i+n}" } ]
https://en.wikipedia.org/wiki?curid=14531126
145343
Wave function
Mathematical description of quantum state In quantum physics, a wave function (or wavefunction) is a mathematical description of the quantum state of an isolated quantum system. The most common symbols for a wave function are the Greek letters "ψ" and Ψ (lower-case and capital psi, respectively). Wave functions are complex-valued. For example, a wave function might assign a complex number to each point in a region of space. The Born rule provides the means to turn these complex probability amplitudes into actual probabilities. In one common form, it says that the squared modulus of a wave function that depends upon position is the probability density of measuring a particle as being at a given place. The integral of a wavefunction's squared modulus over all the system's degrees of freedom must be equal to 1, a condition called "normalization". Since the wave function is complex-valued, only its relative phase and relative magnitude can be measured; its value does not, in isolation, tell anything about the magnitudes or directions of measurable observables. One has to apply quantum operators, whose eigenvalues correspond to sets of possible results of measurements, to the wave function "ψ" and calculate the statistical distributions for measurable quantities. Wave functions can be functions of variables other than position, such as momentum. The information represented by a wave function that is dependent upon position can be converted into a wave function dependent upon momentum and vice versa, by means of a Fourier transform. Some particles, like electrons and photons, have nonzero spin, and the wave function for such particles includes spin as an intrinsic, discrete degree of freedom; other discrete variables can also be included, such as isospin. When a system has internal degrees of freedom, the wave function at each point in the continuous degrees of freedom (e.g., a point in space) assigns a complex number for "each" possible value of the discrete degrees of freedom (e.g., z-component of spin). These values are often displayed in a column matrix (e.g., a 2 × 1 column vector for a non-relativistic electron with spin ). According to the superposition principle of quantum mechanics, wave functions can be added together and multiplied by complex numbers to form new wave functions and form a Hilbert space. The inner product between two wave functions is a measure of the overlap between the corresponding physical states and is used in the foundational probabilistic interpretation of quantum mechanics, the Born rule, relating transition probabilities to inner products. The Schrödinger equation determines how wave functions evolve over time, and a wave function behaves qualitatively like other waves, such as water waves or waves on a string, because the Schrödinger equation is mathematically a type of wave equation. This explains the name "wave function", and gives rise to wave–particle duality. However, the wave function in quantum mechanics describes a kind of physical phenomenon, as of 2023 still open to different interpretations, which fundamentally differs from that of classic mechanical waves. Historical background. In 1900, Max Planck postulated the proportionality between the frequency formula_0 of a photon and its energy formula_1, formula_2, and in 1916 the corresponding relation between a photon's momentum formula_3 and wavelength formula_4, formula_5, where formula_6 is the Planck constant. 
In 1923, De Broglie was the first to suggest that the relation formula_5, now called the De Broglie relation, holds for "massive" particles, the chief clue being Lorentz invariance, and this can be viewed as the starting point for the modern development of quantum mechanics. The equations represent wave–particle duality for both massless and massive particles. In the 1920s and 1930s, quantum mechanics was developed using calculus and linear algebra. Those who used the techniques of calculus included Louis de Broglie, Erwin Schrödinger, and others, developing "wave mechanics". Those who applied the methods of linear algebra included Werner Heisenberg, Max Born, and others, developing "matrix mechanics". Schrödinger subsequently showed that the two approaches were equivalent. In 1926, Schrödinger published the famous wave equation now named after him, the Schrödinger equation. This equation was based on classical conservation of energy using quantum operators and the de Broglie relations and the solutions of the equation are the wave functions for the quantum system. However, no one was clear on how to interpret it. At first, Schrödinger and others thought that wave functions represent particles that are spread out with most of the particle being where the wave function is large. This was shown to be incompatible with the elastic scattering of a wave packet (representing a particle) off a target; it spreads out in all directions. While a scattered particle may scatter in any direction, it does not break up and take off in all directions. In 1926, Born provided the perspective of probability amplitude. This relates calculations of quantum mechanics directly to probabilistic experimental observations. It is accepted as part of the Copenhagen interpretation of quantum mechanics. There are many other interpretations of quantum mechanics. In 1927, Hartree and Fock made the first step in an attempt to solve the "N"-body wave function, and developed the "self-consistency cycle": an iterative algorithm to approximate the solution. Now it is also known as the Hartree–Fock method. The Slater determinant and permanent (of a matrix) was part of the method, provided by John C. Slater. Schrödinger did encounter an equation for the wave function that satisfied relativistic energy conservation "before" he published the non-relativistic one, but discarded it as it predicted negative probabilities and negative energies. In 1927, Klein, Gordon and Fock also found it, but incorporated the electromagnetic interaction and proved that it was Lorentz invariant. De Broglie also arrived at the same equation in 1928. This relativistic wave equation is now most commonly known as the Klein–Gordon equation. In 1927, Pauli phenomenologically found a non-relativistic equation to describe spin-1/2 particles in electromagnetic fields, now called the Pauli equation. Pauli found the wave function was not described by a single complex function of space and time, but needed two complex numbers, which respectively correspond to the spin +1/2 and −1/2 states of the fermion. Soon after in 1928, Dirac found an equation from the first successful unification of special relativity and quantum mechanics applied to the electron, now called the Dirac equation. In this, the wave function is a "spinor" represented by four complex-valued components: two for the electron and two for the electron's antiparticle, the positron. In the non-relativistic limit, the Dirac wave function resembles the Pauli wave function for the electron. 
Later, other relativistic wave equations were found. Wave functions and wave equations in modern theories. All these wave equations are of enduring importance. The Schrödinger equation and the Pauli equation are under many circumstances excellent approximations of the relativistic variants. They are considerably easier to solve in practical problems than the relativistic counterparts. The Klein–Gordon equation and the Dirac equation, while being relativistic, do not represent full reconciliation of quantum mechanics and special relativity. The branch of quantum mechanics where these equations are studied the same way as the Schrödinger equation, often called relativistic quantum mechanics, while very successful, has its limitations (see e.g. Lamb shift) and conceptual problems (see e.g. Dirac sea). Relativity makes it inevitable that the number of particles in a system is not constant. For full reconciliation, quantum field theory is needed. In this theory, the wave equations and the wave functions have their place, but in a somewhat different guise. The main objects of interest are not the wave functions, but rather operators, so called "field operators" (or just fields where "operator" is understood) on the Hilbert space of states (to be described next section). It turns out that the original relativistic wave equations and their solutions are still needed to build the Hilbert space. Moreover, the "free fields operators", i.e. when interactions are assumed not to exist, turn out to (formally) satisfy the same equation as do the fields (wave functions) in many cases. Thus the Klein–Gordon equation (spin 0) and the Dirac equation (spin ) in this guise remain in the theory. Higher spin analogues include the Proca equation (spin 1), Rarita–Schwinger equation (spin ), and, more generally, the Bargmann–Wigner equations. For "massless" free fields two examples are the free field Maxwell equation (spin 1) and the free field Einstein equation (spin 2) for the field operators. All of them are essentially a direct consequence of the requirement of Lorentz invariance. Their solutions must transform under Lorentz transformation in a prescribed way, i.e. under a particular representation of the Lorentz group and that together with few other reasonable demands, e.g. the cluster decomposition property, with implications for causality is enough to fix the equations. This applies to free field equations; interactions are not included. If a Lagrangian density (including interactions) is available, then the Lagrangian formalism will yield an equation of motion at the classical level. This equation may be very complex and not amenable to solution. Any solution would refer to a "fixed" number of particles and would not account for the term "interaction" as referred to in these theories, which involves the creation and annihilation of particles and not external potentials as in ordinary "first quantized" quantum theory. In string theory, the situation remains analogous. For instance, a wave function in momentum space has the role of Fourier expansion coefficient in a general state of a particle (string) with momentum that is not sharply defined. Definition (one spinless particle in one dimension). For now, consider the simple case of a non-relativistic single particle, without spin, in one spatial dimension. More general cases are discussed below. 
According to the postulates of quantum mechanics, the state of a physical system, at fixed time formula_7, is given by the wave function belonging to a separable complex Hilbert space. As such, the inner product of two wave functions Ψ1 and Ψ2 can be defined as the complex number (at time t) formula_8. More details are given below. However, the inner product of a wave function Ψ with itself, formula_9, is "always" a positive real number. The square root of this number is called the norm of the wave function Ψ. The separable Hilbert space being considered is infinite-dimensional, which means there is no finite set of square integrable functions which can be added together in various combinations to create every possible square integrable function. Position-space wave functions. The state of such a particle is completely described by its wave function, formula_10 where x is position and t is time. This is a complex-valued function of two real variables x and t. For one spinless particle in one dimension, the wave function is interpreted as a probability amplitude; the square modulus of the wave function, the positive real number formula_11, is interpreted as the probability density for a measurement of the particle's position at a given time "t". The asterisk indicates the complex conjugate. If the particle's position is measured, its location cannot be determined from the wave function, but is described by a probability distribution. Normalization condition. The probability that its position "x" will be in the interval "a" ≤ "x" ≤ "b" is the integral of the density over this interval: formula_12 where t is the time at which the particle was measured. This leads to the normalization condition: formula_13 because if the particle is measured, there is 100% probability that it will be "somewhere". For a given system, the set of all possible normalizable wave functions (at any given time) forms an abstract mathematical vector space, meaning that it is possible to add together different wave functions, and multiply wave functions by complex numbers. Technically, wave functions form a ray in a projective Hilbert space rather than an ordinary vector space. Quantum states as vectors. At a particular instant of time, all values of the wave function Ψ("x", "t") are components of a vector. There are uncountably infinitely many of them and integration is used in place of summation. In Bra–ket notation, this vector is written formula_14 and is referred to as a "quantum state vector", or simply "quantum state". There are several advantages to understanding wave functions as representing elements of an abstract vector space. The time parameter is often suppressed, and will be in the following. The x coordinate is a continuous index. The position eigenstates are called "improper vectors" which, unlike "proper vectors" that are normalizable to unity, can only be normalized to a Dirac delta function: formula_15 Thus formula_16 and formula_17 which illuminates the identity operator formula_18 which is analogous to the completeness relation of an orthonormal basis in an N-dimensional Hilbert space. Finding the identity operator in a basis allows the abstract state to be expressed explicitly in a basis, and more (the inner product between two state vectors, and other operators for observables, can be expressed in the basis). Momentum-space wave functions. The particle also has a wave function in momentum space: formula_19 where p is the momentum in one dimension, which can be any value from −∞ to +∞, and t is time.
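Before continuing with the momentum-space description, the normalization condition and the interval probability above can be illustrated numerically. The following sketch is a minimal illustration (it assumes NumPy; the Gaussian packet, grid, and interval are arbitrary choices, and the integrals are approximated by simple Riemann sums on the grid):

```python
# Normalize a one-dimensional position-space wave packet on a grid and apply
# the Born rule to find the probability of the particle being in [a, b].
import numpy as np

x = np.linspace(-20.0, 20.0, 4001)                  # position grid
dx = x[1] - x[0]
psi = np.exp(-x ** 2 / 4.0) * np.exp(1j * 1.5 * x)  # unnormalized Gaussian packet

psi = psi / np.sqrt(np.sum(np.abs(psi) ** 2) * dx)  # enforce the normalization condition
print("total probability:", np.sum(np.abs(psi) ** 2) * dx)       # ~1.0

a, b = -1.0, 2.0
inside = (x >= a) & (x <= b)
print("P(a <= x <= b):", np.sum(np.abs(psi[inside]) ** 2) * dx)
```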
Analogous to the position case, the inner product of two wave functions Φ1("p", "t") and Φ2("p", "t") can be defined as: formula_20 One particular solution to the time-independent Schrödinger equation is formula_21 a plane wave, which can be used in the description of a particle with momentum exactly p, since it is an eigenfunction of the momentum operator. These functions are not normalizable to unity (they are not square-integrable), so they are not really elements of physical Hilbert space. The set formula_22 forms what is called the momentum basis. This "basis" is not a basis in the usual mathematical sense. For one thing, since the functions are not normalizable, they are instead normalized to a delta function, formula_23 For another thing, though they are linearly independent, there are too many of them (they form an uncountable set) for a basis for physical Hilbert space. They can still be used to express all functions in it using Fourier transforms as described next. Relations between position and momentum representations. The "x" and "p" representations are formula_24 Now take the projection of the state Ψ onto eigenfunctions of momentum using the last expression in the two equations, formula_25 Then utilizing the known expression for suitably normalized eigenstates of momentum in the position representation solutions of the free Schrödinger equation formula_26 one obtains formula_27 Likewise, using eigenfunctions of position, formula_28 The position-space and momentum-space wave functions are thus found to be Fourier transforms of each other. They are two representations of the same state; containing the same information, and either one is sufficient to calculate any property of the particle. In practice, the position-space wave function is used much more often than the momentum-space wave function. The potential entering the relevant equation (Schrödinger, Dirac, etc.) determines in which basis the description is easiest. For the harmonic oscillator, x and p enter symmetrically, so there it does not matter which description one uses. The same equation (modulo constants) results. From this, with a little bit of afterthought, it follows that solutions to the wave equation of the harmonic oscillator are eigenfunctions of the Fourier transform in "L"2. Definitions (other cases). Following are the general forms of the wave function for systems in higher dimensions and more particles, as well as including other degrees of freedom than position coordinates or momentum components. Finite dimensional Hilbert space. While Hilbert spaces originally refer to infinite dimensional complete inner product spaces they, by definition, include finite dimensional complete inner product spaces as well. In physics, they are often referred to as "finite dimensional Hilbert spaces". For every finite dimensional Hilbert space there exist orthonormal basis kets that span the entire Hilbert space. If the "N"-dimensional set formula_29 is orthonormal, then the projection operator for the space spanned by these states is given by: formula_30where the projection is equivalent to identity operator since formula_29 spans the entire Hilbert space, thus leaving any vector from Hilbert space unchanged. This is also known as completeness relation of finite dimensional Hilbert space. The wavefunction is instead given by: formula_31where formula_32, is a set of complex numbers which can be used to construct a wavefunction using the above formula. Probability interpretation of inner product. 
If the set formula_29 are eigenkets of an observable with non-degenerate eigenvalues formula_33, then by the postulates of quantum mechanics the probability of measuring the observable to be formula_33 is given according to the Born rule as: formula_34 More generally, if an eigenvalue formula_35 of some observable is degenerate, with the subset of eigenvectors from formula_29 belonging to it labelled as formula_36, then by the postulates of quantum mechanics the probability of measuring the observable to be formula_35 is given by: formula_37 where formula_38 is the projection operator onto the subspace spanned by formula_36. The equality follows from the orthonormal nature of formula_29. Hence the coefficients formula_32, which specify the state of the quantum mechanical system, have magnitudes whose squares give the probabilities of measuring the respective formula_39 states. Physical significance of relative phase. While the relative phase has observable effects in experiments, the global phase of the system is experimentally indistinguishable. For example, for a particle in a superposition of two states, the global phase cannot be distinguished by finding the expectation value of an observable or the probabilities of observing the different states, but relative phases can affect the expectation values of observables. While the overall phase of the system is considered to be arbitrary, the relative phase for each state formula_39 of a prepared state in superposition can be determined based on the physical meaning of the prepared state and its symmetry. For example, the construction of spin states along the x direction as a superposition of spin states along the z direction can be done by applying an appropriate rotation transformation to the spin-along-z states, which provides the appropriate phase of the states relative to each other. Application to include spin. An example of a finite dimensional Hilbert space can be constructed using the spin eigenkets of a spin-formula_40 particle, which form a formula_41 dimensional Hilbert space. However, the general wavefunction of a particle that fully describes its state always belongs to an infinite dimensional Hilbert space, since it involves a tensor product with the Hilbert space relating to the position or momentum of the particle. Nonetheless, the techniques developed for finite dimensional Hilbert spaces are useful, since they can either be treated independently or treated taking into account the linearity of the tensor product. Since the spin operator for a given spin-formula_40 particle can be represented as a finite matrix with formula_42 entries which acts on formula_41 independent spin vector components, it is usually preferable to denote spin components using matrix/column/row notation as applicable. For example, each spin eigenket is usually identified with a column vector: formula_43 but this is a common abuse of notation, because the kets are not synonymous with or equal to the column vectors. Column vectors simply provide a convenient way to express the spin components. Corresponding to this notation, the z-component spin operator can be written as: formula_44 since the eigenvectors of the z-component spin operator are the above column vectors, with eigenvalues being the corresponding spin quantum numbers. Corresponding to this notation, a vector from such a finite dimensional Hilbert space is hence represented as: formula_45 where formula_46 are the corresponding complex numbers. In the following discussion involving spin, the complete wavefunction is considered as a tensor product of spin states from the finite dimensional Hilbert space and the wavefunction which was previously developed.
The basis for this Hilbert space are hence considered: formula_47. One-particle states in 3d position space. The position-space wave function of a single particle without spin in three spatial dimensions is similar to the case of one spatial dimension above: formula_48 where r is the position vector in three-dimensional space, and "t" is time. As always Ψ(r, "t") is a complex-valued function of real variables. As a single vector in Dirac notation formula_49 All the previous remarks on inner products, momentum space wave functions, Fourier transforms, and so on extend to higher dimensions. For a particle with spin, ignoring the position degrees of freedom, the wave function is a function of spin only (time is a parameter); formula_50 where "s"z is the spin projection quantum number along the z axis. (The z axis is an arbitrary choice; other axes can be used instead if the wave function is transformed appropriately, see below.) The "sz" parameter, unlike r and t, is a discrete variable. For example, for a spin-1/2 particle, "s"z can only be +1/2 or −1/2, and not any other value. (In general, for spin s, "sz" can be "s", "s" − 1, ..., −"s" + 1, −"s"). Inserting each quantum number gives a complex valued function of space and time, there are 2"s" + 1 of them. These can be arranged into a column vector formula_51 In bra–ket notation, these easily arrange into the components of a vector: formula_52 The entire vector "ξ" is a solution of the Schrödinger equation (with a suitable Hamiltonian), which unfolds to a coupled system of 2"s" + 1 ordinary differential equations with solutions "ξ"("s", "t"), "ξ"("s" − 1, "t"), ..., "ξ"(−"s", "t"). The term "spin function" instead of "wave function" is used by some authors. This contrasts the solutions to position space wave functions, the position coordinates being continuous degrees of freedom, because then the Schrödinger equation does take the form of a wave equation. More generally, for a particle in 3d with any spin, the wave function can be written in "position–spin space" as: formula_53 and these can also be arranged into a column vector formula_54 in which the spin dependence is placed in indexing the entries, and the wave function is a complex vector-valued function of space and time only. All values of the wave function, not only for discrete but continuous variables also, collect into a single vector formula_55 For a single particle, the tensor product ⊗ of its position state vector and spin state vector gives the composite position-spin state vector formula_56 with the identifications formula_57 formula_58 formula_59 The tensor product factorization of energy eigenstates is always possible if the orbital and spin angular momenta of the particle are separable in the Hamiltonian operator underlying the system's dynamics (in other words, the Hamiltonian can be split into the sum of orbital and spin terms). The time dependence can be placed in either factor, and time evolution of each can be studied separately. Under such Hamiltonians, any tensor product state evolves into another tensor product state, which essentially means any unentangled state remains unentangled under time evolution. This is said to happen when there is no physical interaction between the states of the tensor products. In the case of non separable Hamiltonians, energy eigenstates are said to be some linear combination of such states, which need not be factorizable; examples include a particle in a magnetic field, and spin–orbit coupling. 
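As a small numerical sketch of the finite-dimensional spin machinery and of the position–spin tensor product discussed above, consider a spin-1/2 particle; the particular spin state, the toy three-point "position" amplitudes, and the units are illustrative assumptions only, not taken from the article.

```python
import numpy as np

hbar = 1.0   # illustrative units

# Spin-1/2 basis kets |+1/2> and |-1/2> identified with column vectors
# (a convenient abuse of notation, as noted above)
up   = np.array([1.0, 0.0], dtype=complex)
down = np.array([0.0, 1.0], dtype=complex)

# z-component spin operator in this basis; its eigenvectors are the columns above
Sz = (hbar / 2) * np.array([[1, 0],
                            [0, -1]], dtype=complex)

# An assumed normalized spin state: equal superposition with relative phase i
xi = (up + 1j * down) / np.sqrt(2)

# Born rule: probabilities of measuring s_z = +1/2 and s_z = -1/2, and the expectation of S_z
print(abs(np.vdot(up, xi))**2, abs(np.vdot(down, xi))**2)   # 0.5 0.5
print(np.vdot(xi, Sz @ xi).real)                            # 0.0

# A toy spatial part on a three-point grid (assumed amplitudes, already normalized)
psi = np.array([0.5, 0.5j, np.sqrt(0.5)], dtype=complex)

# Composite position-spin state as a tensor (Kronecker) product; the product of two
# normalized factors is itself normalized and unentangled
Psi = np.kron(psi, xi)
print(Psi.shape, np.linalg.norm(Psi))                       # (6,) 1.0
```

A composite state that cannot be written as a single such Kronecker product of a spatial factor and a spin factor is entangled, which is the situation alluded to above for non-separable Hamiltonians.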
The preceding discussion is not limited to spin as a discrete variable, the total angular momentum "J" may also be used. Other discrete degrees of freedom, like isospin, can expressed similarly to the case of spin above. Many-particle states in 3d position space. If there are many particles, in general there is only one wave function, not a separate wave function for each particle. The fact that "one" wave function describes "many" particles is what makes quantum entanglement and the EPR paradox possible. The position-space wave function for "N" particles is written: formula_60 where r"i" is the position of the i-th particle in three-dimensional space, and t is time. Altogether, this is a complex-valued function of 3"N" + 1 real variables. In quantum mechanics there is a fundamental distinction between "identical particles" and "distinguishable" particles. For example, any two electrons are identical and fundamentally indistinguishable from each other; the laws of physics make it impossible to "stamp an identification number" on a certain electron to keep track of it. This translates to a requirement on the wave function for a system of identical particles: formula_61 where the + sign occurs if the particles are "all bosons" and − sign if they are "all fermions". In other words, the wave function is either totally symmetric in the positions of bosons, or totally antisymmetric in the positions of fermions. The physical interchange of particles corresponds to mathematically switching arguments in the wave function. The antisymmetry feature of fermionic wave functions leads to the Pauli principle. Generally, bosonic and fermionic symmetry requirements are the manifestation of particle statistics and are present in other quantum state formalisms. For "N" "distinguishable" particles (no two being identical, i.e. no two having the same set of quantum numbers), there is no requirement for the wave function to be either symmetric or antisymmetric. For a collection of particles, some identical with coordinates r1, r2, ... and others distinguishable x1, x2, ... (not identical with each other, and not identical to the aforementioned identical particles), the wave function is symmetric or antisymmetric in the identical particle coordinates r"i" only: formula_62 Again, there is no symmetry requirement for the distinguishable particle coordinates x"i". The wave function for "N" particles each with spin is the complex-valued function formula_63 Accumulating all these components into a single vector, formula_64 For identical particles, symmetry requirements apply to both position and spin arguments of the wave function so it has the overall correct symmetry. The formulae for the inner products are integrals over all coordinates or momenta and sums over all spin quantum numbers. For the general case of "N" particles with spin in 3-d, formula_65 this is altogether N three-dimensional volume integrals and N sums over the spins. The differential volume elements "d"3r"i" are also written ""dV""i"" or ""dxi dyi dzi"". The multidimensional Fourier transforms of the position or position–spin space wave functions yields momentum or momentum–spin space wave functions. Probability interpretation. For the general case of N particles with spin in 3d, if Ψ is interpreted as a probability amplitude, the probability density is formula_66 and the probability that particle 1 is in region "R"1 with spin "s""z"1 = "m"1 "and" particle 2 is in region "R"2 with spin "s""z"2 = "m"2 etc. 
at time "t" is the integral of the probability density over these regions and evaluated at these spin numbers: formula_67 Physical significance of phase. In non-relativistic quantum mechanics, it can be shown using Schrodinger's time dependent wave equation that the equation: formula_68is satisfied, where formula_69 is the probability density and formula_70, is known as the probability flux in accordance with the continuity equation form of the above equation. Using the following expression for wavefunction:formula_71where formula_69 is the probability density and formula_72 is the phase of the wavefunction, it can be shown that: formula_73 Hence the spacial variation of phase characterizes the probability flux. In classical analogy, for formula_74, the quantity formula_75 is analogous with velocity. Note that this does not imply a literal interpretation of formula_75 as velocity since velocity and position cannot be simultaneously determined as per the uncertainty principle. Substituting the form of wavefunction in Schrodinger's time dependent wave equation, and taking the classical limit, formula_76: formula_77 Which is analogous to Hamilton-Jacobi equation from classical mechanics. This interpretation fits with Hamilton–Jacobi theory, in which formula_78, where "S" is Hamilton's principal function. Time dependence. For systems in time-independent potentials, the wave function can always be written as a function of the degrees of freedom multiplied by a time-dependent phase factor, the form of which is given by the Schrödinger equation. For N particles, considering their positions only and suppressing other degrees of freedom, formula_79 where E is the energy eigenvalue of the system corresponding to the eigenstate Ψ. Wave functions of this form are called stationary states. The time dependence of the quantum state and the operators can be placed according to unitary transformations on the operators and states. For any quantum state and operator "O", in the Schrödinger picture changes with time according to the Schrödinger equation while "O" is constant. In the Heisenberg picture it is the other way round, is constant while "O"("t") evolves with time according to the Heisenberg equation of motion. The Dirac (or interaction) picture is intermediate, time dependence is places in both operators and states which evolve according to equations of motion. It is useful primarily in computing S-matrix elements. Non-relativistic examples. The following are solutions to the Schrödinger equation for one non-relativistic spinless particle. Finite potential barrier. One of the most prominent features of wave mechanics is the possibility for a particle to reach a location with a prohibitive (in classical mechanics) force potential. A common model is the "potential barrier", the one-dimensional case has the potential formula_80 and the steady-state solutions to the wave equation have the form (for some constants "k", "κ") formula_81 Note that these wave functions are not normalized; see scattering theory for discussion. The standard interpretation of this is as a stream of particles being fired at the step from the left (the direction of negative x): setting "A"r = 1 corresponds to firing particles singly; the terms containing "A"r and "C"r signify motion to the right, while "A"l and "C"l – to the left. Under this beam interpretation, put "C"l = 0 since no particles are coming from the right. 
By applying the continuity of wave functions and their derivatives at the boundaries, it is hence possible to determine the constants above. In a semiconductor crystallite whose radius is smaller than the size of its exciton Bohr radius, the excitons are squeezed, leading to quantum confinement. The energy levels can then be modeled using the particle in a box model in which the energy of different states is dependent on the length of the box. Quantum harmonic oscillator. The wave functions for the quantum harmonic oscillator can be expressed in terms of Hermite polynomials "Hn"; they are formula_82 where "n" = 0, 1, 2, ... Hydrogen atom. The wave functions of an electron in a hydrogen atom are expressed in terms of spherical harmonics and generalized Laguerre polynomials (these are defined differently by different authors; see the main article on them and the hydrogen atom). It is convenient to use spherical coordinates, and the wave function can be separated into functions of each coordinate, formula_83 where "R" are radial functions and "Y"("θ", "φ") are spherical harmonics of degree "ℓ" and order "m". This is the only atom for which the Schrödinger equation has been solved exactly. Multi-electron atoms require approximative methods. The family of solutions is: formula_84 where "a"0 = 4"πε"0"ħ"2/"mee"2 is the Bohr radius, "L" are the generalized Laguerre polynomials of degree "n" − "ℓ" − 1, "n" = 1, 2, ... is the principal quantum number, "ℓ" = 0, 1, ..., "n" − 1 the azimuthal quantum number, "m" = −"ℓ", −"ℓ" + 1, ..., "ℓ" − 1, "ℓ" the magnetic quantum number. Hydrogen-like atoms have very similar solutions. This solution does not take into account the spin of the electron. In the figure of the hydrogen orbitals, the 19 sub-images are images of wave functions in position space (their norm squared). The wave functions represent the abstract state characterized by the triple of quantum numbers ("n", "ℓ", "m"), in the lower right of each image. These are the principal quantum number, the orbital angular momentum quantum number, and the magnetic quantum number. Together with one spin-projection quantum number of the electron, this is a complete set of observables. The figure can serve to illustrate some further properties of the function spaces of wave functions. Wave functions and function spaces. The concept of function spaces enters naturally in the discussion about wave functions. A function space is a set of functions, usually with some defining requirements on the functions (in the present case that they are square integrable), sometimes with an algebraic structure on the set (in the present case a vector space structure with an inner product), together with a topology on the set. The latter will be sparsely used here; it is only needed to obtain a precise definition of what it means for a subset of a function space to be closed. It will be concluded below that the function space of wave functions is a Hilbert space. This observation is the foundation of the predominant mathematical formulation of quantum mechanics. Vector space structure. A wave function is an element of a function space partly characterized by concrete and abstract descriptions which closely parallel each other. This similarity is of course not accidental. There are also distinctions between the spaces to keep in mind. Representations. Basic states are characterized by a set of quantum numbers. This is a set of eigenvalues of a maximal set of commuting observables.
Physical observables are represented by linear operators, also called observables, on the vectors space. Maximality means that there can be added to the set no further algebraically independent observables that commute with the ones already present. A choice of such a set may be called a choice of representation. The abstract states are "abstract" only in that an arbitrary choice necessary for a particular "explicit" description of it is not given. This is the same as saying that no choice of maximal set of commuting observables has been given. This is analogous to a vector space without a specified basis. Wave functions corresponding to a state are accordingly not unique. This non-uniqueness reflects the non-uniqueness in the choice of a maximal set of commuting observables. For one spin particle in one dimension, to a particular state there corresponds two wave functions, Ψ("x", "S""z") and Ψ("p", "S""y"), both describing the "same" state. Each choice of representation should be thought of as specifying a unique function space in which wave functions corresponding to that choice of representation lives. This distinction is best kept, even if one could argue that two such function spaces are mathematically equal, e.g. being the set of square integrable functions. One can then think of the function spaces as two distinct copies of that set. Inner product. There is an additional algebraic structure on the vector spaces of wave functions and the abstract state space. This motivates the introduction of an inner product on the vector space of abstract quantum states, compatible with the mathematical observations above when passing to a representation. It is denoted (Ψ, Φ), or in the Bra–ket notation . It yields a complex number. With the inner product, the function space is an inner product space. The explicit appearance of the inner product (usually an integral or a sum of integrals) depends on the choice of representation, but the complex number (Ψ, Φ) does not. Much of the physical interpretation of quantum mechanics stems from the Born rule. It states that the probability p of finding upon measurement the state Φ given the system is in the state Ψ is formula_86 where Φ and Ψ are assumed normalized. Consider a scattering experiment. In quantum field theory, if Φout describes a state in the "distant future" (an "out state") after interactions between scattering particles have ceased, and Ψin an "in state" in the "distant past", then the quantities (Φout, Ψin), with Φout and Ψin varying over a complete set of in states and out states respectively, is called the S-matrix or scattering matrix. Knowledge of it is, effectively, having "solved" the theory at hand, at least as far as predictions go. Measurable quantities such as decay rates and scattering cross sections are calculable from the S-matrix. Hilbert space. The above observations encapsulate the essence of the function spaces of which wave functions are elements. However, the description is not yet complete. There is a further technical requirement on the function space, that of completeness, that allows one to take limits of sequences in the function space, and be ensured that, if the limit exists, it is an element of the function space. A complete inner product space is called a Hilbert space. The property of completeness is crucial in advanced treatments and applications of quantum mechanics. For instance, the existence of projection operators or orthogonal projections relies on the completeness of the space. 
These projection operators, in turn, are essential for the statement and proof of many useful theorems, e.g. the spectral theorem. It is not very important in introductory quantum mechanics, and technical details and links may be found in footnotes like the one that follows. The space "L"2 is a Hilbert space, with inner product presented later. The function space of the example of the figure is a subspace of "L"2. A subspace of a Hilbert space is a Hilbert space if it is closed. In summary, the set of all possible normalizable wave functions for a system with a particular choice of basis, together with the null vector, constitute a Hilbert space. Not all functions of interest are elements of some Hilbert space, say "L"2. The most glaring example is the set of functions "e"^(2"πi" "p"·"x"/"h"). These are plane wave solutions of the Schrödinger equation for a free particle, but are not normalizable, hence not in "L"2. But they are nonetheless fundamental for the description. One can, using them, express functions that "are" normalizable using wave packets. They are, in a sense, a basis (but not a Hilbert space basis, nor a Hamel basis) in which wave functions of interest can be expressed. There is also the artifact "normalization to a delta function" that is frequently employed for notational convenience, see further down. The delta functions themselves are not square integrable either. The above description of the function space containing the wave functions is mostly mathematically motivated. The function spaces are, due to completeness, very "large" in a certain sense. Not all functions are realistic descriptions of any physical system. For instance, in the function space "L"2 one can find the function that takes on the value 0 for all rational numbers and -"i" for the irrationals in the interval [0, 1]. This "is" square integrable, but can hardly represent a physical state. Common Hilbert spaces. While the space of solutions as a whole is a Hilbert space, there are many other Hilbert spaces that commonly occur as ingredients. More generally, one may consider a unified treatment of all second order polynomial solutions to the Sturm–Liouville equations in the setting of Hilbert space. These include the Legendre and Laguerre polynomials as well as Chebyshev polynomials, Jacobi polynomials and Hermite polynomials. All of these actually appear in physical problems, the latter ones in the harmonic oscillator, and what is otherwise a bewildering maze of properties of special functions becomes an organized body of facts. There also occur finite-dimensional Hilbert spaces. The space C"n" is a Hilbert space of dimension n. The inner product is the standard inner product on these spaces. In it, the "spin part" of a single particle wave function resides. With more particles, the situation is more complicated. One has to employ tensor products and use representation theory of the symmetry groups involved (the rotation group and the Lorentz group respectively) to extract from the tensor product the spaces in which the (total) spin wave functions reside. (Further problems arise in the relativistic case unless the particles are free. See the Bethe–Salpeter equation.) Corresponding remarks apply to the concept of isospin, for which the symmetry group is SU(2). The models of the nuclear forces of the sixties (still useful today, see nuclear force) used the symmetry group SU(3).
In this case, as well, the part of the wave functions corresponding to the inner symmetries reside in some C"n" or subspaces of tensor products of such spaces. Due to the infinite-dimensional nature of the system, the appropriate mathematical tools are objects of study in functional analysis. Simplified description. Not all introductory textbooks take the long route and introduce the full Hilbert space machinery; instead the focus is on the non-relativistic Schrödinger equation in the position representation for certain standard potentials. The following constraints on the wave function are sometimes explicitly formulated for the calculations and physical interpretation to make sense: the wave function must be square integrable, and it must be everywhere continuous and everywhere continuously differentiable. It is possible to relax these conditions somewhat for special purposes. If these requirements are not met, it is not possible to interpret the wave function as a probability amplitude. Note that exceptions to the continuity of derivatives rule can arise at points where the potential field has an infinite discontinuity. For example, in the particle in a box, the derivative of the wavefunction can be discontinuous at the boundary of the box, where the potential is known to have an infinite discontinuity. This does not alter the structure of the Hilbert space that these particular wave functions inhabit, but the subspace of the square-integrable functions "L"2 (which is a Hilbert space) satisfying the second requirement "is not closed" in "L"2, hence is not a Hilbert space in itself. The functions that do not meet the requirements are still needed for both technical and practical reasons. More on wave functions and abstract state space. As has been demonstrated, the set of all possible wave functions in some representation for a system constitutes an, in general, infinite-dimensional Hilbert space. Due to the multiple possible choices of representation basis, these Hilbert spaces are not unique. One therefore talks about an abstract Hilbert space, state space, where the choice of representation and basis is left undetermined. Specifically, each state is represented as an abstract vector in state space. A quantum state in any representation is generally expressed as a vector formula_87 where α = ("α"1, "α"2, ..., "αn") are discrete quantum numbers and ω = ("ω"1, "ω"2, ..., "ωm") are continuous variables. These quantum numbers index the components of the state vector. Moreover, all α are in an "n"-dimensional set "A" = "A"1 × "A"2 × ... × "An" where each "Ai" is the set of allowed values for "αi"; all ω are in an "m"-dimensional "volume" Ω ⊆ ℝ"m" where Ω = Ω1 × Ω2 × ... × Ω"m" and each Ω"i" ⊆ R is the set of allowed values for "ωi", a subset of the real numbers R. For generality n and m are not necessarily equal. Example: The probability density of finding the system at time formula_7 in the state |"α", "ω"⟩ is formula_88 The probability of finding the system with α in some or all possible discrete-variable configurations, "D" ⊆ "A", and ω in some or all possible continuous-variable configurations, "C" ⊆ Ω, is the sum and integral over the density, formula_89 Since the sum of all probabilities must be 1, the normalization condition formula_90 must hold at all times during the evolution of the system. The normalization condition requires "ρ" "d"m"ω" to be dimensionless; by dimensional analysis Ψ must have the same units as ("ω"1"ω"2..."ωm")−1/2. Ontology. Whether the wave function exists in reality, and what it represents, are major questions in the interpretation of quantum mechanics. Many famous physicists of a previous generation puzzled over this problem, such as Erwin Schrödinger, Albert Einstein and Niels Bohr.
Some advocate formulations or variants of the Copenhagen interpretation (e.g. Bohr, Eugene Wigner and John von Neumann) while others, such as John Archibald Wheeler or Edwin Thompson Jaynes, take the more classical approach and regard the wave function as representing information in the mind of the observer, i.e. a measure of our knowledge of reality. Some, including Schrödinger, David Bohm and Hugh Everett III and others, argued that the wave function must have an objective, physical existence. Einstein thought that a complete description of physical reality should refer directly to physical space and time, as distinct from the wave function, which refers to an abstract mathematical space. See also. &lt;templatestyles src="Div col/styles.css"/&gt; Remarks. &lt;templatestyles src="Reflist/styles.css" /&gt; Citations. &lt;templatestyles src="Reflist/styles.css" /&gt; General sources. &lt;templatestyles src="Refbegin/styles.css" /&gt; Further reading. &lt;templatestyles src="Refbegin/styles.css" /&gt;
[ { "math_id": 0, "text": "f" }, { "math_id": 1, "text": "E" }, { "math_id": 2, "text": "E = hf" }, { "math_id": 3, "text": "p" }, { "math_id": 4, "text": "\\lambda" }, { "math_id": 5, "text": "\\lambda = \\frac{h}{p}" }, { "math_id": 6, "text": "h" }, { "math_id": 7, "text": "t" }, { "math_id": 8, "text": "( \\Psi_1 , \\Psi_2 ) = \\int_{-\\infty}^\\infty \\, \\Psi_1^*(x, t)\\Psi_2(x, t)\\,dx < \\infty" }, { "math_id": 9, "text": "(\\Psi,\\Psi) = \\|\\Psi\\|^2" }, { "math_id": 10, "text": "\\Psi(x,t)\\,," }, { "math_id": 11, "text": " \\left|\\Psi(x, t)\\right|^2 = \\Psi^*(x, t)\\Psi(x, t) = \\rho(x), " }, { "math_id": 12, "text": "P_{a\\le x\\le b} (t) = \\int_a^b \\,|\\Psi(x,t)|^2 dx " }, { "math_id": 13, "text": "\\int_{-\\infty}^\\infty \\, |\\Psi(x,t)|^2dx = 1\\,," }, { "math_id": 14, "text": "|\\Psi(t)\\rangle = \\int\\Psi(x,t) |x\\rangle dx " }, { "math_id": 15, "text": "\\langle x' | x \\rangle = \\delta(x' - x) " }, { "math_id": 16, "text": "\\langle x' |\\Psi\\rangle = \\int \\Psi(x) \\langle x'|x\\rangle dx= \\Psi(x') " }, { "math_id": 17, "text": "|\\Psi\\rangle = \\int |x\\rangle \\langle x |\\Psi\\rangle dx= \\left( \\int |x\\rangle \\langle x |dx\\right) |\\Psi\\rangle " }, { "math_id": 18, "text": "I = \\int |x\\rangle \\langle x | dx\\,. " }, { "math_id": 19, "text": "\\Phi(p,t)" }, { "math_id": 20, "text": "(\\Phi_1 , \\Phi_2 ) = \\int_{-\\infty}^\\infty \\, \\Phi_1^*(p, t)\\Phi_2(p, t) dp\\,." }, { "math_id": 21, "text": "\\Psi_p(x) = e^{ipx/\\hbar}," }, { "math_id": 22, "text": "\\{\\Psi_p(x, t), -\\infty \\le p \\le \\infty\\}" }, { "math_id": 23, "text": "(\\Psi_{p},\\Psi_{p'}) = \\delta(p - p')." }, { "math_id": 24, "text": "\\begin{align}\n|\\Psi\\rangle = I|\\Psi\\rangle &= \\int |x\\rangle \\langle x|\\Psi\\rangle dx = \\int \\Psi(x) |x\\rangle dx,\\\\\n|\\Psi\\rangle = I|\\Psi\\rangle &= \\int |p\\rangle \\langle p|\\Psi\\rangle dp = \\int \\Phi(p) |p\\rangle dp.\n\\end{align}" }, { "math_id": 25, "text": "\\int \\Psi(x) \\langle p|x\\rangle dx = \\int \\Phi(p') \\langle p|p'\\rangle dp' = \\int \\Phi(p') \\delta(p-p') dp' = \\Phi(p)." }, { "math_id": 26, "text": "\\langle x | p \\rangle = p(x) = \\frac{1}{\\sqrt{2\\pi\\hbar}}e^{\\frac{i}{\\hbar}px} \\Rightarrow \\langle p | x \\rangle = \\frac{1}{\\sqrt{2\\pi\\hbar}}e^{-\\frac{i}{\\hbar}px}," }, { "math_id": 27, "text": "\\Phi(p) = \\frac{1}{\\sqrt{2\\pi\\hbar}}\\int \\Psi(x)e^{-\\frac{i}{\\hbar}px}dx\\,." }, { "math_id": 28, "text": "\\Psi(x) = \\frac{1}{\\sqrt{2\\pi\\hbar}}\\int \\Phi(p)e^{\\frac{i}{\\hbar}px}dp\\,." 
}, { "math_id": 29, "text": "\\{ |\\phi_i\\rangle \\}" }, { "math_id": 30, "text": "P = \\sum_i |\\phi_i\\rangle\\langle \\phi_i | = I " }, { "math_id": 31, "text": "|\\psi\\rangle = I|\\psi\\rangle = \\sum_i |\\phi_i\\rangle\\langle \\phi_i |\\psi\\rangle " }, { "math_id": 32, "text": "\\{ \\langle \\phi_i |\\psi\\rangle \\} " }, { "math_id": 33, "text": "\\lambda_i" }, { "math_id": 34, "text": "P_\\psi(\\lambda_i) = |\\langle \\phi_i|\\psi \\rangle|^2 " }, { "math_id": 35, "text": "\\lambda" }, { "math_id": 36, "text": "\\{ |\\lambda^{(j)}\\rangle \\}" }, { "math_id": 37, "text": "P_\\psi(\\lambda) =\\sum_j |\\langle \\lambda^{(j)}|\\psi \\rangle|^2 = |\\widehat P_\\lambda |\\psi \\rangle |^2 " }, { "math_id": 38, "text": "\\widehat P_\\lambda =\\sum_j|\\lambda^{(j)}\\rangle\\langle\\lambda^{(j)}| " }, { "math_id": 39, "text": "|\\phi_i\\rangle " }, { "math_id": 40, "text": "s" }, { "math_id": 41, "text": "2s+1" }, { "math_id": 42, "text": "(2s+1)^2 " }, { "math_id": 43, "text": "|s\\rangle \\leftrightarrow \\begin{bmatrix} 1 \\\\ 0 \\\\ \\vdots \\\\ 0 \\\\ 0 \\\\ \\end{bmatrix} \\,, \\quad |s-1\\rangle \\leftrightarrow \\begin{bmatrix} 0 \\\\ 1 \\\\ \\vdots \\\\ 0 \\\\ 0 \\\\ \\end{bmatrix} \\,, \\ldots \\,, \\quad |-(s-1)\\rangle \\leftrightarrow \\begin{bmatrix} 0 \\\\ 0 \\\\ \\vdots \\\\ 1 \\\\ 0 \\\\ \\end{bmatrix} \\,,\\quad |-s\\rangle \\leftrightarrow \\begin{bmatrix} 0 \\\\ 0 \\\\ \\vdots \\\\ 0 \\\\ 1 \\\\ \\end{bmatrix}" }, { "math_id": 44, "text": "\\frac{1}{\\hbar}\\hat{S}_z = \\begin{bmatrix} s & 0 & \\cdots & 0 & 0 \\\\ 0 & s-1 & \\cdots & 0 & 0 \\\\ \\vdots & \\vdots & \\ddots & \\vdots & \\vdots \\\\ 0 & 0 & \\cdots & -(s-1) & 0 \\\\ 0 & 0 & \\cdots & 0 & -s \\end{bmatrix} " }, { "math_id": 45, "text": "|\\phi\\rangle = \\begin{bmatrix} \\langle s| \\phi\\rangle \\\\ \\langle s-1| \\phi\\rangle \\\\ \\vdots \\\\ \\langle -(s-1)| \\phi\\rangle \\\\ \\langle -s| \\phi\\rangle \\\\ \\end{bmatrix} =\\begin{bmatrix} \\varepsilon_s \\\\ \\varepsilon_{s-1}\\\\ \\vdots \\\\ \\varepsilon_{-s+1} \\\\ \\varepsilon_{-s} \\\\ \\end{bmatrix} " }, { "math_id": 46, "text": " \\{ \\varepsilon_i \\} " }, { "math_id": 47, "text": " |\\mathbf{r}, s_z\\rangle = |\\mathbf{r}\\rangle |s_z\\rangle " }, { "math_id": 48, "text": "\\Psi(\\mathbf{r},t)" }, { "math_id": 49, "text": "|\\Psi(t)\\rangle = \\int d^3\\! \\mathbf{r}\\, \\Psi(\\mathbf{r},t) \\,|\\mathbf{r}\\rangle " }, { "math_id": 50, "text": "\\xi(s_z,t)" }, { "math_id": 51, "text": "\\xi = \\begin{bmatrix} \\xi(s,t) \\\\ \\xi(s-1,t) \\\\ \\vdots \\\\ \\xi(-(s-1),t) \\\\ \\xi(-s,t) \\\\ \\end{bmatrix} = \\xi(s,t) \\begin{bmatrix} 1 \\\\ 0 \\\\ \\vdots \\\\ 0 \\\\ 0 \\\\ \\end{bmatrix} + \\xi(s-1,t)\\begin{bmatrix} 0 \\\\ 1 \\\\ \\vdots \\\\ 0 \\\\ 0 \\\\ \\end{bmatrix} + \\cdots + \\xi(-(s-1),t) \\begin{bmatrix} 0 \\\\ 0 \\\\ \\vdots \\\\ 1 \\\\ 0 \\\\ \\end{bmatrix} + \\xi(-s,t) \\begin{bmatrix} 0 \\\\ 0 \\\\ \\vdots \\\\ 0 \\\\ 1 \\\\ \\end{bmatrix} " }, { "math_id": 52, "text": "|\\xi (t)\\rangle = \\sum_{s_z=-s}^s \\xi(s_z,t) \\,| s_z \\rangle " }, { "math_id": 53, "text": "\\Psi(\\mathbf{r},s_z,t)" }, { "math_id": 54, "text": "\\Psi(\\mathbf{r},t) = \\begin{bmatrix} \\Psi(\\mathbf{r},s,t) \\\\ \\Psi(\\mathbf{r},s-1,t) \\\\ \\vdots \\\\ \\Psi(\\mathbf{r},-(s-1),t) \\\\ \\Psi(\\mathbf{r},-s,t) \\\\ \\end{bmatrix}" }, { "math_id": 55, "text": "|\\Psi(t)\\rangle =\n\\sum_{s_z}\\int d^3\\!\\mathbf{r} \\,\\Psi(\\mathbf{r},s_z,t)\\, |\\mathbf{r}, s_z\\rangle " }, { "math_id": 56, "text": "|\\psi(t)\\rangle\\! \\otimes\\! 
|\\xi(t)\\rangle =\n\\sum_{s_z}\\int d^3\\! \\mathbf{r}\\, \\psi(\\mathbf{r},t)\\,\\xi(s_z,t) \\,|\\mathbf{r}\\rangle \\!\\otimes\\! |s_z\\rangle " }, { "math_id": 57, "text": "|\\Psi (t)\\rangle = |\\psi(t)\\rangle\n\\!\\otimes\\!\n |\\xi(t)\\rangle " }, { "math_id": 58, "text": "\\Psi(\\mathbf{r},s_z,t) = \\psi(\\mathbf{r},t)\\,\\xi(s_z,t) " }, { "math_id": 59, "text": "|\\mathbf{r},s_z \\rangle= |\\mathbf{r}\\rangle \\!\\otimes\\! |s_z\\rangle " }, { "math_id": 60, "text": "\\Psi(\\mathbf{r}_1,\\mathbf{r}_2 \\cdots \\mathbf{r}_N,t)" }, { "math_id": 61, "text": "\\Psi \\left ( \\ldots \\mathbf{r}_a, \\ldots , \\mathbf{r}_b, \\ldots \\right ) = \\pm \\Psi \\left ( \\ldots \\mathbf{r}_b, \\ldots , \\mathbf{r}_a, \\ldots \\right )" }, { "math_id": 62, "text": "\\Psi \\left ( \\ldots \\mathbf{r}_a, \\ldots , \\mathbf{r}_b, \\ldots , \\mathbf{x}_1, \\mathbf{x}_2, \\ldots \\right ) = \\pm \\Psi \\left ( \\ldots \\mathbf{r}_b, \\ldots , \\mathbf{r}_a, \\ldots , \\mathbf{x}_1, \\mathbf{x}_2, \\ldots \\right )" }, { "math_id": 63, "text": "\\Psi(\\mathbf{r}_1, \\mathbf{r}_2 \\cdots \\mathbf{r}_N, s_{z\\,1}, s_{z\\,2} \\cdots s_{z\\,N}, t)" }, { "math_id": 64, "text": "| \\Psi \\rangle = \\overbrace{\\sum_{s_{z\\,1},\\ldots,s_{z\\,N}}}^{\\text{discrete labels}} \\overbrace{\\int_{R_N} d^3\\mathbf{r}_N \\cdots \\int_{R_1} d^3\\mathbf{r}_1}^{\\text{continuous labels}} \\; \\underbrace{{\\Psi}( \\mathbf{r}_1, \\ldots, \\mathbf{r}_N , s_{z\\,1} , \\ldots , s_{z\\,N} )}_{\\begin{array}{c}\\text{wave function (component of } \\\\ \\text{ state vector along basis state)}\\end{array}} \\; \\underbrace{| \\mathbf{r}_1, \\ldots, \\mathbf{r}_N , s_{z\\,1} , \\ldots , s_{z\\,N} \\rangle }_{\\text{basis state (basis ket)}}\\,." }, { "math_id": 65, "text": " ( \\Psi_1 , \\Psi_2 ) = \\sum_{s_{z\\,N}} \\cdots \\sum_{s_{z\\,2}} \\sum_{s_{z\\,1}} \\int\\limits_{\\mathrm{ all \\, space}} d ^3\\mathbf{r}_1 \\int\\limits_{\\mathrm{ all \\, space}} d ^3\\mathbf{r}_2\\cdots \\int\\limits_{\\mathrm{ all \\, space}} d ^3 \\mathbf{r}_N \\Psi^{*}_1 \\left(\\mathbf{r}_1 \\cdots \\mathbf{r}_N,s_{z\\,1}\\cdots s_{z\\,N},t \\right )\\Psi_2 \\left(\\mathbf{r}_1 \\cdots \\mathbf{r}_N,s_{z\\,1}\\cdots s_{z\\,N},t \\right ) " }, { "math_id": 66, "text": "\\rho\\left(\\mathbf{r}_1 \\cdots \\mathbf{r}_N,s_{z\\,1}\\cdots s_{z\\,N},t \\right ) = \\left | \\Psi\\left (\\mathbf{r}_1 \\cdots \\mathbf{r}_N,s_{z\\,1}\\cdots s_{z\\,N},t \\right ) \\right |^2" }, { "math_id": 67, "text": "P_{\\mathbf{r}_1\\in R_1,s_{z\\,1} = m_1, \\ldots, \\mathbf{r}_N\\in R_N,s_{z\\,N} = m_N} (t) = \\int_{R_1} d ^3\\mathbf{r}_1 \\int_{R_2} d ^3\\mathbf{r}_2\\cdots \\int_{R_N} d ^3\\mathbf{r}_N \\left | \\Psi\\left (\\mathbf{r}_1 \\cdots \\mathbf{r}_N,m_1\\cdots m_N,t \\right ) \\right |^2" }, { "math_id": 68, "text": "\\frac{\\partial \\rho}{\\partial t} + \\nabla\\cdot\\mathbf J = 0 " }, { "math_id": 69, "text": "\\rho(\\mathbf x,t) = | \\psi(\\mathbf x,t)|^2 " }, { "math_id": 70, "text": "\\mathbf J(\\mathbf x,t) = \\frac{\\hbar}{2im}(\\psi^* \\nabla\\psi-\\psi\\nabla\\psi^*) = \\frac{\\hbar}{m} \\text{Im}(\\psi^* \\nabla\\psi) " }, { "math_id": 71, "text": "\\psi(\\mathbf x,t)= \\sqrt{\\rho(\\mathbf x,t)}\\exp{\\frac{iS(\\mathbf x,t )}{\\hbar}} " }, { "math_id": 72, "text": "S(\\mathbf x,t) " }, { "math_id": 73, "text": "\\mathbf J(\\mathbf x,t) = \\frac{\\rho \\nabla S}{m} " }, { "math_id": 74, "text": "\\mathbf J = \\rho \\mathbf v " }, { "math_id": 75, "text": " \\frac{\\nabla S}{m} " }, { "math_id": 76, "text": " \\hbar |\\nabla^2 S| \\ll |\\nabla 
S|^2 " }, { "math_id": 77, "text": "\\frac{1}{2m} |\\nabla S(\\mathbf x, t)|^2 + V(\\mathbf x) + \\frac{\\partial S}{\\partial t} = 0 " }, { "math_id": 78, "text": " \\mathbf{P}_{\\text{class.}} = \\nabla S " }, { "math_id": 79, "text": "\\Psi(\\mathbf{r}_1,\\mathbf{r}_2,\\ldots,\\mathbf{r}_N,t) = e^{-i Et/\\hbar} \\,\\psi(\\mathbf{r}_1,\\mathbf{r}_2,\\ldots,\\mathbf{r}_N)\\,," }, { "math_id": 80, "text": "V(x)=\\begin{cases}V_0 & |x|<a \\\\ 0 & | x | \\geq a\\end{cases}" }, { "math_id": 81, "text": "\\Psi (x) = \\begin{cases}\nA_{\\mathrm{r}}e^{ikx}+A_{\\mathrm{l}}e^{-ikx} & x<-a, \\\\\nB_{\\mathrm{r}}e^{\\kappa x}+B_{\\mathrm{l}}e^{-\\kappa x} & |x|\\le a, \\\\\nC_{\\mathrm{r}}e^{ikx}+C_{\\mathrm{l}}e^{-ikx} & x>a.\n\\end{cases}" }, { "math_id": 82, "text": " \\Psi_n(x) = \\sqrt{\\frac{1}{2^n\\,n!}} \\cdot \\left(\\frac{m\\omega}{\\pi \\hbar}\\right)^{1/4} \\cdot e^{\n- \\frac{m\\omega x^2}{2 \\hbar}} \\cdot H_n{\\left(\\sqrt{\\frac{m\\omega}{\\hbar}} x \\right)} " }, { "math_id": 83, "text": " \\Psi_{n\\ell m}(r,\\theta,\\phi) = R(r)\\,\\,Y_\\ell^m\\!(\\theta, \\phi)" }, { "math_id": 84, "text": " \\Psi_{n\\ell m}(r,\\theta,\\phi) = \\sqrt {{\\left ( \\frac{2}{n a_0} \\right )}^3\\frac{(n-\\ell-1)!}{2n[(n+\\ell)!]} } e^{- r/na_0} \\left(\\frac{2r}{na_0}\\right)^{\\ell} L_{n-\\ell-1}^{2\\ell+1}\\left(\\frac{2r}{na_0}\\right) \\cdot Y_{\\ell}^{m}(\\theta, \\phi ) " }, { "math_id": 85, "text": "\\int\\Psi_m^*\\Psi_n w\\, dV = \\delta_{nm}," }, { "math_id": 86, "text": "p = |(\\Phi, \\Psi)|^2," }, { "math_id": 87, "text": "|\\Psi\\rangle =\n\\sum_{\\boldsymbol{\\alpha}}\\int d^m\\!\\boldsymbol{\\omega}\\,\\,\n\\Psi(\\boldsymbol{\\alpha},\\boldsymbol{\\omega},t)\\,\n|\\boldsymbol{\\alpha},\\boldsymbol{\\omega}\\rangle" }, { "math_id": 88, "text": "\\rho_{\\alpha, \\omega} (t)= |\\Psi(\\boldsymbol{\\alpha},\\boldsymbol{\\omega},t)|^2" }, { "math_id": 89, "text": "P(t)=\\sum_{\\boldsymbol{\\alpha}\\in D}\\int_C \\rho_{\\alpha, \\omega} (t) \\,\\, d^m\\!\\boldsymbol{\\omega}" }, { "math_id": 90, "text": "1=\\sum_{\\boldsymbol{\\alpha}\\in A}\\int_{\\Omega} \\rho_{\\alpha, \\omega} (t) \\, d^m\\!\\boldsymbol{\\omega}" } ]
https://en.wikipedia.org/wiki?curid=145343
14535189
Fisher's inequality
Fisher's inequality is a necessary condition for the existence of a balanced incomplete block design, that is, a system of subsets that satisfy certain prescribed conditions in combinatorial mathematics. It was outlined by Ronald Fisher, a population geneticist and statistician, who was concerned with the design of experiments such as studying the differences among several different varieties of plants under each of a number of different growing conditions, called "blocks". Let: "v" be the number of varieties of plants, and "b" be the number of blocks. To be a balanced incomplete block design it is required that: "k" different varieties are in each block, with "k" < "v" and no variety occurring twice in any one block; any two varieties occur together in exactly λ blocks; and each variety occurs in exactly "r" blocks. Fisher's inequality states simply that "b" ≥ "v". Proof. Let the incidence matrix M be a "v" × "b" matrix defined so that Mi,j is 1 if element "i" is in block "j" and 0 otherwise. Then B = MMT is a "v" × "v" matrix such that Bi,i = "r" and Bi,j = λ for "i" ≠ "j". Since "r" ≠ λ, det(B) ≠ 0, so rank(B) = "v"; on the other hand, rank(B) ≤ rank(M) ≤ "b", so "v" ≤ "b". Generalization. Fisher's inequality is valid for more general classes of designs. A "pairwise balanced design" (or PBD) is a set "X" together with a family of non-empty subsets of "X" (which need not have the same size and may contain repeats) such that every pair of distinct elements of "X" is contained in exactly λ (a positive integer) subsets. The set "X" is allowed to be one of the subsets, and if all the subsets are copies of "X", the PBD is called "trivial". The size of "X" is "v" and the number of subsets in the family (counted with multiplicity) is "b". Theorem: For any non-trivial PBD, "v" ≤ "b". This result also generalizes the Erdős–De Bruijn theorem: For a PBD with λ = 1 having no blocks of size 1 or size v, "v" ≤ "b", with equality if and only if the PBD is a projective plane or a near-pencil (meaning that exactly "v" − 1 of the points are collinear). In another direction, Ray-Chaudhuri and Wilson proved in 1975 that in a 2"s"-("v", "k", λ) design, the number of blocks is at least formula_0. Notes. &lt;templatestyles src="Reflist/styles.css" /&gt;
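The rank argument in the proof above can be checked directly on a small example. The sketch below uses the Fano plane, the 2-(7, 3, 1) design with "v" = "b" = 7, "r" = 3 and λ = 1 (chosen here purely for illustration), builds its incidence matrix M, verifies that B = MMT has "r" on the diagonal and λ elsewhere, and confirms that rank(M) = "v" ≤ "b".

```python
import numpy as np

# Lines (blocks) of the Fano plane on points 0..6: a 2-(7, 3, 1) design
blocks = [
    {0, 1, 2}, {0, 3, 4}, {0, 5, 6},
    {1, 3, 5}, {1, 4, 6}, {2, 3, 6}, {2, 4, 5},
]
v, b, r, lam = 7, len(blocks), 3, 1

# Incidence matrix M: M[i, j] = 1 if element i is in block j, 0 otherwise
M = np.zeros((v, b), dtype=int)
for j, blk in enumerate(blocks):
    for i in blk:
        M[i, j] = 1

# B = M M^T should have r on the diagonal and lambda off the diagonal
B = M @ M.T
assert all(B[i, i] == r for i in range(v))
assert all(B[i, j] == lam for i in range(v) for j in range(v) if i != j)

# Since r != lambda, det(B) != 0, hence rank(M) = v, and therefore v <= b
print("det(B) =", round(np.linalg.det(B)))            # 576, nonzero
print("rank(M) =", np.linalg.matrix_rank(M), " v =", v, " b =", b)
```

For this particular design equality v = b holds; Fisher's inequality only rules out designs with fewer blocks than varieties.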
[ { "math_id": 0, "text": "\\binom{v}{s}" } ]
https://en.wikipedia.org/wiki?curid=14535189
1453583
Predicate transformer semantics
Reformulation of Floyd-Hoare logic Predicate transformer semantics were introduced by Edsger Dijkstra in his seminal paper "Guarded commands, nondeterminacy and formal derivation of programs". They define the semantics of an imperative programming paradigm by assigning to each "statement" in this language a corresponding "predicate transformer": a total function between two "predicates" on the state space of the statement. In this sense, predicate transformer semantics are a kind of denotational semantics. Actually, in guarded commands, Dijkstra uses only one kind of predicate transformer: the well-known weakest preconditions (see below). Moreover, predicate transformer semantics are a reformulation of Floyd–Hoare logic. Whereas Hoare logic is presented as a deductive system, predicate transformer semantics (either by weakest-preconditions or by strongest-postconditions, see below) are complete strategies to build valid deductions of Hoare logic. In other words, they provide an effective algorithm to reduce the problem of verifying a Hoare triple to the problem of proving a first-order formula. Technically, predicate transformer semantics perform a kind of symbolic execution of statements into predicates: execution runs "backward" in the case of weakest-preconditions, or runs "forward" in the case of strongest-postconditions. Weakest preconditions. Definition. For a statement "S" and a postcondition "R", a weakest precondition is a predicate "Q" such that for any precondition P, formula_0 if and only if formula_1. In other words, it is the "loosest" or least restrictive requirement needed to guarantee that "R" holds after "S". Uniqueness follows easily from the definition: If both "Q" and "Q' " are weakest preconditions, then by the definition formula_2 so formula_3 and formula_4 so formula_5, and thus formula_6. We often use formula_7 to denote the weakest precondition for statement "S" with respect to a postcondition "R". Conventions. We use " T " to denote the predicate that is everywhere true and " F " to denote the one that is everywhere false. At least conceptually, these should not be confused with Boolean expressions defined by some language syntax, which might also contain true and false as Boolean scalars. For such scalars we need to do a type coercion such that we have T = predicate(true) and F = predicate(false). Such a promotion is often carried out casually, so people tend to take T as true and F as false. Assignment. We give below two equivalent weakest-preconditions for the assignment statement: version 1 is wp("x" := "E", "R") = ∀"y", ("y" = "E") ⇒ "R"["x" ← "y"] (with "y" a fresh variable), and version 2 is wp("x" := "E", "R") = "R"["x" ← "E"]. In these formulas, formula_8 is a copy of "R" where free occurrences of "x" are replaced by "E". Hence, here, expression "E" is implicitly coerced into a "valid term" of the underlying logic: it is thus a "pure" expression, totally defined, terminating and without side effect. Provided that "E" is well defined, we just apply the so-called "one-point" rule on version 1, which then reduces it to version 2. The first version avoids a potential duplication of "x" in "R", whereas the second version is simpler when there is at most a single occurrence of "x" in "R". The first version also reveals a deep duality between weakest-precondition and strongest-postcondition (see below). An example of a valid calculation of "wp" (using version 2) for assignments with integer-valued variable "x" is: formula_9 This means that in order for the postcondition "x > 10" to be true after the assignment, the precondition "x > 15" must be true before the assignment.
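To make the assignment rule above, and the sequence and conditional rules given next, concrete, here is a minimal sketch of a weakest-precondition calculator written in Python with SymPy. The helper names (wp_assign, wp_seq, wp_if, wp_skip) and the use of SymPy substitution are illustrative assumptions, not part of Dijkstra's notation; the comments state the mathematical simplifications rather than the exact printed output.

```python
import sympy as sp

x, y = sp.symbols('x y', integer=True)

def wp_assign(var, expr, post):
    # wp(var := expr, R) = R with free occurrences of var replaced by expr (version 2 above)
    return post.subs(var, expr)

def wp_seq(wp_s1, wp_s2, post):
    # wp(S1; S2, R) = wp(S1, wp(S2, R))
    return wp_s1(wp_s2(post))

def wp_if(guard, wp_then, wp_else, post):
    # wp(if E then S1 else S2 end, R) = (E => wp(S1, R)) and (not E => wp(S2, R))
    return sp.And(sp.Implies(guard, wp_then(post)),
                  sp.Implies(sp.Not(guard), wp_else(post)))

def wp_skip(post):
    # wp(skip, R) = R
    return post

# Assignment example from the text: wp(x := x - 5, x > 10), i.e. x - 5 > 10, i.e. x > 15
print(wp_assign(x, x - 5, x > 10))

# Sequence example: wp(x := x - 5; x := x * 2, x > 20) = 2*(x - 5) > 20, i.e. x > 15
print(wp_seq(lambda r: wp_assign(x, x - 5, r),
             lambda r: wp_assign(x, 2 * x, r),
             x > 20))

# Conditional example: wp(if x < y then x := y else skip end, x >= y), which reduces to true
print(wp_if(x < y,
            lambda r: wp_assign(x, y, r),
            wp_skip,
            x >= y))
```

Running the three calls reproduces the worked examples of this article: the first two preconditions reduce to x > 15, and the conditional one reduces to the everywhere-true predicate.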
This is also the "weakest precondition", in that it is the "weakest" restriction on the value of "x" which makes "x &gt; 10" true after the assignment. Sequence. For example, formula_10 Conditional. As example: formula_11 While loop. Partial Correctness. Ignoring termination for a moment, we can define the rule for the "weakest liberal precondition", denoted "wlp", using a predicate "INV", called the Loop "INV"ariant, typically supplied by the programmer: Total Correctness. To show total correctness, we also have to show that the loop terminates. For this we define a well-founded relation on the state space denoted as (wfs, &lt;) and define a variant function vf , such that we have: Informally, in the above conjunction of three formulas: However, the conjunction of those three is not a necessary condition. Exactly, we have Non-deterministic guarded commands. Actually, Dijkstra's Guarded Command Language (GCL) is an extension of the simple imperative language given until here with non-deterministic statements. Indeed, GCL aims to be a formal notation to define algorithms. Non-deterministic statements represent choices left to the actual implementation (in an effective programming language): properties proved on non-deterministic statements are ensured for all possible choices of implementation. In other words, weakest-preconditions of non-deterministic statements ensure Notice that the definitions of weakest-precondition given above (in particular for while-loop) preserve this property. Selection. Selection is a generalization of if statement: Here, when two guards formula_12 and formula_13 are simultaneously true, then execution of this statement can run any of the associated statement formula_14 or formula_15. Repetition. Repetition is a generalization of while statement in a similar way. Specification statement. Refinement calculus extends GCL with the notion of "specification statement". Syntactically, we prefer to write a specification statement as formula_16 which specifies a computation that starts in a state satisfying "pre" and is guaranteed to end in a state satisfying "post" by changing only "x". We call formula_17 a logical constant employed to aid in a specification. For example, we can specify a computation that increment x by 1 as formula_18 Another example is a computation of a square root of an integer. formula_19 The specification statement appears like a primitive in the sense that it does not contain other statements. However, it is very expressive, as "pre" and "post" are arbitrary predicates. Its weakest precondition is as follows. It combines Morgan's syntactic idea with the sharpness idea by Bijlsma, Matthews and Wiltink. The very advantage of this is its capability of defining wp of goto L and other jump statements. Goto statement. Formalization of jump statements like "goto L" takes a very long bumpy process. A common belief seems to indicate the goto statement could only be argued operationally. This is probably due to a failure to recognize that "goto L" is actually miraculous (i.e. non-strict) and does not follow Dijkstra's coined Law of Miracle Excluded, as stood in itself. But it enjoys an extremely simple operational view from the weakest precondition perspective, which was unexpected. We define For "goto L" execution transfers control to label "L" at which the weakest precondition has to hold. The way that "wpL" is referred to in the rule should not be taken as a big surprise. It is just &amp;NoBreak;&amp;NoBreak; for some "Q" computed to that point. 
This is like any other wp rule, using constituent statements to give wp definitions, even though "goto L" appears to be a primitive. The rule does not require uniqueness of the locations at which "wpL" holds within a program, so theoretically it allows the same label to appear in multiple locations as long as the weakest precondition at each location is the same "wpL". The goto statement can then jump to any such location; it also means the same label may be placed at the same location multiple times without changing anything. Also, it does not imply any scoping rule, thus allowing a jump into a loop body, for example. Let us calculate wp of the following program S, which has a jump into the loop body.
wp(do x > 0 → L: x := x-1 od; if x < 0 → x := -x; goto L ⫿ x ≥ 0 → skip fi, post)
= wp(do x > 0 → L: x := x-1 od, (x < 0 ∧ wp(x := -x; goto L, post)) ∨ (x ≥ 0 ∧ post))
= wp(do x > 0 → L: x := x-1 od, x < 0 ∧ wpL(x ← -x) ∨ x ≥ 0 ∧ post)
= the strongest solution of Z: [ Z ≡ x > 0 ∧ wp(L: x := x-1, Z) ∨ x < 0 ∧ wpL(x ← -x) ∨ x = 0 ∧ post ]
= the strongest solution of Z: [ Z ≡ x > 0 ∧ Z(x ← x-1) ∨ x < 0 ∧ Z(x ← x-1)(x ← -x) ∨ x = 0 ∧ post ]
= the strongest solution of Z: [ Z ≡ x > 0 ∧ Z(x ← x-1) ∨ x < 0 ∧ Z(x ← -x-1) ∨ x = 0 ∧ post ]
= post(x ← 0)
Therefore, wp(S, post) = post(x ← 0). Other predicate transformers. Weakest liberal precondition. An important variant of the weakest precondition is the weakest liberal precondition formula_20, which yields the weakest condition under which "S" either does not terminate or establishes "R". It therefore differs from "wp" in not guaranteeing termination. Hence it corresponds to Hoare logic in partial correctness: for the statement language given above, "wlp" differs from "wp" only on the while-loop, in not requiring a variant (see above). Strongest postcondition. Given a statement "S" and a precondition "R" (a predicate on the initial state), formula_21 is their strongest-postcondition: it implies any postcondition satisfied by the final state of any execution of S, for any initial state satisfying R. In other words, a Hoare triple formula_22 is provable in Hoare logic if and only if the predicate below holds: formula_23 Usually, strongest-postconditions are used in partial correctness. Hence, we have the following relation between weakest-liberal-preconditions and strongest-postconditions: formula_24 For example, on assignment we have sp("x" := "E", "R") = ∃"y", ("x" = "E"["x" ← "y"]) ∧ "R"["x" ← "y"]. Here, the logical variable "y" represents the initial value of variable "x". Hence, formula_25 On sequence, it appears that "sp" runs forward (whereas "wp" runs backward): sp("S1" ; "S2", "R") = sp("S2", sp("S1", "R")). Win and sin predicate transformers. Leslie Lamport has suggested "win" and "sin" as "predicate transformers" for concurrent programming. Predicate transformers properties. This section presents some characteristic properties of predicate transformers. Below, "S" denotes a predicate transformer (a function between two predicates on the state space) and "P" a predicate. For instance, "S(P)" may denote "wp(S,P)" or "sp(S,P)". We keep "x" as the variable of the state space. Monotonic. Predicate transformers of interest ("wp", "wlp", and "sp") are monotonic. A predicate transformer "S" is monotonic if and only if: formula_26 This property is related to the consequence rule of Hoare logic. Strict. A predicate transformer "S" is strict iff: formula_27 For instance, "wp" is artificially made strict, whereas "wlp" is generally not.
In particular, if statement "S" may not terminate then formula_28 is satisfiable. We have formula_29 Indeed, T is a valid invariant of that loop. The non-strict but monotonic or conjunctive predicate transformers are called miraculous and can also be used to define a class of programming constructs, in particular, jump statements, which Dijkstra cared less about. Those jump statements include straight goto L, break and continue in a loop and return statements in a procedure body, exception handling, etc. It turns out that all jump statements are executable miracles, i.e. they can be implemented but not strict. Terminating. A predicate transformer "S" is terminating if: formula_30 Actually, this terminology makes sense only for strict predicate transformers: indeed, formula_31 is the weakest-precondition ensuring termination of "S". It seems that naming this property non-aborting would be more appropriate: in total correctness, non-termination is abortion, whereas in partial correctness, it is not. Conjunctive. A predicate transformer "S" is conjunctive iff: formula_32 This is the case for formula_33, even if statement "S" is non-deterministic as a selection statement or a specification statement. Disjunctive. A predicate transformer "S" is disjunctive iff: formula_34 This is generally not the case of formula_33 when "S" is non-deterministic. Indeed, consider a non-deterministic statement "S" choosing an arbitrary boolean. This statement is given here as the following "selection statement": formula_35 Then, formula_36 reduces to the formula formula_37. Hence, formula_38 reduces to the "tautology" formula_39 Whereas, the formula formula_40 reduces to the "wrong proposition" formula_41. Beyond predicate transformers. Weakest-preconditions and strongest-postconditions of imperative expressions. In predicate transformers semantics, expressions are restricted to terms of the logic (see above). However, this restriction seems too strong for most existing programming languages, where expressions may have side effects (call to a function having a side effect), may not terminate or abort (like "division by zero"). There are many proposals to extend weakest-preconditions or strongest-postconditions for imperative expression languages and in particular for monads. Among them, "Hoare Type Theory" combines Hoare logic for a Haskell-like language, separation logic and type theory. This system is currently implemented as a Coq library called Ynot. In this language, evaluation of expressions corresponds to computations of "strongest-postconditions". Probabilistic Predicate Transformers. "Probabilistic Predicate Transformers" are an extension of predicate transformers for probabilistic programs. Indeed, such programs have many applications in cryptography (hiding of information using some randomized noise), distributed systems (symmetry breaking). Notes. &lt;templatestyles src="Reflist/styles.css" /&gt; References. &lt;templatestyles src="Refbegin/styles.css" /&gt;
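The substitution rules for assignment and sequential composition quoted above can be mechanized directly. The following sketch is an illustration added here (not part of Dijkstra's notation); the helper names wp_assign and wp_seq are the sketch's own, and it reproduces the worked example wp(x := x - 5; x := x * 2, x > 20), which simplifies to x > 15.

```python
# Sketch (added illustration): weakest preconditions of assignments and
# sequences by backward substitution, using sympy. Helper names are hypothetical.
from sympy import symbols, simplify

x = symbols('x')

def wp_assign(var, expr, post):
    # wp(var := expr, post) = post with every occurrence of var replaced by expr
    return post.subs(var, expr)

def wp_seq(assignments, post):
    # wp(S1; S2, post) = wp(S1, wp(S2, post)): substitute backwards, last statement first
    for var, expr in reversed(assignments):
        post = wp_assign(var, expr, post)
    return post

# Reproduces the example from the text: wp(x := x - 5; x := x * 2, x > 20)
pre = wp_seq([(x, x - 5), (x, x * 2)], x > 20)
print(simplify(pre))   # equivalent to x > 15
```

The same backward-substitution idea underlies the rules for the richer statements discussed above; loops and the non-deterministic constructs, of course, require fixed points rather than plain substitution.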
[ { "math_id": 0, "text": "\\{ P \\} S \\{ R \\}" }, { "math_id": 1, "text": " P \\Rightarrow Q " }, { "math_id": 2, "text": "\\{ Q' \\} S \\{ R \\}" }, { "math_id": 3, "text": " Q' \\Rightarrow Q " }, { "math_id": 4, "text": "\\{ Q \\} S \\{ R \\}" }, { "math_id": 5, "text": " Q \\Rightarrow Q' " }, { "math_id": 6, "text": " Q=Q' " }, { "math_id": 7, "text": "wp(S, R)" }, { "math_id": 8, "text": "R[x \\leftarrow E]" }, { "math_id": 9, "text": "\\begin{array}{rcl}\nwp(x := x - 5, x > 10) & = & x - 5 > 10 \\\\\n & \\Leftrightarrow & x > 15 \n\\end{array}" }, { "math_id": 10, "text": "\\begin{array}[t]{rcl} wp(x:=x-5;x:=x*2\\ ,\\ x>20) & = & wp(x:=x-5,wp(x:=x*2, x > 20))\\\\ \n & = & wp(x:=x-5,x*2 > 20)\\\\\n & = & (x-5)*2 > 20\\\\\n & = & x > 15\n \\end{array}" }, { "math_id": 11, "text": "\\begin{array}[t]{rcl} \nwp(\\texttt{if}\\ x < y\\ \\texttt{then}\\ x:=y\\ \\texttt{else}\\;\\;\\texttt{skip}\\;\\;\\texttt{end},\\ x \\geq y)\n& = & (x < y \\Rightarrow wp(x:=y,x\\geq y))\\ \\wedge\\ (\\neg (x<y) \\Rightarrow wp(\\texttt{skip}, x \\geq y))\\\\\n& = & (x < y \\Rightarrow y\\geq y) \\ \\wedge\\ (\\neg (x<y) \\Rightarrow x \\geq y)\\\\\n& \\Leftrightarrow & \\texttt{true}\n\\end{array}" }, { "math_id": 12, "text": "E_i" }, { "math_id": 13, "text": "E_j" }, { "math_id": 14, "text": "S_i" }, { "math_id": 15, "text": "S_j" }, { "math_id": 16, "text": " x:l[pre, post] " }, { "math_id": 17, "text": "l" }, { "math_id": 18, "text": " x:l[x = l, x = l+1] " }, { "math_id": 19, "text": " x:l[x = l^2, x = l] " }, { "math_id": 20, "text": "wlp(S, R)" }, { "math_id": 21, "text": "sp(S, R)" }, { "math_id": 22, "text": "\\{ P \\} S \\{ Q \\}" }, { "math_id": 23, "text": "\\forall x, sp(S,P) \\Rightarrow Q" }, { "math_id": 24, "text": "(\\forall x, P \\Rightarrow wlp(S,Q)) \\ \\Leftrightarrow\\ (\\forall x, sp(S,P) \\Rightarrow Q)" }, { "math_id": 25, "text": " sp(x := x - 5, x > 15)\\ =\\ \\exists y, x = y - 5 \\wedge y > 15 \\ \\Leftrightarrow \\ x > 10" }, { "math_id": 26, "text": "(\\forall x: P : Q) \\Rightarrow (\\forall x: S(P): S(Q))" }, { "math_id": 27, "text": "S(\\texttt{F})\\ \\Leftrightarrow\\ \\texttt{F}" }, { "math_id": 28, "text": "wlp(S,\\texttt{F})" }, { "math_id": 29, "text": "wlp(\\texttt{while}\\ \\texttt{true}\\ \\texttt{do}\\ \\texttt{skip}\\ \\texttt{done}, \\texttt{F}) \\ \\Leftrightarrow \\texttt{T}" }, { "math_id": 30, "text": "S(\\texttt{T})\\ \\Leftrightarrow\\ \\texttt{T}" }, { "math_id": 31, "text": "wp(S,\\texttt{T})" }, { "math_id": 32, "text": "S(P \\wedge Q)\\ \\Leftrightarrow\\ S(P) \\wedge S(Q)" }, { "math_id": 33, "text": "wp(S,.)" }, { "math_id": 34, "text": "S(P \\vee Q)\\ \\Leftrightarrow\\ S(P) \\vee S(Q)" }, { "math_id": 35, "text": "S\\ =\\ \\texttt{if}\\ \\texttt{true} \\rightarrow x:=0\\ [\\!]\\ \\texttt{true} \\rightarrow x:=1\\ \\texttt{fi}" }, { "math_id": 36, "text": "wp(S,R)" }, { "math_id": 37, "text": "R[x \\leftarrow 0] \\wedge R[x \\leftarrow 1]" }, { "math_id": 38, "text": "wp(S,\\ x=0 \\vee x=1)" }, { "math_id": 39, "text": "(0=0 \\vee 0=1) \\wedge (1=0 \\vee 1=1)" }, { "math_id": 40, "text": "wp(S, x=0) \\vee wp(S,x=1)" }, { "math_id": 41, "text": "(0=0 \\wedge 1=0) \\vee (1=0 \\wedge 1=1)" } ]
https://en.wikipedia.org/wiki?curid=1453583
145375
Local field
Locally compact topological field In mathematics, a field "K" is called a non-Archimedean local field if it is complete with respect to a metric induced by a discrete valuation "v" and if its residue field "k" is finite. In general, a local field is a locally compact topological field with respect to a non-discrete topology. The real numbers R and the complex numbers C (with their standard topologies) are Archimedean local fields. Given a local field, the valuation defined on it can be of either of two types, each corresponding to one of the two basic types of local fields: those in which the valuation is Archimedean and those in which it is not. In the first case, one calls the local field an Archimedean local field, in the second case, one calls it a non-Archimedean local field. Local fields arise naturally in number theory as completions of global fields. While Archimedean local fields have been quite well known in mathematics for at least 250 years, the first examples of non-Archimedean local fields, the fields of "p"-adic numbers for a prime "p", were introduced by Kurt Hensel at the end of the 19th century. Every local field is isomorphic (as a topological field) to one of the following: the real numbers R; the complex numbers C; a finite extension of the field of "p"-adic numbers (for some prime "p"); or a finite extension of the field of formal Laurent series over a finite field. In particular, of importance in number theory, classes of local fields show up as the completions of algebraic number fields with respect to their discrete valuation corresponding to one of their maximal ideals. Research papers in modern number theory often consider a more general notion, requiring only that the residue field be perfect of positive characteristic, not necessarily finite. This article uses the former definition. Induced absolute value. Given an absolute value |·| on a field "K", the following topology can be defined on "K": for a positive real number "m", define the subset "B"m of "K" by formula_0 Then, the sets "b" + "B"m make up a neighbourhood basis of "b" in "K". Conversely, a topological field with a non-discrete locally compact topology has an absolute value defining its topology. It can be constructed using the Haar measure of the additive group of the field. Basic features of non-Archimedean local fields. For a non-Archimedean local field "F" (with absolute value denoted by |·|), the following objects are important: its ring of integers formula_1, which is a discrete valuation ring, is the closed unit ball of "F", and is compact; the units formula_2 in its ring of integers, which form a group and are the unit sphere of "F"; the unique non-zero prime ideal formula_3 in its ring of integers, which is its open unit ball formula_4; a generator formula_5 of formula_3, called a uniformizer of formula_6; and its residue field formula_7, which is finite (since it is compact and discrete). Every non-zero element "a" of "F" can be written as "a" = ϖ"n""u" with "u" a unit, and "n" a unique integer. The normalized valuation of "F" is the surjective function "v" : "F" → Z ∪ {∞} defined by sending a non-zero "a" to the unique integer "n" such that "a" = ϖ"n""u" with "u" a unit, and by sending 0 to ∞. If "q" is the cardinality of the residue field, the absolute value on "F" induced by its structure as a local field is given by: formula_8 An equivalent and very important definition of a non-Archimedean local field is that it is a field that is complete with respect to a discrete valuation and whose residue field is finite. Higher unit groups. The "n"th higher unit group of a non-Archimedean local field "F" is formula_10 for "n" ≥ 1. The group "U"(1) is called the group of principal units, and any element of it is called a principal unit. The full unit group formula_11 is denoted "U"(0). The higher unit groups form a decreasing filtration of the unit group formula_12 whose quotients are given by formula_13 for "n" ≥ 1. (Here "formula_14" means a non-canonical isomorphism.) Structure of the unit group. 
The multiplicative group of non-zero elements of a non-Archimedean local field "F" is isomorphic to formula_15 where "q" is the order of the residue field, and μ"q"−1 is the group of ("q"−1)st roots of unity (in "F"). Its structure as an abelian group depends on its characteristic: formula_16 where N denotes the natural numbers; formula_17 where "a" ≥ 0 is defined so that the group of "p"-power roots of unity in "F" is formula_18. Theory of local fields. This theory includes the study of types of local fields, extensions of local fields using Hensel's lemma, Galois extensions of local fields, ramification groups filtrations of Galois groups of local fields, the behavior of the norm map on local fields, the local reciprocity homomorphism and existence theorem in local class field theory, local Langlands correspondence, Hodge-Tate theory (also called "p"-adic Hodge theory), explicit formulas for the Hilbert symbol in local class field theory, see e.g. Higher-dimensional local fields. A local field is sometimes called a "one-dimensional local field". A non-Archimedean local field can be viewed as the field of fractions of the completion of the local ring of a one-dimensional arithmetic scheme of rank 1 at its non-singular point. For a non-negative integer "n", an "n"-dimensional local field is a complete discrete valuation field whose residue field is an ("n" − 1)-dimensional local field. Depending on the definition of local field, a "zero-dimensional local field" is then either a finite field (with the definition used in this article), or a perfect field of positive characteristic. From the geometric point of view, "n"-dimensional local fields with last finite residue field are naturally associated to a complete flag of subschemes of an "n"-dimensional arithmetic scheme. Citations. &lt;templatestyles src="Reflist/styles.css" /&gt; References. &lt;templatestyles src="Refbegin/styles.css" /&gt;
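For the most familiar non-Archimedean local field, the field of "p"-adic numbers, the normalized valuation and the induced absolute value |a| = q^(−v(a)) described above can be evaluated on rational numbers, since the rationals embed densely in it and the residue field has q = p elements. The sketch below is an illustration added here; the helper names v_p and abs_p are its own.

```python
# Illustration (not from the article): normalized valuation v and induced
# absolute value |a| = p**(-v(a)) on the rationals, viewed inside the p-adic field.
from fractions import Fraction

def v_p(a, p):
    """p-adic valuation of a rational a (v(0) is taken as +infinity)."""
    a = Fraction(a)
    if a == 0:
        return float('inf')
    n = 0
    num, den = a.numerator, a.denominator
    while num % p == 0:
        num //= p
        n += 1
    while den % p == 0:
        den //= p
        n -= 1
    return n

def abs_p(a, p):
    """Induced absolute value |a| = p**(-v(a)); here the residue field has q = p elements."""
    val = v_p(a, p)
    return 0.0 if val == float('inf') else float(p) ** (-val)

print(v_p(Fraction(250, 3), 5), abs_p(Fraction(250, 3), 5))   # v = 3,  |a| = 1/125
print(v_p(Fraction(7, 25), 5), abs_p(Fraction(7, 25), 5))     # v = -2, |a| = 25
```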
[ { "math_id": 0, "text": "B_m:=\\{ a\\in K:|a|\\leq m\\}." }, { "math_id": 1, "text": "\\mathcal{O} = \\{a\\in F: |a|\\leq 1\\}" }, { "math_id": 2, "text": "\\mathcal{O}^\\times = \\{a\\in F: |a|= 1\\}" }, { "math_id": 3, "text": "\\mathfrak{m}" }, { "math_id": 4, "text": "\\{a\\in F: |a|< 1\\}" }, { "math_id": 5, "text": "\\varpi" }, { "math_id": 6, "text": "F" }, { "math_id": 7, "text": "k=\\mathcal{O}/\\mathfrak{m}" }, { "math_id": 8, "text": "|a|=q^{-v(a)}." }, { "math_id": 9, "text": "v\\left(\\sum_{i=-m}^\\infty a_iT^i\\right) = -m" }, { "math_id": 10, "text": "U^{(n)}=1+\\mathfrak{m}^n=\\left\\{u\\in\\mathcal{O}^\\times:u\\equiv1\\, (\\mathrm{mod}\\,\\mathfrak{m}^n)\\right\\}" }, { "math_id": 11, "text": "\\mathcal{O}^\\times" }, { "math_id": 12, "text": "\\mathcal{O}^\\times\\supseteq U^{(1)}\\supseteq U^{(2)}\\supseteq\\cdots" }, { "math_id": 13, "text": "\\mathcal{O}^\\times/U^{(n)}\\cong\\left(\\mathcal{O}/\\mathfrak{m}^n\\right)^\\times\\text{ and }\\,U^{(n)}/U^{(n+1)}\\approx\\mathcal{O}/\\mathfrak{m}" }, { "math_id": 14, "text": "\\approx" }, { "math_id": 15, "text": "F^\\times\\cong(\\varpi)\\times\\mu_{q-1}\\times U^{(1)}" }, { "math_id": 16, "text": "F^\\times\\cong\\mathbf{Z}\\oplus\\mathbf{Z}/{(q-1)}\\oplus\\mathbf{Z}_p^\\mathbf{N}" }, { "math_id": 17, "text": "F^\\times\\cong\\mathbf{Z}\\oplus\\mathbf{Z}/(q-1)\\oplus\\mathbf{Z}/p^a\\oplus\\mathbf{Z}_p^d" }, { "math_id": 18, "text": "\\mu_{p^a}" } ]
https://en.wikipedia.org/wiki?curid=145375
1453781
Free field
Physical field theory with no forces/interactions In physics, a free field is a field without interactions: it is described by kinetic ("motion") and mass terms only. Description. In classical physics, a free field is a field whose equations of motion are given by linear partial differential equations. Such linear PDEs have a unique solution for a given initial condition. In quantum field theory, an operator-valued distribution is a free field if it satisfies some linear partial differential equations such that the corresponding case of the same linear PDEs for a classical field (i.e. not an operator) would be the Euler–Lagrange equation for some quadratic Lagrangian. We can differentiate distributions by defining their derivatives via differentiated test functions; see Schwartz distribution for more details. Since we are dealing not with ordinary distributions but with operator-valued distributions, it is understood that these PDEs are not constraints on states but instead a description of the relations among the smeared fields. Besides the PDEs, the operators also satisfy another relation: the commutation/anticommutation relations. Canonical Commutation Relation. The commutator (for bosons) or anticommutator (for fermions) of two smeared fields is i times the Peierls bracket of the field with itself (which is really a distribution, not a function) for the PDEs, smeared over both test functions. This has the form of a CCR/CAR algebra. CCR/CAR algebras with infinitely many degrees of freedom have many inequivalent irreducible unitary representations. If the theory is defined over Minkowski space, we may choose the irreducible unitary representation containing a vacuum state, although that is not always necessary. Example. Let φ be an operator-valued distribution and let the (Klein–Gordon) PDE be formula_0. This is a bosonic field. Call the distribution given by the Peierls bracket Δ. Then formula_1 where here φ is a classical field and {,} is the Peierls bracket. The canonical commutation relation is then formula_2. Note that Δ is a distribution over two arguments, and so can be smeared as well. Equivalently, we could have insisted that formula_3 where formula_4 is the time-ordering operator, and that if the supports of f and g are spacelike separated, formula_5.
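As an added illustration of the free (Klein–Gordon) equation of motion, the following sketch checks symbolically that a classical plane wave obeying the dispersion relation ω² = k² + m² satisfies ∂²φ/∂t² − ∂²φ/∂x² + m²φ = 0 in 1+1 dimensions (signature (+,−), units with c = ħ = 1). The setup and variable names are this sketch's own, not part of the article.

```python
# Illustrative check (added here): a classical plane-wave solution of the
# free Klein-Gordon equation in 1+1 dimensions, signature (+,-), units c = hbar = 1.
from sympy import symbols, sqrt, exp, I, diff, simplify

t, x, m, k = symbols('t x m k', real=True, positive=True)
omega = sqrt(k**2 + m**2)          # free-field dispersion relation
phi = exp(-I*(omega*t - k*x))      # classical plane wave

# d'Alembertian plus mass term: (d^2/dt^2 - d^2/dx^2 + m^2) phi
kg = diff(phi, t, 2) - diff(phi, x, 2) + m**2 * phi
print(simplify(kg))                # 0: the linear equation of motion holds
```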
[ { "math_id": 0, "text": "\\partial^\\mu \\partial_\\mu \\phi+m^2 \\phi=0" }, { "math_id": 1, "text": "\\{\\phi(x),\\phi(y)\\}=\\Delta(x;y)" }, { "math_id": 2, "text": "[\\phi[f],\\phi[g]]=i\\Delta[f,g] \\," }, { "math_id": 3, "text": "\\mathcal{T}\\{[((\\partial^\\mu \\partial_\\mu+m^2)\\phi)[f],\\phi[g]]\\}=-i\\int d^dx f(x)g(x)" }, { "math_id": 4, "text": "\\mathcal{T}" }, { "math_id": 5, "text": "[\\phi[f],\\phi[g]]=0" } ]
https://en.wikipedia.org/wiki?curid=1453781
145381
Ellipsoid
Quadric surface that looks like a deformed sphere An ellipsoid is a surface that can be obtained from a sphere by deforming it by means of directional scalings, or more generally, of an affine transformation. An ellipsoid is a quadric surface;  that is, a surface that may be defined as the zero set of a polynomial of degree two in three variables. Among quadric surfaces, an ellipsoid is characterized by either of the two following properties. Every planar cross section is either an ellipse, or is empty, or is reduced to a single point (this explains the name, meaning "ellipse-like"). It is bounded, which means that it may be enclosed in a sufficiently large sphere. An ellipsoid has three pairwise perpendicular axes of symmetry which intersect at a center of symmetry, called the center of the ellipsoid. The line segments that are delimited on the axes of symmetry by the ellipsoid are called the "principal axes", or simply axes of the ellipsoid. If the three axes have different lengths, the figure is a triaxial ellipsoid (rarely scalene ellipsoid), and the axes are uniquely defined. If two of the axes have the same length, then the ellipsoid is an "ellipsoid of revolution", also called a "spheroid". In this case, the ellipsoid is invariant under a rotation around the third axis, and there are thus infinitely many ways of choosing the two perpendicular axes of the same length. If the third axis is shorter, the ellipsoid is an "oblate spheroid"; if it is longer, it is a "prolate spheroid". If the three axes have the same length, the ellipsoid is a sphere. Standard equation. The general ellipsoid, also known as triaxial ellipsoid, is a quadratic surface which is defined in Cartesian coordinates as: formula_0 where formula_1, formula_2 and formula_3 are the length of the semi-axes. The points formula_4, formula_5 and formula_6 lie on the surface. The line segments from the origin to these points are called the principal semi-axes of the ellipsoid, because "a", "b", "c" are half the length of the principal axes. They correspond to the semi-major axis and semi-minor axis of an ellipse. In spherical coordinate system for which formula_7, the general ellipsoid is defined as: formula_8 where formula_9 is the polar angle and formula_10 is the azimuthal angle. When formula_11, the ellipsoid is a sphere. When formula_12, the ellipsoid is a spheroid or ellipsoid of revolution. In particular, if formula_13, it is an oblate spheroid; if formula_14, it is a prolate spheroid. Parameterization. The ellipsoid may be parameterized in several ways, which are simpler to express when the ellipsoid axes coincide with coordinate axes. A common choice is formula_15 where formula_16 These parameters may be interpreted as spherical coordinates, where θ is the polar angle and φ is the azimuth angle of the point ("x", "y", "z") of the ellipsoid. Measuring from the equator rather than a pole, formula_17 where formula_18 θ is the reduced latitude, parametric latitude, or eccentric anomaly and λ is azimuth or longitude. Measuring angles directly to the surface of the ellipsoid, not to the circumscribed sphere, formula_19 where formula_20 γ would be geocentric latitude on the Earth, and λ is longitude. These are true spherical coordinates with the origin at the center of the ellipsoid. In geodesy, the geodetic latitude is most commonly used, as the angle between the vertical and the equatorial plane, defined for a biaxial ellipsoid. For a more general triaxial ellipsoid, see ellipsoidal latitude. Volume. 
The volume bounded by the ellipsoid is formula_21 In terms of the principal diameters "A", "B", "C" (where "A" 2"a", "B" 2"b", "C" 2"c"), the volume is formula_22. This equation reduces to that of the volume of a sphere when all three elliptic radii are equal, and to that of an oblate or prolate spheroid when two of them are equal. The volume of an ellipsoid is the volume of a circumscribed elliptic cylinder, and the volume of the circumscribed box. The volumes of the inscribed and circumscribed boxes are respectively: formula_23 Surface area. The surface area of a general (triaxial) ellipsoid is formula_24 where formula_25 and where "F"("φ", "k") and "E"("φ", "k") are incomplete elliptic integrals of the first and second kind respectively. The surface area of this general ellipsoid can also be expressed in terms of &amp;NoBreak;&amp;NoBreak;, one of the Carlson symmetric forms of elliptic integrals: formula_26 Simplifying above formula using properties of "R""G", this can be also be expressed in terms of the volume of the ellipsoid "V": formula_27 Unlike the expression with "F"("φ", "k") and "E"("φ", "k"), the equations in terms of "R""G" do not depend on the choice of an order on "a", "b", and "c". The surface area of an ellipsoid of revolution (or spheroid) may be expressed in terms of elementary functions: formula_28 or formula_29 or formula_30 and formula_31 which, as follows from basic trigonometric identities, are equivalent expressions (i.e. the formula for "S"oblate can be used to calculate the surface area of a prolate ellipsoid and vice versa). In both cases e may again be identified as the eccentricity of the ellipse formed by the cross section through the symmetry axis. (See ellipse). Derivations of these results may be found in standard sources, for example Mathworld. formula_32 Approximate formula. Here "p" ≈ 1.6075 yields a relative error of at most 1.061%; a value of "p" = = 1.6 is optimal for nearly spherical ellipsoids, with a relative error of at most 1.178%. In the "flat" limit of c much smaller than a and b, the area is approximately 2π"ab", equivalent to "p" = log23 ≈ 1.5849625007. Plane sections. The intersection of a plane and a sphere is a circle (or is reduced to a single point, or is empty). Any ellipsoid is the image of the unit sphere under some affine transformation, and any plane is the image of some other plane under the same transformation. So, because affine transformations map circles to ellipses, the intersection of a plane with an ellipsoid is an ellipse or a single point, or is empty. Obviously, spheroids contain circles. This is also true, but less obvious, for triaxial ellipsoids (see Circular section). Determining the ellipse of a plane section. Given: Ellipsoid + + 1 and the plane with equation "nxx" + "nyy" + "nzz" "d", which have an ellipse in common. Wanted: Three vectors f0 (center) and f1, f2 (conjugate vectors), such that the ellipse can be represented by the parametric equation formula_33 (see ellipse). Solution: The scaling "u" = , "v" = , "w" = transforms the ellipsoid onto the unit sphere "u"2 + "v"2 + "w"2 1 and the given plane onto the plane with equation formula_34 Let "muu" + "mvv" + "mww" "δ" be the Hesse normal form of the new plane and formula_35 its unit normal vector. Hence formula_36 is the "center" of the intersection circle and formula_37 its radius (see diagram). Where "mw" ±1 (i.e. 
the plane is horizontal), let formula_38 Where "mw" ≠ ±1, let formula_39 In any case, the vectors e1, e2 are orthogonal, parallel to the intersection plane and have length ρ (radius of the circle). Hence the intersection circle can be described by the parametric equation formula_40 The reverse scaling (see above) transforms the unit sphere back to the ellipsoid and the vectors e0, e1, e2 are mapped onto vectors f0, f1, f2, which were wanted for the parametric representation of the intersection ellipse. How to find the vertices and semi-axes of the ellipse is described in ellipse. Example: The diagrams show an ellipsoid with the semi-axes "a" = 4, "b" = 5, "c" = 3 which is cut by the plane "x" + "y" + "z" = 5. Pins-and-string construction. The pins-and-string construction of an ellipsoid is a transfer of the idea constructing an ellipse using two pins and a string (see diagram). A pins-and-string construction of an ellipsoid of revolution is given by the pins-and-string construction of the rotated ellipse. The construction of points of a "triaxial ellipsoid" is more complicated. First ideas are due to the Scottish physicist J. C. Maxwell (1868). Main investigations and the extension to quadrics was done by the German mathematician O. Staude in 1882, 1886 and 1898. The description of the pins-and-string construction of ellipsoids and hyperboloids is contained in the book "Geometry and the imagination" written by D. Hilbert &amp; S. Vossen, too. Semi-axes. Equations for the semi-axes of the generated ellipsoid can be derived by special choices for point P: formula_44 The lower part of the diagram shows that "F"1 and "F"2 are the foci of the ellipse in the xy-plane, too. Hence, it is confocal to the given ellipse and the length of the string is "l" 2"rx" + ("a" − "c"). Solving for rx yields "rx" ("l" − "a" + "c"); furthermore "r" "r" − "c"2. From the upper diagram we see that "S"1 and "S"2 are the foci of the ellipse section of the ellipsoid in the xz-plane and that "r" "r" − "a"2. Converse. If, conversely, a triaxial ellipsoid is given by its equation, then from the equations in step 3 one can derive the parameters a, b, l for a pins-and-string construction. Confocal ellipsoids. If is an ellipsoid confocal to E with the squares of its semi-axes formula_45 then from the equations of E formula_46 one finds, that the corresponding focal conics used for the pins-and-string construction have the same semi-axes "a", "b", "c" as ellipsoid E. Therefore (analogously to the foci of an ellipse) one considers the focal conics of a triaxial ellipsoid as the (infinite many) foci and calls them the focal curves of the ellipsoid. The converse statement is true, too: if one chooses a second string of length and defines formula_47 then the equations formula_48 are valid, which means the two ellipsoids are confocal. Limit case, ellipsoid of revolution. In case of "a" "c" (a spheroid) one gets "S"1 "F"1 and "S"2 "F"2, which means that the focal ellipse degenerates to a line segment and the focal hyperbola collapses to two infinite line segments on the x-axis. The ellipsoid is rotationally symmetric around the x-axis and formula_49. If one views an ellipsoid from an external point V of its focal hyperbola, then it seems to be a sphere, that is its apparent shape is a circle. Equivalently, the tangents of the ellipsoid containing point V are the lines of a circular cone, whose axis of rotation is the tangent line of the hyperbola at V. 
If one allows the center V to disappear into infinity, one gets an orthogonal parallel projection with the corresponding asymptote of the focal hyperbola as its direction. The "true curve of shape" (tangent points) on the ellipsoid is not a circle. The lower part of the diagram shows on the left a parallel projection of an ellipsoid (with semi-axes 60, 40, 30) along an asymptote and on the right a central projection with center V and main point H on the tangent of the hyperbola at point V. (H is the foot of the perpendicular from V onto the image plane.) For both projections the apparent shape is a circle. In the parallel case the image of the origin O is the circle's center; in the central case main point H is the center. The focal hyperbola intersects the ellipsoid at its four umbilical points. Property of the focal ellipse. The focal ellipse together with its inner part can be considered as the limit surface (an infinitely thin ellipsoid) of the pencil of confocal ellipsoids determined by "a", "b" for "rz" → 0. For the limit case one gets formula_50 Ellipsoids in higher dimensions and general position. Standard equation. A hyperellipsoid, or ellipsoid of dimension formula_51 in a Euclidean space of dimension formula_52, is a quadric hypersurface defined by a polynomial of degree two that has a homogeneous part of degree two which is a positive definite quadratic form. One can also define a hyperellipsoid as the image of a sphere under an invertible affine transformation. The spectral theorem can again be used to obtain a standard equation of the form formula_53 The volume of an n-dimensional "hyperellipsoid" can be obtained by replacing Rn by the product of the semi-axes "a"1"a"2..."an" in the formula for the volume of a hypersphere: formula_54 (where Γ is the gamma function). As a quadric. If A is a real, symmetric, n-by-n positive-definite matrix, and v is a vector in formula_55 then the set of points x that satisfy the equation formula_56 is an "n"-dimensional ellipsoid centered at v. The expression formula_57 is also called the ellipsoidal norm of x - v. For every ellipsoid, there are unique A and v that satisfy the above equation.67 The eigenvectors of A are the principal axes of the ellipsoid, and the eigenvalues of A are the reciprocals of the squares of the semi-axes (in three dimensions these are "a"−2, "b"−2 and "c"−2). In particular: An invertible linear transformation applied to a sphere produces an ellipsoid, which can be brought into the above standard form by a suitable rotation, a consequence of the polar decomposition (also, see spectral theorem). If the linear transformation is represented by a symmetric 3 × 3 matrix, then the eigenvectors of the matrix are orthogonal (due to the spectral theorem) and represent the directions of the axes of the ellipsoid; the lengths of the semi-axes are computed from the eigenvalues. The singular value decomposition and polar decomposition are matrix decompositions closely related to these geometric observations. For every positive definite matrix formula_58, there exists a unique positive definite matrix denoted A1/2, such that formula_59 this notation is motivated by the fact that this matrix can be seen as the "positive square root" of formula_60 The ellipsoid defined by formula_56 can also be presented as67formula_61where S(0,1) is the unit sphere around the origin. Parametric representation. 
The key to a parametric representation of an ellipsoid in general position is the alternative definition: "An ellipsoid is an affine image of the unit sphere." An affine transformation can be represented by a translation with a vector f0 and a regular 3 × 3 matrix A: formula_62 where f1, f2, f3 are the column vectors of matrix A. A parametric representation of an ellipsoid in general position can be obtained by the parametric representation of a unit sphere (see above) and an affine transformation: formula_63. If the vectors f1, f2, f3 form an orthogonal system, the six points with vectors f0 ± f1,2,3 are the vertices of the ellipsoid and are the semi-principal axes. A surface normal vector at point x("θ", "φ") is formula_64 For any ellipsoid there exists an implicit representation "F"("x", "y", "z") 0. If for simplicity the center of the ellipsoid is the origin, f0 0, the following equation describes the ellipsoid above: formula_65 Applications. The ellipsoidal shape finds many practical applications: Dynamical properties. The mass of an ellipsoid of uniform density ρ is formula_66 The moments of inertia of an ellipsoid of uniform density are formula_67 For "a" = "b" = "c" these moments of inertia reduce to those for a sphere of uniform density. Ellipsoids and cuboids rotate stably along their major or minor axes, but not along their median axis. This can be seen experimentally by throwing an eraser with some spin. In addition, moment of inertia considerations mean that rotation along the major axis is more easily perturbed than rotation along the minor axis. One practical effect of this is that scalene astronomical bodies such as Haumea generally rotate along their minor axes (as does Earth, which is merely oblate); in addition, because of tidal locking, moons in synchronous orbit such as Mimas orbit with their major axis aligned radially to their planet. A spinning body of homogeneous self-gravitating fluid will assume the form of either a Maclaurin spheroid (oblate spheroid) or Jacobi ellipsoid (scalene ellipsoid) when in hydrostatic equilibrium, and for moderate rates of rotation. At faster rotations, non-ellipsoidal piriform or oviform shapes can be expected, but these are not stable. Fluid dynamics. The ellipsoid is the most general shape for which it has been possible to calculate the creeping flow of fluid around the solid shape. The calculations include the force required to translate through a fluid and to rotate within it. Applications include determining the size and shape of large molecules, the sinking rate of small particles, and the swimming abilities of microorganisms. In probability and statistics. The elliptical distributions, which generalize the multivariate normal distribution and are used in finance, can be defined in terms of their density functions. When they exist, the density functions f have the structure: formula_68 where k is a scale factor, x is an n-dimensional random row vector with median vector μ (which is also the mean vector if the latter exists), Σ is a positive definite matrix which is proportional to the covariance matrix if the latter exists, and g is a function mapping from the non-negative reals to the non-negative reals giving a finite area under the curve. The multivariate normal distribution is the special case in which "g"("z") exp(−) for quadratic form z. Thus the density function is a scalar-to-scalar transformation of a quadric expression. 
Moreover, the equation for any iso-density surface states that the quadric expression equals some constant specific to that value of the density, and the iso-density surface is an ellipsoid.
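The volume and surface-area formulas given earlier lend themselves to a direct numerical check. The sketch below is an illustration added here; the sample semi-axes are arbitrary, and it uses scipy's incomplete elliptic integrals ellipkinc and ellipeinc (which take the parameter m = k²) for the exact triaxial surface area, alongside V = 4/3·π·abc and the p ≈ 1.6075 approximation.

```python
# Illustrative numerical check (not part of the article) of the ellipsoid
# volume and surface-area formulas, for arbitrary sample semi-axes a >= b >= c > 0.
import numpy as np
from scipy.special import ellipkinc, ellipeinc   # incomplete F(phi, m), E(phi, m), with m = k^2

def volume(a, b, c):
    return 4.0/3.0 * np.pi * a * b * c

def surface_exact(a, b, c):
    # Triaxial formula: S = 2*pi*c^2 + (2*pi*a*b/sin(phi)) * (E*sin^2(phi) + F*cos^2(phi)),
    # with cos(phi) = c/a and k^2 = a^2 (b^2 - c^2) / (b^2 (a^2 - c^2)); requires a >= b >= c.
    phi = np.arccos(c / a)
    m = (a**2 * (b**2 - c**2)) / (b**2 * (a**2 - c**2))
    F = ellipkinc(phi, m)
    E = ellipeinc(phi, m)
    return 2*np.pi*c**2 + (2*np.pi*a*b/np.sin(phi)) * (E*np.sin(phi)**2 + F*np.cos(phi)**2)

def surface_approx(a, b, c, p=1.6075):
    # Approximate formula quoted above (relative error at most about 1.06%)
    return 4*np.pi*((a**p*b**p + a**p*c**p + b**p*c**p)/3.0)**(1.0/p)

a, b, c = 3.0, 2.0, 1.0
print(volume(a, b, c))                                     # 8*pi
print(surface_exact(a, b, c), surface_approx(a, b, c))     # should agree within the quoted ~1% bound
```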
[ { "math_id": 0, "text": "\\frac{x^2}{a^2} + \\frac{y^2}{b^2} + \\frac{z^2}{c^2} = 1," }, { "math_id": 1, "text": "a" }, { "math_id": 2, "text": "b" }, { "math_id": 3, "text": "c" }, { "math_id": 4, "text": "(a, 0, 0)" }, { "math_id": 5, "text": "(0, b, 0)" }, { "math_id": 6, "text": "(0, 0, c)" }, { "math_id": 7, "text": "(x,y,z)=(r\\sin\\theta\\cos\\varphi, r\\sin\\theta\\sin\\varphi,r\\cos\\theta)" }, { "math_id": 8, "text": "{r^2\\sin^2\\theta\\cos^2\\varphi\\over a^2}+{r^2\\sin^2\\theta\\sin^2\\varphi \\over b^2}+{r^2\\cos^2\\theta \\over c^2}=1," }, { "math_id": 9, "text": "\\theta" }, { "math_id": 10, "text": "\\varphi" }, { "math_id": 11, "text": "a=b=c" }, { "math_id": 12, "text": "a=b\\neq c" }, { "math_id": 13, "text": "a = b > c" }, { "math_id": 14, "text": "a = b < c" }, { "math_id": 15, "text": "\\begin{align}\n x &= a\\sin(\\theta)\\cos(\\varphi),\\\\\n y &= b\\sin(\\theta)\\sin(\\varphi),\\\\\n z &= c\\cos(\\theta),\n\\end{align}\\,\\!" }, { "math_id": 16, "text": "\n 0 \\le \\theta \\le \\pi,\\qquad\n 0 \\le \\varphi < 2\\pi.\n" }, { "math_id": 17, "text": "\\begin{align}\n x &= a\\cos(\\theta)\\cos(\\lambda),\\\\\n y &= b\\cos(\\theta)\\sin(\\lambda),\\\\\n z &= c\\sin(\\theta),\n\\end{align}\\,\\!" }, { "math_id": 18, "text": "\n -\\tfrac{\\pi}2 \\le \\theta \\le \\tfrac{\\pi}2,\\qquad\n 0 \\le \\lambda < 2\\pi,\n" }, { "math_id": 19, "text": "\\begin{bmatrix}\n x \\\\ y \\\\ z\n \\end{bmatrix} =\n R \\begin{bmatrix}\n \\cos(\\gamma)\\cos(\\lambda)\\\\\n \\cos(\\gamma)\\sin(\\lambda)\\\\\n \\sin(\\gamma)\n \\end{bmatrix}\n\\,\\!" }, { "math_id": 20, "text": "\\begin{align}\n R ={} &\\frac{abc}{\\sqrt{c^2 \\left(b^2\\cos^2\\lambda + a^2\\sin^2\\lambda\\right) \\cos^2\\gamma\n + a^2 b^2\\sin^2\\gamma}}, \\\\[3pt]\n &-\\tfrac{\\pi}2 \\le \\gamma \\le \\tfrac{\\pi}2,\\qquad\n 0 \\le \\lambda < 2\\pi.\n\\end{align}" }, { "math_id": 21, "text": "V = \\tfrac{4}{3}\\pi abc." }, { "math_id": 22, "text": "V = \\tfrac16 \\pi ABC" }, { "math_id": 23, "text": "\n V_\\text{inscribed} = \\frac{8}{3\\sqrt{3}} abc,\\qquad\n V_\\text{circumscribed} = 8abc.\n" }, { "math_id": 24, "text": "S = 2\\pi c^2 + \\frac{2\\pi ab}{\\sin(\\varphi)}\\left(E(\\varphi, k)\\,\\sin^2(\\varphi) + F(\\varphi, k)\\,\\cos^2(\\varphi)\\right)," }, { "math_id": 25, "text": " \n \\cos(\\varphi) = \\frac{c}{a},\\qquad\n k^2 = \\frac{a^2\\left(b^2 - c^2\\right)}{b^2\\left(a^2 - c^2\\right)},\\qquad\n a \\ge b \\ge c,\n" }, { "math_id": 26, "text": "S = 4\\pi bc R_{G} {\\left( \\frac{b^2}{a^2} , \\frac{c^2}{a^2} , 1\\right)}" }, { "math_id": 27, "text": "S = 3VR_{G}{\\left(a^{-2},b^{-2},c^{-2}\\right)}" }, { "math_id": 28, "text": "\n S_\\text{oblate} = 2\\pi a^2\\left(1 + \\frac{c^2}{ea^2} \\operatorname{artanh}e\\right),\n \\qquad\\text{where }e^2 = 1 - \\frac{c^2}{a^2}\\text{ and }(c < a), " }, { "math_id": 29, "text": "\n S_\\text{oblate} = 2\\pi a^2\\left(1 + \\frac{1 - e^2}{e} \\operatorname{artanh}e\\right)" }, { "math_id": 30, "text": "\nS_\\text{oblate} = 2\\pi a^2\\ + \\frac{\\pi c^2}{e}\\ln\\frac{1+e}{1-e}" }, { "math_id": 31, "text": "\n S_\\text{prolate} = 2\\pi a^2\\left(1 + \\frac{c}{ae} \\arcsin e\\right)\n \\qquad\\text{where } e^2 = 1 - \\frac{a^2}{c^2}\\text{ and } (c > a),\n" }, { "math_id": 32, "text": "S \\approx 4\\pi \\sqrt[p]{\\frac{a^p b^p + a^p c^p + b^p c^p}{3}}.\\,\\!" }, { "math_id": 33, "text": "\\mathbf x = \\mathbf f_0 + \\mathbf f_1\\cos t + \\mathbf f_2\\sin t" }, { "math_id": 34, "text": "\\ n_x au + n_y bv + n_z cw = d. 
" }, { "math_id": 35, "text": "\\;\\mathbf m = \\begin{bmatrix} m_u \\\\ m_v \\\\ m_w \\end{bmatrix}\\;" }, { "math_id": 36, "text": "\\mathbf e_0 = \\delta \\mathbf m \\;" }, { "math_id": 37, "text": "\\;\\rho = \\sqrt{1 - \\delta^2}\\;" }, { "math_id": 38, "text": "\\ \\mathbf e_1 = \\begin{bmatrix} \\rho \\\\ 0 \\\\ 0 \\end{bmatrix},\\qquad \\mathbf e_2 = \\begin{bmatrix} 0 \\\\ \\rho \\\\ 0 \\end{bmatrix}." }, { "math_id": 39, "text": "\\mathbf e_1 = \\frac{\\rho}{\\sqrt{m_u^2 + m_v^2}}\\, \\begin{bmatrix} m_v \\\\ -m_u \\\\ 0 \\end{bmatrix}\\, ,\\qquad \\mathbf e_2 = \\mathbf m \\times \\mathbf e_1\\ ." }, { "math_id": 40, "text": "\\;\\mathbf u = \\mathbf e_0 + \\mathbf e_1\\cos t + \\mathbf e_2\\sin t\\;." }, { "math_id": 41, "text": "\\begin{align}\nE(\\varphi) &= (a\\cos\\varphi, b\\sin\\varphi, 0) \\\\\nH(\\psi) &= (c\\cosh\\psi, 0, b\\sinh\\psi),\\quad c^2 = a^2 - b^2\n\\end{align} " }, { "math_id": 42, "text": "S_1 = (a, 0, 0),\\quad F_1 = (c, 0, 0),\\quad F_2 = (-c, 0, 0),\\quad S_2 = (-a, 0, 0)" }, { "math_id": 43, "text": "\\begin{align}\n &\\frac{x^2}{r_x^2} + \\frac{y^2}{r_y^2} + \\frac{z^2}{r_z^2} = 1 \\\\\n &r_x = \\tfrac{1}{2}(l - a + c), \\quad\n r_y = {\\textstyle \\sqrt{r^2_x - c^2}}, \\quad\n r_z = {\\textstyle \\sqrt{r^2_x - a^2}}.\n\\end{align}" }, { "math_id": 44, "text": "Y = (0, r_y, 0),\\quad Z = (0, 0, r_z)." }, { "math_id": 45, "text": "\\overline r_x^2 = r_x^2 - \\lambda, \\quad\n \\overline r_y^2 = r_y^2 - \\lambda, \\quad\n \\overline r_z^2 = r_z^2 - \\lambda" }, { "math_id": 46, "text": " r_x^2 - r_y^2 = c^2, \\quad\n r_x^2 - r_z^2 = a^2, \\quad\n r_y^2 - r_z^2 = a^2 - c^2 = b^2" }, { "math_id": 47, "text": "\\lambda = r^2_x - \\overline r^2_x" }, { "math_id": 48, "text": "\\overline r_y^2 = r_y^2 - \\lambda,\\quad \\overline r_z^2 = r_z^2 - \\lambda" }, { "math_id": 49, "text": "r_x = \\tfrac12l,\\quad r_y = r_z = {\\textstyle \\sqrt{r^2_x - c^2}}" }, { "math_id": 50, "text": "r_x = a,\\quad r_y = b,\\quad l = 3a - c." }, { "math_id": 51, "text": "n - 1" }, { "math_id": 52, "text": "n" }, { "math_id": 53, "text": "\\frac{x_1^2}{a_1^2}+\\frac{x_2^2}{a_2^2}+\\cdots + \\frac{x_n^2}{a_n^2}=1." }, { "math_id": 54, "text": "V = \\frac{\\pi^\\frac{n}{2}}{\\Gamma\\left(\\frac{n}{2} + 1\\right)}a_1a_2\\cdots a_n \\approx \\frac{1}{\\sqrt{\\pi n}} \\cdot \\left(\\frac{2 e \\pi}{n}\\right)^{n/2} a_1a_2\\cdots a_n " }, { "math_id": 55, "text": "\\R^n," }, { "math_id": 56, "text": "(\\mathbf{x}-\\mathbf{v})^\\mathsf{T}\\! \\boldsymbol{A}\\, (\\mathbf{x}-\\mathbf{v}) = 1" }, { "math_id": 57, "text": "(\\mathbf{x}-\\mathbf{v})^\\mathsf{T}\\! \\boldsymbol{A}\\, (\\mathbf{x}-\\mathbf{v}) " }, { "math_id": 58, "text": "\\boldsymbol{A}" }, { "math_id": 59, "text": "\\boldsymbol{A} = \\boldsymbol{A}^{1/ 2}\\boldsymbol{A}^{1/ 2}; " }, { "math_id": 60, "text": "\\boldsymbol{A}." 
}, { "math_id": 61, "text": "A^{-1/2}\\cdot S(\\mathbf{0},1) + \\mathbf{v}" }, { "math_id": 62, "text": "\\mathbf x \\mapsto \\mathbf f_0 + \\boldsymbol A \\mathbf x = \\mathbf f_0 + x\\mathbf f_1 + y\\mathbf f_2 + z\\mathbf f_3" }, { "math_id": 63, "text": "\\mathbf x(\\theta, \\varphi) = \\mathbf f_0 + \\mathbf f_1 \\cos\\theta \\cos\\varphi + \\mathbf f_2 \\cos\\theta \\sin\\varphi + \\mathbf f_3 \\sin\\theta, \\qquad -\\tfrac{\\pi}{2} < \\theta < \\tfrac{\\pi}{2},\\quad 0 \\le \\varphi < 2\\pi" }, { "math_id": 64, "text": "\\mathbf n(\\theta, \\varphi) = \\mathbf f_2 \\times \\mathbf f_3\\cos\\theta\\cos\\varphi + \\mathbf f_3 \\times \\mathbf f_1\\cos\\theta\\sin\\varphi + \\mathbf f_1 \\times \\mathbf f_2\\sin\\theta." }, { "math_id": 65, "text": "F(x, y, z) = \\operatorname{det}\\left(\\mathbf x, \\mathbf f_2, \\mathbf f_3\\right)^2 + \\operatorname{det}\\left(\\mathbf f_1,\\mathbf x, \\mathbf f_3\\right)^2 + \\operatorname{det}\\left(\\mathbf f_1, \\mathbf f_2, \\mathbf x\\right)^2 - \\operatorname{det}\\left(\\mathbf f_1, \\mathbf f_2, \\mathbf f_3\\right)^2 = 0" }, { "math_id": 66, "text": "m = V \\rho = \\tfrac{4}{3} \\pi abc \\rho." }, { "math_id": 67, "text": "\\begin{align}\n I_\\mathrm{xx} &= \\tfrac{1}{5}m\\left(b^2 + c^2\\right), &\n I_\\mathrm{yy} &= \\tfrac{1}{5}m\\left(c^2 + a^2\\right), &\n I_\\mathrm{zz} &= \\tfrac{1}{5}m\\left(a^2 + b^2\\right), \\\\[3pt]\n I_\\mathrm{xy} &= I_\\mathrm{yz} = I_\\mathrm{zx} = 0.\n\\end{align}" }, { "math_id": 68, "text": "f(x) = k \\cdot g\\left((\\mathbf x - \\boldsymbol\\mu)\\boldsymbol\\Sigma^{-1}(\\mathbf x - \\boldsymbol\\mu)^\\mathsf{T}\\right)" } ]
https://en.wikipedia.org/wiki?curid=145381
14539370
Segregation (materials science)
Enrichment of atoms, ions and molecules In materials science, segregation is the enrichment of atoms, ions, or molecules at a microscopic region in a materials system. While the terms segregation and adsorption are essentially synonymous, in practice, segregation is often used to describe the partitioning of molecular constituents to defects from "solid" solutions, whereas adsorption is generally used to describe such partitioning from liquids and gases to surfaces. The molecular-level segregation discussed in this article is distinct from other types of materials phenomena that are often called segregation, such as particle segregation in granular materials, and phase separation or precipitation, wherein molecules are segregated in to macroscopic regions of different compositions. Segregation has many practical consequences, ranging from the formation of soap bubbles, to microstructural engineering in materials science, to the stabilization of colloidal suspensions. Segregation can occur in various materials classes. In polycrystalline solids, segregation occurs at defects, such as dislocations, grain boundaries, stacking faults, or the interface between two phases. In liquid solutions, chemical gradients exist near second phases and surfaces due to combinations of chemical and electrical effects. Segregation which occurs in well-equilibrated systems due to the instrinsic chemical properties of the system is termed equilibrium segregation. Segregation that occurs due to the processing history of the sample (but that would disappear at long times) is termed non-equilibrium segregation. History. Equilibrium segregation is associated with the lattice disorder at interfaces, where there are sites of energy different from those within the lattice at which the solute atoms can deposit themselves. The equilibrium segregation is so termed because the solute atoms segregate themselves to the interface or surface in accordance with the statistics of thermodynamics in order to minimize the overall free energy of the system. This sort of partitioning of solute atoms between the grain boundary and the lattice was predicted by McLean in 1957. Non-equilibrium segregation, first theorized by Westbrook in 1964, occurs as a result of solutes coupling to vacancies which are moving to grain boundary sources or sinks during quenching or application of stress. It can also occur as a result of solute pile-up at a moving interface. There are two main features of non-equilibrium segregation, by which it is most easily distinguished from equilibrium segregation. In the non-equilibrium effect, the magnitude of the segregation increases with increasing temperature and the alloy can be homogenized without further quenching because its lowest energy state corresponds to a uniform solute distribution. In contrast, the equilibrium segregated state, by definition, is the lowest energy state in a system that exhibits equilibrium segregation, and the extent of the segregation effect decreases with increasing temperature. The details of non-equilibrium segregation are not going to be discussed here, but can be found in the review by Harries and Marwick. Importance. Segregation of a solute to surfaces and grain boundaries in a solid produces a section of material with a discrete composition and its own set of properties that can have important (and often deleterious) effects on the overall properties of the material. 
These 'zones' with an increased concentration of solute can be thought of as the cement between the bricks of a building. The structural integrity of the building depends not only on the material properties of the brick, but also greatly on the properties of the long lines of mortar in between. Segregation to grain boundaries, for example, can lead to grain boundary fracture as a result of temper brittleness, creep embrittlement, stress relief cracking of weldments, hydrogen embrittlement, environmentally assisted fatigue, grain boundary corrosion, and some kinds of intergranular stress corrosion cracking. A very interesting and important field of study of impurity segregation processes involves AES of grain boundaries of materials. This technique includes tensile fracturing of special specimens directly inside the UHV chamber of the Auger Electron Spectrometer that was developed by Ilyin. Segregation to grain boundaries can also affect their respective migration rates, and so affects sinterability, as well as the grain boundary diffusivity (although sometimes these effects can be used advantageously). Segregation to free surfaces also has important consequences involving the purity of metallurgical samples. Because of the favorable segregation of some impurities to the surface of the material, a very small concentration of impurity in the bulk of the sample can lead to a very significant coverage of the impurity on a cleaved surface of the sample. In applications where an ultra-pure surface is needed (for example, in some nanotechnology applications), the segregation of impurities to surfaces requires a much higher purity of bulk material than would be needed if segregation effects did not exist. The following figure illustrates this concept with two cases in which the total fraction of impurity atoms is 0.25 (25 impurity atoms in 100 total). In the representation on the left, these impurities are equally distributed throughout the sample, and so the fractional surface coverage of impurity atoms is also approximately 0.25. In the representation to the right, however, the same number of impurity atoms are shown segregated on the surface, so that an observation of the surface composition would yield a much higher impurity fraction (in this case, about 0.69). In fact, in this example, were impurities to completely segregate to the surface, an impurity fraction of just 0.36 could completely cover the surface of the material. In an application where surface interactions are important, this result could be disastrous. While the intergranular failure problems noted above are sometimes severe, they are rarely the cause of major service failures (in structural steels, for example), as suitable safety margins are included in the designs. Perhaps the greater concern is that with the development of new technologies and materials with new and more extensive mechanical property requirements, and with the increasing impurity contents as a result of the increased recycling of materials, we may see intergranular failure in materials and situations not seen currently. Thus, a greater understanding of all of the mechanisms surrounding segregation might lead to being able to control these effects in the future. Modeling potentials, experimental work, and related theories are still being developed to explain these segregation mechanisms for increasingly complex systems. Theories of Segregation. Several theories describe the equilibrium segregation activity in materials. 
The adsorption theories for the solid-solid interface and the solid-vacuum surface are direct analogues of theories well known in the field of gas adsorption on the free surfaces of solids. Langmuir–McLean theory for surface and grain boundary segregation in binary systems. This is the earliest theory specifically for grain boundaries, in which McLean uses a model of P solute atoms distributed at random amongst N lattice sites and p solute atoms distributed at random amongst n independent grain boundary sites. The total free energy due to the solute atoms is then: formula_0 where E and e are energies of the solute atom in the lattice and in the grain boundary, respectively and the kln term represents the configurational entropy of the arrangement of the solute atoms in the bulk and grain boundary. McLean used basic statistical mechanics to find the fractional monolayer of segregant, formula_1, at which the system energy was minimized (at the equilibrium state), differentiating "G" with respect to "p", noting that the sum of "p" and "P" is constant. Here the grain boundary analogue of Langmuir adsorption at free surfaces becomes: formula_2 Here, formula_3 is the fraction of the grain boundary monolayer available for segregated atoms at saturation, formula_1 is the actual fraction covered with segregant, formula_4 is the bulk solute molar fraction, and formula_5 is the free energy of segregation per mole of solute. Values of formula_5 were estimated by McLean using the elastic strain energy, formula_6, released by the segregation of solute atoms. The solute atom is represented by an elastic sphere fitted into a spherical hole in an elastic matrix continuum. The elastic energy associated with the solute atom is given by: formula_7 where formula_8 is the solute bulk modulus, formula_9 is the matrix shear modulus, and formula_10 and formula_11 are the atomic radii of the matrix and impurity atoms, respectively. This method gives values correct to within a factor of two (as compared with experimental data for grain boundary segregation), but a greater accuracy is obtained using the method of Seah and Hondros, described in the following section. Free energy of grain boundary segregation in binary systems. Using truncated BET theory (the gas adsorption theory developed by Brunauer, Emmett, and Teller), Seah and Hondros write the solid-state analogue as: formula_12 formula_13 where formula_14 formula_15 is the solid solubility, which is known for many elements (and can be found in metallurgical handbooks). In the dilute limit, a slightly soluble substance has formula_16, so the above equation reduces to that found with the Langmuir-McLean theory. This equation is only valid for formula_17. If there is an excess of solute such that a second phase appears, the solute content is limited to formula_15 and the equation becomes formula_18 This theory for grain boundary segregation, derived from truncated BET theory, provides excellent agreement with experimental data obtained by Auger electron spectroscopy and other techniques. More complex systems. Other models exist to model more complex binary systems. The above theories operate on the assumption that the segregated atoms are non-interacting. 
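As a numerical illustration of the Langmuir–McLean isotherm above (added here; the solute fraction, segregation free energy, and saturation coverage used are arbitrary placeholders), the boundary coverage X_b can be obtained in closed form from the isotherm and evaluated at several temperatures, showing the decrease of equilibrium segregation with increasing temperature noted earlier. The extension to interacting adsorbate atoms is taken up next.

```python
# Illustrative evaluation (not from the article) of the Langmuir-McLean isotherm
#   X_b / (X_b0 - X_b) = X_c / (1 - X_c) * exp(-dG / (R*T)),
# solved in closed form for X_b. All numerical inputs are arbitrary placeholders.
import math

R = 8.314            # gas constant, J/(mol K)

def mclean_coverage(X_c, dG, T, X_b0=1.0):
    """Equilibrium grain-boundary coverage X_b for bulk fraction X_c and
    segregation free energy dG (J/mol, negative for favourable segregation)."""
    A = X_c / (1.0 - X_c) * math.exp(-dG / (R * T))
    return X_b0 * A / (1.0 + A)

X_c = 1e-4           # 100 ppm bulk solute (placeholder)
dG = -50e3           # -50 kJ/mol (placeholder)
for T in (500.0, 800.0, 1100.0):
    print(T, mclean_coverage(X_c, dG, T))
# Coverage falls as T rises: the signature of equilibrium (as opposed to
# non-equilibrium) segregation discussed earlier.
```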
If, in a binary system, adjacent adsorbate atoms are allowed an interaction energy formula_19, such that they can attract (when formula_19 is negative) or repel (when formula_19 is positive) each other, the solid-state analogue of the Fowler adsorption theory is developed as formula_20 When formula_19 is zero, this theory reduces to that of Langmuir and McLean. However, as formula_19 becomes more negative, the segregation shows progressively sharper rises as the temperature falls until eventually the rise in segregation is discontinuous at a certain temperature, as shown in the following figure. Guttman, in 1975, extended the Fowler theory to allow for interactions between two co-segregating species in multicomponent systems. This modification is vital to explaining the segregation behavior that results in the intergranular failures of engineering materials. More complex theories are detailed in the work by Guttmann and McLean and Guttmann. The free energy of surface segregation in binary systems. The Langmuir–McLean equation for segregation, when using the regular solution model for a binary system, is valid for surface segregation (although sometimes the equation will be written replacing formula_1 with formula_21). The free energy of surface segregation is formula_22. The enthalpy is given by formula_23 where formula_24 and formula_25 are matrix surface energies without and with solute, formula_26 is their heat of mixing, Z and formula_27 are the coordination numbers in the matrix and at the surface, and formula_28 is the coordination number for surface atoms to the layer below. The last term in this equation is the elastic strain energy formula_6, given above, and is governed by the mismatch between the solute and the matrix atoms. For solid metals, the surface energies scale with the melting points. The surface segregation enrichment ratio increases when the solute atom size is larger than the matrix atom size and when the melting point of the solute is lower than that of the matrix. A chemisorbed gaseous species on the surface can also have an effect on the surface composition of a binary alloy. In the presence of a coverage of a chemisorbed species theta, it is proposed that the Langmuir-McLean model is valid with the free energy of surface segregation given by formula_29, where formula_30 formula_31 and formula_32 are the chemisorption energies of the gas on solute A and matrix B and formula_33 is the fractional coverage. At high temperatures, evaporation from the surface can take place, causing a deviation from the McLean equation. At lower temperatures, both grain boundary and surface segregation can be limited by the diffusion of atoms from the bulk to the surface or interface. Kinetics of segregation. In some situations where segregation is important, the segregant atoms do not have sufficient time to reach their equilibrium level as defined by the above adsorption theories. The kinetics of segregation become a limiting factor and must be analyzed as well. Most existing models of segregation kinetics follow the McLean approach. In the model for equilibrium monolayer segregation, the solute atoms are assumed to segregate to a grain boundary from two infinite half-crystals or to a surface from one infinite half-crystal. The diffusion in the crystals is described by Fick's laws. The ratio of the solute concentration in the grain boundary to that in the adjacent atomic layer of the bulk is given by an enrichment ratio, formula_34. 
Most models assume formula_34 to be a constant, but in practice this is only true for dilute systems with low segregation levels. In this dilute limit, if formula_3 is one monolayer, formula_34 is given as formula_35. The kinetics of segregation can be described by the following equation: formula_36formula_37 where formula_38 for grain boundaries and 1 for the free surface, formula_39 is the boundary content at time formula_40, formula_41 is the solute bulk diffusivity, formula_42 is related to the atomic sizes of the solute and the matrix, formula_43 and formula_44, respectively, by formula_45. For short times, this equation is approximated by: formula_46 In practice, formula_34 is not a constant but generally falls as segregation proceeds due to saturation. If formula_34 starts high and falls rapidly as the segregation saturates, the above equation is valid until the point of saturation. In metal castings. All metal castings experience segregation to some extent, and a distinction is made between "macro"segregation and "micro"segregation. Microsegregation refers to localized differences in composition between dendrite arms, and can be significantly reduced by a homogenizing heat treatment. This is possible because the distances involved (typically on the order of 10 to 100 μm) are sufficiently small for diffusion to be a significant mechanism. This is not the case in macrosegregation. Therefore, macrosegregation in metal castings cannot be remedied or removed using heat treatment. Further reading. &lt;templatestyles src="Refbegin/styles.css" /&gt; See also. &lt;templatestyles src="Div col/styles.css"/&gt; References. &lt;templatestyles src="Reflist/styles.css" /&gt;
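The kinetic expression given above, together with its short-time approximation, can likewise be evaluated directly. The sketch below is an illustration added here; the enrichment ratio β, size factor f, diffusivity D, and the time values are arbitrary placeholders.

```python
# Illustrative evaluation (not from the article) of McLean's segregation-kinetics
# expression and its short-time approximation. beta, f, D and the times are placeholders.
import numpy as np
from scipy.special import erfc

def mclean_kinetics(t, D, beta, f, F=4.0):
    """Fractional approach to the equilibrium boundary content at time t (grain boundary: F = 4)."""
    u = F * D * t / (beta**2 * f**2)
    return 1.0 - np.exp(u) * erfc(np.sqrt(u))

def mclean_short_time(t, D, beta, f, F=4.0):
    return (2.0 / (beta * f)) * np.sqrt(F * D * t / np.pi)

beta, f, D = 1.0e3, 2.5e-10, 1.0e-16        # enrichment ratio, size factor (m), diffusivity (m^2/s)
for t in (1.0, 1e2, 1e4):                   # seconds
    print(t, mclean_kinetics(t, D, beta, f), mclean_short_time(t, D, beta, f))
# The two expressions agree at short times; the full form then saturates towards 1.
```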
[ { "math_id": 0, "text": "G = pe + PE - kT[\\ln(n!N!) - \\ln(n - p)!p!(N - P)!P!]" }, { "math_id": 1, "text": "X_b" }, { "math_id": 2, "text": "\\frac{X_b}{X_b^0-X_b}=\\frac{X_c}{1-X_c}\\exp \\left ( \\frac{-\\Delta G}{RT} \\right ) " }, { "math_id": 3, "text": "X_b^0" }, { "math_id": 4, "text": "X_c" }, { "math_id": 5, "text": "\\Delta G" }, { "math_id": 6, "text": "E_\\text{el}" }, { "math_id": 7, "text": "E_\\text{el}=\\frac{24\\pi\\Kappa\\mu_0 r_0 (r_1-r_0)^2}{3\\Kappa+4\\mu_0}" }, { "math_id": 8, "text": "\\Kappa " }, { "math_id": 9, "text": "\\mu_0," }, { "math_id": 10, "text": "r_0," }, { "math_id": 11, "text": "r_1," }, { "math_id": 12, "text": "\\frac{X_b}{X_b^0-X_b}=\\frac{X_c}{X_c^0}\\exp " }, { "math_id": 13, "text": " \\left ( \\frac{-\\Delta G'}{RT} \\right ) " }, { "math_id": 14, "text": "\\Delta G=\\Delta G' + \\Delta G_\\text{sol}" }, { "math_id": 15, "text": "X_c^0" }, { "math_id": 16, "text": "X_c^0 = \\exp \\left ( \\frac{\\Delta G_\\text{sol}}{RT}\\right )" }, { "math_id": 17, "text": "X_c \\le X_c^0" }, { "math_id": 18, "text": "\\frac{X_b}{X_b^0-X_b} = \\exp \\left ( \\frac{-\\Delta G'}{RT}\\right )" }, { "math_id": 19, "text": "\\omega\\," }, { "math_id": 20, "text": "\\frac {X_b}{X_b^0 - X_b} = \\frac {X_c}{1-X_c} \\exp \\left [ \\frac{-\\Delta G - Z_1\\omega\\,\\frac{X_b}{X_b^0}}{RT} \\right ]." }, { "math_id": 21, "text": "X_s" }, { "math_id": 22, "text": "\\Delta G_s = \\Delta H_s - T\\,\\Delta S" }, { "math_id": 23, "text": "-\\Delta H_s = \\gamma_0^s - \\gamma_1^s - \\frac{2H_m}{ZX_c(1-X_c)} \\left [ Z_1 (X_c-X_s) + Z_v \\left (X_c - \\frac {1}{2} \\right ) \\right ] + \\frac {24\\pi\\Kappa\\mu_0 r_0 (r_1-r_0)^2}{3\\Kappa+4\\mu_0}" }, { "math_id": 24, "text": "\\gamma_0" }, { "math_id": 25, "text": "\\gamma_1" }, { "math_id": 26, "text": "H_1" }, { "math_id": 27, "text": "Z_1" }, { "math_id": 28, "text": "Z_v" }, { "math_id": 29, "text": "\\Delta G_\\text{chem}" }, { "math_id": 30, "text": "\\Delta G_\\text{chem} = \\Delta G_s + (E_B - E_A)\\Theta\\," }, { "math_id": 31, "text": "E_A" }, { "math_id": 32, "text": "E_B" }, { "math_id": 33, "text": "\\Theta" }, { "math_id": 34, "text": "\\beta" }, { "math_id": 35, "text": "\\beta = \\frac{X_b}{X_c} =\\frac {\\exp\\left (\\frac{-\\Delta G'}{RT}\\right )}{X_c^0}" }, { "math_id": 36, "text": "\\frac{X_b(t) - X_b(0)}{X_b(\\infty) - X_b(0)} = 1 - \\exp \\left ( \\frac {FDt}{\\beta^2f^2} \\right )" }, { "math_id": 37, "text": " \\operatorname{erfc} \\left ( \\frac {FDt}{\\beta^2 f^2} \\right )^{1/2} " }, { "math_id": 38, "text": "F=4" }, { "math_id": 39, "text": "X_b(t)" }, { "math_id": 40, "text": "t" }, { "math_id": 41, "text": "D" }, { "math_id": 42, "text": "f" }, { "math_id": 43, "text": "b" }, { "math_id": 44, "text": "a" }, { "math_id": 45, "text": "f = a^3b^{-2}" }, { "math_id": 46, "text": "\\frac{X_b(t) - X_b(0)}{X_b(\\infty) - X_b(0)} = \\frac {2}{\\beta f} \\sqrt{\\frac {FDt}{\\pi}} = \\frac {2}{\\beta} \\frac {b^2}{a^3} \\sqrt{\\frac {FDt}{\\pi}}" } ]
https://en.wikipedia.org/wiki?curid=14539370
1453977
Null vector
Vector on which a quadratic form is zero In mathematics, given a vector space "X" with an associated quadratic form "q", written ("X", "q"), a null vector or isotropic vector is a non-zero element "x" of "X" for which "q"("x") = 0. In the theory of real bilinear forms, definite quadratic forms and isotropic quadratic forms are distinct. They are distinguished in that only for the latter does there exist a nonzero null vector. A quadratic space ("X", "q") which has a null vector is called a pseudo-Euclidean space. The term "isotropic vector" is used for a vector "v" with "q"("v") = 0, and a quadratic space without null vectors is called an anisotropic space. A pseudo-Euclidean vector space may be decomposed (non-uniquely) into orthogonal subspaces "A" and "B", "X" = "A" + "B", where "q" is positive-definite on "A" and negative-definite on "B". The null cone, or isotropic cone, of "X" consists of the union of balanced spheres: formula_0 The null cone is also the union of the isotropic lines through the origin. Split algebras. A composition algebra with a null vector is a split algebra. In a composition algebra ("A", +, ×, *), the quadratic form is q("x") = "x x"*. When "x" is a null vector, there is no multiplicative inverse for "x", and since "x" ≠ 0, "A" is not a division algebra. In the Cayley–Dickson construction, the split algebras arise in the series bicomplex numbers, biquaternions, and bioctonions, each of which uses the complex number field formula_1 as the foundation of the doubling construction due to L. E. Dickson (1919). In particular, these algebras have two imaginary units, which commute, so that their product, when squared, yields +1: formula_2 Then formula_3 so 1 + hi is a null vector. The real subalgebras, split complex numbers, split quaternions, and split-octonions, with their null cones representing the tracks of light into and out of 0 ∈ "A", suggest spacetime topology. Examples. The light-like vectors of Minkowski space are null vectors. The four linearly independent biquaternions "l" = 1 + "hi", "n" = 1 + "hj", "m" = 1 + "hk", and "m"∗ = 1 – "hk" are null vectors and { "l", "n", "m", "m"∗ } can serve as a basis for the subspace used to represent spacetime. Null vectors are also used in the Newman–Penrose formalism approach to spacetime manifolds. In the Verma module of a Lie algebra there are null vectors. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
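As a small numerical illustration of the definitions above (not part of the original article), the following Python sketch checks null vectors for the Minkowski quadratic form of signature +−−− and verifies that a sum a + b with q(a) = −q(b) lies on the null cone.

import numpy as np

eta = np.diag([1.0, -1.0, -1.0, -1.0])  # bilinear form of signature +---

def q(x):
    # Quadratic form q(x) = x^T eta x associated with the Minkowski metric.
    x = np.asarray(x, dtype=float)
    return float(x @ eta @ x)

print(q([1, 1, 0, 0]))   # 0.0 : a light-like (null) vector
print(q([1, 0, 0, 0]))   # 1.0 : positive, not null
print(q([0, 1, 0, 0]))   # -1.0 : negative, not null

# Decomposition X = A + B with q positive-definite on A and negative-definite on B:
# taking a = (r, 0, 0, 0) in A and b = (0, r, 0, 0) in B gives q(a) = -q(b) = r**2,
# so a + b lies on the null cone for every r.
r = 2.5
print(q([r, r, 0, 0]))   # 0.0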
[ { "math_id": 0, "text": "\\bigcup_{r \\geq 0} \\{x = a + b : q(a) = -q(b) = r, \\ \\ a \\in A, \\ \\ b \\in B \\}." }, { "math_id": 1, "text": "\\Complex" }, { "math_id": 2, "text": "(hi)^2 = h^2 i^2 = (-1)(-1) = +1 ." }, { "math_id": 3, "text": "(1 + hi)(1 + hi)^* = (1 +hi)(1 - hi) = 1 - (hi)^2 = 0" } ]
https://en.wikipedia.org/wiki?curid=1453977
14540664
10-deacetylbaccatin III 10-O-acetyltransferase
Class of enzymes In enzymology, a 10-deacetylbaccatin III 10-O-acetyltransferase (EC 2.3.1.167) is an enzyme that catalyzes the chemical reaction acetyl-CoA + 10-deacetylbaccatin III formula_0 CoA + baccatin III Thus, the two substrates of this enzyme are acetyl-CoA and 10-deacetylbaccatin III, whereas its two products are CoA and baccatin III. This enzyme belongs to the family of transferases, specifically those acyltransferases transferring groups other than aminoacyl groups. The systematic name of this enzyme class is acetyl-CoA:taxan-10beta-ol O-acetyltransferase. This enzyme participates in diterpenoid biosynthesis. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "\\rightleftharpoons" } ]
https://en.wikipedia.org/wiki?curid=14540664
14540682
10-hydroxytaxane O-acetyltransferase
Class of enzymes In enzymology, a 10-hydroxytaxane O-acetyltransferase (EC 2.3.1.163) is an enzyme that catalyzes the chemical reaction acetyl-CoA + 10-desacetyltaxuyunnanin C formula_0 CoA + taxuyunnanin C Thus, the two substrates of this enzyme are acetyl-CoA and 10-desacetyltaxuyunnanin C, whereas its two products are CoA and taxuyunnanin C. This enzyme belongs to the family of transferases, specifically those acyltransferases transferring groups other than aminoacyl groups. The systematic name of this enzyme class is acetyl-CoA:taxan-10beta-ol O-acetyltransferase. This enzyme is also called acetyl coenzyme A: 10-hydroxytaxane O-acetyltransferase. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "\\rightleftharpoons" } ]
https://en.wikipedia.org/wiki?curid=14540682
14540703
13-hydroxylupinine O-tigloyltransferase
Class of enzymes In enzymology, a 13-hydroxylupinine O-tigloyltransferase (EC 2.3.1.93) is an enzyme that catalyzes the chemical reaction (E)-2-methylcrotonoyl-CoA + 13-hydroxylupinine formula_0 CoA + 13-(2-methylcrotonoyl)oxylupinine Thus, the two substrates of this enzyme are (E)-2-methylcrotonoyl-CoA and 13-hydroxylupinine, whereas its two products are CoA and 13-(2-methylcrotonoyl)oxylupinine. This enzyme belongs to the family of transferases, specifically those acyltransferases transferring groups other than aminoacyl groups. The systematic name of this enzyme class is (E)-2-methylcrotonoyl-CoA:13-hydroxylupinine O-2-methylcrotonoyltransferase. Other names in common use include tigloyl-CoA:13-hydroxylupanine O-tigloyltransferase, and 13-hydroxylupanine acyltransferase. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "\\rightleftharpoons" } ]
https://en.wikipedia.org/wiki?curid=14540703
14540712
1-acylglycerol-3-phosphate O-acyltransferase
Class of enzymes In enzymology, a 1-acylglycerol-3-phosphate O-acyltransferase (EC 2.3.1.51) is an enzyme that catalyzes the chemical reaction acyl-CoA + 1-acyl-"sn"-glycerol 3-phosphate formula_0 CoA + 1,2-diacyl-"sn"-glycerol 3-phosphate Thus, the two substrates of this enzyme are acyl-CoA and 1-acyl-"sn"-glycerol 3-phosphate, whereas its two products are CoA and 1,2-diacyl-"sn"-glycerol 3-phosphate. This enzyme belongs to the family of transferases, specifically those acyltransferases transferring groups other than aminoacyl groups. The systematic name of this enzyme class is acyl-CoA:1-acyl-sn-glycerol-3-phosphate 2-O-acyltransferase. Other names in common use include 1-acyl-sn-glycero-3-phosphate acyltransferase, 1-acyl-sn-glycerol 3-phosphate acyltransferase, 1-acylglycero-3-phosphate acyltransferase, 1-acylglycerolphosphate acyltransferase, 1-acylglycerophosphate acyltransferase, and lysophosphatidic acid-acyltransferase. This enzyme participates in 3 metabolic pathways: glycerolipid metabolism, glycerophospholipid metabolism, and ether lipid metabolism. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "\\rightleftharpoons" } ]
https://en.wikipedia.org/wiki?curid=14540712
14540726
1-acylglycerophosphocholine O-acyltransferase
Class of enzymes In enzymology, a 1-acylglycerophosphocholine O-acyltransferase (EC 2.3.1.23) is an enzyme that catalyzes the chemical reaction acyl-CoA + 1-acyl-sn-glycero-3-phosphocholine formula_0 CoA + 1,2-diacyl-sn-glycero-3-phosphocholine Thus, the two substrates of this enzyme are acyl-CoA and 1-acyl-sn-glycero-3-phosphocholine, whereas its two products are CoA and 1,2-diacyl-sn-glycero-3-phosphocholine. This enzyme belongs to the family of transferases, specifically those acyltransferases transferring groups other than aminoacyl groups. The systematic name of this enzyme class is acyl-CoA:1-acyl-sn-glycero-3-phosphocholine O-acyltransferase. Other names in common use include lysolecithin acyltransferase, 1-acyl-sn-glycero-3-phosphocholine acyltransferase, acyl coenzyme A-monoacylphosphatidylcholine acyltransferase, acyl-CoA:1-acyl-glycero-3-phosphocholine transacylase, lysophosphatide acyltransferase, and lysophosphatidylcholine acyltransferase. This enzyme participates in glycerophospholipid metabolism. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "\\rightleftharpoons" } ]
https://en.wikipedia.org/wiki?curid=14540726
14540740
1-alkenylglycerophosphocholine O-acyltransferase
Class of enzymes In enzymology, a 1-alkenylglycerophosphocholine O-acyltransferase (EC 2.3.1.104) is an enzyme that catalyzes the chemical reaction acyl-CoA + 1-alkenylglycerophosphocholine formula_0 CoA + 1-alkenyl-2-acylglycerophosphocholine Thus, the two substrates of this enzyme are acyl-CoA and 1-alkenylglycerophosphocholine, whereas its two products are CoA and 1-alkenyl-2-acylglycerophosphocholine. This enzyme belongs to the family of transferases, specifically those acyltransferases transferring groups other than aminoacyl groups. The systematic name of this enzyme class is acyl-CoA:1-alkenylglycerophosphocholine O-acyltransferase. This enzyme participates in ether lipid metabolism. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "\\rightleftharpoons" } ]
https://en.wikipedia.org/wiki?curid=14540740
14540755
1-alkenylglycerophosphoethanolamine O-acyltransferase
Class of enzymes In enzymology, a 1-alkenylglycerophosphoethanolamine O-acyltransferase (EC 2.3.1.121) is an enzyme that catalyzes the chemical reaction acyl-CoA + 1-alkenylglycerophosphoethanolamine formula_0 CoA + 1-alkenyl-2-acylglycerophosphoethanolamine Thus, the two substrates of this enzyme are acyl-CoA and 1-alkenylglycerophosphoethanolamine, whereas its two products are CoA and 1-alkenyl-2-acylglycerophosphoethanolamine. This enzyme belongs to the family of transferases, specifically those acyltransferases transferring groups other than aminoacyl groups. The systematic name of this enzyme class is acyl-CoA:1-alkenylglycerophosphoethanolamine O-acyltransferase. This enzyme participates in ether lipid metabolism. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "\\rightleftharpoons" } ]
https://en.wikipedia.org/wiki?curid=14540755
14540770
1-alkyl-2-acetylglycerol O-acyltransferase
Class of enzymes In enzymology, a 1-alkyl-2-acetylglycerol O-acyltransferase (EC 2.3.1.125) is an enzyme that catalyzes the chemical reaction acyl-CoA + 1-O-alkyl-2-acetyl-sn-glycerol formula_0 CoA + 1-O-alkyl-2-acetyl-3-acyl-sn-glycerol Thus, the two substrates of this enzyme are acyl-CoA and 1-O-alkyl-2-acetyl-sn-glycerol, whereas its two products are CoA and 1-O-alkyl-2-acetyl-3-acyl-sn-glycerol. This enzyme belongs to the family of transferases, specifically those acyltransferases transferring groups other than aminoacyl groups. The systematic name of this enzyme class is acyl-CoA:1-O-alkyl-2-acetyl-sn-glycerol O-acyltransferase. This enzyme is also called 1-hexadecyl-2-acetylglycerol acyltransferase. This enzyme participates in ether lipid metabolism. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "\\rightleftharpoons" } ]
https://en.wikipedia.org/wiki?curid=14540770
14540787
1-alkylglycerophosphocholine O-acetyltransferase
Class of enzymes In enzymology, a 1-alkylglycerophosphocholine O-acetyltransferase (EC 2.3.1.67) is an enzyme that catalyzes the chemical reaction acetyl-CoA + 1-alkyl-sn-glycero-3-phosphocholine formula_0 CoA + 2-acetyl-1-alkyl-sn-glycero-3-phosphocholine Thus, the two substrates of this enzyme are acetyl-CoA and 1-alkyl-sn-glycero-3-phosphocholine, whereas its two products are CoA and 2-acetyl-1-alkyl-sn-glycero-3-phosphocholine. This enzyme belongs to the family of transferases, specifically those acyltransferases transferring groups other than aminoacyl groups. The systematic name of this enzyme class is acetyl-CoA:1-alkyl-sn-glycero-3-phosphocholine 2-O-acetyltransferase. Other names in common use include acetyl-CoA:1-alkyl-2-lyso-sn-glycero-3-phosphocholine 2-O-acetyltransferase, acetyl-CoA:lyso-PAF acetyltransferase, 1-alkyl-2-lysolecithin acetyltransferase, acyl-CoA:1-alkyl-sn-glycero-3-phosphocholine acyltransferase, blood platelet-activating factor acetyltransferase, lyso-GPC:acetyl CoA acetyltransferase, lyso-platelet activating factor:acetyl-CoA acetyltransferase, lysoPAF:acetyl CoA acetyltransferase, PAF acetyltransferase, platelet-activating factor acylhydrolase, platelet-activating factor-synthesizing enzyme, 1-alkyl-2-lyso-sn-glycero-3-phosphocholine acetyltransferase, and lyso-platelet-activating factor:acetyl-CoA acetyltransferase. This enzyme participates in ether lipid metabolism. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "\\rightleftharpoons" } ]
https://en.wikipedia.org/wiki?curid=14540787
14540802
1-alkylglycerophosphocholine O-acyltransferase
Class of enzymes In enzymology, a 1-alkylglycerophosphocholine O-acyltransferase (EC 2.3.1.63) is an enzyme that catalyzes the chemical reaction acyl-CoA + 1-alkyl-sn-glycero-3-phosphocholine formula_0 CoA + 2-acyl-1-alkyl-sn-glycero-3-phosphocholine Thus, the two substrates of this enzyme are acyl-CoA and 1-alkyl-sn-glycero-3-phosphocholine, whereas its two products are CoA and 2-acyl-1-alkyl-sn-glycero-3-phosphocholine. This enzyme belongs to the family of transferases, specifically those acyltransferases transferring groups other than aminoacyl groups. The systematic name of this enzyme class is acyl-CoA:1-alkyl-sn-glycero-3-phosphocholine O-acyltransferase. This enzyme participates in ether lipid metabolism. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "\\rightleftharpoons" } ]
https://en.wikipedia.org/wiki?curid=14540802
14540819
2,3,4,5-tetrahydropyridine-2,6-dicarboxylate N-succinyltransferase
Class of enzymes In enzymology, a 2,3,4,5-tetrahydropyridine-2,6-dicarboxylate N-succinyltransferase (EC 2.3.1.117) is an enzyme that catalyzes the chemical reaction succinyl-CoA + (S)-2,3,4,5-tetrahydropyridine-2,6-dicarboxylate + H2O formula_0 CoA + N-succinyl-L-2-amino-6-oxoheptanedioate The 3 substrates of this enzyme are succinyl-CoA, (S)-2,3,4,5-tetrahydropyridine-2,6-dicarboxylate, and H2O, whereas its two products are CoA and N-succinyl-L-2-amino-6-oxoheptanedioate. This enzyme belongs to the family of transferases, specifically those acyltransferases transferring groups other than aminoacyl groups. The systematic name of this enzyme class is succinyl-CoA:(S)-2,3,4,5-tetrahydropyridine-2,6-dicarboxylate N-succinyltransferase. Other names in common use include tetrahydropicolinate succinylase, tetrahydrodipicolinate N-succinyltransferase, tetrahydrodipicolinate succinyltransferase, succinyl-CoA:tetrahydrodipicolinate N-succinyltransferase, and succinyl-CoA:2,3,4,5-tetrahydropyridine-2,6-dicarboxylate N-succinyltransferase. This enzyme participates in lysine biosynthesis. Structural studies. As of late 2007, 4 structures have been solved for this class of enzymes, with PDB accession codes 1KGQ, 1KGT, 2TDT, and 3TDT. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "\\rightleftharpoons" } ]
https://en.wikipedia.org/wiki?curid=14540819
14540829
2,3-diaminopropionate N-oxalyltransferase
Class of enzymes In enzymology, a 2,3-diaminopropionate N-oxalyltransferase (EC 2.3.1.58) is an enzyme that catalyzes the chemical reaction oxalyl-CoA + L-2,3-diaminopropanoate formula_0 CoA + N3-oxalyl-L-2,3-diaminopropanoate Thus, the two substrates of this enzyme are oxalyl-CoA and L-2,3-diaminopropanoate, whereas its two products are CoA and N3-oxalyl-L-2,3-diaminopropanoate. This enzyme belongs to the family of transferases, specifically those acyltransferases transferring groups other than aminoacyl groups. The systematic name of this enzyme class is oxalyl-CoA:L-2,3-diaminopropanoate N3-oxalyltransferase. Other names in common use include oxalyldiaminopropionate synthase, ODAP synthase, oxalyl-CoA:L-alpha,beta-diaminopropionic acid oxalyltransferase, oxalyldiaminopropionic synthase, and oxalyl-CoA:L-2,3-diaminopropanoate 3-N-oxalyltransferase. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "\\rightleftharpoons" } ]
https://en.wikipedia.org/wiki?curid=14540829
14540842
2-acylglycerol-3-phosphate O-acyltransferase
Class of enzymes In enzymology, a 2-acylglycerol-3-phosphate O-acyltransferase (EC 2.3.1.52) is an enzyme that catalyzes the chemical reaction acyl-CoA + 2-acyl-sn-glycerol 3-phosphate formula_0 CoA + 1,2-diacyl-sn-glycerol 3-phosphate Thus, the two substrates of this enzyme are acyl-CoA and 2-acyl-sn-glycerol 3-phosphate, whereas its two products are CoA and 1,2-diacyl-sn-glycerol 3-phosphate. This enzyme belongs to the family of transferases, specifically those acyltransferases transferring groups other than aminoacyl groups. The systematic name of this enzyme class is acyl-CoA:2-acyl-sn-glycerol 3-phosphate O-acyltransferase. This enzyme is also called 2-acylglycerophosphate acyltransferase. This enzyme participates in glycerophospholipid metabolism. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "\\rightleftharpoons" } ]
https://en.wikipedia.org/wiki?curid=14540842
14540861
2-acylglycerol O-acyltransferase
Class of enzymes In enzymology, a 2-acylglycerol O-acyltransferase (EC 2.3.1.22) is an enzyme that catalyzes the chemical reaction acyl-CoA + 2-acylglycerol formula_0 CoA + diacylglycerol Thus, the two substrates of this enzyme are acyl-CoA and 2-acylglycerol, whereas its two products are CoA and diacylglycerol. This enzyme belongs to the family of transferases, specifically those acyltransferases transferring groups other than aminoacyl groups. The systematic name of this enzyme class is acyl-CoA:2-acylglycerol O-acyltransferase. Other names in common use include acylglycerol palmitoyltransferase, monoglyceride acyltransferase, acyl coenzyme A-monoglyceride acyltransferase, and monoacylglycerol acyltransferase. This enzyme participates in glycerolipid metabolism. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "\\rightleftharpoons" } ]
https://en.wikipedia.org/wiki?curid=14540861
14540875
2-acylglycerophosphocholine O-acyltransferase
Class of enzymes In enzymology, a 2-acylglycerophosphocholine O-acyltransferase (EC 2.3.1.62) is an enzyme that catalyzes the chemical reaction acyl-CoA + 2-acyl-sn-glycero-3-phosphocholine formula_0 CoA + phosphatidylcholine Thus, the two substrates of this enzyme are acyl-CoA and 2-acyl-sn-glycero-3-phosphocholine, whereas its two products are CoA and phosphatidylcholine. This enzyme belongs to the family of transferases, specifically those acyltransferases transferring groups other than aminoacyl groups. The systematic name of this enzyme class is acyl-CoA:2-acyl-sn-glycero-3-phosphocholine O-acyltransferase. Other names in common use include 2-acylglycerol-3-phosphorylcholine acyltransferase, and 2-acylglycerophosphocholine acyltransferase. This enzyme participates in glycerophospholipid metabolism. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "\\rightleftharpoons" } ]
https://en.wikipedia.org/wiki?curid=14540875
14540885
2alpha-hydroxytaxane 2-O-benzoyltransferase
Enzyme In enzymology, a 2alpha-hydroxytaxane 2-O-benzoyltransferase (EC 2.3.1.166) is an enzyme that catalyzes the chemical reaction benzoyl-CoA + 10-deacetyl-2-debenzoylbaccatin III formula_0 CoA + 10-deacetylbaccatin III Thus, the two substrates of this enzyme are benzoyl-CoA and 10-deacetyl-2-debenzoylbaccatin III, whereas its two products are CoA and 10-deacetylbaccatin III. This enzyme belongs to the family of transferases, specifically those acyltransferases transferring groups other than aminoacyl groups. The systematic name of this enzyme class is benzoyl-CoA:taxan-2alpha-ol O-benzoyltransferase. This enzyme is also called benzoyl-CoA:taxane 2alpha-O-benzoyltransferase. This enzyme participates in diterpenoid biosynthesis. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "\\rightleftharpoons" } ]
https://en.wikipedia.org/wiki?curid=14540885
14540904
2-ethylmalate synthase
Class of enzymes In enzymology, a 2-ethylmalate synthase (EC 2.3.3.6) is an enzyme that catalyzes the chemical reaction acetyl-CoA + H2O + 2-oxobutanoate formula_0 (R)-2-ethylmalate + CoA The 3 substrates of this enzyme are acetyl-CoA, H2O, and 2-oxobutanoate, whereas its two products are (R)-2-ethylmalate and CoA. This enzyme belongs to the family of transferases, specifically those acyltransferases that convert acyl groups into alkyl groups on transfer. The systematic name of this enzyme class is acetyl-CoA:2-oxobutanoate C-acetyltransferase (thioester-hydrolysing, carboxymethyl-forming). Other names in common use include (R)-2-ethylmalate 2-oxobutanoyl-lyase (CoA-acetylating), 2-ethylmalate-3-hydroxybutanedioate synthase, propylmalate synthase, and propylmalic synthase. This enzyme participates in pyruvate metabolism. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "\\rightleftharpoons" } ]
https://en.wikipedia.org/wiki?curid=14540904
14540915
2-hydroxyglutarate synthase
Class of enzymes In enzymology, a 2-hydroxyglutarate synthase (EC 2.3.3.11) is an enzyme that catalyzes the chemical reaction propanoyl-CoA + H2O + glyoxylate formula_0 2-hydroxyglutarate + CoA The 3 substrates of this enzyme are propanoyl-CoA, H2O, and glyoxylate, whereas its two products are 2-hydroxyglutarate and CoA. This enzyme belongs to the family of transferases, specifically those acyltransferases that convert acyl groups into alkyl groups on transfer. The systematic name of this enzyme class is propanoyl-CoA:glyoxylate C-propanoyltransferase (thioester-hydrolysing, 2-carboxyethyl-forming). Other names in common use include 2-hydroxyglutaratic synthetase, 2-hydroxyglutaric synthetase, alpha-hydroxyglutarate synthase, hydroxyglutarate synthase, and 2-hydroxyglutarate glyoxylate-lyase (CoA-propanoylating). This enzyme participates in C5-branched dibasic acid metabolism. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "\\rightleftharpoons" } ]
https://en.wikipedia.org/wiki?curid=14540915
14540926
2-isopropylmalate synthase
InterPro Family In enzymology, a 2-isopropylmalate synthase (EC 2.3.3.13) is an enzyme that catalyzes the chemical reaction acetyl-CoA + 3-methyl-2-oxobutanoate + H2O formula_0 (2S)-2-isopropylmalate + CoA The three substrates of this enzyme are acetyl-CoA, 3-methyl-2-oxobutanoate, and H2O, and its products are (2S)-2-isopropylmalate and CoA. The enzyme belongs to the family of transferases, specifically those acyltransferases that convert acyl groups into alkyl groups on transfer. The systematic name of this enzyme class is "acetyl-CoA:3-methyl-2-oxobutanoate C-acetyltransferase (thioester-hydrolysing, carboxymethyl-forming)". Other names in common use include "3-carboxy-3-hydroxy-4-methylpentanoate 3-methyl-2-oxobutanoate-lyase (CoA-acetylating)", "alpha-isopropylmalate synthetase", "alpha-isopropylmalate synthase", "alpha-isopropylmalic synthetase", "isopropylmalate synthase", and "isopropylmalate synthetase". This enzyme participates in the biosynthesis of L-leucine and in pyruvate metabolism. Monovalent and divalent cation activation have been reported for enzymes from different sources. "Mycobacterium tuberculosis" α-isopropylmalate synthase requires a divalent metal ion, of which Mg2+ and Mn2+ give highest activity, and a monovalent cation, with K+ as the best activator. Zn2+ was shown to be an inhibitor, contrary to what was assumed from the structural data. Another feature of the "M. tuberculosis" homolog is that L-leucine, the feedback inhibitor, inhibits the enzyme in a time-dependent fashion. This was the first demonstration of a feedback inhibitor that displays slow-onset inhibition. Tertiary structure. As of late 2007, only one tertiary structure has been solved for this class of enzymes, with the Protein Data Bank accession code 1SR9. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "\\rightleftharpoons" } ]
https://en.wikipedia.org/wiki?curid=14540926
1454093
Gupta–Bleuler formalism
Gauge fixing procedure In quantum field theory, the Gupta–Bleuler formalism is a way of quantizing the electromagnetic field. The formulation is due to theoretical physicists Suraj N. Gupta and Konrad Bleuler. Overview. Firstly, consider a single photon. A basis of the one-photon vector space (it is explained below why it is not a Hilbert space) is given by the eigenstates formula_0, where the 4-momentum formula_1 is null (formula_2) with positive energy component formula_3, formula_4 is the unit polarization vector, and the index formula_5 ranges from 0 to 3. So, formula_1 is uniquely determined by the spatial momentum formula_6. Using the bra–ket notation, this space is equipped with a sesquilinear form defined by formula_7, where the formula_8 factor is to implement Lorentz covariance. The metric signature used here is +−−−. However, this sesquilinear form gives positive norms for spatial polarizations but negative norms for time-like polarizations. Negative probabilities are unphysical; moreover, a physical photon has only two transverse polarizations, not four. If one includes gauge covariance, one realizes a photon can have three possible polarizations (two transverse and one longitudinal, i.e. parallel to the 4-momentum). This is given by the restriction formula_9. However, the longitudinal component is merely an unphysical gauge degree of freedom. While it would be desirable to define a stricter restriction than the one given above which leaves only the two transverse components, it is easy to check that this cannot be done in a Lorentz covariant manner, because what is transverse in one frame of reference is no longer transverse in another. To resolve this difficulty, first look at the subspace with three polarizations. The sesquilinear form restricted to it is merely semidefinite, which is better than indefinite. In addition, the subspace with zero norm turns out to be none other than the gauge degrees of freedom. So, define the physical Hilbert space to be the quotient space of the three-polarization subspace by its zero-norm subspace. This space has a positive definite form, making it a true Hilbert space. This technique can be similarly extended to the bosonic Fock space of multiparticle photons. Using the standard trick of adjoint creation and annihilation operators, but with this quotient trick, one can formulate a free field vector potential as an operator valued distribution formula_10 satisfying formula_11 with the condition formula_12 for physical states formula_13 and formula_14 in the Fock space (it is understood that physical states are really equivalence classes of states that differ by a state of zero norm). This is not the same thing as formula_15. Note that if O is any gauge invariant operator, formula_16 does not depend upon the choice of the representatives of the equivalence classes, and so, this quantity is well-defined. This is not true for non-gauge-invariant operators in general because the Lorenz gauge still leaves residual gauge degrees of freedom. In an interacting theory of quantum electrodynamics, the Lorenz gauge condition still applies, but formula_10 no longer satisfies the free wave equation.
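As a numerical sketch of the norm structure described above (an illustration, not part of the original article), the following Python code evaluates the sesquilinear form proportional to −η_{μν} on polarization vectors for a null 4-momentum, dropping the overall 2|k| delta-function factor. It shows the negative-norm time-like polarization, the two positive-norm transverse polarizations, and the zero-norm longitudinal (gauge) direction that is removed by the quotient construction.

import numpy as np

eta = np.diag([1.0, -1.0, -1.0, -1.0])  # metric, signature +---

def form(e1, e2):
    # Sesquilinear form on polarizations, proportional to -eta_{mu nu} e1*^mu e2^nu;
    # the 2|k| delta-function normalization factor is omitted in this sketch.
    return float(np.real(-np.conj(np.asarray(e1)) @ eta @ np.asarray(e2)))

k = np.array([1.0, 0.0, 0.0, 1.0])          # null 4-momentum: k.eta.k = 0
print(float(k @ eta @ k))                    # 0.0

eps_time = np.array([1.0, 0.0, 0.0, 0.0])   # time-like polarization
eps_t1   = np.array([0.0, 1.0, 0.0, 0.0])   # transverse polarization
eps_t2   = np.array([0.0, 0.0, 1.0, 0.0])   # transverse polarization
eps_long = k                                 # longitudinal direction, satisfies k.eps = 0

print(form(eps_time, eps_time))                     # -1.0 : negative norm, unphysical
print(form(eps_t1, eps_t1), form(eps_t2, eps_t2))   # 1.0 1.0 : physical transverse modes
print(form(eps_long, eps_long))                     # 0.0 : zero norm, pure gauge, quotiented out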
[ { "math_id": 0, "text": "|k,\\epsilon_\\mu\\rangle " }, { "math_id": 1, "text": "k" }, { "math_id": 2, "text": "k^2=0" }, { "math_id": 3, "text": "k_0" }, { "math_id": 4, "text": " \\epsilon_\\mu " }, { "math_id": 5, "text": "\\mu" }, { "math_id": 6, "text": "\\vec{k}" }, { "math_id": 7, "text": "\\langle\\vec{k}_a;\\epsilon_\\mu|\\vec{k}_b;\\epsilon_\\nu\\rangle=(-\\eta_{\\mu\\nu})\\,2|\\vec{k}_a|\\,\\delta(\\vec{k}_a-\\vec{k}_b)" }, { "math_id": 8, "text": "2|\\vec{k}_a|" }, { "math_id": 9, "text": "k\\cdot \\epsilon=0" }, { "math_id": 10, "text": "A" }, { "math_id": 11, "text": "\\partial^\\mu \\partial_\\mu A=0" }, { "math_id": 12, "text": "\\langle\\chi|\\partial^\\mu A_\\mu|\\psi\\rangle=0" }, { "math_id": 13, "text": "|\\chi\\rangle" }, { "math_id": 14, "text": "|\\psi\\rangle" }, { "math_id": 15, "text": "\\partial^\\mu A_\\mu=0" }, { "math_id": 16, "text": "\\langle\\chi|O|\\psi\\rangle" } ]
https://en.wikipedia.org/wiki?curid=1454093
14540944
2-methylcitrate synthase
Class of enzymes In enzymology, a 2-methylcitrate synthase (EC 2.3.3.5) is an enzyme that catalyzes the chemical reaction propanoyl-CoA + H2O + oxaloacetate formula_0 (2R,3S)-2-hydroxybutane-1,2,3-tricarboxylate + CoA The 3 substrates of this enzyme are propanoyl-CoA, H2O, and oxaloacetate, whereas its two products are (2R,3S)-2-hydroxybutane-1,2,3-tricarboxylate and CoA. This enzyme belongs to the family of transferases, specifically those acyltransferases that convert acyl groups into alkyl groups on transfer. The systematic name of this enzyme class is propanoyl-CoA:oxaloacetate C-propanoyltransferase (thioester-hydrolysing, 1-carboxyethyl-forming). Other names in common use include 2-methylcitrate oxaloacetate-lyase, MCS, methylcitrate synthase, and methylcitrate synthetase. This enzyme participates in propanoate metabolism. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "\\rightleftharpoons" } ]
https://en.wikipedia.org/wiki?curid=14540944
14540966
3,4-dichloroaniline N-malonyltransferase
Class of enzymes In enzymology, a 3,4-dichloroaniline N-malonyltransferase (EC 2.3.1.114) is an enzyme that catalyzes the chemical reaction malonyl-CoA + 3,4-dichloroaniline formula_0 CoA + N-(3,4-dichlorophenyl)-malonamate Thus, the two substrates of this enzyme are malonyl-CoA and 3,4-dichloroaniline, whereas its two products are CoA and N-(3,4-dichlorophenyl)-malonamate. This enzyme belongs to the family of transferases, specifically those acyltransferases transferring groups other than aminoacyl groups. The systematic name of this enzyme class is malonyl-CoA:3,4-dichloroaniline N-malonyltransferase. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "\\rightleftharpoons" } ]
https://en.wikipedia.org/wiki?curid=14540966
14540983
3-ethylmalate synthase
Class of enzymes In enzymology, a 3-ethylmalate synthase (EC 2.3.3.7) is an enzyme that catalyzes the chemical reaction butanoyl-CoA + H2O + glyoxylate formula_0 3-ethylmalate + CoA The 3 substrates of this enzyme are butanoyl-CoA, H2O, and glyoxylate, whereas its two products are 3-ethylmalate and CoA. This enzyme belongs to the family of transferases, specifically those acyltransferases that convert acyl groups into alkyl groups on transfer. The systematic name of this enzyme class is butanoyl-CoA:glyoxylate C-butanoyltransferase (thioester-hydrolysing, 1-carboxypropyl-forming). Other names in common use include 2-ethyl-3-hydroxybutanedioate synthase, and 3-ethylmalate glyoxylate-lyase (CoA-butanoylating). This enzyme participates in glyoxylate and dicarboxylate metabolism. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "\\rightleftharpoons" } ]
https://en.wikipedia.org/wiki?curid=14540983
14541004
3-oxoadipyl-CoA thiolase
Class of enzymes In enzymology, a 3-oxoadipyl-CoA thiolase (EC 2.3.1.174) is an enzyme that catalyzes the chemical reaction succinyl-CoA + acetyl-CoA formula_0 CoA + 3-oxoadipyl-CoA Thus, the two substrates of this enzyme are succinyl-CoA and acetyl-CoA, whereas its two products are CoA and 3-oxoadipyl-CoA. This enzyme belongs to the family of transferases, specifically those acyltransferases transferring groups other than aminoacyl groups. The systematic name of this enzyme class is succinyl-CoA:acetyl-CoA C-succinyltransferase. This enzyme participates in benzoate degradation via hydroxylation. 3-Oxoadipyl-CoA thiolase belongs to the thiolase family of enzymes. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "\\rightleftharpoons" } ]
https://en.wikipedia.org/wiki?curid=14541004
14541020
3-propylmalate synthase
Class of enzymes In enzymology, a 3-propylmalate synthase (EC 2.3.3.12) is an enzyme that catalyzes the chemical reaction pentanoyl-CoA + H2O + glyoxylate formula_0 3-propylmalate + CoA The 3 substrates of this enzyme are pentanoyl-CoA, H2O, and glyoxylate, whereas its two products are 3-propylmalate and CoA. This enzyme belongs to the family of transferases, specifically those acyltransferases that convert acyl groups into alkyl groups on transfer. The systematic name of this enzyme class is pentanoyl-CoA:glyoxylate C-pentanoyltransferase (thioester-hydrolysing, 1-carboxybutyl-forming). Other names in common use include 3-(n-propyl)-malate synthase, 3-propylmalate glyoxylate-lyase (CoA-pentanoylating), beta-n-propylmalate synthase, and n-propylmalate synthase. This enzyme participates in glyoxylate and dicarboxylate metabolism. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "\\rightleftharpoons" } ]
https://en.wikipedia.org/wiki?curid=14541020
14541037
6'-Deoxychalcone synthase
Class of enzymes In enzymology, a 6'-deoxychalcone synthase (EC 2.3.1.170) is an enzyme that catalyzes the chemical reaction 3 malonyl-CoA + 4-coumaroyl-CoA + NADPH + H+ formula_0 4 CoA + isoliquiritigenin + 3 CO2 + NADP+ + H2O The 4 substrates of this enzyme are malonyl-CoA, 4-coumaroyl-CoA, NADPH, and H+, whereas its 5 products are CoA, isoliquiritigenin, CO2, NADP+, and H2O. Deoxychalcone synthase catalyzed activity is involved in the biosynthesis of retrochalcone and certain phytoalexins in the cells of "Glycyrrhiza echinata" (Russian licorice) and other leguminous plants. This enzyme belongs to the family of transferases, to be specific those acyltransferases transferring groups other than aminoacyl groups. The systematic name of this enzyme class is malonyl-CoA:4-coumaroyl-CoA malonyltransferase (cyclizing, reducing). References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "\\rightleftharpoons" } ]
https://en.wikipedia.org/wiki?curid=14541037
14541049
6-methylsalicylic-acid synthase
Class of enzymes In enzymology, a 6-methylsalicylic-acid synthase (EC 2.3.1.165) is a polyketide synthase that catalyzes the chemical reaction acetyl-CoA + 3 malonyl-CoA + NADPH + H+ formula_0 6-methylsalicylate + 4 CoA + 3 CO2 + NADP+ + H2O The 4 substrates of this enzyme are acetyl-CoA, malonyl-CoA, NADPH, and H+, whereas its 5 products are 6-methylsalicylate, CoA, CO2, NADP+, and H2O. This enzyme belongs to the family of transferases, specifically those acyltransferases transferring groups other than aminoacyl groups. The systematic name of this enzyme class is acyl-CoA:malonyl-CoA C-acyltransferase (decarboxylating, oxoacyl-reducing, thioester-hydrolysing and cyclizing). Other names in common use include MSAS, and 6-methylsalicylic acid synthase. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "\\rightleftharpoons" } ]
https://en.wikipedia.org/wiki?curid=14541049
14541065
8-amino-7-oxononanoate synthase
Class of enzymes In enzymology, a 8-amino-7-oxononanoate synthase (EC 2.3.1.47) is an enzyme that catalyzes the chemical reaction 6-carboxyhexanoyl-CoA + L-alanine formula_0 8-amino-7-oxononanoate + CoA + CO2 Thus, the two substrates of this enzyme are 6-carboxyhexanoyl-CoA and L-alanine, whereas its 3 products are 8-amino-7-oxononanoate, CoA, and CO2. This enzyme participates in biotin metabolism. It employs one cofactor, pyridoxal phosphate. Nomenclature. This enzyme belongs to the family of transferases, specifically those acyltransferases transferring groups other than aminoacyl groups. The systematic name of this enzyme class is 6-carboxyhexanoyl-CoA:L-alanine C-carboxyhexanoyltransferase (decarboxylating). Other names in common use include 7-keto-8-aminopelargonic acid synthetase, 7-keto-8-aminopelargonic synthetase, and 8-amino-7-oxopelargonate synthase. References. &lt;templatestyles src="Reflist/styles.css" /&gt; Further reading. &lt;templatestyles src="Refbegin/styles.css" /&gt;
[ { "math_id": 0, "text": "\\rightleftharpoons" } ]
https://en.wikipedia.org/wiki?curid=14541065
14541081
Acetyl-CoA C-acetyltransferase
Class of enzymes In enzymology, an acetyl-CoA C-acetyltransferase (EC 2.3.1.9) is an enzyme that catalyzes the chemical reaction 2 acetyl-CoA formula_0 CoA + acetoacetyl-CoA Hence, this enzyme has one substrate, acetyl-CoA, and two products, CoA and acetoacetyl-CoA. Acetyl-CoA C-acetyltransferase belongs to the thiolase family of enzymes. This enzyme belongs to the family of transferases, specifically those acyltransferases transferring groups other than aminoacyl groups. The systematic name of this enzyme class is acetyl-CoA:acetyl-CoA C-acetyltransferase. Other names in common use include acetoacetyl-CoA thiolase, beta-acetoacetyl coenzyme A thiolase, 2-methylacetoacetyl-CoA thiolase [misleading], 3-oxothiolase, acetyl coenzyme A thiolase, acetyl-CoA acetyltransferase, acetyl-CoA:N-acetyltransferase, and thiolase II. This enzyme participates in 10 metabolic pathways: fatty acid metabolism, synthesis and degradation of ketone bodies, valine, leucine and isoleucine degradation, lysine degradation, tryptophan metabolism, pyruvate metabolism, benzoate degradation via coa ligation, propanoate metabolism, butanoate metabolism, and two-component system - general. Isozymes. Human genes encoding acetyl-CoA C-acetyltransferases include: References. &lt;templatestyles src="Reflist/styles.css" /&gt; Further reading. &lt;templatestyles src="Refbegin/styles.css" /&gt;
[ { "math_id": 0, "text": "\\rightleftharpoons" } ]
https://en.wikipedia.org/wiki?curid=14541081
14541098
Acetyl-CoA C-myristoyltransferase
Class of enzymes In enzymology, an acetyl-CoA C-myristoyltransferase (EC 2.3.1.155) is an enzyme that catalyzes the chemical reaction myristoyl-CoA + acetyl-CoA formula_0 3-oxopalmitoyl-CoA + CoA Thus, the two substrates of this enzyme are myristoyl-CoA and acetyl-CoA, whereas its two products are 3-oxopalmitoyl-CoA and CoA. This enzyme belongs to the family of transferases, specifically those acyltransferases transferring groups other than aminoacyl groups. The systematic name of this enzyme class is myristoyl-CoA:acetyl-CoA C-myristoyltransferase. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "\\rightleftharpoons" } ]
https://en.wikipedia.org/wiki?curid=14541098
14541112
Acridone synthase
Class of enzymes In enzymology, an acridone synthase (EC 2.3.1.159) is an enzyme that catalyzes the chemical reaction 3 malonyl-CoA + N-methylanthraniloyl-CoA formula_0 4 CoA + 1,3-dihydroxy-N-methylacridone + 3 CO2 Thus, the two substrates of this enzyme are malonyl-CoA and N-methylanthraniloyl-CoA, whereas its 3 products are CoA, 1,3-dihydroxy-N-methylacridone, and CO2. This enzyme belongs to the family of transferases, specifically those acyltransferases transferring groups other than aminoacyl groups. The systematic name of this enzyme class is malonyl-CoA:N-methylanthraniloyl-CoA malonyltransferase (cyclizing). This enzyme participates in acridone alkaloid biosynthesis. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "\\rightleftharpoons" } ]
https://en.wikipedia.org/wiki?curid=14541112
14541127
Acyl-(acyl-carrier-protein)—phospholipid O-acyltransferase
Class of enzymes In enzymology, an acyl-[acyl-carrier-protein]-phospholipid O-acyltransferase (EC 2.3.1.40) is an enzyme that catalyzes the chemical reaction acyl-[acyl-carrier protein] + O-(2-acyl-sn-glycero-3-phospho)ethanolamine formula_0 [acyl-carrier protein] + O-(1,2-diacyl-sn-glycero-3-phospho)ethanolamine Thus, the two substrates of this enzyme are acyl-acyl-carrier protein and O-(2-acyl-sn-glycero-3-phospho)ethanolamine, whereas its two products are acyl-carrier protein and O-(1,2-diacyl-sn-glycero-3-phospho)ethanolamine. This enzyme belongs to the family of transferases, specifically those acyltransferases transferring groups other than aminoacyl groups. The systematic name of this enzyme class is acyl-[acyl-carrier protein]:O-(2-acyl-sn-glycero-3-phospho)ethanolamine O-acyltransferase. Another name in common use is acyl-[acyl-carrier protein]:O-(2-acyl-sn-glycero-3-phospho)-ethanolamine O-acyltransferase. This enzyme participates in glycerophospholipid metabolism. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "\\rightleftharpoons" } ]
https://en.wikipedia.org/wiki?curid=14541127
14541282
Acyl-(acyl-carrier-protein)—UDP-N-acetylglucosamine O-acyltransferase
Enzyme In enzymology, an acyl-[acyl-carrier-protein]-UDP-N-acetylglucosamine O-acyltransferase (EC 2.3.1.129) is an enzyme that catalyzes the chemical reaction (R)-3-hydroxytetradecanoyl-[acyl-carrier-protein] + UDP-N-acetylglucosamine formula_0 [acyl-carrier-protein] + UDP-3-O-(3-hydroxytetradecanoyl)-N-acetylglucosamine Thus, the two substrates of this enzyme are (R)-3-hydroxytetradecanoyl-acyl-carrier-protein and UDP-N-acetylglucosamine, whereas its two products are acyl-carrier-protein and UDP-3-O-(3-hydroxytetradecanoyl)-N-acetylglucosamine. This enzyme belongs to the family of transferases, specifically those acyltransferases transferring groups other than aminoacyl groups. The systematic name of this enzyme class is (R)-3-hydroxytetradecanoyl-[acyl-carrier-protein]:UDP-N-acetylglucosamine 3-O-(3-hydroxytetradecanoyl) transferase. Other names in common use include UDP-N-acetylglucosamine acyltransferase and uridine diphosphoacetylglucosamine acyltransferase. This enzyme participates in lipopolysaccharide biosynthesis. Structural studies. As of late 2007, 7 structures have been solved for this class of enzymes, with PDB accession codes 1J2Z, 1LXA, 2AQ9, 2JF2, 2JF3, 2QIA, and 2QIV. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "\\rightleftharpoons" } ]
https://en.wikipedia.org/wiki?curid=14541282
14541295
(acyl-carrier-protein) S-acetyltransferase
Class of enzymes In enzymology, a [acyl-carrier-protein] S-acetyltransferase (EC 2.3.1.38) is an enzyme that catalyzes the reversible chemical reaction acetyl-CoA + [acyl-carrier-protein] formula_0 CoA + acetyl-[acyl-carrier-protein] Thus, the two substrates of this enzyme are acetyl-CoA and acyl carrier protein, whereas its two products are CoA and acetyl-acyl-carrier-protein. This enzyme belongs to the family of transferases, specifically those acyltransferases transferring groups other than aminoacyl groups. The systematic name of this enzyme class is acetyl-CoA:[acyl-carrier-protein] S-acetyltransferase. Other names in common use include acetyl coenzyme A-acyl-carrier-protein transacylase, acetyl-CoA:ACP transacylase, [acyl-carrier-protein]acetyltransferase, [ACP]acetyltransferase, and ACAT. This enzyme participates in fatty acid biosynthesis. Structural studies. As of late 2007, only one structure has been solved for this class of enzymes, with the PDB accession code 2PFF. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "\\rightleftharpoons" } ]
https://en.wikipedia.org/wiki?curid=14541295
1454130
Slashed zero
Glyph variant of numeral 0 (zero) with slash The dotted or slashed zero 0̷ is a representation of the Arabic digit "0" (zero) with a slash or a dot through it. This variant zero glyph is often used to distinguish the digit "zero" ("0") from the Latin script letter "O" anywhere that the distinction needs emphasis, particularly in encoding systems, scientific and engineering applications, computer programming (such as software development), and telecommunications. It thus helps to differentiate characters that would otherwise be homoglyphs. It was commonly used during the punch card era, when programs were typically written out by hand, to avoid ambiguity when the character was later typed on a card punch. Usage. The slashed zero is used in a number of fields in order to avoid confusion with the letter "O". It is used by computer programmers, in recording amateur radio call signs and in military radio, as logs of such contacts tend to contain both letters and numerals. The slashed zero was used on teleprinter circuits for weather applications. In this usage it was sometimes called communications zero. The slashed zero can be used in stoichiometry to avoid confusion with the symbol for oxygen (capital O). The slashed zero is also used in charting and documenting in the medical and healthcare fields to avoid confusion with the letter "O". It also denotes an absence of something (similar to the usage of an "empty set" character), such as a sign or a symptom. Slashed zeros are used on New Zealand number plates. History. The slashed zero predates computers, and is known to have been used in the twelfth and thirteenth centuries. In the days of the typewriter, there was no key for the slashed zero. Typists could generate it by first typing either an uppercase "O" or a zero and then backspace, followed by typing the slash key. The result would look very much like a slashed zero. It is used in many Baudot teleprinter applications, specifically the keytop and typepallet that combines "P" and slashed zero. Additionally, the slashed zero is used in many ASCII graphic sets descended from the default typewheel on the Teletype Model 33. The use of the slashed zero by many computer systems of the 1970s and 1980s inspired the 1980s space rock band Underground Zerø to use a heavy metal umlaut Scandinavian vowel "ø" in the band's name and as the band logo on all their album covers (see link below). Along with the Westminster, MICR, and OCR-A fonts, the slashed zero became one of the things associated with hacker culture in the 1980s. Some cartoons depicted computer users talking in binary code with 1s and 0s using a "slashed zero" for the 0. Slashed zeroes have been used in the Flash-based artwork of Young-Hae Chang Heavy Industries, notably in their 2003 work, "Operation Nukorea". The reason for their use is unknown, but has been conjectured to be related to themes of "negation, erasure, and absence". Similar symbols. The "slashed zero" has the disadvantage that it can be confused with several other symbols. See the disambiguation page for the symbol Ø for a comprehensive listing: Ø (disambiguation). Representation in Unicode and HTML. In Unicode, slashed zero is considered a typographic variation of the Arabic digit zero 0, which is code point U+0030. Since nearly all software requires each base-10 digit to have only a single, unique semantic representation, Unicode defines no code point (other than U+0030) for altering the visual appearance of zero. 
This means that the slashed zero glyph is displayed for U+0030 only—"and then always"—when a font whose designer chose the option is active. Successful display on a particular local system depends on making sure that such a font is available there, either via the system's font files or via font embedding, and also ensuring it is selected. As an explicit visual representation, Unicode supports the slashed zero only indirectly, not as a single-character code point, but as two characters paired in a combining sequence; see Combining solidus below. Unicode 9.0 introduced another method of requesting a short diagonally stroked form: adding Variation Selector 1 (U+FE00) after the zero, which in a supporting renderer produces 0︀. Typography. In most typographic designs, the slash of a slashed zero does not extend past the ellipse. Compare this to the Scandinavian vowel "Ø", the "empty set" symbol "∅" and the diameter symbol ⌀. A convention common on early line printers left zero unornamented but added a tail or hook to the letter-O so that it resembled an inverted Q (like U+213A ℺) or cursive capital letter-O (formula_0). In the Fixedsys typeface, the numeral 0 has two internal barbs along the lines of the slash. This appears much like a white "S" within the black borders of the zero. In the FE-Schrift typeface, used on German car license plates, the zero is rectangular and has an "insinuated" slash: a diagonal crack just beneath the top right curve. Typefaces. Typefaces commonly found on personal computers that use the slashed zero include: Dotted zero typefaces: Variations. Dotted zero. The zero with a dot in the center seems to have originated as an option on IBM 3270 display controllers. The dotted zero may appear similar to the Greek letter theta (particularly capital theta, Θ), but the two have different glyphs. In raster fonts, the theta usually has a horizontal line connecting, or nearly touching, the sides of an O, while the dotted zero simply has a dot in the middle. However, on a low-definition display, such a form can be confused with a numeral 8. In some fonts the IPA letter for a bilabial click (ʘ) looks similar to the dotted zero. Alternatively, the dot can become a vertical trace, for example by adding a "combining short vertical line overlay" codice_0. It may be coded as codice_1 giving 0⃓. The dotted zero has been used on Slovak license plates since 2023. Slashed letter 'O'. IBM (and a few other early mainframe makers) used a convention in which the letter O had a slash and the digit 0 did not. This is even more problematic for Danes, Faroese, and Norwegians because it means two of their letters—the O and slashed O (Ø)—are visually similar. This was later flipped, and most mainframe chain or band printers used the opposite convention (letter O printed as is, and digit zero printed with a slash Ø). This was the de facto standard from the 1970s to the 1990s. However, the current use of network laser printers that use PC-style fonts caused the demise of the slashed zero in most companies; only a few configured their laser printers to use Ø. Combining solidus. Unicode supports combining characters, which overlay the preceding character to create a composite glyph. This can be used to obtain a crude typographic approximation where the slash is contained within the zero. It is treated literally as "a zero that is slashed", and it is coded as two characters: a standard zero 0 followed by either "combining short solidus overlay" U+0337 or "combining long solidus overlay" U+0338.
For example, a zero followed by the "combining long solidus overlay" U+0338, which may be written in HTML as the numeric character reference &amp;#824;, appears as 0̸. Using the "combining short solidus overlay" U+0337 (&amp;#823; in HTML) after a standard zero character produces the following: 0̷. Reversed slash. Some Burroughs/Unisys equipment displays a zero with a "reversed" slash, similar to the no symbol, 🛇. Footnotes. &lt;templatestyles src="Reflist/styles.css" /&gt; References. &lt;templatestyles src="Reflist/styles.css" /&gt;
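The combining sequences and the variation-selector form described above can be constructed directly from their code points; the short Python sketch below (an illustration, not part of the original article) builds each two-code-point string and prints the underlying code points. Whether the result renders as a single slashed or dotted glyph depends on the font and renderer.

# Each string is two Unicode code points that a supporting renderer draws as one glyph.
zero_long_solidus  = "0" + "\u0338"  # 0 + COMBINING LONG SOLIDUS OVERLAY
zero_short_solidus = "0" + "\u0337"  # 0 + COMBINING SHORT SOLIDUS OVERLAY
zero_variation_1   = "0" + "\uFE00"  # 0 + VARIATION SELECTOR-1 (short diagonal stroke form)

for label, s in [("long solidus", zero_long_solidus),
                 ("short solidus", zero_short_solidus),
                 ("variation selector", zero_variation_1)]:
    print(label, s, [f"U+{ord(c):04X}" for c in s])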
[ { "math_id": 0, "text": "\\,\\mathcal{O} \\," } ]
https://en.wikipedia.org/wiki?curid=1454130
14541328
Agaritine gamma-glutamyltransferase
In enzymology, an agaritine gamma-glutamyltransferase (EC 2.3.2.9) is an enzyme that catalyzes the chemical reaction agaritine + acceptor formula_0 4-hydroxymethylphenylhydrazine + gamma-L-glutamyl-acceptor Thus, the two substrates of this enzyme are agaritine and acceptor, whereas its two products are 4-hydroxymethylphenylhydrazine and gamma-L-glutamyl-acceptor. This enzyme belongs to the family of transferases, specifically the aminoacyltransferases. The systematic name of this enzyme class is (gamma-L-glutamyl)-N1-(4-hydroxymethylphenyl)hydrazine:acceptor gamma-glutamyltransferase. Other names in common use include (gamma-L-glutamyl)-N1-(4-hydroxymethylphenyl)hydrazine:(acceptor) gamma-glutamyltransferase, (gamma-L-glutamyl)-1-N-(4-hydroxymethylphenyl)hydrazine:(acceptor) gamma-glutamyltransferase, and (gamma-L-glutamyl)-1-N-(4-hydroxymethylphenyl)hydrazine:acceptor gamma-glutamyltransferase. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "\\rightleftharpoons" } ]
https://en.wikipedia.org/wiki?curid=14541328
14541342
Agmatine N4-coumaroyltransferase
In enzymology, an agmatine N4-coumaroyltransferase (EC 2.3.1.64) is an enzyme that catalyzes the chemical reaction 4-coumaroyl-CoA + agmatine formula_0 CoA + N-(4-guanidinobutyl)-4-hydroxycinnamamide Thus, the two substrates of this enzyme are 4-coumaroyl-CoA and agmatine, whereas its two products are CoA and N-(4-guanidinobutyl)-4-hydroxycinnamamide. This enzyme belongs to the family of transferases, to be specific those acyltransferases transferring groups other than aminoacyl groups. The systematic name of this enzyme class is 4-coumaroyl-CoA:agmatine N4-coumaroyltransferase. Other names in common use include p-coumaroyl-CoA-agmatine N-p-coumaroyltransferase, agmatine coumaroyltransferase, and 4-coumaroyl-CoA:agmatine 4-N-coumaroyltransferase. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "\\rightleftharpoons" } ]
https://en.wikipedia.org/wiki?curid=14541342
14541356
Alanylphosphatidylglycerol synthase
Class of enzymes In enzymology, an alanylphosphatidylglycerol synthase (EC 2.3.2.11) is an enzyme that catalyzes the chemical reaction L-alanyl-tRNA + phosphatidylglycerol formula_0 tRNA + 3-O-L-alanyl-1-O-phosphatidylglycerol Thus, the two substrates of this enzyme are L-alanyl-tRNA and phosphatidylglycerol, whereas its two products are tRNA and 3-O-L-alanyl-1-O-phosphatidylglycerol. This enzyme belongs to the family of transferases, specifically the aminoacyltransferases. The systematic name of this enzyme class is L-alanyl-tRNA:phosphatidylglycerol alanyltransferase. Other names in common use include O-alanylphosphatidylglycerol synthase, and alanyl phosphatidylglycerol synthetase. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "\\rightleftharpoons" } ]
https://en.wikipedia.org/wiki?curid=14541356
14541372
Alcohol O-acetyltransferase
In enzymology, an alcohol O-acetyltransferase (EC 2.3.1.84) is an enzyme that catalyzes the chemical reaction acetyl-CoA + an alcohol formula_0 CoA + an acetyl ester Thus, the two substrates of this enzyme are acetyl-CoA and alcohol, whereas its two products are CoA and an acetyl ester. This enzyme belongs to the family of transferases, specifically those acyltransferases transferring groups other than aminoacyl groups. The systematic name of this enzyme class is acetyl-CoA:alcohol O-acetyltransferase. This enzyme is also called alcohol acetyltransferase. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "\\rightleftharpoons" } ]
https://en.wikipedia.org/wiki?curid=14541372
14541384
Alcohol O-cinnamoyltransferase
In enzymology, an alcohol O-cinnamoyltransferase (EC 2.3.1.152) is an enzyme that catalyzes the chemical reaction 1-O-trans-cinnamoyl-beta-D-glucopyranose + ROH formula_0 alkyl cinnamate + glucose Thus, the two substrates of this enzyme are 1-O-trans-cinnamoyl-beta-D-glucopyranose and an alkanol (ROH), whereas its two products are alkyl cinnamate and glucose. This enzyme belongs to the family of transferases, specifically those acyltransferases transferring groups other than aminoacyl groups. The systematic name of this enzyme class is 1-O-trans-cinnamoyl-beta-D-glucopyranose:alcohol O-cinnamoyltransferase. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "\\rightleftharpoons" } ]
https://en.wikipedia.org/wiki?curid=14541384
14541399
Alkylglycerophosphate 2-O-acetyltransferase
In enzymology, an alkylglycerophosphate 2-O-acetyltransferase (EC 2.3.1.105) is an enzyme that catalyzes the chemical reaction acetyl-CoA + 1-alkyl-sn-glycero-3-phosphate formula_0 CoA + 1-alkyl-2-acetyl-sn-glycero-3-phosphate Thus, the two substrates of this enzyme are acetyl-CoA and 1-alkyl-sn-glycero-3-phosphate, whereas its two products are CoA and 1-alkyl-2-acetyl-sn-glycero-3-phosphate. This enzyme belongs to the family of transferases, specifically those acyltransferases transferring groups other than aminoacyl groups. The systematic name of this enzyme class is acetyl-CoA:1-alkyl-sn-glycero-3-phosphate 2-O-acetyltransferase. This enzyme is also called alkyllyso-GP:acetyl-CoA acetyltransferase. This enzyme participates in ether lipid metabolism. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "\\rightleftharpoons" } ]
https://en.wikipedia.org/wiki?curid=14541399
14541420
Alpha-tubulin N-acetyltransferase
In enzymology, an alpha-tubulin N-acetyltransferase (EC 2.3.1.108) is an enzyme which is encoded by the ATAT1 gene. This enzyme belongs to the family of transferases, specifically those acyltransferases transferring groups other than aminoacyl groups. The systematic name of this enzyme class is acetyl-CoA:[alpha-tubulin]-L-lysine N6-acetyltransferase. Other names in common use include alpha-tubulin acetylase, αTAT, ATAT1, TAT, alpha-TAT, alpha-tubulin acetyltransferase, tubulin N-acetyltransferase, acetyl-CoA:alpha-tubulin-L-lysine N-acetyltransferase, and acetyl-CoA:[alpha-tubulin]-L-lysine 6-N-acetyltransferase. Structure. Primary. The protein is 421 amino acids long; of these, glutamine 58 (Gln, Q58) is crucial for catalytic activity. Secondary. ATAT1 has 8 α-helices, 10 β-strands and one turn. However, only half of the protein has a defined secondary conformation; the rest of the protein is intrinsically disordered. Domains. ATAT1 is not a modular protein: it has a single domain, spanning residues 1 to 190. Regions. Two regions of ATAT1 (residues 124-137 and 160-269) are particularly important, because they contain the binding points for acetyl-CoA. Recently, studies describing the crystal structure of ATAT1 suggest that residues 196 to 236 of human ATAT1 (where acetylated lysines K210 and K221 are located) are disordered and do not contribute significantly to catalytic activity. In contrast, acetylated residues K56 and K146 are both within the catalytic domain (α1 and α3 helices, respectively) and close to the acetyl-CoA binding site, which suggests that these residues might act as an intermediate for the transfer of the acetyl group. However, further structural data with autoacetylation mutants are needed to fully understand this mechanism and to test the possibility of conformational changes caused by ATAT1 autoacetylation. Active site. ATAT1 contains a conserved surface pocket close to the active site composed largely of hydrophobic and basic residues, which likely complement the acidic loop containing α-tubulin K40. The protein's active site contains several conserved residues that could potentially function as general bases in the reaction: glutamine 58 (Q58), cysteine 120 (C120), and aspartic acid 157 (D157). Isoforms. ATAT1 has seven different isoforms produced by alternative splicing, a process in which exons are combined in different ways during transcript processing, so that a single gene can give rise to more than one messenger RNA. The different isoforms are: Isoform 1 is known as the canonical sequence; the changes in the other isoforms are described relative to this sequence of amino acids. Isoform 2 differs from isoform 1 in that amino acids 1-12 are missing and the sequence from the 13th to the 36th amino acid is changed to MWLTWPFCFLTITLREEGVCHLES. Isoform 3 is quite similar to the canonical sequence; the only difference is that the sequence of amino acids at positions 195-218 (RPPAPSLRATRHSRAAAVDPTPAA) is replaced by a proline (P). Isoform 4 differs from the canonical sequence in that amino acids 323-333 of the canonical chain (RGTPPGLVAQS) are replaced by a different sequence (SSLPRSEESRY); additionally, amino acids 334-421 are missing. Isoform 5 differs from the canonical sequence in that amino acids 324-421 are missing.
Molecular function. Microtubules are highly dynamic tubular polymers assembled from protofilaments of α/β-tubulin dimers, and are essential for intracellular transport, architectural organization, cell division, cellular morphogenesis and force production in eukaryotic cells. There is a constant modulation of the balance between dynamic, short-lived and stable, long-lived microtubule subpopulations in the cell. Although microtubules usually function as dynamic polymers, for some specific functions they require more stability, and acetylation is used by the cell as a marker for these stable microtubules. ATAT1 specifically acetylates Lys-40 of alpha-tubulin on the luminal side of microtubules. This is the only known post-translational modification in the microtubule lumen, but it is still unknown how the enzyme accesses the lumen. The two substrates for this enzyme are acetyl-CoA and α-tubulin-L-lysine. Despite its similarity to other acetylating enzymes, it catalyses exclusively the tubulin acetylation reaction. Catalysis occurs when the acetyl-CoA molecule attached to the enzyme transfers its acetyl group to the lysine. This is the reaction catalyzed by ATAT1: Acetyl-CoA + [alpha-tubulin]-L-lysine formula_0 CoA + [alpha-tubulin]-N6-acetyl-L-lysine Several experiments have concluded that acetylation is more efficient on microtubule substrates than on free α/β-tubulin dimers, because once ATAT1 is inside the microtubule lumen it diffuses freely and encounters a high effective substrate concentration. Biological functions. Formation of the hippocampus. ATAT1 has an important role in the formation of the hippocampus: mice lacking ATAT1 show deficient tubulin acetylation and a bulge in the dentate gyrus. Response to stress and signaling pathways. Tubulin acetylation by ATAT1 has been shown to be elevated by exposure of the cell to UV irradiation, as well as to chemicals such as H2O2 or NaCl. Tubulin acetylation is one of the signaling pathways for Na+- and K+-ATPase activity. Traction force microscopy experiments have shown that ATAT1 depletion results in lower traction force production on 40 kPa substrates. In contrast, overexpression of GFP-ATAT1 increased traction energies and forces, and also rescued the effect observed on ATAT1 knockdown when astrocytes were plated on 40 kPa substrates. Autophagy. Tubulin acetylation is also involved in regulating autophagy; it is required for fusion of autophagosomes with lysosomes. Under nutrient deprivation, starvation-induced tubulin hyperacetylation is required for autophagy activation. This pathway is activated when the cell is under stress. Neuronal migration and maturation. α-tubulin is a target of the Elongator complex, and regulation of its acetylation underlies the maturation of cortical projection neurons. Sperm flagellar function. Acetylation of the microtubules is required for normal sperm flagellar function.
ATAT1 suppression in mice causes diminished sperm motility and male infertility. Cell migration. Stable microtubules are involved in cell migration processes and require acetylation, so the ATAT1 enzyme is important in cell migration. Embryo development. ATAT1 is important in embryo development in zebrafish, and some authors consider that it may also be critical in embryo development in mammals. Ciliogenesis. ATAT1 plays an important role in the formation of cilia. Whether ciliogenesis has an effect on the development of handedness in Homo sapiens is currently being studied. Moreover, alpha-tubulin N-acetyltransferase is also essential for the primary cilium to assemble with normal kinetics. Intracellular location and associated functions. Scientific background. In 2010, an α-tubulin N-acetyltransferase was found to exist not only in "Tetrahymena" and "Caenorhabditis elegans" but also in mammals. Additionally, two research groups generated ATAT1-knockout mice, which lacked acetylation in many tissues. However, the intracellular distribution of the enzyme was still unclear. Recent discoveries. To determine the intracellular location of α-tubulin N-acetyltransferase and some of its functions, a microscopy technique called immunohistochemistry was used; it distinguishes different molecules in a cell through the reaction of an antibody with a specific antigen (in this case, an anti-ATAT1 antibody). In this study, ATAT1 was observed in many tissues, and researchers were able to infer some of its functions. The study also revealed the intracellular distribution of ATAT1 in ciliated cells of several tissues. Location. ATAT1 is known to be located in the following tissues. Trachea. It is mainly located at the apical region of epithelial cells, but its function there remains unknown. Kidney. The immunopositive signal produced by the anti-ATAT1 antibody was observed in epithelial cells of the medullary collecting duct. Retina. The α-tubulin N-acetyltransferase is mainly located in photoreceptor cells. Moreover, ATAT1 is thought to be associated not only with the connecting cilia and the axonemes of the outer segment (OS), but also with the entire inner segment (IS) and the entire outer segment (OS). It might therefore play an important role in the intraciliary transport of signal proteins during light-sensing signaling in photoreceptor cells. Testis. In testis, the antibody was observed in spermatocytes and spermatids, but not in sperm. In spermatocytes, ATAT1 was also seen around the Golgi apparatus, which indicates that this protein might play an important role in spermatogenesis. Third ventricle. ATAT1 was also found in other tissues, such as the third ventricle of the brain, where its specific function is unknown; however, it is considered to play an important role in neuron development. Subcellular location. Alpha-tubulin N-acetyltransferase is located in several parts of the cell, such as the cytoskeleton, the cytoplasm, and clathrin-coated pits in the membrane. This distribution is closely related to one of its main functions, the catalysis of microtubule acetylation. Mutagenesis and mutations. ATAT1 can undergo mutagenesis, the process by which genetic mutations arise.
This may occur spontaneously or through the action of mutagens. The different outcomes of mutagenesis can be classified according to which of the 421 amino acids has been changed. If glutamine 58 (Q58) is substituted by alanine (A), acetyltransferase activity is lost. Substituting isoleucine 64 (I64) with alanine (A) strongly reduces acetyltransferase activity. Moreover, there is a series of mutations that reduce the protein's activity. These are: In some cases, this reduction of activity is even stronger, as in the following mutations: Some mutations lead to an increase of activity, such as: In other cases, mutation of the gene may cause a reduction in the acetylation of the microtubules, for example: Nevertheless, not every substitution of one amino acid by another has an effect on the activity of the protein; in some cases a mutation does not produce a significant variation in the catalytic activity of the protein. These are: Post-translational modifications. ATAT1 undergoes post-translational modifications, which are changes made to the protein after it has been translated by ribosomes. The amino acids generally affected by these modifications are at positions 46, 146, 233, 244, 272, 276 and 315. The main effect of these modifications is an increase in the acetylation of tubulin. Associated diseases. Knockout studies of the mouse enzyme have revealed possible new biological functions and, with them, some associated diseases. For example, abnormal levels of acetylation are closely linked to neurological disorders, cancer, heart disease and other illnesses. For some of these diseases, increasing ATAT1 activity is a possible treatment; for others, an inhibitor of this enzyme is needed to restore the correct level of acetylation. Neurological disorders. Pathologically, tubulin acetylation may be connected to several neurological disorders, such as: It is still being investigated whether these disorders are directly caused by abnormal levels of ATAT1-mediated acetylation. At present, axon injury appears to be the only associated condition that can be attributed to a decrease in acetylation by ATAT1. Cancer. An increase in tubulin acetylation by ATAT1 may play an important role in: Inflammation and immunity. There is also limited evidence that an increase in acetylation by α-tubulin N-acetyltransferase could facilitate the entry of viruses into the cell. References. &lt;templatestyles src="Reflist/styles.css" /&gt; Further reading. &lt;templatestyles src="Refbegin/styles.css" /&gt; &lt;templatestyles src="Refbegin/styles.css" /&gt; &lt;templatestyles src="Refbegin/styles.css" /&gt;
[ { "math_id": 0, "text": "\\rightleftharpoons" } ]
https://en.wikipedia.org/wiki?curid=14541420
14541436
Aminoglycoside N3'-acetyltransferase
In enzymology, an aminoglycoside N3'-acetyltransferase (EC 2.3.1.81) is an enzyme that catalyzes the chemical reaction acetyl-CoA + a 2-deoxystreptamine antibiotic formula_0 CoA + N3'-acetyl-2-deoxystreptamine antibiotic Thus, the two substrates of this enzyme are acetyl-CoA and 2-deoxystreptamine antibiotic, whereas its two products are CoA and N3'-acetyl-2-deoxystreptamine antibiotic. This enzyme belongs to the family of transferases, specifically those acyltransferases transferring groups other than aminoacyl groups. The systematic name of this enzyme class is acetyl-CoA:2-deoxystreptamine-antibiotic N3'-acetyltransferase. Other names in common use include 3'-aminoglycoside acetyltransferase, and 3-N-aminoglycoside acetyltransferase. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "\\rightleftharpoons" } ]
https://en.wikipedia.org/wiki?curid=14541436
14541451
Aminoglycoside N6'-acetyltransferase
In enzymology, an aminoglycoside N6'-acetyltransferase (EC 2.3.1.82) is an enzyme that catalyzes the chemical reaction acetyl-CoA + kanamycin-B formula_0 CoA + N6'-acetylkanamycin-B Thus, the two substrates of this enzyme are acetyl-CoA and kanamycin B, whereas its two products are CoA and N6'-acetylkanamycin-B. This enzyme belongs to the family of transferases, specifically those acyltransferases transferring groups other than aminoacyl groups. The systematic name of this enzyme class is acetyl-CoA:kanamycin-B N6'-acetyltransferase. Other names in common use include aminoglycoside 6'-N-acetyltransferase, aminoglycoside-6'-acetyltransferase, aminoglycoside-6-N-acetyltransferase, and kanamycin acetyltransferase. Structural studies. As of late 2007, 4 structures have been solved for this class of enzymes, with PDB accession codes 1S3Z, 1S5K, 1S60, and 2A4N.
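For readers who want to examine these models directly, the sketch below shows one way the cited entries could be retrieved. It is a hedged example rather than part of the article: it assumes the standard RCSB file-download URL pattern (files.rcsb.org/download/ID.pdb), which should be verified against current RCSB documentation before use.

```python
# Minimal sketch: fetch the PDB entries cited above from the RCSB archive.
# Assumes the conventional https://files.rcsb.org/download/<id>.pdb endpoint.
import urllib.request

PDB_IDS = ["1S3Z", "1S5K", "1S60", "2A4N"]

for pdb_id in PDB_IDS:
    url = f"https://files.rcsb.org/download/{pdb_id}.pdb"
    urllib.request.urlretrieve(url, f"{pdb_id}.pdb")  # saved to the working directory
    print(f"fetched {pdb_id}")
```

The downloaded coordinate files can then be opened in any structure viewer.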
References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "\\rightleftharpoons" } ]
https://en.wikipedia.org/wiki?curid=14541451
14541471
Anthocyanin 5-aromatic acyltransferase
In enzymology, an anthocyanin 5-aromatic acyltransferase (EC 2.3.1.153) is an enzyme that catalyzes the chemical reaction hydroxycinnamoyl-CoA + anthocyanidin-3,5-diglucoside formula_0 CoA + anthocyanidin 3-glucoside-5-hydroxycinnamoylglucoside Thus, the two substrates of this enzyme are hydroxycinnamoyl-CoA and anthocyanidin-3,5-diglucoside, whereas its two products are CoA and anthocyanidin 3-glucoside-5-hydroxycinnamoylglucoside. This enzyme belongs to the family of transferases, specifically those acyltransferases transferring groups other than aminoacyl groups. The systematic name of this enzyme class is hydroxycinnamoyl-CoA:anthocyanidin 3,5-diglucoside 5-O-glucoside-6"'-O-hydroxycinnamoyltransferase. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "\\rightleftharpoons" } ]
https://en.wikipedia.org/wiki?curid=14541471
14541490
Anthocyanin 5-O-glucoside 6'''-O-malonyltransferase
In enzymology, an anthocyanin 5-O-glucoside 6'''-O-malonyltransferase (EC 2.3.1.172) is an enzyme that catalyzes the chemical reaction malonyl-CoA + pelargonidin 3-O-(6-caffeoyl-beta-D-glucoside) 5-O-beta-D-glucoside formula_0 CoA + 4-demalonylsalvianin Thus, the two substrates of this enzyme are malonyl-CoA and pelargonidin 3-O-(6-caffeoyl-beta-D-glucoside) 5-O-beta-D-glucoside, whereas its two products are CoA and 4-demalonylsalvianin. This enzyme belongs to the family of transferases, specifically those acyltransferases transferring groups other than aminoacyl groups. The systematic name of this enzyme class is malonyl-CoA:pelargonidin-3-O-(6-caffeoyl-beta-D-glucoside)-5-O-beta-D-glucoside 6'''-O-malonyltransferase. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "\\rightleftharpoons" } ]
https://en.wikipedia.org/wiki?curid=14541490
14541511
Anthocyanin 6&quot;-O-malonyltransferase
In enzymology, an anthocyanin 6"-O-malonyltransferase (EC 2.3.1.171) is an enzyme that catalyzes the chemical reaction malonyl-CoA + an anthocyanidin 3-O-beta-D-glucoside formula_0 CoA + an anthocyanidin 3-O-(6-O-malonyl-beta-D-glucoside) Thus, the two substrates of this enzyme are malonyl-CoA and anthocyanidin 3-O-beta-D-glucoside, whereas its two products are CoA and anthocyanidin 3-O-(6-O-malonyl-beta-D-glucoside). This enzyme belongs to the family of transferases, specifically those acyltransferases transferring groups other than aminoacyl groups. The systematic name of this enzyme class is malonyl-CoA:anthocyanidin-3-O-beta-D-glucoside 6"-O-malonyltransferase. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "\\rightleftharpoons" } ]
https://en.wikipedia.org/wiki?curid=14541511
14541521
Anthranilate N-benzoyltransferase
In enzymology, an anthranilate N-benzoyltransferase (EC 2.3.1.144) is an enzyme that catalyzes the chemical reaction benzoyl-CoA + anthranilate formula_0 CoA + N-benzoylanthranilate Thus, the two substrates of this enzyme are benzoyl-CoA and anthranilate, whereas its two products are CoA and N-benzoylanthranilate. This enzyme belongs to the family of transferases, specifically those acyltransferases transferring groups other than aminoacyl groups. The systematic name of this enzyme class is benzoyl-CoA:anthranilate N-benzoyltransferase. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "\\rightleftharpoons" } ]
https://en.wikipedia.org/wiki?curid=14541521
14541541
Anthranilate N-malonyltransferase
In enzymology, an anthranilate N-malonyltransferase (EC 2.3.1.113) is an enzyme that catalyzes the chemical reaction malonyl-CoA + anthranilate formula_0 CoA + N-malonylanthranilate Thus, the two substrates of this enzyme are malonyl-CoA and anthranilate, whereas its two products are CoA and N-malonylanthranilate. This enzyme belongs to the family of transferases, specifically those acyltransferases transferring groups other than aminoacyl groups. The systematic name of this enzyme class is malonyl-CoA:anthranilate N-malonyltransferase. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "\\rightleftharpoons" } ]
https://en.wikipedia.org/wiki?curid=14541541
14541578
Arginine N-succinyltransferase
In enzymology, an arginine N-succinyltransferase (EC 2.3.1.109) is an enzyme that catalyzes the chemical reaction succinyl-CoA + L-arginine formula_0 CoA + N2-succinyl-L-arginine Thus, the two substrates of this enzyme are succinyl-CoA and L-arginine, whereas its two products are CoA and N2-succinyl-L-arginine. This enzyme belongs to the family of transferases, specifically those acyltransferases transferring groups other than aminoacyl groups. The systematic name of this enzyme class is succinyl-CoA:L-arginine N2-succinyltransferase. Other names in common use include arginine succinyltransferase, AstA, arginine and ornithine N2-succinyltransferase, AOST, AST, and succinyl-CoA:L-arginine 2-N-succinyltransferase. This enzyme participates in arginine and proline metabolism. Structural studies. As of late 2007, only one structure has been solved for this class of enzymes, with the PDB accession code 1YLE. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "\\rightleftharpoons" } ]
https://en.wikipedia.org/wiki?curid=14541578
14541594
Arginyltransferase
In enzymology, an arginyltransferase (EC 2.3.2.8) is an enzyme that catalyzes the chemical reaction L-arginyl-tRNA + protein formula_0 tRNA + L-arginyl-protein Thus, the two substrates of this enzyme are L-arginyl-tRNA and protein, whereas its two products are tRNA and L-arginyl-protein. This enzyme belongs to the family of transferases, specifically the aminoacyltransferases. The systematic name of this enzyme class is L-arginyl-tRNA:protein arginyltransferase. Other names in common use include arginine transferase, arginyl-transfer ribonucleate-protein aminoacyltransferase, arginyl-transfer ribonucleate-protein transferase, and arginyl-tRNA protein transferase. It has 2 cofactors: mercaptoethanol and a cation. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "\\rightleftharpoons" } ]
https://en.wikipedia.org/wiki?curid=14541594
14541606
Aromatic-hydroxylamine O-acetyltransferase
Class of enzymes In enzymology, an aromatic-hydroxylamine O-acetyltransferase (EC 2.3.1.56) is an enzyme that catalyzes the chemical reaction N-hydroxy-4-acetylaminobiphenyl + N-hydroxy-4-aminobiphenyl formula_0 N-hydroxy-4-aminobiphenyl + N-acetoxy-4-aminobiphenyl Thus, the two substrates of this enzyme are N-hydroxy-4-acetylaminobiphenyl and N-hydroxy-4-aminobiphenyl, whereas its two products are N-hydroxy-4-aminobiphenyl and N-acetoxy-4-aminobiphenyl. This enzyme belongs to the family of transferases, specifically those acyltransferases transferring groups other than aminoacyl groups. The systematic name of this enzyme class is N-hydroxy-4-acetylaminobiphenyl:N-hydroxy-4-aminobiphenyl O-acetyltransferase. Other names in common use include aromatic hydroxylamine acetyltransferase, arylhydroxamate acyltransferase, arylhydroxamate N,O-acetyltransferase, arylhydroxamic acid N,O-acetyltransferase, arylhydroxamic acyltransferase, N,O-acetyltransferase, and N-hydroxy-2-acetylaminofluorene N-O acyltransferase. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "\\rightleftharpoons" } ]
https://en.wikipedia.org/wiki?curid=14541606
14541624
Arylamine N-acetyltransferase
In enzymology, an arylamine N-acetyltransferase (EC 2.3.1.5) is an enzyme that catalyzes the chemical reaction acetyl-CoA + an arylamine formula_0 CoA + an N-acetylarylamine Thus, the two substrates of this enzyme are acetyl-CoA and arylamine, whereas its two products are CoA and N-acetylarylamine. This enzyme belongs to the family of transferases, specifically those acyltransferases transferring groups other than aminoacyl groups. The systematic name of this enzyme class is acetyl-CoA:arylamine N-acetyltransferase. Other names in common use include arylamine acetylase, beta-naphthylamine N-acetyltransferase, 4-aminobiphenyl N-acetyltransferase, acetyl CoA-arylamine N-acetyltransferase, 2-naphthylamine N-acetyltransferase, arylamine acetyltransferase, indoleamine N-acetyltransferase, N-acetyltransferase, p-aminosalicylate N-acetyltransferase, serotonin acetyltransferase, and serotonin N-acetyltransferase. Structural studies. As of late 2007, 7 structures have been solved for this class of enzymes, with PDB accession codes 1GX3, 1W5R, 1W6F, 2BSZ, 2IJA, 2PFR, and 2PQT. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "\\rightleftharpoons" } ]
https://en.wikipedia.org/wiki?curid=14541624
14541648
Aspartate N-acetyltransferase
In enzymology, an aspartate N-acetyltransferase (EC 2.3.1.17) is an enzyme that catalyzes the chemical reaction acetyl-CoA + L-aspartate formula_0 CoA + N-acetyl-L-aspartate Thus, the two substrates of this enzyme are acetyl-CoA and L-aspartate, whereas its two products are CoA and N-acetyl-L-aspartate. This enzyme belongs to the family of transferases, specifically those acyltransferases transferring groups other than aminoacyl groups. The systematic name of this enzyme class is acetyl-CoA:L-aspartate N-acetyltransferase. Other names in common use include aspartate acetyltransferase, and L-aspartate N-acetyltransferase. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "\\rightleftharpoons" } ]
https://en.wikipedia.org/wiki?curid=14541648
14541664
Aspartyltransferase
In enzymology, an aspartyltransferase (EC 2.3.2.7) is an enzyme that catalyzes the chemical reaction L-asparagine + hydroxylamine formula_0 NH3 + L-aspartylhydroxamate Thus, the two substrates of this enzyme are L-asparagine and hydroxylamine, whereas its two products are NH3 and L-aspartylhydroxamate. This enzyme belongs to the family of transferases, specifically the aminoacyltransferases. The systematic name of this enzyme class is L-asparagine:hydroxylamine gamma-aspartyltransferase. Other names in common use include beta-aspartyl transferase, and aspartotransferase. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "\\rightleftharpoons" } ]
https://en.wikipedia.org/wiki?curid=14541664
14541691
Benzophenone synthase
In enzymology, a benzophenone synthase (EC 2.3.1.151) is an enzyme that catalyzes the chemical reaction 3 malonyl-CoA + 3-hydroxybenzoyl-CoA formula_0 4 CoA + 2,3',4,6-tetrahydroxybenzophenone + 3 CO2 Thus, the two substrates of this enzyme are malonyl-CoA and 3-hydroxybenzoyl-CoA, whereas its 3 products are CoA, 2,3',4,6-tetrahydroxybenzophenone, and CO2. This enzyme belongs to the family of transferases, specifically those acyltransferases transferring groups other than aminoacyl groups. The systematic name of this enzyme class is malonyl-CoA:3-hydroxybenzoyl-CoA malonyltransferase. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "\\rightleftharpoons" } ]
https://en.wikipedia.org/wiki?curid=14541691
14541708
Beta-glucogallin O-galloyltransferase
In enzymology, a beta-glucogallin O-galloyltransferase (EC 2.3.1.90) is an enzyme that catalyzes the chemical reaction 2 1-O-galloyl-beta-D-glucose formula_0 D-glucose + 1-O,6-O-digalloyl-beta-D-glucose Hence, this enzyme has one substrate, 1-O-galloyl-beta-D-glucose, and two products, D-glucose and 1-O,6-O-digalloyl-beta-D-glucose. This enzyme belongs to the family of transferases, specifically those acyltransferases transferring groups other than aminoacyl groups. The systematic name of this enzyme class is 1-O-galloyl-beta-D-glucose:1-O-galloyl-beta-D-glucose O-galloyltransferase. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "\\rightleftharpoons" } ]
https://en.wikipedia.org/wiki?curid=14541708
14541724
Beta-glucogallin—tetrakisgalloylglucose O-galloyltransferase
In enzymology, a beta-glucogallin-tetrakisgalloylglucose O-galloyltransferase (EC 2.3.1.143) is an enzyme that catalyzes the chemical reaction 1-O-galloyl-beta-D-glucose + 1,2,3,6-tetrakis-O-galloyl-beta-D-glucose formula_0 D-glucose + 1,2,3,4,6-pentakis-O-galloyl-beta-D-glucose Thus, the two substrates of this enzyme are 1-O-galloyl-beta-D-glucose and 1,2,3,6-tetrakis-O-galloyl-beta-D-glucose, whereas its two products are D-glucose and 1,2,3,4,6-pentakis-O-galloyl-beta-D-glucose. This enzyme belongs to the family of transferases, specifically those acyltransferases transferring groups other than aminoacyl groups. The systematic name of this enzyme class is 1-O-galloyl-beta-D-glucose:1,2,3,6-tetrakis-O-galloyl-beta-D-glucose 4-O-galloyltransferase. Other names in common use include beta-glucogallin-tetragalloylglucose 4-galloyltransferase, beta-glucogallin:1,2,3,6-tetra-O-galloylglucose, 4-O-galloyltransferase, beta-glucogallin:1,2,3,6-tetra-O-galloyl-beta-D-glucose, and 4-O-galloyltransferase. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "\\rightleftharpoons" } ]
https://en.wikipedia.org/wiki?curid=14541724
14541734
Beta-ketoacyl-ACP synthase I
In enzymology, a beta-ketoacyl-acyl-carrier-protein synthase I (EC 2.3.1.41) is an enzyme that catalyzes the chemical reaction an acyl-acyl-carrier-protein + malonyl-acyl-carrier-protein formula_0 a 3-oxoacyl-acyl-carrier-protein + CO2 + acyl-carrier-protein Thus, the two substrates of this enzyme are acyl-acyl-carrier-protein and malonyl-acyl-carrier-protein, whereas its 3 products are 3-oxoacyl-acyl-carrier-protein, CO2, and acyl carrier protein. This enzyme participates in fatty acid biosynthesis. This enzyme belongs to the family of transferases, specifically those acyltransferases transferring groups other than aminoacyl groups. Nomenclature. The systematic name of this enzyme class is acyl-[acyl-carrier-protein]:malonyl-[acyl-carrier-protein] C-acyltransferase (decarboxylating). Other names in common use include: &lt;templatestyles src="Div col/styles.css"/&gt; References. &lt;templatestyles src="Reflist/styles.css" /&gt; &lt;templatestyles src="Refbegin/styles.css" /&gt;
[ { "math_id": 0, "text": "\\rightleftharpoons" } ]
https://en.wikipedia.org/wiki?curid=14541734
14541750
Beta-ketoacyl-ACP synthase II
In enzymology, a beta-ketoacyl-acyl-carrier-protein synthase II (EC 2.3.1.179) is an enzyme that catalyzes the chemical reaction (Z)-hexadec-11-enoyl-[acyl-carrier-protein] + malonyl-[acyl-carrier-protein] formula_0 (Z)-3-oxooctadec-13-enoyl-[acyl-carrier-protein] + CO2 + [acyl-carrier-protein] Thus, the two substrates of this enzyme are (Z)-hexadec-11-enoyl-[acyl-carrier-protein] and malonyl-[acyl-carrier-protein], whereas its 3 products are (Z)-3-oxooctadec-13-enoyl-[acyl-carrier-protein], CO2, and acyl-carrier-protein. This enzyme belongs to the family of transferases, specifically those acyltransferases transferring groups other than aminoacyl groups. The systematic name of this enzyme class is (Z)-hexadec-11-enoyl-[acyl-carrier-protein]:malonyl-[acyl-carrier-protein] C-acyltransferase (decarboxylating). Other names in common use include KASII, KAS II, FabF, 3-oxoacyl-acyl carrier protein synthase I, and beta-ketoacyl-ACP synthase II. This enzyme participates in fatty acid biosynthesis. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "\\rightleftharpoons" } ]
https://en.wikipedia.org/wiki?curid=14541750
14541769
Beta-ketoacyl-ACP synthase III
Enzyme In enzymology, a β-ketoacyl-[acyl-carrier-protein] synthase III (EC 2.3.1.180) is an enzyme that catalyzes the chemical reaction acetyl-CoA + malonyl-[acyl carrier protein] formula_0 acetoacetyl-[acyl carrier protein] + CoA + CO2 Thus, the two substrates of this enzyme are acetyl-CoA and malonyl-[acyl-carrier-protein], whereas its 3 products are acetoacetyl-[acyl-carrier-protein], CoA, and CO2. This enzyme belongs to the family of transferases, to be specific those acyltransferases transferring groups other than aminoacyl groups. This enzyme participates in fatty acid biosynthesis. β-Ketoacyl-acyl-carrier-protein synthase III is involved in the dissociated (or type II) fatty-acid biosynthesis system that occurs in plants and bacteria. The role of FabH in fatty acid synthesis has been described in "Streptomyces glaucescens", "Streptococcus pneumoniae", and "Streptomyces coelicolor". Nomenclature. The systematic name of this enzyme class is acetyl-CoA:malonyl-[acyl-carrier-protein] C-acyltransferase. Other names in common use include: Role in tuberculosis. "Mycobacterium tuberculosis", the cause of tuberculosis, evades effective immune clearance through encapsulation, especially with mycolic acids that are particularly resistant to the normal degradative processes of macrophages. Furthermore, this capsule inhibits the entry of antibiotics. The enzymes involved in mycolate biosynthesis are essential for survival and pathogenesis, and thus represent excellent drug targets. In "M. tuberculosis", the beta-ketoacyl-[acyl-carrier-protein] synthase III enzyme is designated mtFabH and is a crucial link between the fatty acid synthase-I and fatty acid synthase-II pathways producing mycolic acids. FAS-I is involved in the synthesis of C16 and C26 fatty acids. The C16 acyl-CoA product acts as a substrate for the synthesis of meromycolic acid by FAS-II, whereas the C26 fatty acid constitutes the alpha branch of the final mycolic acid. MtFabH has been proposed to be the link between FAS-I and FAS-II by converting C14-CoA generated by FAS-I to C16-AcpM, which is channelled into the FAS-II cycle. According to "in silico" flux balance analyses mtFabH is essential, although transposon site hybridization analysis suggests it is not. Unlike the enzymes in FAS-I, the enzymes of FAS-II, including mtFabH, are not found in mammals, suggesting that inhibitors of these enzymes are suitable choices for drug development. Structure and substrates. Crystal structures of FabH have been reported from "Mycobacterium tuberculosis", "Staphylococcus aureus", "Escherichia coli", and "Thermus thermophilus". The catalytic activity and substrate specificity of mtFabH have been measured and then further probed using crystallographic and directed mutagenesis methods. Structures have been determined of ecFabH bound with substrates (CoA, malonyl-CoA, degraded CoA). Specific inhibitors developed using rational design have recently been reported. In 2005, the structure of a catalytically disabled mtFabH mutant with lauroyl-CoA was reported. Native mtFabH is a homodimer with Mr = 77 ± 25 kDa. Although there is substantial structural homology among all bacterial FabH enzymes determined thus far, with two channels for binding of acyl-CoA and malonyl-ACP substrates and a conserved catalytic triad (C122, H258, N289 in mtFabH), mtFabH contains residues along the acyl-CoA binding channel that preferentially select for longer-chain substrates, peaking with lauroyl-CoA (C12).
Inhibition strategies based on rational design could include competitive displacement of the substrates or disruption of the catalytic site. Phosphorylation of Thr45, which is located at the entrance of the substrate channel, inhibits activity, perhaps by altering the accessibility of substrates. Inhibitors. At least two of the existing drugs for tuberculosis were originally derived from microbes: cerulenin from the fungus "Cephalosporium caerulens" and thiolactomycin (TLM) from the actinomycete "Nocardia" spp. Isoniazid (isonicotinic acid hydrazide), ethionamide, triclosan [5-chloro-2-(2,4-dichlorophenoxy)-phenol] and TLM are known to specifically inhibit mycolic acid biosynthesis. Derivatives of TLM and related compounds are being screened to improve efficacy. While much has been learned from these structural studies and rational design is an excellent approach to develop novel inhibitors, alternative approaches such as bio-prospecting may reveal unexpected compounds, such as an allosteric inhibitor discovered by Daines "et al." This could be especially important given that phosphorylation of mycolate synthesis enzymes is suggested to be critical to regulation, and kinase domains are known to have multiple control mechanisms remote from ligand binding and active sites. Following the discovery that phomallenic acids, isolated from a leaf litter fungus identified as "Phoma" sp., are inhibitors of FabH/FabF, Wang "et al." recently reported their discovery, from the soil bacterium "Streptomyces platensis", of a novel natural inhibitor of FabH with "in vivo" activity called platencin. These compounds were found by screening 250,000 extracts of soil bacteria and fungi, demonstrating the viability of bio-prospecting. While a potentially useful antibiotic in its own right, platensimycin has now been shown not to be specifically active on mtFabH. It is speculated that novel inhibitors will most likely be small molecules of relatively low polarity, considering that the catalytic sites of the mtFabH homodimer are hidden in relatively hydrophobic pockets and that such molecules need to traverse the capsules of established bacilli. This is supported by the poor water solubility of an inhibitor of ecFabH. It is also hoped that, being small molecules, their synthesis or biosynthesis will be simple and cheap, thereby enhancing the affordability of subsequent drugs to developing countries. Techniques for screening the efficacy of inhibitors are available. Therapeutic potential. In 2005, tuberculosis caused approximately 1.6 million deaths and 8.8 million new cases worldwide, with 90% of these cases in developing countries, and an estimated one-third of the world's population has latent TB. Despite the availability of the BCG vaccine and multiple antibiotics, TB resurged up to 2005 owing to multidrug resistance, exacerbated by incubation in immune-compromised AIDS patients, non-compliance with drug treatment, and ongoing systemic deficiencies of healthcare in developing countries. Mortality and infection rates appear to have peaked, but TB remains a serious global problem. New effective drugs are needed to combat this disease. Inhibitors against mtFabH, or against other enzymes of the FAS-II pathway, may have broader utility, such as the treatment of multidrug-resistant "Staphylococcus aureus" and of "Plasmodium falciparum", the causative agent of another serious refractory problem, malaria.
Given the predominance of TB in poor countries, the commercial incentive to develop new drugs has been weak, a problem compounded by complacency and reliance on old, well-established "first-line" drugs such as Rifampicin, Isoniazid, Pyrazinamide, and Ethambutol. The price point is already very low: US$16–35 will buy a full six-month drug course. Nevertheless, new drugs are in clinical trials. According to the Global Alliance for TB Drug Development, sales of first-line TB drugs are projected to be approximately US$315 million per year, and US$54 million for second-line treatments, yet the global economic toll of TB is at least $12 billion each year. References. &lt;templatestyles src="Reflist/styles.css" /&gt; Further reading. &lt;templatestyles src="Refbegin/styles.css" /&gt;
[ { "math_id": 0, "text": "\\rightleftharpoons" } ]
https://en.wikipedia.org/wiki?curid=14541769
14541804
Biphenyl synthase
In enzymology, a biphenyl synthase (EC 2.3.1.177) is an enzyme that catalyzes the chemical reaction: 3 malonyl-CoA + benzoyl-CoA formula_0 4 CoA + 3,5-dihydroxybiphenyl + 4 CO2 Thus, the two substrates of this enzyme are malonyl-CoA and benzoyl-CoA, whereas its three products are CoA, 3,5-dihydroxybiphenyl, and CO2. This enzyme belongs to the family of transferases, specifically those acyltransferases transferring groups other than aminoacyl groups. The systematic name of this enzyme class is malonyl-CoA:benzoyl-CoA malonyltransferase. This enzyme is also called BIS. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "\\rightleftharpoons" } ]
https://en.wikipedia.org/wiki?curid=14541804
14541820
Carnitine O-acetyltransferase
Enzyme Carnitine O-acetyltransferase, also called carnitine acetyltransferase (CRAT, or CAT) (EC 2.3.1.7), is an enzyme, encoded by the CRAT gene, that catalyzes the chemical reaction acetyl-CoA + carnitine formula_0 CoA + acetylcarnitine where the acetyl group displaces the hydrogen atom in the central hydroxyl group of carnitine. Thus, the two substrates of this enzyme are acetyl-CoA and carnitine, whereas its two products are CoA and O-acetylcarnitine. The reaction is highly reversible and does not depend on the order in which substrates bind. Different subcellular localizations of the CRAT mRNAs are thought to result from alternative splicing of the CRAT gene, as suggested by the divergent sequences in the 5' region of peroxisomal and mitochondrial CRAT cDNAs and by the location of an intron where the sequences diverge. Alternative splicing of this gene results in three distinct isoforms, one of which contains an N-terminal mitochondrial transit peptide and has been shown to be located in mitochondria. Nomenclature. This enzyme belongs to the family of transferases, to be specific those acyltransferases transferring groups other than aminoacyl groups. The systematic name of this enzyme class is acetyl-CoA:carnitine O-acetyltransferase. Other names in common use include acetyl-CoA-carnitine O-acetyltransferase, acetylcarnitine transferase, carnitine acetyl coenzyme A transferase, carnitine acetylase, carnitine acetyltransferase, carnitine-acetyl-CoA transferase, and CATC. This enzyme participates in alanine and aspartate metabolism. Structure. In general, carnitine acetyltransferases have molecular weights of about 70 kDa and contain approximately 600 residues (a rough arithmetic check of these figures appears below, after the CoA binding site subsection). CRAT contains two domains, an N domain and a C domain, and is composed of 20 α helices and 16 β strands. The N domain consists of an eight-stranded β sheet flanked on both sides by eight α helices. A six-stranded mixed β sheet and eleven α helices comprise the enzyme's C domain. When compared, the cores of the two domains show markedly similar peptide backbone folding, even though only 4% of the amino acids that make up those backbones correspond to one another. Active site. His343 is the catalytic residue in CRAT. It is located at the interface between the enzyme's C and N domains, towards the heart of CRAT. His343 is accessible via two 15-18 Å channels that approach the residue from opposite ends of the CRAT enzyme. These channels are used by the substrates of CRAT, one channel for carnitine and one for CoA. The side chain of His343 is positioned irregularly, with the δ1 ring nitrogen hydrogen bonded to the carbonyl oxygen of the amino acid backbone. CoA binding site. Because CRAT binds CoA rather than acetyl-CoA, it appears that CRAT can hydrolyze acetyl-CoA before interacting with the lone CoA fragment at the binding site. CoA is bound in a linear conformation with its pantothenic arm binding at the active site. Here, the pantothenic arm's terminal thiol group and the ε2 nitrogen on the catalytic His343 side chain form a hydrogen bond. The 3'-phosphate on CoA forms interactions with residues Lys419 and Lys423. Also at the binding site, the residues Asp430 and Glu453 form a direct hydrogen bond to one another. Mutation of either residue can decrease CRAT activity.
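As a rough, purely illustrative consistency check on the size figures quoted in the Structure section above, the sketch below uses an average residue mass of roughly 110 Da (a common approximation, not a value taken from the source) to show that about 600 residues corresponds to a mass in the region of the quoted 70 kDa.

```python
# Back-of-the-envelope check: ~600 residues at ~110 Da per residue is ~66 kDa,
# in line with the ~70 kDa molecular weight quoted for carnitine acetyltransferases.
AVERAGE_RESIDUE_MASS_DA = 110   # commonly used approximation (assumption)
residues = 600
estimated_mass_kda = residues * AVERAGE_RESIDUE_MASS_DA / 1000
print(f"~{residues} residues -> ~{estimated_mass_kda:.0f} kDa")
```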
Carnitine binding site. Carnitine binds to CRAT in a partially folded state, with its hydroxyl group and carboxyl group facing opposite directions. The site itself is composed of the C domain β sheet and particular residues from the N domain. Upon binding, a face of carnitine is left exposed to the space outside the enzyme. Like CoA, carnitine forms a hydrogen bond with the ε2 nitrogen on His343; in the case of carnitine, the bond is formed with its 3-hydroxyl group. CRAT catalysis is stereospecific for carnitine, as the stereoisomer of the 3-hydroxyl group cannot interact sufficiently with the CRAT carnitine binding site. CRAT undergoes minor conformational changes upon binding carnitine. Function. Enzyme mechanism. The His343 residue at the active site of CRAT acts as a base that is able to deprotonate the CoA thiol group or the carnitine 3-hydroxyl group, depending on the direction of the reaction. The structure of CRAT optimizes this reaction by allowing direct hydrogen bonding between His343 and both substrates. The deprotonated group is then free to attack the acetyl group of acetyl-CoA or acetylcarnitine at its carbonyl site. The reaction proceeds directly, without the formation of a His343-acetyl intermediate. Hydrolysis. It is possible for catalysis to occur with only one of the two substrates. If either acetyl-CoA or acetylcarnitine binds to CRAT, a water molecule may fill the other binding site and act as an acetyl group acceptor. Substrate-assisted catalysis. The literature suggests that the trimethylammonium group on carnitine may be a crucial factor in CRAT catalysis. This group carries a positive charge that stabilizes the oxyanion in the reaction intermediate. This idea is supported by the fact that the positive charge of carnitine is unnecessary for active-site binding but vital for catalysis to proceed. This was demonstrated through the synthesis of a carnitine analog lacking its trimethylammonium group: the compound was able to compete with carnitine in binding to CRAT, but was unable to support the reaction. The emergence of substrate-assisted catalysis has opened up new strategies for increasing synthetic substrate specificity. Biological function. There is evidence suggesting that CRAT activity is necessary for the cell cycle to proceed from the G1 phase to the S phase. Clinical significance. Individuals with an inherited deficiency in CRAT activity are at risk of developing severe cardiac and neurological problems. Reduced CRAT activity can be found in individuals suffering from Alzheimer's disease. CRAT and its family of enzymes have great potential as targets for developing therapeutic treatments for Type 2 diabetes and other diseases. Interactions. CRAT is known to interact with NEDD8, PEX5, and SUMO1. References. &lt;templatestyles src="Reflist/styles.css" /&gt; Further reading. &lt;templatestyles src="Refbegin/styles.css" /&gt;
[ { "math_id": 0, "text": "\\rightleftharpoons" } ]
https://en.wikipedia.org/wiki?curid=14541820
14541837
Carnitine O-octanoyltransferase
Carnitine O-octanoyltransferase (CROT or COT) is a member of the transferase family, more specifically a carnitine acyltransferase, a type of enzyme which catalyzes the transfer of acyl groups from acyl-CoAs to carnitine, generating CoA and an acylcarnitine (EC 2.3.1.137). Specifically, CROT catalyzes the chemical reaction: octanoyl-CoA + L-carnitine formula_0 CoA + L-octanoylcarnitine Thus, the two substrates of this enzyme are octanoyl-CoA and L-carnitine, and its two products are CoA and L-octanoylcarnitine. This reaction is easily reversible and does not require any energy input, as both fatty acyl-CoAs and fatty acylcarnitines are considered chemically “activated” forms of fatty acyl groups. Nomenclature. The systematic name of this enzyme is octanoyl-CoA:L-carnitine O-octanoyltransferase. Other names in common use include: This enzyme belongs to the family of transferases, specifically those acyltransferases transferring groups other than aminoacyl groups. Structure. CROT is 612 amino acids long, with a molecular weight of about 70 kDa. In terms of broad structural features, CROT has 20 α-helices and 16 β-strands, and can be divided into two domains, named N and C. As of late 2007, 4 structures have been solved for this class of enzymes, with PDB accession codes 1XL7, 1XL8, 1XMC, and 1XMD. The key catalytic residue in all carnitine acyltransferases, including CROT, is a histidine residue, as confirmed by site-directed mutagenesis studies. In CROT, this residue is at position 327. This residue, along with the rest of the active site, is found at the interface between the N and C domains. The active site stabilizes carnitine through an intricate network of hydrogen-bonding residues, along with a key water molecule. The longer acyl chain is stabilized by hydrophobic residues arrayed in an approximately cylindrical fashion. As may be expected from members of the same enzymatic family, there is strong similarity between the structures of carnitine acetyltransferase (CRAT) and CROT, as these enzymes share 36% sequence homology. A key difference between these enzymes, which may explain their differing selectivities for short- versus medium-chain acyl-CoAs, hinges on a glycine residue present in the acyl binding site of CROT, Gly-553. In CRAT, however, the residue at the same position in the acyl binding site is a methionine, Met-564. These residues have been shown to serve as substrate “gatekeepers” in CRAT and CROT. M564G CRAT mutants have been shown to accept a wider variety of acyl-CoA substrates. Similarly, G553M CROT mutants show marked inactivity with octanoyl-CoA, while maintaining activity with short-chain acyl-CoAs. Function. One function of CROT is to supply acetyl-CoA to glucose-starved cells. In the absence of carnitine acetyltransferase (CRAT), acyltransferases such as CROT can catalyze the acetyl group transfer from acetylcarnitine to coenzyme A. Rescue experiments with CROT gene knockout cells have shown that peroxisomal CROT can mediate acetyl-CoA production under glucose-limited conditions. The peroxisome can then export these products into the cytosol. Localization. Though CROT is distributed on both sides of microsomal vesicles, the bulk of CROT activity in murine liver has been found on the cytoplasmic face of the vesicles and the endoplasmic reticulum. CROT may play a role in converting peroxisomal medium-chain acylcarnitine derivatives to medium-chain acyl-CoA derivatives.
These can then feed into a variety of biosynthetic pathways for elongation and other modifications. In addition, CROT is inhibited by trypsin in a dose-dependent manner. A maximum of 60% inhibition was observed in purified CROT, similar to what was seen with carnitine palmitoyltransferase (CPT). CROT activity also appears to be inhibited to the same extent in both permeable and sealed microsomal membranes. CROT is thought to be peroxisomally located. Administration of di(2-ethylhexyl)phthalate (DEHP), a peroxisomal proliferator, to Wistar rats led to a 14.1-fold increase in the expression of CROT. This resulted from increased translation of CROT mRNA, along with a 1.5-fold decrease in its degradation. CROT activity has also been reported in mouse liver, kidney, adipocyte, mammary gland, skeletal muscle, and heart tissues. CROT activity in the kidney was found to be mostly overt, while in the liver and heart it was mainly latent; interestingly, the opposite trend was found for a related enzyme, carnitine palmitoyltransferase (CPT). Substrates. While CROT's canonical substrate is octanoyl-CoA, CROT is also known to be able to catalyze the deacylation of numerous acyl-CoAs, such as acetyl-CoA, propionyl-CoA, butyryl-CoA, and hexanoyl-CoA. CROT can also take branched-chain fatty acyl-CoAs as substrates, such as 4,8-dimethylnonanoyl-CoA, which is derived from the metabolism of pristanic acid in the peroxisome. Regulation. Because CROT activity has a role in beta-oxidation of fatty acids and ketone body synthesis, it is an important point of regulation. One known inhibitor of CROT is malonyl-CoA, which inhibits CROT non-linearly; complex kinetic behavior is observed when malonyl-CoA is incubated with purified CROT. A decrease in pH can also enhance malonyl-CoA inhibition of CROT. Some studies have indicated that when the pH of assay conditions was decreased from 7.4 to 6.8, inhibition could increase by 20-30%. Further, the Ki for malonyl-CoA in CROT decreases from 106 uM to 35 uM over this drop. This change is not seen for palmitoyl-CoA and decanoyl-CoA. However, the degree of inhibition by malonyl-CoA is similar to that observed with other short-chain acyl-CoA esters, such as glutaryl-CoA, hydroxymethylglutaryl-CoA, and methylmalonyl-CoA. The ionization state of malonyl-CoA does not change significantly over the pH range 7.4-6.8, so the change in sensitivity to inhibitors may be due to the CROT active-site His-327 residue. Malonyl-CoA is also found at a lower concentration in the cell (1-6 uM) than its Ki. Thus, its inhibition of CROT may not be physiologically significant under homeostatic conditions.
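To make this last point concrete, the sketch below applies an idealized competitive-inhibition factor, 1 / (1 + [I]/Ki), at sub-saturating substrate. This simple model is an assumption made only for illustration — the text above notes that malonyl-CoA inhibition of CROT is non-linear — but it shows how far cellular malonyl-CoA concentrations sit below the measured Ki values.

```python
# Illustrative back-of-the-envelope calculation, not from the source.
# A simple competitive-inhibition factor is assumed, even though CROT inhibition
# by malonyl-CoA is reported above to be non-linear.
def residual_activity(inhibitor_uM, ki_uM):
    """Fraction of uninhibited activity under an idealized competitive model."""
    return 1.0 / (1.0 + inhibitor_uM / ki_uM)

for ki in (106.0, 35.0):            # Ki at pH 7.4 and pH 6.8 (values from the text)
    for malonyl_coa in (1.0, 6.0):  # reported cellular concentration range, in uM
        frac = residual_activity(malonyl_coa, ki)
        print(f"Ki = {ki:.0f} uM, [malonyl-CoA] = {malonyl_coa:.0f} uM -> "
              f"{frac:.0%} of activity retained")
```

Even with the lower Ki measured at pH 6.8, this simplified model leaves roughly 85–97% of activity intact at 1–6 uM malonyl-CoA, consistent with the suggestion that the inhibition may not be physiologically significant under homeostatic conditions.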
References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "\\rightleftharpoons" } ]
https://en.wikipedia.org/wiki?curid=14541837
14541853
CDP-acylglycerol O-arachidonoyltransferase
In enzymology, a CDP-acylglycerol O-arachidonoyltransferase (EC 2.3.1.70) was an enzyme reported to catalyze the chemical reaction arachidonoyl-CoA + CDP-acylglycerol formula_0 CoA + CDP-diacylglycerol When described in 1979, the two substrates of this enzyme were believed to be arachidonoyl-CoA and CDP-acylglycerol, whereas its two products were CoA and CDP-diacylglycerol. The enzyme was described as a transferase, specifically an acyltransferase transferring groups other than aminoacyl groups. The systematic name of this enzyme class was arachidonoyl-CoA:CDP-acylglycerol O-arachidonoyltransferase. Other names found in the literature are CDP-acylglycerol O-arachidonyltransferase and arachidonyl-CoA:CDP-acylglycerol O-arachidonyltransferase. The enzyme was presumed to participate in glycerophospholipid metabolism. However, no CDP-acylglycerol O-arachidonoyltransferase has ever been characterized: the reaction reported in 1979 by Thompson and MacDonald was not reproducible. In 1983, W. Thomson retracted the finding, citing possible contamination of their batch of liponucleotides. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "\\rightleftharpoons" } ]
https://en.wikipedia.org/wiki?curid=14541853