15394298
Venkatesan Guruswami
Venkatesan Guruswami (born 1976) is a senior scientist at the Simons Institute for the Theory of Computing and Professor of EECS and Mathematics at the University of California, Berkeley. He did his high schooling at Padma Seshadri Bala Bhavan in Chennai, India. He completed his undergraduate degree in Computer Science at IIT Madras and his doctorate at the Massachusetts Institute of Technology under the supervision of Madhu Sudan in 2001. After receiving his PhD, he spent a year at UC Berkeley as a Miller Fellow, and then was a member of the faculty at the University of Washington from 2002 to 2009. His primary area of research is computer science, in particular error-correcting codes. During 2007–2008, he visited the Institute for Advanced Study as a Member of the School of Mathematics. He also visited SCS at Carnegie Mellon University during 2008–09 as visiting faculty. From July 2009 through December 2020 he was a faculty member in the Computer Science Department in the School of Computer Science at Carnegie Mellon University. Recognition. Guruswami was awarded the 2002 ACM Doctoral Dissertation Award for his dissertation "List Decoding of Error-Correcting Codes", which introduced an algorithm that allowed for the correction of errors beyond half the minimum distance of the code. It applies to Reed–Solomon codes and more generally to algebraic geometry codes. This algorithm produces a list of codewords (it is a list-decoding algorithm) and is based on interpolation and factorization of polynomials over formula_0 and its extensions. He was an invited speaker at the International Congress of Mathematicians 2010 in Hyderabad, on the topic of "Mathematical Aspects of Computer Science." Guruswami was one of two winners of the 2012 Presburger Award, given by the European Association for Theoretical Computer Science for outstanding contributions by a young theoretical computer scientist. He was elected as an ACM Fellow in 2017, as an IEEE Fellow in 2019, and to the 2023 class of Fellows of the American Mathematical Society, "for contributions to the theory of computing and error-correcting codes, and for service to the profession". References.
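As context for the phrase "beyond half the minimum distance", here is a brief sketch in standard coding-theory notation (a well-known summary of the decoding radii, not a quotation from this article). For an ["n", "k"] Reed–Solomon code of rate "R" = "k"/"n",

\[
  t_{\text{unique}} = \left\lfloor \tfrac{n-k}{2} \right\rfloor \approx \tfrac{n}{2}\,(1-R),
  \qquad
  t_{\text{list}} \approx n\,\bigl(1-\sqrt{R}\bigr),
\]

and n(1 − √R) exceeds (n/2)(1 − R) for every rate 0 < R < 1, which is the sense in which the list-decoding algorithm corrects more errors than the classical half-distance bound at every rate.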
[ { "math_id": 0, "text": "GF(2^m)" } ]
https://en.wikipedia.org/wiki?curid=15394298
1539548
Reversible computing
Model of computation in which all processes are time-reversible Reversible computing is any model of computation where the computational process, to some extent, is time-reversible. In a model of computation that uses deterministic transitions from one state of the abstract machine to another, a necessary condition for reversibility is that the relation of the mapping from states to their successors must be one-to-one. Reversible computing is a form of unconventional computing. Due to the unitarity of quantum mechanics, quantum circuits are reversible, as long as they do not "collapse" the quantum states on which they operate. Reversibility. There are two major, closely related types of reversibility that are of particular interest for this purpose: physical reversibility and logical reversibility. A process is said to be "physically reversible" if it results in no increase in physical entropy; it is isentropic. There is a style of circuit design ideally exhibiting this property that is referred to as charge recovery logic, adiabatic circuits, or adiabatic computing (see Adiabatic process). Although "in practice" no nonstationary physical process can be "exactly" physically reversible or isentropic, there is no known limit to the closeness with which we can approach perfect reversibility, in systems that are sufficiently well isolated from interactions with unknown external environments, when the laws of physics describing the system's evolution are precisely known. A motivation for the study of technologies aimed at implementing reversible computing is that they offer what is predicted to be the only potential way to improve the computational energy efficiency (i.e., useful operations performed per unit energy dissipated) of computers beyond the fundamental von Neumann–Landauer limit of "kT" ln(2) energy dissipated per irreversible bit operation. Although the Landauer limit was millions of times below the energy consumption of computers in the 2000s and thousands of times less in the 2010s, proponents of reversible computing argue that this can be attributed largely to architectural overheads which effectively magnify the impact of Landauer's limit in practical circuit designs, so that it may prove difficult for practical technology to progress very far beyond current levels of energy efficiency if reversible computing principles are not used. Relation to thermodynamics. As was first argued by Rolf Landauer while working at IBM, in order for a computational process to be physically reversible, it must also be "logically reversible". Landauer's principle is the observation that the oblivious erasure of "n" bits of known information must always incur a cost of "nkT" ln(2) in thermodynamic entropy. A discrete, deterministic computational process is said to be logically reversible if the transition function that maps old computational states to new ones is a one-to-one function; i.e. the output logical states uniquely determine the input logical states of the computational operation. For computational processes that are nondeterministic (in the sense of being probabilistic or random), the relation between old and new states is not a single-valued function, and the requirement needed to obtain physical reversibility becomes a slightly weaker condition, namely that the size of a given ensemble of possible initial computational states does not decrease, on average, as the computation proceeds forwards. Physical reversibility. 
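To put a number on the von Neumann–Landauer figure quoted above, a minimal sketch (assuming room temperature, T = 300 K, and the SI value of the Boltzmann constant; the 10^18 operations-per-second machine is purely hypothetical):

    import math

    K_BOLTZMANN = 1.380649e-23          # Boltzmann constant, joules per kelvin (SI)

    def landauer_limit(temperature_kelvin):
        """Minimum energy in joules dissipated per irreversible bit operation: kT ln 2."""
        return K_BOLTZMANN * temperature_kelvin * math.log(2)

    e_bit = landauer_limit(300.0)       # about 2.9e-21 J at room temperature
    print(f"Landauer limit at 300 K: {e_bit:.3e} J per erased bit")

    # A hypothetical machine erasing 1e18 bits per second could not dissipate less than:
    print(f"Floor for 1e18 irreversible bit operations/s: {e_bit * 1e18:.3e} W")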
Landauer's principle (and indeed, the second law of thermodynamics) can also be understood to be a direct logical consequence of the underlying reversibility of physics, as is reflected in the general Hamiltonian formulation of mechanics, and in the unitary time-evolution operator of quantum mechanics more specifically. The implementation of reversible computing thus amounts to learning how to characterize and control the physical dynamics of mechanisms to carry out desired computational operations so precisely that the experiment accumulates a negligible total amount of uncertainty regarding the complete physical state of the mechanism, per each logic operation that is performed. In other words, precisely track the state of the active energy that is involved in carrying out computational operations within the machine, and design the machine so that the majority of this energy is recovered in an organized form that can be reused for subsequent operations, rather than being permitted to dissipate into the form of heat. Although achieving this goal presents a significant challenge for the design, manufacturing, and characterization of ultra-precise new physical mechanisms for computing, there is at present no fundamental reason to think that this goal cannot eventually be accomplished, allowing someday to build computers that generate much less than 1 bit's worth of physical entropy (and dissipate much less than "kT" ln 2 energy to heat) for each useful logical operation that they carry out internally. Today, the field has a substantial body of academic literature. A wide variety of reversible device concepts, logic gates, electronic circuits, processor architectures, programming languages, and application algorithms have been designed and analyzed by physicists, electrical engineers, and computer scientists. This field of research awaits the detailed development of a high-quality, cost-effective, nearly reversible logic device technology, one that includes highly energy-efficient clocking and synchronization mechanisms, or avoids the need for these through asynchronous design. This sort of solid engineering progress will be needed before the large body of theoretical research on reversible computing can find practical application in enabling real computer technology to circumvent the various near-term barriers to its energy efficiency, including the von Neumann–Landauer bound. This may only be circumvented by the use of logically reversible computing, due to the second law of thermodynamics. Logical reversibility. For a computational operation to be logically reversible means that the output (or final state) of the operation can be computed from the input (or initial state), and vice versa. Reversible functions are bijective. This means that reversible gates (and circuits, i.e. compositions of multiple gates) generally have the same number of input bits as output bits (assuming that all input bits are consumed by the operation, and that all input/output states are possible). An inverter (NOT) gate is logically reversible because it can be "undone". The NOT gate may however not be physically reversible, depending on its implementation. The exclusive or (XOR) gate is irreversible because its two inputs cannot be unambiguously reconstructed from its single output, or alternatively, because information erasure is not reversible. However, a reversible version of the XOR gate—the controlled NOT gate (CNOT)—can be defined by preserving one of the inputs as a 2nd output. 
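The bijectivity test described above can be made concrete in a few lines (a sketch; the helper function and the encoding of gates as tuple-to-tuple maps are our own choices):

    from itertools import product

    def is_logically_reversible(gate, n_inputs):
        """A gate is logically reversible iff distinct inputs give distinct outputs."""
        outputs = [gate(bits) for bits in product((0, 1), repeat=n_inputs)]
        return len(set(outputs)) == len(outputs)

    NOT  = lambda b: (1 - b[0],)              # 1 bit in, 1 bit out: invertible
    XOR  = lambda b: (b[0] ^ b[1],)           # 2 bits in, 1 bit out: loses information
    CNOT = lambda b: (b[0], b[0] ^ b[1])      # XOR plus a copy of one input: invertible

    print(is_logically_reversible(NOT, 1))    # True
    print(is_logically_reversible(XOR, 2))    # False: (0,1) and (1,0) both map to (1,)
    print(is_logically_reversible(CNOT, 2))   # True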
The three-input variant of the CNOT gate is called the Toffoli gate. It preserves two of its inputs "a,b" and replaces the third "c" by formula_0. With formula_1, this gives the AND function, and with formula_2 this gives the NOT function. Because AND and NOT together form a functionally complete set, the Toffoli gate is universal and can implement any Boolean function (if given enough initialized ancilla bits). Similarly, in the Turing machine model of computation, a reversible Turing machine is one whose transition function is invertible, so that each machine state has at most one predecessor. Yves Lecerf proposed a reversible Turing machine in a 1963 paper but, apparently unaware of Landauer's principle, did not pursue the subject further, devoting most of the rest of his career to ethnolinguistics. In 1973 Charles H. Bennett, at IBM Research, showed that a universal Turing machine could be made both logically and thermodynamically reversible, and therefore able in principle to perform an arbitrarily large number of computation steps per unit of physical energy dissipated, if operated sufficiently slowly. Thermodynamically reversible computers could perform useful computations at useful speed, while dissipating considerably less than "kT" of energy per logical step. In 1982 Edward Fredkin and Tommaso Toffoli proposed the Billiard ball computer, a mechanism using classical hard spheres to do reversible computations at finite speed with zero dissipation, but requiring perfect initial alignment of the balls' trajectories. Bennett's review compared these "Brownian" and "ballistic" paradigms for reversible computation. Aside from the motivation of energy-efficient computation, reversible logic gates offered practical improvements in bit-manipulation transforms in cryptography and computer graphics. Since the 1980s, reversible circuits have attracted interest as components of quantum algorithms, and more recently in photonic and nano-computing technologies where some switching devices offer no signal gain. Surveys of reversible circuits, their construction and optimization, as well as recent research challenges, are available. See also. References. External links.
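A minimal sketch of the Toffoli gate just described (plain Python, with our own function name), checking that AND and NOT fall out as special cases and that the gate is its own inverse:

    def toffoli(a, b, c):
        """Toffoli (CCNOT) gate: preserves a and b, replaces c with c XOR (a AND b)."""
        return a, b, c ^ (a & b)

    bits = (0, 1)

    # AND: fix the target c = 0 and read the third output.
    assert all(toffoli(a, b, 0)[2] == (a & b) for a in bits for b in bits)

    # NOT: fix a = b = 1, so the third output is c XOR 1.
    assert all(toffoli(1, 1, c)[2] == 1 - c for c in bits)

    # Logical reversibility: applying the gate twice restores every input triple.
    assert all(toffoli(*toffoli(a, b, c)) == (a, b, c)
               for a in bits for b in bits for c in bits)

    print("Toffoli gate: AND, NOT and self-inverse checks passed")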
[ { "math_id": 0, "text": "c\\oplus (a\\cdot b)" }, { "math_id": 1, "text": "c=0" }, { "math_id": 2, "text": "a\\cdot b=1" } ]
https://en.wikipedia.org/wiki?curid=1539548
15395806
Process management (computing)
Computer system for maintaining order among running programs A process is a program in execution, and an integral part of any modern-day operating system (OS). The OS must allocate resources to processes, enable processes to share and exchange information, protect the resources of each process from other processes and enable synchronization among processes. To meet these requirements, the OS must maintain a data structure for each process, which describes the state and resource ownership of that process, and which enables the OS to exert control over each process. Multiprogramming. In any modern operating system there can be more than one instance of a program loaded in memory at the same time. For example, more than one user could be executing the same program, each user having separate copies of the program loaded into memory. With some programs, it is possible to have one copy loaded into memory, while several users have shared access to it so that they can each execute the same program-code. Such a program is said to be re-entrant. The processor at any instant can only be executing one instruction from one program but several processes can be sustained over a period of time by assigning each process to the processor at intervals while the remainder become temporarily inactive. A number of processes being executed over a period of time instead of at the same time is called concurrent execution. A multiprogramming or multitasking OS is a system executing many processes concurrently. Multiprogramming requires that the processor be allocated to each process for a period of time and de-allocated at an appropriate moment. If the processor is de-allocated during the execution of a process, it must be done in such a way that it can be restarted later as easily as possible. There are two possible ways for an OS to regain control of the processor during a program's execution in order for the OS to perform de-allocation or allocation: The stopping of one process and starting (or restarting) of another process is called a context switch or context change. In many modern operating systems, processes can consist of many sub-processes. This introduces the concept of a "thread". A thread may be viewed as a "sub-process"; that is, a separate, independent sequence of execution within the code of one process. Threads are becoming increasingly important in the design of distributed and client–server systems and in software run on multi-processor systems. How multiprogramming increases efficiency. A common trait observed among processes associated with most computer programs is that they alternate between CPU cycles and I/O cycles. For the portion of the time required for CPU cycles, the process is being executed; i.e. is occupying the CPU. During the time required for I/O cycles, the process is not using the processor. Instead, it is either waiting to perform Input/Output, or is actually performing Input/Output. An example of this is reading from or writing to a file on disk. Prior to the advent of multiprogramming, computers operated as single-user systems. Users of such systems quickly became aware that for much of the time that a computer was allocated to a single user, the processor was idle; when the user was entering information or debugging programs for example. Computer scientists observed that the overall performance of the machine could be improved by letting a different process use the processor whenever one process was waiting for input/output. 
In a "uni-programming system", if "N" users were to execute programs with individual execution times of "t"1, "t"2, ..., "t""N", then the total time, "t"uni, to service the "N" processes (consecutively) of all "N" users would be: "t"uni = "t"1 + "t"2 + ... + "t""N". However, because each process consumes both CPU cycles and I/O cycles, the time which each process actually uses the CPU is a very small fraction of the total execution time for the process. So, for process "i": "t""i" (processor) ≪ "t""i" (execution) where "t""i" (processor) is the time process "i" spends using the CPU, and<br> "t""i" (execution) is the total execution time for the process; i.e. the time for CPU cycles plus I/O cycles to be carried out (executed) until completion of the process. In fact, usually, the sum of all the processor time, used by "N" processes, rarely exceeds a small fraction of the time to execute any one of the processes; formula_0 Therefore, in uni-programming systems, the processor lay idle for a considerable proportion of the time. To overcome this inefficiency, multiprogramming is now implemented in modern operating systems such as Linux, UNIX and Microsoft Windows. This enables the processor to switch from one process, X, to another, Y, whenever X is involved in the I/O phase of its execution. Since the processing time is much less than a single job's runtime, the total time to service all "N" users with a multiprogramming system can be reduced to approximately: "t"multi = max("t"1, "t"2, ..., "t""N") Process creation. Operating systems need some ways to create processes. In a very simple system designed for running only a single application (e.g., the controller in a microwave oven), it may be possible to have all the processes that will ever be needed present when the system comes up. In general-purpose systems, however, some way is needed to create and terminate processes as needed during operation. There are four principal events that cause a process to be created: When an operating system is booted, typically several processes are created. Some of these are foreground processes, that interact with a (human) user and perform work for them. Others are background processes, which are not associated with particular users, but instead have some specific function. For example, one background process may be designed to accept incoming e-mails, sleeping most of the day but suddenly springing to life when an incoming e-mail arrives. Another background process may be designed to accept an incoming request for web pages hosted on the machine, waking up when a request arrives to service that request. Process creation in UNIX and Linux is done through fork() or clone() system calls. There are several steps involved in process creation. The first step is the validation of whether the parent process has sufficient authorization to create a process. Upon successful validation, the parent process is copied almost entirely, with changes only to the unique process id, parent process, and user-space. Each new process gets its own user space. Process creation in Windows is done through the CreateProcessA() system call. A new process runs in the security context of the calling process, but otherwise runs independently of the calling process. Methods exist to alter the security context in which a new processes runs. New processes are assigned identifiers by which they can be accessed. Functions are provided to synchronize calling threads to newly created processes. Process termination. 
There are many reasons for process termination: Two-state process management model. The operating system's principal responsibility is to control the execution of processes. This includes determining the interleaving pattern for execution and allocation of resources to processes. One part of designing an OS is to describe the behavior that we would like each process to exhibit. The simplest model is based on the fact that a process is either being executed by a processor or it is not. Thus, a process may be considered to be in one of two states, "RUNNING" or "NOT RUNNING". When the operating system creates a new process, that process is initially labeled as "NOT RUNNING", and is placed into a queue in the system in the "NOT RUNNING" state. The process (or some portion of it) then exists in main memory, and it waits in the queue for an opportunity to be executed. After some period of time, the currently "RUNNING" process will be interrupted, and moved from the "RUNNING" state to the "NOT RUNNING" state, making the processor available for a different process. The dispatch portion of the OS will then select, from the queue of "NOT RUNNING" processes, one of the waiting processes to transfer to the processor. The chosen process is then relabeled from a "NOT RUNNING" state to a "RUNNING" state, and its execution is either begun if it is a new process, or is resumed if it is a process which was interrupted at an earlier time. From this model, we can identify some design elements of the OS: Three-state process management model. Although the two-state process management model is a perfectly valid design for an operating system, the absence of a "BLOCKED" state means that the processor lies idle when the active process changes from CPU cycles to I/O cycles. This design does not make efficient use of the processor. The three-state process management model is designed to overcome this problem, by introducing a new state called the "BLOCKED" state. This state describes any process which is waiting for an I/O event to take place. In this case, an I/O event can mean the use of some device or a signal from another process. The three states in this model are: At any instant, a process is in one and only one of the three states. For a single processor computer, only one process can be in the "RUNNING" state at any one instant. There can be many processes in the "READY" and "BLOCKED" states, and each of these states will have an associated queue for processes. Processes entering the system must go initially into the "READY" state, processes can only enter the "RUNNING" state via the "READY" state. Processes normally leave the system from the "RUNNING" state. For each of the three states, the process occupies space in the main memory. While the reason for most transitions from one state to another might be obvious, some may not be so clear. Process description and control. Each process in the system is represented by a data structure called a Process Control Block (PCB), or Process Descriptor in Linux. Process Identification: Each process is uniquely identified by the user's identification and a pointer connecting it to its descriptor. Process Status: This indicates the current status of the process; "READY", "RUNNING", "BLOCKED", "READY SUSPEND", "BLOCKED SUSPEND". Process State: This contains all of the information needed to indicate the current state of the job. Accounting: This contains information used mainly for billing purposes and for performance measurement. 
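A compact sketch of the three-state model described above, with READY and BLOCKED queues and a single RUNNING slot (the class and function names are illustrative, not from any real operating system):

    from collections import deque

    class Process:
        def __init__(self, pid):
            self.pid = pid
            self.state = "READY"              # processes enter the system in the READY state

    ready_queue, blocked_queue = deque(), deque()
    running = None                            # the single RUNNING slot (one processor)

    def dispatch():
        """Move the process at the head of the READY queue onto the processor."""
        global running
        if running is None and ready_queue:
            running = ready_queue.popleft()
            running.state = "RUNNING"

    def block_on_io():
        """The RUNNING process starts an I/O cycle and moves to the BLOCKED queue."""
        global running
        if running is not None:
            running.state = "BLOCKED"
            blocked_queue.append(running)
            running = None
            dispatch()                        # hand the processor to another READY process

    def io_complete():
        """An awaited I/O event occurs; the process re-enters the READY queue."""
        if blocked_queue:
            p = blocked_queue.popleft()
            p.state = "READY"
            ready_queue.append(p)

    ready_queue.extend(Process(pid) for pid in (1, 2, 3))
    dispatch(); block_on_io(); io_complete()
    print(running.pid,                        # 2 is now RUNNING
          [p.pid for p in ready_queue],       # [3, 1] are READY
          [p.pid for p in blocked_queue])     # [] nothing BLOCKED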
It indicates what kind of resources the process has used and for how long. Processor modes. Contemporary processors incorporate a mode bit to define the execution capability of a program in the processor. This bit can be set to "kernel mode" or "user mode". Kernel mode is also commonly referred to as "supervisor mode", "monitor mode" or "ring 0". In kernel mode, the processor can execute every instruction in its hardware repertoire, whereas in user mode, it can only execute a subset of the instructions. Instructions that can be executed only in kernel mode are called kernel, privileged, or protected instructions to distinguish them from the user mode instructions. For example, I/O instructions are privileged. So, if an application program executes in user mode, it cannot perform its own I/O. Instead, it must request the OS to perform I/O on its behalf. The computer architecture may logically extend the mode bit to define areas of memory to be used when the processor is in kernel mode versus user mode. If the mode bit is set to kernel mode, the process executing in the processor can access either the kernel or user partition of the memory. However, if user mode is set, the process can reference only the user memory space. We frequently refer to two classes of memory user space and system space (or kernel, supervisor, or protected space). In general, the mode bit extends the operating system's protection rights. The mode bit is set by the user mode trap instruction, This instruction sets the mode bit, and branches to a fixed location in the system space. Since only system code is loaded in the system space, only system code can be invoked via a trap. When the OS has completed the supervisor call, it resets the mode bit to user mode prior to the return. The Kernel system concept. The parts of the OS critical to its correct operation execute in kernel mode, while other software (such as generic system software) and all application programs execute in user mode. This fundamental distinction is usually the irrefutable distinction between the operating system and other system software. The part of the system executing in the kernel supervisor state is called the kernel, or nucleus, of the operating system. The kernel operates as trusted software, meaning that when it was designed and implemented, it was intended to implement protection mechanisms that could not be covertly changed through the actions of untrusted software executing in user space. Extensions to the OS execute in user mode, so the OS does not rely on the correctness of those parts of the system software for the correct operation of the OS. Hence, a fundamental design decision for any function to be incorporated into the OS is whether it needs to be implemented in the kernel. If it is implemented in the kernel, it will execute in kernel (supervisor) space, and have access to other parts of the kernel. It will also be trusted software by the other parts of the kernel. If the function is implemented to execute in user mode, it will have no access to kernel data structures. However, the advantage is that it will normally require very limited effort to invoke the function. While kernel-implemented functions may be easy to implement, the trap mechanism and authentication at the time of the call are usually relatively expensive. The kernel code runs fast, but there is a large performance overhead in the actual call. This is a subtle, but important point. Requesting system services. 
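A toy model of the mode bit and trap mechanism described above (entirely illustrative; the names are hypothetical and no real hardware or OS interface is implied):

    class ToyCPU:
        def __init__(self):
            self.mode = "user"                          # the mode bit starts in user mode

        def privileged_io(self, data):
            """An I/O-style instruction that is legal only in kernel mode."""
            if self.mode != "kernel":
                raise PermissionError("privileged instruction executed in user mode")
            print(f"device <- {data!r}")

        def trap(self, service, *args):
            """User-mode trap: set the mode bit, run trusted system-space code, reset the bit."""
            self.mode = "kernel"
            try:
                return service(self, *args)             # branch to a fixed system-space routine
            finally:
                self.mode = "user"                      # reset to user mode before returning

    def sys_write(cpu, data):                           # stands in for code loaded in system space
        cpu.privileged_io(data)

    cpu = ToyCPU()
    try:
        cpu.privileged_io("hello")                      # user mode cannot perform its own I/O
    except PermissionError as err:
        print("blocked:", err)
    cpu.trap(sys_write, "hello")                        # the same request succeeds via the trap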
There are two techniques by which a program executing in user mode can request the kernel's services: Operating systems are designed with one or the other of these two facilities, but not both. First, assume that a user process wishes to invoke a particular target system function. For the system call approach, the user process uses the trap instruction. The idea is that the system call should appear to be an ordinary procedure call to the application program; the OS provides a library of user functions with names corresponding to each actual system call. Each of these stub functions contains a trap to the OS function. When the application program calls the stub, it executes the trap instruction, which switches the CPU to kernel mode, and then branches (indirectly through an OS table), to the entry point of the function which is to be invoked. When the function completes, it switches the processor to user mode and then returns control to the user process; thus simulating a normal procedure return. In the message passing approach, the user process constructs a message, that describes the desired service. Then it uses a trusted send function to pass the message to a trusted OS process. The send function serves the same purpose as the trap; that is, it carefully checks the message, switches the processor to kernel mode, and then delivers the message to a process that implements the target functions. Meanwhile, the user process waits for the result of the service request with a message receive operation. When the OS process completes the operation, it sends a message back to the user process. The distinction between the two approaches has important consequences regarding the relative independence of the OS behavior, from the application process behavior, and the resulting performance. As a rule of thumb, operating system based on a system call interface can be made more efficient than those requiring messages to be exchanged between distinct processes. This is the case, even though the system call must be implemented with a trap instruction; that is, even though the trap is relatively expensive to perform, it is more efficient than the message-passing approach, where there are generally higher costs associated with the process multiplexing, message formation and message copying. The system call approach has the interesting property that there is not necessarily any OS process. Instead, a process executing in user mode changes to kernel mode when it is executing kernel code, and switches back to user mode when it returns from the OS call. If, on the other hand, the OS is designed as a set of separate processes, it is usually easier to design it so that it gets control of the machine in special situations, than if the kernel is simply a collection of functions executed by users processes in kernel mode. Even procedure-based operating systems usually find it necessary to include at least a few system processes (called daemons in UNIX) to handle situations whereby the machine is otherwise idle such as scheduling and handling the network.
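A sketch of the message-passing style of service request described above, with ordinary in-process queues standing in for the trusted send and receive primitives (all names are illustrative, and the OS-side step is run synchronously here only to keep the example short):

    from queue import Queue

    request_q, reply_q = Queue(), Queue()       # trusted channels between user and OS processes

    def os_process_step():
        """OS side: receive a message, perform the requested service, send a reply."""
        op, payload = request_q.get()
        result = f"wrote {len(payload)} bytes" if op == "write" else "unknown service"
        reply_q.put(result)

    def request_service(op, payload):
        """User side: construct a message describing the service and wait for the reply."""
        request_q.put((op, payload))            # plays the role of the checked 'send' function
        os_process_step()                       # in a real system the OS process runs concurrently
        return reply_q.get()                    # 'message receive': wait for the result

    print(request_service("write", b"hello"))   # -> wrote 5 bytes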
[ { "math_id": 0, "text": "\\sum_{j=1}^{N} t_{j \\, (\\mathrm{processor})} < t_{i \\, (\\mathrm{execution}\\!)}" } ]
https://en.wikipedia.org/wiki?curid=15395806
15397524
Morphological dictionary
Linguistic resource In the fields of computational linguistics and applied linguistics, a morphological dictionary is a linguistic resource that contains correspondences between surface forms and lexical forms of words. Surface forms of words are those found in natural language text. The corresponding lexical form of a surface form is the lemma followed by grammatical information (for example the part of speech, gender and number). In English "give", "gives", "giving", "gave" and "given" are surface forms of the verb "give". The lexical form would be "give", verb. There are two kinds of morphological dictionaries: morpheme-aligned dictionaries and full-form (non-aligned) dictionaries. Notable examples and formalisms. Universal Morphologies. Inspired by the success of the Universal Dependencies for cross-linguistic annotation of syntactic dependencies, similar efforts have emerged for morphology, e.g., UniMorph and UDer. These feature simple tabular (tab-separated) formats with one form per row, together with its derivation (UDer) or inflection information (UniMorph), respectively:
aalen   aalend  V.PTCP;PRS
aalen   aalen   V;IND;PRS;1;PL
aalen   aalen   V;IND;PRS;3;PL
aalen   aalen   V;NFIN
(UniMorph, German. Columns are LEMMA, FORM, FEATURES)
In UDer, additional information (part of speech) is encoded within the columns:
abändern_V      Abänderung_Nf   dVN07>
Abarbeiten_Nn   abarbeiten_V    dNV09>
abartig_A       Abartigkeit_Nf  dAN03>
Abart_Nf        abartig_A       dNA05>
abbaggern_V     Abbaggern_Nn    dVN09>
(UDer, German DErivBase 0.5. Columns are BASE, DERIVED, RULE)
At the time of writing (2021), all of these are non-aligned morphological dictionaries (see below). Their simple tabular format is particularly well-suited for the application of machine learning techniques, and UniMorph in particular has been the subject of numerous shared tasks. Finite State Transducers. Finite State Transducers (FSTs) are a popular technique for the computational handling of morphology, especially inflectional morphology. In rule-based morphological parsers, both lexicon and rules are normally formalized as finite state automata and subsequently combined. They thus require morphological dictionaries with specific processing instructions (which often have a linguistic interpretation but, technically, are just treated like arbitrary string symbols). Popular FST packages such as SFST (available from the fst package in Debian and Ubuntu) allow users to define application-specific file formats for morphological lexica that bundle different pieces of morphological information with every individual morpheme. These are thus aligned morphological dictionaries, but very rich (and also idiosyncratic) in structure. Sample data from SMOR (German SFST grammar):
<Base_Stems>Aachen<NN><base><nativ><Name-Neut_s>
<Base_Stems>Aal<NN><base><nativ><NMasc_es_e>
<Base_Stems>Aarau<NN><base><nativ><Name-Neut_s>
<Suff_Stems><suffderiv><gebunden><kompos><NN>nom<>:e<>:n<NN><SUFF><kompos><frei>
<Suff_Stems><suffderiv><gebunden><kompos><NN>nom<NN><SUFF><base><frei><NMasc_en_en>
<Suff_Stems><suffderiv><gebunden><kompos><NN>nom<NN><SUFF><deriv><frei>
Interlinear Glossed Text editors.
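A small sketch of how tab-separated UniMorph rows like the sample above can be read into a full-form lookup table (the parsing code and variable names are ours; only the three-column LEMMA/FORM/FEATURES layout is taken from the sample):

    UNIMORPH_ROWS = """\
    aalen\taalend\tV.PTCP;PRS
    aalen\taalen\tV;IND;PRS;1;PL
    aalen\taalen\tV;IND;PRS;3;PL
    aalen\taalen\tV;NFIN"""

    # Map each surface form to the set of (lemma, features) analyses it can receive.
    analyses = {}
    for line in UNIMORPH_ROWS.splitlines():
        lemma, form, features = line.strip().split("\t")
        analyses.setdefault(form, set()).add((lemma, features))

    print(analyses["aalen"])   # three analyses for the surface form "aalen" (set order may vary)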
Interlinear Glossed Text (IGT) is a popular formalism in language documentation, linguistic typology and other branches of linguistics and the philologies. Although IGT can be created without any specialized software (but just with a conventional editor), such specialized software has been developed, with notable examples such as Toolbox, the FieldWorks Language Explorer (FLEx) or open source alternatives such as Xigt. Toolbox and FLEx support semi-automated annotation by means of an internal morphological dictionary. Whenever a morphological segment is encountered for which an annotation in the dictionary can be found, this annotation is applied. Whenever a morphological segment is newly annotated, the annotation is stored in the dictionary. FLEx and Toolbox provide different editor functionalities for annotating text and editing dictionaries, so that additional information beyond that found in annotations can be added, but at its core, their formats provide aligned morphological dictionaries. FLEx and Xigt are based on XML formats, while Toolbox uses a plain text format with idiosyncratic "markers". FLEx and Toolbox are not directly interoperable with each other, but a semiautomated converter from Toolbox to FLEx does exist. Xigt comes with FLEx and Toolbox importers, but is less widely used than either FLEx or Toolbox. The formats of FLEx and Toolbox are not intended for human consumption, nor are they well-supported by any processing software other than their native tools. OntoLex-Morph: A community standard for morphological dictionaries. OntoLex is a community standard for machine-readable dictionaries on the web. In 2019, the OntoLex-Morph module was proposed to facilitate data modelling of morphology in lexicography, as well as to provide a data model for morphological dictionaries for Natural Language Processing. OntoLex-Morph supports both aligned and non-aligned morphological dictionaries. A specific goal is to establish interoperability between and among IGT dictionaries, FST lexicons and morphological dictionaries used for machine learning. Types and structure of morphological dictionaries. Aligned morphological dictionaries. In an aligned morphological dictionary, the correspondence between the surface form and the lexical form of a word is aligned at the character level, for example: (h,h) (o,o) (u,u) (s,s) (e,e) (s,⟨n⟩), (θ,⟨pl⟩) Here θ is the empty symbol, ⟨n⟩ signifies "noun", and ⟨pl⟩ signifies "plural". In the example the left hand side is the surface form (input), and the right hand side is the lexical form (output). This order is used in morphological analysis where a lexical form is generated from a surface form. In morphological generation this order would be reversed. Formally, if Σ is the alphabet of the input symbols, and formula_0 is the alphabet of the output symbols, an aligned morphological dictionary is a subset formula_1, where: formula_2 is the alphabet of all the possible alignments including the empty symbol. That is, an aligned morphological dictionary is a set of strings in formula_3. Non-aligned morphological dictionaries (full-form dictionaries). A non-aligned morphological dictionary (or full-form dictionary) is simply a set formula_4 of pairs of input and output strings. A non-aligned morphological dictionary would represent the previous example as: (houses, house⟨n⟩⟨pl⟩) It is possible to convert a non-aligned dictionary into an aligned dictionary.
Besides trivial alignments to the left or to the right, linguistically motivated alignments which align characters to their corresponding morphemes are possible. Lexical ambiguities. Frequently there exists more than one lexical form associated with a surface form of a word. For example, "house" may be a noun in the singular, or may be a verb in the present tense. As a result of this it is necessary to have a function which relates input strings with their corresponding output strings. If we define the set formula_5 of input words such that formula_6, the correspondence function would be formula_7 defined as formula_8. References.
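A minimal sketch of the two dictionary types and the correspondence function τ from this section, using the "house"/"houses" example (the ⟨n⟩/⟨pl⟩ tag spelling follows the text; the singular-noun and present-tense-verb tags are our own illustrative choices):

    # Non-aligned (full-form) dictionary: pairs of surface form and lexical form.
    full_form = {
        ("houses", "house⟨n⟩⟨pl⟩"),
        ("house",  "house⟨n⟩⟨sg⟩"),     # assumed tag for the singular-noun reading
        ("house",  "house⟨vb⟩⟨pres⟩"),  # assumed tag for the present-tense-verb reading
    }

    # Aligned dictionary entry for the same first pair, aligned character by character,
    # with "" playing the role of the empty symbol θ.
    aligned_houses = [("h", "h"), ("o", "o"), ("u", "u"), ("s", "s"),
                      ("e", "e"), ("s", "⟨n⟩"), ("", "⟨pl⟩")]

    def tau(surface):
        """Correspondence function: surface form -> set of lexical forms (analysis direction)."""
        return {lexical for (w, lexical) in full_form if w == surface}

    print(tau("houses"))   # {'house⟨n⟩⟨pl⟩'}
    print(tau("house"))    # both readings: a lexical ambiguity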
[ { "math_id": 0, "text": " \\Gamma " }, { "math_id": 1, "text": " A \\subset 2^{(L^*)} " }, { "math_id": 2, "text": " L = (( \\Sigma \\cup { \\theta } ) \\times \\Gamma) \\cup (\\Sigma \\times ( \\Gamma \\cup { \\theta } )) " }, { "math_id": 3, "text": "L^*" }, { "math_id": 4, "text": " U \\subset 2^{(\\Gamma^* \\times \\Sigma^*)}" }, { "math_id": 5, "text": " E \\subset \\Sigma^* " }, { "math_id": 6, "text": " E = { w: (w,w') \\in U } " }, { "math_id": 7, "text": " \\tau : E \\rightarrow 2^{\\Gamma^{*}} " }, { "math_id": 8, "text": " \\tau(w) = w' : (w,w') \\in U " } ]
https://en.wikipedia.org/wiki?curid=15397524
1539785
Dark matter halo
Theoretical cosmological structure In modern models of physical cosmology, a dark matter halo is a basic unit of cosmological structure. It is a hypothetical region that has decoupled from cosmic expansion and contains gravitationally bound matter. A single dark matter halo may contain multiple virialized clumps of dark matter bound together by gravity, known as subhalos. Modern cosmological models, such as ΛCDM, propose that dark matter halos and subhalos may contain galaxies. The dark matter halo of a galaxy envelops the galactic disc and extends well beyond the edge of the visible galaxy. Thought to consist of dark matter, halos have not been observed directly. Their existence is inferred through observations of their effects on the motions of stars and gas in galaxies and gravitational lensing. Dark matter halos play a key role in current models of galaxy formation and evolution. Theories that attempt to explain the nature of dark matter halos with varying degrees of success include cold dark matter (CDM), warm dark matter, and massive compact halo objects (MACHOs). Rotation curves as evidence of a dark matter halo. The presence of dark matter (DM) in the halo is inferred from its gravitational effect on a spiral galaxy's rotation curve. Without large amounts of mass throughout the (roughly spherical) halo, the rotational velocity of the galaxy would decrease at large distances from the galactic center, just as the orbital speeds of the outer planets decrease with distance from the Sun. However, observations of spiral galaxies, particularly radio observations of line emission from neutral atomic hydrogen (known, in astronomical parlance, as 21 cm Hydrogen line, H one, and H I line), show that the rotation curve of most spiral galaxies flattens out, meaning that rotational velocities do not decrease with distance from the galactic center. The absence of any visible matter to account for these observations implies either that unobserved (dark) matter, first proposed by Ken Freeman in 1970, exist, or that the theory of motion under gravity (general relativity) is incomplete. Freeman noticed that the expected decline in velocity was not present in NGC 300 nor M33, and considered an undetected mass to explain it. The DM Hypothesis has been reinforced by several studies. Formation and structure of dark matter halos. The formation of dark matter halos is believed to have played a major role in the early formation of galaxies. During initial galactic formation, the temperature of the baryonic matter should have still been much too high for it to form gravitationally self-bound objects, thus requiring the prior formation of dark matter structure to add additional gravitational interactions. The current hypothesis for this is based on cold dark matter (CDM) and its formation into structure early in the universe. The hypothesis for CDM structure formation begins with density perturbations in the Universe that grow linearly until they reach a critical density, after which they would stop expanding and collapse to form gravitationally bound dark matter halos. The spherical collapse framework analytically models the formation and growth of such halos. These halos would continue to grow in mass (and size), either through accretion of material from their immediate neighborhood, or by merging with other halos. Numerical simulations of CDM structure formation have been found to proceed as follows: A small volume with small perturbations initially expands with the expansion of the Universe. 
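A small numerical sketch of the rotation-curve argument: if essentially all of a galaxy's mass sat inside a central region, circular speeds would fall off roughly as r^(−1/2) at large radii, whereas a flat curve requires the enclosed mass to keep growing with radius. All numbers below are arbitrary illustrative values, not fits to any galaxy:

    import math

    G = 6.674e-11                        # gravitational constant, SI units

    def keplerian_speed(r, enclosed_mass):
        """Circular speed if essentially all of enclosed_mass lies inside radius r."""
        return math.sqrt(G * enclosed_mass / r)

    M_visible = 1.0e41                   # kg; a made-up centrally concentrated 'visible' mass
    v_flat = 2.0e5                       # m/s; a typical flat rotation speed (~200 km/s)

    for r in (1e20, 2e20, 4e20, 8e20):   # metres, roughly 3 to 26 kiloparsecs
        v_kep = keplerian_speed(r, M_visible)
        m_needed = v_flat ** 2 * r / G   # mass that must lie within r to keep the curve flat
        print(f"r = {r:.0e} m  Keplerian v = {v_kep / 1e3:5.0f} km/s  "
              f"mass needed for a flat {v_flat / 1e3:.0f} km/s curve = {m_needed:.2e} kg")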
As time proceeds, small-scale perturbations grow and collapse to form small halos. At a later stage, these small halos merge to form a single virialized dark matter halo with an ellipsoidal shape, which reveals some substructure in the form of dark matter sub-halos. The use of CDM overcomes issues associated with the normal baryonic matter because it removes most of the thermal and radiative pressures that were preventing the collapse of the baryonic matter. The fact that the dark matter is cold compared to the baryonic matter allows the DM to form these initial, gravitationally bound clumps. Once these subhalos formed, their gravitational interaction with baryonic matter is enough to overcome the thermal energy, and allow it to collapse into the first stars and galaxies. Simulations of this early galaxy formation matches the structure observed by galactic surveys as well as observation of the Cosmic Microwave Background. Density profiles. A commonly used model for galactic dark matter halos is the pseudo-isothermal halo: formula_0 where formula_1 denotes the finite central density and formula_2 the core radius. This provides a good fit to most rotation curve data. However, it cannot be a complete description, as the enclosed mass fails to converge to a finite value as the radius tends to infinity. The isothermal model is, at best, an approximation. Many effects may cause deviations from the profile predicted by this simple model. For example, (i) collapse may never reach an equilibrium state in the outer region of a dark matter halo, (ii) non-radial motion may be important, and (iii) mergers associated with the (hierarchical) formation of a halo may render the spherical-collapse model invalid. Numerical simulations of structure formation in an expanding universe lead to the empirical NFW (Navarro–Frenk–White) profile: formula_3 where formula_4 is a scale radius, formula_5 is a characteristic (dimensionless) density, and formula_6 = formula_7 is the critical density for closure. The NFW profile is called 'universal' because it works for a large variety of halo masses, spanning four orders of magnitude, from individual galaxies to the halos of galaxy clusters. This profile has a finite gravitational potential even though the integrated mass still diverges logarithmically. It has become conventional to refer to the mass of a halo at a fiducial point that encloses an overdensity 200 times greater than the critical density of the universe, though mathematically the profile extends beyond this notational point. It was later deduced that the density profile depends on the environment, with the NFW appropriate only for isolated halos. NFW halos generally provide a worse description of galaxy data than does the pseudo-isothermal profile, leading to the cuspy halo problem. Higher resolution computer simulations are better described by the Einasto profile: formula_8 where r is the spatial (i.e., not projected) radius. The term formula_9 is a function of n such that formula_10 is the density at the radius formula_11 that defines a volume containing half of the total mass. While the addition of a third parameter provides a slightly improved description of the results from numerical simulations, it is not observationally distinguishable from the 2 parameter NFW halo, and does nothing to alleviate the cuspy halo problem. Shape. The collapse of overdensities in the cosmic density field is generally aspherical. So, there is no reason to expect the resulting halos to be spherical. 
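The three density profiles above written out as plain functions. This is a direct transcription of the formulas in this section, with unit-free illustrative parameter values, and with the Einasto coefficient d_n passed in as a constant since its exact dependence on n is not reproduced here:

    import math

    def pseudo_isothermal(r, rho0, r_c):
        """rho(r) = rho0 * [1 + (r/r_c)^2]^(-1): finite central density rho0, core radius r_c."""
        return rho0 / (1.0 + (r / r_c) ** 2)

    def nfw(r, rho_crit, delta_c, r_s):
        """NFW profile: rho(r) = rho_crit * delta_c / [(r/r_s) * (1 + r/r_s)^2]."""
        x = r / r_s
        return rho_crit * delta_c / (x * (1.0 + x) ** 2)

    def einasto(r, rho_e, r_e, n, d_n):
        """Einasto profile: rho(r) = rho_e * exp(-d_n * ((r/r_e)^(1/n) - 1))."""
        return rho_e * math.exp(-d_n * ((r / r_e) ** (1.0 / n) - 1.0))

    # Unit-free, purely illustrative parameter choices (not fits to any halo):
    for r in (0.1, 1.0, 10.0):
        print(r,
              round(pseudo_isothermal(r, rho0=1.0, r_c=1.0), 4),
              round(nfw(r, rho_crit=1.0, delta_c=10.0, r_s=1.0), 4),
              round(einasto(r, rho_e=1.0, r_e=1.0, n=6.0, d_n=17.0), 4))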
Even the earliest simulations of structure formation in a CDM universe emphasized that the halos are substantially flattened. Subsequent work has shown that halo equidensity surfaces can be described by ellipsoids characterized by the lengths of their axes. Because of uncertainties in both the data and the model predictions, it is still unclear whether the halo shapes inferred from observations are consistent with the predictions of ΛCDM cosmology. Halo substructure. Up until the end of the 1990s, numerical simulations of halo formation revealed little substructure. With increasing computing power and better algorithms, it became possible to use greater numbers of particles and obtain better resolution. Substantial amounts of substructure are now expected. When a small halo merges with a significantly larger halo it becomes a subhalo orbiting within the potential well of its host. As it orbits, it is subjected to strong tidal forces from the host, which cause it to lose mass. In addition the orbit itself evolves as the subhalo is subjected to dynamical friction which causes it to lose energy and angular momentum to the dark matter particles of its host. Whether a subhalo survives as a self-bound entity depends on its mass, density profile, and its orbit. Angular momentum. As originally pointed out by Hoyle and first demonstrated using numerical simulations by Efstathiou & Jones, asymmetric collapse in an expanding universe produces objects with significant angular momentum. Numerical simulations have shown that the spin parameter distribution for halos formed by dissipation-less hierarchical clustering is well fit by a log-normal distribution, the median and width of which depend only weakly on halo mass, redshift, and cosmology: formula_12 with formula_13 and formula_14. At all halo masses, there is a marked tendency for halos with higher spin to be in denser regions and thus to be more strongly clustered. Milky Way dark matter halo. The visible disk of the Milky Way Galaxy is thought to be embedded in a much larger, roughly spherical halo of dark matter. The dark matter density drops off with distance from the galactic center. It is now believed that about 95% of the galaxy is composed of dark matter, a type of matter that does not seem to interact with the rest of the galaxy's matter and energy in any way except through gravity. The luminous matter makes up approximately solar masses. The dark matter halo is likely to include around to solar masses of dark matter. A 2014 Jeans analysis of stellar motions calculated the dark matter density (at the sun's distance from the galactic centre) = 0.0088 (+0.0024 −0.0018) solar masses/parsec^3. References.
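The log-normal spin-parameter distribution quoted above, coded directly from the formula with the stated values λ̄ ≈ 0.035 and σ_lnλ ≈ 0.5 (a sketch; the constant and function names are ours):

    import math

    LAMBDA_BAR = 0.035      # median spin parameter
    SIGMA_LN   = 0.5        # log-normal width

    def spin_distribution(lam):
        """P(lambda): log-normal distribution of the halo spin parameter."""
        norm = 1.0 / (math.sqrt(2.0 * math.pi) * SIGMA_LN)
        return norm * math.exp(-math.log(lam / LAMBDA_BAR) ** 2 /
                               (2.0 * SIGMA_LN ** 2)) / lam

    # The density peaks slightly below the median value of about 0.035:
    for lam in (0.01, 0.035, 0.1):
        print(lam, round(spin_distribution(lam), 3))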
[ { "math_id": 0, "text": " \\rho(r) = \\rho_o \\left[1+\\left(\\frac{r}{r_c}\\right)^2\\right]^{-1}" }, { "math_id": 1, "text": "\\rho_o" }, { "math_id": 2, "text": "r_c" }, { "math_id": 3, "text": "\\rho(r) = \\frac{\\rho_{crit} \\delta_c}{\\left(\\frac{r}{r_s}\\right)\\left(1+\\frac{r}{r_s}\\right)^2}" }, { "math_id": 4, "text": "r_s" }, { "math_id": 5, "text": "\\delta_c" }, { "math_id": 6, "text": "\\rho_{crit}" }, { "math_id": 7, "text": "3H^2/8\\pi G" }, { "math_id": 8, "text": "\\rho(r) = \\rho_e \\exp\\left[ -d_n \\left(\\left(\\frac{r}{r_e}\\right)^{\\frac{1}{n}}-1\\right)\\right]" }, { "math_id": 9, "text": "d_n" }, { "math_id": 10, "text": "\\rho_e" }, { "math_id": 11, "text": "r_e" }, { "math_id": 12, "text": "\\rho(\\lambda)d\\lambda = \\frac{1}{\\sqrt{2 \\pi} \\sigma_{ln\\lambda}} \\exp \\left[-\\frac{\\ln\\left(\\frac{\\lambda}{\\bar{\\lambda}}\\right)^2}{2 \\sigma^2_{\\ln\\lambda}}\\right] \\frac{d\\lambda}{\\lambda}" }, { "math_id": 13, "text": "\\bar{\\lambda} \\approx 0.035" }, { "math_id": 14, "text": "\\sigma_{ln\\lambda} \\approx 0.5" } ]
https://en.wikipedia.org/wiki?curid=1539785
15397886
Hyperdeterminant
Concept in algebra In algebra, the hyperdeterminant is a generalization of the determinant. Whereas a determinant is a scalar valued function defined on an "n" × "n" square matrix, a hyperdeterminant is defined on a multidimensional array of numbers or tensor. Like a determinant, the hyperdeterminant is a homogeneous polynomial with integer coefficients in the components of the tensor. Many other properties of determinants generalize in some way to hyperdeterminants, but unlike a determinant, the hyperdeterminant does not have a simple geometric interpretation in terms of volumes. There are at least three definitions of hyperdeterminant. The first was discovered by Arthur Cayley in 1843 and presented to the Cambridge Philosophical Society. It is in two parts and Cayley's first hyperdeterminant is covered in the second part. It is usually denoted by det0. The second Cayley hyperdeterminant originated in 1845 and is often denoted "Det". This definition is a discriminant for a singular point on a scalar valued multilinear map. Cayley's first hyperdeterminant is defined only for hypercubes having an even number of dimensions (although variations exist in odd dimensions). Cayley's second hyperdeterminant is defined for a restricted range of hypermatrix formats (including hypercubes of any dimension). The third hyperdeterminant, most recently defined by Glynn, occurs only for fields of prime characteristic "p". It is denoted by det"p" and acts on all hypercubes over such a field. The first and third hyperdeterminants are "multiplicative"; the second is multiplicative only in the case of "boundary" formats. The first and third hyperdeterminants also have closed formulae as polynomials and therefore their degrees are known, whereas the second does not appear to have a closed formula, and its degree is not known in all cases. The notation for determinants can be extended to hyperdeterminants without change or ambiguity. Hence the hyperdeterminant of a hypermatrix "A" may be written using the vertical bar notation as |"A"| or as "det"("A"). A standard modern textbook on Cayley's second hyperdeterminant Det (as well as many other results) is "Discriminants, Resultants and Multidimensional Determinants" by Gel'fand, Kapranov and Zelevinsky. Their notation and terminology are followed in the next section. Cayley's second hyperdeterminant Det. In the special case of a 2 × 2 × 2 hypermatrix the hyperdeterminant is known as Cayley's hyperdeterminant after the British mathematician Arthur Cayley who discovered it. The quartic expression for Cayley's hyperdeterminant of a hypermatrix "A" with components "a""ijk", "i", "j", "k" ∊ {0, 1} is given by Det("A") = "a"000²"a"111² + "a"001²"a"110² + "a"010²"a"101² + "a"100²"a"011² − 2"a"000"a"001"a"110"a"111 − 2"a"000"a"010"a"101"a"111 − 2"a"000"a"011"a"100"a"111 − 2"a"001"a"010"a"101"a"110 − 2"a"001"a"011"a"110"a"100 − 2"a"010"a"011"a"101"a"100 + 4"a"000"a"011"a"101"a"110 + 4"a"001"a"010"a"100"a"111.
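The quartic above, transcribed into a short function. The nested-list indexing a[i][j][k] and the two test tensors are our own illustrative choices; the polynomial itself is copied term by term from the expression in this section:

    def cayley_hyperdet(a):
        """Cayley's 2x2x2 hyperdeterminant Det(A) for a[i][j][k], i, j, k in {0, 1}."""
        return (a[0][0][0]**2 * a[1][1][1]**2 + a[0][0][1]**2 * a[1][1][0]**2
                + a[0][1][0]**2 * a[1][0][1]**2 + a[1][0][0]**2 * a[0][1][1]**2
                - 2*a[0][0][0]*a[0][0][1]*a[1][1][0]*a[1][1][1]
                - 2*a[0][0][0]*a[0][1][0]*a[1][0][1]*a[1][1][1]
                - 2*a[0][0][0]*a[0][1][1]*a[1][0][0]*a[1][1][1]
                - 2*a[0][0][1]*a[0][1][0]*a[1][0][1]*a[1][1][0]
                - 2*a[0][0][1]*a[0][1][1]*a[1][1][0]*a[1][0][0]
                - 2*a[0][1][0]*a[0][1][1]*a[1][0][1]*a[1][0][0]
                + 4*a[0][0][0]*a[0][1][1]*a[1][0][1]*a[1][1][0]
                + 4*a[0][0][1]*a[0][1][0]*a[1][0][0]*a[1][1][1])

    # a[i][j][k] = 1 only for i=j=k=0 and i=j=k=1 ("diagonal" tensor): Det = 1.
    diag = [[[1, 0], [0, 0]], [[0, 0], [0, 1]]]
    print(cayley_hyperdet(diag))         # 1

    # A rank-one tensor a_ijk = x_i * y_j * z_k is degenerate, so Det vanishes.
    x, y, z = (1, 2), (3, 4), (5, 6)
    rank_one = [[[x[i]*y[j]*z[k] for k in (0, 1)] for j in (0, 1)] for i in (0, 1)]
    print(cayley_hyperdet(rank_one))     # 0

The second test reflects the discriminant property discussed next: a rank-one (decomposable) tensor has a non-trivial singular point, so its hyperdeterminant is zero.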
This expression acts as a discriminant in the sense that it is zero "if and only if" there is a non-zero solution in the six unknowns "x""i", "y""i", "z""i" (with superscript "i" = 0 or 1) of the following system of equations:
"a"000"x"⁰"y"⁰ + "a"010"x"⁰"y"¹ + "a"100"x"¹"y"⁰ + "a"110"x"¹"y"¹ = 0
"a"001"x"⁰"y"⁰ + "a"011"x"⁰"y"¹ + "a"101"x"¹"y"⁰ + "a"111"x"¹"y"¹ = 0
"a"000"x"⁰"z"⁰ + "a"001"x"⁰"z"¹ + "a"100"x"¹"z"⁰ + "a"101"x"¹"z"¹ = 0
"a"010"x"⁰"z"⁰ + "a"011"x"⁰"z"¹ + "a"110"x"¹"z"⁰ + "a"111"x"¹"z"¹ = 0
"a"000"y"⁰"z"⁰ + "a"001"y"⁰"z"¹ + "a"010"y"¹"z"⁰ + "a"011"y"¹"z"¹ = 0
"a"100"y"⁰"z"⁰ + "a"101"y"⁰"z"¹ + "a"110"y"¹"z"⁰ + "a"111"y"¹"z"¹ = 0.
The hyperdeterminant can be written in a more compact form using the Einstein convention for summing over indices and the Levi-Civita symbol, which is an alternating tensor density with components ε"ij" specified by ε00 = ε11 = 0, ε01 = −ε10 = 1: "b""kn" = (1/2)ε"il"ε"jm""a""ijk""a""lmn" and Det("A") = (1/2)ε"il"ε"jm""b""ij""b""lm". Using the same conventions we can define a multilinear form "f"(x,y,z) = "a""ijk" "x""i""y""j""z""k". Then the hyperdeterminant is zero if and only if there is a non-trivial point where all partial derivatives of "f" vanish. As a tensor expression. The above determinant can be written in terms of a generalisation of the Levi-Civita symbol: formula_0 where "f" is a generalisation of the Levi-Civita symbol which allows two indices to be the same: formula_1 formula_2 where the "f" satisfy: formula_3 As a discriminant. For symmetric 2 × 2 × 2 × ⋯ hypermatrices, the hyperdeterminant is the discriminant of a polynomial. For example, formula_4 formula_5 formula_6 formula_7 Then Det("A") is the discriminant of formula_8 Other general hyperdeterminants related to Cayley's Det. Definitions. In the general case a hyperdeterminant is defined as a discriminant for a multilinear map "f" from finite-dimensional vector spaces "V""i" to their underlying field "K", which may be formula_9 or formula_10. formula_11 "f" can be identified with a tensor in the tensor product of each dual space "V"*"i" formula_12 By definition a hyperdeterminant "Det"("f") is a polynomial in components of the tensor "f" which is zero if and only if the map "f" has a non-trivial point where all partial derivatives with respect to the components of its vector arguments vanish (a non-trivial point means that none of the vector arguments are zero). The vector spaces "V""i" need not have the same dimensions, and the hyperdeterminant is said to be of format ("k"1, ..., "k""r"), "k""i" > 0, if the dimension of each space "V""i" is "k""i" + 1. It can be shown that the hyperdeterminant exists for a given format and is unique up to a scalar factor, if and only if the largest number in the format is less than or equal to the sum of the other numbers in the format. This definition does not provide a means to construct the hyperdeterminant and in general this is a difficult task. For hyperdeterminants with formats where "r" ≥ 4 the number of terms is usually too large to write out the hyperdeterminant in full. For larger "r" even the degree of the polynomial increases rapidly and does not have a convenient general formula. Examples. The case of formats with "r" = 1 deals with vectors of length "k"1 + 1. In this case the sum of the other format numbers is zero and "k"1 is always greater than zero so no hyperdeterminants exist. The case of "r" = 2 deals with ("k"1 + 1) × ("k"2 + 1) matrices.
Each format number must be greater than or equal to the other, therefore only square matrices "S" have hyperdeterminants and they can be identified with the determinant det("S"). Applying the definition of the hyperdeterminant as a discriminant to this case requires that det("S") is zero when there are vectors "X" and "Y" such that the matrix equations "SX" = 0 and "YS" = 0 have solutions for non-zero "X" and "Y". For "r" &gt; 2 there are hyperdeterminants with different formats satisfying the format inequality. For example, Cayley's 2 × 2 × 2 hyperdeterminant has format (1, 1, 1) and a 2 × 2 × 3 hyperdeterminant of format (1, 1, 2) also exists. However a 2 × 2 × 4 hyperdeterminant would have format (1, 1, 3) but 3 &gt; 1 + 1 so it does not exist. Degree. Since the hyperdeterminant is homogeneous in its variables it has a well-defined degree that is a function of the format and is written "N"("k"1, ..., "k""r"). In special cases we can write down an expression for the degree. For example, a hyperdeterminant is said to be of boundary format when the largest format number is the sum of the others and in this case we have formula_13 For hyperdeterminants of dimensions 2"r", a convenient generating formula for the degrees "N""r" is formula_14 In particular for "r" = 2,3,4,5,6 the degree is respectively 2, 4, 24, 128, 880 and then grows very rapidly. Three other special formulae for computing the degree of hyperdeterminants are given in for 2 × "m" × "m" use "N"(1, "m" − 1, "m" − 1) = 2"m"("m" − 1) for 3 × "m" × "m" use "N"(2, "m" − 1, "m" − 1) = 3"m"("m" − 1)2 for 4 × "m" × "m" use "N"(3, "m" − 1, "m" − 1) = (2/3)"m"("m" − 1)("m" − 2)(5"m" − 3) A general result that follows from the hyperdeterminants product rule and invariance properties listed below is that the least common multiple of the dimensions of the vector spaces on which the linear map acts divides the degree of the hyperdeterminant, that is, lcm("k"1 + 1, ..., "k"r + 1) | "N"("k"1, ..., "k""r"). Properties of hyperdeterminants. Hyperdeterminants generalise many of the properties of determinants. The property of being a discriminant is one of them and it is used in the definition above. Multiplicative properties. One of the most familiar properties of determinants is the multiplication rule which is sometimes known as the Binet-Cauchy formula. For square "n" × "n" matrices "A" and "B" the rule says that det("AB") = det("A")det("B") This is one of the harder rules to generalize from determinants to hyperdeterminants because generalizations of products of hypermatrices can give hypermatrices of different sizes. The full domain of cases in which the product rule can be generalized is still a subject of research. However, there are some basic instances that can be stated. Given a multilinear form "f"(x1, ..., x"r") we can apply a linear transformation on the last argument using an "n" × "n" matrix "B", y"r" = "B" x"r". This generates a new multilinear form of the same format, "g"(x1, ..., xr) = "f"(x1, ..., y"r") In terms of hypermatrices this defines a product which can be written "g" = "f"."B" It is then possible to use the definition of the hyperdeterminant to show that det("f"."B") = det("f")det("B")"N"/"n" where "n" is the degree of the hyperdeterminant. This generalises the product rule for matrices. Further generalizations of the product rule have been demonstrated for appropriate products of hypermatrices of boundary format. Cayley's first hyperdeterminant det0 is multiplicative in the following sense. 
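A quick numerical check of the three special-case degree formulas above together with the divisibility property lcm("k"1 + 1, ..., "k"r + 1) | "N" (a sketch; requires Python 3.9+ for math.lcm, and the function names are ours):

    from math import lcm

    # Degrees for the three special formats quoted above, written by dimensions.
    def deg_2mm(m): return 2 * m * (m - 1)                                  # 2 x m x m
    def deg_3mm(m): return 3 * m * (m - 1) ** 2                             # 3 x m x m
    def deg_4mm(m): return 2 * m * (m - 1) * (m - 2) * (5 * m - 3) // 3     # 4 x m x m

    print(deg_2mm(2))   # 4: recovers the degree of Cayley's 2 x 2 x 2 hyperdeterminant

    # Divisibility property: the lcm of the dimensions divides the degree.
    for m in (3, 4, 5, 6):
        for dims, n in (((2, m, m), deg_2mm(m)),
                        ((3, m, m), deg_3mm(m)),
                        ((4, m, m), deg_4mm(m))):
            assert n % lcm(*dims) == 0, (dims, n)
    print("lcm(dimensions) divides the degree in all checked cases")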
Let "A" be a "r"-dimensional "n" × ... × "n" hypermatrix with elements "a""i", ..., "k", "B" be a "s"-dimensional "n" × ... × "n" hypermatrix with elements "b"..., and "C" be a ("r" + "s" − 2)-dimensional "n" × ... × "n" hypermatrix with elements "c"... such that (using Einstein notation) "c""i", ..., "j", "l", ..., "m" = "a""i", ..., "j""k""b""k", "l", ..., "m", then det0(C) = det0(A) det0(B). Invariance properties. A determinant is not usually considered in terms of its properties as an algebraic invariant but when determinants are generalized to hyperdeterminants the invariance is more notable. Using the multiplication rule above on the hyperdeterminant of a hypermatrix "H" times a matrix "S" with determinant equal to one gives det("H"."S") = det("H") In other words, the hyperdeterminant is an algebraic invariant under the action of the special linear group SL("n") on the hypermatrix. The transformation can be equally well applied to any of the vector spaces on which the multilinear map acts to give another distinct invariance. This leads to the general result, The hyperdeterminant of format formula_15 is an invariant under an action of the group formula_16 For example, the determinant of an "n" × "n" matrix is an SL("n")2 invariant and Cayley's hyperdeterminant for a 2 × 2 × 2 hypermatrix is an SL(2)3 invariant. A more familiar property of a determinant is that if you add a multiple of a row (or column) to a different row (or column) of a square matrix then its determinant is unchanged. This is a special case of its invariance in the case where the special linear transformation matrix is an identity matrix plus a matrix with only one non-zero off-diagonal element. This property generalizes immediately to hyperdeterminants implying invariance when you add a multiple of one slice of a hypermatrix to another parallel slice. A hyperdeterminant is not the only polynomial algebraic invariant for the group acting on the hypermatrix. For example, other algebraic invariants can be formed by adding and multiplying hyperdeterminants. In general the invariants form a ring algebra and it follows from Hilbert's basis theorem that the ring is finitely generated. In other words, for a given hypermatrix format, all the polynomial algebraic invariants with integer coefficients can be formed using addition, subtraction and multiplication starting from a finite number of them. In the case of a 2 × 2 × 2 hypermatrix, all such invariants can be generated in this way from Cayley's second hyperdeterminant alone, but this is not a typical result for other formats. For example, the second hyperdeterminant for a hypermatrix of format 2 × 2 × 2 × 2 is an algebraic invariant of degree 24 yet all the invariants can be generated from a set of four simpler invariants of degree 6 and less. History and applications. The second hyperdeterminant was invented and named by Arthur Cayley in 1845, who was able to write down the expression for the 2 × 2 × 2 format, but Cayley went on to use the term for any algebraic invariant and later abandoned the concept in favour of a general theory of polynomial forms which he called "quantics". For the next 140 years there were few developments in the subject and hyperdeterminants were largely forgotten until they were rediscovered by Gel'fand, Kapranov and Zelevinsky in the 1980s as an offshoot of their work on generalized hypergeometric functions. This led to them writing their textbook in which the hyperdeterminant is reintroduced as a discriminant. 
Indeed, Cayley's first hyperdeterminant is more fundamental than his second, since it is a straightforward generalization of the ordinary determinant, and has found recent applications in the Alon–Tarsi conjecture. Since then the hyperdeterminant has found applications over a wide range of disciplines including algebraic geometry, number theory, quantum computing and string theory. In "algebraic geometry" the second hyperdeterminant is studied as a special case of an X-discriminant. A principal result is that there is a correspondence between the vertices of the Newton polytope for hyperdeterminants and the "triangulation" of a cube into simplices. In "quantum computing" the invariants on hypermatrices of format 2"N" are used to study the entanglement of "N" qubits. In "string theory" the hyperdeterminant first surfaced in connection with string dualities and black hole entropy. References. <templatestyles src="Reflist/styles.css" /> Further reading. For other historical developments not contained in the book from Gel'fand, Kapranov and Zelevinsky, see:
[ { "math_id": 0, "text": "\\mathrm{Det}(A) = f^{ijkl}f^{nmop}f^{qrst}a_{inq}a_{jmr}a_{kos}a_{lpt}" }, { "math_id": 1, "text": "f^{0011}=f^{1100}=f^{0110}=f^{1001} = -1/2" }, { "math_id": 2, "text": "f^{0101}=f^{1010}=1 " }, { "math_id": 3, "text": "f^{...abc...} + f^{...bca...}+f^{...cab...}+f^{...cba...} + f^{...acb...}+f^{...bac...} = 0." }, { "math_id": 4, "text": "a_{000}=a" }, { "math_id": 5, "text": "a_{001}=a_{010}=a_{100} = b" }, { "math_id": 6, "text": "a_{110}=a_{101}=a_{011} = c" }, { "math_id": 7, "text": "a_{111}=d" }, { "math_id": 8, "text": "ax^3 + 3bx^2 + 3cx + d." }, { "math_id": 9, "text": "\\mathbb{R}" }, { "math_id": 10, "text": "\\mathbb{C}" }, { "math_id": 11, "text": "f: V_1 \\otimes V_2 \\otimes \\cdots \\otimes V_r \\to K" }, { "math_id": 12, "text": "f \\in V^*_1 \\otimes V^*_2 \\otimes \\cdots \\otimes V^*_r" }, { "math_id": 13, "text": "N(k_2 + \\cdots + k_r, k_2, \\ldots, k_r) = \\frac{(k_2 + \\cdots + k_r + 1)!}{k_2! \\cdots k_r!}." }, { "math_id": 14, "text": "\\sum_{r=0}^\\infty N_r \\frac{z^r}{r!} = \\frac{e^{-2z}}{(1-z)^2}." }, { "math_id": 15, "text": "(k_1,\\ldots,k_r)" }, { "math_id": 16, "text": "\\mathrm{SL}(k_1+1) \\otimes \\cdots \\otimes \\mathrm{SL}(k_r+1)" } ]
https://en.wikipedia.org/wiki?curid=15397886
1539804
Sheet resistance
Electrical resistance of a thin film Sheet resistance is the resistance of a square piece of a thin material with contacts made to two opposite sides of the square. It is usually a measurement of electrical resistance of thin films that are uniform in thickness. It is commonly used to characterize materials made by semiconductor doping, metal deposition, resistive paste printing, and glass coating. Examples of these processes are: doped semiconductor regions (e.g., silicon or polysilicon), and the resistors that are screen printed onto the substrates of thick-film hybrid microcircuits. The utility of sheet resistance as opposed to resistance or resistivity is that it is directly measured using a four-terminal sensing measurement (also known as a four-point probe measurement) or indirectly by using a non-contact eddy-current-based testing device. Sheet resistance is invariable under scaling of the film contact and therefore can be used to compare the electrical properties of devices that are significantly different in size. Calculations. Sheet resistance is applicable to two-dimensional systems in which thin films are considered two-dimensional entities. When the term sheet resistance is used, it is implied that the current is along the plane of the sheet, not perpendicular to it. In a regular three-dimensional conductor, the resistance can be written asformula_0where Upon combining the resistivity with the thickness, the resistance can then be written asformula_6where formula_7 is the sheet resistance. If the film thickness is known, the bulk resistivity formula_1 (in Ω·m) can be calculated by multiplying the sheet resistance by the film thickness in m:formula_8 Units. Sheet resistance is a special case of resistivity for a uniform sheet thickness. Commonly, resistivity (also known as bulk resistivity, specific electrical resistivity, or volume resistivity) is in units of Ω·m, which is more completely stated in units of Ω·m2/m (Ω·area/length). When divided by the sheet thickness (m), the units are Ω·m·(m/m)/m = Ω. The term "(m/m)" cancels, but represents a special "square" situation yielding an answer in ohms. An alternative, common unit is "ohms square" (denoted "formula_9") or "ohms per square" (denoted "Ω/sq" or "formula_10"), which is dimensionally equal to an ohm, but is exclusively used for sheet resistance. This is an advantage, because sheet resistance of 1 Ω could be taken out of context and misinterpreted as bulk resistance of 1 ohm, whereas sheet resistance of 1 Ω/sq cannot thus be misinterpreted. The reason for the name "ohms per square" is that a square sheet with sheet resistance 10 ohm/square has an actual resistance of 10 ohm, regardless of the size of the square. (For a square, formula_11, so formula_12.) The unit can be thought of as, loosely, "ohms · aspect ratio". Example: A 3-unit long by 1-unit wide (aspect ratio = 3) sheet made of material having a sheet resistance of 21 Ω/sq would measure 63 Ω (since it is composed of three 1-unit by 1-unit squares), if the 1-unit edges were attached to an ohmmeter that made contact entirely over each edge. For semiconductors. 
For semiconductors doped through diffusion or surface peaked ion implantation we define the sheet resistance using the average resistivity formula_13 of the material:formula_14which in materials with majority-carrier properties can be approximated by (neglecting intrinsic charge carriers):formula_15where formula_16 is the junction depth, formula_17 is the majority-carrier mobility, formula_18 is the carrier charge, and formula_19 is the net impurity concentration in terms of depth. Knowing the background carrier concentration formula_20 and the surface impurity concentration, the "sheet resistance-junction depth" product formula_21 can be found using Irvin's curves, which are numerical solutions to the above equation. Measurement. A four-point probe is used to avoid contact resistance, which can often have the same magnitude as the sheet resistance. Typically a constant current is applied to two probes, and the potential on the other two probes is measured with a high-impedance voltmeter. A geometry factor needs to be applied according to the shape of the four-point array. Two common arrays are square and in-line. For more details see Van der Pauw method. Measurement may also be made by applying high-conductivity bus bars to opposite edges of a square (or rectangular) sample. Resistance across a square area will be measured in Ω/sq (often written as Ω/◻). For a rectangle, an appropriate geometric factor is added. Bus bars must make ohmic contact. Inductive measurement is used as well. This method measures the shielding effect created by eddy currents. In one version of this technique a conductive sheet under test is placed between two coils. This non-contact sheet resistance measurement method also allows to characterize encapsulated thin-films or films with rough surfaces. A very crude two-point probe method is to measure resistance with the probes close together and the resistance with the probes far apart. The difference between these two resistances will be of the order of magnitude of the sheet resistance. Typical applications. Sheet resistance measurements are very common to characterize the uniformity of conductive or semiconductive coatings and materials, e.g. for quality assurance. Typical applications include the inline process control of metal, TCO, conductive nanomaterials, or other coatings on architectural glass, wafers, flat panel displays, polymer foils, OLED, ceramics, etc. The contacting four-point probe is often applied for single-point measurements of hard or coarse materials. Non-contact eddy current systems are applied for sensitive or encapsulated coatings, for inline measurements and for high-resolution mapping. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
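The arithmetic above is simple enough to script; the following Python sketches use made-up numbers purely for illustration. The first reproduces the ohms-per-square example from the Units section (a 3-square strip of a 21 Ω/sq film measures 63 Ω) and converts a sheet resistance to a bulk resistivity for an assumed film thickness:

def resistance_from_sheet(r_sheet, length, width):
    # R = R_s * L / W, with the current flowing along the length
    return r_sheet * length / width

r_sheet = 21.0                                    # ohms per square, as in the example above
print(resistance_from_sheet(r_sheet, 3.0, 1.0))   # 63.0 ohms
thickness = 100e-9                                # assumed film thickness: 100 nm
print(r_sheet * thickness)                        # bulk resistivity rho = R_s * t, in ohm-metres

The second sketch evaluates the doped-semiconductor expression numerically for an assumed Gaussian implant profile with a constant, assumed mobility; a real calculation would use a depth-dependent mobility or Irvin's curves, as noted above.

import numpy as np

q = 1.602e-19                        # elementary charge, C
mu = 0.045                           # assumed constant mobility, m^2/(V s)
N_peak = 1e24                        # assumed peak net doping, m^-3
x_peak, straggle = 0.1e-6, 0.05e-6   # assumed implant peak depth and straggle, m
x_j = 0.4e-6                         # assumed junction depth, m

x = np.linspace(0.0, x_j, 2001)
N = N_peak * np.exp(-0.5 * ((x - x_peak) / straggle) ** 2)   # assumed profile N(x)
sheet_conductance = q * mu * np.sum(N) * (x[1] - x[0])       # approximates the integral of q mu N(x) dx
print(1.0 / sheet_conductance)                               # R_s in ohms per square (~1 kOhm/sq here)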
[ { "math_id": 0, "text": "R = \\rho \\frac{L}{A} = \\rho \\frac{L}{W t}," }, { "math_id": 1, "text": "\\rho" }, { "math_id": 2, "text": "L" }, { "math_id": 3, "text": "A" }, { "math_id": 4, "text": "W" }, { "math_id": 5, "text": "t" }, { "math_id": 6, "text": "R = \\frac{\\rho}{t} \\frac{L}{W} = R_\\text{s} \\frac{L}{W}," }, { "math_id": 7, "text": "R_\\text{s}" }, { "math_id": 8, "text": "\\rho = R_s \\cdot t." }, { "math_id": 9, "text": "\\Omega\\Box" }, { "math_id": 10, "text": "\\Omega/\\Box" }, { "math_id": 11, "text": "L = W" }, { "math_id": 12, "text": "R_\\text{s} = R" }, { "math_id": 13, "text": "\\overline{\\rho} = 1 / \\overline{\\sigma}" }, { "math_id": 14, "text": "R_\\text{s} = \\overline{\\rho} / x_\\text{j} = (\\overline{\\sigma} x_\\text{j})^{-1} = \\frac{1}{ \\int_0^{x_\\text{j}} \\sigma(x) \\,dx }," }, { "math_id": 15, "text": "R_\\text{s} = \\frac{1}{\\int_0^{x_\\text{j}} \\mu q N(x) \\,dx}," }, { "math_id": 16, "text": "x_\\text{j}" }, { "math_id": 17, "text": "\\mu" }, { "math_id": 18, "text": "q" }, { "math_id": 19, "text": "N(x)" }, { "math_id": 20, "text": "N_\\text{B}" }, { "math_id": 21, "text": "R_\\text{s} x_\\text{j}" } ]
https://en.wikipedia.org/wiki?curid=1539804
15398838
Contour set
In mathematics, contour sets generalize and formalize the everyday notions of everything that is superior or equivalent to a given element, and everything that is inferior or equivalent to it. Formal definitions. Given a relation on pairs of elements of set formula_0 formula_1 and an element formula_2 of formula_0 formula_3 The upper contour set of formula_2 is the set of all formula_4 that are related to formula_2: formula_5 The lower contour set of formula_2 is the set of all formula_4 such that formula_2 is related to them: formula_6 The strict upper contour set of formula_2 is the set of all formula_4 that are related to formula_2 without formula_2 being "in this way" related to any of them: formula_7 The strict lower contour set of formula_2 is the set of all formula_4 such that formula_2 is related to them without any of them being "in this way" related to formula_2: formula_8 The formal expressions of the last two may be simplified if we have defined formula_9 so that formula_10 is related to formula_11 but formula_11 is "not" related to formula_10, in which case the strict upper contour set of formula_2 is formula_12 and the strict lower contour set of formula_2 is formula_13 Contour sets of a function. In the case of a function formula_14 considered in terms of relation formula_15, reference to the contour sets of the function is implicitly to the contour sets of the implied relation formula_16 Examples. Arithmetic. Consider a real number formula_2, and the relation formula_17. Then the upper contour set of formula_2 is the set of all real numbers greater than or equal to formula_2, the lower contour set is the set of all real numbers less than or equal to formula_2, and the strict contour sets are obtained by replacing the weak inequalities with strict ones. Consider, more generally, the relation formula_18 Then the upper contour set of formula_2 is the set of all formula_4 such that formula_19, the strict upper contour set is the set of all formula_4 such that formula_20, the lower contour set is the set of all formula_4 such that formula_21, and the strict lower contour set is the set of all formula_4 such that formula_22. It would be "technically" possible to define contour sets in terms of the relation formula_23 though such definitions would tend to confound ready understanding. In the case of a real-valued function formula_14 (whose arguments might or might not be themselves real numbers), reference to the contour sets of the function is implicitly to the contour sets of the relation formula_18 Note that the arguments to formula_14 might be vectors, and that the notation used might instead be formula_24 Economics. In economics, the set formula_0 could be interpreted as a set of goods and services or of possible outcomes, the relation formula_25 as "strict preference", and the relationship formula_26 as "weak preference". Then the upper and lower contour sets of an outcome formula_2 collect, respectively, the outcomes that are weakly preferred to formula_2 and the outcomes to which formula_2 is weakly preferred. Such preferences might be captured by a utility function formula_27, in which case the upper contour set of formula_2 is the set of all formula_4 such that formula_28, the strict upper contour set is the set of all formula_4 such that formula_29, the lower contour set is the set of all formula_4 such that formula_30, and the strict lower contour set is the set of all formula_4 such that formula_31. Complementarity. On the assumption that formula_26 is a total ordering of formula_0, the complement of the upper contour set is the strict lower contour set. formula_32 formula_33 and the complement of the strict upper contour set is the lower contour set. formula_34 formula_35
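As a small illustration of the definitions above (not part of the original article), the four contour sets can be computed directly for a finite set and an explicit relation; here Python is used, the relation is ordinary ≥ on a handful of integers, and the function names are ad hoc:

def upper_contour(X, related, x):
    return {y for y in X if related(y, x)}                        # y related to x

def lower_contour(X, related, x):
    return {y for y in X if related(x, y)}                        # x related to y

def strict_upper_contour(X, related, x):
    return {y for y in X if related(y, x) and not related(x, y)}

def strict_lower_contour(X, related, x):
    return {y for y in X if related(x, y) and not related(y, x)}

X = {1, 2, 3, 4, 5}
geq = lambda a, b: a >= b
print(upper_contour(X, geq, 3), strict_upper_contour(X, geq, 3))   # {3, 4, 5} {4, 5}
print(lower_contour(X, geq, 3), strict_lower_contour(X, geq, 3))   # {1, 2, 3} {1, 2}
# Complementarity for a total ordering: the upper contour set and the strict
# lower contour set partition X.
print((upper_contour(X, geq, 3) | strict_lower_contour(X, geq, 3)) == X)   # True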
[ { "math_id": 0, "text": "X" }, { "math_id": 1, "text": "\\succcurlyeq~\\subseteq~X^2" }, { "math_id": 2, "text": "x" }, { "math_id": 3, "text": "x\\in X" }, { "math_id": 4, "text": "y" }, { "math_id": 5, "text": "\\left\\{ y~\\backepsilon~y\\succcurlyeq x\\right\\}" }, { "math_id": 6, "text": "\\left\\{ y~\\backepsilon~x\\succcurlyeq y\\right\\}" }, { "math_id": 7, "text": "\\left\\{ y~\\backepsilon~(y\\succcurlyeq x)\\land\\lnot(x\\succcurlyeq y)\\right\\}" }, { "math_id": 8, "text": "\\left\\{ y~\\backepsilon~(x\\succcurlyeq y)\\land\\lnot(y\\succcurlyeq x)\\right\\}" }, { "math_id": 9, "text": "\\succ~=~\\left\\{ \\left(a,b\\right)~\\backepsilon~\\left(a\\succcurlyeq b\\right)\\land\\lnot(b\\succcurlyeq a)\\right\\}" }, { "math_id": 10, "text": "a" }, { "math_id": 11, "text": "b" }, { "math_id": 12, "text": "\\left\\{ y~\\backepsilon~y\\succ x\\right\\}" }, { "math_id": 13, "text": "\\left\\{ y~\\backepsilon~x\\succ y\\right\\}" }, { "math_id": 14, "text": "f()" }, { "math_id": 15, "text": "\\triangleright" }, { "math_id": 16, "text": "(a\\succcurlyeq b)~\\Leftarrow~[f(a)\\triangleright f(b)]" }, { "math_id": 17, "text": "\\ge" }, { "math_id": 18, "text": "(a\\succcurlyeq b)~\\Leftarrow~[f(a)\\ge f(b)]" }, { "math_id": 19, "text": "f(y)\\ge f(x)" }, { "math_id": 20, "text": "f(y)>f(x)" }, { "math_id": 21, "text": "f(x)\\ge f(y)" }, { "math_id": 22, "text": "f(x)>f(y)" }, { "math_id": 23, "text": "(a\\succcurlyeq b)~\\Leftarrow~[f(a)\\le f(b)]" }, { "math_id": 24, "text": "[(a_1 ,a_2 ,\\ldots)\\succcurlyeq(b_1 ,b_2 ,\\ldots)]~\\Leftarrow~[f(a_1 ,a_2 ,\\ldots)\\ge f(b_1 ,b_2 ,\\ldots)]" }, { "math_id": 25, "text": "\\succ" }, { "math_id": 26, "text": "\\succcurlyeq" }, { "math_id": 27, "text": "u()" }, { "math_id": 28, "text": "u(y)\\ge u(x)" }, { "math_id": 29, "text": "u(y)>u(x)" }, { "math_id": 30, "text": "u(x)\\ge u(y)" }, { "math_id": 31, "text": "u(x)>u(y)" }, { "math_id": 32, "text": "X^2\\backslash\\left\\{ y~\\backepsilon~y\\succcurlyeq x\\right\\}=\\left\\{ y~\\backepsilon~x\\succ y\\right\\}" }, { "math_id": 33, "text": "X^2\\backslash\\left\\{ y~\\backepsilon~x\\succ y\\right\\}=\\left\\{ y~\\backepsilon~y\\succcurlyeq x\\right\\}" }, { "math_id": 34, "text": "X^2\\backslash\\left\\{ y~\\backepsilon~y\\succ x\\right\\}=\\left\\{ y~\\backepsilon~x\\succcurlyeq y\\right\\}" }, { "math_id": 35, "text": "X^2\\backslash\\left\\{ y~\\backepsilon~x\\succcurlyeq y\\right\\}=\\left\\{ y~\\backepsilon~y\\succ x\\right\\}" } ]
https://en.wikipedia.org/wiki?curid=15398838
15398902
Hypernetted-chain equation
Closure relation to solve the Ornstein-Zernike equation In statistical mechanics the hypernetted-chain equation is a closure relation to solve the Ornstein–Zernike equation which relates the direct correlation function to the total correlation function. It is commonly used in fluid theory to obtain e.g. expressions for the radial distribution function. It is given by: formula_0 where formula_1 is the number density of molecules, formula_2, formula_3 is the radial distribution function, formula_4 is the direct interaction between pairs. formula_5 with formula_6 being the Thermodynamic temperature and formula_7 the Boltzmann constant. Derivation. The direct correlation function represents the direct correlation between two particles in a system containing "N" − 2 other particles. It can be represented by formula_8 where formula_9 (with formula_10 the potential of mean force) and formula_11 is the radial distribution function without the direct interaction between pairs formula_4 included; i.e. we write formula_12. Thus we "approximate" formula_13 by formula_14 By expanding the indirect part of formula_3 in the above equation and introducing the function formula_15 we can approximate formula_13 by writing: formula_16 with formula_17. This equation is the essence of the hypernetted chain equation. We can equivalently write formula_18 If we substitute this result in the Ornstein–Zernike equation formula_19 one obtains the hypernetted-chain equation: formula_20
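As a sketch of how the closure is used in practice (not part of the original article), the following Python/NumPy script iterates the hypernetted-chain closure together with the Ornstein–Zernike relation to obtain g(r) for a one-component fluid. The pair potential (a soft Gaussian repulsion), the density, the grid and the mixing parameter are all assumptions chosen so that plain Picard iteration converges; the radial Fourier transforms are written as explicit sine sums for clarity rather than speed.

import numpy as np

# radial grid and its conjugate k grid (assumed sizes, for illustration only)
N, dr = 400, 0.02
r = dr * np.arange(1, N + 1)
dk = np.pi / (N * dr)
k = dk * np.arange(1, N + 1)
sin_kr = np.sin(np.outer(k, r))                  # sin(k_j r_i)

rho = 0.5                                        # assumed number density
beta_u = np.exp(-r ** 2)                         # assumed soft repulsive potential, beta*u(r)

c = np.exp(-beta_u) - 1.0                        # initial guess: zero-density limit
gamma = np.zeros_like(r)                         # gamma = h - c
mix = 0.2                                        # Picard mixing parameter

for _ in range(5000):
    # forward 3D radial Fourier transform of c(r)
    c_hat = (4.0 * np.pi * dr / k) * (sin_kr @ (r * c))
    # Ornstein-Zernike relation in k-space, written for gamma = h - c
    gamma_hat = rho * c_hat ** 2 / (1.0 - rho * c_hat)
    # back-transform to gamma(r)
    gamma = (dk / (2.0 * np.pi ** 2 * r)) * (sin_kr.T @ (k * gamma_hat))
    # hypernetted-chain closure: c = exp(-beta*u + gamma) - 1 - gamma
    c_new = np.exp(-beta_u + gamma) - 1.0 - gamma
    if np.max(np.abs(c_new - c)) < 1e-9:
        break
    c = (1.0 - mix) * c + mix * c_new

g = np.exp(-beta_u + gamma)                      # radial distribution function from the closure
print(g[::50])                                   # g(r) approaches 1 at large r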
[ { "math_id": 0, "text": " \n\\ln y(r_{12}) =\\ln g(r_{12}) + \\beta u(r_{12}) =\\rho \\int \\left[h(r_{13}) - \\ln g(r_{13}) - \\beta u(r_{13})\\right] h(r_{23}) \\, d \\mathbf{r_{3}}, \\, " }, { "math_id": 1, "text": "\\rho = \\frac{N}{V}" }, { "math_id": 2, "text": " h(r) = g(r)-1" }, { "math_id": 3, "text": "g(r)" }, { "math_id": 4, "text": "u(r)" }, { "math_id": 5, "text": "\\beta = \\frac{1}{k_{\\rm B} T}" }, { "math_id": 6, "text": "T " }, { "math_id": 7, "text": "k_{\\rm B}" }, { "math_id": 8, "text": " c(r)=g_{\\rm total}(r) - g_{\\rm indirect}(r) \\, " }, { "math_id": 9, "text": "g_{\\rm total}(r)=g(r) = \\exp[-\\beta w(r)]" }, { "math_id": 10, "text": "w(r)" }, { "math_id": 11, "text": "g_{\\rm indirect}(r)" }, { "math_id": 12, "text": "g_{\\rm indirect}(r)=\\exp\\{-\\beta[w(r)-u(r)]\\}" }, { "math_id": 13, "text": "c(r)" }, { "math_id": 14, "text": " c(r)=e^{-\\beta w(r)}- e^{-\\beta[w(r)-u(r)]}. \\, " }, { "math_id": 15, "text": "y(r)=e^{\\beta u(r)}g(r) (= g_{\\rm indirect}(r) )" }, { "math_id": 16, "text": " c(r)=e^{-\\beta w(r)}-1+\\beta[w(r)-u(r)] \\, \n= g(r)-1-\\ln y(r) \\,\n= f(r)y(r)+[y(r)-1-\\ln y(r)] \\,\\, (\\text{HNC}), " }, { "math_id": 17, "text": " f(r) = e^{-\\beta u(r)}-1" }, { "math_id": 18, "text": "\nh(r) - c(r) = g(r) - 1 -c(r) = \\ln y(r). " }, { "math_id": 19, "text": "\nh(r_{12})- c(r_{12}) = \\rho \\int c(r_{13})h(r_{23})d \\mathbf{r}_{3}, " }, { "math_id": 20, "text": "\n\\ln y(r_{12}) =\\ln g(r_{12}) + \\beta u(r_{12}) =\\rho \\int \\left[h(r_{13}) -\\ln g(r_{13}) - \\beta u(r_{13})\\right] h(r_{23}) \\, d \\mathbf{r_{3}}. \\, " } ]
https://en.wikipedia.org/wiki?curid=15398902
1539973
Overlapping generations model
The overlapping generations (OLG) model is one of the dominating frameworks of analysis in the study of macroeconomic dynamics and economic growth. In contrast to the Ramsey–Cass–Koopmans neoclassical growth model in which individuals are infinitely-lived, in the OLG model individuals live a finite length of time, long enough to overlap with at least one period of another agent's life. The OLG model is the natural framework for the study of: (a) the life-cycle behavior (investment in human capital, work and saving for retirement), (b) the implications of the allocation of resources across the generations, such as Social Security, on the income per capita in the long-run, (c) the determinants of economic growth in the course of human history, and (d) the factors that triggered the fertility transition. History. The construction of the OLG model was inspired by Irving Fisher's monograph "The Theory of Interest". It was first formulated in 1947, in the context of a pure-exchange economy, by Maurice Allais, and more rigorously by Paul Samuelson in 1958. In 1965, Peter Diamond incorporated an aggregate neoclassical production into the model. This OLG model with production was further augmented with the development of the two-sector OLG model by Oded Galor, and the introduction of OLG models with endogenous fertility. Books devoted to the use of the OLG model include Azariadis' Intertemporal Macroeconomics and de la Croix and Michel's Theory of Economic Growth. Pure-exchange OLG model. The most basic OLG model has the following characteristics: formula_5 formula_6 where formula_7 is the rate of time preference. OLG model with production. Basic one-sector OLG model. The pure-exchange OLG model was augmented with the introduction of an aggregate neoclassical production by Peter Diamond.  In contrast, to Ramsey–Cass–Koopmans neoclassical growth model in which individuals are infinitely-lived and the economy is characterized by a unique steady-state equilibrium, as was established by Oded Galor and Harl Ryder, the OLG economy may be characterized by multiple steady-state equilibria, and initial conditions may therefore affect the long-run evolution of the long-run level of income per capita. Since initial conditions in the OLG model may affect economic growth in long-run, the model was useful for the exploration of the convergence hypothesis. The economy has the following characteristics: Two-sector OLG model. The one-sector OLG model was further augmented with the introduction of a two-sector OLG model by Oded Galor. The two-sector model provides a framework of analysis for the study of the sectoral adjustments to aggregate shocks and implications of international trade for the dynamics of comparative advantage. In contrast to the Uzawa two-sector neoclassical growth model, the two-sector OLG model may be characterized by multiple steady-state equilibria, and initial conditions may therefore affect the long-run position of an economy. OLG model with endogenous fertility. Oded Galor and his co-authors develop OLG models where population growth is endogenously determined to explore: (a) the importance the narrowing of the gender wage gap for the fertility decline, (b) the contribution of the rise in the return to human capital and the decline in fertility to the transition from stagnation to growth, and (c) the importance of population adjustment to technological progress for the emergence of the Malthusian trap. Dynamic inefficiency. 
One important aspect of the OLG model is that the steady state equilibrium need not be efficient, in contrast to general equilibrium models where the first welfare theorem guarantees Pareto efficiency. Because there are an infinite number of agents in the economy (summing over future time), the total value of resources is infinite, so Pareto improvements can be made by transferring resources from each young generation to the current old generation, similar to the logic described in the Hilbert Hotel. Not every equilibrium is inefficient; the efficiency of an equilibrium is strongly linked to the interest rate, and the Cass Criterion gives necessary and sufficient conditions for when an OLG competitive equilibrium allocation is inefficient. Another attribute of OLG type models is that it is possible for 'over saving' to occur when capital accumulation is added to the model—a situation which could be improved upon by a social planner by forcing households to draw down their capital stocks. However, certain restrictions on the underlying technology of production and consumer tastes can ensure that the steady state level of saving corresponds to the Golden Rule savings rate of the Solow growth model and thus guarantee intertemporal efficiency. Along the same lines, most empirical research on the subject has noted that oversaving does not seem to be a major problem in the real world. In Diamond's version of the model, individuals tend to save more than is socially optimal, leading to dynamic inefficiency. Subsequent work has investigated whether dynamic inefficiency is a characteristic of some economies and whether government programs that transfer wealth from young to old generations do reduce dynamic inefficiency. Another fundamental contribution of OLG models is that they justify the existence of money as a medium of exchange. A system of expectations exists as an equilibrium in which each new young generation accepts money from the previous old generation in exchange for consumption. They do this because they expect to be able to use that money to purchase consumption when they are the old generation. References. <templatestyles src="Reflist/styles.css" />
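The Diamond version of the model and the dynamic-inefficiency discussion above can be illustrated with one standard textbook parametrisation: log utility, Cobb–Douglas production per worker, and full depreciation of capital between (long) periods. The following Python sketch is only an example under those assumptions, with made-up parameter values, not the general model; with log utility the young save a fixed fraction of their wage, which gives the law of motion coded below, and with full depreciation the over-saving test reduces to comparing the gross return on capital with the gross population growth rate.

# Diamond OLG with log utility and Cobb-Douglas technology y = k^alpha (assumed parametrisation)
alpha = 0.3          # capital share (assumed)
beta = 0.6           # discount factor between youth and old age (assumed)
n = 0.2              # population growth per generation (assumed)

def next_k(k):
    w = (1.0 - alpha) * k ** alpha            # competitive wage per young worker
    s = beta / (1.0 + beta) * w               # savings of the young under log utility
    return s / (1.0 + n)                      # capital per worker in the next generation

k = 0.05                                      # arbitrary initial condition
for _ in range(60):
    k = next_k(k)

k_star = (beta * (1.0 - alpha) / ((1.0 + beta) * (1.0 + n))) ** (1.0 / (1.0 - alpha))
print(k, k_star)                              # the simulated path converges to the steady state

# Dynamic-inefficiency check (full-depreciation special case, not the general Cass criterion):
# over-saving occurs when the gross return 1 + r* = alpha * k*^(alpha - 1) falls below 1 + n.
gross_return = alpha * k_star ** (alpha - 1.0)
print(gross_return, 1.0 + n, gross_return < 1.0 + n)
# With these assumed numbers the steady state is dynamically efficient; a lower capital
# share, e.g. alpha = 0.15, makes the same test report over-saving instead.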
[ { "math_id": 0, "text": "N^{t}_t " }, { "math_id": 1, "text": "N^{t-1}_t " }, { "math_id": 2, "text": "N_0 " }, { "math_id": 3, "text": "N^{0}_0 =1" }, { "math_id": 4, "text": "N^{t}_t = N^{t}_{t+1} " }, { "math_id": 5, "text": " N_t^t = (1+n)^t " }, { "math_id": 6, "text": " u(c_t^t,c_t^{t+1}) = U(c_t^t) + \\beta U(c_t^{t+1})," }, { "math_id": 7, "text": " \\beta " } ]
https://en.wikipedia.org/wiki?curid=1539973
15399989
Load-loss factor
Load-loss factor (also loss load factor, LLF, or simply loss factor) is a dimensionless ratio between average and peak values of load loss (loss of electric power between the generator and the consumer in electricity distribution). Since the losses in the wires are proportional to the square of the current (and thus the square of the power), the LLF can be calculated by measuring the square of delivered power over a short interval of time (typically half an hour), calculating an average of these values over a long period (a year), and dividing by the square of the peak power exhibited during the same long period: formula_0, where formula_1 is the number of measurement intervals, formula_2 is the load during interval formula_3, and formula_4 is the peak load over the whole period. The LLF value naturally depends on the load profile. For electricity utilities, values of about 0.2–0.3 are typical (cf. 0.22 for Toronto Hydro, 0.33 for New Zealand). Multiple empirical formulae exist that relate the loss factor to the load factor (Dickert et al. in 2009 listed nine). Similarly, the ratio between the average and the peak current is called the form coefficient k or peak responsibility factor k; its typical value is between 0.2 and 0.8 for distribution networks and between 0.8 and 0.95 for transmission networks. Coefficient k describes the losses as an additional load carried by the system, and is named the loss equivalent load factor in Japan.
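A short Python sketch of the calculation, using a made-up half-hourly load profile (all numbers are illustrative):

def load_loss_factor(loads):
    # LLF = sum(L_i^2) / (NI * L_peak^2) for a list of interval loads
    peak = max(loads)
    return sum(x * x for x in loads) / (len(loads) * peak * peak)

def load_factor(loads):
    # average-to-peak ratio of the profile (the analogous ratio for current
    # is the form coefficient k mentioned above)
    return sum(loads) / (len(loads) * max(loads))

# a toy day of 48 half-hour loads in MW: low overnight, an evening peak
profile = [30.0] * 24 + [60.0] * 16 + [100.0] * 4 + [45.0] * 4
print(load_factor(profile))        # ~0.47
print(load_loss_factor(profile))   # ~0.27, smaller than the load factor as expected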
[ { "math_id": 0, "text": "{LLF}=\\frac {\\sum_{i=1}^{NI} {Load}_i^2} {NI*{Load}_{peak}^2}" }, { "math_id": 1, "text": "NI" }, { "math_id": 2, "text": "{Load}_i" }, { "math_id": 3, "text": "i" }, { "math_id": 4, "text": "{Load}_{peak}" } ]
https://en.wikipedia.org/wiki?curid=15399989
15402259
MAIFI
Reliability index The Momentary Average Interruption Frequency Index (MAIFI) is a reliability index used by electric power utilities. MAIFI is the average number of momentary interruptions that a customer would experience during a given period (typically a year). Electric power utilities may define momentary interruptions differently, with some considering a momentary interruption to be an outage of less than 1 minute in duration while others may consider a momentary interruption to be an outage of less than 5 minutes in duration. Calculation. MAIFI is calculated as formula_0 Reporting. MAIFI has tended to be less reported than other reliability indicators, such as SAIDI, SAIFI, and CAIDI. However, MAIFI is useful for tracking momentary power outages, or "blinks," that can be hidden or misrepresented by an overall outage duration index like SAIDI or SAIFI. Causes. Momentary power outages are often caused by transient faults, such as lightning strikes or vegetation contacting a power line, and many utilities use reclosers to automatically restore power quickly after a transient fault has cleared. Comparisons. MAIFI is specific to the area (power utility, state, region, county, power line, etc.) because of the many variables that affect the measure: the frequency of lightning, the number and type of trees, the strength of winds, etc. Therefore, comparing the MAIFI of one power utility to another is not valid, and MAIFI should not be used for this type of benchmarking. It is also difficult to compare this measure of reliability within a single utility: one year may have had an unusually high number of thunderstorms and thus skew any comparison to another year's MAIFI. References. <templatestyles src="Reflist/styles.css" />
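A minimal Python sketch of the calculation (the event counts and system size are made up):

def maifi(customers_interrupted_per_event, customers_served):
    # total customer momentary interruptions divided by total customers served
    return sum(customers_interrupted_per_event) / customers_served

# three momentary events (e.g. recloser operations) affecting 1200, 800 and 3000
# customers on a system serving 25,000 customers over the reporting period
print(maifi([1200, 800, 3000], 25000))    # 0.2 momentary interruptions per customer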
[ { "math_id": 0, "text": "\\mbox{MAIFI} = \\frac{\\mbox{total number of customer interruptions less than the defined time}}{\\mbox{total number of customers served}}" } ]
https://en.wikipedia.org/wiki?curid=15402259
1540333
Perron–Frobenius theorem
Theory in linear algebra In matrix theory, the Perron–Frobenius theorem, proved by Oskar Perron (1907) and Georg Frobenius (1912), asserts that a real square matrix with positive entries has a unique eigenvalue of largest magnitude and that eigenvalue is real. The corresponding eigenvector can be chosen to have strictly positive components, and also asserts a similar statement for certain classes of nonnegative matrices. This theorem has important applications to probability theory (ergodicity of Markov chains); to the theory of dynamical systems (subshifts of finite type); to economics (Okishio's theorem, Hawkins–Simon condition); to demography (Leslie population age distribution model); to social networks (DeGroot learning process); to Internet search engines (PageRank); and even to ranking of American football teams. The first to discuss the ordering of players within tournaments using Perron–Frobenius eigenvectors is Edmund Landau. Statement. Let positive and non-negative respectively describe matrices with exclusively positive real numbers as elements and matrices with exclusively non-negative real numbers as elements. The eigenvalues of a real square matrix "A" are complex numbers that make up the spectrum of the matrix. The exponential growth rate of the matrix powers "A""k" as "k" → ∞ is controlled by the eigenvalue of "A" with the largest absolute value (modulus). The Perron–Frobenius theorem describes the properties of the leading eigenvalue and of the corresponding eigenvectors when "A" is a non-negative real square matrix. Early results were due to Oskar Perron (1907) and concerned positive matrices. Later, Georg Frobenius (1912) found their extension to certain classes of non-negative matrices. Positive matrices. Let formula_0 be an formula_1 positive matrix: formula_2 for formula_3. Then the following statements hold. All of these properties extend beyond strictly positive matrices to primitive matrices (see below). Facts 1–7 can be found in Meyer chapter 8 claims 8.2.11–15 page 667 and exercises 8.2.5,7,9 pages 668–669. The left and right eigenvectors "w" and "v" are sometimes normalized so that the sum of their components is equal to 1; in this case, they are sometimes called stochastic eigenvectors. Often they are normalized so that the right eigenvector "v" sums to one, while formula_10. Non-negative matrices. There is an extension to matrices with non-negative entries. Since any non-negative matrix can be obtained as a limit of positive matrices, one obtains the existence of an eigenvector with non-negative components; the corresponding eigenvalue will be non-negative and greater than "or equal", in absolute value, to all other eigenvalues. However, for the example formula_11, the maximum eigenvalue "r" = 1 has the same absolute value as the other eigenvalue −1; while for formula_12, the maximum eigenvalue is "r" = 0, which is not a simple root of the characteristic polynomial, and the corresponding eigenvector (1, 0) is not strictly positive. However, Frobenius found a special subclass of non-negative matrices — "irreducible" matrices — for which a non-trivial generalization is possible. For such a matrix, although the eigenvalues attaining the maximal absolute value might not be unique, their structure is under control: they have the form formula_13, where "formula_14" is a real strictly positive eigenvalue, and formula_15 ranges over the complex "h"' th roots of 1 for some positive integer "h" called the period of the matrix. 
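The discussion of irreducible non-negative matrices continues below; before that, the statements for positive matrices listed above are easy to check numerically. The following Python/NumPy sketch (random seed, size and tolerances are arbitrary choices, not part of the original text) verifies, for a random positive matrix, that the spectral radius is attained by a single real positive eigenvalue, that the corresponding left and right eigenvectors can be taken strictly positive, that the Perron root lies between the smallest and largest row sums, and that "A""k"/"r""k" converges to "v" "w""T" under the normalisation in which "v" sums to one and "w""T""v" = 1.

import numpy as np

rng = np.random.default_rng(1)
n = 5
A = rng.uniform(0.1, 1.0, size=(n, n))            # a random positive matrix

vals, vecs = np.linalg.eig(A)
i = np.argmax(np.abs(vals))
r = vals[i]
print(abs(r.imag) < 1e-12, r.real > 0)            # the Perron root is real and positive
print(np.sum(np.isclose(np.abs(vals), np.abs(r))) == 1)   # and strictly dominant in modulus

v = np.real(vecs[:, i])
v = v / v.sum()                                   # right eigenvector, components sum to 1
lvals, lvecs = np.linalg.eig(A.T)
w = np.real(lvecs[:, np.argmax(np.abs(lvals))])
w = w / (w @ v)                                   # left eigenvector with w^T v = 1
print(np.all(v > 0), np.all(w > 0))               # both can be chosen strictly positive

row_sums = A.sum(axis=1)
print(row_sums.min() <= r.real <= row_sums.max()) # row-sum bounds on the Perron root

P = np.linalg.matrix_power(A, 60) / r.real ** 60  # A^k / r^k for large k
print(np.allclose(P, np.outer(v, w)))             # converges to the projection v w^T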
The eigenvector corresponding to "formula_14" has strictly positive components (in contrast with the general case of non-negative matrices, where components are only non-negative). Also all such eigenvalues are simple roots of the characteristic polynomial. Further properties are described below. Classification of matrices. Let "A" be a "n" × "n" square matrix over field "F". The matrix "A" is irreducible if any of the following equivalent properties holds. Definition 1 : "A" does not have non-trivial invariant "coordinate" subspaces. Here a non-trivial coordinate subspace means a linear subspace spanned by any proper subset of standard basis vectors of "Fn". More explicitly, for any linear subspace spanned by standard basis vectors "e""i"1 , ..., "e""i"k, 0 &lt; "k" &lt; "n" its image under the action of "A" is not contained in the same subspace. Definition 2: "A" cannot be conjugated into block upper triangular form by a permutation matrix "P": formula_16 where "E" and "G" are non-trivial (i.e. of size greater than zero) square matrices. Definition 3: One can associate with a matrix "A" a certain directed graph "G""A". It has "n" vertices labeled 1...,"n", and there is an edge from vertex "i" to vertex "j" precisely when "a""ij" ≠ 0. Then the matrix "A" is irreducible if and only if its associated graph "G""A" is strongly connected. If "F" is the field of real or complex numbers, then we also have the following condition. Definition 4: The group representation of formula_17 on formula_18 or formula_19 on formula_20 given by formula_21 has no non-trivial invariant coordinate subspaces. (By comparison, this would be an irreducible representation if there were no non-trivial invariant subspaces at all, not only considering coordinate subspaces.) A matrix is reducible if it is not irreducible. A real matrix "A" is primitive if it is non-negative and its "m"th power is positive for some natural number "m" (i.e. all entries of "Am" are positive). Let "A" be real and non-negative. Fix an index "i" and define the period of index "i" to be the greatest common divisor of all natural numbers "m" such that ("A""m")"ii" &gt; 0. When "A" is irreducible, the period of every index is the same and is called the period of "A". In fact, when "A" is irreducible, the period can be defined as the greatest common divisor of the lengths of the closed directed paths in "G""A" (see Kitchens page 16). The period is also called the index of imprimitivity (Meyer page 674) or the order of cyclicity. If the period is 1, "A" is aperiodic. It can be proved that primitive matrices are the same as irreducible aperiodic non-negative matrices. All statements of the Perron–Frobenius theorem for positive matrices remain true for primitive matrices. The same statements also hold for a non-negative irreducible matrix, except that it may possess several eigenvalues whose absolute value is equal to its spectral radius, so the statements need to be correspondingly modified. In fact the number of such eigenvalues is equal to the period. Results for non-negative matrices were first obtained by Frobenius in 1912. Perron–Frobenius theorem for irreducible non-negative matrices. Let formula_22 be an irreducible non-negative formula_23 matrix with period formula_24 and spectral radius formula_25. Then the following statements hold. formula_34 where formula_35 denotes a zero matrix and the blocks along the main diagonal are square matrices. 
formula_42 The example formula_43 shows that the (square) zero-matrices along the diagonal may be of different sizes, the blocks "A""j" need not be square, and "h" need not divide "n". Further properties. Let "A" be an irreducible non-negative matrix, then: A matrix "A" is primitive provided it is non-negative and "Am" is positive for some "m", and hence "Ak" is positive for all "k ≥ m". To check primitivity, one needs a bound on how large the minimal such "m" can be, depending on the size of "A": formula_46 Applications. Numerous books have been written on the subject of non-negative matrices, and Perron–Frobenius theory is invariably a central feature. The following examples given below only scratch the surface of its vast application domain. Non-negative matrices. The Perron–Frobenius theorem does not apply directly to non-negative matrices. Nevertheless, any reducible square matrix "A" may be written in upper-triangular block form (known as the normal form of a reducible matrix) "PAP"−1 = formula_47 where "P" is a permutation matrix and each "Bi" is a square matrix that is either irreducible or zero. Now if "A" is non-negative then so too is each block of "PAP"−1, moreover the spectrum of "A" is just the union of the spectra of the "Bi". The invertibility of "A" can also be studied. The inverse of "PAP"−1 (if it exists) must have diagonal blocks of the form "Bi"−1 so if any "Bi" isn't invertible then neither is "PAP"−1 or "A". Conversely let "D" be the block-diagonal matrix corresponding to "PAP"−1, in other words "PAP"−1 with the asterisks zeroised. If each "Bi" is invertible then so is "D" and "D"−1("PAP"−1) is equal to the identity plus a nilpotent matrix. But such a matrix is always invertible (if "Nk" = 0 the inverse of 1 − "N" is 1 + "N" + "N"2 + ... + "N""k"−1) so "PAP"−1 and "A" are both invertible. Therefore, many of the spectral properties of "A" may be deduced by applying the theorem to the irreducible "Bi". For example, the Perron root is the maximum of the ρ("Bi"). While there will still be eigenvectors with non-negative components it is quite possible that none of these will be positive. Stochastic matrices. A row (column) stochastic matrix is a square matrix each of whose rows (columns) consists of non-negative real numbers whose sum is unity. The theorem cannot be applied directly to such matrices because they need not be irreducible. If "A" is row-stochastic then the column vector with each entry 1 is an eigenvector corresponding to the eigenvalue 1, which is also ρ("A") by the remark above. It might not be the only eigenvalue on the unit circle: and the associated eigenspace can be multi-dimensional. If "A" is row-stochastic and irreducible then the Perron projection is also row-stochastic and all its rows are equal. Algebraic graph theory. The theorem has particular use in algebraic graph theory. The "underlying graph" of a nonnegative "n"-square matrix is the graph with vertices numbered 1, ..., "n" and arc "ij" if and only if "Aij" ≠ 0. If the underlying graph of such a matrix is strongly connected, then the matrix is irreducible, and thus the theorem applies. In particular, the adjacency matrix of a strongly connected graph is irreducible. Finite Markov chains. 
The theorem has a natural interpretation in the theory of finite Markov chains (where it is the matrix-theoretic equivalent of the convergence of an irreducible finite Markov chain to its stationary distribution, formulated in terms of the transition matrix of the chain; see, for example, the article on the subshift of finite type). Compact operators. More generally, it can be extended to the case of non-negative compact operators, which, in many ways, resemble finite-dimensional matrices. These are commonly studied in physics, under the name of transfer operators, or sometimes Ruelle–Perron–Frobenius operators (after David Ruelle). In this case, the leading eigenvalue corresponds to the thermodynamic equilibrium of a dynamical system, and the lesser eigenvalues to the decay modes of a system that is not in equilibrium. Thus, the theory offers a way of discovering the arrow of time in what would otherwise appear to be reversible, deterministic dynamical processes, when examined from the point of view of point-set topology. Proof methods. A common thread in many proofs is the Brouwer fixed point theorem. Another popular method is that of Wielandt (1950). He used the Collatz–Wielandt formula described above to extend and clarify Frobenius's work. Another proof is based on the spectral theory from which part of the arguments are borrowed. Perron root is strictly maximal eigenvalue for positive (and primitive) matrices. If "A" is a positive (or more generally primitive) matrix, then there exists a real positive eigenvalue "r" (Perron–Frobenius eigenvalue or Perron root), which is strictly greater in absolute value than all other eigenvalues, hence "r" is the spectral radius of "A". This statement does not hold for general non-negative irreducible matrices, which have "h" eigenvalues with the same absolute eigenvalue as "r", where "h" is the period of "A". Proof for positive matrices. Let "A" be a positive matrix, assume that its spectral radius ρ("A") = 1 (otherwise consider "A/ρ(A)"). Hence, there exists an eigenvalue λ on the unit circle, and all the other eigenvalues are less or equal 1 in absolute value. Suppose that another eigenvalue λ ≠ 1 also falls on the unit circle. Then there exists a positive integer "m" such that "Am" is a positive matrix and the real part of λ"m" is negative. Let ε be half the smallest diagonal entry of "Am" and set "T" = "Am" − "εI" which is yet another positive matrix. Moreover, if "Ax" = "λx" then "Amx" = "λmx" thus "λ""m" − "ε" is an eigenvalue of "T". Because of the choice of "m" this point lies outside the unit disk consequently "ρ"("T") &gt; 1. On the other hand, all the entries in "T" are positive and less than or equal to those in "Am" so by Gelfand's formula "ρ"("T") ≤ "ρ"("Am") ≤ "ρ"("A")"m" = 1. This contradiction means that λ=1 and there can be no other eigenvalues on the unit circle. Absolutely the same arguments can be applied to the case of primitive matrices; we just need to mention the following simple lemma, which clarifies the properties of primitive matrices. Lemma. Given a non-negative "A", assume there exists "m", such that "Am" is positive, then "A""m"+1, "A""m"+2, "A""m"+3... are all positive. "A""m"+1 = "AA""m", so it can have zero element only if some row of "A" is entirely zero, but in this case the same row of "Am" will be zero. Applying the same arguments as above for primitive matrices, prove the main claim. Power method and the positive eigenpair. 
For a positive (or more generally irreducible non-negative) matrix "A" the dominant eigenvector is real and strictly positive (for non-negative "A" respectively non-negative.) This can be established using the power method, which states that for a sufficiently generic (in the sense below) matrix "A" the sequence of vectors "b""k"+1 = "Ab""k" / | "Ab""k" | converges to the eigenvector with the maximum eigenvalue. (The initial vector "b"0 can be chosen arbitrarily except for some measure zero set). Starting with a non-negative vector "b"0 produces the sequence of non-negative vectors "bk". Hence the limiting vector is also non-negative. By the power method this limiting vector is the dominant eigenvector for "A", proving the assertion. The corresponding eigenvalue is non-negative. The proof requires two additional arguments. First, the power method converges for matrices which do not have several eigenvalues of the same absolute value as the maximal one. The previous section's argument guarantees this. Second, to ensure strict positivity of all of the components of the eigenvector for the case of irreducible matrices. This follows from the following fact, which is of independent interest: Lemma: given a positive (or more generally irreducible non-negative) matrix "A" and "v" as any non-negative eigenvector for "A", then it is necessarily strictly positive and the corresponding eigenvalue is also strictly positive. Proof. One of the definitions of irreducibility for non-negative matrices is that for all indexes "i,j" there exists "m", such that ("A""m")"ij" is strictly positive. Given a non-negative eigenvector "v", and that at least one of its components say "j"-th is strictly positive, the corresponding eigenvalue is strictly positive, indeed, given "n" such that ("A""n")"ii" &gt;0, hence: "r""n""v""i" = "A""n""v""i" ≥ ("A""n")"ii""v""i" &gt;0. Hence "r" is strictly positive. The eigenvector is strict positivity. Then given "m", such that ("A""m")"ij" &gt;0, hence: "r""m""v""j" = ("A""m""v")"j" ≥ ("A""m")"ij""v""i" &gt;0, hence "v""j" is strictly positive, i.e., the eigenvector is strictly positive. Multiplicity one. This section proves that the Perron–Frobenius eigenvalue is a simple root of the characteristic polynomial of the matrix. Hence the eigenspace associated to Perron–Frobenius eigenvalue "r" is one-dimensional. The arguments here are close to those in Meyer. Given a strictly positive eigenvector "v" corresponding to "r" and another eigenvector "w" with the same eigenvalue. (The vectors "v" and "w" can be chosen to be real, because "A" and "r" are both real, so the null space of "A-r" has a basis consisting of real vectors.) Assuming at least one of the components of "w" is positive (otherwise multiply "w" by −1). Given maximal possible "α" such that "u=v- α w" is non-negative, then one of the components of "u" is zero, otherwise "α" is not maximum. Vector "u" is an eigenvector. It is non-negative, hence by the lemma described in the previous section non-negativity implies strict positivity for any eigenvector. On the other hand, as above at least one component of "u" is zero. The contradiction implies that "w" does not exist. Case: There are no Jordan cells corresponding to the Perron–Frobenius eigenvalue "r" and all other eigenvalues which have the same absolute value. If there is a Jordan cell, then the infinity norm (A/r)k∞ tends to infinity for "k → ∞ ", but that contradicts the existence of the positive eigenvector. Given "r" = 1, or "A/r". 
Letting "v" be a Perron–Frobenius strictly positive eigenvector, so "Av=v", then: formula_48 So "Ak"∞ is bounded for all "k". This gives another proof that there are no eigenvalues which have greater absolute value than Perron–Frobenius one. It also contradicts the existence of the Jordan cell for any eigenvalue which has absolute value equal to 1 (in particular for the Perron–Frobenius one), because existence of the Jordan cell implies that "Ak"∞ is unbounded. For a two by two matrix: formula_49 hence "J""k"∞ = |"k" + "λ"| (for |"λ"| = 1), so it tends to infinity when "k" does so. Since "Jk" = "C"−1 "A""k""C", then "A""k" ≥ "J""k"/ ("C"−1 "C" ), so it also tends to infinity. The resulting contradiction implies that there are no Jordan cells for the corresponding eigenvalues. Combining the two claims above reveals that the Perron–Frobenius eigenvalue "r" is simple root of the characteristic polynomial. In the case of nonprimitive matrices, there exist other eigenvalues which have the same absolute value as "r". The same claim is true for them, but requires more work. No other non-negative eigenvectors. Given positive (or more generally irreducible non-negative matrix) "A", the Perron–Frobenius eigenvector is the only (up to multiplication by constant) non-negative eigenvector for "A". Other eigenvectors must contain negative or complex components since eigenvectors for different eigenvalues are orthogonal in some sense, but two positive eigenvectors cannot be orthogonal, so they must correspond to the same eigenvalue, but the eigenspace for the Perron–Frobenius is one-dimensional. Assuming there exists an eigenpair ("λ", "y") for "A", such that vector "y" is positive, and given ("r", "x"), where "x" – is the left Perron–Frobenius eigenvector for "A" (i.e. eigenvector for "AT"), then "rx""T""y" = ("x""T" "A") "y" = "x""T" ("Ay") = "λx""T""y", also "x""T" "y" &gt; 0, so one has: "r" = "λ". Since the eigenspace for the Perron–Frobenius eigenvalue "r" is one-dimensional, non-negative eigenvector "y" is a multiple of the Perron–Frobenius one. Collatz–Wielandt formula. Given a positive (or more generally irreducible non-negative matrix) "A", one defines the function "f" on the set of all non-negative non-zero vectors "x" such that "f(x)" is the minimum value of ["Ax"]"i" / "x""i" taken over all those "i" such that "xi" ≠ 0. Then "f" is a real-valued function, whose maximum is the Perron–Frobenius eigenvalue "r". For the proof we denote the maximum of "f" by the value "R". The proof requires to show " R = r". Inserting the Perron-Frobenius eigenvector "v" into "f", we obtain "f(v) = r" and conclude "r ≤ R". For the opposite inequality, we consider an arbitrary nonnegative vector "x" and let "ξ=f(x)". The definition of "f" gives "0 ≤ ξx ≤ Ax" (componentwise). Now, we use the positive right eigenvector "w" for "A" for the Perron-Frobenius eigenvalue "r", then " ξ wT x = wT ξx ≤ wT (Ax) = (wT A)x = r wT x ". Hence "f(x) = ξ ≤ r", which implies "R ≤ r". Perron projection as a limit: "A""k"/"r""k". Let "A" be a positive (or more generally, primitive) matrix, and let "r" be its Perron–Frobenius eigenvalue. Hence "P" is a spectral projection for the Perron–Frobenius eigenvalue "r", and is called the Perron projection. The above assertion is not true for general non-negative irreducible matrices. 
Actually the claims above (except claim 5) are valid for any matrix "M" such that there exists an eigenvalue "r" which is strictly greater than the other eigenvalues in absolute value and is the simple root of the characteristic polynomial. (These requirements hold for primitive matrices as above). Given that "M" is diagonalizable, "M" is conjugate to a diagonal matrix with eigenvalues "r"1, ... , "r""n" on the diagonal (denote "r"1 = "r"). The matrix "M""k"/"r""k" will be conjugate (1, ("r"2/"r")"k", ... , ("r""n"/"r")"k"), which tends to (1,0,0...,0), for "k → ∞", so the limit exists. The same method works for general "M" (without assuming that "M" is diagonalizable). The projection and commutativity properties are elementary corollaries of the definition: "MM""k"/"r""k" = "M""k"/"r""k" "M" ; "P"2 = lim "M"2"k"/"r"2"k" = "P". The third fact is also elementary: "M"("Pu") = "M" lim "M""k"/"r""k" "u" = lim "rM""k"+1/"r""k"+1"u", so taking the limit yields "M"("Pu") = "r"("Pu"), so image of "P" lies in the "r"-eigenspace for "M", which is one-dimensional by the assumptions. Denoting by "v", "r"-eigenvector for "M" (by "w" for "MT"). Columns of "P" are multiples of "v", because the image of "P" is spanned by it. Respectively, rows of "w". So "P" takes a form "(a v wT)", for some "a". Hence its trace equals to "(a wT v)". Trace of projector equals the dimension of its image. It was proved before that it is not more than one-dimensional. From the definition one sees that "P" acts identically on the "r"-eigenvector for "M". So it is one-dimensional. So choosing ("w""T""v") = 1, implies "P" = "vw""T". Inequalities for Perron–Frobenius eigenvalue. For any non-negative matrix "A" its Perron–Frobenius eigenvalue "r" satisfies the inequality: formula_50 This is not specific to non-negative matrices: for any matrix "A" with an eigenvalue formula_51 it is true that formula_52. This is an immediate corollary of the Gershgorin circle theorem. However another proof is more direct: Any matrix induced norm satisfies the inequality formula_53 for any eigenvalue formula_51 because, if formula_54 is a corresponding eigenvector, formula_55. The infinity norm of a matrix is the maximum of row sums: formula_56 Hence the desired inequality is exactly formula_57 applied to the non-negative matrix "A". Another inequality is: formula_58 This fact is specific to non-negative matrices; for general matrices there is nothing similar. Given that "A" is positive (not just non-negative), then there exists a positive eigenvector "w" such that "Aw" = "rw" and the smallest component of "w" (say "wi") is 1. Then "r" = ("Aw")"i" ≥ the sum of the numbers in row "i" of "A". Thus the minimum row sum gives a lower bound for "r" and this observation can be extended to all non-negative matrices by continuity. Another way to argue it is via the Collatz-Wielandt formula. One takes the vector "x" = (1, 1, ..., 1) and immediately obtains the inequality. Further proofs. Perron projection. The proof now proceeds using spectral decomposition. The trick here is to split the Perron root from the other eigenvalues. The spectral projection associated with the Perron root is called the Perron projection and it enjoys the following property: The Perron projection of an irreducible non-negative square matrix is a positive matrix. Perron's findings and also (1)–(5) of the theorem are corollaries of this result. The key point is that a positive projection always has rank one. 
This means that if "A" is an irreducible non-negative square matrix then the algebraic and geometric multiplicities of its Perron root are both one. Also if "P" is its Perron projection then "AP" = "PA" = ρ("A")"P" so every column of "P" is a positive right eigenvector of "A" and every row is a positive left eigenvector. Moreover, if "Ax" = λ"x" then "PAx" = λ"Px" = ρ("A")"Px" which means "Px" = 0 if λ ≠ ρ("A"). Thus the only positive eigenvectors are those associated with ρ("A"). If "A" is a primitive matrix with ρ("A") = 1 then it can be decomposed as "P" ⊕ (1 − "P")"A" so that "An" = "P" + (1 − "P")"A""n". As "n" increases the second of these terms decays to zero leaving "P" as the limit of "An" as "n" → ∞. The power method is a convenient way to compute the Perron projection of a primitive matrix. If "v" and "w" are the positive row and column vectors that it generates then the Perron projection is just "wv"/"vw". The spectral projections aren't neatly blocked as in the Jordan form. Here they are overlaid and each generally has complex entries extending to all four corners of the square matrix. Nevertheless, they retain their mutual orthogonality which is what facilitates the decomposition. Peripheral projection. The analysis when "A" is irreducible and non-negative is broadly similar. The Perron projection is still positive but there may now be other eigenvalues of modulus ρ("A") that negate use of the power method and prevent the powers of (1 − "P")"A" decaying as in the primitive case whenever ρ("A") = 1. So we consider the peripheral projection, which is the spectral projection of "A" corresponding to all the eigenvalues that have modulus "ρ"("A"). It may then be shown that the peripheral projection of an irreducible non-negative square matrix is a non-negative matrix with a positive diagonal. Cyclicity. Suppose in addition that ρ("A") = 1 and "A" has "h" eigenvalues on the unit circle. If "P" is the peripheral projection then the matrix "R" = "AP" = "PA" is non-negative and irreducible, "Rh" = "P", and the cyclic group "P", "R", "R"2, ..., "R""h"−1 represents the harmonics of "A". The spectral projection of "A" at the eigenvalue λ on the unit circle is given by the formula formula_59. All of these projections (including the Perron projection) have the same positive diagonal, moreover choosing any one of them and then taking the modulus of every entry invariably yields the Perron projection. Some donkey work is still needed in order to establish the cyclic properties (6)–(8) but it's essentially just a matter of turning the handle. The spectral decomposition of "A" is given by "A" = "R" ⊕ (1 − "P")"A" so the difference between "An" and "Rn" is "An" − "Rn" = (1 − "P")"A""n" representing the transients of "An" which eventually decay to zero. "P" may be computed as the limit of "Anh" as "n" → ∞. Counterexamples. The matrices "L" = formula_60, "P" = formula_61, "T" = formula_62, "M" = formula_63 provide simple examples of what can go wrong if the necessary conditions are not met. It is easily seen that the Perron and peripheral projections of "L" are both equal to "P", thus when the original matrix is reducible the projections may lose non-negativity and there is no chance of expressing them as limits of its powers. The matrix "T" is an example of a primitive matrix with zero diagonal. If the diagonal of an irreducible non-negative square matrix is non-zero then the matrix must be primitive but this example demonstrates that the converse is false. 
"M" is an example of a matrix with several missing spectral teeth. If ω = eiπ/3 then ω6 = 1 and the eigenvalues of "M" are {1,ω2,ω3=-1,ω4} with a dimension 2 eigenspace for +1 so ω and ω5 are both absent. More precisely, since "M" is block-diagonal cyclic, then the eigenvalues are {1,-1} for the first block, and {1,ω2,ω4} for the lower one Terminology. A problem that causes confusion is a lack of standardisation in the definitions. For example, some authors use the terms "strictly positive" and "positive" to mean &gt; 0 and ≥ 0 respectively. In this article "positive" means &gt; 0 and "non-negative" means ≥ 0. Another vexed area concerns "decomposability" and "reducibility": "irreducible" is an overloaded term. For avoidance of doubt a non-zero non-negative square matrix "A" such that 1 + "A" is primitive is sometimes said to be "connected". Then irreducible non-negative square matrices and connected matrices are synonymous. The nonnegative eigenvector is often normalized so that the sum of its components is equal to unity; in this case, the eigenvector is the vector of a probability distribution and is sometimes called a "stochastic eigenvector". "Perron–Frobenius eigenvalue" and "dominant eigenvalue" are alternative names for the Perron root. Spectral projections are also known as "spectral projectors" and "spectral idempotents". The period is sometimes referred to as the "index of imprimitivity" or the "order of cyclicity". Notes. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "A = (a_{ij}) " }, { "math_id": 1, "text": " n \\times n " }, { "math_id": 2, "text": " a_{ij} > 0 " }, { "math_id": 3, "text": " 1 \\le i,j \\le n " }, { "math_id": 4, "text": "\\rho(A) " }, { "math_id": 5, "text": " \\lim_{k \\rightarrow \\infty} A^k/r^k = v w^T" }, { "math_id": 6, "text": "r = \\sup_{x>0} \\inf_{y>0} \\frac{y^\\top A x}{y^\\top x} = \\inf_{x>0} \\sup_{y>0} \\frac{y^\\top A x}{y^\\top x} = \\inf_{x>0} \\sup_{y>0} \\sum_{i,j=1}^n y_i a_{ij} x_j/\\sum_{i=1}^n y_i x_i." }, { "math_id": 7, "text": "r = \\sup_p \\inf_{x>0} \\sum_{i=1}^n p_i[Ax]_i/x_i." }, { "math_id": 8, "text": "r = \\sup_{z > 0} \\ \\inf_{x>0, \\ y>0,\\ x \\circ y = z} \\frac{y^\\top A x}{y^\\top x} = \\sup_{z > 0} \\ \\inf_{x>0, \\ y>0,\\ x \\circ y = z}\\sum_{i,j=1}^n y_i a_{ij} x_j/\\sum_{i=1}^n y_i x_i." }, { "math_id": 9, "text": "\\min_i \\sum_{j} a_{ij} \\le r \\le \\max_i \\sum_{j} a_{ij}." }, { "math_id": 10, "text": "w^T v=1" }, { "math_id": 11, "text": "A = \\left(\\begin{smallmatrix}0 & 1\\\\\n1 & 0\\end{smallmatrix}\\right)" }, { "math_id": 12, "text": "A = \\left(\\begin{smallmatrix}0 & 1\\\\\n0 & 0\\end{smallmatrix}\\right)" }, { "math_id": 13, "text": "\\omega r" }, { "math_id": 14, "text": "r" }, { "math_id": 15, "text": "\\omega" }, { "math_id": 16, "text": "PAP^{-1} \\ne\n\\begin{pmatrix} E & F \\\\ O & G \\end{pmatrix}," }, { "math_id": 17, "text": "(\\mathbb R, +)" }, { "math_id": 18, "text": "\\mathbb{R}^n" }, { "math_id": 19, "text": "(\\mathbb C, +)" }, { "math_id": 20, "text": "\\mathbb{C}^n" }, { "math_id": 21, "text": "t \\mapsto\\exp(tA)" }, { "math_id": 22, "text": "A" }, { "math_id": 23, "text": "N\\times N" }, { "math_id": 24, "text": "h" }, { "math_id": 25, "text": "\\rho(A) = r" }, { "math_id": 26, "text": "r\\in\\mathbb{R}^+" }, { "math_id": 27, "text": "\\mathbf v" }, { "math_id": 28, "text": "\\mathbf w" }, { "math_id": 29, "text": "\\omega = 2\\pi/h" }, { "math_id": 30, "text": "e^{i\\omega}A" }, { "math_id": 31, "text": "e^{i\\omega}" }, { "math_id": 32, "text": "h>1" }, { "math_id": 33, "text": "P" }, { "math_id": 34, "text": "PAP^{-1}=\n\\begin{pmatrix}\nO & A_1 & O & O & \\ldots & O \\\\\nO & O & A_2 & O & \\ldots & O \\\\\n\\vdots & \\vdots &\\vdots & \\vdots & & \\vdots \\\\\nO & O & O & O & \\ldots & A_{h-1} \\\\\nA_h & O & O & O & \\ldots & O\n\\end{pmatrix},\n" }, { "math_id": 35, "text": "O" }, { "math_id": 36, "text": "\\mathbf x\n" }, { "math_id": 37, "text": "f(\\mathbf x)\n" }, { "math_id": 38, "text": "[A\\mathbf x]_i/x_i\n" }, { "math_id": 39, "text": "i\n" }, { "math_id": 40, "text": "x_i\\neq0\n" }, { "math_id": 41, "text": "f\n" }, { "math_id": 42, "text": "\\min_i \\sum_{j} a_{ij} \\le r \\le \\max_i \\sum_{j} a_{ij}." 
}, { "math_id": 43, "text": "A =\\left(\\begin{smallmatrix}\n0 & 0 & 1 \\\\\n0 & 0 & 1 \\\\\n1 & 1 & 0 \n\\end{smallmatrix}\\right)" }, { "math_id": 44, "text": "\nP A^q P^{-1}= \\begin{pmatrix}\nA_1 & O & O & \\dots & O \\\\\nO & A_2 & O & \\dots & O \\\\\n\\vdots & \\vdots & \\vdots & & \\vdots \\\\\nO & O & O & \\dots & A_d \\\\\n\\end{pmatrix}\n" }, { "math_id": 45, "text": " \\lim_{k \\rightarrow \\infty} 1/k\\sum_{i=0,...,k} A^i/r^i = ( v w^T)," }, { "math_id": 46, "text": "M=\n\\left(\\begin{smallmatrix}\n0 & 1 & 0 & 0 & \\cdots & 0 \\\\\n0 & 0 & 1 & 0 & \\cdots & 0 \\\\\n0 & 0 & 0 & 1 & \\cdots & 0 \\\\\n\\vdots & \\vdots & \\vdots & \\vdots & & \\vdots \\\\\n0 & 0 & 0 & 0 & \\cdots & 1 \\\\\n1 & 1 & 0 & 0 & \\cdots & 0\n\\end{smallmatrix}\\right)\n" }, { "math_id": 47, "text": " \\left( \\begin{smallmatrix}\nB_1 & * & * & \\cdots & * \\\\\n0 & B_2 & * & \\cdots & * \\\\\n\\vdots & \\vdots & \\vdots & & \\vdots \\\\\n0 & 0 & 0 & \\cdots & * \\\\\n0 & 0 & 0 & \\cdots & B_h\n\\end{smallmatrix}\n\\right)" }, { "math_id": 48, "text": " \\|v\\|_{\\infty}= \\|A^k v\\|_{\\infty} \\ge \\|A^k\\|_{\\infty} \\min_i (v_i), ~~\\Rightarrow~~ \\|A^k\\|_{\\infty} \\le \\|v\\|/\\min_i (v_i) " }, { "math_id": 49, "text": "\nJ^k= \\begin{pmatrix} \\lambda & 1 \\\\ 0 & \\lambda \\end{pmatrix} ^k\n=\n\\begin{pmatrix} \\lambda^k & k\\lambda^{k-1} \\\\ 0 & \\lambda^k \\end{pmatrix},\n" }, { "math_id": 50, "text": " r \\; \\le \\; \\max_i \\sum_j a_{ij}." }, { "math_id": 51, "text": "\\scriptstyle\\lambda" }, { "math_id": 52, "text": "\\scriptstyle |\\lambda| \\; \\le \\; \\max_i \\sum_j |a_{ij}|" }, { "math_id": 53, "text": "\\scriptstyle\\|A\\| \\ge |\\lambda|" }, { "math_id": 54, "text": "\\scriptstyle x" }, { "math_id": 55, "text": "\\scriptstyle\\|A\\| \\ge |Ax|/|x| = |\\lambda x|/|x| = |\\lambda|" }, { "math_id": 56, "text": "\\scriptstyle \\left \\| A \\right \\| _\\infty = \\max \\limits _{1 \\leq i \\leq m} \\sum _{j=1} ^n | a_{ij} |. " }, { "math_id": 57, "text": "\\scriptstyle\\|A\\|_\\infty \\ge |\\lambda|" }, { "math_id": 58, "text": "\\min_i \\sum_j a_{ij} \\; \\le \\; r ." }, { "math_id": 59, "text": "\\scriptstyle h^{-1}\\sum^h_1\\lambda^{-k}R^k" }, { "math_id": 60, "text": "\\left(\n\\begin{smallmatrix}\n1 & 0 & 0 \\\\\n1 & 0 & 0 \\\\\n1 & 1 & 1\n\\end{smallmatrix}\n\\right)" }, { "math_id": 61, "text": "\\left(\n\\begin{smallmatrix}\n1 & 0 & 0 \\\\\n1 & 0 & 0 \\\\\n\\!\\!\\!-1 & 1 & 1\n\\end{smallmatrix}\n\\right)" }, { "math_id": 62, "text": "\\left(\n\\begin{smallmatrix}\n0 & 1 & 1 \\\\\n1 & 0 & 1 \\\\\n1 & 1 & 0\n\\end{smallmatrix}\n\\right)" }, { "math_id": 63, "text": "\\left(\n\\begin{smallmatrix}\n0 & 1 & 0 & 0 & 0 \\\\\n1 & 0 & 0 & 0 & 0 \\\\\n0 & 0 & 0 & 1 & 0 \\\\\n0 & 0 & 0 & 0 & 1 \\\\\n0 & 0 & 1 & 0 & 0\n\\end{smallmatrix}\n\\right)" } ]
https://en.wikipedia.org/wiki?curid=1540333
1540704
Equation of state (cosmology)
Equation of state in cosmology In cosmology, the equation of state of a perfect fluid is characterized by a dimensionless number formula_0, equal to the ratio of its pressure formula_1 to its energy density formula_2: formula_3 It is closely related to the thermodynamic equation of state and ideal gas law. The equation. The perfect gas equation of state may be written as formula_12 where formula_13 is the mass density, formula_14 is the particular gas constant, formula_15 is the temperature and formula_16 is a characteristic thermal speed of the molecules. Thus formula_17 where formula_18 is the speed of light, formula_19 and formula_20 for a "cold" gas. FLRW equations and the equation of state. The equation of state may be used in Friedmann–Lemaître–Robertson–Walker (FLRW) equations to describe the evolution of an isotropic universe filled with a perfect fluid. If formula_21 is the scale factor then formula_22 If the fluid is the dominant form of matter in a flat universe, then formula_23 where formula_24 is the proper time. In general the Friedmann acceleration equation is formula_25 where formula_26 is the cosmological constant and formula_27 is Newton's constant, and formula_28 is the second proper time derivative of the scale factor. If we define (what might be called "effective") energy density and pressure as formula_29 formula_30 and formula_31 the acceleration equation may be written as formula_32 Non-relativistic particles. The equation of state for ordinary non-relativistic 'matter' (e.g. cold dust) is formula_7, which means that its energy density decreases as formula_33, where formula_34 is a volume. In an expanding universe, the total energy of non-relativistic matter remains constant, with its density decreasing as the volume increases. Ultra-relativistic particles. The equation of state for ultra-relativistic 'radiation' (including neutrinos, and in the very early universe other particles that later became non-relativistic) is formula_5 which means that its energy density decreases as formula_6. In an expanding universe, the energy density of radiation decreases more quickly than the volume expansion, because its wavelength is red-shifted. Acceleration of cosmic inflation. Cosmic inflation and the accelerated expansion of the universe can be characterized by the equation of state of dark energy. In the simplest case, the equation of state of the cosmological constant is formula_9. In this case, the above expression for the scale factor is not valid and formula_10, where the constant "H" is the Hubble parameter. More generally, the expansion of the universe is accelerating for any equation of state formula_35. The accelerated expansion of the Universe was indeed observed. According to observations, the value of equation of state of cosmological constant is near -1. Hypothetical phantom energy would have an equation of state formula_36, and would cause a Big Rip. Using the existing data, it is still impossible to distinguish between phantom formula_11 and non-phantom formula_37. Fluids. In an expanding universe, fluids with larger equations of state disappear more quickly than those with smaller equations of state. This is the origin of the flatness and monopole problems of the Big Bang: curvature has formula_8 and monopoles have formula_7, so if they were around at the time of the early Big Bang, they should still be visible today. These problems are solved by cosmic inflation which has formula_38. 
Measuring the equation of state of dark energy is one of the largest efforts of observational cosmology. By accurately measuring formula_0, it is hoped that the cosmological constant could be distinguished from quintessence which has formula_39. Scalar modeling. A scalar field formula_40 can be viewed as a sort of perfect fluid with equation of state formula_41 where formula_42 is the time-derivative of formula_40 and formula_43 is the potential energy. A free (formula_44) scalar field has formula_4, and one with vanishing kinetic energy is equivalent to a cosmological constant: formula_9. Any equation of state in between, but not crossing the formula_9 barrier known as the Phantom Divide Line (PDL), is achievable, which makes scalar fields useful models for many phenomena in cosmology.
[ { "math_id": 0, "text": "w" }, { "math_id": 1, "text": "p " }, { "math_id": 2, "text": "\\rho" }, { "math_id": 3, "text": "w \\equiv \\frac{p}{\\rho}." }, { "math_id": 4, "text": "w = 1" }, { "math_id": 5, "text": "w = 1/3" }, { "math_id": 6, "text": "\\rho \\propto a^{-4}" }, { "math_id": 7, "text": "w = 0" }, { "math_id": 8, "text": "w = -1/3" }, { "math_id": 9, "text": "w = -1" }, { "math_id": 10, "text": "a\\propto e^{Ht}" }, { "math_id": 11, "text": "w < -1 " }, { "math_id": 12, "text": "p = \\rho_m RT = \\rho_m C^2" }, { "math_id": 13, "text": " \\rho_m" }, { "math_id": 14, "text": "R" }, { "math_id": 15, "text": "T" }, { "math_id": 16, "text": "C=\\sqrt{RT}" }, { "math_id": 17, "text": "w \\equiv \\frac{p}{\\rho} = \\frac{\\rho_mC^2}{\\rho_mc^2} = \\frac{C^2}{c^2}\\approx 0" }, { "math_id": 18, "text": "c" }, { "math_id": 19, "text": "\\rho = \\rho_mc^2" }, { "math_id": 20, "text": "C\\ll c" }, { "math_id": 21, "text": "a" }, { "math_id": 22, "text": "\\rho \\propto a^{-3(1+w)}." }, { "math_id": 23, "text": "a \\propto t^{\\frac{2}{3(1+w)}}," }, { "math_id": 24, "text": "t" }, { "math_id": 25, "text": "3\\frac{\\ddot{a}}{a} = \\Lambda - 4 \\pi G (\\rho + 3p)" }, { "math_id": 26, "text": " \\Lambda" }, { "math_id": 27, "text": "G" }, { "math_id": 28, "text": "\\ddot{a}" }, { "math_id": 29, "text": "\\rho' \\equiv \\rho + \\frac{\\Lambda}{8 \\pi G}" }, { "math_id": 30, "text": "p' \\equiv p - \\frac{\\Lambda}{8 \\pi G}" }, { "math_id": 31, "text": " p' = w'\\rho'" }, { "math_id": 32, "text": "\\frac{\\ddot a}{a}=-\\frac{4}{3}\\pi G\\left(\\rho' + 3p'\\right) = -\\frac{4}{3}\\pi G(1+3w')\\rho'" }, { "math_id": 33, "text": "\\rho \\propto a^{-3} = V^{-1}" }, { "math_id": 34, "text": "V" }, { "math_id": 35, "text": "w < -1/3" }, { "math_id": 36, "text": "w < -1" }, { "math_id": 37, "text": "w \\ge -1 " }, { "math_id": 38, "text": "w \\approx -1" }, { "math_id": 39, "text": "w \\ne -1" }, { "math_id": 40, "text": " \\phi" }, { "math_id": 41, "text": "w = \\frac{\\frac{1}{2}\\dot{\\phi}^2-V(\\phi)}{\\frac{1}{2}\\dot{\\phi}^2+V(\\phi)}," }, { "math_id": 42, "text": " \\dot{\\phi}" }, { "math_id": 43, "text": "V(\\phi)" }, { "math_id": 44, "text": "V = 0" } ]
https://en.wikipedia.org/wiki?curid=1540704
1540711
Phantom energy
Hypothetical form of dark energy Phantom energy is a hypothetical form of dark energy satisfying the equation of state formula_0 with formula_1. It possesses negative kinetic energy, and predicts expansion of the universe in excess of that predicted by a cosmological constant, which leads to a Big Rip. The idea of phantom energy is often dismissed, as it would suggest that the vacuum is unstable with negative mass particles bursting into existence. The concept is hence tied to emerging theories of a continuously created negative mass dark fluid, in which the cosmological constant can vary as a function of time. Big Rip mechanism. The existence of phantom energy could cause the expansion of the universe to accelerate so quickly that a scenario known as the Big Rip, a possible end to the universe, occurs. The expansion of the universe reaches an infinite degree in finite time, causing expansion to accelerate without bounds. This acceleration necessarily passes the speed of light (since it involves expansion of the universe itself, not particles moving within it), causing more and more objects to leave our observable universe faster than its expansion, as light and information emitted from distant stars and other cosmic sources cannot "catch up" with the expansion. As the observable universe expands, objects will be unable to interact with each other via fundamental forces, and eventually, the expansion will prevent any action of forces between any particles, even within atoms, "ripping apart" the universe, making distances between individual particles infinite. One application of phantom energy in 2007 was to a cyclic model of the universe, which reverses its expansion extremely shortly before the would-be Big Rip. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": " P = w \\rho c^2" }, { "math_id": 1, "text": " w < -1" } ]
https://en.wikipedia.org/wiki?curid=1540711
15407327
Absolute electrode potential
Electrode potential in electrochemistry Absolute electrode potential, in electrochemistry, according to an IUPAC definition, is the electrode potential of a metal measured with respect to a universal reference system (without any additional metal–solution interface). Definition. According to a more specific definition presented by Trasatti, the absolute electrode potential is the difference in electronic energy between a point inside the metal (Fermi level) of an electrode and a point outside the electrolyte in which the electrode is submerged (an electron at rest in vacuum). This potential is difficult to determine accurately. For this reason, a standard hydrogen electrode is typically used for reference potential. The absolute potential of the SHE is 4.44 ± 0.02 V at 25 °C. Therefore, for any electrode at 25 °C: formula_0 where: E is electrode potential V is the unit volt "M" denotes the electrode made of metal M (abs) denotes the absolute potential (SHE) denotes the electrode potential relative to the standard hydrogen electrode. A different definition for the absolute electrode potential (also known as absolute half-cell potential and single electrode potential) has also been discussed in the literature. In this approach, one first defines an isothermal absolute single-electrode process (or absolute half-cell process.) For example, in the case of a generic metal being oxidized to form a solution-phase ion, the process would be M(metal) → M+(solution) + (gas) For the hydrogen electrode, the absolute half-cell process would be H2 (gas) → H+(solution) + (gas) Other types of absolute electrode reactions would be defined analogously. In this approach, all three species taking part in the reaction, including the electron, must be placed in thermodynamically well-defined states. All species, including the electron, are at the same temperature, and appropriate standard states for all species, including the electron, must be fully defined. The absolute electrode potential is then defined as the Gibbs free energy for the absolute electrode process. To express this in volts one divides the Gibbs free energy by the negative of Faraday's constant. Rockwood's approach to absolute-electrode thermodynamics is easily expendable to other thermodynamic functions. For example, the absolute half-cell entropy has been defined as the entropy of the absolute half-cell process defined above. An alternative definition of the absolute half-cell entropy has recently been published by Fang et al. who define it as the entropy of the following reaction (using the hydrogen electrode as an example): H2 (gas) → H+(solution) + (metal) This approach differs from the approach described by Rockwood in the treatment of the electron, i.e. whether it is placed in the gas phase or the metal. The electron can also be in another state, that of a solvated electron in solution, as studied by Alexander Frumkin and B. Damaskin and others. Determination. The basis for determination of the absolute electrode potential under the Trasatti definition is given by the equation: formula_1 where: "E""M"(abs) is the absolute potential of the electrode made of metal M formula_2 is the electron work function of metal M formula_3 is the contact (Volta) potential difference at the metal("M")–solution("S") interface. 
For practical purposes, the value of the absolute electrode potential of the standard hydrogen electrode is best determined with the utility of data for an ideally-polarizable mercury (Hg) electrode: formula_4 where: formula_5 is the absolute standard potential of the hydrogen electrode "σ" = 0 denotes the condition of the point of zero charge at the interface. The types of physical measurements required under the Rockwood definition are similar to those required under the Trasatti definition, but they are used in a different way, e.g. in Rockwood's approach they are used to calculate the equilibrium vapour pressure of the electron gas. The numerical value for the absolute potential of the standard hydrogen electrode one would calculate under the Rockwood definition is sometimes fortuitously close to the value one would obtain under the Trasatti definition. This near-agreement in the numerical value depends on the choice of ambient temperature and standard states, and is the result of the near-cancellation of certain terms in the expressions. For example, if a standard state of one atmosphere ideal gas is chosen for the electron gas then the cancellation of terms occurs at a temperature of 296 K, and the two definitions give an equal numerical result. At 298.15 K a near-cancellation of terms would apply and the two approaches would produce nearly the same numerical values. However, there is no fundamental significance to this near agreement because it depends on arbitrary choices, such as temperature and definitions of standard states. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "E^M_{\\rm{(abs)}} = E^M_{\\rm{(SHE)}}+(4.44 \\pm 0.02)\\ {\\mathrm V}" }, { "math_id": 1, "text": "E^M{\\rm (abs)} = \\phi^M + \\Delta ^M_S \\psi" }, { "math_id": 2, "text": "\\phi^M" }, { "math_id": 3, "text": "\\Delta ^M_S \\psi" }, { "math_id": 4, "text": "E^\\ominus {\\rm (H^+/H_2)(abs)} = \\phi^{\\rm{Hg}} + \\Delta ^{\\rm{Hg}} _S \\psi^\\ominus_{\\sigma=0} - E^{\\rm{Hg}}_{\\sigma=0}\\rm{(SHE)}" }, { "math_id": 5, "text": "E^\\ominus {\\rm (H^+/H_2)(abs)}" } ]
https://en.wikipedia.org/wiki?curid=15407327
15409174
Piezoelectric accelerometer
Type of accelerometer A piezoelectric accelerometer is an accelerometer that employs the piezoelectric effect of certain materials to measure dynamic changes in mechanical variables (e.g., acceleration, vibration, and mechanical shock). As with all transducers, piezoelectrics convert one form of energy into another and provide an electrical signal in response to a quantity, property, or condition that is being measured. Using the general sensing method upon which all accelerometers are based, acceleration acts upon a seismic mass that is restrained by a spring or suspended on a cantilever beam, and converts a physical force into an electrical signal. Before the acceleration can be converted into an electrical quantity it must first be converted into either a force or displacement. This conversion is done via the mass spring system shown in the figure to the right. Introduction. The word piezoelectric finds its roots in the Greek word "piezein", which means to squeeze or press. When a physical force is exerted on the accelerometer, the seismic mass loads the piezoelectric element according to Newton's second law of motion (formula_0). The force exerted on the piezoelectric material can be observed in the change in the electrostatic force or voltage generated by the piezoelectric material. This differs from a piezoresistive effect in that piezoresistive materials experience a change in the resistance of the material rather than a change in charge or voltage. Physical force exerted on the piezoelectric can be classified as one of two types; bending or compression. Stress of the compression type can be understood as a force exerted to one side of the piezoelectric while the opposing side rests against a fixed surface, while bending involves a force being exerted on the piezoelectric from both sides. Piezoelectric materials used for the purpose of accelerometers fall into two categories: single crystal and ceramic materials. The first and more widely used are single-crystal materials (usually quartz). Though these materials do offer a long life span in terms of sensitivity, their disadvantage is that they are generally less sensitive than some piezoelectric ceramics. The other category, ceramic materials, have a higher piezoelectric constant (sensitivity) than single-crystal materials, and are less expensive to produce. Ceramics use barium titanate, lead-zirconate-lead-titanate, lead metaniobate, and other materials whose composition is considered proprietary by the company responsible for their development. The disadvantage of piezoelectric ceramics, however, is that their sensitivity degrades with time making the longevity of the device less than that of single-crystal materials. In applications when low sensitivity piezoelectrics are used, two or more crystals can be connected together for output multiplication. The proper material can be chosen for particular applications based on the sensitivity, frequency response, bulk-resistivity, and thermal response. Due to the low output signal and high output impedance that piezoelectric accelerometers possess, there is a need for amplification and impedance conversion of the signal produced. In the past this problem has been solved using a separate (external) amplifier/impedance converter. This method, however, is generally impractical due to the noise that is introduced as well as the physical and environmental constraints posed on the system as a result. 
Today IC amplifiers/impedance converters are commercially available and are generally packaged within the case of the accelerometer itself. History. Behind the mystery of the operation of the piezoelectric accelerometer lie some very fundamental concepts governing the behavior of crystallographic structures. In 1880, Pierre and Jacques Curie published an experimental demonstration connecting mechanical stress and surface charge on a crystal. This phenomenon became known as the piezoelectric effect. Closely related to this phenomenon is the Curie point, named for the physicist Pierre Curie, which is the temperature above which piezoelectric material loses spontaneous polarization of its atoms. The development of the commercial piezoelectric accelerometer came about through a number of attempts to find the most effective method to measure the vibration on large structures such as bridges and on vehicles in motion such as aircraft. One attempt involved using the resistance strain gage as a device to build an accelerometer. Incidentally, it was Hans J. Meier who, through his work at MIT, is given credit as the first to construct a commercial strain gage accelerometer (circa 1938). However, the strain gage accelerometers were fragile and could only produce low resonant frequencies and they also exhibited a low frequency response. These limitations in dynamic range made it unsuitable for testing naval aircraft structures. On the other hand, the piezoelectric sensor was proven to be a much better choice over the strain gage in designing an accelerometer. The high modulus of elasticity of piezoelectric materials makes the piezoelectric sensor a more viable solution to the problems identified with the strain gage accelerometer. Simply stated, the inherent properties of the piezoelectric accelerometers made it a much better alternative to the strain gage types because of its high frequency response, and its ability to generate high resonant frequencies. The piezoelectric accelerometer allowed for a reduction in its physical size at the manufacturing level and it also provided for a higher g (standard gravity) capability relative to the strain gage type. By comparison, the strain gage type exhibited a flat frequency response above 200 Hz while the piezoelectric type provided a flat response up to 10,000 Hz. These improvements made it possible for measuring the high frequency vibrations associated with the quick movements and short duration shocks of aircraft which before was not possible with the strain gage types. Before long, the technological benefits of the piezoelectric accelerometer became apparent and in the late 1940s, large scale production of piezoelectric accelerometers began. Today, piezoelectric accelerometers are used for instrumentation in the fields of engineering, health and medicine, aeronautics and many other different industries. Manufacturing. There are two common methods used to manufacture accelerometers. One is based upon the principles of piezoresistance and the other is based on the principles of piezoelectricity. Both methods ensure that unwanted orthogonal acceleration vectors are excluded from detection. Manufacturing an accelerometer that uses piezoresistance first starts with a semiconductor layer that is attached to a handle wafer by a thick oxide layer. The semiconductor layer is then patterned to the accelerometer's geometry. This semiconductor layer has one or more apertures so that the underlying mass will have the corresponding apertures. 
Next the semiconductor layer is used as a mask to etch out a cavity in the underlying thick oxide. A mass in the cavity is supported in cantilever fashion by the piezoresistant arms of the semiconductor layer. Directly below the accelerometer's geometry is a flex cavity that allows the mass in the cavity to flex or move in direction that is orthogonal to the surface of the accelerometer. Accelerometers based upon piezoelectricity are constructed with two piezoelectric transducers. The unit consists of a hollow tube that is sealed by a piezoelectric transducer on each end. The transducers are oppositely polarized and are selected to have a specific series capacitance. The tube is then partially filled with a heavy liquid and the accelerometer is excited. While excited the total output voltage is continuously measured and the volume of the heavy liquid is microadjusted until the desired output voltage is obtained. Finally the outputs of the individual transducers are measured, the residual voltage difference is tabulated, and the dominant transducer is identified. In 1943 the Danish company Brüel &amp; Kjær launched Type 4301 - the world's first charge accelerometer. Applications of piezoelectric accelerometers. Piezoelectric accelerometers are used in many different industries, environments, and applications - all typically requiring measurement of short duration impulses. Piezoelectric measuring devices are widely used today in the laboratory, on the production floor, and as original equipment for measuring and recording dynamic changes in mechanical variables including shock and vibration. Some accelerometers have built-in electronics to amplify the signal before transmitting it to the recording device. This work was pioneered by PCB Piezotronics, released in 1967 as ICP® Integrated circuit piezoelectric, later evolving to be the IEPE standard (see Integrated Electronics Piezo-Electric). Other related, brand specific descriptors of IEPE are: CCLD, IsoTron or DeltaTron. Accelerometers also have had the addition of onboard memory to contain serial number and calibration data, typically referred to as TEDS Transducer Electronic Data Sheet per the IEEE 1451 standard. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "F=ma" } ]
https://en.wikipedia.org/wiki?curid=15409174
15409192
SAT solver
Computer program for the Boolean satisfiability problem In computer science and formal methods, a SAT solver is a computer program which aims to solve the Boolean satisfiability problem. On input a formula over Boolean variables, such as "("x" or "y") and ("x" or not "y")", a SAT solver outputs whether the formula is satisfiable, meaning that there are possible values of "x" and "y" which make the formula true, or unsatisfiable, meaning that there are no such values of "x" and "y". In this case, the formula is satisfiable when "x" is true, so the solver should return "satisfiable". Since the introduction of algorithms for SAT in the 1960s, modern SAT solvers have grown into complex software artifacts involving a large number of heuristics and program optimizations to work efficiently. By a result known as the Cook–Levin theorem, Boolean satisfiability is an NP-complete problem in general. As a result, only algorithms with exponential worst-case complexity are known. In spite of this, efficient and scalable algorithms for SAT were developed during the 2000s, which have contributed to dramatic advances in the ability to automatically solve problem instances involving tens of thousands of variables and millions of constraints. SAT solvers often begin by converting a formula to conjunctive normal form. They are often based on core algorithms such as the DPLL algorithm, but incorporate a number of extensions and features. Most SAT solvers include time-outs, so they will terminate in reasonable time even if they cannot find a solution with an output such as "unknown". Often, SAT solvers do not just provide an answer, but can provide further information including an example assignment (values for "x", "y", etc.) in case the formula is satisfiable or minimal set of unsatisfiable clauses if the formula is unsatisfiable. Modern SAT solvers have had a significant impact on fields including software verification, program analysis, constraint solving, artificial intelligence, electronic design automation, and operations research. Powerful solvers are readily available as free and open-source software and are built into some programming languages such as exposing SAT solvers as constraints in constraint logic programming. Overview. A "Boolean formula" is any expression that can be written using Boolean (propositional) variables "x, y, z, ..." and the Boolean operations AND, OR, and NOT. For example, ("x" AND "y") OR ("x" AND (NOT "z")) An "assignment" consists of choosing, for each variable, an assignment TRUE or FALSE. For any assignment "v", the Boolean formula can be evaluated, and evaluates to true or false. The formula is "satisfiable" if there exists an assignment (called a "satisfying assignment") for which the formula evaluates to true. The "Boolean satisfiability problem" is the decision problem which asks, on input a Boolean formula, to determine whether the formula is satisfiable or not. This problem is NP-complete. DPLL solvers. A DPLL SAT solver employs a systematic backtracking search procedure to explore the (exponentially sized) space of variable assignments looking for satisfying assignments. The basic search procedure was proposed in two seminal papers in the early 1960s (see references below) and is now commonly referred to as the Davis–Putnam–Logemann–Loveland algorithm ("DPLL" or "DLL"). Many modern approaches to practical SAT solving are derived from the DPLL algorithm and share the same structure. 
Often they only improve the efficiency of certain classes of SAT problems such as instances that appear in industrial applications or randomly generated instances. Theoretically, exponential lower bounds have been proved for the DPLL family of algorithms. CDCL solvers. Modern SAT solvers (developed in the 2000s) come in two flavors: "conflict-driven" and "look-ahead". Both approaches descend from DPLL. Conflict-driven solvers, such as conflict-driven clause learning (CDCL), augment the basic DPLL search algorithm with efficient conflict analysis, clause learning, backjumping, a "two-watched-literals" form of unit propagation, adaptive branching, and random restarts. These "extras" to the basic systematic search have been empirically shown to be essential for handling the large SAT instances that arise in electronic design automation (EDA). Most state-of-the-art SAT solvers are based on the CDCL framework as of 2019. Well known implementations include Chaff and GRASP. Look-ahead solvers have especially strengthened reductions (going beyond unit-clause propagation) and the heuristics, and they are generally stronger than conflict-driven solvers on hard instances (while conflict-driven solvers can be much better on large instances which actually have an easy instance inside). The conflict-driven MiniSAT, which was relatively successful at the 2005 SAT competition, only has about 600 lines of code. A modern Parallel SAT solver is ManySAT. It can achieve super linear speed-ups on important classes of problems. An example for look-ahead solvers is march_dl, which won a prize at the 2007 SAT competition. Google's CP-SAT solver, part of OR-Tools, won gold medals at the Minizinc constraint programming competitions in 2018, 2019, 2020, and 2021. Certain types of large random satisfiable instances of SAT can be solved by survey propagation (SP). Particularly in hardware design and verification applications, satisfiability and other logical properties of a given propositional formula are sometimes decided based on a representation of the formula as a binary decision diagram (BDD). Different SAT solvers will find different instances easy or hard, and some excel at proving unsatisfiability, and others at finding solutions. All of these behaviors can be seen in the SAT solving contests. Parallel SAT-solving. Parallel SAT solvers come in three categories: portfolio, divide-and-conquer and parallel local search algorithms. With parallel portfolios, multiple different SAT solvers run concurrently. Each of them solves a copy of the SAT instance, whereas divide-and-conquer algorithms divide the problem between the processors. Different approaches exist to parallelize local search algorithms. The International SAT Solver Competition has a parallel track reflecting recent advances in parallel SAT solving. In 2016, 2017 and 2018, the benchmarks were run on a shared-memory system with 24 processing cores, therefore solvers intended for distributed memory or manycore processors might have fallen short. Portfolios. In general there is no SAT solver that performs better than all other solvers on all SAT problems. An algorithm might perform well for problem instances others struggle with, but will do worse with other instances. Furthermore, given a SAT instance, there is no reliable way to predict which algorithm will solve this instance particularly fast. These limitations motivate the parallel portfolio approach. A portfolio is a set of different algorithms or different configurations of the same algorithm. 
All solvers in a parallel portfolio run on different processors to solve of the same problem. If one solver terminates, the portfolio solver reports the problem to be satisfiable or unsatisfiable according to this one solver. All other solvers are terminated. Diversifying portfolios by including a variety of solvers, each performing well on a different set of problems, increases the robustness of the solver. Many solvers internally use a random number generator. Diversifying their seeds is a simple way to diversify a portfolio. Other diversification strategies involve enabling, disabling or diversifying certain heuristics in the sequential solver. One drawback of parallel portfolios is the amount of duplicate work. If clause learning is used in the sequential solvers, sharing learned clauses between parallel running solvers can reduce duplicate work and increase performance. Yet, even merely running a portfolio of the best solvers in parallel makes a competitive parallel solver. An example of such a solver is PPfolio. It was designed to find a lower bound for the performance a parallel SAT solver should be able to deliver. Despite the large amount of duplicate work due to lack of optimizations, it performed well on a shared memory machine. HordeSat is a parallel portfolio solver for large clusters of computing nodes. It uses differently configured instances of the same sequential solver at its core. Particularly for hard SAT instances HordeSat can produce linear speedups and therefore reduce runtime significantly. In recent years parallel portfolio SAT solvers have dominated the parallel track of the International SAT Solver Competitions. Notable examples of such solvers include Plingeling and painless-mcomsps. Divide-and-conquer. In contrast to parallel portfolios, parallel divide-and-conquer tries to split the search space between the processing elements. Divide-and-conquer algorithms, such as the sequential DPLL, already apply the technique of splitting the search space, hence their extension towards a parallel algorithm is straight forward. However, due to techniques like unit propagation, following a division, the partial problems may differ significantly in complexity. Thus the DPLL algorithm typically does not process each part of the search space in the same amount of time, yielding a challenging load balancing problem. Due to non-chronological backtracking, parallelization of conflict-driven clause learning is more difficult. One way to overcome this is the Cube-and-Conquer paradigm. It suggests solving in two phases. In the "cube" phase the Problem is divided into many thousands, up to millions, of sections. This is done by a look-ahead solver, that finds a set of partial configurations called "cubes". A cube can also be seen as a conjunction of a subset of variables of the original formula. In conjunction with the formula, each of the cubes forms a new formula. These formulas can be solved independently and concurrently by conflict-driven solvers. As the disjunction of these formulas is equivalent to the original formula, the problem is reported to be satisfiable, if one of the formulas is satisfiable. The look-ahead solver is favorable for small but hard problems, so it is used to gradually divide the problem into multiple sub-problems. These sub-problems are easier but still large which is the ideal form for a conflict-driven solver. 
Furthermore, look-ahead solvers consider the entire problem whereas conflict-driven solvers make decisions based on information that is much more local. There are three heuristics involved in the cube phase. The variables in the cubes are chosen by the decision heuristic. The direction heuristic decides which variable assignment (true or false) to explore first. In satisfiable problem instances, choosing a satisfiable branch first is beneficial. The cutoff heuristic decides when to stop expanding a cube and instead forward it to a sequential conflict-driven solver. Preferably the cubes are similarly complex to solve. Treengeling is an example for a parallel solver that applies the Cube-and-Conquer paradigm. Since its introduction in 2012 it has had multiple successes at the International SAT Solver Competition. Cube-and-Conquer was used to solve the Boolean Pythagorean triples problem. Cube-and-Conquer is a modification or a generalization of the DPLL-based Divide-and-conquer approach used to compute the Van der Waerden numbers w(2;3,17) and w(2;3,18) in 2010 where both the phases (splitting and solving the partial problems) were performed using DPLL. Local search. One strategy towards a parallel local search algorithm for SAT solving is trying multiple variable flips concurrently on different processing units. Another is to apply the aforementioned portfolio approach, however clause sharing is not possible since local search solvers do not produce clauses. Alternatively, it is possible to share the configurations that are produced locally. These configurations can be used to guide the production of a new initial configuration when a local solver decides to restart its search. Randomized approaches. Algorithms that are not part of the DPLL family include stochastic local search algorithms. One example is WalkSAT. Stochastic methods try to find a satisfying interpretation but cannot deduce that a SAT instance is unsatisfiable, as opposed to complete algorithms, such as DPLL. In contrast, randomized algorithms like the PPSZ algorithm by Paturi, Pudlak, Saks, and Zane set variables in a random order according to some heuristics, for example bounded-width resolution. If the heuristic can't find the correct setting, the variable is assigned randomly. The PPSZ algorithm has a of formula_0 for 3-SAT. This was the best-known runtime for this problem until 2019, when Hansen, Kaplan, Zamir and Zwick published a modification of that algorithm with a runtime of formula_1 for 3-SAT. The latter is currently the fastest known algorithm for k-SAT at all values of k. In the setting with many satisfying assignments the randomized algorithm by Schöning has a better bound. Applications. In mathematics. SAT solvers have been used to assist in proving mathematical theorems through computer-assisted proof. In Ramsey theory, several previously unknown Van der Waerden numbers were computed with the help of specialized SAT solvers running on FPGAs. In 2016, Marijn Heule, Oliver Kullmann, and Victor Marek solved the Boolean Pythagorean triples problem by using a SAT solver to show that there is no way to color the integers up to 7825 in the required fashion. Small values of the Schur numbers were also computed by Heule using SAT solvers. In software verification. SAT solvers are used in formal verification of hardware and software. In model checking (in particular, bounded model checking), SAT solvers are used to check whether a finite-state system satisfies a specification of its intended behavior. 
SAT solvers are the core component on which satisfiability modulo theories (SMT) solvers are built, which are used for problems such as job scheduling, symbolic execution, program model checking, program verification based on hoare logic, and other applications. These techniques are also closely related to constraint programming and logic programming. In other areas. In operations research, SAT solvers have been applied to solve optimization and scheduling problems. In social choice theory, SAT solvers have been used to prove impossibility theorems. Tang and Lin used SAT solvers to prove Arrow's theorem and other classic impossibility theorems. Geist and Endriss used it to find new impossibilities related to set extensions. Brandt and Geist used this approach to prove an impossibility about strategyproof tournament solutions. Other authors used this technology to prove new impossibilities about the no-show paradox, half-way monotonicity, and probabilistic voting rules. Brandl, Brandt, Peters and Stricker used it to prove the impossibility of a strategyproof, efficient and fair rule for fractional social choice. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "O(1.308^n)" }, { "math_id": 1, "text": "O(1.307^n)" } ]
https://en.wikipedia.org/wiki?curid=15409192
15409391
Charging argument
In computer science, a charging argument is used to compare the output of an optimization algorithm to an optimal solution. It is typically used to show that an algorithm produces optimal results by proving the existence of a particular injective function. For profit maximization problems, the function can be any one-to-one mapping from elements of an optimal solution to elements of the algorithm's output. For cost minimization problems, the function can be any one-to-one mapping from elements of the algorithm's output to elements of an optimal solution. Correctness. In order for an algorithm to optimally solve a profit maximization problem, the algorithm must produce an output that has as much profit as the optimal solution for every possible input. Let |A(I)| denote the profit of the algorithm's output given an input "I", and let |OPT(I)| denote the profit of an optimal solution for "I". If an injective function "h : OPT(I) → A(I)" exists, it follows that |OPT(I)| "≤" |A(I)|. Since the optimal solution has the greatest profit attainable, this means that the output given by the algorithm is just as profitable as the optimal solution, and so the algorithm is optimal. The correctness of the charging argument for a cost minimization problem is symmetric. If |A(I)| and |OPT(I)| denote the cost of the algorithm's output and optimal solution respectively, then the existence of an injective function "h : A(I) → OPT(I)" would mean that |A(I)| "≤" |OPT(I)|. Since the optimal solution has the lowest cost, and the cost of the algorithm is the same as the cost of the optimal solution of the minimization problem, then the algorithm also optimally solves the problem. Variations. Charging arguments can also be used to show approximation results. In particular, it can be used to show that an algorithm is an "n"-approximation to an optimization problem. Instead of showing that an algorithm produces outputs with the same value of profit or cost as the optimal solution, show that it attains that value within a factor of "n". Rather than proving the existence of a one-to-one function, the charging argument focuses on proving that an "n"-to-one function exists in order to prove approximation results. Examples. Interval Scheduling Problem. Given a set of "n" intervals "I = {I1, I2, ... , In}", where each interval "Ii" ∈ "I" has a starting time "si" and a finishing time "fi", where "si &lt; fi", the goal is to find a maximal subset of mutually compatible intervals in "I". Here, two intervals "Ij" and "Ik" are said to be compatible if they do not overlap, in that "sj &lt; fj ≤ sk &lt; fk". Consider the earliest finish time greedy algorithm, described as follows: The interval scheduling problem can be viewed as a profit maximization problem, where the number of intervals in the mutually compatible subset is the profit. The charging argument can be used to show that the earliest finish time algorithm is optimal for the interval scheduling problem. Given a set of intervals "I = {I1, I2, ... , In}", let "OPT(I)" be any optimal solution of the interval scheduling problem, and let "EFT(I)" be the solution of the earliest finishing time algorithm. For any interval "J ∈ OPT(I)", define "h(J)" as the interval "J' ∈ EFT(I)" that intersects "J" with the earliest finishing time amongst all intervals in "EFT(I)" intersecting "J". To show that the earliest finish time algorithm is optimal using the charging argument, "h" must be shown to be a one-to-one function mapping intervals in "OPT(I)" to those in "EFT(I)". 
Suppose "J" is an arbitrary interval in "OPT(I)". Show that "h" is a function mapping "OPT(I)" to "EFT(I)". Assume for a contradiction that there is no interval "J' ∈ EFT(I)" satisfying "h(J) = J"'. By definition of "h", this means that no interval in "EFT(I)" intersects with "J". However, this would also mean that "J" is compatible with every interval in "EFT(I)", and so the earliest finishing time algorithm would have added "J" into "EFT(I)", and so "J ∈ EFT(I)". A contradiction arises, since "J" was assumed to not intersect with any interval in "EFT(I)", yet "J" is in "EFT(I)", and "J" intersects with itself. Thus by contradiction, "J" must intersect with at least one interval in "EFT(I)". It remains to show that "h(J)" is unique. Based on the definition of compatibility, it can never be the case that two compatible intervals have the same finishing time. Since all intervals in "EFT(I)" are mutually compatible, none of these intervals have the same finishing time. In particular, every interval in "EFT(I)" that intersects with "J" have distinct finishing times, and so "h(J)" is unique. Show that "h" is one-to-one. Assume for a contradiction that "h" is not injective. Then there are two distinct intervals in "OPT(I)", "J1" and "J2", such that "h" maps both "J1" and "J2" to the same interval "J' ∈ EFT(I)". Without loss of generality, assume that f1 &lt; f2. The intervals "J1" and "J2" cannot intersect because they are both in the optimal solution, and so f1 ≤ s2&lt; f2. Since "EFT(I)" contains "J' " instead of "J1", the earliest finishing time algorithm encountered "J' " before "J1". Thus, "f' ≤ f1". However, this means that "f' ≤ f1 ≤ s2&lt; f2", so "J' " and "J2" do not intersect. This is a contradiction because "h" cannot map "J2" to "J' " if they do not intersect. Thus by contradiction, "h" is injective. Therefore, "h" is a one-to-one function mapping intervals in "OPT(I)" to those in "EFT(I)". By the charging argument, the earliest finishing time algorithm is optimal. Job Interval Scheduling Problem. Consider the job interval scheduling problem, an NP-hard variant of the interval scheduling problem visited earlier. As before, the goal is to find a maximal subset of mutually compatible intervals in a given set of "n" intervals, "I = {I1, I2, ... , In}". Each interval "Ii" ∈ "I" has a starting time "si", a finishing time "fi", and a job class "ci". Here, two intervals "Ij" and "Ik" are said to be compatible if they do not overlap and have different classes. Recall the earliest finishing time algorithm from the previous example. After modifying the definition of compatibility in the algorithm, the charging argument can be used to show that the earliest finish time algorithm is a 2-approximation algorithm for the job interval scheduling problem. Let "OPT(I)" and "EFT(I)" denote the optimal solution and the solution produced by the earliest finishing time algorithm, as earlier defined. For any interval "J ∈ OPT(I)", define "h" as follows: formula_0 To show that the earliest finish time algorithm is a 2-approximation algorithm using the charging argument, "h" must be shown to be a two-to-one function mapping intervals in "OPT(I)" to those in "EFT(I)". Suppose "J" is an arbitrary interval in "OPT(I)". Show that "h" is a function mapping "OPT(I)" to "EFT(I)". First, notice that there is either some interval in "EFT(I)" with the same job class as "J", or there isn't. Case 1. Suppose that some interval in "EFT(I)" has the same job class as "J". 
If there is an interval in "EFT(I)" with the same class as "J", then "J" will map to that interval. Since the intervals in "EFT(I)" are mutually compatible, every interval in "EFT(I)" must have a different job class. Thus, such an interval is unique. Case 2. Suppose that there are no intervals in "EFT(I)" with the same job class as "J". If there are no intervals in "EFT(I)" with the same class as "J", then "h" maps "J" to the interval with the earliest finishing time amongst all intervals in EFT(I) intersecting "J". The proof of existence and uniqueness of such an interval is given in the previous example. Show that "h" is two-to-one. Assume for a contradiction that "h" is not two-to-one. Then there are three distinct intervals in "OPT(I)", "J1", "J2", and "J3", such that "h" maps each of "J1", "J2", and "J3" to the same interval "J' ∈ EFT(I)". By the pigeonhole principle, at least two of the three intervals were mapped to "J"' because they have the same job class as "J" ', or because "J" ' is the interval with the earliest finishing time amongst all intervals in EFT(I) intersecting both intervals. Without loss of generality, assume that these two intervals are "J1" and "J2". Case 1. Suppose "J1" and "J2" were mapped to "J" ' because they have the same job class as "J" '. Then each "J" ', "J1", and "J2" have the same job class. This is a contradiction, since the intervals in the optimal solution must be compatible, yet "J1" and "J2" are not. Case 2. Suppose "J" ' is the interval with the earliest finishing time amongst all intervals in EFT(I) intersecting both "J1" and "J2". The proof of this case is equivalent to the one in the previous example that showed injectivity. A contradiction follows from the proof above. Therefore, "h" maps no more than two distinct intervals in "OPT(I)" to the same interval in "EFT(I)", and so "h" is two-to-one. By the charging argument, the earliest finishing time algorithm is a two-approximation algorithm for the job interval scheduling problem.
[ { "math_id": 0, "text": "h(J) = \\begin{cases}\n \\mbox{the interval in EFT(I) with the same job class as J, if one exists} \\\\\n \\mbox{the interval with the earliest finishing time amongst all intervals in EFT(I) intersecting J, otherwise}\n\\end{cases}\n" } ]
https://en.wikipedia.org/wiki?curid=15409391
15409471
Occupancy grid mapping
Occupancy Grid Mapping refers to a family of computer algorithms in probabilistic robotics for mobile robots which address the problem of generating maps from noisy and uncertain sensor measurement data, with the assumption that the robot pose is known. Occupancy grids were first proposed by H. Moravec and A. Elfes in 1985. The basic idea of the occupancy grid is to represent a map of the environment as an evenly spaced field of binary random variables each representing the presence of an obstacle at that location in the environment. Occupancy grid algorithms compute approximate posterior estimates for these random variables. Algorithm outline. There are four major components of occupancy grid mapping approach. They are: Occupancy grid mapping algorithm. The goal of an occupancy mapping algorithm is to estimate the posterior probability over maps given the data: formula_0, where formula_1 is the map, formula_2 is the set of measurements from time 1 to t, and formula_3 is the set of robot poses from time 1 to t. The controls and odometry data play no part in the occupancy grid mapping algorithm since the path is assumed known. Occupancy grid algorithms represent the map formula_1 as a fine-grained grid over the continuous space of locations in the environment. The most common type of occupancy grid maps are 2d maps that describe a slice of the 3d world. If we let formula_4 denote the grid cell with index i (often in 2d maps, two indices are used to represent the two dimensions), then the notation formula_5 represents the probability that cell i is occupied. The computational problem with estimating the posterior formula_0 is the dimensionality of the problem: if the map contains 10,000 grid cells (a relatively small map), then the number of possible maps that can be represented by this gridding is formula_6. Thus calculating a posterior probability for all such maps is infeasible. The standard approach, then, is to break the problem down into smaller problems of estimating formula_7 for all grid cells formula_4. Each of these estimation problems is then a binary problem. This breakdown is convenient but does lose some of the structure of the problem, since it does not enable modelling dependencies between neighboring cells. Instead, the posterior of a map is approximated by factoring it into formula_8. Due to this factorization, a binary Bayes filter can be used to estimate the occupancy probability for each grid cell. It is common to use a log-odds representation of the probability that each grid cell is occupied. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "p(m\\mid z_{1:t}, x_{1:t})" }, { "math_id": 1, "text": "m" }, { "math_id": 2, "text": "z_{1:t}" }, { "math_id": 3, "text": "x_{1:t}" }, { "math_id": 4, "text": "m_i" }, { "math_id": 5, "text": "p(m_i)" }, { "math_id": 6, "text": "2^{10,000}" }, { "math_id": 7, "text": "p(m_i\\mid z_{1:t}, x_{1:t})" }, { "math_id": 8, "text": "p(m\\mid z_{1:t}, x_{1:t}) = \\prod_i p(m_i\\mid z_{1:t}, x_{1:t})" } ]
https://en.wikipedia.org/wiki?curid=15409471
15412
Infrared spectroscopy
Measurement of infrared radiation's interaction with matter Infrared spectroscopy (IR spectroscopy or vibrational spectroscopy) is the measurement of the interaction of infrared radiation with matter by absorption, emission, or reflection. It is used to study and identify chemical substances or functional groups in solid, liquid, or gaseous forms. It can be used to characterize new materials or identify and verify known and unknown samples. The method or technique of infrared spectroscopy is conducted with an instrument called an infrared spectrometer (or spectrophotometer) which produces an infrared spectrum. An IR spectrum can be visualized in a graph of infrared light absorbance (or transmittance) on the vertical axis vs. frequency, wavenumber or wavelength on the horizontal axis. Typical units of wavenumber used in IR spectra are reciprocal centimeters, with the symbol cm−1. Units of IR wavelength are commonly given in micrometers (formerly called "microns"), symbol μm, which are related to the wavenumber in a reciprocal way. A common laboratory instrument that uses this technique is a Fourier transform infrared (FTIR) spectrometer. Two-dimensional IR is also possible as discussed below. The infrared portion of the electromagnetic spectrum is usually divided into three regions; the near-, mid- and far- infrared, named for their relation to the visible spectrum. The higher-energy near-IR, approximately 14,000–4,000 cm−1 (0.7–2.5 μm wavelength) can excite overtone or combination modes of molecular vibrations. The mid-infrared, approximately 4,000–400 cm−1 (2.5–25 μm) is generally used to study the fundamental vibrations and associated rotational–vibrational structure. The far-infrared, approximately 400–10 cm−1 (25–1,000 μm) has low energy and may be used for rotational spectroscopy and low frequency vibrations. The region from 2–130 cm−1, bordering the microwave region, is considered the terahertz region and may probe intermolecular vibrations. The names and classifications of these subregions are conventions, and are only loosely based on the relative molecular or electromagnetic properties. Uses and applications. Infrared spectroscopy is a simple and reliable technique widely used in both organic and inorganic chemistry, in research and industry. In catalysis research it is a very useful tool to characterize the catalyst, as well as to detect intermediates and products during the catalytic reaction. It is used in quality control, dynamic measurement, and monitoring applications such as the long-term unattended measurement of CO2 concentrations in greenhouses and growth chambers by infrared gas analyzers. It is also used in forensic analysis in both criminal and civil cases, for example in identifying polymer degradation. It can be used in determining the blood alcohol content of a suspected drunk driver. IR spectroscopy has been successfully used in analysis and identification of pigments in paintings and other art objects such as illuminated manuscripts. Infrared spectroscopy is also useful in measuring the degree of polymerization in polymer manufacture. Changes in the character or quantity of a particular bond are assessed by measuring at a specific frequency over time. Modern research instruments can take infrared measurements across the range of interest as frequently as 32 times a second. This can be done whilst simultaneous measurements are made using other techniques. This makes the observations of chemical reactions and processes quicker and more accurate. 
Infrared spectroscopy has also been successfully utilized in the field of semiconductor microelectronics: for example, infrared spectroscopy can be applied to semiconductors like silicon, gallium arsenide, gallium nitride, zinc selenide, amorphous silicon, silicon nitride, etc. Another important application of infrared spectroscopy is in the food industry to measure the concentration of various compounds in different food products. The instruments are now small, and can be transported, even for use in field trials. Infrared spectroscopy is also used in gas leak detection devices such as the DP-IR and EyeCGAs. These devices detect hydrocarbon gas leaks in the transportation of natural gas and crude oil. In February 2014, NASA announced a greatly upgraded database, based on IR spectroscopy, for tracking polycyclic aromatic hydrocarbons (PAHs) in the universe. According to scientists, more than 20% of the carbon in the universe may be associated with PAHs, possible starting materials for the formation of life. PAHs seem to have been formed shortly after the Big Bang, are widespread throughout the universe, and are associated with new stars and exoplanets. Infrared spectroscopy is an important analysis method in the recycling process of household waste plastics, and a convenient stand-off method to sort plastics of different polymers (PET, HDPE, ...). Other developments include a miniature IR spectrometer that is linked to a cloud-based database and suitable for personal everyday use, and NIR-spectroscopic chips that can be embedded in smartphones and various gadgets. Infrared spectroscopy coupled with machine learning and artificial intelligence also has potential for rapid, accurate and non-invasive sensing of bacteria. The complex chemical composition of bacteria, including nucleic acids, proteins, carbohydrates and fatty acids, results in high-dimensional datasets where the essential features are effectively hidden under the total spectrum. Extraction of the essential features therefore requires advanced statistical methods such as machine learning and deep neural networks. The potential of this technique for bacteria classification has been demonstrated for differentiation at the genus, species and serotype taxonomic levels, and it has also shown promise for antimicrobial susceptibility testing, which is important for many clinical settings where faster susceptibility testing would decrease unnecessary blind treatment with broad-spectrum antibiotics. The main limitation of this technique for clinical applications is its high sensitivity to technical equipment and sample preparation techniques, which makes it difficult to construct large-scale databases. Attempts in this direction have, however, been made by Bruker with the IR Biotyper for food microbiology. Theory. Infrared spectroscopy exploits the fact that molecules absorb frequencies that are characteristic of their structure. These absorptions occur at resonant frequencies, i.e. the frequency of the absorbed radiation matches the vibrational frequency. The energies are affected by the shape of the molecular potential energy surfaces, the masses of the atoms, and the associated vibronic coupling. In particular, in the Born–Oppenheimer and harmonic approximations (i.e.
when the molecular Hamiltonian corresponding to the electronic ground state can be approximated by a harmonic oscillator in the neighbourhood of the equilibrium molecular geometry), the resonant frequencies are associated with the normal modes of vibration corresponding to the molecular electronic ground state potential energy surface. Thus, it depends on both the nature of the bonds and the mass of the atoms that are involved. Using the Schrödinger equation leads to the selection rule for the vibrational quantum number in the system undergoing vibrational changes: formula_0 The compression and extension of a bond may be likened to the behaviour of a spring, but real molecules are hardly perfectly elastic in nature. If a bond between atoms is stretched, for instance, there comes a point at which the bond breaks and the molecule dissociates into atoms. Thus real molecules deviate from perfect harmonic motion and their molecular vibrational motion is anharmonic. An empirical expression that fits the energy curve of a diatomic molecule undergoing anharmonic extension and compression to a good approximation was derived by P.M. Morse, and is called the Morse function. Using the Schrödinger equation leads to the selection rule for the system undergoing vibrational changes : formula_1 Number of vibrational modes. In order for a vibrational mode in a sample to be "IR active", it must be associated with changes in the molecular dipole moment. A permanent dipole is not necessary, as the rule requires only a "change" in dipole moment. A molecule can vibrate in many ways, and each way is called a vibrational mode. For molecules with N number of atoms, geometrically linear molecules have 3"N" – 5 degrees of vibrational modes, whereas nonlinear molecules have 3"N" – 6 degrees of vibrational modes (also called vibrational degrees of freedom). As examples linear carbon dioxide (CO2) has 3 × 3 – 5 = 4, while non-linear water (H2O), has only 3 × 3 – 6 = 3. Simple diatomic molecules have only one bond and only one vibrational band. If the molecule is symmetrical, e.g. N2, the band is not observed in the IR spectrum, but only in the Raman spectrum. Asymmetrical diatomic molecules, e.g. carbon monoxide (CO), absorb in the IR spectrum. More complex molecules have many bonds, and their vibrational spectra are correspondingly more complex, i.e. big molecules have many peaks in their IR spectra. The atoms in a CH2X2 group, commonly found in organic compounds and where X can represent any other atom, can vibrate in nine different ways. Six of these vibrations involve only the CH2 portion: two stretching modes (ν): symmetric (νs) and antisymmetric (νas); and four bending modes: scissoring (δ), rocking (ρ), wagging (ω) and twisting (τ), as shown below. Structures that do not have the two additional X groups attached have fewer modes because some modes are defined by specific relationships to those other attached groups. For example, in water, the rocking, wagging, and twisting modes do not exist because these types of motions of the H atoms represent simple rotation of the whole molecule rather than vibrations within it. In case of more complex molecules, out-of-plane (γ) vibrational modes can be also present. These figures do not represent the "recoil" of the C atoms, which, though necessarily present to balance the overall movements of the molecule, are much smaller than the movements of the lighter H atoms. 
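As a quick check of the 3"N" − 5 / 3"N" − 6 counting rule just described, the short sketch below (an illustrative addition, not part of the article) reproduces the counts quoted above; the atom counts and linearity flags are supplied by hand.

```python
# Illustrative helper (not from the article): count vibrational modes from the
# number of atoms N, using 3N - 5 for linear molecules and 3N - 6 otherwise.
def vibrational_modes(n_atoms, linear):
    return 3 * n_atoms - (5 if linear else 6)

# Atom counts and linearity flags are supplied by hand for this example.
for name, n, linear in [("CO2", 3, True), ("H2O", 3, False),
                        ("N2", 2, True), ("CH2Cl2", 5, False)]:
    print(name, vibrational_modes(n, linear))
# CO2 -> 4, H2O -> 3, N2 -> 1, CH2Cl2 -> 9 (the nine CH2X2 modes listed above)
```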
The simplest and most important or "fundamental" IR bands arise from the excitations of normal modes, the simplest distortions of the molecule, from the ground state with vibrational quantum number "v" = 0 to the first excited state with vibrational quantum number "v" = 1. In some cases, overtone bands are observed. An overtone band arises from the absorption of a photon leading to a direct transition from the ground state to the second excited vibrational state ("v" = 2). Such a band appears at approximately twice the energy of the fundamental band for the same normal mode. Some excitations, so-called "combination modes", involve simultaneous excitation of more than one normal mode. The phenomenon of Fermi resonance can arise when two modes are similar in energy; Fermi resonance results in an unexpected shift in energy and intensity of the bands etc. Practical IR spectroscopy. The infrared spectrum of a sample is recorded by passing a beam of infrared light through the sample. When the frequency of the IR is the same as the vibrational frequency of a bond or collection of bonds, absorption occurs. Examination of the transmitted light reveals how much energy was absorbed at each frequency (or wavelength). This measurement can be achieved by scanning the wavelength range using a monochromator. Alternatively, the entire wavelength range is measured using a Fourier transform instrument and then a transmittance or absorbance spectrum is generated using a dedicated procedure. This technique is commonly used for analyzing samples with covalent bonds. Simple spectra are obtained from samples with few IR active bonds and high levels of purity. More complex molecular structures lead to more absorption bands and more complex spectra. Sample preparation. Gas samples. Gaseous samples require a sample cell with a long pathlength to compensate for the diluteness. The pathlength of the sample cell depends on the concentration of the compound of interest. A simple glass tube with length of 5 to 10 cm equipped with infrared-transparent windows at both ends of the tube can be used for concentrations down to several hundred ppm. Sample gas concentrations well below ppm can be measured with a White's cell in which the infrared light is guided with mirrors to travel through the gas. White's cells are available with optical pathlength starting from 0.5 m up to hundred meters. Liquid samples. Liquid samples can be sandwiched between two plates of a salt (commonly sodium chloride, or common salt, although a number of other salts such as potassium bromide or calcium fluoride are also used). The plates are transparent to the infrared light and do not introduce any lines onto the spectra. With increasing technology in computer filtering and manipulation of the results, samples in solution can now be measured accurately (water produces a broad absorbance across the range of interest, and thus renders the spectra unreadable without this computer treatment). Solid samples. Solid samples can be prepared in a variety of ways. One common method is to crush the sample with an oily mulling agent (usually mineral oil Nujol). A thin film of the mull is applied onto salt plates and measured. The second method is to grind a quantity of the sample with a specially purified salt (usually potassium bromide) finely (to remove scattering effects from large crystals). This powder mixture is then pressed in a mechanical press to form a translucent pellet through which the beam of the spectrometer can pass. 
A third technique is the "cast film" technique, which is used mainly for polymeric materials. The sample is first dissolved in a suitable, non-hygroscopic solvent. A drop of this solution is deposited on the surface of a KBr or NaCl cell. The solution is then evaporated to dryness and the film formed on the cell is analysed directly. Care must be taken to ensure that the film is not too thick, otherwise light cannot pass through. This technique is suitable for qualitative analysis. The final method is to use microtomy to cut a thin (20–100 μm) film from a solid sample. This is one of the most important ways of analysing failed plastic products, for example, because the integrity of the solid is preserved. In photoacoustic spectroscopy the need for sample treatment is minimal. The sample, liquid or solid, is placed into the sample cup, which is inserted into the photoacoustic cell, which is then sealed for the measurement. The sample may be one solid piece, a powder, or basically in any form for the measurement. For example, a piece of rock can be inserted into the sample cup and the spectrum measured from it. A useful way of analyzing solid samples without the need for cutting samples uses ATR or attenuated total reflectance spectroscopy. Using this approach, samples are pressed against the face of a single crystal. The infrared radiation passes through the crystal and only interacts with the sample at the interface between the two materials. Comparing to a reference. It is typical to record a spectrum of both the sample and a "reference". This step controls for a number of variables, e.g. the infrared detector, which may affect the spectrum. The reference measurement makes it possible to eliminate the influence of the instrument. The appropriate "reference" depends on the measurement and its goal. The simplest reference measurement is to simply remove the sample (replacing it by air). However, sometimes a different reference is more useful. For example, if the sample is a dilute solute dissolved in water in a beaker, then a good reference measurement might be to measure pure water in the same beaker. Then the reference measurement would cancel out not only all the instrumental properties (like what light source is used), but also the light-absorbing and light-reflecting properties of the water and beaker, and the final result would just show the properties of the solute (at least approximately). A common way to compare to a reference is sequentially: first measure the reference, then replace the reference by the sample and measure the sample. This technique is not perfectly reliable; if the infrared lamp is a bit brighter during the reference measurement, then a bit dimmer during the sample measurement, the measurement will be distorted. More elaborate methods, such as a "two-beam" setup (see figure), can correct for these types of effects to give very accurate results. The standard addition method can be used to statistically cancel these errors. Nevertheless, among the different absorption-based techniques which are used for gaseous species detection, cavity ring-down spectroscopy (CRDS) can be used as a calibration-free method. The fact that CRDS is based on the measurement of photon lifetimes (and not the laser intensity) means that it needs no calibration or comparison with a reference. Some instruments also automatically identify the substance being measured from thousands of reference spectra held in storage. FTIR.
Fourier transform infrared (FTIR) spectroscopy is a measurement technique that allows one to record infrared spectra. Infrared light is guided through an interferometer and then through the sample (or vice versa). A moving mirror inside the apparatus alters the distribution of infrared light that passes through the interferometer. The signal directly recorded, called an "interferogram", represents light output as a function of mirror position. A data-processing technique called Fourier transform turns this raw data into the desired result (the sample's spectrum): light output as a function of infrared wavelength (or equivalently, wavenumber). As described above, the sample's spectrum is always compared to a reference. An alternate method for acquiring spectra is the "dispersive" or "scanning monochromator" method. In this approach, the sample is irradiated sequentially with various single wavelengths. The dispersive method is more common in UV-Vis spectroscopy, but is less practical in the infrared than the FTIR method. One reason that FTIR is favored is called "Fellgett's advantage" or the "multiplex advantage": The information at all frequencies is collected simultaneously, improving both speed and signal-to-noise ratio. Another is called "Jacquinot's Throughput Advantage": A dispersive measurement requires detecting much lower light levels than an FTIR measurement. There are other advantages, as well as some disadvantages, but virtually all modern infrared spectrometers are FTIR instruments. Infrared microscopy. Various forms of infrared microscopy exist. These include IR versions of sub-diffraction microscopy such as IR NSOM, photothermal microspectroscopy, Nano-FTIR and atomic force microscope based infrared spectroscopy (AFM-IR). Other methods in molecular vibrational spectroscopy. Infrared spectroscopy is not the only method of studying molecular vibrational spectra. Raman spectroscopy involves an inelastic scattering process in which only part of the energy of an incident photon is absorbed by the molecule, and the remaining part is scattered and detected. The energy difference corresponds to absorbed vibrational energy. The selection rules for infrared and for Raman spectroscopy are different at least for some molecular symmetries, so that the two methods are complementary in that they observe vibrations of different symmetries. Another method is electron energy loss spectroscopy (EELS), in which the energy absorbed is provided by an inelastically scattered electron rather than a photon. This method is useful for studying vibrations of molecules adsorbed on a solid surface. Recently, high-resolution EELS (HREELS) has emerged as a technique for performing vibrational spectroscopy in a transmission electron microscope (TEM). In combination with the high spatial resolution of the TEM, unprecedented experiments have been performed, such as nano-scale temperature measurements, mapping of isotopically labeled molecules, mapping of phonon modes in position- and momentum-space, vibrational surface and bulk mode mapping on nanocubes, and investigations of polariton modes in van der Waals crystals. Analysis of vibrational modes that are IR-inactive but appear in inelastic neutron scattering is also possible at high spatial resolution using EELS. Although the spatial resolution of HREELs is very high, the bands are extremely broad compared to other techniques. Computational infrared microscopy. 
By using computer simulations and normal mode analysis it is possible to calculate theoretical frequencies of molecules. Absorption bands. IR spectroscopy is often used to identify structures because functional groups give rise to characteristic bands both in terms of intensity and position (frequency). The positions of these bands are summarized in correlation tables as shown below. Regions. A spectrograph is often interpreted as having two regions. In the functional region there are one to a few troughs per functional group. In the fingerprint region there are many troughs which form an intricate pattern which can be used like a fingerprint to determine the compound. Badger's rule. For many kinds of samples, the assignments are known, i.e. which bond deformation(s) are associated with which frequency. In such cases further information can be gleaned about the strength on a bond, relying on the empirical guideline called Badger's rule. Originally published by Richard McLean Badger in 1934, this rule states that the strength of a bond (in terms of force constant) correlates with the bond length. That is, increase in bond strength leads to corresponding bond shortening and vice versa. Isotope effects. The different isotopes in a particular species may exhibit different fine details in infrared spectroscopy. For example, the O–O stretching frequency (in reciprocal centimeters) of oxyhemocyanin is experimentally determined to be 832 and 788 cm−1 for ν(16O–16O) and ν(18O–18O), respectively. By considering the O–O bond as a spring, the frequency of absorbance can be calculated as a wavenumber [= frequency/(speed of light)] formula_4 where "k" is the spring constant for the bond, "c" is the speed of light, and "μ" is the reduced mass of the A–B system: formula_5 (formula_6 is the mass of atom formula_7). The reduced masses for 16O–16O and 18O–18O can be approximated as 8 and 9 respectively. Thus formula_8 The effect of isotopes, both on the vibration and the decay dynamics, has been found to be stronger than previously thought. In some systems, such as silicon and germanium, the decay of the anti-symmetric stretch mode of interstitial oxygen involves the symmetric stretch mode with a strong isotope dependence. For example, it was shown that for a natural silicon sample, the lifetime of the anti-symmetric vibration is 11.4 ps. When the isotope of one of the silicon atoms is increased to 29Si, the lifetime increases to 19 ps. In similar manner, when the silicon atom is changed to 30Si, the lifetime becomes 27 ps. Two-dimensional IR. Two-dimensional infrared correlation spectroscopy analysis combines multiple samples of infrared spectra to reveal more complex properties. By extending the spectral information of a perturbed sample, spectral analysis is simplified and resolution is enhanced. The 2D synchronous and 2D asynchronous spectra represent a graphical overview of the spectral changes due to a perturbation (such as a changing concentration or changing temperature) as well as the relationship between the spectral changes at two different wavenumbers. Nonlinear two-dimensional infrared spectroscopy is the infrared version of correlation spectroscopy. Nonlinear two-dimensional infrared spectroscopy is a technique that has become available with the development of femtosecond infrared laser pulses. In this experiment, first a set of pump pulses is applied to the sample. This is followed by a waiting time during which the system is allowed to relax. 
The typical waiting time lasts from zero to several picoseconds, and the duration can be controlled with a resolution of tens of femtoseconds. A probe pulse is then applied, resulting in the emission of a signal from the sample. The nonlinear two-dimensional infrared spectrum is a two-dimensional correlation plot of the frequency ω1 that was excited by the initial pump pulses and the frequency ω3 excited by the probe pulse after the waiting time. This allows the observation of coupling between different vibrational modes; because of its extremely fine time resolution, it can be used to monitor molecular dynamics on a picosecond timescale. It is still a largely unexplored technique and is becoming increasingly popular for fundamental research. As with two-dimensional nuclear magnetic resonance (2DNMR) spectroscopy, this technique spreads the spectrum in two dimensions and allows for the observation of cross peaks that contain information on the coupling between different modes. In contrast to 2DNMR, nonlinear two-dimensional infrared spectroscopy also involves the excitation to overtones. These excitations result in excited state absorption peaks located below the diagonal and cross peaks. In 2DNMR, two distinct techniques, COSY and NOESY, are frequently used. The cross peaks in the first are related to the scalar coupling, while in the latter they are related to the spin transfer between different nuclei. In nonlinear two-dimensional infrared spectroscopy, analogs have been drawn to these 2DNMR techniques. Nonlinear two-dimensional infrared spectroscopy with zero waiting time corresponds to COSY, and nonlinear two-dimensional infrared spectroscopy with finite waiting time allowing vibrational population transfer corresponds to NOESY. The COSY variant of nonlinear two-dimensional infrared spectroscopy has been used for determination of the secondary structure content of proteins. See also. &lt;templatestyles src="Div col/styles.css"/&gt; References. &lt;templatestyles src="Reflist/styles.css" /&gt;
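Referring back to the isotope-effect estimate in the section above: because the harmonic wavenumber scales as the inverse square root of the reduced mass, the 18O–18O stretch can be predicted from the observed 16O–16O band. The sketch below is an illustrative addition (not part of the article) using the approximate reduced masses of 8 and 9 quoted there.

```python
import math

# Sketch of the isotope-shift estimate from the "Isotope effects" discussion:
# the harmonic wavenumber scales as 1/sqrt(mu), so the 18O-18O stretch can be
# predicted from the observed 16O-16O band using approximate reduced masses.
nu_16 = 832.0            # observed nu(16O-16O) in cm^-1 (oxyhemocyanin)
mu_16, mu_18 = 8.0, 9.0  # approximate reduced masses quoted in the text

nu_18_predicted = nu_16 * math.sqrt(mu_16 / mu_18)
print(round(nu_18_predicted, 1))   # ~784.4 cm^-1, close to the observed 788 cm^-1
```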
[ { "math_id": 0, "text": "\\bigtriangleup v =\\pm 1" }, { "math_id": 1, "text": "\\bigtriangleup v = \\pm 1, \\pm 2, \\pm 3, \\cdot\\cdot\\cdot" }, { "math_id": 2, "text": "\\geq 1,500 \\text{ cm}^{-1}" }, { "math_id": 3, "text": " < 1,500 \\text{ cm}^{-1}" }, { "math_id": 4, "text": "\\tilde{\\nu} = \\frac{1}{2 \\pi c} \\sqrt{\\frac{k}{\\mu}}" }, { "math_id": 5, "text": "\\mu = \\frac{m_\\mathrm A m_\\mathrm B}{m_\\mathrm A + m_\\mathrm B}" }, { "math_id": 6, "text": "m_i" }, { "math_id": 7, "text": "i" }, { "math_id": 8, "text": "\\frac{\\tilde{\\nu}(^{16}\\mathrm O)}{\\tilde{\\nu}(^{18}\\mathrm O)} = \\sqrt{\\frac{9}{8}} \\approx \\frac{832}{788}." } ]
https://en.wikipedia.org/wiki?curid=15412
15416002
Chicago Mercantile Exchange Hurricane Index
The Chicago Mercantile Exchange Hurricane Index (CMEHI) is an index which describes the potential for damage from an Atlantic hurricane in the United States. The CMEHI is used as the basis for trading hurricane futures and options on the Chicago Mercantile Exchange (CME). It is very similar to the Hurricane Severity Index, which also factors in both the size and the intensity of a hurricane. Index calculation. The CMEHI takes as input two variables: the maximum sustained wind speed of a hurricane in miles per hour and the radius to hurricane-force winds of a hurricane in miles (i.e. how far from the center of the hurricane winds of 74 mph or greater are experienced). If the maximum sustained wind speed is denoted by "V" and the radius to hurricane-force winds is denoted by "R", then the CMEHI is calculated as follows: formula_0 where the subscript "0" denotes reference values. For use on the CME, reference values of 74 mph and 60 miles are used for the maximum sustained wind speed and the radius of hurricane-force winds, respectively. Index history and data. The development of the CMEHI was based on work published by Lakshmi Kantha at the Department of Aerospace Studies at the University of Colorado in Boulder, Colorado. Kantha's paper in Eos developed a number of indices based on various characteristics of hurricanes. The ReAdvisory team at the reinsurance broker RK Carvill used the basics of the Kantha paper to develop an index which became the Carvill Hurricane Index (CHI). In 2009, the scale was renamed the Chicago Mercantile Exchange Hurricane Index (CMEHI). The data for the CMEHI comes from the public advisories issued for named storms by the National Hurricane Center. Specifically, to determine the maximum sustained wind speed, the following wording is looked for: MAXIMUM SUSTAINED WINDS ARE NEAR "XX" MPH. To determine the radius to hurricane-force winds, the following phrase is looked for: HURRICANE FORCE WINDS EXTEND OUTWARD UP TO "XX" MILES. For example, Advisory 23A for Hurricane Katrina at 1 pm Central Daylight Time on Sunday, August 28, 2005, gave a maximum sustained wind speed of 175 mph and a radius of hurricane-force winds of 105 miles, resulting in a CMEHI value of 27.9. Data. Public advisories from the National Hurricane Center are archived back to 1998. The table below lists the CMEHI values for all the landfalling hurricanes since 1998 based on the NHC Public Advisories, and uses alternate sources for hurricanes between 1989 and 1998. Prior to 1998, the data becomes sparse. However, using data from the HURSAT database at NOAA, it is possible to construct a set of CMEHI values for storms back to 1983. Modeled data is available from a number of sources:
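As a worked illustration of the index calculation defined above (an addition for clarity, not one of the data sources), the following sketch evaluates the CMEHI for the Hurricane Katrina advisory quoted in the text; the function and argument names are arbitrary choices for this example.

```python
# Sketch of the CMEHI calculation described above. V is the maximum sustained
# wind speed in mph, R the radius to hurricane-force winds in miles; the CME
# reference values are V0 = 74 mph and R0 = 60 miles.
def cmehi(v_mph, r_miles, v0=74.0, r0=60.0):
    v, r = v_mph / v0, r_miles / r0
    return v**3 + 1.5 * r * v**2

# Hurricane Katrina, Advisory 23A (V = 175 mph, R = 105 miles):
print(round(cmehi(175, 105), 1))   # -> 27.9, matching the value quoted in the text
```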
[ { "math_id": 0, "text": " \\text{CMEHI} = \\left ( \\frac{V}{V_0} \\right )^3 + \\frac{3}{2}\\left ( \\frac{R}{R_0} \\right ) \\left ( \\frac{V}{V_0} \\right )^2 " } ]
https://en.wikipedia.org/wiki?curid=15416002
154163
Curry's paradox
Mathematical paradox named after Haskell Curry Curry's paradox is a paradox in which an arbitrary claim "F" is proved from the mere existence of a sentence "C" that says of itself "If "C", then "F"". The paradox requires only a few apparently-innocuous logical deduction rules. Since "F" is arbitrary, any logic having these rules allows one to prove everything. The paradox may be expressed in natural language and in various logics, including certain forms of set theory, lambda calculus, and combinatory logic. The paradox is named after the logician Haskell Curry, who wrote about it in 1942. It has also been called Löb's paradox after Martin Hugo Löb, due to its relationship to Löb's theorem. In natural language. Claims of the form "if "A", then "B"" are called conditional claims. Curry's paradox uses a particular kind of self-referential conditional sentence, as demonstrated in this example: &lt;templatestyles src="Block indent/styles.css"/&gt;If this sentence is true, then Germany borders China. Even though Germany does not border China, the example sentence certainly is a natural-language sentence, and so the truth of that sentence can be analyzed. The paradox follows from this analysis. The analysis consists of two steps. First, common natural-language proof techniques can be used to prove that the example sentence is true "[steps 1–4 below]". Second, the truth of the sentence can be used to prove that Germany borders China "[steps 5–6]": Because Germany does not border China, this suggests that there has been an error in one of the proof steps. The claim "Germany borders China" could be replaced by any other claim, and the sentence would still be provable. Thus every sentence appears to be provable. Because the proof uses only well-accepted methods of deduction, and because none of these methods appears to be incorrect, this situation is paradoxical. Informal proof. The standard method for proving conditional sentences (sentences of the form "if "A", then "B"") is called "conditional proof". In this method, in order to prove "if "A", then "B"", first "A" is assumed and then with that assumption "B" is shown to be true. To produce Curry's paradox, as described in the two steps above, apply this method to the sentence "if this sentence is true, then Germany borders China". Here "A", "this sentence is true", refers to the overall sentence, while "B" is "Germany borders China". So, assuming "A" is the same as assuming "If "A", then "B"". Therefore, in assuming "A", we have assumed both "A" and "If "A", then "B"". Therefore, "B" is true, by modus ponens, and we have proven "If this sentence is true, then 'Germany borders China' is true." in the usual way, by assuming the hypothesis and deriving the conclusion. Now, because we have proved "If this sentence is true, then 'Germany borders China' is true", then we can again apply modus ponens, because we know that the claim "this sentence is true" is correct. In this way, we can deduce that Germany borders China. In formal logics. Sentential logic. The example in the previous section used unformalized, natural-language reasoning. Curry's paradox also occurs in some varieties of formal logic. In this context, it shows that if we assume there is a formal sentence ("X" → "Y"), where "X" itself is equivalent to ("X" → "Y"), then we can prove "Y" with a formal proof. One example of such a formal proof is as follows. For an explanation of the logic notation used in this section, refer to the list of logic symbols. 
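As a machine-checked companion to the informal argument above (an illustration added here, not part of the original article), the following Lean snippet verifies that an arbitrary proposition "F" follows from any proposition "C" that is equivalent to ""C" → "F"":

```lean
-- Illustrative addition (not from the article): if a proposition C is
-- equivalent to (C → F), then F is provable, for an arbitrary F.
theorem curry_paradox {C F : Prop} (h : C ↔ (C → F)) : F :=
  have hc : C := h.mpr (fun hc => h.mp hc hc)
  h.mp hc hc
```

The two uses of "h" mirror the two steps of the natural-language analysis: first "C" is proved from the assumption that "C" implies the conditional, then modus ponens yields "F".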
An alternative proof is via "Peirce's law". If "X" = "X" → "Y", then ("X" → "Y") → "X". This together with Peirce's law (("X" → "Y") → "X") → "X" and "modus ponens" implies "X" and subsequently "Y" (as in above proof). The above derivation shows that, if "Y" is an unprovable statement in a formal system, then there is no statement "X" in that system such that "X" is equivalent to the implication ("X" → "Y"). In other words, step 1 of the previous proof fails. By contrast, the previous section shows that in natural (unformalized) language, for every natural language statement "Y" there is a natural language statement "Z" such that "Z" is equivalent to ("Z" → "Y") in natural language. Namely, "Z" is "If this sentence is true then "Y"". Naive set theory. Even if the underlying mathematical logic does not admit any self-referential sentences, certain forms of naive set theory are still vulnerable to Curry's paradox. In set theories that allow unrestricted comprehension, we can prove any logical statement "Y" by examining the set formula_0One then shows easily that the statement formula_1 is equivalent to formula_2. From this, formula_3 may be deduced, similarly to the proofs shown above. ("formula_1" stands for "this sentence".) Therefore, in a consistent set theory, the set formula_4 does not exist for false "Y". This can be seen as a variant on Russell's paradox, but is not identical. Some proposals for set theory have attempted to deal with Russell's paradox not by restricting the rule of comprehension, but by restricting the rules of logic so that it tolerates the contradictory nature of the set of all sets that are not members of themselves. The existence of proofs like the one above shows that such a task is not so simple, because at least one of the deduction rules used in the proof above must be omitted or restricted. Lambda calculus with restricted minimal logic. Curry's paradox may be expressed in untyped lambda calculus, enriched by implicational propositional calculus. To cope with the lambda calculus's syntactic restrictions, formula_5 shall denote the implication function taking two parameters, that is, the lambda term formula_6 shall be equivalent to the usual infix notation formula_7. An arbitrary formula formula_8 can be proved by defining a lambda function formula_9, and formula_10, where formula_11 denotes Curry's fixed-point combinator. Then formula_12 by definition of formula_11 and formula_13, hence the above sentential logic proof can be duplicated in the calculus: formula_14 In simply typed lambda calculus, fixed-point combinators cannot be typed and hence are not admitted. Combinatory logic. Curry's paradox may also be expressed in combinatory logic, which has equivalent expressive power to lambda calculus. Any lambda expression may be translated into combinatory logic, so a translation of the implementation of Curry's paradox in lambda calculus would suffice. The above term formula_15 translates to formula_16 in combinatory logic, where formula_17 hence formula_18 Discussion. Curry's paradox can be formulated in any language supporting basic logic operations that also allows a self-recursive function to be constructed as an expression. Two mechanisms that support the construction of the paradox are self-reference (the ability to refer to "this sentence" from within a sentence) and unrestricted comprehension in naive set theory. Natural languages nearly always contain many features that could be used to construct the paradox, as do many other languages. 
Usually, the addition of metaprogramming capabilities to a language will add the features needed. Mathematical logic generally does not allow explicit reference to its own sentences; however, the heart of Gödel's incompleteness theorems is the observation that a different form of self-reference can be added—see Gödel number. The rules used in the construction of the proof are the rule of assumption for conditional proof, the rule of contraction, and modus ponens. These are included in most common logical systems, such as first-order logic. Consequences for some formal logic. In the 1930s, Curry's paradox and the related Kleene–Rosser paradox, from which Curry's paradox was developed, played a major role in showing that various formal logic systems allowing self-recursive expressions are inconsistent. The axiom of unrestricted comprehension is not supported by modern set theory, and Curry's paradox is thus avoided. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "X \\ \\stackrel{\\mathrm{def}}{=}\\ \\left\\{ x \\mid (x \\in x) \\to Y \\right\\}." }, { "math_id": 1, "text": "X\\in X" }, { "math_id": 2, "text": "(X\\in X) \\to Y" }, { "math_id": 3, "text": "Y" }, { "math_id": 4, "text": "\\left\\{ x \\mid (x \\in x) \\to Y \\right\\}" }, { "math_id": 5, "text": "m" }, { "math_id": 6, "text": "((m A) B)" }, { "math_id": 7, "text": "A \\to B" }, { "math_id": 8, "text": "Z" }, { "math_id": 9, "text": "N := \\lambda p.((m p) Z)" }, { "math_id": 10, "text": "X := (\\textsf{Y} N)" }, { "math_id": 11, "text": "\\textsf{Y}" }, { "math_id": 12, "text": "X = (N X) = ((m X) Z)" }, { "math_id": 13, "text": "N" }, { "math_id": 14, "text": "\n\\begin{array}{cll}\n\\vdash & ((m X) X) & \\mbox{ by the minimal logic axiom } A \\to A \\\\\n\\vdash & ((m X) ((m X) Z)) & \\mbox{ since } X = ((m X) Z) \\\\\n\\vdash & ((m X) Z) & \\mbox{ by the theorem } (A \\to (A \\to B)) \\vdash (A \\to B) \\mbox{ of minimal logic } \\\\\n\\vdash & X & \\mbox{ since } X = ((m X) Z) \\\\\n\\vdash & Z & \\mbox{ by modus ponens } A, (A \\to B) \\vdash B \\mbox{ from } X \\mbox{ and } ((m X) Z) \\\\\n\\end{array}\n" }, { "math_id": 15, "text": "X" }, { "math_id": 16, "text": "(r \\ r)" }, { "math_id": 17, "text": "r = \\textsf{S} \\ (\\textsf{S} (\\textsf{K} m) (\\textsf{S} \\textsf{I} \\textsf{I})) \\ (\\textsf{K} Z);" }, { "math_id": 18, "text": "(r \\ r) = ((m (r r)) \\ Z)." } ]
https://en.wikipedia.org/wiki?curid=154163
15417
Intermolecular force
Force of attraction or repulsion between molecules and neighboring particles An intermolecular force (IMF; also secondary force) is the force that mediates interaction between molecules, including the electromagnetic forces of attraction or repulsion which act between atoms and other types of neighbouring particles, e.g. atoms or ions. Intermolecular forces are weak relative to intramolecular forces – the forces which hold a molecule together. For example, the covalent bond, involving sharing electron pairs between atoms, is much stronger than the forces present between neighboring molecules. Both sets of forces are essential parts of force fields frequently used in molecular mechanics. The first reference to the nature of microscopic forces is found in Alexis Clairaut's work "Théorie de la figure de la Terre," published in Paris in 1743. Other scientists who have contributed to the investigation of microscopic forces include: Laplace, Gauss, Maxwell, Boltzmann and Pauling. Attractive intermolecular forces are categorized into the following types: hydrogen bonding, salt bridges, dipole–dipole and similar interactions, ion–dipole and ion–induced dipole forces, and van der Waals forces (the Keesom, Debye and London dispersion forces), each of which is discussed in a section below. Information on intermolecular forces is obtained by macroscopic measurements of properties like viscosity and pressure–volume–temperature (PVT) data. The link to microscopic aspects is given by virial coefficients and intermolecular pair potentials, such as the Mie potential, Buckingham potential or Lennard-Jones potential. In the broadest sense, intermolecular interaction can be understood as covering those interactions between any particles (molecules, atoms, ions and molecular ions) in which chemical bonds – that is, ionic, covalent or metallic bonds – are not formed. In other words, these interactions are significantly weaker than covalent ones and do not lead to a significant restructuring of the electronic structure of the interacting particles. (This is only partially true. For example, all enzymatic and catalytic reactions begin with a weak intermolecular interaction between a substrate and an enzyme or a molecule with a catalyst, but several such weak interactions with the required spatial configuration of the active center of the enzyme lead to a significant restructuring that changes the energy state of the molecules or substrate, which ultimately leads to the breaking of some covalent chemical bonds and the formation of others. Strictly speaking, all enzymatic reactions begin with intermolecular interactions between the substrate and the enzyme; therefore the importance of these interactions is especially great in biochemistry and molecular biology, and they are the basis of enzymology.) Hydrogen bonding. A "hydrogen bond" is an extreme form of dipole–dipole bonding, referring to the attraction between a hydrogen atom that is bonded to an element with high electronegativity, usually nitrogen, oxygen, or fluorine. The hydrogen bond is often described as a strong electrostatic dipole–dipole interaction. However, it also has some features of covalent bonding: it is directional, stronger than a van der Waals force interaction, produces interatomic distances shorter than the sum of their van der Waals radii, and usually involves a limited number of interaction partners, which can be interpreted as a kind of valence. The number of hydrogen bonds formed between molecules is equal to the number of active pairs. The molecule which donates its hydrogen is termed the donor molecule, while the molecule containing the lone pair participating in the hydrogen bonding is termed the acceptor molecule.
The number of active pairs is equal to the smaller of the number of hydrogens the donor has and the number of lone pairs the acceptor has. Though not all are depicted in the diagram, water molecules have four active bonds. The oxygen atom's two lone pairs interact with a hydrogen each, forming two additional hydrogen bonds, and the second hydrogen atom also interacts with a neighbouring oxygen. Intermolecular hydrogen bonding is responsible for the high boiling point of water (100 °C) compared to the other group 16 hydrides, which have little capability to hydrogen bond. Intramolecular hydrogen bonding is partly responsible for the secondary, tertiary, and quaternary structures of proteins and nucleic acids. It also plays an important role in the structure of polymers, both synthetic and natural. Salt bridge. The attraction between cationic and anionic sites is a noncovalent, or intermolecular, interaction which is usually referred to as ion pairing or a salt bridge. It is essentially due to electrostatic forces, although in an aqueous medium the association is driven by entropy and is often even endothermic. Most salts form crystals with characteristic distances between the ions; in contrast to many other noncovalent interactions, salt bridges are not directional and, in the solid state, usually show contacts determined only by the van der Waals radii of the ions. Inorganic as well as organic ions display, in water at moderate ionic strength "I", similar salt bridge association ΔG values of around 5 to 6 kJ/mol for a 1:1 combination of anion and cation, almost independent of the nature (size, polarizability, etc.) of the ions. The ΔG values are additive and are approximately a linear function of the charges; the interaction of, e.g., a doubly charged phosphate anion with a singly charged ammonium cation accounts for about 2 × 5 = 10 kJ/mol. The ΔG values depend on the ionic strength "I" of the solution, as described by the Debye–Hückel equation; at zero ionic strength one observes ΔG = 8 kJ/mol. Dipole–dipole and similar interactions. Dipole–dipole interactions (or Keesom interactions) are electrostatic interactions between molecules which have permanent dipoles. This interaction is stronger than the London forces but is weaker than ion–ion interactions because only partial charges are involved. These interactions tend to align the molecules to increase attraction (reducing potential energy). An example of a dipole–dipole interaction can be seen in hydrogen chloride (HCl): the positive end of a polar molecule will attract the negative end of the other molecule and influence its position. Polar molecules have a net attraction between them. Examples of polar molecules include hydrogen chloride (HCl) and chloroform (CHCl3), and the interaction can be depicted as Hδ+–Clδ− ··· Hδ+–Clδ−. Often molecules contain dipolar groups of atoms, but have no overall dipole moment on the molecule as a whole. This occurs if there is symmetry within the molecule that causes the dipoles to cancel each other out. This occurs in molecules such as tetrachloromethane and carbon dioxide. The dipole–dipole interaction between two individual atoms is usually zero, since atoms rarely carry a permanent dipole. The Keesom interaction is a van der Waals force. It is discussed further in the section "Van der Waals forces". Ion–dipole and ion–induced dipole forces.
Ion–dipole and ion–induced dipole forces are similar to dipole–dipole and dipole–induced dipole interactions but involve ions, instead of only polar and non-polar molecules. Ion–dipole and ion–induced dipole forces are stronger than dipole–dipole interactions because the charge of any ion is much greater than the charge of a dipole moment. Ion–dipole bonding is stronger than hydrogen bonding. An ion–dipole force consists of an ion and a polar molecule interacting. They align so that the positive and negative groups are next to one another, allowing maximum attraction. An important example of this interaction is hydration of ions in water which give rise to hydration enthalpy. The polar water molecules surround themselves around ions in water and the energy released during the process is known as hydration enthalpy. The interaction has its immense importance in justifying the stability of various ions (like Cu2+) in water. An ion–induced dipole force consists of an ion and a non-polar molecule interacting. Like a dipole–induced dipole force, the charge of the ion causes distortion of the electron cloud on the non-polar molecule. Van der Waals forces. The van der Waals forces arise from interaction between uncharged atoms or molecules, leading not only to such phenomena as the cohesion of condensed phases and physical absorption of gases, but also to a universal force of attraction between macroscopic bodies. Keesom force (permanent dipole – permanent dipole). The first contribution to van der Waals forces is due to electrostatic interactions between rotating permanent dipoles, quadrupoles (all molecules with symmetry lower than cubic), and multipoles. It is termed the "Keesom interaction", named after Willem Hendrik Keesom. These forces originate from the attraction between permanent dipoles (dipolar molecules) and are temperature dependent. They consist of attractive interactions between dipoles that are ensemble averaged over different rotational orientations of the dipoles. It is assumed that the molecules are constantly rotating and never get locked into place. This is a good assumption, but at some point molecules do get locked into place. The energy of a Keesom interaction depends on the inverse sixth power of the distance, unlike the interaction energy of two spatially fixed dipoles, which depends on the inverse third power of the distance. The Keesom interaction can only occur among molecules that possess permanent dipole moments, i.e., two polar molecules. Also Keesom interactions are very weak van der Waals interactions and do not occur in aqueous solutions that contain electrolytes. The angle averaged interaction is given by the following equation: formula_0 where "d" = electric dipole moment, formula_1 = permittivity of free space, formula_2 = dielectric constant of surrounding material, "T" = temperature, formula_3 = Boltzmann constant, and "r" = distance between molecules. Debye force (permanent dipoles–induced dipoles). The second contribution is the induction (also termed polarization) or Debye force, arising from interactions between rotating permanent dipoles and from the polarizability of atoms and molecules (induced dipoles). These induced dipoles occur when one molecule with a permanent dipole repels another molecule's electrons. A molecule with permanent dipole can induce a dipole in a similar neighboring molecule and cause mutual attraction. Debye forces cannot occur between atoms. 
The forces between induced and permanent dipoles are not as temperature dependent as Keesom interactions because the induced dipole is free to shift and rotate around the polar molecule. The Debye induction effects and Keesom orientation effects are termed polar interactions. The induced dipole forces appear from the induction (also termed polarization), which is the attractive interaction between a permanent multipole on one molecule and a multipole induced by the former di/multipole on another. This interaction is called the "Debye force", named after Peter J. W. Debye. One example of an induction interaction between a permanent dipole and an induced dipole is the interaction between HCl and Ar. In this system, Ar experiences a dipole as its electrons are attracted (to the H side of HCl) or repelled (from the Cl side) by HCl. The angle averaged interaction is given by the following equation: formula_4 where formula_5 = polarizability. This kind of interaction can be expected between any polar molecule and any non-polar/symmetrical molecule. The induction-interaction force is far weaker than the dipole–dipole interaction, but stronger than the London dispersion force. London dispersion force (fluctuating dipole–induced dipole interaction). The third and dominant contribution is the dispersion or London force (fluctuating dipole–induced dipole), which arises due to the non-zero instantaneous dipole moments of all atoms and molecules. Such polarization can be induced either by a polar molecule or by the repulsion of negatively charged electron clouds in non-polar molecules. Thus, London interactions are caused by random fluctuations of electron density in an electron cloud. An atom with a large number of electrons will have a greater associated London force than an atom with fewer electrons. The dispersion (London) force is the most important component because all materials are polarizable, whereas Keesom and Debye forces require permanent dipoles. The London interaction is universal and is present in atom–atom interactions as well. For various reasons, London interactions (dispersion) have been considered relevant for interactions between macroscopic bodies in condensed systems. Hamaker developed the theory of van der Waals forces between macroscopic bodies in 1937 and showed that the additivity of these interactions renders them considerably more long-range. Relative strength of forces. This comparison is approximate. The actual relative strengths will vary depending on the molecules involved. For instance, the presence of water creates competing interactions that greatly weaken the strength of both ionic and hydrogen bonds. We may consider that, for static systems, ionic bonding and covalent bonding will always be stronger than intermolecular forces in any given substance. But it is not so for big moving systems like enzyme molecules interacting with substrate molecules. Here the numerous intermolecular bonds (most often hydrogen bonds) form an active intermediate state in which these intermolecular bonds cause some of the covalent bonds to be broken while others are formed; in this way the thousands of enzymatic reactions, so important for living organisms, proceed. Effect on the behavior of gases. Intermolecular forces are repulsive at short distances and attractive at long distances (see the Lennard-Jones potential). In a gas, the repulsive force chiefly has the effect of keeping two molecules from occupying the same volume.
This gives a real gas a tendency to occupy a larger volume than an ideal gas at the same temperature and pressure. The attractive force draws molecules closer together and gives a real gas a tendency to occupy a smaller volume than an ideal gas. Which interaction is more important depends on temperature and pressure (see compressibility factor). In a gas, the distances between molecules are generally large, so intermolecular forces have only a small effect. The attractive force is not overcome by the repulsive force, but by the thermal energy of the molecules. Temperature is a measure of thermal energy, so increasing temperature reduces the influence of the attractive force. In contrast, the influence of the repulsive force is essentially unaffected by temperature. When a gas is compressed to increase its density, the influence of the attractive force increases. If the gas is made sufficiently dense, the attractions can become large enough to overcome the tendency of thermal motion to cause the molecules to disperse. Then the gas can condense to form a solid or liquid, i.e., a condensed phase. Lower temperature favors the formation of a condensed phase. In a condensed phase, there is very nearly a balance between the attractive and repulsive forces. Quantum mechanical theories. Intermolecular forces observed between atoms and molecules can be described phenomenologically as occurring between permanent and instantaneous dipoles, as outlined above. Alternatively, one may seek a fundamental, unifying theory that is able to explain the various types of interactions such as hydrogen bonding, van der Waals forces and dipole–dipole interactions. Typically, this is done by applying the ideas of quantum mechanics to molecules, and Rayleigh–Schrödinger perturbation theory has been especially effective in this regard. When applied to existing quantum chemistry methods, such a quantum mechanical explanation of intermolecular interactions provides an array of approximate methods that can be used to analyze intermolecular interactions. One of the most helpful methods in quantum chemistry for visualizing these kinds of intermolecular interactions is the non-covalent interaction index, which is based on the electron density of the system. London dispersion forces play a large role in this. Concerning electron density topology, methods based on electron density gradients have recently emerged, notably with the development of the IBSI (Intrinsic Bond Strength Index), which relies on the IGM (Independent Gradient Model) methodology. See also. &lt;templatestyles src="Div col/styles.css"/&gt; References. &lt;templatestyles src="Reflist/styles.css" /&gt;
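As a numerical illustration of the angle-averaged Keesom expression given earlier (an addition, not from the article), the sketch below evaluates it for two water-like dipoles in vacuum; the dipole moment, separation, and temperature are assumptions chosen only for this example.

```python
import math

# Illustrative evaluation of the angle-averaged Keesom energy
#   V = -d1^2 d2^2 / (24 pi^2 eps0^2 epsr^2 kB T r^6)
# for two water-like dipoles in vacuum. The dipole moment, separation and
# temperature below are assumptions made for this example.
EPS0 = 8.854e-12        # vacuum permittivity, F/m
KB = 1.381e-23          # Boltzmann constant, J/K
DEBYE = 3.336e-30       # 1 debye in C*m
N_A = 6.022e23          # Avogadro constant, 1/mol

d = 1.85 * DEBYE        # dipole moment of a water molecule
eps_r = 1.0             # relative permittivity (vacuum)
T = 298.0               # temperature, K
r = 0.4e-9              # separation between the dipoles, m

V = -(d**2 * d**2) / (24 * math.pi**2 * EPS0**2 * eps_r**2 * KB * T * r**6)
print(V * N_A / 1000)   # roughly -2.8 kJ/mol at this separation
```

The inverse sixth-power dependence on the separation means that doubling "r" weakens this contribution by a factor of 64, which is why Keesom forces only matter at short range.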
[ { "math_id": 0, "text": "\\frac{-d_1^2 d_2^2}{24\\pi^2 \\varepsilon_0^2 \\varepsilon_r^2 k_\\text{B} T r^6} = V," }, { "math_id": 1, "text": "\\varepsilon_0" }, { "math_id": 2, "text": "\\varepsilon_r" }, { "math_id": 3, "text": "k_\\text{B}" }, { "math_id": 4, "text": "\\frac{-d_1^2 \\alpha_2}{16\\pi^2 \\varepsilon_0^2 \\varepsilon_r^2 r^6} = V," }, { "math_id": 5, "text": "\\alpha_2" } ]
https://en.wikipedia.org/wiki?curid=15417
15418290
Thermal emittance
Thermal emittance or thermal emissivity (formula_0) is the ratio of the radiant emittance of heat of a specific object or surface to that of a standard black body. Emissivity and emittivity are both dimensionless quantities given in the range of 0 to 1, representing the comparative/relative emittance with respect to a blackbody operating in similar conditions, but emissivity refers to a material property (of a homogeneous material), while emittivity refers to specific samples or objects. For building products, thermal emittance measurements are taken for wavelengths in the infrared. Determining the thermal emittance and solar reflectance of building materials, especially roofing materials, can be very useful for reducing heating and cooling energy costs in buildings. A combined index, the Solar Reflectance Index (SRI), is often used to determine the overall ability to reflect solar heat and release thermal heat. A roofing surface with high solar reflectance and high thermal emittance will reflect solar heat and release absorbed heat readily. A high thermal emittance material radiates thermal heat back into the atmosphere more readily than one with a low thermal emittance. In common construction applications, the thermal emittance of a surface is usually higher than 0.8–0.85. High thermal emittance materials are essential to passive daytime radiative cooling, which uses surfaces high in thermal emittance and solar reflectance to lower surface temperatures by dissipating heat to outer space. It has been proposed as a solution to energy crises and global warming. References. <templatestyles src="Reflist/styles.css" />
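Because the quantity is defined as a ratio of radiant exitances, it can be estimated from a measured exitance and the Stefan–Boltzmann law for the blackbody reference. The short Python sketch below illustrates this; the surface temperature and measured exitance are hypothetical sample values, not data from any standard.

```python
# A minimal sketch of the defining ratio: emittance is the radiant exitance of a
# surface divided by that of a blackbody at the same temperature (M_bb = sigma*T^4).
SIGMA = 5.670374419e-8  # Stefan-Boltzmann constant, W m^-2 K^-4

def blackbody_exitance(T_kelvin):
    """Radiant exitance of an ideal blackbody at temperature T, in W/m^2."""
    return SIGMA * T_kelvin**4

def thermal_emittance(measured_exitance, T_kelvin):
    """Ratio of the measured radiant exitance to the blackbody value (0..1)."""
    return measured_exitance / blackbody_exitance(T_kelvin)

# e.g. a roof surface at 310 K radiating 460 W/m^2 has an emittance of about 0.88
print(round(thermal_emittance(460.0, 310.0), 2))
```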
[ { "math_id": 0, "text": "\\varepsilon" } ]
https://en.wikipedia.org/wiki?curid=15418290
15419368
Haidao Suanjing
Haidao Suanjing ("The Island Mathematical Manual") was written by the Chinese mathematician Liu Hui of the Three Kingdoms era (220–280) as an extension of chapter 9 of "The Nine Chapters on the Mathematical Art". During the Tang dynasty, this appendix was taken out of "The Nine Chapters on the Mathematical Art" as a separate book, titled "Haidao suanjing" ("Sea Island Mathematical Manual"), named after problem No. 1, "Looking at a sea island." In the early Tang dynasty, "Haidao Suanjing" was selected as one of The Ten Computational Canons, the official mathematical texts for imperial examinations in mathematics. Content. This book contained many practical problems of surveying using geometry. It provided detailed instructions on how to measure distances and heights with tall surveyor's poles and horizontal bars fixed at right angles to them. The units of measurement were the li, the bu (step), the zhang, the chi and the cun, and calculation was carried out with place-value decimal rod calculus. Liu Hui used his theorem on the rectangle inscribed in a right-angle triangle as the mathematical basis for the survey. By invoking his "in-out-complement" principle, he proved that the two rectangles inscribed in the two complementary right-angle triangles have equal area, thus formula_0 Survey of sea island. Now we are surveying a sea island. Set up two 3-zhang poles 1000 steps apart; let the two poles and the island be in a straight line. Step back from the front pole 123 steps. With the eye on ground level, the tip of the pole is on a straight line with the peak of the island. Step back 127 steps from the rear pole. The eye on ground level also aligns with the tip of the pole and the tip of the island. What is the height of the island, and what is the distance to the pole? Answer: The height of the island is 4 li and 55 steps, and it is 102 li and 150 steps from the pole. Method: Let the numerator be the height of the pole multiplied by the separation of the poles, and let the denominator be the difference of the offsets; add the quotient to the height of the pole to obtain the height of the island. As the distance from the front pole to the island could not be measured directly, Liu Hui set up two poles of the same height a known distance apart and made two measurements. Each pole was perpendicular to the ground; viewing from ground level so that the tip of the pole lay on a straight line of sight with the peak of the island, the distance from the eye to the pole was called the front offset = formula_1; similarly, the back offset = formula_2, and the difference of offsets = formula_3. Pole height formula_4 chi; front pole offset formula_5 steps; back pole offset formula_6 steps; difference of offsets = formula_3; distance between the poles = formula_7; height of island = formula_8; distance of front pole to island = formula_9. Using his principle of the inscribed rectangle in a right-angle triangle for formula_10 and formula_11, he obtained: Height of island formula_12 Distance of front pole to island formula_13. Height of a hilltop pine tree. A pine tree of unknown height grows on the hill. Set up two poles of 2 zhang each, one at the front and one at the rear, 50 steps apart. Let the rear pole align with the front pole. Step back 7 steps and 4 chi and view the tip of the pine tree from the ground until it aligns in a straight line with the tip of the pole. Then view the tree trunk; the line of sight intersects the pole at 2 chi and 8 cun from its tip. Step back 8 steps and 5 chi from the rear pole; the view from the ground also aligns with the treetop and the pole top.
What is the height of the pine tree, and what is its distance from the pole? Answer: the height of the pine is 12 zhang 2 chi 8 cun, and the distance of the mountain from the pole is 1 li and (28 + 4/7) steps. Method: Let the numerator be the product of the separation of the poles and the intersection distance from the tip of the pole, and let the denominator be the difference of the offsets. Add the height of the pole to the quotient to obtain the height of the pine tree. The size of a square city wall viewed afar. We are viewing from the south a square city of unknown size. Set up an east gnomon and a west pole, 6 zhang apart, linked with a rope at eye level. Let the east pole be aligned with the NE and SE corners. Move back 5 steps from the north gnomon and watch the NW corner of the city; the line of sight intersects the rope at 2 zhang 2 chi and 6.5 cun from the east end. Step back northward 13 steps and 2 chi and watch the NW corner of the city; the line of sight just aligns with the west pole. What is the length of the square city, and what is its distance to the pole? Answer: The length of the square city is 3 li, 43 and 3/4 steps; the distance of the city to the pole is 4 li and 45 steps. Studies and translations. The 19th-century British Protestant Christian missionary Alexander Wylie, in his article "Jottings on the Sciences of Chinese Mathematics" published in the "North China Herald" in 1852, was the first person to introduce the "Sea Island Mathematical Manual" to the West. In 1912, the Japanese historian of mathematics Yoshio Mikami published "The Development of Mathematics in China and Japan", chapter 5 of which was dedicated to this book. A French mathematician translated the book into French in 1932. In 1986, Ang Tian Se and Frank Swetz translated the "Haidao" into English. After comparing the development of surveying in China and the West, Frank Swetz concluded that "in the endeavours of mathematical surveying, China's accomplishments exceeded those realized in the West by about one thousand years."
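As a check on the sea island computation described above, the following Python sketch reproduces the stated answer. The unit conversions assumed here (1 li = 300 bu, 1 zhang = 10 chi, 1 bu = 6 chi) are the conventional ones and are consistent with the answer given in the text.

```python
# A sketch that reproduces the sea island answer, assuming the conventional unit
# conversions 1 li = 300 bu (steps), 1 zhang = 10 chi and 1 bu = 6 chi.
# All divisions below are exact, so integer arithmetic is used throughout.
BU_PER_LI = 300
CHI_PER_BU = 6
CHI_PER_ZHANG = 10

def li_and_steps(steps):
    """Express a length given in bu (steps) as (li, remaining steps)."""
    return divmod(steps, BU_PER_LI)

pole_height = 3 * CHI_PER_ZHANG // CHI_PER_BU   # 3 zhang = 30 chi = 5 bu
pole_separation = 1000                          # bu
front_offset, back_offset = 123, 127            # bu
diff = back_offset - front_offset               # "difference of offsets"

island_height = pole_height * pole_separation // diff + pole_height
distance_to_front_pole = front_offset * pole_separation // diff

print(li_and_steps(island_height))           # (4, 55)    -> 4 li and 55 steps
print(li_and_steps(distance_to_front_pole))  # (102, 150) -> 102 li and 150 steps
```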
[ { "math_id": 0, "text": "CE \\cdot AF = FB \\cdot BC" }, { "math_id": 1, "text": "DG" }, { "math_id": 2, "text": "FH" }, { "math_id": 3, "text": "FH-DG" }, { "math_id": 4, "text": "CD = 30" }, { "math_id": 5, "text": "DG = 123" }, { "math_id": 6, "text": "FH = 127" }, { "math_id": 7, "text": "DF" }, { "math_id": 8, "text": "AB" }, { "math_id": 9, "text": "BD" }, { "math_id": 10, "text": "\\triangle ABG" }, { "math_id": 11, "text": "\\triangle ABH" }, { "math_id": 12, "text": "AB = \\tfrac{CD \\times DF}{FH-DG}+CD" }, { "math_id": 13, "text": " BD = \\tfrac{DG \\times DF}{FH-DG}" } ]
https://en.wikipedia.org/wiki?curid=15419368
154212
Lenstra elliptic-curve factorization
Algorithm for integer factorization The Lenstra elliptic-curve factorization or the elliptic-curve factorization method (ECM) is a fast, sub-exponential running time algorithm for integer factorization, which employs elliptic curves. For general-purpose factoring, ECM is the third-fastest known factoring method. The second-fastest is the multiple polynomial quadratic sieve, and the fastest is the general number field sieve. The Lenstra elliptic-curve factorization is named after Hendrik Lenstra. Practically speaking, ECM is considered a special-purpose factoring algorithm, as it is most suitable for finding small factors. Currently, it is still the best algorithm for divisors not exceeding 50 to 60 digits, as its running time is dominated by the size of the smallest factor "p" rather than by the size of the number "n" to be factored. Frequently, ECM is used to remove small factors from a very large integer with many factors; if the remaining integer is still composite, then it has only large factors and is factored using general-purpose techniques. The largest factor found using ECM so far has 83 decimal digits and was discovered on 7 September 2013 by R. Propper. Increasing the number of curves tested improves the chances of finding a factor, but the improvement is not linear with the increase in the number of digits. Algorithm. The Lenstra elliptic-curve factorization method to find a factor of a given natural number formula_0 works as follows: The time complexity depends on the size of the number's smallest prime factor and can be represented by exp[(√2 + o(1)) √ln "p" ln ln "p"], where "p" is the smallest factor of "n", or formula_27, in L-notation. Explanation. If "p" and "q" are two prime divisors of "n", then "y"2 = "x"3 + "ax" + "b" (mod "n") implies the same equation also modulo "p" and modulo "q". These two smaller elliptic curves with the formula_28-addition are now genuine groups. If these groups have "N""p" and "Nq" elements, respectively, then for any point "P" on the original curve, by Lagrange's theorem, if "k" > 0 is minimal such that formula_29 on the curve modulo "p", then "k" divides "N""p"; moreover, formula_30. The analogous statement holds for the curve modulo "q". When the elliptic curve is chosen randomly, then "N""p" and "N""q" are random numbers close to "p" + 1 and "q" + 1, respectively (see below). Hence it is unlikely that most of the prime factors of "N""p" and "N""q" are the same, and it is quite likely that while computing "eP", we will encounter some "kP" that is ∞ modulo "p" but not modulo "q", or vice versa. When this is the case, "kP" does not exist on the original curve, and in the computations we found some "v" with either gcd("v","p") = "p" or gcd("v", "q") = "q", but not both. That is, gcd("v", "n") gave a non-trivial factor of "n". ECM is at its core an improvement of the older "p" − 1 algorithm. The "p" − 1 algorithm finds prime factors "p" such that "p" − 1 is b-powersmooth for small values of "b". For any "e", a multiple of "p" − 1, and any "a" relatively prime to "p", by Fermat's little theorem we have "a""e" ≡ 1 (mod "p"). Then gcd("a""e" − 1, "n") is likely to produce a factor of "n". However, the algorithm fails when "p" − 1 has large prime factors, as is the case for numbers containing strong primes, for example. ECM gets around this obstacle by considering the group of a random elliptic curve over the finite field Z"p", rather than considering the multiplicative group of Z"p" which always has order "p" − 1.
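The following Python sketch is an illustrative, unoptimized rendering of the method described above (it is not the article's own pseudocode): a random curve y2 = x3 + ax + b over Z/nZ and a random point P are chosen, P is repeatedly multiplied by 2, 3, ..., B, and a modular inversion that fails exposes gcd(v, n), which with luck is a non-trivial factor. The bound B and the number of curves tried are arbitrary choices for this example.

```python
# An illustrative, unoptimized ECM sketch in affine coordinates, assuming
# Python 3.8+ for pow(v, -1, n). Names and bounds are arbitrary choices.
import math
import random

class FoundFactor(Exception):
    """Raised when a modular inversion fails, carrying gcd(v, n)."""
    def __init__(self, d):
        super().__init__(d)
        self.d = d

def inv_mod(v, n):
    d = math.gcd(v % n, n)
    if d != 1:
        raise FoundFactor(d)           # v is not invertible modulo n
    return pow(v, -1, n)

def ec_add(P, Q, a, n):
    """Group law on y^2 = x^3 + ax + b over Z/nZ; None plays the role of infinity."""
    if P is None:
        return Q
    if Q is None:
        return P
    (x1, y1), (x2, y2) = P, Q
    if x1 == x2 and (y1 + y2) % n == 0:
        return None                    # P + (-P) = infinity
    if P == Q:
        s = (3 * x1 * x1 + a) * inv_mod(2 * y1, n) % n
    else:
        s = (y1 - y2) * inv_mod(x1 - x2, n) % n
    x3 = (s * s - x1 - x2) % n
    return (x3, (s * (x1 - x3) - y1) % n)

def ec_mul(k, P, a, n):
    """Double-and-add scalar multiplication k*P."""
    R = None
    while k:
        if k & 1:
            R = ec_add(R, P, a, n)
        P = ec_add(P, P, a, n)
        k >>= 1
    return R

def lenstra_ecm(n, B=1000, curves=200):
    """Try random curves; return a non-trivial divisor of n, or None."""
    for _ in range(curves):
        x, y, a = (random.randrange(n) for _ in range(3))
        P = (x, y)                     # b = y^2 - x^3 - a*x (mod n), never needed
        try:
            for k in range(2, B + 1):  # P becomes (k!)*P_0 step by step
                P = ec_mul(k, P, a, n)
                if P is None:
                    break              # reached infinity mod n itself; next curve
        except FoundFactor as e:
            if 1 < e.d < n:
                return e.d
    return None

print(lenstra_ecm(455839))  # typically prints 599 or 761
```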
The order of the group of an elliptic curve over Z"p" varies (quite randomly) between "p" + 1 − 2√"p" and "p" + 1 + 2√"p" by Hasse's theorem, and is likely to be smooth for some elliptic curves. Although there is no proof that a smooth group order will be found in the Hasse-interval, by using heuristic probabilistic methods, the Canfield–Erdős–Pomerance theorem with suitably optimized parameter choices, and the L-notation, we can expect to try L[√2/2, √2] curves before getting a smooth group order. This heuristic estimate is very reliable in practice. Example usage. The following example is from , with some details added. We want to factor "n" = 455839. Let's choose the elliptic curve "y"2 = "x"3 + 5"x" – 5, with the point "P" = (1, 1) on it, and let's try to compute (10!)"P". The slope of the tangent line at some point "A"=("x", "y") is "s" = (3"x"2 + 5)/(2"y") (mod n). Using "s" we can compute 2"A". If the value of "s" is of the form "a/b" where "b" &gt; 1 and gcd("a","b") = 1, we have to find the modular inverse of "b". If it does not exist, gcd("n","b") is a non-trivial factor of "n". First we compute 2"P". We have "s"("P") = "s"(1,1) = 4, so the coordinates of 2"P" = ("x′", "y′") are "x′" = "s"2 – 2"x" = 14 and "y′" = "s"("x" – "x′") – "y" = 4(1 – 14) – 1 = –53, all numbers understood (mod "n"). Just to check that this 2"P" is indeed on the curve: (–53)2 = 2809 = 143 + 5·14 – 5. Then we compute 3(2"P"). We have "s"(2"P") = "s"(14,-53) = –593/106 (mod "n"). Using the Euclidean algorithm: 455839 = 4300·106 + 39, then 106 = 2·39 + 28, then 39 = 28 + 11, then 28 = 2·11 + 6, then 11 = 6 + 5, then 6 = 5 + 1. Hence gcd(455839, 106) = 1, and working backwards (a version of the extended Euclidean algorithm): 1 = 6 – 5 = 2·6 – 11 = 2·28 – 5·11 = 7·28 – 5·39 = 7·106 – 19·39 = 81707·106 – 19·455839. Hence 106−1 = 81707 (mod 455839), and –593/106 = –133317 (mod 455839). Given this "s", we can compute the coordinates of 2(2"P"), just as we did above: 4"P" = (259851, 116255). Just to check that this is indeed a point on the curve: "y"2 = 54514 = "x"3 + 5"x" – 5 (mod 455839). After this, we can compute formula_31. We can similarly compute 4!"P", and so on, but 8!"P" requires inverting 599 (mod 455839). The Euclidean algorithm gives that 455839 is divisible by 599, and we have found a factorization 455839 = 599·761. The reason that this worked is that the curve (mod 599) has 640 = 27·5 points, while (mod 761) it has 777 = 3·7·37 points. Moreover, 640 and 777 are the smallest positive integers "k" such that "kP" = ∞ on the curve (mod 599) and (mod 761), respectively. Since 8! is a multiple of 640 but not a multiple of 777, we have 8!"P" = ∞ on the curve (mod 599), but not on the curve (mod 761), hence the repeated addition broke down here, yielding the factorization. The algorithm with projective coordinates. Before considering the projective plane over formula_32 first consider a 'normal' projective space over formula_33: Instead of points, lines through the origin are studied. A line may be represented as a non-zero point formula_34, under an equivalence relation ~ given by: formula_35 ⇔ ∃ c ≠ 0 such that "x' = cx", "y' = cy" and "z' = cz". Under this equivalence relation, the space is called the projective plane formula_36; points, denoted by formula_37, correspond to lines in a three-dimensional space that pass through the origin. Note that the point formula_38 does not exist in this space since to draw a line in any possible direction requires at least one of x',y' or z' ≠ 0. 
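The worked example above can also be replayed with a few lines of Python. This is only a sketch of the same computation, using naive repeated addition rather than the addition chain in the text; it breaks down while computing 8!P because a denominator is divisible by 599, which yields the factorization 455839 = 599 · 761.

```python
# A short replay of the worked example, assuming Python 3.8+ for pow(den, -1, n).
from math import gcd

n, a = 455839, 5          # curve y^2 = x^3 + 5x - 5, point P = (1, 1)

def add(P, Q):
    (x1, y1), (x2, y2) = P, Q
    num, den = (3 * x1 * x1 + a, 2 * y1) if P == Q else (y1 - y2, x1 - x2)
    d = gcd(den % n, n)
    if d != 1:
        raise ZeroDivisionError(d)    # denominator shares a factor with n
    s = num * pow(den % n, -1, n) % n
    x3 = (s * s - x1 - x2) % n
    return (x3, (s * (x1 - x3) - y1) % n)

P = (1, 1)
try:
    for k in range(2, 11):            # aim for (10!)P, as in the text
        Q = P
        for _ in range(k - 1):        # naive k-fold addition: P <- k*P
            Q = add(Q, P)
        P = Q
except ZeroDivisionError as e:
    d = e.args[0]
    print("failure while computing", f"{k}!*P;", "factor:", d, "cofactor:", n // d)
```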
Now observe that almost all lines go through any given reference plane - such as the ("X","Y",1)-plane, whilst the lines precisely parallel to this plane, having coordinates ("X,Y",0), specify directions uniquely, as 'points at infinity' that are used in the affine ("X,Y")-plane it lies above. In the algorithm, only the group structure of an elliptic curve over the field formula_33 is used. Since we do not necessarily need the field formula_33, a finite field will also provide a group structure on an elliptic curve. However, considering the same curve and operation over formula_39 with n not a prime does not give a group. The Elliptic Curve Method makes use of the failure cases of the addition law. We now state the algorithm in projective coordinates. The neutral element is then given by the point at infinity formula_40. Let n be a (positive) integer and consider the elliptic curve (a set of points with some structure on it) formula_41. In point 5 it is said that under the right circumstances a non-trivial divisor can be found. As pointed out in Lenstra's article (Factoring Integers with Elliptic Curves) the addition needs the assumption formula_57. If formula_58 are not formula_40 and distinct (otherwise addition works similarly, but is a little different), then addition works as follows: If addition fails, this will be due to a failure calculating formula_65 In particular, because formula_66 can not always be calculated if n is not prime (and therefore formula_54 is not a field). Without making use of formula_54 being a field, one could calculate: This calculation is always legal and if the gcd of the Z-coordinate with n ≠ (1 or n), so when simplifying fails, a non-trivial divisor of n is found. Twisted Edwards curves. The use of Edwards curves needs fewer modular multiplications and less time than the use of Montgomery curves or Weierstrass curves (other used methods). Using Edwards curves you can also find more primes. Definition. Let formula_19 be a field in which formula_71, and let formula_72 with formula_73. Then the twisted Edwards curve formula_74 is given by formula_75 An Edwards curve is a twisted Edwards curve in which formula_76. There are five known ways to build a set of points on an Edwards curve: the set of affine points, the set of projective points, the set of inverted points, the set of extended points and the set of completed points. The set of affine points is given by: formula_77. The addition law is given by formula_78 The point (0,1) is its neutral element and the inverse of formula_79 is formula_80. The other representations are defined similar to how the projective Weierstrass curve follows from the affine. Any elliptic curve in Edwards form has a point of order 4. So the torsion group of an Edwards curve over formula_81 is isomorphic to either formula_82 or formula_83. The most interesting cases for ECM are formula_84 and formula_83, since they force the group orders of the curve modulo primes to be divisible by 12 and 16 respectively. The following curves have a torsion group isomorphic to formula_84: Every Edwards curve with a point of order 3 can be written in the ways shown above. Curves with torsion group isomorphic to formula_83 and formula_91 may be more efficient at finding primes. Stage 2. The above text is about the first stage of elliptic curve factorisation. There one hopes to find a prime divisor p such that formula_92 is the neutral element of formula_93. 
In the second stage one hopes to have found a prime divisor q such that formula_92 has small prime order in formula_94. We hope the order to be between formula_95 and formula_96, where formula_95 is determined in stage 1 and formula_96 is new stage 2 parameter. Checking for a small order of formula_92, can be done by computing formula_97 modulo n for each prime l. GMP-ECM and EECM-MPFQ. The use of Twisted Edwards elliptic curves, as well as other techniques were used by Bernstein et al to provide an optimized implementation of ECM. Its only drawback is that it works on smaller composite numbers than the more general purpose implementation, GMP-ECM of Zimmerman. Hyperelliptic-curve method (HECM). There are recent developments in using hyperelliptic curves to factor integers. Cosset shows in his article (of 2010) that one can build a hyperelliptic curve with genus two (so a curve formula_98 with f of degree 5), which gives the same result as using two "normal" elliptic curves at the same time. By making use of the Kummer surface, calculation is more efficient. The disadvantages of the hyperelliptic curve (versus an elliptic curve) are compensated by this alternative way of calculating. Therefore, Cosset roughly claims that using hyperelliptic curves for factorization is no worse than using elliptic curves. Quantum version (GEECM). Bernstein, Heninger, Lou, and Valenta suggest GEECM, a quantum version of ECM with Edwards curves. It uses Grover's algorithm to roughly double the length of the primes found compared to standard EECM, assuming a quantum computer with sufficiently many qubits and of comparable speed to the classical computer running EECM. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
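As an illustration of the twisted Edwards addition law quoted in the Definition subsection above, the Python sketch below implements it directly over Z/nZ. The modulus, the parameter a and the base point are arbitrary placeholders (d is solved for so that the point lies on the curve), and, as with the Weierstrass form, a failed inversion would expose a factor of n, which is how these curves are used inside ECM.

```python
# A sketch of the affine twisted Edwards addition law over Z/nZ, assuming
# Python 3.8+ for modular inverses via pow(., -1, n).
from math import gcd

def edwards_add(P, Q, a, d, n):
    """(e,f)+(g,h) = ((eh+fg)/(1+degfh), (fh-aeg)/(1-degfh)) on ax^2+y^2 = 1+dx^2y^2."""
    (e, f), (g, h) = P, Q
    t = d * e * g * f * h % n
    for den in (1 + t, 1 - t):
        common = gcd(den % n, n)
        if common != 1:               # in ECM this is where a factor of n appears
            raise ArithmeticError(f"non-trivial gcd with n: {common}")
    x = (e * h + f * g) * pow((1 + t) % n, -1, n) % n
    y = (f * h - a * e * g) * pow((1 - t) % n, -1, n) % n
    return (x, y)

def edwards_mul(k, P, a, d, n):
    """Double-and-add; (0, 1) is the neutral element."""
    R = (0, 1)
    while k:
        if k & 1:
            R = edwards_add(R, P, a, d, n)
        k >>= 1
        if k:
            P = edwards_add(P, P, a, d, n)
    return R

n = 455839                            # reusing the modulus from the worked example
x0, y0, a = 3, 5, 7                   # hypothetical small values
d = (a * x0 * x0 + y0 * y0 - 1) * pow(x0 * x0 * y0 * y0, -1, n) % n
x2, y2 = edwards_mul(2, (x0, y0), a, d, n)
assert (a * x2 * x2 + y2 * y2 - 1 - d * x2 * x2 * y2 * y2) % n == 0  # still on the curve
print((x2, y2))
```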
[ { "math_id": 0, "text": "n" }, { "math_id": 1, "text": "\\mathbb{Z}/n\\mathbb{Z}" }, { "math_id": 2, "text": "y^2 = x^3 + ax + b \\pmod n" }, { "math_id": 3, "text": "P(x_0,y_0)" }, { "math_id": 4, "text": "x_0,y_0,a \\in \\mathbb{Z}/n\\mathbb{Z}" }, { "math_id": 5, "text": "b = y_0^2 - x_0^3 - ax_0\\pmod n" }, { "math_id": 6, "text": "P" }, { "math_id": 7, "text": "[k]P = P + \\ldots + P \\text{ (k times)}" }, { "math_id": 8, "text": "Q" }, { "math_id": 9, "text": "v \\bmod n" }, { "math_id": 10, "text": "\\gcd(v,n)" }, { "math_id": 11, "text": "u/v" }, { "math_id": 12, "text": "\\gcd(u,v)=1" }, { "math_id": 13, "text": "v = 0 \\bmod n" }, { "math_id": 14, "text": "\\infty" }, { "math_id": 15, "text": "P(x,y), P'(x,-y)" }, { "math_id": 16, "text": "\\gcd(v,n) \\neq 1, n" }, { "math_id": 17, "text": "[k]P" }, { "math_id": 18, "text": "\\bmod n" }, { "math_id": 19, "text": "k" }, { "math_id": 20, "text": "B!" }, { "math_id": 21, "text": "B" }, { "math_id": 22, "text": "[B!]P" }, { "math_id": 23, "text": "[2]P" }, { "math_id": 24, "text": "[3]([2]P)" }, { "math_id": 25, "text": "[4]([3!]P)" }, { "math_id": 26, "text": "\\gcd(v,n) \\neq 1,n" }, { "math_id": 27, "text": "L_p\\left[\\frac{1}{2},\\sqrt{2}\\right]" }, { "math_id": 28, "text": "\\boxplus" }, { "math_id": 29, "text": "kP=\\infty" }, { "math_id": 30, "text": "N_p P=\\infty" }, { "math_id": 31, "text": "3(2P) = 4P \\boxplus 2P" }, { "math_id": 32, "text": "(\\Z/n\\Z)/\\sim," }, { "math_id": 33, "text": "\\mathbb{R}" }, { "math_id": 34, "text": "(x,y,z)" }, { "math_id": 35, "text": "(x,y,z)\\sim(x',y',z')" }, { "math_id": 36, "text": "\\mathbb{P}^2" }, { "math_id": 37, "text": "(x:y:z)" }, { "math_id": 38, "text": "(0:0:0)" }, { "math_id": 39, "text": "(\\Z/n\\Z)/\\sim" }, { "math_id": 40, "text": "(0:1:0)" }, { "math_id": 41, "text": "E(\\Z/n\\Z)=\\{(x:y:z) \\in \\mathbb{P}^2\\ |\\ y^2z=x^3+axz^2+bz^3\\}" }, { "math_id": 42, "text": "x_P,y_P,a \\in \\Z/n\\Z" }, { "math_id": 43, "text": "b = y_P^2 - x_P^3 - ax_P" }, { "math_id": 44, "text": "y^2 = x^3 + ax + b" }, { "math_id": 45, "text": "ZY^2=X^3+aZ^2X+bZ^3" }, { "math_id": 46, "text": "P=(x_P:y_P:1)" }, { "math_id": 47, "text": "B \\in \\Z" }, { "math_id": 48, "text": "\\Z/p\\Z" }, { "math_id": 49, "text": "\\#E(\\Z/p\\Z)" }, { "math_id": 50, "text": "k={\\rm lcm}(1,\\dots ,B)" }, { "math_id": 51, "text": "k P := P + P + \\cdots + P " }, { "math_id": 52, "text": "E(\\Z/n\\Z)" }, { "math_id": 53, "text": "\\#E(\\Z/n\\Z)" }, { "math_id": 54, "text": "\\Z/n\\Z" }, { "math_id": 55, "text": "kP = (0:1:0)" }, { "math_id": 56, "text": "kP." }, { "math_id": 57, "text": "\\gcd(x_1-x_2,n)=1" }, { "math_id": 58, "text": "P,Q" }, { "math_id": 59, "text": " R = P + Q;" }, { "math_id": 60, "text": "P = (x_1:y_1:1), Q = (x_2:y_2:1)" }, { "math_id": 61, "text": "\\lambda =(y_1-y_2) (x_1-x_2)^{-1}" }, { "math_id": 62, "text": " x_3 = \\lambda^2 - x_1 - x_2" }, { "math_id": 63, "text": " y_3 = \\lambda(x_1-x_3) - y_1" }, { "math_id": 64, "text": " R = P + Q = (x_3:y_3:1)" }, { "math_id": 65, "text": "\\lambda." 
}, { "math_id": 66, "text": "(x_1-x_2)^{-1}" }, { "math_id": 67, "text": "\\lambda'=y_1-y_2" }, { "math_id": 68, "text": " x_3' = {\\lambda'}^2 - x_1(x_1-x_2)^2 - x_2(x_1-x_2)^2" }, { "math_id": 69, "text": " y_3' = \\lambda'(x_1(x_1-x_2)^2-x_3') - y_1(x_1-x_2)^3" }, { "math_id": 70, "text": " R = P + Q = (x_3'(x_1-x_2):y_3':(x_1-x_2)^3)" }, { "math_id": 71, "text": "2 \\neq 0" }, { "math_id": 72, "text": "a,d \\in k\\setminus\\{0\\}" }, { "math_id": 73, "text": "a\\neq d" }, { "math_id": 74, "text": "E_{E,a,d}" }, { "math_id": 75, "text": "ax^2+y^2=1+dx^2y^2." }, { "math_id": 76, "text": "a=1" }, { "math_id": 77, "text": "\\{(x,y)\\in \\mathbb{A}^2 : ax^2+y^2=1+dx^2y^2\\}" }, { "math_id": 78, "text": "(e,f),(g,h) \\mapsto \\left(\\frac{eh+fg}{1+ degfh},\\frac{fh-aeg}{1-degfh}\\right)." }, { "math_id": 79, "text": "(e,f)" }, { "math_id": 80, "text": "(-e,f)" }, { "math_id": 81, "text": "\\Q" }, { "math_id": 82, "text": "\\Z/4\\Z, \\Z/8\\Z, \\Z/12\\Z,\\Z/2\\Z \\times \\Z/4\\Z" }, { "math_id": 83, "text": "\\Z/2\\Z\\times \\Z/8\\Z" }, { "math_id": 84, "text": "\\Z/12\\Z" }, { "math_id": 85, "text": "x^2+y^2=1+dx^2y^2" }, { "math_id": 86, "text": "(a,b) " }, { "math_id": 87, "text": "b \\notin\\{-2,-1/2,0,\\pm1\\}, a^2=-(b^2+2b) " }, { "math_id": 88, "text": "d=-(2b+1)/(a^2b^2) " }, { "math_id": 89, "text": "a=\\frac{u^2-1}{u^2+1}, b=-\\frac{(u-1)^2}{u^2+1}" }, { "math_id": 90, "text": "d=\\frac{(u^2+1)^3(u^2-4u+1)}{(u-1)^6(u+1)^2}, u\\notin\\{0,\\pm1\\}." }, { "math_id": 91, "text": "\\Z/2\\Z\\times \\Z/4\\Z" }, { "math_id": 92, "text": "sP" }, { "math_id": 93, "text": "E(\\mathbb{Z}/p\\mathbb{Z})" }, { "math_id": 94, "text": "E(\\mathbb{Z}/q\\mathbb{Z})" }, { "math_id": 95, "text": "B_1" }, { "math_id": 96, "text": "B_2" }, { "math_id": 97, "text": "(ls)P" }, { "math_id": 98, "text": "y^2 = f(x)" } ]
https://en.wikipedia.org/wiki?curid=154212
1542238
Smooth structure
Maximal smooth atlas for a topological manifold In mathematics, a smooth structure on a manifold allows for an unambiguous notion of smooth function. In particular, a smooth structure allows mathematical analysis to be performed on the manifold. Definition. A smooth structure on a manifold formula_0 is a collection of smoothly equivalent smooth atlases. Here, a smooth atlas for a topological manifold formula_0 is an atlas for formula_0 such that each transition function is a smooth map, and two smooth atlases for formula_0 are smoothly equivalent provided their union is again a smooth atlas for formula_1 This gives a natural equivalence relation on the set of smooth atlases. A smooth manifold is a topological manifold formula_0 together with a smooth structure on formula_1 Maximal smooth atlases. By taking the union of all atlases belonging to a smooth structure, we obtain a maximal smooth atlas. This atlas contains every chart that is compatible with the smooth structure. There is a natural one-to-one correspondence between smooth structures and maximal smooth atlases. Thus, we may regard a smooth structure as a maximal smooth atlas and vice versa. In general, computations with the maximal atlas of a manifold are rather unwieldy. For most applications, it suffices to choose a smaller atlas. For example, if the manifold is compact, then one can find an atlas with only finitely many charts. Equivalence of smooth structures. If formula_2 and formula_3 are two maximal atlases on formula_0 the two smooth structures associated to formula_2 and formula_3 are said to be equivalent if there is a diffeomorphism formula_4 such that formula_5 Exotic spheres. John Milnor showed in 1956 that the 7-dimensional sphere admits a smooth structure that is not equivalent to the standard smooth structure. A sphere equipped with a nonstandard smooth structure is called an exotic sphere. E8 manifold. The E8 manifold is an example of a topological manifold that does not admit a smooth structure. This essentially demonstrates that Rokhlin's theorem holds only for smooth structures, and not topological manifolds in general. Related structures. The smoothness requirements on the transition functions can be weakened, so that the transition maps are only required to be formula_6-times continuously differentiable; or strengthened, so that the transition maps are required to be real-analytic. Accordingly, this gives a formula_7 or (real-)analytic structure on the manifold rather than a smooth one. Similarly, a complex structure can be defined by requiring the transition maps to be holomorphic. References. &lt;templatestyles src="Reflist/styles.css" /&gt; &lt;templatestyles src="Refbegin/styles.css" /&gt;
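A standard example, sketched informally in LaTeX below, illustrates the distinction between distinct maximal atlases and equivalent smooth structures; it is an illustrative sketch rather than a formal statement.

```latex
% Two atlases on the real line that determine different maximal smooth atlases,
% yet equivalent smooth structures in the sense defined above.
Let $M = \mathbb{R}$ and consider the single-chart atlases
\[
  \mathcal{A}_1 = \{\, \varphi(x) = x \,\}, \qquad
  \mathcal{A}_2 = \{\, \psi(x) = x^{3} \,\}.
\]
Each is a smooth atlas on its own, but their union is not: the transition map
\[
  \varphi \circ \psi^{-1}(y) = y^{1/3}
\]
is not differentiable at $y = 0$, so $\mathcal{A}_1$ and $\mathcal{A}_2$ belong to
different maximal smooth atlases, i.e.\ they define two distinct smooth structures
on $\mathbb{R}$. Nevertheless, $f(x) = x^{3}$ is a diffeomorphism carrying one
structure to the other ($\mu \circ f = \nu$ with $\mu = \mathcal{A}_1$ and
$\nu = \mathcal{A}_2$), so the two smooth structures are equivalent.
```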
[ { "math_id": 0, "text": "M" }, { "math_id": 1, "text": "M." }, { "math_id": 2, "text": "\\mu" }, { "math_id": 3, "text": "\\nu" }, { "math_id": 4, "text": "f : M \\to M" }, { "math_id": 5, "text": "\\mu \\circ f = \\nu." }, { "math_id": 6, "text": "k" }, { "math_id": 7, "text": "C^k" } ]
https://en.wikipedia.org/wiki?curid=1542238
154227
What the Tortoise Said to Achilles
Allegorical dialogue by Lewis Carroll "What the Tortoise Said to Achilles", written by Lewis Carroll in 1895 for the philosophical journal "Mind", is a brief allegorical dialogue on the foundations of logic. The title alludes to one of Zeno's paradoxes of motion, in which Achilles could never overtake the tortoise in a race. In Carroll's dialogue, the tortoise challenges Achilles to use the force of logic to make him accept the conclusion of a simple deductive argument. Ultimately, Achilles fails, because the clever tortoise leads him into an infinite regression. Summary of the dialogue. The discussion begins by considering the following logical argument: "A": "Things that are equal to the same are equal to each other"; "B": "The two sides of this triangle are things that are equal to the same"; therefore "Z": "The two sides of this triangle are equal to each other". The tortoise accepts premises "A" and "B" as true but not the hypothetical "C": "If A and B are true, Z must be true". The Tortoise claims that it is not "under any logical necessity to accept "Z" as true". The tortoise then challenges Achilles to force it logically to accept "Z" as true. Instead of asking for the tortoise's reasons for not accepting "C", Achilles asks it to accept "C", which it does. After which, Achilles says: "If you accept "A" and "B" and "C", you must accept "Z"." The tortoise responds, "That's another Hypothetical, isn't it? And, if I failed to see its truth, I might accept A and B and C, and still not accept Z, mightn't I?" This further hypothetical, "If A and B and C are true, Z must be true", is labelled "D". Again, instead of requesting reasons for not accepting "D", he asks the tortoise to accept "D". And again, it is "quite willing to grant it", but it still refuses to accept Z. It then tells Achilles to write into his book the next hypothetical, "E": "If A and B and C and D are true, Z must be true". Following this, the Tortoise says: "until I've granted that [i.e., "E"], of course I needn't grant Z. So it's quite a necessary step". With a touch of sadness, Achilles sees the point. The story ends by suggesting that the list of premises continues to grow without end, but without explaining the point of the regress. Explanation. Lewis Carroll was showing that there is a regressive problem that arises from "modus ponens" deductions. formula_0 Or, in words: proposition "P" (is true) implies "Q" (is true), and given "P", therefore "Q". The regress problem arises because a prior principle is required to explain logical principles, here "modus ponens", and once "that" principle is explained, "another" principle is required to explain "that" principle. Thus, if the argumentative chain is to continue, the argument falls into infinite regress. However, if a formal system is introduced whereby "modus ponens" is simply a rule of inference defined within the system, then it can be abided by simply by reasoning within the system. That is not to say that the user reasoning according to this formal system agrees with these rules (consider, for example, the constructivist's rejection of the law of the excluded middle and the dialetheist's rejection of the law of noncontradiction). In this way, formalising logic as a system can be considered as a response to the problem of infinite regress: "modus ponens" is placed as a rule within the system, and the validity of "modus ponens" outside the system is eschewed. In propositional logic, the logical implication is defined as follows: P implies Q if and only if the proposition "not P or Q" is a tautology. Hence "modus ponens", [P ∧ (P → Q)] ⇒ Q, is a valid logical conclusion according to the definition of logical implication just stated. Demonstrating the logical implication simply translates into verifying that the compound truth table produces a tautology. But the tortoise does not accept on faith the rules of propositional logic that this explanation is founded upon. He asks that these rules, too, be subject to logical proof.
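The truth-table check mentioned above is mechanical; the small Python sketch below enumerates all assignments to P and Q and confirms that [P ∧ (P → Q)] → Q is a tautology.

```python
# A small sketch of the truth-table check: enumerate every assignment to P and Q
# and confirm that [P and (P -> Q)] -> Q is true in all four rows.
from itertools import product

def implies(p, q):          # material conditional: "not p or q"
    return (not p) or q

rows = []
for p, q in product([False, True], repeat=2):
    value = implies(p and implies(p, q), q)
    rows.append(value)
    print(f"P={p!s:<5} Q={q!s:<5} [P & (P->Q)] -> Q = {value}")

print("tautology:", all(rows))   # True
```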
The tortoise and Achilles do not agree on any definition of logical implication. In addition, the story hints at problems with the propositional solution. Within the system of propositional logic, no proposition or variable carries any semantic content. The moment any proposition or variable takes on semantic content, the problem arises again because semantic content runs outside the system. Thus, if the solution is to be said to work, then it is to be said to work solely within the given formal system, and not otherwise. Some logicians (Kenneth Ross, Charles Wright) draw a firm distinction between the conditional connective and the implication relation. These logicians use the phrase "not p or q" for the conditional connective and the term "implies" for an asserted implication relation. Discussion. Several philosophers have tried to resolve Carroll's paradox. Bertrand Russell discussed the paradox briefly in § 38 of "The Principles of Mathematics" (1903), distinguishing between "implication" (associated with the form "if "p", then "q""), which he held to be a relation between "unasserted" propositions, and "inference" (associated with the form ""p", therefore "q""), which he held to be a relation between "asserted" propositions; having made this distinction, Russell could deny the Tortoise's attempt to treat "inferring" "Z" from "A" and "B" as equivalent to, or dependent on, agreeing to the "hypothetical" "If "A" and "B" are true, then "Z" is true." Peter Winch, a Wittgensteinian philosopher, discussed the paradox in "The Idea of a Social Science and its Relation to Philosophy" (1958), where he argued that the paradox showed that "the actual process of drawing an inference, which is after all at the heart of logic, is something which cannot be represented as a logical formula ... Learning to infer is not just a matter of being taught about explicit logical relations between propositions; it is learning "to do" something" (p. 57). Winch goes on to suggest that the moral of the dialogue is a particular case of a general lesson, to the effect that the proper "application" of rules governing a form of human activity cannot itself be summed up with a set of "further" rules, and so that "a form of human activity can never be summed up in a set of explicit precepts" (p. 53). Carroll's dialogue is apparently the first description of an obstacle to conventionalism about logical truth, later reworked in more sober philosophical terms by W.V.O. Quine. References. <templatestyles src="Reflist/styles.css" />
[ { "math_id": 0, "text": "\\frac{P \\to Q,\\; P}{\\therefore Q}" } ]
https://en.wikipedia.org/wiki?curid=154227
1543107
Varroa destructor
Species of mite &lt;templatestyles src="Template:Taxobox/core/styles.css" /&gt; Varroa destructor, the Varroa mite, is an external parasitic mite that attacks and feeds on honey bees and is one of the most damaging honey bee pests in the world. A significant mite infestation leads to the death of a honey bee colony, usually in the late autumn through early spring. Without management for Varroa mite, honey bee colonies typically collapse within 2 to 3 years in temperate climates. These mites can infest "Apis mellifera", the western honey bee, and "Apis cerana", the Asian honey bee. Due to very similar physical characteristics, this species was thought to be the closely related "Varroa jacobsoni" prior to 2000, but they were found to be two separate species after DNA analysis. Parasitism of bees by mites in the genus "Varroa" is called varroosis. The Varroa mite can reproduce only in a honey bee colony. It attaches to the body of the bee and weakens the bee. The species is a vector for at least five debilitating bee viruses, including RNA viruses such as the deformed wing virus (DWV). The Varroa mite is the parasite with possibly the most pronounced economic impact on the beekeeping industry and is one of multiple stress factors contributing to the higher levels of bee losses around the world. Varroa mite has also been implicated as one of the multiple causes of colony collapse disorder. Management of this pest focuses on reducing mite numbers through monitoring to avoid significant hive losses or death. 3% of bees infested in a hive is considered an economic threshold where damage is high enough to warrant additional management. Miticides are available, though some are difficult to time correctly while avoiding harm to the hive, and resistance has occurred for others. Screened bottom boards on hives can be used for both monitoring and mite removal, and drone comb that mites prefer can be used as a trap to remove mites from the hive. Honey bee lines in breeding programs also show partial resistance to Varroa mite through increased hygienic behavior that is being incorporated as an additional management strategy. Description and taxonomy. The adult female mite is reddish-brown in color, while the male is white. Varroa mites are flat, having a button shape. They are 1–1.8 mm long and 1.5–2 mm wide, and have eight legs. Varroa mites lack eyes. These mites have curved bodies that allow them to fit between the abdominal segments of adult bees. Host bee species can help differentiate mite species in the genus "Varroa"; both "V. destructor" and "Varroa jacobsoni" parasitize "Apis cerana", the Asian honey bee, but the closely related mite species originally described as "V. jacobsoni" by Anthonie Cornelis Oudemans in 1904 does not attack "Apis mellifera", the western honey bee, unlike "V. destructor". Until 2000, "V. destructor" was thought to be "V. jacobsoni" and resulted in some mislabeling in the scientific literature. The two species cannot be easily distinguished with physical traits and have 99.7% similar genomes, so DNA analysis is required instead. Because the more virulent and damaging species "V. destructor" could not be distinguished at the time, most pre-2000 research on western honey bees that refers to "V. jacobsoni" was actually research on "V. destructor". Other Varroa species "V. underwoodi" and "V. rindereri" can also parasitize honey bee species and can be distinguished from "V. destructor" and "V. 
jacobsoni" with slight differences in body size and setae characteristics, though each of the four species within the Varroa genus have similar physical characteristics. If a Varroa species is found on a western honey bee, it will typically be "V. destructor" except where "V. underwoodi" is present, such as in Papua New Guinea. The name "Varroa mite" is typically used as the common name for "V. destructor" after the species was considered separate from "V. jacobsoni". Varroa mite has two distinct genetic strains from when it switched hosts from the Asian honey bee to the western honey bee: Korean and Japanese. The Korean strain that occurred in 1952 is now found worldwide in high frequencies, while the Japanese strain that started around 1957 occurs in similar areas at much lower frequencies. Varroa mite has low genetic diversity, which is typical for an invasive species undergoing a range or host expansion. Range. Varroa mites originally only occurred in Asia on the Asian honey bee, but this species has been introduced to many other countries on several continents, resulting in disastrous infestations of European honey bees. Introduction data prior to 2000 is unclear due to confusion with "V. jacobsoni". By 2020, "V. destructor" was confirmed to be present throughout North America excluding Greenland, South America, most of Europe and Asia, and portions of Africa. The species was not present in Australia as well as Oman, Congo, Democratic Republic of Congo, and Malawi. It was suspected to not be present in Sudan and Somalia. Mites were found in 2022 in New South Wales in Australia. Life cycle. Female mites enter brood cells to lay eggs on the comb wall after the cell is capped. Eggs are approximately 0.2 to 0.3 mm in diameter and cannot be seen without magnification. These eggs hatch into male and female protonymphs that are both transparent white. Immature mites can only feed on capped brood, so the life cycle cannot be completed during broodless periods. Protonymphs molt into deuteronymphs that more closely resemble the curved body of adults before they molt into adults. Development time from egg to adult is 6–7 days. Males will not leave brood cells and only mate with females present in the brood cell. Adult females can be found feeding both on brood and adult bees. After reaching the adult stage, females will leave the brood cell and enter a phoretic stage where mites attach to adult bees in order to disperse. Mites will feed on adult bees at this time and can be transmitted from bee to bee during this stage. Nurse bees are preferred hosts in order to be moved to new brood cells. Because the nurse bee spends more time around the drone brood (i.e., male bees) rather than the worker brood, many more drones are infected with the mites. These phoretic females can also be transmitted to other hives through bee contact or hive equipment transfer. The phoretic stage can last for 4.5–11 days during brood production periods or up to five to six months when no brood is present in winter months. Female mites have a life expectancy of 27 days when brood is present. After the phoretic stage, female mites leave the adult bee and enter brood cells with bee larvae. Drone cells are preferred over workers. These females are called foundress mites, and they bury themselves in brood food provided by worker bees before the cell is capped. Brood cell capping begins egg cell activation for a foundress mite while she emerges to feed on the larva. 
She will lay a single unfertilized egg after feeding to produce a male mite. After laying this egg, fertilized eggs to produce females are laid approximately once a day. Both the mother and nymphs will feed on the developing pupa. Unless multiple foundress mites are present in a cell, mating occurs between siblings when they reach the adult stage. Once females mate, they are unable to receive additional sperm. Varroa mite's genetic bottleneck is also likely due to its habit of sibling mating. Host interactions. Adult mites feed on both adult bees and bee larvae by sucking on the fat body, an insect organ that stores glycogen and triglycerides with tissue abundant under epidermis and the surrounding internal body cavity. As the fat body is crucial for many bodily functions such as hormone and energy regulation, immunity, and pesticide detoxification, the mite's consumption of the fat body weakens both the adult bee and the larva. Feeding on fat body cells significantly decreases the weight of both the immature and adult bee. Infested adult worker bees have a shorter lifespan than ordinary worker bees, and they furthermore tend to be absent from the colony far more than ordinary bees, which could be due to their reduced ability to navigate or regulate their energy for flight. Infested bees are more likely to wander into other hives and further increase spread. Bees will occasionally drift into other nearby hives, but this rate is higher for Varroa infested bees. Adult mites live and feed under the abdominal plates of adult bees primarily on the underside of the abdominal region on the left side of the bee. Adult mites are more often identified as present in the hive when on top of the adult bee on the thorax, but mites in this location are likely not feeding, but rather attempting to transfer to another bee. Varroa mites have been found on flowers visited by worker bees, which may be a means by which phoretic mites spread short distances when other bees, including from other hives, visit. They have also been found on larvae of some wasp species, such as "Vespula vulgaris", and flower-feeding insects such as the bumblebee, "Bombus pensylvanicus", the scarab beetle, "Phanaeus vindex", and the flower-fly, "Palpada vinetorum". There have not been any indications Varroa mites are able to complete their life cycle on these insects, but instead they become distributed to other areas while a mite is still alive on these insects. Virus transmission. Open wounds left by the feeding become sites for disease and virus infections. The mites are vectors for at least five and possibly up to 18 debilitating bee viruses, including RNA viruses such as the deformed wing virus. Prior to the widespread introduction of Varroa mite, honey bee viruses were typically considered a minor issue. Virus particles are directly injected into the bee's body cavity and mites can also cause immunosuppression that increases infection in host bees. Varroa mites can transmit the following viruses: Deformed wing virus is one of the most prominent and damaging honey bee viruses transmitted by Varroa mites. It causes crumpled deformed wings that resemble sticks and also causes shortened abdomens. Colony collapse disorder. There is some evidence that harm from both Varroa mite and associated viruses they transmit may be a contributing factor that leads to colony collapse disorder (CCD). 
While the exact causes of CCD are not known, infection of colonies from multiple pathogens and interaction of those pathogens with environmental stresses is considered by entomologists to be one of the likely causes of CCD. Most scientists agree there is not a single cause of CCD. Management. Mite populations undergo exponential growth when bee broods are available, and exponential decline when no brood is available. In 12 weeks, the number of mites in a western honey bee hive can multiply by roughly 12. Mites often invade colonies in the summer, leading to high mite populations in autumn. High mite populations in the autumn can cause a crisis when drone rearing ceases and the mites switch to worker larvae, causing a quick population crash and often hive death. Various management methods are used for Varroa mite integrated pest management to monitor and manage damage to hives. Monitoring. Beekeepers use several methods for monitoring levels of Varroa mites in a colony. They involve either estimating the total number of mites in a hive by using a sticky board under a screen bottom board to capture mites falling from the hive or estimating the number of mites per bee with powdered sugar or an ethanol wash. Monitoring for mites with a sticky board can be used to estimate the total number of mites in a colony over 72 hours using the equation: formula_0 where "b" is the number of mites found on the sticky board and "c" is the number of estimated mites in the colony. However, the bee population in a colony also needs to be known to determine what population of mites is tolerable with this method. Mite counts from a known quantity of bees (i.e., 300 bees) collected from brood comb are instead often used to determine mite severity. Mites are dislodged from a sample of bees using non-lethal or lethal means. The bees are shaken in a container of either powdered sugar, alcohol, or soapy water to dislodge and count mites. Powdered sugar is generally considered non-lethal to honey bees, but lethal methods such as alcohol can be more effective at dislodging mites. 3% of the colony being infested is considered an economic threshold damaging enough to warrant further management such as miticides, though beekeepers may use other management tactics in the 0–2% infestation range to keep mite populations low. Chemical measures. Varroa mites can be treated with commercially available acaricides that must be timed carefully to minimize the contamination of honey that might be consumed by humans. The four most common synthetic pesticides used for mite treatments with formulations specific for honey bee colony use are amitraz, coumaphos, and two pyrethroids, flumethrin and tau-fluvalinate, while naturally occurring compounds include formic acid, oxalic acid, essential oils such as thymol and beta acids from hops resin (e.g. lupulone). Many of these products whether synthetic or naturally produced can negatively affect honey bee brood or queens. These products often are applied through impregnated plastic strips or as powders spread between brood frames. Synthetic compounds often have high efficacy against Varroa mites, but resistance has occurred for all of these products in different areas of the world. Pyrethroids are used because a concentration that will kill mites has relatively low toxicity to honey bees. Compounds derived from plants have also been assessed for mite management. Thymol is one essential oil with efficacy against mites, but can be harmful to bees at high temperatures. 
Other essential oils such as garlic, oregano, and neem oil have had some efficacy in field trials, though most essential oils that have been tested have little to no effect. Essential oil use is widespread in hives, with many of those uses being off-label or in violation of pesticide regulations in various countries. Hop beta acids are lupulones obtained from hop plants and have been used in products marketed for mite control. Resistance to pyrethroids has occurred in the Czech Republic and the UK due to a single amino acid substitution in the Varroa mite's genome. Underlying mechanisms for resistance to other pesticides, such as coumaphos, are still unknown. Mechanical control. Varroa mites can also be controlled through nonchemical means. Most of these controls are intended to reduce the mite population to a manageable level, not to eliminate the mites completely. Screened bottom boards are used both for monitoring and can modestly reduce mite populations by 11–14%. Mites which fall from the comb or bees can land outside the hive instead of landing on a solid bottom board that would allow them to easily return to the nest. Varroa mites infest drone cells at a higher rate than worker brood cells, so drone cells can be used as a trap for mite removal. Beekeepers can also introduce a frame with drone foundation cells that encourages bees to construct more drone cells. When the drone cells are capped, the frame can be removed to freeze out mites. This labor-intensive process can reduce mite levels by about 50–93%, but if trap cells are not removed early enough before mites emerge, mite populations can spike. This method is only viable in spring and early summer when drones are produced. Heat is also sometimes used as a control method. The mites cannot survive temperatures near , but brief exposure to these temperatures does not harm honey bees. Devices intended to heat brood to these temperatures are marketed, though the efficacy of many of these products has not been reviewed. Powdered sugar used for estimating mite counts in hives has also been considered for mite management, as it or other inert dusts were believed to initiate grooming responses. Long-term studies do not show any efficacy for reducing mite populations. Genetic methods. Honey bee genetics. The Asian honey bee is more hygienic with respect to Varroa mites than western honey bees, which is in part why mite infestations are more pronounced in western honey bee colonies. Efforts have also been made to breed honey bees for heritable hygienic behavior traits, such as resistance to Varroa mites. Honey bee lines with resistance include Minnesota Hygienic Bees, Russian Honey Bees, and Varroa sensitive hygiene lines. Hygienic behaviors include workers removing pupae heavily infested with mites, which kills both the developing bee and immature mites, and grooming or removal from the brood cell, which increases adult mite mortality. Mites removed from host pupae are at an incorrect life stage to re-infest another pupa. An extended phoretic period in adult female mites has also been noticed. Hygienic behavior is effective against diseases such as American foulbrood or chalkbrood, but the efficacy of this behavior against mites is not well-quantified; this behavior alone does not necessarily result in Varroa mite resistant colonies that can survive without miticide treatments.
The efficacy of this behavior can vary between bee lines in comparison studies with Minnesota hygienic bees removing 66% of infested pupae, while Varroa sensitive hygiene bees removed 85% of infested pupae. There are minimal trade-off costs to hives that have this hygienic behavior, so it is being actively pursued in bee breeding programs. Mite genetics. Researchers have been able to use RNA interference by feeding honey bees mixtures of double-stranded RNA that target expression of several Varroa mite genes, such as cytoskeleton arrangement, transfer of energy, and transcription. This can reduce infestation to 50% without harm to honey bees and is being pursued as an additional control method for Varroa mite. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "p(c)=\\frac{3.76-b}{-0.01}" } ]
https://en.wikipedia.org/wiki?curid=1543107
15433374
Classical Hamiltonian quaternions
Hamilton's original treatment of quaternions William Rowan Hamilton invented quaternions, a mathematical entity in 1843. This article describes Hamilton's original treatment of quaternions, using his notation and terms. Hamilton's treatment is more geometric than the modern approach, which emphasizes quaternions' algebraic properties. Mathematically, quaternions discussed differ from the modern definition only by the terminology which is used. Classical elements of a quaternion. Hamilton defined a quaternion as the quotient of two directed lines in tridimensional space; or, more generally, as the quotient of two vectors. A quaternion can be represented as the sum of a "scalar" and a "vector". It can also be represented as the product of its "tensor" and its "versor". Scalar. Hamilton invented the term "scalars" for the real numbers, because they span the "scale of progression from positive to negative infinity" or because they represent the "comparison of positions upon one common scale". Hamilton regarded ordinary scalar algebra as the science of pure time. Vector. Hamilton defined a vector as "a right line ... having not only length but also direction". Hamilton derived the word "vector" from the Latin "vehere", to carry. Hamilton conceived a vector as the "difference of its two extreme points." For Hamilton, a vector was always a three-dimensional entity, having three co-ordinates relative to any given co-ordinate system, including but not limited to both polar and rectangular systems. He therefore referred to vectors as "triplets". Hamilton defined addition of vectors in geometric terms, by placing the origin of the second vector at the end of the first. He went on to define vector subtraction. By adding a vector to itself multiple times, he defined multiplication of a vector by an integer, then extended this to division by an integer, and multiplication (and division) of a vector by a rational number. Finally, by taking limits, he defined the result of multiplying a vector α by any scalar "x" as a vector β with the same direction as α if "x" is positive; the opposite direction to α if "x" is negative; and a length that is |"x"| times the length of α. The quotient of two parallel or anti-parallel vectors is therefore a scalar with absolute value equal to the ratio of the lengths of the two vectors; the scalar is positive if the vectors are parallel and negative if they are anti-parallel. Unit vector. A unit vector is a vector of length one. Examples of unit vectors include i, j and k. Note: The use of the word "tensor" by Hamilton does not coincide with modern terminology. Hamilton's "tensor" is actually the absolute value on the quaternion algebra, which makes it a normed vector space. Tensor. Hamilton defined "tensor" as a positive numerical quantity, or, more properly, signless number. A tensor can be thought of as a positive scalar. The "tensor" can be thought of as representing a "stretching factor." Hamilton introduced the term tensor in his first book, Lectures on Quaternions, based on lectures he gave shortly after his invention of the quaternions: Each quaternion has a tensor, which is a measure of its magnitude (in the same way as the length of a vector is a measure of a vectors' magnitude). When a quaternion is defined as the quotient of two vectors, its tensor is the ratio of the lengths of these vectors. Versor. A versor is a quaternion with a tensor of 1. Alternatively, a versor can be defined as the quotient of two equal-length vectors. 
In general a versor defines all of the following: a directional axis; the plane normal to that axis; and an angle of rotation. When a versor and a vector which lies in the plane of the versor are multiplied, the result is a new vector of the same length but turned by the angle of the versor. Vector arc. Since every unit vector can be thought of as a point on a unit sphere, and since a versor can be thought of as the quotient of two vectors, a versor has a representative great circle arc, called a vector arc, connecting these two points, drawn from the divisor or lower part of quotient, to the dividend or upper part of the quotient. Right versor. When the arc of a versor has the magnitude of a right angle, then it is called a right versor, a "right radial" or "quadrantal versor". Degenerate forms. There are two special degenerate versor cases, called the unit-scalars. These two scalars (negative and positive unity) can be thought of as scalar quaternions. These two scalars are special limiting cases, corresponding to versors with angles of either zero or π. Unlike other versors, these two cannot be represented by a unique arc. The arc of 1 is a single point, and –1 can be represented by an infinite number of arcs, because there are an infinite number of shortest lines between antipodal points of a sphere. Quaternion. Every quaternion can be decomposed into a scalar and a vector. formula_0 These two operations S and V are called "take the Scalar of" and "take the vector of" a quaternion. The vector part of a quaternion is also called the right part. Every quaternion is equal to a versor multiplied by the tensor of the quaternion. Denoting the versor of a quaternion by formula_1 and the tensor of a quaternion by formula_2 we have formula_3 Right quaternion. A real multiple of a right versor is a right quaternion, thus a right quaternion is a quaternion whose scalar component is zero, formula_4 The angle of a right quaternion is 90 degrees. So a right quaternion has only a vector part and no scalar part. Right quaternions may be put in standard trinomial form. For example, if "Q" is a right quaternion, it may be written as: formula_5 Four operations. Four operations are of fundamental importance in quaternion notation. In particular it is important to understand that there is a single operation of multiplication, a single operation of division, and a single operation each of addition and subtraction. This single multiplication operator can operate on any of the types of mathematical entities. Likewise every kind of entity can be divided, added or subtracted from any other type of entity. Understanding the meaning of the subtraction symbol is critical in quaternion theory, because it leads to an understanding of the concept of a vector. Ordinal operators. The two ordinal operations in classical quaternion notation were addition and subtraction or + and −. These marks are: "...characteristics of synthesis and analysis of a state of progression, according as this state is considered as being derived from, or compared with, some other state of that progression." Subtraction. Subtraction is a type of analysis called ordinal analysis ...let space be now regarded as the field of progression which is to be studied, and POINTS as "states" of that progression. ...I am led to regard the word "Minus," or the mark −, in geometry, as the sign or characteristic of analysis of one geometric position (in space), as compared with another (such) position. 
The comparison of one mathematical point with another with a view to the determination of what may be called their ordinal relation, or their relative position in space... The first example of subtraction is to take the point A to represent the earth, and the point B to represent the sun, then an arrow drawn from A to B represents the act of moving or vection from A to B. B − A this represents the first example in Hamilton's lectures of a vector. In this case the act of traveling from the earth to the sun. Addition. Addition is a type of analysis called ordinal synthesis. Addition of vectors and scalars. Vectors and scalars can be added. When a vector is added to a scalar, a completely different entity, a quaternion is created. A vector plus a scalar is always a quaternion even if the scalar is zero. If the scalar added to the vector is zero then the new quaternion produced is called a right quaternion. It has an angle characteristic of 90 degrees. Cardinal operations. The two Cardinal operations in quaternion notation are geometric multiplication and geometric division and can be written: It is not required to learn the following more advanced terms in order to use division and multiplication. Division is a kind of analysis called cardinal analysis. Multiplication is a kind of synthesis called cardinal synthesis Division. Classically, the quaternion was viewed as the ratio of two vectors, sometimes called a geometric fraction. If OA and OB represent two vectors drawn from the origin O to two other points A and B, then the geometric fraction was written as formula_6 Alternately if the two vectors are represented by α and β the quotient was written as formula_7 or formula_8 Hamilton asserts: "The quotient of two vectors is generally a quaternion". "Lectures on Quaternions" also first introduces the concept of a quaternion as the quotient of two vectors: Logically and by definition, if formula_9 then formula_10. In Hamilton's calculus the product is not commutative, i.e., the order of the variables is of great importance. If the order of q and β were to be reversed the result would not in general be α. The quaternion q can be thought of as an operator that changes β into α, by first rotating it, formerly an act of "version" and then changing the length of it, formerly called an act of "tension". Also by definition the quotient of two vectors is equal to the numerator times the reciprocal of the denominator. Since multiplication of vectors is not commutative, the order cannot be changed in the following expression. formula_11 Again the order of the two quantities on the right hand side is significant. Hardy presents the definition of division in terms of mnemonic cancellation rules. "Canceling being performed by an upward right hand stroke". If alpha and beta are vectors and q is a quaternion such that formula_12 then formula_13 and formula_14 formula_15 and formula_16 are inverse operations, such that: formula_17 and formula_18 and formula_19 An important way to think of q is as an operator that changes β into α, by first rotating it ("version") and then changing its length (tension). formula_20 Division of the unit vectors "i", "j", "k". The results of using the division operator on "i", "j", and "k" was as follows. The reciprocal of a unit vector is the vector reversed. 
formula_21 Because a unit vector and its reciprocal are parallel to each other but point in opposite directions, the product of a unit vector and its reciprocal has a special-case commutative property; for example, if a is any unit vector then: formula_22 However, in the more general case involving more than one vector (whether or not it is a unit vector) the commutative property does not hold. For example: formula_23 ≠ formula_24 This is because k/i is carefully defined as: formula_25. So that: formula_26, however formula_27 Division of two parallel vectors. While in general the quotient of two vectors is a quaternion, if α and β are two parallel vectors then the quotient of these two vectors is a scalar. For example, if formula_28, and formula_29 then formula_30 where a/b is a scalar. Division of two non-parallel vectors. The quotient of two vectors is in general the quaternion: formula_31 formula_32 where α and β are two non-parallel vectors, φ is the angle between them, and ε is a unit vector perpendicular to the plane of the vectors α and β, with its direction given by the standard right hand rule. Multiplication. Classical quaternion notation had only one concept of multiplication. Multiplication of two real numbers, two imaginary numbers or a real number by an imaginary number in the classical notation system was the same operation. Multiplication of a scalar and a vector was accomplished with the same single multiplication operator; multiplication of two vectors used this same operation, as did multiplication of a quaternion and a vector or of two quaternions. Factor × Faciend = Factum Factor, Faciend and Factum. When two quantities are multiplied the first quantity is called the factor, the second quantity is called the faciend and the result is called the factum. Distributive. In classical notation, multiplication was distributive. Understanding this makes it simple to see why the product of two vectors in classical notation produced a quaternion. formula_33 formula_34 Using the quaternion multiplication table we have: formula_35 Then collecting terms: formula_36 The first three terms are a scalar. Letting formula_37 formula_38 formula_39 formula_40 So that the product of two vectors is a quaternion, and can be written in the form: formula_41 Product of two right quaternions. The product of two right quaternions is generally a quaternion. Let α and β be the right quaternions that result from taking the vectors of two quaternions: formula_42 formula_43 Their product in general is a new quaternion represented here by r. This product is not ambiguous because classical notation has only one product. formula_44 Like all quaternions, r may now be decomposed into its vector and scalar parts. formula_45 The terms on the right are called the "scalar of the product" and the "vector of the product" of two right quaternions. Note: the "scalar of the product" corresponds to the Euclidean scalar product of two vectors up to a change of sign (multiplication by −1). Other operators in detail. Scalar and vector. Two important operations in the classical quaternion notation system were S(q) and V(q), which meant take the scalar part of, and take the imaginary part (what Hamilton called the vector part) of, the quaternion. Here S and V are operators acting on q. Parentheses can be omitted in these kinds of expressions without ambiguity. Classical notation: formula_46 Here, "q" is a quaternion. "Sq" is the scalar of the quaternion while Vq is the vector of the quaternion. Conjugate. 
K is the conjugate operator. The conjugate of a quaternion is a quaternion obtained by multiplying the vector part of the first quaternion by minus one. If formula_46 then formula_47. The expression formula_48, means, assign the quaternion r the value of the conjugate of the quaternion q. Tensor. T is the tensor operator. It returns a kind of number called a "tensor". The tensor of a positive scalar is that scalar. The tensor of a negative scalar is the absolute value of the scalar (i.e., without the negative sign). For example: formula_49 formula_50 The tensor of a vector is by definition the length of the vector. For example, if: formula_51 Then formula_52 The tensor of a unit vector is one. Since the versor of a vector is a unit vector, the tensor of the versor of any vector is always equal to unity. Symbolically: formula_53 A quaternion is by definition the quotient of two vectors and the tensor of a quaternion is by definition the quotient of the tensors of these two vectors. In symbols: formula_54 formula_55 From this definition it can be shown that a useful formula for the tensor of a quaternion is: formula_56 It can also be proven from this definition that another formula to obtain the tensor of a quaternion is from the common norm, defined as the product of a quaternion and its conjugate. The square root of the common norm of a quaternion is equal to its tensor. formula_57 A useful identity is that the square of the tensor of a quaternion is equal to the tensor of the square of a quaternion, so that the parentheses may be omitted. formula_58 Also, the tensors of conjugate quaternions are equal. formula_59 The tensor of a quaternion is now called its norm. Axis and angle. Taking the angle of a non-scalar quaternion, resulted in a value greater than zero and less than π. When a non-scalar quaternion is viewed as the quotient of two vectors, then the axis of the quaternion is a unit vector perpendicular to the plane of the two vectors in this original quotient, in a direction specified by the right hand rule. The angle is the angle between the two vectors. In symbols, formula_60 formula_61 Reciprocal. If formula_62 then its reciprocal is defined as formula_63 The expression: formula_64 Reciprocals have many important applications, for example rotations, particularly when q is a versor. A versor has an easy formula for its reciprocal. formula_65 In words the reciprocal of a versor is equal to its conjugate. The dots between operators show the order of the operations, and also help to indicate that S and U for example, are two different operations rather than a single operation named SU. Common norm. The product of a quaternion with its conjugate is its common norm. The operation of taking the common norm of a quaternion is represented with the letter N. By definition the common norm is the product of a quaternion with its conjugate. It can be proven that common norm is equal to the square of the tensor of a quaternion. However this proof does not constitute a definition. Hamilton gives exact, independent definitions of both the common norm and the tensor. This norm was adopted as suggested from the theory of numbers, however to quote Hamilton "they will not often be wanted". The tensor is generally of greater utility. The word norm does not appear in "Lectures on Quaternions", and only twice in the table of contents of "Elements of Quaternions". In symbols: formula_66 The common norm of a versor is always equal to positive unity. formula_67 Biquaternions. 
Geometrically real and geometrically imaginary numbers. In classical quaternion literature the equation formula_68 was thought to have infinitely many solutions that were called geometrically real. These solutions are the unit vectors that form the surface of a unit sphere. A geometrically real quaternion is one that can be written as a linear combination of "i", "j" and "k", such that the squares of the coefficients add up to one. Hamilton demonstrated that there had to be additional roots of this equation in addition to the geometrically real roots. Given the existence of the imaginary scalar, a number of expressions can be written and given proper names. All of these were part of Hamilton's original quaternion calculus. In symbols: formula_69 where q and q′ are real quaternions, and the square root of minus one is the imaginary of ordinary algebra, and are called an imaginary or symbolical roots and not a geometrically real vector quantity. Imaginary scalar. Geometrically Imaginary quantities are additional roots of the above equation of a purely symbolic nature. In article 214 of "Elements" Hamilton proves that if there is an i, j and k there also has to be another quantity h which is an imaginary scalar, which he observes should have already occurred to anyone who had read the preceding articles with attention. Article 149 of "Elements" is about Geometrically Imaginary numbers and includes a footnote introducing the term "biquaternion". The terms "imaginary of ordinary algebra" and "scalar imaginary" are sometimes used for these geometrically imaginary quantities. "Geometrically Imaginary" roots to an equation were interpreted in classical thinking as geometrically impossible situations. Article 214 of "Elements of Quaternions" explores the example of the equation of a line and a circle that do not intersect, as being indicated by the equation having only a geometrically imaginary root. In Hamilton's later writings he proposed using the letter h to denote the imaginary scalar Biquaternion. On page 665 of "Elements of Quaternions" Hamilton defines a biquaternion to be a quaternion with complex number coefficients. The scalar part of a biquaternion is then a complex number called a biscalar. The vector part of a biquaternion is a bivector consisting of three complex components. The biquaternions are then the complexification of the original (real) quaternions. Other double quaternions. Hamilton invented the term "associative" to distinguish between the imaginary scalar (known by now as a complex number) which is both commutative and associative, and four other possible roots of negative unity which he designated L, M, N and O, mentioning them briefly in appendix B of "Lectures on Quaternions" and in private letters. However, non-associative roots of minus one do not appear in "Elements of Quaternions". Hamilton died before he worked on these strange entities. His son claimed them to be "bows reserved for the hands of another Ulysses". Footnotes. &lt;templatestyles src="Reflist/styles.css" /&gt;
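The operators described above lend themselves to a short computational illustration. The following Python sketch (an added illustration, not part of Hamilton's text) stores a quaternion w + xi + yj + zk as a plain 4-tuple (w, x, y, z) and mirrors the classical operators S (scalar), V (vector), K (conjugate), T (tensor) and N (common norm); the last line checks the identity that the square of the tensor equals the common norm. The sample quaternion is made up.

```python
import math

# Hedged sketch of Hamilton's operators on a quaternion q = w + xi + yj + zk,
# represented as the tuple (w, x, y, z). Names follow the classical notation.
def S(q): return (q[0], 0, 0, 0)                     # scalar part
def V(q): return (0, q[1], q[2], q[3])               # vector (right) part
def K(q): return (q[0], -q[1], -q[2], -q[3])         # conjugate: Kq = Sq - Vq
def T(q): return math.sqrt(sum(c * c for c in q))    # tensor: sqrt(w^2 + x^2 + y^2 + z^2)
def N(q): return sum(c * c for c in q)               # common norm: Nq = (Tq)^2

q = (1.0, 2.0, -2.0, 4.0)
print(S(q), V(q), K(q))
print(T(q) ** 2, N(q))   # equal: the square of the tensor is the common norm
```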
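A second sketch in the same representation (again only an illustration, with made-up sample values) implements the single classical product and the reciprocal, and checks two facts used above: the product of two vectors is a quaternion whose scalar is −(ae + bf + cg) and whose vector part is (bg − cf, ce − ag, af − be), and the quotient of two vectors, formed as α times the reciprocal of β, recovers α when multiplied on the right by β.

```python
# Hedged sketch of the single classical multiplication and of division as
# "numerator times the reciprocal of the denominator". Quaternions are (w, x, y, z).
def qmul(p, q):
    a, b, c, d = p
    e, f, g, h = q
    return (a*e - b*f - c*g - d*h,
            a*f + b*e + c*h - d*g,
            a*g - b*h + c*e + d*f,
            a*h + b*g - c*f + d*e)

def qinv(q):
    w, x, y, z = q
    n = w*w + x*x + y*y + z*z            # the common norm N(q)
    return (w/n, -x/n, -y/n, -z/n)       # conjugate divided by the common norm

i, j = (0, 1, 0, 0), (0, 0, 1, 0)
print(qmul(i, j))        # (0, 0, 0, 1): ij = k
print(qmul(i, i))        # (-1, 0, 0, 0): i squared is -1

# The quotient of two vectors is in general a quaternion q with q * beta = alpha.
alpha = (0.0, 1.0, 2.0, 3.0)             # the vector i + 2j + 3k
beta  = (0.0, -1.0, 4.0, 0.5)            # another (made-up) vector
q = qmul(alpha, qinv(beta))
print(qmul(q, beta))                     # recovers alpha, up to rounding
```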
[ { "math_id": 0, "text": "q = \\mathbf{S}(q) + \\mathbf{V}(q)" }, { "math_id": 1, "text": "\\mathbf{U}q" }, { "math_id": 2, "text": "\\mathbf{T}q" }, { "math_id": 3, "text": "q=\\mathbf{T}q\\mathbf{U}q" }, { "math_id": 4, "text": "S(q) = 0 ." }, { "math_id": 5, "text": "Q = xi + yj + zk" }, { "math_id": 6, "text": "OA:OB" }, { "math_id": 7, "text": "\\alpha\\div\\beta" }, { "math_id": 8, "text": "\\frac{\\alpha}{\\beta}" }, { "math_id": 9, "text": "\\frac{\\alpha}{\\beta}=q" }, { "math_id": 10, "text": "{q}\\times{\\beta} = \\alpha." }, { "math_id": 11, "text": "\\frac{\\alpha}{\\beta}=\\,{\\alpha}\\times\\frac{1}{\\beta}" }, { "math_id": 12, "text": "\\frac{\\alpha}{\\beta} = q " }, { "math_id": 13, "text": "\\alpha\\beta^{-1}=q" }, { "math_id": 14, "text": "\\frac{\\alpha}{\\beta}.\\beta = \\alpha\\beta^{-1}.\\beta=\\alpha" }, { "math_id": 15, "text": "\\times" }, { "math_id": 16, "text": "\\div" }, { "math_id": 17, "text": "\\beta\\div\\alpha\\times\\alpha=\\beta" }, { "math_id": 18, "text": "q\\times\\alpha\\div\\alpha=q" }, { "math_id": 19, "text": "\\gamma=(\\gamma\\div\\beta)\\times(\\beta\\div\\alpha)\\times\\alpha" }, { "math_id": 20, "text": "\\gamma\\div\\alpha=(\\gamma\\div\\beta)\\times(\\beta\\div\\alpha)" }, { "math_id": 21, "text": "\\frac{1}{i} = i^{-1} = -i" }, { "math_id": 22, "text": "\\frac{1}{a}a = (-a)a = 1 = a(-a) = a\\frac{1}{a}." }, { "math_id": 23, "text": "i\\frac{k}{i}" }, { "math_id": 24, "text": "\\frac{k}{i} i." }, { "math_id": 25, "text": "\\frac{k}{i} = k\\frac{1}{i} = ki^{-1} = k(-i) = -(ki) = -(j) = -j" }, { "math_id": 26, "text": "i\\frac{k}{i} = i(-j) = -k" }, { "math_id": 27, "text": "\\frac{k}{i} i= (-j)i = -(ji) = -(-k) = k" }, { "math_id": 28, "text": "\\alpha = ai" }, { "math_id": 29, "text": "\\beta = bi" }, { "math_id": 30, "text": "\\alpha\\div\\beta = \\frac{\\alpha}{\\beta} = \\frac{ai}{bi} = \\frac{a}{b}" }, { "math_id": 31, "text": "q =\\frac{\\alpha}{\\beta}" }, { "math_id": 32, "text": "=\\frac{T\\alpha}{T\\beta}(\\cos\\phi + \\epsilon\\sin\\phi)" }, { "math_id": 33, "text": "q=(ai + bj + ck)\\times(ei + fj + gk)" }, { "math_id": 34, "text": "q = ae({i}\\times{i}) + af({i}\\times{j}) + ag({i}\\times{k}) + be({j}\\times{i}) + bf({j}\\times{j}) + bg({j}\\times{k}) + ce({k}\\times{i}) + cf({k}\\times{j}) + cg({k}\\times{k})" }, { "math_id": 35, "text": "q = ae(-1) + af(+k) + ag(-j) + be(-k) + bf(-1) + bg(+i) + ce(+j) + cf(-i) + cg(-1)" }, { "math_id": 36, "text": "q = -ae - bf - cg + (bg-cf)i + (ce - ag)j + (af-be)k" }, { "math_id": 37, "text": "w = -ae - bf - cg" }, { "math_id": 38, "text": "x = (bg-cf)" }, { "math_id": 39, "text": "y = (ce - ag)" }, { "math_id": 40, "text": "z = (af-be)" }, { "math_id": 41, "text": "q = w + xi + yj + zk" }, { "math_id": 42, "text": "\\alpha=\\mathbf{V}p" }, { "math_id": 43, "text": "\\beta=\\mathbf{V}q" }, { "math_id": 44, "text": "r =\\,\\alpha\\beta;" }, { "math_id": 45, "text": "r=\\mathbf{S}r+\\mathbf{V}r" }, { "math_id": 46, "text": "q =\\,\\mathbf{S}q + \\mathbf{V}q" }, { "math_id": 47, "text": "\\mathbf{K}q=\\mathbf{S}\\,q - \\mathbf{V}q" }, { "math_id": 48, "text": "r=\\,\\mathbf{K}q" }, { "math_id": 49, "text": "\\mathbf{T}(5) = 5 " }, { "math_id": 50, "text": "\\mathbf{T}(-5)= 5" }, { "math_id": 51, "text": "\\alpha = xi + yj + zk" }, { "math_id": 52, "text": "\\mathbf{T}\\alpha = \\sqrt{x^2+y^2+z^2}" }, { "math_id": 53, "text": "\\mathbf{TU}\\alpha = 1" }, { "math_id": 54, "text": "q = \\frac{\\alpha}{\\beta}." 
}, { "math_id": 55, "text": "\\mathbf{T}q = \\frac{\\mathbf{T}\\alpha}{\\mathbf{T}\\beta}." }, { "math_id": 56, "text": "\\mathbf{T}q=\\sqrt{w^2+x^2+y^2+z^2}" }, { "math_id": 57, "text": "\\mathbf{T}q=\\sqrt{qKq}" }, { "math_id": 58, "text": "(\\mathbf{T}q)^2 = \\mathbf{T}(q^2) = \\mathbf{T}q^2" }, { "math_id": 59, "text": "\\mathbf{TK}q = \\mathbf{T}q" }, { "math_id": 60, "text": "u = Ax.q" }, { "math_id": 61, "text": "\\theta = \\angle q" }, { "math_id": 62, "text": "q=\\frac{\\alpha}{\\beta}" }, { "math_id": 63, "text": "\\frac{1}{q}=q^{-1} = \\frac{\\beta}{\\alpha}" }, { "math_id": 64, "text": "{q}\\times{\\alpha}\\times\\frac{1}{q}" }, { "math_id": 65, "text": "\\frac{1}{(\\mathbf{U}q)}= \\mathbf{S.U}q - \\mathbf{V.U}q = \\mathbf{K.U}q" }, { "math_id": 66, "text": "\\mathbf{N}q=\\,q\\mathbf{K}q =\\,(\\mathbf{T}q)^2" }, { "math_id": 67, "text": "\\mathbf{NU}q = \\mathbf{U}q.\\mathbf{KU}q = 1" }, { "math_id": 68, "text": "q^2=-1" }, { "math_id": 69, "text": "q + q'\\sqrt{-1}" } ]
https://en.wikipedia.org/wiki?curid=15433374
1543358
Darboux's theorem
Foundational result in symplectic geometry In differential geometry, a field in mathematics, Darboux's theorem is a theorem providing a normal form for special classes of differential 1-forms, partially generalizing the Frobenius integration theorem. It is named after Jean Gaston Darboux, who established it as the solution of the Pfaff problem. It is a foundational result in several fields, the chief among them being symplectic geometry. Indeed, one of its many consequences is that any two symplectic manifolds of the same dimension are locally symplectomorphic to one another. That is, every formula_0-dimensional symplectic manifold can be made to look locally like the linear symplectic space formula_1 with its canonical symplectic form. There is also an analogous consequence of the theorem applied to contact geometry. Statement. Suppose that formula_2 is a differential 1-form on an "formula_3"-dimensional manifold, such that formula_4 has constant rank "formula_5". Then: if formula_6 everywhere, there is a local system of coordinates formula_7 in which formula_8; if, instead, formula_9 everywhere, there is a local system of coordinates formula_7 in which formula_10. Darboux's original proof used induction on "formula_5" and it can be equivalently presented in terms of distributions or of differential ideals. Frobenius' theorem. Darboux's theorem for "formula_11" ensures that any 1-form "formula_12" such that "formula_13" can be written as "formula_14" in some coordinate system formula_15. This recovers one of the formulations of the Frobenius theorem in terms of differential forms: if formula_16 is the differential ideal generated by formula_17, then "formula_13" implies the existence of a coordinate system formula_15 where formula_16 is actually generated by formula_18. Darboux's theorem for symplectic manifolds. Suppose that formula_19 is a symplectic 2-form on an formula_20-dimensional manifold "formula_21". In a neighborhood of each point "formula_5" of "formula_21", by the Poincaré lemma, there is a 1-form formula_2 with formula_22. Moreover, formula_2 satisfies the first set of hypotheses in Darboux's theorem, and so locally there is a coordinate chart "formula_23" near "formula_5" in which formula_24 Taking an exterior derivative now shows formula_25 The chart "formula_23" is said to be a Darboux chart around "formula_5". The manifold "formula_21" can be covered by such charts. To state this differently, identify formula_26 with formula_27 by letting formula_28. If formula_29 is a Darboux chart, then formula_30 can be written as the pullback of the standard symplectic form formula_31 on formula_32: formula_33 A modern proof of this result, without employing Darboux's general statement on 1-forms, is done using Moser's trick. Comparison with Riemannian geometry. Darboux's theorem for symplectic manifolds implies that there are no local invariants in symplectic geometry: a Darboux basis can always be taken, valid near any given point. This is in marked contrast to the situation in Riemannian geometry, where the curvature is a local invariant, an obstruction to the metric being locally a sum of squares of coordinate differentials. The difference is that Darboux's theorem states that formula_30 can be made to take the standard form in an "entire neighborhood" around "formula_5". In Riemannian geometry, the metric can always be made to take the standard form "at" any given point, but not always in a neighborhood around that point. Darboux's theorem for contact manifolds. Another particular case is recovered when formula_34; if formula_9 everywhere, then formula_17 is a contact form. A simpler proof can be given, as in the case of symplectic structures, by using Moser's trick. 
The Darboux-Weinstein theorem. Alan Weinstein showed that Darboux's theorem for symplectic manifolds can be strengthened to hold in a neighborhood of a submanifold: "Let formula_35 be a smooth manifold endowed with two symplectic forms formula_36 and formula_37, and let formula_38 be a closed submanifold. If formula_39, then there is a neighborhood formula_40 of formula_41 in formula_35 and a diffeomorphism formula_42 such that formula_43." The standard Darboux theorem is recovered when formula_41 is a point and formula_37 is the standard symplectic structure on a coordinate chart. This theorem also holds for infinite-dimensional Banach manifolds.
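As a small numerical aside (an added sketch, not part of the article), the canonical symplectic form that Darboux coordinates produce on the model space can be written as a block matrix, and a linear map rotating every (x_i, y_i) plane by the same angle preserves it; this is the linear picture behind the statement that symplectic manifolds of a given dimension all look alike locally.

```python
import numpy as np

# Canonical symplectic form on R^(2n) in Darboux coordinates, as the matrix J with
# omega(u, v) = u^T J v. A rotation by the same angle in every (x_i, y_i) plane is a
# linear symplectomorphism: it satisfies M^T J M = J, so it preserves the form.
n = 2
Z, I = np.zeros((n, n)), np.eye(n)
J = np.block([[Z, I], [-I, Z]])

theta = 0.7
c, s = np.cos(theta), np.sin(theta)
M = np.block([[c * I, s * I], [-s * I, c * I]])

print(np.allclose(M.T @ J @ M, J))   # True: the standard form is preserved
```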
[ { "math_id": 0, "text": "2n " }, { "math_id": 1, "text": "\\mathbb{C}^n " }, { "math_id": 2, "text": "\\theta " }, { "math_id": 3, "text": "n " }, { "math_id": 4, "text": "\\mathrm{d} \\theta " }, { "math_id": 5, "text": "p " }, { "math_id": 6, "text": " \\theta \\wedge \\left(\\mathrm{d}\\theta\\right)^p = 0 " }, { "math_id": 7, "text": " (x_1,\\ldots,x_{n-p},y_1,\\ldots, y_p) " }, { "math_id": 8, "text": " \\theta=x_1\\,\\mathrm{d}y_1+\\ldots + x_p\\,\\mathrm{d}y_p; " }, { "math_id": 9, "text": " \\theta \\wedge \\left( \\mathrm{d} \\theta \\right)^p \\ne 0 " }, { "math_id": 10, "text": " \\theta=x_1\\,\\mathrm{d}y_1+\\ldots + x_p\\,\\mathrm{d}y_p + \\mathrm{d}x_{p+1}." }, { "math_id": 11, "text": "p=0 " }, { "math_id": 12, "text": "\\theta \\neq 0 " }, { "math_id": 13, "text": "\\theta \\wedge d\\theta = 0 " }, { "math_id": 14, "text": "\\theta = dx_1 " }, { "math_id": 15, "text": " (x_1,\\ldots,x_n) " }, { "math_id": 16, "text": " \\mathcal{I} \\subset \\Omega^*(M) " }, { "math_id": 17, "text": " \\theta " }, { "math_id": 18, "text": " d x_1 " }, { "math_id": 19, "text": "\\omega " }, { "math_id": 20, "text": "n=2m " }, { "math_id": 21, "text": "M " }, { "math_id": 22, "text": "\\mathrm{d} \\theta = \\omega" }, { "math_id": 23, "text": "U " }, { "math_id": 24, "text": " \\theta=x_1\\,\\mathrm{d}y_1+\\ldots + x_m\\,\\mathrm{d}y_m. " }, { "math_id": 25, "text": " \\omega = \\mathrm{d} \\theta = \\mathrm{d}x_1 \\wedge \\mathrm{d}y_1 + \\ldots + \\mathrm{d}x_m \\wedge \\mathrm{d}y_m." }, { "math_id": 26, "text": "\\mathbb{R}^{2m}" }, { "math_id": 27, "text": "\\mathbb{C}^{m}" }, { "math_id": 28, "text": "z_j=x_j+\\textit{i}\\,y_j" }, { "math_id": 29, "text": "\\varphi: U \\to \\mathbb{C}^n" }, { "math_id": 30, "text": " \\omega " }, { "math_id": 31, "text": "\\omega_0" }, { "math_id": 32, "text": "\\mathbb{C}^{n}" }, { "math_id": 33, "text": "\\omega = \\varphi^{*}\\omega_0.\\," }, { "math_id": 34, "text": " n=2p+1 " }, { "math_id": 35, "text": "M" }, { "math_id": 36, "text": "\\omega_1" }, { "math_id": 37, "text": "\\omega_2" }, { "math_id": 38, "text": "N \\subset M" }, { "math_id": 39, "text": " \\left.\\omega_1\\right|_N = \\left.\\omega_2\\right|_N " }, { "math_id": 40, "text": " U " }, { "math_id": 41, "text": "N" }, { "math_id": 42, "text": "f : U \\to U" }, { "math_id": 43, "text": "f^*\\omega_2 = \\omega_1" } ]
https://en.wikipedia.org/wiki?curid=1543358
15433591
Uniformly most powerful test
Theoretically optimal hypothesis test In statistical hypothesis testing, a uniformly most powerful (UMP) test is a hypothesis test which has the greatest power formula_0 among all possible tests of a given size "α". For example, according to the Neyman–Pearson lemma, the likelihood-ratio test is UMP for testing simple (point) hypotheses. Setting. Let formula_1 denote a random vector (corresponding to the measurements), taken from a parametrized family of probability density functions or probability mass functions formula_2, which depends on the unknown deterministic parameter formula_3. The parameter space formula_4 is partitioned into two disjoint sets formula_5 and formula_6. Let formula_7 denote the hypothesis that formula_8, and let formula_9 denote the hypothesis that formula_10. The binary test of hypotheses is performed using a test function formula_11 with a reject region formula_12 (a subset of the measurement space). formula_13 meaning that formula_9 is in force if the measurement formula_14 and that formula_7 is in force if the measurement formula_15. Note that formula_16 is a disjoint covering of the measurement space. Formal definition. A test function formula_11 is UMP of size formula_17 if for any other test function formula_18 satisfying formula_19 we have formula_20 The Karlin–Rubin theorem. The Karlin–Rubin theorem can be regarded as an extension of the Neyman–Pearson lemma for composite hypotheses. Consider a scalar measurement having a probability density function parameterized by a scalar parameter "θ", and define the likelihood ratio formula_21. If formula_22 is monotone non-decreasing in formula_23 for any pair formula_24 (meaning that the greater formula_23 is, the more likely formula_9 is), then the threshold test: formula_25 where formula_26 is chosen such that formula_27 is the UMP test of size "α" for testing formula_28 Note that exactly the same test is also UMP for testing formula_29 Important case: exponential family. Although the Karlin–Rubin theorem may seem weak because of its restriction to a scalar parameter and a scalar measurement, it turns out that there exists a host of problems for which the theorem holds. In particular, the one-dimensional exponential family of probability density functions or probability mass functions with formula_30 has a monotone non-decreasing likelihood ratio in the sufficient statistic formula_31, provided that formula_32 is non-decreasing. Example. Let formula_33 denote i.i.d. normally distributed formula_34-dimensional random vectors with mean formula_35 and covariance matrix formula_12. We then have formula_36 which is exactly in the form of the exponential family shown in the previous section, with the sufficient statistic being formula_37 Thus, we conclude that the test formula_38 is the UMP test of size formula_17 for testing formula_39 vs. formula_40 Further discussion. Finally, we note that in general, UMP tests do not exist for vector parameters or for two-sided tests (a test in which one hypothesis lies on both sides of the alternative). The reason is that in these situations, the most powerful test of a given size for one possible value of the parameter (e.g. for formula_41 where formula_42) is different from the most powerful test of the same size for a different value of the parameter (e.g. for formula_43 where formula_44). As a result, no test is uniformly most powerful in these situations. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
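For a concrete scalar special case of the example above (N = 1, R = σ², m = 1, so the sufficient statistic reduces to the sample mean), the following Python sketch computes the size-α threshold of the Karlin–Rubin test and applies it to simulated data; all numbers are made up for illustration.

```python
import numpy as np
from scipy.stats import norm

# Scalar special case: X_1..X_M i.i.d. N(theta, sigma^2), testing
# H0: theta <= theta0 against H1: theta > theta0. The UMP size-alpha test rejects
# when the sample mean exceeds a threshold calibrated at theta = theta0, the
# worst case under H0.
M, sigma, theta0, alpha = 25, 2.0, 0.0, 0.05
t0 = theta0 + sigma / np.sqrt(M) * norm.ppf(1 - alpha)   # P_theta0(mean > t0) = alpha

rng = np.random.default_rng(0)
x = rng.normal(0.5, sigma, size=M)      # hypothetical data generated under H1
print(t0, x.mean(), x.mean() > t0)      # threshold, statistic, reject decision
```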
[ { "math_id": 0, "text": "1 - \\beta" }, { "math_id": 1, "text": "X" }, { "math_id": 2, "text": "f_{\\theta}(x)" }, { "math_id": 3, "text": "\\theta \\in \\Theta" }, { "math_id": 4, "text": "\\Theta" }, { "math_id": 5, "text": "\\Theta_0" }, { "math_id": 6, "text": "\\Theta_1" }, { "math_id": 7, "text": "H_0" }, { "math_id": 8, "text": "\\theta \\in \\Theta_0" }, { "math_id": 9, "text": "H_1" }, { "math_id": 10, "text": "\\theta \\in \\Theta_1" }, { "math_id": 11, "text": "\\varphi(x)" }, { "math_id": 12, "text": "R" }, { "math_id": 13, "text": "\\varphi(x) = \n\\begin{cases}\n1 & \\text{if } x \\in R \\\\\n0 & \\text{if } x \\in R^c\n\\end{cases}" }, { "math_id": 14, "text": " X \\in R" }, { "math_id": 15, "text": "X\\in R^c" }, { "math_id": 16, "text": "R \\cup R^c" }, { "math_id": 17, "text": "\\alpha" }, { "math_id": 18, "text": "\\varphi'(x)" }, { "math_id": 19, "text": "\\sup_{\\theta\\in\\Theta_0}\\; \\operatorname{E}[\\varphi'(X)|\\theta]=\\alpha'\\leq\\alpha=\\sup_{\\theta\\in\\Theta_0}\\; \\operatorname{E}[\\varphi(X)|\\theta]\\," }, { "math_id": 20, "text": " \\forall \\theta \\in \\Theta_1, \\quad \\operatorname{E}[\\varphi'(X)|\\theta]= 1 - \\beta'(\\theta) \\leq 1 - \\beta(\\theta) =\\operatorname{E}[\\varphi(X)|\\theta]." }, { "math_id": 21, "text": " l(x) = f_{\\theta_1}(x) / f_{\\theta_0}(x)" }, { "math_id": 22, "text": "l(x)" }, { "math_id": 23, "text": "x" }, { "math_id": 24, "text": "\\theta_1 \\geq \\theta_0" }, { "math_id": 25, "text": "\\varphi(x) = \n\\begin{cases}\n1 & \\text{if } x > x_0 \\\\\n0 & \\text{if } x < x_0\n\\end{cases}" }, { "math_id": 26, "text": "x_0" }, { "math_id": 27, "text": "\\operatorname{E}_{\\theta_0}\\varphi(X)=\\alpha" }, { "math_id": 28, "text": " H_0: \\theta \\leq \\theta_0 \\text{ vs. } H_1: \\theta > \\theta_0 ." }, { "math_id": 29, "text": " H_0: \\theta = \\theta_0 \\text{ vs. } H_1: \\theta > \\theta_0 ." }, { "math_id": 30, "text": "f_\\theta(x) = g(\\theta) h(x) \\exp(\\eta(\\theta) T(x))" }, { "math_id": 31, "text": "T(x)" }, { "math_id": 32, "text": "\\eta(\\theta)" }, { "math_id": 33, "text": "X=(X_0 ,\\ldots , X_{M-1})" }, { "math_id": 34, "text": "N" }, { "math_id": 35, "text": "\\theta m" }, { "math_id": 36, "text": "\\begin{align}\nf_\\theta (X) = {} & (2 \\pi)^{-MN/2} |R|^{-M/2} \\exp \\left\\{-\\frac 1 2 \\sum_{n=0}^{M-1} (X_n - \\theta m)^T R^{-1}(X_n - \\theta m) \\right\\} \\\\[4pt]\n= {} & (2 \\pi)^{-MN/2} |R|^{-M/2} \\exp \\left\\{-\\frac 1 2 \\sum_{n=0}^{M-1} \\left (\\theta^2 m^T R^{-1} m \\right ) \\right\\} \\\\[4pt]\n& \\exp \\left\\{-\\frac 1 2 \\sum_{n=0}^{M-1} X_n^T R^{-1} X_n \\right\\} \\exp \\left\\{\\theta m^T R^{-1} \\sum_{n=0}^{M-1}X_n \\right\\}\n\\end{align}" }, { "math_id": 37, "text": "T(X) = m^T R^{-1} \\sum_{n=0}^{M-1}X_n." }, { "math_id": 38, "text": "\\varphi(T) = \\begin{cases} 1 & T > t_0 \\\\ 0 & T < t_0 \\end{cases} \\qquad \\operatorname{E}_{\\theta_0} \\varphi (T) = \\alpha" }, { "math_id": 39, "text": "H_0: \\theta \\leqslant \\theta_0" }, { "math_id": 40, "text": "H_1: \\theta > \\theta_0" }, { "math_id": 41, "text": "\\theta_1" }, { "math_id": 42, "text": "\\theta_1 > \\theta_0" }, { "math_id": 43, "text": "\\theta_2" }, { "math_id": 44, "text": "\\theta_2 < \\theta_0" } ]
https://en.wikipedia.org/wiki?curid=15433591
15433676
Wait-for graph
A wait-for graph in computer science is a directed graph used for deadlock detection in operating systems and relational database systems. In computer science, a system that allows concurrent operation of multiple processes and locking of resources and which does not provide mechanisms to avoid or prevent deadlock must support a mechanism to detect deadlocks and an algorithm for recovering from them. One such deadlock detection algorithm makes use of a wait-for graph to track which other processes a process is currently waiting on. In a wait-for graph, processes are represented as nodes, and an edge from process formula_0 to formula_1 implies formula_1 is holding a resource that formula_0 needs and thus formula_0 is waiting for formula_1 to release its lock on that resource. If the process is waiting for more than a single resource to become available (beyond the trivial case), multiple edges may represent a conjunctive (and) or disjunctive (or) set of different resources or a certain number of equivalent resources from a collection. The possibility of a deadlock is implied by graph cycles in the conjunctive case, and by knots in the disjunctive case. There is no simple algorithm for detecting the possibility of deadlock in the final case. A "wait-for graph" is a graph of conflicts blocked by locks from being materialized; it can also be defined as the graph of non-materialized conflicts; conflicts not materialized are not reflected in the precedence graph and do not affect serializability. The wait-for-graph scheme is not applicable to a resource allocation system with multiple instances of each resource type. An arc from a transaction T1 to another transaction T2 represents that T1 waits for T2 to release a lock (i.e., T1 has requested a lock that is incompatible with a lock previously acquired by T2). A lock is incompatible with another if they are on the same object, one is a write, and they are from different transactions. A deadlock occurs in a schedule if and only if there is at least one cycle in the wait-for graph. Not every cycle necessarily represents a distinct deadlock instance. References. &lt;templatestyles src="Reflist/styles.css" /&gt; &lt;templatestyles src="Refbegin/styles.css" /&gt;
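The conjunctive case can be illustrated with a short Python sketch (not from the article; the process names and the graphs are hypothetical): the wait-for graph is a dictionary mapping each process to the processes it waits on, and a possible deadlock is reported exactly when depth-first search finds a directed cycle.

```python
# Minimal sketch of deadlock detection on a wait-for graph: an edge u -> v means
# u waits for v. In the conjunctive ("and") case a deadlock corresponds to a
# directed cycle, found here with an iterative three-color depth-first search.
def has_deadlock(wait_for):
    WHITE, GRAY, BLACK = 0, 1, 2
    color = {p: WHITE for p in wait_for}

    def dfs(start):
        stack = [(start, iter(wait_for.get(start, ())))]
        color[start] = GRAY
        while stack:
            node, it = stack[-1]
            for nxt in it:
                if color.get(nxt, WHITE) == GRAY:
                    return True            # back edge: a cycle, hence a deadlock
                if color.get(nxt, WHITE) == WHITE:
                    color[nxt] = GRAY
                    stack.append((nxt, iter(wait_for.get(nxt, ()))))
                    break
            else:
                color[node] = BLACK        # all successors explored
                stack.pop()
        return False

    return any(color[p] == WHITE and dfs(p) for p in wait_for)

# T1 waits for T2, T2 for T3, T3 for T1: a cycle, so a deadlock is detected.
print(has_deadlock({"T1": ["T2"], "T2": ["T3"], "T3": ["T1"]}))  # True
print(has_deadlock({"T1": ["T2"], "T2": ["T3"], "T3": []}))      # False
```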
[ { "math_id": 0, "text": "P_i" }, { "math_id": 1, "text": "P_j" } ]
https://en.wikipedia.org/wiki?curid=15433676
15437134
Isomorphous replacement
Isomorphous replacement (IR) is historically the most common approach to solving the phase problem in X-ray crystallography studies of proteins. For protein crystals this method is conducted by soaking the crystal of a sample to be analyzed with a heavy-atom solution or by co-crystallization with the heavy atom. The addition of the heavy atom (or ion) to the structure should not affect the crystal formation or unit cell dimensions in comparison to its native form; hence, the two crystals should be isomorphous. Data sets from the native and heavy-atom derivative of the sample are first collected. Then the interpretation of the Patterson difference map reveals the heavy atom's location in the unit cell. This allows both the amplitude and the phase of the heavy-atom contribution to be determined. Since the structure factor of the heavy-atom derivative of the crystal (Fph) is the vector sum of those of the lone heavy atom (Fh) and the native crystal (Fp), the phases of the native Fp and derivative Fph vectors can be determined geometrically. formula_0 The most common form is multiple isomorphous replacement (MIR), which uses at least two isomorphous derivatives. Single isomorphous replacement is possible, but gives an ambiguous result with two possible phases; density modification is required to resolve the ambiguity. There are also variants that take into account the anomalous X-ray scattering of the soaked heavy atoms, called MIRAS and SIRAS respectively. Development. Single isomorphous replacement (SIR). Early demonstrations of isomorphous replacement in crystallography come from James M. Cork, John Monteath Robertson, and others. An early demonstration came in 1927 with a paper from Cork reporting the X-ray crystal structures of a series of alum compounds. The alum compounds studied had the general formula A.B.(SO4)2.12H2O, where A was a monovalent metallic ion (NH4+, K+, Rb+, Cs+, or Tl+), B was a trivalent metallic ion (Al3+, Cr3+, or Fe3+) and S was usually sulfur, but could also be selenium or tellurium. Because the alum crystals were largely isomorphous when the heavy atoms were changed out, they could be phased by isomorphous replacement. Fourier analysis was used to find the heavy atom positions. The first demonstration of isomorphous replacement in protein crystallography was in 1954 with a paper from David W. Green, Vernon Ingram, and Max Perutz. Examples. Some examples of heavy atoms used in protein MIR: References. &lt;templatestyles src="Reflist/styles.css" /&gt;
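The vector relation Fph = Fp + Fh can be explored numerically. The sketch below is purely illustrative, with invented amplitudes and a known "true" phase used only to simulate the measurement: scanning the native phase recovers the two candidate values allowed by single isomorphous replacement, showing the SIR phase ambiguity mentioned above.

```python
import numpy as np

# Hypothetical SIR phasing sketch: given measured amplitudes |Fp| and |Fph| and a
# calculated heavy-atom contribution Fh, the relation Fph = Fp + Fh generically
# allows two values of the native phase phi.
Fp_amp = 100.0                                  # assumed native amplitude
Fh = 30.0 * np.exp(1j * np.deg2rad(40.0))       # assumed heavy-atom structure factor
true_phi = np.deg2rad(75.0)                     # "unknown" phase, used only to fake |Fph|
Fph_amp = abs(Fp_amp * np.exp(1j * true_phi) + Fh)

phi = np.linspace(0.0, 2.0 * np.pi, 200001)
g = np.abs(Fp_amp * np.exp(1j * phi) + Fh) - Fph_amp
crossings = phi[:-1][np.diff(np.sign(g)) != 0]  # the two phase solutions
print(np.rad2deg(crossings))                    # one of them is close to 75 degrees
```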
[ { "math_id": 0, "text": "\\mathbf F_{ph} = \\mathbf F_p + \\mathbf F_h" } ]
https://en.wikipedia.org/wiki?curid=15437134
1543735
Matrix norm
Norm on a vector space of matrices In the field of mathematics, norms are defined for elements within a vector space. Specifically, when the vector space comprises matrices, such norms are referred to as matrix norms. Matrix norms differ from vector norms in that they must also interact with matrix multiplication. Preliminaries. Given a field formula_0 of either real or complex numbers, let formula_1 be the K-vector space of matrices with formula_2 rows and formula_3 columns and entries in the field formula_0. A matrix norm is a norm on formula_1. Norms are often expressed with double vertical bars (like so: formula_4). Thus, the matrix norm is a function formula_5 that must satisfy the following properties: For all scalars formula_6 and matrices formula_7, The only feature distinguishing matrices from rearranged vectors is multiplication. Matrix norms are particularly useful if they are also sub-multiplicative: Every norm on "K""n"×"n" can be rescaled to be sub-multiplicative; in some books, the terminology "matrix norm" is reserved for sub-multiplicative norms. Matrix norms induced by vector norms. Suppose a vector norm formula_13 on formula_14 and a vector norm formula_15 on formula_16 are given. Any formula_17 matrix A induces a linear operator from formula_14 to formula_16 with respect to the standard basis, and one defines the corresponding "induced norm" or "operator norm" or "subordinate norm" on the space formula_1 of all formula_17 matrices as follows: formula_18 where formula_19 denotes the supremum. This norm measures how much the mapping induced by formula_20 can stretch vectors. Depending on the vector norms formula_13, formula_15 used, notation other than formula_21 can be used for the operator norm. Matrix norms induced by vector "p"-norms. If the "p"-norm for vectors (formula_22) is used for both spaces formula_14 and formula_23 then the corresponding operator norm is:formula_24These induced norms are different from the "entry-wise" "p"-norms and the Schatten "p"-norms for matrices treated below, which are also usually denoted by formula_25 Geometrically speaking, one can imagine a "p"-norm unit ball formula_26 in formula_14, then apply the linear map formula_20 to the ball. It would end up becoming a distorted convex shape formula_27, and formula_28 measures the longest "radius" of the distorted convex shape. In other words, we must take a "p"-norm unit ball formula_29 in formula_16, then multiply it by at least formula_28, in order for it to be large enough to contain formula_30. "p" = 1, ∞. When formula_31, we have simple formulas.formula_32which is simply the maximum absolute column sum of the matrix.formula_33which is simply the maximum absolute row sum of the matrix. For example, for formula_34 we have that formula_35 formula_36 Spectral norm ("p" = 2). When formula_37 (the Euclidean norm or formula_38-norm for vectors), the induced matrix norm is the "spectral norm". (The two values do "not" coincide in infinite dimensions — see Spectral radius for further discussion. The spectral radius should not be confused with the spectral norm.) The spectral norm of a matrix formula_20 is the largest singular value of formula_20 (i.e., the square root of the largest eigenvalue of the matrix formula_39 where formula_40 denotes the conjugate transpose of formula_20):formula_41where formula_42 represents the largest singular value of matrix formula_43 There are further properties: Matrix norms induced by vector "α"- and "β"-norms. We can generalize the above definition. 
Suppose we have vector norms formula_13 and formula_15 for spaces formula_14 and formula_16 respectively; the corresponding operator norm isformula_49In particular, the formula_50 defined previously is the special case of formula_51. In the special cases of formula_52 and formula_53, the induced matrix norms can be computed byformula_54 where formula_55 is the i-th row of matrix formula_56. In the special cases of formula_57 and formula_58, the induced matrix norms can be computed byformula_59 where formula_60 is the j-th column of matrix formula_56. Hence, formula_61 and formula_62 are the maximum row and column 2-norm of the matrix, respectively. Properties. Any operator norm is consistent with the vector norms that induce it, giving formula_63 Suppose formula_21; formula_64; and formula_65 are operator norms induced by the respective pairs of vector norms formula_66; formula_67; and formula_68. Then, formula_69 this follows from formula_70 and formula_71 Square matrices. Suppose formula_72 is an operator norm on the space of square matrices formula_73 induced by vector norms formula_13 and formula_74. Then, the operator norm is a sub-multiplicative matrix norm: formula_75 Moreover, any such norm satisfies the inequality for all positive integers "r", where "ρ"("A") is the spectral radius of A. For symmetric or hermitian A, we have equality in (1) for the 2-norm, since in this case the 2-norm "is" precisely the spectral radius of A. For an arbitrary matrix, we may not have equality for any norm; a counterexample would be formula_76 which has vanishing spectral radius. In any case, for any matrix norm, we have the spectral radius formula: formula_77 Consistent and compatible norms. A matrix norm formula_78 on formula_1 is called "consistent" with a vector norm formula_79 on formula_14 and a vector norm formula_80 on formula_16, if: formula_81 for all formula_82 and all formula_83. In the special case of "m" = "n" and formula_84, formula_78 is also called "compatible" with formula_85. All induced norms are consistent by definition. Also, any sub-multiplicative matrix norm on formula_86 induces a compatible vector norm on formula_14 by defining formula_87. "Entry-wise" matrix norms. These norms treat an formula_88 matrix as a vector of size formula_89, and use one of the familiar vector norms. For example, using the "p"-norm for vectors, "p" ≥ 1, we get: formula_90 This is a different norm from the induced "p"-norm (see above) and the Schatten "p"-norm (see below), but the notation is the same. The special case "p" = 2 is the Frobenius norm, and "p" = ∞ yields the maximum norm. "L"2,1 and "Lp,q" norms. Let formula_91 be the columns of matrix formula_20. From the original definition, the matrix formula_20 presents "n" data points in m-dimensional space. The formula_92 norm is the sum of the Euclidean norms of the columns of the matrix: formula_93 The formula_92 norm as an error function is more robust, since the error for each data point (a column) is not squared. It is used in robust data analysis and sparse coding. For "p", "q" ≥ 1, the formula_92 norm can be generalized to the formula_94 norm as follows: formula_95 Frobenius norm. When "p" = "q" = 2 for the formula_94 norm, it is called the Frobenius norm or the Hilbert–Schmidt norm, though the latter term is used more frequently in the context of operators on (possibly infinite-dimensional) Hilbert space. 
This norm can be defined in various ways: formula_96 where the trace is the sum of diagonal entries, and formula_97 are the singular values of formula_20. The second equality is proven by explicit computation of formula_98. The third equality is proven by singular value decomposition of formula_20, and the fact that the trace is invariant under circular shifts. The Frobenius norm is an extension of the Euclidean norm to formula_73 and comes from the Frobenius inner product on the space of all matrices. The Frobenius norm is sub-multiplicative and is very useful for numerical linear algebra. The sub-multiplicativity of Frobenius norm can be proved using Cauchy–Schwarz inequality. Frobenius norm is often easier to compute than induced norms, and has the useful property of being invariant under rotations (and unitary operations in general). That is, formula_99 for any unitary matrix formula_100. This property follows from the cyclic nature of the trace (formula_101): formula_102 and analogously: formula_103 where we have used the unitary nature of formula_100 (that is, formula_104). It also satisfies formula_105 and formula_106 where formula_107 is the Frobenius inner product, and Re is the real part of a complex number (irrelevant for real matrices) Max norm. The max norm is the elementwise norm in the limit as "p" = "q" goes to infinity: formula_108 This norm is not sub-multiplicative; but modifying the right-hand side to formula_109 makes it so. Note that in some literature (such as Communication complexity), an alternative definition of max-norm, also called the formula_110-norm, refers to the factorization norm: formula_111 Schatten norms. The Schatten "p"-norms arise when applying the "p"-norm to the vector of singular values of a matrix. If the singular values of the formula_17 matrix formula_20 are denoted by "σi", then the Schatten "p"-norm is defined by formula_112 These norms again share the notation with the induced and entry-wise "p"-norms, but they are different. All Schatten norms are sub-multiplicative. They are also unitarily invariant, which means that formula_113 for all matrices formula_20 and all unitary matrices formula_100 and formula_114. The most familiar cases are "p" = 1, 2, ∞. The case "p" = 2 yields the Frobenius norm, introduced before. The case "p" = ∞ yields the spectral norm, which is the operator norm induced by the vector 2-norm (see above). Finally, "p" = 1 yields the nuclear norm (also known as the "trace norm", or the Ky Fan 'n'-norm), defined as: formula_115 where formula_116 denotes a positive semidefinite matrix formula_117 such that formula_118. More precisely, since formula_119 is a positive semidefinite matrix, its square root is well defined. The nuclear norm formula_120 is a convex envelope of the rank function formula_121, so it is often used in mathematical optimization to search for low-rank matrices. Combining von Neumann's trace inequality with Hölder's inequality for Euclidean space yields a version of Hölder's inequality for Schatten norms for formula_122: formula_123 In particular, this implies the Schatten norm inequality formula_124 Monotone norms. A matrix norm formula_125 is called "monotone" if it is monotonic with respect to the Loewner order. Thus, a matrix norm is increasing if formula_126 The Frobenius norm and spectral norm are examples of monotone norms. Cut norms. Another source of inspiration for matrix norms arises from considering a matrix as the adjacency matrix of a weighted, directed graph. 
The so-called "cut norm" measures how close the associated graph is to being bipartite: formula_127 where "A" ∈ "K""m"×"n". Equivalent definitions (up to a constant factor) impose the conditions ; "S" = "T"; or "S" ∩ "T" = &amp;emptyset;. The cut-norm is equivalent to the induced operator norm ‖·‖∞→1, which is itself equivalent to another norm, called the Grothendieck norm. To define the Grothendieck norm, first note that a linear operator "K"1 → "K"1 is just a scalar, and thus extends to a linear operator on any "Kk" → "Kk". Moreover, given any choice of basis for "Kn" and "Km", any linear operator "Kn" → "Km" extends to a linear operator ("K""k")"n" → ("K""k")"m", by letting each matrix element on elements of "Kk" via scalar multiplication. The Grothendieck norm is the norm of that extended operator; in symbols: formula_128 The Grothendieck norm depends on choice of basis (usually taken to be the standard basis) and k. Equivalence of norms. For any two matrix norms formula_13 and formula_15, we have that: formula_129 for some positive numbers "r" and "s", for all matrices formula_130. In other words, all norms on formula_1 are "equivalent"; they induce the same topology on formula_1. This is true because the vector space formula_1 has the finite dimension formula_17. Moreover, for every matrix norm formula_131 on formula_132 there exists a unique positive real number formula_133 such that formula_134 is a sub-multiplicative matrix norm for every formula_135; to wit, formula_136 A sub-multiplicative matrix norm formula_13 is said to be "minimal", if there exists no other sub-multiplicative matrix norm formula_15 satisfying formula_137. Examples of norm equivalence. Let formula_138 once again refer to the norm induced by the vector "p"-norm (as above in the Induced norm section). For matrix formula_139 of rank formula_140, the following inequalities hold: Notes. &lt;templatestyles src="Reflist/styles.css" /&gt; References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "K" }, { "math_id": 1, "text": "K^{m \\times n}" }, { "math_id": 2, "text": "m" }, { "math_id": 3, "text": "n" }, { "math_id": 4, "text": "\\|A\\|" }, { "math_id": 5, "text": "\\|\\cdot\\| : K^{m \\times n} \\to \\R" }, { "math_id": 6, "text": "\\alpha \\in K" }, { "math_id": 7, "text": "A, B \\in K^{m \\times n}" }, { "math_id": 8, "text": "\\|A\\|\\ge 0" }, { "math_id": 9, "text": "\\|A\\|= 0 \\iff A=0_{m,n}" }, { "math_id": 10, "text": "\\left\\|\\alpha A\\right\\|=\\left|\\alpha\\right| \\left\\|A\\right\\|" }, { "math_id": 11, "text": "\\|A+B\\| \\le \\|A\\|+\\|B\\|" }, { "math_id": 12, "text": "\\left\\|AB\\right\\| \\le \\left\\|A\\right\\| \\left\\|B\\right\\| " }, { "math_id": 13, "text": "\\|\\cdot\\|_{\\alpha}" }, { "math_id": 14, "text": "K^n" }, { "math_id": 15, "text": "\\|\\cdot\\|_{\\beta}" }, { "math_id": 16, "text": "K^m" }, { "math_id": 17, "text": "m \\times n" }, { "math_id": 18, "text": " \\begin{align}\n\\|A\\|_{\\alpha,\\beta} \n&= \\sup\\{\\|Ax\\|_\\beta : x\\in K^n \\text{ with }\\|x\\|_\\alpha = 1\\} \\\\\n&= \\sup\\left\\{\\frac{\\|Ax\\|_\\beta}{\\|x\\|_\\alpha} : x\\in K^n \\text{ with } x\\ne 0\\right\\}.\n\\end{align} " }, { "math_id": 19, "text": " \\sup " }, { "math_id": 20, "text": "A" }, { "math_id": 21, "text": "\\|\\cdot\\|_{\\alpha,\\beta}" }, { "math_id": 22, "text": "1 \\leq p \\leq \\infty" }, { "math_id": 23, "text": "K^m," }, { "math_id": 24, "text": " \\|A\\|_p = \\sup_{x \\ne 0} \\frac{\\| A x\\| _p}{\\|x\\|_p}. " }, { "math_id": 25, "text": " \\|A\\|_p ." }, { "math_id": 26, "text": "V_{p, n} = \\{x\\in K^n : \\|x\\|_p \\le 1 \\}" }, { "math_id": 27, "text": "AV_{p, n} \\subset K^m" }, { "math_id": 28, "text": " \\|A\\|_p " }, { "math_id": 29, "text": "V_{p, m}" }, { "math_id": 30, "text": "AV_{p, n}" }, { "math_id": 31, "text": "p = 1, \\infty" }, { "math_id": 32, "text": " \\|A\\|_1 = \\max_{1 \\leq j \\leq n} \\sum_{i=1}^m | a_{ij} |, " }, { "math_id": 33, "text": " \\|A\\|_\\infty = \\max_{1 \\leq i \\leq m} \\sum _{j=1}^n | a_{ij} |, " }, { "math_id": 34, "text": "A = \\begin{bmatrix} -3 & 5 & 7 \\\\ 2 & 6 & 4 \\\\ 0 & 2 & 8 \\\\ \\end{bmatrix}," }, { "math_id": 35, "text": "\\|A\\|_1 = \\max(|{-3}|+2+0; 5+6+2; 7+4+8) = \\max(5,13,19) = 19," }, { "math_id": 36, "text": "\\|A\\|_\\infty = \\max(|{-3}|+5+7; 2+6+4;0+2+8) = \\max(15,12,10) = 15." }, { "math_id": 37, "text": "p = 2" }, { "math_id": 38, "text": "\\ell_2" }, { "math_id": 39, "text": "A^*A," }, { "math_id": 40, "text": "A^*" }, { "math_id": 41, "text": " \\|A\\|_2 = \\sqrt{\\lambda_{\\max}\\left(A^* A\\right)} = \\sigma_{\\max}(A)." }, { "math_id": 42, "text": "\\sigma_{\\max}(A)" }, { "math_id": 43, "text": "A." }, { "math_id": 44, "text": "\\|A \\|_2 = \\sup\\{x^* A y : x \\in K^m, y \\in K^n \\text{ with }\\|x\\|_2 = \\|y\\|_2 = 1\\}." }, { "math_id": 45, "text": " \\| A^* A\\|_2 = \\| A A^* \\|_2 = \\|A\\|_2^2" }, { "math_id": 46, "text": " \\|A\\| _2 = \\sigma_{\\mathrm{max}}(A) \\leq \\|A\\|_{\\rm F} = \\sqrt{\\sum_i \\sigma_{i}(A)^2}" }, { "math_id": 47, "text": "\\|A\\|_\\textrm{F}" }, { "math_id": 48, "text": " \\|A\\|_2 = \\sqrt{\\rho(A^{*}A)}\\leq\\sqrt{\\|A^{*}A\\|_\\infty}\\leq\\sqrt{\\|A\\|_1\\|A\\|_\\infty} " }, { "math_id": 49, "text": " \\|A\\|_{\\alpha, \\beta} = \\sup_{x \\ne 0} \\frac{\\| A x\\| _\\beta}{\\|x\\|_\\alpha}. 
" }, { "math_id": 50, "text": "\\|A\\|_{p}" }, { "math_id": 51, "text": "\\|A\\|_{p, p}" }, { "math_id": 52, "text": "\\alpha = 2" }, { "math_id": 53, "text": "\\beta=\\infty" }, { "math_id": 54, "text": " \\|A\\|_{2,\\infty}= \\max_{1\\le i\\le m}\\|A_{i:}\\|_2, " }, { "math_id": 55, "text": "A_{i:}" }, { "math_id": 56, "text": " A " }, { "math_id": 57, "text": "\\alpha = 1" }, { "math_id": 58, "text": "\\beta=2" }, { "math_id": 59, "text": " \\|A\\|_{1, 2} = \\max_{1\\le j\\le n}\\|A_{:j}\\|_2, " }, { "math_id": 60, "text": "A_{:j}" }, { "math_id": 61, "text": " \\|A\\|_{2,\\infty} " }, { "math_id": 62, "text": " \\|A\\|_{1, 2} " }, { "math_id": 63, "text": "\\|Ax\\|_\\beta \\leq \\|A\\|_{\\alpha,\\beta}\\|x\\|_\\alpha." }, { "math_id": 64, "text": "\\|\\cdot\\|_{\\beta,\\gamma}" }, { "math_id": 65, "text": "\\|\\cdot\\|_{\\alpha,\\gamma}" }, { "math_id": 66, "text": "(\\|\\cdot\\|_\\alpha, \\|\\cdot\\|_\\beta)" }, { "math_id": 67, "text": "(\\|\\cdot\\|_\\beta, \\|\\cdot\\|_{\\gamma})" }, { "math_id": 68, "text": "(\\|\\cdot\\|_\\alpha, \\|\\cdot\\|_\\gamma)" }, { "math_id": 69, "text": "\\|AB\\|_{\\alpha,\\gamma} \\leq \\|A\\|_{\\beta, \\gamma} \\|B\\|_{\\alpha, \\beta} ;" }, { "math_id": 70, "text": "\\|ABx\\|_\\gamma \\leq \\|A\\|_{\\beta, \\gamma} \\|Bx\\|_\\beta \\leq \\|A\\|_{\\beta, \\gamma} \\|B\\|_{\\alpha, \\beta} \\|x\\|_\\alpha " }, { "math_id": 71, "text": "\\sup_{\\|x\\|_\\alpha = 1} \\|ABx \\|_\\gamma = \\|AB\\|_{\\alpha, \\gamma} ." }, { "math_id": 72, "text": "\\|\\cdot\\|_{\\alpha, \\alpha}" }, { "math_id": 73, "text": "K^{n \\times n}" }, { "math_id": 74, "text": "\\|\\cdot\\|_\\alpha" }, { "math_id": 75, "text": "\\|AB\\|_{\\alpha, \\alpha} \\leq \\|A\\|_{\\alpha, \\alpha} \\|B\\|_{\\alpha, \\alpha}." }, { "math_id": 76, "text": "A = \\begin{bmatrix} 0 & 1 \\\\ 0 & 0 \\end{bmatrix}," }, { "math_id": 77, "text": "\\lim_{r\\to\\infty}\\|A^r\\|^{1/r}=\\rho(A). " }, { "math_id": 78, "text": "\\| \\cdot \\|" }, { "math_id": 79, "text": "\\| \\cdot \\|_{\\alpha}" }, { "math_id": 80, "text": "\\| \\cdot \\|_{\\beta}" }, { "math_id": 81, "text": "\\left\\|Ax\\right\\|_{\\beta} \\leq \\left\\|A\\right\\| \\left\\|x\\right\\|_{\\alpha}" }, { "math_id": 82, "text": "A \\in K^{m \\times n}" }, { "math_id": 83, "text": "x \\in K^n" }, { "math_id": 84, "text": "\\alpha = \\beta" }, { "math_id": 85, "text": "\\|\\cdot \\|_{\\alpha}" }, { "math_id": 86, "text": " K^{n \\times n} " }, { "math_id": 87, "text": " \\left\\| v \\right\\| := \\left\\| \\left( v, v, \\dots, v \\right) \\right\\| " }, { "math_id": 88, "text": " m \\times n " }, { "math_id": 89, "text": " m \\cdot n " }, { "math_id": 90, "text": "\\| A \\|_{p,p} = \\| \\mathrm{vec}(A) \\|_p = \\left( \\sum_{i=1}^m \\sum_{j=1}^n |a_{ij}|^p \\right)^{1/p}" }, { "math_id": 91, "text": "(a_1, \\ldots, a_n) " }, { "math_id": 92, "text": "L_{2,1}" }, { "math_id": 93, "text": "\\| A \\|_{2,1} = \\sum_{j=1}^n \\| a_{j} \\|_2 = \\sum_{j=1}^n \\left( \\sum_{i=1}^m |a_{ij}|^2 \\right)^{1/2}" }, { "math_id": 94, "text": "L_{p,q}" }, { "math_id": 95, "text": "\\| A \\|_{p,q} = \\left(\\sum_{j=1}^n \\left( \\sum_{i=1}^m |a_{ij}|^p \\right)^{\\frac{q}{p}}\\right)^{\\frac{1}{q}}." 
}, { "math_id": 96, "text": "\\|A\\|_\\text{F} = \\sqrt{\\sum_{i}^m\\sum_{j}^n |a_{ij}|^2} = \\sqrt{\\operatorname{trace}\\left(A^* A\\right)} = \\sqrt{\\sum_{i=1}^{\\min\\{m, n\\}} \\sigma_i^2(A)}," }, { "math_id": 97, "text": "\\sigma_i(A)" }, { "math_id": 98, "text": "\\mathrm{trace}(A^*A)" }, { "math_id": 99, "text": "\\|A\\|_\\text{F} = \\|AU\\|_\\text{F} = \\|UA\\|_\\text{F}" }, { "math_id": 100, "text": "U" }, { "math_id": 101, "text": "\\operatorname{trace}(XYZ) =\\operatorname{trace}(YZX) = \\operatorname{trace}(ZXY)" }, { "math_id": 102, "text": "\\|AU\\|_\\text{F}^2 = \\operatorname{trace}\\left( (AU)^{*}A U \\right)\n = \\operatorname{trace}\\left( U^{*} A^{*}A U \\right)\n = \\operatorname{trace}\\left( UU^{*} A^{*}A \\right)\n = \\operatorname{trace}\\left( A^{*} A \\right)\n = \\|A\\|_\\text{F}^2," }, { "math_id": 103, "text": "\\|UA\\|_\\text{F}^2 = \\operatorname{trace}\\left( (UA)^{*}UA \\right)\n = \\operatorname{trace}\\left( A^{*} U^{*} UA \\right)\n = \\operatorname{trace}\\left( A^{*}A \\right)\n = \\|A\\|_\\text{F}^2," }, { "math_id": 104, "text": "U^* U = U U^* = \\mathbf{I}" }, { "math_id": 105, "text": "\\|A^* A\\|_\\text{F} = \\|AA^*\\|_\\text{F} \\leq \\|A\\|_\\text{F}^2" }, { "math_id": 106, "text": "\\|A + B\\|_\\text{F}^2 = \\|A\\|_\\text{F}^2 + \\|B\\|_\\text{F}^2 + 2 \\operatorname{Re} \\left( \\langle A, B \\rangle_\\text{F} \\right)," }, { "math_id": 107, "text": "\\langle A, B \\rangle_\\text{F}" }, { "math_id": 108, "text": " \\|A\\|_{\\max} = \\max_{i, j} |a_{ij}|. " }, { "math_id": 109, "text": "\\sqrt{m n} \\max_{i, j} \\vert a_{i j} \\vert" }, { "math_id": 110, "text": "\\gamma_2" }, { "math_id": 111, "text": " \\gamma_2(A) = \\min_{U,V: A = UV^T} \\| U \\|_{2,\\infty} \\| V \\|_{2,\\infty} = \\min_{U,V: A = UV^T} \\max_{i,j} \\| U_{i,:} \\|_2 \\| V_{j,:} \\|_2 " }, { "math_id": 112, "text": " \\|A\\|_p = \\left( \\sum_{i=1}^{\\min\\{m,n\\}} \\sigma_i^p(A) \\right)^{1/p}." }, { "math_id": 113, "text": "\\|A\\| = \\|UAV\\|" }, { "math_id": 114, "text": "V" }, { "math_id": 115, "text": "\\|A\\|_{*} = \\operatorname{trace} \\left(\\sqrt{A^*A}\\right) = \\sum_{i=1}^{\\min\\{m,n\\}} \\sigma_i(A)," }, { "math_id": 116, "text": "\\sqrt{A^*A}" }, { "math_id": 117, "text": "B" }, { "math_id": 118, "text": "BB=A^*A" }, { "math_id": 119, "text": "A^*A" }, { "math_id": 120, "text": "\\|A\\|_{*}" }, { "math_id": 121, "text": "\\text{rank}(A)" }, { "math_id": 122, "text": " 1/p + 1/q = 1 " }, { "math_id": 123, "text": " \\left|\\operatorname{trace}(A'B)\\right| \\le \\|A\\|_p \\|B\\|_q, " }, { "math_id": 124, "text": " \\|A\\|_F^2 \\le \\|A\\|_p \\|A\\|_q. " }, { "math_id": 125, "text": "\\|\\cdot \\|" }, { "math_id": 126, "text": "A \\preccurlyeq B \\Rightarrow \\|A\\| \\leq \\|B\\|." }, { "math_id": 127, "text": "\\|A\\|_{\\Box}=\\max_{S\\subseteq[n], T\\subseteq[m]}{\\left|\\sum_{s\\in S,t\\in T}{A_{t,s}}\\right|}" }, { "math_id": 128, "text": "\\|A\\|_{G,k}=\\sup_{\\text{each } u_j, v_j\\in K^k; \\|u_j\\| = \\|v_j\\| = 1}{\\sum_{j \\in [n], \\ell \\in [m]}{(u_j\\cdot v_j) A_{\\ell,j}}}" }, { "math_id": 129, "text": "r\\|A\\|_\\alpha\\leq\\|A\\|_\\beta\\leq s\\|A\\|_\\alpha" }, { "math_id": 130, "text": "A\\in K^{m \\times n}" }, { "math_id": 131, "text": "\\|\\cdot\\|" }, { "math_id": 132, "text": "\\R^{n\\times n}" }, { "math_id": 133, "text": "k" }, { "math_id": 134, "text": "\\ell\\|\\cdot\\|" }, { "math_id": 135, "text": "\\ell \\ge k" }, { "math_id": 136, "text": "k = \\sup\\{\\Vert A B \\Vert \\,:\\, \\Vert A \\Vert \\leq 1, \\Vert B \\Vert \\leq 1\\}. 
" }, { "math_id": 137, "text": "\\|\\cdot\\|_{\\beta} < \\|\\cdot\\|_{\\alpha}" }, { "math_id": 138, "text": "\\|A\\|_p" }, { "math_id": 139, "text": "A\\in\\R^{m\\times n}" }, { "math_id": 140, "text": "r" }, { "math_id": 141, "text": "\\|A\\|_2\\le\\|A\\|_F\\le\\sqrt{r}\\|A\\|_2" }, { "math_id": 142, "text": "\\|A\\|_F \\le \\|A\\|_{*} \\le \\sqrt{r} \\|A\\|_F" }, { "math_id": 143, "text": "\\|A\\|_{\\max} \\le \\|A\\|_2 \\le \\sqrt{mn}\\|A\\|_{\\max}" }, { "math_id": 144, "text": "\\frac{1}{\\sqrt{n}}\\|A\\|_\\infty\\le\\|A\\|_2\\le\\sqrt{m}\\|A\\|_\\infty" }, { "math_id": 145, "text": "\\frac{1}{\\sqrt{m}}\\|A\\|_1\\le\\|A\\|_2\\le\\sqrt{n}\\|A\\|_1." } ]
https://en.wikipedia.org/wiki?curid=1543735
15438225
Distributed lag
In statistics and econometrics, a distributed lag model is a model for time series data in which a regression equation is used to predict current values of a dependent variable based on both the current values of an explanatory variable and the lagged (past period) values of this explanatory variable. The starting point for a distributed lag model is an assumed structure of the form formula_0 or the form formula_1 where "y""t" is the value at time period "t" of the dependent variable "y", "a" is the intercept term to be estimated, and "w""i" is called the lag weight (also to be estimated) placed on the value "i" periods previously of the explanatory variable "x". In the first equation, the dependent variable is assumed to be affected by values of the independent variable arbitrarily far in the past, so the number of lag weights is infinite and the model is called an "infinite distributed lag model". In the alternative, second, equation, there are only a finite number of lag weights, indicating an assumption that there is a maximum lag beyond which values of the independent variable do not affect the dependent variable; a model based on this assumption is called a "finite distributed lag model". In an infinite distributed lag model, an infinite number of lag weights need to be estimated; clearly this can be done only if some structure is assumed for the relation between the various lag weights, with the entire infinitude of them expressible in terms of a finite number of assumed underlying parameters. In a finite distributed lag model, the parameters could be directly estimated by ordinary least squares (assuming the number of data points sufficiently exceeds the number of lag weights); nevertheless, such estimation may give very imprecise results due to extreme multicollinearity among the various lagged values of the independent variable, so again it may be necessary to assume some structure for the relation between the various lag weights. The concept of distributed lag models easily generalizes to the context of more than one right-side explanatory variable. Unstructured estimation. The simplest way to estimate parameters associated with distributed lags is by ordinary least squares, assuming a fixed maximum lag formula_2, assuming independently and identically distributed errors, and imposing no structure on the relationship of the coefficients of the lagged explanators with each other. However, multicollinearity among the lagged explanators often arises, leading to high variance of the coefficient estimates. Structured estimation. Structured distributed lag models come in two types: finite and infinite. Infinite distributed lags allow the value of the independent variable at a particular time to influence the dependent variable infinitely far into the future, or to put it another way, they allow the current value of the dependent variable to be influenced by values of the independent variable that occurred infinitely long ago; but beyond some lag length the effects taper off toward zero. Finite distributed lags allow for the independent variable at a particular time to influence the dependent variable for only a finite number of periods. Finite distributed lags. The most important structured finite distributed lag model is the Almon lag model. 
This model allows the data to determine the shape of the lag structure, but the researcher must specify the maximum lag length; an incorrectly specified maximum lag length can distort the shape of the estimated lag structure as well as the cumulative effect of the independent variable. The Almon lag assumes that "k" + 1 lag weights are related to "n" + 1 linearly estimable underlying parameters ("n &lt; k") "a""j" according to formula_3 for formula_4 Infinite distributed lags. The most common type of structured infinite distributed lag model is the geometric lag, also known as the Koyck lag. In this lag structure, the weights (magnitudes of influence) of the lagged independent variable values decline exponentially with the length of the lag; while the shape of the lag structure is thus fully imposed by the choice of this technique, the rate of decline as well as the overall magnitude of effect are determined by the data. Specification of the regression equation is very straightforward: one includes as explanators (right-hand side variables in the regression) the one-period-lagged value of the dependent variable and the current value of the independent variable: formula_5 where formula_6. In this model, the short-run (same-period) effect of a unit change in the independent variable is the value of "b", while the long-run (cumulative) effect of a sustained unit change in the independent variable can be shown to be formula_7 Other infinite distributed lag models have been proposed to allow the data to determine the shape of the lag structure. The polynomial inverse lag assumes that the lag weights are related to underlying, linearly estimable parameters "aj" according to formula_8 for formula_9 The geometric combination lag assumes that the lag weights are related to underlying, linearly estimable parameters "aj" according to either formula_10 for formula_11 or formula_12 for formula_9 The gamma lag and the rational lag are other infinite distributed lag structures. Distributed lag model in health studies. Distributed lag models were introduced into health-related studies in 2002 by Zanobetti and Schwartz. The Bayesian version of the model was suggested by Welty in 2007. Gasparrini introduced more flexible statistical models in 2010 that are capable of describing additional time dimensions of the exposure-response relationship, and developed a family of distributed lag non-linear models (DLNM), a modeling framework that can simultaneously represent non-linear exposure-response dependencies and delayed effects. The distributed lag model concept was first applied to longitudinal cohort research by Hsu in 2015, studying the relationship between PM2.5 and child asthma, and more complicated distributed lag methods aimed at accommodating longitudinal cohort analysis, such as the Bayesian Distributed Lag Interaction Model by Wilson, have subsequently been developed to answer similar research questions. *ARMAX *Mixed data sampling References. &lt;templatestyles src="Reflist/styles.css" /&gt;
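To make the geometric (Koyck) lag specification above concrete, the following sketch simulates data from a model with a lagged dependent variable and recovers the short-run effect "b" and the long-run effect "b" / (1 − "λ") by ordinary least squares. The parameter values and the simulation itself are illustrative assumptions, not part of the original article.

import numpy as np

rng = np.random.default_rng(0)
a, lam, b = 1.0, 0.6, 2.0        # illustrative intercept, decay and short-run effect
T = 5000

x = rng.normal(size=T)
y = np.zeros(T)
for t in range(1, T):
    y[t] = a + lam * y[t - 1] + b * x[t] + 0.1 * rng.normal()

# OLS regression of y_t on a constant, y_{t-1} and x_t
X = np.column_stack([np.ones(T - 1), y[:-1], x[1:]])
coef, *_ = np.linalg.lstsq(X, y[1:], rcond=None)
a_hat, lam_hat, b_hat = coef

print("short-run effect:", b_hat)                    # close to b = 2
print("long-run effect :", b_hat / (1.0 - lam_hat))  # close to b / (1 - lambda) = 5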
[ { "math_id": 0, "text": "y_t = a + w_0x_t + w_1x_{t-1} + w_2x_{t-2} + ... + \\text{error term}" }, { "math_id": 1, "text": "y_t = a + w_0x_t + w_1x_{t-1} + w_2x_{t-2} + ... + w_nx_{t-n} + \\text{error term}," }, { "math_id": 2, "text": " p " }, { "math_id": 3, "text": " w_i = \\sum_{j=0}^{n} a_j i^j " }, { "math_id": 4, "text": " i=0, \\dots , k. " }, { "math_id": 5, "text": " y_t= a + \\lambda y_{t-1} + bx_t + \\text{error term}," }, { "math_id": 6, "text": "0 \\le \\lambda < 1" }, { "math_id": 7, "text": "b+ \\lambda b + \\lambda^2 b + ... = b/(1-\\lambda)." }, { "math_id": 8, "text": "w_i = \\sum_{j=2}^{n}\\frac{a_j}{(i+1)^j}, " }, { "math_id": 9, "text": "i=0, \\dots , \\infty ." }, { "math_id": 10, "text": " w_i = \\sum_{j=2}^{n} a_j(1/j)^i, " }, { "math_id": 11, "text": "i=0, \\dots , \\infty " }, { "math_id": 12, "text": " w_i = \\sum_{j=1}^{n} a_j [j/(n+1)]^i, " } ]
https://en.wikipedia.org/wiki?curid=15438225
154389
Fourier
Fourier may refer to: &lt;templatestyles src="Template:TOC_right/styles.css" /&gt; See also. Topics referred to by the same term &lt;templatestyles src="Dmbox/styles.css" /&gt; This page lists articles associated with the title "Fourier".
[ { "math_id": 0, "text": "\\mathit{Fo}" }, { "math_id": 1, "text": "\\alpha t/d^2" }, { "math_id": 2, "text": "\\alpha t" }, { "math_id": 3, "text": "d^2" } ]
https://en.wikipedia.org/wiki?curid=154389
1543932
Mercury coulometer
Instrument for analyzing redox reactions involving mercury In electrochemistry, a mercury coulometer is an analytical instrument which uses mercury to perform coulometry (determining the amount of matter transformed in a chemical reaction by measuring electric current) on the following reaction:&lt;ref name="Patrick/Fardo2000"&gt;&lt;/ref&gt; formula_0 These oxidation/reduction processes have 100% efficiency over a wide range of current densities. Measurement of the quantity of electricity (in coulombs) is based on changes in the mass of the mercury electrode. The mass of the electrode increases during cathodic deposition of mercury ions and decreases during anodic dissolution of the metal. formula_1 where Q is the quantity of electricity; Δ"m" is the mass change; F is the Faraday constant; and "M"Hg is the molar mass of mercury. Use as an hour meter. Before the development of solid-state electronics, coulometers were used as long-period (up to 25,000 hours) elapsed hour meters in electronic equipment and other devices, including on the Apollo Program space vehicles. By passing a constant, calibrated current through the coulometer, the movement of a gap between mercury droplets provides a visual indication of elapsed time. Brands included HP's "Chronister" and Curtis' "Indachron". Construction. This coulometer has several constructions, but all of them are based on mass measurement. The device consists of two reservoirs connected by a thin graduated capillary tube containing a solution of mercury(II) ions. Each of the reservoirs has an electrode immersed in a drop of mercury. Another small drop of mercury is inserted into the capillary. When the current is turned on, it initiates dissolution of the metallic mercury on one side of the drop in the capillary and deposition on the other side of the same drop. This drop starts to move. Because of the high efficiency of the deposition/dissolution of mercury under the influence of the current, the mass or volume of this small drop is constant and its movement is linearly proportional to the charge passed. If the direction of the current is changed, the drop moves in the opposite direction. The sensitivity of this type of coulometer depends on the diameter of the capillary. Notes. &lt;templatestyles src="Reflist/styles.css" /&gt;
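As a quick numerical illustration of the mass-to-charge relation above, the following sketch evaluates Q = 2Δ"m"F/"M"Hg in Python; the 1.0 mg mass change is a hypothetical value chosen only for the example.

# Charge passed through a mercury coulometer, from the measured mass change,
# using Q = 2 * dm * F / M_Hg (two electrons transferred per Hg2+ ion).
F = 96485.332        # Faraday constant, C/mol
M_HG = 200.59        # molar mass of mercury, g/mol

def charge_from_mass_change(dm_grams):
    """Return the charge in coulombs corresponding to a mass change dm (in grams)."""
    return 2.0 * dm_grams * F / M_HG

# Hypothetical example: a 1.0 mg gain of mercury on the cathode
print(charge_from_mass_change(1.0e-3))   # about 0.96 C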
[ { "math_id": 0, "text": "\\ce{Hg^2+} + \\ce{2e^- <=> Hg^\\circ}" }, { "math_id": 1, "text": "Q = \\frac{ 2 \\, \\Delta m \\, F}{M_\\ce{Hg}}," } ]
https://en.wikipedia.org/wiki?curid=1543932
1543960
Effective population size
Ecological concept The effective population size ("N""e") is the size of an idealised population that would experience the same rate of genetic drift or increase in inbreeding as the real population. Idealised populations are based on unrealistic but convenient assumptions including random mating, simultaneous birth of each new generation, and constant population size. For most quantities of interest and most real populations, "N""e" is smaller than the census population size "N" of a real population. The same population may have multiple effective population sizes for different properties of interest, including genetic drift and inbreeding. The effective population size is most commonly measured with respect to the coalescence time. In an idealised diploid population with no selection at any locus, the expectation of the coalescence time in generations is equal to twice the census population size. The effective population size is measured as within-species genetic diversity divided by four times the mutation rate formula_0, because in such an idealised population, the heterozygosity is equal to formula_1. In a population with selection at many loci and abundant linkage disequilibrium, the coalescent effective population size may not reflect the census population size at all, or may reflect its logarithm. The concept of effective population size was introduced in the field of population genetics in 1931 by the American geneticist Sewall Wright. Overview: Types of effective population size. Depending on the quantity of interest, effective population size can be defined in several ways. Ronald Fisher and Sewall Wright originally defined it as "the number of breeding individuals in an idealised population that would show the same amount of dispersion of allele frequencies under random genetic drift or the same amount of inbreeding as the population under consideration". More generally, an effective population size may be defined as the number of individuals in an idealised population that has a value of any given population genetic quantity that is equal to the value of that quantity in the population of interest. The two population genetic quantities identified by Wright were the one-generation increase in variance across replicate populations (variance effective population size) and the one-generation change in the inbreeding coefficient (inbreeding effective population size). These two are closely linked, and derived from F-statistics, but they are not identical. Today, the effective population size is usually estimated empirically with respect to the sojourn or coalescence time, estimated as the within-species genetic diversity divided by the mutation rate, yielding a coalescent effective population size. Another important effective population size is the selection effective population size 1/scritical, where scritical is the critical value of the selection coefficient at which selection becomes more important than genetic drift. Empirical measurements. In "Drosophila" populations of census size 16, the variance effective population size has been measured as equal to 11.5. This measurement was achieved through studying changes in the frequency of a neutral allele from one generation to another in over 100 replicate populations. For coalescent effective population sizes, a survey of publications on 102 mostly wildlife animal and plant species yielded 192 "N""e"/"N" ratios. Seven different estimation methods were used in the surveyed studies. 
Accordingly, the ratios ranged widely from 10^-6 for Pacific oysters to 0.994 for humans, with an average of 0.34 across the examined species. Based on these data, they subsequently estimated more comprehensive ratios, accounting for fluctuations in population size, variance in family size and unequal sex-ratio. These ratios average to only 0.10-0.11. A genealogical analysis of human hunter-gatherers (Eskimos) determined the effective-to-census population size ratio for haploid (mitochondrial DNA, Y chromosomal DNA), and diploid (autosomal DNA) loci separately: the ratio of the effective to the census population size was estimated as 0.6–0.7 for autosomal and X-chromosomal DNA, 0.7–0.9 for mitochondrial DNA and 0.5 for Y-chromosomal DNA. Variance effective size. In the Wright-Fisher idealized population model, the conditional variance of the allele frequency formula_2, given the allele frequency formula_3 in the previous generation, is formula_4 Let formula_5 denote the same, typically larger, variance in the actual population under consideration. The variance effective population size formula_6 is defined as the size of an idealized population with the same variance. This is found by substituting formula_5 for formula_7 and solving for formula_8 which gives formula_9 Theoretical examples. In the following examples, one or more of the assumptions of a strictly idealised population are relaxed, while other assumptions are retained. The variance effective population size of the more relaxed population model is then calculated with respect to the strict model. Variations in population size. Population size varies over time. Suppose there are "t" non-overlapping generations, then effective population size is given by the harmonic mean of the population sizes: formula_10 For example, say the population size was "N" = 10, 100, 50, 80, 20, 500 for six generations ("t" = 6). Then the effective population size is the harmonic mean of these, giving "N""e" = 6 / (1/10 + 1/100 + 1/50 + 1/80 + 1/20 + 1/500) ≈ 30.8. Note this is less than the arithmetic mean of the population size, which in this example is 126.7. The harmonic mean tends to be dominated by the smallest bottleneck that the population goes through. Dioeciousness. If a population is dioecious, i.e. there is no self-fertilisation, then formula_11 or more generally, formula_12 where "D" represents dioeciousness and may take the value 0 (for not dioecious) or 1 for dioecious. When "N" is large, "N""e" approximately equals "N", so this is usually trivial and often ignored: formula_13 Variance in reproductive success. If population size is to remain constant, each individual must contribute on average two gametes to the next generation. An idealized population assumes that this follows a Poisson distribution so that the variance of the number of gametes contributed, "k", is equal to the mean number contributed, i.e. 2: formula_14 However, in natural populations the variance is often larger than this. The vast majority of individuals may have no offspring, and the next generation stems only from a small number of individuals, so formula_15 The effective population size is then smaller, and given by: formula_16 Note that if the variance of "k" is less than 2, "N""e" is greater than "N". In the extreme case of a population experiencing no variation in family size, as in a laboratory population in which the number of offspring is artificially controlled, "V""k" = 0 and "N""e" = 2"N". Non-Fisherian sex-ratios. 
When the sex ratio of a population varies from the Fisherian 1:1 ratio, effective population size is given by: formula_17 where "N""m" is the number of males and "N""f" the number of females. For example, with 80 males and 20 females (an absolute population size of 100), the effective population size is 4 × 80 × 20 / 100 = 64. Again, this results in "N""e" being less than "N". Inbreeding effective size. Alternatively, the effective population size may be defined by noting how the average inbreeding coefficient changes from one generation to the next, and then defining "N""e" as the size of the idealized population that has the same change in average inbreeding coefficient as the population under consideration. The presentation follows Kempthorne (1957). For the idealized population, the inbreeding coefficients follow the recurrence equation formula_18 Using Panmictic Index (1 − "F") instead of inbreeding coefficient, we get the approximate recurrence equation formula_19 The difference per generation is formula_20 The inbreeding effective size can be found by solving formula_21 This is formula_22 although researchers rarely use this equation directly. Theoretical example: overlapping generations and age-structured populations. When organisms live longer than one breeding season, effective population sizes have to take into account the life tables for the species. Haploid. Assume a haploid population with discrete age structure. An example might be an organism that can survive several discrete breeding seasons. Further, define the following age structure characteristics: formula_23 Fisher's reproductive value for age formula_24, formula_25 the chance an individual will survive to age formula_24, and formula_26 the number of newborn individuals per breeding season. The generation time is calculated as formula_27 the average age of a reproducing individual. Then, the inbreeding effective population size is formula_28 Diploid. Similarly, the inbreeding effective number can be calculated for a diploid population with discrete age structure. This was first given by Johnson, but the notation more closely resembles Emigh and Pollak. Assume the same basic parameters for the life table as given for the haploid case, but distinguishing between male and female, such as "N"0"ƒ" and "N"0"m" for the number of newborn females and males, respectively (notice lower case "ƒ" for females, compared to upper case "F" for inbreeding). The inbreeding effective number is formula_29 Coalescent effective size. According to the neutral theory of molecular evolution, a neutral allele remains in a population for Ne generations, where Ne is the effective population size. An idealised diploid population will have a pairwise nucleotide diversity equal to 4formula_0Ne, where formula_0 is the mutation rate. The sojourn effective population size can therefore be estimated empirically by dividing the nucleotide diversity by four times the mutation rate. The coalescent effective size may have little relationship to the number of individuals physically present in a population. Measured coalescent effective population sizes vary between genes in the same population, being low in genome areas of low recombination and high in genome areas of high recombination. Sojourn times are proportional to N in neutral theory, but for alleles under selection, sojourn times are proportional to log(N). Genetic hitchhiking can cause neutral mutations to have sojourn times proportional to log(N): this may explain the relationship between measured effective population size and the local recombination rate. 
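The two worked examples above – the harmonic mean over fluctuating census sizes and the unequal sex ratio – are easy to verify numerically. The following minimal Python sketch uses only the standard library and reproduces the values of about 30.8 and 64 quoted above.

from statistics import harmonic_mean

def ne_fluctuating(sizes):
    """Effective size over generations of varying census size: the harmonic mean."""
    return harmonic_mean(sizes)

def ne_sex_ratio(n_males, n_females):
    """Effective size for a non-Fisherian sex ratio: 4 Nm Nf / (Nm + Nf)."""
    return 4 * n_males * n_females / (n_males + n_females)

print(ne_fluctuating([10, 100, 50, 80, 20, 500]))   # about 30.8, well below the arithmetic mean of 126.7
print(ne_sex_ratio(80, 20))                         # 64.0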
Selection effective size. In an idealised Wright-Fisher model, the fate of an allele, beginning at an intermediate frequency, is largely determined by selection if the selection coefficient s ≫ 1/N, and largely determined by neutral genetic drift if s ≪ 1/N. In real populations, the cutoff value of s may depend instead on local recombination rates. This limit to selection in a real population may be captured in a toy Wright-Fisher simulation through the appropriate choice of Ne. Populations with different selection effective population sizes are predicted to evolve profoundly different genome architectures. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
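The drift-versus-selection threshold described above can be illustrated with a toy Wright-Fisher simulation. The sketch below, with parameter values chosen purely for demonstration, shows that an allele starting at intermediate frequency fixes almost surely when s is much larger than 1/N, but behaves essentially neutrally (fixing about half the time) when s is much smaller than 1/N.

import numpy as np

rng = np.random.default_rng(1)

def fixation_fraction(N, s, p0=0.5, replicates=500):
    """Fraction of replicate Wright-Fisher populations in which the allele fixes.

    Each generation 2N gametes are drawn binomially, with the allele frequency
    weighted by its fitness advantage (1 + s) before sampling.
    """
    fixed = 0
    for _ in range(replicates):
        p = p0
        while 0.0 < p < 1.0:
            p_sel = p * (1.0 + s) / (1.0 + p * s)       # frequency after selection
            p = rng.binomial(2 * N, p_sel) / (2.0 * N)  # genetic drift by binomial sampling
        fixed += (p == 1.0)
    return fixed / replicates

N = 200
print("s = 10/N :", fixation_fraction(N, 10.0 / N))   # close to 1: selection dominates
print("s = 0.1/N:", fixation_fraction(N, 0.1 / N))    # close to 0.5: drift dominates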
[ { "math_id": 0, "text": "\\mu" }, { "math_id": 1, "text": "4N\\mu" }, { "math_id": 2, "text": "p'" }, { "math_id": 3, "text": "p" }, { "math_id": 4, "text": "\\operatorname{var}(p' \\mid p)= {p(1-p) \\over 2N}." }, { "math_id": 5, "text": "\\widehat{\\operatorname{var}}(p'\\mid p)" }, { "math_id": 6, "text": "N_e^{(v)}" }, { "math_id": 7, "text": "\\operatorname{var}(p'\\mid p)" }, { "math_id": 8, "text": "N" }, { "math_id": 9, "text": "N_e^{(v)} = {p(1-p) \\over 2 \\widehat{\\operatorname{var}}(p)}." }, { "math_id": 10, "text": "{1 \\over N_e} = {1 \\over t} \\sum_{i=1}^t {1 \\over N_i}" }, { "math_id": 11, "text": "N_e = N + \\begin{matrix} \\frac{1}{2} \\end{matrix}" }, { "math_id": 12, "text": "N_e = N + \\begin{matrix} \\frac{D}{2} \\end{matrix}" }, { "math_id": 13, "text": "N_e = N + \\begin{matrix} \\frac{1}{2} \\approx N \\end{matrix}" }, { "math_id": 14, "text": "\\operatorname{var}(k) = \\bar{k} = 2." }, { "math_id": 15, "text": "\\operatorname{var}(k) > 2." }, { "math_id": 16, "text": "N_e^{(v)} = {4 N - 2D \\over 2 + \\operatorname{var}(k)}" }, { "math_id": 17, "text": "N_e^{(v)} = N_e^{(F)} = {4 N_m N_f \\over N_m + N_f}" }, { "math_id": 18, "text": "F_t = \\frac{1}{N}\\left(\\frac{1+F_{t-2}}{2}\\right)+\\left(1-\\frac{1}{N}\\right)F_{t-1}." }, { "math_id": 19, "text": "1-F_t = P_t = P_0\\left(1-\\frac{1}{2N}\\right)^t. " }, { "math_id": 20, "text": "\\frac{P_{t+1}}{P_t} = 1-\\frac{1}{2N}. " }, { "math_id": 21, "text": "\\frac{P_{t+1}}{P_t} = 1-\\frac{1}{2N_e^{(F)}}. " }, { "math_id": 22, "text": "N_e^{(F)} = \\frac{1}{2\\left(1-\\frac{P_{t+1}}{P_t}\\right)} " }, { "math_id": 23, "text": "v_i = " }, { "math_id": 24, "text": "i" }, { "math_id": 25, "text": "\\ell_i = " }, { "math_id": 26, "text": "N_0 = " }, { "math_id": 27, "text": "T = \\sum_{i=0}^\\infty \\ell_i v_i = " }, { "math_id": 28, "text": "N_e^{(F)} = \\frac{N_0T}{1 + \\sum_i\\ell_{i+1}^2v_{i+1}^2(\\frac{1}{\\ell_{i+1}}-\\frac{1}{\\ell_i})}." }, { "math_id": 29, "text": "\n\\begin{align}\n\\frac{1}{N_e^{(F)}} = \\frac{1}{4T}\\left\\{\\frac{1}{N_0^f}+\\frac{1}{N_0^m} + \\sum_i\\left(\\ell_{i+1}^f\\right)^2\\left(v_{i+1}^f\\right)^2\\left(\\frac{1}{\\ell_{i+1}^f}-\\frac{1}{\\ell_i^f}\\right)\\right. \\,\\,\\,\\,\\,\\,\\,\\, & \\\\\n \\left. {} + \\sum_i\\left(\\ell_{i+1}^m\\right)^2\\left(v_{i+1}^m\\right)^2\\left(\\frac{1}{\\ell_{i+1}^m}-\\frac{1}{\\ell_i^m}\\right) \\right\\}. &\n\\end{align}\n" } ]
https://en.wikipedia.org/wiki?curid=1543960
15440345
Vivification
Operation on a description logic knowledge base Vivification is an operation on a description logic knowledge base to improve performance of a semantic reasoner. Vivification replaces a disjunction of concepts formula_0 by the "least common subsumer" of the concepts formula_1. The goal of this operation is to improve the performance of the reasoner by replacing a complex set of concepts with a single concept which subsumes the original concepts. Consider the example given in (Cohen 92): Suppose we have the concept formula_2. This concept can be vivified into a simpler concept formula_3. This summarization leads to an approximation that may not be exactly equivalent to the original. An approximation. Knowledge base vivification is not necessarily exact. If the reasoner is operating under the open world assumption, we may get surprising results. In the previous example, if we replace the disjunction with the vivified concept, we will arrive at a surprising result. First, we find that the reasoner will no longer classify Jill as either a pianist or an organist. Even though formula_4 and formula_5 are the only two sub-classes, under the OWA we can no longer classify Jill as playing one or the other. The reason is that there may be another keyboard instrument (e.g. a harpsichord) that Jill plays but which does not have a specific subclass.
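The following toy Python sketch illustrates the idea behind vivification on the example above: compute the least common subsumer of the disjuncts in a small, hand-written subsumption hierarchy and use it as the replacement concept. The hierarchy and helper functions are hypothetical stand-ins for a real description logic reasoner.

# Toy least-common-subsumer computation over an explicit subsumption hierarchy.
# The hierarchy below is a made-up example, not a real DL knowledge base.
PARENT = {
    "PIANIST": "KEYBOARD-PLAYER",
    "ORGANIST": "KEYBOARD-PLAYER",
    "KEYBOARD-PLAYER": "MUSICIAN",
    "VIOLINIST": "MUSICIAN",
    "MUSICIAN": "THING",
}

def ancestors(concept):
    """Return the concept together with all of its subsumers."""
    chain = [concept]
    while concept in PARENT:
        concept = PARENT[concept]
        chain.append(concept)
    return chain

def least_common_subsumer(concepts):
    """Most specific concept that subsumes every concept in the disjunction."""
    common = set(ancestors(concepts[0]))
    for c in concepts[1:]:
        common &= set(ancestors(c))
    # the most specific shared subsumer is the one deepest in the hierarchy
    return max(common, key=lambda c: len(ancestors(c)))

# Vivify PIANIST(Jill) OR ORGANIST(Jill) into a single concept assertion
print(least_common_subsumer(["PIANIST", "ORGANIST"]))   # KEYBOARD-PLAYER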
[ { "math_id": 0, "text": "C_1 \\sqcup C_2 \\ldots \\sqcup C_n" }, { "math_id": 1, "text": "C_1,C_2,\\ldots C_n" }, { "math_id": 2, "text": "\\textrm{PIANIST(Jill)} \\vee \\textrm{ORGANIST(Jill)}" }, { "math_id": 3, "text": "\\textrm{KEYBOARD-PLAYER(Jill)}" }, { "math_id": 4, "text": "\\textrm{ORGANIST}" }, { "math_id": 5, "text": "\\textrm{PIANIST}" } ]
https://en.wikipedia.org/wiki?curid=15440345
15440535
Stokes wave
Nonlinear and periodic surface wave on an inviscid fluid layer of constant mean depth In fluid dynamics, a Stokes wave is a nonlinear and periodic surface wave on an inviscid fluid layer of constant mean depth. This type of modelling has its origins in the mid-19th century when Sir George Stokes – using a perturbation series approach, now known as the Stokes expansion – obtained approximate solutions for nonlinear wave motion. Stokes's wave theory is of direct practical use for waves on intermediate and deep water. It is used in the design of coastal and offshore structures, in order to determine the wave kinematics (free surface elevation and flow velocities). The wave kinematics are subsequently needed in the design process to determine the wave loads on a structure. For long waves (as compared to depth) – and using only a few terms in the Stokes expansion – its applicability is limited to waves of small amplitude. In such shallow water, a cnoidal wave theory often provides better periodic-wave approximations. While, in the strict sense, "Stokes wave" refers to a progressive periodic wave of permanent form, the term is also used in connection with standing waves and even random waves. Examples. The examples below describe Stokes waves under the action of gravity (without surface tension effects) in the case of pure wave motion, so without an ambient mean current. Third-order Stokes wave on deep water. According to Stokes's third-order theory, the free surface elevation "η", the velocity potential Φ, the phase speed (or celerity) "c" and the wave phase "θ" are, for a progressive surface gravity wave on deep water – i.e. the fluid layer has infinite depth: formula_0 where "a" is the first-order wave amplitude, "k" is the wavenumber, "ω" is the angular frequency, "g" is the gravitational acceleration, and "x", "z" and "t" denote the horizontal coordinate, the vertical coordinate and time, respectively. The expansion parameter "ka" is known as the wave steepness. The phase speed increases with increasing nonlinearity "ka" of the waves. The wave height "H", being the difference between the surface elevation "η" at a crest and a trough, is: formula_1 Note that the second- and third-order terms in the velocity potential Φ are zero. Only at fourth order do contributions deviating from first-order theory – i.e. Airy wave theory – appear. Up to third order the orbital velocity field u = ∇Φ consists of a circular motion of the velocity vector at each position ("x","z"). As a result, the surface elevation of deep-water waves is to a good approximation trochoidal, as already noted by Stokes. Stokes further observed that, although (in this Eulerian description) the third-order orbital velocity field consists of a circular motion at each point, the Lagrangian paths of fluid parcels are not closed circles. This is due to the reduction of the velocity amplitude at increasing depth below the surface. This Lagrangian drift of the fluid parcels is known as the Stokes drift. Second-order Stokes wave on arbitrary depth. The surface elevation "η" and the velocity potential Φ are, according to Stokes's second-order theory of surface gravity waves on a fluid layer of mean depth "h": formula_2 Observe that for finite depth the velocity potential Φ contains a linear drift in time, independent of position ("x" and "z"). Both this temporal drift and the double-frequency term (containing sin 2θ) in Φ vanish for deep-water waves. Stokes and Ursell parameters. The ratio "S" of the free-surface amplitudes at second order and first order – according to Stokes's second-order theory – is: formula_3 In deep water, for large "kh" the ratio "S" has the asymptote formula_4 For long waves, i.e. 
small "kh", the ratio "S" behaves as formula_5 or, in terms of the wave height "H" = 2"a" and wavelength "λ" = 2"π" / "k": formula_6 with formula_7 Here "U" is the Ursell parameter (or Stokes parameter). For long waves ("λ" ≫ "h") of small height "H", i.e. "U" ≪ 32π2/3 ≈ 100, second-order Stokes theory is applicable. Otherwise, for fairly long waves ("λ" &gt; 7"h") of appreciable height "H" a cnoidal wave description is more appropriate. According to Hedges, fifth-order Stokes theory is applicable for "U" &lt; 40, and otherwise fifth-order cnoidal wave theory is preferable. Third-order dispersion relation. For Stokes waves under the action of gravity, the third-order dispersion relation is – according to Stokes's first definition of celerity: formula_8 This third-order dispersion relation is a direct consequence of avoiding secular terms, when inserting the second-order Stokes solution into the third-order equations (of the perturbation series for the periodic wave problem). In deep water (short wavelength compared to the depth): formula_9 and in shallow water (long wavelengths compared to the depth): formula_10 As shown above, the long-wave Stokes expansion for the dispersion relation will only be valid for small enough values of the Ursell parameter: "U" ≪ 100. Overview. Stokes's approach to the nonlinear wave problem. A fundamental problem in finding solutions for surface gravity waves is that boundary conditions have to be applied at the position of the free surface, which is not known beforehand and is thus a part of the solution to be found. Sir George Stokes solved this nonlinear wave problem in 1847 by expanding the relevant potential flow quantities in a Taylor series around the mean (or still) surface elevation. As a result, the boundary conditions can be expressed in terms of quantities at the mean (or still) surface elevation (which is fixed and known). Next, a solution for the nonlinear wave problem (including the Taylor series expansion around the mean or still surface elevation) is sought by means of a perturbation series – known as the "Stokes expansion" – in terms of a small parameter, most often the wave steepness. The unknown terms in the expansion can be solved sequentially. Often, only a small number of terms is needed to provide a solution of sufficient accuracy for engineering purposes. Typical applications are in the design of coastal and offshore structures, and of ships. Another property of nonlinear waves is that the phase speed of nonlinear waves depends on the wave height. In a perturbation-series approach, this easily gives rise to a spurious secular variation of the solution, in contradiction with the periodic behaviour of the waves. Stokes solved this problem by also expanding the dispersion relationship into a perturbation series, by a method now known as the Lindstedt–Poincaré method. Applicability. "Stokes's wave theory", when using a low order of the perturbation expansion (e.g. up to second, third or fifth order), is valid for nonlinear waves on intermediate and deep water, that is for wavelengths ("λ") not large as compared with the mean depth ("h"). In shallow water, the low-order Stokes expansion breaks down (gives unrealistic results) for appreciable wave amplitude (as compared to the depth). Then, Boussinesq approximations are more appropriate. Further approximations on Boussinesq-type (multi-directional) wave equations lead – for one-way wave propagation – to the Korteweg–de Vries equation or the Benjamin–Bona–Mahony equation. 
Like (near) exact Stokes-wave solutions, these two equations have solitary wave (soliton) solutions, besides periodic-wave solutions known as cnoidal waves. Modern extensions. Already in 1914, Wilton extended the Stokes expansion for deep-water surface gravity waves to tenth order, although introducing errors at the eighth order. A fifth-order theory for finite depth was derived by De in 1955. For engineering use, the fifth-order formulations of Fenton are convenient, applicable to both Stokes's first and second definition of phase speed (celerity). Fifth-order Stokes theory is preferable to fifth-order cnoidal wave theory for Ursell parameters below about 40. Different choices for the frame of reference and expansion parameters are possible in Stokes-like approaches to the nonlinear wave problem. In 1880, Stokes himself inverted the dependent and independent variables, by taking the velocity potential and stream function as the independent variables, and the coordinates ("x","z") as the dependent variables, with "x" and "z" being the horizontal and vertical coordinates respectively. This has the advantage that the free surface, in a frame of reference in which the wave is steady (i.e. moving with the phase velocity), corresponds with a line on which the stream function is a constant. Then the free surface location is known beforehand, and not an unknown part of the solution. The disadvantage is that the radius of convergence of the rephrased series expansion is reduced. Another approach is by using the Lagrangian frame of reference, following the fluid parcels. The Lagrangian formulations show enhanced convergence, as compared to the formulations in both the Eulerian frame, and in the frame with the potential and streamfunction as independent variables. An exact solution for nonlinear pure capillary waves of permanent form, and for infinite fluid depth, was obtained by Crapper in 1957. Note that these capillary waves – being short waves forced by surface tension, if gravity effects are negligible – have sharp troughs and flat crests. This contrasts with nonlinear surface gravity waves, which have sharp crests and flat troughs. By use of computer models, the Stokes expansion for surface gravity waves has been continued, up to high (117th) order, by Schwartz. Schwartz has found that the amplitude "a" (or "a"1) of the first-order fundamental reaches a maximum "before" the maximum wave height "H" is reached. Consequently, the wave steepness "ka" in terms of wave amplitude is not a monotone function up to the highest wave, and Schwartz utilizes instead "kH" as the expansion parameter. To estimate the highest wave in deep water, Schwartz has used Padé approximants and Domb–Sykes plots in order to improve the convergence of the Stokes expansion. Extended tables of Stokes waves on various depths, computed by a different method (but in accordance with the results by others), are provided in Williams (1981, 1985). Several exact relationships exist between integral properties – such as kinetic and potential energy, horizontal wave momentum and radiation stress. For deep-water waves, many of these integral properties turn out to have a maximum before the maximum wave height is reached (in support of Schwartz's findings). Using a method similar to that of Schwartz, integral properties have also been computed and tabulated for a wide range of finite water depths (all reaching maxima below the highest wave height). 
Further, these integral properties play an important role in the conservation laws for water waves, through Noether's theorem. In 2005, Hammack, Henderson and Segur provided the first experimental evidence for the existence of three-dimensional progressive waves of permanent form in deep water – that is, bi-periodic and two-dimensional progressive wave patterns of permanent form. The existence of these three-dimensional steady deep-water waves was revealed in 2002, from a bifurcation study of two-dimensional Stokes waves by Craig and Nicholls, using numerical methods. Convergence and instability. Convergence. Convergence of the Stokes expansion was first proved for the case of small-amplitude waves – on the free surface of a fluid of infinite depth. This was extended shortly afterwards to the case of finite depth and small-amplitude waves. Near the end of the 20th century, it was shown that for finite-amplitude waves the convergence of the Stokes expansion depends strongly on the formulation of the periodic wave problem. For instance, an inverse formulation of the periodic wave problem as used by Stokes – with the spatial coordinates as a function of velocity potential and stream function – does not converge for high-amplitude waves. Other formulations converge much more rapidly, e.g. in the Eulerian frame of reference (with the velocity potential or stream function as a function of the spatial coordinates). Highest wave. The maximum wave steepness, for periodic and propagating deep-water waves, is "H" / "λ" = 0.1410633 ± 4 · 10−7, so the wave height is about one-seventh (1/7) of the wavelength λ. Surface gravity waves of this maximum height have a sharp wave crest – with an angle of 120° (in the fluid domain) – also for finite depth, as shown by Stokes in 1880. An accurate estimate of the highest wave steepness in deep water ("H" / "λ" ≈ 0.142) was already made in 1893, by John Henry Michell, using a numerical method. A more detailed study of the behaviour of the highest wave near the sharp-cornered crest was published by Malcolm A. Grant, in 1973. The existence of the highest wave on deep water with a sharp-angled crest of 120° was proved by John Toland in 1978. The convexity of η(x) between the successive maxima with a sharp-angled crest of 120° was independently proven by C.J. Amick et al. and Pavel I. Plotnikov in 1982. The highest Stokes wave – under the action of gravity – can be approximated with the following simple and accurate representation of the free surface elevation "η"("x","t"): formula_11 with formula_12 for formula_13 and shifted horizontally over an integer number of wavelengths to represent the other waves in the regular wave train. This approximation is accurate to within 0.7% everywhere, as compared with the "exact" solution for the highest wave. Another accurate approximation – however less accurate than the previous one – of the fluid motion on the surface of the steepest wave is by analogy with the swing of a pendulum in a grandfather clock. A large library of Stokes waves, computed with high precision for the case of infinite depth and represented with high accuracy (at least 27 digits after the decimal point) as Padé approximants, can be found at StokesWave.org Instability. In deeper water, Stokes waves are unstable. This was shown by T. Brooke Benjamin and Jim E. Feir in 1967. 
The Benjamin–Feir instability is a side-band or modulational instability, with the side-band modulations propagating in the same direction as the carrier wave; waves become unstable on deeper water for a relative depth "kh" &gt; 1.363 (with "k" the wavenumber and "h" the mean water depth). The Benjamin–Feir instability can be described with the nonlinear Schrödinger equation, by inserting a Stokes wave with side bands. Subsequently, with a more refined analysis, it has been shown – theoretically and experimentally – that the Stokes wave and its side bands exhibit Fermi–Pasta–Ulam–Tsingou recurrence: a cyclic alternation between modulation and demodulation. In 1978 Longuet-Higgins, by means of numerical modelling of fully non-linear waves and modulations (propagating in the carrier wave direction), presented a detailed analysis of the region of instability in deep water: both for superharmonics (for perturbations at the spatial scales smaller than the wavelength formula_14) and subharmonics (for perturbations at the spatial scales larger than formula_14). With increasing amplitude of the Stokes wave, new modes of superharmonic instability appear. A new branch of instability appears when the energy of the wave passes an extremum. Detailed analysis of the mechanism by which the new branches of instability appear has shown that their behaviour closely follows a simple law, which allows the instability growth rates for all known and predicted branches to be found with good accuracy. In Longuet-Higgins's studies of two-dimensional wave motion, as well as the subsequent studies of three-dimensional modulations by McLean et al., new types of instabilities were found – these are associated with resonant wave interactions between five (or more) wave components. Stokes expansion. Governing equations for a potential flow. In many instances, the oscillatory flow in the fluid interior of surface waves can be described accurately using potential flow theory, apart from boundary layers near the free surface and bottom (where vorticity is important, due to viscous effects, see Stokes boundary layer). Then, the flow velocity u can be described as the gradient of a velocity potential formula_15: 
In the case of a constant atmospheric pressure, the dynamic boundary condition becomes: where the constant atmospheric pressure has been taken equal to zero, without loss of generality. Both boundary conditions contain the potential formula_15 as well as the surface elevation "η". A (dynamic) boundary condition in terms of only the potential formula_15 can be constructed by taking the material derivative of the dynamic boundary condition, and using the kinematic boundary condition: formula_17 formula_18 At the bottom of the fluid layer, impermeability requires the normal component of the flow velocity to vanish: where "h"("x","y") is the depth of the bed below the datum "z" = 0 and "n" is the coordinate component in the direction normal to the bed. For permanent waves above a horizontal bed, the mean depth "h" is a constant and the boundary condition at the bed becomes: formula_19 Taylor series in the free-surface boundary conditions. The free-surface boundary conditions (D) and (E) apply at the yet unknown free-surface elevation "z" = "η"("x","y","t"). They can be transformed into boundary conditions at a fixed elevation "z" = constant by use of Taylor series expansions of the flow field around that elevation. Without loss of generality the mean surface elevation – around which the Taylor series are developed – can be taken at "z" = 0. This ensures that the expansion is around an elevation close to the actual free-surface elevation. Convergence of the Taylor series for small-amplitude steady-wave motion has been proved. The following notation is used: the Taylor series of some field "f"("x","y","z","t") around "z" = 0 – and evaluated at "z" = "η"("x","y","t") – is: formula_20 with subscript zero meaning evaluation at "z" = 0, e.g.: ["f"]0 = "f"("x","y",0,"t"). Applying the Taylor expansion to free-surface boundary condition Eq. (E) in terms of the potential Φ gives: showing terms up to triple products of "η", "Φ" and u, as required for the construction of the Stokes expansion up to third-order O(("ka")3). Here, "ka" is the wave steepness, with "k" a characteristic wavenumber and "a" a characteristic wave amplitude for the problem under study. The fields "η", "Φ" and u are assumed to be "O"("ka"). The dynamic free-surface boundary condition Eq. (D) can be evaluated in terms of quantities at "z" = 0 as: The advantages of these Taylor-series expansions fully emerge in combination with a perturbation-series approach, for weakly non-linear waves ("ka" ≪ 1). Perturbation-series approach. The perturbation series are in terms of a small ordering parameter "ε" ≪ 1 – which subsequently turns out to be proportional to (and of the order of) the wave slope "ka", see the series solution in this section. So, take "ε" = "ka": formula_21 When applied in the flow equations, they should be valid independent of the particular value of "ε". By equating in powers of "ε", each term proportional to "ε" to a certain power has to be equal to zero. As an example of how the perturbation-series approach works, consider the non-linear boundary condition (G); it becomes: formula_22 The resulting boundary conditions at "z" = 0 for the first three orders are: In a similar fashion – from the dynamic boundary condition (H) – the conditions at "z" = 0 at the orders 1, 2 and 3 become: For the linear equations (A), (B) and (F) the perturbation technique results in a series of equations independent of the perturbation solutions at other orders: The above perturbation equations can be solved sequentially, i.e. 
starting with the first order, thereafter continuing with the second order, third order, etc. Application to progressive periodic waves of permanent form. The waves of permanent form propagate with a constant phase velocity (or celerity), denoted as "c". If the steady wave motion is in the horizontal "x"-direction, the flow quantities "η" and u are not separately dependent on "x" and time "t", but are functions of "x" − "ct": formula_23 Further, the waves are periodic – and because they are of permanent form – both in horizontal space "x" and in time "t", with wavelength "λ" and period "τ" respectively. Note that "Φ"("x","z","t") itself is not necessarily periodic due to the possibility of a constant (linear) drift in "x" and/or "t": formula_24 with "φ"("x","z","t") – as well as the derivatives ∂"Φ"/∂"t" and ∂"Φ"/∂"x" – being periodic. Here "β" is the mean flow velocity below trough level, and "γ" is related to the hydraulic head as observed in a frame of reference moving with the wave's phase velocity "c" (so the flow becomes steady in this reference frame). In order to apply the Stokes expansion to progressive periodic waves, it is advantageous to describe them through Fourier series as a function of the wave phase "θ"("x","t"): formula_25 assuming waves propagating in the "x"–direction. Here "k" = 2"π" / "λ" is the wavenumber, "ω" = 2"π" / "τ" is the angular frequency and "c" = "ω" / "k" (= "λ" / "τ") is the phase velocity. Now, the free surface elevation "η"("x","t") of a periodic wave can be described as the Fourier series: formula_26 Similarly, the corresponding expression for the velocity potential "Φ"("x","z","t") is: formula_27 satisfying both the Laplace equation ∇2"Φ" = 0 in the fluid interior, as well as the boundary condition ∂"Φ"/∂"z" = 0 at the bed "z" = −"h". For a given value of the wavenumber "k", the parameters "A"n, "B"n (with "n" = 1, 2, 3, ...), "c", "β" and "γ" have yet to be determined. They all can be expanded as perturbation series in "ε". Fenton provides these values for fifth-order Stokes's wave theory. For progressive periodic waves, derivatives with respect to "x" and "t" of functions "f"("θ","z") of "θ"("x","t") can be expressed as derivatives with respect to "θ": formula_28 The important point for non-linear waves – in contrast to linear Airy wave theory – is that the phase velocity "c" also depends on the wave amplitude "a", besides its dependence on wavelength "λ" = 2π / "k" and mean depth "h". Neglect of the dependence of "c" on wave amplitude results in the appearance of secular terms, in the higher-order contributions to the perturbation-series solution. Stokes already applied the required non-linear correction to the phase speed "c" in order to prevent secular behaviour. A general approach to do so is now known as the Lindstedt–Poincaré method. Since the wavenumber "k" is given and thus fixed, the non-linear behaviour of the phase velocity "c" = "ω" / "k" is taken into account by also expanding the angular frequency "ω" into a perturbation series: formula_29 Here "ω"0 will turn out to be related to the wavenumber "k" through the linear dispersion relation. However, time derivatives, through ∂"f"/∂"t" = −"ω" ∂"f"/∂"θ", now also give contributions – containing "ω"1, "ω"2, etc. – to the governing equations at higher orders in the perturbation series. By tuning "ω"1, "ω"2, etc., secular behaviour can be prevented. For surface gravity waves, it is found that "ω"1 = 0 and the first non-zero contribution to the dispersion relation comes from "ω"2 (see e.g. 
the sub-section "Third-order dispersion relation" above). Stokes's two definitions of wave celerity. For non-linear surface waves there is, in general, ambiguity in splitting the total motion into a wave part and a mean part. As a consequence, there is some freedom in choosing the phase speed (celerity) of the wave. Stokes identified two logical definitions of phase speed, known as Stokes's first and second definition of wave celerity: according to the first definition, the celerity is such that the mean horizontal Eulerian flow velocity below trough level is zero, while according to the second definition it is such that the mean horizontal mass transport of the wave motion is zero. As pointed out by Michael E. McIntyre, the mean horizontal mass transport will be (near) zero for a wave group approaching into still water; also in deep water, the mass transport caused by the waves is then balanced by an opposite mass transport in a return flow (undertow). This is due to the fact that otherwise a large mean force would be needed to accelerate the body of water into which the wave group is propagating. Notes. &lt;templatestyles src="Reflist/styles.css" /&gt; References. &lt;templatestyles src="Refbegin/styles.css" /&gt;
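As a small numerical companion to the formulas quoted in this article, the following Python sketch evaluates the third-order deep-water surface elevation and phase speed, the Ursell parameter used in the applicability criteria, and the simple approximation of the highest-wave profile given above. The wavelength, steepness, wave height and depth values are illustrative assumptions, not values taken from the text.

import numpy as np

g = 9.81  # gravitational acceleration, m/s^2

# Third-order Stokes wave on deep water (illustrative wavelength and steepness)
lam = 100.0              # wavelength, m (assumed)
k = 2.0 * np.pi / lam    # wavenumber
ka = 0.2                 # wave steepness (assumed)
a = ka / k               # first-order wave amplitude

def eta3(theta):
    """Third-order deep-water free-surface elevation as a function of the wave phase."""
    return a * ((1.0 - ka**2 / 16.0) * np.cos(theta)
                + 0.5 * ka * np.cos(2.0 * theta)
                + 0.375 * ka**2 * np.cos(3.0 * theta))

theta = np.linspace(0.0, 2.0 * np.pi, 1441)
c_linear = np.sqrt(g / k)
c_third = (1.0 + 0.5 * ka**2) * c_linear
print("crest:", eta3(theta).max(), " trough:", eta3(theta).min())  # sharp crests, flat troughs
print("phase speed: linear", c_linear, "-> third order", c_third)

# Ursell parameter U = H * lambda^2 / h^3 for an illustrative long wave
H, lam_u, h = 1.0, 40.0, 5.0   # assumed wave height, wavelength and depth
U = H * lam_u**2 / h**3
print("Ursell parameter U =", U,
      "| second-order Stokes applicable (U << ~100):", U < 32.0 * np.pi**2 / 3.0,
      "| fifth-order Stokes preferred over cnoidal (U < 40):", U < 40.0)

# Approximate profile of the highest wave: eta / lambda = A * (cosh(xi) - 1)
A = 1.0 / (np.sqrt(3.0) * np.sinh(0.5))   # approximately 1.108
xi = np.linspace(-0.5, 0.5, 2001)         # xi = (x - c t) / lambda
profile = A * (np.cosh(xi) - 1.0)
print("steepness of highest wave, H/lambda ~", profile.max() - profile.min())
# about 0.1414, close to the limiting value 0.1410633 quoted in the article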
[ { "math_id": 0, "text": "\\begin{align}\n \\eta(x,t) =& a \\left\\{ \n \\left[1 - \\tfrac{1}{16} (ka)^2 \\right] \\cos \\theta \n + \\tfrac12 (k a)\\, \\cos 2\\theta \n + \\tfrac38 (k a)^2\\, \\cos 3\\theta \n \\right\\}\n + \\mathcal{O}\\left( (ka)^4 \\right),\n \\\\\n \\Phi(x,z,t) =& a \\sqrt{\\frac{g}{k}}\\, \\text{e}^{kz}\\, \\sin \\theta\n + \\mathcal{O}\\left( (ka)^4 \\right),\n \\\\\n c =& \\frac{\\omega}{k} = \\left( 1 + \\tfrac12 (ka)^2 \\right)\\, \\sqrt{\\frac{g}{k}}\n + \\mathcal{O}\\left( (ka)^4 \\right), \\text{ and}\n \\\\\n \\theta(x,t) =& kx - \\omega t,\n\\end{align}" }, { "math_id": 1, "text": "H = 2a\\, \\left( 1 + \\tfrac38\\, k^2 a^2 \\right)." }, { "math_id": 2, "text": "\\begin{align}\n \\eta(x,t) =& \n a \\left\\{ \n \\cos\\, \\theta\n + ka\\, \\frac{3 - \\sigma^2}{4\\, \\sigma^3}\\, \\cos\\, 2\\theta\n \\right\\}\n + \\mathcal{O} \\left( (ka)^3 \\right),\n \\\\\n \\Phi(x,z,t) =&\n a\\, \\frac{\\omega}{k}\\, \\frac{1}{\\sinh\\, kh}\n \\\\ & \\times\n \\left\\{\\cosh\\, k(z+h)\n \\sin\\, \\theta\n + ka\\, \\frac{3 \\cosh\\, 2k(z+h)}{8\\, \\sinh^3\\, kh}\\, \\sin\\, 2\\theta\n \\right\\}\n \\\\ &\n - (ka)^2\\, \\frac{1}{2\\, \\sinh\\, 2kh}\\, \\frac{g\\, t}{k}\n + \\mathcal{O} \\left( (ka)^3 \\right),\n \\\\\n c =& \\frac{\\omega}{k} = \\sqrt{\\frac{g}{k}\\, \\sigma}\n + \\mathcal{O} \\left( (ka)^2 \\right),\n \\\\\n \\sigma =& \\tanh\\, kh\n \\quad \\text{and} \\quad \n \\theta(x,t) = k x - \\omega t.\n\\end{align}" }, { "math_id": 3, "text": "\\mathcal{S} = ka\\, \\frac{3 - \\tanh^2\\, kh}{4\\, \\tanh^3\\, kh}." }, { "math_id": 4, "text": "\\lim_{kh \\to \\infty} \\mathcal{S} = \\frac{1}{2}\\, ka." }, { "math_id": 5, "text": "\\lim_{kh \\downarrow 0} \\mathcal{S} = \\frac{3}{4}\\, \\frac{ka}{(kh)^3}," }, { "math_id": 6, "text": "\\lim_{kh \\downarrow 0} \\mathcal{S} \n= \\frac{3}{32\\,\\pi^2}\\, \\frac{H\\, \\lambda^2}{h^3}\n= \\frac{3}{32\\,\\pi^2}\\, \\mathcal{U}," }, { "math_id": 7, "text": " \\mathcal{U} \\equiv \\frac{H\\, \\lambda^2}{h^3}." 
}, { "math_id": 8, "text": "\\begin{align}\n \\omega^2 &= \\left( gk\\, \\tanh\\, kh \\right)\\;\n \\left\\{\n 1 \n + \\frac{9 - 10\\, \\sigma^2 + 9\\, \\sigma^4}{8\\, \\sigma^4}\\, ( ka )^2\n \\right\\}+ \\mathcal{O}\\left( (ka)^4 \\right),\n \\\\\n &\n \\qquad \\text{with}\n \\\\\n \\sigma &= \\tanh\\, kh.\n\\end{align}" }, { "math_id": 9, "text": "\\lim_{kh \\to \\infty} \\omega^2 = gk\\, \\left\\{ 1 + \\left( ka \\right)^2 \\right\\} + \\mathcal{O}\\left( (ka)^4 \\right)," }, { "math_id": 10, "text": "\n \\lim_{kh \\downarrow 0} \\omega^2 = \n k^2\\, gh\\, \\left\\{ 1 + \\frac98\\, \\frac{\\left( ka \\right)^2}{\\left( kh \\right)^4} \\right\\}\n + \\mathcal{O}\\left( (ka)^4 \\right).\n" }, { "math_id": 11, "text": " \\frac{\\eta}{\\lambda} = A\\, \\left[ \\cosh\\, \\left( \\frac{x-ct}{\\lambda} \\right) - 1 \\right], " }, { "math_id": 12, "text": " A = \\frac{1}{\\sqrt{3}\\, \\sinh \\left( \\frac12 \\right)} \\approx 1.108, " }, { "math_id": 13, "text": "-\\tfrac 1 2\\,\\lambda \\le (x-ct) \\le \\tfrac12\\, \\lambda," }, { "math_id": 14, "text": "\\lambda" }, { "math_id": 15, "text": "\\Phi" }, { "math_id": 16, "text": "\\mathbf u = [\\partial\\Phi/\\partial x ~~~ \\partial\\Phi/\\partial y ~~~ \\partial\\Phi/\\partial z]^{\\mathrm T}" }, { "math_id": 17, "text": "\n {\\color{Gray}{\n \\Bigl( \\frac{\\partial}{\\partial t}\n + \\mathbf{u} \\cdot \\boldsymbol{\\nabla} \\Bigr)\\,\n \\left( \\frac{\\partial\\Phi}{\\partial t} + \\tfrac12\\, |\\mathbf{u}|^2 + g\\, \\eta \\right) \n = 0}}\n" }, { "math_id": 18, "text": "\n {\\color{Gray}{\n \\Rightarrow \\quad\n \\frac{\\partial^2 \\Phi}{\\partial t^2}\n + g\\, \\frac{\\partial \\Phi}{\\partial z}\n + \\mathbf{u} \\cdot \\boldsymbol{\\nabla} \\frac{\\partial \\Phi}{\\partial t}\n + \\tfrac12\\, \\frac{\\partial}{\\partial t} \\left( |\\mathbf{u}|^2 \\right)\n + \\tfrac12\\, \\mathbf{u} \\cdot \\boldsymbol{\\nabla} \\left( |\\mathbf{u}|^2 \\right)\n = 0}}\n" }, { "math_id": 19, "text": "\\frac{\\partial\\Phi}{\\partial z} = 0 \\qquad \\text{ at } z = -h." 
}, { "math_id": 20, "text": "\n f(x,y,\\eta,t) = \n \\left[ f \\right]_0\n + \\eta\\, \\left[ \\frac{\\partial f}{\\partial z} \\right]_0\n + \\frac12\\, \\eta^2\\, \\left[ \\frac{\\partial^2 f}{\\partial z^2} \\right]_0\n + \\cdots\n" }, { "math_id": 21, "text": "\\begin{align}\n \\eta &= \\varepsilon\\, \\eta_1 + \\varepsilon^2\\, \\eta_2 + \\varepsilon^3\\, \\eta_3 + \\cdots ,\n \\\\\n \\Phi &= \\varepsilon\\, \\Phi_1 + \\varepsilon^2\\, \\Phi_2 + \\varepsilon^3\\, \\Phi_3 + \\cdots \n \\quad \\text{and}\n \\\\\n \\mathbf{u} &= \\varepsilon\\, \\mathbf{u}_1 + \\varepsilon^2\\, \\mathbf{u}_2 + \\varepsilon^3\\, \\mathbf{u}_3 + \\cdots .\n\\end{align}" }, { "math_id": 22, "text": "\\begin{align}\n & \\varepsilon\\, \n \\left\\{\n \\frac{\\partial^2 \\Phi_1}{\\partial t^2} \n + g\\, \\frac{\\partial \\Phi_1}{\\partial z}\n \\right\\}\n \\\\\n & + \\varepsilon^2\\, \n \\left\\{\n \\frac{\\partial^2 \\Phi_2}{\\partial t^2} \n + g\\, \\frac{\\partial \\Phi_2}{\\partial z}\n + \\eta_1\\, \\frac{\\partial}{\\partial z} \n \\left(\n \\frac{\\partial^2 \\Phi_1}{\\partial t^2} \n + g\\, \\frac{\\partial \\Phi_1}{\\partial z}\n \\right) \n + \\frac{\\partial}{\\partial t} \\left( |\\mathbf{u}_1|^2 \\right)\n \\right\\}\n \\\\\n & + \\varepsilon^3\\, \n \\left\\{\n \\frac{\\partial^2 \\Phi_3}{\\partial t^2} \n + g\\, \\frac{\\partial \\Phi_3}{\\partial z}\n + \\eta_1\\, \\frac{\\partial}{\\partial z} \n \\left(\n \\frac{\\partial^2 \\Phi_2}{\\partial t^2} \n + g\\, \\frac{\\partial \\Phi_2}{\\partial z}\n \\right) \n \\right.\n \\\\ & \\qquad \\quad \\left.\n + \\eta_2\\, \\frac{\\partial}{\\partial z} \n \\left(\n \\frac{\\partial^2 \\Phi_1}{\\partial t^2} \n + g\\, \\frac{\\partial \\Phi_1}{\\partial z}\n \\right) \n + 2\\, \\frac{\\partial}{\\partial t} \\left( \\mathbf{u}_1 \\cdot \\mathbf{u}_2 \\right)\n \\right.\n \\\\ & \\qquad \\quad \\left.\n + \\tfrac12\\, \\eta_1^2\\, \n \\frac{\\partial^2}{\\partial z^2} \n \\left(\n \\frac{\\partial^2 \\Phi_1}{\\partial t^2} \n + g\\, \\frac{\\partial \\Phi_1}{\\partial z}\n \\right)\n + \\eta_1\\, \\frac{\\partial^2}{\\partial t\\, \\partial z} \\left( |\\mathbf{u}_1|^2 \\right)\n + \\tfrac12\\, \\mathbf{u}_1 \\cdot \\boldsymbol{\\nabla} \\left( |\\mathbf{u}_1|^2 \\right)\n \\right\\}\n \\\\ &\n + \\mathcal{O}\\left( \\varepsilon^4 \\right)\n = 0, \n \\qquad \\text{at } z=0.\n\\end{align}" }, { "math_id": 23, "text": " \\eta(x,t) = \\eta(x-ct) \\quad \\text{and} \\quad \\mathbf{u}(x,z,t) = \\mathbf{u}(x-ct,z). " }, { "math_id": 24, "text": "\\Phi(x,z,t) = \\beta x - \\gamma t + \\varphi(x-ct,z)," }, { "math_id": 25, "text": " \\theta = k x - \\omega t = k \\left( x - c t \\right), " }, { "math_id": 26, "text": "\\eta = \\sum_{n=1}^{\\infty} A_n\\, \\cos\\, (n\\theta)." }, { "math_id": 27, "text": " \\Phi = \\beta x - \\gamma t + \\sum_{n=1}^\\infty B_n\\, \\biggl[ \\cosh\\, \\left( nk\\, (z+h) \\right) \\biggr]\\, \\sin\\, (n\\theta)," }, { "math_id": 28, "text": "\n \\frac{\\partial f}{\\partial x} = +k\\, \\frac{\\partial f}{\\partial \\theta} \n \\qquad \\text{and} \\qquad \n \\frac{\\partial f}{\\partial t} = -\\omega\\, \\frac{\\partial f}{\\partial \\theta}.\n" }, { "math_id": 29, "text": " \\omega = \\omega_0 + \\varepsilon\\, \\omega_1 + \\varepsilon^2\\, \\omega_2 + \\cdots." } ]
https://en.wikipedia.org/wiki?curid=15440535
1544327
Limiting reagent
Reactant introduced in deficit, totally consumed, and stopping the chemical reaction The limiting reagent (or limiting reactant or limiting agent) in a chemical reaction is a reactant that is totally consumed when the chemical reaction is completed. The amount of product formed is limited by this reagent, since the reaction cannot continue without it. If one or more other reagents are present in excess of the quantities required to react with the limiting reagent, they are described as "excess reagents" or "excess reactants" (sometimes abbreviated as "xs"), or to be in "abundance". The limiting reagent must be identified in order to calculate the percentage yield of a reaction, since the theoretical yield is defined as the amount of product obtained when the limiting reagent reacts completely. Given the balanced chemical equation, which describes the reaction, there are several equivalent ways to identify the limiting reagent and evaluate the excess quantities of other reagents. Method 1: Comparison of reactant amounts. This method is most useful when there are only two reactants. One reactant (A) is chosen, and the balanced chemical equation is used to determine the amount of the other reactant (B) necessary to react with A. If the amount of B actually present exceeds the amount required, then B is in excess and A is the limiting reagent. If the amount of B present is less than required, then B is the limiting reagent. Example for two reactants. Consider the combustion of benzene, represented by the following chemical equation: &lt;chem&gt;2 C6H6(l) + 15 O2(g) -&gt; 12 CO2(g) + 6 H2O(l)&lt;/chem&gt; This means that 15 moles of molecular oxygen (O2) are required to react with 2 moles of benzene (C6H6). The amount of oxygen required for other quantities of benzene can be calculated using cross-multiplication (the rule of three). For example, if 1.5 mol C6H6 is present, 11.25 mol O2 is required: formula_0 If in fact 18 mol O2 are present, there will be an excess of (18 - 11.25) = 6.75 mol of unreacted oxygen when all the benzene is consumed. Benzene is then the limiting reagent. This conclusion can be verified by comparing the mole ratio of O2 and C6H6 required by the balanced equation, formula_1 with the mole ratio actually present: formula_2 Since the actual ratio is larger than required, O2 is the reagent in excess, which confirms that benzene is the limiting reagent. Method 2: Comparison of product amounts which can be formed from each reactant. In this method the chemical equation is used to calculate the amount of one product which can be formed from each reactant in the amount present. The limiting reactant is the one which can form the smallest amount of the product considered. This method can be extended to any number of reactants more easily than the first method. Example. 20.0 g of iron(III) oxide (Fe2O3) are reacted with 8.00 g aluminium (Al) in the following thermite reaction: &lt;chem&gt;Fe2O3(s) + 2 Al(s) -&gt; 2 Fe(l) + Al2O3(s)&lt;/chem&gt; Since the reactant amounts are given in grams, they must first be converted into moles for comparison with the chemical equation, in order to determine how many moles of Fe can be produced from either reactant: formula_3 formula_4 formula_5 formula_6 There is enough Al to produce 0.297 mol Fe, but only enough Fe2O3 to produce 0.250 mol Fe. This means that the amount of Fe actually produced is limited by the Fe2O3 present, which is therefore the limiting reagent. Shortcut.
It can be seen from the example above that the amount of product (Fe) formed from each reagent X (Fe2O3 or Al) is proportional to the quantity formula_7. This suggests a shortcut that works for any number of reagents: calculate this quantity for each reagent, and the reagent with the lowest value is the limiting reagent. Applying this shortcut to the thermite example above gives 0.125/1 = 0.125 for Fe2O3 and 0.297/2 ≈ 0.148 for Al, which again identifies Fe2O3 as the limiting reagent; a short computational sketch of the same procedure is given below. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
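As an illustration of the shortcut described above, here is a minimal Python sketch; the function and variable names are chosen here for illustration only, and the molar masses are the ones used in the thermite example.

```python
# Illustrative sketch of the shortcut: for each reagent, divide the number of
# moles present by its stoichiometric coefficient; the reagent with the
# smallest value is the limiting reagent.

def find_limiting_reagent(amounts_g, molar_masses, coefficients):
    """Return the limiting reagent and the moles/coefficient ratio of each reagent.

    amounts_g     -- mass of each reagent in grams
    molar_masses  -- molar mass of each reagent in g/mol
    coefficients  -- stoichiometric coefficient of each reagent
    """
    ratios = {}
    for name in amounts_g:
        moles = amounts_g[name] / molar_masses[name]
        ratios[name] = moles / coefficients[name]
    limiting = min(ratios, key=ratios.get)
    return limiting, ratios

# Thermite example from the article: Fe2O3(s) + 2 Al(s) -> 2 Fe(l) + Al2O3(s)
amounts_g = {"Fe2O3": 20.0, "Al": 8.00}
molar_masses = {"Fe2O3": 159.7, "Al": 26.98}   # g/mol, as used in the article
coefficients = {"Fe2O3": 1, "Al": 2}

limiting, ratios = find_limiting_reagent(amounts_g, molar_masses, coefficients)
print(ratios)     # {'Fe2O3': ~0.125, 'Al': ~0.148}
print(limiting)   # 'Fe2O3', in agreement with Method 2 above
```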
[ { "math_id": 0, "text": " 1.5 \\ \\ce{mol\\,C6H6} \\times \\frac{15 \\ \\ce{mol\\,O2}}{2 \\ \\ce{mol\\,C6H6}} = 11.25 \\ \\ce{mol\\,O2}" }, { "math_id": 1, "text": "\\frac{\\ce{mol\\,O2}}{\\ce{mol\\,C6H6}} = \\frac{15 \\ \\ce{mol\\,O2}}{2 \\ \\ce{mol\\,C6H6}}=7.5 \\ \\ce{mol\\,O2}" }, { "math_id": 2, "text": "\\frac{\\ce{mol\\,O2}}{\\ce{mol\\,C6H6}} = \\frac{18 \\ \\ce{mol\\,O2}}{1.5 \\ \\ce{mol\\,C6H6}}=12 \\ \\ce{mol\\,O2}" }, { "math_id": 3, "text": "\\begin{align}\n \\ce{mol~Fe2O3} &= \\frac{\\ce{grams~Fe2O3}}{\\ce{g/mol~Fe2O3}}\\\\\n &= \\frac{20.0~\\ce g}{159.7~\\ce{g/mol}} = 0.125~\\ce{mol}\n\\end{align}" }, { "math_id": 4, "text": " \\ce{mol~Fe} = 0.125 \\ \\ce{mol~Fe2O3} \\times \\frac{\\ce{2~mol~Fe}}{\\ce{1~mol~Fe2O3}} = 0.250~\\ce{mol~Fe}" }, { "math_id": 5, "text": "\\begin{align}\n \\ce{mol~Al} &= \\frac\\ce{grams~Al}\\ce{g/mol~Al}\\\\\n & = \\frac{8.00~\\ce g}{26.98~\\ce{g/mol}} = 0.297~\\ce{mol}\n\\end{align}" }, { "math_id": 6, "text": " \\ce{mol~Fe} = 0.297~\\ce{mol~Al} \\times \\frac\\ce{2~mol~Fe}\\ce{2~mol~Al} = 0.297~\\ce{mol~Fe}" }, { "math_id": 7, "text": "\\frac{\\mbox{Moles of Reagent X }}{\\mbox{Stoichiometric Coefficient of Reagent X}}" } ]
https://en.wikipedia.org/wiki?curid=1544327
15444460
Open-circuit time constant method
The open-circuit time constant (OCT) method is an approximate analysis technique used in electronic circuit design to determine the corner frequency of complex circuits. It is a special case of the zero-value time constant (ZVT) method, applying when the only reactive elements are capacitors. The ZVT method is itself a special case of the general Time- and Transfer Constant (TTC) analysis, which allows full evaluation of the zeros and poles of any lumped LTI system with both inductors and capacitors as reactive elements, using time constants and transfer constants. The OCT method provides a quick evaluation and identifies the largest contributions to the time constants, as a guide to circuit improvements. The basis of the method is the approximation that the corner frequency of the amplifier is determined by the term in the denominator of its transfer function that is linear in frequency. This approximation can be extremely inaccurate in cases where a zero in the numerator occurs nearby in frequency. If all the poles are real and there are no zeros, this approximation is always conservative, in the sense that the inverse of the sum of the zero-value time constants is less than the actual corner frequency of the circuit. The method also provides a simplified way of finding the term linear in frequency, based upon summing the RC products for each capacitor in the circuit, where the resistance R for a selected capacitor is the resistance found by inserting a test source at its site and setting all other capacitors to zero. Hence the name "zero-value time constant technique". Example: Simple RC network. Figure 1 shows a simple RC low-pass filter. Its transfer function is found using Kirchhoff's current law as follows. At the output, formula_0 where "V"1 is the voltage at the top of capacitor "C"1. At the center node: formula_1 Combining these relations, the transfer function is found to be: formula_2 The linear term in "j"ω in this transfer function can be derived by the open-circuit time constant procedure applied to this example: each capacitor is selected in turn, the other capacitor is set to zero (open-circuited), the resistance seen by the selected capacitor is found, and the resulting RC products are summed. In effect, it is as though each capacitor charges and discharges through the resistance found in the circuit when the other capacitor is an open circuit. The open-circuit time constant procedure provides the linear term in "j"ω regardless of how complex the RC network becomes. This was originally developed and proven by calculating the co-factors of the admittance matrix by Thornton and Searle. A more intuitive inductive proof of this (and other properties of TTC) was later developed by Hajimiri. For a complex circuit, the procedure consists of following the above rules, going through all the capacitors in the circuit. A more general derivation is found in Gray and Meyer. So far the result is general, but an approximation is introduced to make use of this result: the assumption is made that this linear term in "j"ω determines the corner frequency of the circuit. That assumption can be examined more closely using the example of Figure 1: suppose the time constants of this circuit are τ1 and τ2; that is: formula_3 Comparing the coefficients of the linear and quadratic terms in "j"ω, there results: formula_4 formula_5 One of the two time constants will be the longer; let it be τ1. Suppose for the moment that it is much larger than the other, τ1 » τ2.
In this case, the approximations hold that: formula_6 and formula_7 In other words, substituting the RC-values: formula_8 and formula_9 where the caret ( ^ ) denotes the approximate result. As an aside, notice that each circuit time constant involves both capacitors; in other words, in general neither circuit time constant is decided by any single capacitor. Using these results, it is easy to explore how well the corner frequency (the 3 dB frequency) is given by formula_10 as the parameters vary. Also, the exact transfer function can be compared with the approximate one, that is, formula_11 with formula_13 Of course, agreement is good when the assumption τ1 » τ2 is accurate. Figure 2 illustrates the approximation. The x-axis is the ratio τ1 / τ2 on a logarithmic scale. An increase in this variable means the higher pole is further above the corner frequency. The y-axis is the ratio of the OCTC (open-circuit time constant) estimate to the true time constant. For the lowest pole, which sets the corner frequency, use curve T_1; for the higher pole use curve T_2. The worst agreement is for τ1 = τ2. In this case τ^1 = 2 τ1 and the corner frequency is a factor of 2 too small. The higher pole is a factor of 2 too high (its time constant is half of the real value). In all cases, the estimated corner frequency is within a factor of two of the real one, and it is always "conservative", that is, lower than the real corner, so the actual circuit will behave better than predicted. However, the higher-pole estimate is always "optimistic", that is, it predicts the high pole at a higher frequency than really is the case. To use these estimates for step response predictions, which depend upon the ratio of the two pole frequencies (see the article on pole splitting for an example), Figure 2 suggests a fairly large ratio of τ1 / τ2 is needed for accuracy, because the errors in τ^1 and τ^2 reinforce each other in the ratio τ^1 / τ^2. The open-circuit time constant method focuses upon the corner frequency alone, but as seen above, estimates for higher poles also are possible. Application of the open-circuit time constant method to a number of single-transistor amplifier stages can be found in Pittet and Kandaswamy.
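As a numerical check of the approximation discussed above, the following sketch compares the exact time constants and 3 dB frequency of the two-capacitor circuit of Figure 1 with the open-circuit time constant estimates. The component values are arbitrary choices for illustration and are not taken from the text; real poles are assumed.

```python
# Numerical sketch of the OCTC estimate for the two-capacitor RC low-pass
# filter of Figure 1, using arbitrary component values chosen for illustration.
import math

R1, R2 = 10e3, 10e3          # ohms
C1, C2 = 1e-9, 10e-9         # farads

# Coefficients of the denominator 1 + b1*(j*w) + b2*(j*w)^2
b1 = C2 * (R1 + R2) + C1 * R1          # sum of the open-circuit time constants
b2 = C1 * C2 * R1 * R2                 # product of the exact time constants

# Exact time constants: tau1 + tau2 = b1 and tau1 * tau2 = b2, so they are
# the roots of tau^2 - b1*tau + b2 = 0 (real for these component values).
disc = math.sqrt(b1**2 - 4 * b2)
tau1, tau2 = (b1 + disc) / 2, (b1 - disc) / 2   # tau1 is the longer one

# OCTC estimates discussed above
tau1_hat = b1
tau2_hat = b2 / b1

# Exact 3 dB frequency, found by bisection on |H(j*w)|^2 = 1/2, with
# |H(j*w)|^2 = 1 / ((1 - w^2*b2)^2 + (w*b1)^2).
def mag2(w):
    return 1.0 / ((1 - w**2 * b2) ** 2 + (w * b1) ** 2)

lo, hi = 0.0, 10.0 / tau2    # upper bracket, well above the corner
for _ in range(100):
    mid = 0.5 * (lo + hi)
    lo, hi = (mid, hi) if mag2(mid) > 0.5 else (lo, mid)
f3dB_exact = lo / (2 * math.pi)
f3dB_octc = 1 / (2 * math.pi * tau1_hat)

print(tau1, tau1_hat)          # estimated time constant is larger (conservative corner)
print(tau2, tau2_hat)          # estimated time constant is smaller (optimistic higher pole)
print(f3dB_exact, f3dB_octc)   # OCTC corner estimate lies below the exact value
```

For these values the ratio τ1 / τ2 is large, so the OCTC corner estimate falls only slightly below the exact 3 dB frequency, consistent with the conservatism noted above.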
[ { "math_id": 0, "text": " \\frac {V_1 - V_O } {R_2} = j\\omega C_2 V_O \\ , " }, { "math_id": 1, "text": " \\frac {V_S-V_1}{R_1} = j \\omega C_1 V_1 + \\frac {V_1-V_O} {R_2} \\ . " }, { "math_id": 2, "text": " \\frac {V_O} {V_S} = \\frac {1} {1 + j \\omega \\left(C_2 (R_1+R_2) +C_1 R_1 \\right) +(j \\omega )^2 C_1 C_2 R_1 R_2 } " }, { "math_id": 3, "text": " \\left( 1 + j \\omega { \\tau}_1) (1 +j \\omega {\\tau}_2 \\right) = 1 + j \\omega \\left(C_2 (R_1+R_2) +C_1 R_1 \\right) +(j \\omega )^2 C_1 C_2 R_1 R_2 " }, { "math_id": 4, "text": " \\tau_1 + \\tau_2 = C_2 (R_1+R_2) +C_1 R_1 \\ , " }, { "math_id": 5, "text": " \\tau_1 \\tau_2 = C_1 C_2 R_1 R_2 \\ . " }, { "math_id": 6, "text": " \\tau_1 + \\tau_2 \\approx \\tau_1 \\ , " }, { "math_id": 7, "text": " \\tau_2 = \\frac {\\tau_1 \\tau_2 } { \\tau_1} \\approx \\frac {\\tau_1 \\tau_2 } { \\tau_1 + \\tau_2 } \\ . " }, { "math_id": 8, "text": " \\tau_1 \\approx \\hat { \\tau_1} =\\ \\tau_1 + \\tau_2 = C_2 (R_1+R_2) +C_1 R_1 \\ , " }, { "math_id": 9, "text": " \\tau_2 \\approx \\hat { \\tau_2} =\\frac {\\tau_1 \\tau_2 } { \\tau_1 + \\tau_2 } = \\frac {C_1 C_2 R_1 R_2} {C_2 (R_1+R_2) +C_1 R_1} \\ , " }, { "math_id": 10, "text": " f_{3dB} = \\frac {1} {2 \\pi \\hat{ \\tau_1 }} \\ , " }, { "math_id": 11, "text": " \\frac {1} {(1+j \\omega \\tau_1)(1 + j \\omega \\tau_2)}\\ " }, { "math_id": 12, "text": " \\ " }, { "math_id": 13, "text": "\\ \\frac {1} {(1+j \\omega \\hat {\\tau_1 })(1 + j \\omega \\hat { \\tau_2} )} \\ . " } ]
https://en.wikipedia.org/wiki?curid=15444460
15445
Entropy (information theory)
Expected amount of information needed to specify the output of a stochastic data source In information theory, the entropy of a random variable quantifies the average level of uncertainty or information associated with the variable's potential states or possible outcomes. This measures the expected amount of information needed to describe the state of the variable, considering the distribution of probabilities across all potential states. Given a discrete random variable formula_0, which takes values in the set formula_1 and is distributed according to formula_2, the entropy is formula_3 where formula_4 denotes the sum over the variable's possible values. The choice of base for formula_5, the logarithm, varies for different applications. Base 2 gives the unit of bits (or "shannons"), while base "e" gives "natural units" nat, and base 10 gives units of "dits", "bans", or "hartleys". An equivalent definition of entropy is the expected value of the self-information of a variable. The concept of information entropy was introduced by Claude Shannon in his 1948 paper "A Mathematical Theory of Communication", and is also referred to as Shannon entropy. Shannon's theory defines a data communication system composed of three elements: a source of data, a communication channel, and a receiver. The "fundamental problem of communication" – as expressed by Shannon – is for the receiver to be able to identify what data was generated by the source, based on the signal it receives through the channel. Shannon considered various ways to encode, compress, and transmit messages from a data source, and proved in his source coding theorem that the entropy represents an absolute mathematical limit on how well data from the source can be losslessly compressed onto a perfectly noiseless channel. Shannon strengthened this result considerably for noisy channels in his noisy-channel coding theorem. Entropy in information theory is directly analogous to the entropy in statistical thermodynamics. The analogy results when the values of the random variable designate energies of microstates, so Gibbs's formula for the entropy is formally identical to Shannon's formula. Entropy has relevance to other areas of mathematics such as combinatorics and machine learning. The definition can be derived from a set of axioms establishing that entropy should be a measure of how informative the average outcome of a variable is. For a continuous random variable, differential entropy is analogous to entropy. The definition formula_6 generalizes the above. Introduction. The core idea of information theory is that the "informational value" of a communicated message depends on the degree to which the content of the message is surprising. If a highly likely event occurs, the message carries very little information. On the other hand, if a highly unlikely event occurs, the message is much more informative. For instance, the knowledge that some particular number "will not" be the winning number of a lottery provides very little information, because any particular chosen number will almost certainly not win. However, knowledge that a particular number "will" win a lottery has high informational value because it communicates the occurrence of a very low probability event. The "information content," also called the "surprisal" or "self-information," of an event formula_7 is a function which increases as the probability formula_8 of an event decreases. 
When formula_8 is close to 1, the surprisal of the event is low, but if formula_8 is close to 0, the surprisal of the event is high. This relationship is described by the function formula_9 where formula_5 is the logarithm, which gives 0 surprise when the probability of the event is 1. In fact, log is the only function that satisfies а specific set of conditions defined in section "". Hence, we can define the information, or surprisal, of an event formula_7 by formula_10 or equivalently, formula_11 Entropy measures the expected (i.e., average) amount of information conveyed by identifying the outcome of a random trial. This implies that rolling a die has higher entropy than tossing a coin because each outcome of a die toss has smaller probability (formula_12) than each outcome of a coin toss (formula_13). Consider a coin with probability "p" of landing on heads and probability 1 − "p" of landing on tails. The maximum surprise is when "p" 1/2, for which one outcome is not expected over the other. In this case a coin flip has an entropy of one bit. (Similarly, one trit with equiprobable values contains formula_14 (about 1.58496) bits of information because it can have one of three values.) The minimum surprise is when "p" 0 or "p" 1, when the event outcome is known ahead of time, and the entropy is zero bits. When the entropy is zero bits, this is sometimes referred to as unity, where there is no uncertainty at all – no freedom of choice – no information. Other values of "p" give entropies between zero and one bits. Example. Information theory is useful to calculate the smallest amount of information required to convey a message, as in data compression. For example, consider the transmission of sequences comprising the 4 characters 'A', 'B', 'C', and 'D' over a binary channel. If all 4 letters are equally likely (25%), one cannot do better than using two bits to encode each letter. 'A' might code as '00', 'B' as '01', 'C' as '10', and 'D' as '11'. However, if the probabilities of each letter are unequal, say 'A' occurs with 70% probability, 'B' with 26%, and 'C' and 'D' with 2% each, one could assign variable length codes. In this case, 'A' would be coded as '0', 'B' as '10', 'C' as '110', and 'D' as '111'. With this representation, 70% of the time only one bit needs to be sent, 26% of the time two bits, and only 4% of the time 3 bits. On average, fewer than 2 bits are required since the entropy is lower (owing to the high prevalence of 'A' followed by 'B' – together 96% of characters). The calculation of the sum of probability-weighted log probabilities measures and captures this effect. English text, treated as a string of characters, has fairly low entropy; i.e. it is fairly predictable. We can be fairly certain that, for example, 'e' will be far more common than 'z', that the combination 'qu' will be much more common than any other combination with a 'q' in it, and that the combination 'th' will be more common than 'z', 'q', or 'qu'. After the first few letters one can often guess the rest of the word. English text has between 0.6 and 1.3 bits of entropy per character of the message. Definition. Named after Boltzmann's Η-theorem, Shannon defined the entropy Η (Greek capital letter eta) of a discrete random variable formula_15, which takes values in the set formula_1 and is distributed according to formula_16 such that formula_17: formula_18 Here formula_19 is the expected value operator, and I is the information content of "X". formula_20 is itself a random variable. 
The entropy can explicitly be written as: formula_21 where "b" is the base of the logarithm used. Common values of "b" are 2, Euler's number "e", and 10, and the corresponding units of entropy are the bits for "b" 2, nats for "b" "e", and bans for "b" 10. In the case of formula_22 for some formula_23, the value of the corresponding summand 0 log"b"(0) is taken to be 0, which is consistent with the limit: formula_24 One may also define the conditional entropy of two variables formula_0 and formula_25 taking values from sets formula_1 and formula_26 respectively, as: formula_27 where formula_28 and formula_29. This quantity should be understood as the remaining randomness in the random variable formula_0 given the random variable formula_25. Measure theory. Entropy can be formally defined in the language of measure theory as follows: Let formula_30 be a probability space. Let formula_31 be an event. The surprisal of formula_32 is formula_33 The "expected" surprisal of formula_32 is formula_34 A formula_35-almost partition is a set family formula_36 such that formula_37 and formula_38 for all distinct formula_39. (This is a relaxation of the usual conditions for a partition.) The entropy of formula_40 is formula_41 Let formula_42 be a sigma-algebra on formula_0. The entropy of formula_42 is formula_43 Finally, the entropy of the probability space is formula_44, that is, the entropy with respect to formula_35 of the sigma-algebra of "all" measurable subsets of formula_0. Example. Consider tossing a coin with known, not necessarily fair, probabilities of coming up heads or tails; this can be modelled as a Bernoulli process. The entropy of the unknown result of the next toss of the coin is maximized if the coin is fair (that is, if heads and tails both have equal probability 1/2). This is the situation of maximum uncertainty as it is most difficult to predict the outcome of the next toss; the result of each toss of the coin delivers one full bit of information. This is because formula_45 However, if we know the coin is not fair, but comes up heads or tails with probabilities "p" and "q", where "p" ≠ "q", then there is less uncertainty. Every time it is tossed, one side is more likely to come up than the other. The reduced uncertainty is quantified in a lower entropy: on average each toss of the coin delivers less than one full bit of information. For example, if "p" = 0.7, then formula_46 Uniform probability yields maximum uncertainty and therefore maximum entropy. Entropy, then, can only decrease from the value associated with uniform probability. The extreme case is that of a double-headed coin that never comes up tails, or a double-tailed coin that never results in a head. Then there is no uncertainty. The entropy is zero: each toss of the coin delivers no new information as the outcome of each coin toss is always certain. Characterization. To understand the meaning of −Σ "p""i" log("p""i"), first define an information function I in terms of an event "i" with probability "p""i". The amount of information acquired due to the observation of event "i" follows from Shannon's solution of the fundamental properties of information: 0: events that always occur do not communicate information. I("p"1) + I("p"2): the information learned from independent events is the sum of the information learned from each event. 
Given two independent events, if the first event can yield one of "n" equiprobable outcomes and another has one of "m" equiprobable outcomes then there are "mn" equiprobable outcomes of the joint event. This means that if log2("n") bits are needed to encode the first value and log2("m") to encode the second, one needs log2("mn") log2("m") + log2("n") to encode both. Shannon discovered that a suitable choice of formula_47 is given by: formula_48 In fact, the only possible values of formula_47 are formula_49 for formula_50. Additionally, choosing a value for "k" is equivalent to choosing a value formula_51 for formula_52, so that "x" corresponds to the base for the logarithm. Thus, entropy is characterized by the above four properties. The different units of information (bits for the binary logarithm log2, nats for the natural logarithm ln, bans for the decimal logarithm log10 and so on) are constant multiples of each other. For instance, in case of a fair coin toss, heads provides log2(2) 1 bit of information, which is approximately 0.693 nats or 0.301 decimal digits. Because of additivity, "n" tosses provide "n" bits of information, which is approximately 0.693"n" nats or 0.301"n" decimal digits. The "meaning" of the events observed (the meaning of "messages") does not matter in the definition of entropy. Entropy only takes into account the probability of observing a specific event, so the information it encapsulates is information about the underlying probability distribution, not the meaning of the events themselves. Alternative characterization. Another characterization of entropy uses the following properties. We denote "p""i" Pr("X" "x""i") and Η"n"("p"1, ..., "p""n") Η("X"). Discussion. The rule of additivity has the following consequences: for positive integers "b""i" where "b"1 + ... + "b""k" "n", formula_59 Choosing "k" "n", "b"1 "b""n" 1 this implies that the entropy of a certain outcome is zero: Η1(1) 0. This implies that the efficiency of a source set with "n" symbols can be defined simply as being equal to its "n"-ary entropy. See also Redundancy (information theory). The characterization here imposes an additive property with respect to a partition of a set. Meanwhile, the conditional probability is defined in terms of a multiplicative property, formula_60. Observe that a logarithm mediates between these two operations. The conditional entropy and related quantities inherit simple relation, in turn. The measure theoretic definition in the previous section defined the entropy as a sum over expected surprisals formula_61 for an extremal partition. Here the logarithm is ad hoc and the entropy is not a measure in itself. At least in the information theory of a binary string, formula_62 lends itself to practical interpretations. Motivated by such relations, a plethora of related and competing quantities have been defined. For example, David Ellerman's analysis of a "logic of partitions" defines a competing measure in structures dual to that of subsets of a universal set. Information is quantified as "dits" (distinctions), a measure on partitions. "Dits" can be converted into Shannon's bits, to get the formulas for conditional entropy, and so on. Alternative characterization via additivity and subadditivity. Another succinct axiomatic characterization of Shannon entropy was given by Aczél, Forte and Ng, via the following properties: Discussion. 
It was shown that any function formula_70 satisfying the above properties must be a constant multiple of Shannon entropy, with a non-negative constant. Compared to the previously mentioned characterizations of entropy, this characterization focuses on the properties of entropy as a function of random variables (subadditivity and additivity), rather than the properties of entropy as a function of the probability vector formula_71. It is worth noting that if we drop the "small for small probabilities" property, then formula_70 must be a non-negative linear combination of the Shannon entropy and the Hartley entropy. Further properties. The Shannon entropy satisfies the following properties, for some of which it is useful to interpret entropy as the expected amount of information learned (or uncertainty eliminated) by revealing the value of a random variable "X": formula_72. formula_73. formula_74 formula_79 so formula_80, the entropy of a variable can only decrease when the latter is passed through a function. formula_81 formula_82. formula_86 for all probability mass functions formula_87 and formula_88. * Accordingly, the negative entropy (negentropy) function is convex, and its convex conjugate is LogSumExp. Aspects. Relationship to thermodynamic entropy. The inspiration for adopting the word "entropy" in information theory came from the close resemblance between Shannon's formula and very similar known formulae from statistical mechanics. In statistical thermodynamics the most general formula for the thermodynamic entropy "S" of a thermodynamic system is the Gibbs entropy formula_89 where "k"B is the Boltzmann constant, and "p""i" is the probability of a microstate. The Gibbs entropy was defined by J. Willard Gibbs in 1878 after earlier work by Boltzmann (1872). The Gibbs entropy translates over almost unchanged into the world of quantum physics to give the von Neumann entropy introduced by John von Neumann in 1927: formula_90 where ρ is the density matrix of the quantum mechanical system and Tr is the trace. At an everyday practical level, the links between information entropy and thermodynamic entropy are not evident. Physicists and chemists are apt to be more interested in "changes" in entropy as a system spontaneously evolves away from its initial conditions, in accordance with the second law of thermodynamics, rather than an unchanging probability distribution. As the minuteness of the Boltzmann constant "k"B indicates, the changes in "S" / "k"B for even tiny amounts of substances in chemical and physical processes represent amounts of entropy that are extremely large compared to anything in data compression or signal processing. In classical thermodynamics, entropy is defined in terms of macroscopic measurements and makes no reference to any probability distribution, which is central to the definition of information entropy. The connection between thermodynamics and what is now known as information theory was first made by Ludwig Boltzmann and expressed by his equation: formula_91 where formula_92 is the thermodynamic entropy of a particular macrostate (defined by thermodynamic parameters such as temperature, volume, energy, etc.), "W" is the number of microstates (various combinations of particles in various energy states) that can yield the given macrostate, and "k"B is the Boltzmann constant. It is assumed that each microstate is equally likely, so that the probability of a given microstate is "p""i" = 1/"W". 
When these probabilities are substituted into the above expression for the Gibbs entropy (or equivalently "k"B times the Shannon entropy), Boltzmann's equation results. In information theoretic terms, the information entropy of a system is the amount of "missing" information needed to determine a microstate, given the macrostate. In the view of Jaynes (1957), thermodynamic entropy, as explained by statistical mechanics, should be seen as an "application" of Shannon's information theory: the thermodynamic entropy is interpreted as being proportional to the amount of further Shannon information needed to define the detailed microscopic state of the system, that remains uncommunicated by a description solely in terms of the macroscopic variables of classical thermodynamics, with the constant of proportionality being just the Boltzmann constant. Adding heat to a system increases its thermodynamic entropy because it increases the number of possible microscopic states of the system that are consistent with the measurable values of its macroscopic variables, making any complete state description longer. (See article: "maximum entropy thermodynamics"). Maxwell's demon can (hypothetically) reduce the thermodynamic entropy of a system by using information about the states of individual molecules; but, as Landauer (from 1961) and co-workers have shown, to function the demon himself must increase thermodynamic entropy in the process, by at least the amount of Shannon information he proposes to first acquire and store; and so the total thermodynamic entropy does not decrease (which resolves the paradox). Landauer's principle imposes a lower bound on the amount of heat a computer must generate to process a given amount of information, though modern computers are far less efficient. Data compression. Shannon's definition of entropy, when applied to an information source, can determine the minimum channel capacity required to reliably transmit the source as encoded binary digits. Shannon's entropy measures the information contained in a message as opposed to the portion of the message that is determined (or predictable). Examples of the latter include redundancy in language structure or statistical properties relating to the occurrence frequencies of letter or word pairs, triplets etc. The minimum channel capacity can be realized in theory by using the typical set or in practice using Huffman, Lempel–Ziv or arithmetic coding. (See also Kolmogorov complexity.) In practice, compression algorithms deliberately include some judicious redundancy in the form of checksums to protect against errors. The entropy rate of a data source is the average number of bits per symbol needed to encode it. Shannon's experiments with human predictors show an information rate between 0.6 and 1.3 bits per character in English; the PPM compression algorithm can achieve a compression ratio of 1.5 bits per character in English text. If a compression scheme is lossless – one in which you can always recover the entire original message by decompression – then a compressed message has the same quantity of information as the original but communicated in fewer characters. It has more information (higher entropy) per character. A compressed message has less redundancy. 
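As a minimal sketch of entropy as a lower bound on average code length, the following computes the Shannon entropy of the 'A'/'B'/'C'/'D' distribution from the example earlier in the article and compares it with the average length of the variable-length code given there. Only the probabilities and code lengths come from the article; the function names are illustrative.

```python
# Shannon entropy of a symbol distribution, compared with the average length
# of the variable-length code from the A/B/C/D example earlier in the article.
import math

def shannon_entropy(probs, base=2):
    """Entropy H = -sum p*log_b(p), skipping zero-probability outcomes."""
    return -sum(p * math.log(p, base) for p in probs.values() if p > 0)

probs = {"A": 0.70, "B": 0.26, "C": 0.02, "D": 0.02}
code_lengths = {"A": 1, "B": 2, "C": 3, "D": 3}   # codes '0', '10', '110', '111'

H = shannon_entropy(probs)                                  # about 1.09 bits/symbol
avg_len = sum(probs[s] * code_lengths[s] for s in probs)    # 1.34 bits/symbol

# Consistent with the source coding theorem: H <= average code length < 2 bits
print(round(H, 3), avg_len)

# Fair coin and fair six-sided die, for comparison with the Introduction
print(shannon_entropy({"H": 0.5, "T": 0.5}))           # 1.0 bit
print(shannon_entropy({i: 1 / 6 for i in range(6)}))   # ~2.585 bits
```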
Shannon's source coding theorem states a lossless compression scheme cannot compress messages, on average, to have "more" than one bit of information per bit of message, but that any value "less" than one bit of information per bit of message can be attained by employing a suitable coding scheme. The entropy of a message per bit multiplied by the length of that message is a measure of how much total information the message contains. Shannon's theorem also implies that no lossless compression scheme can shorten "all" messages. If some messages come out shorter, at least one must come out longer due to the pigeonhole principle. In practical use, this is generally not a problem, because one is usually only interested in compressing certain types of messages, such as a document in English, as opposed to gibberish text, or digital photographs rather than noise, and it is unimportant if a compression algorithm makes some unlikely or uninteresting sequences larger. A 2011 study in "Science" estimates the world's technological capacity to store and communicate optimally compressed information normalized on the most effective compression algorithms available in the year 2007, therefore estimating the entropy of the technologically available sources. The authors estimate humankind technological capacity to store information (fully entropically compressed) in 1986 and again in 2007. They break the information into three categories—to store information on a medium, to receive information through one-way broadcast networks, or to exchange information through two-way telecommunications networks. Entropy as a measure of diversity. Entropy is one of several ways to measure biodiversity, and is applied in the form of the Shannon index. A diversity index is a quantitative statistical measure of how many different types exist in a dataset, such as species in a community, accounting for ecological richness, evenness, and dominance. Specifically, Shannon entropy is the logarithm of 1D, the true diversity index with parameter equal to 1. The Shannon index is related to the proportional abundances of types. Entropy of a sequence. There are a number of entropy-related concepts that mathematically quantify information content of a sequence or message: (The "rate of self-information" can also be defined for a particular sequence of messages or symbols generated by a given stochastic process: this will always be equal to the entropy rate in the case of a stationary process.) Other quantities of information are also used to compare or relate different sources of information. It is important not to confuse the above concepts. Often it is only clear from context which one is meant. For example, when someone says that the "entropy" of the English language is about 1 bit per character, they are actually modeling the English language as a stochastic process and talking about its entropy "rate". Shannon himself used the term in this way. If very large blocks are used, the estimate of per-character entropy rate may become artificially low because the probability distribution of the sequence is not known exactly; it is only an estimate. If one considers the text of every book ever published as a sequence, with each symbol being the text of a complete book, and if there are "N" published books, and each book is only published once, the estimate of the probability of each book is 1/"N", and the entropy (in bits) is −log2(1/"N") log2("N"). 
As a practical code, this corresponds to assigning each book a unique identifier and using it in place of the text of the book whenever one wants to refer to the book. This is enormously useful for talking about books, but it is not so useful for characterizing the information content of an individual book, or of language in general: it is not possible to reconstruct the book from its identifier without knowing the probability distribution, that is, the complete text of all the books. The key idea is that the complexity of the probabilistic model must be considered. Kolmogorov complexity is a theoretical generalization of this idea that allows the consideration of the information content of a sequence independent of any particular probability model; it considers the shortest program for a universal computer that outputs the sequence. A code that achieves the entropy rate of a sequence for a given model, plus the codebook (i.e. the probabilistic model), is one such program, but it may not be the shortest. The Fibonacci sequence is 1, 1, 2, 3, 5, 8, 13, ... treating the sequence as a message and each number as a symbol, there are almost as many symbols as there are characters in the message, giving an entropy of approximately log2("n"). The first 128 symbols of the Fibonacci sequence has an entropy of approximately 7 bits/symbol, but the sequence can be expressed using a formula [F("n") F("n"−1) + F("n"−2) for "n" 3, 4, 5, ..., F(1) 1, F(2) 1] and this formula has a much lower entropy and applies to any length of the Fibonacci sequence. Limitations of entropy in cryptography. In cryptanalysis, entropy is often roughly used as a measure of the unpredictability of a cryptographic key, though its real uncertainty is unmeasurable. For example, a 128-bit key that is uniformly and randomly generated has 128 bits of entropy. It also takes (on average) formula_93 guesses to break by brute force. Entropy fails to capture the number of guesses required if the possible keys are not chosen uniformly. Instead, a measure called "guesswork" can be used to measure the effort required for a brute force attack. Other problems may arise from non-uniform distributions used in cryptography. For example, a 1,000,000-digit binary one-time pad using exclusive or. If the pad has 1,000,000 bits of entropy, it is perfect. If the pad has 999,999 bits of entropy, evenly distributed (each individual bit of the pad having 0.999999 bits of entropy) it may provide good security. But if the pad has 999,999 bits of entropy, where the first bit is fixed and the remaining 999,999 bits are perfectly random, the first bit of the ciphertext will not be encrypted at all. Data as a Markov process. A common way to define entropy for text is based on the Markov model of text. For an order-0 source (each character is selected independent of the last characters), the binary entropy is: formula_94 where "p""i" is the probability of "i". For a first-order Markov source (one in which the probability of selecting a character is dependent only on the immediately preceding character), the entropy rate is: formula_95 where "i" is a state (certain preceding characters) and formula_96 is the probability of "j" given "i" as the previous character. For a second order Markov source, the entropy rate is formula_97 Efficiency (normalized entropy). A source set formula_1 with a non-uniform distribution will have less entropy than the same set with a uniform distribution (i.e. the "optimized alphabet"). 
This deficiency in entropy can be expressed as a ratio called efficiency: formula_98 Applying the basic properties of the logarithm, this quantity can also be expressed as: formula_99 Efficiency has utility in quantifying the effective use of a communication channel. This formulation is also referred to as the normalized entropy, as the entropy is divided by the maximum entropy formula_100. Furthermore, the efficiency is indifferent to choice of (positive) base "b", as indicated by the insensitivity within the final logarithm above thereto. Entropy for continuous random variables. Differential entropy. The Shannon entropy is restricted to random variables taking discrete values. The corresponding formula for a continuous random variable with probability density function "f"("x") with finite or infinite support formula_101 on the real line is defined by analogy, using the above form of the entropy as an expectation: formula_102 This is the differential entropy (or continuous entropy). A precursor of the continuous entropy "h"["f"] is the expression for the functional "Η" in the H-theorem of Boltzmann. Although the analogy between both functions is suggestive, the following question must be set: is the differential entropy a valid extension of the Shannon discrete entropy? Differential entropy lacks a number of properties that the Shannon discrete entropy has – it can even be negative – and corrections have been suggested, notably limiting density of discrete points. To answer this question, a connection must be established between the two functions: In order to obtain a generally finite measure as the bin size goes to zero. In the discrete case, the bin size is the (implicit) width of each of the "n" (finite or infinite) bins whose probabilities are denoted by "p""n". As the continuous domain is generalized, the width must be made explicit. To do this, start with a continuous function "f" discretized into bins of size formula_103. By the mean-value theorem there exists a value "x""i" in each bin such that formula_104 the integral of the function "f" can be approximated (in the Riemannian sense) by formula_105 where this limit and "bin size goes to zero" are equivalent. We will denote formula_106 and expanding the logarithm, we have formula_107 As Δ → 0, we have formula_108 Note; log(Δ) → −∞ as Δ → 0, requires a special definition of the differential or continuous entropy: formula_109 which is, as said before, referred to as the differential entropy. This means that the differential entropy "is not" a limit of the Shannon entropy for "n" → ∞. Rather, it differs from the limit of the Shannon entropy by an infinite offset (see also the article on information dimension). Limiting density of discrete points. It turns out as a result that, unlike the Shannon entropy, the differential entropy is "not" in general a good measure of uncertainty or information. For example, the differential entropy can be negative; also it is not invariant under continuous co-ordinate transformations. This problem may be illustrated by a change of units when "x" is a dimensioned variable. "f"("x") will then have the units of 1/"x". The argument of the logarithm must be dimensionless, otherwise it is improper, so that the differential entropy as given above will be improper. If "Δ" is some "standard" value of "x" (i.e. "bin size") and therefore has the same units, then a modified differential entropy may be written in proper form as: formula_110 and the result will be the same for any choice of units for "x". 
In fact, the limit of discrete entropy as formula_111 would also include a term of formula_112, which would in general be infinite. This is expected: continuous variables would typically have infinite entropy when discretized. The limiting density of discrete points is really a measure of how much easier a distribution is to describe than a distribution that is uniform over its quantization scheme. Relative entropy. Another useful measure of entropy that works equally well in the discrete and the continuous case is the relative entropy of a distribution. It is defined as the Kullback–Leibler divergence from the distribution to a reference measure "m" as follows. Assume that a probability distribution "p" is absolutely continuous with respect to a measure "m", i.e. is of the form "p"("dx") "f"("x")"m"("dx") for some non-negative "m"-integrable function "f" with "m"-integral 1, then the relative entropy can be defined as formula_113 In this form the relative entropy generalizes (up to change in sign) both the discrete entropy, where the measure "m" is the counting measure, and the differential entropy, where the measure "m" is the Lebesgue measure. If the measure "m" is itself a probability distribution, the relative entropy is non-negative, and zero if "p" "m" as measures. It is defined for any measure space, hence coordinate independent and invariant under co-ordinate reparameterizations if one properly takes into account the transformation of the measure "m". The relative entropy, and (implicitly) entropy and differential entropy, do depend on the "reference" measure "m". Use in number theory. Terence Tao used entropy to make a useful connection trying to solve the Erdős discrepancy problem. Intuitively the idea behind the proof was if there is low information in terms of the Shannon entropy between consecutive random variables (here the random variable is defined using the Liouville function (which is a useful mathematical function for studying distribution of primes) "X""H" formula_114. And in an interval [n, n+H] the sum over that interval could become arbitrary large. For example, a sequence of +1's (which are values of "X""H"' could take) have trivially low entropy and their sum would become big. But the key insight was showing a reduction in entropy by non negligible amounts as one expands H leading inturn to unbounded growth of a mathematical object over this random variable is equivalent to showing the unbounded growth per the Erdős discrepancy problem. The proof is quite involved and it brought together breakthroughs not just in novel use of Shannon Entropy, but also its used the Liouville function along with averages of modulated multiplicative functions in short intervals. Proving it also broke the "parity barrier" for this specific problem. While the use of Shannon Entropy in the proof is novel it is likely to open new research in this direction. Use in combinatorics. Entropy has become a useful quantity in combinatorics. Loomis–Whitney inequality. 
A simple example of this is an alternative proof of the Loomis–Whitney inequality: for every subset "A" ⊆ Z"d", we have formula_115 where "P""i" is the orthogonal projection in the "i"th coordinate: formula_116 The proof follows as a simple corollary of Shearer's inequality: if "X"1, ..., "X""d" are random variables and "S"1, ..., "S""n" are subsets of {1, ..., "d"} such that every integer between 1 and "d" lies in exactly "r" of these subsets, then formula_117 where formula_118 is the Cartesian product of random variables "X""j" with indexes "j" in "S""i" (so the dimension of this vector is equal to the size of "S""i"). We sketch how Loomis–Whitney follows from this: Indeed, let "X" be a uniformly distributed random variable with values in "A" and so that each point in "A" occurs with equal probability. Then (by the further properties of entropy mentioned above) Η("X") log|"A"|, where denotes the cardinality of "A". Let "S""i" {1, 2, ..., "i"−1, "i"+1, ..., "d"}. The range of formula_119 is contained in "P""i"("A") and hence formula_120. Now use this to bound the right side of Shearer's inequality and exponentiate the opposite sides of the resulting inequality you obtain. Approximation to binomial coefficient. For integers 0 &lt; "k" &lt; "n" let "q" "k"/"n". Then formula_121 where formula_122 A nice interpretation of this is that the number of binary strings of length "n" with exactly "k" many 1's is approximately formula_123. Use in machine learning. Machine learning techniques arise largely from statistics and also information theory. In general, entropy is a measure of uncertainty and the objective of machine learning is to minimize uncertainty. Decision tree learning algorithms use relative entropy to determine the decision rules that govern the data at each node. The information gain in decision trees formula_124, which is equal to the difference between the entropy of formula_25 and the conditional entropy of formula_25 given formula_0, quantifies the expected information, or the reduction in entropy, from additionally knowing the value of an attribute formula_0. The information gain is used to identify which attributes of the dataset provide the most information and should be used to split the nodes of the tree optimally. Bayesian inference models often apply the principle of maximum entropy to obtain prior probability distributions. The idea is that the distribution that best represents the current state of knowledge of a system is the one with the largest entropy, and is therefore suitable to be the prior. Classification in machine learning performed by logistic regression or artificial neural networks often employs a standard loss function, called cross-entropy loss, that minimizes the average cross entropy between ground truth and predicted distributions. In general, cross entropy is a measure of the differences between two datasets similar to the KL divergence (also known as relative entropy). References. &lt;templatestyles src="Reflist/styles.css" /&gt; "This article incorporates material from Shannon's entropy on PlanetMath, which is licensed under the ."
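As a small illustration of the information gain formula_124 mentioned in the machine-learning discussion above, the following sketch computes H(Y) − H(Y|X) on a tiny made-up dataset; the dataset and all names are hypothetical and serve only to show the calculation.

```python
# Sketch of the information gain IG(Y, X) = H(Y) - H(Y | X) used in decision
# tree learning. The tiny dataset below is made up purely for illustration.
import math
from collections import Counter

def entropy(labels):
    """Shannon entropy (base 2) of an empirical label distribution."""
    counts = Counter(labels)
    total = len(labels)
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

def information_gain(xs, ys):
    """IG(Y, X): entropy of Y minus the conditional entropy of Y given X."""
    total = len(ys)
    h_y = entropy(ys)
    h_y_given_x = 0.0
    for x_value in set(xs):
        subset = [y for x, y in zip(xs, ys) if x == x_value]
        h_y_given_x += (len(subset) / total) * entropy(subset)
    return h_y - h_y_given_x

# Hypothetical attribute/label pairs
attribute = ["sunny", "sunny", "rain", "rain", "rain", "sunny"]
label     = ["no",    "no",    "yes",  "yes",  "no",   "no"]

print(entropy(label))                       # ~0.918 bits
print(information_gain(attribute, label))   # ~0.459 bits of entropy removed
```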
[ { "math_id": 0, "text": "X" }, { "math_id": 1, "text": "\\mathcal{X}" }, { "math_id": 2, "text": "p\\colon \\mathcal{X}\\to[0, 1]" }, { "math_id": 3, "text": "\\Eta(X) := -\\sum_{x \\in \\mathcal{X}} p(x) \\log p(x)," }, { "math_id": 4, "text": "\\Sigma" }, { "math_id": 5, "text": "\\log" }, { "math_id": 6, "text": "\\mathbb{E}[-\\log p(X)] " }, { "math_id": 7, "text": "E" }, { "math_id": 8, "text": "p(E)" }, { "math_id": 9, "text": "\\log\\left(\\frac{1}{p(E)}\\right) ," }, { "math_id": 10, "text": "I(E) = -\\log_2(p(E)) ," }, { "math_id": 11, "text": "I(E) = \\log_2\\left(\\frac{1}{p(E)}\\right) ." }, { "math_id": 12, "text": "p=1/6" }, { "math_id": 13, "text": "p=1/2" }, { "math_id": 14, "text": "\\log_2 3" }, { "math_id": 15, "text": "X" }, { "math_id": 16, "text": "p: \\mathcal{X} \\to [0, 1]" }, { "math_id": 17, "text": "p(x) := \\mathbb{P}[X = x]" }, { "math_id": 18, "text": "\\Eta(X) = \\mathbb{E}[\\operatorname{I}(X)] = \\mathbb{E}[-\\log p(X)]." }, { "math_id": 19, "text": "\\mathbb{E}" }, { "math_id": 20, "text": "\\operatorname{I}(X)" }, { "math_id": 21, "text": "\\Eta(X) = -\\sum_{x \\in \\mathcal{X}} p(x)\\log_b p(x) ," }, { "math_id": 22, "text": "p(x) = 0" }, { "math_id": 23, "text": "x \\in \\mathcal{X}" }, { "math_id": 24, "text": "\\lim_{p\\to0^+}p\\log (p) = 0." }, { "math_id": 25, "text": "Y" }, { "math_id": 26, "text": "\\mathcal{Y}" }, { "math_id": 27, "text": " \\Eta(X|Y)=-\\sum_{x,y \\in \\mathcal{X} \\times \\mathcal{Y}} p_{X,Y}(x,y)\\log\\frac{p_{X,Y}(x,y)}{p_Y(y)} ," }, { "math_id": 28, "text": "p_{X,Y}(x,y) := \\mathbb{P}[X=x,Y=y]" }, { "math_id": 29, "text": "p_Y(y) = \\mathbb{P}[Y = y]" }, { "math_id": 30, "text": "(X, \\Sigma, \\mu)" }, { "math_id": 31, "text": "A \\in \\Sigma" }, { "math_id": 32, "text": "A" }, { "math_id": 33, "text": " \\sigma_\\mu(A) = -\\ln \\mu(A) ." }, { "math_id": 34, "text": " h_\\mu(A) = \\mu(A) \\sigma_\\mu(A) ." }, { "math_id": 35, "text": "\\mu" }, { "math_id": 36, "text": "P \\subseteq \\mathcal{P}(X)" }, { "math_id": 37, "text": "\\mu(\\mathop{\\cup} P) = 1" }, { "math_id": 38, "text": "\\mu(A \\cap B) = 0" }, { "math_id": 39, "text": "A, B \\in P" }, { "math_id": 40, "text": "P" }, { "math_id": 41, "text": " \\Eta_\\mu(P) = \\sum_{A \\in P} h_\\mu(A) ." }, { "math_id": 42, "text": "M" }, { "math_id": 43, "text": " \\Eta_\\mu(M) = \\sup_{P \\subseteq M} \\Eta_\\mu(P) ." }, { "math_id": 44, "text": "\\Eta_\\mu(\\Sigma)" }, { "math_id": 45, "text": "\\begin{align}\n\\Eta(X) &= -\\sum_{i=1}^n {p(x_i) \\log_b p(x_i)} \n\\\\ &= -\\sum_{i=1}^2 {\\frac{1}{2}\\log_2{\\frac{1}{2}}} \n\\\\ &= -\\sum_{i=1}^2 {\\frac{1}{2} \\cdot (-1)} = 1.\n\\end{align}" }, { "math_id": 46, "text": "\\begin{align}\n\\Eta(X) &= - p \\log_2 (p) - q \\log_2 (q)\n\\\\ &= - 0.7 \\log_2 (0.7) - 0.3 \\log_2 (0.3) \n\\\\ &\\approx - 0.7 \\cdot (-0.515) - 0.3 \\cdot (-1.737) \n\\\\ &= 0.8816 < 1.\n\\end{align}" }, { "math_id": 47, "text": "\\operatorname{I}" }, { "math_id": 48, "text": "\\operatorname{I}(p) = \\log\\left(\\tfrac{1}{p}\\right) = -\\log(p)." 
}, { "math_id": 49, "text": "\\operatorname{I}(u) = k \\log u" }, { "math_id": 50, "text": "k<0" }, { "math_id": 51, "text": "x>1" }, { "math_id": 52, "text": "k = - 1/\\log x" }, { "math_id": 53, "text": "\\Eta_n\\left(p_1, p_2, \\ldots p_n \\right) = \\Eta_n\\left(p_{i_1}, p_{i_2}, \\ldots, p_{i_n} \\right)" }, { "math_id": 54, "text": "\\{i_1, ..., i_n\\}" }, { "math_id": 55, "text": "\\{1, ..., n\\}" }, { "math_id": 56, "text": "\\Eta_n" }, { "math_id": 57, "text": "\\Eta_n(p_1,\\ldots,p_n) \\le \\Eta_n\\left(\\frac{1}{n}, \\ldots, \\frac{1}{n}\\right)" }, { "math_id": 58, "text": "\\Eta_n\\bigg(\\underbrace{\\frac{1}{n}, \\ldots, \\frac{1}{n}}_{n}\\bigg) < \\Eta_{n+1}\\bigg(\\underbrace{\\frac{1}{n+1}, \\ldots, \\frac{1}{n+1}}_{n+1}\\bigg)." }, { "math_id": 59, "text": "\\Eta_n\\left(\\frac{1}{n}, \\ldots, \\frac{1}{n}\\right) = \\Eta_k\\left(\\frac{b_1}{n}, \\ldots, \\frac{b_k}{n}\\right) + \\sum_{i=1}^k \\frac{b_i}{n} \\, \\Eta_{b_i}\\left(\\frac{1}{b_i}, \\ldots, \\frac{1}{b_i}\\right)." }, { "math_id": 60, "text": "P(A\\mid B)\\cdot P(B)=P(A\\cap B)" }, { "math_id": 61, "text": "\\mu(A)\\cdot \\ln\\mu(A)" }, { "math_id": 62, "text": "\\log_2" }, { "math_id": 63, "text": "\\Eta(X,Y) \\le \\Eta(X)+\\Eta(Y)" }, { "math_id": 64, "text": "X,Y" }, { "math_id": 65, "text": "\\Eta(X,Y) = \\Eta(X)+\\Eta(Y)" }, { "math_id": 66, "text": "\\Eta_{n+1}(p_1, \\ldots, p_n, 0) = \\Eta_n(p_1, \\ldots, p_n)" }, { "math_id": 67, "text": "\\Eta_n(p_1, \\ldots, p_n)" }, { "math_id": 68, "text": "p_1, \\ldots, p_n" }, { "math_id": 69, "text": "\\lim_{q \\to 0^+} \\Eta_2(1-q, q) = 0" }, { "math_id": 70, "text": "\\Eta" }, { "math_id": 71, "text": "p_1,\\ldots ,p_n" }, { "math_id": 72, "text": "\\Eta_{n+1}(p_1,\\ldots,p_n,0) = \\Eta_n(p_1,\\ldots,p_n)" }, { "math_id": 73, "text": "\\Eta(p_1,\\dots,p_n) \\leq \\log_b n" }, { "math_id": 74, "text": " \\Eta(X,Y)=\\Eta(X|Y)+\\Eta(Y)=\\Eta(Y|X)+\\Eta(X)." }, { "math_id": 75, "text": "Y=f(X)" }, { "math_id": 76, "text": "f" }, { "math_id": 77, "text": "\\Eta(f(X)|X) = 0" }, { "math_id": 78, "text": "\\Eta(X,f(X))" }, { "math_id": 79, "text": " \\Eta(X)+\\Eta(f(X)|X)=\\Eta(f(X))+\\Eta(X|f(X))," }, { "math_id": 80, "text": "\\Eta(f(X)) \\le \\Eta(X)" }, { "math_id": 81, "text": " \\Eta(X|Y)=\\Eta(X)." }, { "math_id": 82, "text": " \\Eta(X|Y)\\leq \\Eta(X)" }, { "math_id": 83, "text": " \\Eta(X,Y)\\leq \\Eta(X)+\\Eta(Y)" }, { "math_id": 84, "text": "\\Eta(p)" }, { "math_id": 85, "text": "p" }, { "math_id": 86, "text": "\\Eta(\\lambda p_1 + (1-\\lambda) p_2) \\ge \\lambda \\Eta(p_1) + (1-\\lambda) \\Eta(p_2)" }, { "math_id": 87, "text": "p_1,p_2" }, { "math_id": 88, "text": " 0 \\le \\lambda \\le 1" }, { "math_id": 89, "text": "S = - k_\\text{B} \\sum p_i \\ln p_i \\,," }, { "math_id": 90, "text": "S = - k_\\text{B} \\,{\\rm Tr}(\\rho \\ln \\rho) \\,," }, { "math_id": 91, "text": "S=k_\\text{B} \\ln W," }, { "math_id": 92, "text": "S" }, { "math_id": 93, "text": "2^{127}" }, { "math_id": 94, "text": "\\Eta(\\mathcal{S}) = - \\sum p_i \\log p_i ," }, { "math_id": 95, "text": "\\Eta(\\mathcal{S}) = - \\sum_i p_i \\sum_j \\ p_i (j) \\log p_i (j) ," }, { "math_id": 96, "text": "p_i(j)" }, { "math_id": 97, "text": "\\Eta(\\mathcal{S}) = -\\sum_i p_i \\sum_j p_i(j) \\sum_k p_{i,j}(k)\\ \\log \\ p_{i,j}(k) ." 
}, { "math_id": 98, "text": "\\eta(X) = \\frac{H}{H_{max}} = -\\sum_{i=1}^n \\frac{p(x_i) \\log_b (p(x_i))}{\\log_b (n)}.\n" }, { "math_id": 99, "text": "\\eta(X) = -\\sum_{i=1}^n \\frac{p(x_i) \\log_b (p(x_i))}{\\log_b (n)} = \\sum_{i=1}^n \\frac{\\log_b(p(x_i)^{-p(x_i)})}{\\log_b(n)} = \n\\sum_{i=1}^n \\log_n(p(x_i)^{-p(x_i)}) = \n\\log_n (\\prod_{i=1}^n p(x_i)^{-p(x_i)}).\n" }, { "math_id": 100, "text": "{\\log_b (n)}" }, { "math_id": 101, "text": "\\mathbb X" }, { "math_id": 102, "text": "\\Eta(X) = \\mathbb{E}[-\\log f(X)] = -\\int_\\mathbb X f(x) \\log f(x)\\, \\mathrm{d}x." }, { "math_id": 103, "text": "\\Delta" }, { "math_id": 104, "text": "f(x_i) \\Delta = \\int_{i\\Delta}^{(i+1)\\Delta} f(x)\\, dx" }, { "math_id": 105, "text": "\\int_{-\\infty}^{\\infty} f(x)\\, dx = \\lim_{\\Delta \\to 0} \\sum_{i = -\\infty}^{\\infty} f(x_i) \\Delta ," }, { "math_id": 106, "text": "\\Eta^{\\Delta} := - \\sum_{i=-\\infty}^{\\infty} f(x_i) \\Delta \\log \\left( f(x_i) \\Delta \\right)" }, { "math_id": 107, "text": "\\Eta^{\\Delta} = - \\sum_{i=-\\infty}^{\\infty} f(x_i) \\Delta \\log (f(x_i)) -\\sum_{i=-\\infty}^{\\infty} f(x_i) \\Delta \\log (\\Delta)." }, { "math_id": 108, "text": "\\begin{align}\n\\sum_{i=-\\infty}^{\\infty} f(x_i) \\Delta &\\to \\int_{-\\infty}^{\\infty} f(x)\\, dx = 1 \\\\\n\\sum_{i=-\\infty}^{\\infty} f(x_i) \\Delta \\log (f(x_i)) &\\to \\int_{-\\infty}^{\\infty} f(x) \\log f(x)\\, dx.\n\\end{align}" }, { "math_id": 109, "text": "h[f] = \\lim_{\\Delta \\to 0} \\left(\\Eta^{\\Delta} + \\log \\Delta\\right) = -\\int_{-\\infty}^{\\infty} f(x) \\log f(x)\\,dx," }, { "math_id": 110, "text": "\\Eta=\\int_{-\\infty}^\\infty f(x) \\log(f(x)\\,\\Delta)\\,dx ," }, { "math_id": 111, "text": " N \\rightarrow \\infty " }, { "math_id": 112, "text": " \\log(N)" }, { "math_id": 113, "text": "D_{\\mathrm{KL}}(p \\| m ) = \\int \\log (f(x)) p(dx) = \\int f(x)\\log (f(x)) m(dx) ." }, { "math_id": 114, "text": "\\lambda(n+H)" }, { "math_id": 115, "text": " |A|^{d-1}\\leq \\prod_{i=1}^{d} |P_{i}(A)|" }, { "math_id": 116, "text": " P_{i}(A)=\\{(x_{1}, \\ldots, x_{i-1}, x_{i+1}, \\ldots, x_{d}) : (x_{1}, \\ldots, x_{d})\\in A\\}." }, { "math_id": 117, "text": " \\Eta[(X_{1}, \\ldots ,X_{d})]\\leq \\frac{1}{r}\\sum_{i=1}^{n}\\Eta[(X_{j})_{j\\in S_{i}}]" }, { "math_id": 118, "text": " (X_{j})_{j\\in S_{i}}" }, { "math_id": 119, "text": "(X_{j})_{j\\in S_{i}}" }, { "math_id": 120, "text": " \\Eta[(X_{j})_{j\\in S_{i}}]\\leq \\log |P_{i}(A)|" }, { "math_id": 121, "text": "\\frac{2^{n\\Eta(q)}}{n+1} \\leq \\tbinom nk \\leq 2^{n\\Eta(q)}," }, { "math_id": 122, "text": "\\Eta(q) = -q \\log_2(q) - (1-q) \\log_2(1-q)." }, { "math_id": 123, "text": "2^{n\\Eta(k/n)}" }, { "math_id": 124, "text": "IG(Y,X)" } ]
https://en.wikipedia.org/wiki?curid=15445
15445023
Earnings growth
Compound annual growth rate of earnings from investments Earnings growth is the compound annual growth rate (CAGR) of earnings from investments. Overview. When the dividend payout ratio is the same, the dividend growth rate is equal to the earnings growth rate. The earnings growth rate is a key value that is needed when the Discounted cash flow model, or Gordon's model, is used for stock valuation. The present value is given by: formula_0 where P = the present value, k = discount rate, D = current dividend and formula_1 is the growth rate for period i. If the growth rate is constant for formula_2 to formula_3, then formula_4 The last term corresponds to the terminal case. When the growth rate is the same in perpetuity, Gordon's model results: formula_5. As Gordon's model suggests, the valuation is very sensitive to the value of g used. Part of the earnings is paid out as dividends and part of it is retained to fund growth, as given by the payout ratio and the plowback ratio. Thus the growth rate is given by formula_6. For the S&P 500 Index, the return on equity has ranged between 10 and 15% during the 20th century, and the plowback ratio has ranged from 10 to 67% (see payout ratio). Other related measures. It is sometimes recommended that revenue growth should be checked to ensure that earnings growth is not coming from special situations like the sale of assets. When the earnings acceleration (the rate of change of earnings growth) is positive, it ensures that earnings growth is likely to continue. Historical growth rates. According to economist Robert J. Shiller, real earnings per share grew at a 3.5% annualized rate over 150 years. Since 1980, the most bullish period in U.S. stock market history, real earnings growth, according to Shiller, has been 2.6%. The table below gives recent values of earnings growth for the S&P 500. The Federal Reserve responded to declines in earnings growth by cutting the target Federal funds rate (from 6.00 to 1.75% in 2001) and raised it when growth rates were high (from 3.25 to 5.50 in 1994, and from 2.50 to 4.25 in 2005). P/E ratio and growth rate. Growth stocks generally command a higher P/E ratio because their future earnings are expected to be greater. In Stocks for the Long Run, Jeremy Siegel examines the P/E ratios of growth and technology stocks. He examined Nifty Fifty stocks for the period December 1972 to November 2001 and found that the significantly high P/E ratio for the Nifty Fifty as a group in 1972 was actually justified by the returns during the next three decades. However, he found that some individual stocks within the Nifty Fifty were overvalued while others were undervalued. Sustainability of high growth rates. High growth rates cannot be sustained indefinitely. Ben McClure suggests that the period for which such rates can be sustained can be estimated using the following: Relationship with GDP growth. It has been suggested that earnings growth depends on nominal GDP, since earnings form a part of GDP. It has been argued that earnings growth must be slower than GDP growth by approximately 2%. See Sustainable growth rate#From a financial perspective. References. <templatestyles src="Reflist/styles.css" />
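The valuation described above can be illustrated with a short Python sketch (not part of the original article; the dividend, discount rate and growth rates are invented example values, and the multi-stage dividends are compounded year by year in the usual dividend-discount convention):

```python
def present_value(dividend, k, growth_rates, terminal_growth):
    """Two-stage dividend discount: explicit growth rates for the first years,
    then Gordon's constant-growth formula as the terminal term (requires k > terminal_growth)."""
    pv = 0.0
    d = dividend
    for i, g in enumerate(growth_rates, start=1):
        d *= (1 + g)                      # dividend paid at the end of year i
        pv += d / (1 + k) ** i
    terminal = d * (1 + terminal_growth) / (k - terminal_growth)  # value at end of explicit period
    return pv + terminal / (1 + k) ** len(growth_rates)

def gordon_value(dividend, k, g):
    """Constant growth in perpetuity: P = D * (1 + g) / (k - g)."""
    return dividend * (1 + g) / (k - g)

def sustainable_growth(plowback_ratio, return_on_equity):
    """g = plowback ratio x return on equity."""
    return plowback_ratio * return_on_equity

# Hypothetical numbers for illustration only
g = sustainable_growth(plowback_ratio=0.4, return_on_equity=0.12)       # 4.8% growth
print(gordon_value(dividend=2.0, k=0.09, g=g))                          # constant-growth value
print(present_value(2.0, 0.09, [0.08, 0.07, 0.06], terminal_growth=g))  # two-stage value
```

The two-stage function mirrors the structure of the equations above: explicitly forecast dividends are discounted one by one, and the terminal term collapses the remaining perpetuity with Gordon's formula.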
[ { "math_id": 0, "text": "P = D\\cdot\\sum_{i=1}^{\\infty}\\left(\\frac{1+g_i}{1+k}\\right)^{i}" }, { "math_id": 1, "text": "g_i" }, { "math_id": 2, "text": "i=n+1 " }, { "math_id": 3, "text": "\\infty" }, { "math_id": 4, "text": "P = D\\cdot\\frac{1+g_1}{1+k} + D\\cdot(\\frac{1+g_2}{1+k})^2 +...+ D\\cdot(\\frac{1+g_n}{1+k})^n+ D\\cdot\\sum_{i=n+1}^{\\infty}\\left(\\frac{1+g_\\infty}{1+k}\\right)^{i}" }, { "math_id": 5, "text": "P = D\\times\\frac{1+g}{k-g}" }, { "math_id": 6, "text": "g = {Plowback\\ ratio}\\times {return\\ on\\ equity}" } ]
https://en.wikipedia.org/wiki?curid=15445023
154473
Fermi level
Quantity in solid state thermodynamics The Fermi level of a solid-state body is the thermodynamic work required to add one electron to the body. It is a thermodynamic quantity usually denoted by "μ" or "E"F for brevity. The Fermi level does not include the work required to remove the electron from wherever it came from. A precise understanding of the Fermi level—how it relates to electronic band structure in determining electronic properties; how it relates to the voltage and flow of charge in an electronic circuit—is essential to an understanding of solid-state physics. In band structure theory, used in solid state physics to analyze the energy levels in a solid, the Fermi level can be considered to be a hypothetical energy level of an electron, such that at thermodynamic equilibrium this energy level would have a "50% probability of being occupied at any given time". The position of the Fermi level in relation to the band energy levels is a crucial factor in determining electrical properties. The Fermi level does not necessarily correspond to an actual energy level (in an insulator the Fermi level lies in the band gap), nor does it require the existence of a band structure. Nonetheless, the Fermi level is a precisely defined thermodynamic quantity, and differences in Fermi level can be measured simply with a voltmeter. Voltage measurement. Sometimes it is said that electric currents are driven by differences in electrostatic potential (Galvani potential), but this is not exactly true. As a counterexample, multi-material devices such as p–n junctions contain internal electrostatic potential differences at equilibrium, yet without any accompanying net current; if a voltmeter is attached to the junction, one simply measures zero volts. Clearly, the electrostatic potential is not the only factor influencing the flow of charge in a material—Pauli repulsion, carrier concentration gradients, electromagnetic induction, and thermal effects also play an important role. In fact, the quantity called "voltage" as measured in an electronic circuit has a simple relationship to the chemical potential for electrons (Fermi level). When the leads of a voltmeter are attached to two points in a circuit, the displayed voltage is a measure of the "total" work transferred when a unit charge is allowed to move from one point to the other. If a simple wire is connected between two points of differing voltage (forming a short circuit), current will flow from positive to negative voltage, converting the available work into heat. The Fermi level of a body expresses the work required to add an electron to it, or equally the work obtained by removing an electron. Therefore, "V"A − "V"B, the observed difference in voltage between two points, "A" and "B", in an electronic circuit is exactly related to the corresponding chemical potential difference, "μ"A − "μ"B, in Fermi level by the formula formula_0 where −"e" is the electron charge. From the above discussion it can be seen that electrons will move from a body of high "μ" (low voltage) to low "μ" (high voltage) if a simple path is provided. This flow of electrons will cause the lower "μ" to increase (due to charging or other repulsion effects) and likewise cause the higher "μ" to decrease. Eventually, "μ" will settle down to the same value in both bodies. 
This leads to an important fact regarding the equilibrium (off) state of an electronic circuit: &lt;templatestyles src="Block indent/styles.css"/&gt;"An electronic circuit in thermodynamic equilibrium will have a constant Fermi level throughout its connected parts." This also means that the voltage (measured with a voltmeter) between any two points will be zero, at equilibrium. Note that thermodynamic equilibrium here requires that the circuit be internally connected and not contain any batteries or other power sources, nor any variations in temperature. Band structure of solids. In the band theory of solids, electrons occupy a series of bands composed of single-particle energy eigenstates each labelled by "ϵ". Although this single particle picture is an approximation, it greatly simplifies the understanding of electronic behaviour and it generally provides correct results when applied correctly. The Fermi–Dirac distribution, formula_1, gives the probability that (at thermodynamic equilibrium) a state having energy "ϵ" is occupied by an electron: formula_2 Here, "T" is the absolute temperature and "k"B is the Boltzmann constant. If there is a state at the Fermi level ("ϵ" = "μ"), then this state will have a 50% chance of being occupied. The distribution is plotted in the left figure. The closer "f" is to 1, the higher chance this state is occupied. The closer "f" is to 0, the higher chance this state is empty. The location of "μ" within a material's band structure is important in determining the electrical behaviour of the material. In semiconductors and semimetals the position of "μ" relative to the band structure can usually be controlled to a significant degree by doping or gating. These controls do not change "μ" which is fixed by the electrodes, but rather they cause the entire band structure to shift up and down (sometimes also changing the band structure's shape). For further information about the Fermi levels of semiconductors, see (for example) Sze. Local conduction band referencing, internal chemical potential and the parameter "ζ". If the symbol "ℰ" is used to denote an electron energy level measured relative to the energy of the edge of its enclosing band, "ϵ"C, then in general we have formula_3 We can define a parameter "ζ" that references the Fermi level with respect to the band edge:formula_4It follows that the Fermi–Dirac distribution function can be written asformula_5The band theory of metals was initially developed by Sommerfeld, from 1927 onwards, who paid great attention to the underlying thermodynamics and statistical mechanics. Confusingly, in some contexts the band-referenced quantity "ζ" may be called the "Fermi level", "chemical potential", or "electrochemical potential", leading to ambiguity with the globally-referenced Fermi level. In this article, the terms "conduction-band referenced Fermi level" or "internal chemical potential" are used to refer to "ζ". "ζ" is directly related to the number of active charge carriers as well as their typical kinetic energy, and hence it is directly involved in determining the local properties of the material (such as electrical conductivity). For this reason it is common to focus on the value of "ζ" when concentrating on the properties of electrons in a single, homogeneous conductive material. By analogy to the energy states of a free electron, the "ℰ" of a state is the kinetic energy of that state and "ϵ"C is its potential energy. With this in mind, the parameter, "ζ", could also be labelled the "Fermi kinetic energy". 
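As a numerical illustration of the occupation statistics described above (not part of the original article; the temperature and energies are arbitrary example values), the Fermi–Dirac distribution can be evaluated directly:

```python
import math

K_B = 8.617333262e-5  # Boltzmann constant in eV/K

def fermi_dirac(energy_eV, mu_eV, temperature_K):
    """Occupation probability f(E) = 1 / (exp((E - mu) / (k_B T)) + 1)."""
    return 1.0 / (math.exp((energy_eV - mu_eV) / (K_B * temperature_K)) + 1.0)

# States 0.1 eV below, at, and above the Fermi level, at room temperature
mu = 0.0  # measure energies relative to the Fermi level
for e in (-0.1, 0.0, +0.1):
    print(f"E - mu = {e:+.2f} eV -> f = {fermi_dirac(e, mu, 300):.4f}")
# At E = mu the occupation is exactly 0.5, as stated above; well below mu it
# approaches 1 and well above mu it approaches 0.
```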
Unlike "μ", the parameter, "ζ", is not a constant at equilibrium, but rather varies from location to location in a material due to variations in "ϵ"C, which is determined by factors such as material quality and impurities/dopants. Near the surface of a semiconductor or semimetal, "ζ" can be strongly controlled by externally applied electric fields, as is done in a field effect transistor. In a multi-band material, "ζ" may even take on multiple values in a single location. For example, in a piece of aluminum there are two conduction bands crossing the Fermi level (even more bands in other materials); each band has a different edge energy, "ϵ"C, and a different "ζ". The value of "ζ" at zero temperature is widely known as the Fermi energy, sometimes written "ζ"0. Confusingly (again), the name "Fermi energy" is sometimes used to refer to "ζ" at non-zero temperature. Temperature out of equilibrium. The Fermi level, "μ", and temperature, "T", are well defined constants for a solid-state device in a thermodynamic equilibrium situation, such as when it is sitting on the shelf doing nothing. When the device is brought out of equilibrium and put into use, then strictly speaking the Fermi level and temperature are no longer well defined. Fortunately, it is often possible to define a quasi-Fermi level and quasi-temperature for a given location that accurately describe the occupation of states in terms of a thermal distribution. The device is said to be in "quasi-equilibrium" when and where such a description is possible. The quasi-equilibrium approach allows one to build a simple picture of some non-equilibrium effects, such as the electrical conductivity of a piece of metal (as resulting from a gradient of "μ") or its thermal conductivity (as resulting from a gradient in "T"). The quasi-"μ" and quasi-"T" can vary (or not exist at all) in any non-equilibrium situation, such as: In some situations, such as immediately after a material experiences a high-energy laser pulse, the electron distribution cannot be described by any thermal distribution. One cannot define the quasi-Fermi level or quasi-temperature in this case; the electrons are simply said to be "non-thermalized". In less dramatic situations, such as in a solar cell under constant illumination, a quasi-equilibrium description may be possible but requires the assignment of distinct values of "μ" and "T" to different bands (conduction band vs. valence band). Even then, the values of "μ" and "T" may jump discontinuously across a material interface (e.g., p–n junction) when a current is being driven, and be ill-defined at the interface itself.
In fact, thermodynamic equilibrium guarantees that the Fermi level in a conductor is "always" fixed to be exactly equal to the Fermi level of the electrodes; only the band structure (not the Fermi level) can be changed by doping or the field effect (see also band diagram). A similar ambiguity exists between the terms, "chemical potential" and "electrochemical potential". It is also important to note that Fermi "level" is not necessarily the same thing as Fermi "energy". In the wider context of quantum mechanics, the term Fermi energy usually refers to "the maximum kinetic energy of a fermion in an idealized non-interacting, disorder free, zero temperature Fermi gas". This concept is very theoretical (there is no such thing as a non-interacting Fermi gas, and zero temperature is impossible to achieve). However, it finds some use in approximately describing white dwarfs, neutron stars, atomic nuclei, and electrons in a metal. On the other hand, in the fields of semiconductor physics and engineering, "Fermi energy" often is used to refer to the Fermi level described in this article. Fermi level referencing and the location of zero Fermi level. Much like the choice of origin in a coordinate system, the zero point of energy can be defined arbitrarily. Observable phenomena only depend on energy differences. When comparing distinct bodies, however, it is important that they all be consistent in their choice of the location of zero energy, or else nonsensical results will be obtained. It can therefore be helpful to explicitly name a common point to ensure that different components are in agreement. On the other hand, if a reference point is inherently ambiguous (such as "the vacuum", see below) it will instead cause more problems. A practical and well-justified choice of common point is a bulky, physical conductor, such as the electrical ground or earth. Such a conductor can be considered to be in a good thermodynamic equilibrium and so its "μ" is well defined. It provides a reservoir of charge, so that large numbers of electrons may be added or removed without incurring charging effects. It also has the advantage of being accessible, so that the Fermi level of any other object can be measured simply with a voltmeter. Why it is not advisable to use "the energy in vacuum" as a reference zero. In principle, one might consider using the state of a stationary electron in the vacuum as a reference point for energies. This approach is not advisable unless one is careful to define exactly where "the vacuum" is. The problem is that not all points in the vacuum are equivalent. At thermodynamic equilibrium, it is typical for electrical potential differences of order 1 V to exist in the vacuum (Volta potentials). The source of this vacuum potential variation is the variation in work function between the different conducting materials exposed to vacuum. Just outside a conductor, the electrostatic potential depends sensitively on the material, as well as which surface is selected (its crystal orientation, contamination, and other details). The parameter that gives the best approximation to universality is the Earth-referenced Fermi level suggested above. This also has the advantage that it can be measured with a voltmeter. Discrete charging effects in small systems. In cases where the "charging effects" due to a single electron are non-negligible, the above definitions should be clarified. For example, consider a capacitor made of two identical parallel-plates. 
If the capacitor is uncharged, the Fermi level is the same on both sides, so one might think that it should take no energy to move an electron from one plate to the other. But when the electron has been moved, the capacitor has become (slightly) charged, so this does take a slight amount of energy. In a normal capacitor, this is negligible, but in a nano-scale capacitor it can be more important. In this case one must be precise about the thermodynamic definition of the chemical potential as well as the state of the device: is it electrically isolated, or is it connected to an electrode? These chemical potentials are not equivalent, "μ" ≠ "μ"′ ≠ "μ"″, except in the thermodynamic limit. The distinction is important in small systems such as those showing Coulomb blockade. The parameter, "μ", (i.e., in the case where the number of electrons is allowed to fluctuate) remains exactly related to the voltmeter voltage, even in small systems. To be precise, then, the Fermi level is defined not by a deterministic charging event by one electron charge, but rather a statistical charging event by an infinitesimal fraction of an electron. Notes. &lt;templatestyles src="Reflist/styles.css" /&gt; References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": " V_\\mathrm{A} - V_\\mathrm{B} = \\frac{\\mu_\\mathrm{A} - \\mu_\\mathrm{B}}{-e} " }, { "math_id": 1, "text": "f(\\epsilon)" }, { "math_id": 2, "text": " f(\\epsilon) = \\frac{1}{e^{(\\epsilon - \\mu)/k_\\mathrm{B} T} + 1} " }, { "math_id": 3, "text": "\\text{ℰ} = \\varepsilon - \\varepsilon_{\\rm C}.\n" }, { "math_id": 4, "text": "\\zeta = \\mu - \\epsilon_{\\rm C}." }, { "math_id": 5, "text": "f(\\mathcal{E}) = \\frac{1}{e^{(\\mathcal{E} - \\zeta)/k_\\mathrm{B} T} + 1}. " }, { "math_id": 6, "text": "\\mu(\\left\\langle N \\right\\rangle,T) = \\left(\\frac{\\partial F}{\\partial \\left\\langle N \\right\\rangle}\\right)_T," }, { "math_id": 7, "text": "\\mu'(N, T) = F(N + 1, T) - F(N, T)," }, { "math_id": 8, "text": "\\mu''(N, T) = F(N, T) - F(N - 1, T) = \\mu'(N - 1, T)." } ]
https://en.wikipedia.org/wiki?curid=154473
1544750
Inversive congruential generator
Inversive congruential generators are a type of nonlinear congruential pseudorandom number generator, which use the modular multiplicative inverse (if it exists) to generate the next number in a sequence. The standard formula for an inversive congruential generator, modulo some prime "q", is: formula_0 formula_1 Such a generator is denoted symbolically as ICG("q", "a", "c", "seed") and is said to be an ICG with parameters "q", "a", "c" and seed "seed". Period. The sequence formula_2 must have formula_3 after finitely many steps, and since the next element depends only on its direct predecessor, also formula_4 etc. The maximum possible period for the modulus "q" is "q" itself, i.e. the sequence includes every value from 0 to "q" − 1 before repeating. A sufficient condition for the sequence to have the maximum possible period is to choose "a" and "c" such that the polynomial formula_5 (polynomial ring over formula_6) is primitive. This is not a necessary condition; there are choices of "q", "a" and "c" for which formula_7 is not primitive, but the sequence nevertheless has a period of "q". Any polynomial, primitive or not, that leads to a maximal-period sequence is called an inversive maximal-period (IMP) polynomial. Chou describes an algorithm for choosing the parameters "a" and "c" to get such polynomials. Eichenauer-Herrmann, Lehn, Grothe and Niederreiter have shown that inversive congruential generators have good uniformity properties, in particular with regard to lattice structure and serial correlations. Example. ICG(5, 2, 3, 1) gives the sequence 1, 0, 3, 2, 4, 1, 0, 3, 2, 4, 1, 0, ... In this example, formula_8 is irreducible in formula_9, as none of 0, 1, 2, 3 or 4 is a root. It can also be verified that "x" is a primitive element of formula_10 and hence "f" is primitive. Compound inversive generator. The construction of a compound inversive generator (CIG) relies on combining two or more inversive congruential generators according to the method described below. Let formula_11 be distinct prime integers, each formula_12. For each index "j", 1 ≤ "j" ≤ "r", let formula_2 be a sequence of elements of formula_13 periodic with period length formula_14. In other words, formula_15. For each index "j", 1 ≤ "j" ≤ r, we consider formula_16, where formula_17 is the period length of the following sequence formula_2. The sequence formula_2 of compound pseudorandom numbers is defined as the sum formula_18. The compound approach allows combining inversive congruential generators, provided they have full period, in parallel generation systems. Advantages of CIG. The CIG are accepted for practical purposes for a number of reasons. Firstly, binary sequences produced in this way are free of undesirable statistical deviations. Inversive sequences extensively tested with a variety of statistical tests remain stable under variation of the parameters. Secondly, there exists a steady and simple way of parameter choice, based on the Chou algorithm, that guarantees maximum period length. Thirdly, the compound approach has the same properties as single inversive generators, but it also provides a period length significantly greater than that obtained by a single inversive congruential generator. They seem to be designed for application with multiprocessor parallel hardware platforms. There exists an algorithm that allows designing compound generators with predictable period length, predictable linear complexity, and excellent statistical properties of the produced bit streams.
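A short Python sketch (not part of the original article) of the single-generator recurrence; it reproduces the ICG(5, 2, 3, 1) example above and relies on Python 3.8+ for the built-in modular inverse pow(x, -1, q):

```python
def icg(q, a, c, seed, n):
    """Generate the first n values of the inversive congruential generator ICG(q, a, c, seed)."""
    values = []
    x = seed
    for _ in range(n):
        values.append(x)
        if x == 0:
            x = c
        else:
            x = (a * pow(x, -1, q) + c) % q   # modular inverse exists since q is prime
    return values

print(icg(5, 2, 3, 1, 12))
# [1, 0, 3, 2, 4, 1, 0, 3, 2, 4, 1, 0]  -- full period q = 5
```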
The procedure of designing this complex structure starts with defining a finite field of "p" elements and ends with choosing the parameters "a" and "c" for each inversive congruential generator that is a component of the compound generator. This means that each generator is associated with a fixed IMP polynomial. Such a condition is sufficient for the maximum period of each inversive congruential generator and, finally, for the maximum period of the compound generator. The construction of IMP polynomials is the most efficient approach to find parameters for an inversive congruential generator with maximum period length. Discrepancy and its boundaries. Equidistribution and statistical independence properties of the generated sequences, which are very important for their usability in a stochastic simulation, can be analyzed based on the "discrepancy" of "s"-tuples of successive pseudorandom numbers with formula_19 and formula_20 respectively. The discrepancy computes the distance of a generator from a uniform one. A low discrepancy means that the sequence generated can be used for cryptographic purposes, and the first aim of the inversive congruential generator is to provide pseudorandom numbers. Definition. For N arbitrary points formula_21 the discrepancy is defined by formula_22, where the supremum is extended over all subintervals J of formula_23, formula_24 is formula_25 times the number of points among formula_26 falling into J, and V(J) denotes the s-dimensional volume of J. Until now, we had sequences of integers from 0 to T − 1; in order to have sequences in formula_23, one can divide a sequence of integers by its period T. From this definition, we can say that if the sequence formula_27 is perfectly random, then it is well distributed: on the interval formula_28 we have formula_29, and all points are in J, so formula_30, hence formula_31. If instead the sequence is concentrated close to one point, then the subinterval J is very small, formula_32 and formula_33, so formula_34 From the best and worst cases we then have: formula_35. Notations. Some further notation is necessary. For integers formula_36 and formula_37 let formula_38 be the set of nonzero lattice points formula_39 with formula_40 for formula_41. Define formula_42 and formula_43 for formula_44. For real formula_45 the abbreviation formula_46 is used, and formula_47 stands for the standard inner product of formula_48 in formula_49. Upper bound. Let formula_50 and formula_51 be integers. Let formula_52 with formula_53 for formula_54. Then the discrepancy of the points formula_55 satisfies formula_56 ≤ formula_57 + formula_58 formula_59formula_60 Lower bound. The discrepancy of formula_61 arbitrary points formula_62 satisfies formula_63 for any nonzero lattice point formula_64, where formula_65 denotes the number of nonzero coordinates of formula_66. These two theorems show that the CIG is not perfect, because the discrepancy is strictly greater than a positive value, but also that the CIG is not the worst generator, as the discrepancy is bounded by a value less than 1. There also exist theorems which bound the average value of the discrepancy for compound inversive generators, and ones which take values such that the discrepancy is bounded by some value depending on the parameters. For more details see the original paper. References. <templatestyles src="Reflist/styles.css" />
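The compound construction itself can be sketched in a few lines of Python (not part of the original article; the component parameters are small illustrative values and are not claimed to come from IMP polynomials):

```python
def icg_sequence(q, a, c, seed, n):
    """First n values of the inversive congruential generator ICG(q, a, c, seed)."""
    x, out = seed, []
    for _ in range(n):
        out.append(x)
        x = c if x == 0 else (a * pow(x, -1, q) + c) % q
    return out

def compound_icg(components, n):
    """Combine r component ICGs with distinct prime moduli p_j into
    x_n = (T_1 x_n^(1) + ... + T_r x_n^(r)) mod T, with T = p_1...p_r and T_j = T / p_j."""
    T = 1
    for (p, _, _, _) in components:
        T *= p
    streams = [icg_sequence(p, a, c, seed, n) for (p, a, c, seed) in components]
    result = []
    for i in range(n):
        s = sum((T // p) * stream[i] for (p, _, _, _), stream in zip(components, streams))
        result.append(s % T)
    return result

# Two illustrative components with small prime moduli (example parameters only)
print(compound_icg([(5, 2, 3, 1), (7, 3, 2, 1)], 10))
```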
[ { "math_id": 0, "text": "x_0 = \\text{seed}," }, { "math_id": 1, "text": "x_{i+1} = \\begin{cases}\n (ax_i^{-1} + c) \\bmod q & \\text{if } x_i \\ne 0, \\\\\n c & \\text{if } x_i = 0.\n\\end{cases}\n" }, { "math_id": 2, "text": "(x_n)_{n\\geq 0}" }, { "math_id": 3, "text": "x_i = x_j" }, { "math_id": 4, "text": "x_{i+1} = x_{j+1}" }, { "math_id": 5, "text": "f(x) = x^2 - cx - a \\in \\mathbb F_q[x]" }, { "math_id": 6, "text": "\\mathbb F_q" }, { "math_id": 7, "text": "f(x)" }, { "math_id": 8, "text": "f(x) = x^2 - 3x - 2" }, { "math_id": 9, "text": "\\mathbb F_5[x]" }, { "math_id": 10, "text": "\\mathbb F_5[x]/(f)" }, { "math_id": 11, "text": "p_1, \\dots, p_r" }, { "math_id": 12, "text": "p_j \\geq 5" }, { "math_id": 13, "text": "\\mathbb F_{p_j}" }, { "math_id": 14, "text": "p_j" }, { "math_id": 15, "text": "\\{x_n^{(j)} \\mid 0 \\leq n \\leq p_j\\} \\in \\mathbb F_{p_j}" }, { "math_id": 16, "text": "T_j = T/p_j" }, { "math_id": 17, "text": "T = p_1 \\cdots p_r" }, { "math_id": 18, "text": "x_n = \\left(T_1 x_n^{(1)} + T_2 x_n^{(2)} + \\dots + T_r x_n^{(r)}\\right) \\bmod T" }, { "math_id": 19, "text": "s = 1" }, { "math_id": 20, "text": "s = 2" }, { "math_id": 21, "text": "{\\mathbf t}_1, \\dots , {\\mathbf t}_{N-1}\\in [0,1)" }, { "math_id": 22, "text": "D_N({\\mathbf t}_1, \\dots , {\\mathbf t}_{N-1})={\\rm sup}_J|F_N(J)- V(J)|" }, { "math_id": 23, "text": "[0,1)^s" }, { "math_id": 24, "text": "F_N(J)" }, { "math_id": 25, "text": "N^{-1}" }, { "math_id": 26, "text": " {\\mathbf t}_1, \\dots , {\\mathbf t}_{N-1}" }, { "math_id": 27, "text": "{\\mathbf t}_1, \\dots , {\\mathbf t}_{N-1} " }, { "math_id": 28, "text": "J=[0,1)^s" }, { "math_id": 29, "text": "V(J)=1" }, { "math_id": 30, "text": "F_N(J)=N/N=1" }, { "math_id": 31, "text": "D_N({\\mathbf t}_1, \\dots , {\\mathbf t}_{N-1})=0" }, { "math_id": 32, "text": "V(j)\\approx 0" }, { "math_id": 33, "text": "F_N(j)\\approx N/N\\approx 1" }, { "math_id": 34, "text": "D_N({\\mathbf t}_1, \\dots , {\\mathbf t}_{N-1})=1" }, { "math_id": 35, "text": "0\\leq D_N({\\mathbf t}_1, \\dots , {\\mathbf t}_{N-1})\\leq 1" }, { "math_id": 36, "text": "k\\geq 1 " }, { "math_id": 37, "text": "q\\geq 2" }, { "math_id": 38, "text": "C_k(q)" }, { "math_id": 39, "text": "(h_1,\\dots ,h_k)\\in Z^k" }, { "math_id": 40, "text": "-q/2< h_j< q/2" }, { "math_id": 41, "text": "1\\leq j \\leq k" }, { "math_id": 42, "text": "r(h,q)= \\begin{cases}\nq \\sin (\\pi|h|/q)&\\text{for }h \\in C_{1}(q)\\\\\n1 &\\text{for }h = 0\n\n\\end{cases}" }, { "math_id": 43, "text": "\nr (\\mathbf{h},q)=\\prod_{j=1}^k r(h_j,q)\n" }, { "math_id": 44, "text": "{\\mathbf h} =(h_1,\\dots ,h_k) \\in C_k(q)" }, { "math_id": 45, "text": "t" }, { "math_id": 46, "text": "e(t)={\\rm exp}(2\\pi\\cdot it)" }, { "math_id": 47, "text": "u\\cdot v" }, { "math_id": 48, "text": "u,v" }, { "math_id": 49, "text": "R^k" }, { "math_id": 50, "text": "N \\geq 1" }, { "math_id": 51, "text": "q \\geq 2" }, { "math_id": 52, "text": "{\\mathbf t}_n= y_n/q \\in [0,1)^k" }, { "math_id": 53, "text": "y_n \\in \\{0,1,\\dots ,q-1\\}^k" }, { "math_id": 54, "text": "0\\leq n< N" }, { "math_id": 55, "text": "{\\mathbf t}_0 ,\\dots ,{\\mathbf t}_{N-1}" }, { "math_id": 56, "text": "D_N (\\mathbf{t}_0,\\mathbf{t}_1, \\dots ,\\mathbf{t}_{N-1})" }, { "math_id": 57, "text": " \\frac kq " }, { "math_id": 58, "text": "\\frac 1N" }, { "math_id": 59, "text": "\\sum_{h \\in\\Complex_k(q)}" }, { "math_id": 60, "text": "\\frac 1{r(\\mathbf{h},q)} \\Bigg| \\sum_{n=0}^{N-1} e(\\mathbf{h}\\cdot \\mathbf{t}_n)\\Bigg|" }, { "math_id": 
61, "text": "N" }, { "math_id": 62, "text": "\\mathbf{t}_1, \\dots ,\\mathbf{t}_{N-1}\\in [0,1)^k" }, { "math_id": 63, "text": "D_N (\\mathbf{t}_0,\\mathbf{t}_1, \\dots ,\\mathbf{t}_{N-1}) \\ge \\frac {\\pi}{2N((\\pi+1)^l -1)\\prod_{j=1}^k {\\rm max}(1,h_j)}\\Bigg|\\sum_{n=0}^{N-1} e(\\mathbf{h}\\cdot \\mathbf{t}_n)\\Bigg|" }, { "math_id": 64, "text": "{\\mathbf h}=(h_1,\\dots ,h_k)\\in Z^k" }, { "math_id": 65, "text": "l" }, { "math_id": 66, "text": "{\\mathbf h}" } ]
https://en.wikipedia.org/wiki?curid=1544750
1544814
Wahlund effect
Effect in population genetics In population genetics, the Wahlund effect is a reduction of heterozygosity (that is when an organism has two different alleles at a locus) in a population caused by subpopulation structure. Namely, if two or more subpopulations are in a Hardy–Weinberg equilibrium but have different allele frequencies, the overall heterozygosity is reduced compared to if the whole population was in equilibrium. The underlying causes of this population subdivision could be geographic barriers to gene flow followed by genetic drift in the subpopulations. The Wahlund effect was first described by the Swedish geneticist Sten Wahlund in 1928. Simplest example. Suppose there is a population formula_0, with allele frequencies of A and a given by formula_1 and formula_2 respectively (formula_3). Suppose this population is split into two equally-sized subpopulations, formula_4 and formula_5, and that all the "A" alleles are in subpopulation formula_4 and all the "a" alleles are in subpopulation formula_5 (this could occur due to drift). Then, there are no heterozygotes, even though the subpopulations are in a Hardy–Weinberg equilibrium. Case of two alleles and two subpopulations. To make a slight generalization of the above example, let formula_6 and formula_7 represent the allele frequencies of A in formula_4 and formula_5, respectively (and formula_8 and formula_9 likewise represent a). Let the allele frequency in each population be different, i.e. formula_10. Suppose each population is in an internal Hardy–Weinberg equilibrium, so that the genotype frequencies AA, Aa and aa are "p"2, 2"pq", and "q"2 respectively for each population. Then the heterozygosity (formula_11) in the overall population is given by the mean of the two: formula_12 which is always smaller than formula_13 (formula_14) unless formula_15 Generalization. The Wahlund effect may be generalized to different subpopulations of different sizes. The heterozygosity of the total population is then given by the mean of the heterozygosities of the subpopulations, weighted by the subpopulation size. "F"-statistics. The reduction in heterozygosity can be measured using "F"-statistics. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
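A minimal Python sketch (not part of the original article; the allele frequencies are arbitrary example values) comparing the mean heterozygosity of two equal-sized subpopulations with the Hardy–Weinberg expectation for the pooled population:

```python
def wahlund_heterozygosity(p1, p2):
    """Return (subdivided, pooled) heterozygosity for two equal-sized subpopulations."""
    h_subdivided = (2 * p1 * (1 - p1) + 2 * p2 * (1 - p2)) / 2  # mean of the two H-W values
    p_bar = (p1 + p2) / 2                                       # pooled allele frequency of A
    h_pooled = 2 * p_bar * (1 - p_bar)                          # H-W expectation for the total
    return h_subdivided, h_pooled

h_sub, h_tot = wahlund_heterozygosity(0.2, 0.8)
print(h_sub, h_tot)   # 0.32 vs 0.5: subdivision reduces heterozygosity unless p1 == p2
```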
[ { "math_id": 0, "text": "P" }, { "math_id": 1, "text": "p" }, { "math_id": 2, "text": "q" }, { "math_id": 3, "text": "p + q = 1" }, { "math_id": 4, "text": "P_1" }, { "math_id": 5, "text": "P_2" }, { "math_id": 6, "text": "p_1" }, { "math_id": 7, "text": "p_2" }, { "math_id": 8, "text": "q_1" }, { "math_id": 9, "text": "q_2" }, { "math_id": 10, "text": "p_1 \\ne p_2" }, { "math_id": 11, "text": "H" }, { "math_id": 12, "text": "\n\\begin{align}\nH & = {2p_1q_1 + 2p_2q_2 \\over 2} \\\\[5pt]\n& = {p_1q_1 + p_2q_2} \\\\[5pt]\n& = {p_1(1-p_1) + p_2(1-p_2)}\n\\end{align}\n" }, { "math_id": 13, "text": "2p(1-p)" }, { "math_id": 14, "text": "{}=2pq" }, { "math_id": 15, "text": "p_1=p_2" } ]
https://en.wikipedia.org/wiki?curid=1544814
15449749
Dividend payout ratio
The dividend payout ratio is the fraction of net income a firm pays to its stockholders in dividends: formula_0 The part of earnings not paid to investors is left for investment to provide for future earnings growth. Investors seeking high current income and limited capital growth prefer companies with a high dividend payout ratio. However, investors seeking capital growth may prefer a lower payout ratio because capital gains are taxed at a lower rate. High-growth firms early in their life generally have low or zero payout ratios. As they mature, they tend to return more of the earnings to investors. The dividend payout ratio is calculated as DPS/EPS. According to Financial Accounting by Walter T. Harrison, the calculation for the payout ratio is as follows: Payout Ratio = (Dividends - Preferred Stock Dividends)/Net Income The dividend yield is given by earnings yield times the dividend payout ratio: formula_1 Conversely, the P/E ratio is the Price/Dividend ratio times the DPR. Impact of buybacks. Some companies choose stock buybacks as an alternative to dividends; in such cases this ratio becomes less meaningful. One way to adapt it is to use an augmented payout ratio: Augmented Payout Ratio = (Dividends + Buybacks)/Net Income for the same period Historic data. The data for the S&P 500 is taken from a 2006 Eaton Vance post. The payout rate has gradually declined from 90% of operating earnings in the 1940s to about 30% in recent years. For smaller, growth companies, the average payout ratio can be as low as 10%. References. <templatestyles src="Reflist/styles.css" />
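The ratios above can be computed with a short Python sketch (not part of the original article; all figures are hypothetical):

```python
def payout_ratio(dividends, preferred_dividends, net_income):
    """Payout ratio = (dividends - preferred stock dividends) / net income."""
    return (dividends - preferred_dividends) / net_income

def augmented_payout_ratio(dividends, buybacks, net_income):
    """Augmented payout ratio = (dividends + buybacks) / net income."""
    return (dividends + buybacks) / net_income

def dividend_yield(payout, eps, price):
    """Current dividend yield = payout ratio * full-year EPS / current share price."""
    return payout * eps / price

# Hypothetical figures for illustration only
print(payout_ratio(40.0, 5.0, 100.0))            # 0.35
print(augmented_payout_ratio(40.0, 20.0, 100.0)) # 0.60
print(dividend_yield(0.35, 4.0, 50.0))           # 0.028, i.e. a 2.8% yield
```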
[ { "math_id": 0, "text": "\\mbox{Dividend payout ratio}=\\frac{\\mbox{Dividends}}{\\mbox{Net Income for the same period}}" }, { "math_id": 1, "text": "\n\\begin{array}{lcl}\n \\mbox{Current Dividend Yield} & = & \\frac{\\mbox{Most Recent Full-Year Dividend}}{\\mbox{Current Share Price}} \\\\\n & = & \\frac{\\mbox{Dividend payout ratio}\\times \\mbox{Most Recent Full-Year earnings per share}}{\\mbox{Current Share Price}} \\\\\n \\end{array}\n" } ]
https://en.wikipedia.org/wiki?curid=15449749
1544998
Van Wijngaarden grammar
Notation techniques for grammars in computer science In computer science, a Van Wijngaarden grammar (also vW-grammar or W-grammar) is a formalism for defining formal languages. The name derives from the formalism invented by Adriaan van Wijngaarden for the purpose of defining the ALGOL 68 programming language. The resulting specification remains its most notable application. Van Wijngaarden grammars address the problem that context-free grammars cannot express agreement or reference, where two different parts of the sentence must agree with each other in some way. For example, the sentence "The birds was eating" is not Standard English because it fails to agree on number. A context-free grammar would parse "The birds was eating" and "The birds were eating" and "The bird was eating" in the same way. However, context-free grammars have the benefit of simplicity whereas van Wijngaarden grammars are considered highly complex. Two levels. W-grammars are two-level grammars: they are defined by a pair of grammars, that operate on different levels: The set of strings generated by a W-grammar is defined by a two-stage process: The "consistent substitution" used in the first step is the same as substitution in predicate logic, and actually supports logic programming; it corresponds to unification in Prolog, as noted by Alain Colmerauer. W-grammars are Turing complete; hence, all decision problems regarding the languages they generate, such as are undecidable. Curtailed variants, known as affix grammars, were developed, and applied in compiler construction and to the description of natural languages. Definite logic programs, that is, logic programs that make no use of negation, can be viewed as a subclass of W-grammars. Motivation and history. In the 1950s, attempts started to apply computers to the recognition, interpretation and translation of natural languages, such as English and Russian. This requires a machine-readable description of the phrase structure of sentences, that can be used to parse and interpret them, and to generate them. Context-free grammars, a concept from structural linguistics, were adopted for this purpose; their rules can express how sentences are recursively built out of parts of speech, such as noun phrases and verb phrases, and ultimately, words, such as nouns, verbs, and pronouns. This work influenced the design and implementation of programming languages, most notably, of ALGOL 60, which introduced a syntax description in Backus–Naur form. However, context-free rules cannot express agreement or reference (anaphora), where two different parts of the sentence must agree with each other in some way. These can be readily expressed in W-grammars. (See example below.) Programming languages have the analogous notions of typing and scoping. A compiler or interpreter for the language must recognize which uses of a variable belong together (refer to the same variable). This is typically subject to constraints such as: W-grammars are based on the idea of providing the nonterminal symbols of context-free grammars with "attributes" (or "affixes") that pass information between the nodes of the parse tree, used to constrain the syntax and to specify the semantics. This idea was well known at the time; e.g. Donald Knuth visited the ALGOL 68 design committee while developing his own version of it, attribute grammars. By augmenting the syntax description with attributes, constraints like the above can be checked, ruling many invalid programs out at compile time. 
As Van Wijngaarden wrote in his preface: &lt;templatestyles src="Template:Blockquote/styles.css" /&gt;My main objections were certain to me unnecessary restrictions and the definition of the syntax and semantics. Actually the syntax viewed in MR 75 produces a large number of programs, whereas I should prefer to have the subset of meaningful programs as large as possible, which requires a stricter syntax. [...] it soon became clear that some better tools than the Backus notation might be advantageous [...]. I developed a scheme [...] which enables the design of a language to carry much more information in the syntax than is normally carried. Quite peculiar to W-grammars was their strict treatment of attributes as strings, defined by a context-free grammar, on which concatenation is the only possible operation; complex data structures and operations can be defined by pattern matching. (See example below.) After their introduction in the 1968 ALGOL 68 "Final Report", W-grammars were widely considered as too powerful and unconstrained to be practical. This was partly a consequence of the way in which they had been applied; the 1973 ALGOL 68 "Revised Report" contains a much more readable grammar, without modifying the W-grammar formalism itself. Meanwhile, it became clear that W-grammars, when used in their full generality, are indeed too powerful for such practical purposes as serving as the input for a parser generator. They describe precisely all recursively enumerable languages, which makes parsing impossible in general: it is an undecidable problem to decide whether a given string can be generated by a given W-grammar. Hence, their use must be seriously constrained when used for automatic parsing or translation. Restricted and modified variants of W-grammars were developed to address this, e.g. After the 1970s, interest in the approach waned; occasionally, new studies are published. Examples. Agreement in English grammar. In English, nouns, pronouns and verbs have attributes such as grammatical number, gender, and person, which must agree between subject, main verb, and pronouns referring to the subject: are valid sentences; invalid are, for instance: Here, agreement serves to stress that both pronouns (e.g. "I" and "myself") refer to the same person. 
A context-free grammar to generate all such sentences: &lt;sentence&gt; ::= &lt;subject&gt; &lt;verb&gt; &lt;object&gt; &lt;subject&gt; ::= I | You | He | She | We | They &lt;verb&gt; ::= wash | washes &lt;object&gt; ::= myself | yourself | himself | herself | ourselves | yourselves | themselves From codice_0, we can generate all combinations: I wash myself I wash yourself I wash himself They wash yourselves They wash themselves A W-grammar to generate only the valid sentences: &lt;sentence &lt;NUMBER&gt; &lt;GENDER&gt; &lt;PERSON» ::= &lt;subject &lt;NUMBER&gt; &lt;GENDER&gt; &lt;PERSON» &lt;verb &lt;NUMBER&gt; &lt;PERSON» &lt;object &lt;NUMBER&gt; &lt;GENDER&gt; &lt;PERSON» &lt;subject singular &lt;GENDER&gt; 1st&gt; ::= I &lt;subject &lt;NUMBER&gt; &lt;GENDER&gt; 2nd&gt; ::= You &lt;subject singular male 3rd&gt; ::= He &lt;subject singular female 3rd&gt; ::= She &lt;subject plural &lt;GENDER&gt; 1st&gt; ::= We &lt;subject singular &lt;GENDER&gt; 3rd&gt; ::= They &lt;verb singular 1st&gt; ::= wash &lt;verb singular 2nd&gt; ::= wash &lt;verb singular 3rd&gt; ::= washes &lt;verb plural &lt;PERSON» ::= wash &lt;object singular &lt;GENDER&gt; 1st&gt; ::= myself &lt;object singular &lt;GENDER&gt; 2nd&gt; ::= yourself &lt;object singular male 3rd&gt; ::= himself &lt;object singular female 3rd&gt; ::= herself &lt;object plural &lt;GENDER&gt; 1st&gt; ::= ourselves &lt;object plural &lt;GENDER&gt; 2nd&gt; ::= yourselves &lt;object plural &lt;GENDER&gt; 3rd&gt; ::= themselves &lt;NUMBER&gt; ::== singular | plural &lt;GENDER&gt; ::== male | female &lt;PERSON&gt; ::== 1st | 2nd | 3rd A standard non-context-free language. A well-known non-context-free language is formula_0 A two-level grammar for this language is the metagrammar N ::= 1 | N1 X ::= a | b together with grammar schema Start ::= ⟨aN⟩⟨bN⟩⟨aN⟩ ⟨XN1⟩ ::= ⟨XN⟩ X ⟨X1⟩ ::= X Questions. If one substitutes a new letter, say C, for N1, is the language generated by the grammar preserved? Or N1 should be read as a string of two symbols, that is, N followed by 1? End of questions. Requiring valid use of variables in ALGOL. The Revised Report on the Algorithmic Language Algol 60 defines a full context-free syntax for the language. Assignments are defined as follows (section 4.2.1): &lt;left part&gt; ::= &lt;variable&gt; := | &lt;procedure identifier&gt; := &lt;left part list&gt; ::= &lt;left part&gt; | &lt;left part list&gt; &lt;left part&gt; &lt;assignment statement&gt; ::= &lt;left part list&gt; &lt;arithmetic expression&gt; | &lt;left part list&gt; &lt;Boolean expression&gt; A codice_1 can be (amongst other things) an codice_2, which in turn is defined as: &lt;identifier&gt; ::= &lt;letter&gt; | &lt;identifier&gt; &lt;letter&gt; | &lt;identifier&gt; &lt;digit&gt; Examples (section 4.2.2): s:=p[0]:=n:=n+1+s n:=n+1 A:=B/C-v-q×S S[v,k+2]:=3-arctan(sTIMESzeta) V:=Q&gt;Y^Z Expressions and assignments must be type checked: for instance, The rules above distinguish between codice_6 and codice_7, but they cannot verify that the same variable always has the same type. This (non-context-free) requirement can be expressed in a W-grammar by annotating the rules with attributes that record, for each variable used or assigned to, its name and type. This record can then be carried along to all places in the grammar where types need to be matched, and implement type checking. Similarly, it can be used to checking initialization of variables before use, etcetera. 
One may wonder how to create and manipulate such a data structure without explicit support in the formalism for data structures and operations on them. It can be done by using the metagrammar to define a string representation for the data structure and using pattern matching to define operations: &lt;left part with &lt;TYPED&gt; &lt;NAME» ::= &lt;variable with &lt;TYPED&gt; &lt;NAME» := | &lt;procedure identifier with &lt;TYPED&gt; &lt;NAME» := &lt;left part list &lt;TYPEMAP1» ::= &lt;left part with &lt;TYPED&gt; &lt;NAME» &lt;where &lt;TYPEMAP1&gt; is &lt;TYPED&gt; &lt;NAME&gt; added to sorted &lt;EMPTY» | &lt;left part list &lt;TYPEMAP2» &lt;left part with &lt;TYPED&gt; &lt;NAME» &lt;where &lt;TYPEMAP1&gt; is &lt;TYPED&gt; &lt;NAME&gt; added to sorted &lt;TYPEMAP2» &lt;assignment statement &lt;ASSIGNED TO&gt; &lt;USED» ::= &lt;left part list &lt;ASSIGNED TO» &lt;arithmetic expression &lt;USED» | &lt;left part list &lt;ASSIGNED TO» &lt;Boolean expression &lt;USED» &lt;where &lt;TYPED&gt; &lt;NAME&gt; is &lt;TYPED&gt; &lt;NAME&gt; added to sorted &lt;EMPTY» &lt;where &lt;TYPEMAP1&gt; is &lt;TYPED1&gt; &lt;NAME1&gt; added to sorted &lt;TYPEMAP2» ::= &lt;where &lt;TYPEMAP2&gt; is &lt;TYPED2&gt; &lt;NAME2&gt; added to sorted &lt;TYPEMAP3» &lt;where &lt;NAME1&gt; is lexicographically before &lt;NAME2» &lt;where &lt;TYPEMAP1&gt; is &lt;TYPED1&gt; &lt;NAME1&gt; added to sorted &lt;TYPEMAP2» ::= &lt;where &lt;TYPEMAP2&gt; is &lt;TYPED2&gt; &lt;NAME2&gt; added to sorted &lt;TYPEMAP3» &lt;where &lt;NAME2&gt; is lexicographically before &lt;NAME1» &lt;where &lt;TYPEMAP3&gt; is &lt;TYPED1&gt; &lt;NAME1&gt; added to sorted &lt;TYPEMAP4» &lt;where &lt;EMPTY&gt; is lexicographically before &lt;NAME1» ::= &lt;where &lt;NAME1&gt; is &lt;LETTER OR DIGIT&gt; followed by &lt;NAME2» &lt;where &lt;NAME1&gt; is lexicographically before &lt;NAME2» ::= &lt;where &lt;NAME1&gt; is &lt;LETTER OR DIGIT&gt; followed by &lt;NAME3» &lt;where &lt;NAME2&gt; is &lt;LETTER OR DIGIT&gt; followed by &lt;NAME4» &lt;where &lt;NAME3&gt; is lexicographically before &lt;NAME4» &lt;where &lt;NAME1&gt; is lexicographically before &lt;NAME2» ::= &lt;where &lt;NAME1&gt; is &lt;LETTER OR DIGIT 1&gt; followed by &lt;NAME3» &lt;where &lt;NAME2&gt; is &lt;LETTER OR DIGIT 2&gt; followed by &lt;NAME4» &lt;where &lt;LETTER OR DIGIT 1&gt; precedes+ &lt;LETTER OR DIGIT 2&gt; &lt;where &lt;LETTER OR DIGIT 1&gt; precedes+ &lt;LETTER OR DIGIT 2&gt; ::= &lt;where &lt;LETTER OR DIGIT 1&gt; precedes &lt;LETTER OR DIGIT 2&gt; &lt;where &lt;LETTER OR DIGIT 1&gt; precedes+ &lt;LETTER OR DIGIT 2&gt; ::= &lt;where &lt;LETTER OR DIGIT 1&gt; precedes+ &lt;LETTER OR DIGIT 3&gt; &lt;where &lt;LETTER OR DIGIT 3&gt; precedes+ &lt;LETTER OR DIGIT 2&gt; &lt;where a precedes b&gt; :== &lt;where b precedes c&gt; :== &lt;TYPED&gt; ::== real | integer | Boolean &lt;NAME&gt; ::== &lt;LETTER&gt; | &lt;NAME&gt; &lt;LETTER&gt; | &lt;NAME&gt; &lt;DIGIT&gt; &lt;LETTER OR DIGIT&gt; ::== &lt;LETTER&gt; | &lt;DIGIT&gt; &lt;LETTER OR DIGIT 1&gt; ::= &lt;LETTER OR DIGIT&gt; &lt;LETTER OR DIGIT 2&gt; ::= &lt;LETTER OR DIGIT&gt; &lt;LETTER OR DIGIT 3&gt; ::= &lt;LETTER OR DIGIT&gt; &lt;LETTER&gt; ::== a | b | c | [...] &lt;DIGIT&gt; ::== 0 | 1 | 2 | [...] 
&lt;NAMES1&gt; ::== &lt;NAMES&gt; &lt;NAMES2&gt; ::== &lt;NAMES&gt; &lt;ASSIGNED TO&gt; ::== &lt;NAMES&gt; &lt;USED&gt; ::== &lt;NAMES&gt; &lt;NAMES&gt; ::== &lt;NAME&gt; | &lt;NAME&gt; &lt;NAMES&gt; &lt;EMPTY&gt; ::== &lt;TYPEMAP&gt; ::== (&lt;TYPED&gt; &lt;NAME&gt;) &lt;TYPEMAP&gt; &lt;TYPEMAP1&gt; ::== &lt;TYPEMAP&gt; &lt;TYPEMAP2&gt; ::== &lt;TYPEMAP&gt; &lt;TYPEMAP3&gt; ::== &lt;TYPEMAP&gt; When compared to the original grammar, three new elements have been added: The new hyperrules are ε-rules: they only generate the empty string. ALGOL 68 examples. The ALGOL 68 reports use a slightly different notation without &lt;angled brackets&gt;. ALGOL 68 as in the 1968 Final Report §2.1. a) program : open symbol, standard prelude, library prelude option, particular program, exit, library postlude option, standard postlude, close symbol. b) standard prelude : declaration prelude sequence. c) library prelude : declaration prelude sequence. d) particular program : label sequence option, strong CLOSED void clause. e) exit : go on symbol, letter e letter x letter i letter t, label symbol. f) library postlude : statement interlude. g) standard postlude : strong void clause train ALGOL 68 as in the 1973 Revised Report §2.2.1, §10.1.1. program : strong void new closed clause A) EXTERNAL :: standard ; library ; system ; particular. B) STOP :: label letter s letter t letter o letter p. a) program text : STYLE begin token, new LAYER1 preludes, parallel token, new LAYER1 tasks PACK, STYLE end token. b) NEST1 preludes : NEST1 standard prelude with DECS1, NEST1 library prelude with DECSETY2, NEST1 system prelude with DECSETY3, where (NEST1) is (new EMPTY new DECS1 DECSETY2 DECSETY3). c) NEST1 EXTERNAL prelude with DECSETY1 : strong void NEST1 series with DECSETY1, go on token ; where (DECSETY1) is (EMPTY), EMPTY. d) NEST1 tasks : NEST1 system task list, and also token, NEST1 user task PACK list. e) NEST1 system task : strong void NEST1 unit. f) NEST1 user task : NEST2 particular prelude with DECS, NEST2 particular program PACK, go on token, NEST2 particular postlude, where (NEST2) is (NEST1 new DECS STOP). g) NEST2 particular program : NEST2 new LABSETY3 joined label definition of LABSETY3, strong void NEST2 new LABSETY3 ENCLOSED clause. h) NEST joined label definition of LABSETY : where (LABSETY) is (EMPTY), EMPTY ; where (LABSETY) is (LAB1 LABSETY1), NEST label definition of LAB1, NEST joined label definition of$ LABSETY1. i) NEST2 particular postlude : strong void NEST2 series with STOP. A simple example of the power of W-grammars is clause a) program text : STYLE begin token, new LAYER1 preludes, parallel token, new LAYER1 tasks PACK, STYLE end token. This allows BEGIN ... END and { } as block delimiters, while ruling out BEGIN ... } and { ... END. One may wish to compare the grammar in the report with the Yacc parser for a subset of ALGOL 68 by Marc van Leeuwen. Implementations. Anthony Fisher wrote "yo-yo", a parser for a large class of W-grammars, with example grammars for "expressions", "eva", "sal" and Pascal (the actual ISO 7185 standard for Pascal uses extended Backus–Naur form). Dick Grune created a C program that would generate all possible productions of a W-grammar. Applications outside of ALGOL 68. The applications of Extended Affix Grammars (EAG)s mentioned above can effectively be regarded as applications of W-grammars, since EAGs are so close to W-grammars. W-grammars have also been proposed for the description of complex human actions in ergonomics. 
A W-Grammar Description has also been supplied for Ada. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
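As an illustration of consistent substitution, the following Python sketch (not part of the original article; the helper names are invented) instantiates the hyperrules of the standard non-context-free language example given earlier (the two-level grammar with metarules N ::= 1 | N1 and X ::= a | b) for concrete values of the metanotions:

```python
def expand(x, ones):
    """Expand the strict nonterminal <x ones>, obtained from the hyperrules
    <X N 1> ::= <X N> X   and   <X 1> ::= X   by consistent substitution."""
    if ones == "1":                  # strict rule <X 1> ::= X
        return x
    return expand(x, ones[:-1]) + x  # strict rule <X N 1> ::= <X N> X

def start(n):
    """Strict rule Start ::= <a N> <b N> <a N>, with N consistently replaced by '1' * n."""
    ones = "1" * n
    return expand("a", ones) + expand("b", ones) + expand("a", ones)

for n in range(1, 4):
    print(start(n))   # aba, aabbaa, aaabbbaaa
```

Because the same value of N is substituted in all three occurrences within a strict rule, the three blocks are forced to have equal length, which is exactly the agreement that a plain context-free grammar cannot express.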
[ { "math_id": 0, "text": "\\{a^n b^n a^n | n \\ge 1\\}." } ]
https://en.wikipedia.org/wiki?curid=1544998
1545079
Barnett effect
Magnetization of an uncharged body when spun on its axis The Barnett effect is the magnetization of an uncharged body when spun on its axis. It was discovered by American physicist Samuel Barnett in 1915. An uncharged object rotating with angular velocity "ω" tends to spontaneously magnetize, with a magnetization given by formula_0 where "γ" is the gyromagnetic ratio for the material, "χ" is the magnetic susceptibility. The magnetization occurs parallel to the axis of spin. Barnett was motivated by a prediction by Owen Richardson in 1908, later named the Einstein–de Haas effect, that magnetizing a ferromagnet can induce a mechanical rotation. He instead looked for the opposite effect, that is, that spinning a ferromagnet could change its magnetization. He established the effect with a long series of experiments between 1908 and 1915. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
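A small numerical sketch (not part of the original article; the susceptibility, rotation rate and gyromagnetic ratio are placeholder example values, not measured material constants) of the magnetization formula above:

```python
def barnett_magnetization(chi, omega, gamma):
    """Evaluate M = chi * omega / gamma from the formula above."""
    return chi * omega / gamma

# Placeholder inputs chosen only to exercise the formula
chi = 1e-4                            # magnetic susceptibility (dimensionless, illustrative)
gamma = 1.76e11                       # gyromagnetic ratio, rad s^-1 T^-1 (of the order of the electron's)
omega = 2 * 3.141592653589793 * 100   # spinning at 100 revolutions per second, in rad/s
print(barnett_magnetization(chi, omega, gamma))  # ~3.6e-13: the induced magnetization is tiny
```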
[ { "math_id": 0, "text": "M = \\chi \\omega / \\gamma, " } ]
https://en.wikipedia.org/wiki?curid=1545079
15452231
Wind stress
The shear stress exerted by the wind on the surface of large bodies of water In physical oceanography and fluid dynamics, the wind stress is the shear stress exerted by the wind on the surface of large bodies of water – such as oceans, seas, estuaries and lakes. When wind is blowing over a water surface, the wind applies a wind force on the water surface. The wind stress is the component of this wind force that is parallel to the surface per unit area. Also, the wind stress can be described as the flux of horizontal momentum applied by the wind on the water surface. The wind stress causes a deformation of the water body whereby wind waves are generated. Also, the wind stress drives ocean currents and is therefore an important driver of the large-scale ocean circulation. The wind stress is affected by the wind speed, the shape of the wind waves and the atmospheric stratification. It is one of the components of the air–sea interaction, with others being the atmospheric pressure on the water surface, as well as the exchange of energy and mass between the water and the atmosphere. Background. Stress is the quantity that describes the magnitude of a force that is causing a deformation of an object. Therefore, stress is defined as the force per unit area and its SI unit is the Pascal. When the deforming force acts parallel to the object's surface, this force is called a shear force and the stress it causes is called a shear stress. Dynamics. Wind blowing over an ocean at rest first generates small-scale wind waves which extract energy and momentum from the wave field. As a result, the momentum flux (the rate of momentum transfer per unit area and unit time) generates a current. These surface currents are able to transport energy (e.g. heat) and mass (e.g. water or nutrients) around the globe. The different processes described here are depicted in the sketches shown in figures 1.1 till 1.4. Interactions between wind, wind waves and currents are an essential part of the world ocean dynamics. Eventually, the wind waves also influence the wind field leading to a complex interaction between wind and water whereof the research for a correct theoretical description is ongoing. The Beaufort scale quantifies the correspondence between wind speed and different sea states. Only the top layer of the ocean, called the mixed layer, is stirred by the wind stress. This upper layer of the ocean has a depth on the order of 10m. The wind blowing parallel to a water surface deforms that surface as a result of shear action caused by the fast wind blowing over the stagnant water. The wind blowing over the surface applies a shear force on the surface. The wind stress is the component of this force that acts parallel to the surface per unit area. This wind force exerted on the water surface due to shear stress is given by: formula_0 Here, "F" represents the shear force, formula_1 represents the air density and formula_2 represents the wind shear stress. Furthermore, "x" corresponds to the zonal direction and "y" corresponds to the meridional direction. The vertical derivatives of the wind stress components are also called the vertical eddy viscosity. The equation describes how the force exerted on the water surface decreases for a denser atmosphere or, to be more precise, a denser atmospheric boundary layer (this is the layer of a fluid where the influence of friction is felt). On the other hand, the exerted force on the water surface increases when the vertical eddy viscosity increases. 
The wind stress can also be described as a downward transfer of momentum and energy from the air to the water. The magnitude of the wind stress (formula_2) is often parametrized as a function of wind speed at a certain height above the surface (formula_3) in the form formula_4 Here, formula_5 is the density of the surface air and "CD" is a dimensionless wind drag coefficient which is a repository function for all remaining dependencies. An often used value for the drag coefficient is formula_6. Since the exchange of energy, momentum and moisture is often parametrized using bulk atmospheric formulae, the equation above is the semi-empirical bulk formula for the surface wind stress. The height to which the wind speed in wind drag formulas refers is usually 10 meters above the water surface. The formula for the wind stress explains how the stress increases for a denser atmosphere and higher wind speeds. When the wind stress forces given above are in balance with the Coriolis force, this can be written as: formula_7 where "f" is the Coriolis parameter, "u" and "v" are respectively the zonal and meridional currents and formula_8 and formula_9 are respectively the zonal and meridional Coriolis forces. This balance of forces is known as the Ekman balance. Some important assumptions that underlie the Ekman balance are the absence of boundaries, an infinitely deep water layer, constant vertical eddy viscosity, barotropic conditions with no geostrophic flow, and a constant Coriolis parameter. The oceanic currents that are generated by this balance are referred to as Ekman currents. In the Northern Hemisphere, Ekman currents at the surface are directed at an angle of formula_10° to the right of the wind stress direction and in the Southern Hemisphere they are directed at the same angle to the left of the wind stress direction. The flow directions of deeper currents are deflected even more to the right in the Northern Hemisphere and to the left in the Southern Hemisphere. This phenomenon is called the Ekman spiral. The Ekman transport can be obtained from vertically integrating the Ekman balance, giving: formula_11 where "D" is the depth of the Ekman layer. Depth-averaged Ekman transport is directed perpendicular to the wind stress and, again, directed to the right of the wind stress direction in the Northern Hemisphere and to the left of the wind stress direction in the Southern Hemisphere. Alongshore winds therefore generate transport towards or away from the coast. For small values of "D", water can return from or to deeper water layers, resulting in Ekman up- or downwelling. Upwelling due to Ekman transport can also happen at the equator due to the change of sign of the Coriolis parameter in the Northern and Southern Hemisphere and the stable easterly winds that are blowing to the North and South of the equator. Due to the strong temporal variability of the wind, the wind forcing on the ocean surface is also highly variable. This is one of the causes of the internal variability of ocean flows, as these changes in the wind forcing cause changes in the wave field and thereby in the generated currents. Variability of ocean flows also occurs because the changes of the wind forcing are disturbances of the mean ocean flow, which leads to instabilities. A well-known phenomenon that is caused by changes in surface wind stress over the tropical Pacific is the El Niño-Southern Oscillation (ENSO).
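The bulk formula and the vertically integrated Ekman balance can be combined in a short Python sketch (not part of the original article; the drag coefficient, densities and wind speed are typical example values, and the transport is written in the standard magnitude form τ/(ρf)):

```python
import math

RHO_AIR = 1.22        # surface air density, kg/m^3 (typical value)
RHO_SEA = 1025.0      # seawater density, kg/m^3 (typical value)
C_D = 1.3e-3          # dimensionless drag coefficient (example value)
OMEGA = 7.2921e-5     # Earth's rotation rate, rad/s

def wind_stress(u10):
    """Bulk formula: tau = rho_air * C_D * U10^2, in Pa."""
    return RHO_AIR * C_D * u10 ** 2

def coriolis(lat_deg):
    """Coriolis parameter f = 2 * Omega * sin(latitude)."""
    return 2 * OMEGA * math.sin(math.radians(lat_deg))

def ekman_transport(tau, lat_deg):
    """Magnitude of the depth-integrated Ekman transport, tau / (rho * f), in m^2/s
    (volume transport per unit width), directed 90 degrees to the right (left) of the
    wind stress in the Northern (Southern) Hemisphere."""
    return tau / (RHO_SEA * coriolis(lat_deg))

tau = wind_stress(10.0)               # ~0.16 Pa for a 10 m/s wind
print(tau, ekman_transport(tau, 45))  # transport of order 1.5 m^2/s at 45 degrees North
```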
The global annual mean wind stress forces the global ocean circulation. Typical values for the wind stress are about 0.1 Pa and, in general, the zonal wind stress is stronger than the meridional wind stress, as can be seen in figures 2.1 and 2.2. It can also be seen that the largest values of the wind stress occur in the Southern Ocean for the zonal direction, with values of about 0.3 Pa. Figures 2.3 and 2.4 show that monthly variations in the wind stress patterns are only minor and the general patterns stay the same during the whole year. It can be seen that there are strong easterly winds (i.e. blowing toward the west), called easterlies or trade winds, near the equator, very strong westerly winds at midlatitudes (between ±30° and ±60°), called westerlies, and weaker easterly winds at polar latitudes. Also, on a large annual scale, the wind-stress field is fairly zonally homogeneous. Important meridional wind stress patterns are northward (southward) wind stresses on the eastern (western) coasts of continents in the Northern Hemisphere and on the western (eastern) coasts in the Southern Hemisphere, since these generate coastal upwelling, which promotes biological activity. Examples of such patterns can be observed in figure 2.2 on the east coast of North America and on the west coast of South America. Large-scale ocean circulation. Wind stress is one of the drivers of the large-scale ocean circulation, with other drivers being the gravitational pull exerted by the Moon and Sun, differences in atmospheric pressure at sea level and convection resulting from atmospheric cooling and evaporation. However, the contribution of the wind stress to the forcing of the oceanic general circulation is the largest. Ocean waters respond to the wind stress because of their low resistance to shear and the relative consistency with which winds blow over the ocean. The combination of easterly winds near the equator and westerly winds at midlatitudes drives significant circulations in the North and South Atlantic Oceans, the North and South Pacific Oceans and the Indian Ocean, with westward currents near the equator and eastward currents at midlatitudes. This results in characteristic gyre flows in the Atlantic and Pacific, consisting of a subpolar and a subtropical gyre. The strong westerlies in the Southern Ocean drive the Antarctic Circumpolar Current, which is the dominant current in the Southern Hemisphere and for which no comparable current exists in the Northern Hemisphere. The equations to describe large-scale ocean dynamics were formulated by Harald Sverdrup and came to be known as Sverdrup dynamics. Particularly important is the Sverdrup balance, which describes the relation between the wind stress and the vertically integrated meridional transport of water. Other significant contributions to the description of large-scale ocean circulation were made by Henry Stommel, who formulated the first correct theory for the Gulf Stream and theories of the abyssal circulation. Long before these theories were formulated, mariners had been aware of the major surface ocean currents. As an example, Benjamin Franklin published a map of the Gulf Stream as early as 1770, and European discovery of the Gulf Stream dates back to the 1512 expedition of Juan Ponce de León. Apart from such hydrographic measurements, there are two methods to measure ocean currents directly. Firstly, the Eulerian velocity can be measured using a current meter along a rope in the water column. 
Secondly, a drifter can be used: an object that moves with the currents and whose velocity can be measured. Wind-driven upwelling. Wind-driven upwelling brings nutrients from deep waters to the surface, which leads to biological productivity. Therefore, wind stress impacts biological activity around the globe. Two important forms of wind-driven upwelling are coastal upwelling and equatorial upwelling. Coastal upwelling occurs when the wind stress is directed with the coast on its left (right) in the Northern (Southern) Hemisphere. If so, Ekman transport is directed away from the coast, forcing waters from below to move upward. Well-known coastal upwelling areas are the Canary Current, the Benguela Current, the California Current, the Humboldt Current, and the Somali Current. All of these currents support major fisheries due to the increased biological activity. Equatorial upwelling occurs due to the trade winds blowing towards the west in both the Northern Hemisphere and the Southern Hemisphere. However, the Ekman transport that is associated with these trade winds is directed 90° to the right of the winds in the Northern Hemisphere and 90° to the left of the winds in the Southern Hemisphere. As a result, water is transported away from the equator both to the north and to the south of it. This horizontal divergence of mass has to be compensated and hence upwelling occurs. Wind waves. Wind waves are waves at the water surface that are generated by the shear action of the wind stress on the water surface, with gravity acting as a restoring force that returns the water surface to its equilibrium position. Wind waves in the ocean are also known as ocean surface waves. The wind waves interact with both the air and water flows above and below the waves. Therefore, the characteristics of wind waves are determined by the coupling processes between the boundary layers of both the atmosphere and the ocean. Wind waves also play an important role themselves in the interaction processes between the ocean and the atmosphere. Wind waves in the ocean can travel thousands of kilometers. A description of the physical mechanisms that cause the growth of wind waves and that is in accordance with observations has yet to be completed. A necessary condition for wind waves to grow is a minimum wind speed of 0.05 m/s. Expressions for the drag coefficient. The drag coefficient is a dimensionless quantity which quantifies the resistance of the water surface. Because the drag coefficient depends on the history of the wind, it is expressed differently for different temporal and spatial scales. A general expression for the drag coefficient does not yet exist and its value is unknown for unsteady and non-ideal conditions. In general, the drag coefficient increases with increasing wind speed and is greater for shallower waters. The geostrophic drag coefficient is expressed as: formula_12 where formula_13 is the geostrophic wind, which is given by: formula_14 In global climate models, a drag coefficient appropriate for a spatial scale of 1° by 1° and a monthly time scale is often used. On such a timescale, the wind can fluctuate strongly. The monthly mean shear stress can be expressed as: formula_15 where formula_1 is the density, formula_16 is the drag coefficient, formula_17 is the monthly mean wind and "U"' is the fluctuation from the monthly mean. The effective drag coefficient that accounts for these fluctuations can then be written as formula_18 Measurements. 
It is not possible to measure the wind stress on the ocean surface directly. Instead, an easily measurable quantity such as the wind speed is measured, and the wind stress is then obtained from it via a parametrization. Still, direct estimates of the wind stress are important, as the value of the drag coefficient is not known for unsteady and non-ideal conditions; measurements of the wind stress under such conditions can resolve the issue of the unknown drag coefficient. Four methods of measuring the drag coefficient are the Reynolds stress method, the dissipation method, the profile method and radar remote sensing. Wind stress on land surface. The wind can also exert a shear stress on the land surface, which can lead to erosion of the ground. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
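As a minimal numerical sketch of the bulk formula and the Ekman transport relation given earlier in this article, the following Python snippet evaluates both for assumed, representative inputs (10 m wind speed, drag coefficient, air and seawater densities, Ekman layer depth, latitude); it illustrates the formulas in the text and is not an air–sea flux calculation for real data.

```python
import math

# Assumed, representative values (not measured data).
rho_air = 1.22      # kg/m^3, surface air density
rho_sea = 1025.0    # kg/m^3, seawater density
C_D     = 0.0015    # dimensionless drag coefficient (value quoted above)
U10     = 8.0       # m/s, wind speed at the 10 m reference height
D       = 50.0      # m, assumed Ekman layer depth
lat     = 45.0      # degrees latitude

# Bulk formula for the magnitude of the wind stress.
tau = rho_air * C_D * U10**2                      # Pa, ~0.12 Pa here

# Coriolis parameter f = 2 * Omega * sin(latitude).
Omega = 7.2921e-5                                 # rad/s, Earth's rotation rate
f = 2.0 * Omega * math.sin(math.radians(lat))

# Depth-averaged Ekman transport for a purely zonal (eastward) stress,
# following the relations in the Dynamics section:
# U_E = tau_y / (f rho D),  V_E = -tau_x / (f rho D).
tau_x, tau_y = tau, 0.0
U_E = tau_y / (f * rho_sea * D)
V_E = -tau_x / (f * rho_sea * D)

print(f"wind stress tau = {tau:.3f} Pa")
# V_E < 0: the transport is directed to the right of the (eastward) stress
# in the Northern Hemisphere, as described in the text.
print(f"Ekman transport (U_E, V_E) = ({U_E:.4f}, {V_E:.4f}) m/s")
```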
[ { "math_id": 0, "text": "\n\\begin{align}\nF_x & = \\frac{1}{\\rho}\\frac{\\partial\\tau_x}{\\partial z}, \\\\[5pt]\nF_y & = \\frac{1}{\\rho}\\frac{\\partial\\tau_y}{\\partial z}.\n\\end{align}\n" }, { "math_id": 1, "text": "\\rho" }, { "math_id": 2, "text": "\\tau" }, { "math_id": 3, "text": "U_h" }, { "math_id": 4, "text": "\\tau_\\text{wind} = \\rho_\\text{air} C_D U_h^2. " }, { "math_id": 5, "text": "\\rho_\\text{air}" }, { "math_id": 6, "text": "C_D = 0.0015" }, { "math_id": 7, "text": "\n\\begin{align}\n-fv & = \\frac{1}{\\rho}\\frac{\\partial\\tau_x}{\\partial z}, \\\\[5pt]\nfu & = \\frac{1}{\\rho}\\frac{\\partial\\tau_y}{\\partial z},\n\\end{align}\n" }, { "math_id": 8, "text": "-fv" }, { "math_id": 9, "text": "fu" }, { "math_id": 10, "text": "45" }, { "math_id": 11, "text": "\n\\begin{align}\nU_E & = \\frac{\\tau_y}{f\\rho D}, \\\\[5pt]\nV_E & = -\\frac{\\tau_x}{f\\rho D},\n\\end{align}\n" }, { "math_id": 12, "text": "C_g = \\frac{\\tau}{U_g^2}," }, { "math_id": 13, "text": "U_g" }, { "math_id": 14, "text": "U_g = \\frac{1}{\\rho f}\\frac{\\partial p}{\\partial y}." }, { "math_id": 15, "text": " \\langle\\tau\\rangle = \\rho \\langle C_D\\rangle\\langle U \\rangle^2 \\left(1+\\frac{\\langle U'^2\\rangle}{\\langle U\\rangle^2}\\right)," }, { "math_id": 16, "text": "C_D" }, { "math_id": 17, "text": "\\langle U\\rangle" }, { "math_id": 18, "text": " C_D = 1.3\\times 10^{-3}\\left(1+\\frac{\\langle U'^2\\rangle}{\\langle U\\rangle^2} \\right)." } ]
https://en.wikipedia.org/wiki?curid=15452231
1545350
Genotype frequency
Genetic variation in populations can be analyzed and quantified by the frequency of alleles. Two fundamental calculations are central to population genetics: allele frequencies and genotype frequencies. Genotype frequency in a population is the number of individuals with a given genotype divided by the total number of individuals in the population. In population genetics, the genotype frequency is the frequency or proportion (i.e., 0 &lt; "f" &lt; 1) of genotypes in a population. Although allele and genotype frequencies are related, it is important to clearly distinguish them. Genotype frequency may also be used in the future (for "genomic profiling") to predict whether someone will have a disease or even a birth defect. It can also be used to determine ethnic diversity. Genotype frequencies may be represented by a De Finetti diagram. Numerical example. As an example, consider a population of 100 four-o'clock plants ("Mirabilis jalapa") with the following genotypes: 49 red-flowered AA homozygotes, 42 pink-flowered Aa heterozygotes, and 9 white-flowered aa homozygotes. When calculating an allele frequency for a diploid species, remember that homozygous individuals have two copies of an allele, whereas heterozygotes have only one. In our example, each of the 42 pink-flowered heterozygotes has one copy of the a allele, and each of the 9 white-flowered homozygotes has two copies. Therefore, the allele frequency for a (the white color allele) equals formula_0 This result tells us that the allele frequency of a is 0.3. In other words, 30% of the alleles for this gene in the population are the a allele. Compare genotype frequency: let's now calculate the genotype frequency of aa homozygotes (white-flowered plants). formula_1 Allele and genotype frequencies always sum to one (100%). Equilibrium. The Hardy–Weinberg law describes the relationship between allele and genotype frequencies when a population is not evolving. Let's examine the Hardy–Weinberg equation using the population of four-o'clock plants that we considered above: if the allele A frequency is denoted by the symbol p and the allele a frequency denoted by q, then p+q=1. For example, if p=0.7, then q must be 0.3. In other words, if the allele frequency of A equals 70%, the remaining 30% of the alleles must be a, because together they equal 100%. For a gene that exists in two alleles, the Hardy–Weinberg equation states that ("p"²) + (2"pq") + ("q"²) = 1. If we apply this equation to our flower color gene, then formula_2 (genotype frequency of AA homozygotes) formula_3 (genotype frequency of Aa heterozygotes) formula_4 (genotype frequency of aa homozygotes) If p=0.7 and q=0.3, then formula_2 = (0.7)² = 0.49 formula_3 = 2×(0.7)×(0.3) = 0.42 formula_4 = (0.3)² = 0.09 This result tells us that, if the allele frequency of A is 70% and the allele frequency of a is 30%, the expected genotype frequency of AA is 49%, Aa is 42%, and aa is 9%. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
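The calculations in this article can be reproduced with a few lines of Python; the sketch below uses the genotype counts from the numerical example above (variable names are illustrative only).

```python
# Genotype counts from the four-o'clock example above.
n_AA, n_Aa, n_aa = 49, 42, 9
N = n_AA + n_Aa + n_aa                 # 100 individuals

# Allele frequencies: each diploid individual carries two alleles.
q = (n_Aa + 2 * n_aa) / (2 * N)        # frequency of a = 0.3
p = 1 - q                              # frequency of A = 0.7

# Observed genotype frequencies.
f_AA, f_Aa, f_aa = n_AA / N, n_Aa / N, n_aa / N

# Hardy-Weinberg expected genotype frequencies: p^2 + 2pq + q^2 = 1.
exp_AA, exp_Aa, exp_aa = p**2, 2 * p * q, q**2

print(f"p = {p:.2f}, q = {q:.2f}")
print(f"observed : AA={f_AA:.2f}, Aa={f_Aa:.2f}, aa={f_aa:.2f}")
print(f"expected : AA={exp_AA:.2f}, Aa={exp_Aa:.2f}, aa={exp_aa:.2f}")
```

For this particular population the observed and expected frequencies coincide, which is what the article's worked example shows.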
[ { "math_id": 0, "text": "\n\\begin{align}\nf({a}) & = { (Aa) + 2 \\times (aa) \\over 2 \\times (AA) + 2 \\times (Aa) + 2 \\times (aa)} = { 42 + 2 \\times 9 \\over 2 \\times 49 + 2 \\times 42 + 2 \\times 9 } = { 60 \\over 200 } = 0.3 \\\\\n\\end{align}\n" }, { "math_id": 1, "text": "\n\\begin{align}\nf({aa}) & = { 9 \\over 49 + 42 + 9 } = { 9 \\over 100 } = 0.09 = (9\\%) \\\\ \n\\end{align}\n" }, { "math_id": 2, "text": "f(\\mathbf{AA}) = p^2" }, { "math_id": 3, "text": "f(\\mathbf{Aa}) = 2pq" }, { "math_id": 4, "text": "f(\\mathbf{aa}) = q^2" } ]
https://en.wikipedia.org/wiki?curid=1545350
15454890
History of quaternions
In mathematics, quaternions are a non-commutative number system that extends the complex numbers. Quaternions and their applications to rotations were first described in print by Olinde Rodrigues in all but name in 1840, but independently discovered by Irish mathematician Sir William Rowan Hamilton in 1843 and applied to mechanics in three-dimensional space. They find uses in both theoretical and applied mathematics, in particular for calculations involving three-dimensional rotations. Hamilton's discovery. In 1843, Hamilton knew that the complex numbers could be viewed as points in a plane and that they could be added and multiplied together using certain geometric operations. Hamilton sought to find a way to do the same for points in space. Points in space can be represented by their coordinates, which are triples of numbers and have an obvious addition, but Hamilton had difficulty defining the appropriate multiplication. According to a letter Hamilton wrote later to his son Archibald: Every morning in the early part of October 1843, on my coming down to breakfast, your brother William Edwin and yourself used to ask me: "Well, Papa, can you multiply triples?" Whereto I was always obliged to reply, with a sad shake of the head, "No, I can only add and subtract them." On October 16, 1843, Hamilton and his wife took a walk along the Royal Canal in Dublin. While they walked across Brougham Bridge (now Broom Bridge), a solution suddenly occurred to him. While he could not "multiply triples", he saw a way to do so for "quadruples". By using three of the numbers in the quadruple as the coordinates of a point in space, Hamilton could represent points in space by his new system of numbers. He then carved the basic rules for multiplication into the bridge: "i"² = "j"² = "k"² = "ijk" = −1 Hamilton called a quadruple with these rules of multiplication a "quaternion", and he devoted the remainder of his life to studying and teaching them. From 1844 to 1850 "Philosophical Magazine" communicated Hamilton's exposition of quaternions. In 1853 he issued "Lectures on Quaternions", a comprehensive treatise that also described biquaternions. The facility of the algebra in expressing geometric relationships led to broad acceptance of the method, several compositions by other authors, and stimulation of applied algebra generally. As mathematical terminology has grown since that time, and usage of some terms has changed, the traditional expressions are referred to as classical Hamiltonian quaternions. Precursors. Hamilton's innovation consisted of expressing quaternions as an algebra over R. The formulae for the multiplication of quaternions are implicit in the four squares formula devised by Leonhard Euler in 1748; Olinde Rodrigues applied this formula to representing rotations in 1840. Response. The special claims of quaternions as the algebra of four-dimensional space were challenged by James Cockle with his exhibits in 1848 and 1849 of tessarines and coquaternions as alternatives. Nevertheless, these new algebras from Cockle were, in fact, to be found inside Hamilton's biquaternions. From Italy, in 1858 Giusto Bellavitis responded by connecting Hamilton's vector theory with his theory of equipollences of directed line segments. Jules Hoüel led the response from France in 1874 with a textbook on the elements of quaternions. To ease the study of versors, he introduced "biradials" to designate great circle arcs on the sphere. 
Then the quaternion algebra provided the foundation for spherical trigonometry introduced in chapter 9. Hoüel replaced Hamilton's basis vectors i, j, k with "i"₁, "i"₂, and "i"₃. The variety of fonts available led Hoüel to another notational innovation: "A" designates a point, "a" and a are algebraic quantities, and in the equation for a quaternion, formula_0, A is a vector and "α" is an angle. This style of quaternion exposition was perpetuated by Charles-Ange Laisant and Alexander Macfarlane. William K. Clifford expanded the types of biquaternions, and explored elliptic space, a geometry in which the points can be viewed as versors. Fascination with quaternions began before the language of set theory and mathematical structures was available. In fact, there was little mathematical notation before the Formulario mathematico. The quaternions stimulated these advances: For example, the idea of a vector space borrowed Hamilton's term but changed its meaning. Under the modern understanding, any quaternion is a vector in four-dimensional space. (Hamilton's vectors lie in the subspace with scalar part zero.) Since quaternions demand that their readers imagine four dimensions, there is a metaphysical aspect to their invocation. Quaternions are a philosophical object. Setting quaternions before freshmen students of engineering asks too much. Yet the utility of dot products and cross products in three-dimensional space, for illustration of processes, calls for the uses of these operations which are cut out of the quaternion product. Thus Willard Gibbs and Oliver Heaviside made this accommodation, for pragmatism, to avoid the distracting superstructure. For mathematicians the quaternion structure became familiar and lost its status as something mathematically interesting. Thus in England, when Arthur Buchheim prepared a paper on biquaternions, it was published in the American Journal of Mathematics since some novelty in the subject lingered there. Research turned to hypercomplex numbers more generally. For instance, Thomas Kirkman and Arthur Cayley considered how many equations between basis vectors would be necessary to determine a unique system. The wide interest that quaternions aroused around the world resulted in the Quaternion Society. In contemporary mathematics, the division ring of quaternions exemplifies an algebra over a field. Octonions. Octonions were developed independently by Arthur Cayley in 1845 and John T. Graves, a friend of Hamilton's. Graves had interested Hamilton in algebra, and responded to his discovery of quaternions with "If with your alchemy you can make three pounds of gold [the three imaginary units], why should you stop there?" Two months after Hamilton's discovery of quaternions, Graves wrote Hamilton on December 26, 1843, presenting a kind of double quaternion that is called an "octonion", and showed that they were what we now call a normed division algebra; Graves called them "octaves". Hamilton needed a way to distinguish between two different types of double quaternions, the associative biquaternions and the octaves. He spoke about them to the Royal Irish Society and credited his friend Graves for the discovery of the second type of double quaternion. Hamilton observed in reply that they were not associative, which may have been the invention of the concept. 
He also promised to get Graves' work published, but did little about it; Cayley, working independently of Graves, but inspired by Hamilton's publication of his own work, published on octonions in March 1845 – as an appendix to a paper on a different subject. Hamilton was stung into protesting Graves' priority in discovery, if not publication; nevertheless, octonions are known by the name Cayley gave them – or as "Cayley numbers". The major deduction from the existence of octonions was the eight squares theorem, which follows directly from the product rule for octonions; it had also been discovered previously, as a purely algebraic identity, by Carl Ferdinand Degen in 1818. This sum-of-squares identity is characteristic of composition algebras, a feature of complex numbers, quaternions, and octonions. Mathematical uses. Quaternions continued to be a well-studied "mathematical" structure in the twentieth century, as the third term in the Cayley–Dickson construction of hypercomplex number systems over the reals, followed by the octonions and the sedenions; they are also a useful tool in number theory, particularly in the study of the representation of numbers as sums of squares. The group of eight basic unit quaternions, positive and negative, the quaternion group, is also the simplest non-commutative Sylow group. The study of integral quaternions began with Rudolf Lipschitz in 1886, whose system was later simplified by Leonard Eugene Dickson; but the modern system was published by Adolf Hurwitz in 1919. The difference between them consists in which quaternions are counted as integral: Lipschitz included only those quaternions with integral coordinates, but Hurwitz added those quaternions "all four" of whose coordinates are half-integers. Both systems are closed under subtraction and multiplication, and are therefore rings, but Lipschitz's system does not permit unique factorization, while Hurwitz's does. Quaternions as rotations. Quaternions are a concise method of representing the automorphisms of three- and four-dimensional spaces. They have the technical advantage that unit quaternions form the simply connected cover of the space of three-dimensional rotations. For this reason, quaternions are used in computer graphics, control theory, robotics, signal processing, attitude control, physics, bioinformatics, and orbital mechanics. For example, it is common for spacecraft attitude-control systems to be commanded in terms of quaternions. "Tomb Raider" (1996) is often cited as the first mass-market computer game to have used quaternions to achieve smooth 3D rotation. Quaternions have received another boost from number theory because of their relation to quadratic forms. Memorial. Since 1989, the Department of Mathematics of the National University of Ireland, Maynooth has organized a pilgrimage, where scientists (including physicists Murray Gell-Mann in 2002, Steven Weinberg in 2005, Frank Wilczek in 2007, and mathematician Andrew Wiles in 2003) take a walk from Dunsink Observatory to the Royal Canal bridge where, unfortunately, no trace of Hamilton's carving remains. Notes. &lt;templatestyles src="Reflist/styles.css" /&gt;
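As a modern illustration of Hamilton's rules and of the rotation application mentioned above, here is a small self-contained Python sketch (not taken from any historical source): it multiplies quaternions according to i² = j² = k² = ijk = −1 and uses a unit quaternion to rotate a vector about the z-axis.

```python
import math

def qmul(a, b):
    # Hamilton product of quaternions (w, x, y, z), following
    # i^2 = j^2 = k^2 = ijk = -1.
    w1, x1, y1, z1 = a
    w2, x2, y2, z2 = b
    return (w1*w2 - x1*x2 - y1*y2 - z1*z2,
            w1*x2 + x1*w2 + y1*z2 - z1*y2,
            w1*y2 - x1*z2 + y1*w2 + z1*x2,
            w1*z2 + x1*y2 - y1*x2 + z1*w2)

def conj(q):
    w, x, y, z = q
    return (w, -x, -y, -z)

# Check a couple of the bridge relations: i*j = k and i*i = -1.
i, j, k = (0, 1, 0, 0), (0, 0, 1, 0), (0, 0, 0, 1)
assert qmul(i, j) == k
assert qmul(i, i) == (-1, 0, 0, 0)

# Rotate the vector (1, 0, 0) by 90 degrees about the z-axis using the
# unit quaternion q = cos(theta/2) + k sin(theta/2) and the map v -> q v q*.
theta = math.pi / 2
q = (math.cos(theta / 2), 0.0, 0.0, math.sin(theta / 2))
v = (0.0, 1.0, 0.0, 0.0)                 # pure quaternion for (1, 0, 0)
rotated = qmul(qmul(q, v), conj(q))
print(rotated)                           # ~ (0, 0, 1, 0), i.e. the vector (0, 1, 0)
```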
[ { "math_id": 0, "text": "\\mathcal{ A} = \\cos \\alpha + \\mathbf{A} \\sin \\alpha ," } ]
https://en.wikipedia.org/wiki?curid=15454890
1545790
Subgroup series
In mathematics, specifically group theory, a subgroup series of a group formula_0 is a chain of subgroups: formula_1 where formula_2 is the trivial subgroup. Subgroup series can simplify the study of a group to the study of simpler subgroups and their relations, and several subgroup series can be invariantly defined and are important invariants of groups. A subgroup series is used in the subgroup method. Subgroup series are a special example of the use of filtrations in abstract algebra. Definition. Normal series, subnormal series. A subnormal series (also normal series, normal tower, subinvariant series, or just series) of a group "G" is a sequence of subgroups, each a normal subgroup of the next one. In a standard notation formula_3 There is no requirement made that "A""i" be a normal subgroup of "G", only a normal subgroup of "A""i" +1. The quotient groups "A""i" +1/"A""i" are called the factor groups of the series. If in addition each "A""i" is normal in "G", then the series is called a normal series, when this term is not used for the weaker sense, or an invariant series. Length. A series with the additional property that "A""i" ≠ "A""i" +1 for all "i" is called a series "without repetition"; equivalently, each "A""i" is a proper subgroup of "A""i" +1. The "length" of a series is the number of strict inclusions "A""i" &lt; "A""i" +1. If the series has no repetition then the length is "n". For a subnormal series, the length is the number of non-trivial factor groups. Every nontrivial group has a normal series of length 1, namely formula_4, and any nontrivial proper normal subgroup gives a normal series of length 2. For simple groups, the trivial series of length 1 is the longest subnormal series possible. Ascending series, descending series. Series can be notated in either ascending order: formula_5 or descending order: formula_6 For a given finite series, there is no distinction between an "ascending series" or "descending series" beyond notation. For "infinite" series however, there is a distinction: the ascending series formula_7 has a smallest term, a second smallest term, and so forth, but no largest proper term, no second largest term, and so forth, while conversely the descending series formula_8 has a largest term, but no smallest proper term. Further, given a recursive formula for producing a series, the terms produced are either ascending or descending, and one calls the resulting series an ascending or descending series, respectively. For instance the derived series and lower central series are descending series, while the upper central series is an ascending series. Noetherian groups, Artinian groups. A group that satisfies the ascending chain condition (ACC) on subgroups is called a Noetherian group, and a group that satisfies the descending chain condition (DCC) is called an Artinian group (not to be confused with Artin groups), by analogy with Noetherian rings and Artinian rings. The ACC is equivalent to the maximal condition: every non-empty collection of subgroups has a maximal member, and the DCC is equivalent to the analogous minimal condition. A group can be Noetherian but not Artinian, such as the infinite cyclic group, and unlike for rings, a group can be Artinian but not Noetherian, such as the Prüfer group. Every finite group is clearly Noetherian and Artinian. Homomorphic images and subgroups of Noetherian groups are Noetherian, and an extension of a Noetherian group by a Noetherian group is Noetherian. Analogous results hold for Artinian groups. 
Noetherian groups are equivalently those such that every subgroup is finitely generated, which is stronger than the group itself being finitely generated: the free group on two or more (finitely many) generators is finitely generated, but contains free groups of infinite rank. Noetherian groups need not be finite extensions of polycyclic groups. Infinite and transfinite series. Infinite subgroup series can also be defined and arise naturally, in which case the specific (totally ordered) indexing set becomes important, and there is a distinction between ascending and descending series. An ascending series formula_7 where the formula_9 are indexed by the natural numbers may simply be called an infinite ascending series, and conversely for an infinite descending series. If the subgroups are more generally indexed by ordinal numbers, one obtains a transfinite series, such as this ascending series: formula_10 Given a recursive formula for producing a series, one can define a transfinite series by transfinite recursion by defining the series at limit ordinals by formula_11 (for ascending series) or formula_12 (for descending series). Fundamental examples of this construction are the transfinite lower central series and upper central series. Other totally ordered sets arise rarely, if ever, as indexing sets of subgroup series. For instance, one can define but rarely sees naturally occurring bi-infinite subgroup series (series indexed by the integers): formula_13 Comparison of series. A "refinement" of a series is another series containing each of the terms of the original series. Two subnormal series are said to be "equivalent" or "isomorphic" if there is a bijection between the sets of their factor groups such that the corresponding factor groups are isomorphic. Refinement gives a partial order on series, up to equivalence, and they form a lattice, while subnormal series and normal series form sublattices. The existence of the supremum of two subnormal series is the Schreier refinement theorem. Of particular interest are "maximal" series without repetition: a composition series is a subnormal series without repetition in which each of the "A""i" is a maximal normal subgroup of "A""i" +1. Equivalently, a composition series is a subnormal series for which each of the factor groups is simple. A nilpotent series (a subnormal series whose factor groups are nilpotent) exists if and only if the group is solvable. A central series (a normal series in which formula_14 for each "i") exists if and only if the group is nilpotent. Examples. Functional series. Some subgroup series are defined functionally, in terms of subgroups such as the center and operations such as the commutator. These include the derived series, the lower central series, and the upper central series. "p"-series. There are series coming from subgroups of prime power order or prime power index, related to ideas such as Sylow subgroups. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
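As a concrete, machine-checkable example of a composition series as described above, the following self-contained Python sketch verifies that 1 ◁ A3 ◁ S3 is a subnormal series of the symmetric group on three letters whose factor groups have prime orders 3 and 2 (hence are simple); representing permutations as tuples is just one convenient choice.

```python
from itertools import permutations

# Elements of S3 as tuples p, where p[i] is the image of i.
S3 = list(permutations(range(3)))
A3 = [(0, 1, 2), (1, 2, 0), (2, 0, 1)]     # the alternating (cyclic) subgroup
E  = [(0, 1, 2)]                           # the trivial subgroup

def compose(p, q):
    # (p o q)(i) = p(q(i))
    return tuple(p[q[i]] for i in range(3))

def inverse(p):
    inv = [0, 0, 0]
    for i, image in enumerate(p):
        inv[image] = i
    return tuple(inv)

def is_normal_in(H, G):
    # H is normal in G iff g h g^{-1} lies in H for every g in G, h in H.
    return all(compose(compose(g, h), inverse(g)) in H for g in G for h in H)

# The chain 1 <| A3 <| S3 is a subnormal series without repetition.
assert is_normal_in(E, A3) and is_normal_in(A3, S3)

# Its factor groups have orders 3 and 2, which are prime, so both factors
# are simple: the chain is a composition series of S3.
print(len(A3) // len(E), len(S3) // len(A3))   # prints: 3 2
```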
[ { "math_id": 0, "text": "G" }, { "math_id": 1, "text": "1 = A_0 \\leq A_1 \\leq \\cdots \\leq A_n = G" }, { "math_id": 2, "text": "1" }, { "math_id": 3, "text": "1 = A_0\\triangleleft A_1\\triangleleft \\cdots \\triangleleft A_n = G." }, { "math_id": 4, "text": "1 \\triangleleft G" }, { "math_id": 5, "text": "1 = A_0\\leq A_1\\leq \\cdots \\leq A_n = G" }, { "math_id": 6, "text": "G = B_0\\geq B_1\\geq \\cdots \\geq B_n = 1." }, { "math_id": 7, "text": "1 = A_0\\leq A_1\\leq \\cdots \\leq G" }, { "math_id": 8, "text": "G = B_0\\geq B_1\\geq \\cdots \\geq 1" }, { "math_id": 9, "text": "A_i" }, { "math_id": 10, "text": "1 = A_0\\leq A_1\\leq \\cdots \\leq A_\\omega \\leq A_{\\omega+1} = G" }, { "math_id": 11, "text": "A_\\lambda := \\bigcup_{\\alpha < \\lambda} A_\\alpha" }, { "math_id": 12, "text": "A_\\lambda := \\bigcap_{\\alpha < \\lambda} A_\\alpha" }, { "math_id": 13, "text": "1 \\leq \\cdots \\leq A_{-1} \\leq A_0\\leq A_1 \\leq \\cdots \\leq G" }, { "math_id": 14, "text": "A_{i+1}/A_i \\subseteq Z(G/A_i)" }, { "math_id": 15, "text": "i=0, 1, \\ldots, n-2" } ]
https://en.wikipedia.org/wiki?curid=1545790
1545816
Leslie matrix
Age-structured model of population growth The Leslie matrix is a discrete, age-structured model of population growth that is very popular in population ecology, named after Patrick H. Leslie. The Leslie matrix (also called the Leslie model) is one of the most well-known ways to describe the growth of populations (and their projected age distribution), in which a population is closed to migration, growing in an unlimited environment, and where only one sex, usually the female, is considered. The Leslie matrix is used in ecology to model the changes in a population of organisms over a period of time. In a Leslie model, the population is divided into groups based on age classes. A similar model which replaces age classes with ontogenetic stages is called a Lefkovitch matrix, whereby individuals can either remain in the same stage class or move on to the next one. At each time step, the population is represented by a vector with an element for each age class, where each element indicates the number of individuals currently in that class. The Leslie matrix is a square matrix with the same number of rows and columns as the population vector has elements. The (i,j)th cell in the matrix indicates how many individuals will be in the age class "i" at the next time step for each individual in stage "j". At each time step, the population vector is multiplied by the Leslie matrix to generate the population vector for the subsequent time step. To build a matrix, the following information must be known from the population: the count formula_0 of individuals in each age class "x", the fraction formula_1 of individuals of age class "x" that survive to age class "x"+1, and the fecundity formula_2 of age class "x"; if formula_4 denotes the per-capita number of offspring produced by individuals entering age class "x"+1, then formula_5. From the observations that formula_3 at time "t+1" is simply the sum of all offspring born from the previous time step and that the organisms surviving to time "t+1" are the organisms at time "t" surviving at probability formula_1, one gets formula_6. This implies the following matrix representation: formula_7 where formula_8 is the maximum age attainable in the population. This can be written as: formula_9 or: formula_10 where formula_11 is the population vector at time "t" and formula_12 is the Leslie matrix. The dominant eigenvalue of formula_12, denoted formula_13, gives the population's asymptotic growth rate (growth rate at the stable age distribution). The corresponding eigenvector provides the stable age distribution, the proportion of individuals of each age within the population, which remains constant at this point of asymptotic growth barring changes to vital rates. Once the stable age distribution has been reached, a population undergoes exponential growth at rate formula_13. The characteristic polynomial of the matrix is given by the Euler–Lotka equation. The Leslie model is very similar to a discrete-time Markov chain. The main difference is that in a Markov model, one would have formula_14 for each formula_15, while the Leslie model may have these sums greater or less than 1. Stable age structure. This age-structured growth model suggests a steady-state, or stable, age-structure and growth rate. Regardless of the initial population size, formula_16, or age distribution, the population tends asymptotically to this age-structure and growth rate. It also returns to this state following perturbation. The Euler–Lotka equation provides a means of identifying the intrinsic growth rate. The stable age-structure is determined both by the growth rate and the survival function (i.e. the Leslie matrix). For example, a population with a large intrinsic growth rate will have a disproportionately “young” age-structure. A population with high mortality rates at all ages (i.e. 
low survival) will have a similar age-structure. Random Leslie model. There is a generalization of the population growth rate to the case in which the Leslie matrix has random elements, which may be correlated. When characterizing the disorder, or uncertainties, in vital parameters, a perturbative formalism has to be used to deal with linear non-negative random matrix difference equations. The non-trivial effective eigenvalue which defines the long-term asymptotic dynamics of the mean-value population state vector can then be presented as the effective growth rate. This eigenvalue and the associated mean-value invariant state vector can be calculated from the smallest positive root of a secular polynomial and the residue of the mean-valued Green function. Exact and perturbative results can thus be analyzed for several models of disorder. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
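A minimal numerical sketch of the deterministic Leslie model described above, written with NumPy; the three age classes and their vital rates are made-up illustrative values, not data from any study.

```python
import numpy as np

# Illustrative vital rates for three age classes (assumed values).
f = [0.0, 1.5, 1.0]        # fecundities f_0, f_1, f_2
s = [0.6, 0.4]             # survival probabilities s_0, s_1

# Leslie matrix: fecundities on the first row, survivals on the subdiagonal.
L = np.array([
    [f[0], f[1], f[2]],
    [s[0], 0.0,  0.0 ],
    [0.0,  s[1], 0.0 ],
])

n = np.array([100.0, 50.0, 25.0])    # initial population vector

# Project the population forward: n_{t+1} = L n_t.
for t in range(50):
    n = L @ n

# Dominant eigenvalue = asymptotic growth rate; its eigenvector,
# normalized to sum to 1, is the stable age distribution.
eigvals, eigvecs = np.linalg.eig(L)
k = np.argmax(eigvals.real)
lam = eigvals[k].real
stable_age = np.abs(eigvecs[:, k].real)
stable_age /= stable_age.sum()

print("asymptotic growth rate:", round(lam, 4))
print("stable age distribution:", np.round(stable_age, 4))
# After many projections, each age class grows by a factor ~lambda per step.
print("observed ratio n_{t+1}/n_t:", np.round((L @ n) / n, 4))
```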
[ { "math_id": 0, "text": "n_x" }, { "math_id": 1, "text": "s_x" }, { "math_id": 2, "text": "f_x" }, { "math_id": 3, "text": "n_0" }, { "math_id": 4, "text": "b_{x+ 1}" }, { "math_id": 5, "text": "f_x = s_xb_{x+1}." }, { "math_id": 6, "text": " n_{x+1} = s_xn_x" }, { "math_id": 7, "text": "\n \\begin{bmatrix}\n n_0 \\\\\n n_1 \\\\\n \\vdots \\\\\n n_{\\omega - 1} \\\\\n \\end{bmatrix}_{t+1}\n=\n \\begin{bmatrix}\nf_0 & f_1 & f_2 & \\ldots & f_{\\omega - 2} & f_{\\omega - 1} \\\\\ns_0 & 0 & 0 & \\ldots & 0 & 0\\\\\n0 & s_1 & 0 & \\ldots & 0 & 0\\\\\n0 & 0 & s_2 & \\ldots & 0 & 0\\\\\n\\vdots & \\vdots & \\vdots & \\ddots & \\vdots & \\vdots\\\\\n0 & 0 & 0 & \\ldots & s_{\\omega - 2} & 0\n \\end{bmatrix}\n \\begin{bmatrix}\n n_0 \\\\ n_1 \\\\ \\vdots\\\\ n_{\\omega - 1}\n \\end{bmatrix}_{t}\n" }, { "math_id": 8, "text": "\\omega" }, { "math_id": 9, "text": "\\mathbf{n}_{t+1} = \\mathbf{L}\\mathbf{n}_t" }, { "math_id": 10, "text": "\\mathbf{n}_{t} = \\mathbf{L}^t\\mathbf{n}_0" }, { "math_id": 11, "text": "\\mathbf{n}_t" }, { "math_id": 12, "text": "\\mathbf{L}" }, { "math_id": 13, "text": "\\lambda" }, { "math_id": 14, "text": "f_x+s_x=1" }, { "math_id": 15, "text": "x" }, { "math_id": 16, "text": "N_0" } ]
https://en.wikipedia.org/wiki?curid=1545816
1546092
Electrical mobility
Electrical mobility is the ability of charged particles (such as electrons or protons) to move through a medium in response to an electric field that is pulling them. The separation of ions according to their mobility in the gas phase is called ion mobility spectrometry; in the liquid phase it is called electrophoresis. Theory. When a charged particle in a gas or liquid is acted upon by a uniform electric field, it will be accelerated until it reaches a constant drift velocity according to the formula formula_0 where formula_1 is the drift velocity (SI units: m/s), formula_2 is the magnitude of the applied electric field (V/m), and formula_3 is the mobility (m²/(V·s)). In other words, the electrical mobility of the particle is defined as the ratio of the drift velocity to the magnitude of the electric field: formula_4 For example, the mobility of the sodium ion (Na+) in water at 25 °C is about 5.19×10⁻⁸ m²/(V·s). This means that a sodium ion in an electric field of 1 V/m would have an average drift velocity of about 5.19×10⁻⁸ m/s (roughly 0.2 mm per hour). Such values can be obtained from measurements of ionic conductivity in solution. Electrical mobility is proportional to the net charge of the particle. This was the basis for Robert Millikan's demonstration that electrical charges occur in discrete units, whose magnitude is the charge of the electron. Electrical mobility is also inversely proportional to the Stokes radius formula_5 of the ion, which is the effective radius of the moving ion including any molecules of water or other solvent that move with it. This is true because the solvated ion moving at a constant drift velocity formula_6 is subject to two equal and opposite forces: an electrical force formula_7 and a frictional force formula_8, where formula_9 is the frictional coefficient and formula_10 is the solution viscosity. For different ions with the same charge such as Li+, Na+ and K+ the electrical forces are equal, so that the drift speed and the mobility are inversely proportional to the radius formula_5. In fact, conductivity measurements show that ionic mobility "increases" from Li+ to Cs+, and therefore that the Stokes radius "decreases" from Li+ to Cs+. This is the opposite of the order of ionic radii for crystals and shows that in solution the smaller ions (Li+) are more extensively hydrated than the larger (Cs+). Mobility in gas phase. Mobility is also defined for any species in the gas phase, a situation encountered mostly in plasma physics, as formula_11 where formula_12 is the charge of the species, formula_13 is the momentum-transfer collision frequency, and formula_14 is the mass. Mobility is related to the species' diffusion coefficient formula_15 through an exact (thermodynamically required) equation known as the Einstein relation: formula_16 where formula_17 is the Boltzmann constant and formula_18 is the absolute temperature. If one defines the mean free path in terms of momentum transfer, then one gets for the diffusion coefficient formula_19 But both the "momentum-transfer mean free path" and the "momentum-transfer collision frequency" are difficult to calculate. Many other mean free paths can be defined. In the gas phase, formula_20 is often defined as the diffusional mean free path, by assuming that a simple approximate relation is exact: formula_21 where formula_22 is the root mean square speed of the gas molecules: formula_23 where formula_14 is the mass of the diffusing species. This approximate equation becomes exact when used to define the diffusional mean free path. Applications. Electrical mobility is the basis for electrostatic precipitation, used to remove particles from exhaust gases on an industrial scale. The particles are given a charge by exposing them to ions from an electrical discharge in the presence of a strong field. The particles acquire an electrical mobility and are driven by the field to a collecting electrode. 
Instruments exist which select particles with a narrow range of electrical mobility, or particles with electrical mobility larger than a predefined value. The former are generally referred to as "differential mobility analyzers". The selected mobility is often identified with the diameter of a singly charged spherical particle, thus the "electrical-mobility diameter" becomes a characteristic of the particle, regardless of whether it is actually spherical. Passing particles of the selected mobility to a detector such as a condensation particle counter allows the number concentration of particles with the currently selected mobility to be measured. By varying the selected mobility over time, mobility vs concentration data may be obtained. This technique is applied in scanning mobility particle sizers. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
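A small Python sketch of the relations above: it computes the drift velocity of a singly charged ion from its mobility and an applied field, and the diffusion coefficient from the Einstein relation. The mobility value is the Na+ figure quoted in the Theory section; the field strength is an arbitrary illustrative choice.

```python
# Physical constants.
k_B = 1.380649e-23      # J/K, Boltzmann constant
e   = 1.602176634e-19   # C, elementary charge

# Inputs: mobility of Na+ in water at 25 °C (quoted above) and a chosen field.
mu = 5.19e-8            # m^2/(V*s)
E  = 100.0              # V/m, illustrative field strength
T  = 298.15             # K

# Drift velocity: v_d = mu * E.
v_d = mu * E            # m/s

# Einstein relation: mu = q D / (k T), so D = mu * k * T / q for a +1 ion.
D = mu * k_B * T / e    # m^2/s, ~1.3e-9 m^2/s for these inputs

print(f"drift velocity      v_d = {v_d:.3e} m/s")
print(f"diffusion coefficient D = {D:.3e} m^2/s")
```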
[ { "math_id": 0, "text": "v_\\text{d} = \\mu E," }, { "math_id": 1, "text": "v_\\text{d}" }, { "math_id": 2, "text": "E" }, { "math_id": 3, "text": "\\mu" }, { "math_id": 4, "text": "\\mu = \\frac{v_\\text{d}}{E}." }, { "math_id": 5, "text": "a" }, { "math_id": 6, "text": "s" }, { "math_id": 7, "text": "zeE" }, { "math_id": 8, "text": "F_\\text{drag} = fs = (6 \\pi \\eta a)s" }, { "math_id": 9, "text": "f" }, { "math_id": 10, "text": "\\eta" }, { "math_id": 11, "text": "\\mu = \\frac{q}{m \\nu_\\text{m}}," }, { "math_id": 12, "text": "q" }, { "math_id": 13, "text": "\\nu_\\text{m}" }, { "math_id": 14, "text": "m" }, { "math_id": 15, "text": "D" }, { "math_id": 16, "text": "\\mu = \\frac{q}{kT} D," }, { "math_id": 17, "text": "k" }, { "math_id": 18, "text": "T" }, { "math_id": 19, "text": "D = \\frac{\\pi}{8} \\lambda^2 \\nu_\\text{m}." }, { "math_id": 20, "text": "\\lambda" }, { "math_id": 21, "text": "D = \\frac{1}{2} \\lambda v," }, { "math_id": 22, "text": "v" }, { "math_id": 23, "text": "v = \\sqrt{\\frac{3kT}{m}}," } ]
https://en.wikipedia.org/wiki?curid=1546092
154616
Negative number
Real number that is strictly less than zero In mathematics, a negative number represents an opposite. In the real number system, a negative number is a number that is less than zero. Negative numbers are often used to represent the magnitude of a loss or deficiency. A debt that is owed may be thought of as a negative asset. If a quantity, such as the charge on an electron, may have either of two opposite senses, then one may choose to distinguish between those senses—perhaps arbitrarily—as "positive" and "negative". Negative numbers are used to describe values on a scale that goes below zero, such as the Celsius and Fahrenheit scales for temperature. The laws of arithmetic for negative numbers ensure that the common-sense idea of an opposite is reflected in arithmetic. For example, −(−3) = 3 because the opposite of an opposite is the original value. Negative numbers are usually written with a minus sign in front. For example, −3 represents a negative quantity with a magnitude of three, and is pronounced "minus three" or "negative three". To help tell the difference between a subtraction operation and a negative number, occasionally the negative sign is placed slightly higher than the minus sign (as a superscript). Conversely, a number that is greater than zero is called "positive"; zero is usually (but not always) thought of as neither positive nor negative. The positivity of a number may be emphasized by placing a plus sign before it, e.g. +3. In general, the negativity or positivity of a number is referred to as its sign. Every real number other than zero is either positive or negative. The non-negative whole numbers are referred to as natural numbers (i.e., 0, 1, 2, 3...), while the positive and negative whole numbers (together with zero) are referred to as integers. (Some definitions of the natural numbers exclude zero.) In bookkeeping, amounts owed are often represented by red numbers, or a number in parentheses, as an alternative notation to represent negative numbers. Negative numbers were used in the "Nine Chapters on the Mathematical Art", which in its present form dates from the period of the Chinese Han dynasty (202 BC – AD 220), but may well contain much older material. Liu Hui (c. 3rd century) established rules for adding and subtracting negative numbers. By the 7th century, Indian mathematicians such as Brahmagupta were describing the use of negative numbers. Islamic mathematicians further developed the rules of subtracting and multiplying negative numbers and solved problems with negative coefficients. Prior to the concept of negative numbers, mathematicians such as Diophantus considered negative solutions to problems "false" and equations requiring negative solutions were described as absurd. Western mathematicians like Leibniz held that negative numbers were invalid, but still used them in calculations. Introduction. The number line. The relationship between negative numbers, positive numbers, and zero is often expressed in the form of a number line: Numbers appearing farther to the right on this line are greater, while numbers appearing farther to the left are lesser. Thus zero appears in the middle, with the positive numbers to the right and the negative numbers to the left. Note that a negative number with greater magnitude is considered less. 
For example, even though (positive) 8 is greater than (positive) 5, written &lt;templatestyles src="Block indent/styles.css"/&gt;8 &gt; 5 negative 8 is considered to be less than negative 5: &lt;templatestyles src="Block indent/styles.css"/&gt;−8 &lt; −5. Signed numbers. In the context of negative numbers, a number that is greater than zero is referred to as positive. Thus every real number other than zero is either positive or negative, while zero itself is not considered to have a sign. Positive numbers are sometimes written with a plus sign in front, e.g. +3 denotes a positive three. Because zero is neither positive nor negative, the term nonnegative is sometimes used to refer to a number that is either positive or zero, while nonpositive is used to refer to a number that is either negative or zero. Zero is a neutral number. As the result of subtraction. Negative numbers can be thought of as resulting from the subtraction of a larger number from a smaller. For example, negative three is the result of subtracting three from zero: &lt;templatestyles src="Block indent/styles.css"/&gt;0 − 3  =  −3. In general, the subtraction of a larger number from a smaller yields a negative result, with the magnitude of the result being the difference between the two numbers. For example, &lt;templatestyles src="Block indent/styles.css"/&gt;5 − 8  =  −3 since 8 − 5 = 3. Everyday uses of negative numbers. Sport. Negative golf scores relative to par. Arithmetic involving negative numbers. The minus sign "−" signifies the operator for both the binary (two-operand) operation of subtraction (as in "y" − "z") and the unary (one-operand) operation of negation (as in −"x", or twice in −(−"x")). A special case of unary negation occurs when it operates on a positive number, in which case the result is a negative number (as in −5). The ambiguity of the "−" symbol does not generally lead to ambiguity in arithmetical expressions, because the order of operations makes only one interpretation or the other possible for each "−". However, it can lead to confusion and be difficult for a person to understand an expression when operator symbols appear adjacent to one another. A solution can be to parenthesize the unary "−" along with its operand. For example, the expression 7 + −5 may be clearer if written 7 + (−5) (even though they mean exactly the same thing formally). The subtraction expression 7 – 5 is a different expression that doesn't represent the same operations, but it evaluates to the same result. Sometimes in elementary schools a number may be prefixed by a superscript minus sign or plus sign to explicitly distinguish negative and positive numbers as in &lt;templatestyles src="Block indent/styles.css"/&gt;−2 + −5  gives −7. Addition. Addition of two negative numbers is very similar to addition of two positive numbers. For example, &lt;templatestyles src="Block indent/styles.css"/&gt;(−3) + (−5)  =  −8. The idea is that two debts can be combined into a single debt of greater magnitude. When adding together a mixture of positive and negative numbers, one can think of the negative numbers as positive quantities being subtracted. For example: &lt;templatestyles src="Block indent/styles.css"/&gt;8 + (−3)  =  8 − 3  =  5  and (−2) + 7  =  7 − 2  =  5. In the first example, a credit of 8 is combined with a debt of 3, which yields a total credit of 5. 
If the negative number has greater magnitude, then the result is negative: &lt;templatestyles src="Block indent/styles.css"/&gt;(−8) + 3  =  3 − 8  =  −5  and 2 + (−7)  =  2 − 7  =  −5. Here the credit is less than the debt, so the net result is a debt. Subtraction. As discussed above, it is possible for the subtraction of two non-negative numbers to yield a negative answer: &lt;templatestyles src="Block indent/styles.css"/&gt;5 − 8  =  −3 In general, subtraction of a positive number yields the same result as the addition of a negative number of equal magnitude. Thus &lt;templatestyles src="Block indent/styles.css"/&gt;5 − 8  =  5 + (−8)  =  −3 and &lt;templatestyles src="Block indent/styles.css"/&gt;(−3) − 5  =  (−3) + (−5)  =  −8 On the other hand, subtracting a negative number yields the same result as the addition of a positive number of equal magnitude. (The idea is that "losing" a debt is the same thing as "gaining" a credit.) Thus &lt;templatestyles src="Block indent/styles.css"/&gt;3 − (−5)  =  3 + 5  =  8 and &lt;templatestyles src="Block indent/styles.css"/&gt;(−5) − (−8)  =  (−5) + 8  =  3. Multiplication. When multiplying numbers, the magnitude of the product is always just the product of the two magnitudes. The sign of the product is determined by the following rules: the product of two numbers with the same sign is positive, and the product of two numbers with opposite signs is negative. Thus &lt;templatestyles src="Block indent/styles.css"/&gt;(−2) × 3  =  −6 and &lt;templatestyles src="Block indent/styles.css"/&gt;(−2) × (−3)  =  6. The reason behind the first example is simple: adding three −2's together yields −6: &lt;templatestyles src="Block indent/styles.css"/&gt;(−2) × 3  =  (−2) + (−2) + (−2)  =  −6. The reasoning behind the second example is more complicated. The idea again is that losing a debt is the same thing as gaining a credit. In this case, losing two debts of three each is the same as gaining a credit of six: &lt;templatestyles src="Block indent/styles.css"/&gt;(−2 debts ) × (−3 each)  =  +6 credit. The convention that a product of two negative numbers is positive is also necessary for multiplication to follow the distributive law. In this case, we know that &lt;templatestyles src="Block indent/styles.css"/&gt;(−2) × (−3)  +  2 × (−3)  =  (−2 + 2) × (−3)  =  0 × (−3)  =  0. Since 2 × (−3) = −6, the product (−2) × (−3) must equal 6. These rules lead to another (equivalent) rule: the sign of any product "a" × "b" depends on the sign of "a" as follows: if "a" is positive, the sign of "a" × "b" is the same as the sign of "b", and if "a" is negative, the sign of "a" × "b" is the opposite of the sign of "b". The justification for why the product of two negative numbers is a positive number can be observed in the analysis of complex numbers. Division. The sign rules for division are the same as for multiplication. For example, &lt;templatestyles src="Block indent/styles.css"/&gt;8 ÷ (−2)  =  −4, &lt;templatestyles src="Block indent/styles.css"/&gt;(−8) ÷ 2  =  −4, and &lt;templatestyles src="Block indent/styles.css"/&gt;(−8) ÷ (−2)  =  4. If dividend and divisor have the same sign, the result is positive; if they have different signs, the result is negative. Negation. The negative version of a positive number is referred to as its negation. For example, −3 is the negation of the positive number 3. The sum of a number and its negation is equal to zero: &lt;templatestyles src="Block indent/styles.css"/&gt;3 + (−3)  =  0. That is, the negation of a positive number is the additive inverse of the number. Using algebra, we may write this principle as an algebraic identity: &lt;templatestyles src="Block indent/styles.css"/&gt;"x" + (−"x") =  0. This identity holds for any positive number "x". 
It can be made to hold for all real numbers by extending the definition of negation to include zero and negative numbers. Specifically: the negation of 0 is 0, and the negation of a negative number is the corresponding positive number. For example, the negation of −3 is +3. In general, &lt;templatestyles src="Block indent/styles.css"/&gt;−(−"x")  =  "x". The absolute value of a number is the non-negative number with the same magnitude. For example, the absolute value of −3 and the absolute value of 3 are both equal to 3, and the absolute value of 0 is 0. Formal construction of negative integers. In a similar manner to rational numbers, we can extend the natural numbers N to the integers Z by defining integers as ordered pairs of natural numbers ("a", "b"). We can extend addition and multiplication to these pairs with the following rules: &lt;templatestyles src="Block indent/styles.css"/&gt;("a", "b") + ("c", "d") = ("a" + "c", "b" + "d") &lt;templatestyles src="Block indent/styles.css"/&gt;("a", "b") × ("c", "d") = ("a" × "c" + "b" × "d", "a" × "d" + "b" × "c") We define an equivalence relation ~ upon these pairs with the following rule: &lt;templatestyles src="Block indent/styles.css"/&gt;("a", "b") ~ ("c", "d") if and only if "a" + "d" = "b" + "c". This equivalence relation is compatible with the addition and multiplication defined above, and we may define Z to be the quotient set N²/~, i.e. we identify two pairs ("a", "b") and ("c", "d") if they are equivalent in the above sense. Note that Z, equipped with these operations of addition and multiplication, is a ring, and is, in fact, the prototypical example of a ring. We can also define a total order on Z by writing &lt;templatestyles src="Block indent/styles.css"/&gt;("a", "b") ≤ ("c", "d") if and only if "a" + "d" ≤ "b" + "c". This will lead to an "additive zero" of the form ("a", "a"), an "additive inverse" of ("a", "b") of the form ("b", "a"), a multiplicative unit of the form ("a" + 1, "a"), and a definition of subtraction &lt;templatestyles src="Block indent/styles.css"/&gt;("a", "b") − ("c", "d") = ("a" + "d", "b" + "c"). This construction is a special case of the Grothendieck construction. Uniqueness. The additive inverse of a number is unique, as is shown by the following proof. As mentioned above, an additive inverse of a number is defined as a value which when added to the number yields zero. Let "x" be a number and let "y" be its additive inverse. Suppose "y′" is another additive inverse of "x". By definition, formula_0 And so, "x" + "y′" = "x" + "y". Using the law of cancellation for addition, it is seen that "y′" = "y". Thus "y" is equal to any other additive inverse of "x". That is, "y" is the unique additive inverse of "x". History. For a long time, understanding of negative numbers was delayed by the impossibility of having a negative-number amount of a physical object, for example "minus-three apples", and negative solutions to problems were considered "false". In Hellenistic Egypt, the Greek mathematician Diophantus in the 3rd century AD referred to an equation that was equivalent to formula_1 (which has a negative solution) in "Arithmetica", saying that the equation was absurd. For this reason Greek geometers were able to solve geometrically all forms of the quadratic equation which give positive roots, while they could take no account of others. Negative numbers appear for the first time in history in the "Nine Chapters on the Mathematical Art" (九章算術, "Jiǔ zhāng suàn-shù"), which in its present form dates from the Han period, but may well contain much older material. The mathematician Liu Hui (c. 
3rd century) established rules for the addition and subtraction of negative numbers. The historian Jean-Claude Martzloff theorized that the importance of duality in Chinese natural philosophy made it easier for the Chinese to accept the idea of negative numbers. The Chinese were able to solve simultaneous equations involving negative numbers. The "Nine Chapters" used red counting rods to denote positive coefficients and black rods for negative. This system is the exact opposite of contemporary printing of positive and negative numbers in the fields of banking, accounting, and commerce, wherein red numbers denote negative values and black numbers signify positive values. Liu Hui writes: &lt;templatestyles src="Template:Blockquote/styles.css" /&gt; The ancient Indian "Bakhshali Manuscript" carried out calculations with negative numbers, using "+" as a negative sign. The date of the manuscript is uncertain. LV Gurjar dates it no later than the 4th century, Hoernle dates it between the third and fourth centuries, Ayyangar and Pingree date it to the 8th or 9th centuries, and George Gheverghese Joseph dates it to about AD 400 and no later than the early 7th century. During the 7th century AD, negative numbers were used in India to represent debts. The Indian mathematician Brahmagupta, in "Brahma-Sphuta-Siddhanta" (written c. AD 630), discussed the use of negative numbers to produce the general form of the quadratic formula that remains in use today. In the 9th century, Islamic mathematicians were familiar with negative numbers from the works of Indian mathematicians, but the recognition and use of negative numbers during this period remained timid. Al-Khwarizmi in his "Al-jabr wa'l-muqabala" (from which the word "algebra" derives) did not use negative numbers or negative coefficients. But within fifty years, Abu Kamil illustrated the rules of signs for expanding the multiplication formula_2, and al-Karaji wrote in his "al-Fakhrī" that "negative quantities must be counted as terms". In the 10th century, Abū al-Wafā' al-Būzjānī considered debts as negative numbers in "A Book on What Is Necessary from the Science of Arithmetic for Scribes and Businessmen". By the 12th century, al-Karaji's successors were to state the general rules of signs and use them to solve polynomial divisions. As al-Samaw'al writes: the product of a negative number—"al-nāqiṣ" (loss)—by a positive number—"al-zāʾid" (gain)—is negative, and by a negative number is positive. If we subtract a negative number from a higher negative number, the remainder is their negative difference. The difference remains positive if we subtract a negative number from a lower negative number. If we subtract a negative number from a positive number, the remainder is their positive sum. If we subtract a positive number from an empty power ("martaba khāliyya"), the remainder is the same negative, and if we subtract a negative number from an empty power, the remainder is the same positive number. In the 12th century in India, Bhāskara II gave negative roots for quadratic equations but rejected them because they were inappropriate in the context of the problem. He stated that a negative value is "in this case not to be taken, for it is inadequate; people do not approve of negative roots." Fibonacci allowed negative solutions in financial problems where they could be interpreted as debits (chapter 13 of "Liber Abaci", 1202) and later as losses (in "Flos", 1225). 
In the 15th century, Nicolas Chuquet, a Frenchman, used negative numbers as exponents but referred to them as "absurd numbers". Michael Stifel dealt with negative numbers in his 1544 AD "Arithmetica Integra", where he also called them "numeri absurdi" (absurd numbers). In 1545, Gerolamo Cardano, in his "Ars Magna", provided the first satisfactory treatment of negative numbers in Europe. He did not allow negative numbers in his consideration of cubic equations, so he had to treat, for example, formula_3 separately from formula_4 (with formula_5 in both cases). In all, Cardano was driven to the study of thirteen types of cubic equations, each with all negative terms moved to the other side of the = sign to make them positive. (Cardano also dealt with complex numbers, but understandably liked them even less.) See also. &lt;templatestyles src="Div col/styles.css"/&gt; References. Citations. &lt;templatestyles src="Reflist/styles.css" /&gt; Bibliography. &lt;templatestyles src="Refbegin/styles.css" /&gt;
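As a brief illustration, the ordered-pair construction of the integers described above can be sketched in a few lines of Python. This is only an illustrative sketch; the function names and the sample pairs are chosen here for the example and are not part of any standard library.

```python
# A pair (a, b) of natural numbers stands for the integer a - b.

def add(p, q):
    (a, b), (c, d) = p, q
    return (a + c, b + d)                  # (a, b) + (c, d) = (a + c, b + d)

def mul(p, q):
    (a, b), (c, d) = p, q
    return (a * c + b * d, a * d + b * c)  # (a, b) x (c, d)

def equivalent(p, q):
    (a, b), (c, d) = p, q
    return a + d == b + c                  # (a, b) ~ (c, d) iff a + d = b + c

# (2, 5) represents -3 and (4, 1) represents +3; their sum is equivalent to the additive zero (0, 0).
assert equivalent(add((2, 5), (4, 1)), (0, 0))
# The additive inverse of (a, b) is (b, a).
assert equivalent(add((2, 5), (5, 2)), (0, 0))
```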
[ { "math_id": 0, "text": "x + y' = 0, \\quad \\text{and} \\quad x + y = 0." }, { "math_id": 1, "text": "4x + 20 = 4" }, { "math_id": 2, "text": "(a \\pm b)(c \\pm d)" }, { "math_id": 3, "text": "x^3 + a x = b" }, { "math_id": 4, "text": "x^3 = a x + b" }, { "math_id": 5, "text": "a, b > 0" } ]
https://en.wikipedia.org/wiki?curid=154616
15462
Integral domain
Commutative ring with no zero divisors other than zero In mathematics, an integral domain is a nonzero commutative ring in which the product of any two nonzero elements is nonzero. Integral domains are generalizations of the ring of integers and provide a natural setting for studying divisibility. In an integral domain, every nonzero element "a" has the cancellation property, that is, if "a" ≠ 0, an equality "ab" = "ac" implies "b" = "c". "Integral domain" is defined almost universally as above, but there is some variation. This article follows the convention that rings have a multiplicative identity, generally denoted 1, but some authors do not follow this, by not requiring integral domains to have a multiplicative identity. Noncommutative integral domains are sometimes admitted. This article, however, follows the much more usual convention of reserving the term "integral domain" for the commutative case and using "domain" for the general case including noncommutative rings. Some sources, notably Lang, use the term entire ring for integral domain. Some specific kinds of integral domains are given with the following chain of class inclusions: rngs ⊃ rings ⊃ commutative rings ⊃ integral domains ⊃ integrally closed domains ⊃ GCD domains ⊃ unique factorization domains ⊃ principal ideal domains ⊃ Euclidean domains ⊃ fields ⊃ algebraically closed fields Definition. An "integral domain" is a nonzero commutative ring in which the product of any two nonzero elements is nonzero. Equivalently: Non-examples. The following rings are "not" integral domains. Neither formula_40 nor formula_41 is everywhere zero, but formula_42 is. Divisibility, prime elements, and irreducible elements. In this section, "R" is an integral domain. Given elements "a" and "b" of "R", one says that "a" "divides" "b", or that "a" is a "divisor" of "b", or that "b" is a "multiple" of "a", if there exists an element "x" in "R" such that "ax" = "b". The "units" of "R" are the elements that divide 1; these are precisely the invertible elements in "R". Units divide all other elements. If "a" divides "b" and "b" divides "a", then "a" and "b" are associated elements or associates. Equivalently, "a" and "b" are associates if "a" = "ub" for some unit "u". An "irreducible element" is a nonzero non-unit that cannot be written as a product of two non-units. A nonzero non-unit "p" is a "prime element" if, whenever "p" divides a product "ab", then "p" divides "a" or "p" divides "b". Equivalently, an element "p" is prime if and only if the principal ideal ("p") is a nonzero prime ideal. Both notions of irreducible elements and prime elements generalize the ordinary definition of prime numbers in the ring formula_50 if one considers as prime the negative primes. Every prime element is irreducible. The converse is not true in general: for example, in the quadratic integer ring formula_51 the element 3 is irreducible (if it factored nontrivially, the factors would each have to have norm 3, but there are no norm 3 elements since formula_52 has no integer solutions), but not prime (since 3 divides formula_53 without dividing either factor). In a unique factorization domain (or more generally, a GCD domain), an irreducible element is a prime element. While unique factorization does not hold in formula_51, there is unique factorization of ideals. See Lasker–Noether theorem. Field of fractions.
The field of fractions "K" of an integral domain "R" is the set of fractions "a"/"b" with "a" and "b" in "R" and "b" ≠ 0 modulo an appropriate equivalence relation, equipped with the usual addition and multiplication operations. It is "the smallest field containing "R"" in the sense that there is an injective ring homomorphism "R" → "K" such that any injective ring homomorphism from "R" to a field factors through "K". The field of fractions of the ring of integers formula_0 is the field of rational numbers formula_54 The field of fractions of a field is isomorphic to the field itself. Algebraic geometry. Integral domains are characterized by the condition that they are reduced (that is "x"2 = 0 implies "x" = 0) and irreducible (that is there is only one minimal prime ideal). The former condition ensures that the nilradical of the ring is zero, so that the intersection of all the ring's minimal primes is zero. The latter condition is that the ring have only one minimal prime. It follows that the unique minimal prime ideal of a reduced and irreducible ring is the zero ideal, so such rings are integral domains. The converse is clear: an integral domain has no nonzero nilpotent elements, and the zero ideal is the unique minimal prime ideal. This translates, in algebraic geometry, into the fact that the coordinate ring of an affine algebraic set is an integral domain if and only if the algebraic set is an algebraic variety. More generally, a commutative ring is an integral domain if and only if its spectrum is an integral affine scheme. Characteristic and homomorphisms. The characteristic of an integral domain is either 0 or a prime number. If "R" is an integral domain of prime characteristic "p", then the Frobenius endomorphism "x" ↦ "x""p" is injective. Notes. &lt;templatestyles src="Reflist/styles.css" /&gt; Citations. &lt;templatestyles src="Reflist/styles.css" /&gt; References. &lt;templatestyles src="Refbegin/styles.css" /&gt;
[ { "math_id": 0, "text": "\\Z" }, { "math_id": 1, "text": "\\R" }, { "math_id": 2, "text": "\\Z \\supset 2\\Z \\supset \\cdots \\supset 2^n\\Z \\supset 2^{n+1}\\Z \\supset \\cdots" }, { "math_id": 3, "text": "\\Z[x]" }, { "math_id": 4, "text": "\\Complex[x_1,\\ldots,x_n]" }, { "math_id": 5, "text": "\\Complex[x,y]/(y^2 - x(x-1)(x-2))" }, { "math_id": 6, "text": "y^2 - x(x-1)(x-2)" }, { "math_id": 7, "text": "\\Z[x]/(x^2 - n) \\cong \\Z[\\sqrt{n}]" }, { "math_id": 8, "text": "n" }, { "math_id": 9, "text": "n > 0" }, { "math_id": 10, "text": "\\Complex." }, { "math_id": 11, "text": "\\Z_p" }, { "math_id": 12, "text": "U" }, { "math_id": 13, "text": "\\Complex" }, { "math_id": 14, "text": "\\mathcal{H}(U)" }, { "math_id": 15, "text": "0=1" }, { "math_id": 16, "text": "\\Z/m\\Z" }, { "math_id": 17, "text": "m = xy" }, { "math_id": 18, "text": "x" }, { "math_id": 19, "text": "y" }, { "math_id": 20, "text": "1" }, { "math_id": 21, "text": "m" }, { "math_id": 22, "text": "x \\not\\equiv 0 \\bmod{m}" }, { "math_id": 23, "text": "y \\not\\equiv 0 \\bmod{m}" }, { "math_id": 24, "text": "xy \\equiv 0 \\bmod{m}" }, { "math_id": 25, "text": "R \\times S" }, { "math_id": 26, "text": "(1,0) \\cdot (0,1) = (0,0)" }, { "math_id": 27, "text": "\\Z[x]/(x^2 - n^2)" }, { "math_id": 28, "text": "n \\in \\mathbb{Z}" }, { "math_id": 29, "text": "x+n" }, { "math_id": 30, "text": "x-n" }, { "math_id": 31, "text": "M" }, { "math_id": 32, "text": "N" }, { "math_id": 33, "text": "MN = 0" }, { "math_id": 34, "text": "M = N = (\\begin{smallmatrix} 0 & 1 \\\\ 0 & 0 \\end{smallmatrix})" }, { "math_id": 35, "text": "k[x_1,\\ldots,x_n]/(fg)" }, { "math_id": 36, "text": "k" }, { "math_id": 37, "text": "f,g \\in k[x_1,\\ldots,x_n]" }, { "math_id": 38, "text": "(fg)" }, { "math_id": 39, "text": " f(x) = \\begin{cases} 1-2x & x \\in \\left [0, \\tfrac{1}{2} \\right ] \\\\ 0 & x \\in \\left [\\tfrac{1}{2}, 1 \\right ] \\end{cases} \\qquad g(x) = \\begin{cases} 0 & x \\in \\left [0, \\tfrac{1}{2} \\right ] \\\\ 2x-1 & x \\in \\left [\\tfrac{1}{2}, 1 \\right ] \\end{cases}" }, { "math_id": 40, "text": "f" }, { "math_id": 41, "text": "g" }, { "math_id": 42, "text": "fg" }, { "math_id": 43, "text": "\\Complex \\otimes_{\\R} \\Complex" }, { "math_id": 44, "text": "e_1 = \\tfrac{1}{2}(1 \\otimes 1) - \\tfrac{1}{2}(i \\otimes i)" }, { "math_id": 45, "text": "e_2 = \\tfrac{1}{2}(1 \\otimes 1) + \\tfrac{1}{2}(i \\otimes i)" }, { "math_id": 46, "text": "e_1e_2 = 0" }, { "math_id": 47, "text": "\\Complex \\times \\Complex \\to \\Complex \\otimes_{\\R} \\Complex" }, { "math_id": 48, "text": "(z, w) \\mapsto z \\cdot e_1 + w \\cdot e_2" }, { "math_id": 49, "text": "z \\otimes w \\mapsto (zw, z\\overline{w})" }, { "math_id": 50, "text": "\\Z," }, { "math_id": 51, "text": "\\Z\\left[\\sqrt{-5}\\right]" }, { "math_id": 52, "text": "a^2+5b^2=3" }, { "math_id": 53, "text": "\\left(2 + \\sqrt{-5}\\right)\\left(2 - \\sqrt{-5}\\right)" }, { "math_id": 54, "text": "\\Q." } ]
https://en.wikipedia.org/wiki?curid=15462
1546202
Vienna Standard Mean Ocean Water
Standard defining the isotopic composition of ocean water Vienna Standard Mean Ocean Water (VSMOW) is an isotopic standard for water, that is, a particular sample of water whose proportions of different isotopes of hydrogen and oxygen are accurately known. VSMOW is distilled from ocean water and does not contain salt or other impurities. Published and distributed by the Vienna-based International Atomic Energy Agency in 1968, the standard and its essentially identical successor, VSMOW2, continue to be used as a reference material. Water samples made up of different isotopes of hydrogen and oxygen have slightly different physical properties. As an extreme example, heavy water, which contains two deuterium (2H) atoms instead of the usual, lighter hydrogen-1 (1H), has a melting point of 3.82 °C and a boiling point of 101.4 °C. Different rates of evaporation cause water samples from different places in the water cycle to contain slightly different ratios of isotopes. Ocean water (richer in heavy isotopes) and rain water (poorer in heavy isotopes) roughly represent the two extremes found on Earth. With VSMOW, the IAEA simultaneously published an analogous standard for rain water, Standard Light Antarctic Precipitation (SLAP), and eventually its successor SLAP2. SLAP contains about 5% less oxygen-18 and 42.8% less deuterium than VSMOW. A scale based on VSMOW and SLAP is used to report oxygen-18 and deuterium concentrations. From 2005 until its redefinition in 2019, the kelvin was specified to be 1/273.16 of the thermodynamic temperature of the triple point of water, with the water specified to have the VSMOW composition. History and background. Abundances of a particular isotope in a substance are usually given relative to some reference material, as a delta in parts per thousand (‰) from the reference. For example, the ratio of deuterium (2H) to hydrogen-1 in a substance "x" may be given as formula_0, where formula_1 denotes the absolute concentration in "x". In 1961, pursuing a standard for measuring and reporting deuterium and oxygen-18 concentrations, Harmon Craig of the Scripps Institution of Oceanography in San Diego, California, proposed an abstract water standard. He based the proportions on his measurements of samples taken by of ocean waters around the world. Approximating an average of their measurements, Craig "defined" his "standard mean ocean water" (SMOW) relative to a water sample held in the United States' National Bureau of Standards called NBS-1 (sampled from the Potomac River). In particular, SMOW had the following parameters relative to NBS-1: Later, researchers at the California Institute of Technology defined another abstract reference, also called "SMOW", for oxygen-18 concentrations, such that a sample of Potsdam Sandstone in their possession satisfied δ18O sandstone/SMOW = 15.5‰. To resolve the confusion, a November 1966 meeting of the Vienna-based International Atomic Energy Agency (IAEA) recommended the preparation of two water isotopic standards: Vienna SMOW (VSMOW; initially just "SMOW" but later disambiguated) and Standard Light Antarctic Precipitation (SLAP). Craig prepared VSMOW by mixing distilled Pacific Ocean water with small amounts of other waters. VSMOW was intended to match the SMOW standard as closely as possible. Craig's measurements found an identical 18O concentration and a 0.2‰ lower 2H concentration. The SLAP standard was created from a melted firn sample from Plateau Station in Antarctica.
A standard with oxygen-18 and deuterium concentrations between those of VSMOW and SLAP, called Greenland Ice Sheet Precipitation (GISP), was also prepared. The IAEA began distributing samples in 1968, and compiled analyses of VSMOW and SLAP from 45 laboratories around the world. The VSMOW sample was stored in a stainless-steel container under nitrogen and was transferred to glass ampoules in 1977. The deuterium and oxygen-18 concentrations in VSMOW are close to the upper end of naturally occurring materials, and the concentrations in SLAP are close to the lower end. Due to confusion over multiple water standards, the Commission on Isotopic Abundances and Atomic Weights recommended in 1994 that all future isotopic measurements of oxygen-18 (18O) and deuterium (2H) be reported relative to VSMOW, on a scale such that the δ18O of SLAP is −55.5‰ and the δ2H of SLAP is −428‰, relative to VSMOW. Therefore, SLAP is defined to contain 94.45% the oxygen-18 concentration and 57.2% the deuterium concentration of VSMOW. Using a scale with two defined samples improves comparison of results between laboratories. In December 1996, because of a dwindling supply of VSMOW, the IAEA decided to create a replacement standard, VSMOW2. Published in 1999, it contains a nearly identical isotopic mixture. About 300 liters was prepared from a mixture of distilled waters, from Lake Bracciano in Italy, the Sea of Galilee in Israel, and a well in Egypt, in proportions chosen to reach VSMOW isotopic ratios. The IAEA also published a successor to SLAP, called SLAP2, derived from melted water from four Antarctic drilling sites. Deviations of 17O and 18O in the new standards from the old standards are zero within the error of measurement. There is a small but measurable deviation of 2H concentration in SLAP2 from SLAP—δ2HSLAP2/VSMOW is defined to be −427.5‰ instead of −428‰—but not in VSMOW2 from VSMOW. The IAEA recommends that measurements still be reported on the VSMOW–SLAP scale. The older two standards are now kept at the IAEA and no longer sold. Measurements. All measurements are reported with their standard uncertainty. Measurements of particular combinations of oxygen and hydrogen isotopes are unnecessary because water molecules constantly exchange atoms with each other. VSMOW. Except for tritium, which was determined by the helium gas emitted by radioactive decay, these measurements were taken using mass spectrometry. SLAP. Based on the results of , the IAEA defined the delta scale with SLAP at −55.5‰ for 18O and −428‰ for 2H. That is, SLAP was measured to contain approximately 5.55% less oxygen-18 and 42.8% less deuterium than does VSMOW, and these figures were used to anchor the scale at two points. Experimental figures are given below: δ2H = −428.5 ± 0.4‰ (about 1 in 11230 atoms), δ18O = −55.5‰ (about 1 in 528 atoms), and δ17O = −28.86 ± 0.1‰ (about 1 in 3700 atoms). VSMOW2 and SLAP2. The concentrations of 17O and 18O are indistinguishable between VSMOW and VSMOW2, and between SLAP and SLAP2. The specification sheet gives the standard errors in these measurements. The concentration of 2H is unchanged in VSMOW2 as well, but is slightly increased in SLAP2. The IAEA reports: On 6 July 2007, the tritium concentration was 3.5 ± 1.0 TU in VSMOW2, and 27.6 ± 1.6 TU in SLAP2. GISP. δ2H = −189.5 ± 1.2‰, δ18O = −24.66 ± 0.09‰, and δ17O = −12.71 ± 0.1‰. Applications. Reporting isotopic ratios. The VSMOW–SLAP scale is recommended by the USGS, IUPAC, and IAEA for measurement of deuterium and 18O concentrations in any substance.
For 18O, a scale based on Vienna Pee Dee Belemnite can also be used. The physical samples, which are distributed by the IAEA and U.S. National Institute of Standards and Technology, are used to calibrate isotope-measuring equipment. Variations in isotopic content are useful in hydrology, meteorology, and oceanography. Different parts of the ocean do have slightly different isotopic concentrations: δ 18O values range from –11.35‰ in water off the coast of Greenland to +1.32‰ in the north Atlantic, and δ 2H concentrations in deep ocean water range from roughly –1.7‰ near Antarctica to +2.2‰ in the Arctic. Variations are much larger in surface water than in deep water. Temperature measurements. In 1954, the International Committee for Weights and Measures (CIPM) established the definition of the Kelvin as 1/273.16 of the absolute temperature of the triple point of water. Waters with different isotopic compositions had slightly different triple points. Thus, the International Committee for Weights and Measures specified in 2005 that the definition of the kelvin temperature scale would refer to water with a composition of the nominal specification of VSMOW. The decision was welcomed in 2007 by Resolution 10 of the 23rd CGPM. The triple point is measured in triple-point cells, where the water is held at its triple point and allowed to reach equilibrium with its surroundings. Using ordinary waters, the range of inter-laboratory measurements of the triple point can be about 250 μK. With VSMOW, the inter-laboratory range of measurements of the triple point is about 50 μK. After the 2019 redefinition of the SI base units, the kelvin is defined in terms of the Boltzmann constant, which makes its definition completely independent of the properties of water. The defined value for the Boltzmann constant was selected so that the measured value of the VSMOW triple point is identical to the prior defined value, within measurable accuracy. Triple-point cells remain a practical method of calibrating thermometers. Notes. &lt;templatestyles src="Reflist/styles.css" /&gt; Sources. &lt;templatestyles src="Refbegin/styles.css" /&gt;
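The delta notation defined in the history section can be computed directly from isotope ratios. The short Python sketch below is illustrative only: the reference ratio used here is an assumed, commonly quoted VSMOW-like 2H/1H value rather than a figure taken from this article, and the sample is chosen to reproduce the SLAP anchor point of the scale.

```python
# delta (in per mil) = (R_sample / R_reference - 1) * 1000

def delta_permil(r_sample, r_reference):
    return (r_sample / r_reference - 1.0) * 1000.0

R_REFERENCE = 155.76e-6              # assumed 2H/1H ratio of the reference water
r_sample = 0.572 * R_REFERENCE       # a sample with 57.2% of the reference deuterium

print(delta_permil(r_sample, R_REFERENCE))   # approximately -428, the SLAP anchor value
```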
[ { "math_id": 0, "text": "\\delta\\,^2\\mathrm{H}_{x/\\text{reference}}\\text{ (in ‰)} = \\left(\\frac{(^2\\mathrm{H} / ^1\\mathrm{H})_x}{(^2\\mathrm{H} / ^1\\mathrm{H})_\\text{reference}}-1\\right)\\cdot 1000" }, { "math_id": 1, "text": "(^2\\mathrm{H} / ^1\\mathrm{H})_x" } ]
https://en.wikipedia.org/wiki?curid=1546202
15462488
Fermi–Ulam model
Dynamical system The Fermi–Ulam model (FUM) is a dynamical system that was introduced by Polish mathematician Stanislaw Ulam in 1961. FUM is a variant of Enrico Fermi's primary work on acceleration of cosmic rays, namely Fermi acceleration. The system consists of a particle that bounces elastically between a fixed wall and a moving one, each of infinite mass. The walls represent the magnetic mirrors with which the cosmic particles collide. A. J. Lichtenberg and M. A. Lieberman provided a simplified version of FUM (SFUM) that derives from the Poincaré surface of section formula_0 and writes formula_1 formula_2 where formula_3 is the velocity of the particle after the formula_4-th collision with the fixed wall, formula_5 is the corresponding phase of the moving wall, formula_6 is the velocity law of the moving wall and formula_7 is the stochasticity parameter of the system. If the velocity law of the moving wall is differentiable enough, then according to the KAM theorem invariant curves exist in the phase space formula_8. These invariant curves act as barriers that prevent a particle from accelerating further, and the average velocity of a population of particles saturates after finite iterations of the map. For instance, such curves exist for a sinusoidal velocity law of the moving wall, but not for a discontinuous sawtooth velocity law. Consequently, in the first case particles cannot accelerate indefinitely, in contrast to what happens in the latter. Over the years FUM became a prototype model for studying non-linear dynamics and coupled mappings. The rigorous solution of the Fermi-Ulam problem (the velocity and energy of the particle are bounded) was given first by L. D. Pustyl'nikov in (see also and references therein). In spite of these negative results, if one considers the Fermi–Ulam model in the framework of the special theory of relativity, then under some general conditions the energy of the particle tends to infinity for an open set of initial data. 2D generalization. Though the 1D Fermi–Ulam model does not lead to acceleration for smooth oscillations, unbounded energy growth has been observed in 2D billiards with oscillating boundaries. The growth rate of energy in chaotic billiards is found to be much larger than that in billiards that are integrable in the static limit. A strongly chaotic billiard with an oscillating boundary can serve as a paradigm for driven chaotic systems. In the experimental arena this topic arises in the theory of "nuclear friction", and more recently in the studies of cold atoms that are trapped in "optical billiards". The driving induces diffusion in energy, and consequently the absorption coefficient is determined by the Kubo formula. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
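As a rough illustration of the simplified map quoted above, the following Python sketch iterates it with a sinusoidal wall-velocity law. The parameter values (initial velocity and phase, the stochasticity parameter, the number of steps) are arbitrary choices made here for the example, and the sketch assumes the velocity stays away from zero.

```python
import math

def iterate_sfum(u0, phi0, M, k=2.0 * math.pi, steps=10000):
    # Iterate u_{n+1} = |u_n + U_wall(phi_n)|, phi_{n+1} = phi_n + k*M/u_{n+1} (mod k).
    u, phi = u0, phi0
    for _ in range(steps):
        u = abs(u + math.sin(phi))      # sinusoidal velocity law U_wall(phi) = sin(phi)
        phi = (phi + k * M / u) % k     # phase of the moving wall at the next collision
    return u

# With a smooth (sinusoidal) wall law, KAM curves bound the motion, so the
# velocity saturates instead of growing without limit as the map is iterated.
print(iterate_sfum(u0=1.0, phi0=0.3, M=10.0))
```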
[ { "math_id": 0, "text": "x=const." }, { "math_id": 1, "text": "u_{n+1}=|u_n+U_\\mathrm{wall}(\\varphi_n)| " }, { "math_id": 2, "text": "\\varphi_{n+1}=\\varphi_n+\\frac{kM}{u_{n+1}} \\pmod k," }, { "math_id": 3, "text": "u_n" }, { "math_id": 4, "text": "n" }, { "math_id": 5, "text": "\\varphi_n" }, { "math_id": 6, "text": "U_\\mathrm{wall}" }, { "math_id": 7, "text": "M" }, { "math_id": 8, "text": "(\\varphi,u)" } ]
https://en.wikipedia.org/wiki?curid=15462488
15464235
Coulomb gap
Physical phenomenon First introduced by M. Pollak, the Coulomb gap is a soft gap in the single-particle density of states (DOS) of a system of interacting localized electrons. Due to the long-range Coulomb interactions, the single-particle DOS vanishes at the chemical potential, at low enough temperatures, such that thermal excitations do not wash out the gap. Theory. At zero temperature, a classical treatment of a system gives an upper bound for the DOS near the Fermi energy, first suggested by Efros and Shklovskii. The argument is as follows: Let us look at the ground state configuration of the system. Defining formula_0 as the energy of an electron at site formula_1, due to the disorder and the Coulomb interaction with all other electrons (we define this both for occupied and unoccupied sites), it is easy to see that the energy needed to move an electron from an occupied site formula_1 to an unoccupied site formula_2 is given by the expression: formula_3. The subtraction of the last term accounts for the fact that formula_4 contains a term due to the interaction with the electron present at site formula_1, but after moving the electron this term should not be considered. It is easy to see from this that there exists an energy formula_5 such that all sites with energies above it are empty, and below it are full (this is the Fermi energy, but since we are dealing with a system with interactions it is not obvious a-priori that it is still well-defined). Assume we have a finite single-particle DOS at the Fermi energy, formula_6. For every possible transfer of an electron from an occupied site formula_1 to an unoccupied site formula_2, the energy invested should be positive, since we are assuming we are in the ground state of the system, i.e., formula_7. Assuming we have a large system, consider all the sites with energies in the interval formula_8 The number of these, by assumption, is formula_9 As explained, formula_10 of these would be occupied, and the others unoccupied. Of all pairs of occupied and unoccupied sites, let us choose the one where the two are closest to each other. If we assume the sites are randomly distributed in space, we find that the distance between these two sites is of order: formula_11, where formula_12 is the dimension of space. Plugging the expression for formula_13 into the previous equation, we obtain the inequality: formula_14 where formula_15 is a coefficient of order unity. Since formula_16, this inequality will necessarily be violated for small enough formula_17. Hence, assuming a finite DOS at formula_5 led to a contradiction. Repeating the above calculation under the assumption that the DOS near formula_5 is proportional to formula_18 shows that formula_19. This is an upper bound for the Coulomb gap. Efros considered single electron excitations, and obtained an integro-differential equation for the DOS, showing the Coulomb gap in fact follows the above equation (i.e., the upper bound is a tight bound). Other treatments of the problem include a mean-field numerical approach, as well as more recent treatments such as, also verifying the upper bound suggested above is a tight bound. Many Monte Carlo simulations were also performed, some of them in disagreement with the result quoted above. Few works deal with the quantum aspect of the problem. Classical Coulomb gap in clean system without disorder is well captured within Extended Dynamical Mean Field Theory (EDMFT) supported by Metropolis Monte Carlo simulations. Experimental observations. 
Direct experimental confirmation of the gap has come from tunneling experiments, which probed the single-particle DOS in two and three dimensions. The experiments clearly showed a linear gap in two dimensions, and a parabolic gap in three dimensions. Another experimental consequence of the Coulomb gap is found in the conductivity of samples in the localized regime. The existence of a gap in the spectrum of excitations would result in a lower conductivity than that predicted by Mott variable-range hopping. If one uses the analytical expression of the single-particle DOS in the Mott derivation, a universal formula_20 dependence is obtained, for any dimension. The observation of this is expected to occur below a certain temperature, such that the optimal energy of hopping would be smaller than the width of the Coulomb gap. The transition from Mott to so-called Efros–Shklovskii variable-range hopping has been observed experimentally for various systems. Nevertheless, no rigorous derivation of the Efros–Shklovskii conductivity formula has been put forth, and in some experiments formula_21 behavior is observed, with a value of formula_22 that fits neither the Mott nor the Efros–Shklovskii theories. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
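The crossover discussed above, from Mott variable-range hopping to the Efros–Shklovskii law, can be illustrated numerically. In the Python sketch below the characteristic temperatures are arbitrary assumed values and prefactors are omitted; only the temperature dependence of the two exponents is compared.

```python
import math

T0_MOTT = 1.0e4   # assumed Mott characteristic temperature (K)
T_ES = 1.0e2      # assumed Efros-Shklovskii characteristic temperature (K)

def sigma_mott(T, d=3):
    # Mott variable-range hopping: ln(sigma) ~ -(T0/T)^(1/(d+1)), i.e. 1/4 in 3D
    return math.exp(-(T0_MOTT / T) ** (1.0 / (d + 1)))

def sigma_es(T):
    # Efros-Shklovskii law: ln(sigma) ~ -(T_ES/T)^(1/2)
    return math.exp(-math.sqrt(T_ES / T))

for T in (100.0, 10.0, 1.0, 0.1):
    print(f"T = {T:6.1f} K   Mott: {sigma_mott(T):.3e}   ES: {sigma_es(T):.3e}")
# At low enough temperature the 1/2 exponent dominates the 1/4 exponent,
# which is why the Coulomb gap eventually controls the hopping conductivity.
```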
[ { "math_id": 0, "text": " E_i " }, { "math_id": 1, "text": " i " }, { "math_id": 2, "text": " j " }, { "math_id": 3, "text": "\\Delta E=E_j-E_i-e^2/r_{ij} " }, { "math_id": 4, "text": " E_j " }, { "math_id": 5, "text": " E_f " }, { "math_id": 6, "text": " g(E_f) " }, { "math_id": 7, "text": "\\Delta E>=0 " }, { "math_id": 8, "text": " [E_f-\\epsilon, E_f+\\epsilon]. " }, { "math_id": 9, "text": " N= 2 \\epsilon g(E_f). " }, { "math_id": 10, "text": " N/2" }, { "math_id": 11, "text": " R \\sim (N/V)^{-1/d} " }, { "math_id": 12, "text": " d " }, { "math_id": 13, "text": " N " }, { "math_id": 14, "text": " E_j-E_i-C e^2 (\\epsilon g(E_f)/V)^{1/d} >0 " }, { "math_id": 15, "text": " C" }, { "math_id": 16, "text": " E_j-E_i <2\\epsilon " }, { "math_id": 17, "text": " \\epsilon " }, { "math_id": 18, "text": " (E-E_f)^\\alpha " }, { "math_id": 19, "text": " \\alpha>=d-1 " }, { "math_id": 20, "text": " e^{-1/T^{1/2}} " }, { "math_id": 21, "text": " e^{-1/T^{\\alpha}} " }, { "math_id": 22, "text": " \\alpha " } ]
https://en.wikipedia.org/wiki?curid=15464235
1546530
Breather
In physics, a breather is a nonlinear wave in which energy concentrates in a localized and oscillatory fashion. This contradicts the expectations derived from the corresponding linear system for infinitesimal amplitudes, which tends towards an even distribution of initially localized energy. A discrete breather is a breather solution on a nonlinear lattice. The term breather originates from the characteristic that most breathers are localized in space and oscillate (breathe) in time. The opposite situation, oscillations in space that are localized in time, is also denoted as a breather. Overview. A breather is a localized periodic solution of either continuous media equations or discrete lattice equations. The exactly solvable sine-Gordon equation and the focusing nonlinear Schrödinger equation are examples of one-dimensional partial differential equations that possess breather solutions. Discrete nonlinear Hamiltonian lattices in many cases support breather solutions. Breathers are solitonic structures. There are two types of breathers: standing or traveling ones. Standing breathers correspond to localized solutions whose amplitude varies in time (they are sometimes called oscillons). A necessary condition for the existence of breathers in discrete lattices is that the breather's main frequency and all its multiples are located outside of the phonon spectrum of the lattice. Example of a breather solution for the sine-Gordon equation. The sine-Gordon equation is the nonlinear dispersive partial differential equation formula_0 with the field "u" a function of the spatial coordinate "x" and time "t". An exact solution found by using the inverse scattering transform is: formula_1 which, for "ω &lt; 1", is periodic in time "t" and decays exponentially when moving away from "x = 0". Example of a breather solution for the nonlinear Schrödinger equation. The focusing nonlinear Schrödinger equation is the dispersive partial differential equation: formula_2 with "u" a complex field as a function of "x" and "t". Further "i" denotes the imaginary unit. One of the breather solutions (the Akhmediev breather) is formula_3 with formula_4 which gives breathers periodic in space "x" and approaching the uniform value "a" when moving away from the focus time "t" = 0. These breathers exist for values of the modulation parameter "b" less than √2. Note that a limiting case of the breather solution is the Peregrine soliton. References and notes. &lt;templatestyles src="Reflist/styles.css" /&gt;
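To make the sine-Gordon example above concrete, the following Python sketch evaluates the breather solution at a few points; the choice ω = 0.5 and the sample coordinates are arbitrary values used only for illustration.

```python
import math

def breather(x, t, omega=0.5):
    # u(x, t) = 4*arctan( sqrt(1 - w^2)*cos(w*t) / (w*cosh(sqrt(1 - w^2)*x)) ), valid for w < 1
    root = math.sqrt(1.0 - omega ** 2)
    return 4.0 * math.atan(root * math.cos(omega * t) / (omega * math.cosh(root * x)))

# The field is localized around x = 0 and oscillates ("breathes") in time.
for t in (0.0, math.pi, 2.0 * math.pi):
    profile = [round(breather(x, t), 3) for x in (-10.0, -2.0, 0.0, 2.0, 10.0)]
    print(f"t = {t:5.2f}  u = {profile}")
```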
[ { "math_id": 0, "text": "\\frac{\\partial^2 u}{\\partial t^2} - \\frac{\\partial^2 u}{\\partial x^2} + \\sin u = 0," }, { "math_id": 1, "text": "u = 4 \\arctan\\left(\\frac{\\sqrt{1-\\omega^2}\\;\\cos(\\omega t)}{\\omega\\;\\cosh(\\sqrt{1-\\omega^2}\\; x)}\\right)," }, { "math_id": 2, "text": "i\\,\\frac{\\partial u}{\\partial t} + \\frac{\\partial^2 u}{\\partial x^2} + |u|^2 u = 0," }, { "math_id": 3, "text": "\n u =\n \\left(\n \\frac{2 b^2 \\cosh(\\theta) + 2 i b \\sqrt{2-b^2} \\sinh(\\theta)}\n {2 \\cosh(\\theta)-\\sqrt{2}\\sqrt{2-b^2} \\cos(a b x)} - 1\n \\right)a\\,e^{i a^2 t}\n" }, { "math_id": 4, "text": "\n \\theta=a^2\\,b\\,\\sqrt{2-b^2}\\;t,\n" } ]
https://en.wikipedia.org/wiki?curid=1546530
15465998
Mogensen–Scott encoding
Way to represent data types in the lambda calculus In computer science, Scott encoding is a way to represent (recursive) data types in the lambda calculus. Church encoding performs a similar function. The data and operators form a mathematical structure which is embedded in the lambda calculus. Whereas Church encoding starts with representations of the basic data types, and builds up from them, Scott encoding starts from the simplest method to compose algebraic data types. Mogensen–Scott encoding extends and slightly modifies Scott encoding by applying the encoding to metaprogramming. This encoding allows the representation of lambda calculus terms, as data, to be operated on by a meta program. History. Scott encoding appears first in a set of unpublished lecture notes by Dana Scott whose first citation occurs in the book "Combinatory Logic, Volume II". Michel Parigot gave a logical interpretation of, and a strongly normalizing recursor for, Scott-encoded numerals, referring to them as the "Stack type" representation of numbers. Torben Mogensen later extended Scott encoding for the encoding of lambda terms as data. Discussion. Lambda calculus allows data to be stored as parameters to a function that does not yet have all the parameters required for application. For example, formula_0 may be thought of as a record or struct where the fields formula_1 have been initialized with the values formula_2. These values may then be accessed by applying the term to a function "f". This reduces to formula_3 "c" may represent a constructor for an algebraic data type in functional languages such as Haskell. Now suppose there are "N" constructors, each with formula_4 arguments; formula_5 Each constructor selects a different function from the function parameters formula_6. This provides branching in the process flow, based on the constructor. Each constructor may have a different arity (number of parameters). If the constructors have no parameters then the set of constructors acts like an "enum"; a type with a fixed number of values. If the constructors have parameters, recursive data structures may be constructed. Definition. Let "D" be a datatype with "N" constructors, formula_7, such that constructor formula_8 has arity formula_4. Scott encoding. The Scott encoding of constructor formula_8 of the data type "D" is formula_9 Mogensen–Scott encoding. Mogensen extends Scott encoding to encode any untyped lambda term as data. This allows a lambda term to be represented as data, within a lambda calculus meta program. The meta function "mse" converts a lambda term into the corresponding data representation of the lambda term; formula_10 The "lambda term" is represented as a tagged union with three cases: For example, formula_11 Comparison to the Church encoding. The Scott encoding coincides with the Church encoding for booleans. Church encoding of pairs may be generalized to arbitrary data types by encoding formula_8 of "D" above as formula_12 compare this to the Mogensen Scott encoding, formula_13 With this generalization, the Scott and Church encodings coincide on all enumerated datatypes (such as the boolean datatype) because each constructor is a constant (no parameters).
Concerning the practicality of using either the Church or Scott encoding for programming, there is a symmetric trade-off: Church-encoded numerals support a constant-time addition operation and have no better than a linear-time predecessor operation; Scott-encoded numerals support a constant-time predecessor operation and have no better than a linear-time addition operation. Type definitions. Church-encoded data and operations on them are typable in system F, as are Scott-encoded data and operations. However, the encoding is significantly more complicated. The type of the Scott encoding of the natural numbers is the positive recursive type: formula_14 Full recursive types are not part of System F, but positive recursive types are expressible in System F via the encoding: formula_15 Combining these two facts yields the System F type of the Scott encoding: formula_16 This can be contrasted with the type of the Church encoding: formula_17 The Church encoding is a second-order type, but the Scott encoding is fourth-order! Notes. &lt;templatestyles src="Reflist/styles.css" /&gt;
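The constant-time predecessor mentioned above is easy to demonstrate with Scott-encoded natural numbers. The Python sketch below is illustrative only, and it uses an uncurried two-argument form of the encoding (a numeral receives a zero branch and a successor branch) rather than the curried lambda-calculus form given earlier.

```python
# Scott-encoded naturals: a numeral picks one of two branches.
zero = lambda z, s: z
succ = lambda n: (lambda z, s: s(n))

def pred(n):
    # Constant time: the successor branch simply returns the stored predecessor.
    return n(zero, lambda m: m)

def to_int(n):
    # Decoding for display is linear time, as expected.
    return n(0, lambda m: 1 + to_int(m))

three = succ(succ(succ(zero)))
print(to_int(three))         # 3
print(to_int(pred(three)))   # 2
print(to_int(pred(zero)))    # 0 (predecessor of zero is chosen to be zero here)
```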
[ { "math_id": 0, "text": " ((\\lambda x_1 \\ldots x_n.\\lambda c.c\\ x_1 \\ldots x_n)\\ v_1 \\ldots v_n)\\ f " }, { "math_id": 1, "text": " x_1 \\ldots x_n " }, { "math_id": 2, "text": " v_1 \\ldots v_n " }, { "math_id": 3, "text": " f\\ v_1 \\ldots v_n " }, { "math_id": 4, "text": "A_i" }, { "math_id": 5, "text": "\\begin{array}{c|c|c}\n\\text{Constructor} & \\text{Given arguments} & \\text{Result} \\\\\n\\hline\n((\\lambda x_1 \\ldots x_{A_1}.\\lambda c_1 \\ldots c_N.c_1\\ x_1 \\ldots x_{A_1})\\ v_1 \\ldots v_{A_1}) &\nf_1 \\ldots f_N &\nf_1\\ v_1 \\ldots v_{A_1} \\\\\n((\\lambda x_1 \\ldots x_{A_2}.\\lambda c_1 \\ldots c_N.c_2\\ x_1 \\ldots x_{A_2})\\ v_1 \\ldots v_{A_2}) &\nf_1 \\ldots f_N &\nf_2\\ v_1 \\ldots v_{A_2} \\\\\n\\vdots & \\vdots & \\vdots \\\\\n\n((\\lambda x_1 \\ldots x_{A_N}.\\lambda c_1 \\ldots c_N.c_N\\ x_1 \\ldots x_{A_N})\\ v_1 \\ldots v_{A_N}) &\nf_1 \\ldots f_N &\nf_N\\ v_1 \\ldots v_{A_N}\n\\end{array}" }, { "math_id": 6, "text": " f_1 \\ldots f_N " }, { "math_id": 7, "text": "\\{c_i\\}_{i=1}^N" }, { "math_id": 8, "text": "c_i" }, { "math_id": 9, "text": "\\lambda x_1 \\ldots x_{A_i} . \\lambda c_1 \\ldots c_N . c_i\\ x_1 \\ldots x_{A_i}" }, { "math_id": 10, "text": "\\begin{align}\n \\operatorname{mse}[x] & = \\lambda a, b, c.a\\ x \\\\\n \\operatorname{mse}[M\\ N] & = \\lambda a, b, c.b\\ \\operatorname{mse}[M]\\ \\operatorname{mse}[N] \\\\\n \\operatorname{mse}[\\lambda x . M] & = \\lambda a, b, c.c\\ (\\lambda x.\\operatorname{mse}[M]) \\\\\n\\end{align}" }, { "math_id": 11, "text": " \\begin{array}{l}\n \\operatorname{mse}[\\lambda x.f\\ (x\\ x)]\\\\\n \\lambda a, b, c.c\\ (\\lambda x.\\operatorname{mse}[f\\ (x\\ x)])\\\\\n \\lambda a, b, c.c\\ (\\lambda x.\\lambda a, b, c.b\\ \\operatorname{mse}[f]\\ \\operatorname{mse}[x\\ x])\\\\\n \\lambda a, b, c.c\\ (\\lambda x.\\lambda a, b, c.b\\ (\\lambda a, b, c.a\\ f)\\ \\operatorname{mse}[x\\ x])\\\\\n \\lambda a, b, c.c\\ (\\lambda x.\\lambda a, b, c.b\\ (\\lambda a, b, c.a\\ f)\\ (\\lambda a, b, c.b\\ \\operatorname{mse}[x]\\ \\operatorname{mse}[x]))\\\\\n \\lambda a, b, c.c\\ (\\lambda x.\\lambda a, b, c.b\\ (\\lambda a, b, c.a\\ f)\\ (\\lambda a, b, c.b\\ (\\lambda a, b, c.a\\ x)\\ (\\lambda a, b, c.a\\ x)))\n\\end{array} " }, { "math_id": 12, "text": "\\lambda x_1 \\ldots x_{A_i} . \\lambda c_1 \\ldots c_N . c_i (x_1 c_1 \\ldots c_N) \\ldots (x_{A_i} c_1 \\ldots c_N)" }, { "math_id": 13, "text": "\\lambda x_1 \\ldots x_{A_i} . \\lambda c_1 \\ldots c_N . c_i x_1 \\ldots x_{A_i}" }, { "math_id": 14, "text": "\\mu X. \\forall R. R \\to (X \\to R) \\to R" }, { "math_id": 15, "text": "\\mu X. G[X] = \\forall X. ((G[X] \\to X) \\to X)" }, { "math_id": 16, "text": "\\forall X. (((\\forall R. R \\to (X \\to R) \\to R) \\to X) \\to X)" }, { "math_id": 17, "text": "\\forall X. X \\to (X \\to X) \\to X" } ]
https://en.wikipedia.org/wiki?curid=15465998
154664
Turbulence
Motion characterized by chaotic changes in pressure and flow velocity In fluid dynamics, turbulence or turbulent flow is fluid motion characterized by chaotic changes in pressure and flow velocity. It is in contrast to laminar flow, which occurs when a fluid flows in parallel layers with no disruption between those layers. Turbulence is commonly observed in everyday phenomena such as surf, fast flowing rivers, billowing storm clouds, or smoke from a chimney, and most fluid flows occurring in nature or created in engineering applications are turbulent. Turbulence is caused by excessive kinetic energy in parts of a fluid flow, which overcomes the damping effect of the fluid's viscosity. For this reason turbulence is commonly realized in low viscosity fluids. In general terms, in turbulent flow, unsteady vortices appear of many sizes which interact with each other, consequently drag due to friction effects increases. The onset of turbulence can be predicted by the dimensionless Reynolds number, the ratio of kinetic energy to viscous damping in a fluid flow. However, turbulence has long resisted detailed physical analysis, and the interactions within turbulence create a very complex phenomenon. Physicist Richard Feynman described turbulence as the most important unsolved problem in classical physics. The turbulence intensity affects many fields, for examples fish ecology, air pollution, precipitation, and climate change. Examples of turbulence. &lt;templatestyles src="Unsolved/styles.css" /&gt; Unsolved problem in physics: Is it possible to make a theoretical model to describe the behavior of a turbulent flow—in particular, its internal structures? Features. Turbulence is characterized by the following features: "Turbulent diffusion" is usually described by a turbulent diffusion coefficient. This turbulent diffusion coefficient is defined in a phenomenological sense, by analogy with the molecular diffusivities, but it does not have a true physical meaning, being dependent on the flow conditions, and not a property of the fluid itself. In addition, the turbulent diffusivity concept assumes a constitutive relation between a turbulent flux and the gradient of a mean variable similar to the relation between flux and gradient that exists for molecular transport. In the best case, this assumption is only an approximation. Nevertheless, the turbulent diffusivity is the simplest approach for quantitative analysis of turbulent flows, and many models have been postulated to calculate it. For instance, in large bodies of water like oceans this coefficient can be found using Richardson's four-third power law and is governed by the random walk principle. In rivers and large ocean currents, the diffusion coefficient is given by variations of Elder's formula. Via this energy cascade, turbulent flow can be realized as a superposition of a spectrum of flow velocity fluctuations and eddies upon a mean flow. The eddies are loosely defined as coherent patterns of flow velocity, vorticity and pressure. Turbulent flows may be viewed as made of an entire hierarchy of eddies over a wide range of length scales and the hierarchy can be described by the energy spectrum that measures the energy in flow velocity fluctuations for each length scale (wavenumber). The scales in the energy cascade are generally uncontrollable and highly non-symmetric. Nevertheless, based on these length scales these eddies can be divided into three categories. 
The integral time scale for a Lagrangian flow can be defined as: formula_0 where "u"′ is the velocity fluctuation, and formula_1 is the time lag between measurements. Large eddies obtain energy from the mean flow and also from each other. Thus, these are the energy production eddies which contain most of the energy. They have the large flow velocity fluctuation and are low in frequency. Integral scales are highly anisotropic and are defined in terms of the normalized two-point flow velocity correlations. The maximum length of these scales is constrained by the characteristic length of the apparatus. For example, the largest integral length scale of pipe flow is equal to the pipe diameter. In the case of atmospheric turbulence, this length can reach up to the order of several hundreds kilometers.: The integral length scale can be defined as formula_2 where "r" is the distance between two measurement locations, and "u"′ is the velocity fluctuation in that same direction. Although it is possible to find some particular solutions of the Navier–Stokes equations governing fluid motion, all such solutions are unstable to finite perturbations at large Reynolds numbers. Sensitive dependence on the initial and boundary conditions makes fluid flow irregular both in time and in space so that a statistical description is needed. The Russian mathematician Andrey Kolmogorov proposed the first statistical theory of turbulence, based on the aforementioned notion of the energy cascade (an idea originally introduced by Richardson) and the concept of self-similarity. As a result, the Kolmogorov microscales were named after him. It is now known that the self-similarity is broken so the statistical description is presently modified. A complete description of turbulence is one of the unsolved problems in physics. According to an apocryphal story, Werner Heisenberg was asked what he would ask God, given the opportunity. His reply was: "When I meet God, I am going to ask him two questions: Why relativity? And why turbulence? I really believe he will have an answer for the first." A similar witticism has been attributed to Horace Lamb in a speech to the British Association for the Advancement of Science: "I am an old man now, and when I die and go to heaven there are two matters on which I hope for enlightenment. One is quantum electrodynamics, and the other is the turbulent motion of fluids. And about the former I am rather more optimistic." Onset of turbulence. The onset of turbulence can be, to some extent, predicted by the Reynolds number, which is the ratio of inertial forces to viscous forces within a fluid which is subject to relative internal movement due to different fluid velocities, in what is known as a boundary layer in the case of a bounding surface such as the interior of a pipe. A similar effect is created by the introduction of a stream of higher velocity fluid, such as the hot gases from a flame in air. This relative movement generates fluid friction, which is a factor in developing turbulent flow. Counteracting this effect is the viscosity of the fluid, which as it increases, progressively inhibits turbulence, as more kinetic energy is absorbed by a more viscous fluid. The Reynolds number quantifies the relative importance of these two types of forces for given flow conditions, and is a guide to when turbulent flow will occur in a particular situation. 
This ability to predict the onset of turbulent flow is an important design tool for equipment such as piping systems or aircraft wings, but the Reynolds number is also used in scaling of fluid dynamics problems, and is used to determine dynamic similitude between two different cases of fluid flow, such as between a model aircraft and its full-size version. Such scaling is not always linear and the application of Reynolds numbers to both situations allows scaling factors to be developed. A flow situation in which the kinetic energy is significantly absorbed due to the action of fluid molecular viscosity gives rise to a laminar flow regime. For this the dimensionless quantity the Reynolds number (Re) is used as a guide. With respect to laminar and turbulent flow regimes: The Reynolds number is defined as formula_3 where: While there is no theorem directly relating the non-dimensional Reynolds number to turbulence, flows at Reynolds numbers larger than 5000 are typically (but not necessarily) turbulent, while those at low Reynolds numbers usually remain laminar. In Poiseuille flow, for example, turbulence can first be sustained if the Reynolds number is larger than a critical value of about 2040; moreover, the turbulence is generally interspersed with laminar flow until a larger Reynolds number of about 4000. The transition occurs if the size of the object is gradually increased, or the viscosity of the fluid is decreased, or if the density of the fluid is increased. Heat and momentum transfer. When flow is turbulent, particles exhibit additional transverse motion which enhances the rate of energy and momentum exchange between them, thus increasing the heat transfer and the friction coefficient. Assume for a two-dimensional turbulent flow that one was able to locate a specific point in the fluid and measure the actual flow velocity v = ("vx", "vy") of every particle that passed through that point at any given time. Then one would find the actual flow velocity fluctuating about a mean value: formula_4 and similarly for temperature ("T" = "T" + "T′") and pressure ("P" = "P" + "P′"), where the primed quantities denote fluctuations superposed to the mean. This decomposition of a flow variable into a mean value and a turbulent fluctuation was originally proposed by Osborne Reynolds in 1895, and is considered to be the beginning of the systematic mathematical analysis of turbulent flow, as a sub-field of fluid dynamics. While the mean values are taken as predictable variables determined by dynamics laws, the turbulent fluctuations are regarded as stochastic variables. The heat flux and momentum transfer (represented by the shear stress τ) in the direction normal to the flow for a given time are formula_5 where cP is the heat capacity at constant pressure, ρ is the density of the fluid, "μ"turb is the coefficient of turbulent viscosity and "k"turb is the turbulent thermal conductivity. Kolmogorov's theory of 1941. Richardson's notion of turbulence was that a turbulent flow is composed of "eddies" of different sizes. The sizes define a characteristic length scale for the eddies, which are also characterized by flow velocity scales and time scales (turnover time) dependent on the length scale. The large eddies are unstable and eventually break up originating smaller eddies, and the kinetic energy of the initial large eddy is divided into the smaller eddies that stemmed from it.
These smaller eddies undergo the same process, giving rise to even smaller eddies which inherit the energy of their predecessor eddy, and so on. In this way, the energy is passed down from the large scales of the motion to smaller scales until reaching a sufficiently small length scale such that the viscosity of the fluid can effectively dissipate the kinetic energy into internal energy. In his original theory of 1941, Kolmogorov postulated that for very high Reynolds numbers, the small-scale turbulent motions are statistically isotropic (i.e. no preferential spatial direction could be discerned). In general, the large scales of a flow are not isotropic, since they are determined by the particular geometrical features of the boundaries (the size characterizing the large scales will be denoted as L). Kolmogorov's idea was that in the Richardson's energy cascade this geometrical and directional information is lost, while the scale is reduced, so that the statistics of the small scales has a universal character: they are the same for all turbulent flows when the Reynolds number is sufficiently high. Thus, Kolmogorov introduced a second hypothesis: for very high Reynolds numbers the statistics of small scales are universally and uniquely determined by the kinematic viscosity ν and the rate of energy dissipation ε. With only these two parameters, the unique length that can be formed by dimensional analysis is formula_6 This is today known as the Kolmogorov length scale (see Kolmogorov microscales). A turbulent flow is characterized by a hierarchy of scales through which the energy cascade takes place. Dissipation of kinetic energy takes place at scales of the order of Kolmogorov length η, while the input of energy into the cascade comes from the decay of the large scales, of order L. These two scales at the extremes of the cascade can differ by several orders of magnitude at high Reynolds numbers. In between there is a range of scales (each one with its own characteristic length r) that has formed at the expense of the energy of the large ones. These scales are very large compared with the Kolmogorov length, but still very small compared with the large scale of the flow (i.e. "η" ≪ "r" ≪ "L"). Since eddies in this range are much larger than the dissipative eddies that exist at Kolmogorov scales, kinetic energy is essentially not dissipated in this range, and it is merely transferred to smaller scales until viscous effects become important as the order of the Kolmogorov scale is approached. Within this range inertial effects are still much larger than viscous effects, and it is possible to assume that viscosity does not play a role in their internal dynamics (for this reason this range is called "inertial range"). Hence, a third hypothesis of Kolmogorov was that at very high Reynolds number the statistics of scales in the range "η" ≪ "r" ≪ "L" are universally and uniquely determined by the scale r and the rate of energy dissipation ε. The way in which the kinetic energy is distributed over the multiplicity of scales is a fundamental characterization of a turbulent flow. For homogeneous turbulence (i.e., statistically invariant under translations of the reference frame) this is usually done by means of the "energy spectrum function" "E"("k"), where k is the modulus of the wavevector corresponding to some harmonics in a Fourier representation of the flow velocity field u(x): formula_7 where û(k) is the Fourier transform of the flow velocity field. 
Thus, "E"("k") d"k" represents the contribution to the kinetic energy from all the Fourier modes with "k" &lt; |k| &lt; "k" + d"k", and therefore, formula_8 where ⟨"uiui"⟩ is the mean turbulent kinetic energy of the flow. The wavenumber k corresponding to length scale r is "k" = 2π/"r". Therefore, by dimensional analysis, the only possible form for the energy spectrum function according to Kolmogorov's third hypothesis is formula_9 where formula_10 would be a universal constant. This is one of the most famous results of Kolmogorov 1941 theory, describing transport of energy through scale space without any loss or gain. The Kolmogorov five-thirds law was first observed in a tidal channel, and considerable experimental evidence has since accumulated that supports it. Outside of the inertial range, one can find the formula below: formula_11 In spite of this success, Kolmogorov theory is at present under revision. This theory implicitly assumes that the turbulence is statistically self-similar at different scales. This essentially means that the statistics are scale-invariant and non-intermittent in the inertial range. A usual way of studying turbulent flow velocity fields is by means of flow velocity increments: formula_12 that is, the difference in flow velocity between points separated by a vector r (since the turbulence is assumed isotropic, the flow velocity increment depends only on the modulus of r). Flow velocity increments are useful because they emphasize the effects of scales of the order of the separation r when statistics are computed. The statistical scale-invariance without intermittency implies that the scaling of flow velocity increments should occur with a unique scaling exponent β, so that when r is scaled by a factor λ, formula_13 should have the same statistical distribution as formula_14 with β independent of the scale r. From this fact, and other results of Kolmogorov 1941 theory, it follows that the statistical moments of the flow velocity increments (known as "structure functions" in turbulence) should scale as formula_15 where the brackets denote the statistical average, and the Cn would be universal constants. There is considerable evidence that turbulent flows deviate from this behavior. The scaling exponents deviate from the value predicted by the theory, becoming a non-linear function of the order n of the structure function. The universality of the constants has also been questioned. For low orders the discrepancy with the Kolmogorov value is very small, which explains the success of Kolmogorov theory in regards to low-order statistical moments. In particular, it can be shown that when the energy spectrum follows a power law formula_16 with 1 &lt; "p" &lt; 3, the second order structure function also follows a power law, with the form formula_17 Since the experimental values obtained for the second order structure function only deviate slightly from the value predicted by Kolmogorov theory, the value for p is very near to 5/3 (differences are about 2%). Thus the "Kolmogorov −5/3 spectrum" is generally observed in turbulence. However, for high order structure functions, the difference with the Kolmogorov scaling is significant, and the breakdown of the statistical self-similarity is clear. This behavior, and the lack of universality of the Cn constants, are related to the phenomenon of intermittency in turbulence and can be related to the non-trivial scaling behavior of the dissipation rate averaged over scale r.
This is an important area of research in this field, and a major goal of the modern theory of turbulence is to understand what is universal in the inertial range, and how to deduce intermittency properties from the Navier-Stokes equations, i.e. from first principles. See also. &lt;templatestyles src="Div col/styles.css"/&gt; Notes. &lt;templatestyles src="Reflist/styles.css" /&gt; References. &lt;templatestyles src="Reflist/styles.css" /&gt; &lt;templatestyles src="Reflist/styles.css" /&gt; Further reading. &lt;templatestyles src="Col-begin/styles.css"/&gt;
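As a numerical illustration of the Kolmogorov 1941 relations discussed above, the Python sketch below evaluates the dissipative length scale formula_6 and the inertial-range spectrum formula_9 for assumed values of the viscosity and dissipation rate; the Kolmogorov constant is taken as roughly 1.5, in line with the value quoted for formula_10.

```python
NU = 1.5e-5     # kinematic viscosity (roughly air at room temperature), m^2/s
EPSILON = 1.0   # assumed mean energy dissipation rate, m^2/s^3
C_K = 1.5       # Kolmogorov constant, approximately 1.5

# Kolmogorov length scale eta = (nu^3 / epsilon)^(1/4)
eta = (NU ** 3 / EPSILON) ** 0.25
print(f"Kolmogorov length scale: {eta:.2e} m")

def energy_spectrum(k):
    # Inertial-range spectrum E(k) = C * epsilon^(2/3) * k^(-5/3)
    return C_K * EPSILON ** (2.0 / 3.0) * k ** (-5.0 / 3.0)

# The spectral energy density falls by a factor 10^(5/3) per decade of wavenumber.
for k in (1.0e1, 1.0e2, 1.0e3):
    print(f"k = {k:8.1f} 1/m   E(k) = {energy_spectrum(k):.3e} m^3/s^2")
```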
[ { "math_id": 0, "text": "T = \\left ( \\frac{1}{\\langle u'u'\\rangle} \\right )\\int_0^\\infty \\langle u'u'(\\tau)\\rangle \\, d\\tau" }, { "math_id": 1, "text": "\\tau" }, { "math_id": 2, "text": "L = \\left ( \\frac{1}{\\langle u'u'\\rangle} \\right ) \\int_0^\\infty \\langle u'u'(r)\\rangle \\, dr" }, { "math_id": 3, "text": " \\mathrm{Re} = \\frac{\\rho v L}{\\mu} \\,," }, { "math_id": 4, "text": "v_x = \\underbrace{\\overline{v}_x}_\\text{mean value} + \\underbrace{v'_x}_\\text{fluctuation} \\quad \\text{and} \\quad v_y=\\overline{v}_y + v'_y \\,;" }, { "math_id": 5, "text": "\\begin{align}\nq&=\\underbrace{v'_y \\rho c_P T'}_\\text{experimental value} = -k_\\text{turb}\\frac{\\partial \\overline{T}}{\\partial y} \\,; \\\\ \n\\tau &=\\underbrace{-\\rho \\overline{v'_y v'_x}}_\\text{experimental value} = \\mu_\\text{turb}\\frac{\\partial \\overline{v}_x}{\\partial y} \\,;\n\\end{align}" }, { "math_id": 6, "text": "\\eta = \\left(\\frac{\\nu^3}{\\varepsilon}\\right)^{1/4} \\,." }, { "math_id": 7, "text": "\\mathbf{u}(\\mathbf{x}) = \\iiint_{\\mathbb{R}^3} \\hat{\\mathbf{u}}(\\mathbf{k})e^{i \\mathbf{k \\cdot x}} \\, \\mathrm{d}^3\\mathbf{k} \\,," }, { "math_id": 8, "text": "\\tfrac12\\left\\langle u_i u_i \\right\\rangle = \\int_0^\\infty E(k) \\, \\mathrm{d}k \\,," }, { "math_id": 9, "text": "E(k) = K_0 \\varepsilon^\\frac23 k^{-\\frac53} \\,," }, { "math_id": 10, "text": " K_0 \\approx 1.5" }, { "math_id": 11, "text": "E(k) = K_0 \\varepsilon^\\frac23 k^{-\\frac53} \\exp \\left[ - \\frac{3 K_0}{2} \\left( \\frac{\\nu^3 k^4}{\\varepsilon} \\right)^{\\frac13} \\right] \\,," }, { "math_id": 12, "text": "\\delta \\mathbf{u}(r) = \\mathbf{u}(\\mathbf{x} + \\mathbf{r}) - \\mathbf{u}(\\mathbf{x}) \\,;" }, { "math_id": 13, "text": "\\delta \\mathbf{u}(\\lambda r)" }, { "math_id": 14, "text": "\\lambda^\\beta \\delta \\mathbf{u}(r)\\,," }, { "math_id": 15, "text": "\\Big\\langle \\big ( \\delta \\mathbf{u}(r)\\big )^n \\Big\\rangle = C_n \\langle (\\varepsilon r )^\\frac{n}{3} \\rangle \\,," }, { "math_id": 16, "text": "E(k) \\propto k^{-p} \\,," }, { "math_id": 17, "text": "\\Big\\langle \\big (\\delta \\mathbf{u}(r)\\big )^2 \\Big\\rangle \\propto r^{p-1} \\,," } ]
https://en.wikipedia.org/wiki?curid=154664
154665
Vortex
Fluid flow revolving around an axis of rotation In fluid dynamics, a vortex (pl.: vortices or vortexes) is a region in a fluid in which the flow revolves around an axis line, which may be straight or curved. Vortices form in stirred fluids, and may be observed in smoke rings, whirlpools in the wake of a boat, and the winds surrounding a tropical cyclone, tornado or dust devil. Vortices are a major component of turbulent flow. The distribution of velocity, vorticity (the curl of the flow velocity), as well as the concept of circulation are used to characterise vortices. In most vortices, the fluid flow velocity is greatest next to its axis and decreases in inverse proportion to the distance from the axis. In the absence of external forces, viscous friction within the fluid tends to organise the flow into a collection of irrotational vortices, possibly superimposed on larger-scale flows, including larger-scale vortices. Once formed, vortices can move, stretch, twist, and interact in complex ways. A moving vortex carries some angular and linear momentum, energy, and mass with it. Overview. In fluid dynamics, a vortex is fluid that revolves around an axis line, which may be curved or straight. Vortices form in stirred fluids: they may be observed in smoke rings, in whirlpools in the wake of a boat, or in the winds around a tornado or dust devil. Vortices are an important part of turbulent flow. A vortex can also be described as a circular motion of a liquid. In the absence of forces, the liquid settles and the water eventually comes to rest instead of continuing to move. Once created, vortices can move, stretch, twist and interact in complicated ways. A moving vortex can also change its angular position. For example, if a bucket of water is spun at a constant rate, the water rotates around an invisible line called the axis line, and the fluid moves around this axis in circles. In this example the rotation of the bucket creates an extra force. Vortices can change shape because they have open particle paths, and this can create a moving vortex; the shapes of tornadoes and drain whirlpools are examples of this. When two or more vortices are close together they can merge to form a single vortex. Vortices also store energy in the rotation of the fluid. If this energy is never removed, the circular motion would persist forever. Properties. Vorticity. A key concept in the dynamics of vortices is the vorticity, a vector that describes the "local" rotary motion at a point in the fluid, as would be perceived by an observer that moves along with it. Conceptually, the vorticity could be observed by placing a tiny rough ball at the point in question, free to move with the fluid, and observing how it rotates about its center. The direction of the vorticity vector is defined to be the direction of the axis of rotation of this imaginary ball (according to the right-hand rule) while its length is twice the ball's angular velocity. Mathematically, the vorticity is defined as the curl (or rotational) of the velocity field of the fluid, usually denoted by formula_0 and expressed by the vector analysis formula formula_1, where formula_2 is the nabla operator and formula_3 is the local flow velocity. The local rotation measured by the vorticity formula_0 must not be confused with the angular velocity vector of that portion of the fluid with respect to the external environment or to any fixed axis. 
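The distinction between vorticity and angular velocity can be made concrete with a short symbolic check. The sketch below is an added illustration, not part of the article: it evaluates the z-component of the curl, ∂v/∂x − ∂u/∂y, for a rigid-body rotation and for a free (irrotational) vortex, the two special velocity profiles discussed in the next section. Here Ω denotes the angular velocity of the rigid rotation and α the strength of the free vortex; both are standard textbook model fields introduced only for this example.

```python
# Illustrative sketch (not part of the article): the z-component of the vorticity,
# (curl u)_z = dv/dx - du/dy, for two planar model flows. For rigid-body rotation
# the vorticity is twice the angular velocity; for a free vortex it vanishes
# everywhere away from the axis.
import sympy as sp

x, y, Omega, alpha = sp.symbols("x y Omega alpha", real=True)
r2 = x**2 + y**2

def vorticity_z(u, v):
    return sp.simplify(sp.diff(v, x) - sp.diff(u, y))

# Rigid-body rotation: u = (-Omega*y, Omega*x)
print(vorticity_z(-Omega * y, Omega * x))            # 2*Omega
# Free (irrotational) vortex: u = (-alpha*y/r^2, alpha*x/r^2)
print(vorticity_z(-alpha * y / r2, alpha * x / r2))  # 0, for (x, y) != (0, 0)
```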
In a vortex, in particular, formula_0 may be opposite to the mean angular velocity vector of the fluid relative to the vortex's axis. Vortex types. In theory, the speed u of the particles (and, therefore, the vorticity) in a vortex may vary with the distance r from the axis in many ways. There are two important special cases, however: Irrotational vortices. In the absence of external forces, a vortex usually evolves fairly quickly toward the irrotational flow pattern, where the flow velocity u is inversely proportional to the distance r. Irrotational vortices are also called "free vortices". For an irrotational vortex, the circulation is zero along any closed contour that does not enclose the vortex axis; and has a fixed value, Γ, for any contour that does enclose the axis once. The tangential component of the particle velocity is then formula_6. The angular momentum per unit mass relative to the vortex axis is therefore constant, formula_7. The ideal irrotational vortex flow in free space is not physically realizable, since it would imply that the particle speed (and hence the force needed to keep particles in their circular paths) would grow without bound as one approaches the vortex axis. Indeed, in real vortices there is always a core region surrounding the axis where the particle velocity stops increasing and then decreases to zero as r goes to zero. Within that region, the flow is no longer irrotational: the vorticity formula_0 becomes non-zero, with direction roughly parallel to the vortex axis. The Rankine vortex is a model that assumes a rigid-body rotational flow where r is less than a fixed distance r0, and irrotational flow outside that core region. In a viscous fluid, irrotational flow contains viscous dissipation everywhere, yet there are no net viscous forces, only viscous stresses. Due to the dissipation, this means that sustaining an irrotational viscous vortex requires continuous input of work at the core (for example, by steadily turning a cylinder at the core). In free space there is no energy input at the core, and thus the compact vorticity held in the core will naturally diffuse outwards, converting the core to a gradually-slowing and gradually-growing rigid-body flow, surrounded by the original irrotational flow. Such a decaying irrotational vortex has an exact solution of the viscous Navier–Stokes equations, known as a Lamb–Oseen vortex. Rotational vortices. A rotational vortex – a vortex that rotates in the same way as a rigid body – cannot exist indefinitely in that state except through the application of some extra force that is not generated by the fluid motion itself. It has non-zero vorticity everywhere outside the core. Rotational vortices are also called rigid-body vortices or forced vortices. For example, if a water bucket is spun at constant angular speed w about its vertical axis, the water will eventually rotate in rigid-body fashion. The particles will then move along circles, with velocity u equal to wr. In that case, the free surface of the water will assume a parabolic shape. In this situation, the rigid rotating enclosure provides an extra force, namely an extra pressure gradient in the water, directed inwards, that prevents transition of the rigid-body flow to the irrotational state. Vortex formation on boundaries. Vortex structures are defined by their vorticity, the local rotation rate of fluid particles. 
They can be formed via the phenomenon known as boundary layer separation which can occur when a fluid moves over a surface and experiences a rapid acceleration from the fluid velocity to zero due to the no-slip condition. This rapid negative acceleration creates a boundary layer which causes a local rotation of fluid at the wall (i.e. vorticity), which is referred to as the wall shear rate. The thickness of this boundary layer is proportional to formula_8 (where v is the free stream fluid velocity and t is time). If the diameter or thickness of the vessel or fluid is less than the boundary layer thickness then the boundary layer will not separate and vortices will not form. However, when the boundary layer does grow beyond this critical boundary layer thickness then separation will occur which will generate vortices. This boundary layer separation can also occur in the presence of adverse pressure gradients (i.e. pressure increasing in the downstream direction). This occurs on curved surfaces and at general geometry changes such as a convex surface. A unique example of severe geometric change is the trailing edge of a bluff body, where the fluid flow deceleration, and therefore boundary layer and vortex formation, is located. Another form of vortex formation on a boundary is when fluid flows perpendicularly into a wall and creates a "splash effect." The velocity streamlines are immediately deflected and decelerated so that the boundary layer separates and forms a toroidal vortex ring. Vortex geometry. In a stationary vortex, the typical streamline (a line that is everywhere tangent to the flow velocity vector) is a closed loop surrounding the axis; and each vortex line (a line that is everywhere tangent to the vorticity vector) is roughly parallel to the axis. A surface that is everywhere tangent to both flow velocity and vorticity is called a vortex tube. In general, vortex tubes are nested around the axis of rotation. The axis itself is one of the vortex lines, a limiting case of a vortex tube with zero diameter. According to Helmholtz's theorems, a vortex line cannot start or end in the fluid – except momentarily, in non-steady flow, while the vortex is forming or dissipating. In general, vortex lines (in particular, the axis line) are either closed loops or end at the boundary of the fluid. A whirlpool is an example of the latter, namely a vortex in a body of water whose axis ends at the free surface. A vortex tube whose vortex lines are all closed will be a closed torus-like surface. A newly created vortex will promptly extend and bend so as to eliminate any open-ended vortex lines. For example, when an airplane engine is started, a vortex usually forms ahead of each propeller, or the turbofan of each jet engine. One end of the vortex line is attached to the engine, while the other end usually stretches out and bends until it reaches the ground. When vortices are made visible by smoke or ink trails, they may seem to have spiral pathlines or streamlines. However, this appearance is often an illusion and the fluid particles are moving in closed paths. The spiral streaks that are taken to be streamlines are in fact clouds of the marker fluid that originally spanned several vortex tubes and were stretched into spiral shapes by the non-uniform flow velocity distribution. Pressure in a vortex. 
The fluid motion in a vortex creates a dynamic pressure (in addition to any hydrostatic pressure) that is lowest in the core region, closest to the axis, and increases as one moves away from it, in accordance with Bernoulli's principle. One can say that it is the gradient of this pressure that forces the fluid to follow a curved path around the axis. In a rigid-body vortex flow of a fluid with constant density, the dynamic pressure is proportional to the square of the distance r from the axis. In a constant gravity field, the free surface of the liquid, if present, is a concave paraboloid. In an irrotational vortex flow with constant fluid density and cylindrical symmetry, the dynamic pressure varies as "P"∞ −, where "P"∞ is the limiting pressure infinitely far from the axis. This formula provides another constraint for the extent of the core, since the pressure cannot be negative. The free surface (if present) dips sharply near the axis line, with depth inversely proportional to "r"2. The shape formed by the free surface is called a hyperboloid, or "Gabriel's Horn" (by Evangelista Torricelli). The core of a vortex in air is sometimes visible because water vapor condenses as the low pressure of the core causes adiabatic cooling; the funnel of a tornado is an example. When a vortex line ends at a boundary surface, the reduced pressure may also draw matter from that surface into the core. For example, a dust devil is a column of dust picked up by the core of an air vortex attached to the ground. A vortex that ends at the free surface of a body of water (like the whirlpool that often forms over a bathtub drain) may draw a column of air down the core. The forward vortex extending from a jet engine of a parked airplane can suck water and small stones into the core and then into the engine. Evolution. Vortices need not be steady-state features; they can move and change shape. In a moving vortex, the particle paths are not closed, but are open, loopy curves like helices and cycloids. A vortex flow might also be combined with a radial or axial flow pattern. In that case the streamlines and pathlines are not closed curves but spirals or helices, respectively. This is the case in tornadoes and in drain whirlpools. A vortex with helical streamlines is said to be solenoidal. As long as the effects of viscosity and diffusion are negligible, the fluid in a moving vortex is carried along with it. In particular, the fluid in the core (and matter trapped by it) tends to remain in the core as the vortex moves about. This is a consequence of Helmholtz's second theorem. Thus vortices (unlike surface waves and pressure waves) can transport mass, energy and momentum over considerable distances compared to their size, with surprisingly little dispersion. This effect is demonstrated by smoke rings and exploited in vortex ring toys and guns. Two or more vortices that are approximately parallel and circulating in the same direction will attract and eventually merge to form a single vortex, whose circulation will equal the sum of the circulations of the constituent vortices. For example, an airplane wing that is developing lift will create a sheet of small vortices at its trailing edge. These small vortices merge to form a single wingtip vortex, less than one wing chord downstream of that edge. This phenomenon also occurs with other active airfoils, such as propeller blades. On the other hand, two parallel vortices with opposite circulations (such as the two wingtip vortices of an airplane) tend to remain separate. 
Vortices contain substantial energy in the circular motion of the fluid. In an ideal fluid this energy can never be dissipated and the vortex would persist forever. However, real fluids exhibit viscosity and this dissipates energy very slowly from the core of the vortex. It is only through dissipation of a vortex due to viscosity that a vortex line can end in the fluid, rather than at the boundary of the fluid. References. Notes. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "\\vec \\omega" }, { "math_id": 1, "text": "\\nabla \\times \\vec{\\mathit{u}}" }, { "math_id": 2, "text": "\\nabla" }, { "math_id": 3, "text": "\\vec{\\mathit{u}}" }, { "math_id": 4, "text": "\\begin{align}\n \\vec\\Omega &= (0, 0, \\Omega) , \\quad \\vec{r} = (x, y, 0) , \\\\ \n \\vec{u} &= \\vec{\\Omega} \\times \\vec{r} = (-\\Omega y, \\Omega x, 0) , \\\\\n \\vec\\omega &= \\nabla \\times \\vec{u} = (0, 0, 2\\Omega) = 2\\vec{\\Omega} .\n\\end{align}" }, { "math_id": 5, "text": "\\begin{align}\n \\vec{\\Omega} &= \\left(0, 0, \\alpha r^{-2}\\right) , \\quad \\vec{r} = (x, y, 0) , \\\\\n \\vec{u} &= \\vec{\\Omega} \\times \\vec{r} = \\left(-\\alpha y r^{-2}, \\alpha x r^{-2}, 0\\right) , \\\\\n \\vec{\\omega} &= \\nabla \\times \\vec{u} = 0 .\n\\end{align}" }, { "math_id": 6, "text": "u_{\\theta} = \\tfrac{\\Gamma}{2 \\pi r}" }, { "math_id": 7, "text": " r u_{\\theta} = \\tfrac{\\Gamma}{2 \\pi}" }, { "math_id": 8, "text": "\\surd(vt)" } ]
https://en.wikipedia.org/wiki?curid=154665
15470598
Shift space
Set of infinite words In symbolic dynamics and related branches of mathematics, a shift space or subshift is a set of infinite words that represent the evolution of a discrete system. In fact, shift spaces and "symbolic dynamical systems" are often considered synonyms. The most widely studied shift spaces are the subshifts of finite type and the sofic shifts. In the classical framework a shift space is any subset formula_0 of formula_1, where formula_2 is a finite set, which is closed in the Tychonov topology and invariant under translations. More generally one can define a shift space as the closed and translation-invariant subsets of formula_3, where formula_2 is any non-empty set and formula_4 is any monoid. Definition. Let formula_4 be a monoid, and given formula_5, denote the operation of formula_6 with formula_7 by the product formula_8. Let formula_9 denote the identity of formula_4. Consider a non-empty set formula_2 (an alphabet) with the discrete topology, and define formula_3 as the set of all patterns over formula_2 indexed by formula_4. For formula_10 and a subset formula_11, we denote the restriction of formula_12 to the indices of formula_13 as formula_14. On formula_3, we consider the prodiscrete topology, which makes formula_3 a Hausdorff and totally disconnected topological space. In the case of formula_2 being finite, it follows that formula_3 is compact. However, if formula_2 is not finite, then formula_3 is not even locally compact. This topology will be metrizable if and only if formula_4 is countable, and, in any case, the base of this topology consists of a collection of open/closed sets (called cylinders), defined as follows: given a finite set of indices formula_15, and for each formula_16, let formula_17. The cylinder given by formula_18 and formula_19 is the set formula_20 When formula_21, we denote the cylinder fixing the symbol formula_22 at the entry indexed by formula_6 simply as formula_23. In other words, a cylinder formula_24 is the set of all infinite patterns of formula_3 which contain the finite pattern formula_19. Given formula_25, the "g"-shift map on formula_3 is denoted by formula_26 and defined as formula_27. A shift space over the alphabet formula_2 is a set formula_28 that is closed in the topology of formula_3 and invariant under translations, i.e., formula_29 for all formula_25. We consider in the shift space formula_0 the induced topology from formula_3, which has as basic open sets the cylinders formula_30. For each formula_31, define formula_32, and formula_33. An equivalent way to define a shift space is to take a set of forbidden patterns formula_34 and define a shift space as the set formula_35 Intuitively, a shift space formula_36 is the set of all infinite patterns that do not contain any forbidden finite pattern of formula_37. Language of shift space. Given a shift space formula_28 and a finite set of indices formula_11, let formula_38, where formula_39 stands for the empty word, and for formula_40 let formula_41 be the set of all finite configurations of formula_42 that appear in some sequence of formula_0, i.e., formula_43 Note that, since formula_0 is a shift space, if formula_44 is a translation of formula_11, i.e., formula_45 for some formula_25, then formula_46 if and only if there exists formula_47 such that formula_48 if formula_49. In other words, formula_50 and formula_51 contain the same configurations modulo translation. We will call the set formula_52 the language of formula_0. 
In the general context stated here, the language of a shift space does not have the same meaning as in formal language theory, but in the classical framework, which considers the alphabet formula_2 to be finite and formula_4 to be formula_53 or formula_54 with the usual addition, the language of a shift space is a formal language. Classical framework. The classical framework for shift spaces consists of considering the alphabet formula_2 as finite, and formula_4 as the set of non-negative integers (formula_53) with the usual addition, or the set of all integers (formula_54) with the usual addition. In both cases, the identity element formula_9 corresponds to the number 0. Furthermore, when formula_55, since all formula_56 can be generated from the number 1, it is sufficient to consider a unique shift map given by formula_57 for all formula_58. On the other hand, for the case of formula_59, since all formula_54 can be generated from the numbers {-1, 1}, it is sufficient to consider two shift maps given for all formula_58 by formula_57 and by formula_60. Furthermore, whenever formula_4 is formula_53 or formula_54 with the usual addition (independently of the cardinality of formula_61), due to its algebraic structure, it is sufficient to consider only cylinders of the form formula_62 Moreover, the language of a shift space formula_63 will be given by formula_64 where formula_65 and formula_39 stands for the empty word, and formula_66 In the same way, for the particular case of formula_59, it follows that to define a shift space formula_67 we do not need to specify the index of formula_4 on which the forbidden words of formula_37 are defined, that is, we can just consider formula_68 and then formula_69 However, if formula_55 and we define a shift space formula_67 as above, without specifying the index at which the words are forbidden, then we will only capture shift spaces which are invariant under the shift map, that is, such that formula_70. In fact, to define a shift space formula_71 such that formula_72 it will be necessary to specify from which index on the words of formula_37 are forbidden. In particular, in the classical framework of formula_2 being finite, and formula_4 being formula_53 or formula_54 with the usual addition, it follows that formula_73 is finite if and only if formula_37 is finite, which leads to the classical definition of a shift of finite type as those shift spaces formula_28 such that formula_67 for some finite formula_37. Some types of shift spaces. Among several types of shift spaces, the most widely studied are the shifts of finite type and the sofic shifts. In the case when the alphabet formula_2 is finite, a shift space formula_0 is a shift of finite type if we can take a finite set of forbidden patterns formula_37 such that formula_67, and formula_0 is a sofic shift if it is the image of a shift of finite type under a sliding block code (that is, a map formula_74 that is continuous and commutes with all formula_6-shift maps). If formula_2 is finite and formula_4 is formula_53 or formula_54 with the usual addition, then the shift formula_0 is a sofic shift if and only if formula_75 is a regular language. The name "sofic" was coined by Weiss, based on the Hebrew word סופי meaning "finite", to refer to the fact that this is a generalization of a finiteness property. When formula_2 is infinite, it is possible to define shifts of finite type as shift spaces formula_0 for which one can take a set formula_37 of forbidden words such that formula_76 is finite and formula_67. 
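As a minimal illustration of the classical framework just described (finite alphabet, formula_4 the non-negative integers), the sketch below represents a shift of finite type by its finite set of forbidden words and tests whether a finite word belongs to the language. The "golden mean" example over {0, 1}, which forbids the word 11, is an assumed example chosen for illustration rather than one taken from the text.

```python
# Illustrative sketch (not part of the article): in the classical framework, a
# shift of finite type is specified by a finite set F of forbidden words, and a
# finite word lies in the language of X_F exactly when none of its factors is
# forbidden. The "golden mean" shift over {0, 1} with F = {"11"} is an assumed example.
def in_language(word, forbidden):
    """True if no forbidden word occurs as a factor (contiguous block) of `word`."""
    return not any(f in word for f in forbidden)

def shift(word):
    """The shift map sigma on a finite prefix: sigma(x)_n = x_(n+1), i.e. drop x_0."""
    return word[1:]

F = {"11"}
print(in_language("010010100", F))          # True  -- no two consecutive 1s
print(in_language("010110100", F))          # False -- contains the factor "11"
print(in_language(shift("010010100"), F))   # True  -- the language is shift-invariant
```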
In this context of infinite alphabet, a sofic shift will be defined as the image of a shift of finite type under a particular class of sliding block codes. Both the finiteness of formula_73 and the additional conditions on the sliding block codes are trivially satisfied whenever formula_2 is finite. Topological dynamical systems on shift spaces. Shift spaces are the topological spaces on which symbolic dynamical systems are usually defined. Given a shift space formula_28 and a formula_6-shift map formula_77 it follows that the pair formula_78 is a topological dynamical system. Two shift spaces formula_28 and formula_79 are said to be topologically conjugate (or simply conjugate) if for each formula_6-shift map it follows that the topological dynamical systems formula_78 and formula_80 are topologically conjugate, that is, if there exists a continuous map formula_81 such that formula_82. Such maps are known as generalized sliding block codes or just as sliding block codes whenever formula_74 is uniformly continuous. Although any continuous map formula_74 from formula_28 to itself will define a topological dynamical system formula_83, in symbolic dynamics it is usual to consider only continuous maps formula_84 which commute with all formula_6-shift maps, i.e., maps which are generalized sliding block codes. The dynamical system formula_83 is known as a 'generalized cellular automaton' (or just as a cellular automaton whenever formula_74 is uniformly continuous). Examples. The first trivial example of a shift space (of finite type) is the "full shift" formula_85. Let formula_86. The set of all infinite words over "A" containing at most one "b" is a sofic subshift, not of finite type. The set of all infinite words over "A" whose "b" form blocks of prime length is not sofic (this can be shown by using the pumping lemma). The space of infinite strings in two letters, formula_87, is called the Bernoulli process. It is isomorphic to the Cantor set. The bi-infinite space of strings in two letters, formula_88, is commonly known as the Baker's map, or rather is homomorphic to the Baker's map. Footnotes. &lt;templatestyles src="Reflist/styles.css" /&gt; References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "\\Lambda" }, { "math_id": 1, "text": "A^\\mathbb{Z}:=\\{(x_i)_{i\\in\\mathbb{Z}}:\\ x_i\\in A\\ \\forall i\\in\\mathbb{Z}\\}" }, { "math_id": 2, "text": "A" }, { "math_id": 3, "text": "A^\\mathbb{G}" }, { "math_id": 4, "text": "\\mathbb{G}" }, { "math_id": 5, "text": "g,h\\in\\mathbb{G}" }, { "math_id": 6, "text": "g" }, { "math_id": 7, "text": "h" }, { "math_id": 8, "text": "gh" }, { "math_id": 9, "text": "\\mathbf{1}_{\\mathbb{G}}" }, { "math_id": 10, "text": "\\mathbf{x}=(x_i)_{i\\in \\mathbb{G}}\\in A^\\mathbb{G}" }, { "math_id": 11, "text": "N\\subset\\mathbb{G}" }, { "math_id": 12, "text": "\\mathbf{x}" }, { "math_id": 13, "text": "N" }, { "math_id": 14, "text": "\\mathbf{x}_N:=(x_i)_{i\\in N}" }, { "math_id": 15, "text": "D\\subset \\mathbb{G}" }, { "math_id": 16, "text": "i\\in D" }, { "math_id": 17, "text": "a_i\\in A" }, { "math_id": 18, "text": "D" }, { "math_id": 19, "text": "(a_i)_{i\\in D}\\in A^{|D|}" }, { "math_id": 20, "text": "\\big[(a_i)_{i\\in D}\\big]_D:=\\{\\mathbf{x}\\in A^\\mathbb{G}:\\ x_i=a_i,\\ \\forall i\\in D\\}." }, { "math_id": 21, "text": "D=\\{g\\}" }, { "math_id": 22, "text": "b" }, { "math_id": 23, "text": "[b]_g" }, { "math_id": 24, "text": "\\big[(a_i)_{i\\in D}\\big]_D" }, { "math_id": 25, "text": "g\\in\\mathbb{G}" }, { "math_id": 26, "text": "\\sigma^g:A^\\mathbb{G}\\to A^\\mathbb{G}" }, { "math_id": 27, "text": "\\sigma^g\\big((x_i)_{i\\in\\mathbb{G}}\\big)=(x_{gi})_{i\\in\\mathbb{G}}" }, { "math_id": 28, "text": "\\Lambda\\subset A^\\mathbb{G}" }, { "math_id": 29, "text": "\\sigma^g(\\Lambda)\\subset \\Lambda" }, { "math_id": 30, "text": "\\big[(a_i)_{i\\in D}\\big]_\\Lambda:=\\big[(a_i)_{i\\in D}\\big]\\cap\\Lambda" }, { "math_id": 31, "text": "k\\in\\N^*" }, { "math_id": 32, "text": "\\mathcal{N}_k:=\\bigcup_{{N\\subset \\mathbb{G} \\atop \\#N=k}}A^N" }, { "math_id": 33, "text": "\\mathcal{N}^f_{A^\\mathbb{G}}:=\\bigcup_{{k\\in\\N}}\\mathcal{N}_k= \\bigcup_{{N\\subset \\mathbb{G} \\atop \\#N<\\infty}}A^N" }, { "math_id": 34, "text": "F\\subset\\mathcal{N}^f_{A^\\mathbb{G}}" }, { "math_id": 35, "text": "X_F:=\\{\\mathbf{x}\\in A^\\mathbb{G}:\\ \\forall N\\subset\\mathbb{G}, \\forall g\\in\\mathbb{G},\\ \\left(\\sigma^g(\\mathbf{x})\\right)_{N}=\\mathbf{x}_{gN}\\notin F\\}." }, { "math_id": 36, "text": "X_F" }, { "math_id": 37, "text": "F" }, { "math_id": 38, "text": "W_\\emptyset(\\Lambda):=\\{\\epsilon\\}" }, { "math_id": 39, "text": "\\epsilon" }, { "math_id": 40, "text": "N\\neq\\emptyset" }, { "math_id": 41, "text": "W_N(\\Lambda)\\subset A^N" }, { "math_id": 42, "text": "A^N" }, { "math_id": 43, "text": "W_N(\\Lambda):=\\{(w_i)_{i\\in N}\\in A^N:\\ \\exists \\ \\mathbf{x}\\in\\Lambda \\text{ s.t. } x_i=w_i\\ \\forall i\\in N\\}." 
}, { "math_id": 44, "text": "M\\subset\\mathbb{G}" }, { "math_id": 45, "text": "M=gN" }, { "math_id": 46, "text": "(w_j)_{j\\in M}\\in W_M(\\Lambda)" }, { "math_id": 47, "text": "(v_i)_{i\\in N}\\in W_N(\\Lambda)" }, { "math_id": 48, "text": "w_j=v_i" }, { "math_id": 49, "text": "j=gi" }, { "math_id": 50, "text": "W_M(\\Lambda)" }, { "math_id": 51, "text": "W_N(\\Lambda)" }, { "math_id": 52, "text": "W(\\Lambda):=\\bigcup_{{N\\subset \\mathbb{G}\\atop \\#N<\\infty}}W_N(\\Lambda)" }, { "math_id": 53, "text": "\\mathbb{N}" }, { "math_id": 54, "text": "\\mathbb{Z}" }, { "math_id": 55, "text": "\\mathbb{G}=\\mathbb{N}" }, { "math_id": 56, "text": "\\mathbb{N}\\setminus\\{0\\}" }, { "math_id": 57, "text": "\\sigma(\\mathbf{x})_n=x_{n+1}" }, { "math_id": 58, "text": "n" }, { "math_id": 59, "text": "\\mathbb{G}=\\mathbb{Z}" }, { "math_id": 60, "text": "\\sigma^{-1}(\\mathbf{x})_n=x_{n-1}" }, { "math_id": 61, "text": "A " }, { "math_id": 62, "text": "[a_0a_1...a_n]:=\\{(x_i)_{i\\in\\mathbb{G}}:\\ x_i=a_i\\ \\forall i=0,..,n\\}." }, { "math_id": 63, "text": "\\Lambda \\subset A^\\mathbb{G}" }, { "math_id": 64, "text": "W(\\Lambda):=\\bigcup_{n\\geq 0}W_n(\\Lambda), " }, { "math_id": 65, "text": "W_0:=\\{\\epsilon\\}" }, { "math_id": 66, "text": "W_n(\\Lambda):=\\{((a_i)_{i=0,..n}\\in A^n:\\ \\exists \\mathbf{x}\\in \\Lambda\\ s.t.\\ x_i=a_i\\ \\forall i=0,...,n\\}. " }, { "math_id": 67, "text": "\\Lambda=X_F" }, { "math_id": 68, "text": "F\\subset \\bigcup_{n\\geq 1}A^n" }, { "math_id": 69, "text": "X_F=\\{\\mathbb{x}\\in A^\\mathbb{Z}:\\ \\forall i\\in\\mathbb{Z},\\ \\forall k\\geq 0,\\ (x_i...x_{i+k})\\notin F \\}." }, { "math_id": 70, "text": "\\sigma(X_F)=X_F" }, { "math_id": 71, "text": "X_F\\subset A^\\mathbb{N}" }, { "math_id": 72, "text": "\\sigma(X_F)\\subsetneq X_F" }, { "math_id": 73, "text": "M_F" }, { "math_id": 74, "text": "\\Phi" }, { "math_id": 75, "text": "W(\\Lambda)" }, { "math_id": 76, "text": "M_F:=\\{g\\in\\mathbb{G}:\\ \\exists N\\subset \\mathbb{G}\\text{ s.t. } g\\in N\\text{ and } (w_i)_{i\\in N}\\in F \\}," }, { "math_id": 77, "text": "\\sigma^g:\\Lambda\\to\\Lambda" }, { "math_id": 78, "text": "(\\Lambda,\\sigma^g)" }, { "math_id": 79, "text": "\\Gamma\\subset B^\\mathbb{G}" }, { "math_id": 80, "text": "(\\Gamma,\\sigma^g)" }, { "math_id": 81, "text": "\\Phi:\\Lambda\\to\\Gamma" }, { "math_id": 82, "text": "\\Phi\\circ\\sigma^g=\\sigma^g\\circ \\Phi" }, { "math_id": 83, "text": "(\\Lambda,\\Phi)" }, { "math_id": 84, "text": "\\Phi:\\Lambda\\to\\Lambda" }, { "math_id": 85, "text": "A^\\mathbb N" }, { "math_id": 86, "text": "A=\\{a,b\\}" }, { "math_id": 87, "text": "\\{0,1\\}^\\mathbb{N}" }, { "math_id": 88, "text": "\\{0,1\\}^\\mathbb{Z}" } ]
https://en.wikipedia.org/wiki?curid=15470598
1547121
Twisted cubic
In mathematics, a twisted cubic is a smooth, rational curve "C" of degree three in projective 3-space P3. It is a fundamental example of a skew curve. It is essentially unique, up to projective transformation ("the" twisted cubic, therefore). In algebraic geometry, the twisted cubic is a simple example of a projective variety that is not linear or a hypersurface, in fact not a complete intersection. It is the three-dimensional case of the rational normal curve, and is the image of a Veronese map of degree three on the projective line. Definition. The twisted cubic is most easily given parametrically as the image of the map formula_0 which assigns to the homogeneous coordinate formula_1 the value formula_2 In one coordinate patch of projective space, the map is simply the moment curve formula_3 That is, it is the closure by a single point at infinity of the affine curve formula_4. The twisted cubic is a projective variety, defined as the intersection of three quadrics. In homogeneous coordinates formula_5 on P3, the twisted cubic is the closed subscheme defined by the vanishing of the three homogeneous polynomials formula_6 formula_7 formula_8 It may be checked that these three quadratic forms vanish identically when using the explicit parameterization above; that is, substitute "x"3 for "X", and so on. More strongly, the homogeneous ideal of the twisted cubic "C" is generated by these three homogeneous polynomials of degree 2. Properties. The twisted cubic has the following properties:
[ { "math_id": 0, "text": "\\nu:\\mathbf{P}^1\\to\\mathbf{P}^3" }, { "math_id": 1, "text": "[S:T]" }, { "math_id": 2, "text": "\\nu:[S:T] \\mapsto [S^3:S^2T:ST^2:T^3]." }, { "math_id": 3, "text": "\\nu:x \\mapsto (x,x^2,x^3)" }, { "math_id": 4, "text": "(x,x^2,x^3)" }, { "math_id": 5, "text": "[X:Y:Z:W]" }, { "math_id": 6, "text": "F_0 = XZ - Y^2" }, { "math_id": 7, "text": "F_1 = YW - Z^2" }, { "math_id": 8, "text": "F_2 = XW - YZ." }, { "math_id": 9, "text": "XZ - Y^2" }, { "math_id": 10, "text": "Z(YW-Z^2)-W(XW-YZ)" }, { "math_id": 11, "text": "(YW-Z^2)^2" }, { "math_id": 12, "text": "YW-Z^2" } ]
https://en.wikipedia.org/wiki?curid=1547121
1547158
Hypotrochoid
Curve traced by a point outside a circle rolling within another circle In geometry, a hypotrochoid is a roulette traced by a point attached to a circle of radius r rolling around the inside of a fixed circle of radius R, where the point is a distance d from the center of the interior circle. The parametric equations for a hypotrochoid are: formula_0 where θ is the angle formed by the horizontal and the center of the rolling circle (these are not polar equations because θ is not the polar angle). When measured in radian, θ takes values from 0 to formula_1 (where LCM is least common multiple). Special cases include the hypocycloid with "d" = "r" and the ellipse with "R" = 2"r" and "d" ≠ "r". The eccentricity of the ellipse is formula_2 becoming 1 when formula_3 (see Tusi couple). The classic Spirograph toy traces out hypotrochoid and epitrochoid curves. Hypotrochoids describe the support of the eigenvalues of some random matrices with cyclic correlations.
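The parametric equations above are easy to sample numerically. The following sketch is an added illustration with arbitrarily chosen parameter values: it generates hypotrochoid points and checks the ellipse special case R = 2r, d ≠ r against the eccentricity formula quoted above.

```python
# Illustrative sketch (not part of the article): sample a hypotrochoid from the
# parametric equations above and check the ellipse special case R = 2r, d != r,
# against the stated eccentricity 2*sqrt(d/r)/(1 + d/r). Parameter values are assumed.
import math

def hypotrochoid(R, r, d, theta):
    x = (R - r) * math.cos(theta) + d * math.cos((R - r) / r * theta)
    y = (R - r) * math.sin(theta) - d * math.sin((R - r) / r * theta)
    return x, y

R, r, d = 2.0, 1.0, 0.5          # assumed Spirograph-like parameters with R = 2r
pts = [hypotrochoid(R, r, d, 2 * math.pi * k / 360) for k in range(360)]

# With R = 2r the curve is an ellipse with semi-axes r + d and |r - d|.
a, b = r + d, abs(r - d)
ecc_from_axes = math.sqrt(1 - (b / a) ** 2)
ecc_formula = 2 * math.sqrt(d / r) / (1 + d / r)
print(ecc_from_axes, ecc_formula)                                  # both ~0.943
print(max(abs((x / a) ** 2 + (y / b) ** 2 - 1) for x, y in pts))   # ~0: points lie on the ellipse
```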
[ { "math_id": 0, "text": "\\begin{align}\n& x (\\theta) = (R - r)\\cos\\theta + d\\cos\\left({R - r \\over r}\\theta\\right) \\\\\n& y (\\theta) = (R - r)\\sin\\theta - d\\sin\\left({R - r \\over r}\\theta\\right)\n\\end{align}" }, { "math_id": 1, "text": "2 \\pi \\times \\tfrac{\\operatorname{LCM}(r, R)}{R}" }, { "math_id": 2, "text": "e=\\frac{2\\sqrt{d/r}}{1+(d/r)}" }, { "math_id": 3, "text": "d=r" } ]
https://en.wikipedia.org/wiki?curid=1547158
154725
Probability mass function
Discrete-variable probability distribution In probability and statistics, a probability mass function (sometimes called "probability function" or "frequency function") is a function that gives the probability that a discrete random variable is exactly equal to some value. Sometimes it is also known as the discrete probability density function. The probability mass function is often the primary means of defining a discrete probability distribution, and such functions exist for either scalar or multivariate random variables whose domain is discrete. A probability mass function differs from a probability density function (PDF) in that the latter is associated with continuous rather than discrete random variables. A PDF must be integrated over an interval to yield a probability. The value of the random variable having the largest probability mass is called the mode. Formal definition. Probability mass function is the probability distribution of a discrete random variable, and provides the possible values and their associated probabilities. It is the function formula_0 defined by formula_1 for formula_2, where formula_3 is a probability measure. formula_4 can also be simplified as formula_5. The probabilities associated with all (hypothetical) values must be non-negative and sum up to 1, formula_6 and formula_7 Thinking of probability as mass helps to avoid mistakes since the physical mass is conserved as is the total probability for all hypothetical outcomes formula_8. Measure theoretic formulation. A probability mass function of a discrete random variable formula_9 can be seen as a special case of two more general measure theoretic constructions: the distribution of formula_9 and the probability density function of formula_9 with respect to the counting measure. We make this more precise below. Suppose that formula_10 is a probability space and that formula_11 is a measurable space whose underlying σ-algebra is discrete, so in particular contains singleton sets of formula_12. In this setting, a random variable formula_13 is discrete provided its image is countable. The pushforward measure formula_14—called the distribution of formula_9 in this context—is a probability measure on formula_12 whose restriction to singleton sets induces the probability mass function (as mentioned in the previous section) formula_15 since formula_16 for each formula_17. Now suppose that formula_18 is a measure space equipped with the counting measure formula_19. The probability density function formula_20 of formula_9 with respect to the counting measure, if it exists, is the Radon–Nikodym derivative of the pushforward measure of formula_9 (with respect to the counting measure), so formula_21 and formula_20 is a function from formula_12 to the non-negative reals. As a consequence, for any formula_17 we have formula_22 demonstrating that formula_20 is in fact a probability mass function. When there is a natural order among the potential outcomes formula_8, it may be convenient to assign numerical values to them (or "n"-tuples in case of a discrete multivariate random variable) and to consider also values not in the image of formula_9. That is, formula_23 may be defined for all real numbers and formula_24 for all formula_25 as shown in the figure. The image of formula_9 has a countable subset on which the probability mass function formula_26 is one. Consequently, the probability mass function is zero for all but a countable number of values of formula_8. 
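As a concrete illustration of the definition (an addition, not part of the article), the sketch below stores a probability mass function as a mapping from values to probabilities, checks the non-negativity and normalization requirements, and returns zero for every value outside the countable support. The fair six-sided die is an assumed example.

```python
# Illustrative sketch (not part of the article): a discrete PMF stored as a mapping
# from values to probabilities, with a check of the two defining properties
# (non-negative masses that sum to 1). The fair six-sided die is an assumed example.
from fractions import Fraction

pmf = {k: Fraction(1, 6) for k in range(1, 7)}   # p_X(x) = 1/6 for x = 1..6

assert all(p >= 0 for p in pmf.values())
assert sum(pmf.values()) == 1

def p(x):
    """p_X(x): zero for every value outside the (countable) support."""
    return pmf.get(x, Fraction(0))

print(p(3), p(2.5))                      # 1/6  0
print(sum(p(k) for k in range(1, 7)))    # 1
```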
The discontinuity of probability mass functions is related to the fact that the cumulative distribution function of a discrete random variable is also discontinuous. If formula_9 is a discrete random variable, then formula_27 means that the event formula_28 is certain (it is true in 100% of the occurrences); on the contrary, formula_29 means that the event formula_28 is always impossible. This statement is not true for a continuous random variable formula_9, for which formula_29 for any possible formula_8. Discretization is the process of converting a continuous random variable into a discrete one. Examples. Finite. Three major distributions are associated with discrete random variables: the Bernoulli distribution, the binomial distribution, and the geometric distribution. Infinite. The following exponentially declining distribution is an example of a distribution with an infinite number of possible outcomes—all the positive integers: formula_37 Despite the infinite number of possible outcomes, the total probability mass is 1/2 + 1/4 + 1/8 + ⋯ = 1, satisfying the unit total probability requirement for a probability distribution. Multivariate case. Two or more discrete random variables have a joint probability mass function, which gives the probability of each possible combination of realizations for the random variables. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "p: \\R \\to [0,1]" }, { "math_id": 1, "text": "p_X(x) = P(X = x)" }, { "math_id": 2, "text": "-\\infin < x < \\infin" }, { "math_id": 3, "text": "P" }, { "math_id": 4, "text": "p_X(x)" }, { "math_id": 5, "text": "p(x)" }, { "math_id": 6, "text": "\\sum_x p_X(x) = 1 " }, { "math_id": 7, "text": " p_X(x)\\geq 0." }, { "math_id": 8, "text": "x" }, { "math_id": 9, "text": "X" }, { "math_id": 10, "text": "(A, \\mathcal A, P)" }, { "math_id": 11, "text": "(B, \\mathcal B)" }, { "math_id": 12, "text": "B" }, { "math_id": 13, "text": " X \\colon A \\to B" }, { "math_id": 14, "text": "X_{*}(P)" }, { "math_id": 15, "text": "f_X \\colon B \\to \\mathbb R" }, { "math_id": 16, "text": "f_X(b)=P( X^{-1}( b ))=P(X=b)" }, { "math_id": 17, "text": "b \\in B" }, { "math_id": 18, "text": "(B, \\mathcal B, \\mu)" }, { "math_id": 19, "text": "\\mu" }, { "math_id": 20, "text": "f" }, { "math_id": 21, "text": " f = d X_*P / d \\mu" }, { "math_id": 22, "text": "P(X=b)=P( X^{-1}( b) ) = X_*(P)(b) = \\int_{ b } f d \\mu = f(b)," }, { "math_id": 23, "text": "f_X" }, { "math_id": 24, "text": "f_X(x)=0" }, { "math_id": 25, "text": "x \\notin X(S)" }, { "math_id": 26, "text": "f_X(x)" }, { "math_id": 27, "text": " P(X = x) = 1" }, { "math_id": 28, "text": "(X = x)" }, { "math_id": 29, "text": "P(X = x) = 0" }, { "math_id": 30, "text": "p_X(x) = \\begin{cases}\np, & \\text{if }x\\text{ is 1} \\\\\n1-p, & \\text{if }x\\text{ is 0}\n\\end{cases}" }, { "math_id": 31, "text": "S" }, { "math_id": 32, "text": "p_X(x) = \\begin{cases}\n\\frac{1}{2}, &x = 0,\\\\\n\\frac{1}{2}, &x = 1,\\\\\n0, &x \\notin \\{0, 1\\}.\n\\end{cases}" }, { "math_id": 33, "text": "\\binom{n}{k} p^k (1-p)^{n-k}" }, { "math_id": 34, "text": "p_X(k) = (1-p)^{k-1} p" }, { "math_id": 35, "text": "p" }, { "math_id": 36, "text": "k" }, { "math_id": 37, "text": "\\text{Pr}(X=i)= \\frac{1}{2^i}\\qquad \\text{for } i=1, 2, 3, \\dots " } ]
https://en.wikipedia.org/wiki?curid=154725
1547292
Epitrochoid
Plane curve formed by rolling a circle on the outside of another In geometry, an epitrochoid ( or ) is a roulette traced by a point attached to a circle of radius r rolling around the outside of a fixed circle of radius R, where the point is at a distance d from the center of the exterior circle. The parametric equations for an epitrochoid are: formula_0 The parameter θ is geometrically the polar angle of the center of the exterior circle. (However, θ is not the polar angle of the point formula_1 on the epitrochoid.) Special cases include the limaçon with "R" = "r" and the epicycloid with "d" = "r". The classic Spirograph toy traces out epitrochoid and hypotrochoid curves. The paths of planets in the once popular geocentric system of deferents and epicycles are epitrochoids with formula_2 for both the outer planets and the inner planets. The orbit of the Moon, when centered around the Sun, approximates an epitrochoid. The combustion chamber of the Wankel engine is an epitrochoid.
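The parametric equations can be checked directly against the definition: every point of the curve lies at distance d from the center of the rolling circle. The sketch below is an added illustration with arbitrarily chosen R, r and d that verifies this numerically.

```python
# Illustrative sketch (not part of the article): sample the epitrochoid from the
# parametric equations above and confirm that every point stays at distance d
# from the centre of the rolling circle, as the definition requires. The values
# of R, r and d are assumptions chosen for illustration.
import math

R, r, d = 3.0, 1.0, 1.5

def epitrochoid(theta):
    x = (R + r) * math.cos(theta) - d * math.cos((R + r) / r * theta)
    y = (R + r) * math.sin(theta) - d * math.sin((R + r) / r * theta)
    return x, y

max_err = 0.0
for k in range(720):
    theta = 2 * math.pi * k / 720
    x, y = epitrochoid(theta)
    cx, cy = (R + r) * math.cos(theta), (R + r) * math.sin(theta)  # rolling-circle centre
    max_err = max(max_err, abs(math.hypot(x - cx, y - cy) - d))
print(max_err)   # ~1e-16: the tracing point is always a distance d from the centre
```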
[ { "math_id": 0, "text": "\\begin{align}\n& x (\\theta) = (R + r)\\cos\\theta - d\\cos\\left({R + r \\over r}\\theta\\right) \\\\\n& y (\\theta) = (R + r)\\sin\\theta - d\\sin\\left({R + r \\over r}\\theta\\right)\n\\end{align}" }, { "math_id": 1, "text": "(x(\\theta),y(\\theta))" }, { "math_id": 2, "text": "d>r," } ]
https://en.wikipedia.org/wiki?curid=1547292
1547359
Injective sheaf
Mathematical object in sheaf cohomology In mathematics, injective sheaves of abelian groups are used to construct the resolutions needed to define sheaf cohomology (and other derived functors, such as sheaf Ext). There is a further group of related concepts applied to sheaves: flabby ("flasque" in French), fine, soft ("mou" in French), acyclic. In the history of the subject they were introduced before the 1957 "Tohoku paper" of Alexander Grothendieck, which showed that the abelian category notion of "injective object" sufficed to found the theory. The other classes of sheaves are historically older notions. The abstract framework for defining cohomology and derived functors does not need them. However, in most concrete situations, resolutions by acyclic sheaves are often easier to construct. Acyclic sheaves therefore serve for computational purposes, for example the Leray spectral sequence. Injective sheaves. An injective sheaf formula_0 is a sheaf that is an injective object of the category of abelian sheaves; in other words, homomorphisms from formula_1 to formula_0 can always be extended to any sheaf formula_2 containing formula_3 The category of abelian sheaves has enough injective objects: this means that any sheaf is a subsheaf of an injective sheaf. This result of Grothendieck follows from the existence of a "generator" of the category (it can be written down explicitly, and is related to the subobject classifier). This is enough to show that right derived functors of any left exact functor exist and are unique up to canonical isomorphism. For technical purposes, injective sheaves are usually superior to the other classes of sheaves mentioned above: they can do almost anything the other classes can do, and their theory is simpler and more general. In fact, injective sheaves are flabby ("flasque"), soft, and acyclic. However, there are situations where the other classes of sheaves occur naturally, and this is especially true in concrete computational situations. The dual concept, projective sheaves, is not used much, because in a general category of sheaves there are not enough of them: not every sheaf is the quotient of a projective sheaf, and in particular projective resolutions do not always exist. This is the case, for example, when looking at the category of sheaves on projective space in the Zariski topology. This causes problems when attempting to define left derived functors of a right exact functor (such as Tor). This can sometimes be done by ad hoc means: for example, the left derived functors of Tor can be defined using a flat resolution rather than a projective one, but it takes some work to show that this is independent of the resolution. Not all categories of sheaves run into this problem; for instance, the category of sheaves on an affine scheme contains enough projectives. Acyclic sheaves. An acyclic sheaf formula_0 over "X" is one such that all higher sheaf cohomology groups vanish. The cohomology groups of any sheaf can be calculated from any acyclic resolution of it (this goes by the name of De Rham-Weil theorem). Fine sheaves. A fine sheaf over "X" is one with "partitions of unity"; more precisely for any open cover of the space "X" we can find a family of homomorphisms from the sheaf to itself with sum 1 such that each homomorphism is 0 outside some element of the open cover. Fine sheaves are usually only used over paracompact Hausdorff spaces "X". 
Typical examples are the sheaf of germs of continuous real-valued functions over such a space, or smooth functions over a smooth (paracompact Hausdorff) manifold, or modules over these sheaves of rings. Also, fine sheaves over paracompact Hausdorff spaces are soft and acyclic. One can find a resolution of a sheaf on a smooth manifold by fine sheaves using the Alexander–Spanier resolution. As an application, consider a real manifold "X". There is the following resolution of the constant sheaf formula_4 by the fine sheaves of (smooth) differential forms: formula_5 This is a resolution, i.e. an exact complex of sheaves by the Poincaré lemma. The cohomology of "X" with values in formula_4 can thus be computed as the cohomology of the complex of globally defined differential forms: formula_6 Soft sheaves. A soft sheaf formula_0 over "X" is one such that any section over any closed subset of "X" can be extended to a global section. Soft sheaves are acyclic over paracompact Hausdorff spaces. Flasque or flabby sheaves. A flasque sheaf (also called a flabby sheaf) is a sheaf formula_0 with the following property: if formula_7 is the base topological space on which the sheaf is defined and formula_8 are open subsets, then the restriction map formula_9 is surjective, as a map of groups (rings, modules, etc.). Flasque sheaves are useful because (by definition) their sections extend. This means that they are some of the simplest sheaves to handle in terms of homological algebra. Any sheaf has a canonical embedding into the flasque sheaf of all possibly discontinuous sections of the étalé space, and by repeating this we can find a canonical flasque resolution for any sheaf. Flasque resolutions, that is, resolutions by means of flasque sheaves, are one approach to defining sheaf cohomology. Flasque sheaves are soft and acyclic. "Flasque" is a French word that has sometimes been translated into English as "flabby". References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "\\mathcal{F}" }, { "math_id": 1, "text": "\\mathcal{A}" }, { "math_id": 2, "text": "\\mathcal{B}" }, { "math_id": 3, "text": "\\mathcal{A}." }, { "math_id": 4, "text": "\\R" }, { "math_id": 5, "text": "0\\to\\R\\to C^0_X \\to C^1_X \\to \\cdots \\to C^{\\dim X}_X \\to 0." }, { "math_id": 6, "text": "H^i(X,\\R) = H^i(C^\\bullet_X(X))." }, { "math_id": 7, "text": "X" }, { "math_id": 8, "text": "U \\subseteq V \\subseteq X" }, { "math_id": 9, "text": "r_{U \\subseteq V} : \\Gamma(V, \\mathcal{F}) \\to \\Gamma(U, \\mathcal{F})" } ]
https://en.wikipedia.org/wiki?curid=1547359
1547360
Villarceau circles
Intersection of a torus and a plane In geometry, Villarceau circles () are a pair of circles produced by cutting a torus obliquely through its center at a special angle. Given an arbitrary point on a torus, four circles can be drawn through it. One is in a plane parallel to the equatorial plane of the torus and another perpendicular to that plane (these are analogous to lines of latitude and longitude on the Earth). The other two are Villarceau circles. They are obtained as the intersection of the torus with a plane that passes through the center of the torus and touches it tangentially at two antipodal points. If one considers all these planes, one obtains two families of circles on the torus. Each of these families consists of disjoint circles that cover each point of the torus exactly once and thus forms a 1-dimensional foliation of the torus. The Villarceau circles are named after the French astronomer and mathematician Yvon Villarceau (1813–1883) who wrote about them in 1848. Example. Consider a horizontal torus in "xyz" space, centered at the origin and with major radius 5 and minor radius 3. That means that the torus is the locus of some vertical circles of radius three whose centers are on a circle of radius five in the horizontal "xy" plane. Points on this torus satisfy this equation: formula_0 Slicing with the "z" = 0 plane produces two concentric circles, "x"2 + "y"2 = 22 and "x"2 + "y"2 = 82, the outer and inner equator. Slicing with the "x" = 0 plane produces two side-by-side circles, ("y" − 5)2 + "z"2 = 32 and ("y" + 5)2 + "z"2 = 32. Two example Villarceau circles can be produced by slicing with the plane 3"y" = 4"z". One is centered at (+3, 0, 0) and the other at (−3, 0, 0); both have radius five. They can be written in parametric form as formula_1 and formula_2 The slicing plane is chosen to be tangent to the torus at two points while passing through its center. It is tangent at (0, 16/5, 12/5) and at (0, -16/5, -12/5). The angle of slicing is uniquely determined by the dimensions of the chosen torus. Rotating any one such plane around the "z"-axis gives all of the Villarceau circles for that torus. Existence and equations. A proof of the circles’ existence can be constructed from the fact that the slicing plane is tangent to the torus at two points. One characterization of a torus is that it is a surface of revolution. Without loss of generality, choose a coordinate system so that the axis of revolution is the "z" axis. [See the figure to the right.] Begin with a circle of radius "r" in the "yz" plane, centered at (0,"R", 0): formula_3 Sweeping this circle around the "z"-axis replaces "y" by ("x"2 + "y"2)1/2, and clearing the square root produces a quartic equation for the torus: formula_4 The cross-section of the swept surface in the "yz" plane now includes a second circle, with equation formula_5 This pair of circles has two common internal tangent lines, with slope at the origin found from the right triangle with hypotenuse "R" and opposite side "r" (which has its right angle at the point of tangency). 
Thus, on these tangent lines, "z"/"y" equals ±"r" / ("R"2 − "r"2)1/2, and choosing the plus sign produces the equation of a plane bitangent to the torus: formula_6 We can calculate the intersection of this plane with the torus analytically, and thus show that the result is a symmetric pair of circles of radius "R" centered at formula_7 A parametric description of these circles is formula_8 These circles can also be obtained by starting with a circle of radius "R" in the "xy"-plane, centered at ("r",0,0) or (-"r",0,0), and then rotating this circle about the "x"-axis by an angle of arcsin("r"/"R"). A treatment along these lines can be found in Coxeter (1969). A more abstract — and more flexible — approach was described by Hirsch (2002), using algebraic geometry in a projective setting. In the homogeneous quartic equation for the torus, formula_9 setting "w" to zero gives the intersection with the “plane at infinity”, and reduces the equation to formula_10 This intersection is a double point, in fact a double point counted twice. Furthermore, it is included in every bitangent plane. The two points of tangency are also double points. Thus the intersection curve, which theory says must be a quartic, contains four double points. But we also know that a quartic with more than three double points must factor (it cannot be irreducible), and by symmetry the factors must be two congruent conics, which are the two Villarceau circles. Hirsch extends this argument to "any" surface of revolution generated by a conic, and shows that intersection with a bitangent plane must produce two conics of the same type as the generator when the intersection curve is real. Filling space and the Hopf fibration. The torus plays a central role in the Hopf fibration of the 3-sphere, "S"3, over the ordinary sphere, "S"2, which has circles, "S"1, as fibers. When the 3-sphere is mapped to Euclidean 3-space by stereographic projection, the inverse image of a circle of latitude on "S"2 under the fiber map is a torus, and the fibers themselves are Villarceau circles. Banchoff has explored such a torus with computer graphics imagery. One of the unusual facts about the circles making up the Hopf fibration is that each links through all the others, not just through the circles in its own torus but through the circles making up all the tori filling all of space; Berger has a discussion and drawing. Further properties. Mannheim (1903) showed that the Villarceau circles meet all of the parallel circular cross-sections of the torus at the same angle, a result that he said a Colonel Schoelcher had presented at a congress in 1891. Citations. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": " 0 = (x^2+y^2+z^2 + 16)^2 - 100(x^2+y^2). \\,\\! " }, { "math_id": 1, "text": " (x,y,z) = (+3+5 \\cos \\vartheta, 4 \\sin \\vartheta, 3 \\sin \\vartheta) \\,\\!" }, { "math_id": 2, "text": " (x,y,z) = (-3+5 \\cos \\vartheta, 4 \\sin \\vartheta, 3 \\sin \\vartheta) \\,\\!" }, { "math_id": 3, "text": " 0 = (y-R)^2 + z^2 - r^2 \\,\\!" }, { "math_id": 4, "text": " 0 = (x^2+y^2+z^2 + R^2 - r^2)^2 - 4R^2(x^2+y^2) . \\,\\!" }, { "math_id": 5, "text": " 0 = (y+R)^2 + z^2 - r^2 \\,\\!" }, { "math_id": 6, "text": " y r = z\\sqrt{R^2-r^2} \\,\\!" }, { "math_id": 7, "text": " (\\pm r, 0, 0)." }, { "math_id": 8, "text": " (x,y,z) = \\big(\\pm r + R \\cos \\vartheta,\\ \\sqrt{R^2-r^2}\\; \\sin\\vartheta,\\ r \\sin\\vartheta \\big) \\,\\!" }, { "math_id": 9, "text": " 0 = (x^2+y^2+z^2 + R^2w^2 - r^2w^2)^2 - 4R^2w^2(x^2+y^2) , \\,\\!" }, { "math_id": 10, "text": " 0 = (x^2+y^2+z^2)^2 . \\,\\!" } ]
https://en.wikipedia.org/wiki?curid=1547360
1548091
Rafael Bombelli
16th century Italian mathematician Rafael Bombelli (baptised on 20 January 1526; died 1572) was an Italian mathematician. Born in Bologna, he is the author of a treatise on algebra and is a central figure in the understanding of imaginary numbers. He was the one who finally managed to address the problem with imaginary numbers. In his 1572 book, "L'Algebra", Bombelli solved equations using the method of del Ferro/Tartaglia. He introduced the rhetoric that preceded the representative symbols +"i" and -"i" and described how they both worked. Life. Rafael Bombelli was baptised on 20 January 1526 in Bologna, Papal States. He was born to Antonio Mazzoli, a wool merchant, and Diamante Scudieri, a tailor's daughter. The Mazzoli family was once quite powerful in Bologna. When Pope Julius II came to power, in 1506, he exiled the ruling family, the Bentivoglios. The Bentivoglio family attempted to retake Bologna in 1508, but failed. Rafael's grandfather participated in the coup attempt, and was captured and executed. Later, Antonio was able to return to Bologna, having changed his surname to Bombelli to escape the reputation of the Mazzoli family. Rafael was the oldest of six children. Rafael received no college education, but was instead taught by an engineer-architect by the name of Pier Francesco Clementi. Bombelli felt that none of the works on algebra by the leading mathematicians of his day provided a careful and thorough exposition of the subject. Instead of another convoluted treatise that only mathematicians could comprehend, Rafael decided to write a book on algebra that could be understood by anyone. His text would be self-contained and easily read by those without higher education. Bombelli died in 1572 in Rome. Bombelli's "Algebra". In the book that was published in 1572, entitled "Algebra", Bombelli gave a comprehensive account of the algebra known at the time. He was the first European to write down the way of performing computations with negative numbers. The following is an excerpt from the text: "Plus times plus makes plus Minus times minus makes plus Plus times minus makes minus Minus times plus makes minus Plus 8 times plus 8 makes plus 64 Minus 5 times minus 6 makes plus 30 Minus 4 times plus 5 makes minus 20 Plus 5 times minus 4 makes minus 20" As was intended, Bombelli used simple language as can be seen above so that anybody could understand it. But at the same time, he was thorough. Notation. Bombelli introduced, for the first time in a printed text (in Book II of his Algebra), a form of index notation in which the equation formula_0 appeared as 1U3 a. 6U1 p. 40. in which he wrote the U3 as a raised bowl-shape (like the curved part of the capital letter U) with the number 3 above it. Full symbolic notation was developed shortly thereafter by the French mathematician François Viète. Complex numbers. Perhaps more importantly than his work with algebra, however, the book also includes Bombelli's monumental contributions to complex number theory. Before he writes about complex numbers, he points out that they occur in solutions of equations of the form formula_1 given that formula_2 which is another way of stating that the discriminant of the cubic is negative. The solution of this kind of equation requires taking the cube root of the sum of one number and the square root of some negative number. Before Bombelli delves into using imaginary numbers practically, he goes into a detailed explanation of the properties of complex numbers. 
Right away, he makes it clear that the rules of arithmetic for imaginary numbers are not the same as for real numbers. This was a significant step, as even many later mathematicians remained confused on the topic. Bombelli avoided confusion by giving a special name to square roots of negative numbers, instead of trying to treat them as ordinary radicals as other mathematicians did. This made it clear that these numbers were neither positive nor negative, and a system of this kind avoids the notational confusion that later tripped up even Euler. Bombelli called the imaginary number "i" "plus of minus" and used "minus of minus" for -"i". Bombelli had the foresight to see that imaginary numbers were crucial and necessary to solving quartic and cubic equations. At the time, complex numbers mattered only as tools for solving practical equations. Bombelli was thus able to obtain solutions using Scipione del Ferro's rule, even in the casus irreducibilis, where other mathematicians such as Cardano had given up. In his book, Bombelli explains complex arithmetic as follows: "Plus by plus of minus, makes plus of minus. Minus by plus of minus, makes minus of minus. Plus by minus of minus, makes minus of minus. Minus by minus of minus, makes plus of minus. Plus of minus by plus of minus, makes minus. Plus of minus by minus of minus, makes plus. Minus of minus by plus of minus, makes plus. Minus of minus by minus of minus makes minus." After dealing with the multiplication of real and imaginary numbers, Bombelli goes on to discuss the rules of addition and subtraction. He is careful to point out that real parts add to real parts, and imaginary parts add to imaginary parts. Reputation. Bombelli is generally regarded as the inventor of complex numbers, as no one before him had made rules for dealing with such numbers, and no one believed that working with imaginary numbers would have useful results. Upon reading Bombelli's "Algebra", Leibniz praised Bombelli as an ". . . outstanding master of the analytical art." Crossley writes in his book, "Thus we have an engineer, Bombelli, making practical use of complex numbers perhaps because they gave him useful results, while Cardan found the square roots of negative numbers useless. Bombelli is the first to give a treatment of any complex numbers. . . It is remarkable how thorough he is in his presentation of the laws of calculation of complex numbers. . ." In honor of his accomplishments, a Moon crater was named Bombelli. Bombelli's method of calculating square roots. Bombelli used a method related to continued fractions to calculate square roots. He did not yet have the concept of a continued fraction, and below is the algorithm of a later version given by Pietro Cataldi (1613). The method for finding formula_3 begins with formula_4 with formula_5, from which it can be shown that formula_6. Repeated substitution of the expression on the right-hand side for formula_7 into itself yields a continued fraction formula_8 for the root, but Bombelli is more concerned with better approximations for formula_7. The value chosen for formula_9 is either of the two consecutive whole numbers whose squares formula_10 lies between. The method gives the following convergents for formula_11, while the actual value is 3.605551275...: formula_12 The last convergent equals 3.605550883... . Bombelli's method should be compared with formulas and results used by Heron of Alexandria and Archimedes. 
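Before turning to that comparison, the recursion is easy to check numerically. The following Python sketch (a modern reconstruction for illustration, not anything found in Bombelli's or Cataldi's texts) iterates r = |n − a²|/(2a + r) with exact fractions and prints the successive values of a + r; for n = 13 and a = 3 the output matches the convergents listed above.

```python
from fractions import Fraction

def bombelli_convergents(n, a, steps):
    """Successive approximations a + r to sqrt(n), with r = |n - a**2| / (2*a + r)."""
    c = abs(n - a * a)
    r = Fraction(0)                 # starting from r = 0 reproduces the convergents quoted above
    approximations = []
    for _ in range(steps):
        r = Fraction(c) / (2 * a + r)
        approximations.append(a + r)
    return approximations

for x in bombelli_convergents(13, 3, 6):
    print(x, float(x))
# 11/3, 18/5, 119/33, 393/109, 649/180, 4287/1189 -- the last equals 3.605550883...
```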
The result formula_13 used by Archimedes in his determination of the value of formula_14 can be found by using 1 and 0 for the initial values of formula_7. References. Footnotes. &lt;templatestyles src="Reflist/styles.css" /&gt; Citations. &lt;templatestyles src="Reflist/styles.css" /&gt; Sources. https://www.people.iup.edu/gsstoudt/history/bombelli/bombelli.pdf
[ { "math_id": 0, "text": "x^3 = 6x + 40" }, { "math_id": 1, "text": "x^3 = ax + b," }, { "math_id": 2, "text": "(a/3)^3 > (b/2)^2," }, { "math_id": 3, "text": " \\sqrt{n} " }, { "math_id": 4, "text": " n=(a\\pm r)^2=a^2\\pm 2ar+r^2\\ " }, { "math_id": 5, "text": " 0<r<1\\ " }, { "math_id": 6, "text": " r=\\frac{|n-a^2|}{2a\\pm r}" }, { "math_id": 7, "text": "r" }, { "math_id": 8, "text": "a\\pm \\frac{|n-a^2|}{2a\\pm \\frac{|n-a^2|}{2a\\pm \\frac{|n-a^2|}{2a\\pm \\cdots }}}" }, { "math_id": 9, "text": "a" }, { "math_id": 10, "text": "n" }, { "math_id": 11, "text": "\\sqrt{13}\\ " }, { "math_id": 12, "text": " 3\\frac{2}{3},\\ 3\\frac{3}{5},\\ 3\\frac{20}{33},\\ 3\\frac{66}{109},\\ 3\\frac{109}{180},\\ 3\\frac{720}{1189},\\ \\cdots" }, { "math_id": 13, "text": "\\frac{265}{153}<\\sqrt{3}<\\frac{1351}{780}" }, { "math_id": 14, "text": "\\pi" } ]
https://en.wikipedia.org/wiki?curid=1548091
1548113
Ambient occlusion
Computer graphics shading and rendering technique In 3D computer graphics, modeling, and animation, ambient occlusion is a shading and rendering technique used to calculate how exposed each point in a scene is to ambient lighting. For example, the interior of a tube is typically more occluded (and hence darker) than the exposed outer surfaces, and becomes darker the deeper inside the tube one goes. Ambient occlusion can be seen as an accessibility value that is calculated for each surface point. In scenes with open sky, this is done by estimating the amount of visible sky for each point, while in indoor environments, only objects within a certain radius are taken into account and the walls are assumed to be the origin of the ambient light. The result is a diffuse, non-directional shading effect that casts no clear shadows, but that darkens enclosed and sheltered areas and can affect the rendered image's overall tone. It is often used as a post-processing effect. Unlike local methods such as Phong shading, ambient occlusion is a global method, meaning that the illumination at each point is a function of other geometry in the scene. However, it is a very crude approximation to full global illumination. The appearance achieved by ambient occlusion alone is similar to the way an object might appear on an overcast day. The first method that allowed simulating ambient occlusion in real time was developed by the research and development department of Crytek (CryEngine 2). With the release of hardware capable of real time ray tracing (GeForce 20 series) by Nvidia in 2018, ray traced ambient occlusion (RTAO) became possible in games and other real time applications. This feature was added to the Unreal Engine with version 4.22. Implementation. In the absence of hardware-assisted ray traced ambient occlusion, real-time applications such as computer games can use screen space ambient occlusion (SSAO) techniques such as horizon-based ambient occlusion including HBAO and ground-truth ambient occlusion (GTAO) as a faster approximation of true ambient occlusion, using per-pixel depth, rather than scene geometry, to form an ambient occlusion map. Ambient occlusion is related to accessibility shading, which determines appearance based on how easy it is for a surface to be touched by various elements (e.g., dirt, light, etc.). It has been popularized in production animation due to its relative simplicity and efficiency. The ambient occlusion shading model offers a better perception of the 3D shape of the displayed objects. This was shown in a paper where the authors report the results of perceptual experiments showing that depth discrimination under diffuse uniform sky lighting is superior to that predicted by a direct lighting model. The occlusion formula_0 at a point formula_1 on a surface with normal formula_2 can be computed by integrating the visibility function over the hemisphere formula_3 with respect to projected solid angle: formula_4 where formula_5 is the visibility function at formula_1, defined to be zero if formula_1 is occluded in the direction formula_6 and one otherwise, and formula_7 is the infinitesimal solid angle step of the integration variable formula_6. A variety of techniques are used to approximate this integral in practice: perhaps the most straightforward way is to use the Monte Carlo method by casting rays from the point formula_1 and testing for intersection with other scene geometry (i.e., ray casting). 
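To make that ray-casting estimator concrete, the following Python sketch (an illustrative reconstruction, not code from any particular renderer) draws cosine-weighted sample directions, so that the cosine weight and the 1/π factor cancel against the sampling density and the estimate reduces to the average of the visibility function formula_5, i.e. the fraction of unoccluded rays. The occluded(p, d) predicate is assumed to be supplied by scene-specific ray–geometry intersection code and is only a placeholder here.

```python
import math, random

def cross(a, b):
    return (a[1]*b[2] - a[2]*b[1], a[2]*b[0] - a[0]*b[2], a[0]*b[1] - a[1]*b[0])

def normalize(v):
    length = math.sqrt(sum(c * c for c in v))
    return tuple(c / length for c in v)

def sample_cosine_hemisphere(normal):
    """Random direction about the unit normal, with probability density cos(theta)/pi."""
    u1, u2 = random.random(), random.random()
    r, phi = math.sqrt(u1), 2.0 * math.pi * u2
    local = (r * math.cos(phi), r * math.sin(phi), math.sqrt(max(0.0, 1.0 - u1)))
    # Build an orthonormal basis (t, b, normal) and rotate the local sample into it.
    helper = (1.0, 0.0, 0.0) if abs(normal[0]) < 0.9 else (0.0, 1.0, 0.0)
    t = normalize(cross(helper, normal))
    b = cross(normal, t)
    return tuple(local[0]*t[i] + local[1]*b[i] + local[2]*normal[i] for i in range(3))

def ambient_occlusion(p, normal, occluded, n_samples=256):
    """Monte Carlo accessibility estimate: 1.0 = fully open, 0.0 = fully occluded.

    `occluded(p, d)` must return True when a ray from p in direction d hits scene geometry.
    """
    visible = sum(not occluded(p, sample_cosine_hemisphere(normal)) for _ in range(n_samples))
    return visible / n_samples
```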
Another approach (more suited to hardware acceleration) is to render the view from formula_1 by rasterizing black geometry against a white background and taking the (cosine-weighted) average of rasterized fragments. This approach is an example of a "gathering" or "inside-out" approach, whereas other algorithms (such as depth-map ambient occlusion) employ "scattering" or "outside-in" techniques. In addition to the ambient occlusion value, a "bent normal" vector formula_8 is often generated, which points in the average direction of the unoccluded samples. The bent normal can be used to look up incident radiance from an environment map to approximate image-based lighting. However, there are some situations in which the direction of the bent normal misrepresents the dominant direction of illumination, e.g., when the only open directions lie to either side of an overhead obstruction. In that case light may reach the point p only from the left or right sides, but the bent normal points to the average of those two directions, which is directly toward the obstruction. Recognition. In 2010, Hayden Landis, Ken McGaugh and Hilmar Koch were awarded a Scientific and Technical Academy Award for their work on ambient occlusion rendering.
[ { "math_id": 0, "text": "A_\\bar p" }, { "math_id": 1, "text": "\\bar p" }, { "math_id": 2, "text": "\\hat n" }, { "math_id": 3, "text": "\\Omega" }, { "math_id": 4, "text": "\nA_\\bar p = \\frac{1}{\\pi} \\int_{\\Omega} V_{\\bar p,\\hat\\omega} (\\hat n \\cdot \\hat\\omega ) \\, \\operatorname{d}\\omega\n" }, { "math_id": 5, "text": "V_{\\bar p,\\hat\\omega}" }, { "math_id": 6, "text": "\\hat\\omega" }, { "math_id": 7, "text": "\\operatorname{d}\\omega" }, { "math_id": 8, "text": "\\hat{n}_b" } ]
https://en.wikipedia.org/wiki?curid=1548113
1548123
Fractional-order integrator
A fractional-order integrator or just simply fractional integrator is an integrator device that calculates the fractional-order integral or derivative (usually called a differintegral) of an input. The order of differentiation or integration is a real or complex parameter rather than an integer. The fractional integrator is useful in fractional-order control, where the history of the system under control is important to the control system output. Overview. The differintegral function, formula_0 includes the integer order differentiation and integration functions, and allows a continuous range of functions around them. The differintegral parameters are "a", "t", and "q". The parameters "a" and "t" describe the range over which to compute the result. The differintegral parameter "q" may be any real number or complex number. If "q" is greater than zero, the differintegral computes a derivative. If "q" is less than zero, the differintegral computes an integral. The integer order integration can be computed as a Riemann–Liouville differintegral, where the weight of each element in the sum is the constant unit value 1, which is equivalent to the Riemann sum. To compute an integer order derivative, the weights in the summation would be zero, with the exception of the most recent data points, where (in the case of the first unit derivative) the weight of the data point at "t" − 1 is −1 and the weight of the data point at "t" is 1. The sum of the points in the input function using these weights results in the difference of the most recent data points. These weights are computed using ratios of the Gamma function incorporating the number of data points in the range ["a","t"], and the parameter "q". Digital devices. Digital devices have the advantage of being versatile, and are not susceptible to unexpected output variation due to heat or noise. The discrete nature of a computer, however, does not allow for all of history to be retained. Some finite range [a,t] must exist. Therefore, the number of data points that can be stored in memory ("N") determines the oldest data point in memory, so that the value a is never more than "N" samples old. The effect is that any history older than a is "completely" forgotten, and no longer influences the output. A solution to this problem is the Coopmans approximation, which allows old data to be forgotten more gracefully (though still with exponential decay, rather than with the power-law decay of a purely analog device). Analog devices. Analog devices have the ability to retain history over longer intervals. This translates into the parameter a staying constant, while "t" increases. There is no error due to round-off, as in the case of digital devices, but there may be error in the device due to leakages, and also unexpected variations in behavior caused by heat and noise. An example fractional-order integrator is a modification of the standard integrator circuit, where a capacitor is used as the feedback impedance on an opamp. By replacing the capacitor with an RC ladder circuit, a half-order integrator, that is, one with formula_1 can be constructed. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
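As a supplement to the "Digital devices" description above, the following Python sketch implements the Grünwald–Letnikov form of the differintegral, one standard discrete scheme whose weights (ratios of Gamma-function values, generated below by a simple recursion) reduce exactly to the integer-order cases described above: all-ones weights for a first-order integral and a 1, −1 pair for a first derivative. It is an illustrative sketch under those assumptions, not a complete fractional-order toolbox.

```python
import numpy as np

def gl_differintegral(samples, q, h):
    """Grünwald–Letnikov approximation of the differintegral of order q at the newest sample.

    samples : f(a), f(a+h), ..., f(t), oldest first
    q       : order (q > 0 differentiates, q < 0 integrates)
    h       : sampling step
    """
    N = len(samples)
    w = np.empty(N)
    w[0] = 1.0
    for j in range(1, N):              # w[j] = (-1)**j * binomial(q, j), built recursively
        w[j] = w[j - 1] * (1.0 - (q + 1.0) / j)
    # w[0] weights the newest sample, w[N-1] the oldest one
    return h ** (-q) * np.dot(w, samples[::-1])

t = np.arange(0.0, 1.0 + 1e-9, 0.01)      # f(t) = t sampled on [0, 1]
f = t
print(gl_differintegral(f, -1.0, 0.01))   # ~0.5: all weights are 1 (a Riemann sum), as stated above
print(gl_differintegral(f,  1.0, 0.01))   # ~1.0: weights are 1, -1, 0, 0, ... (a first difference)
print(gl_differintegral(f, -0.5, 0.01))   # ~0.75: half-order integral of t evaluated at t = 1
```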
[ { "math_id": 0, "text": "{}_a \\mathbb{D}^q_t \\left( f(x) \\right)" }, { "math_id": 1, "text": "q = -\\frac{1}{2}," } ]
https://en.wikipedia.org/wiki?curid=1548123
15482790
Receivables turnover ratio
Receivable turnover ratio or debtor's turnover ratio is an accounting measure used to quantify how effective a company is in extending credit and collecting debts. The receivables turnover ratio is an activity ratio, measuring how efficiently a firm uses its assets. Formula: formula_0 A high ratio implies either that a company operates on a cash basis or that its extension of credit and collection of accounts receivable is efficient. A low ratio, by contrast, implies that the company is slow to collect payment on its credit sales. Whether a given accounts receivable turnover is good depends on how quickly the business collects what it is owed relative to its own credit terms. For instance, under a 30-day payment policy, if customers on average take 46 days to pay, the accounts receivable turnover is low. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
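As a simple numerical illustration of the formula above (the figures are hypothetical and chosen only for the example), dividing 365 days by the resulting turnover converts it into the average number of days taken to collect, the form used in the 30-day/46-day comparison above:

```python
# Hypothetical figures, used only to illustrate the ratio defined above.
net_credit_sales = 500_000       # net credit sales for the year
avg_net_receivables = 50_000     # average net accounts receivable over the same year

receivable_turnover = net_credit_sales / avg_net_receivables   # = 10.0
days_to_collect = 365 / receivable_turnover                    # = 36.5 days on average

print(receivable_turnover, days_to_collect)
```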
[ { "math_id": 0, "text": "\\mathrm{Receivable\\ turnover\\ ratio} = {\\mathrm{Net\\ receivable\\ sales}\\over\\mathrm{Average\\ net\\ receivables}}" } ]
https://en.wikipedia.org/wiki?curid=15482790
15483485
R+
R+ or R Plus may refer to: Topics referred to by the same term &lt;templatestyles src="Dmbox/styles.css" /&gt; This disambiguation page lists articles associated with the title R+.
[ { "math_id": 0, "text": "\\mathbb{R}^{+}" } ]
https://en.wikipedia.org/wiki?curid=15483485
15483838
Borel determinacy theorem
Theorem in descriptive set theory In descriptive set theory, the Borel determinacy theorem states that any Gale–Stewart game whose payoff set is a Borel set is determined, meaning that one of the two players will have a winning strategy for the game. A Gale–Stewart game is a possibly infinite two-player game, where both players have perfect information and no randomness is involved. The theorem is a far reaching generalization of Zermelo's theorem about the determinacy of finite games. It was proved by Donald A. Martin in 1975, and is applied in descriptive set theory to show that Borel sets in Polish spaces have regularity properties such as the perfect set property. The theorem is also known for its metamathematical properties. In 1971, before the theorem was proved, Harvey Friedman showed that any proof of the theorem in Zermelo–Fraenkel set theory must make repeated use of the axiom of replacement. Later results showed that stronger determinacy theorems cannot be proven in Zermelo–Fraenkel set theory, although they are relatively consistent with it, if certain large cardinals are consistent. Background. Gale–Stewart games. A Gale–Stewart game is a two-player game of perfect information. The game is defined using a set "A", and is denoted "G""A". The two players alternate turns, and each player is aware of all moves before making the next one. On each turn, each player chooses a single element of "A" to play. The same element may be chosen more than once without restriction. The game can be visualized through the following diagram, in which the moves are made from left to right, with the moves of player I above and the moves of player II below. formula_0 The play continues without end, so that a single play of the game determines an infinite sequence formula_1 of elements of "A". The set of all such sequences is denoted "A"ω. The players are aware, from the beginning of the game, of a fixed payoff set (a.k.a. "winning set") that will determine who wins. The payoff set is a subset of "A"ω. If the infinite sequence created by a play of the game is in the payoff set, then player I wins. Otherwise, player II wins; there are no ties. This definition initially does not seem to include traditional perfect information games such as chess, since the set of moves available in such games changes every turn. However, this sort of case can be handled by declaring that a player who makes an illegal move loses immediately, so that the Gale–Stewart notion of a game does in fact generalize the concept of a game defined by a game tree. Winning strategies. A winning strategy for a player is a function that tells the player what move to make from any position in the game, such that if the player follows the function they will surely win. More specifically, a winning strategy for player I is a function "f" that takes as input sequences of elements of A of even length and returns an element of "A", such that player I will win every play of the form formula_2 A winning strategy for player II is a function "g" that takes odd-length sequences of elements of "A" and returns elements of "A", such that player II will win every play of the form formula_3 At most one player can have a winning strategy; if both players had winning strategies, and played the strategies against each other, only one of the two strategies could win that play of the game. If one of the players has a winning strategy for a particular payoff set, that payoff set is said to be determined. Topology. 
For a given set "A", whether a subset of "A"ω will be determined depends to some extent on its topological structure. For the purposes of Gale–Stewart games, the set "A" is endowed with the discrete topology, and "A"ω endowed with the resulting product topology, where "A"ω is viewed as a countably infinite topological product of "A" with itself. In particular, when "A" is the set {0,1}, the topology defined on "A"ω is exactly the ordinary topology on Cantor space, and when "A" is the set of natural numbers, it is the ordinary topology on Baire space. The set "A"ω can be viewed as the set of paths through a certain tree, which leads to a second characterization of its topology. The tree consists of all finite sequences of elements of "A", and the children of a particular node σ of the tree are exactly the sequences that extend σ by one element. Thus if "A" = { 0, 1 }, the first level of the tree consists of the sequences 〈 0 〉 and 〈 1 〉; the second level consists of the four sequences 〈 0, 0 〉, 〈 0, 1 〉, 〈 1, 0 〉, 〈 1, 1 〉; and so on. For each of the finite sequences σ in the tree, the set of all elements of "A"ω that begin with σ is a basic open set in the topology on "A". The open sets of "A"ω are precisely the sets expressible as unions of these basic open sets. The closed sets, as usual, are those whose complement is open. The Borel sets of "A"ω are the smallest class of subsets of "A"ω that includes the open sets and is closed under complement and countable union. That is, the Borel sets are the smallest σ-algebra of subsets of "A"ω containing all the open sets. The Borel sets are classified in the Borel hierarchy based on how many times the operations of complement and countable union are required to produce them from open sets. Previous results. Gale and Stewart (1953) proved that if the payoff set is an open or closed subset of "A"ω then the Gale–Stewart game with that payoff set is always determined. Over the next twenty years, this was extended to slightly higher levels of the Borel hierarchy through ever more complicated proofs. This led to the question of whether the game must be determined whenever the payoff set is a Borel subset of "A"ω. It was known that, using the axiom of choice, it is possible to construct a subset of {0,1}ω that is not determined (Kechris 1995, p. 139). Harvey Friedman (1971) proved that any proof that all Borel subsets of Cantor space ({0,1}ω ) were determined would require repeated use of the axiom of replacement, an axiom not typically required to prove theorems about "small" objects such as Cantor space. Borel determinacy. Donald A. Martin (1975) proved that for any set "A", all Borel subsets of "A"ω are determined. Because the original proof was quite complicated, Martin published a shorter proof in 1982 that did not require as much technical machinery. In his review of Martin's paper, Drake describes the second proof as "surprisingly straightforward." The field of descriptive set theory studies properties of Polish spaces (essentially, complete separable metric spaces). The Borel determinacy theorem has been used to establish many properties of Borel subsets of these spaces. For example, all analytic subsets of Polish spaces have the perfect set property, the property of Baire, and are Lebesgue measurable. However, the last two properties can be more easily proved without using Borel determinacy, by showing that the σ-algebras of measurable sets or sets with the Baire property are closed under Suslin's operation formula_4. Set-theoretic aspects. 
The Borel determinacy theorem is of interest for its metamathematical properties as well as its consequences in descriptive set theory. Determinacy of closed sets of "A"ω for arbitrary "A" is equivalent to the axiom of choice over ZF (Kechris 1995, p. 139). When working in set-theoretical systems where the axiom of choice is not assumed, this can be circumvented by considering generalized strategies known as quasistrategies (Kechris 1995, p. 139) or by only considering games where "A" is the set of natural numbers, as in the axiom of determinacy. Zermelo set theory (Z) is Zermelo–Fraenkel set theory without the axiom of replacement. It differs from ZF in that Z does not prove that the power set operation can be iterated uncountably many times beginning with an arbitrary set. In particular, "V"ω + ω, a particular countable level of the cumulative hierarchy, is a model of Zermelo set theory. The axiom of replacement, on the other hand, is only satisfied by "V"κ for significantly larger values of κ, such as when κ is a strongly inaccessible cardinal. Friedman's theorem of 1971 showed that there is a model of Zermelo set theory (with the axiom of choice) in which Borel determinacy fails, and thus Zermelo set theory cannot prove the Borel determinacy theorem. The existence of all beth numbers of countable index is sufficient to prove the Borel determinacy theorem. Stronger forms of determinacy. Several set-theoretic principles about determinacy stronger than Borel determinacy are studied in descriptive set theory. They are closely related to large cardinal axioms. The axiom of projective determinacy states that all projective subsets of a Polish space are determined. It is known to be unprovable in ZFC but relatively consistent with it and implied by certain large cardinal axioms. The existence of a measurable cardinal is enough to imply over ZFC that all analytic subsets of Polish spaces are determined. The axiom of determinacy states that all subsets of all Polish spaces are determined. It is inconsistent with ZFC but in ZF + DC (Zermelo–Fraenkel set theory plus the axiom of dependent choice) it is equiconsistent with certain large cardinal axioms. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "\n\\begin{matrix}\n\\mathrm{I} & a_1 & \\quad & a_3 & \\quad & a_5 & \\quad & \\cdots\\\\\n\\mathrm{II} & \\quad & a_2 & \\quad & a_4 & \\quad & a_6 & \\cdots\n\\end{matrix}\n" }, { "math_id": 1, "text": "\\langle a_1,a_2,a_3\\ldots\\rangle" }, { "math_id": 2, "text": "\n\\begin{matrix}\n\\mathrm{I} & a_1 = f(\\langle \\rangle) & \\quad & a_3 = f(\\langle a_1, a_2\\rangle)& \\quad & a_5 = f(\\langle a_1, a_2, a_3, a_4\\rangle) & \\quad & \\cdots\\\\\n\\mathrm{II} & \\quad & a_2 & \\quad & a_4 & \\quad & a_6 & \\cdots.\n\\end{matrix}\n" }, { "math_id": 3, "text": "\n\\begin{matrix}\n\\mathrm{I} & a_1 & \\quad & a_3 & \\quad & a_5 & \\quad & \\cdots\\\\\n\\mathrm{II} & \\quad & a_2 = g(\\langle a_1\\rangle)& \\quad & a_4 = g(\\langle a_1,a_2,a_3\\rangle) & \\quad & a_6 = g(\\langle a_1,a_2,a_3,a_4,a_5\\rangle) & \\cdots .\n\\end{matrix}\n" }, { "math_id": 4, "text": "\\mathcal A" } ]
https://en.wikipedia.org/wiki?curid=15483838
1548510
Fractional-order control
Field of mathematical control theory Fractional-order control (FOC) is a field of control theory that uses the fractional-order integrator as part of the control system design toolkit. The use of fractional calculus (FC) can improve and generalize well-established control methods and strategies. The fundamental advantage of FOC is that the fractional-order integrator weights history using a function that decays with a power-law tail. The effect is that the entire past of the controlled signal contributes to each iteration of the control algorithm. This creates a 'distribution of time constants', so that no single time constant or resonance frequency dominates the system. In fact, the fractional integral operator formula_0 is different from any integer-order rational transfer function formula_1, in the sense that it is a non-local operator that possesses an infinite memory and takes into account the whole history of its input signal. Fractional-order control shows promise in many controlled environments that suffer from the classical problems of overshoot and resonance, as well as in time-diffuse applications such as thermal dissipation and chemical mixing. Fractional-order control has also been demonstrated to be capable of suppressing chaotic behaviors in mathematical models of, for example, muscular blood vessels. Initiated in the 1980s by the group of Prof. Alain Oustaloup, the CRONE approach is one of the most developed control-system design methodologies that use fractional-order operator properties.
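The frequency-domain signature of the operator formula_0, namely a magnitude slope of −20λ dB per decade together with a constant phase of −λ·90° (instead of the fixed −20 dB per decade and −90° of an integer-order integrator), can be checked numerically. The short Python sketch below is illustrative only; λ = 0.5 is an arbitrary example order:

```python
import numpy as np

lam = 0.5                                  # example fractional order (an assumed value)
w = np.logspace(-2, 2, 401)                # angular frequency, rad/s
G = (1j * w) ** (-lam)                     # frequency response of 1 / s**lam

magnitude_db = 20 * np.log10(np.abs(G))    # a straight line with slope -20*lam dB/decade
phase_deg = np.degrees(np.angle(G))        # flat at -lam * 90 degrees

print((magnitude_db[0] - magnitude_db[-1]) / 4)   # ~10 dB per decade for lam = 0.5
print(phase_deg.min(), phase_deg.max())           # both ~ -45 degrees
```

The absence of any corner frequency in either quantity is the 'no particular time constant' property noted above.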
[ { "math_id": 0, "text": "\\frac{1}{s^{\\lambda}}" }, { "math_id": 1, "text": " {G_{I}} (s)" } ]
https://en.wikipedia.org/wiki?curid=1548510
1548669
Euclidean tilings by convex regular polygons
Subdivision of the plane into polygons that are all regular Euclidean plane tilings by convex regular polygons have been widely used since antiquity. The first systematic mathematical treatment was that of Kepler in his "Harmonices Mundi" (Latin: "The Harmony of the World", 1619). Notation of Euclidean tilings. Euclidean tilings are usually named using Cundy &amp; Rollett’s notation. This notation represents (i) the number of vertices, (ii) the number of polygons around each vertex (arranged clockwise) and (iii) the number of sides of each of those polygons. For example: 3⁶; 3⁶; 3⁴.6 tells us there are 3 vertices with 2 different vertex types, so this tiling would be classed as a ‘3-uniform (2-vertex types)’ tiling. Broken down, 3⁶; 3⁶ (both of different transitivity class), or (3⁶)², tells us that there are 2 vertices (denoted by the superscript 2), each with 6 equilateral 3-sided polygons (triangles). With a final vertex 3⁴.6, 4 more contiguous equilateral triangles and a single regular hexagon. However, this notation has two main problems, related to ambiguous conformation and uniqueness. First, when it comes to k-uniform tilings, the notation does not explain the relationships between the vertices, which makes it impossible to generate a covered plane from the notation alone. Second, some distinct tessellations share the same nomenclature: they are very similar, but the relative positions of their hexagons, for example, differ, so the nomenclature is not unique for each tessellation. GomJau-Hogg’s notation, a slightly modified version of the research and notation presented in 2012 on the generation and nomenclature of tessellations and double-layer grids, was introduced to solve these problems. Antwerp v3.0, a free online application, allows for the infinite generation of regular polygon tilings through a set of shape placement stages and iterative rotation and reflection operations, obtained directly from GomJau-Hogg’s notation. Regular tilings. Following Grünbaum and Shephard (section 1.3), a tiling is said to be "regular" if the symmetry group of the tiling acts transitively on the "flags" of the tiling, where a flag is a triple consisting of a mutually incident vertex, edge and tile of the tiling. This means that, for every pair of flags, there is a symmetry operation mapping the first flag to the second. This is equivalent to the tiling being an edge-to-edge tiling by congruent regular polygons. There must be six equilateral triangles, four squares or three regular hexagons at a vertex, yielding the three regular tessellations. "C&amp;R: Cundy &amp; Rollett's notation" "GJ-H: Notation of GomJau-Hogg" Archimedean, uniform or semiregular tilings. Vertex-transitivity means that for every pair of vertices there is a symmetry operation mapping the first vertex to the second. If the requirement of flag-transitivity is relaxed to one of vertex-transitivity, while the condition that the tiling is edge-to-edge is kept, there are eight additional tilings possible, known as "Archimedean", "uniform" or "semiregular" tilings. Note that there are two mirror-image (enantiomorphic or chiral) forms of the 3⁴.6 (snub hexagonal) tiling, only one of which is shown in the following table. All other regular and semiregular tilings are achiral. 
"C&amp;R: Cundy &amp; Rollet's notation" "GJ-H: Notation of GomJau-Hogg" Grünbaum and Shephard distinguish the description of these tilings as "Archimedean" as referring only to the local property of the arrangement of tiles around each vertex being the same, and that as "uniform" as referring to the global property of vertex-transitivity. Though these yield the same set of tilings in the plane, in other spaces there are Archimedean tilings which are not uniform. Plane-vertex tilings. There are 17 combinations of regular convex polygons that form 21 types of plane-vertex tilings. Polygons in these meet at a point with no gap or overlap. Listing by their vertex figures, one has 6 polygons, three have 5 polygons, seven have 4 polygons, and ten have 3 polygons. Three of them can make regular tilings (63, 44, 36), and eight more can make semiregular or archimedean tilings, (3.12.12, 4.6.12, 4.8.8, (3.6)2, 3.4.6.4, 3.3.4.3.4, 3.3.3.4.4, 3.3.3.3.6). Four of them can exist in higher "k"-uniform tilings (3.3.4.12, 3.4.3.12, 3.3.6.6, 3.4.4.6), while six can not be used to completely tile the plane by regular polygons with no gaps or overlaps - they only tessellate space entirely when irregular polygons are included (3.7.42, 3.8.24, 3.9.18, 3.10.15, 4.5.20, 5.5.10). "k"-uniform tilings. Such periodic tilings may be classified by the number of orbits of vertices, edges and tiles. If there are k orbits of vertices, a tiling is known as k-uniform or k-isogonal; if there are t orbits of tiles, as t-isohedral; if there are e orbits of edges, as e-isotoxal. "k"-uniform tilings with the same vertex figures can be further identified by their wallpaper group symmetry. 1-uniform tilings include 3 regular tilings, and 8 semiregular ones, with 2 or more types of regular polygon faces. There are 20 2-uniform tilings, 61 3-uniform tilings, 151 4-uniform tilings, 332 5-uniform tilings and 673 6-uniform tilings. Each can be grouped by the number "m" of distinct vertex figures, which are also called "m"-Archimedean tilings. Finally, if the number of types of vertices is the same as the uniformity ("m" = "k" below), then the tiling is said to be "". In general, the uniformity is greater than or equal to the number of types of vertices ("m" ≥ "k"), as different types of vertices necessarily have different orbits, but not vice versa. Setting "m" = "n" = "k", there are 11 such tilings for "n" = 1; 20 such tilings for "n" = 2; 39 such tilings for "n" = 3; 33 such tilings for "n" = 4; 15 such tilings for "n" = 5; 10 such tilings for "n" = 6; and 7 such tilings for "n" = 7. Below is an example of a 3-unifom tiling: 2-uniform tilings. There are twenty (20) 2-uniform tilings of the Euclidean plane. (also called 2-isogonal tilings or demiregular tilings) 62-67 Vertex types are listed for each. If two tilings share the same two vertex types, they are given subscripts 1,2. Higher "k"-uniform tilings. "k"-uniform tilings have been enumerated up to 6. There are 673 6-uniform tilings of the Euclidean plane. Brian Galebach's search reproduced Krotenheerdt's list of 10 6-uniform tilings with 6 distinct vertex types, as well as finding 92 of them with 5 vertex types, 187 of them with 4 vertex types, 284 of them with 3 vertex types, and 100 with 2 vertex types. Fractalizing "k"-uniform tilings. There are many ways of generating new "k"-uniform tilings from old "k"-uniform tilings. 
For example, notice that the 2-uniform [3.12.12; 3.4.3.12] tiling has a square lattice, the 4(3-1)-uniform [343.12; (3.122)3] tiling has a snub square lattice, and the 5(3-1-1)-uniform [334.12; 343.12; (3.12.12)3] tiling has an elongated triangular lattice. These higher-order uniform tilings use the same lattice but possess greater complexity. The fractalizing basis for theses tilings is as follows: The side lengths are dilated by a factor of formula_0. This can similarly be done with the truncated trihexagonal tiling as a basis, with corresponding dilation of formula_1. Tilings that are not edge-to-edge. Convex regular polygons can also form plane tilings that are not edge-to-edge. Such tilings can be considered edge-to-edge as nonregular polygons with adjacent colinear edges. There are seven families of isogonal each family having a real-valued parameter determining the overlap between sides of adjacent tiles or the ratio between the edge lengths of different tiles. Two of the families are generated from shifted square, either progressive or zig-zagging positions. Grünbaum and Shephard call these tilings "uniform" although it contradicts Coxeter's definition for uniformity which requires edge-to-edge regular polygons. Such isogonal tilings are actually topologically identical to the uniform tilings, with different geometric proportions. References. &lt;templatestyles src="Reflist/styles.css" /&gt; External links. Euclidean and general tiling links:
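As a computational aside (an independent check, not drawn from the sources cited in this article), the count of 17 plane-vertex combinations quoted above can be verified by exhaustive search, since k regular polygons with n1, ..., nk sides close up around a point exactly when their interior angles (ni − 2)·180°/ni sum to 360°:

```python
from fractions import Fraction
from itertools import combinations_with_replacement

# k regular polygons with n_1 <= ... <= n_k sides fit around a point exactly when
# the interior angles (n_i - 2) * 180 / n_i sum to 360, i.e. sum(1 / n_i) = (k - 2) / 2.
# No polygon beyond the 42-gon can occur (3.7.42), and none beyond the 12-gon once
# four or more polygons meet, so the search ranges below are sufficient.
solutions = []
for k in range(3, 7):                       # from 3 up to 6 polygons around a vertex
    target = Fraction(k - 2, 2)
    largest = 42 if k == 3 else 12
    for combo in combinations_with_replacement(range(3, largest + 1), k):
        if sum(Fraction(1, n) for n in combo) == target:
            solutions.append(combo)

print(len(solutions))                       # 17 combinations (as unordered multisets)
for combo in solutions:
    print(".".join(str(n) for n in combo))  # 3.7.42, 3.8.24, ..., 3.3.3.3.3.3
```

Counting the distinct cyclic arrangements of each combination, rather than the combinations themselves, gives the 21 vertex types.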
[ { "math_id": 0, "text": "2+\\sqrt{3}" }, { "math_id": 1, "text": "3+\\sqrt{3}" } ]
https://en.wikipedia.org/wiki?curid=1548669
154881
Heat index
Temperature index that accounts for the effects of humidity The heat index (HI) is an index that combines air temperature and relative humidity, in shaded areas, to posit a human-perceived equivalent temperature, as how hot it would feel if the humidity were some other value in the shade. For example, when the temperature is with 70% relative humidity, the heat index is (see table below). The heat index is meant to describe experienced temperatures in the shade, but it does not take into account heating from direct sunlight, physical activity or cooling from wind. The human body normally cools itself by evaporation of sweat. High relative humidity reduces evaporation and cooling, increasing discomfort and potential heat stress. Different individuals perceive heat differently due to body shape, metabolism, level of hydration, pregnancy, or other physical conditions. Measurement of perceived temperature has been based on reports of how hot subjects feel under controlled conditions of temperature and humidity. Besides the heat index, other measures of apparent temperature include the Canadian humidex, the wet-bulb globe temperature, "relative outdoor temperature", and the proprietary "RealFeel". History. The heat index was developed in 1979 by Robert G. Steadman. Like the wind chill index, the heat index contains assumptions about the human body mass and height, clothing, amount of physical activity, individual heat tolerance, sunlight and ultraviolet radiation exposure, and the wind speed. Significant deviations from these will result in heat index values which do not accurately reflect the perceived temperature. In Canada, the similar humidex (a Canadian innovation introduced in 1965) is used in place of the heat index. While both the humidex and the heat index are calculated using dew point, the humidex uses a dew point of as a base, whereas the heat index uses a dew point base of . Further, the heat index uses heat balance equations which account for many variables other than vapor pressure, which is used exclusively in the humidex calculation. A joint committee formed by the United States and Canada to resolve differences has since been disbanded. Definition. The heat index of a given combination of (dry-bulb) temperature and humidity is defined as the dry-bulb temperature which would feel the same if the water vapor pressure were 1.6 kPa. Quoting Steadman, "Thus, for instance, an apparent temperature of refers to the same level of sultriness, and the same clothing requirements, as a dry-bulb temperature of with a vapor pressure of 1.6 kPa." This vapor pressure corresponds for example to an air temperature of and relative humidity of 40% in the sea-level psychrometric chart, and in Steadman's table at 40% RH the apparent temperature is equal to the true temperature between . At standard atmospheric pressure (101.325 kPa), this baseline also corresponds to a dew point of and a mixing ratio of 0.01 (10 g of water vapor per kilogram of dry air). A given value of relative humidity causes larger increases in the heat index at higher temperatures. For example, at approximately , the heat index will agree with the actual temperature if the relative humidity is 45%, but at , any relative-humidity reading above 18% will make the heat index higher than 43 °C. It has been suggested that the equation described is valid only if the temperature is or more. 
The relative humidity threshold, below which a heat index calculation will return a number equal to or lower than the air temperature (a lower heat index is generally considered invalid), varies with temperature and is not linear. The threshold is commonly set at an arbitrary 40%. The heat index and its counterpart the humidex both take into account only two variables, shade temperature and atmospheric moisture (humidity), thus providing only a limited estimate of thermal comfort. Additional factors such as wind, sunshine and individual clothing choices also affect perceived temperature; these factors are parameterized as constants in the heat index formula. Wind, for example, is assumed to be . Wind passing over wet or sweaty skin causes evaporation and a wind chill effect that the heat index does not measure. The other major factor is sunshine; standing in direct sunlight can add up to to the apparent heat compared to shade. There have been attempts to create a universal apparent temperature, such as the wet-bulb globe temperature, "relative outdoor temperature", "feels like", or the proprietary "RealFeel". Meteorological considerations. Outdoors in open conditions, as the relative humidity increases, first haze and ultimately a thicker cloud cover develops, reducing the amount of direct sunlight reaching the surface. Thus, there is an inverse relationship between maximum potential temperature and maximum potential relative humidity. Because of this factor, it was once believed that the highest heat index reading actually attainable anywhere on Earth was approximately . However, in Dhahran, Saudi Arabia on July 8, 2003, the dew point was while the temperature was , resulting in a heat index of . On August 28, 2024, a weather station in southern Iran recorded a heat index of , which will be a new record if confirmed. The human body requires evaporative cooling to prevent overheating. Wet-bulb temperature and Wet Bulb Globe Temperature are used to determine the ability of a body to eliminate excess heat. A sustained wet-bulb temperature of about can be fatal to healthy people; at this temperature our bodies switch from shedding heat to the environment, to gaining heat from it. Thus a wet bulb temperature of is the threshold beyond which the body is no longer able to adequately cool itself. Table of values. The table below is from the U.S. National Oceanic and Atmospheric Administration. The columns begin at , but there is also a heat index effect at and similar temperatures when there is high humidity. "Key to colors:" &lt;templatestyles src="Legend/styles.css" /&gt;  Caution &lt;templatestyles src="Legend/styles.css" /&gt;  Extreme caution &lt;templatestyles src="Legend/styles.css" /&gt;  Danger &lt;templatestyles src="Legend/styles.css" /&gt;  Extreme danger For example, if the air temperature is and the relative humidity is 65%, the heat index is Effects of the heat index (shade values). Exposure to full sunshine can increase heat index values by up to 8 °C (14 °F). Formula. There are many formulas devised to approximate the original tables by Steadman. Anderson et al. (2013), NWS (2011), Jonson and Long (2004), and Schoen (2005) have lesser residuals in this order. The former two are a set of polynomials, but the third one is by a single formula with exponential functions. The formula below approximates the heat index in degrees Fahrenheit, to within ±. 
It is the result of a multivariate fit (temperature equal to or greater than and relative humidity equal to or greater than 40%) to a model of the human body. This equation reproduces the above NOAA National Weather Service table (except the values at &amp; 45%/70% relative humidity vary unrounded by less than ±1, respectively). formula_0 where formula_1 The following coefficients can be used to determine the heat index when the temperature is given in degrees Celsius, where formula_2 An alternative set of constants for this equation that is within ± of the NWS master table for all humidities from 0 to 80% and all temperatures between and all heat indices below is: formula_3 A further alternate is this: formula_4 where formula_5 For example, using this last formula, with temperature and relative humidity (RH) of 85%, the result would be: . Limitations. The heat index does not work well with extreme conditions, like supersaturation of air, when the air is more than 100% saturated with water. David Romps, a physicist and climate scientist at the University of California, Berkeley and his graduate student Yi-Chuan Lu, found that the heat index was underestimating the severity of intense heat waves, such as the 1995 Chicago heat wave. Other issues with the heat index include the unavailability of precise humidity data in many geographical regions, the assumption that the person is healthy, and the assumption that the person has easy access to water and shade. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
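For reference, the first (Fahrenheit) coefficient set above can be evaluated directly. The sketch below is a plain transcription of that nine-term polynomial, not the full NWS procedure, which applies further adjustment terms in certain temperature and humidity ranges:

```python
# The nine Fahrenheit coefficients c1..c9 listed above.
c = (-42.379, 2.04901523, 10.14333127, -0.22475541, -6.83783e-3,
     -5.481717e-2, 1.22874e-3, 8.5282e-4, -1.99e-6)

def heat_index_f(T, R):
    """Heat index (°F) for dry-bulb temperature T in °F and relative humidity R in percent.

    The fit is only intended for roughly T >= 80 °F and R >= 40%, per the text above.
    """
    return (c[0] + c[1]*T + c[2]*R + c[3]*T*R + c[4]*T*T + c[5]*R*R
            + c[6]*T*T*R + c[7]*T*R*R + c[8]*T*T*R*R)

print(round(heat_index_f(90, 70), 1))   # ~105.9 °F, in line with the NOAA table for 90 °F at 70% humidity
print(round(heat_index_f(96, 65), 1))   # ~121 °F for hotter, similarly humid conditions
```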
[ { "math_id": 0, "text": "\\mathrm{HI} = c_1 + c_2 T + c_3 R + c_4 T R + c_5 T^2 + c_6 R^2 + c_7 T^2R + c_8 T R^2 + c_9 T^2 R^2 " }, { "math_id": 1, "text": "\\begin{align}\nc_1 &= -42.379, & c_2 &= 2.049\\,015\\,23, & c_3 &= 10.143\\,331\\,27,\\\\\nc_4 &= -0.224\\,755\\,41, & c_5 &= -6.837\\,83 \\times 10^{-3}, & c_6 &= -5.481\\,717 \\times 10^{-2},\\\\\nc_7 &= 1.228\\,74 \\times 10^{-3}, & c_8 &= 8.5282 \\times 10^{-4}, & c_9 &= -1.99 \\times 10^{-6}.\n\\end{align}" }, { "math_id": 2, "text": "\\begin{align}\nc_1 &= -8.784\\,694\\,755\\,56, & c_2 &= 1.611\\,394\\,11, & c_3 &= 2.338\\,548\\,838\\,89,\\\\\nc_4 &= -0.146\\,116\\,05, & c_5 &= -0.012\\,308\\,094, & c_6 &= -0.016\\,424\\,827\\,7778,\\\\\nc_7 &= 2.211\\,732 \\times 10^{-3}, & c_8 &= 7.2546 \\times 10^{-4}, & c_9 &= -3.582 \\times 10^{-6}.\n\\end{align}" }, { "math_id": 3, "text": "\\begin{align}\nc_1 &= 0.363\\,445\\,176, & c_2 &= 0.988\\,622\\,465, & c_3 &= 4.777\\,114\\,035,\\\\\nc_4 &= -0.114\\,037\\,667, & c_5 &= -8.502\\,08 \\times 10^{-4}, & c_6 &= -2.071\\,6198 \\times 10^{-2},\\\\\nc_7 &= 6.876\\,78 \\times 10^{-4}, & c_8 &= 2.749\\,54 \\times 10^{-4}, & c_9 &= 0.\n\\end{align}" }, { "math_id": 4, "text": "\\begin{align}\n\\mathrm{HI} &= c_1 + c_2 T + c_3 R + c_4 T R + c_5 T^2 + c_6 R^2 + c_7 T^2 R + c_8 T R^2 + c_9 T^2 R^2 + \\\\\n&\\quad {} + c_{10} T^3 + c_{11} R^3 + c_{12} T^3 R + c_{13} T R^3 + c_{14} T^3 R^2 + c_{15} T^2 R^3 + c_{16} T^3 R^3\n\\end{align}" }, { "math_id": 5, "text": "\\begin{align}\nc_1 &= 16.923, & c_2 &= 0.185\\,212, & c_3 &= 5.379\\,41, & c_4 &= -0.100\\,254,\\\\\nc_5 &= 9.416\\,95 \\times 10^{-3}, & c_6 &= 7.288\\,98 \\times 10^{-3}, & c_7 &= 3.453\\,72\\times 10^{-4}, & c_8 &= -8.149\\,71 \\times 10^{-4},\\\\\nc_9 &= 1.021\\,02 \\times 10^{-5}, & c_{10} &= -3.8646 \\times 10^{-5}, & c_{11} &= 2.915\\,83 \\times 10^{-5}, & c_{12} &= 1.427\\,21 \\times 10^{-6},\\\\\nc_{13} &= 1.974\\,83 \\times 10^{-7}, & c_{14} &= -2.184\\,29 \\times 10^{-8}, & c_{15} &= 8.432\\,96 \\times 10^{-10}, & c_{16} &= -4.819\\,75 \\times 10^{-11}.\n\\end{align}" } ]
https://en.wikipedia.org/wiki?curid=154881
15488971
Goldstine theorem
In functional analysis, a branch of mathematics, the Goldstine theorem, named after Herman Goldstine, is stated as follows: Goldstine theorem. Let formula_0 be a Banach space, then the image of the closed unit ball formula_1 under the canonical embedding into the closed unit ball formula_2 of the bidual space formula_3 is a weak*-dense subset. The conclusion of the theorem is not true for the norm topology, which can be seen by considering the Banach space of real sequences that converge to zero, c0 space formula_4 and its bi-dual space Lp space formula_5 Proof. Lemma. For all formula_6 formula_7 and formula_8 there exists an formula_9 such that formula_10 for all formula_11 Proof of lemma. By the surjectivity of formula_12 it is possible to find formula_13 with formula_10 for formula_11 Now let formula_14 Every element of formula_15 satisfies formula_16 and formula_17 so it suffices to show that the intersection is nonempty. Assume for contradiction that it is empty. Then formula_18 and by the Hahn–Banach theorem there exists a linear form formula_19 such that formula_20 and formula_21 Then formula_22 and therefore formula_23 which is a contradiction. Proof of theorem. Fix formula_6 formula_7 and formula_24 Examine the set formula_25 Let formula_26 be the embedding defined by formula_27 where formula_28 is the evaluation at formula_29 map. Sets of the form formula_30 form a base for the weak* topology, so density follows once it is shown formula_31 for all such formula_32 The lemma above says that for any formula_33 there exists a formula_9 such that formula_34 formula_35 and in particular formula_36 Since formula_37 we have formula_38 We can scale to get formula_39 The goal is to show that for a sufficiently small formula_8 we have formula_40 Directly checking, one has formula_41 Note that one can choose formula_42 sufficiently large so that formula_43 for formula_11 Note as well that formula_44 If one chooses formula_45 so that formula_46 then formula_47 Hence one gets formula_48 as desired. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "X" }, { "math_id": 1, "text": "B \\subseteq X" }, { "math_id": 2, "text": "B^{\\prime\\prime}" }, { "math_id": 3, "text": "X^{\\prime\\prime}" }, { "math_id": 4, "text": "c_0," }, { "math_id": 5, "text": "\\ell^{\\infty}." }, { "math_id": 6, "text": "x^{\\prime\\prime} \\in B^{\\prime\\prime}," }, { "math_id": 7, "text": "\\varphi_1, \\ldots, \\varphi_n \\in X^{\\prime}" }, { "math_id": 8, "text": "\\delta > 0," }, { "math_id": 9, "text": "x \\in (1+\\delta)B" }, { "math_id": 10, "text": "\\varphi_i(x) = x^{\\prime\\prime}(\\varphi_i)" }, { "math_id": 11, "text": "1 \\leq i \\leq n." }, { "math_id": 12, "text": "\\begin{cases}\n\\Phi : X \\to \\Complex^{n}, \\\\ x \\mapsto \\left(\\varphi_1(x), \\cdots, \\varphi_n(x) \\right)\n\\end{cases}" }, { "math_id": 13, "text": "x \\in X" }, { "math_id": 14, "text": "Y := \\bigcap_i \\ker \\varphi_i = \\ker \\Phi." }, { "math_id": 15, "text": "z \\in (x + Y) \\cap (1 + \\delta)B" }, { "math_id": 16, "text": "z \\in (1+\\delta)B" }, { "math_id": 17, "text": "\\varphi_i(z) = \\varphi_i(x)= x^{\\prime\\prime}(\\varphi_i)," }, { "math_id": 18, "text": "\\operatorname{dist}(x, Y) \\geq 1 + \\delta" }, { "math_id": 19, "text": "\\varphi \\in X^{\\prime}" }, { "math_id": 20, "text": "\\varphi\\big\\vert_Y = 0, \\varphi(x) \\geq 1 + \\delta" }, { "math_id": 21, "text": "\\|\\varphi\\|_{X^{\\prime}} = 1." }, { "math_id": 22, "text": "\\varphi \\in \\operatorname{span} \\left\\{ \\varphi_1, \\ldots, \\varphi_n \\right\\}" }, { "math_id": 23, "text": "1+\\delta \\leq \\varphi(x) = x^{\\prime\\prime}(\\varphi) \\leq \\|\\varphi\\|_{X^{\\prime}} \\left\\|x^{\\prime\\prime}\\right\\|_{X^{\\prime\\prime}} \\leq 1," }, { "math_id": 24, "text": "\\epsilon > 0." }, { "math_id": 25, "text": "U := \\left\\{ y^{\\prime\\prime} \\in X^{\\prime\\prime} : |(x^{\\prime\\prime} - y^{\\prime\\prime})(\\varphi_i)| < \\epsilon, 1 \\leq i \\leq n \\right\\}." }, { "math_id": 26, "text": "J : X \\rightarrow X^{\\prime\\prime}" }, { "math_id": 27, "text": "J(x) = \\text{Ev}_x," }, { "math_id": 28, "text": "\\text{Ev}_x(\\varphi) = \\varphi(x)" }, { "math_id": 29, "text": "x" }, { "math_id": 30, "text": "U" }, { "math_id": 31, "text": "J(B) \\cap U \\neq \\varnothing" }, { "math_id": 32, "text": "U." }, { "math_id": 33, "text": "\\delta > 0" }, { "math_id": 34, "text": "x^{\\prime\\prime}(\\varphi_i)=\\varphi_i(x)," }, { "math_id": 35, "text": "1\\leq i\\leq n," }, { "math_id": 36, "text": "\\text{Ev}_x \\in U." }, { "math_id": 37, "text": "J(B) \\subset B^{\\prime\\prime}," }, { "math_id": 38, "text": "\\text{Ev}_x \\in (1+\\delta)J(B) \\cap U." }, { "math_id": 39, "text": "\\frac{1}{1+\\delta} \\text{Ev}_x \\in J(B)." }, { "math_id": 40, "text": "\\frac{1}{1+\\delta} \\text{Ev}_x \\in J(B) \\cap U." }, { "math_id": 41, "text": "\\left|\\left[x^{\\prime\\prime} - \\frac{1}{1+\\delta} \\text{Ev}_x\\right](\\varphi_i)\\right| = \\left|\\varphi_i(x) - \\frac{1}{1+\\delta}\\varphi_i(x)\\right| = \\frac{\\delta}{1+\\delta} |\\varphi_i(x)|." }, { "math_id": 42, "text": "M" }, { "math_id": 43, "text": "\\|\\varphi_i\\|_{X^{\\prime}} \\leq M" }, { "math_id": 44, "text": "\\|x\\|_{X} \\leq (1+\\delta)." }, { "math_id": 45, "text": "\\delta" }, { "math_id": 46, "text": "\\delta M < \\epsilon," }, { "math_id": 47, "text": "\\frac{\\delta}{1+\\delta} \\left|\\varphi_i(x)\\right| \\leq \\frac{\\delta}{1+\\delta} \\|\\varphi_i\\|_{X^{\\prime}} \\|x\\|_{X} \\leq \\delta \\|\\varphi_i\\|_{X^{\\prime}} \\leq \\delta M < \\epsilon." 
}, { "math_id": 48, "text": "\\frac{1}{1+\\delta} \\text{Ev}_x \\in J(B) \\cap U" } ]
https://en.wikipedia.org/wiki?curid=15488971
15490141
Migratory insertion
Chemical reaction in which two ligands of a metal complex combine In organometallic chemistry, a migratory insertion is a type of reaction wherein two ligands on a metal complex combine. It is a subset of reactions that closely resemble insertion reactions; the two are distinguished by the mechanism, which determines the stereochemistry of the products. However, the terms are often used interchangeably because the mechanism is sometimes unknown. Therefore, migratory insertion reactions or insertion reactions, for short, are defined not by the mechanism but by the overall regiochemistry, wherein one chemical entity interposes itself into an existing bond of typically a second chemical entity, e.g.: formula_0 Overview. In a migratory insertion, a ligand that is viewed as anionic (X) and a ligand that is viewed as neutral couple, generating a new anionic ligand. The anionic and neutral ligands that react are adjacent. If the precursor complex is coordinatively saturated, migratory insertion often results in a coordinatively unsaturated product. A new (neutral) ligand can then react with the metal, leading to a further insertion. The process can occur many times on a single metal, as in olefin polymerization. The anionic ligand can be: H− (hydride), R− (alkyl), acyl, Ar− (aryl), or OR− (alkoxide). The ability of these groups to migrate is called their migratory aptitude. The neutral ligand can be CO, alkene, alkyne, or in some cases, even carbene. Migratory insertion can proceed by more than one mechanism. In one, the anionic ligand attacks the electrophilic part of the neutral ligand (the anionic ligand migrates to the neutral ligand). In the other, the neutral ligand inserts itself between the metal and the anionic ligand. CO insertion. The insertion of carbon monoxide into a metal-carbon bond to form an acyl group is the basis of carbonylation reactions, which provide many commercially useful products. Mechanistic studies reveal that the alkyl group migrates intramolecularly to an adjacent CO ligand. Early studies were conducted on the conversion of CH3Mn(CO)5 to give the acetyl derivative. Using 13CO, the product is cis-[Mn(COCH3)(13CO)(CO)4]. CO insertion does not always involve migration. Treatment of CpFe(L)(CO)CH3 with 13CO yields a mix of both alkyl migration products and products formed by true insertion of bound carbonyls into the methyl group. Product distribution is influenced by the choice of solvent. Alkyl derivatives of square planar complexes undergo CO insertions particularly readily. Insertion reactions on square planar complexes are of particular interest because of their industrial applications. Since square planar complexes are often coordinatively unsaturated, they are susceptible to formation of 5-coordinate adducts, which undergo migratory insertion readily. In most cases the in-plane migration pathway is preferred, but, unlike the nucleophilic pathway, it is inhibited by an excess of CO. Reverse reaction. Decarbonylation of aldehydes, the reverse of CO insertion, is a well-recognized reaction: RCHO → RH + CO The reaction is not widely practiced in part because the alkanes are less useful materials than are the aldehyde precursors. Furthermore, the reaction is not often conducted catalytically because the extruded CO can be slow to dissociate. 
Extrusion of CO from an organic aldehyde is most famously demonstrated using Wilkinson's catalyst: RhCl(PPh3)3 + RCHO → RhCl(CO)(PPh3)2 + RH + PPh3 See the Tsuji–Wilkinson decarbonylation reaction for an example of this elementary organometallic step in synthesis. Insertion of other oxides. Many electrophilic oxides insert into metal-carbon bonds; these include sulfur dioxide, carbon dioxide, and nitric oxide. These reactions have limited or no practical significance, but are of historic interest. With transition metal alkyls, these oxides behave as electrophiles and insert into the bond between metals and their relatively nucleophilic alkyl ligands. As discussed in the article on metal sulfur dioxide complexes, the insertion of SO2 has been examined in particular detail. SO2 inserts to give both "O"-sulphinates and "S"-sulphinates, depending on the metal centre. With square planar alkyl complexes, a pre-equilibrium is assumed involving formation of an adduct. Insertion of alkenes into metal-carbon bonds. The insertion of alkenes into both metal-carbon and metal-hydride bonds is important. The insertion of ethylene and propylene into titanium alkyls is the cornerstone of Ziegler–Natta catalysis, the main source of polyethylene and polypropylene. The majority of this technology involves heterogeneous catalysts, but it is widely assumed that the principles and observations on homogeneous systems are applicable to the solid-state versions. Related technologies include the Shell Higher Olefin Process which produces detergent precursors. Mechanism. Factors affecting the rate of olefin insertions include the formation of the cyclic, planar, four-center transition state involving incipient formation of a bond between the metal and an olefin carbon. From this transition state, it can be seen that a partial positive charge forms on the β-carbon with a partial negative charge formed on the carbon initially bonded to the metal. This polarization explains the subsequently observed formation of the bond between the negatively charged carbon/hydrogen and the positively charged β-carbon as well as the simultaneous formation of the metal-α-carbon bond. This transition state also highlights the two factors that most strongly contribute to the rate of olefin insertion reactions: (i) orbital overlap of the alkyl group initially attached to the metal and (ii) the strength of the metal-alkyl bond. With greater orbital overlap between the partially positive β-carbon and the partially negative hydrogen/alkyl group carbon, the formation of the new C-C bond is facilitated. With increasing strength of the metal-alkyl bond, the breaking of the bond between the metal and the hydrogen/alkyl carbon to form the two new bonds with the α-carbon and β-carbon (respectively) is slower, thus decreasing the rate of the insertion reaction. Insertion of alkenes into M–H bonds. The insertion of alkenes into metal-hydrogen bonds is a key step in hydrogenation and hydroformylation reactions. The reaction involves the alkene and the hydride ligands combining within the coordination sphere of a catalyst. In hydrogenation, the resulting alkyl ligand combines with a second hydride to give the alkane. Analogous reactions apply to the hydrogenation of alkynes: an alkenyl ligand combines with a hydride to eliminate an alkene. Mechanism. In terms of mechanism, the insertion of alkenes into M–H bonds and into M–C bonds is described similarly. Both involve four-membered transition states that place the less substituted carbon on the metal. 
The reverse of olefin insertion into a metal-hydrogen bond is β-hydride elimination. The Principle of Microscopic Reversibility requires that the mechanism of β-hydride elimination follow the same pathway as the insertion of alkenes into metal hydride bonds. The first requirement for β-hydride elimination is the presence of a hydrogen at a position that is β with respect to the metal. β-elimination requires a vacant coordination position on the metal that will accommodate the hydrogen that is abstracted. Industrial applications. Carbonylation. Two widely employed applications of migratory insertion of carbonyl groups are hydroformylation and the production of acetic acid by carbonylation of methanol. The former converts alkenes, hydrogen, and carbon monoxide into aldehydes. The production of acetic acid by carbonylation proceeds via two similar industrial processes. More traditional is the Monsanto acetic acid process, which relies on a rhodium-iodine catalyst to transform methanol into acetic acid. This process has been superseded by the Cativa process which uses a related iridium catalyst, [Ir(CO)2I2]− (1). By 2002, worldwide annual production of acetic acid stood at 6 million tons, of which approximately 60% is produced by the Cativa process. The Cativa process catalytic cycle, shown above, includes both insertion and de-insertion steps. The oxidative addition reaction of methyl iodide with (1) involves the formal insertion of the iridium(I) centre into the carbon-iodine bond, whilst step (3) to (4) is an example of migratory insertion of carbon monoxide into the iridium-carbon bond. The active catalyst species is regenerated by the reductive elimination of acetyl iodide from (4), a de-insertion reaction. Alkene polymerization. Industrial applications of alkene insertions include metal-catalyzed routes to polyethylene and polypropylene. Typically these conversions are catalyzed heterogeneously by titanium trichloride which are activated by aluminium alkyls. This technology is known as Ziegler–Natta catalysts. In these reactions, ethylene coordinates to titanium metal followed by its insertion. These steps can be repeated multiple times, potentially leading to high molecular weight polymers.
[ { "math_id": 0, "text": "{\\color{red}\\ce A} + {\\color{blue}\\ce{B-C}} \\longrightarrow {\\color{blue}\\ce{B{-}}}{\\color{red}\\ce A}{\\color{blue}\\ce{-C}}" } ]
https://en.wikipedia.org/wiki?curid=15490141
154907
Angle modulation
Angle modulation is a class of carrier modulation that is used in telecommunications transmission systems. The class comprises frequency modulation (FM) and phase modulation (PM), and is based on altering the frequency or the phase, respectively, of a carrier signal to encode the message signal. This contrasts with varying the amplitude of the carrier, practiced in amplitude modulation (AM) transmission, the earliest of the major modulation methods used widely in early radio broadcasting. Foundation. In general form, an analog modulation process of a sinusoidal carrier wave may be described by the following equation: formula_0. "A(t)" represents the time-varying amplitude of the sinusoidal carrier wave and the cosine-term is the carrier at its angular frequency formula_1, and the instantaneous phase deviation formula_2. This description directly provides the two major groups of modulation, amplitude modulation and angle modulation. In amplitude modulation, the angle term is held constant, while in angle modulation the term "A(t)" is constant and the second term of the equation has a functional relationship to the modulating message signal. The functional form of the cosine term, which contains the expression of the instantaneous phase formula_3 as its argument, provides the distinction of the two types of angle modulation, frequency modulation (FM) and phase modulation (PM). In FM the message signal causes a functional variation of the instantaneous frequency. These variations are controlled by both the frequency and the amplitude of the modulating wave. In phase modulation, the instantaneous phase deviation formula_2 of the carrier is controlled by the modulating waveform, such that the principal frequency remains constant. For angle modulation, the instantaneous frequency of an angle-modulated carrier wave is given by the first derivative with respect to time of the instantaneous phase: formula_4 in which formula_5 may be defined as the instantaneous frequency deviation, measured in rad/s. For frequency modulation (FM), the modulating signal formula_6 is related linearly to the instantaneous frequency deviation, that is formula_7 which gives the FM modulated waveform asformula_8For phase modulation (PM), the modulating signal formula_6 is related linearly to the instantaneous phase deviation, that is formula_9 which gives the PM modulated waveform asformula_10In principle, the modulating signal in both frequency and phase modulation may either be analog in nature, or it may be digital. In general, however, when using digital signals to modify the carrier wave, the method is called "keying", rather than modulation. Thus, telecommunications modems use frequency-shift keying (FSK), phase-shift keying (PSK), or amplitude-phase keying (APK), or various combinations. Furthermore, another digital modulation is line coding, which uses a baseband carrier, rather than a passband wave. The methods of angle modulation can provide better discrimination against interference and noise than amplitude modulation. These improvements, however, are a tradeoff against increased bandwidth requirements. Frequency modulation. Frequency modulation is widely used for FM broadcasting of radio programming, and largely supplanted amplitude modulation for this purpose starting in the 1930s, with its invention by American engineer Edwin Armstrong in 1933. FM also has many other applications, such as in two-way radio communications, and in FM synthesis for music synthesizers. Phase modulation. 
Phase modulation is important in major application areas including cellular and satellite telecommunications, as well as in data networking methods, such as in some digital subscriber line systems, and WiFi. The combination of phase modulation with amplitude modulation, practiced as early as 1874 by Thomas Edison in the quadruplex telegraph for transmitting four signals, two each in both directions of transmission, constitutes the polar modulation technique. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
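The FM and PM expressions above translate directly into a short numerical illustration. The sketch below is only that: the carrier frequency, the message tone, and the constants standing in for "K"FM and "K"PM are arbitrary choices rather than values from the text, and the integral in the FM phase is approximated by a cumulative sum.

```python
import numpy as np

# Illustrative parameters only; none of these values come from the text.
fs = 10_000                          # sample rate, Hz
t = np.arange(0, 0.05, 1 / fs)       # 50 ms of signal
A = 1.0                              # constant carrier amplitude
omega = 2 * np.pi * 1_000            # carrier angular frequency (1 kHz carrier)
s = np.sin(2 * np.pi * 50 * t)       # message signal s(t): a 50 Hz tone

K_FM = 2 * np.pi * 200               # frequency sensitivity, rad/s per unit of s(t)
K_PM = np.pi / 2                     # phase sensitivity, rad per unit of s(t)

# FM: the phase deviation is K_FM times the running integral of s(t),
# approximated here by a cumulative sum.
m_fm = A * np.cos(omega * t + K_FM * np.cumsum(s) / fs)

# PM: the phase deviation is K_PM * s(t) directly.
m_pm = A * np.cos(omega * t + K_PM * s)

# The envelope never exceeds A: the message lives entirely in the angle.
assert np.max(np.abs(m_fm)) <= A + 1e-9
assert np.max(np.abs(m_pm)) <= A + 1e-9
```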
[ { "math_id": 0, "text": "m(t) = A(t) \\cdot \\cos(\\omega t + \\phi(t))\\," }, { "math_id": 1, "text": "\\omega" }, { "math_id": 2, "text": "\\phi(t)" }, { "math_id": 3, "text": "\\omega t + \\phi(t)" }, { "math_id": 4, "text": " \\omega_I = \\frac{d}{dt} [ \\omega t + \\phi(t) ] = \\omega + \\phi'(t) ," }, { "math_id": 5, "text": "\\phi'(t)" }, { "math_id": 6, "text": " s(t)" }, { "math_id": 7, "text": " \\phi_{FM}' = K_{FM} s(t)," }, { "math_id": 8, "text": " m_{FM}(t) = A \\cos \\left( \\omega t + K_{FM} \\int s(\\tau) d\\tau \\right)." }, { "math_id": 9, "text": " \\phi_{PM}(t) = K_{PM}s(t)," }, { "math_id": 10, "text": " m_{PM}(t) = A \\cos \\left( \\omega t + K_{PM} s(t) \\right). " } ]
https://en.wikipedia.org/wiki?curid=154907
15491
Integer factorization
Decomposition of a number into a product &lt;templatestyles src="Unsolved/styles.css" /&gt; Unsolved problem in computer science: Can integer factorization be solved in polynomial time on a classical computer? In number theory, integer factorization is the decomposition of a positive integer into a product of integers. Every positive integer greater than 1 is either the product of two or more integer factors greater than 1, in which case it is called a composite number, or it is not, in which case it is called a prime number. For example, 15 is a composite number because 15 = 3 · 5, but 7 is a prime number because it cannot be decomposed in this way. If one of the factors is composite, it can in turn be written as a product of smaller factors, for example 60 = 3 · 20 = 3 · (5 · 4). Continuing this process until every factor is prime is called prime factorization; the result is always unique up to the order of the factors by the prime factorization theorem. To factorize a small integer n using mental or pen-and-paper arithmetic, the simplest method is trial division: checking if the number is divisible by prime numbers 2, 3, 5, and so on, up to the square root of n. For larger numbers, especially when using a computer, various more sophisticated factorization algorithms are more efficient. A prime factorization algorithm typically involves testing whether each factor is prime each time a factor is found. When the numbers are sufficiently large, no efficient non-quantum integer factorization algorithm is known. However, it has not been proven that such an algorithm does not exist. The presumed difficulty of this problem is important for the algorithms used in cryptography such as RSA public-key encryption and the RSA digital signature. Many areas of mathematics and computer science have been brought to bear on the problem, including elliptic curves, algebraic number theory, and quantum computing. Not all numbers of a given length are equally hard to factor. The hardest instances of these problems (for currently known techniques) are semiprimes, the product of two prime numbers. When they are both large, for instance more than two thousand bits long, randomly chosen, and about the same size (but not too close, for example, to avoid efficient factorization by Fermat's factorization method), even the fastest prime factorization algorithms on the fastest computers can take enough time to make the search impractical; that is, as the number of digits of the integer being factored increases, the number of operations required to perform the factorization on any computer increases drastically. Many cryptographic protocols are based on the difficulty of factoring large composite integers or a related problem—for example, the RSA problem. An algorithm that efficiently factors an arbitrary integer would render RSA-based public-key cryptography insecure. Prime decomposition. By the fundamental theorem of arithmetic, every positive integer has a unique prime factorization. (By convention, 1 is the empty product.) Testing whether the integer is prime can be done in polynomial time, for example, by the AKS primality test. If composite, however, the polynomial time tests give no insight into how to obtain the factors. Given a general algorithm for integer factorization, any integer can be factored into its constituent prime factors by repeated application of this algorithm. 
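The trial-division procedure described above, together with the fact that any cofactor left over after dividing out all primes up to its square root must itself be prime, already gives a complete (if slow) prime factorization routine. A minimal sketch:

```python
def trial_division(n: int) -> list:
    """Prime factorization of n >= 2 as a sorted list of primes with multiplicity,
    found by trial division up to the square root of the remaining cofactor."""
    factors = []
    d = 2
    while d * d <= n:
        while n % d == 0:      # divide out each factor completely before moving on
            factors.append(d)
            n //= d
        d += 1
    if n > 1:                  # whatever remains is itself prime
        factors.append(n)
    return factors

# Examples from the text: 15 = 3 * 5 and 60 = 2 * 2 * 3 * 5.
assert trial_division(15) == [3, 5]
assert trial_division(60) == [2, 2, 3, 5]
```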
The situation is more complicated with special-purpose factorization algorithms, whose benefits may not be realized as well or even at all with the factors produced during decomposition. For example, if "n" = 171 × "p" × "q" where "p" &lt; "q" are very large primes, trial division will quickly produce the factors 3 and 19 but will take "p" divisions to find the next factor. As a contrasting example, if "n" is the product of the primes 13729, 1372933, and 18848997161, where 13729 × 1372933 = 18848997157, Fermat's factorization method will begin with "a" = ⌈√"n"⌉ = 18848997159, which immediately yields "b" = √("a"2 − "n") = √4 = 2 and hence the factors "a" − "b" = 18848997157 and "a" + "b" = 18848997161. While these are easily recognized as composite and prime respectively, Fermat's method will take much longer to factor the composite number because the starting value of "a", ⌈√18848997157⌉ = 137292, is a factor of 10 away from 1372933. Current state of the art. Among the "b"-bit numbers, the most difficult to factor in practice using existing algorithms are those semiprimes whose factors are of similar size. For this reason, these are the integers used in cryptographic applications. In 2019, Fabrice Boudot, Pierrick Gaudry, Aurore Guillevic, Nadia Heninger, Emmanuel Thomé and Paul Zimmermann factored a 240-digit (795-bit) number (RSA-240) utilizing approximately 900 core-years of computing power. The researchers estimated that a 1024-bit RSA modulus would take about 500 times as long. The largest such semiprime yet factored was RSA-250, an 829-bit number with 250 decimal digits, in February 2020. The total computation time was roughly 2700 core-years of computing using Intel Xeon Gold 6130 at 2.1 GHz. Like all recent factorization records, this factorization was completed with a highly optimized implementation of the general number field sieve run on hundreds of machines. Time complexity. No algorithm has been published that can factor all integers in polynomial time, that is, that can factor a "b"-bit number "n" in time O("b""k") for some constant "k". Neither the existence nor non-existence of such algorithms has been proved, but it is generally suspected that they do not exist. There are published algorithms that are faster than O((1 + "ε")"b") for all positive "ε", that is, sub-exponential. As of 2022, the algorithm with the best theoretical asymptotic running time is the general number field sieve (GNFS), first published in 1993, running on a "b"-bit number "n" in time: formula_0 For current computers, GNFS is the best published algorithm for large "n" (more than about 400 bits). For a quantum computer, however, Peter Shor discovered an algorithm in 1994 that solves it in polynomial time. Shor's algorithm takes only O("b"3) time and O("b") space on "b"-bit number inputs. In 2001, Shor's algorithm was implemented for the first time, by using NMR techniques on molecules that provide seven qubits. In order to talk about complexity classes such as P, NP, and co-NP, the problem has to be stated as a decision problem. &lt;templatestyles src="Math_theorem/styles.css" /&gt; Decision problem (integer factorization): given two natural numbers "n" and "k", does "n" have a factor "d" with 1 &lt; "d" ≤ "k"? It is known to be in both NP and co-NP, meaning that both "yes" and "no" answers can be verified in polynomial time. An answer of "yes" can be certified by exhibiting a factorization "n" = "d"("n"/"d") with "d" ≤ "k". An answer of "no" can be certified by exhibiting the factorization of "n" into distinct primes, all larger than "k"; one can verify their primality using the AKS primality test, and then multiply them to obtain "n". 
The fundamental theorem of arithmetic guarantees that there is only one possible string of increasing primes that will be accepted, which shows that the problem is in both UP and co-UP. It is known to be in BQP because of Shor's algorithm. The problem is suspected to be outside all three of the complexity classes P, NP-complete, and co-NP-complete. It is therefore a candidate for the NP-intermediate complexity class. In contrast, the decision problem "Is "n" a composite number?" (or equivalently: "Is "n" a prime number?") appears to be much easier than the problem of specifying factors of "n". The composite/prime problem can be solved in polynomial time (in the number "b" of digits of "n") with the AKS primality test. In addition, there are several probabilistic algorithms that can test primality very quickly in practice if one is willing to accept a vanishingly small possibility of error. The ease of primality testing is a crucial part of the RSA algorithm, as it is necessary to find large prime numbers to start with. Factoring algorithms. Special-purpose. A special-purpose factoring algorithm's running time depends on the properties of the number to be factored or on one of its unknown factors: size, special form, etc. The parameters which determine the running time vary among algorithms. An important subclass of special-purpose factoring algorithms is the "Category 1" or "First Category" algorithms, whose running time depends on the size of the smallest prime factor. Given an integer of unknown form, these methods are usually applied before general-purpose methods to remove small factors. For example, naive trial division is a Category 1 algorithm. General-purpose. A general-purpose factoring algorithm, also known as a "Category 2", "Second Category", or "Kraitchik" "family" algorithm, has a running time which depends solely on the size of the integer to be factored. This is the type of algorithm used to factor RSA numbers. Most general-purpose factoring algorithms are based on the congruence of squares method. Heuristic running time. In number theory, there are many integer factoring algorithms that heuristically have expected running time formula_1 in little-o and L-notation. Some examples of those algorithms are the elliptic curve method and the quadratic sieve. Another such algorithm is the class group relations method proposed by Schnorr, Seysen, and Lenstra, which they proved only assuming the unproved generalized Riemann hypothesis. Rigorous running time. The Schnorr–Seysen–Lenstra probabilistic algorithm has been rigorously proven by Lenstra and Pomerance to have expected running time "Ln"[1/2, 1+"o"(1)] by replacing the GRH assumption with the use of multipliers. The algorithm uses the class group of positive binary quadratic forms of discriminant Δ denoted by "G"Δ. "G"Δ is the set of triples of integers ("a", "b", "c") in which those integers are relatively prime. Schnorr–Seysen–Lenstra algorithm. The algorithm takes as input an integer n to be factored, where n is an odd positive integer greater than a certain constant. In this factoring algorithm the discriminant Δ is chosen as a multiple of n, Δ = −"dn", where d is some positive multiplier. The algorithm expects that for one d there exist enough smooth forms in "G"Δ. Lenstra and Pomerance show that the choice of d can be restricted to a small set to guarantee the smoothness result. Denote by "P"Δ the set of all primes q with Kronecker symbol (Δ/"q") = 1. 
By constructing a set of generators of "G"Δ and prime forms "f""q" of "G"Δ with q in "P"Δ, a sequence of relations between the set of generators and "f""q" is produced. The size of q can be bounded by "c"0(log|Δ|)2 for some constant "c"0. The relation that will be used is a relation between the product of powers that is equal to the neutral element of "G"Δ. These relations will be used to construct a so-called ambiguous form of "G"Δ, which is an element of "G"Δ of order dividing 2. By calculating the corresponding factorization of Δ and by taking a gcd, this ambiguous form provides the complete prime factorization of n. This algorithm has these main steps: Let n be the number to be factored. To obtain an algorithm for factoring any positive integer, it is necessary to add a few steps to this algorithm such as trial division, and the Jacobi sum test. Expected running time. The algorithm as stated is a probabilistic algorithm as it makes random choices. Its expected running time is at most "Ln"[1/2, 1+"o"(1)]. Notes. &lt;templatestyles src="Reflist/styles.css" /&gt;
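The Fermat factorization example given earlier in this article (the number whose two large divisors are 18848997157 and 18848997161) can be reproduced directly. The sketch below assumes an odd composite input; it starts at "a" = ⌈√"n"⌉ and increments "a" until "a"2 − "n" is a perfect square.

```python
from math import isqrt

def fermat_factor(n: int):
    """Split an odd composite n as (a - b)*(a + b) by Fermat's difference-of-squares method."""
    a = isqrt(n)               # exact integer square root, no floating point
    if a * a < n:
        a += 1                 # start at the ceiling of sqrt(n)
    while True:
        b2 = a * a - n
        b = isqrt(b2)
        if b * b == b2:        # a^2 - n is a perfect square
            return a - b, a + b
        a += 1

# The example from the text: n = 13729 * 1372933 * 18848997161.
n = 13729 * 1372933 * 18848997161
p, q = fermat_factor(n)
assert (p, q) == (18848997157, 18848997161)   # found with the very first value of a
```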
[ { "math_id": 0, "text": "\\exp\\left( \\left(\\left(\\tfrac83\\right)^\\frac23 + o(1)\\right)\\left(\\log n\\right)^\\frac13\\left(\\log \\log n\\right)^\\frac23\\right)." }, { "math_id": 1, "text": "L_n\\left[\\tfrac12,1+o(1)\\right]=e^{(1+o(1))\\sqrt{(\\log n)(\\log \\log n)}}" } ]
https://en.wikipedia.org/wiki?curid=15491
154910
Wind chill
Lowering of body temperature due to the passing flow of lower-temperature air Wind chill (popularly wind chill factor) is the sensation of cold produced by the wind for a given ambient air temperature on exposed skin as the air motion accelerates the rate of heat transfer from the body to the surrounding atmosphere. Its values are always lower than the air temperature in the range where the formula is valid. When the apparent temperature is higher than the air temperature, the heat index is used instead. Explanation. A surface loses heat through conduction, evaporation, convection, and radiation. The rate of convection depends on both the difference in temperature between the surface and the fluid surrounding it and the velocity of that fluid with respect to the surface. As convection from a warm surface heats the air around it, an insulating boundary layer of warm air forms against the surface. Moving air disrupts this boundary layer, or epiclimate, carrying the warm air away, thereby allowing cooler air to replace the warm air against the surface and increasing the temperature difference in the boundary layer. The faster the wind speed, the more readily the surface cools. Contrary to popular belief, wind chill does not refer to how cold things get, and they will only get as cold as the air temperature. This means radiators and pipes cannot freeze when wind chill is below freezing and the air temperature is above freezing. Alternative approaches. Many formulas exist for wind chill because, unlike temperature, wind chill has no universally agreed-upon standard definition or measurement. All the formulas attempt to qualitatively predict the effect of wind on the temperature humans "perceive". Weather services in different countries use standards unique to their country or region; for example, the U.S. and Canadian weather services use a model accepted by the National Weather Service. That model has evolved over time. The first wind chill formulas and tables were developed by Paul Allman Siple and Charles F. Passel working in the Antarctic before the Second World War, and were made available by the National Weather Service by the 1970s. They were based on the cooling rate of a small plastic bottle as its contents turned to ice while suspended in the wind on the expedition hut roof, at the same level as the anemometer. The so-called Windchill Index provided a pretty good indication of the severity of the weather. In the 1960s, wind chill began to be reported as a wind chill equivalent temperature (WCET), which is theoretically less useful. The author of this change is unknown, but it was not Siple or Passel as is generally believed. At first, it was defined as the temperature at which the windchill index would be the same in the complete absence of wind. This led to equivalent temperatures that exaggerated the severity of the weather. Charles Eagan realized that people are rarely still and that even when it is calm, there is some air movement. He redefined the absence of wind to be an air speed of , which was about as low a wind speed as a cup anemometer could measure. This led to more realistic (warmer-sounding) values of equivalent temperature. Original model. Equivalent temperature was not universally used in North America until the 21st century. Until the 1970s, the coldest parts of Canada reported the original Wind Chill Index, a three- or four-digit number with units of kilocalories/hour per square metre. Each individual calibrated the scale of numbers personally, through experience. 
The chart also provided general guidance to comfort and hazard through threshold values of the index, such as 1400, which was the threshold for frostbite. The original formula for the index was: formula_0 where: North American and United Kingdom wind chill index. In November 2001, Canada, the United States, and the United Kingdom implemented a new wind chill index developed by scientists and medical experts on the Joint Action Group for Temperature Indices (JAG/TI). It is determined by iterating a model of skin temperature under various wind speeds and temperatures using standard engineering correlations of wind speed and heat transfer rate. Heat transfer was calculated for a bare face in wind, facing the wind, while walking into it at . The model corrects the officially measured wind speed to the wind speed at face height, assuming the person is in an open field. The results of this model may be approximated, to within one degree, from the following formulas. The standard wind chill formula for Environment Canada is: formula_1 where "T"wc is the wind chill index, based on the Celsius temperature scale; "T"a is the air temperature in degrees Celsius; and "v" is the wind speed at standard anemometer height, in kilometres per hour. When the temperature is and the wind speed is , the wind chill index is −24. If the temperature remains at −20 °C and the wind speed increases to , the wind chill index falls to −33. The equivalent formula in US customary units is: formula_2 where "T"wc is the wind chill index, based on the Fahrenheit scale; "T"a is the air temperature in degrees Fahrenheit; and "v" is the wind speed in miles per hour. Windchill temperature is defined only for temperatures at or below and wind speeds above . As the air temperature falls, the chilling effect of any wind that is present increases. For example, a wind will lower the apparent temperature by a wider margin at an air temperature of than a wind of the same speed would if the air temperature were . The 2001 WCET is a steady-state calculation (except for the time-to-frostbite estimates). There are significant time-dependent aspects to wind chill because cooling is most rapid at the start of any exposure, when the skin is still warm. Australian apparent temperature. The apparent temperature (AT), invented in the late 1970s, was designed to measure thermal sensation in indoor conditions. It was extended in the early 1980s to include the effect of sun and wind. The AT index used here is based on a mathematical model of an adult, walking outdoors, in the shade (Steadman 1994). The AT is defined as the temperature, at the reference humidity level, producing the same amount of discomfort as that experienced under the current ambient temperature and humidity. The formula is: formula_3 where: The vapour pressure can be calculated from the temperature and relative humidity using the equation: formula_4 where: The Australian formula includes the important factor of humidity and is somewhat more involved than the simpler North American model. The North American formula was designed to be applied at low temperatures (as low as ) when humidity levels are also low. The hot-weather version of the AT (1984) is used by the National Weather Service in the United States. In the United States, this simple version of the AT is known as the heat index. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
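The formulas above can be evaluated directly. In the sketch below the function and variable names are mine, but the constants are exactly those given in the Environment Canada, US customary, and Australian apparent-temperature formulas; the temperatures and wind speeds in the final lines are illustrative choices.

```python
import math

def wind_chill_metric(t_air_c: float, v_kmh: float) -> float:
    """Environment Canada wind chill index (degrees Celsius); only meaningful for
    low air temperatures and wind speeds above the small threshold noted in the text."""
    return (13.12 + 0.6215 * t_air_c
            - 11.37 * v_kmh ** 0.16
            + 0.3965 * t_air_c * v_kmh ** 0.16)

def wind_chill_us(t_air_f: float, v_mph: float) -> float:
    """US customary form of the same index (degrees Fahrenheit, miles per hour)."""
    return (35.74 + 0.6215 * t_air_f
            - 35.75 * v_mph ** 0.16
            + 0.4275 * t_air_f * v_mph ** 0.16)

def apparent_temperature(t_air_c: float, rel_humidity_pct: float, v_ms: float) -> float:
    """Australian apparent temperature (shade version), AT = Ta + 0.33*e - 0.70*v - 4.00,
    with the water vapour pressure e (hPa) computed from temperature and humidity."""
    e = (rel_humidity_pct / 100.0) * 6.105 * math.exp(17.27 * t_air_c / (237.7 + t_air_c))
    return t_air_c + 0.33 * e - 0.70 * v_ms - 4.00

# At -20 degrees Celsius the index drops as the wind picks up:
print(round(wind_chill_metric(-20, 10)))            # about -27
print(round(wind_chill_metric(-20, 30)))            # about -33
print(round(apparent_temperature(30, 60, 3), 1))    # a warm, humid, lightly windy day
```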
[ { "math_id": 0, "text": "WCI=\\left(10\\sqrt{v}-v+10.5\\right) \\cdot \\left(33-T_\\mathrm{a}\\right)," }, { "math_id": 1, "text": "T_\\mathrm{wc}=13.12 + 0.6215 T_\\mathrm{a}-11.37 v^{+0.16} + 0.3965 T_\\mathrm{a} v^{+0.16}," }, { "math_id": 2, "text": "T_\\mathrm{wc}=35.74+0.6215 T_\\mathrm{a}-35.75 v^{+0.16}+0.4275 T_\\mathrm{a} v^{+0.16}," }, { "math_id": 3, "text": "\\mathrm{AT} = T_\\mathrm{a} + 0.33e - 0.7v - 4.00," }, { "math_id": 4, "text": "e = \\frac\\mathrm{RH}{100} \\cdot 6.105 \\cdot \\exp {\\left(\\frac{17.27 \\cdot T_\\mathrm{a}}{237.7 + T_\\mathrm{a}}\\right)}," } ]
https://en.wikipedia.org/wiki?curid=154910
15492132
Contourlet
In image processing, contourlets form a multiresolution directional tight frame designed to efficiently approximate images made of smooth regions separated by smooth boundaries. The contourlet transform has a fast implementation based on a Laplacian pyramid decomposition followed by directional filterbanks applied on each bandpass subband. Contourlet transform. Introduction and motivation. In the field of geometrical image transforms, there are many 1-D transforms designed for detecting or capturing the geometry of image information, such as the Fourier and wavelet transform. However, the ability of 1-D transform processing of the intrinsic geometrical structures, such as smoothness of curves, is limited in one direction, then more powerful representations are required in higher dimensions. The contourlet transform which was proposed by Do and Vetterli in 2002, is a new two-dimensional transform method for image representations. The contourlet transform has properties of multiresolution, localization, directionality, critical sampling and anisotropy. Its basic functions are multiscale and multidimensional. The contours of original images, which are the dominant features in natural images, can be captured effectively with a few coefficients by using contourlet transform. The contourlet transform is inspired by the human visual system and Curvelet transform which can capture the smoothness of the contour of images with different elongated shapes and in variety of directions. However, it is difficult to sampling on a rectangular grid for Curvelet transform since Curvelet transform was developed in continuous domain and directions other than horizontal and vertical are very different on rectangular grid. Therefore, the contourlet transform was proposed initially as a directional multiresolution transform in the discrete domain. Definition. The contourlet transform uses a double filter bank structure to get the smooth contours of images. In this double filter bank, the Laplacian pyramid (LP) is first used to capture the point discontinuities, and then a directional filter bank (DFB) is used to form those point discontinuities into linear structures. The Laplacian pyramid (LP) decomposition only produce one bandpass image in a multidimensional signal processing, that can avoid frequency scrambling. And directional filter bank (DFB) is only fit for high frequency since it will leak the low frequency of signals in its directional subbands. This is the reason to combine DFB with LP, which is multiscale decomposition and remove the low frequency. Therefore, image signals pass through LP subbands to get bandpass signals and pass those signals through DFB to capture the directional information of image. This double filter bank structure of combination of LP and DFB is also called as pyramid directional filter bank (PDFB), and this transform is approximate the original image by using basic contour, so it is also called discrete contourlet transform. Nonsubsampled contourlet transform. Motivation and applications. The contourlet transform has a number of useful features and qualities, but it also has its flaws. One of the more notable variations of the contourlet transform was developed and proposed by da Cunha, Zhou and Do in 2006. The nonsubsampled contourlet transform (NSCT) was developed mainly because the contourlet transform is not shift invariant. The reason for this lies in the up-sampling and down-sampling present in both the Laplacian Pyramid and the directional filter banks. 
The method used in this variation was inspired by the nonsubsampled wavelet transform or the stationary wavelet transform which were computed with the à trous algorithm. Though the contourlet and this variant are relatively new, they have been used in many different applications including synthetic aperture radar despeckling, image enhancement and texture classification. Basic concept. To retain the directional and multiscale properties of the transform, the Laplacian Pyramid was replaced with a nonsubsampled pyramid structure to retain the multiscale property, and a nonsubsampled directional filter bank for directionality. The first major notable difference is that upsampling and downsampling are removed from both processes. Instead the filters in both the Laplacian Pyramid and the directional filter banks are upsampled. Though this mitigates the shift invariance issue a new issue is now present with aliasing and the directional filter bank. When processing the coarser levels of the pyramid there is potential for aliasing and loss in resolution. This issue is avoided though by upsampling the directional filter bank filters as was done with the filters from the pyramidal filter bank. The next issue that lies with this transform is the design of the filters for both filter banks. According to the authors there were some properties that they desired with this transform such as: perfect reconstruction, a sharp frequency response, easy implementation and linear-phase filters. These features were implemented by first removing the tight frame requirement and then using a mapping to design the filters and then implementing a ladder type structure. These changes lead to a transform that is not only efficient but performs well in comparison to other similar and in some cases more advanced transforms when denoising and enhancing images. Variations of the contourlet transform. Wavelet-based contourlet transform. Although the wavelet transform is not optimal in capturing the 2-D singularities of images, it can take the place of LP decomposition in the double filter bank structure to make the contourlet transform a non-redundant image transform. The wavelet-based contourlet transform is similar to the original contourlet transform, and it also consists of two filter bank stages. In the first stage, the wavelet transform is used to do the sub-band decomposition instead of the Laplacian pyramid (LP) in the contourlet transform. And the second stage of the wavelet-based contourlet transform is still a directional filter bank (DFB) to provide the link of singular points. One of the advantages to the wavelet-based contourlet transform is that the wavelet-based contourlet packets are similar to the wavelet packets which allows quad-tree decomposition of both low-pass and high-pass channels and then apply the DFB on each sub-band. The hidden Markov tree (HMT) model for the contourlet transform. Based on the study of statistics of contourlet coefficients of natural images, the HMT model for the contourlet transform is proposed. The statistics show that the contourlet coefficients are highly non-Gaussian, high interaction dependent on all their eight neighbors and high inter-direction dependent on their cousins. Therefore, the HMT model, that captures the highly non-Gaussian property, is used to get the dependence on neighborhood through the links between the hidden states of the coefficients. 
This HMT model of contourlet transform coefficients has better results than original contourlet transform and other HMT modeled transforms in denoising and texture retrieval, since it restores edges better visually. Contourlet transform with sharp frequency localization. An alternative or variation of the contourlet transform was proposed by Lu and Do in 2006. This new proposed method was intended as a remedy to fix non-localized basis images in frequency. The issue with the original contourlet transform was that when the contourlet transform was used with imperfect filter bank filters aliasing occurs and the frequency domain resolution is affected. There are two contributing factors to the aliasing, the first is the periodicity of 2-D frequency spectra and the second is an inherent flaw in the critical sampling of the directional filter banks. This new method mitigates these issues by changing the method of multiscale decomposition. As mentioned before, the original contourlet used the Laplacian Pyramid for multiscale decomposition. This new method as proposed by Lu and Do uses a multiscale pyramid that can be adjusted by applying low pass or high pass filters for the different levels. This method fixes multiple issues, it reduces the amount of cross terms and localizes the basis images in frequency, removes aliasing and has proven in some instances more effective in denoising images. Though it fixes all of those issues, this method requires more filters than the original contourlet transform and still has both the up-sampling and down-sampling operations meaning it is not shift-invariant. Image enhancement based on nonsubsampled contourlet transform. In prior studies the contourlet transform has proven effective in the denoising of images but in this method the researchers developed a method of image enhancement. When enhancing images preservation and the enhancement of important data is of paramount importance. The contourlet transform meets this criterion to some extent with its ability to denoise and detect edges. This transform first passes the image through the multiscale decomposition by way of the nonsubsampled laplacian pyramid. After that, the noise variance for each sub-band is calculated and relative to local statistics of the image it is classified as either noise, a weak edge or strong edge. The strong edges are retained, the weak edges are enhanced and the noise is discarded. This method of image enhancement significantly outperformed the nonsubsampled wavelet transform (NSWT) both qualitatively and quantitatively. Though this method outperformed the NSWT there still lies the issue of the complexity of designing adequate filter banks and fine tuning the filters for specific applications of which further study will be required. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
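The Laplacian-pyramid half of the double filter bank is easy to sketch on its own. The code below is only schematic: a 2×2 block average and pixel repetition stand in for proper lowpass and interpolation filters, and the directional filter bank stage is omitted entirely. It does show, however, how each level stores a bandpass (detail) image and why the decomposition is exactly invertible.

```python
import numpy as np

def downsample(img):
    """Halve each dimension by 2x2 block averaging (stand-in for lowpass + decimate)."""
    h, w = img.shape
    return img.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

def upsample(img):
    """Double each dimension by pixel repetition (stand-in for interpolation)."""
    return np.repeat(np.repeat(img, 2, axis=0), 2, axis=1)

def laplacian_pyramid(img, levels):
    """Return (details, coarse); image dimensions should be divisible by 2**levels.
    In the full contourlet transform each detail level would then be fed to a
    directional filter bank; that stage is omitted here."""
    details, current = [], img.astype(float)
    for _ in range(levels):
        coarse = downsample(current)
        details.append(current - upsample(coarse))   # the bandpass (detail) image
        current = coarse
    return details, current

def reconstruct(details, coarse):
    current = coarse
    for detail in reversed(details):
        current = upsample(current) + detail
    return current

rng = np.random.default_rng(0)
image = rng.random((64, 64))
details, coarse = laplacian_pyramid(image, 3)
assert np.allclose(reconstruct(details, coarse), image)   # the pyramid is invertible
```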
[ { "math_id": 0, "text": "4/3" }, { "math_id": 1, "text": "j" }, { "math_id": 2, "text": "l_j" }, { "math_id": 3, "text": "width" }, { "math_id": 4, "text": "2^j" }, { "math_id": 5, "text": "length" }, { "math_id": 6, "text": "2^{j+l_j-2}" }, { "math_id": 7, "text": "O(N)" } ]
https://en.wikipedia.org/wiki?curid=15492132
1549595
Index calculus algorithm
In computational number theory, the index calculus algorithm is a probabilistic algorithm for computing discrete logarithms. Dedicated to the discrete logarithm in formula_0 where formula_1 is a prime, index calculus leads to a family of algorithms adapted to finite fields and to some families of elliptic curves. The algorithm collects relations among the discrete logarithms of small primes, computes them by a linear algebra procedure and finally expresses the desired discrete logarithm with respect to the discrete logarithms of small primes. Description. Roughly speaking, the discrete log problem asks us to find an "x" such that formula_2, where "g", "h", and the modulus "n" are given. The algorithm (described in detail below) applies to the group formula_0 where "q" is prime. It requires a "factor base" as input. This "factor base" is usually chosen to be the number −1 and the first "r" primes starting with 2. From the point of view of efficiency, we want this factor base to be small, but in order to solve the discrete log for a large group we require the "factor base" to be (relatively) large. In practical implementations of the algorithm, those conflicting objectives are compromised one way or another. The algorithm is performed in three stages. The first two stages depend only on the generator "g" and prime modulus "q", and find the discrete logarithms of a "factor base" of "r" small primes. The third stage finds the discrete log of the desired number "h" in terms of the discrete logs of the factor base. The first stage consists of searching for a set of "r" linearly independent "relations" between the factor base and power of the generator "g". Each relation contributes one equation to a system of linear equations in "r" unknowns, namely the discrete logarithms of the "r" primes in the factor base. This stage is embarrassingly parallel and easy to divide among many computers. The second stage solves the system of linear equations to compute the discrete logs of the factor base. A system of hundreds of thousands or millions of equations is a significant computation requiring large amounts of memory, and it is "not" embarrassingly parallel, so a supercomputer is typically used. This was considered a minor step compared to the others for smaller discrete log computations. However, larger discrete logarithm records were made possible only by shifting the work away from the linear algebra and onto the sieve (i.e., increasing the number of equations while reducing the number of variables). The third stage searches for a power "s" of the generator "g" which, when multiplied by the argument "h", may be factored in terms of the factor base "gsh" = (−1)"f"0 2"f"1 3"f"2···"p""r""f""r". Finally, in an operation too simple to really be called a fourth stage, the results of the second and third stages can be rearranged by simple algebraic manipulation to work out the desired discrete logarithm "x" = "f"0log"g"(−1) + "f"1log"g"2 + "f"2log"g"3 + ··· + "f""r"log"g""pr" − "s". The first and third stages are both embarrassingly parallel, and in fact the third stage does not depend on the results of the first two stages, so it may be done in parallel with them. The choice of the factor base size "r" is critical, and the details are too intricate to explain here. The larger the factor base, the easier it is to find relations in stage 1, and the easier it is to complete stage 3, but the more relations you need before you can proceed to stage 2, and the more difficult stage 2 is. 
The relative availability of computers suitable for the different types of computation required for stages 1 and 2 is also important. Applications in other groups. The lack of the notion of "prime elements" in the group of points on elliptic curves makes it impossible to find an efficient "factor base" to run index calculus method as presented here in these groups. Therefore this algorithm is incapable of solving discrete logarithms efficiently in elliptic curve groups. However: For special kinds of curves (so called supersingular elliptic curves) there are specialized algorithms for solving the problem faster than with generic methods. While the use of these special curves can easily be avoided, in 2009 it has been proven that for certain fields the discrete logarithm problem in the group of points on "general" elliptic curves over these fields can be solved faster than with generic methods. The algorithms are indeed adaptations of the index calculus method. The algorithm. Input: Discrete logarithm generator formula_3, modulus formula_1 and argument formula_4. Factor base formula_5, of length formula_6. Output: formula_7 such that formula_8. Complexity. Assuming an optimal selection of the factor base, the expected running time (using L-notation) of the index-calculus algorithm can be stated as formula_20. History. The basic idea of the algorithm is due to Western and Miller (1968), which ultimately relies on ideas from Kraitchik (1922). The first practical implementations followed the 1976 introduction of the Diffie-Hellman cryptosystem which relies on the discrete logarithm. Merkle's Stanford University dissertation (1979) was credited by Pohlig (1977) and Hellman and Reyneri (1983), who also made improvements to the implementation. Adleman optimized the algorithm and presented it in the present form. The Index Calculus family. Index Calculus inspired a large family of algorithms. In finite fields formula_21 with formula_22 for some prime formula_23, the state-of-art algorithms are the Number Field Sieve for Discrete Logarithms, formula_24, when formula_25 is large compared to formula_1, the function field sieve, formula_26, and Joux, formula_27 for formula_28, when formula_23 is small compared to formula_29 and the Number Field Sieve in High Degree, formula_30 for formula_28 when formula_31 is middle-sided. Discrete logarithm in some families of elliptic curves can be solved in time formula_32 for formula_33, but the general case remains exponential. Notes. &lt;templatestyles src="Reflist/styles.css" /&gt;
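The staged structure described above can be made concrete with a toy run. In the sketch below every number is a small illustrative choice and the factor base omits −1 for simplicity; stage 1 and stage 3 follow the description directly, while the linear-algebra stage 2 is replaced by brute-force discrete logarithms of the factor-base primes, which is feasible only because the modulus is tiny.

```python
# Toy illustration of the index calculus structure in (Z/qZ)*.
q = 1019                        # a small prime modulus (illustrative choice)
g = 2                           # a generator of (Z/qZ)* for this q
h = pow(g, 347, q)              # target: the code should recover the exponent 347
factor_base = [2, 3, 5, 7, 11, 13]

def smooth_factor(m):
    """Exponent vector of m over the factor base, or None if m is not smooth."""
    exps = []
    for p in factor_base:
        e = 0
        while m % p == 0:
            m //= p
            e += 1
        exps.append(e)
    return exps if m == 1 else None

# Stage 1: collect relations g^k = prod(p_i^e_i) mod q (collected here, but in this
# toy they are not fed into linear algebra).
relations = [(k, e) for k in range(1, q) if (e := smooth_factor(pow(g, k, q))) is not None]

# Stage 2 stand-in: brute-force discrete logs of the factor-base primes.  In the real
# algorithm these come from solving the linear system built from the relations above.
order = next(t for t in range(1, q) if pow(g, t, q) == 1)   # multiplicative order of g
table = {}
for exponent in range(order):
    table.setdefault(pow(g, exponent, q), exponent)
logs = {p: table[p] for p in factor_base if p in table}

# Stage 3: find s with g^s * h smooth over the factor base, then combine the logs:
# x = sum(e_i * log(p_i)) - s  (mod order).
for s in range(order):
    exps = smooth_factor(pow(g, s, q) * h % q)
    if exps is not None and all(e == 0 or p in logs for p, e in zip(factor_base, exps)):
        x = (sum(e * logs[p] for p, e in zip(factor_base, exps)) - s) % order
        break

assert pow(g, x, q) == h
print(f"discrete log of {h} to the base {g} mod {q} is {x}; "
      f"{len(relations)} smooth relations were collected in stage 1")
```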
[ { "math_id": 0, "text": "(\\mathbb{Z}/q\\mathbb{Z})^*" }, { "math_id": 1, "text": "q" }, { "math_id": 2, "text": "g^x \\equiv h \\pmod{n}" }, { "math_id": 3, "text": "g" }, { "math_id": 4, "text": "h" }, { "math_id": 5, "text": "\\{-1, 2, 3, 5, 7, 11, \\ldots, p_r\\}" }, { "math_id": 6, "text": "r+1" }, { "math_id": 7, "text": "x" }, { "math_id": 8, "text": "g^x=h \\mod q" }, { "math_id": 9, "text": "k = 1, 2, \\ldots" }, { "math_id": 10, "text": "g^k \\bmod q" }, { "math_id": 11, "text": "e_i" }, { "math_id": 12, "text": "g^k \\bmod q= (-1)^{e_0}2^{e_1}3^{e_2}\\cdots p_r^{e_r}" }, { "math_id": 13, "text": "k" }, { "math_id": 14, "text": "(e_0,e_1,e_2,\\ldots,e_r,k)" }, { "math_id": 15, "text": "-1" }, { "math_id": 16, "text": "2" }, { "math_id": 17, "text": "s = 1, 2, \\ldots" }, { "math_id": 18, "text": "g^s h \\bmod q= (-1)^{f_0}2^{f_1}3^{f_2}\\cdots p_r^{f_r}" }, { "math_id": 19, "text": "x = f_0 \\log_g(-1) + f_1 \\log_g2 + \\cdots + f_r \\log_g p_r - s." }, { "math_id": 20, "text": "L_n[1/2,\\sqrt{2}+o(1)] " }, { "math_id": 21, "text": "\\mathbb{F}_{q} " }, { "math_id": 22, "text": "q=p^n" }, { "math_id": 23, "text": "p" }, { "math_id": 24, "text": " L_{q}\\left[1/3,\\sqrt[3]{64/9}\\,\\right]" }, { "math_id": 25, "text": " p " }, { "math_id": 26, "text": "L_q\\left[1/3,\\sqrt[3]{32/9}\\,\\right]" }, { "math_id": 27, "text": "L_{q}\\left[1/4+\\varepsilon,c\\right] " }, { "math_id": 28, "text": "c>0" }, { "math_id": 29, "text": "q " }, { "math_id": 30, "text": "L_q[1/3,c]" }, { "math_id": 31, "text": "p " }, { "math_id": 32, "text": "L_q\\left[1/3,c\\right]" }, { "math_id": 33, "text": " c>0" } ]
https://en.wikipedia.org/wiki?curid=1549595
154963
Dicyclic group
In group theory, a dicyclic group (notation Dic"n" or Q4"n", ⟨"n",2,2⟩) is a particular kind of non-abelian group of order 4"n" ("n" &gt; 1). It is an extension of the cyclic group of order 2 by a cyclic group of order 2"n", giving the name "di-cyclic". In the notation of exact sequences of groups, this extension can be expressed as: formula_0 More generally, given any finite abelian group with an order-2 element, one can define a dicyclic group. Definition. For each integer "n" &gt; 1, the dicyclic group Dic"n" can be defined as the subgroup of the unit quaternions generated by formula_1 More abstractly, one can define the dicyclic group Dic"n" as the group with the following presentation formula_2 Some things to note which follow from this definition: Thus, every element of Dic"n" can be uniquely written as "a""m""x""l", where 0 ≤ "m" &lt; 2"n" and "l" = 0 or 1. The multiplication rules are given by It follows that Dic"n" has order 4"n". When "n" = 2, the dicyclic group is isomorphic to the quaternion group "Q". More generally, when "n" is a power of 2, the dicyclic group is isomorphic to the generalized quaternion group. Properties. For each "n" &gt; 1, the dicyclic group Dic"n" is a non-abelian group of order 4"n". (For the degenerate case "n" = 1, the group Dic1 is the cyclic group "C"4, which is not considered dicyclic.) Let "A" = ⟨"a"⟩ be the subgroup of Dic"n" generated by "a". Then "A" is a cyclic group of order 2"n", so [Dic"n":"A"] = 2. As a subgroup of index 2 it is automatically a normal subgroup. The quotient group Dic"n"/"A" is a cyclic group of order 2. Dic"n" is solvable; note that "A" is normal, and being abelian, is itself solvable. Binary dihedral group. The dicyclic group is a binary polyhedral group — it is one of the classes of subgroups of the Pin group Pin−(2), which is a subgroup of the Spin group Spin(3) — and in this context is known as the binary dihedral group. The connection with the binary cyclic group "C"2"n", the cyclic group "C""n", and the dihedral group Dih"n" of order 2"n" is illustrated in the diagram at right, and parallels the corresponding diagram for the Pin group. Coxeter writes the "binary dihedral group" as ⟨2,2,"n"⟩ and "binary cyclic group" with angle-brackets, ⟨"n"⟩. There is a superficial resemblance between the dicyclic groups and dihedral groups; both are a sort of "mirroring" of an underlying cyclic group. But the presentation of a dihedral group would have "x"2 = 1, instead of "x"2 = "a""n"; and this yields a different structure. In particular, Dic"n" is not a semidirect product of "A" and ⟨"x"⟩, since "A" ∩ ⟨"x"⟩ is not trivial. The dicyclic group has a unique involution (i.e. an element of order 2), namely "x"2 = "a""n". Note that this element lies in the center of Dic"n". Indeed, the center consists solely of the identity element and "x"2. If we add the relation "x"2 = 1 to the presentation of Dic"n" one obtains a presentation of the dihedral group Dih"n", so the quotient group Dic"n"/&lt;"x"2&gt; is isomorphic to Dih"n". There is a natural 2-to-1 homomorphism from the group of unit quaternions to the 3-dimensional rotation group described at quaternions and spatial rotations. Since the dicyclic group can be embedded inside the unit quaternions one can ask what the image of it is under this homomorphism. The answer is just the dihedral symmetry group Dih"n". For this reason the dicyclic group is also known as the binary dihedral group. Note that the dicyclic group does not contain any subgroup isomorphic to Dih"n". 
The analogous pre-image construction, using Pin+(2) instead of Pin−(2), yields another dihedral group, Dih2"n", rather than a dicyclic group. Generalizations. Let "A" be an abelian group, having a specific element "y" in "A" with order 2. A group "G" is called a generalized dicyclic group, written as Dic("A", "y"), if it is generated by "A" and an additional element "x", and in addition we have that ["G":"A"] = 2, "x"2 = "y", and for all "a" in "A", "x"−1"ax" = "a"−1. Since for a cyclic group of even order, there is always a unique element of order 2, we can see that dicyclic groups are just a specific type of generalized dicyclic group. The dicyclic group is the case formula_12 of the family of binary triangle groups formula_13 defined by the presentation:formula_14Taking the quotient by the additional relation formula_15 produces an ordinary triangle group, which in this case is the dihedral quotient formula_16. References. &lt;templatestyles src="Reflist/styles.css" /&gt; &lt;templatestyles src="Refbegin/styles.css" /&gt;
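The presentation and multiplication rules given above are explicit enough to check by machine. In the sketch below each element "a""m""x""l" is encoded as a pair (m, l); this encoding is just bookkeeping, not notation from the text. The checks confirm, for a small n, the order 4n, the defining relations, the unique central involution "a""n" = "x"2, and non-commutativity.

```python
N = 3  # Dic_3, which should be a group of order 4*3 = 12

def mult(p, q, n=N):
    """Multiply a^m1 x^l1 by a^m2 x^l2 using the multiplication rules in the text."""
    (m1, l1), (m2, l2) = p, q
    if l1 == 0:                          # a^m1 * a^m2 x^l2 = a^(m1+m2) x^l2
        return ((m1 + m2) % (2 * n), l2)
    if l2 == 0:                          # a^m1 x * a^m2 = a^(m1-m2) x
        return ((m1 - m2) % (2 * n), 1)
    return ((m1 - m2 + n) % (2 * n), 0)  # a^m1 x * a^m2 x = a^(m1-m2+n)

elements = [(m, l) for m in range(2 * N) for l in (0, 1)]
e, a, x = (0, 0), (1, 0), (0, 1)

def power(g, k):
    r = e
    for _ in range(k):
        r = mult(r, g)
    return r

assert len(elements) == 4 * N                            # order 4n
assert power(a, 2 * N) == e                              # a^(2n) = 1
assert power(x, 2) == power(a, N)                        # x^2 = a^n
x_inv = next(g for g in elements if mult(g, x) == e)
assert mult(mult(x_inv, a), x) == power(a, 2 * N - 1)    # x^-1 a x = a^-1
involutions = [g for g in elements if g != e and mult(g, g) == e]
assert involutions == [(N, 0)]                           # the unique involution is a^n
assert all(mult((N, 0), g) == mult(g, (N, 0)) for g in elements)   # and it is central
assert mult(a, x) != mult(x, a)                          # the group is non-abelian
```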
[ { "math_id": 0, "text": "1 \\to C_{2n} \\to \\mbox{Dic}_n \\to C_2 \\to 1. \\, " }, { "math_id": 1, "text": "\\begin{align}\n a & = e^\\frac{i\\pi}{n} = \\cos\\frac{\\pi}{n} + i\\sin\\frac{\\pi}{n} \\\\\n x & = j\n\\end{align}" }, { "math_id": 2, "text": "\\operatorname{Dic}_n = \\left\\langle a, x \\mid a^{2n} = 1,\\ x^2 = a^n,\\ x^{-1}ax = a^{-1}\\right\\rangle.\\,\\!" }, { "math_id": 3, "text": " x^4 = 1 " }, { "math_id": 4, "text": " x^2 a^m = a^{m+n} = a^m x^2 " }, { "math_id": 5, "text": " l = \\pm 1 " }, { "math_id": 6, "text": " x^l a^m = a^{-m} x^l " }, { "math_id": 7, "text": " a^m x^{-1}= a^{m-n} a^n x^{-1}= a^{m-n} x^2 x^{-1}= a^{m-n} x " }, { "math_id": 8, "text": "a^k a^m = a^{k+m}" }, { "math_id": 9, "text": "a^k a^m x = a^{k+m}x" }, { "math_id": 10, "text": "a^k x a^m = a^{k-m}x" }, { "math_id": 11, "text": "a^k x a^m x = a^{k-m+n}" }, { "math_id": 12, "text": "(p,q,r)=(2,2,n) " }, { "math_id": 13, "text": "\\Gamma(p,q,r)" }, { "math_id": 14, "text": "\\langle a,b,c \\mid a^p = b^q = c^r = abc \\rangle." }, { "math_id": 15, "text": "abc = e" }, { "math_id": 16, "text": "\\mathrm{Dic}_n\\rightarrow \\mathrm{Dih}_n" } ]
https://en.wikipedia.org/wiki?curid=154963
1549805
Linear complex structure
Mathematics concept In mathematics, a complex structure on a real vector space formula_0 is an automorphism of formula_0 that squares to the minus identity, formula_1. Such a structure on formula_0 allows one to define multiplication by complex scalars in a canonical fashion so as to regard formula_0 as a complex vector space. Every complex vector space can be equipped with a compatible complex structure in a canonical way; however, there is in general no canonical complex structure. Complex structures have applications in representation theory as well as in complex geometry where they play an essential role in the definition of almost complex manifolds, by contrast to complex manifolds. The term "complex structure" often refers to this structure on manifolds; when it refers instead to a structure on vector spaces, it may be called a linear complex structure. Definition and properties. A complex structure on a real vector space formula_0 is a real linear transformation formula_2 such that formula_3 Here formula_4 means formula_5 composed with itself and formula_6 is the identity map on formula_0. That is, the effect of applying formula_5 twice is the same as multiplication by formula_7. This is reminiscent of multiplication by the imaginary unit, formula_8. A complex structure allows one to endow formula_0 with the structure of a complex vector space. Complex scalar multiplication can be defined by formula_9 for all real numbers formula_10 and all vectors formula_11 in "V". One can check that this does, in fact, give formula_0 the structure of a complex vector space which we denote formula_12. Going in the other direction, if one starts with a complex vector space formula_13 then one can define a complex structure on the underlying real space by defining formula_14. More formally, a linear complex structure on a real vector space is an algebra representation of the complex numbers formula_15, thought of as an associative algebra over the real numbers. This algebra is realized concretely as formula_16 which corresponds to formula_17. Then a representation of formula_15 is a real vector space formula_0, together with an action of formula_15 on formula_0 (a map formula_18). Concretely, this is just an action of formula_8, as this generates the algebra, and the operator representing formula_8 (the image of formula_8 in formula_19) is exactly formula_5. If formula_12 has complex dimension formula_20 then formula_0 must have real dimension formula_21. That is, a finite-dimensional space formula_0 admits a complex structure only if it is even-dimensional. It is not hard to see that every even-dimensional vector space admits a complex structure. One can define formula_5 on pairs formula_22 of basis vectors by formula_23 and formula_24 and then extend by linearity to all of formula_0. If formula_25 is a basis for the complex vector space formula_12 then formula_26 is a basis for the underlying real space formula_0. A real linear transformation formula_27 is a complex linear transformation of the corresponding complex space formula_12 if and only if formula_28 commutes with formula_5, i.e. if and only if formula_29 Likewise, a real subspace formula_30 of formula_0 is a complex subspace of formula_12 if and only if formula_5 preserves formula_30, i.e. if and only if formula_31 Examples. Elementary example. The collection of formula_32 real matrices formula_33 over the real field is 4-dimensional. Any matrix formula_34 has square equal to the negative of the identity matrix. 
A complex structure may be formed in formula_33: with identity matrix formula_35, elements formula_36, with matrix multiplication form complex numbers. Complex "n"-dimensional space C"n". The fundamental example of a linear complex structure is the structure on R2"n" coming from the complex structure on C"n". That is, the complex "n"-dimensional space C"n" is also a real 2"n"-dimensional space – using the same vector addition and real scalar multiplication – while multiplication by the complex number "i" is not only a "complex" linear transform of the space, thought of as a complex vector space, but also a "real" linear transform of the space, thought of as a real vector space. Concretely, this is because scalar multiplication by "i" commutes with scalar multiplication by real numbers formula_37 – and distributes across vector addition. As a complex "n"×"n" matrix, this is simply the scalar matrix with "i" on the diagonal. The corresponding real 2"n"×2"n" matrix is denoted "J". Given a basis formula_38 for the complex space, this set, together with these vectors multiplied by "i," namely formula_39 form a basis for the real space. There are two natural ways to order this basis, corresponding abstractly to whether one writes the tensor product as formula_40 or instead as formula_41 If one orders the basis as formula_42 then the matrix for "J" takes the block diagonal form (subscripts added to indicate dimension): formula_43 This ordering has the advantage that it respects direct sums of complex vector spaces, meaning here that the basis for formula_44 is the same as that for formula_45 On the other hand, if one orders the basis as formula_46, then the matrix for "J" is block-antidiagonal: formula_47 This ordering is more natural if one thinks of the complex space as a direct sum of real spaces, as discussed below. The data of the real vector space and the "J" matrix is exactly the same as the data of the complex vector space, as the "J" matrix allows one to define complex multiplication. At the level of Lie algebras and Lie groups, this corresponds to the inclusion of gl("n",C) in gl(2"n",R) (Lie algebras – matrices, not necessarily invertible) and GL("n",C) in GL(2"n",R): &lt;templatestyles src="Block indent/styles.css"/&gt;gl("n",C) &lt; gl("2n",R) and GL("n",C) &lt; GL("2n",R). The inclusion corresponds to forgetting the complex structure (and keeping only the real), while the subgroup GL("n",C) can be characterized (given in equations) as the matrices that "commute" with "J:" formula_48 The corresponding statement about Lie algebras is that the subalgebra gl("n",C) of complex matrices are those whose Lie bracket with "J" vanishes, meaning formula_49 in other words, as the kernel of the map of bracketing with "J," formula_50 Note that the defining equations for these statements are the same, as formula_51 is the same as formula_52 which is the same as formula_53 though the meaning of the Lie bracket vanishing is less immediate geometrically than the meaning of commuting. Direct sum. If "V" is any real vector space there is a canonical complex structure on the direct sum "V" ⊕ "V" given by formula_54 The block matrix form of "J" is formula_55 where formula_56 is the identity map on "V". This corresponds to the complex structure on the tensor product formula_57 Compatibility with other structures. If "B" is a bilinear form on "V" then we say that "J" preserves "B" if formula_58 for all "u", "v" ∈ "V". 
An equivalent characterization is that "J" is skew-adjoint with respect to "B": formula_59 If "g" is an inner product on "V" then "J" preserves "g" if and only if "J" is an orthogonal transformation. Likewise, "J" preserves a nondegenerate, skew-symmetric form "ω" if and only if "J" is a symplectic transformation (that is, if formula_60). For symplectic forms "ω" an interesting compatibility condition between "J" and "ω" is that formula_61 holds for all non-zero "u" in "V". If this condition is satisfied, then we say that "J" tames "ω" (synonymously: that "ω" is tame with respect to "J"; that "J" is tame with respect to "ω"; or that the pair formula_62 is tame). Given a symplectic form ω and a linear complex structure "J" on "V", one may define an associated bilinear form "g""J" on "V" by formula_63 Because a symplectic form is nondegenerate, so is the associated bilinear form. The associated form is preserved by "J" if and only if the symplectic form is. Moreover, if the symplectic form is preserved by "J", then the associated form is symmetric. If in addition "ω" is tamed by "J", then the associated form is positive definite. Thus in this case "V" is an inner product space with respect to "g""J". If the symplectic form ω is preserved (but not necessarily tamed) by "J", then "g""J" is the real part of the Hermitian form (by convention antilinear in the first argument) formula_64 defined by formula_65 Relation to complexifications. Given any real vector space "V" we may define its complexification by extension of scalars: formula_66 This is a complex vector space whose complex dimension is equal to the real dimension of "V". It has a canonical complex conjugation defined by formula_67 If "J" is a complex structure on "V", we may extend "J" by linearity to "V"C: formula_68 Since C is algebraically closed, "J" is guaranteed to have eigenvalues which satisfy λ2 = −1, namely λ = ±"i". Thus we may write formula_69 where "V"+ and "V"− are the eigenspaces of +"i" and −"i", respectively. Complex conjugation interchanges "V"+ and "V"−. The projection maps onto the "V"± eigenspaces are given by formula_70 So that formula_71 There is a natural complex linear isomorphism between "V""J" and "V"+, so these vector spaces can be considered the same, while "V"− may be regarded as the complex conjugate of "V""J". Note that if "V""J" has complex dimension "n" then both "V"+ and "V"− have complex dimension "n" while "V"C has complex dimension 2"n". Abstractly, if one starts with a complex vector space "W" and takes the complexification of the underlying real space, one obtains a space isomorphic to the direct sum of "W" and its conjugate: formula_72 Extension to related vector spaces. Let "V" be a real vector space with a complex structure "J". The dual space "V"* has a natural complex structure "J"* given by the dual (or transpose) of "J". The complexification of the dual space ("V"*)C therefore has a natural decomposition formula_73 into the ±"i" eigenspaces of "J"*. Under the natural identification of ("V"*)C with ("V"C)* one can characterize ("V"*)+ as those complex linear functionals which vanish on "V"−. Likewise ("V"*)− consists of those complex linear functionals which vanish on "V"+. The (complex) tensor, symmetric, and exterior algebras over "V"C also admit decompositions. The exterior algebra is perhaps the most important application of this decomposition. 
In general, if a vector space "U" admits a decomposition "U" = "S" ⊕ "T" then the exterior powers of "U" can be decomposed as follows: formula_74 A complex structure "J" on "V" therefore induces a decomposition formula_75 where formula_76 All exterior powers are taken over the complex numbers. So if "V""J" has complex dimension "n" (real dimension 2"n") then formula_77 The dimensions add up correctly as a consequence of Vandermonde's identity. The space of ("p","q")-forms Λ"p","q" "V""J"* is the space of (complex) multilinear forms on "V"C which vanish on homogeneous elements unless "p" are from "V"+ and "q" are from "V"−. It is also possible to regard Λ"p","q" "V""J"* as the space of real multilinear maps from "V""J" to C which are complex linear in "p" terms and conjugate-linear in "q" terms. See complex differential form and almost complex manifold for applications of these ideas.
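The matrix description developed above lends itself to a quick numerical check. The sketch below is an editorial illustration rather than part of the original article: it builds the block-antidiagonal matrix J on R^6, verifies that J squares to minus the identity, checks that the rule (x + iy)v = xv + yJv reproduces ordinary complex scalar multiplication under the identification of C^3 with R^6, and confirms that a complex-linear map, written in real form, commutes with J. The use of NumPy and the random test data are assumptions of the example.

```python
# Editorial sketch: the standard complex structure J on R^(2n) in the
# block-antidiagonal ordering (e_1, ..., e_n, i e_1, ..., i e_n), plus numerical
# checks of J^2 = -Id, of complex scalar multiplication via J, and of the fact
# that complex-linear maps are exactly the real-linear maps commuting with J.
import numpy as np

n = 3
Z = np.zeros((n, n))
J = np.block([[Z, -np.eye(n)],
              [np.eye(n), Z]])                 # block-antidiagonal form of J_(2n)
assert np.allclose(J @ J, -np.eye(2 * n))      # J^2 = -Id

def to_real(w):
    """Identify w in C^n with (Re w, Im w) in R^(2n)."""
    return np.concatenate([w.real, w.imag])

def complex_scale(z, v):
    """(x + iy)v = x v + y J v, using only the real structure and J."""
    return z.real * v + z.imag * (J @ v)

rng = np.random.default_rng(0)
w = rng.normal(size=n) + 1j * rng.normal(size=n)    # a test vector in C^n
z = 2.0 - 3.0j                                      # a test complex scalar
assert np.allclose(complex_scale(z, to_real(w)), to_real(z * w))

# A complex-linear map X + iY acts on R^(2n) as the block matrix [[X, -Y], [Y, X]],
# and such matrices commute with J, in line with the characterization of GL(n, C).
X, Y = rng.normal(size=(n, n)), rng.normal(size=(n, n))
A = np.block([[X, -Y], [Y, X]])
assert np.allclose(A @ J, J @ A)
```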
[ { "math_id": 0, "text": "V" }, { "math_id": 1, "text": " - I d_V " }, { "math_id": 2, "text": "J :V \\to V" }, { "math_id": 3, "text": "J^2 = -Id_V." }, { "math_id": 4, "text": "J^2" }, { "math_id": 5, "text": "J" }, { "math_id": 6, "text": "Id_V" }, { "math_id": 7, "text": "-1" }, { "math_id": 8, "text": "i" }, { "math_id": 9, "text": "(x + iy)\\vec{v} = x\\vec{v} + yJ(\\vec{v})" }, { "math_id": 10, "text": "x,y" }, { "math_id": 11, "text": "\\vec{v}" }, { "math_id": 12, "text": "V_J" }, { "math_id": 13, "text": "W" }, { "math_id": 14, "text": "Jw = iw~~\\forall w\\in W" }, { "math_id": 15, "text": "\\mathbb{C}" }, { "math_id": 16, "text": "\\Complex = \\Reals[x]/(x^2+1)," }, { "math_id": 17, "text": "i^2=-1" }, { "math_id": 18, "text": "\\mathbb{C}\\rightarrow \\text{End}(V)" }, { "math_id": 19, "text": "\\text{End}(V)" }, { "math_id": 20, "text": "n" }, { "math_id": 21, "text": "2n" }, { "math_id": 22, "text": "e,f" }, { "math_id": 23, "text": "Je=f" }, { "math_id": 24, "text": "Jf=-e" }, { "math_id": 25, "text": "(v_1, \\dots,v_n)" }, { "math_id": 26, "text": "(v_1,Jv_1,\\dots ,v_n ,Jv_n)" }, { "math_id": 27, "text": "A:V \\rightarrow V" }, { "math_id": 28, "text": "A" }, { "math_id": 29, "text": "AJ = JA." }, { "math_id": 30, "text": "U" }, { "math_id": 31, "text": "JU = U." }, { "math_id": 32, "text": "2\\times2" }, { "math_id": 33, "text": "\\mathbb{M}(2,\\Reals)" }, { "math_id": 34, "text": "J = \\begin{pmatrix}a & c \\\\ b & -a \\end{pmatrix},~~a^2+bc=-1" }, { "math_id": 35, "text": "I" }, { "math_id": 36, "text": "xI+yJ" }, { "math_id": 37, "text": " i (\\lambda v) = (i \\lambda) v = (\\lambda i) v = \\lambda (i v) " }, { "math_id": 38, "text": "\\left\\{e_1, e_2, \\dots, e_n \\right\\}" }, { "math_id": 39, "text": "\\left\\{ie_1, ie_2, \\dots, ie_n\\right\\}," }, { "math_id": 40, "text": "\\Complex^n = \\R^n \\otimes_{\\R} \\Complex" }, { "math_id": 41, "text": "\\Complex^n = \\Complex \\otimes_{\\R} \\R^n." }, { "math_id": 42, "text": "\\left\\{e_1, ie_1, e_2, ie_2, \\dots, e_n, ie_n\\right\\}," }, { "math_id": 43, "text": "J_{2n} = \\begin{bmatrix}\n0 & -1 \\\\\n1 & 0 \\\\\n & & 0 & -1 \\\\\n & & 1 & 0 \\\\\n & & & & \\ddots \\\\\n & & & & & \\ddots \\\\\n & & & & & & 0 & -1 \\\\\n & & & & & & 1 & 0\n\\end{bmatrix}\n=\n\\begin{bmatrix}\nJ_2 \\\\\n & J_2 \\\\\n & & \\ddots \\\\\n & & & J_2\n\\end{bmatrix}." }, { "math_id": 44, "text": "\\Complex^m \\oplus \\Complex^n" }, { "math_id": 45, "text": "\\Complex^{m+n}." }, { "math_id": 46, "text": "\\left\\{e_1,e_2,\\dots,e_n, ie_1, ie_2, \\dots, ie_n\\right\\}" }, { "math_id": 47, "text": "J_{2n} = \\begin{bmatrix}0 & -I_n \\\\ I_n & 0\\end{bmatrix}." }, { "math_id": 48, "text": "\\mathrm{GL}(n, \\Complex) = \\left\\{ A \\in \\mathrm{GL}(2n,\\R) \\mid AJ = JA \\right\\}." }, { "math_id": 49, "text": "[J,A] = 0;" }, { "math_id": 50, "text": "[J,-]." }, { "math_id": 51, "text": "AJ = JA" }, { "math_id": 52, "text": "AJ - JA = 0," }, { "math_id": 53, "text": "[A,J] = 0," }, { "math_id": 54, "text": "J(v,w) = (-w,v)." }, { "math_id": 55, "text": "J = \\begin{bmatrix}0 & -I_V \\\\ I_V & 0\\end{bmatrix}" }, { "math_id": 56, "text": "I_V" }, { "math_id": 57, "text": "\\Complex \\otimes_{\\R} V." }, { "math_id": 58, "text": "B(Ju, Jv) = B(u, v)" }, { "math_id": 59, "text": " B(Ju,v) = -B(u,Jv). " }, { "math_id": 60, "text": " \\omega(Ju,Jv) = \\omega(u,v) " }, { "math_id": 61, "text": " \\omega(u, Ju) > 0 " }, { "math_id": 62, "text": "(\\omega,J)" }, { "math_id": 63, "text": " g_J(u, v) = \\omega(u, Jv). 
" }, { "math_id": 64, "text": "h_J\\colon V_J\\times V_J\\to\\mathbb{C}" }, { "math_id": 65, "text": " h_J(u,v) = g_J(u,v) + ig_J(Ju,v) = \\omega(u,Jv) +i\\omega(u,v). " }, { "math_id": 66, "text": "V^{\\mathbb C}=V\\otimes_{\\mathbb{R}}\\mathbb{C}." }, { "math_id": 67, "text": "\\overline{v\\otimes z} = v\\otimes\\bar z" }, { "math_id": 68, "text": "J(v\\otimes z) = J(v)\\otimes z." }, { "math_id": 69, "text": "V^{\\mathbb C}= V^{+}\\oplus V^{-}" }, { "math_id": 70, "text": "\\mathcal P^{\\pm} = {1\\over 2}(1\\mp iJ)." }, { "math_id": 71, "text": "V^{\\pm} = \\{v\\otimes 1 \\mp Jv\\otimes i: v \\in V\\}." }, { "math_id": 72, "text": "W^{\\mathbb C} \\cong W\\oplus \\overline{W}." }, { "math_id": 73, "text": "(V^*)^\\mathbb{C} = (V^*)^{+}\\oplus (V^*)^-" }, { "math_id": 74, "text": "\\Lambda^r U = \\bigoplus_{p+q=r}(\\Lambda^p S)\\otimes(\\Lambda^q T)." }, { "math_id": 75, "text": "\\Lambda^r\\,V^\\mathbb{C} = \\bigoplus_{p+q=r} \\Lambda^{p,q}\\,V_J" }, { "math_id": 76, "text": "\\Lambda^{p,q}\\,V_J\\;\\stackrel{\\mathrm{def}}{=}\\, (\\Lambda^p\\,V^+)\\otimes(\\Lambda^q\\,V^-)." }, { "math_id": 77, "text": "\\dim_{\\mathbb C}\\Lambda^{r}\\,V^{\\mathbb C} = {2n\\choose r}\\qquad \\dim_{\\mathbb C}\\Lambda^{p,q}\\,V_J = {n \\choose p}{n \\choose q}." } ]
https://en.wikipedia.org/wiki?curid=1549805
1549922
136 (number)
Natural number 136 (one hundred [and] thirty-six) is the natural number following 135 and preceding 137. In mathematics. 136 is itself a factor of the Eddington number. 136 is a composite number with a total of 8 divisors; since 8 is itself one of those divisors, 136 is a refactorable number. 136 is a triangular number, being the sum of the first 16 positive integers; it is also a centered triangular number and a centered nonagonal number, and its decimal digits 1, 3 and 6 are the first three triangular numbers. The sum of the ninth row of Lozanić's triangle is 136. 136 is a self-descriptive number in base 4 and a repdigit in base 16. In base 10, the sum of the cubes of its digits is formula_0, and the sum of the cubes of the digits of 244 is formula_1, so repeatedly summing the cubes of the digits returns to 136. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
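The listed properties are easy to verify computationally. The following short script is an editorial illustration, not part of the article; the helper functions are ad hoc and the checks simply restate the claims above.

```python
# Editorial check of the stated properties of 136: divisor count and
# refactorability, triangularity, the base-4 and base-16 representations,
# and the digit-cube cycle with 244.
def divisors(n):
    return [d for d in range(1, n + 1) if n % d == 0]

def digits(n, base=10):
    out = []
    while n:
        out.append(n % base)
        n //= base
    return out[::-1]

n = 136
divs = divisors(n)
assert len(divs) == 8 and n % len(divs) == 0            # 8 divisors; refactorable
assert n == sum(range(1, 17))                           # triangular: 1 + 2 + ... + 16
d4 = digits(n, 4)
assert d4 == [2, 0, 2, 0]                               # 136 is 2020 in base 4
assert all(d4.count(i) == d4[i] for i in range(4))      # ... which is self-descriptive
assert digits(n, 16) == [8, 8]                          # repdigit 0x88 in base 16
cube_sum = lambda m: sum(d ** 3 for d in digits(m))
assert cube_sum(136) == 244 and cube_sum(244) == 136    # two-step digit-cube cycle
```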
[ { "math_id": 0, "text": "1^3 + 3^3 + 6^3 = 244" }, { "math_id": 1, "text": "2^3 + 4^3 + 4^3 = 136" } ]
https://en.wikipedia.org/wiki?curid=1549922
1550261
API gravity
Measure of how heavy or light a petroleum liquid is compared to water The American Petroleum Institute gravity, or API gravity, is a measure of how heavy or light a petroleum liquid is compared to water: if its API gravity is greater than 10, it is lighter and floats on water; if less than 10, it is heavier and sinks. API gravity is thus an inverse measure of a petroleum liquid's density relative to that of water (also known as specific gravity). It is used to compare densities of petroleum liquids. For example, if one petroleum liquid is less dense than another, it has a greater API gravity. Although API gravity is mathematically a dimensionless quantity (see the formula below), it is referred to as being in 'degrees'. API gravity is graduated in degrees on a hydrometer instrument. API gravity values of most petroleum liquids fall between 10 and 70 degrees. In 1916, the U.S. National Bureau of Standards accepted the Baumé scale, which had been developed in France in 1768, as the U.S. standard for measuring the specific gravity of liquids less dense than water. Investigation by the U.S. National Academy of Sciences found major errors in salinity and temperature controls that had caused serious variations in published values. Hydrometers in the U.S. had been manufactured and distributed widely with a modulus of 141.5 instead of the Baumé scale modulus of 140. The scale was so firmly established that, by 1921, the remedy implemented by the American Petroleum Institute was to create the API gravity scale, recognizing the scale that was actually being used. API gravity formulas. The formula to calculate API gravity from specific gravity (SG) is: formula_0 Conversely, the specific gravity of petroleum liquids can be derived from their API gravity value as formula_1 Thus, a heavy oil with a specific gravity of 1.0 (i.e., with the same density as pure water at 60 °F) has an API gravity of: formula_2 Using API gravity to calculate barrels of crude oil per metric ton. In the oil industry, quantities of crude oil are often measured in metric tons. One can calculate the approximate number of barrels per metric ton for a given crude oil based on its API gravity: formula_3 For example, a metric ton of West Texas Intermediate (39.6° API) has a volume of about 7.6 barrels. Measurement of API gravity from its specific gravity. To derive the API gravity, the specific gravity (i.e., density relative to water) is first measured using either the hydrometer, detailed in ASTM D1298 or with the oscillating U-tube method detailed in ASTM D4052. Density adjustments at different temperatures, corrections for soda-lime glass expansion and contraction and meniscus corrections for opaque oils are detailed in the Petroleum Measurement Tables, details of usage specified in ASTM D1250. The specific gravity is defined by the formula below. formula_4 With the formula presented in the previous section, the API gravity can be readily calculated. When converting oil density to specific gravity using the above definition, it is important to use the correct density of water, according to the standard conditions used when the measurement was made. The official density of water at 60 °F according to the 2008 edition of ASTM D1250 is 999.016 kg/m3. The 1980 value is 999.012 kg/m3. In some cases the standard conditions may be 15 °C (59 °F) and not 60 °F (15.56 °C), in which case a different value for the water density would be appropriate ("see" standard conditions for temperature and pressure). 
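As a worked illustration of the formulas above, the sketch below implements the API-from-SG conversion, its inverse, and the approximate barrels-per-metric-ton estimate; it is an editorial example, not an official API or ASTM tool, and the function names are assumptions of the example. It reproduces the West Texas Intermediate figure of roughly 7.6 barrels per metric ton quoted earlier.

```python
# Editorial sketch of the API gravity conversions quoted above.
def api_from_sg(sg):
    """API gravity from specific gravity at 60 degF."""
    return 141.5 / sg - 131.5

def sg_from_api(api):
    """Specific gravity at 60 degF from API gravity."""
    return 141.5 / (api + 131.5)

def barrels_per_tonne(api):
    """Approximate barrels of crude per metric ton (0.159 m^3 per barrel)."""
    return (api + 131.5) / (141.5 * 0.159)

assert abs(api_from_sg(1.0) - 10.0) < 1e-9       # oil as dense as water is 10 degrees API
print(round(sg_from_api(39.6), 3))               # ~0.827 for West Texas Intermediate
print(round(barrels_per_tonne(39.6), 2))         # ~7.6 barrels per metric ton
```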
Direct measurement of API gravity (hydrometer method). There are advantages to field testing and on-board conversion of measured volumes to volume correction. This method is detailed in ASTM D287. The hydrometer method is a standard technique for directly measuring API gravity of petroleum and petroleum products. This method is based on the principle of buoyancy and utilizes a specially calibrated hydrometer to determine the API gravity of a liquid sample. The procedure typically involves the following steps: The hydrometer method is widely used due to its simplicity and low cost. However, it requires a relatively large sample volume and may not be suitable for highly viscous or opaque fluids. Proper cleaning and handling of the hydrometer are crucial to maintain accuracy, and for volatile liquids, special precautions may be necessary to prevent evaporation during measurement. Classifications or grades. Generally speaking, oil with an API gravity between 40 and 45° commands the highest prices. Above 45°, the molecular chains become shorter and less valuable to refineries. Crude oil is classified as light, medium, or heavy according to its measured API gravity. However, not all parties use the same grading. The United States Geological Survey uses slightly different ranges. Crude oil with API gravity less than 10° is referred to as extra heavy oil or bitumen. Bitumen derived from oil sands deposits in Alberta, Canada, has an API gravity of around 8°. It can be diluted with lighter hydrocarbons to produce diluted bitumen, which has an API gravity of less than 22.3°, or further "upgraded" to an API gravity of 31 to 33° as synthetic crude. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
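The light/medium/heavy grading can be expressed as a small helper, shown below. The break points used (about 31.1 and 22.3 degrees API, with anything under 10 degrees counted as extra heavy) are commonly cited values supplied purely for illustration; as noted above, not all parties use the same ranges, so the thresholds should be read as assumptions rather than a standard.

```python
# Editorial sketch: a rough crude-oil grading by API gravity. The break points
# are assumed, commonly cited values; actual gradings differ between parties.
def classify_crude(api):
    if api < 10:
        return "extra heavy (bitumen)"
    if api < 22.3:
        return "heavy"
    if api < 31.1:
        return "medium"
    return "light"

print(classify_crude(39.6))   # light, e.g. West Texas Intermediate
print(classify_crude(8.0))    # extra heavy (bitumen), e.g. Alberta oil-sands bitumen
```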
[ { "math_id": 0, "text": "\\text{API gravity} = \\frac{141.5}{\\text{SG}} - 131.5" }, { "math_id": 1, "text": "\\text{SG at}~60^\\circ\\text{F} = \\frac{141.5}{\\text{API gravity} + 131.5}" }, { "math_id": 2, "text": "\\frac{141.5}{1.0} - 131.5 = 10.0^\\circ{\\text{API}}" }, { "math_id": 3, "text": "\\text{barrels of crude oil per metric ton} = \\frac{\\text{API gravity}+131.5}{141.5\\times 0.159}" }, { "math_id": 4, "text": "\\mbox{SG oil} = \\frac{\\rho_\\text{crudeoil}}{\\rho_{\\text{H}_2\\text{O}}}" } ]
https://en.wikipedia.org/wiki?curid=1550261
1550674
Radial stress
Stress in a direction radial to the axis of symmetry Radial stress is stress directed toward or away from the central axis of a component. Pressure vessels. The walls of pressure vessels generally undergo triaxial loading. For cylindrical pressure vessels, the normal loads on a wall element are the longitudinal stress, the circumferential (hoop) stress and the radial stress. The radial stress for a thick-walled cylinder is equal and opposite to the gauge pressure on the inside surface, and zero on the outside surface. The circumferential and longitudinal stresses are usually much larger for pressure vessels, so for thin-walled vessels the radial stress is usually neglected. Formula. The radial stress for a thick-walled pipe at a distance formula_0 from the central axis is given by formula_1 where formula_2 is the inner radius, formula_3 is the outer radius, formula_4 is the inner absolute pressure and formula_5 is the outer absolute pressure. The radial stress is largest in magnitude at the inside surface, where formula_6; there it is compressive, and measured relative to the outer pressure its magnitude equals the gauge pressure, formula_7. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
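The formula above can be checked numerically at the two surfaces. The sketch below is an editorial illustration with arbitrary example geometry and pressures; it confirms that the expression reduces to minus the inner pressure at the inner surface and minus the outer pressure at the outer surface.

```python
# Editorial sketch evaluating the thick-walled radial stress formula and
# checking its boundary values. Geometry and pressures are assumed examples.
def radial_stress(r, r_i, r_o, p_i, p_o):
    a = (p_i * r_i**2 - p_o * r_o**2) / (r_o**2 - r_i**2)
    b = r_i**2 * r_o**2 * (p_o - p_i) / (r**2 * (r_o**2 - r_i**2))
    return a + b

r_i, r_o = 0.10, 0.15        # metres (assumed)
p_i, p_o = 5.0e6, 0.1e6      # pascals, absolute (assumed)

assert abs(radial_stress(r_i, r_i, r_o, p_i, p_o) + p_i) < 1e-3   # -p_i at the inside surface
assert abs(radial_stress(r_o, r_i, r_o, p_i, p_o) + p_o) < 1e-3   # -p_o at the outside surface
```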
[ { "math_id": 0, "text": "r" }, { "math_id": 1, "text": " \\sigma_r(r) = \\frac{p_i r_i^2 - p_o r_o^2}{r_o^2 - r_i^2}+\\frac{r_i^2 r_o^2 (p_o - p_i)}{r^2 (r_o^2 - r_i^2)}\\ " }, { "math_id": 2, "text": " r_i " }, { "math_id": 3, "text": " r_o " }, { "math_id": 4, "text": " p_i " }, { "math_id": 5, "text": " p_o " }, { "math_id": 6, "text": "r = r_i" }, { "math_id": 7, "text": "p_i - p_o" } ]
https://en.wikipedia.org/wiki?curid=1550674
1550677
Cylinder stress
Rotationally symmetric stress distribution In mechanics, a cylinder stress is a stress distribution with rotational symmetry; that is, which remains unchanged if the stressed object is rotated about some fixed axis. Cylinder stress patterns include: These three principal stresses- hoop, longitudinal, and radial can be calculated analytically using a mutually perpendicular tri-axial stress system. The classical example (and namesake) of hoop stress is the tension applied to the iron bands, or hoops, of a wooden barrel. In a straight, closed pipe, any force applied to the cylindrical pipe wall by a pressure differential will ultimately give rise to hoop stresses. Similarly, if this pipe has flat end caps, any force applied to them by static pressure will induce a perpendicular "axial stress" on the same pipe wall. Thin sections often have negligibly small "radial stress", but accurate models of thicker-walled cylindrical shells require such stresses to be considered. In thick-walled pressure vessels, construction techniques allowing for favorable initial stress patterns can be utilized. These compressive stresses at the inner surface reduce the overall hoop stress in pressurized cylinders. Cylindrical vessels of this nature are generally constructed from concentric cylinders shrunk over (or expanded into) one another, i.e., built-up shrink-fit cylinders, but can also be performed to singular cylinders though autofrettage of thick cylinders. Definitions. Hoop stress. The hoop stress is the force over area exerted circumferentially (perpendicular to the axis and the radius of the object) in both directions on every particle in the cylinder wall. It can be described as: formula_0 where: An alternative to "hoop stress" in describing circumferential stress is wall stress or wall tension ("T"), which usually is defined as the total circumferential force exerted along the entire radial thickness: formula_1 Along with axial stress and radial stress, circumferential stress is a component of the stress tensor in cylindrical coordinates. It is usually useful to decompose any force applied to an object with rotational symmetry into components parallel to the cylindrical coordinates "r", "z", and "θ". These components of force induce corresponding stresses: radial stress, axial stress, and hoop stress, respectively. Relation to internal pressure. Thin-walled assumption. For the thin-walled assumption to be valid, the vessel must have a wall thickness of no more than about one-tenth (often cited as Diameter / t &gt; 20) of its radius. This allows for treating the wall as a surface, and subsequently using the Young–Laplace equation for estimating the hoop stress created by an internal pressure on a thin-walled cylindrical pressure vessel: formula_2 (for a cylinder) formula_3 (for a sphere) where The hoop stress equation for thin shells is also approximately valid for spherical vessels, including plant cells and bacteria in which the internal turgor pressure may reach several atmospheres. In practical engineering applications for cylinders (pipes and tubes), hoop stress is often re-arranged for pressure, and is called Barlow's formula. Inch-pound-second system (IPS) units for "P" are pounds-force per square inch (psi). Units for "t", and "d" are inches (in). SI units for "P" are pascals (Pa), while "t" and "d"=2"r" are in meters (m). When the vessel has closed ends, the internal pressure acts on them to develop a force along the axis of the cylinder. 
This is known as the axial stress and is usually less than the hoop stress. formula_5 Though this may be approximated to formula_6 There is also a radial stress formula_7 that is developed perpendicular to the surface and may be estimated in thin walled cylinders as: formula_8 In the thin-walled assumption the ratio formula_9 is large, so in most cases this component is considered negligible compared to the hoop and axial stresses. Thick-walled vessels. When the cylinder to be studied has a formula_10 ratio of less than 10 (often cited as formula_11) the thin-walled cylinder equations no longer hold since stresses vary significantly between inside and outside surfaces and shear stress through the cross section can no longer be neglected. These stresses and strains can be calculated using the "Lamé equations", a set of equations developed by French mathematician Gabriel Lamé. formula_12 formula_13 where: formula_14 and formula_15 are constants of integration, which may be found from the boundary conditions, formula_16 is the radius at the point of interest (e.g., at the inside or outside walls). For cylinder with boundary conditions: formula_17 (i.e. internal pressure formula_18 at inner surface), formula_19 (i.e. external pressure formula_20 at outer surface), the following constants are obtained: formula_21, formula_22. Using these constants, the following equation for hoop stress is obtained: formula_23 For a solid cylinder: formula_24 then formula_25 and a solid cylinder cannot have an internal pressure so formula_26. Being that for thick-walled cylinders, the ratio formula_9 is less than 10, the radial stress, in proportion to the other stresses, becomes non-negligible (i.e. P is no longer much, much less than Pr/t and Pr/2t), and so the thickness of the wall becomes a major consideration for design (Harvey, 1974, pp. 57). In pressure vessel theory, any given element of the wall is evaluated in a tri-axial stress system, with the three principal stresses being hoop, longitudinal, and radial. Therefore, by definition, there exist no shear stresses on the transverse, tangential, or radial planes. In thick-walled cylinders, the maximum shear stress at any point is given by half of the algebraic difference between the maximum and minimum stresses, which is, therefore, equal to half the difference between the hoop and radial stresses. The shearing stress reaches a maximum at the inner surface, which is significant because it serves as a criterion for failure since it correlates well with actual rupture tests of thick cylinders (Harvey, 1974, p. 57). Practical effects. Engineering. Fracture is governed by the hoop stress in the absence of other external loads since it is the largest principal stress. Note that a hoop experiences the greatest stress at its inside (the outside and inside experience the same total strain, which is distributed over different circumferences); hence cracks in pipes should theoretically start from "inside" the pipe. This is why pipe inspections after earthquakes usually involve sending a camera inside a pipe to inspect for cracks. Yielding is governed by an equivalent stress that includes hoop stress and the longitudinal or radial stress when absent. Medicine. In the pathology of vascular or gastrointestinal walls, the wall tension represents the muscular tension on the wall of the vessel. As a result of the Law of Laplace, if an aneurysm forms in a blood vessel wall, the radius of the vessel has increased. 
This means that the inward force on the vessel decreases, and therefore the aneurysm will continue to expand until it ruptures. A similar logic applies to the formation of diverticuli in the gut. Theory development. The first theoretical analysis of the stress in cylinders was developed by the mid-19th century engineer William Fairbairn, assisted by his mathematical analyst Eaton Hodgkinson. Their first interest was in studying the design and failures of steam boilers. Fairbairn realized that the hoop stress was twice the longitudinal stress, an important factor in the assembly of boiler shells from rolled sheets joined by riveting. Later work was applied to bridge-building and the invention of the box girder. In the Chepstow Railway Bridge, the cast iron pillars are strengthened by external bands of wrought iron. The vertical, longitudinal force is a compressive force, which cast iron is well able to resist. The hoop stress is tensile, and so wrought iron, a material with better tensile strength than cast iron, is added. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
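For a sense of how the thin-walled estimates relate to the Lamé solution, the sketch below is an editorial illustration with assumed geometry and pressure, not part of the article: for a cylinder with r/t = 50 the thin-walled hoop estimate and the Lamé hoop stress agree to within a few per cent, consistent with the thin-walled assumption discussed above.

```python
# Editorial comparison of thin-walled hoop/axial estimates with the thick-walled
# Lame hoop stress. Geometry and pressure are assumed example values; the outer
# pressure is taken as zero (gauge).
def thin_wall_hoop(P, r, t):
    return P * r / t

def thin_wall_axial(P, r, t):
    return P * r / (2 * t)

def lame_hoop(r, a, b, P_a, P_b):
    A = (P_a * a**2 - P_b * b**2) / (b**2 - a**2)
    B = a**2 * b**2 * (P_a - P_b) / (b**2 - a**2)
    return A + B / r**2

a, t = 0.50, 0.01            # inner radius and wall thickness in metres (r/t = 50)
b = a + t
P = 2.0e6                    # internal gauge pressure in pascals

print(thin_wall_hoop(P, a, t) / 1e6)       # 100.0 MPa, thin-walled hoop estimate
print(thin_wall_axial(P, a, t) / 1e6)      # 50.0 MPa, thin-walled axial estimate
print(lame_hoop(a, a, b, P, 0.0) / 1e6)    # ~101 MPa at the inner surface
print(lame_hoop(b, a, b, P, 0.0) / 1e6)    # ~99 MPa at the outer surface
```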
[ { "math_id": 0, "text": " \\sigma_\\theta = \\dfrac{F}{tl} \\ " }, { "math_id": 1, "text": " T = \\dfrac{F}{l} \\ " }, { "math_id": 2, "text": " \\sigma_\\theta = \\dfrac{Pr}{t} \\ " }, { "math_id": 3, "text": " \\sigma_\\theta = \\dfrac{Pr}{2t} \\ " }, { "math_id": 4, "text": " \\sigma_\\theta \\! " }, { "math_id": 5, "text": " \\sigma_z = \\dfrac{F}{A} = \\dfrac{Pd^2}{(d+2t)^2 - d^2} \\ " }, { "math_id": 6, "text": " \\sigma_z = \\dfrac{Pr}{2t} \\ " }, { "math_id": 7, "text": " \\sigma_r \\ " }, { "math_id": 8, "text": " \\sigma_r = {-P} \\ " }, { "math_id": 9, "text": " \\dfrac{r}{t} \\ " }, { "math_id": 10, "text": "\\text{radius} / \\text{thickness} " }, { "math_id": 11, "text": " \\text{diameter} / \\text{thickness} < 20" }, { "math_id": 12, "text": " \\sigma_r = A - \\dfrac{B}{r^2} \\ " }, { "math_id": 13, "text": " \\sigma_\\theta = A + \\dfrac{B}{r^2} \\ " }, { "math_id": 14, "text": "A" }, { "math_id": 15, "text": "B " }, { "math_id": 16, "text": "r" }, { "math_id": 17, "text": "p(r=a) = P_a" }, { "math_id": 18, "text": "P_a" }, { "math_id": 19, "text": "p(r=b) = P_b" }, { "math_id": 20, "text": "P_b" }, { "math_id": 21, "text": " A = \\dfrac{P_a a^2 - P_b b^2}{b^2 - a^2} \\ " }, { "math_id": 22, "text": " B = \\dfrac{a^2 b^2 (P_a - P_b)}{b^2 - a^2} \\ " }, { "math_id": 23, "text": " \\sigma_\\theta = \\dfrac{P_a a^2 - P_b b^2}{b^2 - a^2} + \\dfrac{a^2 b^2 (P_a - P_b)}{(b^2 - a^2)r^2} \\ " }, { "math_id": 24, "text": "R_i = 0" }, { "math_id": 25, "text": " B = 0" }, { "math_id": 26, "text": " A = P_o " } ]
https://en.wikipedia.org/wiki?curid=1550677
1550685
Strong antichain
Concept in the mathematics of partial orders In order theory, a subset "A" of a partially ordered set "P" is a strong downwards antichain if it is an antichain in which no two distinct elements have a common lower bound in "P", that is, formula_0 In the case where "P" is ordered by inclusion, and closed under subsets, but does not contain the empty set, this is simply a family of pairwise disjoint sets. A strong upwards antichain "B" is a subset of "P" in which no two distinct elements have a common upper bound in "P". Authors will often omit the "upwards" and "downwards" term and merely refer to strong antichains. Unfortunately, there is no common convention as to which version is called a strong antichain. In the context of forcing, authors will sometimes also omit the "strong" term and merely refer to antichains. To resolve ambiguities in this case, the weaker type of antichain is called a weak antichain. If ("P", ≤) is a partial order and there exist distinct "x", y ∈ "P" such that {"x", "y"} is a strong antichain, then ("P", ≤) cannot be a lattice (or even a meet semilattice), since by definition, every two elements in a lattice (or meet semilattice) must have a common lower bound. Thus lattices have only trivial strong antichains (i.e., strong antichains of cardinality at most 1). References. &lt;templatestyles src="Reflist/styles.css" /&gt; &lt;templatestyles src="Refbegin/styles.css" /&gt;
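The definition can be tested by brute force on a finite poset. The sketch below is an editorial illustration; the divisibility poset and the helper names are assumptions of the example, chosen only to show one subset that is a strong downwards antichain and one that is not.

```python
# Editorial sketch: brute-force test of the strong downwards antichain condition,
# i.e. no two distinct elements of A have a common lower bound in P.
from itertools import combinations

P = [2, 3, 4, 6, 12]                      # ordered by divisibility (assumed example poset)
leq = lambda x, y: y % x == 0             # x <= y  iff  x divides y

def is_strong_downwards_antichain(A, P, leq):
    return all(not any(leq(z, x) and leq(z, y) for z in P)
               for x, y in combinations(A, 2))

print(is_strong_downwards_antichain([2, 3], P, leq))   # True: no common lower bound in P
print(is_strong_downwards_antichain([4, 6], P, leq))   # False: 2 lies below both 4 and 6
```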
[ { "math_id": 0, "text": "\\forall x, y \\in A \\; [x \\neq y \\rightarrow \\neg\\exists z \\in P \\; [ z \\leq x \\land z \\leq y]]. " } ]
https://en.wikipedia.org/wiki?curid=1550685
1550771
Countable chain condition
In order theory, a partially ordered set "X" is said to satisfy the countable chain condition, or to be ccc, if every strong antichain in "X" is countable. Overview. There are really two conditions: the "upwards" and "downwards" countable chain conditions. These are not equivalent. The countable chain condition means the downwards countable chain condition, in other words no two elements have a common lower bound. This is called the "countable chain condition" rather than the more logical term "countable antichain condition" for historical reasons related to certain chains of open sets in topological spaces and chains in complete Boolean algebras, where chain conditions sometimes happen to be equivalent to antichain conditions. For example, if κ is a cardinal, then in a complete Boolean algebra every antichain has size less than κ if and only if there is no descending κ-sequence of elements, so chain conditions are equivalent to antichain conditions. Partial orders and spaces satisfying the ccc are used in the statement of Martin's axiom. In the theory of forcing, ccc partial orders are used because forcing with any generic set over such an order preserves cardinals and cofinalities. Furthermore, the ccc property is preserved by finite support iterations (see iterated forcing). For more information on ccc in the context of forcing, see . More generally, if κ is a cardinal then a poset is said to satisfy the κ-chain condition, also written as κ-c.c., if every antichain has size less than κ. The countable chain condition is the ℵ1-chain condition. Examples and properties in topology. A topological space is said to satisfy the countable chain condition, or Suslin's Condition, if the partially ordered set of non-empty open subsets of "X" satisfies the countable chain condition, "i.e." every pairwise disjoint collection of non-empty open subsets of "X" is countable. The name originates from Suslin's Problem. References. &lt;templatestyles src="Refbegin/styles.css" /&gt;
[ { "math_id": 0, "text": "\\mathfrak{c}=2^{\\aleph_{0}}" }, { "math_id": 1, "text": "\\{ 0, 1 \\}^{2^{2^{\\aleph_{0}}}}" } ]
https://en.wikipedia.org/wiki?curid=1550771
155110
Beaufort scale
Empirical measure describing wind speed based on observed conditions The Beaufort scale is an empirical measure that relates wind speed to observed conditions at sea or on land. Its full name is the Beaufort wind force scale. History. The scale that carries Beaufort's name had a long and complex evolution from the previous work of others (including Daniel Defoe the century before). In the 18th century, naval officers made regular weather observations, but there was no standard scale and so they could be very subjective — one man's "stiff breeze" might be another's "soft breeze"—: Beaufort succeeded in standardising a scale. The scale was devised in 1805 by Francis Beaufort (later Rear Admiral), a hydrographer and a Royal Navy officer, while serving on , and refined until he was Hydrographer of the Navy in the 1830s, when it was adopted officially. It was first used during the 1831-1836 "Darwin voyage" of HMS "Beagle" under Captain Robert FitzRoy, who was later to set up the first Meteorological Office in Britain giving regular weather forecasts. The initial scale of 13 classes (zero to 12) did not reference wind speed numbers, but related qualitative wind conditions to effects on the sails of a frigate, then the main ship of the Royal Navy, from "just sufficient to give steerage" to "that which no canvas sails could withstand". The scale was made a standard for ship's log entries on Royal Navy vessels in the late 1830s and, in 1853, the Beaufort scale was accepted as generally applicable at the First International Meteorological Conference in Brussels. In 1916, to accommodate the growth of steam power, the descriptions were changed to how the sea, not the sails, behaved and extended to land observations. Anemometer rotations to scale numbers were standardised only in 1923. George Simpson, CBE (later Sir George Simpson), director of the UK Meteorological Office, was responsible for this and for the addition of the land-based descriptors. The measures were slightly altered some decades later to improve its utility for meteorologists. Nowadays, meteorologists typically express wind speed in kilometres or miles per hour or, for maritime and aviation purposes, knots, but Beaufort scale terminology is still sometimes used in weather forecasts for shipping and the severe weather warnings given to the public. Wind speed on the Beaufort scale is based on the empirical relationship: where "v" is the equivalent wind speed at 10 metres above the sea surface and "B" is Beaufort scale number. For example, "B" = 9.5 is related to 24.5 m/s which is equal to the lower limit of "10 Beaufort". Using this formula the highest winds in hurricanes would be 23 in the scale. F1 tornadoes on the Fujita scale and T2 TORRO scale also begin roughly at the end of level 12 of the Beaufort scale, but are independent scales, although the TORRO scale wind values are based on the 3/2 power law relating wind velocity to Beaufort force. Wave heights in the scale are for conditions in the open ocean, not along the shore. Modern scale. The Beaufort scale is neither an exact nor an objective scale; it was based on visual and subjective observation of a ship and of the sea. The corresponding integral wind speeds were determined later, Extended scale. The Beaufort scale was extended in 1946 when forces 13 to 17 were added. However, forces 13 to 17 were intended to apply only to special cases, such as tropical cyclones. Nowadays, the extended scale is only used in Taiwan and mainland China, which are often affected by typhoons. 
Internationally, the World Meteorological Organization Manual on Marine Meteorological Services (2012 edition) defined the Beaufort Scale only up to force 12 and there was no recommendation on the use of the extended scale. Use. The scale is used in the Shipping Forecasts broadcast on BBC Radio 4 in the United Kingdom, and in the Sea Area Forecast from Met Éireann, the Irish Meteorological Service. Met Éireann issues a "Small Craft Warning" if winds of Beaufort force 6 (mean wind speed exceeding 22 knots) are expected up to 10 nautical miles offshore. Other warnings are issued by Met Éireann for Irish coastal waters, which are regarded as extending 30 miles out from the coastline, and the Irish Sea or part thereof: "Gale Warnings" are issued if winds of Beaufort force 8 are expected; "Strong Gale Warnings" are issued if winds of Beaufort force 9 or frequent gusts of at least 52 knots are expected.; "Storm Force Warnings" are issued if Beaufort force 10 or frequent gusts of at least 61 knots are expected; "Violent Storm Force Warnings" are issued if Beaufort force 11 or frequent gusts of at least 69 knots are expected; "Hurricane Force Warnings" are issued if winds of greater than 64 knots are expected. This scale is also widely used in the Netherlands, Germany, Greece, China, Taiwan, Hong Kong, Malta, and Macau, although with some differences between them. Taiwan uses the Beaufort scale with the extension to 17 noted above. China also switched to this extended version without prior notice on the morning of 15 May 2006, and the extended scale was immediately put to use for Typhoon Chanchu. Hong Kong and Macau retain force 12 as the maximum. In the United States of America, winds of force 6 or 7 result in the issuance of a small craft advisory, with force 8 or 9 winds bringing about a gale warning, force 10 or 11 a storm warning ("a tropical storm warning" being issued instead of the latter two if the winds relate to a tropical cyclone), and force 12 a hurricane-force wind warning (or hurricane warning if related to a tropical cyclone). A set of red warning flags (daylight) and red warning lights (night time) is displayed at shore establishments which coincide with the various levels of warning. In Canada, maritime winds forecast to be in the range of 6 to 7 are designated as "strong"; 8 to 9 "gale force"; 10 to 11 "storm force"; 12 "hurricane force". Appropriate wind warnings are issued by Environment Canada's Meteorological Service of Canada: strong wind warning, gale (force wind) warning, storm (force wind) warning and hurricane-force wind warning. These designations were standardised nationally in 2008, whereas "light wind" can refer to 0 to 12 or 0 to 15 knots and "moderate wind" 12 to 19 or 16 to 19 knots, depending on regional custom, definition or practice. Prior to 2008, a "strong wind warning" would have been referred to as a "small craft warning" by Environment Canada, similar to US terminology. (Canada and the USA have the Great Lakes in common.) Weather scale. Beaufort's name was also attached to the Beaufort scale for weather reporting: In this scale the weather designations could be combined, and reported, for example, as "s.c." for snow and detached cloud or "g.r.q." for dark, rain and squally. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
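The empirical relationship mentioned in the History section can be turned into a small conversion sketch. The code below is an editorial illustration, not an official conversion tool: it uses the metre-per-second form v ≈ 0.836·B^(3/2), which reproduces the example in the text relating B = 9.5 to 24.5 m/s, together with a rounded inverse for estimating a force number from a wind speed. The function names and the rounding convention are assumptions of the example.

```python
# Editorial sketch of the empirical Beaufort relationship. The constant 0.836
# gives wind speed in m/s; 0.836 m/s is about 1.625 kn, i.e. the 13/8 factor
# when speeds are expressed in knots.
def wind_speed_ms(beaufort):
    """Equivalent wind speed in m/s, 10 m above the sea surface, for force B."""
    return 0.836 * beaufort ** 1.5

def beaufort_from_speed(v_ms):
    """Invert the empirical relationship and round to the nearest whole force."""
    return round((v_ms / 0.836) ** (2.0 / 3.0))

print(round(wind_speed_ms(9.5), 1))    # 24.5 m/s, the lower limit of force 10 (as in the text)
print(round(wind_speed_ms(17), 1))     # ~58.6 m/s at the top of the extended scale
print(beaufort_from_speed(32.7))       # 12, consistent with hurricane-force winds
```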
[ { "math_id": 0, "text": "=\\frac{13} {8} \\sqrt{B^3}" } ]
https://en.wikipedia.org/wiki?curid=155110
1551157
Operative temperature
Operative temperature (formula_0) is defined as a uniform temperature of an imaginary black enclosure in which an occupant would exchange the same amount of heat by radiation plus convection as in the actual nonuniform environment. Some references also use the terms 'equivalent temperature' or 'effective temperature' to describe the combined effects of convective and radiant heat transfer. In design, operative temperature can be defined as the average of the mean radiant and ambient air temperatures, weighted by their respective heat transfer coefficients. The instrument used for assessing environmental thermal comfort in terms of operative temperature is called a eupatheoscope and was invented by A. F. Dufton in 1929. Mathematically, operative temperature can be expressed as: formula_1 where: formula_2 = convective heat transfer coefficient formula_3 = linear radiative heat transfer coefficient formula_4 = air temperature formula_5 = mean radiant temperature Or formula_6 where: formula_7 = air velocity formula_4 and formula_5 have the same meaning as above. It is also acceptable to approximate this relationship for occupants engaged in near-sedentary physical activity (with metabolic rates between 1.0 met and 1.3 met), not in direct sunlight, and not exposed to air velocities greater than 0.10 m/s (20 fpm): formula_8 where formula_4 and formula_5 have the same meaning as above. Application. Operative temperature is used in heat transfer and thermal comfort analysis in transportation and buildings. Most psychrometric charts used in HVAC design show only the dry-bulb temperature on the x-axis (abscissa); however, it is the operative temperature that is specified on the x-axis of the psychrometric chart illustrated in ANSI/ASHRAE Standard 55 – Thermal Environmental Conditions for Human Occupancy.
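The three expressions above can be compared directly. The sketch below is an editorial illustration, not drawn from ASHRAE material: it implements the coefficient-weighted form, the air-velocity form and the simple average. The input temperatures, air speed and heat transfer coefficients are assumed example values.

```python
# Editorial sketch of the three operative-temperature expressions given above.
def t_o_weighted(t_a, t_mr, h_c, h_r):
    """Coefficient-weighted form: (h_r*t_mr + h_c*t_a) / (h_r + h_c)."""
    return (h_r * t_mr + h_c * t_a) / (h_r + h_c)

def t_o_velocity(t_a, t_mr, v):
    """Air-velocity form: (t_mr + t_a*sqrt(10 v)) / (1 + sqrt(10 v))."""
    k = (10 * v) ** 0.5
    return (t_mr + t_a * k) / (1 + k)

def t_o_simple(t_a, t_mr):
    """Simple average, for near-sedentary, low-air-speed conditions."""
    return (t_a + t_mr) / 2

t_a, t_mr = 24.0, 28.0                                      # degC, assumed example values
print(round(t_o_weighted(t_a, t_mr, h_c=3.1, h_r=4.7), 2))  # ~26.41 degC (assumed coefficients)
print(round(t_o_velocity(t_a, t_mr, v=0.10), 2))            # 26.0 degC at 0.10 m/s
print(round(t_o_simple(t_a, t_mr), 2))                      # 26.0 degC, the simple average
```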
[ { "math_id": 0, "text": "t_o" }, { "math_id": 1, "text": "t_o = \\frac{(h_r t_{mr} + h_c t_a)}{ h_r + h_c}" }, { "math_id": 2, "text": "h_c" }, { "math_id": 3, "text": "h_r" }, { "math_id": 4, "text": "t_a" }, { "math_id": 5, "text": "t_{mr}" }, { "math_id": 6, "text": "t_o = \\frac{(t_{mr} + (t_a \\times \\sqrt{10v}))}{ 1 + \\sqrt{10v}}" }, { "math_id": 7, "text": "v" }, { "math_id": 8, "text": "t_o = \\frac{(t_a + t_{mr})}{ 2 }" } ]
https://en.wikipedia.org/wiki?curid=1551157