id | title | text | formulas | url
---|---|---|---|---
1347979 | Land-use forecasting | Projecting the distribution and intensity of trip generating activities in the urban area
Land-use forecasting undertakes to project the distribution and intensity of trip generating activities in the urban area. In practice, land-use models are demand-driven, using as inputs the aggregate information on growth produced by an aggregate economic forecasting activity. Land-use estimates are inputs to the transportation planning process.
The discussion of land-use forecasting to follow begins with a review of the Chicago Area Transportation Study (CATS) effort. CATS researchers did interesting work, but did not produce a transferable forecasting model, and researchers elsewhere worked to develop models. After reviewing the CATS work, the discussion will turn to the first model to be widely known and emulated: the Lowry model developed by Ira S. Lowry when he was working for the Pittsburgh Regional Economic Study. Second and third generation Lowry models are now available and widely used, as well as interesting features incorporated in models that are not widely used.
Today, the transportation planning activities attached to metropolitan planning organizations are the loci for the care and feeding of regional land-use models. In the US, interest in and use of models is growing rapidly, after an extended period of limited use. Interest is also substantial in Europe and elsewhere.
Even though the majority of metropolitan planning agencies in the US do not use formal land-use models, we need to understand the subject: the concepts and analytic tools shape how land-use/transportation matters are thought about and handled; there is a good bit of interest in the research community where there have been important developments; and a new generation of land-use models such as LEAM and UrbanSim has developed since the 1990s that depart from these aggregate models, and incorporate innovations in discrete choice modeling, microsimulation, dynamics, and geographic information systems.
Land-use analysis at the Chicago Area Transportation Study.
In brief, the CATS analysis of the 1950s distributed growth “by mind and hand.” The product was maps developed with a rule-based process. The rules by which land use was allocated were based on state-of-the-art knowledge and concepts, and it is hard to fault CATS on those grounds. The CATS took advantage of Colin Clark’s extensive work on the distribution of population densities around city centers. Theories of city form were available, sector and concentric circle concepts in particular. Urban ecology notions were important at the University of Chicago and the University of Michigan. Sociologists and demographers at the University of Chicago had begun their series of neighborhood surveys with an ecological flavor. Douglas Carroll, the CATS director, had studied with Amos Hawley, an urban ecologist at Michigan.
Colin Clark studied the population densities of many cities, and he found traces of a consistent form, like those in the figure: density falls off in a regular, negative-exponential fashion with distance from the city center. Historic data show how the density line has changed over the years. To project the future, one uses changes in the parameters as a function of time to project the shape of the density curve, say, 20 years out. The city spreads glacier-like. The area under the curve is given by population forecasts.
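Clark's finding is usually written as a negative-exponential density curve, rho(d) = rho0 * exp(-b*d), with central density rho0 and gradient b. The sketch below, using purely hypothetical numbers rather than CATS values, shows how a population forecast can pin down the parameters of a future curve once a flatter gradient is assumed.

```python
import numpy as np

def clark_density(d, rho0, b):
    """Clark's negative-exponential density: people per unit area at distance d from the center."""
    return rho0 * np.exp(-b * d)

def total_population(rho0, b):
    """Population under the density surface for a full circular city:
    2*pi * integral_0^inf rho0*exp(-b*d)*d dd = 2*pi*rho0 / b**2."""
    return 2.0 * np.pi * rho0 / b ** 2

# Hypothetical forecast: the region grows to 9 million and the gradient flattens to 0.12/km.
P_future, b_future = 9.0e6, 0.12
rho0_future = P_future * b_future ** 2 / (2.0 * np.pi)   # central density implied by the forecast
print(round(rho0_future), round(total_population(rho0_future, b_future)))
```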
The CATS did extensive land use and activity surveys, taking advantage of work already done by the Chicago Planning Commission. Hock's activity forecasts said which activities (and thus which land uses) would be accommodated under the density curve. Existing land-use data were arrayed in cross section. Land uses were allocated in a manner consistent with the existing pattern.
The study area was divided into transportation analysis zones: small zones where there was a lot of activity, larger zones elsewhere. The original CATS scheme reflected its Illinois State connections; zones extended well away from the city. The zones were defined to take advantage of Census data at the block and minor civil division levels, and they were also designed to be fairly homogeneous in land use and urban-ecology attributes.
The first land use forecasts at CATS arrayed developments using “by hand” techniques, as stated. We do not fault the “by hand” technique – the then state of computers and data systems forced it. It was a rule-based land use allocation. Growth was the forcing function, as were inputs from the economic study. Growth said that the population density envelope would have to shift. The land uses implied by the mix of activities were allocated from “Where is land available?” and “What is the use now?” considerations. Certain types of activities allocate easily: steel mills, warehouses, etc.
Conceptually, the allocation rules seem important. There is a lot of spatial autocorrelation in urban land uses; it is driven by historical path dependence: this sort of thing got started here and seeds more of the same. This autocorrelation was lost somewhat in the step from “by hand” to analytic models.
The CATS procedure was not viewed with favor by the emerging Urban Transportation Planning professional peer group, and in the late 1950s there was interest in the development of analytic forecasting procedures. At about the same time, similar interests emerged to meet urban redevelopment and sewer planning needs, and interest in analytic urban analysis emerged in political science, economics, and geography.
Lowry model.
Hard on the heels of the CATS work, several agencies and investigators began to explore analytic forecasting techniques, and between 1956 and the early 1960s a number of modeling techniques evolved. Irwin (1965) provides a review of the status of emerging models. One of the models, the Lowry model, was widely adopted.
Supported at first by local organizations and later by a Ford Foundation grant to the RAND Corporation, Ira S. Lowry undertook a three-year study in the Pittsburgh metropolitan area. (Work at RAND will be discussed later.) The environment was data rich, and there were good professional relationships available in the emerging emphasis on location and regional economics in the Economics Department at the University of Pittsburgh under the leadership of Edgar M. Hoover. The structure of the Lowry model is shown on the flow chart.
The flow chart gives the logic of the Lowry model. It is demand driven. First, the model responds to an increase in basic employment. It then responds to the consequent impacts on service activities. As Lowry treated his model and as the flow chart indicates, the model is solved by iteration. But the structure of the model is such that iteration is not necessary.
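A minimal sketch of a Lowry-type allocation is given below. It is not Lowry's own specification: the three zones, the multipliers, and the simple inverse-cost gravity shares are all hypothetical, chosen only to show how basic employment drives households, households drive service employment, and the loop is iterated to convergence.

```python
import numpy as np

cost = np.array([[1.0, 2.0, 3.0],         # interzonal travel costs (hypothetical)
                 [2.0, 1.0, 2.0],
                 [3.0, 2.0, 1.0]])
basic_emp = np.array([1000.0, 0.0, 0.0])  # basic (export) employment by zone, the exogenous input

alpha = 2.5    # households generated per job (hypothetical multiplier)
beta = 0.3     # service jobs generated per household (hypothetical multiplier)
gamma = 1.0    # distance-decay exponent of the gravity function

def shares(cost, gamma):
    """Row-normalized gravity shares: share[i, j] of zone i's activity allocated to zone j."""
    w = cost ** -gamma
    return w / w.sum(axis=1, keepdims=True)

A = shares(cost, gamma)
emp = basic_emp.copy()
pop = np.zeros_like(emp)
service = np.zeros_like(emp)
for _ in range(100):
    new_pop = alpha * A.T @ emp           # workers allocated to residence zones as households
    new_service = beta * A.T @ new_pop    # household demand allocated to service-employment zones
    if np.allclose(new_pop, pop) and np.allclose(new_service, service):
        break
    pop, service = new_pop, new_service
    emp = basic_emp + service             # total employment drives the next round
print(pop.round(1), service.round(1), pop.sum().round(1))
```

Because alpha times beta is less than one here, the multiplier chain converges, which is the same reason the model can also be solved without iteration.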
Although the language giving justification for the model specification is an economic language and Lowry is an economist, the model is not an economic model. Prices, markets, and the like do not enter.
A review of Lowry's publication will suggest reasons why his approach has been widely adopted. The publication was the first full elaboration of a model, data analysis and handling problems, and computations. Lowry's writing is excellent. He is candid and discusses his reasoning in a clear fashion. One can imagine an analyst elsewhere reading Lowry and thinking, “Yes, I can do that.”
The diffusion of the model as an innovation is interesting. Lowry was not involved in consulting, and his word-of-mouth contacts with transportation professionals were quite limited. His interest was and is in housing economics. Lowry did little or no “selling.” We learn that people will pay attention to good writing and an idea whose time has come.
The model makes extensive use of gravity-type functions (interaction decaying with distance). Use of “gravity model” ideas was common at the time Lowry developed his model; indeed, the idea of the gravity model was at least 100 years old at the time. It was under much refinement at the time of Lowry's work; persons such as Alan Voorhees, Mort Schneider, John Hamburg, Roger Creighton, and Walter Hansen made important contributions. (See Carrothers 1956).
The Lowry model provided a point of departure for work in a number of places. Goldner (1971) traces its impact and the modifications made. Steven Putnam at the University of Pennsylvania used it to develop PLUM (projective land use model) and I(incremental)PLUM. We estimate that Lowry derivatives are used in most MPO studies, but most of today's workers do not recognize the Lowry heritage; the derivatives are one or two steps removed from the mother logic.
Penn-Jersey model.
The P-J (Penn-Jersey, greater Philadelphia area) analysis had little impact on planning practice. However, it illustrates what planners might have done, given available knowledge building blocks. It is an introduction to some of the work by researchers who are not practicing planners.
The P-J study scoped widely for concepts and techniques. It scoped well beyond the CATS and Lowry efforts, especially taking advantage of things that had come along in the late 1950s. It was well funded and viewed by the State and the Bureau of Public Roads as a research and a practical planning effort. Its director's background was in public administration, and leading personnel were associated with the urban planning department at the University of Pennsylvania. The P-J study was planning and policy oriented.
The P-J study drew on several factors "in the air". First, there was a lot of excitement about economic activity analysis and the applied mathematics it used, at first linear programming. T. C. Koopmans, the developer of activity analysis, had worked in transportation. There was pull for transportation (and communications) applications, and the tools and interested professionals were available.
There was work on flows on networks, through nodes, and on activity location. Orden (1956) had suggested the use of conservation equations when networks involved intermediate nodes; flows from raw material sources through manufacturing plants to market were treated by Beckmann and Marschak (1955), and Goldman (1958) had treated commodity flows and the management of empty vehicles.
Maximal flow and synthesis problems were also treated (Boldreff 1955, Gomory and Hu 1962, Ford and Fulkerson 1956, Kalaba and Juncosa 1956, Pollack 1964). Balinski (1960) considered the problem of fixed cost. Finally, Cooper (1963) considered the problem of optimal location of nodes. The problem of investment in link capacity was treated by Garrison and Marble (1958) and the issue of the relationship between the length of the planning time-unit and investment decisions was raised by Quandt (1960) and Pearman (1974).
A second set of building blocks was evolving in location economics, regional science, and geography. Edgar Dunn (1954) undertook an extension of the classic von Thünen analysis of the location of rural land uses. Also, there had been a good bit of work in Europe on the interrelations of economic activity and transportation, especially during the railroad deployment era, by German and Scandinavian economists. That work was synthesized and augmented in the 1930s by August Lösch, and his "The Location of Economic Activities" was translated into English during the late 1940s. Edgar Hoover's work with the same title was also published in the late 1940s. Dunn's analysis was mainly graphical; static equilibrium was claimed by counting equations and unknowns. There was no empirical work (unlike Garrison 1958). For its time, Dunn's was a rather elegant work.
William Alonso's (1964) work soon followed. It was modeled closely on Dunn's and also was a University of Pennsylvania product. Although Alonso's book was not published until 1964, its content was fairly widely known earlier, having been the subject of papers at professional meetings and Committee on Urban Economics (CUE) seminars. Alonso's work became much more widely known than Dunn's, perhaps because it focused on “new” urban problems. It introduced the notion of bid rent and treated the question of the amount of land consumed as a function of land rent.
Wingo (1961) was also available. It was different in style and thrust from Alonso and Dunn's books and touched more on policy and planning issues. Dunn's important, but little noted, book undertook analysis of location rent, the rent referred to by Marshall as situation rent. Its key equation was:
formula_0
where:
"R" = rent per unit of land,
"Y" = yield per unit of land,
"P" = market price per unit of product,
"c" = cost of production per unit of product,
"d" = distance to market, and
"t" = unit transportation cost (per unit of product per unit of distance).
In addition, there were also demand and supply schedules.
This formulation by Dunn is very useful, for it indicates how land rent ties to transportation cost. Alonso's urban analysis starting point was similar to Dunn's, though he gave more attention to market clearing by actors bidding for space.
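A small numerical illustration of Dunn's equation (formula_0), with entirely hypothetical values, shows the linear decline of location rent with distance and the margin beyond which production is no longer profitable:

```python
# Illustrative numbers only, not from Dunn.
Y = 100.0   # yield per unit of land
P = 5.0     # market price per unit of product
c = 3.0     # production cost per unit of product
t = 0.04    # transport cost per unit of product per unit of distance

def location_rent(d):
    """Dunn's location rent: R = Y*(P - c) - Y*t*d, linear and declining in d."""
    return Y * (P - c) - Y * t * d

d_margin = (P - c) / t                                 # distance at which rent falls to zero
print(location_rent(0), location_rent(25), d_margin)   # 200.0 100.0 50.0
```

The slope of the rent line is set by the transportation rate t, which is the sense in which land rent ties directly to transportation cost.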
The question of exactly how rents tied to transportation was sharpened by those who took advantage of the duality properties of linear programming. First, there was a spatial price equilibrium perspective, as in Henderson (1957, 1958). Next, Stevens (1961) merged rent and transportation concepts in a simple, interesting paper. In addition, Stevens showed some optimality characteristics and discussed decentralized decision-making. This simple paper is worth studying for its own sake and because the model in the P-J study took the analysis into the urban area, a considerable step.
Stevens' 1961 paper used the linear programming version of the transportation problem (the assignment or "translocation of masses" problem) of Koopmans, Hitchcock, and Kantorovich. His analysis provided an explicit link between transportation and location rent. It was quite transparent, and it can be extended simply. In response to the initiation of the P-J study, Herbert and Stevens (1960) developed the core model of the P-J Study. Note that this paper was published before the 1961 paper; even so, the 1961 paper came first in Stevens' thinking.
The Herbert–Stevens model was housing centered, and the overall study had the view that the purpose of transportation investments and related policy choices was to make Philadelphia a good place to live. Similar to the 1961 Stevens paper, the model assumed that individual choices would lead to overall optimization.
The P-J region was divided into "u" small areas, recognizing "n" household groups and "m" residential bundles. Each residential bundle was defined by the house or apartment, the amenity level of the neighborhood (parks, schools, etc.), and the trip set associated with the site. There is an objective function:
formula_1
wherein xihk is the number of households in group "i" selecting residential bundle "h" in area "k". The items in brackets are bih (the budget allocated by "i" to bundle "h") and cihk, the purchase cost of bundle "h" in area "k". In short, the sum of the differences between what households are willing to pay and what they have to pay is maximized; a surplus is maximized. The equation says nothing about who gets the surplus: it is divided between households and those who supply housing in some unknown way. There is a constraint equation for each area limiting the land used for housing to the land supply available.
formula_2
where:
sih = land used per household of group "i" choosing bundle "h"
Lk = land supply in area "k"
And there is a constraint equation for each household group assuring that all households find housing.
formula_3
where:
"Ni" = number of households in group "i"
One policy variable is explicit: the land available in each area, which can be changed through zoning and land redevelopment. Another policy variable becomes explicit when we write the dual of the maximization problem, namely:
formula_4
Subject to:
formula_5
formula_6
The variables are rk (the rent in area "k") and vi, an unrestricted subsidy variable specific to each household group. Common sense says that a policy will be better for some than for others, and that is the reasoning behind the subsidy variable. The subsidy variable is also a policy variable because society may choose to subsidize housing budgets for some groups. The constraint equations may force such policy actions.
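The primal problem can be set up directly as a linear program. The sketch below is only an illustration of the structure (two groups, two bundles, two areas, with invented budgets, costs, land coefficients, and totals), not a calibrated Herbert–Stevens application; the dual values on the land constraints play the role of the area rents rk.

```python
import numpy as np
from scipy.optimize import linprog

n, m, u = 2, 2, 2                        # household groups i, bundles h, areas k (hypothetical)
b = np.array([[10.0, 8.0],               # b[i, h]: budget group i allocates to bundle h
              [6.0, 7.0]])
c = np.array([[[7.0, 9.0], [5.0, 6.0]],  # c[i, h, k]: cost of bundle h in area k for group i
              [[4.0, 5.0], [4.0, 6.0]]])
s = np.array([[1.0, 2.0],                # s[i, h]: land used per household of group i in bundle h
              [1.0, 1.5]])
L = np.array([800.0, 1200.0])            # L[k]: land available in area k
N = np.array([400.0, 500.0])             # N[i]: households in group i

def idx(i, h, k):                        # position of x[i, h, k] in the flattened variable vector
    return (i * m + h) * u + k

obj = -(b[:, :, None] - c).ravel()       # linprog minimizes, so negate the surplus b - c
A_ub = np.zeros((u, n * m * u))          # land constraints, one per area
A_eq = np.zeros((n, n * m * u))          # housing constraints, one per household group
for i in range(n):
    for h in range(m):
        for k in range(u):
            A_ub[k, idx(i, h, k)] = s[i, h]
            A_eq[i, idx(i, h, k)] = 1.0

res = linprog(obj, A_ub=A_ub, b_ub=L, A_eq=A_eq, b_eq=N,
              bounds=(0, None), method="highs")
x = res.x.reshape(n, m, u)               # households of group i in bundle h in area k
print(x.round(1))
print(-res.fun)                          # maximized aggregate surplus
print(res.ineqlin.marginals)             # duals of the land constraints (area "rents", up to sign)
```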
It is apparent that the Herbert–Stevens scheme is a very interesting one. It is also apparent that it is housing centered, and the tie to transportation planning is weak. That question is answered when we examine the overall scheme for the study, the flow chart of a single iteration of the model; how the scheme works requires little study. The chart does not say much about transportation. Changes in the transportation system are displayed on the chart as if they are a policy matter.
The word “simulate” appears in boxes five, eight, and nine. The P-J modelers would say, “We are making choices about transportation improvements by examining the ways improvements work their way through urban development. The measure of merit is the economic surplus created in housing.”
Academics paid attention to the P-J study. The Committee on Urban Economics was active at the time. The committee was funded by the Ford Foundation to assist in the development of the nascent urban economics field. It often met in Philadelphia for review of the P-J work. Stevens and Herbert were less involved as the study went along. Harris gave intellectual leadership, and he published a fair amount about the study (1961, 1962). However, the P-J influence on planning practice was nil. The study didn’t put transportation up front. There were unsolvable data problems. Much was promised but never delivered. The Lowry model was already available.
Kain model.
About 1960, the Ford Foundation made a grant to the RAND Corporation to support work on urban transportation problems. (Lowry's work was supported in part by that grant.) The work was housed in the logistics division of RAND, where RAND's economists were housed. The head of that division was then Charles Zwick, who had worked on transportation topics previously.
The RAND work ranged from new technology and the cost of tunneling to urban planning models and analyses with policy implications. Some of the researchers at RAND were regular employees. Most, however, were imported for short periods of time. The work was published in several formats: first in the RAND P series and RM series and then in professional publications or in book form. Often, a single piece of work is available in differing forms at different places in the literature.
In spite of the diversity of topics and styles of work, one theme runs through the RAND work – the search for economic policy guides. We see that theme in Kain (1962), which is discussed by de Neufville and Stafford, and the figure is adapted from their book.
Kain's model dealt with direct and indirect effects. Suppose income increases. The increase has a direct effect on travel time and indirect effects through the use of land, auto ownership, and choice of mode. Work supported at RAND also resulted in Meyer, Kain and Wohl (1964). These parts of the work at RAND had considerable influence on subsequent analysis (but not so much on practice as on policy). John Meyer became President of the National Bureau of Economic Research and worked to refocus its lines of work. Urban analysis Kain-style formed the core of a several-year effort and yielded book-length publications (see, e.g., G. Ingram, et al., The NBER Urban Simulation Model, Columbia Univ. Press, 1972). After serving in the Air Force, Kain moved to Harvard, first to redirect the Urban Planning Department. After a time, he relocated to the Kennedy School, and he, along with José A. Gómez-Ibáñez, John Meyer, and G. Ingram, led much work in an economic-policy analysis style. Martin Wohl moved on from RAND, eventually, to Carnegie-Mellon University, where he continued his style of work (e.g. Wohl 1984).
Policy-oriented gaming.
The notion that the impact of policy on urban development might be simulated was the theme of a conference at Cornell in the early 1960s; collegia were formed, and several streams of work emerged. Several persons developed rather simple (from today's view) simulation games. Land-use development was the outcome of gravitational-type forces, and the issue faced was that of conflicts between developers and planners when planners intervened in growth. CLUG and METROPOLIS are two rather well-known products from this stream of work (they were the SimCity of their day); there must be twenty or thirty other similar planner-versus-developer-in-a-political-context games. There seems to have been little serious attempt to analyze the use of these games for policy formulation and decision-making, except for work at the firm Environmetrics.
Peter House, one of the Cornell conference veterans, established Environmetrics early in the 1960s. It, too, started with relatively simple gaming ideas. Over about a ten-year period, the comprehensiveness of the gaming devices was gradually improved and, unlike the other gaming approaches, transportation played a role in their formulation. Environmetrics' work moved into the Environmental Protection Agency and was continued for a time at the EPA Washington Environmental Studies Center.
A model known as River Basin was generalized to GEM (general environmental assessment model) and then birthed SEAS (strategic environmental assessment model) and SOS (Son of SEAS). There was quite a bit of development as the models were generalized, too much to be discussed here.
The most interesting thing to be noted is the change in the way use of the models evolved. Use shifted from a “playing games” stance to an “evaluate the impact of federal policy” stance. The model (both equations and data) is viewed as a generalized city or cities. It responds to the question: What would be the impact of proposed policies on cities?
An example of generalized question answering is LaBelle and Moses (1983), who implemented the UTP process on typical cities to assess the impact of several policies. There is no mystery why this approach was used: House had moved from the EPA to the DOE, and the study was prepared for his office.
University of North Carolina.
A group at Chapel Hill, mainly under the leadership of Stuart Chapin, began its work with simple analysis devices somewhat similar to those used in games. Results include Chapin (1965), Chapin and H. C. Hightower (1966) and Chapin and Weiss (1968). That group subsequently focused on (1) the ways in which individuals make tradeoffs in selecting residential property, (2) the roles of developers and developer decisions in the urban development process, and (3) information about choices obtained from survey research. Lansing and Muller (1964 and 1967) at the Survey Research Center worked in cooperation with the Chapel Hill Group in developing some of this latter information.
The first work was on simple, probabilistic growth models. It quickly moved from this style to game-like interviews to investigate preferences for housing. Persons interviewed would be given “money” and a set of housing attributes – sidewalks, garage, numbers of rooms, lot size, etc. How do they spend their money? This is an early version of the game The Sims. The work also began to examine developer behavior, as mentioned. (See: Kaiser 1972).
Reviews and surveys.
In addition to reviews at CUE meetings and sessions at professional meetings, there have been a number of organized efforts to review progress in land-use modeling. An early effort was the May 1965 issue of the Journal of the American Institute of Planners edited by B. Harris. The next major effort was a Highway Research Board conference in June 1967 (HRB 1968), and this was most constructive. This reference contains a review paper by Lowry and comments by Chapin, Alonso, and others. Of special interest is Appendix A, which listed several ways that analysis devices had been adapted for use. Robinson (1972) gives the flavor of urban-redevelopment-oriented modeling. And there have been critical reviews (e.g. Brewer 1973, Lee 1974). Pack (1978) addresses agency practice; it reviews four models and a number of case studies of applications. (See also Zettel and Carll 1962 and Pack and Pack 1977). The discussion above has been limited to models that most affected practice (Lowry) and theory (P-J, etc.); there are a dozen more noted in the reviews. Several of those deal with retail and industry location, and several were oriented to urban redevelopment projects where transportation was not at issue.
Discussion.
Lowry-derived land-use analysis tools reside in the MPOs. The MPOs also have a considerable data capability, including census tapes and programs, land-use information of varied quality, and survey experience and survey-based data. Although large-model work continues, fine-detail analysis dominates agency and consultant work in the US. One reason is the requirement for environmental impact statements. Energy, noise, and air pollution have been of concern, and techniques special to the analysis of these topics have been developed. Recently, interest has increased in the use of developer fees and/or other transportation-related actions by developers. Perceived shortages of funds for highways and transit are one motive for extracting resources or actions from developers; there is also the long-standing ethic that those who occasion costs should pay. Finally, there is a small amount of theoretical or academic work. Small is the operative word: there are few researchers and the literature is limited.
The discussion to follow will first emphasize the latter, theory-oriented work. It will then turn to a renewed interest in planning models in the international arena. Modern behavioral, academic, or theory-based analysis of transportation and land use dates from about 1965. By modern we mean analysis that derives aggregate results from micro behavior. The first models were Herbert-Stevens in character, similar in structure to the P-J model.
There have been three major developments subsequently; they are taken up in the paragraphs that follow.
The Herbert-Stevens model was not a behavioral model in the sense that it did not try to map from micro to macro behavior. It did assume rational, maximizing behavior by locators, but that was attached to macro behavior and policy by assuming some centralized authority that provided subsidies. Wheaton (1974) and Anderson (1982) modified the Herbert-Stevens approach in different, but fairly simple, ways to deal with the artificiality of the Herbert-Stevens formulation.
An alternative to the P-J, Herbert-Stevens tradition was seeded when Edwin S. Mills, who is known as the father of modern urban economics, took on the problem of scoping more widely. Beginning with Mills (1972), he developed a line of work yielding further publications and follow-on work by others, especially his students.
Using a Manhattan geometry, Mills incorporated a transportation component in his analysis. Homogeneous zones defined by the transportation system were analyzed as positioned an integer number of steps away from the central zone via the Manhattan geometry. Mills treated congestion by assigning integer measures to levels of service, and he considered the costs of increasing capacity. To organize flows, Mills assumed a single export facility in the central node. He allowed capital–land rent trade-offs, yielding the tallest buildings in the central zones.
Stated in a rather long but readily understandable linear programming format, Mills' system minimizes land, capital, labor, and congestion costs, subject to a series of constraints on the quantities affecting the system. One set of these is the exogenously given vector of export levels. Mills (1974a,b) permitted exports from non-central zones, and other modifications shifted the ways congestion is measured and allowed for more than one mode of transport.
With respect to activities, Mills introduced an input-output type coefficient for activities: aqrs denotes land input q per unit of output r using production technique s. T.J. Kim (1979) has followed the Mills tradition through the addition of articulating sectors. The work briefly reviewed above adheres to a closed-form, comparative-statics manner of thinking. This note will now turn to dynamics.
The literature gives rather varied statements on what consideration of dynamics means. Most often, there is the comment that time is considered in an explicit fashion, and analysis becomes dynamic when results are run out over time. In that sense, the P-J model was a dynamic model. Sometimes, dynamics are operationalized by allowing things that were assumed static to change with time. Capital gets attention. Most of the models of the type discussed previously assume that capital is malleable, and one considers dynamics if capital is taken as durable yet subject to ageing – e.g., a building once built stays there but gets older and less effective. On the people side, intra-urban migration is considered. Sometimes too, there is an information context. Models assume perfect information and foresight. Let's relax that assumption.
Anas (1978) is an example of a paper that is “dynamic” because it considers durable capital and limited information about the future. Residents were mobile; some housing stock was durable (outlying), but central city housing stock was subject to obsolescence and abandonment.
Persons working in other traditions tend to emphasize feedbacks and stability (or the lack of stability) when they think “dynamics,” and there is some literature reflecting those modes of thought. The best known is Forrester (1968), which set off an enormous amount of critique and some thoughtful follow-on extensions (e.g., Chen (ed.), 1972).
Robert Crosby in the University Research Office of the US DOT was very much interested in the applications of dynamics to urban analysis, and when the DOT program was active some work was sponsored (Kahn (ed) 1981). The funding for that work ended, and we doubt if any new work was seeded.
The analyses discussed use land rent ideas. The direct relation between transportation and land rent is assumed, e.g., as per Stevens. There is some work that takes a less simple view of land rent. An interesting example is Thrall (1987), which introduces a consumption theory of land rent that includes income effects; utility is broadly considered. Thrall manages both to simplify the analytic treatment, making the theory readily accessible, and to develop insights about policy and transportation.
Future land use can be forecasted by using methods of Artificial Intelligence and discrete mathematics (Papadimitriou, 2012). | [
{
"math_id": 0,
"text": "\nR = Y\\left( {P - c} \\right) - Ytd\n"
},
{
"math_id": 1,
"text": "\n\\max Z = \\sum_{k = 1}^u {\\sum_{i = 1}^n {\\sum_{h = 1}^m {x_{ih}^k \\left( {b_{ih} - c_{ih}^k } \\right)} } } \\quad x_{ih}^k \\geq 0\n"
},
{
"math_id": 2,
"text": "\n\\sum_{i = 1}^n {\\sum_{h = 1}^m {s_{ih} x_{ih}^k } } \\leq L^k \n"
},
{
"math_id": 3,
"text": "\n\\sum_{k = 1}^u {\\sum_{h = 1}^m {x_{ih}^k } } = N_i \n"
},
{
"math_id": 4,
"text": "\n\\min Z' = \\sum_{k = 1}^u {r^k L^k + \\sum_{i = 1}^n {v_i \\left( { - N_i } \\right)} } \n"
},
{
"math_id": 5,
"text": "\n s_{ih} r^k - v_i \\geq b_{ih} - c_{ih}^k\n"
},
{
"math_id": 6,
"text": "\n r^k \\geq 0 \n"
}
]
| https://en.wikipedia.org/wiki?curid=1347979 |
13480113 | Stein–Strömberg theorem | In mathematics, the Stein–Strömberg theorem or Stein–Strömberg inequality is a result in measure theory concerning the Hardy–Littlewood maximal operator. The result is foundational in the study of the problem of differentiation of integrals. The result is named after the mathematicians Elias M. Stein and Jan-Olov Strömberg.
Statement of the theorem.
Let "λ""n" denote "n"-dimensional Lebesgue measure on "n"-dimensional Euclidean space R"n" and let "M" denote the Hardy–Littlewood maximal operator: for a function "f" : R"n" → R, "Mf" : R"n" → R is defined by
formula_0
where "B""r"("x") denotes the open ball of radius "r" with center "x". Then, for each "p" > 1, there is a constant "C""p" > 0 such that, for all natural numbers "n" and functions "f" ∈ "L""p"(R"n"; R),
formula_1
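A crude numerical illustration of the maximal operator (in one dimension, on a grid, with a finite set of radii and naive boundary handling) might look as follows; it is only meant to make the definition concrete, not to compute the dimension-free constants at issue in the theorem.

```python
import numpy as np

def hl_maximal_1d(f, dx, radii):
    """Approximate Mf(x) = sup_r (average of |f| over [x - r, x + r]) on a uniform grid,
    taking the supremum over the finite list of radii only."""
    absf = np.abs(f)
    out = np.zeros_like(absf)
    for r in radii:
        w = max(int(round(r / dx)), 1)                  # half-window in grid points
        kernel = np.ones(2 * w + 1) / (2 * w + 1)
        avg = np.convolve(absf, kernel, mode="same")    # centered moving average (zero-padded ends)
        out = np.maximum(out, avg)
    return out

dx = 0.01
x = np.arange(-5.0, 5.0, dx)
f = np.exp(-x ** 2)                                     # a smooth bump
Mf = hl_maximal_1d(f, dx, radii=[0.1, 0.5, 1.0, 2.0, 5.0])

p = 2.0
norm = lambda g: (np.sum(np.abs(g) ** p) * dx) ** (1.0 / p)
print(float(Mf.max()), norm(Mf) / norm(f))   # a finite L^p ratio, in line with the strong (p, p) bound
```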
In general, a maximal operator "M" is said to be of strong type ("p", "p") if
formula_2
for all "f" ∈ "L""p"(R"n"; R). Thus, the Stein–Strömberg theorem is the statement that the Hardy–Littlewood maximal operator is of strong type ("p", "p") uniformly with respect to the dimension "n". | [
{
"math_id": 0,
"text": "Mf(x) = \\sup_{r > 0} \\frac1{\\lambda^{n} \\big( B_{r} (x) \\big)} \\int_{B_{r} (x)} | f(y) | \\, \\mathrm{d} \\lambda^{n} (y),"
},
{
"math_id": 1,
"text": "\\| Mf \\|_{L^{p}} \\leq C_{p} \\| f \\|_{L^{p}}."
},
{
"math_id": 2,
"text": "\\| Mf \\|_{L^{p}} \\leq C_{p, n} \\| f \\|_{L^{p}}"
}
]
| https://en.wikipedia.org/wiki?curid=13480113 |
13480124 | Spinodal decomposition | Mechanism of spontaneous phase separation
Spinodal decomposition is a mechanism by which a single thermodynamic phase spontaneously separates into two phases (without nucleation). Decomposition occurs when there is no thermodynamic barrier to phase separation. As a result, phase separation via decomposition does not require the nucleation events resulting from thermodynamic fluctuations, which normally trigger phase separation.
Spinodal decomposition is observed when mixtures of metals or polymers separate into two co-existing phases, each rich in one species and poor in the other. When the two phases emerge in approximately equal proportion (each occupying about the same volume or area), characteristic intertwined structures are formed that gradually coarsen (see animation). The dynamics of spinodal decomposition is commonly modeled using the Cahn–Hilliard equation.
Spinodal decomposition is fundamentally different from nucleation and growth. When there is a nucleation barrier to the formation of a second phase, time is taken by the system to overcome that barrier. As there is no barrier (by definition) to spinodal decomposition, some fluctuations (in the order parameter that characterizes the phase) start growing instantly. Furthermore, in spinodal decomposition, the two distinct phases start growing in any location uniformly throughout the volume, whereas a nucleated phase change begins at a discrete number of points.
Spinodal decomposition occurs when a homogenous phase becomes thermodynamically unstable. An unstable phase lies at a maximum in free energy. In contrast, nucleation and growth occur when a homogenous phase becomes metastable. That is, another biphasic system becomes lower in free energy, but the homogenous phase remains at a local minimum in free energy, and so is resistant to small fluctuations. J. Willard Gibbs described two criteria for a metastable phase: that it must remain stable against a small change over a large area.
History.
In the early 1940s, Bradley reported the observation of sidebands around the Bragg peaks in the X-ray diffraction pattern of a Cu-Ni-Fe alloy that had been quenched and then annealed inside the miscibility gap. Further observations on the same alloy were made by Daniel and Lipson, who demonstrated that the sidebands could be explained by a periodic modulation of composition in the <100> directions. From the spacing of the sidebands, they were able to determine the wavelength of the modulation, which was of the order of 100 angstroms (10 nm).
The growth of a composition modulation in an initially homogeneous alloy implies uphill diffusion or a negative diffusion coefficient. Becker and Dehlinger had already predicted a negative diffusivity inside the spinodal region of a binary system, but their treatments could not account for the growth of a modulation of a particular wavelength, such as was observed in the Cu-Ni-Fe alloy. In fact, any model based on Fick's law yields a physically unacceptable solution when the diffusion coefficient is negative.
The first explanation of the periodicity was given by Mats Hillert in his 1955 Doctoral Dissertation at MIT. Starting with a regular solution model, he derived a flux equation for one-dimensional diffusion on a discrete lattice. This equation differed from the usual one by the inclusion of a term, which allowed for the effect of the interfacial energy on the driving force of adjacent interatomic planes that differed in composition. Hillert solved the flux equation numerically and found that inside the spinodal it yielded a periodic variation of composition with distance. Furthermore, the wavelength of the modulation was of the same order as that observed in the Cu-Ni-Fe alloys.
Building on Hillert's work, a more flexible continuum model was subsequently developed by John W. Cahn and John Hilliard, who included the effects of coherency strains as well as the gradient energy term. The strains are significant in that they dictate the ultimate morphology of the decomposition in anisotropic materials.
Cahn–Hilliard model for spinodal decomposition.
Free energies in the presence of small amplitude fluctuations, e.g. in concentration, can be evaluated using an approximation introduced by Ginzburg and Landau to describe magnetic field gradients in superconductors. This approach allows one to approximate the free energy as an expansion in terms of the concentration gradient formula_0, a vector. Since free energy is a scalar and we are probing near its minima, the term proportional to formula_0 is negligible. The lowest order term is the quadratic expression formula_1, a scalar. Here formula_2 is a parameter that controls the free energy cost of variations in concentration formula_3.
The Cahn–Hilliard free energy is then
formula_4
where formula_5 is the bulk free energy per unit volume of the homogeneous solution, and the integral is over the volume of the system.
We now want to study the stability of the system with respect to small fluctuations in the concentration formula_3, for example a sine wave of amplitude formula_6 and wavevector formula_7, for formula_8 the wavelength of the concentration wave. To be thermodynamically stable, the free energy change formula_9 due to any small amplitude concentration fluctuation formula_10, must be positive.
We may expand formula_5 about the average composition co as follows:
formula_11
and for the perturbation formula_10 the free energy change is
formula_12
When this is integrated over the volume formula_13, the formula_14 gives zero, while formula_15 and formula_16 integrate to give formula_17. So, then
formula_18
As formula_19, thermodynamic stability requires that the term in brackets be positive. The formula_20 is always positive but tends to zero at small wavevectors, large wavelengths. Since we are interested in macroscopic fluctuations, formula_21, stability requires that the second derivative of the free energy be positive. When it is, there is no spinodal decomposition, but when it is negative, spinodal decomposition will occur. Then fluctuations with wavevectors formula_22 become spontaneously unstable, where the critical wave number formula_23 is given by:
formula_24
which corresponds to fluctuations with wavelengths above a critical wavelength
formula_25
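As a quick numerical check of these expressions (in arbitrary, dimensionless units rather than values for any real material):

```python
import numpy as np

kappa = 1.0        # gradient-energy coefficient (arbitrary units)
f_cc = -2.0        # second derivative of the bulk free energy at c0; negative inside the spinodal

q_c = np.sqrt(-f_cc / (2.0 * kappa))                     # critical wavenumber
lambda_c = np.sqrt(-8.0 * np.pi ** 2 * kappa / f_cc)     # critical wavelength
print(q_c, lambda_c, np.isclose(lambda_c, 2.0 * np.pi / q_c))   # the two forms agree
```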
Dynamics of spinodal decomposition when molecules move via diffusion.
Spinodal decomposition can be modeled using a generalized diffusion equation:
formula_26
for formula_27 the chemical potential and formula_28 the mobility. As pointed out by Cahn, this equation can be considered as a phenomenological definition of the mobility M, which must by definition be positive.
It consists of the ratio of the flux to the local gradient in chemical potential. The chemical potential is a variational derivative of the free energy; when the free energy is the Cahn–Hilliard free energy, this is
formula_29
and so
formula_30
and now we want to see what happens to a small concentration fluctuation formula_31 - note that it now has a time dependence as well as a wavevector dependence. Here formula_32 is a growth rate. If formula_33 then the perturbation shrinks to nothing, the system is stable with respect to small perturbations or fluctuations, and there is no spinodal decomposition. However, if formula_34 then the perturbation grows and the system is unstable with respect to small perturbations or fluctuations: there is spinodal decomposition.
Substituting in this concentration fluctuation, we get
formula_35
This gives the same expressions for the stability as above, but it also gives an expression for the growth rate of concentration perturbations
formula_36
which has a maximum at a wavevector
formula_37
So, at least at the beginning of spinodal decomposition, we expect the growing concentrations to mostly have this wavevector.
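The dispersion relation for the growth rate can be evaluated directly; the sketch below uses arbitrary dimensionless parameters and simply confirms that the fastest-growing wavevector sits at q_c divided by the square root of two, while perturbations with q above q_c decay:

```python
import numpy as np

M, kappa, f_cc = 1.0, 1.0, -2.0           # mobility, gradient coefficient, d2f/dc2 (arbitrary units)

def omega(q):
    """Growth rate of a fluctuation of wavevector q: omega = M q^2 (-f_cc - 2 kappa q^2)."""
    return M * q ** 2 * (-f_cc - 2.0 * kappa * q ** 2)

q_c = np.sqrt(-f_cc / (2.0 * kappa))      # growth rate changes sign here
q_max = np.sqrt(-f_cc / (4.0 * kappa))    # fastest-growing wavevector, equal to q_c / sqrt(2)

q = np.linspace(0.0, 1.5 * q_c, 601)
print(q[np.argmax(omega(q))], q_max)      # numerical maximum agrees with the formula
print(omega(1.1 * q_c) < 0 < omega(q_max))
```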
Phase diagram.
This type of phase transformation is known as spinodal decomposition and can be illustrated on a phase diagram exhibiting a miscibility gap. Thus, phase separation occurs whenever a material transitions into the unstable region of the phase diagram. The boundary of the two-phase region, sometimes referred to as the binodal or coexistence curve, is found by performing a common tangent construction on the free-energy diagram. Inside the binodal is a region called the spinodal, which is found by determining where the curvature of the free-energy curve is negative. The binodal and spinodal meet at the critical point. It is when a material is moved into the spinodal region of the phase diagram that spinodal decomposition can occur.
The free energy curve is plotted as a function of composition for a temperature below the consolute temperature, T. Equilibrium phase compositions are those corresponding to the free energy minima. Regions of negative curvature (∂2f/∂c2 < 0) lie within the inflection points of the curve (∂2f/∂c2 = 0), which are called the spinodes. Their locus as a function of temperature defines the spinodal curve. For compositions within the spinodal, a homogeneous solution is unstable against infinitesimal fluctuations in density or composition, and there is no thermodynamic barrier to the growth of a new phase. Thus, the spinodal represents the limit of physical and chemical stability.
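For a concrete (though idealized) illustration, a regular-solution free energy f(c) = RT[c ln c + (1 - c) ln(1 - c)] + Omega*c*(1 - c) gives the spinodal in closed form from ∂2f/∂c2 = 0; the interaction parameter below is an arbitrary choice, not a fit to any particular material:

```python
import numpy as np

# Spinodal of a regular-solution model: d2f/dc2 = R*T/(c*(1-c)) - 2*Omega = 0
# gives T_spinodal(c) = 2*Omega*c*(1-c)/R, a dome peaking at c = 0.5.
R = 8.314          # J/(mol K)
Omega = 20000.0    # regular-solution interaction parameter, J/mol (hypothetical)

c = np.linspace(0.02, 0.98, 97)
T_spinodal = 2.0 * Omega * c * (1.0 - c) / R
T_critical = Omega / (2.0 * R)          # top of the spinodal dome, at c = 0.5
print(round(T_critical, 1), round(float(T_spinodal.max()), 1))   # both about 1202.8 K
```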
To reach the spinodal region of the phase diagram, a transition must take the material through the binodal region or the critical point. Often phase separation will occur via nucleation during this transition, and spinodal decomposition will not be observed. To observe spinodal decomposition, a very fast transition, often called a "quench", is required to move from the stable to the spinodal unstable region of the phase diagram.
In some systems, ordering of the material leads to a compositional instability and this is known as a "conditional spinodal", e.g. in the feldspars.
Coherency strains.
For most crystalline solid solutions, there is a variation of lattice parameters with the composition. If the lattice of such a solution is to remain coherent in the presence of a composition modulation, mechanical work has to be done to strain the rigid lattice structure. The maintenance of coherency thus affects the driving force for diffusion.
Consider a crystalline solid containing a one-dimensional composition modulation along the x-direction. We calculate the elastic strain energy for a cubic crystal by estimating the work required to deform a slice of material so that it can be added coherently to an existing slab of cross-sectional area A. We will assume that the composition modulation is along the x' direction and, as indicated, a prime will be used to distinguish the reference axes from the standard axes of a cubic system (that is, along the <100>).
Let the lattice spacing in the plane of the slab be "ao" and that of the undeformed slice "a". If the slice is to be coherent after the addition of the slab, it must be subjected to a strain ε in the " z' " and " y' " directions which is given by:
formula_38
In the first step, the slice is deformed hydrostatically in order to produce the required strains to the " z' " and " y' " directions. We use the linear compressibility of a cubic system 1 / ( c11 + 2 c12 ) where the c's are the elastic constants. The stresses required to produce a hydrostatic strain of δ are therefore given by:
formula_39
The elastic work per unit volume is given by:
formula_40
where the ε's are the strains. The work performed per unit volume of the slice during the first step is therefore given by:
formula_41
In the second step, the sides of the slice parallel to the x' direction are clamped and the stress in this direction is relaxed reversibly. Thus, εz' = εy' = 0. The result is that:
formula_42
The net work performed on the slice in order to achieve coherency is given by:
formula_43
or
formula_44
The final step is to express c1'1' in terms of the constants referred to the standard axes. From the rotation of axes, we obtain the following:
formula_45
where l, m, n are the direction cosines of the x' axis and, therefore the direction cosines of the composition modulation. Combining these, we obtain the following:
formula_46
formula_47
The existence of any shear strain has not been accounted for. Cahn considered this problem, and concluded that shear would be absent for modulations along <100>, <110>, <111> and that for other directions the effect of shear strains would be small. It then follows that the total elastic strain energy of a slab of cross-sectional area A is given by:
formula_48
We next have to relate the strain δ to the composition variation. Let ao be the lattice parameter of the unstrained solid of the average composition co. Using a Taylor series expansion about co yields the following:
formula_49
in which
formula_50
where the derivatives are evaluated at co. Thus, neglecting higher-order terms, we have:
formula_51
Substituting, we obtain:
formula_52
This simple result indicates that the strain energy of a composition modulation depends only on the amplitude and is independent of the wavelength. For a given amplitude, the strain energy WE is proportional to Y. Consider a few special cases.
For an isotropic material:
formula_53
so that:
formula_54
This equation can also be written in terms of Young's modulus E and Poisson's ratio υ using the standard relationships:
formula_55
formula_56
Substituting, we obtain the following:
formula_57
For most metals, the left-hand side of this equation
formula_58
is positive, so that the elastic energy will be a minimum for those directions that minimize the term: l2m2 + m2n2 + l2n2. By inspection, those are seen to be <100>. For this case:
formula_59
the same as for an isotropic material. At least one metal (molybdenum) has an anisotropy of the opposite sign. In this case, the directions for minimum WE will be those that maximize the directional cosine function. These directions are <111>, and
formula_60
As we will see, the growth rate of the modulations will be a maximum in the directions that minimize Y. These directions, therefore, determine the morphology and structural characteristics of the decomposition in cubic solid solutions.
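The directional dependence of Y is easy to evaluate numerically from the expression above. In the sketch below the elastic constants are merely representative of a cubic metal with positive anisotropy (roughly copper-like), not data for a specific alloy; the last line checks that the <100> value reduces to the isotropic-style result c11 + c12 - 2c12^2/c11, as stated in the text.

```python
import numpy as np

def Y_cubic(c11, c12, c44, direction):
    """Elastic factor Y (with W_E = Y*eps^2) for a composition modulation along `direction`
    in a cubic crystal, using the direction-cosine expression given above."""
    l, m, n = np.asarray(direction, float) / np.linalg.norm(direction)
    H = l*l*m*m + m*m*n*n + l*l*n*n                      # direction-cosine function
    denom = c11 + 2.0 * (2.0*c44 - c11 + c12) * H
    return 0.5 * (c11 + 2.0*c12) * (3.0 - (c11 + 2.0*c12) / denom)

c11, c12, c44 = 168.0, 121.0, 75.0                        # GPa, roughly copper-like (illustrative)
for d in [(1, 0, 0), (1, 1, 0), (1, 1, 1)]:
    print(d, round(Y_cubic(c11, c12, c44, d), 1))         # Y is smallest along <100> here
print(round(c11 + c12 - 2.0 * c12**2 / c11, 1))           # matches the <100> value
```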
Rewriting the diffusion equation and including the term derived for the elastic energy yields the following:
formula_61
or
formula_62
which can alternatively be written in terms of the diffusion coefficient D as:
formula_63
The simplest way of solving this equation is by using the method of Fourier transforms.
Fourier transform.
The motivation for the Fourier transformation comes from the study of a Fourier series. In the study of a Fourier series, complicated periodic functions are written as the sum of simple waves mathematically represented by sines and cosines. Due to the properties of sine and cosine, it is possible to recover the amount of each wave in the sum by an integral. In many cases it is desirable to use Euler's formula, which states that "e"2"πiθ" = cos 2"πθ" + "i" sin 2"πθ", to write Fourier series in terms of the basic waves "e"2"πiθ", with the distinct advantage of simplifying many unwieldy formulas.
The passage from sines and cosines to complex exponentials makes it necessary for the Fourier coefficients to be complex-valued. The usual interpretation of this complex number is that it gives both the amplitude (or size) of the wave present in the function and the phase (or the initial angle) of the wave. This passage also introduces the need for negative "frequencies". (E.G. If θ were measured in seconds then the waves "e"2"πiθ" and "e"−2"πiθ" would both complete one cycle per second—but they represent different frequencies in the Fourier transform. Hence, frequency no longer measures the number of cycles per unit time, but is closely related.)
If A(β) is the amplitude of a Fourier component of wavelength λ and wavenumber β = 2π/λ the spatial variation in composition can be expressed by the Fourier integral:
formula_64
in which the coefficients are defined by the inverse relationship:
formula_65
Substituting, we obtain on equating coefficients:
formula_66
This is an ordinary differential equation that has the solution:
formula_67
in which "A(β)" is the initial amplitude of the Fourier component of wave wavenumber β and "R(β)" defined by:
formula_68
or, expressed in terms of the diffusion coefficient D:
formula_69
In a similar manner, the new diffusion equation:
formula_70
has a simple sine wave solution given by:
formula_71
where formula_72 is obtained by substituting this solution back into the diffusion equation as follows:
formula_73
For solids, the elastic strains resulting from coherency add terms to the amplification factor formula_72 as follows:
formula_74
where, for isotropic solids:
formula_75,
where E is Young's modulus of elasticity, ν is Poisson's ratio, and η is the linear strain per unit composition difference. For anisotropic solids, the elastic term depends on the direction in a manner that can be predicted by elastic constants and how the lattice parameters vary with composition. For the cubic case, Y is a minimum for either (100) or (111) directions, depending only on the sign of the elastic anisotropy.
Thus, by describing any composition fluctuation in terms of its Fourier components, Cahn showed that a solution would be unstable with respect to sinusoidal fluctuations longer than a critical wavelength. By relating the elastic strain energy to the amplitudes of such fluctuations, he formalized the wavelength or frequency dependence of the growth of such fluctuations, and thus introduced the principle of selective amplification of Fourier components of certain wavelengths. The treatment yields the expected mean particle size or wavelength of the most rapidly growing fluctuation.
Thus, the amplitude of composition fluctuations should grow continuously until a metastable equilibrium is reached with preferential amplification of components of particular wavelengths. The kinetic amplification factor "R" is negative when the solution is stable to the fluctuation, zero at the critical wavelength, and positive for longer wavelengths—exhibiting a maximum at exactly formula_76 times the critical wavelength.
Consider a homogeneous solution within the spinodal. It will initially have a certain amount of fluctuation from the average composition which may be written as a Fourier integral. Each Fourier component of that fluctuation will grow or diminish according to its wavelength.
Because of the maximum in "R" as a function of wavelength, those components of the fluctuation with formula_76 times the critical wavelength will grow fastest and will dominate. This "principle of selective amplification" depends on the initial presence of these wavelengths but does not critically depend on their exact amplitude relative to other wavelengths (if the time is large compared with 1/R). It does not depend on any additional assumptions, since different wavelengths can coexist and do not interfere with one another.
Limitations of this theory would appear to arise from this assumption and the absence of an expression formulated to account for irreversible processes during phase separation which may be associated with internal friction and entropy production. In practice, frictional damping is generally present and some of the energy is transformed into thermal energy. Thus, the amplitude and intensity of a one-dimensional wave decrease with distance from the source, and for a three-dimensional wave, the decrease will be greater.
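The selective amplification can be demonstrated directly by growing each Fourier component of an initial fluctuation at its own rate R(β). The one-dimensional sketch below uses arbitrary units, a flat initial spectrum with random phases, and the linearized theory only (it ignores nonlinear effects and the damping mentioned above), so it illustrates the principle rather than simulating a real system.

```python
import numpy as np

rng = np.random.default_rng(0)
N, L = 1024, 256.0
beta = 2.0 * np.pi * np.fft.rfftfreq(N, d=L / N)   # wavenumbers of the Fourier components

D, K = -1.0, 1.0                                   # D < 0 inside the spinodal; K = 2*M*kappa
R = -D * beta ** 2 - K * beta ** 4                 # kinetic amplification factor R(beta)

A0 = 1e-3 * np.exp(2j * np.pi * rng.random(beta.size))   # equal-amplitude components, random phases
A0[0] = 0.0                                              # keep the mean composition fixed
t = 15.0
A_t = A0 * np.exp(R * t)                           # each component grows or decays as exp(R t)
u_t = np.fft.irfft(A_t, n=N)                       # composition fluctuation in real space

beta_c = np.sqrt(-D / K)                           # critical wavenumber, where R = 0
beta_dominant = beta[np.argmax(np.abs(A_t))]
print(beta_c / np.sqrt(2.0), beta_dominant)        # dominant wavenumber is close to beta_c/sqrt(2)
print(2.0 * np.pi / beta_dominant / (2.0 * np.pi / beta_c))   # wavelength ratio close to sqrt(2)
```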
Dynamics in k-space.
In the spinodal region of the phase diagram, the free energy can be lowered by allowing the components to separate, thus increasing the relative concentration of a component material in a particular region of the material. The concentration will continue to increase until the material reaches the stable part of the phase diagram. Very large regions of material will change their concentration slowly due to the amount of material that must be moved. Very small regions will shrink away due to the energy cost of maintaining an interface between two dissimilar component materials.
To initiate a homogeneous quench a control parameter, such as temperature, is abruptly and globally changed. For a binary mixture of formula_77-type and formula_78-type materials, the Landau free-energy
formula_79
is a good approximation of the free energy near the critical point and is often used to study homogeneous quenches. The mixture concentration formula_80 is the density difference of the mixture components, the control parameters which determine the stability of the mixture are formula_77 and formula_78, and the interfacial energy cost is determined by formula_2.
Diffusive motion often dominates at the length-scale of spinodal decomposition. The equation of motion for a diffusive system is
formula_81
where formula_82 is the diffusive mobility, formula_83 is some random noise such that formula_84, and the chemical potential formula_27 is derived from the Landau free-energy:
formula_85
We see that if formula_86, small fluctuations around formula_87 have a negative effective diffusive mobility and will grow rather than shrink. To understand the growth dynamics, we disregard the fluctuating currents due to formula_88, linearize the equation of motion around formula_89 and perform a Fourier transform into formula_90-space. This leads to
formula_91
which has an exponential growth solution:
formula_92
Since the growth rate formula_93 is exponential, the fastest growing angular wavenumber
formula_94
will quickly dominate the morphology. We now see that spinodal decomposition results in domains of the characteristic length scale called the "spinodal length":
formula_95
The growth rate of the fastest-growing angular wave number is
formula_96
where formula_97 is known as the "spinodal time".
The spinodal length and spinodal time can be used to nondimensionalize the equation of motion, resulting in universal scaling for spinodal decomposition.
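A minimal numerical sketch of this dynamics is given below: a two-dimensional Cahn–Hilliard equation with a simple double-well free energy, integrated with a semi-implicit Fourier-spectral scheme. The free energy f(c) = -A c^2/2 + B c^4/4, the parameters, and the grid are arbitrary illustrative choices, not the nondimensionalized equations of any particular study; running the loop longer makes the printed characteristic length grow, reflecting coarsening of the intertwined domains.

```python
import numpy as np

N, dx, dt, steps = 128, 1.0, 0.1, 2000
M, kappa, A, B = 1.0, 1.0, 1.0, 1.0              # mobility, gradient coefficient, well parameters

rng = np.random.default_rng(1)
c = 0.05 * rng.standard_normal((N, N))           # small fluctuations about the critical composition c = 0

k = 2.0 * np.pi * np.fft.fftfreq(N, d=dx)
kx, ky = np.meshgrid(k, k, indexing="ij")
k2 = kx ** 2 + ky ** 2

for _ in range(steps):
    mu_local_hat = np.fft.fft2(-A * c + B * c ** 3)      # local part of the chemical potential, df/dc
    c_hat = np.fft.fft2(c)
    # dc/dt = M lap(mu), mu = df/dc - 2*kappa*lap(c); the stiff gradient term is treated implicitly
    c_hat = (c_hat - dt * M * k2 * mu_local_hat) / (1.0 + 2.0 * dt * M * kappa * k2 ** 2)
    c = np.fft.ifft2(c_hat).real

# characteristic domain size from the first moment of the structure factor
S = np.abs(np.fft.fft2(c)) ** 2
kmag = np.sqrt(k2)
L_char = 2.0 * np.pi * S[kmag > 0].sum() / (kmag[kmag > 0] * S[kmag > 0]).sum()
print(round(float(c.min()), 2), round(float(c.max()), 2), round(float(L_char), 2))
# c approaches the equilibrium values +/- sqrt(A/B); L_char grows as the pattern coarsens
```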
Spinodal Architected Materials.
Spinodal phase decomposition has been used to generate architected materials by interpreting one phase as solid and the other phase as void. These spinodal architected materials present interesting mechanical properties, such as high energy absorption, insensitivity to imperfections, superior mechanical resilience, and a high stiffness-to-weight ratio. Furthermore, by controlling the phase separation, i.e., controlling the proportion of materials and/or imposing preferential directions in the decomposition, one can control the density and preferential directions, effectively tuning the strength, weight, and anisotropy of the resulting architected material. Another interesting property of spinodal materials is the capability to seamlessly transition between different classes, orientations, and densities, thereby enabling the manufacturing of effectively multi-material structures.
| [
{
"math_id": 0,
"text": "\\nabla c"
},
{
"math_id": 1,
"text": "\\kappa(\\nabla c)^2"
},
{
"math_id": 2,
"text": "\\kappa"
},
{
"math_id": 3,
"text": "c"
},
{
"math_id": 4,
"text": "F = \\int_v [ f_b + \\kappa( \\nabla c)^2 ]~dV"
},
{
"math_id": 5,
"text": "f_b"
},
{
"math_id": 6,
"text": "a"
},
{
"math_id": 7,
"text": "q=2\\pi/\\lambda"
},
{
"math_id": 8,
"text": "\\lambda"
},
{
"math_id": 9,
"text": "\\delta F"
},
{
"math_id": 10,
"text": "\\delta c=a\\sin({\\vec q}.{\\vec r})"
},
{
"math_id": 11,
"text": "\n f_b( c ) = f_b( c_0 )+\n\\left( c - c_0 \\right) \\left( \\frac{\\partial f}{\\partial c} \\right)_{c\\,=\\,c_0}\n+ \\frac{1}{2}\\, \\left( c - c_0 \\right)^2 \\left( \\frac{\\partial^2 f}{\\partial c^2} \\right)_{c\\,=\\,c_0} +\\cdots\n"
},
{
"math_id": 12,
"text": "\n f_b + \\kappa( \\nabla c)^2 = f_b( c_0 )+\na\\sin({\\vec q}.{\\vec r}) \\left( \\frac{\\partial f}{\\partial c} \\right)_{c\\,=\\,c_0}\n+ \\frac{1}{2}\\,a^2 \\sin^2({\\vec q}.{\\vec r}) \\left( \\frac{\\partial^2 f}{\\partial c^2} \\right)_{c\\,=\\,c_0} +a^2\\kappa q^2\\cos^2({\\vec q}.{\\vec r})\n"
},
{
"math_id": 13,
"text": "V"
},
{
"math_id": 14,
"text": "\\sin({\\vec q}.{\\vec r})"
},
{
"math_id": 15,
"text": "\\sin^2({\\vec q}.{\\vec r})"
},
{
"math_id": 16,
"text": "\\cos^2({\\vec q}.{\\vec r})"
},
{
"math_id": 17,
"text": "V/2"
},
{
"math_id": 18,
"text": "\\frac{\\delta F}{V} = \\frac{a^2}{4} \\left[ \\left( \\frac{\\partial^2 f}{\\partial c^2} \\right)_{c=c_0} + 2\\, \\kappa\\, q^2 \\right]"
},
{
"math_id": 19,
"text": " a^2>0"
},
{
"math_id": 20,
"text": " 2\\kappa q^2"
},
{
"math_id": 21,
"text": " q \\to 0 "
},
{
"math_id": 22,
"text": " q < q_c"
},
{
"math_id": 23,
"text": " q_c"
},
{
"math_id": 24,
"text": "q_c = \\sqrt{ \\frac{-1}{2 \\kappa}\\left(\\frac{\\partial^2 f}{\\partial c^2}\\right)_{c=c_0} }"
},
{
"math_id": 25,
"text": "\\lambda_c = \\sqrt{ -8\\pi^2\\kappa/\\left(\\frac{\\partial^2 f}{\\partial c^2}\\right)_{c=c_0} }"
},
{
"math_id": 26,
"text": "\\frac{\\partial c}{\\partial t}=M\\nabla^2\\mu"
},
{
"math_id": 27,
"text": "\\mu"
},
{
"math_id": 28,
"text": " M"
},
{
"math_id": 29,
"text": " \\mu=\\frac{\\delta F}{\\delta c} = \\left( \\frac{\\partial f}{\\partial c} \\right)_{c=c_0} - 2\\kappa\\nabla^2 c"
},
{
"math_id": 30,
"text": " \\frac{\\partial c}{\\partial t}=M\\nabla^2\\mu=M\\left[\\left( \\frac{\\partial^2 f}{\\partial c^2} \\right)_{c=c_0}\\nabla^2 c - 2\\kappa\\nabla^4 c\\right]"
},
{
"math_id": 31,
"text": "\\delta c=a\\exp(\\omega t)\\sin({\\vec q}.{\\vec r})"
},
{
"math_id": 32,
"text": "\\omega"
},
{
"math_id": 33,
"text": "\\omega < 0"
},
{
"math_id": 34,
"text": "\\omega > 0"
},
{
"math_id": 35,
"text": "\\omega \\delta c= M\\left[-\\left(\\frac{\\partial^2 f}{\\partial c^2}\\right)_{c=c_0}q^2-2\\kappa q^4\n\\right]\\delta c"
},
{
"math_id": 36,
"text": "\\omega = Mq^2\\left[-\\left(\\frac{\\partial^2 f}{\\partial c^2}\\right)_{c=c_0}-2\\kappa q^2\n\\right]"
},
{
"math_id": 37,
"text": "q_{\\rm{max}} = \\sqrt{-\\left(\\frac{\\partial^2 f}{\\partial c^2}\\right)_{c=c_0}/(4\\kappa)}"
},
{
"math_id": 38,
"text": " \\epsilon = \\frac{ a - a_0}{a_0} "
},
{
"math_id": 39,
"text": " \\sigma_{x'} = \\sigma_{y'} = \\sigma_{z'} "
},
{
"math_id": 40,
"text": "W_E = \\frac{1}{2} \\displaystyle \\sum_i \\sigma_i\\epsilon_i "
},
{
"math_id": 41,
"text": "W_E(1) = \\frac{3}{2} ( c_{11} + 2 c_{12} ) \\epsilon^2"
},
{
"math_id": 42,
"text": " W_E(2) = \\frac{\\epsilon^2 (c_{11} + 2 c_{22})}{2c_{11}}"
},
{
"math_id": 43,
"text": "W_E = W_E(1) - W_E(2) "
},
{
"math_id": 44,
"text": "W_E = \\frac{\\epsilon^2}{2} (c_{11} + 2c_{12} ) \\left( 3 - \\left[ \\frac{c_{11} - 2c_{12}}{c_{1'1'}} \\right] \\right)"
},
{
"math_id": 45,
"text": "c_{1'1'} = c_{11} + 2(2c_{44} - c_{11} + c_{12}) (l^2m^2 + m^2n^2 + l^2n^2)"
},
{
"math_id": 46,
"text": "W_E = Y \\epsilon^2 "
},
{
"math_id": 47,
"text": " Y = \\frac{1}{2} (c_{11} + 2c_{12}) \\left[ 3 - \\frac{c_{11} + 2c_{12}}{c_{11} + 2(2c_{44} - c_{11} + c_{12})(l^2m^2 + m^2n^2 + l^2n^2)} \\right]"
},
{
"math_id": 48,
"text": "W_E = 4 \\int Y \\epsilon^2~dx "
},
{
"math_id": 49,
"text": "a = a_0[ 1 + \\eta [c-c_0 ] + \\cdots ] "
},
{
"math_id": 50,
"text": "\\eta = \\frac{1}{a_0} \\frac{da}{dc} + \\frac{d \\ln a}{dc}"
},
{
"math_id": 51,
"text": " \\epsilon = \\frac{a-a_0}{a_0} = \\eta ( c- c_0) "
},
{
"math_id": 52,
"text": " W_E = A \\int \\eta^2 Y (c -c_0)^2~dx "
},
{
"math_id": 53,
"text": " 2c_{44} -c_{11} + c_{12} = 0"
},
{
"math_id": 54,
"text": "Y[\\mathrm{iso}] = c_{11} + c_{12} -2 \\frac{c_{12}^2}{c_{11}}"
},
{
"math_id": 55,
"text": "c_{11} = \\frac{ E (1-\\nu)}{(1-2 \\nu)(1 + \\nu)} "
},
{
"math_id": 56,
"text": "c_{12} = \\frac { E \\nu} {(1-2 \\nu)(1+\\nu)}"
},
{
"math_id": 57,
"text": "Y[\\mathrm{iso} ] = \\frac{E}{1-\\nu} "
},
{
"math_id": 58,
"text": "2c_{44} - c_{11} + c_{12} "
},
{
"math_id": 59,
"text": "Y[\\mathrm{100}] = c_{11} + c_{12} -2 \\frac{c_{12}^2}{c_{11}}"
},
{
"math_id": 60,
"text": "Y[\\mathrm{111}] = \\frac{ 6c_{44} ( c_{11} + 2c_{12} )}{c_{11} + 2c_{12} + 4c_{44}} "
},
{
"math_id": 61,
"text": "F_t = A \\int f(c) + \\eta Y (c-c_0)^2 + K\\left(\\frac{dc}{dx}\\right)^2~dx"
},
{
"math_id": 62,
"text": "\\frac{\\partial c} {\\partial t} = \\frac{M}{N_\\nu} \\left( [ f'' + 2 \\eta Y ] \\frac{d^2 c}{dx^2} - 2K \\frac{d^4c}{dx^4} \\right) "
},
{
"math_id": 63,
"text": "\\frac{\\partial c} {\\partial t} = \\left[ 1 + \\frac{ 2\\eta Y}{f''} \\right] \\frac{d^2 c}{dx^2} - \\frac{2KF}{f''} \\frac{d^4c}{dx^4} "
},
{
"math_id": 64,
"text": "c - c_0 = \\int A(\\beta) \\exp (i \\beta x)~d\\beta "
},
{
"math_id": 65,
"text": "A(\\beta) = \\frac{1}{2\\pi} \\int (c-c_0) \\exp(-i\\beta x) ~dx"
},
{
"math_id": 66,
"text": "\\frac{dA(\\beta)}{dt} = - \\frac{M}{N_\\nu} [ f'' + 2 \\eta^2Y + 2Y\\beta^2 ] \\beta^2 A(\\beta) "
},
{
"math_id": 67,
"text": "A(\\beta,t) = A(\\beta,0) \\exp[ R(\\beta) t] "
},
{
"math_id": 68,
"text": " R(\\beta) = - \\frac{M}{N_\\nu} (f '' + 2\\eta Y + 2k\\beta^2)\\beta^2"
},
{
"math_id": 69,
"text": " R(\\beta) = -\\tilde{D} \\left(1 + \\frac{2\\eta^2 Y}{f''} + \\frac{2K}{f''}\\beta^2 \\right) \\beta^2"
},
{
"math_id": 70,
"text": " \\frac{\\partial c }{ \\partial t} = M \\frac{\\partial^2 f}{\\partial c^2} \\nabla^2 c - 2MK\\nabla^4 c) "
},
{
"math_id": 71,
"text": "c - c_0 = exp[R\\bar{\\beta}t] cos\\beta \\cdot r "
},
{
"math_id": 72,
"text": "R(\\beta)"
},
{
"math_id": 73,
"text": "R(\\bar{\\beta}) - M\\beta^2 \\left( \\frac{\\partial^2 f}{\\partial c^2} + 2 K \\beta^2 \\right) "
},
{
"math_id": 74,
"text": " R(\\bar{\\beta}) = - M\\beta^2 \\left( \\frac{\\partial^2 f}{\\partial c^2} + 2\\eta^2 Y + 2K\\beta^2 \\right)"
},
{
"math_id": 75,
"text": "Y = \\frac{E}{1-\\nu} "
},
{
"math_id": 76,
"text": "\\sqrt{2}"
},
{
"math_id": 77,
"text": "A"
},
{
"math_id": 78,
"text": "B"
},
{
"math_id": 79,
"text": "F=\\int\\!\\left(\\frac{A}{2}\\phi^2+\\frac{B}{4}\\phi^4 + \\frac{\\kappa}{2}\\left(\\nabla\\phi\\right)^2\\right)~dx\\;."
},
{
"math_id": 80,
"text": "\\phi=\\rho_A-\\rho_B"
},
{
"math_id": 81,
"text": "\\partial_t\\phi=\\nabla ( m\\nabla\\mu + \\xi(x) )\\;,"
},
{
"math_id": 82,
"text": "m"
},
{
"math_id": 83,
"text": "\\xi(x)"
},
{
"math_id": 84,
"text": "\\langle\\xi(x)\\rangle=0"
},
{
"math_id": 85,
"text": "\\mu=\\frac{\\delta F}{\\delta \\phi}=A\\phi+B\\phi^3-\\kappa \\nabla^2 \\phi\\;."
},
{
"math_id": 86,
"text": "A<0"
},
{
"math_id": 87,
"text": "\\phi=0"
},
{
"math_id": 88,
"text": "\\xi"
},
{
"math_id": 89,
"text": "\\phi=\\phi_{in}"
},
{
"math_id": 90,
"text": "k"
},
{
"math_id": 91,
"text": "\\partial_t\\tilde{\\phi}(k,t)=-m((A + 3B\\phi_{in}^2)k^2 + \\kappa k^4)\\tilde{\\phi}(k,t)=R(k)\\tilde{\\phi}(k,t)\\;,"
},
{
"math_id": 92,
"text": "\\tilde{\\phi}(k,t) = \\exp(R(k)t)\\;."
},
{
"math_id": 93,
"text": "R(k)"
},
{
"math_id": 94,
"text": "k_{sp} = \\sqrt{\\frac{-(A+3B\\phi_{in}^2)}{2\\kappa}}\\;,"
},
{
"math_id": 95,
"text": "\\lambda_{sp} = \\frac{2\\pi}{k_{sp}} = 2\\pi\\sqrt{\\frac{2\\kappa}{-(A+3B\\phi_{in}^2)}}\\;."
},
{
"math_id": 96,
"text": "R(k_{sp})=-m((A + 3B\\phi_{in}^2)k_{sp}^2 + \\kappa k_{sp}^4)=\\frac{m(A+3B\\phi_{in}^2)^2}{4\\kappa} = \\frac{1}{t_{sp}}"
},
{
"math_id": 97,
"text": "t_{sp}"
}
]
| https://en.wikipedia.org/wiki?curid=13480124 |
1348079 | Partial isometry | In mathematical functional analysis a partial isometry is a linear map between Hilbert spaces such that it is an isometry on the orthogonal complement of its kernel.
The orthogonal complement of its kernel is called the initial subspace and its range is called the final subspace.
Partial isometries appear in the polar decomposition.
General definition.
The concept of partial isometry can be defined in other equivalent ways. If "U" is an isometric map defined on a closed subspace "H"1 of a Hilbert space "H" then we can define an extension "W" of "U" to all of "H" by the condition that "W" be zero on the orthogonal complement of "H"1. Thus a partial isometry is also sometimes defined as a closed partially defined isometric map.
Partial isometries (and projections) can be defined in the more abstract setting of a semigroup with involution; the definition coincides with the one herein.
Characterization in finite dimensions.
In finite-dimensional vector spaces, a matrix formula_0 is a partial isometry if and only if formula_1 is the projection onto its support. Contrast this with the more demanding definition of isometry: a matrix formula_2 is an isometry if and only if formula_3. In other words, an isometry is an injective partial isometry.
Any finite-dimensional partial isometry can be represented, in some choice of basis, as a matrix of the form formula_4, that is, as a matrix whose first formula_5 columns form an isometry, while all the other columns are identically 0.
Note that for any isometry formula_2, the Hermitian conjugate formula_6 is a partial isometry, although not every partial isometry has this form, as shown explicitly in the given examples.
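A small numerical sketch (in Python with NumPy; the matrices and the tolerance are illustrative choices, not taken from the text) that tests this finite-dimensional characterization:

```python
import numpy as np

def is_partial_isometry(A, tol=1e-10):
    """A is a partial isometry iff A*A (conjugate transpose times A) is an
    orthogonal projection, equivalently A @ (A*A) == A."""
    P = A.conj().T @ A
    return np.allclose(P @ P, P, atol=tol) and np.allclose(A @ P, A, atol=tol)

# The 2x2 nilpotent example from the text
N = np.array([[0.0, 1.0],
              [0.0, 0.0]])

# A 3x3 example from the text: its columns are not orthonormal,
# but it acts isometrically on its support
A = np.array([[1.0, 0.0, 0.0],
              [0.0, 1.0 / np.sqrt(2), 1.0 / np.sqrt(2)],
              [0.0, 0.0, 0.0]])

print(is_partial_isometry(N))  # True
print(is_partial_isometry(A))  # True
print(is_partial_isometry(np.array([[1.0, 1.0],
                                    [0.0, 0.0]])))  # False: not an isometry on its support
```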
Operator Algebras.
For operator algebras one introduces the initial and final subspaces:
formula_7
C*-Algebras.
For C*-algebras one has the chain of equivalences due to the C*-property:
formula_8
So one defines partial isometries by either of the above and declares the initial resp. final projection to be W*W resp. WW*.
Two projections are said to be equivalent if they arise as the initial and final projections of a common partial isometry:
formula_9
It plays an important role in K-theory for C*-algebras and in the Murray-von Neumann theory of projections in a von Neumann algebra.
Special Classes.
Projections.
Any orthogonal projection is one with common initial and final subspace:
formula_10
Embeddings.
Any isometric embedding is one with full initial subspace:
formula_11
Unitaries.
Any unitary operator is one with full initial and final subspace:
formula_12
"(Apart from these there are far more partial isometries.)"
Examples.
Nilpotents.
On the two-dimensional complex Hilbert space the matrix
formula_13
is a partial isometry with initial subspace
formula_14
and final subspace
formula_15
Generic finite-dimensional examples.
Other possible examples in finite dimensions areformula_16This is clearly not an isometry, because the columns are not orthonormal. However, its support is the span of formula_17 and formula_18, and restricting the action of formula_0 on this space, it becomes an isometry (and in particular, a unitary). One can similarly verify that formula_19, that is, that formula_20 is the projection onto its support.
Partial isometries do not necessarily correspond to square matrices. Consider for example,formula_21The support of this matrix is the span of formula_17 and formula_22, and it acts as an isometry (and in particular, as the identity) on this space.
Yet another example, in which this time formula_0 acts like a non-trivial isometry on its support, isformula_23One can readily verify that formula_24, and formula_25, showing the isometric behavior of formula_0 between its support formula_26 and its range formula_27.
Leftshift and Rightshift.
On the space of square-summable sequences, the operators
formula_28
formula_29
which are related by
formula_30
are partial isometries with initial subspace
formula_31
and final subspace:
formula_32. | [
{
"math_id": 0,
"text": "A"
},
{
"math_id": 1,
"text": " A^* A"
},
{
"math_id": 2,
"text": "V"
},
{
"math_id": 3,
"text": "V^* V=I"
},
{
"math_id": 4,
"text": "A=\\begin{pmatrix}V & 0\\end{pmatrix}"
},
{
"math_id": 5,
"text": "\\operatorname{rank}(A)"
},
{
"math_id": 6,
"text": "V^*"
},
{
"math_id": 7,
"text": "\\mathcal{I}W:=\\mathcal{R}W^*W,\\,\\mathcal{F}W:=\\mathcal{R}WW^*"
},
{
"math_id": 8,
"text": "(W^*W)^2=W^*W\\iff WW^*W=W\\iff W^*WW^*=W^*\\iff(WW^*)^2=WW^*"
},
{
"math_id": 9,
"text": "P=W^*W,\\,Q=WW^*"
},
{
"math_id": 10,
"text": "P:\\mathcal{H}\\rightarrow\\mathcal{H}:\\quad\\mathcal{I}P=\\mathcal{F}P"
},
{
"math_id": 11,
"text": "J:\\mathcal{H}\\hookrightarrow\\mathcal{K}:\\quad\\mathcal{I}J=\\mathcal{H}"
},
{
"math_id": 12,
"text": "U:\\mathcal{H}\\leftrightarrow\\mathcal{K}:\\quad\\mathcal{I}U=\\mathcal{H},\\,\\mathcal{F}U=\\mathcal{K}"
},
{
"math_id": 13,
"text": " \\begin{pmatrix}0 & 1 \\\\ 0 & 0 \\end{pmatrix} "
},
{
"math_id": 14,
"text": " \\{0\\} \\oplus \\mathbb{C}"
},
{
"math_id": 15,
"text": " \\mathbb{C} \\oplus \\{0\\}. "
},
{
"math_id": 16,
"text": "A\\equiv \\begin{pmatrix}1&0&0\\\\0&\\frac1{\\sqrt2}&\\frac1{\\sqrt2}\\\\0&0&0\\end{pmatrix}."
},
{
"math_id": 17,
"text": "\\mathbf e_1\\equiv (1,0,0)"
},
{
"math_id": 18,
"text": "\\frac{1}{\\sqrt2}(\\mathbf e_2+\\mathbf e_3)\\equiv (0,1/\\sqrt2,1/\\sqrt2)"
},
{
"math_id": 19,
"text": "A^* A= \\Pi_{\\operatorname{supp}(A)}"
},
{
"math_id": 20,
"text": "A^* A"
},
{
"math_id": 21,
"text": "A\\equiv \\begin{pmatrix}1&0&0\\\\0&\\frac12&\\frac12\\\\ 0 & 0 & 0 \\\\ 0& \\frac12 & \\frac12\\end{pmatrix}."
},
{
"math_id": 22,
"text": "\\mathbf e_2+\\mathbf e_3\\equiv (0,1,1)"
},
{
"math_id": 23,
"text": "A = \\begin{pmatrix}0 & \\frac1{\\sqrt2} & \\frac1{\\sqrt2} \\\\ 1&0&0\\\\0&0&0\\end{pmatrix}."
},
{
"math_id": 24,
"text": "A\\mathbf e_1=\\mathbf e_2"
},
{
"math_id": 25,
"text": "A \\left(\\frac{\\mathbf e_2 + \\mathbf e_3}{\\sqrt2}\\right) = \\mathbf e_1"
},
{
"math_id": 26,
"text": "\\operatorname{span}(\\{\\mathbf e_1, \\mathbf e_2+\\mathbf e_3\\})"
},
{
"math_id": 27,
"text": "\\operatorname{span}(\\{\\mathbf e_1,\\mathbf e_2\\})"
},
{
"math_id": 28,
"text": "R:\\ell^2(\\mathbb{N})\\to\\ell^2(\\mathbb{N}):(x_1,x_2,\\ldots)\\mapsto(0,x_1,x_2,\\ldots)"
},
{
"math_id": 29,
"text": "L:\\ell^2(\\mathbb{N})\\to\\ell^2(\\mathbb{N}):(x_1,x_2,\\ldots)\\mapsto(x_2,x_3,\\ldots)"
},
{
"math_id": 30,
"text": "R^*=L"
},
{
"math_id": 31,
"text": "LR(x_1,x_2,\\ldots)=(x_1,x_2,\\ldots)"
},
{
"math_id": 32,
"text": "RL(x_1,x_2,\\ldots)=(0,x_2,\\ldots)"
}
]
| https://en.wikipedia.org/wiki?curid=1348079 |
13486743 | Bhaskara's lemma | Mathematical lemma
"Bhaskara's" Lemma is an identity used as a lemma during the chakravala method. It states that:
formula_0
for integers formula_1 and non-zero integer formula_2.
Proof.
The proof follows from simple algebraic manipulations as follows: multiply both sides of the equation by formula_3, add formula_4, factor, and divide by formula_5.
formula_6
formula_7
formula_8
formula_9
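A quick numerical check of the identity in Python (the starting triple and the choice of m below are illustrative; they happen to correspond to a chakravala step for N = 61):

```python
from fractions import Fraction

def bhaskara_step(N, x, y, k, m):
    """Given N*x**2 + k == y**2, return (x', y', k') with
    x' = (m*x + y)/k,  y' = (m*y + N*x)/k,  k' = (m**2 - N)/k,
    which again satisfy N*x'**2 + k' == y'**2 by the lemma."""
    x2 = Fraction(m * x + y, k)
    y2 = Fraction(m * y + N * x, k)
    k2 = Fraction(m * m - N, k)
    assert N * x2**2 + k2 == y2**2
    return x2, y2, k2

# 61*1**2 + 3 == 8**2, so (x, y, k) = (1, 8, 3) is a valid starting triple
print(bhaskara_step(61, 1, 8, 3, m=7))   # (5, 39, -4), since 61*5**2 - 4 == 39**2
```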
So long as neither formula_2 nor formula_3 are zero, the implication goes in both directions. (The lemma holds for real or complex numbers as well as integers.) | [
{
"math_id": 0,
"text": "\\, Nx^2 + k = y^2\\implies \\,N\\left(\\frac{mx + y}{k}\\right)^2 + \\frac{m^2 - N}{k} = \\left(\\frac{my + Nx}{k}\\right)^2"
},
{
"math_id": 1,
"text": "m,\\, x,\\, y,\\, N,"
},
{
"math_id": 2,
"text": "k"
},
{
"math_id": 3,
"text": "m^2-N"
},
{
"math_id": 4,
"text": "N^2x^2+2Nmxy+Ny^2"
},
{
"math_id": 5,
"text": "k^2"
},
{
"math_id": 6,
"text": "\\, Nx^2 + k = y^2\\implies Nm^2x^2-N^2x^2+k(m^2-N) = m^2y^2-Ny^2"
},
{
"math_id": 7,
"text": "\\implies Nm^2x^2+2Nmxy+Ny^2+k(m^2-N) = m^2y^2+2Nmxy+N^2x^2"
},
{
"math_id": 8,
"text": "\\implies N(mx+y)^2+k(m^2-N) = (my+Nx)^2"
},
{
"math_id": 9,
"text": "\\implies \\,N\\left(\\frac{mx + y}{k}\\right)^2 + \\frac{m^2 - N}{k} = \\left(\\frac{my + Nx}{k}\\right)^2."
}
]
| https://en.wikipedia.org/wiki?curid=13486743 |
13487725 | Photostationary state | The equilibrium chemical composition under a specific kind of electromagnetic irradiation
The photostationary state of a reversible photochemical reaction is the equilibrium chemical composition under a specific kind of electromagnetic irradiation (usually a single wavelength of visible or UV radiation).
It is a property of particular importance in photochromic compounds, often used as a measure of their practical efficiency and usually quoted as a ratio or percentage.
The position of the photostationary state is primarily a function of the irradiation parameters, the absorbance spectra of the chemical species, and the quantum yields of the reactions. The photostationary state can be very different from the composition of a mixture at thermodynamic equilibrium. As a consequence, photochemistry can be used to produce compositions that are "contra-thermodynamic".
For instance, although "cis"-stilbene is "uphill" from "trans-"stilbene in a thermodynamic sense, irradiation of "trans"-stilbene results in a mixture that is predominantly the "cis" isomer. As an extreme example, irradiation of benzene at 237 to 254 nm results in formation of benzvalene, an isomer of benzene that is 71 kcal/mol higher in energy than benzene itself.
Overview.
Absorption of radiation by reactants of a reaction at equilibrium increases the rate of forward reaction without directly affecting the rate of the reverse reaction.
The rate of a photochemical reaction is proportional to the absorption cross section of the reactant with respect to the excitation source (σ), the quantum yield of reaction (Φ), and the intensity of the irradiation. In a reversible photochemical reaction between compounds A and B, there will therefore be a "forwards" reaction of formula_0 at a rate proportional to formula_1 and a "backwards" reaction of formula_2 at a rate proportional to formula_3. The ratio of the rates of the forward and backwards reactions determines where the equilibrium lies, and thus the photostationary state is found at:
formula_4
If (as is always the case to some extent) the compounds A and B have different absorption spectra, then there may exist wavelengths of light where σa is high and σb is low. Irradiation at these wavelengths will provide photostationary states that contain mostly B. Likewise, wavelengths that give photostationary states of predominantly A may exist. This is particularly likely in compounds such as some photochromics, where A and B have entirely different absorption bands. Compounds that may be readily switched in this way find utility in devices such as molecular switches and optical data storage.
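As a rough numerical illustration (all cross sections and quantum yields below are made-up values, not data from the text), the ratio of forward to backward rates fixes the composition at the photostationary state:

```python
def photostationary_ratio(sigma_a, phi_ab, sigma_b, phi_ba):
    """Ratio [B]/[A] at the photostationary state, assuming only the two
    photoreactions A -> B and B -> A contribute at the chosen wavelength."""
    return (sigma_a * phi_ab) / (sigma_b * phi_ba)

# Hypothetical case: A absorbs about 10x more strongly than B at the irradiation wavelength
ratio = photostationary_ratio(sigma_a=1.0e-17, phi_ab=0.5, sigma_b=1.0e-18, phi_ba=0.4)
fraction_B = ratio / (1.0 + ratio)
print(f"[B]/[A] = {ratio:.1f}, i.e. about {100 * fraction_B:.0f}% B at the photostationary state")
```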
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "A \\rightarrow B"
},
{
"math_id": 1,
"text": "\\sigma_a \\times \\phi_{A\\rightarrow B}"
},
{
"math_id": 2,
"text": "B \\rightarrow A"
},
{
"math_id": 3,
"text": "\\sigma_b \\times \\phi_{B \\rightarrow A}"
},
{
"math_id": 4,
"text": "\\sigma_a \\times \\phi_{A\\rightarrow B} / \\sigma_b \\times \\phi_{B \\rightarrow A}"
}
]
| https://en.wikipedia.org/wiki?curid=13487725 |
1348798 | Skolem's paradox | Mathematical logic concept
In mathematical logic and philosophy, Skolem's paradox is the apparent contradiction that a countable model of first-order set theory could contain an uncountable set. The paradox arises from part of the Löwenheim–Skolem theorem; Thoralf Skolem was the first to discuss the seemingly contradictory aspects of the theorem, and to discover the relativity of set-theoretic notions now known as non-absoluteness. Although it is not an actual antinomy like Russell's paradox, the result is typically called a paradox and was described as a "paradoxical state of affairs" by Skolem.
In model theory, a model corresponds to a specific interpretation of a formal language or theory. It consists of a domain (a set of objects) and an interpretation of the symbols and formulas in the language, such that the axioms of the theory are satisfied within this structure. The Löwenheim–Skolem theorem shows that any model of set theory in first-order logic, if it is consistent, has an equivalent model that is countable. This appears contradictory, because Georg Cantor proved that there exist sets which are not countable. Thus the seeming contradiction is that a model that is itself countable, and which therefore contains only countable sets, satisfies the first-order sentence that intuitively states "there are uncountable sets".
A mathematical explanation of the paradox, showing that it is not a true contradiction in mathematics, was first given in 1922 by Skolem. He explained that the countability of a set is not absolute, but relative to the model in which the cardinality is measured. Skolem's work was harshly received by Ernst Zermelo, who argued against the limitations of first-order logic and Skolem's notion of "relativity," but the result quickly came to be accepted by the mathematical community.
The philosophical implications of Skolem's paradox have received much study. One line of inquiry questions whether it is accurate to claim that any first-order sentence actually states "there are uncountable sets". This line of thought can be extended to question whether any set is uncountable in an absolute sense. More recently, scholars such as Hilary Putnam have introduced the paradox and Skolem's concept of relativity to the study of the philosophy of language.
Background.
One of the earliest results in set theory, published by Cantor in 1874, was the existence of different sizes, or cardinalities, of infinite sets. An infinite set "X" is called countable if there is a function that gives a one-to-one correspondence between "X" and the natural numbers, and is uncountable if there is no such correspondence function. When Zermelo proposed his axioms for set theory in 1908, he proved Cantor's theorem from them to demonstrate their strength.
In 1915, Leopold Löwenheim gave the first proof of what Skolem would prove more generally in 1920 and 1922, the Löwenheim–Skolem theorem. Löwenheim showed that any first-order sentence with a model also has a model with a countable domain; Skolem generalized this to infinite sets of sentences. The downward form of the Löwenheim–Skolem theorem shows that if a countable first-order collection of axioms is satisfied by an infinite structure, then the same axioms are satisfied by some countably infinite structure. Since the first-order versions of standard axioms of set theory (such as Zermelo–Fraenkel set theory) are a countable collection of axioms, this implies that if these axioms are satisfiable, they are satisfiable in some countable model.
The result and its implications.
In 1922, Skolem pointed out the seeming contradiction between the Löwenheim–Skolem theorem, which implies that there is a countable model of Zermelo's axioms, and Cantor's theorem, which states that uncountable sets exist, and which is provable from Zermelo's axioms. "So far as I know," Skolem wrote, "no one has called attention to this peculiar and apparently paradoxical state of affairs. By virtue of the axioms we can prove the existence of higher cardinalities... How can it be, then, that the entire domain "B" [a countable model of Zermelo's axioms] can already be enumerated by means of the finite positive integers?"
However, this is only an apparent paradox. In the context of a specific model of set theory, the term "set" does not refer to an arbitrary set, but only to a set that is actually included in the model. The definition of countability requires that a certain one-to-one correspondence between a set and the natural numbers must exist. This correspondence itself is a set. Skolem resolved the paradox by concluding that such a set does not necessarily exist in a countable model; that is, countability is "relative" to a model, and countable, first-order models are incomplete.
Though Skolem gave his result with respect to Zermelo's axioms, it holds for any standard model of first-order set theory, such as ZFC. Consider Cantor's theorem as a long formula in the formal language of ZFC. If ZFC has a model, call this model formula_0 and its domain formula_1. The interpretation of the element symbol formula_2, or formula_3, is a set of ordered pairs of elements of formula_1—in other words, formula_3 is a subset of formula_4. Since the Löwenheim–Skolem theorem guarantees that formula_1 is countable, so too is formula_4. There are two special elements of formula_1; they model the natural numbers formula_5 and the power set of the natural numbers formula_6. There are only countably many ordered pairs in formula_3 of the form formula_7, because formula_4 is countable. However, there is no contradiction with Cantor's theorem, because what it states is simply "no element of formula_1 is a bijective function from formula_5 (an element of formula_1) to formula_6 (another element of formula_1)."
Skolem used the term "relative" to describe when the same set could be countable in one model of set theory and not countable in another: relative to one model, no enumerating function can put some set into correspondence with the natural numbers, but relative to another model, this correspondence may exist. He described this as the "most important" result in his 1922 paper. Contemporary set theorists describe concepts that do not depend on the choice of a transitive model as absolute. From their point of view, Skolem's paradox simply shows that countability is not an absolute property in first-order logic.
Skolem described his work as a critique of (first-order) set theory, intended to illustrate its weakness as a foundational system:
I believed that it was so clear that axiomatization in terms of sets was not a satisfactory ultimate foundation of mathematics that mathematicians would, for the most part, not be very much concerned with it. But in recent times I have seen to my surprise that so many mathematicians think that these axioms of set theory provide the ideal foundation for mathematics; therefore it seemed to me that the time had come for a critique.
Reception by the mathematical community.
It took some time for the theory of first-order logic to be developed enough for mathematicians to understand the cause of Skolem's result; no resolution of the paradox was widely accepted during the 1920s. In 1928, Abraham Fraenkel still described the result as an antinomy:
Neither have the books yet been closed on the antinomy, nor has agreement on its significance and possible solution yet been reached.
In 1925, John von Neumann presented a novel axiomatization of set theory, which developed into NBG set theory. Very much aware of Skolem's 1923 paper, von Neumann investigated countable models of his axioms in detail. In his concluding remarks, von Neumann commented that there is no categorical axiomatization of set theory, or any other theory with an infinite model. Speaking of the impact of Skolem's paradox, he wrote:
At present we can do no more than note that we have one more reason here to entertain reservations about set theory and that for the time being no way of rehabilitating this theory is known.
Zermelo at first considered the Skolem paradox a hoax and spoke against it starting in 1929. Skolem's result applies only to what is now called first-order logic, but Zermelo argued against the finitary metamathematics that underlie first-order logic, as Zermelo was a mathematical Platonist who opposed intuitionism and finitism in mathematics. Zermelo argued that his axioms should instead be studied in second-order logic, a setting in which Skolem's result does not apply. Zermelo published a second-order axiomatization in 1930 and proved several categoricity results in that context. Zermelo's further work on the foundations of set theory after Skolem's paper led to his discovery of the cumulative hierarchy and formalization of infinitary logic.
The surprise with which set theorists met Skolem's paradox in the 1920s was a product of their times. Gödel's completeness theorem and the compactness theorem, theorems which illuminate the way that first-order logic behaves and established its finitary nature, were not first proved until 1929. Leon Henkin's proof of the completeness theorem, which is now a standard technique for constructing countable models of a consistent first-order theory, was not presented until 1947. Thus, in the 1920s, the particular properties of first-order logic that permit Skolem's paradox were not yet understood. It is now known that Skolem's paradox is unique to first-order logic; if set theory is studied using higher-order logic with full semantics, then it does not have any countable models. By the time that Zermelo was writing his final refutation of the paradox in 1937, the community of logicians and set theorists had largely accepted the incompleteness of first-order logic. Zermelo left this refutation unfinished.
Later opinions.
Later mathematical logicians did not view Skolem's paradox as a fatal flaw in set theory. Stephen Cole Kleene described the result as "not a paradox in the sense of outright contradiction, but rather a kind of anomaly". After surveying Skolem's argument that the result is not contradictory, Kleene concluded: "there is no absolute notion of countability". Geoffrey Hunter described the contradiction as "hardly even a paradox". Fraenkel et al. claimed that contemporary mathematicians are no more bothered by the lack of categoricity of first-order theories than they are bothered by the conclusion of Gödel's incompleteness theorem: that no consistent, effective, and sufficiently strong set of first-order axioms is complete.
Other mathematicians such as Reuben Goodstein and Hao Wang have gone so far as to adopt what is called a "Skolemite" view: that not only does the Löwenheim-Skolem theorem prove that set-theoretic notions of countability are relative to a model, but that every set is countable from some "absolute" perspective. This conception of absolute countability was first championed by L. E. J. Brouwer from the vantage of mathematical intuitionism. Both the Skolemites and Brouwer oppose mathematical Platonism, but Carl Posy denies the idea that Brouwer's position was a reaction to any set-theoretic paradox.
Countable models of Zermelo–Fraenkel set theory have become common tools in the study of set theory. Paul Cohen's method for extending set theory, forcing, is often explained in terms of countable models, and was described by Akihiro Kanamori as a kind of extension of Skolem's paradox. The fact that these countable models of Zermelo–Fraenkel set theory still satisfy the theorem that there are uncountable sets is not considered a pathology; Jean van Heijenoort described it as "not a paradox...[but] a novel and unexpected feature of formal systems".
Hilary Putnam considered Skolem's result a paradox, but one of the philosophy of language rather than of set theory or formal logic. He extended Skolem's paradox to argue that not only are set-theoretic notions of membership relative, but semantic notions of language are relative: there is no "absolute" model for terms and predicates in language. Timothy Bays argued that Putnam's argument applies the downward Löwenheim-Skolem theorem incorrectly, while Tim Button argued that Putnam's claim stands despite the use or misuse of the Löwenheim-Skolem theorem. Appeals to Skolem's paradox have been made several times in the philosophy of science, with scholars making use of the Skolem's idea of the relativity of model structures.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\mathrm{M}"
},
{
"math_id": 1,
"text": "\\mathbb{M}"
},
{
"math_id": 2,
"text": " \\in "
},
{
"math_id": 3,
"text": "\\mathcal{I} ( \\in )"
},
{
"math_id": 4,
"text": "\\mathbb{M} \\times \\mathbb{M}"
},
{
"math_id": 5,
"text": "\\mathbb{N}"
},
{
"math_id": 6,
"text": "\\mathcal{P} ( \\mathbb{N} ) "
},
{
"math_id": 7,
"text": "\\langle x , \\mathcal{P} ( \\mathbb{N} ) \\rangle"
}
]
| https://en.wikipedia.org/wiki?curid=1348798 |
1348988 | Protein efficiency ratio | Protein efficiency ratio (PER) is based on the weight gain of a test subject divided by its intake of a particular food protein during the test period.
From 1919 until very recently, the PER was a widely used method for evaluating the quality of protein in food.
The food industry in Canada currently uses the PER as the standard for evaluating the protein quality of foods. The official method for determining the protein efficiency ratio is from Health Canada's Health Protection Branch Method FO-1, October 15, 1981.
The U.S. Food and Drug Administration now uses the Protein Digestibility Corrected Amino Acid Score (PDCAAS) as the basis for the percent of the U.S. recommended daily allowance (USRDA) for protein shown on food labels. However, the PER is still used in certain FDA regulations. The US FDA official methods to calculate the PER are as stated in the Official Methods of Analysis of AOAC International, 16th ed. (1995) Section 45.3.05, AOAC Official Method 982.30 Protein Efficiency Ratio Calculation Method; and Official Methods of Analysis of AOAC International, 18th ed. (2005).
formula_0
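A minimal worked example of the ratio (the figures are invented for illustration only):

```python
def protein_efficiency_ratio(mass_gain_g, protein_intake_g):
    """PER = gain in body mass (g) divided by protein intake (g) over the test period."""
    return mass_gain_g / protein_intake_g

# Hypothetical test: 75 g of weight gain on 30 g of protein consumed
print(protein_efficiency_ratio(75.0, 30.0))   # 2.5
```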
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "PER \\,= \\frac{Gain\\ in\\ body\\ mass(g)}{Protein\\ intake (g)}"
}
]
| https://en.wikipedia.org/wiki?curid=1348988 |
1349150 | Heating element | Device that converts electricity into heat
A heating element is a device that converts electrical energy into heat, consisting of a heating resistor and accessories. Heat is generated by the passage of electric current through the resistor, a process known as Joule heating. Heating elements are used in household appliances, industrial equipment, and scientific instruments, enabling them to perform tasks such as cooking, warming, or maintaining temperatures above ambient.
Heating elements may be used to transfer heat via conduction, convection, or radiation. They are distinct from devices that generate heat from electrical energy via the Peltier effect, in that the heat they produce does not depend on the direction of the electric current.
Principles of operation.
Resistance & resistivity.
Materials used in heating elements have a relatively high electrical resistivity, which is a measure of the material's ability to resist electric current. The electrical resistance of a given amount of element material is given by "Pouillet's law" as formula_0, where formula_1 is the resistance, formula_2 is the resistivity of the material, formula_3 is its length, and formula_4 is its cross-sectional area.
The "resistance per wire length" (Ω/m) of a heating element material is defined in ASTM and DIN standards. In ASTM, wires greater than 0.127 mm in diameter are specified to be held within a tolerance of ±5% of the nominal Ω/m value, and thinner wires within ±8%.
Power density.
Heating element performance is often quantified by characterizing the power density of the element. Power density is defined as the output power, P, from a heating element divided by the heated surface area, A, of the element. In mathematical terms it is given as:
formula_5
Power density is a measure of heat flux (denoted Φ) and is most often expressed in watts per square millimeter or watts per square inch.
Heating elements with low power density tend to be more expensive but have longer life than heating elements with high power density.
In the United States, power density is often referred to as 'watt density.' It is also sometimes referred to as 'wire surface load.'
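A short design-check sketch in Python tying Pouillet's law to the power density definition above; the resistivity is an approximate handbook value for Ni-Cr 80/20, and every other number is an assumption chosen only for illustration:

```python
import math

rho_nichrome = 1.10e-6   # approximate resistivity of Ni-Cr 80/20, ohm*m
d_wire = 0.50e-3         # wire diameter, m (assumed)
length = 4.0             # wire length, m (assumed)
voltage = 120.0          # supply voltage, V (assumed)

cross_section = math.pi * (d_wire / 2.0) ** 2
R = rho_nichrome * length / cross_section      # Pouillet's law: R = rho * l / A
P = voltage ** 2 / R                           # Joule heating at the given voltage

heated_surface = math.pi * d_wire * length     # lateral surface of the bare wire
watt_density = P / heated_surface              # power density, Phi = P / A

print(f"R = {R:.1f} ohm, P = {P:.0f} W, power density = {watt_density / 1.0e4:.1f} W/cm^2")
```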
Components.
Resistance heater.
Wire.
"Resistance wire"s are very long and slender resistors that have a circular cross-section. Like conductive wire, the diameter of resistance wire is often measured with a gauge system, such as American Wire Gauge (AWG).
Ribbon.
"Resistance ribbon" heating elements are made by flattening round resistance wire, giving them a rectangular cross-section with rounded corners.54 Generally ribbon widths are between 0.3 and 4 mm. If a ribbon is wider than that, it is cut out from a broader strip and may instead be called resistance "strip". Compared to wire, ribbon can be bent with a tighter radius and can produce heat faster and at a lower cost due to its higher surface area to volume ratio. On the other hand, ribbon life is often shorter than wire life and the price per unit mass of ribbon is generally higher.55 In many applications, resistance ribbon is wound around a mica card or on one of its sides.57
Coil.
"Resistance coil" is a resistance wire that has a coiled shape.100 Coils are wound very tightly and then relax to up to 10 times their original length in use. Coils are classified by their diameter and the pitch, or number of coils per unit length.
Insulator.
Heating element "insulators" serve to electrically and thermally insulate the resistance heater from the environment and foreign objects. Generally, for elements that operate above 600 °C, ceramic insulators are used. Aluminum oxide, silicon dioxide, and magnesium oxide are compounds commonly used in ceramic heating element insulators. For lower temperatures a wider range of materials are used.
Leads.
Electrical leads serve to connect a heating element to a power source. They generally are made of conductive materials such as copper that do not have as high of a resistance to oxidation as the active resistance material.
Terminals.
Heating element terminals serve to isolate the active resistance material from the leads. Terminals are designed to have a lower resistance than the active material by having with a lower resistivity and/or a larger diameter. They may also have a lower oxidation resistance than the active material.
Types.
Heating elements are generally classified in one of three frameworks: "suspended", "embedded", or "supported".
Tubes (Calrods®).
Tubular or sheathed elements (also referred to by their brand name, Calrods®) normally comprise a fine coil of resistance wire surrounded by an electrical insulator and a metallic tube-shaped sheath or casing. Insulation is typically a magnesium oxide powder and the sheath is normally constructed of a copper or steel alloy. To keep moisture out of the hygroscopic insulator, the ends are equipped with beads of insulating material such as ceramic or silicone rubber, or a combination of both. The tube is drawn through a die to compress the powder and maximize heat transmission. These can be a straight rod (as in toaster ovens) or bent to a shape to span an area to be heated (such as in electric stoves, ovens, and coffee makers).
Screen-printed elements.
Screen-printed metal–ceramic tracks deposited on ceramic-insulated metal (generally steel) plates have found widespread application as elements in kettles and other domestic appliances since the mid-1990s.
Radiative elements.
Radiative heating elements (heat lamps) are high-powered incandescent lamps that run at less than maximum power to radiate mostly infrared instead of visible light. These are usually found in radiant space heaters and food warmers, taking either a long, tubular form or an "R40" reflector-lamp form. The reflector lamp style is often tinted red to minimize the visible light produced; the tubular form comes in several formats.
Removable ceramic core elements.
Removable ceramic core elements use a coiled resistance heating alloy wire threaded through one or more cylindrical ceramic segments to make a required length (related to output), with or without a center rod. Inserted into a metal sheath or tube sealed at one end, this type of element allows replacement or repair without breaking into the process involved, usually fluid heating under pressure.
Etched foil elements.
Etched foil elements are generally made from the same alloys as resistance wire elements, but are produced with a subtractive photo-etching process that starts with a continuous sheet of metal foil and ends with a complex resistance pattern. These elements are commonly found in precision heating applications like medical diagnostics and aerospace.
Polymer PTC heating elements.
Resistive heaters can be made of conducting PTC rubber materials where the resistivity increases exponentially with increasing temperature. Such a heater will produce high power when it is cold, and rapidly heat itself to a constant temperature. Due to the exponentially increasing resistivity, the heater can never heat itself to warmer than this temperature. Above this temperature, the rubber acts as an electrical insulator. The temperature can be chosen during the production of the rubber. Typical temperatures are between .
It is a point-wise self-regulating and self-limiting heater. "Self-regulating" means that every point of the heater independently keeps a constant temperature without the need of regulating electronics. "Self-limiting" means that the heater can never exceed a certain temperature in any point and requires no overheat protection.
Thick-film heaters.
Thick-film heaters are a type of resistive heater that can be printed on a thin substrate. Thick-film heaters exhibit various advantages over the conventional metal-sheathed resistance elements. In general, thick-film elements are characterized by their low-profile form factor, improved temperature uniformity, quick thermal response due to low thermal mass, high energy density, and wide range of voltage compatibility. Typically, thick-film heaters are printed on flat substrates, as well as on tubes in different heater patterns. These heaters can attain power densities of as high as 100 W/cm2 depending on the heat transfer conditions. The thick-film heater patterns are highly customizable based on the sheet resistance of the printed resistor paste.
These heaters can be printed on a variety of substrates including metal, ceramic, glass, and polymer using metal- or alloy-loaded thick-film pastes. The most common substrates used to print thick-film heaters are aluminum 6061-T6, stainless steel, and muscovite or phlogopite mica sheets. The applications and operational characteristics of these heaters vary widely based on the chosen substrate materials. This is primarily attributed to the thermal characteristics of the substrates.
There are several conventional applications of thick-film heaters. They can be used in griddles, waffle irons, stove-top electric heating, humidifiers, tea kettles, heat sealing devices, water heaters, clothes irons and steamers, hair straighteners, boilers, heated beds of 3D printers, thermal print heads, glue guns, laboratory heating equipment, clothes dryers, baseboard heaters, warming trays, heat exchangers, deicing and defogging devices for car windshields, side mirrors, refrigerator defrosting, etc.
For most applications, the thermal performance and temperature distribution are the two key design parameters. In order to maintain a uniform temperature distribution across a substrate, the circuit design can be optimized by changing the localized power density of the resistor circuit. An optimized heater design helps to control the heating power and modulate the local temperatures across the heater substrate. In cases where there is a requirement of two or more heating zones with different power densities over a relatively small area, a thick-film heater can be designed to achieve a zonal heating pattern on a single substrate.
Thick-film heaters can largely be characterized under two subcategories – negative-temperature-coefficient (NTC) and positive-temperature-coefficient (PTC) materials – based on the effect of temperature changes on the element's resistance. NTC-type heaters are characterized by a decrease in resistance as the heater temperature increases and thus have a higher power at higher temperatures for a given input voltage. PTC heaters behave in an opposite manner with an increase of resistance and decreasing heater power at elevated temperatures. This characteristic of PTC heaters makes them self-regulating, as their power stabilizes at fixed temperatures. On the other hand, NTC-type heaters generally require a thermostat or a thermocouple in order to control the heater runaway. These heaters are used in applications which require a quick ramp-up of heater temperature to a predetermined set-point as they are usually faster-acting than PTC-type heaters.
Liquid.
An electrode boiler uses electricity flowing through streams of water to create steam. Operating voltages are typically between 240 and 600 volts, single or three-phase AC.
Laser heaters.
Laser heaters are heating elements used for achieving very high temperatures.
Materials.
Materials used in heating elements are selected for a variety of mechanical, thermal, and electrical properties. Due to the wide range of operating temperatures that these elements withstand, temperature dependencies of material properties are a common consideration.
Metal alloys.
Resistance heating alloys are metals that can be used for electrical heating purposes above 600 °C in air. They can be distinguished from resistance alloys which are used primarily for resistors operating below 600 °C.
While the majority of atoms in these alloys correspond to the ones listed in their name, they also contain trace elements. Trace elements play an important role in resistance alloys, as they have a substantial influence on mechanical properties such as workability, form stability, and oxidation life. Some of these trace elements may be present in the basic raw materials, while others may be added deliberately to improve the performance of the material. The terms "contaminants" and "enhancements" are used to classify trace elements. Contaminants typically have undesirable effects such as decreased life and limited temperature range. Enhancements are intentionally added by the manufacturer and may provide improvements such as increased oxide layer adhesion, greater ability to hold shape, or longer life at higher temperatures.
The most common alloys used in heating elements include:
Ni-Cr(Fe) alloys (AKA nichrome, Chromel).
Ni-Cr(Fe) resistance heating alloys, also known as nichrome or Chromel, are described by both ASTM and DIN standards. These standards specify the relative percentages of nickel and chromium that should be present in an alloy; ASTM specifies three such alloys, each of which also contains other trace elements.
Nichrome 80/20 is one of the most commonly used resistance heating alloys because it has relatively high resistance and forms an adherent layer of chromium oxide when it is heated for the first time. Material beneath this layer will not oxidize, preventing the wire from breaking or burning out.
Fe-Cr-Al alloys (AKA Kanthal®).
Fe-Cr-Al resistance heating alloys, also known as Kanthal®, are described by an ASTM standard. Manufacturers may opt to use this class of alloys instead of Ni-Cr(Fe) alloys to avoid the typically higher cost of nickel as a raw material compared to aluminum. The tradeoff is that Fe-Cr-Al alloys are more brittle and less ductile than Ni-Cr(Fe) ones, making them more delicate and prone to failure.
On the other hand, the aluminum oxide layer that forms on the surface of Fe-Cr-Al alloys is more thermodynamically stable than the chromium oxide layer that tends to form on Ni-Cr(Fe), making Fe-Cr-Al better at resisting corrosion. However, humidity may be more detrimental to the wire life of Fe-Cr-Al than Ni-Cr(Fe).
Fe-Cr-Al alloys, like stainless steels, tend to undergo embrittlement at room temperature after being heated in the temperature range of 400 to 575 °C for an extended duration.
Applications.
Heating elements find application in a wide range of domestic, commercial, and industrial settings.
Life cycle.
The life of a heating element specifies how long it is expected to last in an application. Generally heating elements in a domestic appliance will be rated for between 500 and 5000 hours of use, depending on the type of product and how it is used.
A thinner wire or ribbon will always have a shorter life than a thicker one at the same temperature.
Standardized life tests for resistance heating materials are described by ASTM International. Accelerated life tests for Ni-Cr(Fe) alloys and Fe-Cr-Al alloys intended for electrical heating are used to measure the cyclic oxidation resistance of materials.
Packaging.
Resistance wire and ribbon are most often shipped wound around spools. Generally the thinner the wire, the smaller the spool. In some cases pail packs or rings may be used instead of spools.
Safety.
General safety requirements for heating elements used in household appliances are defined by the International Electrotechnical Commission (IEC). The standard specifies limits for parameters such as insulation strength, creepage distance, and leakage current. It also provides tolerances on the rating of a heating element.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "R = \\rho \\frac{\\ell}{A}"
},
{
"math_id": 1,
"text": "R"
},
{
"math_id": 2,
"text": "\\rho"
},
{
"math_id": 3,
"text": "\\ell"
},
{
"math_id": 4,
"text": "A"
},
{
"math_id": 5,
"text": "\\Phi=P/A "
}
]
| https://en.wikipedia.org/wiki?curid=1349150 |
1349163 | Mass deficit | A mass deficit is the amount of mass (in stars) that has been removed from the center of a galaxy, presumably by the action of a binary supermassive black hole.
The density of stars increases toward the center in most galaxies. In small galaxies, this increase continues into the very center. In large galaxies, there is usually a "core", a region near the center where the density is constant or slowly rising. The size of the core – the "core radius" – can be a few hundred parsecs in large elliptical galaxies.
The greatest observed stellar cores reach 3.2 to 5.7 kiloparsecs in radius.
It is believed that cores are produced by binary supermassive black holes (SMBHs). Binary SMBHs form during the merger of two galaxies. If a star passes near the massive binary, it will be ejected, by a process called the gravitational slingshot. This ejection continues until most of the stars near the center of the galaxy have been removed. The result is a low-density core. Such cores are ubiquitous in giant elliptical galaxies.
The mass deficit is defined as the amount of mass that was removed in creating the core. Mathematically, the mass deficit is defined asformula_0
where "ρ"i is the original density, "ρ" is the observed density, and "R"c is the core radius. In practice, the core-Sersic model can be used to help quantify the deficits.
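A small numerical sketch of the definition in Python (the density profiles and core radius are illustrative assumptions, not fits to any galaxy):

```python
import numpy as np
from scipy.integrate import quad

rho0, r0 = 1.0, 1.0     # normalization of the density profile (arbitrary units)
R_c = 0.5               # assumed core radius

def rho_initial(r):     # steeply rising inner profile before the core was carved out
    return rho0 * (r / r0) ** -2

def rho_observed(r):    # flat core: constant density inside R_c, unchanged outside
    return rho0 * (R_c / r0) ** -2 if r < R_c else rho_initial(r)

# integrate just above r = 0 to avoid evaluating the r**-2 profile exactly at zero
M_def, _ = quad(lambda r: (rho_initial(r) - rho_observed(r)) * r**2, 1e-12, R_c)
M_def *= 4.0 * np.pi

print(M_def, (8.0 * np.pi / 3.0) * rho0 * r0**2 * R_c)   # numerical value vs. closed form for this toy profile
```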
Observed mass deficits are typically in the range of one to a few times the mass of the central SMBH, and observed core radii are comparable to the influence radii of the central SMBH. These properties are consistent with what is predicted in theoretical models of core formation and lend support to the hypothesis that all bright galaxies once contained binary SMBHs at their centers.
It is not known whether most galaxies still contain massive binaries, or whether the two black holes have coalesced. Both possibilities are consistent with the presence of mass deficits.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "M_\\mathrm{def} = 4 \\pi \\int_0^{R_c} \\left[\\rho_i(r) - \\rho(r) \\right]r^2 dr,"
}
]
| https://en.wikipedia.org/wiki?curid=1349163 |
1349294 | Bell series | In mathematics, the Bell series is a formal power series used to study properties of arithmetical functions. Bell series were introduced and developed by Eric Temple Bell.
Given an arithmetic function formula_0 and a prime formula_1, define the formal power series formula_2, called the Bell series of formula_0 modulo formula_1 as:
formula_3
Two multiplicative functions can be shown to be identical if all of their Bell series are equal; this is sometimes called the "uniqueness theorem": given multiplicative functions formula_0 and formula_4, one has formula_5 if and only if:
formula_6 for all primes formula_1.
Two series may be multiplied (sometimes called the "multiplication theorem"): For any two arithmetic functions formula_0 and formula_4, let formula_7 be their Dirichlet convolution. Then for every prime formula_1, one has:
formula_8
In particular, this makes it trivial to find the Bell series of a Dirichlet inverse.
If formula_0 is completely multiplicative, then formally:
formula_9
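A brute-force numerical sketch of the multiplication theorem in Python (the functions and the prime are chosen only for illustration): the Bell series coefficients of a Dirichlet convolution equal the Cauchy product of the factors' coefficients.

```python
from math import gcd

def bell_series(f, p, n_terms=6):
    """First n_terms coefficients of the Bell series of f modulo p: f(1), f(p), f(p^2), ..."""
    return [f(p ** n) for n in range(n_terms)]

def dirichlet_convolution(f, g):
    """h = f * g with h(n) = sum over divisors d of n of f(d) * g(n // d)."""
    return lambda n: sum(f(d) * g(n // d) for d in range(1, n + 1) if n % d == 0)

def series_product(a, b):
    """Cauchy product of two truncated coefficient lists."""
    return [sum(a[i] * b[k - i] for i in range(k + 1)) for k in range(min(len(a), len(b)))]

phi = lambda n: sum(1 for j in range(1, n + 1) if gcd(j, n) == 1)   # Euler's totient
one = lambda n: 1                                                   # constant function 1

p = 3
h = dirichlet_convolution(phi, one)   # phi * 1 is the identity function n -> n
print(bell_series(h, p))                                            # [1, 3, 9, 27, 81, 243]
print(series_product(bell_series(phi, p), bell_series(one, p)))     # the same list, as the theorem predicts
```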
Examples.
The following is a table of the Bell series of well-known arithmetic functions. | [
{
"math_id": 0,
"text": "f"
},
{
"math_id": 1,
"text": "p"
},
{
"math_id": 2,
"text": "f_p(x)"
},
{
"math_id": 3,
"text": "f_p(x)=\\sum_{n=0}^\\infty f(p^n)x^n."
},
{
"math_id": 4,
"text": "g"
},
{
"math_id": 5,
"text": "f=g"
},
{
"math_id": 6,
"text": "f_p(x)=g_p(x)"
},
{
"math_id": 7,
"text": "h=f*g"
},
{
"math_id": 8,
"text": "h_p(x)=f_p(x) g_p(x).\\,"
},
{
"math_id": 9,
"text": "f_p(x)=\\frac{1}{1-f(p)x}."
},
{
"math_id": 10,
"text": "\\mu"
},
{
"math_id": 11,
"text": "\\mu_p(x)=1-x."
},
{
"math_id": 12,
"text": "\\mu_p^2(x) = 1+x."
},
{
"math_id": 13,
"text": "\\varphi"
},
{
"math_id": 14,
"text": "\\varphi_p(x)=\\frac{1-x}{1-px}."
},
{
"math_id": 15,
"text": "\\delta"
},
{
"math_id": 16,
"text": "\\delta_p(x)=1."
},
{
"math_id": 17,
"text": "\\lambda"
},
{
"math_id": 18,
"text": "\\lambda_p(x)=\\frac{1}{1+x}."
},
{
"math_id": 19,
"text": "(\\textrm{Id}_k)_p(x)=\\frac{1}{1-p^kx}."
},
{
"math_id": 20,
"text": "\\operatorname{Id}_k(n)=n^k"
},
{
"math_id": 21,
"text": "\\sigma_k"
},
{
"math_id": 22,
"text": "(\\sigma_k)_p(x)=\\frac{1}{(1-p^kx)(1-x)}."
},
{
"math_id": 23,
"text": "1_p(x) = (1-x)^{-1}"
},
{
"math_id": 24,
"text": "f(n) = 2^{\\omega(n)} = \\sum_{d|n} \\mu^2(d)"
},
{
"math_id": 25,
"text": "f_p(x) = \\frac{1+x}{1-x}."
},
{
"math_id": 26,
"text": "f(p^{n+1}) = f(p) f(p^n) - g(p) f(p^{n-1})"
},
{
"math_id": 27,
"text": "n \\geq 1"
},
{
"math_id": 28,
"text": "f_p(x) = \\left(1-f(p)x + g(p)x^2\\right)^{-1}."
},
{
"math_id": 29,
"text": "\\mu_k(n) = \\sum_{d^k|n} \\mu_{k-1}\\left(\\frac{n}{d^k}\\right) \\mu_{k-1}\\left(\\frac{n}{d}\\right)"
},
{
"math_id": 30,
"text": "(\\mu_k)_p(x) = \\frac{1-2x^k+x^{k+1}}{1-x}."
}
]
| https://en.wikipedia.org/wiki?curid=1349294 |
13493012 | Relative volatility | Relative volatility is a measure comparing the vapor pressures of the components in a liquid mixture of chemicals. This quantity is widely used in designing large industrial distillation processes. In effect, it indicates the ease or difficulty of using distillation to separate the more volatile components from the less volatile components in a mixture. By convention, relative volatility is usually denoted as formula_0.
Relative volatilities are used in the design of all types of distillation processes as well as other separation or absorption processes that involve the contacting of vapor and liquid phases in a series of equilibrium stages.
Relative volatilities are not used in separation or absorption processes that involve components reacting with each other (for example, the absorption of gaseous carbon dioxide in aqueous solutions of sodium hydroxide).
Definition.
For a liquid mixture of two components (called a "binary mixture") at a given temperature and pressure, the relative volatility is defined as
formula_1
When their liquid concentrations are equal, more volatile components have higher vapor pressures than less volatile components. Thus, a formula_2 value (= formula_3) for a more volatile component is larger than a formula_2 value for a less volatile component. That means that formula_0 ≥ 1 since the larger formula_2 value of the more volatile component is in the numerator and the smaller formula_2 of the less volatile component is in the denominator.
formula_0 is a unitless quantity. When the volatilities of both key components are equal, formula_0 = 1 and separation of the two by distillation would be impossible under the given conditions because the compositions of the liquid and the vapor phase are the same (azeotrope). As the value of formula_0 increases above 1, separation by distillation becomes progressively easier.
A liquid mixture containing two components is called a binary mixture. When a binary mixture is distilled, complete separation of the two components is rarely achieved. Typically, the overhead fraction from the distillation column consists predominantly of the more volatile component and some small amount of the less volatile component and the bottoms fraction consists predominantly of the less volatile component and some small amount of the more volatile component.
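A small sketch in Python of how the definition is used in a binary case (the K-values are assumed numbers for illustration, not taken from any chart):

```python
def relative_volatility(K_light, K_heavy):
    """alpha = K_LK / K_HK from the vapor-liquid equilibrium ratios."""
    return K_light / K_heavy

def vapor_mole_fraction(x_light, alpha):
    """For a binary mixture, alpha = (y/(1-y)) / (x/(1-x)) rearranges to
    y = alpha*x / (1 + (alpha - 1)*x)."""
    return alpha * x_light / (1.0 + (alpha - 1.0) * x_light)

alpha = relative_volatility(K_light=1.8, K_heavy=0.9)
print(alpha)                              # 2.0, comfortably above 1.05, so distillation is practical
print(vapor_mole_fraction(0.40, alpha))   # a liquid with 40% light key gives ~57% light key in the vapor
```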
A liquid mixture containing many components is called a multi-component mixture. When a multi-component mixture is distilled, the overhead fraction and the bottoms fraction typically contain much more than one or two components. For example, some intermediate products in an oil refinery are multi-component liquid mixtures that may contain alkane, alkene and alkyne hydrocarbons ranging from methane, with one carbon atom, to decanes, with ten carbon atoms. For distilling such a mixture, the distillation column may be designed (for example) to produce an overhead fraction consisting predominantly of propane and lighter components and a bottoms fraction consisting predominantly of isobutane and heavier components.
Such a distillation column is typically called a depropanizer.
The designer would designate the key components governing the separation design to be propane as the so-called light key (LK) and isobutane as the so-called heavy key (HK). In that context, a lighter component means a component with a lower boiling point (or a higher vapor pressure) and a heavier component means a component with a higher boiling point (or a lower vapor pressure).
Thus, for the distillation of any multi-component mixture, the relative volatility is often defined as
formula_4
Large-scale industrial distillation is rarely undertaken if the relative volatility is less than 1.05.
The values of formula_2 have been correlated empirically or theoretically in terms of temperature, pressure and phase compositions in the form of equations, tables or graphs such as the well-known DePriester charts.
formula_2 values are widely used in the design of large-scale distillation columns for distilling multi-component mixtures in oil refineries, petrochemical and chemical plants, natural gas processing plants and other industries.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\alpha"
},
{
"math_id": 1,
"text": "\\alpha=\\frac {(y_i/x_i)}{(y_j/x_j)} = K_i/K_j"
},
{
"math_id": 2,
"text": "K"
},
{
"math_id": 3,
"text": "y/x"
},
{
"math_id": 4,
"text": "\\alpha=\\frac {(y_{LK}/x_{LK})}{(y_{HK}/x_{HK})} = K_{LK}/K_{HK}"
}
]
| https://en.wikipedia.org/wiki?curid=13493012 |
13495046 | Quantum spin Hall effect | The quantum spin Hall state is a state of matter proposed to exist in special, two-dimensional semiconductors that have a quantized spin-Hall conductance and a vanishing charge-Hall conductance. The quantum spin Hall state of matter is the cousin of the integer quantum Hall state, but unlike that state it does not require the application of a large magnetic field. The quantum spin Hall state does not break charge conservation symmetry and spin-formula_0 conservation symmetry (in order to have well defined Hall conductances).
Description.
The first proposal for the existence of a quantum spin Hall state was developed by Charles Kane and Gene Mele who adapted an earlier model for graphene by F. Duncan M. Haldane which exhibits an integer quantum Hall effect. The Kane and Mele model is two copies of the Haldane model such that the spin up electron exhibits a chiral integer quantum Hall Effect while the spin down electron exhibits an anti-chiral integer quantum Hall effect. A relativistic version of the quantum spin Hall effect was introduced in the 1990s for the numerical simulation of chiral gauge theories; the simplest example consisting of a parity and time reversal symmetric U(1) gauge theory with bulk fermions of opposite sign mass, a massless Dirac surface mode, and bulk currents that carry chirality but not charge (the spin Hall current analogue). Overall the Kane-Mele model has a charge-Hall conductance of exactly zero but a spin-Hall conductance of exactly formula_1 (in units of formula_2). Independently, a quantum spin Hall model was proposed by Andrei Bernevig and Shoucheng Zhang in an intricate strain architecture which engineers, due to spin-orbit coupling, a magnetic field pointing upwards for spin-up electrons and a magnetic field pointing downwards for spin-down electrons. The main ingredient is the existence of spin–orbit coupling, which can be understood as a momentum-dependent magnetic field coupling to the spin of the electron.
Real experimental systems, however, are far from the idealized picture presented above in which spin-up and spin-down electrons are not coupled. A very important achievement was the realization that the quantum spin Hall state remains non-trivial even after the introduction of spin-up spin-down scattering, which destroys the quantum spin Hall effect. In a separate paper, Kane and Mele introduced a topological formula_3 invariant which characterizes a state as a trivial or non-trivial band insulator (regardless of whether the state exhibits a quantum spin Hall effect). Further stability studies of the edge liquid through which conduction takes place in the quantum spin Hall state proved, both analytically and numerically, that the non-trivial state is robust to both interactions and extra spin-orbit coupling terms that mix spin-up and spin-down electrons. Such a non-trivial state (whether or not it exhibits a quantum spin Hall effect) is called a topological insulator, which is an example of symmetry-protected topological order protected by charge conservation symmetry and time reversal symmetry. (Note that the quantum spin Hall state is also a symmetry-protected topological state, protected by charge conservation symmetry and spin-formula_0 conservation symmetry. Time reversal symmetry is not needed to protect the quantum spin Hall state. The topological insulator and the quantum spin Hall state are thus different symmetry-protected topological states, and so they are different states of matter.)
In HgTe quantum wells.
Since graphene has extremely weak spin-orbit coupling, it is very unlikely to support a quantum spin Hall state at temperatures achievable with today's technologies. Two-dimensional topological insulators (also known as the quantum spin Hall insulators) with one-dimensional helical edge states were predicted in 2006 by Bernevig, Hughes and Zhang to occur in quantum wells (very thin layers) of mercury telluride sandwiched between cadmium telluride, and were observed in 2007.
Different quantum wells of varying HgTe thickness can be built. When the sheet of HgTe in between the CdTe is thin, the system behaves like an ordinary insulator and does not conduct when the Fermi level resides in the band-gap. When the sheet of HgTe is varied and made thicker (this requires the fabrication of separate quantum wells), an interesting phenomenon happens. Due to the inverted band structure of HgTe, at some critical HgTe thickness, a Lifshitz transition occurs in which the system closes the bulk band gap to become a semi-metal, and then re-opens it to become a quantum spin Hall insulator.
In the gap closing and re-opening process, two edge states are brought out from the bulk and cross the bulk-gap. As such, when the Fermi level resides in the bulk gap, the conduction is dominated by the edge channels that cross the gap. The two-terminal conductance is formula_4 in the quantum spin Hall state and zero in the normal insulating state. As the conduction is dominated by the edge channels, the value of the conductance should be insensitive to how wide the sample is. A magnetic field should destroy the quantum spin Hall state by breaking time-reversal invariance and allowing spin-up spin-down electron scattering processes at the edge. All these predictions have been experimentally verified in an experiment performed in the Molenkamp labs at Universität Würzburg in Germany. | [
{
"math_id": 0,
"text": "S_z"
},
{
"math_id": 1,
"text": "\\sigma_{xy}^\\text{spin}=2"
},
{
"math_id": 2,
"text": "\\frac{e}{4 \\pi}"
},
{
"math_id": 3,
"text": "\\mathbb{Z}_2"
},
{
"math_id": 4,
"text": "G_{xx}=2 \\frac{e^2}{h}"
}
]
| https://en.wikipedia.org/wiki?curid=13495046 |
13496530 | Category algebra | In category theory, a field of mathematics, a category algebra is an associative algebra, defined for any locally finite category and commutative ring with unity. Category algebras generalize the notions of group algebras and incidence algebras, just as categories generalize the notions of groups and partially ordered sets.
Definition.
If the given category is finite (has finitely many objects and morphisms), then the following two definitions of the category algebra agree.
Group algebra-style definition.
Given a group "G" and a commutative ring "R", one can construct "RG", known as the group algebra; it is an "R"-module equipped with a multiplication. A group is the same as a category with a single object in which all morphisms are isomorphisms (where the elements of the group correspond to the morphisms of the category), so the following construction generalizes the definition of the group algebra from groups to arbitrary categories.
Let "C" be a category and "R" be a commutative ring with unity. Define "RC" (or "R"["C"]) to be the free "R"-module with the set formula_0 of morphisms of "C" as its basis. In other words, "RC" consists of formal linear combinations (which are finite sums) of the form formula_1, where "fi" are morphisms of "C", and "ai" are elements of the ring "R". Define a multiplication operation on "RC" as follows, using the composition operation in the category:
formula_2
where formula_3 if their composition is not defined. This defines a binary operation on "RC", and moreover makes "RC" into an associative algebra over the ring "R". This algebra is called the category algebra of "C".
From a different perspective, elements of the free module "RC" could also be considered as functions from the morphisms of "C" to "R" which are finitely supported. Then the multiplication is described by a convolution: if formula_4 (thought of as functionals on the morphisms of "C"), then their product is defined as:
formula_5
The latter sum is finite because the functions are finitely supported, and therefore formula_6.
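The convolution formula can be made concrete with a small computational sketch (not drawn from the article). The Python code below encodes the poset 0 ≤ 1 ≤ 2 as a category whose morphisms are pairs (source, target); the names MORPHISMS, compose and convolve, and the representation of elements of "RC" as finitely supported dictionaries, are illustrative choices.
MORPHISMS = [(i, j) for i in range(3) for j in range(3) if i <= j]

def compose(f, g):
    """Return f o g (apply g first), or None if the composite is undefined."""
    return (g[0], f[1]) if g[1] == f[0] else None

def convolve(a, b):
    """(a * b)(h) = sum over f o g = h of a(f) * b(g)."""
    product = {}
    for f, coeff_f in a.items():
        for g, coeff_g in b.items():
            h = compose(f, g)
            if h is not None:
                product[h] = product.get(h, 0) + coeff_f * coeff_g
    return product

# The identity element is the sum of the identity morphisms with coefficient 1.
identity = {(i, i): 1 for i in range(3)}
a = {(0, 1): 2, (1, 1): 5}
print(convolve(identity, a) == a)          # True: the identity leaves a unchanged
print(convolve({(1, 2): 1}, {(0, 1): 1}))  # {(0, 2): 1}, since (1,2) o (0,1) = (0,2)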
Incidence algebra-style definition.
The definition used for incidence algebras assumes that the category "C" is locally finite (see below), is "dual" to the above definition, and defines a "different" object. This isn't a useful assumption for groups, as a group that is locally finite as a category is finite.
A locally finite category is one where every morphism can be written in only finitely many ways as the composition of two non-identity morphisms (not to be confused with the "has finite Hom-sets" meaning). The category algebra (in this sense) is defined as above, except that the formal sums are no longer required to be finite: all of the coefficients may be non-zero.
In terms of formal sums, the elements are all formal sums
formula_7
where there are no restrictions on the formula_8 (they can all be non-zero).
In terms of functions, the elements are any functions from the morphisms of "C" to "R", and multiplication is defined as convolution. The sum in the convolution is always finite because of the local finiteness assumption.
Dual.
The module dual of the category algebra (in the group algebra sense of the definition) is the space of all maps from the morphisms of "C" to "R", denoted "F"("C"), and has a natural coalgebra structure. Thus for a locally finite category, the dual of a category algebra (in the group algebra sense) is the category algebra (in the incidence algebra sense), and has both an algebra and coalgebra structure.
References.
<templatestyles src="Refbegin/styles.css" /> | [
{
"math_id": 0,
"text": "\\operatorname{Hom}C"
},
{
"math_id": 1,
"text": "\\sum a_i f_i"
},
{
"math_id": 2,
"text": "\\sum a_i f_i \\sum b_j g_j = \\sum a_i b_j f_i g_j"
},
{
"math_id": 3,
"text": "f_i g_j=0"
},
{
"math_id": 4,
"text": "a, b \\in RC"
},
{
"math_id": 5,
"text": "(a * b)(h) := \\sum_{fg=h} a(f)b(g)."
},
{
"math_id": 6,
"text": "a * b \\in RC"
},
{
"math_id": 7,
"text": "\\sum_{f_i \\in \\operatorname{Hom}C} a_i f_i,"
},
{
"math_id": 8,
"text": "a_i"
},
{
"math_id": 9,
"text": " R^{n \\times n} "
},
{
"math_id": 10,
"text": "C \\rightarrow R"
}
]
| https://en.wikipedia.org/wiki?curid=13496530 |
1349666 | Normalized number | In applied mathematics, a number is normalized when it is written in scientific notation with one non-zero decimal digit before the decimal point. Thus, a real number, when written out in normalized scientific notation, is as follows:
formula_0
where "n" is an integer, formula_1 are the digits of the number in base 10, and formula_2 is not zero. That is, its leading digit (i.e., leftmost) is not zero and is followed by the decimal point. Simply speaking, a number is "normalized" when it is written in the form of "a" × 10"n" where 1 ≤ |"a"| < 10 without leading zeros in "a". This is the "standard form" of scientific notation. An alternative style is to have the first non-zero digit "after" the decimal point.
Examples.
As examples, the number 918.082 in normalized form is
formula_3
while the number -0.00574012 in normalized form is
formula_4
Clearly, any non-zero real number can be normalized.
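As a small illustrative sketch (not drawn from the article), the normalization can be computed in a few lines of Python; the helper name normalize is arbitrary, and floating-point rounding makes the printed mantissas approximate.
import math

def normalize(x):
    """Return (a, n) with x == a * 10**n and 1 <= abs(a) < 10, for non-zero x."""
    if x == 0:
        raise ValueError("zero has no normalized form")
    n = math.floor(math.log10(abs(x)))  # exponent of the leading digit
    return x / 10**n, n

print(normalize(918.082))      # approximately (9.18082, 2), i.e. 9.18082 x 10^2
print(normalize(-0.00574012))  # approximately (-5.74012, -3), i.e. -5.74012 x 10^-3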
Other bases.
The same definition holds if the number is represented in another radix (that is, base of enumeration), rather than base 10.
In base "b" a normalized number will have the form
formula_5
where again formula_6 and the digits, formula_1 are integers between formula_7 and formula_8.
In many computer systems, binary floating-point numbers are represented internally using this normalized form for their representations; for details, see normal number (computing). Although the point is described as "floating", for a normalized floating-point number, its position is fixed, the movement being reflected in the different values of the power.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\pm d_0 . d_1 d_2 d_3 \\dots \\times 10^n"
},
{
"math_id": 1,
"text": "d_0, d_1, d_2, d_3, \\ldots,"
},
{
"math_id": 2,
"text": "d_0"
},
{
"math_id": 3,
"text": "9.18082 \\times 10^2,"
},
{
"math_id": 4,
"text": "-5.74012 \\times 10^{-3}."
},
{
"math_id": 5,
"text": "\\pm d_0 . d_1 d_2 d_3 \\dots \\times b^n,"
},
{
"math_id": 6,
"text": "d_0 \\neq 0,"
},
{
"math_id": 7,
"text": "0"
},
{
"math_id": 8,
"text": "b - 1"
}
]
| https://en.wikipedia.org/wiki?curid=1349666 |
13497085 | David X. Li | Canadian quantitative analyst
David X. Li (born Nanjing, China in the 1960s) is a Chinese-born Canadian quantitative analyst and actuary who pioneered the use of Gaussian copula models for the pricing of collateralized debt obligations (CDOs) in the early 2000s. The "Financial Times" has called him "the world’s most influential actuary", while in the aftermath of the 2007–2008 financial crisis, for which Li's model has been partly blamed, his model has been called a "recipe for disaster" in the hands of those who did not fully understand his research and misapplied it. Widespread application of simplified Gaussian copula models to financial products such as securities may have contributed to the 2007–2008 financial crisis. David Li is currently an adjunct professor at the University of Waterloo in the Statistics and Actuarial Sciences department.
Early life and education.
Li was born as Li Xianglin and raised in a rural part of China during the 1960s; his family had been relocated during the Cultural Revolution to a rural village in southern China for "re-education". Li received a master's degree in economics from Nankai University, one of the country's most prestigious universities. After leaving China in 1987 at the behest of the Chinese government to study capitalism from the west, he earned an MBA from Laval University in Quebec, an MMath in Actuarial Science, and a PhD in statistics from the University of Waterloo in Waterloo, Ontario in 1995, with the thesis title "An estimating function approach to credibility theory", under the supervision of Distinguished Emeritus Professor Harry H. Panjer in the Statistics and Actuarial Science Department at the University of Waterloo.
Career.
Li began his career in finance in 1997 at the Canadian Imperial Bank of Commerce in the World Markets division. He moved to New York City in 2000, where he became a partner in J.P. Morgan's RiskMetrics unit. By 2003 he was director and global head of credit derivatives research at Citigroup. In 2004 he moved to Barclays Capital and led the credit quantitative analytics team. In 2008 Li moved to Beijing, where he worked for China International Capital Corporation as the head of the risk management department.
David Li is currently an adjunct professor at the University of Waterloo in the Statistics and Actuarial Sciences department. He is also a professor at the Shanghai Advanced Institute of Finance (SAIF).
CDOs and Gaussian copula.
Li's paper "On Default Correlation: A Copula Function Approach", published in 2000, was the first appearance of the Gaussian copula applied to CDOs, and it quickly became a tool for financial institutions to correlate associations between multiple financial securities. This supposedly allowed CDOs to be priced accurately for a wide range of investments that were previously too complex to price, such as mortgages.
However, in the aftermath of the 2007–2008 financial crisis, the model has been seen as a "recipe for disaster". According to Nassim Nicholas Taleb, "People got very excited about the Gaussian copula because of its mathematical elegance, but the thing never worked. Co-association between securities is not measurable using correlation"; in other words, "anything that relies on correlation is charlatanism."
Li himself apparently understood the fallacy of his model, in 2005 saying "Very few people understand the essence of the model." Li also wrote that "The current copula framework gains its popularity owing to its simplicity...However, there is little theoretical justification of the current framework from financial economics...We essentially have a credit portfolio model without solid credit portfolio theory." Kai Gilkes of CreditSights says "Li can't be blamed"; although he invented the model, it was the bankers who misinterpreted and misused it.
Li's paper.
Li's paper is called "On Default Correlation: A Copula Function Approach" (2000), published in the "Journal of Fixed Income", Vol. 9, Issue 4, pages 43–54. In sections 1 through 5.3, Li describes the actuarial mathematics that sets the stage for his theory. The mathematics is drawn from established statistical theory, actuarial models, and probability theory. In section 5.4, he uses the Gaussian copula to measure event relationships, or mathematically, "correlations", between random economic events, expressed as:
formula_0
In layman's terms, he proposes to quantify the relationship between two events "House A" defaulting and "House B" defaulting by looking at the dependence between their time-unit-default (or survival time; see survival analysis). While under some scenarios (such as real estate) this correlation appeared to work most of the time, the underlying problem is that the single numerical data of correlation is a poor way to summarize history, and hence is not enough to predict the future. From section 6.0 onward, the paper presents experimental results using the Gaussian copula.
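The copula expression above can be evaluated numerically. The following Python sketch is not Li's original code; it simply assumes SciPy's standard normal and bivariate normal distributions, and the function name, marginal default probabilities, and correlation value are invented for illustration.
from scipy.stats import norm, multivariate_normal

def gaussian_copula(u, v, rho):
    """C_rho(u, v) = Phi_2(Phi^{-1}(u), Phi^{-1}(v); rho)."""
    x = norm.ppf(u)  # Phi^{-1}(u)
    y = norm.ppf(v)  # Phi^{-1}(v)
    joint = multivariate_normal(mean=[0.0, 0.0], cov=[[1.0, rho], [rho, 1.0]])
    return joint.cdf([x, y])

# Hypothetical example: both credits have a 10% marginal default probability
# within the horizon, and the assumed correlation parameter is 0.3.
print(gaussian_copula(0.10, 0.10, 0.3))  # roughly 0.02, versus 0.01 if the events were independent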
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "C_\\rho(u,v) = \\Phi \\left(\\Phi^{-1}(u), \\Phi^{-1}(v); \\rho \\right) "
}
]
| https://en.wikipedia.org/wiki?curid=13497085 |
1350002 | Essential spectrum | In mathematics, the essential spectrum of a bounded operator (or, more generally, of a densely defined closed linear operator) is a certain subset of its spectrum, defined by a condition of the type that says, roughly speaking, "fails badly to be invertible".
The essential spectrum of self-adjoint operators.
In formal terms, let "X" be a Hilbert space and let "T" be a self-adjoint operator on "X".
Definition.
The essential spectrum of "T", usually denoted σess("T"), is the set of all complex numbers λ such that
formula_0
is not a Fredholm operator, where formula_1 denotes the "identity operator" on "X", so that formula_2 for all "x" in "X".
Properties.
The essential spectrum is always closed, and it is a subset of the spectrum. Since "T" is self-adjoint, the spectrum is contained on the real axis.
The essential spectrum is invariant under compact perturbations. That is, if "K" is a compact self-adjoint operator on "X", then the essential spectra of "T" and that of formula_3 coincide. This explains why it is called the "essential" spectrum: Weyl (1910) originally defined the essential spectrum of a certain differential operator to be the spectrum independent of boundary conditions.
"Weyl's criterion" is as follows. First, a number λ is in the "spectrum" of "T" if and only if there exists a sequence {ψ"k"} in the space "X" such that formula_4 and
formula_5
Furthermore, λ is in the "essential spectrum" if there is a sequence satisfying this condition, but such that it contains no convergent subsequence (this is the case if, for example formula_6 is an orthonormal sequence); such a sequence is called a "singular sequence".
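To make the criterion concrete, here is a small numerical sketch (not part of the article) for the bounded self-adjoint multiplication operator (Tψ)(x) = xψ(x) on L2(0, 1), whose spectrum is [0, 1] with no eigenvalues: normalized indicator functions of shrinking intervals around a point λ form a singular sequence, so every λ in [0, 1] lies in the essential spectrum. The grid discretization and the helper name weyl_residual are arbitrary choices.
import numpy as np

def weyl_residual(lam=0.5, k=10, grid=200000):
    """Return ||T psi_k - lam psi_k|| for psi_k = normalized indicator of |x - lam| < 1/(2k)."""
    dx = 1.0 / grid
    x = (np.arange(grid) + 0.5) * dx
    psi = (np.abs(x - lam) < 1.0 / (2 * k)).astype(float)
    psi /= np.sqrt(np.sum(psi**2) * dx)              # normalize in L^2(0, 1)
    return np.sqrt(np.sum(((x - lam) * psi)**2) * dx)

print(weyl_residual(k=10))   # about 0.029 (the exact value is 1 / (2 * sqrt(3) * k))
print(weyl_residual(k=100))  # about 0.0029 -> 0 as k grows, so lam = 0.5 is in the essential spectrum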
The discrete spectrum.
The essential spectrum is a subset of the spectrum σ, and its complement is called the discrete spectrum, so
formula_7
If "T" is self-adjoint, then, by definition, a number λ is in the "discrete spectrum" of "T" if it is an isolated eigenvalue of finite multiplicity, meaning that the space
formula_8
has finite but non-zero dimension and that there is an ε > 0 such that μ ∈ σ("T") and |μ−λ| < ε imply that μ and λ are equal.
The essential spectrum of closed operators in Banach spaces.
Let "X" be a Banach space
and let formula_10 be a closed linear operator on "X" with dense domain formula_11. There are several definitions of the essential spectrum, which are not equivalent.
1. The essential spectrum formula_12 is the set of all formula_9 such that formula_0 is not semi-Fredholm (an operator is semi-Fredholm if its range is closed and its kernel or its cokernel is finite-dimensional).
2. The essential spectrum formula_13 is the set of all formula_9 such that the range of formula_0 is not closed or the kernel of formula_0 is infinite-dimensional.
3. The essential spectrum formula_14 is the set of all formula_9 such that formula_0 is not Fredholm (an operator is Fredholm if its range is closed and its kernel and cokernel are both finite-dimensional).
4. The essential spectrum formula_15 is the set of all formula_9 such that formula_0 is not Fredholm of index zero (the index of a Fredholm operator is the difference between the dimensions of its kernel and its cokernel).
5. The essential spectrum formula_16 is the union of formula_12 with all components of formula_17 that do not intersect the resolvent set formula_18.
Each of the above-defined essential spectra formula_19, formula_20, is closed. Furthermore,
formula_21
and any of these inclusions may be strict. For self-adjoint operators, all the above definitions of the essential spectrum coincide.
Define the "radius" of the essential spectrum by
formula_22
Even though the spectra may be different, the radius is the same for all "k".
The definition of the set formula_13 is equivalent to Weyl's criterion: formula_13 is the set of all λ for which there exists a singular sequence.
The essential spectrum formula_19 is invariant under compact perturbations for "k" = 1,2,3,4, but not for "k" = 5.
The set formula_15 gives the part of the spectrum that is independent of compact perturbations, that is,
formula_23
where formula_24 denotes the set of compact operators on "X" (D.E. Edmunds and W.D. Evans, 1987).
The spectrum of a closed densely defined operator "T" can be decomposed into a disjoint union
formula_25,
where formula_26 is the discrete spectrum of "T".
References.
<templatestyles src="Reflist/styles.css" />
{
"math_id": 0,
"text": "T-\\lambda I_X"
},
{
"math_id": 1,
"text": "I_X"
},
{
"math_id": 2,
"text": "I_X(x)=x"
},
{
"math_id": 3,
"text": "T+K"
},
{
"math_id": 4,
"text": "\\Vert \\psi_k\\Vert=1"
},
{
"math_id": 5,
"text": " \\lim_{k\\to\\infty} \\left\\| T\\psi_k - \\lambda\\psi_k \\right\\| = 0. "
},
{
"math_id": 6,
"text": "\\{\\psi_k\\}"
},
{
"math_id": 7,
"text": " \\sigma_{\\mathrm{disc}}(T) = \\sigma(T) \\setminus \\sigma_{\\mathrm{ess}}(T). "
},
{
"math_id": 8,
"text": " \\{ \\psi \\in X : T\\psi = \\lambda\\psi \\} "
},
{
"math_id": 9,
"text": "\\lambda"
},
{
"math_id": 10,
"text": "T:\\,X\\to X"
},
{
"math_id": 11,
"text": "D(T)"
},
{
"math_id": 12,
"text": "\\sigma_{\\mathrm{ess},1}(T)"
},
{
"math_id": 13,
"text": "\\sigma_{\\mathrm{ess},2}(T)"
},
{
"math_id": 14,
"text": "\\sigma_{\\mathrm{ess},3}(T)"
},
{
"math_id": 15,
"text": "\\sigma_{\\mathrm{ess},4}(T)"
},
{
"math_id": 16,
"text": "\\sigma_{\\mathrm{ess},5}(T)"
},
{
"math_id": 17,
"text": "\\C\\setminus \\sigma_{\\mathrm{ess},1}(T)"
},
{
"math_id": 18,
"text": "\\C \\setminus \\sigma(T)"
},
{
"math_id": 19,
"text": "\\sigma_{\\mathrm{ess},k}(T)"
},
{
"math_id": 20,
"text": "1\\le k\\le 5"
},
{
"math_id": 21,
"text": " \\sigma_{\\mathrm{ess},1}(T) \\subset \\sigma_{\\mathrm{ess},2}(T) \\subset \\sigma_{\\mathrm{ess},3}(T) \\subset \\sigma_{\\mathrm{ess},4}(T) \\subset \\sigma_{\\mathrm{ess},5}(T) \\subset \\sigma(T) \\subset \\C,"
},
{
"math_id": 22,
"text": "r_{\\mathrm{ess},k}(T) = \\max \\{ |\\lambda| : \\lambda\\in\\sigma_{\\mathrm{ess},k}(T) \\}. "
},
{
"math_id": 23,
"text": " \\sigma_{\\mathrm{ess},4}(T) = \\bigcap_{K \\in B_0(X)} \\sigma(T+K), "
},
{
"math_id": 24,
"text": "B_0(X)"
},
{
"math_id": 25,
"text": "\\sigma(T)=\\sigma_{\\mathrm{ess},5}(T)\\bigsqcup\\sigma_{\\mathrm{d}}(T)"
},
{
"math_id": 26,
"text": "\\sigma_{\\mathrm{d}}(T)"
}
]
| https://en.wikipedia.org/wiki?curid=1350002 |
13501019 | Lower flammability limit | The lower flammability limit (LFL), usually expressed in volume per cent, is the lower end of the concentration range over which a flammable mixture of gas or vapour in air can be ignited at a given temperature and pressure. The flammability range is delineated by the upper and lower flammability limits. Outside this range of air/vapor mixtures, the mixture cannot be ignited at that temperature and pressure. The LFL decreases with increasing temperature; thus, a mixture that is below its LFL at a given temperature may be ignitable if heated sufficiently.
For liquids, the LFL is typically close to the saturated vapor concentration at the flash point; however, due to differences in the liquid properties, the relationship of the LFL to the flash point (which is also dependent on the test apparatus) is not fixed, and some spread in the data usually exists.
The formula_0 of a mixture can be evaluated using the Le Chatelier mixing rule if the formula_1 of the components formula_2 are known:
formula_3
where formula_0 is the lower flammability limit of the mixture, formula_1 is the lower flammability limit of the formula_2-th component of the mixture, and formula_4 is the molar fraction of the formula_2-th component of the mixture.
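A minimal Python sketch of the mixing rule follows (not part of the article); the methane and propane LFL values of roughly 5.0 and 2.1 vol% are typical literature figures used only for illustration, and the function name is arbitrary.
def lfl_mixture(mole_fractions, lfls):
    """Le Chatelier rule: LFL_mix = 1 / sum(x_i / LFL_i), fractions on a fuel-only basis."""
    return 1.0 / sum(x / lfl for x, lfl in zip(mole_fractions, lfls))

# Hypothetical fuel: 70% methane (LFL ~ 5.0 vol%) and 30% propane (LFL ~ 2.1 vol%)
print(lfl_mixture([0.7, 0.3], [5.0, 2.1]))  # about 3.5 vol%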
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "LFL_{mix}"
},
{
"math_id": 1,
"text": "LFL_{i}"
},
{
"math_id": 2,
"text": "i"
},
{
"math_id": 3,
"text": "LFL_{mix}=\\frac{1}{\\sum \\frac{x_{i}}{LFL_{i}}}"
},
{
"math_id": 4,
"text": "x_{i}"
}
]
| https://en.wikipedia.org/wiki?curid=13501019 |
1350120 | Knot complement | Complement of a knot in three-sphere
In mathematics, the knot complement of a tame knot "K" is the three-dimensional space surrounding the knot. If a knot is embedded in the 3-sphere, then the complement is the 3-sphere minus the space near the knot. To make this precise, suppose that "K" is a knot in a three-manifold "M" (most often, "M" is the 3-sphere). Let "N" be a tubular neighborhood of "K"; so "N" is a solid torus. The knot complement is then the complement of "N",
formula_0
The knot complement "XK" is a compact 3-manifold; the boundary of "XK" and the boundary of the neighborhood "N" are homeomorphic to a two-torus. Sometimes the ambient manifold "M" is understood to be the 3-sphere. Context is needed to determine the usage. There are analogous definitions for the link complement.
Many knot invariants, such as the knot group, are really invariants of the complement of the knot. When the ambient space is the three-sphere no information is lost: the Gordon–Luecke theorem states that a knot is determined by its complement. That is, if "K" and "K"′ are two knots with homeomorphic complements then there is a homeomorphism of the three-sphere taking one knot to the other.
Knot complements are Haken manifolds. More generally complements of links are Haken manifolds.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "X_K = M - \\mbox{interior}(N)."
}
]
| https://en.wikipedia.org/wiki?curid=1350120 |
13502744 | Babuška–Lax–Milgram theorem | In mathematics, the Babuška–Lax–Milgram theorem is a generalization of the famous Lax–Milgram theorem, which gives conditions under which a bilinear form can be "inverted" to show the existence and uniqueness of a weak solution to a given boundary value problem. The result is named after the mathematicians Ivo Babuška, Peter Lax and Arthur Milgram.
Background.
In the modern, functional-analytic approach to the study of partial differential equations, one does not attempt to solve a given partial differential equation directly, but instead exploits the structure of the vector space of possible solutions, e.g. a Sobolev space "W" "k","p". Abstractly, consider two real normed spaces "U" and "V" with their continuous dual spaces "U"∗ and "V"∗ respectively. In many applications, "U" is the space of possible solutions; given some partial differential operator Λ : "U" → "V"∗ and a specified element "f" ∈ "V"∗, the objective is to find a "u" ∈ "U" such that
formula_0
However, in the weak formulation, this equation is only required to hold when "tested" against all other possible elements of "V". This "testing" is accomplished by means of a bilinear function "B" : "U" × "V" → R which encodes the differential operator Λ; a "weak solution" to the problem is to find a "u" ∈ "U" such that
formula_1
The achievement of Lax and Milgram in their 1954 result was to specify sufficient conditions for this weak formulation to have a unique solution that depends continuously upon the specified datum "f" ∈ "V"∗: it suffices that "U" = "V" is a Hilbert space, that "B" is continuous, and that "B" is strongly coercive, i.e.
formula_2
for some constant "c" > 0 and all "u" ∈ "U".
For example, in the solution of the Poisson equation on a bounded, open domain Ω ⊂ R"n",
formula_3
the space "U" could be taken to be the Sobolev space "H"01(Ω) with dual "H"−1(Ω); the former is a subspace of the "L""p" space "V" = "L"2(Ω); the bilinear form "B" associated to −Δ is the "L"2(Ω) inner product of the derivatives:
formula_4
Hence, the weak formulation of the Poisson equation, given "f" ∈ "L"2(Ω), is to find "u""f" such that
formula_5
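As a numerical illustration (an assumption-laden sketch, not part of the theorem or its original presentation), the one-dimensional analogue of this weak Poisson problem on (0, 1) can be solved with a Galerkin method using piecewise-linear hat functions; here the bilinear form is strongly coercive, so the classical Lax–Milgram theorem already applies. The function name, grid size, and quadrature rule are illustrative choices.
import numpy as np

def solve_weak_poisson(f, n=100):
    """Approximate u with -u'' = f on (0, 1), u(0) = u(1) = 0, via piecewise-linear FEM."""
    h = 1.0 / (n + 1)
    x = np.linspace(h, 1.0 - h, n)  # interior nodes
    # Stiffness matrix A[i, j] = B(phi_j, phi_i) = integral of phi_j' phi_i'
    A = (2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)) / h
    b = h * f(x)  # <f, phi_i> approximated by one-point quadrature
    return x, np.linalg.solve(A, b)

x, u = solve_weak_poisson(lambda t: np.pi**2 * np.sin(np.pi * t))
print(float(np.max(np.abs(u - np.sin(np.pi * x)))))  # small: the weak solution is close to sin(pi x)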
Statement of the theorem.
In 1971, Babuška provided the following generalization of Lax and Milgram's earlier result, which begins by dispensing with the requirement that "U" and "V" be the same space. Let "U" and "V" be two real Hilbert spaces and let "B" : "U" × "V" → R be a continuous bilinear functional. Suppose also that "B" is weakly coercive: for some constant "c" > 0 and all "u" ∈ "U",
formula_6
and, for all 0 ≠ "v" ∈ "V",
formula_7
Then, for all "f" ∈ "V"∗, there exists a unique solution "u" = "u""f" ∈ "U" to the weak problem
formula_8
Moreover, the solution depends continuously on the given data:
formula_9 | [
{
"math_id": 0,
"text": "\\Lambda u = f."
},
{
"math_id": 1,
"text": "B(u, v) = \\langle f, v \\rangle \\mbox{ for all } v \\in V."
},
{
"math_id": 2,
"text": "| B(u, u) | \\geq c \\| u \\|^{2}"
},
{
"math_id": 3,
"text": "\\begin{cases} - \\Delta u(x) = f(x), & x \\in \\Omega; \\\\ u(x) = 0, & x \\in \\partial \\Omega; \\end{cases}"
},
{
"math_id": 4,
"text": "B(u, v) = \\int_{\\Omega} \\nabla u(x) \\cdot \\nabla v(x) \\, \\mathrm{d} x."
},
{
"math_id": 5,
"text": "\\int_{\\Omega} \\nabla u_{f}(x) \\cdot \\nabla v(x) \\, \\mathrm{d} x = \\int_{\\Omega} f(x) v(x) \\, \\mathrm{d} x \\mbox{ for all } v \\in H_{0}^{1} (\\Omega)."
},
{
"math_id": 6,
"text": "\\sup_{\\| v \\| = 1} | B(u, v) | \\geq c \\| u \\|"
},
{
"math_id": 7,
"text": "\\sup_{\\| u \\| = 1} | B(u, v) | > 0"
},
{
"math_id": 8,
"text": "B(u_{f}, v) = \\langle f, v \\rangle \\mbox{ for all } v \\in V."
},
{
"math_id": 9,
"text": "\\| u_{f} \\| \\leq \\frac{1}{c} \\| f \\|."
}
]
| https://en.wikipedia.org/wiki?curid=13502744 |
1350423 | Bead sort | Natural sorting algorithm
Bead sort, also called gravity sort, is a natural sorting algorithm, developed by Joshua J. Arulanandham, Cristian S. Calude and Michael J. Dinneen in 2002, and published in "The Bulletin of the European Association for Theoretical Computer Science". Both digital and analog hardware implementations of bead sort can achieve a sorting time of "O"("n"); however, the implementation of this algorithm tends to be significantly slower in software and can only be used to sort lists of positive integers. Also, it would seem that even in the best case, the algorithm requires "O"("n"2) space.
Algorithm overview.
The bead sort operation can be compared to the manner in which beads slide on parallel poles, such as on an abacus. However, each pole may have a distinct number of beads. Initially, it may be helpful to imagine the beads suspended on vertical poles. In Step 1, such an arrangement is displayed using "n=5" rows of beads on "m=4" vertical poles. The numbers to the right of each row indicate the number that the row in question represents; rows 1 and 2 are representing the positive integer 3 (because they each contain three beads) while the top row represents the positive integer 2 (as it only contains two beads).
If we then allow the beads to fall, the rows now represent the same integers in sorted order. Row 1 contains the largest number in the set, while row "n" contains the smallest. If the above-mentioned convention of rows containing a series of beads on poles 1.."k" and leaving poles "k"+1.."m" empty has been followed, it will continue to be the case here.
The action of allowing the beads to "fall" in our physical example has allowed the larger values from the higher rows to propagate to the lower rows. If the value represented by row "a" is smaller than the value contained in row "a+1", some of the beads from row "a+1" will fall into row "a"; this is certain to happen, as row "a" does not contain beads in those positions to stop the beads from row "a+1" from falling.
The mechanism underlying bead sort is similar to that behind counting sort; the number of beads on each pole corresponds to the number of elements with value equal or greater than the index of that pole.
Complexity.
Bead sort can be implemented with four general levels of complexity, among others:
"O"(1): The beads are all moved simultaneously in the same time unit, as would be the case with the simple physical example above. This is an abstract complexity and cannot be implemented in practice.
"O"(formula_0): In a realistic physical model that uses gravity, the time it takes to let the beads fall is proportional to the square root of the maximum height, which is proportional to "n".
"O"("n"): The beads are moved one row at a time. This is the case used in the analog and digital hardware solutions.
"O"("S"), where "S" is the sum of the integers in the input set: Each bead is moved individually, which is the case when bead sort is implemented without a mechanism to assist in finding empty spaces below the beads, such as in software implementations.
Like the Pigeonhole sort, bead sort is unusual in that in worst case it can perform faster than "O"("n" log "n"), the fastest performance possible for a comparison sort in worst case. This is possible because the key for a bead sort is always a positive integer and bead sort exploits its structure.
Implementation.
This implementation is written in Python; it is assumed that the input_list will be a sequence of integers. The function returns a new list rather than mutating the one passed in, but it can be trivially modified to operate in place efficiently.
def beadsort(input_list):
    """Bead sort."""
    return_list = []
    # Initialize a 'transposed list' to contain as many elements as
    # the maximum value of the input -- in effect, taking the 'tallest'
    # column of input beads and laying it out flat
    transposed_list = [0] * max(input_list)
    for num in input_list:
        # For each element (each 'column of beads') of the input list,
        # 'lay the beads flat' by incrementing as many elements of the
        # transposed list as the column is tall.
        # These will accumulate atop previous additions.
        transposed_list[:num] = [n + 1 for n in transposed_list[:num]]
    # We've now dropped the beads. To de-transpose, we count the
    # 'bottommost row' of dropped beads, then mimic removing this
    # row by subtracting 1 from each 'column' of the transposed list.
    # When a column does not reach high enough for the current row,
    # its value in transposed_list will be <= 0.
    for i in range(len(input_list)):
        # Counting values > i is how we tell how many beads are in the
        # current 'bottommost row'. Note that Python's bools can be
        # evaluated as integers; True == 1 and False == 0.
        return_list.append(sum(n > i for n in transposed_list))
    # The resulting list is sorted in descending order
    return return_list
We can also implement the algorithm using Java.
public static void beadSort(int[] a) {
    // Find the maximum element
    int max = a[0];
    for (int i = 1; i < a.length; i++) {
        if (a[i] > max) {
            max = a[i];
        }
    }

    // Allocate the grid of beads: one row per input element, one column per bead
    int[][] beads = new int[a.length][max];

    // Mark the beads: row i gets a[i] beads
    for (int i = 0; i < a.length; i++) {
        for (int j = 0; j < a[i]; j++) {
            beads[i][j] = 1;
        }
    }

    // Move down the beads: for each column, let the beads fall to the bottom rows
    for (int j = 0; j < max; j++) {
        int sum = 0;
        for (int i = 0; i < a.length; i++) {
            sum += beads[i][j];
            beads[i][j] = 0;
        }
        // The bottom 'sum' rows still hold a bead in column j, so their value is at least j + 1
        for (int i = a.length - 1; i >= a.length - sum; i--) {
            a[i] = j + 1;
        }
    }
}
Notes.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\sqrt{n}"
}
]
| https://en.wikipedia.org/wiki?curid=1350423 |
13505242 | Time-of-flight mass spectrometry | Method of mass spectrometry
Time-of-flight mass spectrometry (TOFMS) is a method of mass spectrometry in which an ion's mass-to-charge ratio is determined by a time of flight measurement. Ions are accelerated by an electric field of known strength. This acceleration results in an ion having the same kinetic energy as any other ion that has the same charge. The velocity of the ion depends on the mass-to-charge ratio (heavier ions of the same charge reach lower speeds, although ions with higher charge will also increase in velocity). The time that it subsequently takes for the ion to reach a detector at a known distance is measured. This time will depend on the velocity of the ion, and therefore is a measure of its mass-to-charge ratio. From this ratio and known experimental parameters, one can identify the ion.
Theory.
The potential energy of a charged particle in an electric field is related to the charge of the particle and to the strength of the electric field:
Ep = qU (1)
where "E"p is potential energy, "q" is the charge of the particle, and "U" is the electric potential difference (also known as voltage).
When the charged particle is accelerated into the "time-of-flight tube" (TOF tube or flight tube) by the voltage "U", its potential energy is converted to kinetic energy. The kinetic energy of any mass is:
Ek = ½mv² (2)
In effect, the potential energy is converted to kinetic energy, meaning that equations (1) and (2) are equal:
Ep = Ek (3)
qU = ½mv² (4)
The velocity of the charged particle after acceleration will not change since it moves in a field-free time-of-flight tube. The velocity of the particle can be determined in a time-of-flight tube since the length of the path ("d") of the flight of the ion is known and the time of the flight of the ion ("t") can be measured using a transient digitizer or time to digital converter.
Thus,
v = d/t (5)
and we substitute the value of "v" in (5) into (4):
qU = ½m(d/t)² (6)
Rearranging (6) so that the flight time is expressed by everything else:
t² = (d²/2U)(m/q) (7)
Taking the square root yields the time,
t = (d/√(2U)) √(m/q) (8)
These factors for the time of flight have been grouped purposely. formula_0 contains constants that in principle do not change when a set of ions is analyzed in a single pulse of acceleration. (8) can thus be given as:
t = k √(m/q) (9)
where "k" is a proportionality constant representing factors related to the instrument settings and characteristics.
(9) reveals more clearly that the time of flight of the ion varies with the square root of its mass-to-charge ratio ("m/q").
Consider a real-world example of a MALDI time-of-flight mass spectrometer instrument which is used to produce a mass spectrum of the tryptic peptides of a protein. Suppose the mass of one tryptic peptide is 1000 daltons (Da). The kind of ionization of peptides produced by MALDI is typically +1 ions, so "q" = e in both cases. Suppose the instrument is set to accelerate the ions in a "U" = 15,000 volts (15 kilovolt or 15 kV) potential. And suppose the length of the flight tube is 1.5 meters (typical). All the factors necessary to calculate the time of flight of the ions are now known for (8), which is evaluated first for the ion of mass 1000 Da:
t = (1.5 m) √[(1000 × 1.66×10^−27 kg) / (2 × (1.602×10^−19 C) × (15,000 V))]
Note that the mass had to be converted from daltons (Da) to kilograms (kg) to make it possible to evaluate the equation in the proper units. The final value should be in seconds:
formula_1
which is about 28 microseconds. If there were a singly charged tryptic peptide ion with 4000 Da mass, and it is four times larger than the 1000 Da mass, it would take twice the time, or about 56 microseconds to traverse the flight tube, since time is proportional to the square root of the mass-to-charge ratio.
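A short Python check of this worked example follows (a sketch using rounded physical constants; the function name is arbitrary).
import math

DALTON = 1.66054e-27           # kg per dalton
ELEMENTARY_CHARGE = 1.602e-19  # coulombs

def flight_time(mass_da, charge_e, accel_voltage, path_length):
    """Equation (8) above: t = d * sqrt(m / (2 q U))."""
    m = mass_da * DALTON
    q = charge_e * ELEMENTARY_CHARGE
    return path_length * math.sqrt(m / (2.0 * q * accel_voltage))

print(flight_time(1000, 1, 15000, 1.5))  # about 2.79e-05 s, i.e. roughly 28 microseconds
print(flight_time(4000, 1, 15000, 1.5))  # about 5.58e-05 s, i.e. twice as long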
Delayed extraction.
Mass resolution can be improved in an axial MALDI-TOF mass spectrometer, where ion production takes place in vacuum, by allowing the initial burst of ions and neutrals produced by the laser pulse to equilibrate and by letting the ions travel some distance perpendicularly to the sample plate before they are accelerated into the flight tube. The ion equilibration in the plasma plume produced during desorption/ionization takes place in approximately 100 ns or less; after that, most ions, irrespective of their mass, start moving from the surface with some average velocity. To compensate for the spread of this average velocity and to improve mass resolution, it was proposed to delay the extraction of ions from the ion source toward the flight tube by a few hundred nanoseconds to a few microseconds with respect to the start of the short (typically a few nanoseconds) laser pulse. This technique is referred to as "time-lag focusing" for ionization of atoms or molecules by resonance enhanced multiphoton ionization or by electron impact ionization in a rarefied gas, and "delayed extraction" for ions produced generally by laser desorption/ionization of molecules adsorbed on flat surfaces or microcrystals placed on a conductive flat surface.
Delayed extraction generally refers to the operation mode of vacuum ion sources when the onset of the electric field responsible for acceleration (extraction) of the ions into the flight tube is delayed by some short time (200–500 ns) with respect to the ionization (or desorption/ionization) event. This differs from a case of constant extraction field where the ions are accelerated instantaneously upon being formed. Delayed extraction is used with MALDI or laser desorption/ionization (LDI) ion sources where the ions to be analyzed are produced in an expanding plume moving from the sample plate with a high speed (400–1000 m/s). Since the thickness of the ion packets arriving at the detector is important to mass resolution, on first inspection it can appear counter-intuitive to allow the ion plume to further expand before extraction. Delayed extraction is more of a compensation for the initial momentum of the ions: it provides the same arrival times at the detector for ions with the same mass-to-charge ratios but with different initial velocities.
In delayed extraction of ions produced in vacuum, the ions that have lower momentum in the direction of extraction start to be accelerated at higher potential due to being further from the extraction plate when the extraction field is turned on. Conversely, those ions with greater forward momentum start to be accelerated at lower potential since they are closer to the extraction plate. At the exit from the acceleration region, the slower ions at the back of the plume will be accelerated to greater velocity than the initially faster ions at the front of the plume. So after delayed extraction, a group of ions that leaves the ion source earlier has lower velocity in the direction of the acceleration compared to some other group of ions that leaves the ion source later but with greater velocity. When ion source parameters are properly adjusted, the faster group of ions catches up to the slower one at some distance from the ion source, so the detector plate placed at this distance detects simultaneous arrival of these groups of ions. In its way, the delayed application of the acceleration field acts as a one-dimensional time-of-flight focusing element.
Reflectron TOF.
The kinetic energy distribution in the direction of ion flight can be corrected by using a reflectron. The reflectron uses a constant electrostatic field to reflect the ion beam toward the detector. The more energetic ions penetrate deeper into the reflectron, and take a slightly longer path to the detector. Less energetic ions of the same mass-to-charge ratio penetrate a shorter distance into the reflectron and, correspondingly, take a shorter path to the detector. The flat surface of the ion detector (typically a microchannel plate, MCP) is placed at the plane where ions of same m/z but with different energies arrive at the same time counted with respect to the onset of the extraction pulse in the ion source. A point of simultaneous arrival of ions of the same mass-to-charge ratio but with different energies is often referred as time-of-flight focus.
An additional advantage to the re-TOF arrangement is that twice the flight path is achieved in a given length of the TOF instrument.
Ion gating.
A Bradbury–Nielsen shutter is a type of ion gate used in TOF mass spectrometers and in ion mobility spectrometers, as well as Hadamard transform TOF mass spectrometers. The Bradbury–Nielsen shutter is ideal for fast timed ion selector (TIS)—a device used for isolating ions over narrow mass range in tandem (TOF/TOF) MALDI mass spectrometers.
Orthogonal acceleration time-of-flight.
Continuous ion sources (most commonly electrospray ionization, ESI) are generally interfaced to the TOF mass analyzer by "orthogonal extraction" in which ions introduced into the TOF mass analyzer are accelerated along the axis perpendicular to their initial direction of motion. Orthogonal acceleration combined with collisional ion cooling allows separating the ion production in the ion source and mass analysis. In this technique, very high resolution can be achieved for ions produced in MALDI or ESI sources.
Before entering the orthogonal acceleration region or the pulser, the ions produced in continuous (ESI) or pulsed (MALDI) sources are focused (cooled) into a beam of 1–2 mm diameter by collisions with a residual gas in RF multipole guides. A system of electrostatic lenses mounted in high-vacuum region before the pulser makes the beam parallel to minimize its divergence in the direction of acceleration. The combination of ion collisional cooling and orthogonal acceleration TOF has provided significant increase in resolution of modern TOF MS from few hundred to several tens of thousand without compromising the sensitivity.
Hadamard transform time-of-flight mass spectrometry.
Hadamard transform time-of flight mass spectrometry (HT-TOFMS) is a mode of mass analysis used to significantly increase the signal-to-noise ratio of a conventional TOFMS. Whereas traditional TOFMS analyzes one packet of ions at a time, waiting for the ions to reach the detector before introducing another ion packet, HT-TOFMS can simultaneously analyze several ion packets traveling in the flight tube. The ions packets are encoded by rapidly modulating the transmission of the ion beam, so that lighter (and thus faster) ions from all initially-released packets of mass from a beam get ahead of heavier (and thus slower) ions. This process creates an overlap of many time-of-flight distributions convoluted in form of signals. The Hadamard transform algorithm is then used to carry out the deconvolution process which helps to produce a faster mass spectral storage rate than traditional TOFMS and other comparable mass separation instruments.
Tandem time-of-flight.
Tandem time-of-flight (TOF/TOF) is a tandem mass spectrometry method where two time-of-flight mass spectrometers are used consecutively. To record the full spectrum of precursor (parent) ions, TOF/TOF operates in MS mode. In this mode, the energy of the pulse laser is chosen slightly above the onset of MALDI for the specific matrix in use, to ensure a compromise between the ion yield for all the parent ions and reduced fragmentation of the same ions. When operating in tandem (MS/MS) mode, the laser energy is increased considerably above the MALDI threshold. The first TOF mass spectrometer (basically, a flight tube which ends with the timed ion selector) isolates precursor ions of choice using a velocity filter, typically of a Bradbury–Nielsen type, and the second TOF-MS (which includes the post-accelerator, flight tube, ion mirror, and ion detector) analyzes the fragment ions. Fragment ions in MALDI TOF/TOF result from the decay of precursor ions vibrationally excited above their dissociation level in the MALDI source (post-source decay). Additional ion fragmentation implemented in a high-energy collision cell may be added to the system to increase the dissociation rate of vibrationally excited precursor ions. Some designs include precursor signal quenchers as a part of the second TOF-MS to reduce the instantaneous current load on the ion detector.
Quadrupole time-of-flight.
Quadrupole time-of-flight mass spectrometry (QToF-MS) has a similar configuration to a tandem mass spectrometer with a mass-resolving quadrupole and collision cell hexapole, but instead of a second mass-resolving quadrupole, a time-of-flight mass analyzer is used. Both quadrupoles can operate in RF mode only to allow all ions to pass through to the mass analyzer with minimal fragmentation. To increase spectral detail, the system takes advantage of collision-induced dissociation. Once the ions reach the flight tube, the ion pulser sends them upwards towards the reflectron and back down into the detector. Since the ion pulser transfers the same kinetic energy to all molecules, the flight time is dictated by the mass of the analyte.
QToF is capable of measuring mass to the 4th decimal place and is frequently used for pharmaceutical and toxicological analysis as a screening method for drug analogues. Identification is done by collection of the mass spectrum and comparison to tandem mass spectrum libraries.
Detectors.
A time-of-flight mass spectrometer (TOFMS) consists of a mass analyzer and a detector. An ion source (either pulsed or continuous) is used for lab-related TOF experiments, but not needed for TOF analyzers used in space, where the sun or planetary ionospheres provide the ions. The TOF mass analyzer can be a linear flight tube or a reflectron. The ion detector typically consists of microchannel plate detector or a fast secondary emission multiplier (SEM) where first converter plate (dynode) is flat. The electrical signal from the detector is recorded by means of a time to digital converter (TDC) or a fast analog-to-digital converter (ADC). TDC is mostly used in combination with orthogonal-acceleration (oa)TOF instruments.
Time-to-digital converters register the arrival of a single ion at discrete time "bins"; a combination of threshold triggering and a constant fraction discriminator (CFD) discriminates between electronic noise and ion arrival events. A CFD converts nanosecond-long Gaussian-shaped electrical pulses of different amplitudes generated on the MCP's anode into common-shape pulses (e.g., pulses compatible with TTL/ESL logic circuitry) sent to the TDC. Using a CFD provides a time point corresponding to the position of the peak maximum, independent of variation in the peak amplitude caused by variation of the MCP or SEM gain. Fast CFDs of advanced designs have dead times equal to or less than two single-hit response times of the ion detector (the single-hit response time for an MCP with 2–5 micron wide channels can be somewhere between 0.2 ns and 0.8 ns, depending on the channel angle), thus preventing repetitive triggering from the same pulse. The double-hit resolution (dead time) of a modern multi-hit TDC can be as low as 3–5 nanoseconds.
The TDC is a counting detector – it can be extremely fast (down to a few picoseconds of resolution), but its dynamic range is limited due to its inability to properly count events when more than one ion simultaneously (i.e., within the TDC dead time) hits the detector. The outcome of the limited dynamic range is that the number of ions (events) recorded in one mass spectrum is smaller than the real number. The problem of limited dynamic range can be alleviated using a multichannel detector design: an array of mini-anodes attached to a common MCP stack and multiple CFD/TDCs, where each CFD/TDC records signals from an individual mini-anode. To obtain peaks with statistically acceptable intensities, ion counting is accompanied by summing hundreds of individual mass spectra (so-called histogramming). To reach a very high counting rate (limited only by the duration of an individual TOF spectrum, which can be as high as a few milliseconds in multipath TOF setups), a very high repetition rate of ion extractions into the TOF tube is used. Commercial orthogonal acceleration TOF mass analyzers typically operate at 5–20 kHz repetition rates. In combined mass spectra obtained by summing a large number of individual ion detection events, each peak is a histogram obtained by adding up counts in each individual bin. Because the recording of the individual ion arrival with a TDC yields only a single time point, the TDC eliminates the fraction of peak width determined by the limited response time of both the MCP detector and the preamplifier. This propagates into better mass resolution.
Modern ultra-fast 10 GSample/sec analog-to-digital converters digitize the pulsed ion current from the MCP detector at discrete time intervals (100 picoseconds). Modern 8-bit or 10-bit 10 GHz ADC has much higher dynamic range than the TDC, which allows its usage in MALDI-TOF instruments with its high peak currents. To record fast analog signals from MCP detectors one is required to carefully match the impedance of the detector anode with the input circuitry of the ADC (preamplifier) to minimize the "ringing" effect. Mass resolution in mass spectra recorded with ultra-fast ADC can be improved by using small-pore (2-5 micron) MCP detectors with shorter response times.
Applications.
Matrix-assisted laser desorption ionization (MALDI) is a pulsed ionization technique that is readily compatible with TOF MS.
Atom probe tomography also takes advantage of TOF mass spectrometry.
Photoelectron photoion coincidence spectroscopy uses soft photoionization for ion internal energy selection and TOF mass spectrometry for mass analysis.
Secondary ion mass spectrometry commonly utilizes TOF mass spectrometers to allow parallel detection of different ions with a high mass resolving power.
Stefan Rutzinger proposed using TOF mass spectrometry with a cryogenic detector for the spectrometry of heavy biomolecules.
History of the field.
An early time-of-flight mass spectrometer, named the Velocitron, was reported by A. E. Cameron and D. F. Eggers Jr, working at the Y-12 National Security Complex, in 1948. The idea had been proposed two years earlier, in 1946, by W. E. Stephens of the University of Pennsylvania in a Friday afternoon session of a meeting, at the Massachusetts Institute of Technology, of the American Physical Society.
References.
<templatestyles src="Reflist/styles.css" />
Bibliography.
<templatestyles src="Refbegin/styles.css" /> | [
{
"math_id": 0,
"text": "\\frac{d}{\\sqrt{2U}}"
},
{
"math_id": 1,
"text": "t = 2.788 \\times 10^{-5}\\;\\mathrm{s}"
}
]
| https://en.wikipedia.org/wiki?curid=13505242 |
1350550 | Oudin coil | Resonant transformer circuit
An Oudin coil, also called an Oudin oscillator or Oudin resonator, is a resonant transformer circuit that generates very high voltage, high frequency alternating current (AC) electricity at low current levels, used in the obsolete forms of electrotherapy around the turn of the 20th century. It is very similar to the Tesla coil, with the difference being that the Oudin coil was connected as an autotransformer. It was invented in 1893 by French physician Paul Marie Oudin as a modification of physician Jacques Arsene d'Arsonval's electrotherapy equipment and used in medical diathermy therapy as well as quack medicine until perhaps 1940. The high voltage output terminal of the coil was connected to an insulated handheld electrode which produced luminous brush discharges, which were applied to the patient's body to treat various medical conditions in electrotherapy.
How it works.
Oudin and Tesla coils are spark-excited air-core double-tuned transformer circuits that use resonance to generate very high voltages at low currents. They produce alternating current in the radio frequency (RF) range. The medical coils of the early 20th century produced potentials of 50,000 up to a million volts, at frequencies in the range 200 kHz to 5 MHz. The primary circuit of the coil has Leyden jar capacitors "(C)" which in combination with the primary winding of the coil "(L1)" make a resonant circuit (tuned circuit). In medical coils usually two capacitors were used for safety, one in each side of the primary circuit, to isolate the patient completely from the potentially lethal low frequency primary current. The primary circuit also has a spark gap "(SG)" that acts as a switch to excite oscillations in the primary. The primary circuit is powered by a high voltage transformer or induction coil "(T)" at a potential of 2 - 15 kV. The transformer repeatedly charges the capacitors, which then discharge through the spark gap and the primary winding (a detailed description of the operation cycle in the Tesla coil article also applies to the Oudin coil). This cycle is repeated many times per second. During each spark, the charge moves rapidly back and forth between the capacitor plates through the primary coil, creating a damped RF oscillating current in the primary tuned circuit which induced the high voltage in the secondary.
The secondary winding "(L2)" is open-circuited, and connected to the output electrode of the device. In the Oudin coil, one side of the primary winding "(L1)" is grounded and the other side is connected to the secondary, so the primary and secondary are in series. There were two versions of the Oudin coil:
Although it doesn't include a capacitor, the secondary winding is also a resonant circuit (electrical resonator); the parasitic capacitance between the ends of the secondary coil resonates with the large inductance of the secondary at a particular resonant frequency. When it is excited at this frequency by the primary, large oscillating voltages are induced in the secondary. The number of turns in the primary winding, and thus the resonant frequency of the primary, could be adjusted with a tap on the coil. When the two tuned circuits are adjusted to resonate at the same frequency, the large turns ratio of the coil, aided by the high Q of the tuned circuits, steps up the primary voltage to hundreds of thousands to millions of volts at the secondary.
The secondary is directly connected to the primary circuit, which carries lethal low frequency 50/60 Hz currents at thousands of volts from the power transformer. Since the Oudin coil was a medical device, with the secondary current applied directly to a person's body, for safety the Oudin circuit has "two" capacitors "(C)", one in each leg of the primary, to completely isolate the coil and output electrode from the supply transformer at the mains frequency. Because two identical capacitors in series have half the capacitance of a single capacitor, the resonant frequency of the Oudin circuit is
formula_0
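As a quick numerical illustration (not part of the original article), the following Python sketch evaluates this expression for assumed component values; the inductance and capacitance chosen here are hypothetical, picked only so that the result lands in the 200 kHz to 5 MHz range quoted above.

```python
import math

def oudin_resonant_frequency(l1_henries: float, c_farads: float) -> float:
    """Resonant frequency of the primary circuit with two identical capacitors of
    value C in series (net capacitance C/2): f = 1 / (2*pi*sqrt(L1*C/2))."""
    return 1.0 / (2.0 * math.pi * math.sqrt(l1_henries * c_farads / 2.0))

# Illustrative (assumed) component values: a 20 microhenry primary winding
# and two 2 nanofarad Leyden jar capacitors.
print(f"{oudin_resonant_frequency(20e-6, 2e-9) / 1e6:.2f} MHz")
# ~1.13 MHz, inside the 200 kHz - 5 MHz range mentioned above
```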
Use.
The high voltage terminal of the coil was attached through a wire to various types of handheld electrode which the physician used to apply the high voltage to the patient's body. The treatment was not painful for the patient, because alternating current in the radio frequency (RF) range, above 10 kHz in frequency, does not generally cause the sensation of electric shock. The Oudin coil was a "unipolar" generator, with the lower end of the coil grounded, so sometimes only one electrode was applied to the patient and the return path for the currents was through the ground. More often, however, a ground wire from the bottom of the coil was used, attached to a ground electrode which the patient held. A drawback of the Oudin coil was that movement of the electrode and wire during use changed the capacitance of the top terminal of the secondary coil, and thus its resonant frequency. This threw the secondary coil out of resonance with the primary, causing a reduction in voltage. So the tap point on the primary coil had to be constantly adjusted during use to keep the primary and secondary in "tune".
Many specialized types of electrodes were used to apply the current to various parts of the patient's body. These generally fell into two types. To apply brush discharges (called "effluves") to the outside of the patient's body, electrodes consisting of one or more metal points on an insulating handle were used. Care had to be taken to keep these far enough from the body to prevent a continuous arc to the skin, which could cause painful RF burns. To apply current directly to the body surface, as well as to tissues inside the patient's body through the mouth, rectum, or vagina, a vacuum tube "condensing" electrode was used. This consisted of a partially evacuated glass tube of various shapes, with an electrode sealed inside, attached to the high voltage wire. This produced a dramatic violet glow when energized. The glass envelope of the tube formed a capacitor with the patient's body through which the current had to pass, limiting it to safe values.
To apply current to the whole body, a "condensing couch" was used. This was a bed or couch with a metal back under its mattress, connected by a wire to the high voltage terminal. Metal handrest electrodes at the sides, which the patient grasped during treatment, served as the "ground" return path and were attached to the bottom of the coil. Thus the couch formed a capacitor, with the patient's body as one electrode.
History.
During the 1800s, experiments in applying electric currents to the human body grew into a Victorian era medical field, part legitimate experimental medicine and part quack medicine, called electrotherapy, in which currents were applied to treat many medical conditions. The discovery of radio waves by Heinrich Hertz in 1886 and the subsequent development of radio by Oliver Lodge and Guglielmo Marconi sparked interest in radio frequency currents and circuits for generating them. "High frequency" currents meant any frequency above the audio range, > 20 kHz, and the resonant coils which generated them were generically called "oscillation transformers". During the 1890s doctors began to experiment with applying these high voltage and high frequency currents to the human body (ethical standards in the medical profession were looser then and physicians could experiment on their patients). In 1890 French physician Jacques Arsene d'Arsonval founded the field of high frequency electrotherapy, performing the first experiments applying high frequency currents to the human body. He discovered that currents above 10 kHz do not cause muscle contraction or activate nerves to cause the sensation of electric shock, so that extremely high voltages could be applied to a patient without discomfort. In 1891 in America, engineer Nikola Tesla independently discovered the same thing. Three types of apparatus were used, developed by three pioneers in the field, D'Arsonval, Tesla, and Oudin, and separate bodies of clinical technique grew up around them.
The D'Arsonval and Oudin apparatus became popular in Europe, while the Tesla-Thompson apparatus was mostly used in America. During the first decades of the 20th century there was a rivalry between these camps, and debate in the medical literature as to whether "Tesla currents" or "Oudin currents" were better for various conditions. By 1920 it was realized that the currents were very similar. Since the circuits were so similar, medical suppliers sold combination "high frequency" units that could be set up for Tesla, D'Arsonval, or Oudin therapy, often also combined with Röntgen ray (X-ray) apparatus.
After Oudin combined the primary and "resonator" coil together on the same form, making them an air-core autotransformer, the only significant difference between the Tesla and Oudin apparatus was that the medical Tesla coil was "bipolar" while the Oudin coil was "unipolar", with one end grounded. As time went on the meaning of the terms changed, until (by perhaps 1920) the term "Tesla coil" meant a "bipolar" coil; any high voltage coil with an ungrounded balanced secondary with two output terminals, while the term "Oudin coil" meant a "unipolar" coil; any coil with a grounded secondary and a single output terminal.
Around the 1930s vacuum tube oscillators replaced spark-excited circuits in high frequency medical equipment. The field of electrotherapy was replaced by the modern field of diathermy, and the Oudin coil became obsolete. Ironically, modern-day Tesla coil designs are unipolar, with a single high voltage terminal, and so are sometimes called Oudin coils.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "f = {1 \\over 2\\pi\\sqrt{L_1C/2}}\\;"
}
]
| https://en.wikipedia.org/wiki?curid=1350550 |
13505688 | Analytic semigroup | In mathematics, an analytic semigroup is particular kind of strongly continuous semigroup. Analytic semigroups are used in the solution of partial differential equations; compared to strongly continuous semigroups, analytic semigroups provide better regularity of solutions to initial value problems, better results concerning perturbations of the infinitesimal generator, and a relationship between the type of the semigroup and the spectrum of the infinitesimal generator.
Definition.
Let Γ("t") = exp("At") be a strongly continuous one-parameter semigroup on a Banach space ("X", ||·||) with infinitesimal generator "A". Γ is said to be an analytic semigroup if
formula_0
and the usual semigroup conditions hold for "s", "t" ∈ Δ"θ": exp("A"0) = id; exp("A"("t" + "s")) = exp("At") exp("As"); for each "x" ∈ "X", exp("At")"x" is continuous in "t"; and, for all "t" ∈ Δ"θ" ∖ {0}, exp("At") is analytic in "t" in the sense of the uniform operator topology.
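For a concrete finite-dimensional illustration of this definition, the short Python sketch below takes a 2×2 matrix as the generator "A" (a hypothetical and degenerate choice, since any bounded operator yields a map t ↦ exp(At) that is entire, hence analytic on every sector) and checks the semigroup law for complex "times" drawn from a sector, using scipy.linalg.expm.

```python
import numpy as np
from scipy.linalg import expm

# A 2x2 matrix with spectrum {-1, -2}, used here only to make the defining
# identities computable; it is not meant to represent a genuinely unbounded generator.
A = np.array([[-1.0, 1.0],
              [0.0, -2.0]])

theta = np.pi / 4                      # half-angle of the sector Delta_theta
rng = np.random.default_rng(0)

for _ in range(5):
    # draw s, t in the open sector {z : |arg z| < theta}
    r, phi = rng.uniform(0.1, 2.0, 2), rng.uniform(-theta, theta, 2)
    s, t = r * np.exp(1j * phi)
    lhs = expm(A * (s + t))            # exp(A(s + t))
    rhs = expm(A * s) @ expm(A * t)    # exp(As) exp(At)
    assert np.allclose(lhs, rhs)       # semigroup law holds for complex times in the sector

print("exp(A*0) = id:", np.allclose(expm(A * 0), np.eye(2)))
```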
Characterization.
The infinitesimal generators of analytic semigroups have the following characterization:
A closed, densely defined linear operator "A" on a Banach space "X" is the generator of an analytic semigroup if and only if there exists an "ω" ∈ R such that the half-plane Re("λ") > "ω" is contained in the resolvent set of "A" and, moreover, there is a constant "C" such that for the resolvent formula_1 of the operator "A" we have
formula_2
for Re("λ") > "ω". Such operators are called "sectorial". If this is the case, then the resolvent set actually contains a sector of the form
formula_3
for some "δ" > 0, and an analogous resolvent estimate holds in this sector. Moreover, the semigroup is represented by
formula_4
where "γ" is any curve from "e"−"iθ"∞ to "e"+"iθ"∞ such that "γ" lies entirely in the sector
formula_5
with π/2 < "θ" < π/2 + "δ". | [
{
"math_id": 0,
"text": "\\Delta_{\\theta} = \\{ 0 \\} \\cup \\{ t \\in \\mathbb{C} : | \\mathrm{arg}(t) | < \\theta \\},"
},
{
"math_id": 1,
"text": "R_\\lambda(A)"
},
{
"math_id": 2,
"text": "\\| R_{\\lambda} (A) \\| \\leq \\frac{C}{| \\lambda - \\omega |}"
},
{
"math_id": 3,
"text": "\\left\\{ \\lambda \\in \\mathbf{C} : | \\mathrm{arg} (\\lambda - \\omega) | < \\frac{\\pi}{2} + \\delta \\right\\}"
},
{
"math_id": 4,
"text": "\\exp (At) = \\frac1{2 \\pi i} \\int_{\\gamma} e^{\\lambda t} ( \\lambda \\mathrm{id} - A )^{-1} \\, \\mathrm{d} \\lambda,"
},
{
"math_id": 5,
"text": "\\big\\{ \\lambda \\in \\mathbf{C} : | \\mathrm{arg} (\\lambda - \\omega) | \\leq \\theta \\big\\},"
}
]
| https://en.wikipedia.org/wiki?curid=13505688 |
13508240 | Planarity | Puzzle computer game involving planar graphs
Planarity is a 2005 puzzle computer game by John Tantalo, based on a concept by Mary Radcliffe at Western Michigan University.
The name comes from the concept of planar graphs in graph theory; these are graphs that can be embedded in the Euclidean plane so that no edges intersect. By Fáry's theorem, if a graph is planar, it can be drawn without crossings so that all of its edges are straight line segments. In the planarity game, the player is presented with a circular layout of a planar graph, with all the vertices placed on a single circle and with many crossings. The goal for the player is to eliminate all of the crossings and construct a straight-line embedding of the graph by moving the vertices one by one into better positions.
History and versions.
The game was written in Flash by John Tantalo at Case Western Reserve University in 2005. Online popularity and the local notoriety he gained placed Tantalo as one of Cleveland's most interesting people for 2006. It in turn has inspired the creation of a GTK+ version by Xiph.org's Chris Montgomery, which possesses additional level generation algorithms and the ability to manipulate multiple nodes at once.
Puzzle generation algorithm.
The definition of the planarity puzzle does not depend on how the planar graphs in the puzzle are generated, but the original implementation uses the following algorithm: generate a set of random lines in the plane, in general position so that no two are parallel and no three meet in a single point; create a vertex for each point where two lines cross; and create an edge for each segment of a line connecting two consecutive crossings along that line. The result is the planar graph of the line arrangement.
If a graph is generated from formula_0 lines, then the graph will have exactly formula_1 vertices (each line has formula_2 vertices, and each vertex is shared with one other line) and formula_3 edges (each line contains formula_4 edges). The first level of Planarity is built with formula_5 lines, so it has formula_6 vertices and formula_7 edges. Each level after is generated by one more line than the last. If a level was generated with formula_0 lines, then the next level has formula_0 more vertices and formula_8 more edges.
The best known algorithms from computational geometry for constructing the graphs of line arrangements solve the problem in formula_9 time, linear in the size of the graph to be constructed, but they are somewhat complex. Alternatively and more simply, it is possible to index each crossing point by the pair of lines that cross at that point, sort the crossings along each line by their formula_10-coordinates, and use this sorted ordering to generate the edges of the planar graph, in near-optimal formula_11 time. Once the vertices and edges of the graph have been generated, they may be placed evenly around a circle using a random permutation.
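A minimal Python sketch of the simpler generation approach described above (random lines in general position, crossings indexed by pairs of lines, edges between consecutive crossings sorted along each line, and a random circular layout) might look as follows; the function and variable names are illustrative, not taken from the original implementation.

```python
import itertools, math, random

def arrangement_graph(num_lines, seed=0):
    """Build the planar graph of an arrangement of random lines y = m*x + b.
    General position is assumed: no parallel lines, no triple crossings."""
    rnd = random.Random(seed)
    lines = [(rnd.uniform(-2, 2), rnd.uniform(-1, 1)) for _ in range(num_lines)]

    # Vertices: one per pair of crossing lines, indexed by the pair (i, j).
    crossing_x = {}
    for i, j in itertools.combinations(range(num_lines), 2):
        (m1, b1), (m2, b2) = lines[i], lines[j]
        crossing_x[(i, j)] = (b2 - b1) / (m1 - m2)

    # Edges: consecutive crossings along each line, after sorting by x-coordinate.
    edges = []
    for i in range(num_lines):
        on_line = sorted((x, pair) for pair, x in crossing_x.items() if i in pair)
        edges += [(a[1], b[1]) for a, b in zip(on_line, on_line[1:])]

    # Initial puzzle layout: vertices evenly spaced on a circle, in random order.
    order = list(crossing_x)
    rnd.shuffle(order)
    layout = {v: (math.cos(2 * math.pi * k / len(order)),
                  math.sin(2 * math.pi * k / len(order)))
              for k, v in enumerate(order)}
    return layout, edges

layout, edges = arrangement_graph(4)
print(len(layout), len(edges))   # 6 vertices and 8 edges for L = 4, matching the counts above
```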
Related theoretical research.
The problem of determining whether a graph is planar can be solved in linear time, and any such graph is guaranteed to have a straight-line embedding by Fáry's theorem, that can also be found from the planar embedding in linear time. Therefore, any puzzle could be solved in linear time by a computer. However, these puzzles are not as straightforward for human players to solve.
In the field of computational geometry, the process of moving a subset of the vertices in a graph embedding to eliminate edge crossings has been studied by Pach and Tardos (2002), and others, inspired by the planarity puzzle. The results of these researchers show that (in theory, assuming that the field of play is the infinite plane rather than a bounded rectangle) it is always possible to solve the puzzle while leaving formula_12 of the formula_13 input vertices fixed in place at their original positions, for a constant formula_14 that has not been determined precisely but lies between 1/4 and slightly less than 1/2. When the planar graph to be untangled is a cycle graph, a larger number of vertices may be fixed in place. However, determining the largest number of vertices that may be left in place for a particular input puzzle (or equivalently, the smallest number of moves needed to solve the puzzle) is NP-complete.
It has been shown that the randomized circular layout used for the initial state of Planarity is nearly the worst possible in terms of its number of crossings: regardless of what planar graph is to be tangled, the expected value of the number of crossings for this layout is within a factor of three of the largest number of crossings among all layouts.
In 2014, mathematician David Eppstein published a paper providing an effective algorithm for solving planar graphs generated by the original Planarity game, based on the specifics of the puzzle generation algorithm.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "L"
},
{
"math_id": 1,
"text": "\\tbinom{L}{2} = \\tfrac{L(L-1)}{2}"
},
{
"math_id": 2,
"text": "L-1"
},
{
"math_id": 3,
"text": "L(L-2)"
},
{
"math_id": 4,
"text": "L-2"
},
{
"math_id": 5,
"text": "L=4"
},
{
"math_id": 6,
"text": "L(L-1)/2=6"
},
{
"math_id": 7,
"text": "L(L-2)=8"
},
{
"math_id": 8,
"text": "2L-1"
},
{
"math_id": 9,
"text": "O(L^2)"
},
{
"math_id": 10,
"text": "x"
},
{
"math_id": 11,
"text": "O(L^2\\log L)"
},
{
"math_id": 12,
"text": "n^\\epsilon"
},
{
"math_id": 13,
"text": "n"
},
{
"math_id": 14,
"text": "\\epsilon"
}
]
| https://en.wikipedia.org/wiki?curid=13508240 |
1350841 | Toda field theory | Special quantum field theory
In mathematics and physics, specifically the study of field theory and partial differential equations, a Toda field theory, named after Morikazu Toda, is specified by a choice of Lie algebra and a specific Lagrangian.
Formulation.
Fixing the Lie algebra to have rank formula_0, that is, the Cartan subalgebra of the algebra has dimension formula_0, the Lagrangian can be written
formula_1
The background spacetime is 2-dimensional Minkowski space, with space-like coordinate formula_2 and timelike coordinate formula_3. Greek indices indicate spacetime coordinates.
For some choice of root basis, formula_4 is the formula_5th simple root. This provides a basis for the Cartan subalgebra, allowing it to be identified with formula_6.
Then the field content is a collection of formula_0 scalar fields formula_7, which are scalar in the sense that they transform trivially under Lorentz transformations of the underlying spacetime.
The inner product formula_8 is the restriction of the Killing form to the Cartan subalgebra.
The formula_9 are integer constants, known as Kac labels or Dynkin labels.
The physical constants are the mass formula_10 and the coupling constant formula_11.
Classification of Toda field theories.
Toda field theories are classified according to their associated Lie algebra.
Toda field theories usually refer to theories with a finite Lie algebra. If the Lie algebra is an affine Lie algebra, it is called an affine Toda field theory (after the component of φ which decouples is removed). If it is hyperbolic, it is called a hyperbolic Toda field theory.
Toda field theories are integrable models and their solutions describe solitons.
Examples.
Liouville field theory is associated to the A1 Cartan matrix, which corresponds to the Lie algebra formula_12 in the classification of Lie algebras by Cartan matrices. The algebra formula_12 has only a single simple root.
The sinh-Gordon model is the affine Toda field theory with the generalized Cartan matrix
formula_13
and a positive value for β after we project out a component of φ which decouples.
The sine-Gordon model is the model with the same Cartan matrix but an imaginary β. This Cartan matrix corresponds to the Lie algebra formula_12. This has a single simple root, formula_14 and Coxeter label formula_15, but the Lagrangian is modified for the affine theory: there is also an "affine root" formula_16 and Coxeter label formula_17. One can expand formula_18 as formula_19, but for the affine root formula_20, so the formula_21 component decouples.
The sum is formula_22 If formula_11 is purely imaginary, that is, formula_23 with formula_24 real and, without loss of generality, positive, then this is formula_25. The Lagrangian is then
formula_26
which is the sine-Gordon Lagrangian.
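Spelling out the step from the Toda potential term to the cosine term, a short worked equation in the conventions above (added here for clarity, not part of the original text):

```latex
% Substituting \beta = ib into the affine Toda potential term:
-\frac{m^2}{\beta^2}\sum_{i=0}^{1} n_i \exp(\beta\alpha_i\phi)
  = -\frac{m^2}{(ib)^2}\left(e^{ib\phi} + e^{-ib\phi}\right)
  = \frac{m^2}{b^2}\,2\cos(b\phi)
  = \frac{2m^2}{b^2}\cos(b\phi),
% which is exactly the potential appearing in the sine-Gordon Lagrangian above.
```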
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "r"
},
{
"math_id": 1,
"text": "\\mathcal{L}=\\frac{1}{2}\\left\\langle \\partial_\\mu \\phi, \\partial^\\mu \\phi \\right\\rangle\n-\\frac{m^2}{\\beta^2}\\sum_{i=1}^r n_i \\exp(\\beta \\langle\\alpha_i, \\phi\\rangle)."
},
{
"math_id": 2,
"text": "x"
},
{
"math_id": 3,
"text": "t"
},
{
"math_id": 4,
"text": "\\alpha_i"
},
{
"math_id": 5,
"text": "i"
},
{
"math_id": 6,
"text": "\\mathbb{R}^r"
},
{
"math_id": 7,
"text": "\\phi_i"
},
{
"math_id": 8,
"text": "\\langle\\cdot, \\cdot\\rangle"
},
{
"math_id": 9,
"text": "n_i"
},
{
"math_id": 10,
"text": "m"
},
{
"math_id": 11,
"text": "\\beta"
},
{
"math_id": 12,
"text": "\\mathfrak{su}(2)"
},
{
"math_id": 13,
"text": "\\begin{pmatrix} 2&-2 \\\\ -2&2 \\end{pmatrix}"
},
{
"math_id": 14,
"text": "\\alpha_1 = 1"
},
{
"math_id": 15,
"text": "n_1 = 1"
},
{
"math_id": 16,
"text": "\\alpha_0 = -1"
},
{
"math_id": 17,
"text": "n_0 = 1"
},
{
"math_id": 18,
"text": "\\phi"
},
{
"math_id": 19,
"text": "\\phi_0 \\alpha_0 + \\phi_1 \\alpha_1"
},
{
"math_id": 20,
"text": "\\langle \\alpha_0, \\alpha_0 \\rangle = 0"
},
{
"math_id": 21,
"text": "\\phi_0"
},
{
"math_id": 22,
"text": "\\sum_{i=0}^1 n_i\\exp(\\beta \\alpha_i\\phi) = \\exp(\\beta \\phi) + \\exp(-\\beta\\phi)."
},
{
"math_id": 23,
"text": "\\beta = ib"
},
{
"math_id": 24,
"text": "b"
},
{
"math_id": 25,
"text": "2\\cos(b\\phi)"
},
{
"math_id": 26,
"text": "\\mathcal{L} = \\frac{1}{2}\\partial_\\mu \\phi \\partial^\\mu \\phi + \\frac{2m^2}{b^2}\\cos(b\\phi),"
}
]
| https://en.wikipedia.org/wiki?curid=1350841 |
1350865 | Affine Lie algebra | In mathematics, an affine Lie algebra is an infinite-dimensional Lie algebra that is constructed in a canonical fashion out of a finite-dimensional simple Lie algebra. Given an affine Lie algebra, one can also form the associated affine Kac-Moody algebra, as described below. From a purely mathematical point of view, affine Lie algebras are interesting because their representation theory, like representation theory of finite-dimensional semisimple Lie algebras, is much better understood than that of general Kac–Moody algebras. As observed by Victor Kac, the character formula for representations of affine Lie algebras implies certain combinatorial identities, the Macdonald identities.
Affine Lie algebras play an important role in string theory and two-dimensional conformal field theory due to the way they are constructed: starting from a simple Lie algebra formula_0, one considers the loop algebra, formula_1, formed by the formula_0-valued functions on a circle (interpreted as the closed string) with pointwise commutator. The affine Lie algebra formula_2 is obtained by adding one extra dimension to the loop algebra and modifying the commutator in a non-trivial way, which physicists call a quantum anomaly (in this case, the anomaly of the WZW model) and mathematicians a central extension. More generally,
if σ is an automorphism of the simple Lie algebra formula_0 associated to an automorphism of its Dynkin diagram, the twisted loop algebra formula_3 consists of formula_0-valued functions "f" on the real line which satisfy
the twisted periodicity condition "f"("x" + 2"π") = "σ" "f"("x"). Their central extensions are precisely the twisted affine Lie algebras. The point of view of string theory helps to understand many deep properties of affine Lie algebras, such as the fact that the characters of their representations transform amongst themselves under the modular group.
Affine Lie algebras from simple Lie algebras.
Definition.
If formula_0 is a finite-dimensional simple Lie algebra, the corresponding
affine Lie algebra formula_2 is constructed as a central extension of the loop algebra formula_4, with one-dimensional center formula_5
As a vector space,
formula_6
where formula_7 is the complex vector space of Laurent polynomials in the indeterminate "t". The Lie bracket is defined by the formula
formula_8
for all formula_9 and formula_10, where formula_11 is the Lie bracket in the Lie algebra formula_0 and formula_12 is the Cartan-Killing form on formula_13
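As an illustration of this bracket, the following Python sketch implements it for the loop algebra of sl(2,C) in the fundamental representation; the choice of algebra, the normalization of the invariant form (the Killing form of sl(2) as 4·tr(ab) in this representation), and all names are assumptions made for this example, not fixed by the text.

```python
import numpy as np

# sl(2,C) in the fundamental representation (illustrative choice)
e = np.array([[0, 1], [0, 0]], dtype=complex)
f = np.array([[0, 0], [1, 0]], dtype=complex)
h = np.array([[1, 0], [0, -1]], dtype=complex)

def killing(a, b):
    # for sl(2) the Killing form equals 4*tr(ab) in the fundamental representation
    return 4 * np.trace(a @ b)

def affine_bracket(x, y):
    """Bracket of elements of the affine algebra.
    An element is (loops, z): a dict {n: matrix} standing for sum_n a_n (x) t^n, plus z*c."""
    (xs, _), (ys, _) = x, y                      # c is central, so its coefficient drops out
    loops, central = {}, 0j
    for n, a in xs.items():
        for m, b in ys.items():
            comm = a @ b - b @ a                 # [a, b] (x) t^{n+m}
            loops[n + m] = loops.get(n + m, np.zeros((2, 2), dtype=complex)) + comm
            if n + m == 0:
                central += killing(a, b) * n     # <a|b> n delta_{m+n,0} c
    return loops, central

# [e (x) t^2, f (x) t^-2] = h (x) t^0 + 2<e|f> c, with <e|f> = 4, so the central part is 8
loops, z = affine_bracket(({2: e}, 0), ({-2: f}, 0))
print(np.allclose(loops[0], h), z)   # True (8+0j)
```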
The affine Lie algebra corresponding to a finite-dimensional semisimple Lie algebra is the direct sum of the affine Lie algebras corresponding to its simple summands. There is a distinguished derivation of the affine Lie algebra defined by
formula_14
The corresponding affine Kac–Moody algebra is defined as a semidirect product by adding an extra generator "d" that satisfies ["d", "A"] = "δ"("A").
Constructing the Dynkin diagrams.
The Dynkin diagram of each affine Lie algebra consists of that of the corresponding simple Lie algebra plus an additional node, which corresponds to the addition of an imaginary root. Of course, such a node cannot be attached to the Dynkin diagram in just any location, but for each simple Lie algebra there exists a number of possible attachments equal to the cardinality of the group of outer automorphisms of the Lie algebra. In particular, this group always contains the identity element, and the corresponding affine Lie algebra is called an untwisted affine Lie algebra. When the simple algebra admits automorphisms that are not inner automorphisms, one may obtain other Dynkin diagrams and these correspond to twisted affine Lie algebras.
Classifying the central extensions.
The attachment of an extra node to the Dynkin diagram of the corresponding simple Lie algebra corresponds to the following construction. An affine Lie algebra can always be constructed as a central extension of the loop algebra of the corresponding simple Lie algebra. If one wishes to begin instead with a semisimple Lie algebra, then one needs to centrally extend by a number of elements equal to the number of simple components of the semisimple algebra. In physics, one often considers instead the direct sum of a semisimple algebra and an abelian algebra formula_15. In this case one also needs to add "n" further central elements for the "n" abelian generators.
The second integral cohomology of the loop group of the corresponding simple compact Lie group is isomorphic to the integers. Central extensions of the affine Lie group by a single generator are topologically circle bundles over this free loop group, which are classified by a degree-two cohomology class known as the first Chern class of the fibration. Therefore, the central extensions of an affine Lie group are classified by a single parameter "k" which is called the "level" in the physics literature, where it first appeared. Unitary highest weight representations of the affine compact groups only exist when "k" is a natural number. More generally, if one considers a semi-simple algebra, there is a central charge for each simple component.
Structure.
Cartan–Weyl basis.
As in the finite case, determining the Cartan–Weyl basis is an important step in determining the structure of affine Lie algebras.
Fix a finite-dimensional, simple, complex Lie algebra formula_0 with Cartan subalgebra formula_16 and a particular root system formula_17. Introducing the notation formula_18, one can attempt to extend a Cartan–Weyl basis formula_19 for formula_0 to one for the affine Lie algebra, given by formula_20, with formula_21 forming an abelian subalgebra.
The eigenvalues of formula_22 and formula_23 on formula_24 are formula_25 and formula_26 respectively and independently of formula_27. Therefore the root formula_28 is infinitely degenerate with respect to this abelian subalgebra. Appending the derivation described above to the abelian subalgebra turns the abelian subalgebra into a Cartan subalgebra for the affine Lie algebra, with eigenvalues formula_29 for formula_30
Killing form.
The Killing form can almost be completely determined using its invariance property. Using the notation formula_31 for the Killing form on formula_0 and formula_32 for the Killing form on the affine Kac–Moody algebra,
formula_33
formula_34
formula_35
where only the last equation is not fixed by invariance and instead chosen by convention. Notably, the restriction of formula_32 to the formula_36 subspace gives a bilinear form with signature formula_37.
Write the affine root associated with formula_24 as formula_38. Defining formula_39, this can be rewritten
formula_40
The full set of roots is
formula_41
Then formula_42 is unusual as it has zero length: formula_43 where formula_44 is the bilinear form on the roots induced by the Killing form.
Affine simple root.
In order to obtain a basis of simple roots for the affine algebra, an extra simple root must be appended, and is given by
formula_45
where formula_46 is the highest root of formula_0, using the usual notion of height of a root. This allows definition of the extended Cartan matrix and extended Dynkin diagrams.
Representation theory.
The representation theory for affine Lie algebras is usually developed using Verma modules. Just as in the case of semi-simple Lie algebras, these are highest weight modules. There are no finite-dimensional representations; this follows from the fact that the null vectors of a finite-dimensional Verma module are necessarily zero; whereas those for the affine Lie algebras are not. Roughly speaking, this follows because the Killing form is Lorentzian in the formula_47 directions, thus formula_48 are sometimes called "lightcone coordinates" on the string. The "radially ordered" current operator products can be understood to be time-like normal ordered by taking formula_49 with formula_50 the time-like direction along the string world sheet and formula_51 the spatial direction.
Vacuum representation of rank "k".
The representations are constructed in more detail as follows.
Fix a Lie algebra formula_0 and basis formula_52. Then formula_53 is a basis for the corresponding loop algebra, and formula_54 is a basis for the affine Lie algebra formula_55.
The vacuum representation of rank formula_56, denoted formula_57 where formula_58, is the complex representation with basis
formula_59
and define the action of formula_55 on formula_60 by (with formula_61)
formula_62
formula_63
Affine Vertex Algebra.
The vacuum representation in fact can be equipped with vertex algebra structure, in which case it is called "the" affine vertex algebra of rank formula_56. The affine Lie algebra naturally extends to the Kac–Moody algebra, with the differential formula_64 represented by the translation operator formula_65 in the vertex algebra.
Weyl group and characters.
The Weyl group of an affine Lie algebra can be written as a semi-direct product of the Weyl group of the zero-mode algebra (the Lie algebra used to define the loop algebra) and the coroot lattice.
The Weyl character formula of the algebraic characters of the affine Lie algebras generalizes to the Weyl-Kac character formula. A number of interesting constructions follow from these. One may construct generalizations of the Jacobi theta function. These theta functions transform under the modular group. The usual denominator identities of semi-simple Lie algebras generalize as well; because the characters can be written as "deformations" or q-analogs of the highest weights, this led to many new combinatoric identities, including many previously unknown identities for the Dedekind eta function. These generalizations can be viewed as a practical example of the Langlands program.
Applications.
Due to the Sugawara construction, the universal enveloping algebra of any affine Lie algebra has the Virasoro algebra as a subalgebra. This allows affine Lie algebras to serve as symmetry algebras of conformal field theories such as WZW models or coset models. As a consequence, affine Lie algebras also appear in the worldsheet description of string theory.
Example.
The Heisenberg algebra defined by generators formula_66 satisfying commutation relations
formula_67
can be realized as the affine Lie algebra formula_68.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\mathfrak{g}"
},
{
"math_id": 1,
"text": "L\\mathfrak{g}"
},
{
"math_id": 2,
"text": "\\hat{\\mathfrak{g}}"
},
{
"math_id": 3,
"text": "L_\\sigma\\mathfrak{g}"
},
{
"math_id": 4,
"text": "\\mathfrak{g}\\otimes\\mathbb{\\Complex}[t,t^{-1}]"
},
{
"math_id": 5,
"text": "\\mathbb{\\Complex}c."
},
{
"math_id": 6,
"text": "\\widehat{\\mathfrak{g}}=\\mathfrak{g}\\otimes\\mathbb{\\Complex}[t,t^{-1}]\\oplus\\mathbb{\\Complex}c,"
},
{
"math_id": 7,
"text": "\\mathbb{\\Complex}[t,t^{-1}]"
},
{
"math_id": 8,
"text": "[a\\otimes t^n+\\alpha c, b\\otimes t^m+\\beta c]=[a,b]\\otimes t^{n+m}+\\langle a|b\\rangle n\\delta_{m+n,0}c"
},
{
"math_id": 9,
"text": "a,b\\in\\mathfrak{g}, \\alpha,\\beta\\in\\mathbb{\\Complex}"
},
{
"math_id": 10,
"text": "n,m\\in\\mathbb{Z}"
},
{
"math_id": 11,
"text": "[a,b]"
},
{
"math_id": 12,
"text": "\\langle\\cdot |\\cdot\\rangle"
},
{
"math_id": 13,
"text": "\\mathfrak{g}."
},
{
"math_id": 14,
"text": " \\delta (a\\otimes t^m+\\alpha c) = t{d\\over dt} (a\\otimes t^m)."
},
{
"math_id": 15,
"text": "\\mathbb{\\Complex}^n"
},
{
"math_id": 16,
"text": "\\mathfrak{h}"
},
{
"math_id": 17,
"text": "\\Delta"
},
{
"math_id": 18,
"text": "X_n = X\\otimes t^n,"
},
{
"math_id": 19,
"text": "\\{H^i\\} \\cup \\{E^\\alpha|\\alpha \\in \\Delta\\}"
},
{
"math_id": 20,
"text": "\\{H^i_n\\} \\cup \\{c\\} \\cup \\{E^\\alpha_n\\}"
},
{
"math_id": 21,
"text": "\\{H^i_0\\} \\cup \\{c\\}"
},
{
"math_id": 22,
"text": "ad(H^i_0)"
},
{
"math_id": 23,
"text": "ad(c)"
},
{
"math_id": 24,
"text": "E^\\alpha_n"
},
{
"math_id": 25,
"text": "\\alpha^i"
},
{
"math_id": 26,
"text": "0"
},
{
"math_id": 27,
"text": "n"
},
{
"math_id": 28,
"text": "\\alpha"
},
{
"math_id": 29,
"text": "(\\alpha^1, \\cdots, \\alpha^{dim \\mathfrak{h}}, 0, n)"
},
{
"math_id": 30,
"text": "E^\\alpha_n."
},
{
"math_id": 31,
"text": "B"
},
{
"math_id": 32,
"text": "\\hat B"
},
{
"math_id": 33,
"text": "\\hat B(X_n, Y_m) = B(X,Y)\\delta_{n+m,0},"
},
{
"math_id": 34,
"text": "\\hat B(X_n, c) = 0, \\hat B(X_n, d) = 0"
},
{
"math_id": 35,
"text": "\\hat B(c, c) = 0, \\hat B(c, d) = 1, \\hat B(d,d) = 0,"
},
{
"math_id": 36,
"text": "c,d"
},
{
"math_id": 37,
"text": "(+,-)"
},
{
"math_id": 38,
"text": "\\hat \\alpha = (\\alpha;0;n)"
},
{
"math_id": 39,
"text": "\\delta = (0,0,1)"
},
{
"math_id": 40,
"text": "\\hat \\alpha = \\alpha + n\\delta."
},
{
"math_id": 41,
"text": "\\hat \\Delta = \\{\\alpha + n\\delta|n \\in \\mathbb Z, \\alpha \\in \\Delta\\}\\cup \\{n\\delta|n \\in \\mathbb Z, n \\neq 0\\}."
},
{
"math_id": 42,
"text": "\\delta"
},
{
"math_id": 43,
"text": "(\\delta, \\delta) = 0"
},
{
"math_id": 44,
"text": "(\\cdot,\\cdot)"
},
{
"math_id": 45,
"text": "\\alpha_0 = -\\theta + \\delta"
},
{
"math_id": 46,
"text": "\\theta"
},
{
"math_id": 47,
"text": "c,\\delta"
},
{
"math_id": 48,
"text": "(z, \\bar{z})"
},
{
"math_id": 49,
"text": "z=\\exp(\\tau + i\\sigma)"
},
{
"math_id": 50,
"text": "\\tau"
},
{
"math_id": 51,
"text": "\\sigma"
},
{
"math_id": 52,
"text": "\\{J^\\rho\\}"
},
{
"math_id": 53,
"text": "\\{J^\\rho_n\\} = \\{J^\\rho \\otimes t^n\\}"
},
{
"math_id": 54,
"text": "\\{J^\\rho_n\\}\\cup \\{c\\}"
},
{
"math_id": 55,
"text": "\\hat \\mathfrak{g}"
},
{
"math_id": 56,
"text": "k"
},
{
"math_id": 57,
"text": "V_k(\\mathfrak g)"
},
{
"math_id": 58,
"text": "k \\in \\mathbb C"
},
{
"math_id": 59,
"text": "\\{v^{\\rho_1\\cdots \\rho_m}_{n_1\\cdots n_m}:n_1\\geq \\cdots \\geq n_m \\geq 1, \\rho_1 \\leq \\cdots \\leq \\rho_m\\} \\cup \\{\\Omega\\}"
},
{
"math_id": 60,
"text": "V = V_k(\\mathfrak{g})"
},
{
"math_id": 61,
"text": "n > 0"
},
{
"math_id": 62,
"text": "c = k\\text{id}_V, \\, J^\\rho_n \\Omega = 0,"
},
{
"math_id": 63,
"text": "J^\\rho_{-n}\\Omega = v^\\rho_n \\, J^\\rho_{-n}v^{\\rho_1\\cdots \\rho_m}_{n_1\\cdots n_m} = v^{\\rho\\rho_1\\cdots \\rho_m}_{n n_1\\cdots n_m}."
},
{
"math_id": 64,
"text": "d"
},
{
"math_id": 65,
"text": "T"
},
{
"math_id": 66,
"text": "a_n, n \\in \\mathbb{Z}"
},
{
"math_id": 67,
"text": "[a_m, a_n] = m\\delta_{m+n,0}c"
},
{
"math_id": 68,
"text": "\\hat \\mathfrak u(1)"
}
]
| https://en.wikipedia.org/wiki?curid=1350865 |
13510193 | Densely defined operator | Function that is defined almost everywhere (mathematics)
In mathematics – specifically, in operator theory – a densely defined operator or partially defined operator is a type of partially defined function. In a topological sense, it is a linear operator that is defined "almost everywhere". Densely defined operators often arise in functional analysis as operations that one would like to apply to a larger class of objects than those for which they "a priori" "make sense".
A closed operator that is used in practice is often densely defined.
Definition.
A densely defined linear operator formula_0 from one topological vector space, formula_1 to another one, formula_2 is a linear operator that is defined on a dense linear subspace formula_3 of formula_4 and takes values in formula_2 written formula_5 Sometimes this is abbreviated as formula_6 when the context makes it clear that formula_4 might not be the set-theoretic domain of formula_7
Examples.
Consider the space formula_8 of all real-valued, continuous functions defined on the unit interval; let formula_9 denote the subspace consisting of all continuously differentiable functions. Equip formula_8 with the supremum norm formula_10; this makes formula_8 into a real Banach space. The differentiation operator formula_11 given by formula_12 is a densely defined operator from formula_8 to itself, defined on the dense subspace formula_13 The operator formula_14 is an example of an unbounded linear operator, since
formula_15
This unboundedness causes problems if one wishes to somehow continuously extend the differentiation operator formula_11 to the whole of formula_16
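A short symbolic check of the ratio above (a sketch using sympy, relying on the observation that both |u_n| and |Du_n| are decreasing on [0, 1] and so attain their suprema at x = 0):

```python
import sympy as sp

x, n = sp.symbols('x n', positive=True)
u = sp.exp(-n * x)                           # u_n(x) = exp(-n x)

# supremum norms on [0, 1]: both |u_n| and |u_n'| are decreasing, so the sup is at x = 0
sup_u = sp.Abs(u).subs(x, 0)                 # = 1
sup_Du = sp.Abs(sp.diff(u, x)).subs(x, 0)    # = n
print(sp.simplify(sup_Du / sup_u))           # n  -> the ratio grows without bound
```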
The Paley–Wiener integral, on the other hand, is an example of a continuous extension of a densely defined operator. In any abstract Wiener space formula_17 with adjoint formula_18 there is a natural continuous linear operator (in fact it is the inclusion, and is an isometry) from formula_19 to formula_20 under which formula_21 goes to the equivalence class formula_22 of formula_23 in formula_24 It can be shown that formula_19 is dense in formula_25 Since the above inclusion is continuous, there is a unique continuous linear extension formula_26 of the inclusion formula_27 to the whole of formula_25 This extension is the Paley–Wiener map.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "T"
},
{
"math_id": 1,
"text": "X,"
},
{
"math_id": 2,
"text": "Y,"
},
{
"math_id": 3,
"text": "\\operatorname{dom}(T)"
},
{
"math_id": 4,
"text": "X"
},
{
"math_id": 5,
"text": "T : \\operatorname{dom}(T) \\subseteq X \\to Y."
},
{
"math_id": 6,
"text": "T : X \\to Y"
},
{
"math_id": 7,
"text": "T."
},
{
"math_id": 8,
"text": "C^0([0, 1]; \\R)"
},
{
"math_id": 9,
"text": "C^1([0, 1]; \\R)"
},
{
"math_id": 10,
"text": "\\|\\,\\cdot\\,\\|_\\infty"
},
{
"math_id": 11,
"text": "D"
},
{
"math_id": 12,
"text": "(\\mathrm{D} u)(x) = u'(x)"
},
{
"math_id": 13,
"text": "C^1([0, 1]; \\R)."
},
{
"math_id": 14,
"text": "\\mathrm{D}"
},
{
"math_id": 15,
"text": "u_n (x) = e^{- n x} \\quad \\text{ has } \\quad \\frac{\\left\\|\\mathrm{D} u_n\\right\\|_{\\infty}}{\\left\\|u_n\\right\\|_\\infty} = n."
},
{
"math_id": 16,
"text": "C^0([0, 1]; \\R)."
},
{
"math_id": 17,
"text": "i : H \\to E"
},
{
"math_id": 18,
"text": "j := i^* : E^* \\to H,"
},
{
"math_id": 19,
"text": "j\\left(E^*\\right)"
},
{
"math_id": 20,
"text": "L^2(E, \\gamma; \\R),"
},
{
"math_id": 21,
"text": "j(f) \\in j\\left(E^*\\right) \\subseteq H"
},
{
"math_id": 22,
"text": "[f]"
},
{
"math_id": 23,
"text": "f"
},
{
"math_id": 24,
"text": "L^2(E, \\gamma; \\R)."
},
{
"math_id": 25,
"text": "H."
},
{
"math_id": 26,
"text": "I : H \\to L^2(E, \\gamma; \\R)"
},
{
"math_id": 27,
"text": "j\\left(E^*\\right) \\to L^2(E, \\gamma; \\R)"
}
]
| https://en.wikipedia.org/wiki?curid=13510193 |
1351125 | K-25 | Manhattan Project codename for a program to produce enriched uranium
K-25 was the codename given by the Manhattan Project to the program to produce enriched uranium for atomic bombs using the gaseous diffusion method. Originally the codename for the product, over time it came to refer to the project, the production facility located at the Clinton Engineer Works in Oak Ridge, Tennessee, the main gaseous diffusion building, and ultimately the site. When it was built in 1944, the four-story K-25 gaseous diffusion plant was the world's largest building, comprising over of floor space and a volume of .
Construction of the K-25 facility was undertaken by J. A. Jones Construction. At the height of construction, over 25,000 workers were employed on the site. Gaseous diffusion was but one of three enrichment technologies used by the Manhattan Project. Slightly enriched product from the S-50 thermal diffusion plant was fed into the K-25 gaseous diffusion plant. Its product in turn was fed into the Y-12 electromagnetic plant. The enriched uranium was used in the Little Boy atomic bomb used in the atomic bombing of Hiroshima. In 1946, the K-25 gaseous diffusion plant became capable of producing highly enriched product.
After the war, four more gaseous diffusion plants named K-27, K-29, K-31 and K-33 were added to the site. The K-25 site was renamed the Oak Ridge Gaseous Diffusion Plant in 1955. Production of enriched uranium ended in 1964, and gaseous diffusion finally ceased on the site on 27 August 1985. The Oak Ridge Gaseous Diffusion Plant was renamed the Oak Ridge K-25 Site in 1989 and the East Tennessee Technology Park in 1996. Demolition of all five gaseous diffusion plants was completed in February 2017.
Background.
The discovery of the neutron by James Chadwick in 1932, followed by that of nuclear fission in uranium by German chemists Otto Hahn and Fritz Strassmann in 1938, and its theoretical explanation (and naming) by Lise Meitner and Otto Frisch soon after, opened up the possibility of a controlled nuclear chain reaction with uranium. At the Pupin Laboratories at Columbia University, Enrico Fermi and Leo Szilard began exploring how this might be achieved. Fears that a German atomic bomb project would develop atomic weapons first, especially among scientists who were refugees from Nazi Germany and other fascist countries, were expressed in the Einstein-Szilard letter to the President of the United States, Franklin D. Roosevelt. This prompted Roosevelt to initiate preliminary research in late 1939.
Niels Bohr and John Archibald Wheeler applied the liquid drop model of the atomic nucleus to explain the mechanism of nuclear fission. As the experimental physicists studied fission, they uncovered puzzling results. George Placzek asked Bohr why uranium seemed to fission with both fast and slow neutrons. Walking to a meeting with Wheeler, Bohr had an insight that the fission at low energies was caused by the uranium-235 isotope, while at high energies it was mainly a reaction with the far more abundant uranium-238 isotope. The former makes up just 0.714 percent of the uranium atoms in natural uranium, about one in every 140; natural uranium is 99.28 percent uranium-238. There is also a tiny amount of uranium-234, which accounts for just 0.006 percent.
At Columbia, John R. Dunning believed this was the case, but Fermi was not so sure. The only way to settle this was to obtain a sample of uranium-235 and test it. He had Alfred O. C. Nier from the University of Minnesota prepare samples of uranium enriched in uranium-234, 235 and 238 using a mass spectrometer. These were ready in February 1940, and Dunning, Eugene T. Booth and Aristid von Grosse then carried out a series of experiments. They demonstrated that uranium-235 was indeed primarily responsible for fission with slow neutrons, but they were unable to determine precise neutron capture cross sections because their samples were not sufficiently enriched.
At the University of Birmingham in Britain, the Australian physicist Mark Oliphant assigned two refugee physicists—Otto Frisch and Rudolf Peierls—the task of investigating the feasibility of an atomic bomb, ironically because their status as enemy aliens precluded their working on secret projects like radar. Their March 1940 Frisch–Peierls memorandum indicated that the critical mass of uranium-235 was within an order of magnitude of , which was small enough to be carried by a bomber aircraft of the day.
Gaseous diffusion.
In April 1940, Jesse Beams, Ross Gunn, Fermi, Nier, Merle Tuve and Harold Urey had a meeting at the American Physical Society in Washington, D.C. At the time, the prospect of building an atomic bomb seemed dim, and even creating a chain reaction would likely require enriched uranium. They therefore recommended that research be conducted with the aim of developing the means to separate kilogram amounts of uranium-235. At a lunch on 21 May 1940, George B. Kistiakowsky suggested the possibility of using gaseous diffusion.
Gaseous diffusion is based on Graham's law, which states that the rate of effusion of a gas through a porous barrier is inversely proportional to the square root of the gas's molecular mass. In a container with a porous barrier containing a mixture of two gases, the lighter molecules will pass out of the container more rapidly than the heavier molecules. The gas leaving the container is slightly enriched in the lighter molecules, while the residual gas is slightly depleted. A container wherein the enrichment process takes place through gaseous diffusion is called a diffuser.
Gaseous diffusion had been used to separate isotopes before. Francis William Aston had used it to partially separate isotopes of neon in 1931, and Gustav Ludwig Hertz had improved on the method to almost completely separate neon by running it through a series of stages. In the United States, William D. Harkins had used it to separate chlorine. Kistiakowsky was familiar with the work of Charles G. Maier at the Bureau of Mines, who had also used the process to separate gases.
Uranium hexafluoride (UF6) was the only known compound of uranium sufficiently volatile to be used in the gaseous diffusion process. Before this could be done, the Special Alloyed Materials (SAM) Laboratories at Columbia University and the Kellex Corporation had to overcome formidable difficulties to develop a suitable barrier. Fluorine consists of only a single natural isotope 19F, so the 1 percent difference in molecular weights between 235UF6 and 238UF6 is due solely to the difference in weights of the uranium isotopes. For these reasons, UF6 was the only choice as a feedstock for the gaseous diffusion process. Uranium hexafluoride, a solid at room temperature, sublimes at at . Applying Graham's law to uranium hexafluoride:
formula_0
where:
"Rate1" is the rate of effusion of 235UF6.
"Rate2" is the rate of effusion of 238UF6.
"M1" is the molar mass of 235UF6 ≈ 235 + 6 × 19 = 349g·mol−1
"M2" is the molar mass of 238UF6 ≈ 238 + 6 × 19 = 352g·mol−1
Uranium hexafluoride is a highly corrosive substance. It is an oxidant and a Lewis acid which is able to bind to fluoride. It reacts with water to form a solid compound and is very difficult to handle on an industrial scale.
Organization.
Booth, Dunning and von Grosse investigated the gaseous diffusion process. In 1941, they were joined by Francis G. Slack from Vanderbilt University and Willard F. Libby from the University of California. In July 1941, an Office of Scientific Research and Development (OSRD) contract was awarded to Columbia University to study gaseous diffusion. With the help of the mathematician Karl P. Cohen, they built a twelve-stage pilot gaseous diffusion plant at the Pupin Laboratories. Initial tests showed that the stages were not as efficient as the theory would suggest; they would need about 4,600 stages to enrich to 90 percent uranium-235.
A secret contract was awarded to M. W. Kellogg for engineering studies in July 1941. This included the design and construction of a ten-stage pilot gaseous diffusion plant. On 14 December 1942, the Manhattan District, the US Army component of the Manhattan Project (as the effort to develop an atomic bomb became known) contracted Kellogg to design, build and operate a full-scale production plant. Unusually, the contract did not require any guarantees from Kellogg that it could actually accomplish this task. Because the scope of the project was not well defined, Kellogg and the Manhattan District agreed to defer any financial details to a later, cost-plus contract, which was executed in April 1944. Kellogg was then paid $2.5 million.
For security reasons, the Army had Kellogg establish a wholly-owned subsidiary, the Kellex Corporation, so the gaseous diffusion project could be kept separate from other company work. "Kell" stood for "Kellogg" and "X" for secret. Kellex operated as a self-contained and autonomous entity. Percival C. Keith, Kellogg's vice president of engineering, was placed in charge of Kellex. He drew extensively on Kellogg to staff the new company but also had to recruit staff from outside. Eventually, Kellex would have over 3,700 employees.
Dunning remained in charge at Columbia until 1May 1943, when the Manhattan District took over the contract from OSRD. By this time Slack's group had nearly 50 members. His was the largest group, and it was working on the most challenging problem: the design of a suitable barrier through which the gas could diffuse. Another 30 scientists and technicians were working in five other groups. Henry A. Boorse was responsible for the pumps; Booth for the cascade test units. Libby handled chemistry, Nier analytical work and Hugh C. Paxton, engineering support. The Army reorganized the research effort at Columbia, which became the Special Alloyed Materials (SAM) Laboratories. Urey was put in charge, Dunning becoming head of one of its divisions. It would remain this way until 1March 1945, when the SAM Laboratories were taken over by Union Carbide.
The expansion of the SAM Laboratories led to a search for more space. The Nash Garage Building at 3280 Broadway was purchased by Columbia University. Originally an automobile dealership, it was just a few blocks from the campus. Major Benjamin K. Hough Jr. was the Manhattan District's Columbia Area engineer, and he moved his offices there too. Kellex was in the Woolworth Building at 233 Broadway in Lower Manhattan. In January 1943, Lieutenant Colonel James C. Stowers was appointed New York Area Engineer, with responsibility for the entire K-25 Project. His small staff, initially of 20 military and civilian personnel but which gradually grew to over 70, was co-located in the Woolworth Building. The Manhattan District had its offices nearby at 270 Broadway until it moved to Oak Ridge, Tennessee, in August 1943.
Codename.
The codename "K-25" was a combination of the "K" from Kellex, and "25", a World War II-era code designation for uranium-235 (an isotope of element 92, mass number 235). The term was first used in Kellex internal reports for the end product, enriched uranium, in March 1943. By April 1943, the term "K-25 plant" was being used for the plant that created it. That month, the term "K-25 Project" was applied to the entire project to develop uranium enrichment using the gaseous diffusion process. When other "K-" buildings were added after the war, "K-25" became the name of the original, larger complex.
Research and development.
Diffusers.
The highly corrosive nature of uranium hexafluoride presented several technological challenges. Pipes and fittings that it came into contact with had to be made of or clad with nickel. This was feasible for small objects but impractical for the large diffusers, the tank-like containers that had to hold the gas under pressure. Nickel was a vital war material, and although the Manhattan Project could use its overriding priority to acquire it, making the diffusers out of solid nickel would deplete the national supply. The director of the Manhattan Project, Brigadier General Leslie R. Groves Jr., gave the contract to build the diffusers to Chrysler. In turn Chrysler president K. T. Keller assigned Carl Heussner, an expert in electroplating, the task of developing a process for electroplating such a large object. Senior Chrysler executives called this "Project X-100".
Electroplating used one-thousandth of the amount of nickel needed for a solid nickel diffuser. The SAM Laboratories had already attempted this and failed. Heussner experimented with a prototype in a building built within a building, and found that it could be done, so long as the series of pickling and scaling steps required were done without anything coming in contact with oxygen. Chrysler's entire factory at Lynch Road in Detroit was turned over to the manufacture of diffusers. The electroplating process required over of floor space, several thousand workers and a complicated air filtration system to ensure the nickel was not contaminated. By the war's end, Chrysler had built and shipped more than 3,500 diffusers.
Pumps.
The gaseous diffusion process required suitable pumps that had to meet stringent requirements. Like the diffusers, the pumps had to resist corrosion from the uranium hexafluoride feed. Corrosion would not only damage the pumps, it would contaminate the feed. There could be no leakage of uranium hexafluoride (especially if it was already enriched) or of oil, which would react with the uranium hexafluoride. The pumps needed to operate at high rates and handle a gas 12 times as dense as air. To meet these requirements, the SAM Laboratories chose to use centrifugal pumps. The desired compression ratio of 2.3:1 to 3.2:1 was unusually high for this type of pump. For some purposes, a reciprocating pump would suffice, and these were designed by Boorse at the SAM Laboratories, while Ingersoll Rand tackled the centrifugal pumps.
In early 1943, Ingersoll Rand pulled out. Keith approached the Clark Compressor Company and Worthington Pump and Machinery, but they turned it down, saying it could not be done. So Keith and Groves met with executives at Allis-Chalmers, who agreed to build a new factory to produce the pumps, even though the pump design was still uncertain. The SAM Laboratories came up with a design, and Westinghouse built some prototypes that were successfully tested. Then Judson Swearingen at the Elliott Company came up with a revolutionary and promising design that was mechanically stable with seals that would contain the gas. This design was manufactured by Allis-Chalmers.
Barriers.
Difficulties with the diffusers and pumps paled in significance beside those with the porous barrier. To work, the gaseous diffusion process required a barrier with microscopic holes, but not subject to plugging. It had to be porous but strong enough to handle the high pressures. And, like everything else, it had to resist corrosion from uranium hexafluoride. The latter criterion suggested a nickel barrier. Foster C. Nix at the Bell Telephone Laboratories experimented with nickel powder, while Edward O. Norris at the C. O. Jelliff Manufacturing Corporation and Edward Adler at the City College of New York worked on a design with electroplated nickel. Norris was an English interior decorator who had developed a very fine metal mesh for use with a spray gun. The design appeared too brittle and fragile for the proposed use, particularly on the higher stages of enrichment, but there was hope that this could be overcome.
In 1943, Urey brought in Hugh S. Taylor from Princeton University to look at the problem of a usable barrier. Libby made progress on understanding the chemistry of uranium hexafluoride, leading to ideas on how to prevent corrosion and plugging. Chemical researchers at the SAM Laboratories studied fluorocarbons, which resisted corrosion and could be used as lubricants and coolants in the gaseous diffusion plant. Despite this progress, the K-25 Project was in serious trouble without a suitable barrier, and by August 1943 it was facing cancellation. On 13 August Groves informed the Military Policy Committee (the senior committee that steered the Manhattan Project) that gaseous diffusion enrichment in excess of fifty percent was probably infeasible, and the gaseous diffusion plant would be limited to producing product with a lower enrichment which could be fed into the calutrons of the Y-12 electromagnetic plant. Urey therefore began preparations to mass-produce the Norris-Adler barrier, despite its problems.
Meanwhile, Union Carbide and Kellex had made researchers at the Bakelite Corporation, a subsidiary of Union Carbide, aware of Nix's unsuccessful efforts with powdered nickel barriers. To Frazier Groff and other researchers at Bakelite's laboratories in Bound Brook, New Jersey, it seemed that Nix was not taking advantage of the latest techniques, and they began their own development efforts. Both Bell and Bound Brook sent samples of their powdered nickel barriers to Taylor for evaluation, but he was unimpressed; neither had come up with a practical barrier. At Kellogg's laboratory in Jersey City, New Jersey, Clarence A. Johnson, who was aware of the steps taken by the SAM Laboratories to improve the Norris-Adler barrier, realized that they could also be taken with the Bakelite barrier. The result was a barrier better than either, although still short of what was required. At a meeting at Columbia with the Army in attendance on 20 October 1943, Keith proposed switching the development effort to the Johnson barrier. Urey balked at this, fearing this would destroy morale at the SAM Laboratories. The issue was put to Groves at a meeting on 3November 1943, and he decided to pursue development of both the Johnson and the Norris-Adler barriers.
Groves summoned British help, in the form of Wallace Akers and fifteen members of the British gaseous diffusion project, who reviewed the progress made thus far. Their verdict was that while the new barrier was potentially superior, Keith's undertaking to build a new facility to produce the new barrier in just four months, produce all the barriers required in another four and have the production facility up and running in just twelve "would be something of a miraculous achievement". On 16 January 1944, Groves ruled in favor of the Johnson barrier. Johnson built a pilot plant for the new process at the Nash Building. Taylor analyzed the sample barriers produced and pronounced only 5percent of them to be of acceptable quality. Edward Mack Jr. created his own pilot plant at Schermerhorn Hall at Columbia, and Groves obtained of nickel from the International Nickel Company. With plenty of nickel to work with, by April 1944 both pilot plants were producing barriers of acceptable quality at a 45 percent rate.
Construction.
The project site chosen was at the Clinton Engineer Works in Tennessee. The area was inspected by representatives of the Manhattan District, Kellex and Union Carbide on 18 January 1943. Consideration was also given to sites near the Shasta Dam in California and the Big Bend of the Columbia River in Washington state. The lower humidity of these areas made them more suitable for a gaseous diffusion plant, but the Clinton Engineer Works site was immediately available and otherwise suitable. Groves decided on the site in April 1943.
Under the contract, Kellex had responsibility not just for the design and engineering of the K-25 plant, but for its construction as well. The prime construction contractor was J. A. Jones Construction from Charlotte, North Carolina. It had impressed Groves with its work on several major Army construction projects, such as Camp Shelby, Mississippi. There were more than sixty subcontractors. Kellex engaged another construction company, Ford, Bacon & Davis, to build the fluorine and nitrogen facilities, and the conditioning plant. Construction work was initially the responsibility of Lieutenant Colonel Warren George, the chief of the construction division of the Clinton Engineer Works. Major W. P. Cornelius became the construction officer responsible for K-25 works on 31 July 1943. He was answerable to Stowers back in Manhattan. He became chief of the construction division on 1March 1946. J. J. Allison was the resident engineer from Kellex, and Edwin L. Jones, the General Manager of J. A. Jones.
Power plant.
Construction began before completion of the design for the gaseous diffusion process. Because of the large amount of electric power the K-25 plant was expected to consume, it was decided to provide it with its own electric power plant. While the Tennessee Valley Authority (TVA) believed it could supply the Clinton Engineer Works' needs, there was unease about relying on a single supplier when a power failure could cost the gaseous diffusion plant weeks of work, and the lines to TVA could be sabotaged. A local plant was more secure. The Kellex engineers were also attracted to the idea of being able to generate the variable frequency current required by the gaseous diffusion process without complicated transformers.
A site was chosen for this on the western edge of the Clinton Engineer Works site where it could draw cold water from the Clinch River and discharge warm water into Poplar Creek without affecting the inflow. Groves approved this location on 3 May 1943. Surveying began on the power plant site on 31 May, and J. A. Jones started construction work the following day. Because the bedrock was below the surface, the power plant was supported on 40 concrete-filled caissons. Installation of the first boiler commenced in October 1943. Construction work was complete by late September. To prevent sabotage, the power plant was connected to the gaseous diffusion plant by an underground conduit. Despite this, there was one act of sabotage, in which a nail was driven through the electric cable. The culprit was never found but was considered more likely to be a disgruntled employee than an Axis spy.
Electric power in the United States was generated at 60 hertz; the power house was able to generate variable frequencies between 45 and 60 hertz, and constant frequencies of 60 and 120 hertz. This capability was not ultimately required, and all but one of the K-25 systems ran on a constant 60 hertz, the exception using a constant 120 hertz. The first coal-fired boiler was started on 7 April 1944, followed by the second on 14 July and the third on 2 November. Each produced of steam per hour and . To obtain the fourteen turbine generators needed, Groves had to use the Manhattan Project's priority to overrule Julius Albert Krug, the director of the Office of War Utilities. The turbine generators had a combined output of 238,000 kilowatts. The power plant could also receive power from TVA. It was decommissioned in the 1960s and demolished in 1995.
Gaseous diffusion plant.
A site for the K-25 facility was chosen near the high school of the town of Wheat. As the dimensions of the K-25 facility became more apparent, it was decided to move it to a larger site near Poplar Creek, closer to the power plant. This site was approved on 24 June 1943. Considerable work was required to prepare the site. Existing roads in the area were improved to take heavy traffic. A road was built to connect the site to US Route 70, and another, long, to connect with Tennessee State Route 61. A ferry over the Clinch River was upgraded and then replaced with a long bridge in December 1943. A railroad spur was run from Blair, Tennessee, to the K-25 site. Some of sidings were also provided. The first carload of freight traversed the line on 18 September 1943.
It was initially intended that the construction workers should live off-site, but the poor condition of the roads and a shortage of accommodations in the area made commuting long and difficult, which in turn made it difficult to find and retain workers. Thus construction workers were housed in large hutment and trailer camps. The J. A. Jones camp for K-25 workers, known as Happy Valley, held 15,000 people. This required 8 dormitories, 17 barracks, 1,590 hutments, 1,153 trailers and 100 Victory Houses. A pumping station was built to supply drinking water from the Clinch River, along with a water treatment plant. Amenities included a school, eight cafeterias, a bakery, theater, three recreation halls, a warehouse and a cold storage plant. Ford, Bacon & Davis established a smaller camp for 2,100 people. Responsibility for the camps was transferred to the Roane-Anderson Company on 25 January 1946, and the school was transferred to district control in March 1946.
Work began on the main facility area on 20 October 1943. Although the site was generally flat, some of soil and rock had to be excavated from areas up to high, and six major areas had to be filled, to a maximum depth of . Normally buildings containing complicated heavy machinery would rest on concrete caissons down to the bedrock, but this would have required thousands of caissons. To save time, soil compaction and shallow footings were used instead. Layers were laid down and compacted with sheepsfoot rollers in the areas that had to be filled, and the footings were laid over compacted soil in the low-lying areas and the undisturbed soil in the areas that had been excavated. Activities overlapped, so concrete pouring began while grading was still going on. Cranes started lifting the steel frames into place on 19 January 1944.
Kellex's design for the main process building of K-25 called for a four-story U-shaped structure long containing 51 main process buildings and three purge cascade buildings. These were divided into nine sections. Within these were cells of six stages. The cells could be operated independently or consecutively within a section. Similarly, the sections could be operated separately or as part of a single cascade. When completed, there were 2,892 stages. The basement housed the auxiliary equipment, such as the transformers, switch gears, and air conditioning systems. The ground floor contained the cells. The third level contained the piping. The fourth floor was the operating floor, which contained the control room and the hundreds of instrument panels. From here, the operators monitored the process. The first section was ready for test runs on 17 April 1944, although the barriers were not yet ready to be installed.
The main process building surpassed The Pentagon as the largest building in the world, with a floor area of , and an enclosed volume of . Construction required of concrete and of gas pipes. Because uranium hexafluoride corrodes steel, steel piping had to be coated in nickel, and smaller pipes were made of copper or monel. The equipment operated under vacuum pressures, so plumbing had to be airtight. Special efforts were made to create as clean an environment as possible in areas where piping or fixtures were being installed. J. A. Jones established a special cleanliness unit on 18 April 1944. Buildings were completely sealed off, air was filtered, and all cleaning was done with vacuum cleaners and mopping. Workers wore white lint-free gloves. At the peak of construction activity in May 1945, 25,266 people were employed on the site.
Other buildings.
Although by far the largest, the main process building (K-300) was but one of many that made up the facility. There was a conditioning building (K-1401), where piping and equipment were cleaned prior to installation. A feed purification building (K-101) was built to remove impurities from the uranium hexafluoride, but never operated as such because the suppliers provided feed pure enough to be fed into the gaseous diffusion process. The three-story surge and waste removal building (K-601) processed the "tail" stream of depleted uranium hexafluoride. The air conditioning building (K-1401) provided per minute of clean, dry air. K-1201 compressed the air. The nitrogen plant (K-1408) provided gas for use as a pump sealant and to protect equipment from moist air.
The fluorine generating plant (K-1300) generated, bottled and stored fluorine. It had not been in great demand before the war, and Kellex and the Manhattan District considered four different processes for large-scale production. A process developed by the Hooker Chemical Company was chosen. Owing to the hazardous nature of fluorine, it was decided that shipping it across the United States was inadvisable and it should be manufactured on site at the Clinton Engineer Works. Two pump houses (K-801 and K-802) and two cooling towers (H-801 and H-802) provided of cooling water per day for the motors and compressors.
The administration building (K-1001) provided of office space. A laboratory building (K-1401) contained facilities for testing and analyzing feed and product. Five drum warehouses (K-1025-A to -E) had of floor space to store drums of uranium hexafluoride. There were also warehouses for general stores (K-1035), spare parts (K-1036), and equipment (K-1037). A cafeteria (K-1002) provided meal facilities, including a segregated lunch room for African Americans. There were three changing houses (K-1008-A, B and C), a dispensary (K-1003), an instrument repair building (K-1024), and a fire station (K-1021).
In mid-January 1945, Kellex proposed an extension to K-25 to allow product enrichment of up to 85 percent. Groves initially approved this but later canceled it in favor of a 540-stage side feed unit, which became known as K-27 and could process a slightly enriched product. This could then be fed into K-25 or the calutrons at Y-12. Kellex estimated that using the enriched feed from K-27 could lift the output from K-25 from 35 to 60 percent uranium-235. Construction started at K-27 on 3 April 1945 and was completed in December 1945. The five drum warehouses were moved by truck to make way for K-27. The construction work was expedited by making it "virtually a Chinese copy" of a section of K-25. By 31 December 1946, when the Manhattan Project ended, 110,048,961 man-hours of construction work had been performed at the K-25 site. The total cost, including that of K-27, was $479,589,999 (equivalent to $ in 2023).
The water tower (K-1206-F) was a tall structure that held of water. It was built in 1958 by the Chicago Bridge and Iron Company and served as reservoir for the fire suppression system. Over of steel was used in its construction. It operated until June 2013 and was demolished in August 2013.
Operations.
The preliminary specification for the K-25 plant in March 1943 called for it to produce per day of product that was 90 percent uranium-235. As the practical difficulties were realized, this target was reduced to 36 percent. On the other hand, the cascade design meant construction did not need to be complete before the plant started operating. In August 1943, Kellex submitted a schedule that called for a capability to produce material enriched to 5 percent uranium-235 by 1 June 1945; 15 percent by 1 July; and 36 percent by 23 August. This schedule was revised in August 1944 to 0.9 percent by 1 January 1945; 5 percent by 10 June; 15 percent by 1 August; 23 percent by 13 September; and 36 percent as soon as possible after that.
A meeting between the Manhattan District and Kellogg on 12 December 1942 recommended the K-25 plant be operated by Union Carbide. This would be through a wholly-owned subsidiary, Carbon and Carbide Chemicals. A cost-plus-fixed-fee contract was signed on 18 January 1943, setting the fee at $75,000 per month. This was later increased to $96,000 per month to operate both K-25 and K-27. Union Carbide did not wish to be the sole operator of the facility; it suggested that the conditioning plant be built and operated by Ford, Bacon & Davis. The Manhattan District found this acceptable, and a cost-plus-fixed-fee contract was negotiated with a fee of $216,000 for services up to the end of June 1945. The contract was terminated early on 1 May 1945, when Union Carbide took over the plant. Ford, Bacon & Davis was therefore paid $202,000. The other exception was the fluorine plant. Hooker Chemical was asked to supervise the construction of the fluorine plant and initially to operate it for a fixed fee of $24,500. The plant was turned over to Union Carbide on 1 February 1945.
Part of the K-300 complex was taken over by Union Carbide in August 1944 and was run as a pilot plant, training operators and developing procedures, using nitrogen instead of uranium hexafluoride until October 1944, and then perfluoroheptane until April 1945. The design of the gaseous diffusion plant allowed for it to be completed in sections and for the sections to be put into operation while work continued on the others. J. A. Jones completed the first 60 stages by the end of 1944. Before each stage was accepted, it underwent tests by J. A. Jones, Carbide and Carbon, and SAM Laboratories technicians to verify that the equipment was working and there were no leaks. Between four and six hundred people devoted eight months to this testing. Perfluoroheptane was used as a test fluid until February 1945, when it was decided to use uranium hexafluoride despite its corrosive nature.
Manhattan District engineer Colonel Kenneth Nichols placed Major John J. Moran in charge of production at K-25. Production commenced in February 1945, and the first product was shipped to the calutrons in March. By April, the gaseous diffusion plant was producing 1.1 percent product. It was then decided that instead of processing uranium hexafluoride feed from the Harshaw Chemical Company, the gaseous diffusion plant would take the product of the S-50 thermal diffusion plant, with an average enrichment of about 0.85 percent. Product enrichment continued to improve as more stages came online and performed better than anticipated. By June product was being enriched to 7 percent; by September it was 23 percent. The S-50 plant ceased operation on 9 September, and Kellex transferred the last unit to Union Carbide on 11 September. Highly enriched uranium was used in the Little Boy atomic bomb, which was used in the bombing of Hiroshima on 6 August.
With the end of the war in August 1945, the Manhattan Project's priority shifted from speed to economy and efficiency. The cascades were configurable, so they could produce a large amount of slightly enriched product by running them in parallel, or a small amount of highly enriched product by running them in series. By early 1946, with K-27 in operation, the facility was producing per day, enriched to 30 percent. The next step was to increase the enrichment further to 60 percent. This was achieved on 20 July 1946. This presented a problem, because Y-12 was not equipped to handle feed that was so highly enriched, but the Los Alamos Laboratory required 95 percent. For a time, product was mixed with feed to reduce the enrichment to 30 percent. Taking the concentration up to 95 percent raised safety concerns, as there was the risk of a criticality accident.
After some deliberation, with opinions sought and obtained from Percival Keith, Norris Bradbury, Darol Froman, Elmer E. Kirkpatrick, Kenneth Nichols and Edward Teller, it was decided that this could be done safely if appropriate precautions were taken. On 28 November 1946, the K-25 plant began producing 94 percent product. At this point, they ran into a serious flaw in the gaseous diffusion concept: enrichment in uranium-235 also enriched the product in the unwanted and fairly useless uranium-234, making it difficult to raise the enrichment to 95 percent. On 6 December 1946, production was dropped back to a steady per day enriched to 93.7 percent uranium-235, along with 1.9 percent uranium-234. This was regarded as a satisfactory product by the Los Alamos Laboratory, so on 26 December 1946 enrichment activity at Y-12 was curtailed. The Manhattan Project ended a few days later. Responsibility for the K-25 facility then passed to the newly established Atomic Energy Commission on 1 January 1947. Workers at the plant were represented by the Oil, Chemical and Atomic Workers International Union.
Closure and demolition.
K-25 became a prototype for other gaseous diffusion facilities established in the early post-war years. The first of these was the K-27, completed in September 1945. It was followed by the K-29 in 1951, the K-31 in 1951 and the K-33 in 1954. Gaseous diffusion facilities were built at Paducah, Kentucky, in 1952, and Portsmouth, Ohio, in 1954. The K-25 plant was renamed the Oak Ridge Gaseous Diffusion Plant in 1955.
Today, uranium isotope separation is usually done by the more energy-efficient ultra centrifuge process, developed in the Soviet Union after World War II by Soviet and captured German engineers working in detention. The centrifuge process was the first isotope separation method considered for the Manhattan Project but was abandoned due to technical challenges early in the project. When German scientists and engineers were released from Soviet captivity in the mid-1950s, the West became aware of the ultra centrifuge design and began shifting uranium enrichment to this much more efficient process. As centrifuge technology advanced, it became possible to carry out uranium enrichment on a smaller scale without the vast resources that were necessary to build and operate 1940s and 1950s "K" and "Y" style separation plants, a development which had the effect of increasing nuclear proliferation concerns.
Centrifuge cascades began operating at Oak Ridge in 1961. A gas centrifuge test facility (K-1210) opened in 1975, followed by a larger centrifuge plant demonstration facility (K-1220) in 1982. In response to an order from President Lyndon B. Johnson to cut production of enriched uranium by 25 percent, K-25 and K-27 ceased production in 1964, but in 1969 K-25 began producing uranium enriched to 3 to 5 percent for use in nuclear reactors. Martin Marietta Energy replaced Union Carbide as the operator in 1984. Gaseous diffusion ceased on 27 August 1985. The Oak Ridge Gaseous Diffusion Plant was renamed the Oak Ridge K-25 Site in 1989 and the East Tennessee Technology Park in 1996. Production of enriched uranium using gaseous diffusion ceased in Portsmouth in 2001 and at Paducah in 2013. Presently all commercial uranium enrichment in the United States is carried out using gas centrifuge technology.
The United States Department of Energy contracted with British Nuclear Fuels Ltd in 1997 to decontaminate and decommission the facilities. Its subsidiary Reactor Sites Management Company Limited was acquired by EnergySolutions in June 2007. Initially K-29, K-31 and K-33 were to be retained for other uses, but it was subsequently decided to demolish them. Bechtel Jacobs, the environmental management contractor, assumed responsibility for the facility in July 2005. Demolition of K-29 began in January 2006 and was completed in August. Demolition of K-33 began in January 2011 and was completed ahead of schedule in September. It was followed by the demolition of K-31, which began in October 2014 and was completed in June 2015.
Bechtel Jacobs was contracted to dismantle and demolish the K-25 facility in September 2008. The contract, valued at $1.48 billion, was made retrospective to October 2007 and ended in August 2011. Demolition work was then carried out by URS | CH2M Hill Oak Ridge. Demolition was completed in March 2014. Demolition of K-27, the last of the five gaseous diffusion facilities at Oak Ridge, began in February 2016. US Senator Lamar Alexander and US Congressman Chuck Fleischmann joined 1,500 workers to watch the final wall come down on 30 August 2016. Its demolition was completed in February 2017. Since 2020, the K-25 site has been undergoing partial redevelopment into a general aviation airport to service the city of Oak Ridge. Several small private nuclear facilities are also planned on the site.
Commemoration.
On 27 February 2020, the K-25 History Center, a 7,500-square-foot museum, opened at the site. The museum is a branch of the American Museum of Science and Energy and features hundreds of original artifacts and interactive exhibits related to the K-25 site.
Notes.
<templatestyles src="Reflist/styles.css" />
References.
<templatestyles src="Refbegin/styles.css" /> | [
{
"math_id": 0,
"text": "{\\mbox{Rate}_1 \\over \\mbox{Rate}_2}=\\sqrt{M_2 \\over M_1}=\\sqrt{352 \\over 349} \\approx 1.0043"
}
]
| https://en.wikipedia.org/wiki?curid=1351125 |
13511542 | Pseudoforest | Graph with at most one cycle per component
In graph theory, a pseudoforest is an undirected graph in which every connected component has at most one cycle. That is, it is a system of vertices and edges connecting pairs of vertices, such that no two cycles of consecutive edges share any vertex with each other, nor can any two cycles be connected to each other by a path of consecutive edges. A pseudotree is a connected pseudoforest.
The names are justified by analogy to the more commonly studied trees and forests. (A tree is a connected graph with no cycles; a forest is a disjoint union of trees.) Gabow and Tarjan attribute the study of pseudoforests to Dantzig's 1963 book on linear programming, in which pseudoforests arise in the solution of certain network flow problems. Pseudoforests also form graph-theoretic models of functions and occur in several algorithmic problems. Pseudoforests are sparse graphs – their number of edges is linearly bounded in terms of their number of vertices (in fact, they have at most as many edges as they have vertices) – and their matroid structure allows several other families of sparse graphs to be decomposed as unions of forests and pseudoforests. The name "pseudoforest" comes from .
Definitions and structure.
We define an undirected graph to be a set of vertices and edges such that each edge has two vertices (which may coincide) as endpoints. That is, we allow multiple edges (edges with the same pair of endpoints) and loops (edges whose two endpoints are the same vertex). A subgraph of a graph is the graph formed by any subsets of its vertices and edges such that each edge in the edge subset has both endpoints in the vertex subset.
A connected component of an undirected graph is the subgraph consisting of the vertices and edges that can be reached by following edges from a single given starting vertex. A graph is connected if every vertex or edge is reachable from every other vertex or edge. A cycle in an undirected graph is a connected subgraph in which each vertex is incident to exactly two edges, or is a loop.
A pseudoforest is an undirected graph in which each connected component contains at most one cycle. Equivalently, it is an undirected graph in which each connected component has no more edges than vertices. The components that have no cycles are just trees, while the components that have a single cycle within them are called 1-trees or unicyclic graphs. That is, a 1-tree is a connected graph containing exactly one cycle. A pseudoforest with a single connected component (usually called a pseudotree, although some authors define a pseudotree to be a 1-tree) is either a tree or a 1-tree; in general a pseudoforest may have multiple connected components as long as all of them are trees or 1-trees.
If one removes from a 1-tree one of the edges in its cycle, the result is a tree. Reversing this process, if one augments a tree by connecting any two of its vertices by a new edge, the result is a 1-tree; the path in the tree connecting the two endpoints of the added edge, together with the added edge itself, form the 1-tree's unique cycle. If one augments a 1-tree by adding an edge that connects one of its vertices to a newly added vertex, the result is again a 1-tree, with one more vertex; an alternative method for constructing 1-trees is to start with a single cycle and then repeat this augmentation operation any number of times. The edges of any 1-tree can be partitioned in a unique way into two subgraphs, one of which is a cycle and the other of which is a forest, such that each tree of the forest contains exactly one vertex of the cycle.
Certain more specific types of pseudoforests have also been studied.
A 1-forest, sometimes called a maximal pseudoforest, is a pseudoforest to which no more edges can be added without causing some component of the graph to contain multiple cycles. If a pseudoforest contains a tree as one of its components, it cannot be a 1-forest, for one can add either an edge connecting two vertices within that tree, forming a single cycle, or an edge connecting that tree to some other component. Thus, the 1-forests are exactly the pseudoforests in which every component is a 1-tree.
The spanning pseudoforests of an undirected graph "G" are the pseudoforest subgraphs of "G" that have all the vertices of "G". Such a pseudoforest need not have any edges, since for example the subgraph that has all the vertices of "G" and no edges is a pseudoforest (whose components are trees consisting of a single vertex).
The maximal pseudoforests of "G" are the pseudoforest subgraphs of "G" that are not contained within any larger pseudoforest of "G". A maximal pseudoforest of "G" is always a spanning pseudoforest, but not conversely. If "G" has no connected components that are trees, then its maximal pseudoforests are 1-forests, but if "G" does have a tree component, its maximal pseudoforests are not 1-forests. Stated precisely, in any graph "G" its maximal pseudoforests consist of every tree component of "G", together with one or more disjoint 1-trees covering the remaining vertices of "G".
Directed pseudoforests.
Versions of these definitions are also used for directed graphs. Like an undirected graph, a directed graph consists of vertices and edges, but each edge is directed from one of its endpoints to the other endpoint. A directed pseudoforest is a directed graph in which each vertex has at most one outgoing edge; that is, it has outdegree at most one. A directed 1-forest – most commonly called a functional graph (see below), sometimes maximal directed pseudoforest – is a directed graph in which each vertex has outdegree exactly one. If "D" is a directed pseudoforest, the undirected graph formed by removing the direction from each edge of "D" is an undirected pseudoforest.
Number of edges.
Every pseudoforest on a set of "n" vertices has at most "n" edges, and every maximal pseudoforest on a set of "n" vertices has exactly "n" edges. Conversely, if a graph "G" has the property that, for every subset "S" of its vertices, the number of edges in the induced subgraph of "S" is at most the number of vertices in "S", then "G" is a pseudoforest. 1-trees can be defined as connected graphs with equally many vertices and edges.
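A pseudoforest can also be recognized incrementally. The sketch below is illustrative rather than drawn from the literature cited here (the function and array names are invented for the example): it processes the edges with a union-find structure, marking each component that already contains a cycle and rejecting the graph as soon as some component would acquire a second one (equivalently, as soon as some component has more edges than vertices).
/* Sketch: test whether an undirected multigraph on vertices 0..n-1 is a pseudoforest.
   edges[i][0] and edges[i][1] are the endpoints of edge i; self-loops are allowed. */
#include <stdbool.h>
#define MAXV 1000
static int parent[MAXV];
static bool has_cycle[MAXV];   /* does the component rooted here already contain a cycle? */
static int find_root(int x) {
    while (parent[x] != x) {
        parent[x] = parent[parent[x]];   /* path halving */
        x = parent[x];
    }
    return x;
}
bool is_pseudoforest(int n, int m, int edges[][2]) {
    for (int v = 0; v < n; v++) { parent[v] = v; has_cycle[v] = false; }
    for (int i = 0; i < m; i++) {
        int ru = find_root(edges[i][0]), rv = find_root(edges[i][1]);
        if (ru == rv) {                        /* the edge closes a cycle (or is a loop) */
            if (has_cycle[ru]) return false;   /* a second cycle in one component */
            has_cycle[ru] = true;
        } else {                               /* the edge merges two components */
            if (has_cycle[ru] && has_cycle[rv]) return false;
            parent[ru] = rv;
            has_cycle[rv] = has_cycle[rv] || has_cycle[ru];
        }
    }
    return true;
}
The graph is accepted exactly when every component ends up containing at most one cycle, matching the definition above.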
Moving from individual graphs to graph families, if a family of graphs has the property that every subgraph of a graph in the family is also in the family, and every graph in the family has at most as many edges as vertices, then the family contains only pseudoforests. For instance, every subgraph of a thrackle (a graph drawn so that every pair of edges has one point of intersection) is also a thrackle, so Conway's conjecture that every thrackle has at most as many edges as vertices can be restated as saying that every thrackle is a pseudoforest. A more precise characterization is that, if the conjecture is true, then the thrackles are exactly the pseudoforests with no four-vertex cycle and at most one odd cycle.
Streinu and Theran generalize the sparsity conditions defining pseudoforests: they define a graph as being ("k","l")-sparse if every nonempty subgraph with "n" vertices has at most "kn" − "l" edges, and ("k","l")-tight if it is ("k","l")-sparse and has exactly "kn" − "l" edges. Thus, the pseudoforests are the (1,0)-sparse graphs, and the maximal pseudoforests are the (1,0)-tight graphs. Several other important families of graphs may be defined from other values of "k" and "l",
and when "l" ≤ "k" the ("k","l")-sparse graphs may be characterized as the graphs formed as the edge-disjoint union of "l" forests and "k" − "l" pseudoforests.
Almost every sufficiently sparse random graph is a pseudoforest. That is, if "c" is a constant with 0 < "c" < 1/2, and P"c"("n") is the probability that choosing uniformly at random among the "n"-vertex graphs with "cn" edges results in a pseudoforest, then P"c"("n") tends to one in the limit for large "n". However, for "c" > 1/2, almost every random graph with "cn" edges has a large component that is not unicyclic.
Enumeration.
A graph is "simple" if it has no self-loops and no multiple edges with the same endpoints. The number of simple 1-trees with "n" labelled vertices is
formula_0
The values for "n" up to 300 can be found in sequence OEIS: of the On-Line Encyclopedia of Integer Sequences.
The number of maximal directed pseudoforests on "n" vertices, allowing self-loops, is "nn", because for each vertex there are "n" possible endpoints for the outgoing edge. André Joyal used this fact to provide a bijective proof of Cayley's formula, that the number of undirected trees on "n" nodes is "n""n" − 2, by finding a bijection between maximal directed pseudoforests and undirected trees with two distinguished nodes. If self-loops are not allowed, the number of maximal directed pseudoforests is instead ("n" − 1)"n".
Graphs of functions.
Directed pseudoforests and endofunctions are in some sense mathematically equivalent. Any function ƒ from a set "X" to itself (that is, an endomorphism of "X") can be interpreted as defining a directed pseudoforest which has an edge from "x" to "y" whenever ƒ("x") = "y". The resulting directed pseudoforest is maximal, and may include self-loops whenever some value "x" has ƒ("x") = "x". Alternatively, omitting the self-loops produces a non-maximal pseudoforest. In the other direction, any maximal directed pseudoforest determines a function ƒ such that ƒ("x") is the target of the edge that goes out from "x", and any non-maximal directed pseudoforest can be made maximal by adding self-loops and then converted into a function in the same way. For this reason, maximal directed pseudoforests are sometimes called functional graphs. Viewing a function as a functional graph provides a convenient language for describing properties that are not as easily described from the function-theoretic point of view; this technique is especially applicable to problems involving iterated functions, which correspond to paths in functional graphs.
Cycle detection, the problem of following a path in a functional graph to find a cycle in it, has applications in cryptography and computational number theory, as part of Pollard's rho algorithm for integer factorization and as a method for finding collisions in cryptographic hash functions. In these applications, ƒ is expected to behave randomly; Flajolet and Odlyzko study the graph-theoretic properties of the functional graphs arising from randomly chosen mappings. In particular, a form of the birthday paradox implies that, in a random functional graph with "n" vertices, the path starting from a randomly selected vertex will typically loop back on itself to form a cycle within O(√"n") steps. Konyagin et al. have made analytical and computational progress on graph statistics.
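As a concrete illustration of path-following in a functional graph (a sketch only, not taken from the cited works), Floyd's tortoise-and-hare method finds the cycle reached from a given starting point using nothing more than the ability to evaluate ƒ:
/* Sketch: length of the cycle eventually reached from 'start' in the functional graph of f. */
typedef int (*endofunction)(int);
int cycle_length(endofunction f, int start) {
    int tortoise = f(start);
    int hare = f(f(start));
    while (tortoise != hare) {   /* advance at different speeds until both meet inside the cycle */
        tortoise = f(tortoise);
        hare = f(f(hare));
    }
    int length = 1;              /* walk once around the cycle to measure it */
    for (int x = f(tortoise); x != tortoise; x = f(x))
        length++;
    return length;
}
Pollard's rho method applies the same idea with ƒ chosen as a pseudorandom polynomial map modulo the number being factored.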
Martin, Odlyzko, and Wolfram investigate pseudoforests that model the dynamics of cellular automata. These functional graphs, which they call "state transition diagrams", have one vertex for each possible configuration that the ensemble of cells of the automaton can be in, and an edge connecting each configuration to the configuration that follows it according to the automaton's rule. One can infer properties of the automaton from the structure of these diagrams, such as the number of components, length of limiting cycles, depth of the trees connecting non-limiting states to these cycles, or symmetries of the diagram. For instance, any vertex with no incoming edge corresponds to a Garden of Eden pattern and a vertex with a self-loop corresponds to a still life pattern.
Another early application of functional graphs is in the "trains" used to study Steiner triple systems. The train of a triple system is a functional graph having a vertex for each possible triple of symbols; each triple "pqr" is mapped by ƒ to "stu", where "pqs", "prt", and "qru" are the triples that belong to the triple system and contain the pairs "pq", "pr", and "qr" respectively. Trains have been shown to be a powerful invariant of triple systems although somewhat cumbersome to compute.
Bicircular matroid.
A matroid is a mathematical structure in which certain sets of elements are defined to be independent, in such a way that the independent sets satisfy properties modeled after the properties of linear independence in a vector space. One of the standard examples of a matroid is the graphic matroid in which the independent sets are the sets of edges in forests of a graph; the matroid structure of forests is important in algorithms for computing the minimum spanning tree of the graph. Analogously, we may define matroids from pseudoforests.
For any graph "G" = ("V","E"), we may define a matroid on the edges of "G", in which a set of edges is independent if and only if it forms a pseudoforest; this matroid is known as the bicircular matroid (or bicycle matroid) of "G". The smallest dependent sets for this matroid are the minimal connected subgraphs of "G" that have more than one cycle, and these subgraphs are sometimes called bicycles. There are three possible types of bicycle: a theta graph has two vertices that are connected by three internally disjoint paths, a figure 8 graph consists of two cycles sharing a single vertex, and a handcuff graph is formed by two disjoint cycles connected by a path.
A graph is a pseudoforest if and only if it does not contain a bicycle as a subgraph.
Forbidden minors.
Forming a minor of a pseudoforest by contracting some of its edges and deleting others produces another pseudoforest. Therefore, the family of pseudoforests is closed under minors, and the Robertson–Seymour theorem implies that pseudoforests can be characterized in terms of a finite set of forbidden minors, analogously to Wagner's theorem characterizing the planar graphs as the graphs having neither the complete graph K5 nor the complete bipartite graph K3,3 as minors.
As discussed above, any non-pseudoforest graph contains as a subgraph a handcuff, figure 8, or theta graph; any handcuff or figure 8 graph may be contracted to form a "butterfly graph" (five-vertex figure 8), and any theta graph may be contracted to form a "diamond graph" (four-vertex theta graph), so any non-pseudoforest contains either a butterfly or a diamond as a minor, and these are the only minor-minimal non-pseudoforest graphs. Thus, a graph is a pseudoforest if and only if it does not have the butterfly or the diamond as a minor. If one forbids only the diamond but not the butterfly, the resulting larger graph family consists of the cactus graphs and disjoint unions of multiple cactus graphs.
More simply, if multigraphs with self-loops are considered, there is only one forbidden minor, a vertex with two loops.
Algorithms.
An early algorithmic use of pseudoforests involves the "network simplex" algorithm and its application to generalized flow problems modeling the conversion between commodities of different types. In these problems, one is given as input a flow network in which the vertices model each commodity and the edges model allowable conversions between one commodity and another. Each edge is marked with a "capacity" (how much of a commodity can be converted per unit time), a "flow multiplier" (the conversion rate between commodities), and a "cost" (how much loss or, if negative, profit is incurred per unit of conversion). The task is to determine how much of each commodity to convert via each edge of the flow network, in order to minimize cost or maximize profit, while obeying the capacity constraints and not allowing commodities of any type to accumulate unused. This type of problem can be formulated as a linear program, and solved using the simplex algorithm. The intermediate solutions arising from this algorithm, as well as the eventual optimal solution, have a special structure: each edge in the input network is either unused or used to its full capacity, except for a subset of the edges, forming a spanning pseudoforest of the input network, for which the flow amounts may lie between zero and the full capacity. In this application, unicyclic graphs are also sometimes called "augmented trees" and maximal pseudoforests are also sometimes called "augmented forests".
The "minimum spanning pseudoforest problem" involves finding a spanning pseudoforest of minimum weight in a larger edge-weighted graph "G".
Due to the matroid structure of pseudoforests, minimum-weight maximal pseudoforests may be found by greedy algorithms similar to those for the minimum spanning tree problem. However, Gabow and Tarjan found a more efficient linear-time approach in this case.
The pseudoarboricity of a graph "G" is defined by analogy to the arboricity as the minimum number of pseudoforests into which its edges can be partitioned; equivalently, it is the minimum "k" such that "G" is ("k",0)-sparse, or the minimum "k" such that the edges of "G" can be oriented to form a directed graph with outdegree at most "k". Due to the matroid structure of pseudoforests, the pseudoarboricity may be computed in polynomial time.
A random bipartite graph with "n" vertices on each side of its bipartition, and with "cn" edges chosen independently at random from each of the "n"2 possible pairs of vertices, is a pseudoforest with high probability whenever "c" is a constant strictly less than one. This fact plays a key role in the analysis of cuckoo hashing, a data structure for looking up key-value pairs by looking in one of two hash tables at locations determined from the key: one can form a graph, the "cuckoo graph", whose vertices correspond to hash table locations and whose edges link the two locations at which one of the keys might be found, and the cuckoo hashing algorithm succeeds in finding locations for all of its keys if and only if the cuckoo graph is a pseudoforest.
Pseudoforests also play a key role in parallel algorithms for graph coloring and related problems.
Notes.
<templatestyles src="Reflist/styles.css" />
References.
<templatestyles src="Refbegin/styles.css" /> | [
{
"math_id": 0,
"text": "n \\sum_{k=1}^n \\frac{(-1)^{k-1}}{k} \\sum_{n_1+\\cdots+n_k=n} \\frac{n!}{n_1! \\cdots n_k!} \\binom{\\binom{n_1}{2}+\\cdots +\\binom{n_k}{2}}{n}."
}
]
| https://en.wikipedia.org/wiki?curid=13511542 |
13511620 | Bicircular matroid | Abstraction of unicyclic subgraphs
In the mathematical subject of matroid theory, the bicircular matroid of a graph "G" is the matroid "B"("G") whose points are the edges of "G" and whose independent sets are the edge sets of pseudoforests of "G", that is, the edge sets in which each connected component contains at most one cycle.
The bicircular matroid was introduced by and explored further by and others. It is a special case of the frame matroid of a biased graph.
Circuits.
The circuits, or minimal dependent sets, of this matroid are the bicircular graphs (or bicycles, but that term has other meanings in graph theory); these are connected graphs whose circuit rank is exactly two.
There are three distinct types of bicircular graph: the theta graph, consisting of two vertices connected by three internally disjoint paths; the figure eight graph, consisting of two cycles that share a single vertex; and the handcuff graph, formed by two vertex-disjoint cycles connected by a path.
All these definitions apply to multigraphs, i.e., they permit multiple edges (edges sharing the same endpoints) and loops (edges whose two endpoints are the same vertex).
Flats.
The closed sets (flats) of the bicircular matroid of a graph G can be described as the forests F of G such that in the induced subgraph of "V"("G") − "V"("F"), every connected component has a cycle. Since the flats of a matroid form a geometric lattice when partially ordered by set inclusion, these forests of G also form a geometric lattice. In the partial ordering for this lattice, that "F"1 ≤ "F"2 if
For the most interesting example, let "G" o be G with a loop added to every vertex. Then the flats of "B"("G" o) are all the forests of G, spanning or nonspanning. Thus, all forests of a graph G form a geometric lattice, the forest lattice of "G" .
As transversal matroids.
Bicircular matroids can be characterized as the transversal matroids that arise from a family of sets in which each set element belongs to at most two sets. That is, the independent sets of the matroid are the subsets of elements that can be used to form a system of distinct representatives for some or all of the sets.
In this description, the elements correspond to the edges of a graph, and there is one set per vertex, the set of edges having that vertex as an endpoint.
Minors.
Unlike transversal matroids in general, bicircular matroids form a minor-closed class; that is, any submatroid or contraction of a bicircular matroid is also a bicircular matroid, as can be seen from their description in terms of biased graphs .
Here is a description of deletion and contraction of an edge in terms of the underlying graph: To delete an edge from the matroid, remove it from the graph. The rule for contraction depends on what kind of edge it is. To contract a link (a non-loop) in the matroid, contract it in the graph in the usual way. To contract a loop "e" at vertex "v", delete "e" and "v" but not the other edges incident with v; rather, each edge incident with "v" and another vertex "w" becomes a loop at "w". Any other graph loops at "v" become matroid loops—to describe this correctly in terms of the graph one needs half-edges and loose edges; see biased graph minors.
Characteristic polynomial.
The characteristic polynomial of the bicircular matroid "B"("G" o) expresses in a simple way the numbers of spanning forests (forests that contain all vertices of "G") of each size in "G". The formula is
formula_0
where "f""k" equals the number of "k"-edge spanning forests in "G". See .
Vector representation.
Bicircular matroids, like all other transversal matroids, can be represented by vectors over any infinite field. However, unlike graphic matroids, they are not regular: they cannot be represented by vectors over an arbitrary finite field. The question of the fields over which a bicircular matroid has a vector representation leads to the largely unsolved problem of finding the fields over which a graph has multiplicative gains. See . | [
{
"math_id": 0,
"text": "p_{B(G)}(\\lambda) := \\sum_{k=0}^n (-1)^k f_k (\\lambda-1)^{n-k},"
}
]
| https://en.wikipedia.org/wiki?curid=13511620 |
1352020 | Fudge factor | Ad hoc element introduced into a calculation
A fudge factor is an ad hoc quantity or element introduced into a calculation, formula or model in order to make it fit observations or expectations. It is also known as a correction coefficient, defined by
formula_0
Examples include Einstein's cosmological constant, dark energy, the initial proposals of dark matter and inflation.
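As a worked illustration with invented numbers (not taken from any of the scientific examples below): if a theory predicts a value of 9.6 for some quantity and the measured value is 10.1, the correction coefficient is
\kappa_\text{c} = \frac{10.1}{9.6} \approx 1.05,
and multiplying the theoretical result by this factor forces agreement with the observation.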
Examples in science.
Some quantities in scientific theory are set arbitrarily according to measured results rather than by calculation (for example, the Planck constant). However, in the case of these fundamental constants, their arbitrariness is usually explicit. To say that other calculations include a "fudge factor" may suggest that the calculation has been somehow tampered with to make results give a misleadingly good match to experimental data.
Cosmological constant.
In theoretical physics, when Albert Einstein originally tried to produce a general theory of relativity, he found that the theory seemed to predict the gravitational collapse of the universe: it seemed that the universe should either be expanding or collapsing, and to produce a model in which the universe was "static and stable" (which seemed to Einstein at the time to be the "proper" result), he introduced an expansionist variable (called the cosmological constant), whose sole purpose was to cancel out the cumulative effects of gravitation. He later called this, "the biggest blunder of my life".
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": " \\kappa_\\text{c} = \\frac{\\text{experimental value}}{\\text{theoretical value}}"
}
]
| https://en.wikipedia.org/wiki?curid=1352020 |
13520887 | Shilov boundary | In functional analysis, the Shilov boundary is the smallest closed subset of the structure space of a commutative Banach algebra where an analog of the maximum modulus principle holds. It is named after its discoverer, Georgii Evgen'evich Shilov.
Precise definition and existence.
Let formula_0 be a commutative Banach algebra and let formula_1 be its structure space equipped with the relative weak*-topology of the dual formula_2. A closed (in this topology) subset formula_3 of formula_4 is called a boundary of formula_5 if formula_6 for all formula_7.
The set formula_8 is called the Shilov boundary. It has been proved by Shilov that formula_9 is a boundary of formula_5.
Thus one may also say that the Shilov boundary is the unique set formula_10 which satisfies the following two conditions: it is a boundary of formula_5, and whenever formula_3 is a boundary of formula_5, then formula_11.
Examples.
Let formula_12 be the open unit disc in the complex plane and let formula_13 be the disc algebra, i.e. the functions holomorphic in formula_14 and continuous in the closure of formula_14 with supremum norm and usual algebraic operations. Then formula_15 and formula_16.
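A brief justification, sketched here for illustration rather than quoted from a source: by the maximum modulus principle, every ƒ in the disc algebra satisfies
\sup_{|z| \le 1} |f(z)| = \max_{|z| = 1} |f(z)|,
so the unit circle is a boundary. It is also the smallest one: for each point \zeta with |\zeta| = 1, the function f_\zeta(z) = (1 + \bar{\zeta} z)/2 belongs to the disc algebra and attains the modulus 1 only at z = \zeta, so no proper closed subset of the circle can be a boundary.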
Notes.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\mathcal A"
},
{
"math_id": 1,
"text": "\\Delta \\mathcal A"
},
{
"math_id": 2,
"text": "{\\mathcal A}^*"
},
{
"math_id": 3,
"text": "F"
},
{
"math_id": 4,
"text": "\\Delta {\\mathcal A}"
},
{
"math_id": 5,
"text": "{\\mathcal A}"
},
{
"math_id": 6,
"text": "\\max_{f \\in \\Delta {\\mathcal A}} |f(x)|=\\max_{f \\in F} |f(x)|"
},
{
"math_id": 7,
"text": "x \\in \\mathcal A"
},
{
"math_id": 8,
"text": "S = \\bigcap\\{F:F \\text{ is a boundary of } {\\mathcal A}\\}"
},
{
"math_id": 9,
"text": "S"
},
{
"math_id": 10,
"text": "S \\subset \\Delta \\mathcal A"
},
{
"math_id": 11,
"text": "S \\subset F"
},
{
"math_id": 12,
"text": "\\mathbb D=\\{z \\in \\Complex:|z|<1\\}"
},
{
"math_id": 13,
"text": "{\\mathcal A} = H^\\infty(\\mathbb D)\\cap {\\mathcal C}(\\bar{\\mathbb D})"
},
{
"math_id": 14,
"text": "\\mathbb D"
},
{
"math_id": 15,
"text": "\\Delta {\\mathcal A} = \\bar{\\mathbb D}"
},
{
"math_id": 16,
"text": "S=\\{|z|=1\\}"
}
]
| https://en.wikipedia.org/wiki?curid=13520887 |
1352428 | Modulo | Computational operation
In computing, the modulo operation returns the remainder or signed remainder of a division, after one number is divided by another, called the "modulus" of the operation.
Given two positive numbers "a" and "n", "a" modulo "n" (often abbreviated as "a" mod "n") is the remainder of the Euclidean division of "a" by "n", where "a" is the dividend and "n" is the divisor.
For example, the expression "5 mod 2" evaluates to 1, because 5 divided by 2 has a quotient of 2 and a remainder of 1, while "9 mod 3" would evaluate to 0, because 9 divided by 3 has a quotient of 3 and a remainder of 0.
Although typically performed with "a" and "n" both being integers, many computing systems now allow other types of numeric operands. The range of values for an integer modulo operation of "n" is 0 to "n" − 1 ("a" mod 1 is always 0; "a" mod 0 is undefined, being a division by zero).
When exactly one of "a" or "n" is negative, the basic definition breaks down, and programming languages differ in how these values are defined.
Variants of the definition.
In mathematics, the result of the modulo operation is an equivalence class, and any member of the class may be chosen as representative; however, the usual representative is the least positive residue, the smallest non-negative integer that belongs to that class (i.e., the remainder of the Euclidean division). However, other conventions are possible. Computers and calculators have various ways of storing and representing numbers; thus their definition of the modulo operation depends on the programming language or the underlying hardware.
In nearly all computing systems, the quotient "q" and the remainder "r" of "a" divided by "n" satisfy the following conditions:
"q" is an integer,
"a" = "nq" + "r",    (1)
|"r"| < |"n"|.
This still leaves a sign ambiguity if the remainder is non-zero: two possible choices for the remainder occur, one negative and the other positive; that choice determines which of the two consecutive quotients must be used to satisfy equation (1). In number theory, the positive remainder is always chosen, but in computing, programming languages choose depending on the language and the signs of "a" or "n". Standard Pascal and ALGOL 68, for example, give a positive remainder (or 0) even for negative divisors, and some programming languages, such as C90, leave it to the implementation when either of "n" or "a" is negative (see the table under § In programming languages for details). "a" modulo 0 is undefined in most systems, although some do define it as "a".
If both the dividend and divisor are positive, then the truncated, floored, and Euclidean definitions agree.
If the dividend is positive and the divisor is negative, then the truncated and Euclidean definitions agree.
If the dividend is negative and the divisor is positive, then the floored and Euclidean definitions agree.
If both the dividend and divisor are negative, then the truncated and floored definitions agree.
As described by Leijen,
<templatestyles src="Template:Blockquote/styles.css" />Boute argues that Euclidean division is superior to the other ones in terms of regularity and useful mathematical properties, although floored division, promoted by Knuth, is also a good definition. Despite its widespread use, truncated division is shown to be inferior to the other definitions.
However, truncated division satisfies the identity formula_0.
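A small numeric sketch (hypothetical, using only standard C) shows how the conventions part ways once the dividend is negative; here the divisor is positive, so, as noted above, the floored and Euclidean remainders coincide:
/* Illustration: remainders of -7 divided by 3 under the three conventions.
   C's built-in % is truncated; the other values are derived from it. */
#include <stdio.h>
int main(void) {
    int a = -7, n = 3;
    int r_trunc = a % n;                 /* -1: remainder takes the sign of the dividend */
    int r_floor = ((a % n) + n) % n;     /*  2: remainder takes the sign of the divisor (valid here since n > 0) */
    int r_eucl  = r_floor;               /*  2: for a positive divisor the Euclidean remainder is the same */
    printf("truncated %d, floored %d, Euclidean %d\n", r_trunc, r_floor, r_eucl);
    return 0;
}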
Notation.
Some calculators have a mod() function button, and many programming languages have a similar function, expressed as mod("a", "n"), for example. Some also support expressions that use "%", "mod", or "Mod" as a modulo or remainder operator, such as "a" % "n" or "a" mod "n".
For environments lacking a similar function, any of the three definitions above can be used.
Common pitfalls.
When the result of a modulo operation has the sign of the dividend (truncated definition), it can lead to surprising mistakes.
For example, to test if an integer is odd, one might be inclined to test if the remainder by 2 is equal to 1:
bool is_odd(int n) {
    return n % 2 == 1;
}
But in a language where modulo has the sign of the dividend, that is incorrect, because when "n" (the dividend) is negative and odd, "n" mod 2 returns −1, and the function returns false.
One correct alternative is to test that the remainder is not 0 (because remainder 0 is the same regardless of the signs):
bool is_odd(int n) {
    return n % 2 != 0;
}
Another alternative is to use the fact that for any odd number, the remainder may be either 1 or −1:
bool is_odd(int n) {
    return n % 2 == 1 || n % 2 == -1;
}
A simpler alternative is to treat the result of n % 2 as if it is a Boolean value, where any non-zero value is true:
bool is_odd(int n) {
    return n % 2;
}
Performance issues.
Modulo operations might be implemented such that a division with a remainder is calculated each time. For special cases, on some hardware, faster alternatives exist. For example, the modulo of powers of 2 can alternatively be expressed as a bitwise AND operation (assuming "x" is a positive integer, or using a non-truncating definition):
"x" % 2^"n" == "x" & (2^"n" − 1)
Examples:
"x" % 2 == "x" & 1
"x" % 4 == "x" & 3
"x" % 8 == "x" & 7
In devices and software that implement bitwise operations more efficiently than modulo, these alternative forms can result in faster calculations.
Compiler optimizations may recognize expressions of the form "x" % "c" where "c" is a power of two and automatically implement them as "x" & ("c" − 1), allowing the programmer to write clearer code without compromising performance. This simple optimization is not possible for languages in which the result of the modulo operation has the sign of the dividend (including C), unless the dividend is of an unsigned integer type. This is because, if the dividend is negative, the modulo will be negative, whereas "x" & ("c" − 1) will always be positive. For these languages, the equivalence codice_1 has to be used instead, expressed using bitwise OR, NOT and AND operations.
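A minimal sketch of the idea (illustrative only), using an unsigned operand so that the remainder is non-negative and the substitution is always valid:
#include <stdio.h>
int main(void) {
    unsigned int x = 100;
    /* For a power of two 2^n, x % 2^n equals x & (2^n - 1) when x is unsigned. */
    printf("%u %u\n", x % 8u, x & 7u);     /* prints "4 4" */
    printf("%u %u\n", x % 32u, x & 31u);   /* prints "4 4" */
    return 0;
}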
Optimizations for general constant-modulus operations also exist by calculating the division first using the constant-divisor optimization.
Properties (identities).
Some modulo operations can be factored or expanded similarly to other mathematical operations. This may be useful in cryptography proofs, such as the Diffie–Hellman key exchange. The properties involving multiplication, division, and exponentiation generally require that "a" and "n" are integers.
"a" mod "n".
0 for all positive integer values of "x".
"a" mod "p", due to Fermat's little theorem.
0.
1.
[("a" mod "n") + ("b" mod "n")] mod "n".
[("a" mod "n")("b" mod "n")] mod "n".
[("a" mod "n")("b"−1 mod "n")] mod "n", when the right hand side is defined (that is when "b" and "n" are coprime), and undefined otherwise.
"a" mod "n".
In programming languages.
In addition, many computer systems provide a divmod functionality, which produces the quotient and the remainder at the same time. Examples include the x86 architecture's IDIV instruction, the C programming language's div() function, and Python's divmod() function.
Generalizations.
Modulo with offset.
Sometimes it is useful for the result of a modulo n to lie not between 0 and "n" − 1, but between some number d and "d" + "n" − 1. In that case, d is called an "offset" and "d" = 1 is particularly common.
There does not seem to be a standard notation for this operation, so let us tentatively use "a" mod"d" "n". We thus have the following definition: "x" = "a" mod"d" "n" just in case "d" ≤ "x" ≤ "d" + "n" − 1 and "x" mod "n" = "a" mod "n". Clearly, the usual modulo operation corresponds to zero offset: "a" mod "n" = "a" mod0 "n".
The operation of modulo with offset is related to the floor function as follows:
formula_1
To see this, let formula_2. We first show that "x" mod "n" = "a" mod "n". It is in general true that ("a" + "bn") mod "n" = "a" mod "n" for all integers b; thus, this is true also in the particular case when formula_3; but that means that formula_4, which is what we wanted to prove. It remains to be shown that "d" ≤ "x" ≤ "d" + "n" − 1. Let k and r be the integers such that "a" − "d" = "kn" + "r" with 0 ≤ "r" ≤ "n" − 1 (see Euclidean division). Then formula_5, thus formula_6. Now take 0 ≤ "r" ≤ "n" − 1 and add d to both sides, obtaining "d" ≤ "d" + "r" ≤ "d" + "n" − 1. But we've seen that "x" = "d" + "r", so we are done.
The modulo with offset "a" mod"d" "n" is implemented in Mathematica as Mod["a", "n", "d"].
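A short sketch (illustrative, assuming a positive modulus "n"; the function name is invented for the example) implements the same definition on top of C's truncated remainder:
/* Sketch: a mod_d n for n > 0, i.e. a - n*floor((a - d)/n), built from C's truncated %. */
long mod_offset(long a, long n, long d) {
    long r = (a - d) % n;    /* truncated remainder, possibly negative */
    if (r < 0)
        r += n;              /* now 0 <= r <= n - 1 */
    return d + r;            /* result lies in [d, d + n - 1] */
}
For example, with the common 1-to-12 clock convention, mod_offset(13, 12, 1) returns 1 and mod_offset(12, 12, 1) returns 12.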
Implementing other modulo definitions using truncation.
Despite the mathematical elegance of Knuth's floored division and Euclidean division, it is generally much more common to find a truncated division-based modulo in programming languages. Leijen provides the following algorithms for calculating the two divisions given a truncated integer division:
/* Euclidean and Floored divmod, in the style of C's ldiv() */
typedef struct {
    /* This structure is part of the C stdlib.h, but is reproduced here for clarity */
    long int quot;
    long int rem;
} ldiv_t;

/* Euclidean division */
inline ldiv_t ldivE(long numer, long denom) {
    /* The C99 and C++11 languages define both of these as truncating. */
    long q = numer / denom;
    long r = numer % denom;
    if (r < 0) {
        if (denom > 0) {
            q = q - 1;
            r = r + denom;
        } else {
            q = q + 1;
            r = r - denom;
        }
    }
    return (ldiv_t){.quot = q, .rem = r};
}

/* Floored division */
inline ldiv_t ldivF(long numer, long denom) {
    long q = numer / denom;
    long r = numer % denom;
    if ((r > 0 && denom < 0) || (r < 0 && denom > 0)) {
        q = q - 1;
        r = r + denom;
    }
    return (ldiv_t){.quot = q, .rem = r};
}
For both cases, the remainder can be calculated independently of the quotient, but not vice versa. The operations are combined here to save screen space, as the logical branches are the same.
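A brief usage sketch (hypothetical values, assuming the definitions above are in scope):
#include <stdio.h>
int main(void) {
    ldiv_t e = ldivE(-7, 3);   /* Euclidean: quot = -3, rem = 2, since -7 = 3*(-3) + 2 */
    ldiv_t f = ldivF(7, -3);   /* Floored:   quot = -3, rem = -2, since 7 = (-3)*(-3) - 2 */
    printf("%ld %ld\n", e.quot, e.rem);
    printf("%ld %ld\n", f.quot, f.rem);
    return 0;
}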
Notes.
<templatestyles src="Reflist/styles.css" />
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "({-a})/b = {-(a/b)} = a/({-b})"
},
{
"math_id": 1,
"text": "a \\operatorname{mod}_d n = a - n \\left\\lfloor\\frac{a-d}{n}\\right\\rfloor."
},
{
"math_id": 2,
"text": "x = a - n \\left\\lfloor\\frac{a-d}{n}\\right\\rfloor"
},
{
"math_id": 3,
"text": "b = -\\!\\left\\lfloor\\frac{a-d}{n}\\right\\rfloor"
},
{
"math_id": 4,
"text": "x \\bmod n = \\left(a - n \\left\\lfloor\\frac{a-d}{n}\\right\\rfloor\\right)\\! \\bmod n = a \\bmod n"
},
{
"math_id": 5,
"text": "\\left\\lfloor\\frac{a-d}{n}\\right\\rfloor = k"
},
{
"math_id": 6,
"text": "x = a - n \\left\\lfloor\\frac{a-d}{n}\\right\\rfloor = a - n k = d +r"
}
]
| https://en.wikipedia.org/wiki?curid=1352428 |
13525027 | Dual norm | Measurement on a normed vector space
In functional analysis, the dual norm is a measure of size for a continuous linear function defined on a normed vector space.
Definition.
Let formula_0 be a normed vector space with norm formula_1 and let formula_2 denote its continuous dual space. The dual norm of a continuous linear functional formula_3 belonging to formula_2 is the non-negative real number defined by any of the following equivalent formulas:
formula_4
where formula_5 and formula_6 denote the supremum and infimum, respectively.
The constant formula_7 map is the origin of the vector space formula_2 and it always has norm formula_8
If formula_9 then the only linear functional on formula_0 is the constant formula_7 map and moreover, the sets in the last two rows will both be empty and consequently, their supremums will equal formula_10 instead of the correct value of formula_11
Importantly, a linear function formula_3 is not, in general, guaranteed to achieve its norm formula_12 on the closed unit ball formula_13 meaning that there might not exist any vector formula_14 of norm formula_15 such that formula_16 (if such a vector does exist and if formula_17 then formula_18 would necessarily have unit norm formula_19).
R.C. James proved James's theorem in 1964, which states that a Banach space formula_0 is reflexive if and only if every bounded linear function formula_20 achieves its norm on the closed unit ball.
It follows, in particular, that every non-reflexive Banach space has some bounded linear functional that does not achieve its norm on the closed unit ball.
However, the Bishop–Phelps theorem guarantees that the set of bounded linear functionals that achieve their norm on the unit sphere of a Banach space is a norm-dense subset of the continuous dual space.
The map formula_21 defines a norm on formula_22 (See Theorems 1 and 2 below.)
The dual norm is a special case of the operator norm defined for each (bounded) linear map between normed vector spaces.
Since the ground field of formula_0 (formula_23 or formula_24) is complete, formula_2 is a Banach space.
The topology on formula_2 induced by formula_1 turns out to be stronger than the weak-* topology on formula_22
The double dual of a normed linear space.
The double dual (or second dual) formula_25 of formula_0 is the dual of the normed vector space formula_2. There is a natural map formula_26. Indeed, for each formula_27 in formula_2 define
formula_28
The map formula_29 is linear, injective, and distance preserving. In particular, if formula_0 is complete (i.e. a Banach space), then formula_29 is an isometry onto a closed subspace of formula_25.
In general, the map formula_29 is not surjective. For example, if formula_0 is the Banach space formula_30 consisting of bounded functions on the real line with the supremum norm, then the map formula_29 is not surjective. (See formula_31 space). If formula_29 is surjective, then formula_0 is said to be a reflexive Banach space. If formula_32 then the space formula_31 is a reflexive Banach space.
Examples.
Dual norm for matrices.
The "<templatestyles src="Template:Visible anchor/styles.css" />Frobenius norm" defined by
formula_33
is self-dual, i.e., its dual norm is formula_34
The "<templatestyles src="Template:Visible anchor/styles.css" />spectral norm", a special case of the "induced norm" when formula_35, is defined by the maximum singular values of a matrix, that is,
formula_36
has the nuclear norm as its dual norm, which is defined by
formula_37
for any matrix formula_38 where formula_39 denote the singular values.
If formula_40 satisfy formula_77 then the Schatten formula_41-norm on matrices is dual to the Schatten formula_42-norm.
Finite-dimensional spaces.
Let formula_1 be a norm on formula_43 The associated "dual norm", denoted formula_44 is defined as
formula_45
(This can be shown to be a norm.) The dual norm can be interpreted as the operator norm of formula_46 interpreted as a formula_47 matrix, with the norm formula_1 on formula_48, and the absolute value on formula_49:
formula_50
From the definition of dual norm we have the inequality
formula_51
which holds for all formula_52 and formula_53 The dual of the dual norm is the original norm: we have formula_54 for all formula_55 (This need not hold in infinite-dimensional vector spaces.)
The dual of the Euclidean norm is the Euclidean norm, since
formula_56
The dual of the formula_61-norm is the formula_62-norm:
formula_63
and the dual of the formula_62-norm is the formula_64-norm.
More generally, Hölder's inequality shows that the dual of the formula_41-norm is the formula_42-norm, where formula_65 satisfies formula_66 that is, formula_67
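This duality between the "p"-norm and the "q"-norm can be checked numerically. The sketch below (in C, for consistency with the code elsewhere in this collection; the vectors and the exponent p = 3 are arbitrary illustrative choices) evaluates both sides of Hölder's inequality, with the q-norm of z playing the role of the dual norm:

#include <stdio.h>
#include <math.h>

/* l_p norm of an n-vector, for 1 <= p < infinity */
double pnorm(const double *x, int n, double p) {
    double s = 0.0;
    for (int i = 0; i < n; i++) s += pow(fabs(x[i]), p);
    return pow(s, 1.0 / p);
}

int main(void) {
    double z[3] = {3.0, -4.0, 1.0};
    double x[3] = {0.5, 2.0, -1.5};
    double p = 3.0;
    double q = p / (p - 1.0);          /* conjugate exponent: 1/p + 1/q = 1 */

    double dot = 0.0;                  /* z^T x */
    for (int i = 0; i < 3; i++) dot += z[i] * x[i];

    /* Hoelder: |z^T x| <= ||x||_p * ||z||_q, where ||z||_q is the dual norm of z */
    printf("|z^T x|         = %f\n", fabs(dot));
    printf("||x||_p ||z||_q = %f\n", pnorm(x, 3, p) * pnorm(z, 3, q));
    return 0;
}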
As another example, consider the formula_68- or spectral norm on formula_69. The associated dual norm is
formula_70
which turns out to be the sum of the singular values,
formula_71
where formula_72 This norm is sometimes called the "<templatestyles src="Template:Visible anchor/styles.css" />nuclear norm".
"Lp" and ℓ"p" spaces.
For formula_73 the "p"-norm (also called the formula_74-norm) of a vector formula_75 is
formula_76
If formula_40 satisfy formula_77 then the formula_41 and formula_42 norms are dual to each other and the same is true of the formula_31 and formula_78 norms, where formula_79 is some measure space.
In particular the Euclidean norm is self-dual since formula_80
For formula_81, the dual norm is formula_82 with formula_83 positive definite.
For formula_84 the formula_85-norm is even induced by a canonical inner product formula_86 meaning that formula_87 for all vectors formula_88 This inner product can be expressed in terms of the norm by using the polarization identity.
On formula_89 this is the "<templatestyles src="Template:Visible anchor/styles.css" />Euclidean inner product" defined by
formula_90
while for the space formula_91 associated with a measure space formula_79 which consists of all square-integrable functions, this inner product is
formula_92
The norms of the continuous dual spaces of formula_68 and formula_91 satisfy the polarization identity, and so these dual norms can be used to define inner products; with these inner products, the dual spaces are themselves Hilbert spaces.
Properties.
Given normed vector spaces formula_0 and formula_93 let formula_94 be the collection of all bounded linear mappings (or operators) of formula_0 into formula_96 Then formula_94 can be given a canonical norm.
<templatestyles src="Math_theorem/styles.css" />
Theorem 1 — Let formula_0 and formula_95 be normed spaces. Assigning to each continuous linear operator formula_97 the scalar formula_98
defines a norm formula_99 on formula_100 that makes formula_100 into a normed space. Moreover, if formula_95 is a Banach space then so is formula_101
When formula_95 is a scalar field (i.e. formula_103 or formula_104), formula_94 is the dual space formula_2 of formula_105
<templatestyles src="Math_theorem/styles.css" />
Theorem 2 —
Let formula_0 be a normed space and for every formula_106 let
formula_107
where by definition formula_108 is a scalar.
Then
As usual, let formula_109 denote the canonical metric induced by the norm on formula_110 and denote the distance from a point formula_52 to the subset formula_111 by
formula_112
If formula_3 is a bounded linear functional on a normed space formula_110 then for every vector formula_102
formula_113
where formula_114 denotes the kernel of formula_115
Notes.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "X"
},
{
"math_id": 1,
"text": "\\|\\cdot\\|"
},
{
"math_id": 2,
"text": "X^*"
},
{
"math_id": 3,
"text": "f"
},
{
"math_id": 4,
"text": "\n\\begin{alignat}{5}\n\\| f \\| &= \\sup &&\\{\\,|f(x)| &&~:~ \\|x\\| \\leq 1 ~&&~\\text{ and } ~&&x \\in X\\} \\\\\n &= \\sup &&\\{\\,|f(x)| &&~:~ \\|x\\| < 1 ~&&~\\text{ and } ~&&x \\in X\\} \\\\\n &= \\inf &&\\{\\,c \\in [0, \\infty) &&~:~ |f(x)| \\leq c \\|x\\| ~&&~\\text{ for all } ~&&x \\in X\\} \\\\\n &= \\sup &&\\{\\,|f(x)| &&~:~ \\|x\\| = 1 \\text{ or } 0 ~&&~\\text{ and } ~&&x \\in X\\} \\\\\n &= \\sup &&\\{\\,|f(x)| &&~:~ \\|x\\| = 1 ~&&~\\text{ and } ~&&x \\in X\\} \\;\\;\\;\\text{ this equality holds if and only if } X \\neq \\{0\\} \\\\\n &= \\sup &&\\bigg\\{\\,\\frac{|f(x)|}{\\|x\\|} ~&&~:~ x \\neq 0 &&~\\text{ and } ~&&x \\in X\\bigg\\} \\;\\;\\;\\text{ this equality holds if and only if } X \\neq \\{0\\} \\\\\n\\end{alignat}\n"
},
{
"math_id": 5,
"text": "\\sup"
},
{
"math_id": 6,
"text": "\\inf"
},
{
"math_id": 7,
"text": "0"
},
{
"math_id": 8,
"text": "\\|0\\| = 0."
},
{
"math_id": 9,
"text": "X = \\{0\\}"
},
{
"math_id": 10,
"text": "\\sup \\varnothing = - \\infty"
},
{
"math_id": 11,
"text": "0."
},
{
"math_id": 12,
"text": "\\|f\\| = \\sup \\{|f x| : \\|x\\| \\leq 1, x \\in X\\}"
},
{
"math_id": 13,
"text": "\\{x \\in X : \\|x\\| \\leq 1\\},"
},
{
"math_id": 14,
"text": "u \\in X"
},
{
"math_id": 15,
"text": "\\|u\\| \\leq 1"
},
{
"math_id": 16,
"text": "\\|f\\| = |f u|"
},
{
"math_id": 17,
"text": "f \\neq 0,"
},
{
"math_id": 18,
"text": "u"
},
{
"math_id": 19,
"text": "\\|u\\| = 1"
},
{
"math_id": 20,
"text": "f \\in X^*"
},
{
"math_id": 21,
"text": "f \\mapsto \\|f\\|"
},
{
"math_id": 22,
"text": "X^*."
},
{
"math_id": 23,
"text": "\\Reals"
},
{
"math_id": 24,
"text": "\\Complex"
},
{
"math_id": 25,
"text": "X^{**}"
},
{
"math_id": 26,
"text": "\\varphi: X \\to X^{**}"
},
{
"math_id": 27,
"text": "w^*"
},
{
"math_id": 28,
"text": "\\varphi(v)(w^*): = w^*(v)."
},
{
"math_id": 29,
"text": "\\varphi"
},
{
"math_id": 30,
"text": "L^{\\infty}"
},
{
"math_id": 31,
"text": "L^p"
},
{
"math_id": 32,
"text": "1 < p < \\infty,"
},
{
"math_id": 33,
"text": "\\| A\\|_{\\text{F}} = \\sqrt{\\sum_{i=1}^m\\sum_{j=1}^n \\left| a_{ij} \\right|^2} = \\sqrt{\\operatorname{trace}(A^*A)} = \\sqrt{\\sum_{i=1}^{\\min\\{m,n\\}} \\sigma_{i}^2}"
},
{
"math_id": 34,
"text": " \\| \\cdot \\|'_{\\text{F}} = \\| \\cdot \\|_{\\text{F}}."
},
{
"math_id": 35,
"text": "p=2"
},
{
"math_id": 36,
"text": "\\| A \\| _2 = \\sigma_{\\max}(A),"
},
{
"math_id": 37,
"text": "\\|B\\|'_2 = \\sum_i \\sigma_i(B),"
},
{
"math_id": 38,
"text": "B"
},
{
"math_id": 39,
"text": "\\sigma_i(B)"
},
{
"math_id": 40,
"text": "p, q \\in [1, \\infty]"
},
{
"math_id": 41,
"text": "\\ell^p"
},
{
"math_id": 42,
"text": "\\ell^q"
},
{
"math_id": 43,
"text": "\\R^n."
},
{
"math_id": 44,
"text": "\\| \\cdot \\|_*,"
},
{
"math_id": 45,
"text": "\\|z\\|_* = \\sup\\{z^\\intercal x : \\|x\\| \\leq 1 \\}."
},
{
"math_id": 46,
"text": "z^\\intercal,"
},
{
"math_id": 47,
"text": "1 \\times n"
},
{
"math_id": 48,
"text": "\\R^n"
},
{
"math_id": 49,
"text": "\\R"
},
{
"math_id": 50,
"text": "\\|z\\|_* = \\sup\\{|z^\\intercal x| : \\|x\\| \\leq 1 \\}."
},
{
"math_id": 51,
"text": "z^\\intercal x = \\|x\\| \\left(z^\\intercal \\frac{x}{\\|x\\|} \\right) \\leq \\|x\\| \\|z\\|_*"
},
{
"math_id": 52,
"text": "x"
},
{
"math_id": 53,
"text": "z."
},
{
"math_id": 54,
"text": "\\|x\\|_{**} = \\|x\\| "
},
{
"math_id": 55,
"text": "x."
},
{
"math_id": 56,
"text": "\\sup\\{z^\\intercal x : \\|x\\|_2 \\leq 1 \\} = \\|z\\|_2."
},
{
"math_id": 57,
"text": "z,"
},
{
"math_id": 58,
"text": "z^\\intercal x"
},
{
"math_id": 59,
"text": "\\|x\\|_2 \\leq 1"
},
{
"math_id": 60,
"text": "\\tfrac{z}{\\|z\\|_2}."
},
{
"math_id": 61,
"text": "\\ell^\\infty "
},
{
"math_id": 62,
"text": "\\ell^1"
},
{
"math_id": 63,
"text": "\\sup\\{z^\\intercal x : \\|x\\| _\\infty \\leq 1\\} = \\sum_{i=1}^n |z_i| = \\|z\\| _1,"
},
{
"math_id": 64,
"text": "\\ell^\\infty"
},
{
"math_id": 65,
"text": "q"
},
{
"math_id": 66,
"text": "\\tfrac{1}{p} + \\tfrac{1}{q} = 1,"
},
{
"math_id": 67,
"text": "q = \\tfrac{p}{p-1}."
},
{
"math_id": 68,
"text": "\\ell^2"
},
{
"math_id": 69,
"text": "\\R^{m\\times n}"
},
{
"math_id": 70,
"text": "\\|Z\\| _{2*} = \\sup\\{\\mathbf{tr}(Z^\\intercal X) : \\|X\\|_2 \\leq 1\\},"
},
{
"math_id": 71,
"text": "\\|Z\\| _{2*} = \\sigma_1(Z) + \\cdots + \\sigma_r(Z) = \\mathbf{tr} (\\sqrt{Z^\\intercal Z}),"
},
{
"math_id": 72,
"text": "r = \\mathbf{rank} Z."
},
{
"math_id": 73,
"text": "p \\in [1, \\infty],"
},
{
"math_id": 74,
"text": "\\ell_p"
},
{
"math_id": 75,
"text": "\\mathbf{x} = (x_n)_n"
},
{
"math_id": 76,
"text": "\\|\\mathbf{x}\\|_p ~:=~ \\left(\\sum_{i=1}^n \\left|x_i\\right|^p\\right)^{1/p}."
},
{
"math_id": 77,
"text": "1/p+1/q=1"
},
{
"math_id": 78,
"text": "L^q"
},
{
"math_id": 79,
"text": "(X, \\Sigma, \\mu),"
},
{
"math_id": 80,
"text": "p = q = 2."
},
{
"math_id": 81,
"text": "\\sqrt{x^{\\mathrm{T}}Qx}"
},
{
"math_id": 82,
"text": "\\sqrt{y^{\\mathrm{T}}Q^{-1}y}"
},
{
"math_id": 83,
"text": "Q"
},
{
"math_id": 84,
"text": "p = 2,"
},
{
"math_id": 85,
"text": "\\|\\,\\cdot\\,\\|_2"
},
{
"math_id": 86,
"text": "\\langle \\,\\cdot,\\,\\cdot\\rangle,"
},
{
"math_id": 87,
"text": "\\|\\mathbf{x}\\|_2 = \\sqrt{\\langle \\mathbf{x}, \\mathbf{x} \\rangle}"
},
{
"math_id": 88,
"text": "\\mathbf{x}."
},
{
"math_id": 89,
"text": "\\ell^2,"
},
{
"math_id": 90,
"text": "\\langle \\left(x_n\\right)_{n}, \\left(y_n\\right)_{n} \\rangle_{\\ell^2} ~=~ \\sum_n x_n \\overline{y_n}"
},
{
"math_id": 91,
"text": "L^2(X, \\mu)"
},
{
"math_id": 92,
"text": "\\langle f, g \\rangle_{L^2} = \\int_X f(x) \\overline{g(x)} \\, \\mathrm dx."
},
{
"math_id": 93,
"text": "Y,"
},
{
"math_id": 94,
"text": "L(X,Y)"
},
{
"math_id": 95,
"text": "Y"
},
{
"math_id": 96,
"text": "Y."
},
{
"math_id": 97,
"text": "f \\in L(X, Y)"
},
{
"math_id": 98,
"text": "\\|f\\| = \\sup \\{\\|f(x)\\| : x \\in X, \\|x\\| \\leq 1\\}"
},
{
"math_id": 99,
"text": "\\|\\cdot\\| ~:~ L(X, Y) \\to \\Reals"
},
{
"math_id": 100,
"text": "L(X, Y)"
},
{
"math_id": 101,
"text": "L(X, Y)."
},
{
"math_id": 102,
"text": "x \\in X,"
},
{
"math_id": 103,
"text": "Y = \\Complex"
},
{
"math_id": 104,
"text": "Y = \\R"
},
{
"math_id": 105,
"text": "X."
},
{
"math_id": 106,
"text": "x^* \\in X^*"
},
{
"math_id": 107,
"text": "\\left\\|x^*\\right\\| ~:=~ \\sup \\left\\{| \\langle x, x^* \\rangle | ~:~ x \\in X \\text{ with } \\| x \\| \\leq 1 \\right\\}"
},
{
"math_id": 108,
"text": "\\langle x, x^* \\rangle ~:=~ x^{*}(x)"
},
{
"math_id": 109,
"text": "d(x, y) := \\|x - y\\|"
},
{
"math_id": 110,
"text": "X,"
},
{
"math_id": 111,
"text": "S \\subseteq X"
},
{
"math_id": 112,
"text": "d(x, S) ~:=~ \\inf_{s \\in S} d(x, s) ~=~ \\inf_{s \\in S} \\|x - s\\|."
},
{
"math_id": 113,
"text": "|f(x)| = \\|f\\| \\, d(x, \\ker f),"
},
{
"math_id": 114,
"text": "\\ker f = \\{k \\in X : f(k) = 0\\}"
},
{
"math_id": 115,
"text": "f."
}
]
| https://en.wikipedia.org/wiki?curid=13525027 |
1352555 | Wheel and axle | Simple machine consisting of a wheel attached to a smaller axle
The wheel and axle is a simple machine consisting of a wheel attached to a smaller axle so that the two parts rotate together, allowing a force to be transferred from one to the other. The wheel and axle can be viewed as a version of the lever, with a drive force applied tangentially to the perimeter of the wheel, and a load force applied to the axle supported in a bearing, which serves as a fulcrum.
History.
The Halaf culture of 6500–5100 BCE has been credited with the earliest depiction of a wheeled vehicle, but this is doubtful as there is no evidence of Halafians using either wheeled vehicles or even pottery wheels.
One of the first applications of the wheel to appear was the potter's wheel, used by prehistoric cultures to fabricate clay pots. The earliest type, known as "tournettes" or "slow wheels", were known in the Middle East by the 5th millennium BCE. One of the earliest examples was discovered at Tepe Pardis, Iran, and dated to 5200–4700 BCE. These were made of stone or clay and secured to the ground with a peg in the center, but required significant effort to turn. True potter's wheels, which are freely-spinning and have a wheel and axle mechanism, were developed in Mesopotamia (Iraq) by 4200–4000 BCE. The oldest surviving example, which was found in Ur (modern day Iraq), dates to approximately 3100 BCE.
Evidence of wheeled vehicles appeared by the late 4th millennium BCE. Depictions of wheeled wagons found on clay tablet pictographs at the Eanna district of Uruk, in the Sumerian civilization of Mesopotamia, are dated between 3700–3500 BCE. In the second half of the 4th millennium BCE, evidence of wheeled vehicles appeared near-simultaneously in the Northern Caucasus (Maykop culture) and Eastern Europe (Cucuteni–Trypillian culture). Depictions of a wheeled vehicle appeared between 3500 and 3350 BCE in the Bronocice clay pot excavated in a Funnelbeaker culture settlement in southern Poland. In nearby Olszanica, a 2.2 m wide door was constructed for wagon entry; this barn was 40 m long and had 3 doors. Surviving evidence of a wheel–axle combination, from Stare Gmajne near Ljubljana in Slovenia (Ljubljana Marshes Wooden Wheel), is dated within two standard deviations to 3340–3030 BCE, the axle to 3360–3045 BCE. Two types of early Neolithic European wheel and axle are known; a circumalpine type of wagon construction (the wheel and axle rotate together, as in Ljubljana Marshes Wheel), and that of the Baden culture in Hungary (axle does not rotate). They both are dated to c. 3200–3000 BCE. Historians believe that there was a diffusion of the wheeled vehicle from the Near East to Europe around the mid-4th millennium BCE.
An early example of a wooden wheel and its axle was found in 2002 at the Ljubljana Marshes some 20 km south of Ljubljana, the capital of Slovenia. According to radiocarbon dating, it is between 5,100 and 5,350 years old. The wheel was made of ash and oak and had a radius of 70 cm and the axle was 120 cm long and made of oak.
In China, the earliest evidence of spoked wheels comes from Qinghai in the form of two wheel hubs from a site dated between 2000 and 1500 BCE.
In Roman Egypt, Hero of Alexandria identified the wheel and axle as one of the simple machines used to lift weights. This is thought to have been in the form of the windlass which consists of a crank or pulley connected to a cylindrical barrel that provides mechanical advantage to wind up a rope and lift a load such as a bucket from the well.
The wheel and axle was identified as one of six simple machines by Renaissance scientists, drawing from Greek texts on technology.
Mechanical advantage.
The simple machine called a "wheel and axle" refers to the assembly formed by two disks, or cylinders, of different diameters mounted so they rotate together around the same axis. The thin rod which needs to be turned is called the axle and the wider object fixed to the axle, on which we apply force is called the wheel. A tangential force applied to the periphery of the large disk can exert a larger force on a load attached to the axle, achieving mechanical advantage. When used as the wheel of a wheeled vehicle the smaller cylinder is the axle of the wheel, but when used in a windlass, winch, and other similar applications (see medieval mining lift to right) the smaller cylinder may be separate from the axle mounted in the bearings. It cannot be used separately.
Assuming the wheel and axle does not dissipate or store energy, that is it has no friction or elasticity, the power input by the force applied to the wheel must equal the power output at the axle. As the wheel and axle system rotates around its bearings, points on the circumference, or edge, of the wheel move faster than points on the circumference, or edge, of the axle. Therefore, a force applied to the edge of the wheel must be less than the force applied to the edge of the axle, because power is the product of force and velocity.
Let "a" and "b" be the distances from the center of the bearing to the edges of the wheel "A" and the axle "B." If the input force "FA" is applied to the edge of the wheel "A" and the force "FB" at the edge of the axle "B" is the output, then the ratio of the velocities of points "A" and "B" is given by "a/b", so the ratio of the output force to the input force, or mechanical advantage, is given by
formula_0
The mechanical advantage of a simple machine like the wheel and axle is computed as the ratio of the resistance to the effort. The larger the ratio the greater the multiplication of force (torque) created or distance achieved. By varying the radii of the axle and/or wheel, any amount of mechanical advantage may be gained. In this manner, the size of the wheel may be increased to an inconvenient extent. In this case a system or combination of wheels (often toothed, that is, gears) are used. As a wheel and axle is a type of lever, a system of wheels and axles is like a compound lever.
On a powered wheeled vehicle the transmission exerts a force on the axle which has a smaller radius than the wheel. The mechanical advantage is therefore much less than 1. The wheel and axle of a car are therefore not representative of a simple machine (whose purpose is to increase the force). The friction between wheel and road is actually quite low, so even a small force exerted on the axle is sufficient. The actual advantage lies in the large rotational speed at which the axle is rotating thanks to the transmission.
Ideal mechanical advantage.
The mechanical advantage of a wheel and axle with no friction is called the ideal mechanical advantage (IMA). It is calculated with the following formula:
formula_1
Actual mechanical advantage.
All actual wheels have friction, which dissipates some of the power as heat. The actual mechanical advantage (AMA) of a wheel and axle is calculated with the following formula:
formula_2
where
formula_3 is the efficiency of the wheel, the ratio of power output to power input
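As a brief numerical sketch of the two formulas above (the radii and efficiency are illustrative values, not measurements of a particular device):

#include <stdio.h>

int main(void) {
    double radius_wheel = 0.30;   /* metres, illustrative */
    double radius_axle  = 0.05;   /* metres, illustrative */
    double efficiency   = 0.85;   /* P_out / P_in, illustrative */

    double ima = radius_wheel / radius_axle;   /* ideal mechanical advantage  = 6.0 */
    double ama = efficiency * ima;             /* actual mechanical advantage = 5.1 */

    printf("IMA = %.2f\n", ima);
    printf("AMA = %.2f\n", ama);
    return 0;
}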
References.
<templatestyles src="Reflist/styles.css" />
Additional resources.
Basic Machines and How They Work, United States. Bureau of Naval Personnel, Courier Dover Publications 1965, pp. 3–1 and following preview online | [
{
"math_id": 0,
"text": "MA = \\frac{F_B}{F_A} = \\frac{a}{b}."
},
{
"math_id": 1,
"text": "\\mathrm{IMA} = {F_\\text{out} \\over F_\\text{in}} = { \\mathrm{Radius}_\\text{wheel} \\over \\mathrm{Radius}_\\text{axle}} "
},
{
"math_id": 2,
"text": "\\mathrm{AMA} = {F_\\text{out} \\over F_\\text{in}} = \\eta \\cdot { \\mathrm{Radius}_\\text{wheel} \\over \\mathrm{Radius}_\\text{axle}} "
},
{
"math_id": 3,
"text": "\\eta = {P_\\text{out} \\over P_\\text{in} }"
}
]
| https://en.wikipedia.org/wiki?curid=1352555 |
1352564 | Decider (Turing machine) | Turing machine that halts for any input
In computability theory, a decider is a Turing machine that halts for every input. A decider is also called a total Turing machine as it represents a total function.
Because it always halts, such a machine is able to decide whether a given string is a member of a formal language. The class of languages which can be decided by such machines is the set of recursive languages.
Given an arbitrary Turing machine, determining whether it is a decider is an undecidable problem. This is a variant of the halting problem, which asks for whether a Turing machine halts on a specific input.
Functions computable by total Turing machines.
In practice, many functions of interest are computable by machines that always halt. A machine that uses only finite memory on any particular input can be forced to halt for every input by restricting its flow control capabilities so that no input will ever cause the machine to enter an infinite loop. As a trivial example, a machine implementing a finitary decision tree will always halt.
It is not required that the machine be entirely free of looping capabilities, however, to guarantee halting. If we restrict loops to be of a predictably finite size (like the FOR loop in BASIC), we can express all of the primitive recursive functions (Meyer and Ritchie, 1967). An example of such a machine is provided by the toy programming language PL-{GOTO} of Brainerd and Landweber (1974).
We can further define a programming language in which we can ensure that even more sophisticated functions always halt. For example, the Ackermann function, which is not primitive recursive, nevertheless is a total computable function computable by a term rewriting system with a reduction ordering on its arguments (Ohlebusch, 2002, pp. 67).
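For illustration, here is a direct recursive rendering of the Ackermann–Péter function as a sketch in C (the usual textbook definition; only very small arguments are practical because the recursion depth grows enormously). Every call terminates, so a machine running it is a decider, even though the function is not primitive recursive:

#include <stdio.h>

/* Ackermann-Peter function: total and computable, but not primitive recursive. */
unsigned long ackermann(unsigned long m, unsigned long n) {
    if (m == 0) return n + 1;
    if (n == 0) return ackermann(m - 1, 1);
    return ackermann(m - 1, ackermann(m, n - 1));
}

int main(void) {
    printf("A(2,3) = %lu\n", ackermann(2, 3));   /* 9  */
    printf("A(3,3) = %lu\n", ackermann(3, 3));   /* 61 */
    return 0;
}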
Despite the above examples of programming languages which guarantee termination of the programs, there exists no programming language which captures exactly the total recursive functions, i.e. the functions which can be computed by a Turing machine that always halts. This is because existence of such a programming language would be a contradiction to the non-semi-decidability of the problem whether a Turing machine halts on every input.
Relationship to partial Turing machines.
A general Turing machine will compute a partial function. Two questions can be asked about the relationship between partial Turing machines and total Turing machines:
The answer to each of these questions is no.
The following theorem shows that the functions computable by machines that always halt do not include extensions of all partial computable functions, which implies the first question above has a negative answer. This fact is closely related to the algorithmic unsolvability of the halting problem.
<templatestyles src="Math_theorem/styles.css" />
Theorem — There are Turing computable partial functions that have no extension to a total Turing computable function. In particular, the partial function "f" defined so that "f"("n") = "m" if and only if the Turing machine with index "n" halts on input with output "m" has no extension to a total computable function.
Indeed, if "g" were a total computable function extending "f" then "g" would be computable by some Turing machine; fix "e" as the index of such a machine. Build a Turing machine "M", using Kleene's recursion theorem, which on input simulates the machine with index "e" running on an index "nM" for "M" (thus the machine "M" can produce an index of itself; this is the role of the recursion theorem). By assumption, this simulation will eventually return an answer. so that if "g"("nM") = "m" then the return value of "M" is &NoBreak;&NoBreak;. Thus "f"("nM"), the true return value of "M" on input , will not equal "g"("nM"). Hence "g" does not extend "f".
The second question asks, in essence, whether there is another reasonable model of computation which computes only total functions and computes all the total computable functions. Informally, if such a model existed then each of its computers could be simulated by a Turing machine. Thus if this new model of computation consisted of a sequence formula_0 of machines, there would be a recursively enumerable sequence formula_1 of Turing machines that compute total functions and so that every total computable function is computable by one of the machines "Ti". This is impossible, because a machine T could be constructed such that on input "i" the machine "T" returns formula_2. This machine cannot be equivalent to any machine T on the list: suppose it were on the list at index "j". Then formula_3, which does not return an integer result. Therefore, it cannot be total, but the function by construction must be total (if total functions are recursively enumerable, then this function can be constructed), which is a contradiction. This shows that the second question has a negative answer.
The set of indices of total Turing machines.
The decision problem of whether the Turing machine with index "e" will halt on every input is not decidable. In fact, this problem is at level formula_4 of the arithmetical hierarchy. Thus this problem is strictly more difficult than the Halting problem, which asks whether the machine with index "e" halts on input "0". Intuitively, this difference in unsolvability is because each instance of the "total machine" problem represents infinitely many instances of the Halting problem.
Provability.
One may be interested not only in whether a Turing machine is total, but also in whether this can be proven in a certain logical system, such as first order Peano arithmetic.
In a sound proof system, every provably total Turing machine is indeed total, but the converse is not true: informally, for every first-order proof system that is strong enough (including Peano arithmetic), there are Turing machines which are total but whose totality cannot be proven in that system, unless the system is inconsistent (in which case one can prove anything). A proof of their totality either rests on additional assumptions or requires another proof system.
Thus, since one can enumerate all the proofs in the proof system, one can build a Turing machine that, on input n, goes through the first n proofs and looks for a contradiction. If it finds one, it gets into an infinite loop and never halts; otherwise, it halts. If the system is consistent, the Turing machine will halt on every input, but one cannot prove this in a strong enough proof system due to Gödel's incompleteness theorems.
One can also create a Turing machine that will halt if and only if the proof system is inconsistent, and is thus non-total for a consistent system but cannot be proven such: This is a Turing machine that, regardless of input, enumerates all proofs and halts on a contradiction.
A Turing machine that goes through Goodstein sequences and halts at zero is total but cannot be proven as such in Peano arithmetic. | [
{
"math_id": 0,
"text": "M_1,M_2,\\ldots"
},
{
"math_id": 1,
"text": "T_1,\\ldots T_2,\\ldots"
},
{
"math_id": 2,
"text": "T_i(i)+1\\,"
},
{
"math_id": 3,
"text": "T_j(j)=T_j(j)+1\\,"
},
{
"math_id": 4,
"text": "\\Pi^0_2"
}
]
| https://en.wikipedia.org/wiki?curid=1352564 |
1352566 | Modulo (mathematics) | Word with multiple distinct meanings
In mathematics, the term modulo ("with respect to a modulus of", the Latin ablative of "modulus" which itself means "a small measure") is often used to assert that two distinct mathematical objects can be regarded as equivalent—if their difference is accounted for by an additional factor. It was initially introduced into mathematics in the context of modular arithmetic by Carl Friedrich Gauss in 1801. Since then, the term has gained many meanings—some exact and some imprecise (such as equating "modulo" with "except for"). For the most part, the term often occurs in statements of the form:
"A" is the same as "B" modulo "C"
which is often equivalent to ""A" is the same as "B" up to "C", and means
"A" and "B" are the same—except for differences accounted for or explained by "C".
History.
"Modulo" is a mathematical jargon that was introduced into mathematics in the book "Disquisitiones Arithmeticae" by Carl Friedrich Gauss in 1801. Given the integers "a", "b" and "n", the expression "a" ≡ "b" (mod "n")", pronounced ""a" is congruent to "b" modulo "n"", means that "a" − "b" is an integer multiple of "n", or equivalently, "a" and "b" both share the same remainder when divided by "n". It is the Latin ablative of "modulus", which itself means "a small measure."
The term has gained many meanings over the years—some exact and some imprecise. The most general precise definition is simply in terms of an equivalence relation "R", where "a" is "equivalent" (or "congruent)" to "b" modulo "R" if "aRb".
Usage.
Original use.
Gauss originally intended to use "modulo" as follows: given the integers "a", "b" and "n", the expression "a" ≡ "b" (mod "n") (pronounced ""a" is congruent to "b" modulo "n"") means that "a" − "b" is an integer multiple of "n", or equivalently, "a" and "b" both leave the same remainder when divided by "n". For example:
13 is congruent to 63 modulo 10
means that
13 − 63 is a multiple of 10 (equiv., 13 and 63 differ by a multiple of 10).
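In code, the congruence can be checked either by testing whether the difference is a multiple of the modulus or, for non-negative operands, by comparing remainders (a trivial sketch; C's % truncates toward zero, which is why the non-negativity caveat matters in general):

#include <stdio.h>

int main(void) {
    int a = 13, b = 63, n = 10;
    int by_difference = ((a - b) % n) == 0;     /* n divides a - b                 */
    int by_remainder  = (a % n) == (b % n);     /* same remainder on division by n */
    printf("%d congruent to %d (mod %d): %s\n", a, b, n,
           (by_difference && by_remainder) ? "true" : "false");
    return 0;
}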
Computing.
In computing and computer science, the term can be used in several ways:
Structures.
The term "modulo" can be used differently—when referring to different mathematical structures. For example:
Modding out.
In general, modding out is a somewhat informal term that means declaring things equivalent that otherwise would be considered distinct. For example, suppose the sequence 1 4 2 8 5 7 is to be regarded as the same as the sequence 7 1 4 2 8 5, because each is a cyclically shifted version of the other:
formula_0
In that case, one is "modding out by cyclic shifts".
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\n\\begin{array}{ccccccccccccc}\n& 1 & & 4 & & 2 & & 8 & & 5 & & 7 \\\\\n\\searrow & & \\searrow & & \\searrow & & \\searrow & & \\searrow & & \\searrow & & \\searrow \\\\\n& 7 & & 1 & & 4 & & 2 & & 8 & & 5\n\\end{array}\n"
}
]
| https://en.wikipedia.org/wiki?curid=1352566 |
13526813 | Fermat's Last Theorem in fiction | References to the famous problem in number theory
The problem in number theory known as "Fermat's Last Theorem" has repeatedly received attention in fiction and popular culture. It was proved by Andrew Wiles in 1994.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "1782^{12} + 1841^{12} = 1922^{12}"
},
{
"math_id": 1,
"text": "3987^{12} + 4365^{12} = 4472^{12}"
}
]
| https://en.wikipedia.org/wiki?curid=13526813 |
13527566 | Heine's identity | Fourier expansion of a reciprocal square root
In mathematical analysis, Heine's identity, named after Heinrich Eduard Heine is a Fourier expansion of a reciprocal square root which Heine presented as
formula_0
where formula_1 is a Legendre function of the second kind, which has degree, "m" − <templatestyles src="Fraction/styles.css" />1⁄2, a half-integer, and argument, "z", real and greater than one. This expression can be generalized for arbitrary half-integer powers as follows
formula_2
where formula_3 is the Gamma function. | [
{
"math_id": 0,
"text": "\\frac{1}{\\sqrt{z-\\cos\\psi}} = \\frac{\\sqrt{2}}{\\pi}\\sum_{m=-\\infty}^\\infty Q_{m-\\frac12}(z) e^{im\\psi}"
},
{
"math_id": 1,
"text": " Q_{m-\\frac12}"
},
{
"math_id": 2,
"text": "(z-\\cos\\psi)^{n-\\frac12} = \\sqrt{\\frac{2}{\\pi}}\\frac{(z^2-1)^{\\frac{n}{2}}}{\\Gamma(\\frac12-n)}\n\\sum_{m=-\\infty}^{\\infty} \\frac{\\Gamma(m-n+\\frac12)}{\\Gamma(m+n+\\frac12)}Q_{m-\\frac12}^n(z)e^{im\\psi},"
},
{
"math_id": 3,
"text": "\\scriptstyle\\,\\Gamma"
}
]
| https://en.wikipedia.org/wiki?curid=13527566 |
1352948 | Operating leverage | Measure of how revenue growth translates into growth in operating income
Operating leverage is a measure of how revenue growth translates into growth in operating income. It is a measure of leverage, and of how risky, or volatile, a company's operating income is.
Definition.
There are various measures of operating leverage,
which can be interpreted analogously to financial leverage.
Costs.
One analogy is "fixed costs + variable costs = total costs . . . is similar to . . . debt + equity = assets". This analogy is partly motivated because, for a given amount of debt, debt servicing is a fixed cost. This leads to two measures of operating leverage:
One measure is fixed costs to total costs:
formula_0
Compare to debt to value, which is
formula_1
Another measure is fixed costs to variable costs:
formula_2
Compare to debt to equity ratio:
formula_3
Both of these measures depend on sales: if the unit variable cost is constant, then as sales increase, operating leverage (as measured by fixed costs to total costs or variable costs) decreases.
Contribution.
Contribution Margin is a measure of operating leverage: the higher the contribution margin is (the lower variable costs are as a percentage of total costs), the faster the profits increase with sales. Note that unlike other measures of operating leverage, in the linear Cost-Volume-Profit Analysis Model, contribution margin is a fixed quantity, and does not change with Sales.
Contribution = Sales - Variable Cost
DOL and Operating income.
Operating leverage can also be measured in terms of change in operating income for a given change in sales (revenue).
The Degree of Operating Leverage (DOL) can be computed in a number of equivalent ways; one way it is defined as the ratio of the percentage change in Operating Income for a given percentage change in Sales :
formula_4
This can also be computed as Total Contribution Margin over Operating Income:
formula_5
The above equivalence follows because the relative change in operating income from one more unit dX equals the contribution margin divided by operating income, while the relative change in sales from one more unit dX equals price divided by revenue (or, in other words, 1 / X with X being the quantity).
Alternatively, as Contribution Margin Ratio over Operating Margin:
formula_6
For instance, if a company has sales of 1,000,000 units, at price $, unit variable cost of $, and fixed costs of $, then its unit contribution is $, its Total Contribution is $, and its Operating Income is $, so its DOL is
formula_7
This could also be computed as 80% = Contribution Margin Ratio divided by 60% = Operating Margin.
It currently has Sales of $ and Operating Income of $, so additional Unit Sales (say of 100,000 units) yield $5m more Sales and $4m more Operating Income: a 10% increase in Sales and a formula_8 increase in Operating Income.
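A small numeric sketch consistent with the figures above (the unit price of $50, unit variable cost of $10 and fixed costs of $10m are back-calculated here from the $40m contribution, $30m operating income and the 100,000-unit/$5m example, and should be read as illustrative assumptions rather than data from the text):

#include <stdio.h>

int main(void) {
    double units         = 1000000.0;    /* unit sales                 */
    double price         = 50.0;         /* assumed unit price         */
    double unit_var_cost = 10.0;         /* assumed unit variable cost */
    double fixed_costs   = 10000000.0;   /* assumed fixed costs        */

    double contribution     = (price - unit_var_cost) * units;   /* $40m       */
    double operating_income = contribution - fixed_costs;        /* $30m       */
    double dol              = contribution / operating_income;   /* about 1.33 */

    printf("Total contribution: %.0f\n", contribution);
    printf("Operating income:   %.0f\n", operating_income);
    printf("DOL:                %.3f\n", dol);
    /* A 10% rise in sales then raises operating income by roughly 13.3%. */
    return 0;
}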
Assuming the model, for a given level of sales, the DOL is higher the higher fixed costs are (an example): for a given level of sales and profit, a company with higher fixed costs has a lower Operating Income, and hence its Operating Income increases more rapidly with Sales than a company with lower fixed costs (and correspondingly lower contribution margin and higher Operating Income).
If a company has no fixed costs (and hence breaks even at zero), then its DOL equals 1: a 10% increase in Sales yields a 10% increase in Operating Income, and its operating margin equals its contribution margin:
formula_9
DOL is highest near the break-even point; in fact, at the break-even point, DOL is undefined, because it is infinite: an increase of 10% in sales, say, increases Operating Income from 0 to some positive number (say, $), which is an infinite (or undefined) percentage change; in terms of margins, its Operating Margin is zero, so its DOL is undefined.
Similarly, for a very small positive Operating Income (say, $), a 10% increase in sales may increase Operating Income to $, a 100x (or 9,900%) increase, for a DOL of 990; in terms of margins, its Operating Margin is very small, so its DOL is very large.
DOL is closely related to the rate of increase in the operating margin: as sales increase past the break-even point, operating margin rapidly increases from 0% (reflected in a high DOL), and as sales increase, asymptotically approaches the contribution margin: thus the rate of change in operating margin decreases, as does the DOL, which asymptotically approaches 1.
Industry-specific.
Examples of companies with high operating leverage include companies with high R&D costs, such as pharmaceuticals: it can cost billions to develop a drug, but then pennies to produce it. Hence from a life cycle cost analysis perspective, the ratio of preproduction costs (e.g. design widgets) versus incremental production costs (e.g. produce a widget) is a useful measure of operating leverage.
Outsourcing.
Outsourcing a product or service is a method used to change the ratio of fixed costs to variable costs in a business. Outsourcing can be used to change the balance of this ratio by offering a move from fixed to variable cost and also by making variable costs more predictable. | [
{
"math_id": 0,
"text": "\\frac{\\text{FC}}{\\text{TC}}=\\frac{\\text{FC}}{\\text{FC}+\\text{VC}}"
},
{
"math_id": 1,
"text": "\\frac{\\text{Debt}}{\\text{Assets}}=\\frac{\\text{Debt}}{\\text{Debt}+\\text{Equity}}"
},
{
"math_id": 2,
"text": "\\frac{\\text{FC}}{\\text{VC}}"
},
{
"math_id": 3,
"text": "\\frac{\\text{Debt}}{\\text{Equity}}"
},
{
"math_id": 4,
"text": "\\text{DOL} = \\frac{\\%\\text{ change in Operating Income}}{\\% \\text{ change in Sales}}"
},
{
"math_id": 5,
"text": "\\text{DOL} = \\frac{\\text{Total Contribution}}{\\text{Operating Income}} = \\frac{\\text{Total Contribution}}{\\text{Total Contribution} - \\text{Fixed Costs}} = \\frac{(\\text{P}-\\text{V})\\cdot \\text{X}}{(\\text{P}-\\text{V})\\cdot \\text{X} - \\text{FC}}"
},
{
"math_id": 6,
"text": "\\text{DOL} = \\frac{\\text{Contribution Margin Ratio}}{\\text{Operating Margin}}"
},
{
"math_id": 7,
"text": "\\frac{\\$\\text{40m}}{\\$\\text{30m}} = 1 \\frac{1}{3} \\approx 1.33"
},
{
"math_id": 8,
"text": "10\\% \\times 1 \\frac{1}{3}= 13\\frac 1 3 \\%"
},
{
"math_id": 9,
"text": "\\frac{\\text{Operating Income}}{\\text{Sales}}=\\frac{\\text{Unit Price} - \\text{Unit Variable Cost}}{\\text{Unit Price}}"
}
]
| https://en.wikipedia.org/wiki?curid=1352948 |
135316 | Sexagesimal | Numeral system
Sexagesimal, also known as base 60, is a numeral system with sixty as its base. It originated with the ancient Sumerians in the 3rd millennium BC, was passed down to the ancient Babylonians, and is still used—in a modified form—for measuring time, angles, and geographic coordinates.
The number 60, a superior highly composite number, has twelve divisors, namely 1, 2, 3, 4, 5, 6, 10, 12, 15, 20, 30, and 60, of which 2, 3, and 5 are prime numbers. With so many factors, many fractions involving sexagesimal numbers are simplified. For example, one hour can be divided evenly into sections of 30 minutes, 20 minutes, 15 minutes, 12 minutes, 10 minutes, 6 minutes, 5 minutes, 4 minutes, 3 minutes, 2 minutes, and 1 minute. 60 is the smallest number that is divisible by every number from 1 to 6; that is, it is the lowest common multiple of 1, 2, 3, 4, 5, and 6.
"In this article, all sexagesimal digits are represented as decimal numbers, except where otherwise noted. For example, the largest sexagesimal digit is "59"."
Origin.
According to Otto Neugebauer, the origins of sexagesimal are not as simple, consistent, or singular in time as they are often portrayed. Throughout their many centuries of use, which continues today for specialized topics such as time, angles, and astronomical coordinate systems, sexagesimal notations have always contained a strong undercurrent of decimal notation, such as in how sexagesimal digits are written. Their use has also always included (and continues to include) inconsistencies in where and how various bases are to represent numbers even within a single text.
The most powerful driver for rigorous, fully self-consistent use of sexagesimal has always been its mathematical advantages for writing and calculating fractions. In ancient texts this shows up in the fact that sexagesimal is used most uniformly and consistently in mathematical tables of data. Another practical factor that helped expand the use of sexagesimal in the past even if less consistently than in mathematical tables, was its decided advantages to merchants and buyers for making everyday financial transactions easier when they involved bargaining for and dividing up larger quantities of goods. In the late 3rd millennium BC, Sumerian/Akkadian units of weight included the "kakkaru" (talent, approximately 30 kg) divided into 60 "manû" (mina), which was further subdivided into 60 "šiqlu" (shekel); the descendants of these units persisted for millennia, though the Greeks later coerced this relationship into the more base-10 compatible ratio of a "shekel" being one 50th of a "mina".
Apart from mathematical tables, the inconsistencies in how numbers were represented within most texts extended all the way down to the most basic cuneiform symbols used to represent numeric quantities. For example, the cuneiform symbol for 1 was an ellipse made by applying the rounded end of the stylus at an angle to the clay, while the sexagesimal symbol for 60 was a larger oval or "big 1". But within the same texts in which these symbols were used, the number 10 was represented as a circle made by applying the round end of the style perpendicular to the clay, and a larger circle or "big 10" was used to represent 100. Such multi-base numeric quantity symbols could be mixed with each other and with abbreviations, even within a single number. The details and even the magnitudes implied (since zero was not used consistently) were idiomatic to the particular time periods, cultures, and quantities or concepts being represented. While such context-dependent representations of numeric quantities are easy to critique in retrospect, in modern times we still have dozens of regularly used examples of topic-dependent base mixing, including the recent innovation of adding decimal fractions to sexagesimal astronomical coordinates.
Usage.
Babylonian mathematics.
The sexagesimal system as used in ancient Mesopotamia was not a pure base-60 system, in the sense that it did not use 60 distinct symbols for its digits. Instead, the cuneiform digits used ten as a sub-base in the fashion of a sign-value notation: a sexagesimal digit was composed of a group of narrow, wedge-shaped marks representing units up to nine (, , , , ..., ) and a group of wide, wedge-shaped marks representing up to five tens (, , , , ). The value of the digit was the sum of the values of its component parts:
Numbers larger than 59 were indicated by multiple symbol blocks of this form in place value notation. Because there was no symbol for zero it is not always immediately obvious how a number should be interpreted, and its true value must sometimes have been determined by its context. For example, the symbols for 1 and 60 are identical. Later Babylonian texts used a placeholder () to represent zero, but only in the medial positions, and not on the right-hand side of the number, as in numbers like .
Other historical usages.
In the Chinese calendar, a system is commonly used in which days or years are named by positions in a sequence of ten stems and in another sequence of 12 branches. The same stem and branch repeat every 60 steps through this cycle.
Book VIII of Plato's "Republic" involves an allegory of marriage centered on the number 60⁴ = 12,960,000 and its divisors. This number has the particularly simple sexagesimal representation 1,0,0,0,0. Later scholars have invoked both Babylonian mathematics and music theory in an attempt to explain this passage.
Ptolemy's "Almagest", a treatise on mathematical astronomy written in the second century AD, uses base 60 to express the fractional parts of numbers. In particular, his table of chords, which was essentially the only extensive trigonometric table for more than a millennium, has fractional parts of a degree in base 60, and was practically equivalent to a modern-day table of values of the sine function.
Medieval astronomers also used sexagesimal numbers to note time. Al-Biruni first subdivided the hour sexagesimally into minutes, seconds, thirds and fourths in 1000 while discussing Jewish months. Around 1235 John of Sacrobosco continued this tradition, although Nothaft thought Sacrobosco was the first to do so. The Parisian version of the Alfonsine tables (ca. 1320) used the day as the basic unit of time, recording multiples and fractions of a day in base-60 notation.
The sexagesimal number system continued to be frequently used by European astronomers for performing calculations as late as 1671. For instance, Jost Bürgi in "Fundamentum Astronomiae" (presented to Emperor Rudolf II in 1592), his colleague Ursus in "Fundamentum Astronomicum", and possibly also Henry Briggs, used multiplication tables based on the sexagesimal system in the late 16th century, to calculate sines.
In the late 18th and early 19th centuries, Tamil astronomers were found to make astronomical calculations, reckoning with shells using a mixture of decimal and sexagesimal notations developed by Hellenistic astronomers.
Base-60 number systems have also been used in some other cultures that are unrelated to the Sumerians, for example by the Ekari people of Western New Guinea.
Modern usage.
Modern uses for the sexagesimal system include measuring angles, geographic coordinates, electronic navigation, and time.
One hour of time is divided into 60 minutes, and one minute is divided into 60 seconds. Thus, a measurement of time such as 3:23:17 (3 hours, 23 minutes, and 17 seconds) can be interpreted as a whole sexagesimal number (no sexagesimal point), meaning 3 × 60² + 23 × 60¹ + 17 × 60⁰ seconds. However, each of the three sexagesimal digits in this number (3, 23, and 17) is written using the decimal system.
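The conversion works exactly like any other positional base: each colon-separated field is one sexagesimal digit written in decimal. A minimal sketch:

#include <stdio.h>

/* Interpret h:m:s as the sexagesimal number h,m,s and return its value in seconds. */
long to_seconds(long h, long m, long s) {
    return h * 60L * 60L + m * 60L + s;   /* h*60^2 + m*60^1 + s*60^0 */
}

int main(void) {
    long total = to_seconds(3, 23, 17);
    printf("3:23:17 = %ld seconds\n", total);    /* 12197  */
    printf("back to h:m:s = %ld:%ld:%ld\n",      /* 3:23:17 */
           total / 3600, (total / 60) % 60, total % 60);
    return 0;
}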
Similarly, the practical unit of angular measure is the degree, of which there are 360 (six sixties) in a circle. There are 60 minutes of arc in a degree, and 60 arcseconds in a minute.
YAML.
In version 1.1 of the YAML data storage format, sexagesimals are supported for plain scalars, and formally specified both for integers and floating point numbers. This has led to confusion, as e.g. some MAC addresses would be recognised as sexagesimals and loaded as integers, where others were not and loaded as strings. In YAML 1.2 support for sexagesimals was dropped.
Notations.
In Hellenistic Greek astronomical texts, such as the writings of Ptolemy, sexagesimal numbers were written using Greek alphabetic numerals, with each sexagesimal digit being treated as a distinct number. Hellenistic astronomers adopted a new symbol for zero, —°, which morphed over the centuries into other forms, including the Greek letter omicron, ο, normally meaning 70, but permissible in a sexagesimal system where the maximum value in any position is 59. The Greeks limited their use of sexagesimal numbers to the fractional part of a number.
In medieval Latin texts, sexagesimal numbers were written using Arabic numerals; the different levels of fractions were denoted "minuta" (i.e., fraction), "minuta secunda", "minuta tertia", etc. By the 17th century it became common to denote the integer part of sexagesimal numbers by a superscripted zero, and the various fractional parts by one or more accent marks. John Wallis, in his "Mathesis universalis", generalized this notation to include higher multiples of 60; giving as an example the number 49‵‵‵‵36‵‵‵25‵‵15‵1°15′2″36‴49⁗; where the numbers to the left are multiplied by higher powers of 60, the numbers to the right are divided by powers of 60, and the number marked with the superscripted zero is multiplied by 1. This notation leads to the modern signs for degrees, minutes, and seconds. The same minute and second nomenclature is also used for units of time, and the modern notation for time with hours, minutes, and seconds written in decimal and separated from each other by colons may be interpreted as a form of sexagesimal notation.
In some usage systems, each position past the sexagesimal point was numbered, using Latin or French roots: "prime" or "primus", "seconde" or "secundus", "tierce", "quatre", "quinte", etc. To this day we call the second-order part of an hour or of a degree a "second". Until at least the 18th century, 1⁄60 of a second was called a "tierce" or "third".
In the 1930s, Otto Neugebauer introduced a modern notational system for Babylonian and Hellenistic numbers that substitutes modern decimal notation from 0 to 59 in each position, while using a semicolon (;) to separate the integer and fractional portions of the number and using a comma (,) to separate the positions within each portion. For example, the mean synodic month used by both Babylonian and Hellenistic astronomers and still used in the Hebrew calendar is 29;31,50,8,20 days. This notation is used in this article.
Fractions and irrational numbers.
Fractions.
In the sexagesimal system, any fraction in which the denominator is a regular number (having only 2, 3, and 5 in its prime factorization) may be expressed exactly. Shown here are all fractions of this type in which the denominator is less than or equal to 60:
<templatestyles src="Div col/styles.css"/>
<templatestyles src="Fraction/styles.css" />1⁄2 = 0;30
<templatestyles src="Fraction/styles.css" />1⁄3 = 0;20
<templatestyles src="Fraction/styles.css" />1⁄4 = 0;15
<templatestyles src="Fraction/styles.css" />1⁄5 = 0;12
<templatestyles src="Fraction/styles.css" />1⁄6 = 0;10
<templatestyles src="Fraction/styles.css" />1⁄8 = 0;7,30
<templatestyles src="Fraction/styles.css" />1⁄9 = 0;6,40
<templatestyles src="Fraction/styles.css" />1⁄10 = 0;6
<templatestyles src="Fraction/styles.css" />1⁄12 = 0;5
<templatestyles src="Fraction/styles.css" />1⁄15 = 0;4
<templatestyles src="Fraction/styles.css" />1⁄16 = 0;3,45
<templatestyles src="Fraction/styles.css" />1⁄18 = 0;3,20
<templatestyles src="Fraction/styles.css" />1⁄20 = 0;3
<templatestyles src="Fraction/styles.css" />1⁄24 = 0;2,30
<templatestyles src="Fraction/styles.css" />1⁄25 = 0;2,24
<templatestyles src="Fraction/styles.css" />1⁄27 = 0;2,13,20
<templatestyles src="Fraction/styles.css" />1⁄30 = 0;2
<templatestyles src="Fraction/styles.css" />1⁄32 = 0;1,52,30
<templatestyles src="Fraction/styles.css" />1⁄36 = 0;1,40
<templatestyles src="Fraction/styles.css" />1⁄40 = 0;1,30
<templatestyles src="Fraction/styles.css" />1⁄45 = 0;1,20
<templatestyles src="Fraction/styles.css" />1⁄48 = 0;1,15
<templatestyles src="Fraction/styles.css" />1⁄50 = 0;1,12
<templatestyles src="Fraction/styles.css" />1⁄54 = 0;1,6,40
<templatestyles src="Fraction/styles.css" />1⁄60 = 0;1
However numbers that are not regular form more complicated repeating fractions. For example:
<templatestyles src="Fraction/styles.css" />1⁄7 = 0;8,34,17 (the bar indicates the sequence of sexagesimal digits 8,34,17 repeats infinitely many times)
<templatestyles src="Fraction/styles.css" />1⁄11 = 0;5,27,16,21,49
<templatestyles src="Fraction/styles.css" />1⁄13 = 0;4,36,55,23
<templatestyles src="Fraction/styles.css" />1⁄14 = 0;4,17,8,34
<templatestyles src="Fraction/styles.css" />1⁄17 = 0;3,31,45,52,56,28,14,7
<templatestyles src="Fraction/styles.css" />1⁄19 = 0;3,9,28,25,15,47,22,6,18,56,50,31,34,44,12,37,53,41
<templatestyles src="Fraction/styles.css" />1⁄59 = 0;1
<templatestyles src="Fraction/styles.css" />1⁄61 = 0;0,59
The fact that the two numbers that are adjacent to sixty, 59 and 61, are both prime numbers implies that fractions that repeat with a period of one or two sexagesimal digits can only have regular number multiples of 59 or 61 as their denominators, and that other non-regular numbers have fractions that repeat with a longer period.
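The expansions listed above can be reproduced mechanically: multiply the remainder by 60, take the integer part as the next sexagesimal digit, and repeat. A short sketch (repeating expansions are simply truncated after a fixed number of places):

#include <stdio.h>

/* Print the first `places` sexagesimal fractional digits of num/den as 0;d1,d2,... */
void sexagesimal(long num, long den, int places) {
    printf("%ld/%ld = %ld;", num, den, num / den);
    long r = num % den;
    for (int i = 0; i < places && r != 0; i++) {
        r *= 60;
        printf("%s%ld", i ? "," : "", r / den);
        r %= den;
    }
    printf("\n");
}

int main(void) {
    sexagesimal(1, 8, 6);    /* 1/8  = 0;7,30             (terminates) */
    sexagesimal(1, 27, 6);   /* 1/27 = 0;2,13,20          (terminates) */
    sexagesimal(1, 7, 6);    /* 1/7  = 0;8,34,17,8,34,17  (repeats)    */
    return 0;
}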
Irrational numbers.
The representations of irrational numbers in any positional number system (including decimal and sexagesimal) neither terminate nor repeat.
The square root of 2, the length of the diagonal of a unit square, was approximated by the Babylonians of the Old Babylonian Period (1900 BC – 1650 BC) as
formula_0
Because √2 ≈ ... is an irrational number, it cannot be expressed exactly in sexagesimal (or indeed any integer-base system), but its sexagesimal expansion does begin 1;24,51,10,7,46,6,4,44... (OEIS: )
The value of π as used by the Greek mathematician and scientist Ptolemy was 3;8,30 = 3 + + = ≈ ... Jamshīd al-Kāshī, a 15th-century Persian mathematician, calculated 2π as a sexagesimal expression to its correct value when rounded to nine subdigits (thus to ); his value for 2π was 6;16,59,28,1,34,51,46,14,50. Like √2 above, 2π is an irrational number and cannot be expressed exactly in sexagesimal. Its sexagesimal expansion begins 6;16,59,28,1,34,51,46,14,49,55,12,35... (OEIS: )
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "1;24,51,10=1+\\frac{24}{60}+\\frac{51}{60^2}+\\frac{10}{60^3}=\\frac{30547}{21600}\\approx 1.41421296\\ldots"
}
]
| https://en.wikipedia.org/wiki?curid=135316 |
1353255 | Duration gap | Financial institutions duration gap
In finance and accounting, and particularly in asset and liability management (ALM), the duration gap is the difference between the duration - i.e. the average "maturity" - of assets and liabilities held by a financial entity.
A related approach is to see the "duration gap" as the difference in the price sensitivity of interest-yielding assets and the price sensitivity of liabilities (of the organization) to a change in market interest rates (yields).
The duration gap thus measures how well matched are the timings of cash inflows (from assets) and cash outflows (from liabilities), and is then one of the primary asset–liability mismatches considered in the ALM process.
The term is typically used by banks, pension funds, or other financial institutions to measure, and manage, their risk due to changes in the interest rate; see and .
By duration matching, that is, creating a zero duration gap, the firm becomes "immunized" against interest rate risk. Duration cuts both ways: it can be beneficial or harmful depending on where interest rates are headed.
A formula sometimes applied is:
formula_0
Implied here is that even if the duration gap is zero, the firm is immunized only if the size of the liabilities equals the size of the assets. Thus as an example, with a two-year loan of one million and a one-year asset of two millions, the firm is still exposed to rollover risk after one year when the remaining year of the two-year loan has to be financed.
formula_1
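A small sketch of the weighted formula applied to this example (one-year asset of two million, two-year loan of one million); the code simply re-evaluates the expression:

#include <stdio.h>

int main(void) {
    double earning_assets     = 2000000.0;   /* one-year asset */
    double asset_duration     = 1.0;         /* years          */
    double paying_liabilities = 1000000.0;   /* two-year loan  */
    double liability_duration = 2.0;         /* years          */

    double gap = asset_duration
               - liability_duration * (paying_liabilities / earning_assets);

    printf("Duration gap = %.2f years\n", gap);   /* 0.00, yet rollover risk remains */
    return 0;
}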
Further limitations of the duration gap approach to risk-management include the following:
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "Duration \\ gap = duration \\ of \\ earning \\ assets \\ - \\ duration \\ of \\ paying \\ liabilities \\ \\times \\ \\frac{paying \\ liabilities}{earning \\ assets}"
},
{
"math_id": 1,
"text": "0 = 1 - 2 \\times \\frac{1,000,000}{2,000,000}"
}
]
| https://en.wikipedia.org/wiki?curid=1353255 |
13535375 | Mass spectrometry imaging | Mass spectrometry technique that can visualize the spatial distribution of molecules
Mass spectrometry imaging (MSI) is a technique used in mass spectrometry to visualize the spatial distribution of molecules, such as biomarkers, metabolites, peptides or proteins, by their molecular masses. After collecting a mass spectrum at one spot, the sample is moved to reach another region, and so on, until the entire sample is scanned. By choosing a peak in the resulting spectra that corresponds to the compound of interest, the MS data is used to map its distribution across the sample. This results in pictures of the spatially resolved distribution of a compound pixel by pixel. Each data set contains a veritable gallery of pictures because any peak in each spectrum can be spatially mapped. Although MSI has generally been considered a qualitative method, the signal generated by this technique is proportional to the relative abundance of the analyte, so quantification is possible when its challenges are overcome. Although widely used traditional methodologies like radiochemistry and immunohistochemistry achieve the same goal as MSI, they are limited in their abilities to analyze multiple samples at once, and can prove to be lacking if researchers do not have prior knowledge of the samples being studied. The most common ionization technologies in the field of MSI are DESI imaging, MALDI imaging, secondary ion mass spectrometry imaging (SIMS imaging) and nanoscale SIMS (NanoSIMS).
History.
More than 50 years ago, MSI was introduced using secondary ion mass spectrometry (SIMS) to study semiconductor surfaces by Castaing and Slodzian. However, it was the pioneering work of Richard Caprioli and colleagues in the late 1990s, demonstrating how matrix-assisted laser desorption/ionization (MALDI) could be applied to visualize large biomolecules (such as proteins and lipids) in cells and tissue to reveal the function of these molecules and how that function is changed by diseases like cancer, which led to the widespread use of MSI. Nowadays, different ionization techniques have been used, including SIMS, MALDI and desorption electrospray ionization (DESI), as well as other technologies. Still, MALDI is the current dominant technology with regard to clinical and biological applications of MSI.
Operation principle.
The MSI is based on the spatial distribution of the sample. Therefore, the operation principle depends on the technique that is used to obtain the spatial information. The two techniques used in MSI are: microprobe and microscope.
Microprobe.
This technique is performed using a focused ionization beam to analyze a specific region of the sample by generating a mass spectrum. The mass spectrum is stored along with the spatial coordination where the measurement took place. Then, a new region is selected and analyzed by moving the sample or the ionization beam. These steps are repeated until the entire sample has been scanned. By coupling all individual mass spectra, a distribution map of intensities as a function of x and y locations can be plotted. As a result, reconstructed molecular images of the sample are obtained.
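As a schematic illustration of this per-pixel reconstruction (not tied to any particular instrument or vendor software; the function name, tolerance and toy data are ours), an ion image can be assembled from a raster of spectra in a few lines of Python/NumPy:

```python
import numpy as np

def ion_image(spectra, mz_axis, target_mz, tol=0.1):
    """Map the intensity of one chosen peak across the raster.

    spectra: (ny, nx, n_mz) array, one mass spectrum per measured position
    mz_axis: (n_mz,) common m/z axis
    Returns an (ny, nx) image of summed intensity within +/- tol of target_mz.
    """
    window = np.abs(mz_axis - target_mz) <= tol
    return spectra[:, :, window].sum(axis=-1)

# Toy data: a 20 x 30 raster with 500 m/z bins.
rng = np.random.default_rng(0)
mz_axis = np.linspace(100.0, 1000.0, 500)
spectra = rng.random((20, 30, 500))
image = ion_image(spectra, mz_axis, target_mz=500.0)   # a 20 x 30 intensity map
```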
Microscope.
In this technique, a 2D position-sensitive detector is used to measure the spatial origin of the ions generated at the sample surface by the ion optics of the instruments. The resolution of the spatial information will depend on the magnification of the microscope, the quality of the ions optics and the sensitivity of the detector. A new region still needs to be scanned, but the number of positions drastically reduces. The limitation of this mode is the finite depth of vision present with all microscopes.
Ion source dependence.
The ionization techniques available for MSI are suited to different applications. Some of the criteria for choosing the ionization method are the sample preparation requirements and the parameters of the measurement, such as resolution, mass range and sensitivity. Based on that, the most commonly used ionization methods are MALDI, SIMS and DESI, which are described below. Other, less common techniques include laser ablation electrospray ionization (LAESI), laser-ablation inductively coupled plasma (LA-ICP) and nanospray desorption electrospray ionization (nano-DESI).
SIMS and NanoSIMS imaging.
Secondary ion mass spectrometry (SIMS) is used to analyze solid surfaces and thin films by sputtering the surface with a focused primary ion beam and collecting and analyzing the ejected secondary ions. There are many different sources for a primary ion beam; however, the primary ion beam must contain ions that are at the higher end of the energy scale. Some common sources are Cs+, O2+, O, Ar+ and Ga+. SIMS imaging is performed in a manner similar to electron microscopy: the primary ion beam is scanned across the sample while secondary mass spectra are recorded. SIMS proves to be advantageous in providing the highest image resolution, but only over small sample areas. Moreover, this technique is widely regarded as one of the most sensitive forms of mass spectrometry, as it can detect elements in concentrations as small as 10¹²–10¹⁶ atoms per cubic centimeter.
Multiplexed ion beam imaging (MIBI) is a SIMS method that uses metal isotope labeled antibodies to label compounds in biological samples.
Developments within SIMS: Some chemical modifications have been made within SIMS to increase the efficiency of the process. There are currently two separate techniques being used to increase the overall efficiency by increasing the sensitivity of SIMS measurements. The first is matrix-enhanced SIMS (ME-SIMS), which uses the same sample preparation as MALDI and thereby simulates the chemical ionization properties of MALDI. ME-SIMS does not sample nearly as much material; however, if the analyte being tested has a low mass value, it can produce spectra similar in appearance to MALDI spectra. ME-SIMS has been so effective that it has been able to detect low-mass chemicals at subcellular levels, which was not possible prior to the development of the ME-SIMS technique. The second technique is sample metallization (Meta-SIMS), the addition of gold or silver to the sample. This forms a layer of gold or silver around the sample that is normally no more than 1–3 nm thick. Using this technique has resulted in an increase of sensitivity for larger-mass samples. The addition of the metallic layer also converts insulating samples into conducting samples, so charge compensation within SIMS experiments is no longer required.
Subcellular (50 nm) resolution is enabled by NanoSIMS allowing for absolute quantitative analysis at the organelle level.
MALDI imaging.
Matrix-assisted laser desorption ionization can be used as a mass spectrometry imaging technique for relatively large molecules. It has recently been shown that the most effective type of matrix to use is an ionic matrix for MALDI imaging of tissue. In this version of the technique the sample, typically a thin tissue section, is moved in two dimensions while the mass spectrum is recorded. Although MALDI has the benefit of being able to record the spatial distribution of larger molecules, it comes at the cost of lower resolution than the SIMS technique. The limit for the lateral resolution for most of the modern instruments using MALDI is 20 formula_0m. MALDI experiments commonly use either an Nd:YAG (355 nm) or N2 (337 nm) laser for ionization.
Pharmacodynamics and toxicodynamics in tissue have been studied by MALDI imaging.
DESI imaging.
Desorption electrospray ionization (DESI) is a less destructive technique, which couples simplicity with rapid analysis of the sample. The sample is sprayed with an electrically charged solvent mist at an angle that causes the ionization and desorption of various molecular species. Two-dimensional maps of the abundance of the selected ions at the surface of the sample are then generated in relation to their spatial distribution. This technique is applicable to solid, liquid, frozen and gaseous samples. Moreover, DESI allows a wide range of organic and biological compounds to be analyzed, such as animal and plant tissues and cell culture samples, without complex sample preparation. Although this technique has the poorest spatial resolution among the others, it can create high-quality images from large-area scans, such as a whole-body section scan.
Combination of various MSI techniques and other imaging techniques.
Combining various MSI techniques can be beneficial, since each particular technique has its own advantages. For example, when information about both lipids and proteins is needed from the same tissue section, DESI can be performed to analyze the lipids, followed by MALDI to obtain information about the peptides, and finally a stain (haematoxylin and eosin) can be applied for medical diagnosis of the structural characteristics of the tissue. On the side of combining MSI with other imaging techniques, fluorescence staining with MSI and magnetic resonance imaging (MRI) with MSI can be highlighted. Fluorescence staining can give information on the presence of certain proteins involved in a process inside a tissue, while MSI may give information about the molecular changes present in that process. Combining both techniques, multimodal pictures or even 3D images of the distribution of different molecules can be generated. In contrast, MRI with MSI combines the continuous 3D representation of the MRI image with the detailed structural representation provided by the molecular information from MSI. Even though MSI itself can generate 3D images, the picture is only part of the reality due to the depth limitation of the analysis, while MRI provides, for example, detailed organ shape with additional anatomical information. This coupled technique can be beneficial for precise cancer diagnosis and neurosurgery.
Data processing.
Standard data format for mass spectrometry imaging datasets.
The imzML format was proposed to exchange data in a standardized XML file based on the mzML format. Several imaging MS software tools support it. The advantage of this format is the flexibility to exchange data between different instruments and data analysis software.
Software.
There are many free software packages available for visualization and mining of imaging mass spectrometry data. Converters from Thermo Fisher format, Analyze format, GRD format and Bruker format to imzML format were developed by the Computis project. Some software modules are also available for viewing mass spectrometry images in imzML format: Biomap (Novartis, free), Datacube Explorer (AMOLF, free), EasyMSI (CEA), Mirion (JLU), MSiReader (NCSU, free) and SpectralAnalysis.
For processing .imzML files with the free statistical and graphics language R, a collection of R scripts is available, which permits parallel-processing of large files on a local computer, a remote cluster or on the Amazon cloud.
Another free statistical package for processing imzML and Analyze 7.5 data in R exists, Cardinal.
SPUTNIK is an R package containing various filters to remove peaks characterized by an uncorrelated spatial distribution with the sample location or spatial randomness.
Applications.
A remarkable ability of MSI is to find the localization of biomolecules in tissues even when there is no prior information about them. This feature has made MSI a unique tool for clinical and pharmacological research. It provides information about biomolecular changes associated with diseases by tracking proteins, lipids, and cell metabolism. For example, identifying biomarkers by MSI can support detailed cancer diagnosis. In addition, low-cost imaging for pharmaceutical studies can be acquired, such as images of molecular signatures that would be indicative of treatment response for a specific drug or of the effectiveness of a particular drug delivery method.
Ion colocalization has been studied as a way to infer local interactions between biomolecules. Similarly to colocalization in microscopy imaging, correlation has been used to quantify the similarity between ion images and generate network models.
Advantages, challenges and limitations.
The main advantage of MSI for studying the location and distribution of molecules within tissue is that this analysis can provide greater selectivity, more information or more accuracy than other approaches. Moreover, this tool requires less investment of time and resources for similar results. The table below shows a comparison of advantages and disadvantages of some available techniques, including MSI, in relation to drug distribution analysis.
Notes.
<templatestyles src="Reflist/styles.css" />
Further reading.
"Imaging Trace Metals in Biological Systems" pp 81–134 in "Metals, Microbes and Minerals: The Biogeochemical Side of Life" (2021) pp xiv + 341. Authors Yu, Jyao; Harankhedkar, Shefali; Nabatilan, Arielle; Fahrni, Christopher; Walter de Gruyter, Berlin.
Editors Kroneck, Peter M.H. and Sosa Torres, Martha.
DOI 10.1515/9783110589771-004
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\mu"
}
]
| https://en.wikipedia.org/wiki?curid=13535375 |
1353729 | Gaussian binomial coefficient | Family of polynomials
In mathematics, the Gaussian binomial coefficients (also called Gaussian coefficients, Gaussian polynomials, or "q"-binomial coefficients) are "q"-analogs of the binomial coefficients. The Gaussian binomial coefficient, written as formula_0 or formula_1, is a polynomial in "q" with integer coefficients, whose value when "q" is set to a prime power counts the number of subspaces of dimension "k" in a vector space of dimension "n" over formula_2, a finite field with "q" elements; i.e. it is the number of points in the finite Grassmannian formula_3.
Definition.
The Gaussian binomial coefficients are defined by:
formula_4
where "m" and "r" are non-negative integers. If "r" > "m", this evaluates to 0. For "r"
0, the value is 1 since both the numerator and denominator are empty products.
Although the formula at first appears to be a rational function, it actually is a polynomial, because the division is exact in Z["q"].
All of the factors in numerator and denominator are divisible by 1 − "q", and the quotient is the "q"-number:
formula_5
Dividing out these factors gives the equivalent formula
formula_6
In terms of the "q" factorial formula_7, the formula can be stated as
formula_8
Substituting "q"
1 into formula_9 gives the ordinary binomial coefficient formula_10.
The Gaussian binomial coefficient has finite values as formula_11:
formula_12
formula_13
formula_14
formula_15
formula_16
formula_17
formula_18
formula_19
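For concreteness, the product formula above can be evaluated symbolically. A minimal Python/SymPy sketch (the helper names are ours) reproduces the small cases listed above and the q = 1 specialization:

```python
import sympy as sp

q = sp.symbols('q')

def q_number(k):
    """[k]_q = 1 + q + ... + q^(k-1)."""
    return sum(q**i for i in range(k))

def gauss_binomial(m, r):
    """Gaussian binomial coefficient (m choose r)_q as a polynomial in q."""
    if r < 0 or r > m:
        return sp.Integer(0)
    num, den = sp.Integer(1), sp.Integer(1)
    for j in range(r):
        num *= q_number(m - j)     # [m]_q [m-1]_q ... [m-r+1]_q
        den *= q_number(j + 1)     # [1]_q [2]_q ... [r]_q
    return sp.expand(sp.cancel(num / den))

print(gauss_binomial(4, 2))              # q**4 + q**3 + 2*q**2 + q + 1
print(gauss_binomial(4, 2).subs(q, 1))   # 6, the ordinary binomial coefficient C(4, 2)
```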
Combinatorial descriptions.
Inversions.
One combinatorial description of Gaussian binomial coefficients involves inversions.
The ordinary binomial coefficient formula_10 counts the "r"-combinations chosen from an "m"-element set. If one takes those "m" elements to be the different character positions in a word of length "m", then each "r"-combination corresponds to a word of length "m" using an alphabet of two letters, say {0,1}, with "r" copies of the letter 1 (indicating the positions in the chosen combination) and "m" − "r" letters 0 (for the remaining positions).
So, for example, the formula_20 words using two "0"s and two "1"s are formula_21.
To obtain the Gaussian binomial coefficient formula_9, each word is associated with a factor "q""d", where "d" is the number of inversions of the word, where, in this case, an inversion is a pair of positions where the left of the pair holds the letter "1" and the right position holds the letter "0".
With the example above, there is one word with 0 inversions, formula_22, one word with 1 inversion, formula_23, two words with 2 inversions, formula_24, formula_25, one word with 3 inversions, formula_26, and one word with 4 inversions, formula_27. This is also the number of left-shifts of the "1"s from the initial position.
These correspond to the coefficients in formula_28.
Another way to see this is to associate each word with a path across a rectangular grid with height "r" and width "m" − "r", going from the bottom left corner to the top right corner. The path takes a step right for each "0" and a step up for each "1". An inversion switches the directions of a step (right+up becomes up+right and vice versa), hence the number of inversions equals the area under the path.
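The inversion description can be checked directly by brute force. The sketch below (names are ours) enumerates all words with "r" ones and "m" − "r" zeros and tallies them by inversion count, recovering the coefficients of formula_28:

```python
from itertools import combinations

def coeffs_by_inversions(m, r):
    """Coefficient list [c_0, c_1, ...]: c_d = number of 0/1 words with r ones and d inversions."""
    coeffs = [0] * (r * (m - r) + 1)           # the maximum number of inversions is r(m - r)
    for ones in combinations(range(m), r):     # positions of the r letter 1's
        word = ['1' if i in ones else '0' for i in range(m)]
        inversions = sum(1 for i in range(m) for j in range(i + 1, m)
                         if word[i] == '1' and word[j] == '0')
        coeffs[inversions] += 1
    return coeffs

print(coeffs_by_inversions(4, 2))   # [1, 1, 2, 1, 1], i.e. 1 + q + 2q^2 + q^3 + q^4
```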
Balls into bins.
Let formula_29 be the number of ways of throwing formula_30 indistinguishable balls into formula_31 indistinguishable bins, where each bin can contain up to formula_32 balls.
The Gaussian binomial coefficient can be used to characterize formula_29.
Indeed,
formula_33
where formula_34 denotes the coefficient of formula_35 in polynomial formula_36 (see also Applications section below).
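A small sketch (names are ours) extracts this coefficient with SymPy; for three balls in two bins of capacity three, the only fillings are (3, 0) and (2, 1), so the answer should be 2:

```python
import sympy as sp

q = sp.symbols('q')

def balls_into_bins(n, m, r):
    """Ways to throw r indistinguishable balls into m indistinguishable bins of
    capacity n: the coefficient of q^r in the Gaussian binomial (n+m choose m)_q."""
    num, den = sp.Integer(1), sp.Integer(1)
    for j in range(m):
        num *= 1 - q**(n + m - j)
        den *= 1 - q**(j + 1)
    poly = sp.Poly(sp.cancel(num / den), q)
    return poly.coeff_monomial(q**r)

print(balls_into_bins(3, 2, 3))   # 2
```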
Properties.
Reflection.
Like the ordinary binomial coefficients, the Gaussian binomial coefficients are center-symmetric, i.e., invariant under the reflection formula_37:
formula_38
In particular,
formula_39
formula_40
Limit at q = 1.
The evaluation of a Gaussian binomial coefficient at "q" = 1 is
formula_41
i.e. the sum of the coefficients gives the corresponding binomial value.
Degree of polynomial.
The degree of formula_42 is formula_43.
q identities.
Analogs of Pascal's identity.
The analogs of Pascal's identity for the Gaussian binomial coefficients are:
formula_44
and
formula_45
When formula_46, these both give the usual binomial identity. We can see that as formula_47, both equations remain valid.
The first Pascal analog allows computation of the Gaussian binomial coefficients recursively (with respect to "m" ) using the initial values
formula_48
and also shows that the Gaussian binomial coefficients are indeed polynomials (in "q").
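A short recursive implementation of the first analog (memoized on ("m", "r"); names are ours) makes both points concrete: it never performs a division, so its output is visibly a polynomial in "q".

```python
import sympy as sp
from functools import lru_cache

q = sp.symbols('q')

@lru_cache(maxsize=None)
def gauss_binomial_rec(m, r):
    """(m choose r)_q via the first Pascal analog: q^r * (m-1 choose r)_q + (m-1 choose r-1)_q."""
    if r < 0 or r > m:
        return sp.Integer(0)
    if r == 0 or r == m:
        return sp.Integer(1)
    return sp.expand(q**r * gauss_binomial_rec(m - 1, r) + gauss_binomial_rec(m - 1, r - 1))

print(gauss_binomial_rec(6, 3))
# coefficients 1, 1, 2, 3, 3, 3, 3, 2, 1, 1 in degrees 0..9, matching the example listed earlier
```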
The second Pascal analog follows from the first using the substitution formula_37 and the invariance of the Gaussian binomial coefficients under the reflection formula_37.
These identities have natural interpretations in terms of linear algebra. Recall that formula_49 counts "r"-dimensional subspaces formula_50, and let formula_51 be a projection with one-dimensional nullspace formula_52. The first identity comes from the bijection which takes formula_53 to the subspace formula_54; in case formula_55, the space formula_56 is "r"-dimensional, and we must also keep track of the linear function formula_57 whose graph is formula_58; but in case formula_59, the space formula_56 is ("r"−1)-dimensional, and we can reconstruct formula_60 without any extra information. The second identity has a similar interpretation, taking formula_58 to formula_61 for an ("m"−1)-dimensional space formula_62, again splitting into two cases.
Proofs of the analogs.
Both analogs can be proved by first noting that from the definition of formula_49, we have:
As
formula_63
Equation (1) becomes:
formula_64
and substituting equation (3) gives the first analog.
A similar process, using
formula_65
instead, gives the second analog.
"q"-binomial theorem.
There is an analog of the binomial theorem for "q"-binomial coefficients, known as the Cauchy binomial theorem:
formula_66
Like the usual binomial theorem, this formula has numerous generalizations and extensions; one such, corresponding to Newton's generalized binomial theorem for negative powers, is
formula_67
In the limit formula_68, these formulas yield
formula_69
and
formula_70.
Setting formula_71 gives the generating functions for distinct and any parts respectively. (See also Basic hypergeometric series.)
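The Cauchy binomial theorem can be verified symbolically for any fixed "n"; the following sketch (helper names are ours) checks the case "n" = 4 by expanding both sides:

```python
import sympy as sp

q, t = sp.symbols('q t')

def qbinom(m, r):
    """(m choose r)_q via the product formula."""
    num, den = sp.Integer(1), sp.Integer(1)
    for j in range(r):
        num *= 1 - q**(m - j)
        den *= 1 - q**(j + 1)
    return sp.cancel(num / den)

n = 4
lhs = sp.Integer(1)
for k in range(n):
    lhs *= 1 + q**k * t                                                       # product side
rhs = sum(q**(k*(k - 1)//2) * qbinom(n, k) * t**k for k in range(n + 1))      # sum side
print(sp.expand(lhs - rhs) == 0)                                              # True
```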
Central q-binomial identity.
With the ordinary binomial coefficients, we have:
formula_72
With q-binomial coefficients, the analog is:
formula_73
Applications.
Gauss originally used the Gaussian binomial coefficients in his determination of the sign of the quadratic Gauss sum.
Gaussian binomial coefficients occur in the counting of symmetric polynomials and in the theory of partitions. The coefficient of "q""r" in
formula_74
is the number of partitions of "r" with "m" or fewer parts each less than or equal to "n". Equivalently, it is also the number of partitions of "r" with "n" or fewer parts each less than or equal to "m".
Gaussian binomial coefficients also play an important role in the enumerative theory of projective spaces defined over a finite field. In particular, for every finite field "F""q" with "q" elements, the Gaussian binomial coefficient
formula_75
counts the number of "k"-dimensional vector subspaces of an "n"-dimensional vector space over "F""q" (a Grassmannian). When expanded as a polynomial in "q", it yields the well-known decomposition of the Grassmannian into Schubert cells. For example, the Gaussian binomial coefficient
formula_76
is the number of one-dimensional subspaces in ("F""q")"n" (equivalently, the number of points in the associated projective space). Furthermore, when "q" is 1 (respectively −1), the Gaussian binomial coefficient yields the Euler characteristic of the corresponding complex (respectively real) Grassmannian.
The number of "k"-dimensional affine subspaces of "F""q""n" is equal to
formula_77.
This allows another interpretation of the identity
formula_78
as counting the ("r" − 1)-dimensional subspaces of ("m" − 1)-dimensional projective space by fixing a hyperplane, counting such subspaces contained in that hyperplane, and then counting the subspaces not contained in the hyperplane; these latter subspaces are in bijective correspondence with the ("r" − 1)-dimensional affine subspaces of the space obtained by treating this fixed hyperplane as the hyperplane at infinity.
In the conventions common in applications to quantum groups, a slightly different definition is used; the quantum binomial coefficient there is
formula_79.
This version of the quantum binomial coefficient is symmetric under exchange of formula_80 and formula_81.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": " \\binom nk_q"
},
{
"math_id": 1,
"text": "\\begin{bmatrix}n\\\\ k\\end{bmatrix}_q"
},
{
"math_id": 2,
"text": "\\mathbb{F}_q"
},
{
"math_id": 3,
"text": "\\mathrm{Gr}(k, \\mathbb{F}_q^n)"
},
{
"math_id": 4,
"text": "{m \\choose r}_q\n= \n\\frac{(1-q^m)(1-q^{m-1})\\cdots(1-q^{m-r+1})} {(1-q)(1-q^2)\\cdots(1-q^r)} "
},
{
"math_id": 5,
"text": "[k]_q=\\sum_{0\\leq i<k}q^i=1+q+q^2+\\cdots+q^{k-1}=\n\\begin{cases}\n\\frac{1-q^k}{1-q} & \\text{for} & q \\neq 1 \\\\\nk & \\text{for} & q = 1 \n\\end{cases},"
},
{
"math_id": 6,
"text": "{m \\choose r}_q=\\frac{[m]_q[m-1]_q\\cdots[m-r+1]_q}{[1]_q[2]_q\\cdots[r]_q}\\quad(r\\leq m)."
},
{
"math_id": 7,
"text": "[n]_q!=[1]_q[2]_q\\cdots[n]_q"
},
{
"math_id": 8,
"text": "{m \\choose r}_q=\\frac{[m]_q!}{[r]_q!\\,[m-r]_q!}\\quad(r\\leq m)."
},
{
"math_id": 9,
"text": "\\tbinom mr_q"
},
{
"math_id": 10,
"text": "\\tbinom mr"
},
{
"math_id": 11,
"text": "m\\rightarrow \\infty"
},
{
"math_id": 12,
"text": "{\\infty \\choose r}_q = \\lim_{m\\rightarrow \\infty} {m \\choose r}_q = \\frac{1} {(1-q)(1-q^2)\\cdots(1-q^r)} = \\frac{1}{[r]_q!\\,(1-q)^r}"
},
{
"math_id": 13,
"text": "{0 \\choose 0}_q = {1 \\choose 0}_q = 1"
},
{
"math_id": 14,
"text": "{1 \\choose 1}_q = \\frac{1-q}{1-q}=1"
},
{
"math_id": 15,
"text": "{2 \\choose 1}_q = \\frac{1-q^2}{1-q}=1+q"
},
{
"math_id": 16,
"text": "{3 \\choose 1}_q = \\frac{1-q^3}{1-q}=1+q+q^2"
},
{
"math_id": 17,
"text": "{3 \\choose 2}_q = \\frac{(1-q^3)(1-q^2)}{(1-q)(1-q^2)}=1+q+q^2"
},
{
"math_id": 18,
"text": "{4 \\choose 2}_q = \\frac{(1-q^4)(1-q^3)}{(1-q)(1-q^2)}=(1+q^2)(1+q+q^2)=1+q+2q^2+q^3+q^4"
},
{
"math_id": 19,
"text": "{6 \\choose 3}_q = \\frac{(1-q^6)(1-q^5)(1-q^4)}{(1-q)(1-q^2)(1-q^3)}=(1+q^2)(1+q^3)(1+q+q^2+q^3+q^4)=1 + q + 2 q^2 + 3 q^3 + 3 q^4 + 3 q^5 + 3 q^6 + 2 q^7 + q^8 + q^9"
},
{
"math_id": 20,
"text": "{4 \\choose 2} = 6"
},
{
"math_id": 21,
"text": "0011, 0101, 0110, 1001, 1010, 1100"
},
{
"math_id": 22,
"text": "0011"
},
{
"math_id": 23,
"text": "0101"
},
{
"math_id": 24,
"text": "0110"
},
{
"math_id": 25,
"text": "1001"
},
{
"math_id": 26,
"text": "1010"
},
{
"math_id": 27,
"text": "1100"
},
{
"math_id": 28,
"text": "{4 \\choose 2}_q = 1+q+2q^2+q^3+q^4"
},
{
"math_id": 29,
"text": "B(n,m,r)"
},
{
"math_id": 30,
"text": "r"
},
{
"math_id": 31,
"text": "m"
},
{
"math_id": 32,
"text": "n"
},
{
"math_id": 33,
"text": "B(n,m,r)= [q^r] {n+m \\choose m}_q. "
},
{
"math_id": 34,
"text": "[q^r]P"
},
{
"math_id": 35,
"text": "q^r"
},
{
"math_id": 36,
"text": "P"
},
{
"math_id": 37,
"text": " r \\rightarrow m-r "
},
{
"math_id": 38,
"text": "{m \\choose r}_q = {m \\choose m-r}_q. "
},
{
"math_id": 39,
"text": "{m \\choose 0}_q ={m \\choose m}_q=1 \\, ,"
},
{
"math_id": 40,
"text": "{m \\choose 1}_q ={m \\choose m-1}_q=\\frac{1-q^m}{1-q}=1+q+ \\cdots + q^{m-1} \\quad m \\ge 1 \\, ."
},
{
"math_id": 41,
"text": "\\lim_{q \\to 1} \\binom{m}{r}_q = \\binom{m}{r}"
},
{
"math_id": 42,
"text": "\\binom{m}{r}_q"
},
{
"math_id": 43,
"text": "\\binom{m+1}{2}-\\binom{r+1}{2}-\\binom{(m-r)+1}{2} = r(m-r)"
},
{
"math_id": 44,
"text": "{m \\choose r}_q = q^r {m-1 \\choose r}_q + {m-1 \\choose r-1}_q"
},
{
"math_id": 45,
"text": "{m \\choose r}_q = {m-1 \\choose r}_q + q^{m-r}{m-1 \\choose r-1}_q."
},
{
"math_id": 46,
"text": "q=1"
},
{
"math_id": 47,
"text": "m\\to\\infty"
},
{
"math_id": 48,
"text": "{m \\choose m}_q ={m \\choose 0}_q=1 "
},
{
"math_id": 49,
"text": "\\tbinom{m}{r}_q"
},
{
"math_id": 50,
"text": "V\\subset \\mathbb{F}_q^m"
},
{
"math_id": 51,
"text": "\\pi:\\mathbb{F}_q^m \\to \\mathbb{F}_q^{m-1} "
},
{
"math_id": 52,
"text": "E_1 "
},
{
"math_id": 53,
"text": "V\\subset \\mathbb{F}_q^m "
},
{
"math_id": 54,
"text": "V' = \\pi(V)\\subset \\mathbb{F}_q^{m-1}"
},
{
"math_id": 55,
"text": "E_1\\not\\subset V"
},
{
"math_id": 56,
"text": "V'"
},
{
"math_id": 57,
"text": "\\phi:V'\\to E_1"
},
{
"math_id": 58,
"text": "V"
},
{
"math_id": 59,
"text": "E_1\\subset V"
},
{
"math_id": 60,
"text": "V=\\pi^{-1}(V')"
},
{
"math_id": 61,
"text": "V' = V\\cap E_{n-1}"
},
{
"math_id": 62,
"text": "E_{m-1}"
},
{
"math_id": 63,
"text": "\\frac{1-q^m}{1-q^{m-r}}=\\frac{1-q^r+q^r-q^m}{1-q^{m-r}}=q^r+\\frac{1-q^r}{1-q^{m-r}}"
},
{
"math_id": 64,
"text": "\\binom{m}{r}_q = q^r\\binom{m-1}{r}_q + \\frac{1-q^r}{1-q^{m-r}}\\binom{m-1}{r}_q"
},
{
"math_id": 65,
"text": "\\frac{1-q^m}{1-q^r}=q^{m-r}+\\frac{1-q^{m-r}}{1-q^r}"
},
{
"math_id": 66,
"text": "\\prod_{k=0}^{n-1} (1+q^kt)=\\sum_{k=0}^n q^{k(k-1)/2} \n{n \\choose k}_q t^k ."
},
{
"math_id": 67,
"text": "\\prod_{k=0}^{n-1} \\frac{1}{1-q^kt}=\\sum_{k=0}^\\infty \n{n+k-1 \\choose k}_q t^k. "
},
{
"math_id": 68,
"text": "n\\rightarrow\\infty"
},
{
"math_id": 69,
"text": "\\prod_{k=0}^{\\infty} (1+q^kt)=\\sum_{k=0}^\\infty \\frac{q^{k(k-1)/2}t^k}{[k]_q!\\,(1-q)^k}"
},
{
"math_id": 70,
"text": "\\prod_{k=0}^\\infty \\frac{1}{1-q^kt}=\\sum_{k=0}^\\infty \n\\frac{t^k}{[k]_q!\\,(1-q)^k}"
},
{
"math_id": 71,
"text": "t=q"
},
{
"math_id": 72,
"text": "\\sum_{k=0}^n \\binom{n}{k}^2 = \\binom{2n}{n}"
},
{
"math_id": 73,
"text": "\\sum_{k=0}^n q^{k^2}\\binom{n}{k}_q^2 = \\binom{2n}{n}_q"
},
{
"math_id": 74,
"text": "{n+m \\choose m}_q"
},
{
"math_id": 75,
"text": "{n \\choose k}_q"
},
{
"math_id": 76,
"text": "{n \\choose 1}_q=1+q+q^2+\\cdots+q^{n-1}"
},
{
"math_id": 77,
"text": "q^{n-k} {n \\choose k}_q"
},
{
"math_id": 78,
"text": "{m \\choose r}_q = {m-1 \\choose r}_q + q^{m-r}{m-1 \\choose r-1}_q"
},
{
"math_id": 79,
"text": "q^{k^2 - n k}{n \\choose k}_{q^2}"
},
{
"math_id": 80,
"text": "q"
},
{
"math_id": 81,
"text": "q^{-1}"
}
]
| https://en.wikipedia.org/wiki?curid=1353729 |
13538721 | 4-hydroxybenzoate polyprenyltransferase | Class of enzymes
In enzymology, a 4-hydroxybenzoate polyprenyltransferase (EC 2.5.1.39) is an enzyme that catalyzes the chemical reaction
a polyprenyl diphosphate + 4-hydroxybenzoate formula_0 diphosphate + a 4-hydroxy-3-polyprenylbenzoate
Thus, the two substrates of this enzyme are a polyprenyl diphosphate and 4-hydroxybenzoate, whereas its two products are diphosphate and 4-hydroxy-3-polyprenylbenzoate.
This enzyme belongs to the family of transferases, specifically those transferring aryl or alkyl groups other than methyl groups. This enzyme participates in ubiquinone biosynthesis.
Nomenclature.
The systematic name of this enzyme class is polyprenyl-diphosphate:4-hydroxybenzoate polyprenyltransferase. Other names in common use include:
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\rightleftharpoons"
}
]
| https://en.wikipedia.org/wiki?curid=13538721 |
13542720 | Multiplier algebra | In mathematics, the multiplier algebra, denoted by "M"("A"), of a C*-algebra "A" is a unital C*-algebra that is the largest unital C*-algebra that contains "A" as an ideal in a "non-degenerate" way. It is the noncommutative generalization of Stone–Čech compactification. Multiplier algebras were introduced by .
For example, if "A" is the C*-algebra of compact operators on a separable Hilbert space, "M"("A") is "B"("H"), the C*-algebra of all bounded operators on "H".
Definition.
An ideal "I" in a C*-algebra "B" is said to be essential if "I" ∩ "J" is non-trivial for every ideal "J". An ideal "I" is essential if and only if "I"⊥, the "orthogonal complement" of "I" in the Hilbert C*-module "B" is {0}.
Let "A" be a C*-algebra. Its multiplier algebra "M"("A") is any C*-algebra satisfying the following universal property: for all C*-algebra "D" containing "A" as an ideal, there exists a unique *-homomorphism φ: "D" → "M"("A") such that "φ" extends the identity homomorphism on "A" and "φ"("A"⊥) = {0}.
Uniqueness up to isomorphism is specified by the universal property. When "A" is unital, "M"("A") = "A". It also follows from the definition that for any "D" containing "A" as an essential ideal, the multiplier algebra "M"("A") contains "D" as a C*-subalgebra.
The existence of "M"("A") can be shown in several ways.
A double centralizer of a C*-algebra "A" is a pair ("L", "R") of bounded linear maps on "A" such that "aL"("b") = "R"("a")"b" for all "a" and "b" in "A". This implies that ||"L"|| = ||"R"||. The set of double centralizers of "A" can be given a C*-algebra structure. This C*-algebra contains "A" as an essential ideal and can be identified as the multiplier algebra "M"("A"). For instance, if "A" is the compact operators "K"("H") on a separable Hilbert space, then each "x" ∈ "B"("H") defines a double centralizer of "A" by simply multiplication from the left and right.
Alternatively, "M"("A") can be obtained via representations. The following fact will be needed:
Lemma. If "I" is an ideal in a C*-algebra "B", then any faithful nondegenerate representation "π" of "I" can be extended "uniquely" to "B".
Now take any faithful nondegenerate representation "π" of "A" on a Hilbert space "H". The above lemma, together with the universal property of the multiplier algebra, yields that "M"("A") is isomorphic to the idealizer of "π"("A") in "B"("H"). It is immediate that "M"("K"("H")) = "B"("H").
Lastly, let "E" be a Hilbert C*-module and "B"("E") (resp. "K"("E")) be the adjointable (resp. compact) operators on "E" "M"("A") can be identified via a *-homomorphism of "A" into "B"("E"). Something similar to the above lemma is true:
Lemma. If "I" is an ideal in a C*-algebra "B", then any faithful nondegenerate *-homomorphism "π" of "I" into "B"("E")can be extended "uniquely" to "B".
Consequently, if "π" is a faithful nondegenerate *-homomorphism of "A" into "B"("E"), then "M"("A") is isomorphic to the idealizer of "π"("A"). For instance, "M"("K"("E")) = "B"("E") for any Hilbert module "E".
The C*-algebra "A" is isomorphic to the compact operators on the Hilbert module "A". Therefore, "M"("A") is the adjointable operators on "A".
Strict topology.
Consider the topology on "M"("A") specified by the seminorms {"la", "ra"}"a" ∈ "A", where
formula_0
The resulting topology is called the strict topology on "M"("A"). "A" is strictly dense in "M"("A").
When "A" is unital, "M"("A") = "A", and the strict topology coincides with the norm topology. For "B"("H") = "M"("K"("H")), the strict topology is the σ-strong* topology. It follows from above that "B"("H") is complete in the σ-strong* topology.
Commutative case.
Let "X" be a locally compact Hausdorff space, "A" = "C"0("X"), the commutative C*-algebra of continuous functions that vanish at infinity. Then "M"("A") is "C""b"("X"), the continuous bounded functions on "X". By the Gelfand–Naimark theorem, one has the isomorphism of C*-algebras
formula_1
where "Y" is the spectrum of "C""b"("X"). "Y" is in fact homeomorphic to the Stone–Čech compactification "βX" of "X".
Corona algebra.
The corona or corona algebra of "A" is the quotient "M"("A")/"A".
For example, the corona algebra of the algebra of compact operators on a Hilbert space is the Calkin algebra.
The corona algebra is a noncommutative analogue of the corona set of a topological space. | [
{
"math_id": 0,
"text": "l_a (x) = \\|ax\\|, \\; r_a(x) = \\| xa \\|."
},
{
"math_id": 1,
"text": "C_b(X) \\simeq C(Y)"
}
]
| https://en.wikipedia.org/wiki?curid=13542720 |
13543961 | Financial accelerator | The financial accelerator in macroeconomics is the process by which adverse shocks to the economy may be amplified by worsening financial market conditions. More broadly, adverse conditions in the real economy and in financial markets propagate the financial and macroeconomic downturn.
Financial accelerator mechanism.
The link between the real economy and financial markets stems from firms’ need for external finance to engage in physical investment opportunities. Firms’ ability to borrow depends essentially on the market value of their net worth. The reason for this is asymmetric information between lenders and borrowers. Lenders are likely to have little information about the reliability of any given borrower. As such, they usually require borrowers to set forth their ability to repay, often in the form of collateralized assets. It follows that a fall in asset prices deteriorates the balance sheets of the firms and their net worth. The resulting deterioration of their ability to borrow has a negative impact on their investment. Decreased economic activity further cuts the asset prices down, which leads to a feedback cycle of falling asset prices, deteriorating balance sheets, tightening financing conditions and declining economic activity. This vicious cycle is called a financial accelerator. It is a financial feedback loop or a loan/credit cycle, which, starting from a small change in financial markets, is, in principle, able to produce a large change in economic conditions.
History of acceleration in macroeconomics.
The financial accelerator framework was widely used in many studies during the 1980s and 1990s, especially by Bernanke, Gertler and Gilchrist, but the term “financial accelerator” was introduced into the macroeconomics literature in their 1996 paper. The motivation of this paper was the longstanding puzzle that large fluctuations in aggregate economic activity sometimes seem to arise from seemingly small shocks, which rationalizes the existence of an accelerator mechanism. They argue that the financial accelerator results from changes in credit market conditions, which affect the intrinsic costs of borrowing and lending associated with asymmetric information.
The principle of acceleration, namely the idea that small changes in demand can produce large changes in output, is an older phenomenon which has been used since the early 1900s. Although Aftalion's 1913 paper seems to be the first appearance of the acceleration principle, the essence of the accelerator framework could be found in a few other studies previously.
As a well-known example of the traditional view of acceleration, Samuelson (1939) argues that an increase in demand, for instance in government spending, leads to an increase in national income, which in turn drives consumption and investment, accelerating the economic activity. As a result, national income further increases, multiplying the initial effect of the stimulus through generating a virtuous cycle this time.
The roots of the modern view of acceleration go back to Fisher (1933). In his seminal work on debt and deflation, which tries to explain the underpinnings of the Great Depression, he studies a mechanism of a downward spiral in the economy induced by over-indebtedness and reinforced by a cycle of debt liquidation, assets and goods’ price deflation, net worth deterioration and economic contraction. His theory was disregarded in favor of Keynesian economics at that time.
Recently, with the rising view that financial market conditions are of high importance in driving the business cycles, the financial accelerator framework has revived again linking credit market imperfections to recessions as a source of a propagation mechanism. Many economists believe today that the financial accelerator framework describes well many of the financial-macroeconomic linkages underpinning the dynamics of The Great Depression and the subprime mortgage crisis.
A simple theoretical framework.
There are various ways of rationalizing a financial accelerator theoretically. One way is focusing on principal–agent problems in credit markets, as adopted by the influential works of Bernanke, Gertler and Gilchrist (1996), or Kiyotaki and Moore (1997).
The principal-agent view of credit markets refers to the costs (agency costs) associated with borrowing and lending due to imperfect and asymmetric information between lenders (principals) and borrowers (agents). Principals cannot access the information on investment opportunities (project returns), characteristics (creditworthiness) or actions (risk taking behavior) of the agents costlessly. These agency costs characterize three conditions that give rise to a financial accelerator:
Thus, to the extent that net worth is affected by a negative (positive) shock, the effect of the initial shock is amplified due to decreased (increased) investment and production activities as a result of the credit crunch (boom).
The following model simply illustrates the ideas above:
Consider a firm, which possesses liquid assets such as cash holdings (C) and illiquid but collateralizable assets such as land (A). In order to produce output (Y) the firm uses inputs (X), but suppose that the firm needs to borrow (B) in order to finance input costs. Suppose for simplicity that the interest rate is zero. Suppose also that A can be sold with a price of P per unit after the production, and the price of X is normalized to 1. Thus, the amount of X that can be purchased is equal to the cash holdings plus the borrowing
formula_0
Suppose now that it is costly for the lender to seize firm's output Y in case of default; however, ownership of the land A can be transferred to the lender if borrower defaults. Thus, land can serve as collateral. In this case, funds available to firm will be limited by the collateral value of the illiquid asset A, which is given by
formula_1
This borrowing constraint induces a feasibility constraint for the purchase of X
formula_2
Thus, spending on the input is limited by the net worth of the firm. If firm's net worth is less than the desired amount of X, the borrowing constraint will bind and firm's input will be limited, which also limits its output.
As can be seen from the feasibility constraint, the borrower's net worth can be shrunk by a decline in the initial cash holdings C or in asset prices P. Thus, an adverse shock to a firm's net worth (say an initial decline in asset prices) deteriorates its balance sheet by limiting its borrowing and triggers a series of falling asset prices, falling net worth, deteriorating balance sheets, falling borrowing (thus investment) and falling output. Decreased economic activity feeds back into a further fall in asset demand and asset prices, causing a vicious cycle.
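The amplification logic can be illustrated with a deliberately stylized iteration of the loop above. Everything below beyond the constraint X ≤ C + PA (the production function, the asset-price response, the parameter values and the function names) is an illustrative assumption of ours, not part of the model in the text; the point is only to show how a modest initial price decline keeps feeding on itself.

```python
def simulate(P_init, C=1.0, A=1.0, alpha=0.7, kappa=0.5, periods=6):
    """Iterate: binding borrowing limit -> input X -> output Y -> asset price P."""
    Y_star = (C + 1.0 * A) ** alpha     # activity level consistent with the pre-shock price P = 1
    P, path = P_init, []
    for _ in range(periods):
        X = C + P * A                   # feasibility constraint X <= C + P*A, assumed to bind
        Y = X ** alpha                  # assumed decreasing-returns production
        P = P * (Y / Y_star) ** kappa   # assumed asset demand: price falls when activity is below trend
        path.append((round(X, 3), round(Y, 3), round(P, 3)))
    return path

for step in simulate(P_init=0.8):       # a 20% initial fall in the asset price
    print(step)                         # X, Y and P all keep falling round after round
```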
Welfare losses and government intervention : an example from the subprime mortgage crisis.
We have been experiencing the welfare consequences of the subprime mortgage crisis, in which relatively small losses on subprime assets have triggered large reductions in wealth, employment and output. As stated by Krishnamurthy (2010), the direct losses due to household default on subprime mortgages are estimated to be at most $500 bn, but the effects of the subprime shock have been far reaching. In order to prevent such huge welfare losses, governments may intervene in the financial markets and implement policies to mitigate the effects of the initial financial shock. For the credit market view of the financial accelerator, one policy implementation is to break the link between the borrower's net worth and its ability to borrow.
There are various ways of breaking the mechanism of a financial accelerator. One way is to reverse the decline in asset prices. When asset prices fall below a certain level, the government can purchase assets at those prices, pulling up the demand for them and raising their prices back. The Federal Reserve was purchasing mortgage-backed securities in 2008 and 2009 at unusually low market prices. The supported asset prices pull the net worth of borrowers up, loosening the borrowing limits and stimulating investment.
Financial accelerator in open economies.
The financial accelerator also exists in emerging market crises in the sense that adverse shocks to a small open economy may be amplified by worsening international financial market conditions. Now the link between the real economy and the international financial markets stems from the need for international borrowing; firms’ borrowing to engage in profitable investment and production opportunities, households’ borrowing to smooth consumption when faced with income volatility or even governments’ borrowing from international funds.
Agents in an emerging economy often need external finance but informational frictions or limited commitment can limit their access to international capital markets. The information about the ability and willingness of a borrower to repay its debt is imperfectly observable so that the ability to borrow is often limited. The amount and terms of international borrowing depend on many conditions such as the credit history or default risk, output volatility or country risk, net worth or the value of collateralizable assets and the amount of outstanding liabilities.
An initial shock to productivity, world interest rate or country risk premium may lead to a “sudden stop” of capital inflows which blocks economic activity, accelerating the initial downturn. Or the familiar story of “debt-deflation” amplifies the adverse effects of an asset price shock when agents are highly indebted and the market value of their collateralizable assets deflates dramatically.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "X = C + B"
},
{
"math_id": 1,
"text": "B \\leq P A"
},
{
"math_id": 2,
"text": "X \\leq C + P A"
}
]
| https://en.wikipedia.org/wiki?curid=13543961 |
13544419 | MIMO | Use of multiple antennas in radio
In radio, multiple-input and multiple-output (MIMO) is a method for multiplying the capacity of a radio link using multiple transmission and receiving antennas to exploit multipath propagation. MIMO has become an essential element of wireless communication standards including IEEE 802.11n (Wi-Fi 4), IEEE 802.11ac (Wi-Fi 5), HSPA+ (3G), WiMAX, and Long Term Evolution (LTE). More recently, MIMO has been applied to power-line communication for three-wire installations as part of the ITU G.hn standard and of the HomePlug AV2 specification.
At one time, in wireless the term "MIMO" referred to the use of multiple antennas at the transmitter and the receiver. In modern usage, "MIMO" specifically refers to a class of techniques for sending and receiving more than one data signal simultaneously over the same radio channel by exploiting the difference in signal propagation between different antennas (e.g. due to multipath propagation). Additionally, modern MIMO usage often refers to multiple data signals sent to different receivers (with one or more receive antennas) though this is more accurately termed multi-user multiple-input single-output (MU-MISO).
History.
Early research.
MIMO is often traced back to 1970s research papers concerning multi-channel digital transmission systems and interference (crosstalk) between wire pairs in a cable bundle: AR Kaye and DA George (1970), Branderburg and Wyner (1974), and W. van Etten (1975, 1976). Although these are not examples of exploiting multipath propagation to send multiple information streams, some of the mathematical techniques for dealing with mutual interference proved useful to MIMO development. In the mid-1980s Jack Salz at Bell Laboratories took this research a step further, investigating multi-user systems operating over "mutually cross-coupled linear networks with additive noise sources" such as time-division multiplexing and dually-polarized radio systems.
Methods were developed to improve the performance of cellular radio networks and enable more aggressive frequency reuse in the early 1990s. Space-division multiple access (SDMA) uses directional or smart antennas to communicate on the same frequency with users in different locations within range of the same base station. An SDMA system was proposed by Richard Roy and Björn Ottersten, researchers at ArrayComm, in 1991. Their US patent (No. 5515378 issued in 1996) describes a method for increasing capacity using "an array of receiving antennas at the base station" with a "plurality of remote users."
Invention.
Arogyaswami Paulraj and Thomas Kailath proposed an SDMA-based inverse multiplexing technique in 1993. Their US patent (No. 5,345,599 issued in 1994) described a method of broadcasting at high data rates by splitting a high-rate signal "into several low-rate signals" to be transmitted from "spatially separated transmitters" and recovered by the receive antenna array based on differences in "directions-of-arrival." Paulraj was awarded the prestigious Marconi Prize in 2014 for "his pioneering contributions to developing the theory and applications of MIMO antennas. ... His idea for using multiple antennas at both the transmitting and receiving stations – which is at the heart of the current high speed WiFi and 4G mobile systems – has revolutionized high speed wireless."
In an April 1996 paper and subsequent patent, Greg Raleigh proposed that natural multipath propagation can be exploited to transmit multiple, independent information streams using co-located antennas and multi-dimensional signal processing. The paper also identified practical solutions for modulation (MIMO-OFDM), coding, synchronization, and channel estimation. Later that year (September 1996) Gerard J. Foschini submitted a paper that also suggested it is possible to multiply the capacity of a wireless link using what the author described as "layered space-time architecture."
Greg Raleigh, V. K. Jones, and Michael Pollack founded Clarity Wireless in 1996, and built and field-tested a prototype MIMO system. Cisco Systems acquired Clarity Wireless in 1998. Bell Labs built a laboratory prototype demonstrating its V-BLAST (Vertical-Bell Laboratories Layered Space-Time) technology in 1998. Arogyaswami Paulraj founded Iospan Wireless in late 1998 to develop MIMO-OFDM products. Iospan was acquired by Intel in 2003. Neither Clarity Wireless nor Iospan Wireless shipped MIMO-OFDM products before being acquired.
Standards and commercialization.
MIMO technology has been standardized for wireless LANs, 3G mobile phone networks, and 4G mobile phone networks and is now in widespread commercial use. Greg Raleigh and V. K. Jones founded Airgo Networks in 2001 to develop MIMO-OFDM chipsets for wireless LANs. The Institute of Electrical and Electronics Engineers (IEEE) created a task group in late 2003 to develop a wireless LAN standard delivering at least 100 Mbit/s of user data throughput. There were two major competing proposals: TGn Sync was backed by companies including Intel and Philips, and WWiSE was supported by companies including Airgo Networks, Broadcom, and Texas Instruments. Both groups agreed that the 802.11n standard would be based on MIMO-OFDM with 20 MHz and 40 MHz channel options. TGn Sync, WWiSE, and a third proposal (MITMOT, backed by Motorola and Mitsubishi) were merged to create what was called the Joint Proposal. In 2004, Airgo became the first company to ship MIMO-OFDM products. Qualcomm acquired Airgo Networks in late 2006. The final 802.11n standard supported speeds up to 600 Mbit/s (using four simultaneous data streams) and was published in late 2009.
Surendra Babu Mandava and Arogyaswami Paulraj founded Beceem Communications in 2004 to produce MIMO-OFDM chipsets for WiMAX. The company was acquired by Broadcom in 2010. WiMAX was developed as an alternative to cellular standards, is based on the 802.16e standard, and uses MIMO-OFDM to deliver speeds up to 138 Mbit/s. The more advanced 802.16m standard enables download speeds up to 1 Gbit/s. A nationwide WiMAX network was built in the United States by Clearwire, a subsidiary of Sprint-Nextel, covering 130 million points of presence (PoPs) by mid-2012. Sprint subsequently announced plans to deploy LTE (the cellular 4G standard) covering 31 cities by mid-2013 and to shut down its WiMAX network by the end of 2015.
The first 4G cellular standard was proposed by NTT DoCoMo in 2004. Long term evolution (LTE) is based on MIMO-OFDM and continues to be developed by the 3rd Generation Partnership Project (3GPP). LTE specifies downlink rates up to 300 Mbit/s, uplink rates up to 75 Mbit/s, and quality of service parameters such as low latency. LTE Advanced adds support for picocells, femtocells, and multi-carrier channels up to 100 MHz wide. LTE has been embraced by both GSM/UMTS and CDMA operators.
The first LTE services were launched in Oslo and Stockholm by TeliaSonera in 2009. As of 2015, there were more than 360 LTE networks in 123 countries operational with approximately 373 million connections (devices).
Functions.
MIMO can be sub-divided into three main categories: precoding, spatial multiplexing (SM), and diversity coding.
Precoding is multi-stream beamforming, in the narrowest definition. In more general terms, it is considered to be all spatial processing that occurs at the transmitter. In (single-stream) beamforming, the same signal is emitted from each of the transmit antennas with appropriate phase and gain weighting such that the signal power is maximized at the receiver input. The benefits of beamforming are to increase the received signal gain – by making signals emitted from different antennas add up constructively – and to reduce the multipath fading effect. In line-of-sight propagation, beamforming results in a well-defined directional pattern. However, conventional beams are not a good analogy in cellular networks, which are mainly characterized by multipath propagation. When the receiver has multiple antennas, the transmit beamforming cannot simultaneously maximize the signal level at all of the receive antennas, and precoding with multiple streams is often beneficial. Precoding requires knowledge of channel state information (CSI) at the transmitter and the receiver.
Spatial multiplexing requires MIMO antenna configuration. In spatial multiplexing, a high-rate signal is split into multiple lower-rate streams and each stream is transmitted from a different transmit antenna in the same frequency channel. If these signals arrive at the receiver antenna array with sufficiently different spatial signatures and the receiver has accurate CSI, it can separate these streams into (almost) parallel channels. Spatial multiplexing is a very powerful technique for increasing channel capacity at higher signal-to-noise ratios (SNR). The maximum number of spatial streams is limited by the lesser of the number of antennas at the transmitter or receiver. Spatial multiplexing can be used without CSI at the transmitter, but can be combined with precoding if CSI is available. Spatial multiplexing can also be used for simultaneous transmission to multiple receivers, known as space–division multiple access or multi-user MIMO, in which case CSI is required at the transmitter. The scheduling of receivers with different spatial signatures allows good separability.
Diversity coding techniques are used when there is no channel knowledge at the transmitter. In diversity methods, a single stream (unlike multiple streams in spatial multiplexing) is transmitted, but the signal is coded using techniques called space-time coding. The signal is emitted from each of the transmit antennas with full or near orthogonal coding. Diversity coding exploits the independent fading in the multiple antenna links to enhance signal diversity. Because there is no channel knowledge, there is no beamforming or array gain from diversity coding.
Diversity coding can be combined with spatial multiplexing when some channel knowledge is available at the receiver.
Forms.
Multi-antenna types.
Multi-antenna MIMO (or single-user MIMO) technology has been developed and implemented in some standards, e.g., 802.11n products.
Applications.
Third Generation (3G) (CDMA and UMTS) allows for implementing space-time transmit diversity schemes, in combination with transmit beamforming at base stations. Fourth Generation (4G) LTE and LTE Advanced define very advanced air interfaces extensively relying on MIMO techniques. LTE primarily focuses on single-link MIMO relying on spatial multiplexing and space-time coding, while LTE-Advanced further extends the design to multi-user MIMO. In wireless local area networks (WLAN), in IEEE 802.11n (Wi-Fi), MIMO technology is implemented in the standard using three different techniques: antenna selection, space-time coding and possibly beamforming.
Spatial multiplexing techniques make the receivers very complex, and therefore they are typically combined with orthogonal frequency-division multiplexing (OFDM) or with orthogonal frequency-division multiple access (OFDMA) modulation, where the problems created by a multi-path channel are handled efficiently. The IEEE 802.16e standard incorporates MIMO-OFDMA. The IEEE 802.11n standard, released in October 2009, recommends MIMO-OFDM.
MIMO is also planned to be used in mobile radio telephone standards such as recent 3GPP and 3GPP2. In 3GPP, High-Speed Packet Access plus (HSPA+) and Long Term Evolution (LTE) standards take MIMO into account. Moreover, to fully support cellular environments, MIMO research consortia including IST-MASCOT propose to develop advanced MIMO techniques, e.g., multi-user MIMO (MU-MIMO).
MIMO wireless communications architectures and processing techniques can be applied to sensing problems. This is studied in a sub-discipline called MIMO radar.
MIMO technology can be used in non-wireless communications systems. One example is the home networking standard ITU-T G.9963, which defines a powerline communications system that uses MIMO techniques to transmit multiple signals over multiple AC wires (phase, neutral and ground).
Mathematical description.
In MIMO systems, a transmitter sends multiple streams by multiple transmit antennas. The transmit streams go through a matrix channel which consists of all formula_0 paths between the formula_1 transmit antennas at the transmitter and formula_2 receive antennas at the receiver. Then, the receiver gets the received signal vectors by the multiple receive antennas and decodes the received signal vectors into the original information. A narrowband flat fading MIMO system is modeled as:
formula_3
where formula_4 and formula_5 are the receive and transmit vectors, respectively, and formula_6 and formula_7 are the channel matrix and the noise vector, respectively.
Referring to information theory, the ergodic channel capacity of MIMO systems where both the transmitter and the receiver have perfect instantaneous channel state information is
formula_8
where formula_9 denotes Hermitian transpose and formula_10 is the ratio between transmit power and noise power (i.e., transmit SNR). The optimal signal covariance formula_11 is achieved through singular value decomposition of the channel matrix formula_12 and an optimal diagonal power allocation matrix formula_13. The optimal power allocation is achieved through waterfilling, that is
formula_14
where formula_15 are the diagonal elements of formula_16, formula_17 is zero if its argument is negative, and formula_18 is selected such that formula_19.
If the transmitter has only statistical channel state information, then the ergodic channel capacity will decrease as the signal covariance formula_20 can only be optimized in terms of the average mutual information as
formula_21
The spatial correlation of the channel has a strong impact on the ergodic channel capacity with statistical information.
If the transmitter has no channel state information it can select the signal covariance formula_20 to maximize channel capacity under worst-case statistics, which means formula_22 and accordingly
formula_23
Depending on the statistical properties of the channel, the ergodic capacity is no greater than formula_24 times larger than that of a SISO system.
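The no-CSI capacity expression above can be estimated numerically. The following Python sketch is a minimal Monte Carlo evaluation of formula_23 assuming an i.i.d. Rayleigh-fading channel; the antenna counts, SNR and trial count are arbitrary example values.
import numpy as np

rng = np.random.default_rng(0)
Nt, Nr = 4, 4            # transmit and receive antennas (example values)
rho = 10.0               # transmit SNR, linear scale (example value)
trials = 2000

total = 0.0
for _ in range(trials):
    # i.i.d. Rayleigh channel with unit average power per entry
    H = (rng.standard_normal((Nr, Nt)) + 1j * rng.standard_normal((Nr, Nt))) / np.sqrt(2)
    M = np.eye(Nr) + (rho / Nt) * H @ H.conj().T    # equal power allocation Q = I/Nt
    total += np.log2(np.linalg.det(M).real)

print(total / trials, "bit/s/Hz (estimated ergodic capacity without transmitter CSI)")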
MIMO detection.
A fundamental problem in MIMO communication is estimating the transmit vector, formula_5, given the received vector, formula_4. This can be posed as a statistical detection problem, and addressed using a variety of techniques including zero-forcing, successive interference cancellation (also known as V-BLAST), maximum likelihood estimation and, more recently, neural network MIMO detection. Such techniques commonly assume that the channel matrix formula_6 is known at the receiver. In practice, in communication systems, the transmitter sends a pilot signal and the receiver learns the state of the channel (i.e., formula_6) from the received signal formula_25 and the pilot signal formula_26. Recent work on MIMO detection using deep learning tools has been shown to outperform earlier methods such as zero-forcing.
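A minimal zero-forcing detector, one of the techniques listed above, can be sketched as follows in Python; the channel is assumed known at the receiver, and the QPSK constellation, noise level and function name are illustrative assumptions.
import numpy as np

def zero_forcing_detect(H, y, constellation):
    # Zero-forcing: invert the channel with the pseudo-inverse of H,
    # then make a hard decision to the nearest constellation point.
    x_zf = np.linalg.pinv(H) @ y
    return np.array([constellation[np.argmin(np.abs(constellation - s))] for s in x_zf])

qpsk = np.array([1 + 1j, 1 - 1j, -1 + 1j, -1 - 1j]) / np.sqrt(2)
rng = np.random.default_rng(1)
H = (rng.standard_normal((4, 4)) + 1j * rng.standard_normal((4, 4))) / np.sqrt(2)
x = rng.choice(qpsk, 4)                                            # transmitted symbols
y = H @ x + 0.05 * (rng.standard_normal(4) + 1j * rng.standard_normal(4))   # received vector
print(np.allclose(zero_forcing_detect(H, y, qpsk), x))             # expected True at this low noise level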
Testing.
MIMO signal testing focuses first on the transmitter/receiver system. The random phases of the sub-carrier signals can produce instantaneous power levels that cause the amplifier to compress, momentarily causing distortion and ultimately symbol errors. Signals with a high PAR (peak-to-average ratio) can cause amplifiers to compress unpredictably during transmission. OFDM signals are very dynamic and compression problems can be hard to detect because of their noise-like nature.
Knowing the quality of the signal channel is also critical. A channel emulator can simulate how a device performs at the cell edge, can add noise or can simulate what the channel looks like at speed. To fully qualify the performance of a receiver, a calibrated transmitter, such as a vector signal generator (VSG), and channel emulator can be used to test the receiver under a variety of different conditions. Conversely, the transmitter's performance under a number of different conditions can be verified using a channel emulator and a calibrated receiver, such as a vector signal analyzer (VSA).
Understanding the channel allows for manipulation of the phase and amplitude of each transmitter in order to form a beam. To correctly form a beam, the transmitter needs to understand the characteristics of the channel. This process is called "channel sounding" or channel estimation. A known signal is sent to the mobile device that enables it to build a picture of the channel environment. The mobile device sends back the channel characteristics to the transmitter. The transmitter can then apply the correct phase and amplitude adjustments to form a beam directed at the mobile device. This is called a closed-loop MIMO system. Beamforming requires adjusting the phase and amplitude of each transmitter. In a beamformer optimized for spatial diversity or spatial multiplexing, each antenna element simultaneously transmits a weighted combination of two data symbols.
Literature.
Principal researchers.
Papers by Gerard J. Foschini and Michael J. Gans, by Foschini alone, and by Emre Telatar have shown that the channel capacity (a theoretical upper bound on system throughput) for a MIMO system is increased as the number of antennas is increased, proportional to the smaller of the number of transmit antennas and the number of receive antennas. This is known as the multiplexing gain, and this basic finding in information theory is what led to a spurt of research in this area. Despite the simple propagation models used in the aforementioned seminal works, the multiplexing gain is a fundamental property that can be proved under almost any physical channel propagation model and with practical hardware that is prone to transceiver impairments.
A. Paulraj, R. Nabar and D. Gore have published a textbook introducing this area. There are many other principal textbooks available as well.
Diversity–multiplexing tradeoff.
There exists a fundamental tradeoff between transmit diversity and spatial multiplexing gains in a MIMO system (Zheng and Tse, 2003). In particular, achieving high spatial multiplexing gains is of profound importance in modern wireless systems.
Other applications.
Given the nature of MIMO, it is not limited to wireless communication. It can be used for wire line communication as well. For example, a new type of DSL technology (gigabit DSL) has been proposed based on binder MIMO channels.
Sampling theory in MIMO systems.
An important question which attracts the attention of engineers and mathematicians is how to use the multi-output signals at the receiver to recover the multi-input signals at the transmitter. In Shang, Sun and Zhou (2007), sufficient and necessary conditions are established to guarantee the complete recovery of the multi-input signals.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "N_t N_r"
},
{
"math_id": 1,
"text": "N_t"
},
{
"math_id": 2,
"text": "N_r"
},
{
"math_id": 3,
"text": "\\mathbf{y} = \\mathbf{H}\\mathbf{x} + \\mathbf{n}"
},
{
"math_id": 4,
"text": "\\mathbf{y}"
},
{
"math_id": 5,
"text": "\\mathbf{x}"
},
{
"math_id": 6,
"text": "\\mathbf{H}"
},
{
"math_id": 7,
"text": "\\mathbf{n}"
},
{
"math_id": 8,
"text": "C_\\mathrm{perfect-CSI} = E\\left[\\max_{\\mathbf{Q}; \\, \\mbox{tr}(\\mathbf{Q}) \\leq 1} \\log_2 \\det\\left(\\mathbf{I} + \\rho \\mathbf{H}\\mathbf{Q}\\mathbf{H}^{H}\\right)\\right] = E\\left[\\log_2 \\det\\left(\\mathbf{I} + \\rho \\mathbf{D}\\mathbf{S} \\mathbf{D} \\right)\\right]"
},
{
"math_id": 9,
"text": "()^H"
},
{
"math_id": 10,
"text": "\\rho"
},
{
"math_id": 11,
"text": "\\mathbf{Q}=\\mathbf{VSV}^H"
},
{
"math_id": 12,
"text": "\\mathbf{UDV}^H \\,=\\, \\mathbf{H}"
},
{
"math_id": 13,
"text": "\\mathbf{S}=\\textrm{diag}(s_1,\\ldots,s_{\\min(N_t, N_r)},0,\\ldots,0)"
},
{
"math_id": 14,
"text": "s_i = \\left(\\mu - \\frac{1}{\\rho d_i^2} \\right)^+, \\quad \\textrm{for} \\,\\, i=1,\\ldots,\\min(N_t, N_r),"
},
{
"math_id": 15,
"text": "d_1,\\ldots,d_{\\min(N_t, N_r)}"
},
{
"math_id": 16,
"text": "\\mathbf{D}"
},
{
"math_id": 17,
"text": "(\\cdot)^+"
},
{
"math_id": 18,
"text": "\\mu"
},
{
"math_id": 19,
"text": "s_1+\\ldots+s_{\\min(N_t, N_r)}=N_t"
},
{
"math_id": 20,
"text": "\\mathbf{Q}"
},
{
"math_id": 21,
"text": "C_\\mathrm{statistical-CSI} = \\max_{\\mathbf{Q}} E\\left[\\log_2 \\det\\left(\\mathbf{I} + \\rho \\mathbf{H}\\mathbf{Q}\\mathbf{H}^{H}\\right)\\right]."
},
{
"math_id": 22,
"text": "\\mathbf{Q}=1/N_t \\mathbf{I}"
},
{
"math_id": 23,
"text": "C_\\mathrm{no-CSI} = E\\left[\\log_2 \\det\\left(\\mathbf{I} + \\frac{\\rho}{N_t}\\mathbf{H}\\mathbf{H}^{H}\\right)\\right]."
},
{
"math_id": 24,
"text": "\\min(N_t, N_r)"
},
{
"math_id": 25,
"text": "Y"
},
{
"math_id": 26,
"text": "X"
}
]
| https://en.wikipedia.org/wiki?curid=13544419 |
1354446 | Schönhage–Strassen algorithm | Multiplication algorithm
The Schönhage–Strassen algorithm is an asymptotically fast multiplication algorithm for large integers, published by Arnold Schönhage and Volker Strassen in 1971. It works by recursively applying fast Fourier transform (FFT) over the integers modulo 2"n"+1. The run-time bit complexity to multiply two n-digit numbers using the algorithm is formula_0 in big O notation.
The Schönhage–Strassen algorithm was the asymptotically fastest multiplication method known from 1971 until 2007. It is asymptotically faster than older methods such as Karatsuba and Toom–Cook multiplication, and starts to outperform them in practice for numbers beyond about 10,000 to 100,000 decimal digits. In 2007, Martin Fürer published an algorithm with faster asymptotic complexity. In 2019, David Harvey and Joris van der Hoeven demonstrated that multi-digit multiplication has theoretical formula_1 complexity; however, their algorithm has constant factors which make it impossibly slow for any conceivable practical problem (see galactic algorithm).
Applications of the Schönhage–Strassen algorithm include large computations done for their own sake such as the Great Internet Mersenne Prime Search and approximations of π, as well as practical applications such as Lenstra elliptic curve factorization via Kronecker substitution, which reduces polynomial multiplication to integer multiplication.
Description.
This section has a simplified version of the algorithm, showing how to compute the product formula_2 of two natural numbers formula_3, modulo a number of the form formula_4, where formula_5 is some fixed number. The integers formula_3 are to be divided into formula_6 blocks of formula_7 bits, so in practical implementations, it is important to strike the right balance between the parameters formula_8. In any case, this algorithm will provide a way to multiply two positive integers, provided formula_9 is chosen so that formula_10.
Let formula_11 be the number of bits in the signals formula_12 and formula_13, where formula_6 is a power of two. Divide the signals formula_12 and formula_13 into formula_14 blocks of formula_7 bits each, storing the resulting blocks as arrays formula_15 (whose entries we shall consider for simplicity as arbitrary precision integers).
We now select a modulus for the Fourier transform, as follows. Let formula_16 be such that formula_17. Also put formula_18, and regard the elements of the arrays formula_15 as (arbitrary precision) integers modulo formula_19. Observe that since formula_20, the modulus is large enough to accommodate any carries that can result from multiplying formula_12 and formula_13. Thus, the product formula_2 (modulo formula_4) can be calculated by evaluating the convolution of formula_15. Also, with formula_21, we have formula_22, and so formula_23 is a primitive formula_14th root of unity modulo formula_19.
We now take the discrete Fourier transform of the arrays formula_15 in the ring formula_24, using the root of unity formula_23 for the Fourier basis, giving the transformed arrays formula_25. Because formula_6 is a power of two, this can be achieved in logarithmic time using a fast Fourier transform.
Let formula_26 (pointwise product), and compute the inverse transform formula_27 of the array formula_28, again using the root of unity formula_23. The array formula_27 is now the convolution of the arrays formula_15. Finally, the product formula_29 is given by evaluating
formula_30
This basic algorithm can be improved in several ways. Firstly, it is not necessary to store the digits of formula_3 to arbitrary precision, but rather only up to formula_31 bits, which gives a more efficient machine representation of the arrays formula_15. Secondly, it is clear that the multiplications in the forward transforms are simple bit shifts. With some care, it is also possible to compute the inverse transform using only shifts. Taking care, it is thus possible to eliminate any true multiplications from the algorithm except for where the pointwise product formula_26 is evaluated. It is therefore advantageous to select the parameters formula_32 so that this pointwise product can be performed efficiently, either because it is a single machine word or using some optimized algorithm for multiplying integers of a (ideally small) number of words. Selecting the parameters formula_32 is thus an important area for further optimization of the method.
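As a rough illustration of the scheme just described, the following Python sketch multiplies two integers by splitting them into bit blocks and convolving the block sequences; it substitutes an ordinary floating-point FFT for the exact modular (number-theoretic) transform that the real algorithm uses, so it is only a didactic approximation, and the block size and function name are arbitrary choices.
import numpy as np

def fft_multiply(a, b, block_bits=8):
    # Split a and b into blocks of block_bits bits, least significant first.
    base = 1 << block_bits
    def blocks(x):
        out = []
        while x:
            out.append(x & (base - 1))
            x >>= block_bits
        return out or [0]
    A, B = blocks(a), blocks(b)
    n = 1
    while n < len(A) + len(B):
        n *= 2
    # Convolution of the block sequences via FFT (pointwise product in Fourier space).
    c = np.rint(np.fft.ifft(np.fft.fft(A, n) * np.fft.fft(B, n)).real).astype(np.int64)
    # Recombine: result = sum_j c_j * base^j, propagating carries.
    result, carry = 0, 0
    for j, coef in enumerate(c):
        total = int(coef) + carry
        result |= (total & (base - 1)) << (j * block_bits)
        carry = total >> block_bits
    return result | (carry << (len(c) * block_bits))

print(fft_multiply(123456789, 987654321) == 123456789 * 987654321)   # True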
Details.
Every number in base B can be written as a polynomial:
formula_33
Furthermore, multiplication of two numbers could be thought of as a product of two polynomials:
formula_34
Because, for formula_35, the coefficient is formula_36,
we have a convolution.
By using the FFT (fast Fourier transform), which was used in the original version rather than the NTT, together with the convolution rule, we get
formula_37
That is, formula_38, where formula_39
is the corresponding coefficient in Fourier space. This can also be written as fft(a * b) = fft(a) ● fft(b).
We have the same coefficients due to linearity under Fourier transform, and because these polynomials
only consist of one unique term per coefficient:
formula_40 and
formula_41
Convolution rule: formula_42
We have reduced our convolution problem
to a product problem, through FFT.
By finding the ifft (polynomial interpolation), for each formula_43, one gets the desired coefficients.
The algorithm uses a divide-and-conquer strategy to divide the problem into subproblems.
formula_44, where formula_45 and formula_46 in Schönhage–Strassen algorithm.
Convolution under mod "N".
By letting:
formula_47 and formula_48
where formula_49 (that is, θ is an N-th root of −1)
One sees that:
formula_50
This means one can use the weight formula_51, and then multiply by formula_52 afterwards.
Instead of using the weight, one can, because formula_49, in the first step of the recursion (when formula_53) calculate:
formula_54
In a normal FFT, which operates over complex numbers, one would use:
formula_55
formula_56
However, the FFT can also be used as an NTT (number theoretic transform) in Schönhage–Strassen. This means that we have to use a θ that generates numbers in a finite field (for example formula_57).
A root of unity under a finite field GF("r") is an element a such that formula_58 or formula_59. For example, GF("p"), where p is a prime,
gives formula_60.
Notice that formula_61 in formula_62 and formula_63 in formula_64. For these candidates, formula_65 under their respective finite fields, and they therefore act the way we want.
The same FFT algorithms can still be used, though, as long as θ is a root of unity of a finite field.
To find the FFT/NTT transform, we do the following:
formula_66
The first product gives a contribution to formula_67 for each k. The second gives a contribution to formula_67 due to formula_68 mod formula_69.
To do the inverse:
formula_70 or formula_71
depending on whether the FFT normalizes the data or not.
One multiplies by formula_72 to normalize the FFT data to a specific range, where formula_73, and m is found using the modular multiplicative inverse.
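As a concrete toy example of the NTT described above, the following Python sketch convolves two short sequences in GF(257) = GF(2^8 + 1), a Fermat prime in which 3 is a generator; the tiny modulus, the recursive radix-2 transform and the function names are illustrative choices, whereas the real algorithm works modulo a much larger 2^n' + 1.
P = 257                     # Fermat prime 2^8 + 1; 3 generates its multiplicative group

def ntt(a, root):
    # Recursive radix-2 transform; root must be a primitive len(a)-th root of unity mod P.
    n = len(a)
    if n == 1:
        return list(a)
    even = ntt(a[0::2], root * root % P)
    odd = ntt(a[1::2], root * root % P)
    out, w = [0] * n, 1
    for k in range(n // 2):
        t = w * odd[k] % P
        out[k] = (even[k] + t) % P
        out[k + n // 2] = (even[k] - t) % P
        w = w * root % P
    return out

def convolve_mod(a, b):
    n = 1
    while n < len(a) + len(b):
        n *= 2
    w = pow(3, 256 // n, P)                       # primitive n-th root of unity mod 257
    fa = ntt(a + [0] * (n - len(a)), w)
    fb = ntt(b + [0] * (n - len(b)), w)
    c = ntt([x * y % P for x, y in zip(fa, fb)], pow(w, P - 2, P))   # inverse uses w^-1 ...
    n_inv = pow(n, P - 2, P)                      # ... and a final normalization by n^-1 mod P
    return [x * n_inv % P for x in c]

print(convolve_mod([1, 2, 3, 4], [5, 6, 7, 8]))   # [5, 16, 34, 60, 61, 52, 32, 0]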
Implementation details.
Why "N" = 2"M" + 1 in mod "N".
In the Schönhage–Strassen algorithm, formula_74. One should think of this as a binary tree, where one has values in formula_75. By letting formula_76, one can for each K find all formula_77: one can group all formula_78 pairs into M different groups. Using formula_79 to group formula_78 pairs through convolution is a classical problem in algorithms. For example, let k be total income, i a man's income and j a woman's income; by using convolution, one can group formula_78 into K groups based on desired total income.
Having this in mind, formula_74 helps us to group formula_80 into formula_81 groups, for each group of subtasks in depth k, in a tree with formula_82
Notice that formula_83, for some L. This is a Fermat number. When doing mod formula_83, we have something called a Fermat ring.
Because some Fermat numbers are Fermat primes, one can in some cases avoid calculations.
There are other "N" that could have been used, of course, with same prime number advantages. By letting formula_84, one have the maximal number in a binary number with formula_85 bits.
formula_84 is a Mersenne number, that in some cases is a Mersenne prime. It is a natural candidate against Fermat number formula_86
In search of another "N".
Doing several mod calculations against different N can be helpful when it comes to solving the integer product. By using the Chinese remainder theorem, after splitting M into smaller different types of N, one can find the answer of the multiplication xy.
Fermat numbers and Mersenne numbers are just two types of numbers in something called the generalized Fermat–Mersenne number (GSM), with the formula:
formula_87
formula_88
In this formula, formula_89 is a Fermat number and formula_90 is a Mersenne number.
This formula can be used to generate sets of equations that can be used in the CRT (Chinese remainder theorem):
formula_91, where g is a number such that there exists an x where formula_92, assuming formula_93
Furthermore, formula_94, where a is an element that generates the elements in formula_95 in a cyclic manner.
If formula_96, where formula_97, then formula_98.
How to choose "K" for a specific "N".
The following formula helps one find a proper K (the number of groups to divide the N bits into), given the bit size N, by calculating the efficiency:
formula_99 Here N is the bit size (the one used in formula_100) at the outermost level. K gives formula_101 groups of bits, where formula_102.
n is found from N, K and k by finding the smallest x such that formula_103
If one assumes an efficiency above 50%, formula_104 and that k is very small compared to the rest of the formula, one gets
formula_105
This means: when the method is very efficient, K is bounded above by formula_106, or asymptotically bounded above by formula_107
Pseudocode.
The following algorithm, the standard modular Schönhage–Strassen multiplication algorithm (with some optimizations), is found in overview in the references.
Further study.
For implementation details, one can read the book "Prime Numbers: A Computational Perspective". This variant differs somewhat from Schönhage's original method in that it exploits the discrete weighted transform to perform negacyclic convolutions more efficiently. Another source for detailed information is Knuth's "The Art of Computer Programming".
Optimizations.
This section explains a number of important practical optimizations, when implementing Schönhage–Strassen.
Use of other multiplication algorithms inside the algorithm.
Below a certain cutoff point, it's more efficient to use other multiplication algorithms, such as Toom–Cook multiplication.
Square root of 2 trick.
The idea is to use formula_108 as a root of unity of order formula_109 in the finite field formula_110 (it is a solution to the equation formula_111) when weighting values in the NTT (number theoretic transform) approach. It has been shown to save 10% in integer multiplication time.
Granlund's trick.
By letting formula_112, one can compute
formula_113 and formula_114. In combination with the CRT (Chinese remainder theorem), this
yields the exact value of the multiplication uv.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "O(n \\cdot \\log n \\cdot \\log \\log n)"
},
{
"math_id": 1,
"text": "O(n\\log n)"
},
{
"math_id": 2,
"text": "ab"
},
{
"math_id": 3,
"text": "a,b"
},
{
"math_id": 4,
"text": "2^n+1"
},
{
"math_id": 5,
"text": "n=2^kM"
},
{
"math_id": 6,
"text": "D=2^k"
},
{
"math_id": 7,
"text": "M"
},
{
"math_id": 8,
"text": "M,k"
},
{
"math_id": 9,
"text": "n"
},
{
"math_id": 10,
"text": "ab < 2^n+1"
},
{
"math_id": 11,
"text": "n=DM"
},
{
"math_id": 12,
"text": "a"
},
{
"math_id": 13,
"text": "b"
},
{
"math_id": 14,
"text": "D"
},
{
"math_id": 15,
"text": "A,B"
},
{
"math_id": 16,
"text": "M'"
},
{
"math_id": 17,
"text": "DM'\\ge 2M+k"
},
{
"math_id": 18,
"text": "n'=DM'"
},
{
"math_id": 19,
"text": "2^{n'}+1"
},
{
"math_id": 20,
"text": "2^{n'} + 1 \\ge 2^{2M+k} + 1 = D2^{2M}+1"
},
{
"math_id": 21,
"text": "g=2^{2M'}"
},
{
"math_id": 22,
"text": "g^{D/2}\\equiv -1\\pmod{2^{n'}+1}"
},
{
"math_id": 23,
"text": "g"
},
{
"math_id": 24,
"text": "\\mathbb Z/(2^{n'}+1)\\mathbb Z"
},
{
"math_id": 25,
"text": "\\widehat A,\\widehat B"
},
{
"math_id": 26,
"text": "\\widehat C_i=\\widehat A_i\\widehat B_i"
},
{
"math_id": 27,
"text": "C"
},
{
"math_id": 28,
"text": "\\widehat C"
},
{
"math_id": 29,
"text": "ab\\pmod{2^n+1}"
},
{
"math_id": 30,
"text": "ab\\equiv \\sum_j C_j2^{Mj}\\mod{2^n+1}."
},
{
"math_id": 31,
"text": "n'+1"
},
{
"math_id": 32,
"text": "D,M"
},
{
"math_id": 33,
"text": " X = \\sum_{i=0}^N {x_iB^i} "
},
{
"math_id": 34,
"text": "XY = \\left(\\sum_{i=0}^N {x_iB^i}\\right)\\left(\\sum_{j=0}^N {y_iB^j}\\right) "
},
{
"math_id": 35,
"text": " B^k "
},
{
"math_id": 36,
"text": "c_k =\\sum_{(i,j):i+j=k} {a_ib_j} = \\sum_{i=0}^k {a_ib_{k-i}} "
},
{
"math_id": 37,
"text": " \\hat{f}(a * b) = \\hat{f}\\left(\\sum_{i=0}^k a_ib_{k-i} \\right) = \\hat{f}(a) \\bullet \\hat{f}(b). "
},
{
"math_id": 38,
"text": " C_k = a_k \\bullet b_k "
},
{
"math_id": 39,
"text": " C_k "
},
{
"math_id": 40,
"text": " \\hat{f}(x^n) = \\left(\\frac{i}{2\\pi}\\right)^n \\delta^{(n)} "
},
{
"math_id": 41,
"text": " \\hat{f}(a\\, X(\\xi) + b\\, Y(\\xi)) = a\\, \\hat{X}(\\xi) + b\\, \\hat{Y}(\\xi)"
},
{
"math_id": 42,
"text": " \\hat{f}(X * Y) = \\ \\hat{f}(X) \\bullet \\hat{f}(Y) "
},
{
"math_id": 43,
"text": "c_k "
},
{
"math_id": 44,
"text": "c_k =\\sum_{(i,j):i+j \\equiv k \\pmod {N(n)}} a_ib_j "
},
{
"math_id": 45,
"text": " N(n) = 2^n + 1 "
},
{
"math_id": 46,
"text": " N(N) = 2^N + 1 "
},
{
"math_id": 47,
"text": " a_i' = \\theta^i a_i "
},
{
"math_id": 48,
"text": " b_j' = \\theta^j b_j,"
},
{
"math_id": 49,
"text": " \\theta^N = -1 "
},
{
"math_id": 50,
"text": "\n\\begin{align}\nC_k & =\\sum_{(i,j):i+j=k \\equiv \\pmod {N(n)}} a_ib_j = \\theta^{-k} \\sum_{(i,j):i+j \\equiv k \\pmod {N(n)}} a_i'b_j' \\\\[6pt]\n& = \\theta^{-k} \\left(\\sum_{(i,j):i+j=k} a_i'b_j' + \\sum_{(i,j):i+j=k+n} a_i'b_j' \\right) \\\\[6pt]\n& = \\theta^{-k}\\left(\\sum_{(i,j):i+j=k} a_ib_j\\theta^k + \\sum_{(i,j):i+j=k+n} a_ib_j\\theta^{n+k} \\right) \\\\[6pt]\n& = \\sum_{(i,j):i+j=k} a_ib_j + \\theta^n \\sum_{(i,j):i+j=k+n} a_ib_j.\n\\end{align}\n"
},
{
"math_id": 51,
"text": " \\theta^i "
},
{
"math_id": 52,
"text": " \\theta^{-k} "
},
{
"math_id": 53,
"text": " n = N "
},
{
"math_id": 54,
"text": " C_k =\\sum_{(i,j):i+j \\equiv k \\pmod {N(N)}} = \\sum_{(i,j):i+j=k} a_ib_j - \\sum_{(i,j):i+j=k+n} a_ib_j "
},
{
"math_id": 55,
"text": "\n\\exp \\left(\\frac{2k\\pi i}{n}\\right) =\\cos\\frac{2k\\pi}{n} + i \\sin \\frac{2k\\pi} n, \\qquad k=0,1,\\dots, n-1.\n"
},
{
"math_id": 56,
"text": "\n\\begin{align}\nC_k & = \\theta^{-k}\\left(\\sum_{(i,j):i+j=k} a_ib_j\\theta^k + \\sum_{(i,j):i+j=k+n} a_ib_j \\theta^{n+k} \\right) \\\\[6pt]\n& = e^{-i2\\pi k/n} \\left(\\sum_{(i,j):i+j=k} a_ib_je^{i2\\pi k /n} +\\sum_{(i,j):i+j=k+n} a_ib_je^{i2\\pi (n+k)/n} \\right) \n\\end{align}\n"
},
{
"math_id": 57,
"text": "\\mathrm{GF}( 2^n + 1 )"
},
{
"math_id": 58,
"text": " \\theta^{r-1} \\equiv 1 "
},
{
"math_id": 59,
"text": " \\theta^r \\equiv \\theta "
},
{
"math_id": 60,
"text": "\\{1,2,\\ldots,p-1\\}"
},
{
"math_id": 61,
"text": " 2^n \\equiv -1"
},
{
"math_id": 62,
"text": "\\operatorname{GF}(2^n + 1)"
},
{
"math_id": 63,
"text": " \\sqrt{2} \\equiv -1 "
},
{
"math_id": 64,
"text": "\\operatorname{GF}(2^{n+2} + 1) "
},
{
"math_id": 65,
"text": " \\theta^N \\equiv -1 "
},
{
"math_id": 66,
"text": "\n\\begin{align}\nC_k' & = \\hat{f}(k) = \\hat{f}\\left(\\theta^{-k}\\left(\\sum_{(i,j):i+j=k} a_ib_j\\theta^k +\\sum_{(i,j):i+j=k+n} a_ib_j\\theta^{n+k} \\right) \\right) \\\\[6pt]\nC_{k+k}' & = \\hat{f}(k+k) = \\hat{f} \\left(\\sum_{(i,j):i+j=2k} a_i b_j \\theta^k +\\sum_{(i,j):i+j=n+2k} a_i b_j \\theta^{n+k} \\right) \\\\[6pt]\n& = \\hat{f}\\left(\\sum_{(i,j):i+j=2k} a_i b_j \\theta^k + \\sum_{(i,j):i+j=2k+n} a_i b_j \\theta^{n+k} \\right) \\\\[6pt]\n& = \\hat{f}\\left(A_{k \\leftarrow k}\\right) \\bullet \\hat{f}(B_{k \\leftarrow k}) + \\hat{f}(A_{k \\leftarrow k+n}) \\bullet \\hat{f}(B_{k \\leftarrow k+n})\n\\end{align}\n"
},
{
"math_id": 67,
"text": " c_k "
},
{
"math_id": 68,
"text": "(i+j)"
},
{
"math_id": 69,
"text": " N(n) "
},
{
"math_id": 70,
"text": " C_k = 2^{-m}\\hat{f^{-1}} (\\theta^{-k} C_{k+k}') "
},
{
"math_id": 71,
"text": " C_k = \\hat{f^{-1}}(\\theta^{-k} C_{k+k}') "
},
{
"math_id": 72,
"text": " 2^{-m} "
},
{
"math_id": 73,
"text": " \\frac{1}{n} \\equiv 2^{-m} \\bmod N(n) "
},
{
"math_id": 74,
"text": " N = 2^M + 1 "
},
{
"math_id": 75,
"text": " 0 \\le \\text{index} \\le 2^{M}=2^{i+j} "
},
{
"math_id": 76,
"text": " K \\in [0,M] "
},
{
"math_id": 77,
"text": " i+j = K "
},
{
"math_id": 78,
"text": "(i,j)"
},
{
"math_id": 79,
"text": " i+j = k "
},
{
"math_id": 80,
"text": " (i,j) "
},
{
"math_id": 81,
"text": " \\frac{M}{2^k} "
},
{
"math_id": 82,
"text": " N = 2^{\\frac{M}{2^k}} + 1 "
},
{
"math_id": 83,
"text": " N = 2^M + 1 = 2^{2^L} + 1 "
},
{
"math_id": 84,
"text": " N = 2^k - 1 "
},
{
"math_id": 85,
"text": " k+1 "
},
{
"math_id": 86,
"text": " N = 2^{2^L} + 1 "
},
{
"math_id": 87,
"text": " G_{q,p,n} = \\sum_{i=1}^p q^{(p-i)n} = \\frac{q^{pn}-1}{q^n-1} "
},
{
"math_id": 88,
"text": " M_{p,n} = G_{2,p,n} "
},
{
"math_id": 89,
"text": " M_{2,2^k} "
},
{
"math_id": 90,
"text": " M_{p,1} "
},
{
"math_id": 91,
"text": "g^{\\frac{(M_{p,n}-1)}{2}} \\equiv -1 \\pmod {M_{p,n}}"
},
{
"math_id": 92,
"text": " x^2 \\equiv g \\pmod {M_{p,n}}"
},
{
"math_id": 93,
"text": " N = 2^n "
},
{
"math_id": 94,
"text": "g^{2^{(p-1)n}-1} \\equiv a^{2^n -1} \\pmod {M_{p,n}}"
},
{
"math_id": 95,
"text": " \\{1,2,4,...2^{n-1},2^n\\} "
},
{
"math_id": 96,
"text": " N=2^t "
},
{
"math_id": 97,
"text": " 1 \\le t \\le n "
},
{
"math_id": 98,
"text": " g_t = a^{(2^n-1)2^{n-t}} "
},
{
"math_id": 99,
"text": " E = \\frac{\\frac{2N}{K}+k}{n} "
},
{
"math_id": 100,
"text": " 2^N + 1 "
},
{
"math_id": 101,
"text": " \\frac{N}{K} "
},
{
"math_id": 102,
"text": " K = 2^k "
},
{
"math_id": 103,
"text": " 2N/K +k \\le n = K2^x "
},
{
"math_id": 104,
"text": " \\frac{n}{2} \\le \\frac{2N}{K}, K \\le n "
},
{
"math_id": 105,
"text": " K \\le 2\\sqrt{N} "
},
{
"math_id": 106,
"text": " 2\\sqrt{N} "
},
{
"math_id": 107,
"text": " \\sqrt{N} "
},
{
"math_id": 108,
"text": " \\sqrt{2} "
},
{
"math_id": 109,
"text": " 2^{n+2} "
},
{
"math_id": 110,
"text": "\\mathrm{GF}(2^{n+2} +1)"
},
{
"math_id": 111,
"text": " \\theta^{2^{n+2}} \\equiv 1 \\pmod {2^{n+2} + 1}"
},
{
"math_id": 112,
"text": " m = N + h "
},
{
"math_id": 113,
"text": "uv \\bmod {2^N +1}"
},
{
"math_id": 114,
"text": "(u \\bmod {2^h})(v \\bmod 2^h)"
}
]
| https://en.wikipedia.org/wiki?curid=1354446 |
13547663 | Jacobsthal number | In mathematics, the Jacobsthal numbers are an integer sequence named after the German mathematician Ernst Jacobsthal. Like the related Fibonacci numbers, they are a specific type of Lucas sequence formula_0 for which "P" = 1, and "Q" = −2—and are defined by a similar recurrence relation: in simple terms, the sequence starts with 0 and 1, then each following number is found by adding the number before it to twice the number before that. The first Jacobsthal numbers are:
0, 1, 1, 3, 5, 11, 21, 43, 85, 171, 341, 683, 1365, 2731, 5461, 10923, 21845, 43691, 87381, 174763, 349525, … (sequence in the OEIS)
A Jacobsthal prime is a Jacobsthal number that is also prime. The first Jacobsthal primes are:
3, 5, 11, 43, 683, 2731, 43691, 174763, 2796203, 715827883, 2932031007403, 768614336404564651, 201487636602438195784363, 845100400152152934331135470251, 56713727820156410577229101238628035243, … (sequence in the OEIS)
Jacobsthal numbers.
Jacobsthal numbers are defined by the recurrence relation:
formula_1
The next Jacobsthal number is also given by the recursion formula
formula_2
or by
formula_3
The second recursion formula above is also satisfied by the powers of 2.
The Jacobsthal number at a specific point in the sequence may be calculated directly using the closed-form equation:
formula_4
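The closed-form expression and the recurrence can be checked against each other with a short Python sketch (the function name is an arbitrary choice):
def jacobsthal(n):
    # closed form J_n = (2^n - (-1)^n) / 3
    return (2**n - (-1)**n) // 3

seq = [0, 1]
for n in range(2, 12):
    seq.append(seq[-1] + 2 * seq[-2])                  # J_n = J_{n-1} + 2*J_{n-2}
print(seq == [jacobsthal(n) for n in range(12)])       # True
print(seq)   # [0, 1, 1, 3, 5, 11, 21, 43, 85, 171, 341, 683]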
The generating function for the Jacobsthal numbers is
formula_5
The sum of the reciprocals of the Jacobsthal numbers is approximately 2.7186, slightly larger than e.
The Jacobsthal numbers can be extended to negative indices using the recurrence relation or the explicit formula, giving
formula_6 (see OEIS: )
The following identity holds
formula_7 (see OEIS: )
Jacobsthal–Lucas numbers.
Jacobsthal–Lucas numbers represent the complementary Lucas sequence formula_8. They satisfy the same recurrence relation as Jacobsthal numbers but have different initial values:
formula_9
The following Jacobsthal–Lucas number also satisfies:
formula_10
The Jacobsthal–Lucas number at a specific point in the sequence may be calculated directly using the closed-form equation:
formula_11
The first Jacobsthal–Lucas numbers are:
2, 1, 5, 7, 17, 31, 65, 127, 257, 511, 1025, 2047, 4097, 8191, 16385, 32767, 65537, 131071, 262145, 524287, 1048577, … (sequence in the OEIS).
Jacobsthal Oblong numbers.
The first Jacobsthal Oblong numbers are:
0, 1, 3, 15, 55, 231, 903, 3655, 14535, 58311, … (sequence in the OEIS)
formula_12
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "U_n(P,Q)"
},
{
"math_id": 1,
"text": "J_n = \\begin{cases}\n0 & \\mbox{if } n = 0; \\\\\n1 & \\mbox{if } n = 1; \\\\\nJ_{n-1} + 2J_{n-2} & \\mbox{if } n > 1. \\\\\n\\end{cases}"
},
{
"math_id": 2,
"text": "J_{n+1} = 2J_n + (-1)^n,"
},
{
"math_id": 3,
"text": "J_{n+1} = 2^n - J_n."
},
{
"math_id": 4,
"text": "J_n = \\frac{2^n - (-1)^n}{3}."
},
{
"math_id": 5,
"text": "\\frac{x}{(1+x)(1-2x)}."
},
{
"math_id": 6,
"text": "J_{-n} = (-1)^{n+1} J_n / 2^n"
},
{
"math_id": 7,
"text": "2^n(J_{-n} + J_n) = 3 J_n^2 "
},
{
"math_id": 8,
"text": "V_n(1,-2)"
},
{
"math_id": 9,
"text": " \n j_n = \n \\begin{cases}\n 2 & \\mbox{if } n = 0; \\\\\n 1 & \\mbox{if } n = 1; \\\\\n j_{n-1} + 2j_{n-2} & \\mbox{if } n > 1. \\\\\n \\end{cases}\n"
},
{
"math_id": 10,
"text": "\n j_{n+1} = 2j_n - 3(-1)^n. \\,\n"
},
{
"math_id": 11,
"text": "\n j_n = 2^n + (-1)^n. \\,\n"
},
{
"math_id": 12,
"text": "Jo_{n} = J_{n} J_{n+1}"
}
]
| https://en.wikipedia.org/wiki?curid=13547663 |
13547826 | Forbidden graph characterization | Describing a family of graphs by excluding certain (sub)graphs
In graph theory, a branch of mathematics, many important families of graphs can be described by a finite set of individual graphs that do not belong to the family and further exclude all graphs from the family which contain any of these forbidden graphs as (induced) subgraph or minor.
A prototypical example of this phenomenon is Kuratowski's theorem, which states that a graph is planar (can be drawn without crossings in the plane) if and only if it does not contain either of two forbidden graphs, the complete graph "K"5 and the complete bipartite graph "K"3,3. For Kuratowski's theorem, the notion of containment is that of graph homeomorphism, in which a subdivision of one graph appears as a subgraph of the other. Thus, every graph either has a planar drawing (in which case it belongs to the family of planar graphs) or it has a subdivision of at least one of these two graphs as a subgraph (in which case it does not belong to the planar graphs).
Definition.
More generally, a forbidden graph characterization is a method of specifying a family of graph, or hypergraph, structures, by specifying substructures that are forbidden to exist within any graph in the family. Different families vary in the nature of what is "forbidden". In general, a structure "G" is a member of a family formula_0 if and only if a forbidden substructure is not contained in "G". The forbidden substructure might be one of: subgraphs, induced subgraphs, homeomorphic subgraphs (also called topological minors), or graph minors.
The set of structures that are forbidden from belonging to a given graph family can also be called an obstruction set for that family.
Forbidden graph characterizations may be used in algorithms for testing whether a graph belongs to a given family. In many cases, it is possible to test in polynomial time whether a given graph contains any of the members of the obstruction set, and therefore whether it belongs to the family defined by that obstruction set.
In order for a family to have a forbidden graph characterization, with a particular type of substructure, the family must be closed under substructures.
That is, every substructure (of a given type) of a graph in the family must be another graph in the family. Equivalently, if a graph is not part of the family, all larger graphs containing it as a substructure must also be excluded from the family. When this is true, there always exists an obstruction set (the set of graphs that are not in the family but whose smaller substructures all belong to the family). However, for some notions of what a substructure is, this obstruction set could be infinite. The Robertson–Seymour theorem proves that, for the particular case of graph minors, a family that is closed under minors always has a finite obstruction set.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\mathcal{F}"
}
]
| https://en.wikipedia.org/wiki?curid=13547826 |
13548016 | Orthogonal Procrustes problem | The orthogonal Procrustes problem is a matrix approximation problem in linear algebra. In its classical form, one is given two matrices formula_0 and formula_1 and asked to find an orthogonal matrix formula_2 which most closely maps formula_0 to formula_1. Specifically, the orthogonal Procrustes problem is an optimization problem given by
formula_3
where formula_4 denotes the Frobenius norm. This is a special case of Wahba's problem (with identical weights; instead of considering two matrices, in Wahba's problem the columns of the matrices are considered as individual vectors). Another difference is that Wahba's problem tries to find a proper rotation matrix instead of just an orthogonal one.
The name Procrustes refers to a bandit from Greek mythology who made his victims fit his bed by either stretching their limbs or cutting them off.
Solution.
This problem was originally solved by Peter Schönemann in a 1964 thesis, and shortly after appeared in the journal Psychometrika.
This problem is equivalent to finding the nearest orthogonal matrix to a given matrix formula_5, i.e. solving the "closest orthogonal approximation problem"
formula_6.
To find matrix formula_7, one uses the singular value decomposition (for which the entries of formula_8 are non-negative)
formula_9
to write
formula_10
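A minimal NumPy sketch of this solution follows; the function name and the test data are illustrative, and the convention matches the problem statement above (minimizing ||ΩA - B|| with M = BA^T).
import numpy as np

def orthogonal_procrustes(A, B):
    # Schönemann's solution: SVD of M = B A^T, then R = U V^T.
    U, _, Vt = np.linalg.svd(B @ A.T)
    return U @ Vt

rng = np.random.default_rng(1)
Q, _ = np.linalg.qr(rng.standard_normal((3, 3)))   # a random orthogonal matrix
A = rng.standard_normal((3, 10))
print(np.allclose(orthogonal_procrustes(A, Q @ A), Q))   # True: the orthogonal map is recovered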
Proof of Solution.
One proof depends on the basic properties of the Frobenius inner product that induces the Frobenius norm:
formula_11
This quantity formula_12 is an orthogonal matrix (as it is a product of orthogonal matrices) and thus the expression is maximised when formula_12 equals the identity matrix formula_13. Thus
formula_14
where formula_15 is the solution for the optimal value of formula_16 that minimizes the norm squared formula_17.
Generalized/constrained Procrustes problems.
There are a number of related problems to the classical orthogonal Procrustes problem. One might generalize it by seeking the closest matrix in which the columns are orthogonal, but not necessarily orthonormal.
Alternately, one might constrain it by only allowing rotation matrices (i.e. orthogonal matrices with determinant 1, also known as special orthogonal matrices). In this case, one can write (using the above decomposition formula_18)
formula_19
where formula_20 is a modified formula_21, with the smallest singular value replaced by formula_22 (+1 or -1), and the other singular values replaced by 1, so that the determinant of R is guaranteed to be positive. For more information, see the Kabsch algorithm.
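A sketch of this rotation-constrained variant (as used in the Kabsch algorithm) in NumPy; the function name is an assumption.
import numpy as np

def rotation_procrustes(A, B):
    # Replace the smallest singular value's factor by sign(det(U V^T)),
    # the others by 1, so that det(R) = +1.
    U, _, Vt = np.linalg.svd(B @ A.T)
    d = float(np.sign(np.linalg.det(U @ Vt)))
    D = np.diag([1.0] * (U.shape[0] - 1) + [d])   # singular values come sorted, smallest last
    return U @ D @ Vt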
The "unbalanced" Procrustes problem concerns minimizing the norm of formula_23, where formula_24, and formula_25, with formula_26, or alternately with complex valued matrices. This is a problem over the Stiefel manifold formula_27, and has no currently known closed form. To distinguish, the standard Procrustes problem (formula_28) is referred to as the "balanced" problem in these contexts. | [
{
"math_id": 0,
"text": "A"
},
{
"math_id": 1,
"text": "B"
},
{
"math_id": 2,
"text": "\\Omega"
},
{
"math_id": 3,
"text": "\n\\begin{aligned}\n \\underset{\\Omega}{\\text{minimize}}\\quad & \\|\\Omega A-B\\|_F \\\\\n \\text{subject to}\\quad & \\Omega^T\n\\Omega = I,\n\\end{aligned}\n"
},
{
"math_id": 4,
"text": "\\|\\cdot\\|_F"
},
{
"math_id": 5,
"text": "M=BA^{T}"
},
{
"math_id": 6,
"text": "\\min_R\\|R-M\\|_F \\quad\\mathrm{subject\\ to}\\quad R^T R=I"
},
{
"math_id": 7,
"text": "R"
},
{
"math_id": 8,
"text": "\\Sigma"
},
{
"math_id": 9,
"text": "M=U\\Sigma V^T\\,\\!"
},
{
"math_id": 10,
"text": "R=UV^T.\\,\\!"
},
{
"math_id": 11,
"text": "\n\\begin{align}\nR &= \\arg\\min_\\Omega ||\\Omega A-B\\|_F^2 \\\\\n&= \\arg\\min_\\Omega \\langle \\Omega A-B, \\Omega A-B \\rangle_F \\\\\n&= \\arg\\min_\\Omega \\|\\Omega A\\|_F^2 + \\|B\\|_F^2 - 2 \\langle \\Omega A , B \\rangle_F \\\\\n&= \\arg\\min_\\Omega \\|A\\|_F^2 + \\|B\\|_F^2 - 2 \\langle \\Omega A , B \\rangle_F \\\\\n&= \\arg\\max_\\Omega \\langle \\Omega A , B \\rangle_F \\\\\n&= \\arg\\max_\\Omega \\langle \\Omega , B A^T \\rangle_F \\\\\n&= \\arg\\max_\\Omega \\langle \\Omega, U\\Sigma V^T \\rangle_F \\\\\n&= \\arg\\max_\\Omega \\langle U^T \\Omega V , \\Sigma \\rangle_F \\\\\n&= \\arg\\max_\\Omega \\langle S , \\Sigma \\rangle_F \\quad \\text{where } S = U^T \\Omega V \\\\\n\\end{align}\n"
},
{
"math_id": 12,
"text": "S"
},
{
"math_id": 13,
"text": "I"
},
{
"math_id": 14,
"text": "\n\\begin{align}\nI &= U^T R V \\\\\nR &= U V^T \\\\\n\\end{align}\n"
},
{
"math_id": 15,
"text": " R "
},
{
"math_id": 16,
"text": " \\Omega "
},
{
"math_id": 17,
"text": "||\\Omega A-B\\|_F^2 "
},
{
"math_id": 18,
"text": "M=U\\Sigma V^T"
},
{
"math_id": 19,
"text": "R=U\\Sigma'V^T,\\,\\!"
},
{
"math_id": 20,
"text": "\\Sigma'\\,\\!"
},
{
"math_id": 21,
"text": "\\Sigma\\,\\!"
},
{
"math_id": 22,
"text": "\\det(UV^T)"
},
{
"math_id": 23,
"text": "AU - B"
},
{
"math_id": 24,
"text": "A \\in \\mathbb{R}^{m\\times \\ell}, U \\in \\mathbb{R}^{\\ell \\times n}"
},
{
"math_id": 25,
"text": "B \\in \\mathbb{R}^{m\\times n}"
},
{
"math_id": 26,
"text": "m > \\ell \\ge n"
},
{
"math_id": 27,
"text": "U \\in U(m,\\ell)"
},
{
"math_id": 28,
"text": "A \\in \\mathbb{R}^{m\\times m}"
}
]
| https://en.wikipedia.org/wiki?curid=13548016 |
1355025 | Numerical model of the Solar System | A numerical model of the Solar System is a set of mathematical equations, which, when solved, give the approximate positions of the planets as a function of time. Attempts to create such a model established the more general field of celestial mechanics. The results of this simulation can be compared with past measurements to check for accuracy and then be used to predict future positions. Its main use therefore is in preparation of almanacs.
Older efforts.
The simulations can be done in either Cartesian or in spherical coordinates. The former are easier, but extremely calculation intensive, and only practical on an electronic computer. As such only the latter was used in former times. Strictly speaking, the latter was not much less calculation intensive, but it was possible to start with some simple approximations and then to add perturbations, as much as needed to reach the wanted accuracy.
In essence this mathematical simulation of the Solar System is a form of the "N-body problem". The symbol N represents the number of bodies, which can grow quite large if one includes the Sun, 8 planets, dozens of moons, and countless planetoids, comets and so forth. However the influence of the Sun on any other body is so large, and the influence of all the other bodies on each other so small, that the problem can be reduced to the analytically solvable 2-body problem. The result for each planet is an orbit, a simple description of its position as a function of time. Once this is solved the influences moons and planets have on each other are added as small corrections. These are small compared to a full planetary orbit. Some corrections might still be several degrees large, while measurements can be made to an accuracy of better than 1″.
Although this method is no longer used for simulations, it is still useful to find an approximate ephemeris as one can take the relatively simple main solution, perhaps add a few of the largest perturbations, and arrive without too much effort at the wanted planetary position. The disadvantage is that perturbation theory is very advanced mathematics.
Modern method.
The modern method consists of numerical integration in 3-dimensional space. One starts with a high accuracy value for the position ("x", "y", "z") and the velocity ("vx", "vy", "vz") for each of the bodies involved. When also the mass of each body is known, the acceleration ("ax", "ay", "az") can be calculated from Newton's Law of Gravitation. Each body attracts each other body, the total acceleration being the sum of all these attractions. Next one chooses a small time-step Δ"t" and applies Newton's Second Law of Motion. The acceleration multiplied with Δ"t" gives a correction to the velocity. The velocity multiplied with Δ"t" gives a correction to the position. This procedure is repeated for all other bodies.
The result is a new value for position and velocity for all bodies. Then, using these new values, one starts over the whole calculation for the next time-step Δ"t". Repeating this procedure often enough, one ends up with a description of the positions of all bodies over time.
The advantage of this method is that for a computer it is a very easy job to do, and it yields highly accurate results for all bodies at the same time, doing away with the complex and difficult procedures for determining perturbations. The disadvantage is that one must start with highly accurate figures in the first place, or the results will drift away from the reality in time; that one gets "x", "y", "z" positions which are often first to be transformed into more practical ecliptical or equatorial coordinates before they can be used; and that it is an all or nothing approach. If one wants to know the position of one planet on one particular time, then all other planets and all intermediate time-steps are to be calculated too.
Integration.
In the previous section it was assumed that acceleration remains constant over a small timestep Δt so that the calculation reduces to simply the addition of V × Δt to R and so forth. In reality this is not the case, except when one takes Δt so small that the number of steps to be taken would be prohibitively high. Because while at any time the position is changed by the acceleration, the value of the acceleration is determined by the instantaneous position. Evidently a full integration is needed.
Several methods are available. First notice the needed equations:
formula_0
This equation describes the acceleration all bodies i running from 1 to N exercise on a particular body j. It is a vector equation, so it is to be split in 3 equations for each of the X, Y, Z components, yielding:
formula_1
with the additional relationships
formula_2, formula_3
likewise for Y and Z.
The former equation (gravitation) may look foreboding, but its calculation is no problem. The latter equations (motion laws) seem simpler, and yet they cannot be calculated directly. Computers cannot integrate, they cannot work with infinitesimal values, so instead of dt we use Δt and bring the resulting variable to the left:
formula_4, and: formula_5
Remember that a is still a function of time. The simplest way to solve these is just the Euler algorithm, which in essence is the linear addition described above. Limiting ourselves to 1 dimension only in some general computer language:
a.old = gravitationfunction(x.old)
x.new = x.old + v.old * dt
v.new = v.old + a.old * dt
As, in essence, the acceleration used for the whole duration of the timestep is the one at the beginning of the timestep, this simple method is not highly accurate. Much better results are achieved by taking a mean acceleration, the average between the beginning value and the expected (unperturbed) end value:
a.old = gravitationfunction(x.old)
x.expect = x.old + v.old * dt
a.expect = gravitationfunction(x.expect)
v.new = v.old + (a.old + a.expect) * 0.5 * dt
x.new = x.old + (v.new + v.old) * 0.5 * dt
Of course still better results can be expected by taking intermediate values. This is what happens when using the Runge–Kutta method; those of order 4 or 5 are most useful. The most common method used is the leapfrog method due to its good long-term energy conservation.
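The procedure above can be condensed into a short runnable sketch. The following Python example integrates a Sun–Earth system with the leapfrog method in scaled units (AU, years, solar masses, so that G = 4π^2); the masses, timestep and initial state are example values.
import numpy as np

G = 4 * np.pi**2                                     # AU^3 / (solar mass * year^2)
mass = np.array([1.0, 3.0e-6])                       # Sun and Earth, in solar masses
pos = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0]])   # positions in AU
vel = np.array([[0.0, 0.0, 0.0], [0.0, 2 * np.pi, 0.0]])   # AU/year, roughly a circular orbit
dt = 0.001                                           # timestep in years

def accelerations(pos):
    # Newton's law of gravitation summed over all other bodies (see the equation above).
    acc = np.zeros_like(pos)
    for j in range(len(mass)):
        for i in range(len(mass)):
            if i != j:
                d = pos[i] - pos[j]
                acc[j] += G * mass[i] * d / np.linalg.norm(d) ** 3
    return acc

for _ in range(int(1.0 / dt)):                       # integrate one year, kick-drift-kick
    vel += 0.5 * dt * accelerations(pos)
    pos += dt * vel
    vel += 0.5 * dt * accelerations(pos)

print(pos[1])   # Earth ends close to its starting point (1, 0, 0) after one orbit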
A completely different method is the use of Taylor series. In that case we write: formula_6
but rather than developing up to some higher derivative in r only, one can develop in r and v (that is r') by writing formula_7 and then write out the factors "f" and "g" in a series.
Approximations.
To calculate the accelerations the gravitational attraction of each body on each other body is to be taken into account. As a consequence the amount of calculation in the simulation goes up with the square of the number of bodies: Doubling the number of bodies increases the work with a factor four. To increase the accuracy of the simulation not only more decimals are to be taken but also smaller timesteps, again quickly increasing the amount of work. Evidently tricks are to be applied to reduce the amount of work. Some of these tricks are given here.
By far the most important trick is the use of a proper integration method, as already outlined above.
The choice of units is important. Rather than to work in SI units, which would make some values extremely small and some extremely large, all units are to be scaled such that they are in the neighbourhood of 1. For example, for distances in the Solar System the astronomical unit is most straightforward. If this is not done, one is almost certain to see the simulation abandoned in the middle of a calculation on a floating-point overflow or underflow, and even if it is not that bad, accuracy is still likely to be lost due to truncation errors.
If N is large (not so much in Solar System simulations, but more in galaxy simulations) it is customary to create dynamic groups of bodies. All bodies in a particular direction and at a large distance from the reference body, which is being calculated at that moment, are taken together and their gravitational attraction is averaged over the whole group.
The total amount of energy and angular momentum of a closed system are conserved quantities. By calculating these amounts after every time step, the simulation can be programmed to increase the stepsize Δt if they do not change significantly, and to reduce it if they start to do so. Combining the bodies into groups as in the previous paragraph and applying larger, and thus fewer, timesteps to the faraway bodies than to the closer ones is also possible.
To avoid an excessively rapid change of the acceleration when a particular body is close to the reference body, it is customary to introduce a small softening parameter "e" so that
formula_8
Complications.
If the highest possible accuracy is needed, the calculations become much more complex. In the case of comets, nongravitational forces, such as radiation pressure and gas drag, must be taken into account. In the case of Mercury, and other planets for long term calculations, relativistic effects cannot be ignored. Then also the total energy is no longer a constant (because the energy–momentum four-vector is conserved instead). The finite speed of light also makes it important to allow for light-time effects, both classical and relativistic. Planets can no longer be considered as particles, but their shape and density must also be considered. For example, the flattening of the Earth causes precession, which causes the axial tilt to change, which affects the long-term movements of all planets.
Long term models, going beyond a few tens of millions of years, are not possible due to the lack of stability of the Solar System. | [
{
"math_id": 0,
"text": "\\vec{a}_j = \\sum_{i \\neq j}^n G \\frac{M_i}{|\\vec{r}_i - \\vec{r}_j|^3} (\\vec{r}_i - \\vec{r}_j)"
},
{
"math_id": 1,
"text": "(a_j)_x = \\sum_{i \\neq j}^n G \\frac{M_i}{( (x_i - x_j)^2 + (y_i - y_j)^2 + (z_i - z_j)^2 )^{3/2}} (x_i - x_j)"
},
{
"math_id": 2,
"text": "a_{x} = \\frac{dv_{x}}{dt}"
},
{
"math_id": 3,
"text": "v_{x} = \\frac{dx}{dt}"
},
{
"math_id": 4,
"text": "\\Delta v_x = a_{x} \\Delta t "
},
{
"math_id": 5,
"text": "\\Delta x = v_{x} \\Delta t "
},
{
"math_id": 6,
"text": "r = r_0 + r'_0 t + r''_0 \\frac{t^2}{2!} + ... "
},
{
"math_id": 7,
"text": "r = f r_0 + g r'_0"
},
{
"math_id": 8,
"text": "a = \\frac{G M}{r^2 + e}"
}
]
| https://en.wikipedia.org/wiki?curid=1355025 |
1355057 | Overhead power line | Above-ground structure for bulk transfer and distribution of electricity
An overhead power line is a structure used in electric power transmission and distribution to transmit electrical energy along large distances. It consists of one or more conductors (commonly multiples of three) suspended by towers or poles. Since the surrounding air provides good cooling, insulation along long passages and allows optical inspection, overhead power lines are generally the lowest-cost method of power transmission for large quantities of electric energy.
Construction.
Towers for support of the lines are made of wood (as-grown or laminated), steel or aluminum (either lattice structures or tubular poles), concrete, and occasionally reinforced plastics. The bare wire conductors on the line are generally made of aluminum (either plain or reinforced with steel, or composite materials such as carbon and glass fiber), though some copper wires are used in medium-voltage distribution and low-voltage connections to customer premises. A major goal of overhead power line design is to maintain adequate clearance between energized conductors and the ground so as to prevent dangerous contact with the line, and to provide reliable support for the conductors, resilience to storms, ice loads, earthquakes and other potential damage causes.
Today some overhead lines are routinely operated at voltages exceeding 765,000 volts between conductors, with even higher voltages possible in some cases.
Classification by operating voltage.
Overhead power transmission lines are classified in the electrical power industry by the range of voltages:
Structures.
Structures for overhead lines take a variety of shapes depending on the type of line. Structures may be as simple as wood poles directly set in the earth, carrying one or more cross-arm beams to support conductors, or "armless" construction with conductors supported on insulators attached to the side of the pole. Tubular steel poles are typically used in urban areas. High-voltage lines are often carried on lattice-type steel towers or pylons. For remote areas, aluminum towers may be placed by helicopters. Concrete poles have also been used. Poles made of reinforced plastics are also available, but their high cost restricts application.
Each structure must be designed for the loads imposed on it by the conductors. The weight of the conductor must be supported, as well as dynamic loads due to wind and ice accumulation, and effects of vibration. Where conductors are in a straight line, towers need only resist the weight since the tension in the conductors approximately balances with no resultant force on the structure. Flexible conductors supported at their ends approximate the form of a catenary, and much of the analysis for construction of transmission lines relies on the properties of this form.
A large transmission line project may have several types of towers, with "tangent" ("suspension" or "line" towers, UK) towers intended for most positions and more heavily constructed towers used for turning the line through an angle, dead-ending (terminating) a line, or for important river or road crossings. Depending on the design criteria for a particular line, semi-flexible type structures may rely on the weight of the conductors to be balanced on both sides of each tower. More rigid structures may be intended to remain standing even if one or more conductors is broken. Such structures may be installed at intervals in power lines to limit the scale of cascading tower failures.
Foundations for tower structures may be large and costly, particularly if the ground conditions are poor, such as in wetlands. Each structure may be stabilized considerably by the use of guy wires to counteract some of the forces applied by the conductors.
Power lines and supporting structures can be a form of visual pollution. In some cases the lines are buried to avoid this, but this "undergrounding" is more expensive and therefore not common.
For a single wood utility pole structure, a pole is placed in the ground, then three crossarms extend from this, either staggered or all to one side. The insulators are attached to the crossarms. For an "H"-type wood pole structure, two poles are placed in the ground, then a crossbar is placed on top of these, extending to both sides. The insulators are attached at the ends and in the middle. Lattice tower structures have two common forms. One has a pyramidal base, then a vertical section, where three crossarms extend out, typically staggered. The strain insulators are attached to the crossarms. Another has a pyramidal base, which extends to four support points. On top of this a horizontal truss-like structure is placed.
A grounded wire is sometimes strung along the tops of the towers to provide lightning protection. An optical ground wire is a more advanced version with embedded optical fibers for communication. Overhead wire markers can be mounted on the ground wire to meet International Civil Aviation Organization recommendations.
Some markers include flashing lamps for night-time warning.
Circuits.
A "single-circuit transmission line" carries conductors for only one circuit. For a three-phase system, this implies that each tower supports three conductors.
A "double-circuit transmission line" has two circuits. For three-phase systems, each tower supports and insulates six conductors. Single phase AC-power lines as used for traction current have four conductors for two circuits. Usually both circuits operate at the same voltage.
In HVDC systems typically two conductors are carried per line, but in rare cases only one pole of the system is carried on a set of towers.
In some countries like Germany most power lines with voltages above 100 kV are implemented as double, quadruple or in rare cases even hextuple power line as rights of way are rare. Sometimes all conductors are installed with the erection of the pylons; often some circuits are installed later. A disadvantage of double circuit transmission lines is that maintenance can be difficult, as either work in close proximity of high voltage or switch-off of two circuits is required. In case of failure, both systems can be affected.
The largest double-circuit transmission line is the Kita-Iwaki Powerline.
Insulators.
Insulators must support the conductors and withstand both the normal operating voltage and surges due to switching and lightning. Insulators are broadly classified as either pin-type, which support the conductor above the structure, or suspension type, where the conductor hangs below the structure. The invention of the strain insulator was a critical factor in allowing higher voltages to be used.
At the end of the 19th century, the limited electrical strength of telegraph-style pin insulators limited the voltage to no more than 69,000 volts. Up to about 33 kV (69 kV in North America) both types are commonly used. At higher voltages only suspension-type insulators are common for overhead conductors.
Insulators are usually made of wet-process porcelain or toughened glass, with increasing use of glass-reinforced polymer insulators. However, with rising voltage levels, polymer insulators (silicone rubber based) are seeing increasing usage. China has already developed polymer insulators having a highest system voltage of 1100 kV and India is currently developing a 1200 kV (highest system voltage) line which will initially be charged with 400 kV to be upgraded to a 1200 kV line.
Suspension insulators are made of multiple units, with the number of unit insulator disks increasing at higher voltages. The number of disks is chosen based on line voltage, lightning withstand requirement, altitude, and environmental factors such as fog, pollution, or salt spray. In cases where these conditions are suboptimal, longer insulators with longer creepage distance for leakage current are required. Strain insulators must be strong enough mechanically to support the full weight of the span of conductor, as well as loads due to ice accumulation, and wind.
Porcelain insulators may have a semi-conductive glaze finish, so that a small current (a few milliamperes) passes through the insulator. This warms the surface slightly and reduces the effect of fog and dirt accumulation. The semiconducting glaze also ensures a more even distribution of voltage along the length of the chain of insulator units.
Polymer insulators by nature have hydrophobic characteristics providing for improved wet performance. Also, studies have shown that the specific creepage distance required in polymer insulators is much lower than that required in porcelain or glass. Additionally, the mass of polymer insulators (especially in higher voltages) is approximately 50% to 30% less than that of a comparative porcelain or glass string. Better pollution and wet performance is leading to the increased use of such insulators.
Insulators for very high voltages, exceeding 200 kV, may have grading rings installed at their terminals. This improves the electric field distribution around the insulator and makes it more resistant to flash-over during voltage surges.
Conductors.
The most common conductor in use for transmission today is aluminum conductor steel reinforced (ACSR). Also seeing much use is all-aluminum-alloy conductor (AAAC). Aluminum is used because it has about half the weight of a comparable resistance copper cable (though larger diameter due to lower specific conductivity), as well as being cheaper.
Copper was more popular in the past and is still in use, especially at lower voltages and for grounding.
While larger conductors lose less energy due to lower electrical resistance, they are more costly than smaller conductors. An optimization rule called "Kelvin's Law" (named for Lord Kelvin) states that the optimum size of conductor for a line is found when the cost of the energy wasted in the conductor is equal to the annual interest paid on that portion of the line construction cost due to the size of the conductors. The optimization problem is made more complex by additional factors such as varying annual load, varying cost of installation, and the discrete sizes of cable that are commonly made.
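As a rough illustration only, the sketch below (Python) equates the two annual costs in Kelvin's Law under a very simplified cost model; the current, resistivity, energy price, conductor cost and interest rate are made-up placeholder values, not data for any real line.

```python
import math

def kelvin_optimal_area(i_rms, rho, energy_cost_per_wh, hours_per_year,
                        conductor_cost_per_m3, interest_rate):
    """Cross-sectional area (m^2) at which the yearly cost of resistive losses
    equals the yearly interest on the conductor-dependent construction cost.

    Loss cost per metre of line:    (i_rms**2 * rho / A) * hours * cost_per_Wh
    Capital cost per metre of line: interest_rate * conductor_cost_per_m3 * A
    Equating the two and solving for A gives A = sqrt(k_loss / k_capital).
    """
    k_loss = i_rms ** 2 * rho * hours_per_year * energy_cost_per_wh
    k_capital = interest_rate * conductor_cost_per_m3
    return math.sqrt(k_loss / k_capital)

# Entirely illustrative figures: 400 A, aluminium resistivity 2.8e-8 ohm*m,
# 5e-5 $/Wh (0.05 $/kWh), 8760 h/yr, 1e5 $ per m^3 of conductor, 6% interest.
area = kelvin_optimal_area(400, 2.8e-8, 5e-5, 8760, 1e5, 0.06)
print(f"optimum cross-section ~ {area * 1e6:.0f} mm^2")
```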
Since a conductor is a flexible object with uniform weight per unit length, the shape of a conductor strung between two towers approximates that of a catenary. The sag of the conductor (vertical distance between the highest and lowest point of the curve) varies depending on the temperature and additional load such as ice cover. A minimum overhead clearance must be maintained for safety. Since the temperature and therefore length of the conductor increase with increasing current through it, it is sometimes possible to increase the power handling capacity (uprate) by changing the conductors for a type with a lower coefficient of thermal expansion or a higher allowable operating temperature.
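A minimal sketch of the sag calculation for a level span, assuming the conductor hangs in a catenary with a known horizontal tension; the span, tension and conductor weight below are illustrative values, and the common parabolic approximation is shown for comparison.

```python
import math

def catenary_sag(span_m, horizontal_tension_n, weight_n_per_m):
    """Sag (m) of a conductor hanging between two supports at equal height.

    The conductor follows y = a*cosh(x/a) with catenary constant a = H/w,
    where H is the horizontal tension and w the weight per unit length;
    the sag is the drop of the lowest point below the supports.
    """
    a = horizontal_tension_n / weight_n_per_m
    return a * (math.cosh(span_m / (2 * a)) - 1)

def parabolic_sag(span_m, horizontal_tension_n, weight_n_per_m):
    """Common small-sag approximation: sag ~ w*L^2 / (8*H)."""
    return weight_n_per_m * span_m ** 2 / (8 * horizontal_tension_n)

# Illustrative values: 300 m span, 20 kN horizontal tension, 15 N/m weight.
print(catenary_sag(300, 20_000, 15))    # ~ 8.45 m
print(parabolic_sag(300, 20_000, 15))   # ~ 8.44 m
```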
Two such conductors that offer reduced thermal sag are known as composite core conductors (ACCR and ACCC conductor). In lieu of steel core strands that are often used to increase overall conductor strength, the ACCC conductor uses a carbon and glass fiber core that offers a coefficient of thermal expansion about 1/10 of that of steel. While the composite core is nonconductive, it is substantially lighter and stronger than steel, which allows the incorporation of 28% more aluminum (using compact trapezoidal-shaped strands) without any diameter or weight penalty. The added aluminum content helps reduce line losses by 25 to 40% compared to other conductors of the same diameter and weight, depending upon electric current. The carbon core conductor's reduced thermal sag allows it to carry up to twice the current ("ampacity") compared to all-aluminum conductor (AAC) or ACSR.
The power lines and their surroundings must be maintained by linemen, sometimes assisted by helicopters with pressure washers or circular saws which may work three times faster. However this work often occurs in the dangerous areas of the Helicopter height–velocity diagram, and the pilot must be qualified for this "human external cargo" method.
Bundle conductors.
For transmission of power across long distances, high voltage transmission is employed. Transmission higher than 132 kV poses the problem of corona discharge, which causes significant power loss and interference with communication circuits. To reduce this corona effect, it is preferable to use more than one conductor per phase, or bundled conductors.
Bundle conductors consist of several parallel cables connected at intervals by spacers, often in a cylindrical configuration. The optimum number of conductors depends on the current rating, but typically higher-voltage lines also have higher current. American Electric Power is building 765 kV lines using six conductors per phase in a bundle. Spacers must resist the forces due to wind, and magnetic forces during a short-circuit.
Bundled conductors reduce the voltage gradient in the vicinity of the line. This reduces the possibility of corona discharge. At extra high voltage, the electric field gradient at the surface of a single conductor is high enough to ionize air, which wastes power, generates unwanted audible noise and interferes with communication systems. The field surrounding a bundle of conductors is similar to the field that would surround a single, very large conductor—this produces lower gradients which mitigates issues associated with high field strength. The transmission efficiency is improved as loss due to corona effect is countered.
Bundled conductors cool themselves more efficiently due to the increased surface area of the conductors, further reducing line losses. When transmitting alternating current, bundle conductors also avoid the reduction in ampacity of a single large conductor due to the skin effect. A bundle conductor also has lower reactance, compared to a single conductor.
While wind resistance is higher, wind-induced oscillation can be damped at bundle spacers. The ice and wind loading of bundled conductors will be greater than a single conductor of the same total cross section, and bundled conductors are more difficult to install than single conductors.
Ground wires.
Overhead power lines are often equipped with a ground conductor (shield wire, static wire, or overhead earth wire). The ground conductor is usually grounded (earthed) at the top of the supporting structure, to minimize the likelihood of direct lightning strikes to the phase conductors. In circuits with earthed neutral, it also serves as a parallel path with the earth for fault currents. Very high-voltage transmission lines may have two ground conductors. These are either at the outermost ends of the highest cross beam, at two V-shaped mast points, or at a separate cross arm. Older lines may use surge arresters every few spans in place of a shield wire; this configuration is typically found in the more rural areas of the United States. By protecting the line from lightning, the design of apparatus in substations is simplified due to lower stress on insulation. Shield wires on transmission lines may include optical fibers (optical ground wires/OPGW), used for communication and control of the power system.
At some HVDC converter stations, the ground wire is used also as the electrode line to connect to a distant grounding electrode. This allows the HVDC system to use the earth as one conductor. The ground conductor is mounted on small insulators bridged by lightning arrestors above the phase conductors. The insulation prevents electrochemical corrosion of the pylon.
Medium-voltage distribution lines may also use one or two shield wires, or may have the grounded conductor strung below the phase conductors to provide some measure of protection against tall vehicles or equipment touching the energized line, as well as to provide a neutral line in Wye wired systems.
On some power lines for very high voltages in the former Soviet Union, the ground wire is used for PLC systems and mounted on insulators at the pylons.
Insulated conductors and cable.
Overhead insulated cables are rarely used, usually for short distances (less than a kilometer). Insulated cables can be directly fastened to structures without insulating supports. An overhead line with bare conductors insulated by air is typically less costly than a cable with insulated conductors.
A more common approach is "covered" line wire. It is treated as bare cable, but it is often safer for wildlife, as the insulation increases the likelihood that a large-wingspan raptor will survive a brush with the lines, and it reduces the overall danger of the lines slightly. These types of lines are often seen in the eastern United States and in heavily wooded areas, where tree-line contact is likely. The main drawback is cost, as insulated wire is often costlier than its bare counterpart. Many utility companies use covered line wire as jumper material where the wires are often closer to each other on the pole, such as at an underground riser/pothead, and on reclosers, cutouts and the like.
Dampers.
Because power lines can suffer from aeroelastic flutter driven by wind, Stockbridge dampers are often attached to the lines to reduce the vibrations.
Compact transmission lines.
A compact overhead transmission line requires a smaller right of way than a standard overhead powerline. Conductors must not get too close to each other. This can be achieved either by short span lengths and insulating crossbars, or by separating the conductors in the span with insulators. The first type is easier to build as it does not require insulators in the span, which may be difficult to install and to maintain.
Examples of compact lines are:
Compact transmission lines may be designed for voltage upgrade of existing lines to increase the power that can be transmitted on an existing right of way.
Low voltage.
Low voltage overhead lines may use either bare conductors carried on glass or ceramic insulators or an aerial bundled cable system. The number of conductors may be anywhere between two (most likely a phase and neutral) up to as many as six (three phase conductors, separate neutral and earth plus street lighting supplied by a common switch); a common case is four (three phase and neutral, where the neutral might also serve as a protective earthing conductor).
Train power.
Overhead lines or overhead wires are used to transmit electrical energy to trams, trolleybuses or trains. The overhead line consists of one or more overhead wires situated over rail tracks. Feeder stations at regular intervals along the overhead line supply power from the high-voltage grid. In some cases low-frequency AC is used and is distributed by a special traction current network.
Further applications.
Overhead lines are also occasionally used to supply transmitting antennas, especially for efficient transmission of long, medium and short waves. For this purpose a staggered array line is often used: the conductor cables that supply the earth net of the transmitting antenna are attached to the exterior of a ring, while the conductor inside the ring is fastened to insulators leading to the high-voltage standing feeder of the antenna.
Use of area under overhead power lines.
Use of the area below an overhead line is limited because objects must not come too close to the energized conductors. Overhead lines and structures may shed ice, creating a hazard. Radio reception can be impaired under a power line, due both to shielding of a receiver antenna by the overhead conductors, and by partial discharge at insulators and sharp points of the conductors which creates radio noise.
In the area surrounding overhead lines it is dangerous to undertake activities that risk contact with the conductors, such as flying kites or balloons, using ladders or operating machinery.
Overhead distribution and transmission lines near airfields are often marked on maps, and the lines themselves marked with conspicuous plastic reflectors, to warn pilots of the presence of conductors.
Construction of overhead power lines, especially in wilderness areas, may have significant environmental effects. Environmental studies for such projects may consider the effect of bush clearing, changed migration routes for migratory animals, possible access by predators and humans along transmission corridors, disturbances of fish habitat at stream crossings, and other effects.
Aviation accidents.
General aviation, hang gliding, paragliding, skydiving, balloon, and kite flying must avoid accidental contact with power lines. Nearly every kite product warns users to stay away from power lines. Deaths occur when aircraft crash into power lines. Some power lines are marked with obstruction markers, especially near air strips or over waterways that may support floatplane operations. The placement of power lines sometimes uses up sites that would otherwise be used by hang gliders.
History.
The first transmission of electrical impulses over an extended distance was demonstrated on July 14, 1729 by the physicist Stephen Gray. The demonstration used damp hemp cords suspended by silk threads (the low resistance of metallic conductors not being appreciated at the time).
However the first practical use of overhead lines was in the context of telegraphy. By 1837 experimental commercial telegraph systems ran as far as 20 km (13 miles). Electric power transmission was accomplished in 1882 with the first high-voltage transmission between Munich and Miesbach (60 km). 1891 saw the construction of the first three-phase alternating current overhead line on the occasion of the International Electricity Exhibition in Frankfurt, between Lauffen and Frankfurt.
In 1912 the first 110 kV-overhead power line entered service followed by the first 220 kV-overhead power line in 1923. In the 1920s RWE AG built the first overhead line for this voltage and in 1926 built a Rhine crossing with the pylons of Voerde, two masts 138 meters high.
In 1953, the first 345 kV line was put into service by American Electric Power in the United States. In Germany in 1957 the first 380 kV overhead power line was commissioned (between the transformer station and Rommerskirchen). In the same year the overhead line crossing of the Strait of Messina went into service in Italy; its pylons served as the model for Elbe crossing 1, which in turn was used as the model for the building of Elbe crossing 2 in the second half of the 1970s, with the highest overhead line pylons of the world. Earlier, in 1952, the first 380 kV line had been put into service in Sweden, running 1000 km (625 miles) between the more populated areas in the south and the largest hydroelectric power stations in the north.
Starting in 1967, overhead lines for 765 kV were built in Russia, and also in the USA and Canada. In 1985 an overhead power line was built in the Soviet Union between Kokshetau and the power station at Ekibastuz; this was a three-phase alternating current line at 1150 kV. In 1999, the first power line designed for 1000 kV with two circuits, the Kita-Iwaki Powerline, was built in Japan. In 2003 construction of the highest overhead line, the Yangtze River Crossing, began in China.
In the 21st century, replacing steel with carbon fiber cores (advanced reconductoring) became a way for utilities to increase transmission capacity without increasing the amount of land used.
Mathematical analysis.
An overhead power line is one example of a transmission line. At power system frequencies, many useful simplifications can be made for lines of typical lengths. For analysis of power systems, the distributed resistance, series inductance, shunt leakage resistance and shunt capacitance can be replaced with suitable lumped values or simplified networks.
Short and medium line model.
A short length of power line (less than 80 km) can be approximated by a resistance in series with an inductance, ignoring the shunt admittances. The resistance and inductance here are specified per unit length of line, so the total series impedance is obtained by multiplying by the line length. For a longer length of line (80–250 km), a shunt capacitance is added to the model. In this case it is common to distribute half of the total capacitance to each side of the line. As a result, the power line can be represented as a two-port network, such as with ABCD parameters.
The circuit can be characterized as
formula_0
where "Z" is the total series impedance of the line, "R" and "L" are the series resistance and inductance per unit length, formula_1 is the angular frequency, and "l" is the length of the line.
The medium line has an additional shunt admittance
formula_2
where "Y" is the total shunt admittance of the line and "C" is the shunt capacitance per unit length.
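The sketch below, under assumed per-kilometre line constants, builds the total series impedance and shunt admittance of the formulas above and the ABCD parameters of the resulting nominal-π two-port; the specific line values are illustrative only.

```python
import math

def medium_line_abcd(r_per_km, l_per_km, c_per_km, length_km, freq_hz=50.0):
    """ABCD (two-port) parameters of the nominal-pi medium-line model.

    Z is the total series impedance and Y the total shunt admittance; half of Y
    is lumped at each end of the line, giving the standard nominal-pi network.
    """
    w = 2 * math.pi * freq_hz
    z = complex(r_per_km, w * l_per_km)   # series impedance per km (ohm/km)
    y = complex(0.0, w * c_per_km)        # shunt admittance per km (S/km)
    Z, Y = z * length_km, y * length_km
    A = 1 + Z * Y / 2
    B = Z
    C = Y * (1 + Z * Y / 4)
    return A, B, C, A                     # D equals A for this symmetric model

# Illustrative 150 km, 50 Hz line: 0.05 ohm/km, 1.0 mH/km, 12 nF/km.
A, B, C, D = medium_line_abcd(0.05, 1.0e-3, 12e-9, 150)
print(A, B)
print(abs(A * D - B * C))                 # reciprocity check: AD - BC == 1
```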
References.
<templatestyles src="Reflist/styles.css" />
Further reading.
<templatestyles src="Refbegin/styles.css" /> | [
{
"math_id": 0,
"text": "Z = z l = (R + j \\omega L)l "
},
{
"math_id": 1,
"text": "\\omega \\ "
},
{
"math_id": 2,
"text": "Y = y l = j \\omega C l "
}
]
| https://en.wikipedia.org/wiki?curid=1355057 |
13551670 | Electroacoustic phenomena | Electric effects caused by ultrasound
Electroacoustic phenomena arise when ultrasound propagates through a fluid containing ions. The associated particle motion generates electric signals because ions have electric charge. This coupling between ultrasound and electric field is called electroacoustic phenomena. The fluid might be a simple Newtonian liquid, or complex heterogeneous dispersion, emulsion or even a porous body. There are several different electroacoustic effects depending on the nature of the fluid.
Ion vibration current.
Historically, the ion vibration current (IVI) was the first known electroacoustic effect. It was predicted by Debye in 1933.
Streaming vibration current.
The streaming vibration current was experimentally observed in 1948 by Williams. A theoretical model was developed some 30 years later by Dukhin and others. This effect opens another possibility for characterizing the electric properties of the surfaces in porous bodies. A similar effect can be observed at a non-porous surface, when sound is bounced off at an oblique angle. The incident and reflected waves superimpose to cause oscillatory fluid motion in the plane of the interface, thereby generating an AC streaming current at the frequency of the sound waves.
Double layer compression.
The electrical double layer can be regarded as behaving like a parallel plate capacitor with a compressible dielectric filling. When sound waves induce a local pressure variation, the spacing of the plates varies at the frequency of the excitation, generating an AC displacement current normal to the interface. For practical reasons this is most readily observed at a conducting surface. It is therefore possible to use an electrode immersed in a conducting electrolyte as a microphone, or indeed as a loudspeaker when the effect is applied in reverse.
Colloid vibration potential and current.
Colloid vibration potential measures the AC potential difference generated between two identical relaxed electrodes, placed in the dispersion, if the latter is subjected to an ultrasonic field. When a sound wave travels through a colloidal suspension of particles whose density differs from that of the surrounding medium, inertial forces induced by the vibration of the suspension give rise to a motion of the charged particles relative to the liquid, causing an alternating electromotive force. The manifestations of this electromotive force may be measured, depending on the relation between the impedance of the suspension and that of the measuring instrument, either as colloid vibration potential or as "colloid vibration current".
Colloid vibration potential and current was first reported by Hermans and then independently by Rutgers in 1938. It is widely used for characterizing the ζ-potential of various dispersions and emulsions. The effect, theory, experimental verification and multiple applications are discussed in the book by Dukhin and Goetz.
Electric sonic amplitude.
Electric sonic amplitude was experimentally discovered by Cannon and co-authors in the early 1980s. It is also widely used for characterizing ζ-potential in dispersions and emulsions. A review of the theory of this effect, its experimental verification and its multiple applications has been published by Hunter.
Theory of CVI and ESA.
With regard to the theory of CVI and ESA, an important observation was made by O'Brien, who linked these measured parameters with the dynamic electrophoretic mobility μd.
formula_0
where
A is a calibration constant that depends on frequency but not on the particle properties;
ρ"p" is the particle density,
ρ"m" is the density of the fluid,
φ is the volume fraction of the dispersed phase,
Dynamic electrophoretic mobility is similar to electrophoretic mobility that appears in electrophoresis theory. They are identical at low frequencies and/or for sufficiently small particles.
There are several theories of the dynamic electrophoretic mobility. An overview is given in Ref. 5. Two of them are the most important.
The first one corresponds to the Smoluchowski limit. It yields the following simple expression for CVI for sufficiently small particles with negligible CVI frequency dependence:
formula_1
where:
ε"0" is vacuum dielectric permittivity,
ε"m" is fluid dielectric permittivity,
ζ is electrokinetic potential
η is dynamic viscosity of the fluid,
K"s" is conductivity of the system,
K"m" is conductivity of the fluid,
ρ"s" is density of the system.
This remarkably simple equation has the same wide range of applicability as the Smoluchowski equation for electrophoresis. It is independent of the shape of the particles and of their concentration.
The validity of this equation is restricted by the following two requirements.
First, it is valid only for a thin double layer, when the Debye length is much smaller than the particle's radius "a":
formula_2
Secondly, it neglects the contribution of the surface conductivity. This assumes a small Dukhin number:
formula_3
The thin double layer restriction limits the applicability of this Smoluchowski-type theory to aqueous systems with sufficiently large particles and not very low ionic strength. The theory does not work well for nano-colloids, including proteins and polymers at low ionic strength, and it is not valid for low-polar or non-polar fluids.
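As a purely numerical illustration of the Smoluchowski-type expression above, the sketch below evaluates it for assumed material parameters; the calibration constant is left at 1, so the result is meaningful only up to the instrument calibration, and all values are placeholders rather than measured data.

```python
def cvi_smoluchowski(phi, zeta, k_sys, k_med, eta, rho_p, rho_s,
                     eps_m=78.5, calib=1.0):
    """Smoluchowski-limit colloid vibration current (thin double layer,
    negligible surface conductivity).  `calib` is the instrument calibration
    constant A, which depends on frequency but not on the particle properties.
    """
    eps0 = 8.854e-12                      # vacuum permittivity, F/m
    return (calib * phi * eps0 * eps_m * zeta * k_sys / (eta * k_med)
            * (rho_p - rho_s) / rho_s)

# Illustrative 10 vol% aqueous slurry: zeta = -30 mV, particle density 2.2 g/cm^3,
# system density 1.12 g/cm^3, conductivities in S/m, viscosity 1 mPa*s.
print(cvi_smoluchowski(phi=0.10, zeta=-0.030, k_sys=0.014, k_med=0.015,
                       eta=1.0e-3, rho_p=2200.0, rho_s=1120.0))
```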
There is another theory that is applicable for the other extreme case of a thick double layer, when
formula_4
This theory takes into consideration the overlap of double layers that inevitably occurs in concentrated systems with a thick double layer. This allows the introduction of a so-called "quasi-homogeneous" approach, in which the overlapped diffuse layers of the particles cover the complete interparticle space. The theory becomes much simpler in this extreme case, as shown by Shilov and others. Their derivation predicts that the surface charge density σ is a better parameter than the ζ-potential for characterizing electroacoustic phenomena in such systems. An expression for CVI simplified for small particles follows:
formula_5
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": " \\ CVI(ESA) = A\\phi\\mu_d\\frac{\\rho_p-\\rho_m}{\\rho_m}"
},
{
"math_id": 1,
"text": " \\ CVI(ESA) = A\\phi\\frac{\\varepsilon_0\\varepsilon_m\\zeta\\Kappa_s}{\\eta\\Kappa_m}\\frac{\\rho_p-\\rho_s}{\\rho_s}"
},
{
"math_id": 2,
"text": " {\\kappa}a >> 1"
},
{
"math_id": 3,
"text": " Du << 1"
},
{
"math_id": 4,
"text": " {\\kappa}a < 1"
},
{
"math_id": 5,
"text": " \\ CVI = A\\frac{2{\\sigma}a}{3\\eta}\\frac{\\phi}{1-\\phi}\\frac{\\rho_p-\\rho_s}{\\rho_s}"
}
]
| https://en.wikipedia.org/wiki?curid=13551670 |
13551967 | IκB kinase | Class of enzymes
The IκB kinase (IkappaB kinase or IKK) is an enzyme complex that is involved in propagating the cellular response to inflammation, specifically the regulation of lymphocytes.
The IκB kinase enzyme complex is part of the upstream NF-κB signal transduction cascade. The IκBα (inhibitor of nuclear factor kappa B) protein inactivates the NF-κB transcription factor by masking the nuclear localization signals (NLS) of NF-κB proteins and keeping them sequestered in an inactive state in the cytoplasm. Specifically, IKK phosphorylates the inhibitory IκBα protein. This phosphorylation results in the dissociation of IκBα from NF-κB. NF-κB, which is now free, migrates into the nucleus and activates the expression of at least 150 genes; some of which are anti-apoptotic.
Catalyzed reaction.
In enzymology, an IκB kinase (EC 2.7.11.10) is an enzyme that catalyzes the chemical reaction:
ATP + IκB protein formula_0 ADP + IκB phosphoprotein
Thus, the two substrates of this enzyme are ATP and IκB protein, whereas its two products are ADP and IκB phosphoprotein.
This enzyme belongs to the family of transferases, specifically those transferring a phosphate group to the sidechain oxygen atom of serine or threonine residues in proteins (protein-serine/threonine kinases). The systematic name of this enzyme class is ATP:[IκB protein] phosphotransferase.
Structure.
The IκB kinase complex is composed of three subunits each encoded by a separate gene:
The α- and β-subunits together are catalytically active whereas the γ-subunit serves a regulatory function.
The IKK-α and IKK-β kinase subunits are homologous in structure, composed of a kinase domain, as well as leucine zipper and helix-loop-helix dimerization domains, and a carboxy-terminal NEMO-binding domain (NBD). Mutational studies have revealed the identity of the NBD amino acid sequence as leucine-aspartate-tryptophan-serine-tryptophan-leucine, encoded by residues 737-742 and 738-743 of IKK-α and IKK-β, respectively. The regulatory IKK-γ subunit, or NEMO, is composed of two coiled coil domains, a leucine zipper dimerization domain, and a zinc finger-binding domain. Specifically, the NH2-terminus of NEMO binds to the NBD sequences on IKK-α and IKK-β, leaving the rest of NEMO accessible for interacting with regulatory proteins.
Function.
IκB kinase activity is essential for activation of members of the nuclear factor-kB (NF-κB) family of transcription factors, which play a fundamental role in lymphocyte immunoregulation. Activation of the canonical, or classical, NF-κB pathway begins in response to stimulation by various pro-inflammatory stimuli, including lipopolysaccharide (LPS) expressed on the surface of pathogens, or the release of pro-inflammatory cytokines such as tumor necrosis factor (TNF) or interleukin-1 (IL-1). Following immune cell stimulation, a signal transduction cascade leads to the activation of the IKK complex, an event characterized by the binding of NEMO to the homologous kinase subunits IKK-α and IKK-β. The IKK complex phosphorylates serine residues (S32 and S36) within the amino-terminal domain of inhibitor of NF-κB (IκBα) upon activation, consequently leading to its ubiquitination and subsequent degradation by the proteasome. Degradation of IκBα releases the prototypical p50-p65 dimer for translocation to the nucleus, where it binds to κB sites and directs NF-κB-dependent transcriptional activity. NF-κB target genes can be differentiated by their different functional roles within lymphocyte immunoregulation and include positive cell-cycle regulators, anti-apoptotic and survival factors, and pro-inflammatory genes. Collectively, activation of these immunoregulatory factors promotes lymphocyte proliferation, differentiation, growth, and survival.
Regulation.
Activation of the IKK complex is dependent on phosphorylation of serine residues within the kinase domain of IKK-β, though IKK-α phosphorylation occurs concurrently in endogenous systems. Recruitment of IKK kinases by the regulatory domains of NEMO leads to the phosphorylation of two serine residues within the activation loop of IKK-β, moving the activation loop away from the catalytic pocket, thus allowing access to ATP and IκBα peptide substrates. Furthermore, the IKK complex is capable of undergoing trans-autophosphorylation, where the activated IKK-β kinase subunit phosphorylates its adjacent IKK-α subunit, as well as other inactive IKK complexes, thus resulting in high levels of IκB kinase activity. Following IKK-mediated phosphorylation of IκBα and the subsequent decrease in IκB abundance, the activated IKK kinase subunits undergo extensive carboxy-terminal autophosphorylation, reaching a low activity state that is further susceptible to complete inactivation by phosphatases once upstream inflammatory signaling diminishes.
Deregulation and disease.
Though functionally adaptive in response to inflammatory stimuli, deregulation of NF-κB signaling has been exploited in various disease states. Increased NF-κB activity as a result of constitutive IKK-mediated phosphorylation of IκBα has been observed in the development of atherosclerosis, asthma, rheumatoid arthritis, inflammatory bowel diseases, and multiple sclerosis. Specifically, constitutive NF-κB activity promotes continuous inflammatory signaling at the molecular level that translates to chronic inflammation phenotypically. Furthermore, the ability of NF-κB to simultaneously suppress apoptosis and promote continuous lymphocyte growth and proliferation explains its intimate connection with many types of cancer.
Clinical significance.
This enzyme participates in 15 pathways related to metabolism: MAPK signaling, apoptosis, Toll-like receptor signaling, T-cell receptor signaling, B-cell receptor signaling, insulin signaling, adipokine signaling, Type 2 diabetes mellitus, epithelial cell signaling in "Helicobacter pylori" infection, pancreatic cancer, prostate cancer, chronic myeloid leukemia, acute myeloid leukemia, and small cell lung cancer.
Inhibition of IκB kinase (IKK) and IKK-related kinases, IKBKE (IKKε) and TANK-binding kinase 1 (TBK1), has been investigated as a therapeutic option for the treatment of inflammatory diseases and cancer. The small-molecule inhibitor of IKK-β SAR113945, developed by Sanofi-Aventis, was evaluated in patients with knee osteoarthritis.
References.
<templatestyles src="Reflist/styles.css" />
Further reading.
<templatestyles src="Refbegin/styles.css" /> | [
{
"math_id": 0,
"text": "\\rightleftharpoons"
}
]
| https://en.wikipedia.org/wiki?curid=13551967 |
13552465 | Hereditary property | In mathematics, a hereditary property is a property of an object that is inherited by all of its subobjects, where the meaning of "subobject" depends on the context. These properties are particularly considered in topology and graph theory, but also in set theory.
In topology.
In topology, a topological property is said to be "hereditary" if whenever a topological space has that property, then so does every subspace of it. If the latter is true only for closed subspaces, then the property is called "weakly hereditary" or "closed-hereditary".
For example, second countability and metrisability are hereditary properties. Sequentiality and Hausdorff compactness are weakly hereditary, but not hereditary. Connectivity is not weakly hereditary.
If "P" is a property of a topological space "X" and every subspace also has property "P", then "X" is said to be "hereditarily "P"".
In combinatorics and graph theory.
Hereditary properties occur throughout combinatorics and graph theory, although they are known by a variety of names. For example, in the context of permutation patterns, hereditary properties are typically called permutation classes.
In graph theory.
In graph theory, a "hereditary property" usually means a property of a graph which also holds for (is "inherited" by) its induced subgraphs. Equivalently, a hereditary property is preserved by the removal of vertices. A graph class formula_0 is called hereditary if it is closed under induced subgraphs. Examples of hereditary graph classes are independent graphs (graphs with no edges), which is a special case (with "c" = 1) of being "c"-colorable for some number "c", being forests, planar, complete, complete multipartite etc.
Sometimes the term "hereditary" has been defined with reference to graph minors; then it may be called a minor-hereditary property. The Robertson–Seymour theorem implies that a minor-hereditary property may be characterized in terms of a finite set of forbidden minors.
The term "hereditary" has been also used for graph properties that are closed with respect to taking subgraphs. In such a case, properties that are closed with respect to taking induced subgraphs, are called induced-hereditary. The language of hereditary properties and induced-hereditary properties provides a powerful tool for study of structural properties of various types of generalized colourings. The most important result from this area is the unique factorization theorem.
Monotone property.
There is no consensus for the meaning of "monotone property" in graph theory. Examples of definitions are:
The complementary property of a property that is preserved by the removal of edges is preserved under the addition of edges. Hence some authors avoid this ambiguity by saying a property A is monotone if A or AC (the complement of A) is monotone. Some authors choose to resolve this by using the term "increasing monotone" for properties preserved under the addition of some object, and "decreasing monotone" for those preserved under the removal of the same object.
In matroid theory.
In a matroid, every subset of an independent set is again independent. This is a hereditary property of sets.
A family of matroids may have a hereditary property. For instance, a family that is closed under taking matroid minors may be called "hereditary".
In problem solving.
In planning and problem solving, or more formally one-person games, the search space is seen as a directed graph with "states" as nodes, and "transitions" as edges. States can have properties, and such a property P is hereditary if "for each state S that has" P, "each state that can be reached from S also has" P.
The subset of all states that have P plus the subset of all states that have ~P form a partition of the set of states called a hereditary partition. This notion can trivially be extended to more discriminating partitions by instead of properties, considering "aspects" of states and their domains. If states have an aspect "A", with "d""i" ⊂ "D" a partition of the domain "D" of "A", then the subsets of states for which "A" ∈ "d""i" form a hereditary partition of the total set of states iff ∀"i", from any state where "A" ∈ "d""i" only other states where "A" ∈ "d""i" can be reached.
If the current state and the goal state are in different elements of a hereditary partition, there is no path from the current state to the goal state — the problem has no solution.
Can a checkers board be covered with domino tiles, each of which covers exactly two adjacent fields? Yes. What if we remove the top left and the bottom right field? Then no covering is possible any more, because the difference between number of uncovered white fields and the number of uncovered black fields is 2, and adding a domino tile (which covers one white and one black field) keeps that number at 2. For a total covering the number is 0, so a total covering cannot be reached from the start position.
This notion was first introduced by Laurent Siklóssy and Roach.
In model theory.
In model theory and universal algebra, a class "K" of structures of a given signature is said to have the "hereditary property" if every substructure of a structure in "K" is again in "K". A variant of this definition is used in connection with Fraïssé's theorem: A class "K" of finitely generated structures has the "hereditary property" if every finitely generated substructure is again in "K". See age.
In set theory.
Recursive definitions using the adjective "hereditary" are often encountered in set theory.
A set is said to be hereditary (or "pure") if all of its elements are hereditary sets. It is vacuously true that the empty set is a hereditary set, and thus the set formula_1 containing only the empty set formula_2 is a hereditary set, and recursively so is formula_3, for example. In formulations of set theory that are intended to be interpreted in the von Neumann universe or to express the content of Zermelo–Fraenkel set theory, all sets are hereditary, because the only sort of object that is even a candidate to be an element of a set is another set. Thus the notion of hereditary set is interesting only in a context in which there may be urelements.
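A small sketch of the same idea, representing hereditarily finite pure sets as nested frozensets; the helper name is an illustrative choice, and any non-set element plays the role of an urelement.

```python
def is_hereditary_set(x):
    """True if x is a pure (hereditary) set: a set all of whose elements
    are themselves hereditary sets."""
    return isinstance(x, frozenset) and all(is_hereditary_set(e) for e in x)

empty = frozenset()
v1 = frozenset({empty})                   # {emptyset}
v2 = frozenset({empty, v1})               # {emptyset, {emptyset}}

print(is_hereditary_set(v2))              # True
print(is_hereditary_set(frozenset({1})))  # False: 1 is an urelement, not a set
```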
A couple of notions are defined analogously:
Based on the above, it follows that in ZFC a more general notion can be defined for any predicate formula_4. A set "x" is said to have "hereditarily" the property formula_4 if "x" itself and all members of its transitive closure satisfy formula_5, i.e. formula_6. Equivalently, "x" hereditarily satisfies formula_4 iff it is a member of a transitive subset of formula_7 A property (of a set) is thus said to be hereditary if it is inherited by every subset. For example, being well-ordered is a hereditary property, and so is being finite.
If we instantiate in the above schema formula_4 with ""x" has cardinality less than κ", we obtain the more general notion of a set being "hereditarily of cardinality less than" κ, usually denoted by formula_8 or formula_9. We regain the two simple notions we introduced above as formula_10 being the set of hereditarily finite sets and formula_11 being the set of hereditarily countable sets. (formula_12 is the first uncountable ordinal.) | [
{
"math_id": 0,
"text": "\\mathcal{G}"
},
{
"math_id": 1,
"text": "\\{\\varnothing\\}"
},
{
"math_id": 2,
"text": "\\varnothing"
},
{
"math_id": 3,
"text": "\\{\\varnothing, \\{\\varnothing \\}\\}"
},
{
"math_id": 4,
"text": "\\Phi(x)"
},
{
"math_id": 5,
"text": "\\Phi(y)"
},
{
"math_id": 6,
"text": "x\\cup \\mathop{\\rm tc}(x)\\subseteq \\{y : \\Phi(y)\\}"
},
{
"math_id": 7,
"text": "\\{y : \\Phi(y)\\}."
},
{
"math_id": 8,
"text": "H_\\kappa "
},
{
"math_id": 9,
"text": "H(\\kappa) "
},
{
"math_id": 10,
"text": "H(\\omega)"
},
{
"math_id": 11,
"text": "H(\\omega_1)"
},
{
"math_id": 12,
"text": "\\omega_1"
}
]
| https://en.wikipedia.org/wiki?curid=13552465 |
13554055 | Theodore Motzkin | Israeli American mathematician
Theodore Samuel Motzkin (26 March 1908 – 15 December 1970) was an Israeli-American mathematician.
Biography.
Motzkin's father Leo Motzkin, a Ukrainian Jew, went to Berlin at the age of thirteen to study mathematics. He pursued university studies in the topic and was accepted as a graduate student by Leopold Kronecker, but left the field to work for the Zionist movement before finishing a dissertation.
Motzkin grew up in Berlin and started studying mathematics at an early age as well, entering university when he was only 15. He received his Ph.D. in 1934 from the University of Basel under the supervision of Alexander Ostrowski for a thesis on the subject of linear programming ("Beiträge zur Theorie der linearen Ungleichungen", "Contributions to the Theory of Linear Inequalities", 1936).
In 1935, Motzkin was appointed to the Hebrew University in Jerusalem, contributing to the development of mathematical terminology in Hebrew. In 1936 he was an Invited Speaker at the International Congress of Mathematicians in Oslo. During World War II, he worked as a cryptographer for the British government.
In 1948, Motzkin moved to the United States. After two years at Harvard and Boston College, he was appointed at UCLA in 1950, becoming a professor in 1960. He worked there until his retirement.
Motzkin married Naomi Orenstein in Jerusalem. The couple had three sons:
Contributions to mathematics.
Motzkin's dissertation contained an important contribution to the nascent theory of linear programming (LP), but its importance was only recognized after an English translation appeared in 1951. He would continue to play an important role in the development of LP while at UCLA. Apart from this, Motzkin published about diverse problems in algebra, graph theory, approximation theory, combinatorics, numerical analysis, algebraic geometry and number theory.
The Motzkin transposition theorem, Motzkin numbers, Motzkin–Taussky theorem and the Fourier–Motzkin elimination are named after him. He first developed the "double description" algorithm of polyhedral combinatorics and computational geometry. He was the first to prove the existence of principal ideal domains that are not Euclidean domains, formula_0 being his first example.
He found the first explicit example of a nonnegative polynomial which is not a sum of squares, known as the Motzkin polynomial formula_1.
The quote "complete disorder is impossible," describing Ramsey theory, is attributed to him.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\mathbb{Z}\\left[\\frac{1+\\sqrt{-19}}{2}\\right]"
},
{
"math_id": 1,
"text": "X^4Y^2+X^2Y^4-3X^2Y^2+1"
}
]
| https://en.wikipedia.org/wiki?curid=13554055 |
1355482 | Pell number | Natural number used to approximate √2
In mathematics, the Pell numbers are an infinite sequence of integers, known since ancient times, that comprise the denominators of the closest rational approximations to the square root of 2. This sequence of approximations begins 1/1, 3/2, 7/5, 17/12, and 41/29, so the sequence of Pell numbers begins with 1, 2, 5, 12, and 29. The numerators of the same sequence of approximations are half the companion Pell numbers or Pell–Lucas numbers; these numbers form a second infinite sequence that begins with 2, 6, 14, 34, and 82.
Both the Pell numbers and the companion Pell numbers may be calculated by means of a recurrence relation similar to that for the Fibonacci numbers, and both sequences of numbers grow exponentially, proportionally to powers of the silver ratio 1 + √2. As well as being used to approximate the square root of two, Pell numbers can be used to find square triangular numbers, to construct integer approximations to the right isosceles triangle, and to solve certain combinatorial enumeration problems.
As with Pell's equation, the name of the Pell numbers stems from Leonhard Euler's mistaken attribution of the equation and the numbers derived from it to John Pell. The Pell–Lucas numbers are also named after Édouard Lucas, who studied sequences defined by recurrences of this type; the Pell and companion Pell numbers are Lucas sequences.
Pell numbers.
The Pell numbers are defined by the recurrence relation:
formula_0
In words, the sequence of Pell numbers starts with 0 and 1, and then each Pell number is the sum of twice the previous Pell number, plus the Pell number before that. The first few terms of the sequence are
0, 1, 2, 5, 12, 29, 70, 169, 408, 985, 2378, 5741, 13860, … (sequence in the OEIS).
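The recurrence translates directly into code; the following sketch is one straightforward way to compute the sequence (the function name is illustrative).

```python
def pell(n):
    """n-th Pell number via the recurrence P_n = 2*P_(n-1) + P_(n-2)."""
    if n == 0:
        return 0
    p_prev, p = 0, 1                      # P_0, P_1
    for _ in range(n - 1):
        p_prev, p = p, 2 * p + p_prev
    return p

print([pell(n) for n in range(12)])
# [0, 1, 2, 5, 12, 29, 70, 169, 408, 985, 2378, 5741]
```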
Analogously to the Binet formula, the Pell numbers can also be expressed by the closed form formula
formula_1
For large values of "n", the term dominates this expression, so the Pell numbers are approximately proportional to powers of the silver ratio , analogous to the growth rate of Fibonacci numbers as powers of the golden ratio.
A third definition is possible, from the matrix formula
formula_2
Many identities can be derived or proven from these definitions; for instance an identity analogous to Cassini's identity for Fibonacci numbers,
formula_3
is an immediate consequence of the matrix formula (found by considering the determinants of the matrices on the left and right sides of the matrix formula).
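Both the matrix formula and the Cassini-like identity can be checked numerically for small "n", as in the sketch below.

```python
def pell_matrix_power(n):
    """Return [[P_(n+1), P_n], [P_n, P_(n-1)]] as the n-th power of [[2,1],[1,0]]."""
    result = [[1, 0], [0, 1]]             # identity matrix
    base = [[2, 1], [1, 0]]
    for _ in range(n):
        result = [[sum(result[i][k] * base[k][j] for k in range(2))
                   for j in range(2)] for i in range(2)]
    return result

for n in range(1, 8):
    (p_next, p), (_, p_prev) = pell_matrix_power(n)
    # Cassini-like identity: P_(n+1)*P_(n-1) - P_n^2 = (-1)^n
    assert p_next * p_prev - p * p == (-1) ** n
print("Cassini-like identity holds for n = 1..7")
```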
Approximation to the square root of two.
Pell numbers arise historically and most notably in the rational approximation to √2. If two large integers "x" and "y" form a solution to the Pell equation
formula_4
then their ratio "" provides a close approximation to √2. The sequence of approximations of this form is
formula_5
where the denominator of each fraction is a Pell number and the numerator is the sum of a Pell number and its predecessor in the sequence. That is, the solutions have the form
formula_6
The approximation
formula_7
of this type was known to Indian mathematicians in the third or fourth century BCE. The Greek mathematicians of the fifth century BCE also knew of this sequence of approximations: Plato refers to the numerators as rational diameters. In the second century CE Theon of Smyrna used the term the side and diameter numbers to describe the denominators and numerators of this sequence.
These approximations can be derived from the continued fraction expansion of formula_8:
formula_9
Truncating this expansion to any number of terms produces one of the Pell-number-based approximations in this sequence; for instance,
formula_10
As Knuth (1994) describes, the fact that Pell numbers approximate √2 allows them to be used for accurate rational approximations to a regular octagon with vertex coordinates (±"Pi", ±"P""i"+1) and (±"P""i"+1, ±"Pi"). All vertices are equally distant from the origin, and form nearly uniform angles around the origin. Alternatively, the points formula_11, formula_12, and formula_13 form approximate octagons in which the vertices are nearly equally distant from the origin and form uniform angles.
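The approximations ("P""n"−1 + "Pn")/"Pn" described above can be generated with exact rational arithmetic, as in the following sketch.

```python
from fractions import Fraction

def sqrt2_convergents(count):
    """Yield the approximations (P_(n-1) + P_n) / P_n to the square root of 2."""
    p_prev, p = 0, 1                      # P_0, P_1
    for _ in range(count):
        yield Fraction(p_prev + p, p)
        p_prev, p = p, 2 * p + p_prev

for approx in sqrt2_convergents(6):
    print(approx, float(approx))
# 1, 3/2, 7/5, 17/12, 41/29, 99/70 -> 1.0, 1.5, 1.4, 1.4166..., 1.4137..., 1.4142...
```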
Primes and squares.
A Pell prime is a Pell number that is prime. The first few Pell primes are
2, 5, 29, 5741, 33461, 44560482149, 1746860020068409, 68480406462161287469, ... (sequence in the OEIS).
The indices of these primes within the sequence of all Pell numbers are
2, 3, 5, 11, 13, 29, 41, 53, 59, 89, 97, 101, 167, 181, 191, 523, 929, 1217, 1301, 1361, 2087, 2273, 2393, 8093, ... (sequence in the OEIS)
These indices are all themselves prime. As with the Fibonacci numbers, a Pell number "P""n" can only be prime if "n" itself is prime, because if "d" is a divisor of "n" then "P""d" is a divisor of "P""n".
The only Pell numbers that are squares, cubes, or any higher power of an integer are 0, 1, and 169 = 132.
However, despite having so few squares or other powers, Pell numbers have a close connection to square triangular numbers. Specifically, these numbers arise from the following identity of Pell numbers:
formula_14
The left side of this identity describes a square number, while the right side describes a triangular number, so the result is a square triangular number.
Falcón and Díaz-Barrero (2006) proved another identity relating Pell numbers to squares and showing that the sum of the Pell numbers up to "P"4"n"+1 is always a square:
formula_15
For instance, the sum of the Pell numbers up to "P"5, 0 + 1 + 2 + 5 + 12 + 29 = 49, is the square of "P"2 + "P"3 = 2 + 5 = 7. The numbers "P"2"n" + "P"2"n"+1 forming the square roots of these sums,
1, 7, 41, 239, 1393, 8119, 47321, … (sequence in the OEIS),
are known as the Newman–Shanks–Williams (NSW) numbers.
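The identity of Falcón and Díaz-Barrero, and the NSW numbers appearing as the square roots of the sums, can be verified for small "n" as in the sketch below.

```python
def pell_list(count):
    """First `count` Pell numbers."""
    seq = [0, 1]
    while len(seq) < count:
        seq.append(2 * seq[-1] + seq[-2])
    return seq[:count]

# Check sum_{i=0}^{4n+1} P_i == (P_{2n} + P_{2n+1})^2 for the first few n.
P = pell_list(30)
for n in range(7):
    assert sum(P[: 4 * n + 2]) == (P[2 * n] + P[2 * n + 1]) ** 2
print([P[2 * n] + P[2 * n + 1] for n in range(7)])  # NSW numbers: 1, 7, 41, 239, ...
```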
Pythagorean triples.
If a right triangle has integer side lengths "a", "b", "c" (necessarily satisfying the Pythagorean theorem "a"2 + "b"2 = "c"2), then ("a","b","c") is known as a Pythagorean triple. As Martin (1875) describes, the Pell numbers can be used to form Pythagorean triples in which "a" and "b" are one unit apart, corresponding to right triangles that are nearly isosceles. Each such triple has the form
formula_16
The sequence of Pythagorean triples formed in this way is
(4,3,5), (20,21,29), (120,119,169), (696,697,985), …
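A short sketch generating these near-isosceles triples from consecutive Pell numbers and checking the defining properties:

```python
def pell_pythagorean_triples(count):
    """Near-isosceles triples (2*P_n*P_(n+1), P_(n+1)^2 - P_n^2, P_(n+1)^2 + P_n^2)."""
    triples = []
    p, p_next = 1, 2                      # P_1, P_2
    for _ in range(count):
        a, b = 2 * p * p_next, p_next ** 2 - p ** 2
        c = p_next ** 2 + p ** 2
        assert a * a + b * b == c * c and abs(a - b) == 1
        triples.append((a, b, c))
        p, p_next = p_next, 2 * p_next + p
    return triples

print(pell_pythagorean_triples(4))
# [(4, 3, 5), (20, 21, 29), (120, 119, 169), (696, 697, 985)]
```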
Pell–Lucas numbers.
The companion Pell numbers or Pell–Lucas numbers are defined by the recurrence relation
formula_17
In words: the first two numbers in the sequence are both 2, and each successive number is formed by adding twice the previous Pell–Lucas number to the Pell–Lucas number before that, or equivalently, by adding the next Pell number to the previous Pell number: thus, 82 is the companion to 29, and 82 = 2 × 34 + 14 = 70 + 12. The first few terms of the sequence are (sequence in the OEIS): 2, 2, 6, 14, 34, 82, 198, 478, …
Like the relationship between Fibonacci numbers and Lucas numbers,
formula_18
for all natural numbers "n".
The companion Pell numbers can be expressed by the closed form formula
formula_19
These numbers are all even; each such number is twice the numerator in one of the rational approximations to formula_8 discussed above.
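A sketch computing the companion Pell numbers from their recurrence and checking the relation "Qn" = "P"2"n"/"Pn" for small "n":

```python
def pell_lucas(count):
    """First `count` companion Pell (Pell-Lucas) numbers Q_n."""
    seq = [2, 2]
    while len(seq) < count:
        seq.append(2 * seq[-1] + seq[-2])
    return seq[:count]

Q = pell_lucas(10)
P = [0, 1]
while len(P) < 21:                        # Pell numbers up to P_20
    P.append(2 * P[-1] + P[-2])

print(Q)                                  # 2, 2, 6, 14, 34, 82, 198, 478, ...
print(all(Q[n] == P[2 * n] // P[n] for n in range(1, 10)))  # Q_n = P_2n / P_n
```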
Like the Lucas sequence, if a Pell–Lucas number "Qn" is prime, it is necessary that "n" be either prime or a power of 2. The Pell–Lucas primes are
3, 7, 17, 41, 239, 577, … (sequence in the OEIS).
For these "n" are
2, 3, 4, 5, 7, 8, 16, 19, 29, 47, 59, 163, 257, 421, … (sequence in the OEIS).
Computations and connections.
The following table gives the first few powers of the silver ratio "δ" = "δ"S = 1 + √2 and its conjugate 1 − √2.
The coefficients are the half-companion Pell numbers "Hn" and the Pell numbers "Pn", which are the (non-negative) solutions to "H"2 − 2"P"2 = ±1.
A square triangular number is a number
formula_20
which is both the "t"-th triangular number and the "s"-th square number. A "near-isosceles Pythagorean triple" is an integer solution to "a"&hairsp;2 + "b"&hairsp;2 = "c"&hairsp;2 where "a" + 1 = "b".
The next table shows that splitting the odd number "Hn" into nearly equal halves gives a square triangular number when "n" is even and a near isosceles Pythagorean triple when "n" is odd. All solutions arise in this manner.
Definitions.
The half-companion Pell numbers "Hn" and the Pell numbers "Pn" can be derived in a number of easily equivalent ways.
formula_21
formula_22
Raising to powers.
From this it follows that there are "closed forms":
formula_23
and
formula_24
formula_25
formula_26
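The paired recurrences for "Hn" and "Pn", together with the alternating invariant "H"2 − 2"P"2 = ±1 discussed below, in a short sketch:

```python
def half_companion_and_pell(count):
    """Pairs (H_n, P_n) from H_n = H_(n-1) + 2*P_(n-1), P_n = H_(n-1) + P_(n-1)."""
    h, p = 1, 0                           # H_0, P_0
    pairs = []
    for _ in range(count):
        pairs.append((h, p))
        h, p = h + 2 * p, h + p
    return pairs

for n, (h, p) in enumerate(half_companion_and_pell(8)):
    # H_n^2 - 2*P_n^2 alternates between +1 (n even) and -1 (n odd)
    assert h * h - 2 * p * p == (-1) ** n
print(half_companion_and_pell(6))
# [(1, 0), (1, 1), (3, 2), (7, 5), (17, 12), (41, 29)]
```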
Reciprocal recurrence formulas.
Let "n" be at least 2.
formula_27
formula_28
formula_29
Matrix formulations.
So
formula_30
Approximations.
The difference between "Hn" and "Pn"√2 is
formula_31
which goes rapidly to zero. So
formula_32
is extremely close to 2"Hn".
From this last observation it follows that the integer ratios "Hn"/"Pn" rapidly approach √2, while "Hn"/"H""n"−1 and "Pn"/"P""n"−1 rapidly approach 1 + √2.
"H"&hairsp;&hairsp;2 − 2"P"&hairsp;&hairsp;2 = ±1.
Since √2 is irrational, we cannot have "" = √2, i.e.,
formula_33
The best we can achieve is either
formula_34
The (non-negative) solutions to "H"2 − 2"P"2 = 1 are exactly the pairs ("Hn", "Pn") with "n" even, and the solutions to "H"2 − 2"P"2 = −1 are exactly the pairs ("Hn", "Pn") with "n" odd. To see this, note first that
formula_35
so that these differences, starting with "H"02 − 2"P"02 = 1, are alternately 1 and −1. Then note that every positive solution comes in this way from a solution with smaller integers since
formula_36
The smaller solution also has positive integers, with the one exception: "H" = "P" = 1 which comes from "H"0 = 1 and "P"0 = 0.
Square triangular numbers.
The required equation
formula_37
is equivalent to formula_38
which becomes "H"&hairsp;&hairsp;2 = 2"P"&hairsp;&hairsp;2 + 1 with the substitutions "H" = 2"t" + 1 and "P" = 2"s". Hence the "n"-th solution is
formula_39
Observe that "t" and "t" + 1 are relatively prime, so that = "s"&hairsp;2 happens exactly when they are adjacent integers, one a square "H"&hairsp;&hairsp;2 and the other twice a square 2"P"&hairsp;&hairsp;2. Since we know all solutions of that equation, we also have
formula_40
and formula_41
This alternate expression is seen in the next table.
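A sketch generating square triangular numbers from the even-index pairs ("H"2"n", "P"2"n") as described above:

```python
def square_triangular(count):
    """Square triangular numbers N_n = t_n*(t_n + 1)/2 = s_n^2 with
    t_n = (H_(2n) - 1)/2 and s_n = P_(2n)/2."""
    h, p = 1, 0                           # H_0, P_0
    out = []
    for n in range(2 * count):
        if n % 2 == 0:                    # (h, p) is currently (H_n, P_n), n even
            t, s = (h - 1) // 2, p // 2
            assert t * (t + 1) // 2 == s * s
            out.append(s * s)
        h, p = h + 2 * p, h + p
    return out

print(square_triangular(5))               # [0, 1, 36, 1225, 41616]
```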
Pythagorean triples.
The equality "c"&hairsp;2 = "a"&hairsp;2 + ("a" + 1)&hairsp;2 = 2"a"&hairsp;2 + 2"a" + 1 occurs exactly when 2"c"&hairsp;2 = 4"a"&hairsp;2 + 4"a" + 2 which becomes 2"P"&hairsp;&hairsp;2 = "H"&hairsp;&hairsp;2 + 1 with the substitutions "H" = 2"a" + 1 and "P" = "c". Hence the "n"-th solution is "an" = and "cn" = "P"2"n"&hairsp;+1.
The table above shows that, in one order or the other, "an" and "bn" = "an" + 1 are "Hn"&hairsp;"H""n"&hairsp;+1 and 2"Pn"&hairsp;"P""n"&hairsp;+1 while "cn" = "H""n"&hairsp;+1&hairsp;"Pn" + "P""n"&hairsp;+1&hairsp;"Hn".
Notes.
<templatestyles src="Reflist/styles.css" />
References.
<templatestyles src="Refbegin/styles.css" /> | [
{
"math_id": 0,
"text": "P_n=\\begin{cases}0&\\mbox{if }n=0;\\\\1&\\mbox{if }n=1;\\\\2P_{n-1}+P_{n-2}&\\mbox{otherwise.}\\end{cases}"
},
{
"math_id": 1,
"text": "P_n=\\frac{\\left(1+\\sqrt2\\right)^n-\\left(1-\\sqrt2\\right)^n}{2\\sqrt2}."
},
{
"math_id": 2,
"text": "\\begin{pmatrix} P_{n+1} & P_n \\\\ P_n & P_{n-1} \\end{pmatrix} = \\begin{pmatrix} 2 & 1 \\\\ 1 & 0 \\end{pmatrix}^n."
},
{
"math_id": 3,
"text": "P_{n+1}P_{n-1}-P_n^2 = (-1)^n,"
},
{
"math_id": 4,
"text": "x^2-2y^2=\\pm 1,"
},
{
"math_id": 5,
"text": "\\frac11, \\frac32, \\frac75, \\frac{17}{12}, \\frac{41}{29}, \\frac{99}{70}, \\dots"
},
{
"math_id": 6,
"text": "\\frac{P_{n-1}+P_n}{P_n}."
},
{
"math_id": 7,
"text": "\\sqrt 2\\approx\\frac{577}{408}"
},
{
"math_id": 8,
"text": "\\sqrt 2"
},
{
"math_id": 9,
"text": "\\sqrt 2 = 1 + \\cfrac{1}{2 + \\cfrac{1}{2 + \\cfrac{1}{2 + \\cfrac{1}{2 + \\cfrac{1}{2 + \\ddots\\,}}}}}."
},
{
"math_id": 10,
"text": "\\frac{577}{408} = 1 + \\cfrac{1}{2 + \\cfrac{1}{2 + \\cfrac{1}{2 + \\cfrac{1}{2 + \\cfrac{1}{2 + \\cfrac{1}{2 + \\cfrac{1}{2}}}}}}}."
},
{
"math_id": 11,
"text": "(\\pm(P_i+P_{i-1}),0)"
},
{
"math_id": 12,
"text": "(0,\\pm(P_i+P_{i-1}))"
},
{
"math_id": 13,
"text": "(\\pm P_i,\\pm P_i)"
},
{
"math_id": 14,
"text": "\\bigl(\\left(P_{k-1}+P_k\\right)\\cdot P_k\\bigr)^2 = \\frac{\\left(P_{k-1}+P_k\\right)^2\\cdot\\left(\\left(P_{k-1}+P_k\\right)^2-(-1)^k\\right)}{2}."
},
{
"math_id": 15,
"text": "\\sum_{i=0}^{4n+1} P_i = \\left(\\sum_{r=0}^n 2^r{2n+1\\choose 2r}\\right)^{\\!2} = \\left(P_{2n}+P_{2n+1}\\right)^2."
},
{
"math_id": 16,
"text": "\\left(2P_{n}P_{n+1}, P_{n+1}^2 - P_{n}^2, P_{n+1}^2 + P_{n}^2=P_{2n+1}\\right)."
},
{
"math_id": 17,
"text": "Q_n=\\begin{cases}2&\\mbox{if }n=0;\\\\2&\\mbox{if }n=1;\\\\2Q_{n-1}+Q_{n-2}&\\mbox{otherwise.}\\end{cases}"
},
{
"math_id": 18,
"text": "Q_n=\\frac{P_{2n}}{P_n}"
},
{
"math_id": 19,
"text": "Q_n=\\left(1+\\sqrt 2\\right)^n+\\left(1-\\sqrt 2\\right)^n."
},
{
"math_id": 20,
"text": "N = \\frac{t(t+1)}{2} = s^2,"
},
{
"math_id": 21,
"text": "\\left(1+\\sqrt2\\right)^n = H_n+P_n\\sqrt{2}"
},
{
"math_id": 22,
"text": "\\left(1-\\sqrt2\\right)^n = H_n-P_n\\sqrt{2}."
},
{
"math_id": 23,
"text": "H_n = \\frac{\\left(1+\\sqrt2\\right)^n+\\left(1-\\sqrt2\\right)^n}{2}."
},
{
"math_id": 24,
"text": "P_n\\sqrt2 = \\frac{\\left(1+\\sqrt2\\right)^n-\\left(1-\\sqrt2\\right)^n}{2}."
},
{
"math_id": 25,
"text": "H_n = \\begin{cases}1&\\mbox{if }n=0;\\\\H_{n-1}+2P_{n-1}&\\mbox{otherwise.}\\end{cases}"
},
{
"math_id": 26,
"text": "P_n = \\begin{cases}0&\\mbox{if }n=0;\\\\H_{n-1}+P_{n-1}&\\mbox{otherwise.}\\end{cases}"
},
{
"math_id": 27,
"text": "H_n = (3P_n-P_{n-2})/2 = 3P_{n-1}+P_{n-2};"
},
{
"math_id": 28,
"text": "P_n = (3H_n-H_{n-2})/4 = (3H_{n-1}+H_{n-2})/2."
},
{
"math_id": 29,
"text": "\\begin{pmatrix} H_n \\\\ P_n \\end{pmatrix} = \\begin{pmatrix} 1 & 2 \\\\ 1 & 1 \\end{pmatrix} \\begin{pmatrix} H_{n-1} \\\\ P_{n-1} \\end{pmatrix} = \\begin{pmatrix} 1 & 2 \\\\ 1 & 1 \\end{pmatrix}^n \\begin{pmatrix} 1 \\\\ 0 \\end{pmatrix}."
},
{
"math_id": 30,
"text": "\\begin{pmatrix} H_n & 2P_n \\\\ P_n & H_n \\end{pmatrix} = \\begin{pmatrix} 1 & 2 \\\\ 1 & 1 \\end{pmatrix}^n ."
},
{
"math_id": 31,
"text": "\\left(1-\\sqrt2\\right)^n \\approx (-0.41421)^n,"
},
{
"math_id": 32,
"text": "\\left(1+\\sqrt2\\right)^n = H_n+P_n\\sqrt2"
},
{
"math_id": 33,
"text": "\\frac{H^2}{P^2} = \\frac{2P^2}{P^2}."
},
{
"math_id": 34,
"text": "\\frac{H^2}{P^2} = \\frac{2P^2-1}{P^2}\\quad \\mbox{or} \\quad \\frac{H^2}{P^2} = \\frac{2P^2+1}{P^2}."
},
{
"math_id": 35,
"text": "H_{n+1}^2-2P_{n+1}^2 = \\left(H_n+2P_n\\right)^2-2\\left(H_n+P_n\\right)^2 = -\\left(H_n^2-2P_n^2\\right),"
},
{
"math_id": 36,
"text": "(2P-H)^2-2(H-P)^2 = -\\left(H^2-2P^2\\right)."
},
{
"math_id": 37,
"text": "\\frac{t(t+1)}{2}=s^2"
},
{
"math_id": 38,
"text": "4t^2+4t+1 = 8s^2+1,"
},
{
"math_id": 39,
"text": "t_n = \\frac{H_{2n}-1}{2} \\quad\\mbox{and}\\quad s_n = \\frac{P_{2n}}{2}."
},
{
"math_id": 40,
"text": "t_n=\\begin{cases}2P_n^2&\\mbox{if }n\\mbox{ is even};\\\\H_{n}^2&\\mbox{if }n\\mbox{ is odd.}\\end{cases}"
},
{
"math_id": 41,
"text": "s_n=H_nP_n."
}
]
| https://en.wikipedia.org/wiki?curid=1355482 |
1355702 | Artin–Tits group | Family of infinite discrete groups
In the mathematical area of group theory, Artin groups, also known as Artin–Tits groups or generalized braid groups, are a family of infinite discrete groups defined by simple presentations. They are closely related with Coxeter groups. Examples are free groups, free abelian groups, braid groups, and right-angled Artin–Tits groups, among others.
The groups are named after Emil Artin, due to his early work on braid groups in the 1920s to 1940s, and Jacques Tits who developed the theory of a more general class of groups in the 1960s.
Definition.
An Artin–Tits presentation is a group presentation formula_0 where formula_1 is a (usually finite) set of generators and formula_2 is a set of Artin–Tits relations, namely relations of the form formula_3 for distinct formula_4 in formula_5, where both sides have equal lengths, and there exists at most one relation for each pair of distinct generators formula_6. An Artin–Tits group is a group that admits an Artin–Tits presentation. Likewise, an Artin–Tits monoid is a monoid that, as a monoid, admits an Artin–Tits presentation.
Alternatively, an Artin–Tits group can be specified by the set of generators formula_7 and, for every formula_8 in formula_7, the natural number formula_9 that is the length of the words formula_10 and formula_11 such that formula_12 is the relation connecting formula_13 and formula_14, if any. By convention, one puts formula_15 when there is no relation formula_12 . Formally, if we define formula_16 to denote an alternating product of formula_13 and formula_14 of length formula_17, beginning with formula_13 — so that formula_18, formula_19, etc. — the Artin–Tits relations take the form
formula_20
The integers formula_21 can be organized into a symmetric matrix, known as the Coxeter matrix of the group.
If formula_22 is an Artin–Tits presentation of an Artin–Tits group formula_23, the quotient of formula_23 obtained by adding the relation formula_24 for each formula_13 of formula_25 is a Coxeter group. Conversely, if formula_26 is a Coxeter group presented by reflections and the relations formula_24 are removed, the extension thus obtained is an Artin–Tits group. For instance, the Coxeter group associated with the formula_27-strand braid group is the symmetric group of all permutations of formula_28.
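As a small illustration of the definition, the sketch below builds the relation words ⟨"s", "t"⟩"m" = ⟨"t", "s"⟩"m" from a Coxeter matrix supplied as a dictionary; the dictionary encoding and the generator names are illustrative choices, not a standard interface.

```python
def alternating_word(s, t, m):
    """The word <s, t>^m: alternating product of s and t of length m, starting with s."""
    return [s if i % 2 == 0 else t for i in range(m)]

def artin_tits_relations(coxeter):
    """Artin-Tits relations <s,t>^m = <t,s>^m read off a Coxeter matrix given
    as a dict {(s, t): m}; entries with m = None (infinity) give no relation."""
    rels = []
    for (s, t), m in coxeter.items():
        if m is not None:
            rels.append((alternating_word(s, t, m), alternating_word(t, s, m)))
    return rels

# Coxeter matrix entry of the 3-strand braid group: m(s1, s2) = 3.
for lhs, rhs in artin_tits_relations({("s1", "s2"): 3}):
    print("".join(lhs), "=", "".join(rhs))   # s1s2s1 = s2s1s2
```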
General properties.
Artin–Tits monoids are eligible for Garside methods based on the investigation of their divisibility relations, and are well understood:
Very few results are known for general Artin–Tits groups. In particular, the following basic questions remain open in the general case:
– solving the word and conjugacy problems — which are conjectured to be decidable,
– determining torsion — which is conjectured to be trivial,
– determining the center — which is conjectured to be trivial or monogenic in the case when the group is not a direct product ("irreducible case"),
– determining the cohomology — in particular solving the formula_39 conjecture, i.e., finding an aspherical complex whose fundamental group is the considered group.
Partial results involving particular subfamilies are gathered below. Among the few known general results, one can mention:
Particular classes of Artin–Tits groups.
Several important classes of Artin groups can be defined in terms of the properties of the Coxeter matrix.
Other types.
Many other families of Artin–Tits groups have been identified and investigated. Here we mention two of them.
| [
{
"math_id": 0,
"text": " \\langle S \\mid R \\rangle "
},
{
"math_id": 1,
"text": " S "
},
{
"math_id": 2,
"text": " R "
},
{
"math_id": 3,
"text": " stst\\ldots = tsts\\ldots "
},
{
"math_id": 4,
"text": " s, t "
},
{
"math_id": 5,
"text": " S"
},
{
"math_id": 6,
"text": " s, t"
},
{
"math_id": 7,
"text": "S"
},
{
"math_id": 8,
"text": "s, t"
},
{
"math_id": 9,
"text": " m_{s,t} \\geqslant 2 "
},
{
"math_id": 10,
"text": "stst\\ldots"
},
{
"math_id": 11,
"text": "tsts\\ldots"
},
{
"math_id": 12,
"text": "stst\\ldots = tsts\\ldots"
},
{
"math_id": 13,
"text": "s"
},
{
"math_id": 14,
"text": "t"
},
{
"math_id": 15,
"text": "m_{s,t} = \\infty"
},
{
"math_id": 16,
"text": "\\langle s, t \\rangle^m"
},
{
"math_id": 17,
"text": "m"
},
{
"math_id": 18,
"text": "\\langle s, t \\rangle^2 = st"
},
{
"math_id": 19,
"text": "\\langle s, t \\rangle^3 = sts"
},
{
"math_id": 20,
"text": "\\langle s, t \\rangle^{m_{s,t}} = \\langle t, s \\rangle^{m_{t, s}}, \\text{ where } m_{s, t} = m_{t, s} \\in \\{2,3,\\ldots, \\infty\\}."
},
{
"math_id": 21,
"text": "m_{s, t}"
},
{
"math_id": 22,
"text": "\\langle S \\mid R\\rangle"
},
{
"math_id": 23,
"text": "A"
},
{
"math_id": 24,
"text": "s^2 = 1"
},
{
"math_id": 25,
"text": "R"
},
{
"math_id": 26,
"text": "W"
},
{
"math_id": 27,
"text": "n"
},
{
"math_id": 28,
"text": "\\{1, \\ldots, n\\}"
},
{
"math_id": 29,
"text": "G = \\langle S \\mid \\emptyset\\rangle"
},
{
"math_id": 30,
"text": "G = \\langle S \\mid \\{st=ts \\mid s, t \\in S\\} \\rangle"
},
{
"math_id": 31,
"text": "m_{s,t} = 2"
},
{
"math_id": 32,
"text": "G = \\langle \\sigma_1, \\ldots, \\sigma_{n-1} \\mid \\sigma_i\\sigma_j\\sigma_i = \\sigma_j\\sigma_i\\sigma_j \\text{ for } \\vert i - j\\vert = 1, \\sigma_i \\sigma_j = \\sigma_j\\sigma_i \\text{ for } \\vert i - j\\vert \\geqslant 2 \\rangle"
},
{
"math_id": 33,
"text": "m_{\\sigma_i,\\sigma_j} = 3"
},
{
"math_id": 34,
"text": "\\vert i - j\\vert = 1"
},
{
"math_id": 35,
"text": "m_{\\sigma_i,\\sigma_j} = 2"
},
{
"math_id": 36,
"text": "\\vert i - j\\vert > 1"
},
{
"math_id": 37,
"text": "A^+"
},
{
"math_id": 38,
"text": "\\sigma"
},
{
"math_id": 39,
"text": "K(\\pi, 1)"
},
{
"math_id": 40,
"text": "s^2t^2 = t^2s^2"
},
{
"math_id": 41,
"text": "st = ts"
},
{
"math_id": 42,
"text": "A_n"
},
{
"math_id": 43,
"text": "B_n"
},
{
"math_id": 44,
"text": "D_n"
},
{
"math_id": 45,
"text": "I_2(n)"
},
{
"math_id": 46,
"text": "E_6"
},
{
"math_id": 47,
"text": "E_7"
},
{
"math_id": 48,
"text": "E_8"
},
{
"math_id": 49,
"text": "F_4"
},
{
"math_id": 50,
"text": "H_3"
},
{
"math_id": 51,
"text": "H_4"
},
{
"math_id": 52,
"text": "\\Complex^n"
},
{
"math_id": 53,
"text": "2"
},
{
"math_id": 54,
"text": "\\infty"
},
{
"math_id": 55,
"text": " st = ts"
},
{
"math_id": 56,
"text": "\\Gamma"
},
{
"math_id": 57,
"text": "1, 2, \\ldots, n"
},
{
"math_id": 58,
"text": "M"
},
{
"math_id": 59,
"text": "m_{s, t} = 2"
},
{
"math_id": 60,
"text": "m_{s, t} = \\infty"
},
{
"math_id": 61,
"text": "r - 1"
},
{
"math_id": 62,
"text": "m_{s, t} \\geqslant 3"
},
{
"math_id": 63,
"text": " s \\neq t"
},
{
"math_id": 64,
"text": "m_{s, t} \\geqslant 4"
},
{
"math_id": 65,
"text": "\\langle S \\mid R \\rangle"
},
{
"math_id": 66,
"text": "S'"
},
{
"math_id": 67,
"text": "m_{s, t} \\neq \\infty"
},
{
"math_id": 68,
"text": "\\langle S' \\mid R \\cap S'{}^2 \\rangle"
},
{
"math_id": 69,
"text": "\\widetilde{A}_n"
},
{
"math_id": 70,
"text": "n \\geqslant 1"
},
{
"math_id": 71,
"text": "\\widetilde{B}_n"
},
{
"math_id": 72,
"text": "\\widetilde{C}_n"
},
{
"math_id": 73,
"text": "n \\geqslant 2"
},
{
"math_id": 74,
"text": "\\widetilde{D}_n"
},
{
"math_id": 75,
"text": "n \\geqslant 3"
},
{
"math_id": 76,
"text": "\\widetilde{E}_6"
},
{
"math_id": 77,
"text": "\\widetilde{E}_7"
},
{
"math_id": 78,
"text": "\\widetilde{E}_8"
},
{
"math_id": 79,
"text": "\\widetilde{F}_4"
},
{
"math_id": 80,
"text": "\\widetilde{G}_2"
}
]
| https://en.wikipedia.org/wiki?curid=1355702 |
1355939 | Price index | Type of normalized average of prices
A price index ("plural": "price indices" or "price indexes") is a normalized average (typically a weighted average) of price relatives for a given class of goods or services in a given region, during a given interval of time. It is a statistic designed to help to compare how these price relatives, taken as a whole, differ between time periods or geographical locations.
Price indices have several potential uses. For particularly broad indices, the index can be said to measure the economy's general price level or cost of living. More narrow price indices can help producers with business plans and pricing. Sometimes, they can be useful in helping to guide investment.
Some notable price indices include:
History of early price indices.
No clear consensus has emerged on who created the first price index. The earliest reported research in this area came from Welshman Rice Vaughan, who examined price level change in his 1675 book "A Discourse of Coin and Coinage". Vaughan wanted to separate the inflationary impact of the influx of precious metals brought by Spain from the New World from the effect due to currency debasement. Vaughan compared labor statutes from his own time to similar statutes dating back to Edward III. These statutes set wages for certain tasks and provided a good record of the change in wage levels. Vaughan reasoned that the market for basic labor did not fluctuate much with time and that a basic laborer's salary would probably buy the same amount of goods in different time periods, so that a laborer's salary acted as a basket of goods. Vaughan's analysis indicated that price levels in England had risen six- to eight-fold over the preceding century.
While Vaughan can be considered a forerunner of price index research, his analysis did not actually involve calculating an index. In 1707, Englishman William Fleetwood created perhaps the first true price index. An Oxford student asked Fleetwood to help show how prices had changed. The student stood to lose his fellowship since a 15th-century stipulation barred students with annual incomes over five pounds from receiving a fellowship. Fleetwood, who already had an interest in price change, had collected a large amount of price data going back hundreds of years. Fleetwood proposed an index consisting of averaged price relatives and used his methods to show that the value of five pounds had changed greatly over the course of 260 years. He argued on behalf of the Oxford students and published his findings anonymously in a volume entitled "Chronicon Preciosum".
Formal calculation.
Given a set formula_0 of goods and services, the total market value of transactions in formula_0 in some period formula_1 would be
formula_2
where
formula_3 represents the prevailing price of formula_4 in period formula_1
formula_5 represents the quantity of formula_4 sold in period formula_1
If, across two periods formula_6 and formula_7, the same quantities of each good or service were sold, but under different prices, then
formula_8
and
formula_9
would be a reasonable measure of the price of the set in one period relative to that in the other, and would provide an index measuring relative prices overall, weighted by quantities sold.
Of course, for any practical purpose, quantities purchased are rarely if ever identical across any two periods. As such, this is not a very practical index formula.
One might be tempted to modify the formula slightly to
formula_10
This new index, however, does not do anything to distinguish growth or reduction in quantities sold from price changes. To see that this is so, consider what happens if all the prices double between formula_6 and formula_7, while quantities stay the same: formula_11 will double. Now consider what happens if all the "quantities" double between formula_6 and formula_7 while all the "prices" stay the same: formula_11 will double. In either case, the change in formula_11 is identical. As such, formula_11 is as much a "quantity" index as it is a "price" index.
Various indices have been constructed in an attempt to compensate for this difficulty.
Paasche and Laspeyres price indices.
The two most basic formulae used to calculate price indices are the Paasche index (after the economist Hermann Paasche) and the Laspeyres index (after the economist Etienne Laspeyres).
The Paasche index is computed as
formula_12
while the Laspeyres index is computed as
formula_13
where formula_11 is the relative index of the price levels in two periods, formula_6 is the base period (usually the first year), and formula_7 the period for which the index is computed.
Note that the only difference in the formulas is that the former uses period n quantities, whereas the latter uses base period (period 0) quantities. A helpful mnemonic device to remember which index uses which period is that L comes before P in the alphabet so the Laspeyres index uses the earlier base quantities and the Paasche index the final quantities.
When applied to bundles of individual consumers, a Laspeyres index of 1 would state that an agent in the current period can afford to buy the same bundle as she consumed in the previous period, given that income has not changed; a Paasche index of 1 would state that an agent could have consumed the same bundle in the base period as she is consuming in the current period, given that income has not changed.
Hence, one may think of the Paasche index as one where the numeraire is the bundle of goods using current year prices and current year quantities. Similarly, the Laspeyres index can be thought of as a price index taking the bundle of goods using current prices and base period quantities as the numeraire.
The Laspeyres index tends to overstate inflation (in a cost of living framework), while the Paasche index tends to understate it, because the indices do not account for the fact that consumers typically react to price changes by changing the quantities that they buy. For example, if prices go up for good formula_4 then, "ceteris paribus", quantities demanded of that good should go down.
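A small numerical example makes the difference concrete. The Python sketch below uses made-up data for a two-good economy in which the price of one good rises and consumers shift their purchases toward the other; the Laspeyres index then exceeds the Paasche index, as described above:
```python
def laspeyres(p0, pn, q0):
    """Laspeyres index: period-n prices weighted by base-period quantities."""
    return sum(pn[c] * q0[c] for c in p0) / sum(p0[c] * q0[c] for c in p0)

def paasche(p0, pn, qn):
    """Paasche index: period-n prices weighted by period-n quantities."""
    return sum(pn[c] * qn[c] for c in p0) / sum(p0[c] * qn[c] for c in p0)

# Hypothetical data: apples become more expensive, so buyers substitute bread.
p0 = {"apples": 1.00, "bread": 2.00}   # base-period prices
q0 = {"apples": 10,   "bread": 5}      # base-period quantities
pn = {"apples": 1.50, "bread": 2.00}   # period-n prices
qn = {"apples": 6,    "bread": 8}      # period-n quantities

print(round(laspeyres(p0, pn, q0), 4))  # 1.25   (tends to overstate the price rise)
print(round(paasche(p0, pn, qn), 4))    # 1.1364 (tends to understate it)
```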
Lowe indices.
Many price indices are calculated with the Lowe index procedure. In a Lowe price index, the expenditure or quantity weights associated with each item are not drawn from each indexed period. Usually they are inherited from an earlier period, which is sometimes called the expenditure base period. Generally, the expenditure weights are updated occasionally, but the prices are updated in every period. Prices are drawn from the time period the index is supposed to summarize. Lowe indices are named for economist Joseph Lowe. Most CPIs and employment cost indices from Statistics Canada, the U.S. Bureau of Labor Statistics, and many other national statistics offices are Lowe indices. A Lowe index is sometimes called a "modified Laspeyres index", where the principal modification is to draw quantity weights less frequently than every period. For a consumer price index, the weights on various kinds of expenditure are generally computed from surveys of households asking about their budgets, and such surveys are less frequent than price data collection is. Another phrasing is that Laspeyres and Paasche indices are special cases of Lowe indices in which all price and quantity data are updated every period.
Comparisons of output between countries often use Lowe quantity indexes. The Geary-Khamis method used in the World Bank's International Comparison Program is of this type. Here the quantity data are updated each period from each of multiple countries, whereas the prices incorporated are kept the same for some period of time, e.g. the "average prices for the group of countries".
Fisher index and Marshall–Edgeworth index.
The Marshall–Edgeworth index (named for economists Alfred Marshall and Francis Ysidro Edgeworth) tries to overcome the problems of over- and understatement by the Laspeyres and Paasche indexes by using the arithmetic means of the quantities:
formula_14
The Fisher index (named for economist Irving Fisher), also known as the Fisher ideal index, is calculated as the geometric mean of formula_15 and formula_16:
formula_17
All these indices provide some overall measurement of relative prices between time periods or locations.
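Using the same hypothetical two-good data as in the sketch above, both of these indices fall between the Laspeyres and Paasche values; a self-contained Python check:
```python
from math import sqrt

p0 = {"apples": 1.00, "bread": 2.00}
q0 = {"apples": 10,   "bread": 5}
pn = {"apples": 1.50, "bread": 2.00}
qn = {"apples": 6,    "bread": 8}

laspeyres = sum(pn[c] * q0[c] for c in p0) / sum(p0[c] * q0[c] for c in p0)
paasche   = sum(pn[c] * qn[c] for c in p0) / sum(p0[c] * qn[c] for c in p0)

fisher = sqrt(laspeyres * paasche)   # geometric mean of the two
marshall_edgeworth = (sum(pn[c] * (q0[c] + qn[c]) for c in p0)
                      / sum(p0[c] * (q0[c] + qn[c]) for c in p0))

print(round(fisher, 4))              # 1.1918
print(round(marshall_edgeworth, 4))  # 1.1905
```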
Practical measurement considerations.
Normalizing index numbers.
Price indices are represented as index numbers, number values that indicate relative change but not absolute values (i.e. one price index value can be compared to another or a base, but the number alone has no meaning). Price indices generally select a base year and make that index value equal to 100. Every other year is expressed as a percentage of that base year. In this example, let 2000 be the base year:
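A minimal Python sketch of the rescaling step, with hypothetical raw index values chosen so that the normalized series matches the figures discussed below:
```python
def normalize(series, base_year):
    """Rescale an index series so that the base year takes the value 100 (rounded to 2 decimals)."""
    base = series[base_year]
    return {year: round(100 * value / base, 2) for year, value in series.items()}

raw = {2000: 2.50, 2001: 2.60, 2002: 2.70, 2003: 2.80}   # made-up raw index values
print(normalize(raw, 2000))  # {2000: 100.0, 2001: 104.0, 2002: 108.0, 2003: 112.0}
```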
When an index has been normalized in this manner, the meaning of the number 112, for instance, is that the total cost for the basket of goods is 4% more in 2001 than in the base year (in this case, year 2000), 8% more in 2002, and 12% more in 2003.
Relative ease of calculating the Laspeyres index.
As can be seen from the definitions above, if one already has price and quantity data (or, alternatively, price and expenditure data) for the base period, then calculating the Laspeyres index for a new period requires only new price data. In contrast, calculating many other indices (e.g., the Paasche index) for a new period requires both new price data and new quantity data (or alternatively, both new price data and new expenditure data) for each new period. Collecting only new price data is often easier than collecting both new price data and new quantity data, so calculating the Laspeyres index for a new period tends to require less time and effort than calculating these other indices for a new period.
In practice, price indices regularly compiled and released by national statistical agencies are of the Laspeyres type, due to the above-mentioned difficulties in obtaining current-period quantity or expenditure data.
Calculating indices from expenditure data.
Sometimes, especially for aggregate data, expenditure data are more readily available than quantity data. For these cases, the indices can be formulated in terms of relative prices and base year expenditures, rather than quantities.
Here is a reformulation for the Laspeyres index:
Let formula_18 be the total expenditure on good c in the base period, then (by definition) we have
formula_19
and therefore also
formula_20.
We can substitute these values into our Laspeyres formula as follows:
formula_21
A similar transformation can be made for any index.
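In code, this reformulation lets the Laspeyres index be computed from price relatives and base-period expenditures alone; a sketch using the hypothetical two-good data from above:
```python
def laspeyres_from_expenditures(price_relatives, base_expenditures):
    """Laspeyres index as a base-expenditure-weighted average of price relatives.

    price_relatives[c]   = p_{c,t_n} / p_{c,t_0}
    base_expenditures[c] = E_{c,t_0} = p_{c,t_0} * q_{c,t_0}
    """
    total = sum(base_expenditures.values())
    return sum(price_relatives[c] * base_expenditures[c]
               for c in base_expenditures) / total

relatives    = {"apples": 1.50 / 1.00, "bread": 2.00 / 2.00}
expenditures = {"apples": 1.00 * 10,   "bread": 2.00 * 5}
print(laspeyres_from_expenditures(relatives, expenditures))  # 1.25, matching the direct formula
```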
Calculating indices from real estate data.
There are three methods commonly used for building transaction-based real estate indices: 1) hedonic, 2) repeat-sales and 3) hybrid, a combination of 1 and 2. The hedonic approach builds housing price indices, for example, by using time-variable hedonic and cross-sectional hedonic models. In the hedonic model, the prices of housing (or other forms of property) are regressed on the properties' characteristics, estimated either on pooled property transaction data with time dummies as additional regressors or on a period-by-period basis.
In the case of the repeat-sales method, there are two approaches to calculation: the original repeat-sales and the weighted repeat-sales models. The repeat-sales method standardizes properties' characteristics by analysing properties that have been sold at least twice. It is a variant of the hedonic model, with the difference that hedonic characteristics are excluded, since the method assumes a property's characteristics remain unchanged across periods. The hybrid method combines features of the hedonic and repeat-sales techniques to construct real estate price indices. The idea was originated by Case et al. and has undergone many changes since then. Variants include 1) the Quigley model, 2) the Hill, Knight and Sirmans model, and 3) the Englund, Quigley and Redfearn model. The most commonly used real estate indices are constructed with the repeat-sales method.
Chained vs unchained calculations.
The above price indices were calculated relative to a fixed base period. An alternative is to take the base period for each time period to be the immediately preceding time period. This can be done with any of the above indices. Here is an example with the Laspeyres index, where formula_7 is the period for which we wish to calculate the index and formula_6 is a reference period that anchors the value of the series:
formula_22
Each term
formula_23
answers the question "by what factor have prices increased between period formula_24 and period formula_7". These are multiplied together to answer the question "by what factor have prices increased since period formula_6". The index is then the result of these multiplications, and gives the price relative to period formula_6 prices.
Chaining is defined for a quantity index just as it is for a price index.
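A sketch of a chained Laspeyres computation, multiplying one-period links together; the data are made up, and each period's link uses the immediately preceding period's quantities as weights:
```python
def chained_laspeyres(prices, quantities):
    """Chain one-period Laspeyres links; prices and quantities are lists of dicts, one per period."""
    index = 1.0
    for t in range(1, len(prices)):
        p_prev, p_curr, q_prev = prices[t - 1], prices[t], quantities[t - 1]
        link = (sum(p_curr[c] * q_prev[c] for c in p_prev)
                / sum(p_prev[c] * q_prev[c] for c in p_prev))
        index *= link
    return index

prices     = [{"apples": 1.00}, {"apples": 1.10}, {"apples": 1.21}]
quantities = [{"apples": 10},   {"apples": 9},    {"apples": 8}]
print(round(chained_laspeyres(prices, quantities), 4))  # 1.21 relative to the first period
```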
Index number theory.
Price index formulas can be evaluated based on their relation to economic concepts (like cost of living) or on their mathematical properties. Several different tests of such properties have been proposed in index number theory literature. W.E. Diewert summarized past research in a list of nine such tests for a price index formula_25, where formula_26 and formula_27 are vectors giving prices for a base period and a reference period while formula_28 and formula_29 give quantities for these periods.
Quality change.
Price indices often capture changes in price and quantities for goods and services, but they often fail to account for variation in the quality of goods and services. This could be overcome if the principal method for relating price and quality, namely hedonic regression, could be reversed. Then quality change could be calculated from the price. Instead, statistical agencies generally use "matched-model" price indices, where one model of a particular good is priced at the same store at regular time intervals. The matched-model method becomes problematic when statistical agencies try to use this method on goods and services with rapid turnover in quality features. For instance, computers rapidly improve and a specific model may quickly become obsolete. Statisticians constructing matched-model price indices must decide how to compare the price of the obsolete item originally used in the index with the new and improved item that replaces it. Statistical agencies use several different methods to make such price comparisons.
The problem discussed above can be represented as attempting to bridge the gap between the price for the old item at time t, formula_40, with the price of the new item at the later time period, formula_41.
| [
{
"math_id": 0,
"text": "C"
},
{
"math_id": 1,
"text": "t"
},
{
"math_id": 2,
"text": "\\sum_{c\\,\\in\\, C} (p_{c,t}\\cdot q_{c,t})"
},
{
"math_id": 3,
"text": "p_{c,t}\\,"
},
{
"math_id": 4,
"text": "c"
},
{
"math_id": 5,
"text": "q_{c,t}\\, "
},
{
"math_id": 6,
"text": "t_0"
},
{
"math_id": 7,
"text": "t_n"
},
{
"math_id": 8,
"text": "q_{c,t_n}=q_c=q_{c,t_0}\\, \\forall c"
},
{
"math_id": 9,
"text": "P=\\frac{\\sum (p_{c,t_n}\\cdot q_c)}{\\sum (p_{c,t_0}\\cdot q_c)}"
},
{
"math_id": 10,
"text": "P=\\frac{\\sum (p_{c,t_n}\\cdot q_{c,t_n})}{\\sum (p_{c,t_0}\\cdot q_{c,t_0})}"
},
{
"math_id": 11,
"text": "P"
},
{
"math_id": 12,
"text": "P_P=\\frac{\\sum (p_{c,t_n}\\cdot q_{c,t_n})}{\\sum (p_{c,t_0}\\cdot q_{c,t_n})}"
},
{
"math_id": 13,
"text": "P_L=\\frac{\\sum (p_{c,t_n}\\cdot q_{c,t_0})}{\\sum (p_{c,t_0}\\cdot q_{c,t_0})}"
},
{
"math_id": 14,
"text": "P_{ME}=\\frac{\\sum [p_{c,t_n}\\cdot \\frac{1}{2}\\cdot(q_{c,t_0}+q_{c,t_n})]}{\\sum [p_{c,t_0}\\cdot \\frac{1}{2}\\cdot(q_{c,t_0}+q_{c,t_n})]}=\\frac{\\sum [p_{c,t_n}\\cdot (q_{c,t_0}+q_{c,t_n})]}{\\sum [p_{c,t_0}\\cdot (q_{c,t_0}+q_{c,t_n})]}"
},
{
"math_id": 15,
"text": "P_P"
},
{
"math_id": 16,
"text": "P_L"
},
{
"math_id": 17,
"text": "P_F = \\sqrt{P_P\\cdot P_L}"
},
{
"math_id": 18,
"text": "E_{c,t_0}"
},
{
"math_id": 19,
"text": "E_{c,t_0} = p_{c,t_0}\\cdot q_{c,t_0}"
},
{
"math_id": 20,
"text": "\\frac{E_{c,t_0}}{p_{c,t_0}} = q_{c,t_0}"
},
{
"math_id": 21,
"text": "\nP_L\n=\\frac{\\sum (p_{c,t_n}\\cdot q_{c,t_0})}{\\sum (p_{c,t_0}\\cdot q_{c,t_0})}\n=\\frac{\\sum (p_{c,t_n}\\cdot \\frac{E_{c,t_0}}{p_{c,t_0}})}{\\sum E_{c,t_0}}\n=\\frac{\\sum (\\frac{p_{c,t_n}}{p_{c,t_0}} \\cdot E_{c,t_0})}{\\sum E_{c,t_0}}\n"
},
{
"math_id": 22,
"text": "\nP_{t_n}=\n\\frac{\\sum (p_{c,t_1}\\cdot q_{c,t_0})}{\\sum (p_{c,t_0}\\cdot q_{c,t_0})}\n\\times\n\\frac{\\sum (p_{c,t_2}\\cdot q_{c,t_1})}{\\sum (p_{c,t_1}\\cdot q_{c,t_1})}\n\\times\n\\cdots\n\\times\n\\frac{\\sum (p_{c,t_n}\\cdot q_{c,t_{n-1}})}{\\sum (p_{c,t_{n-1}}\\cdot q_{c,t_{n-1}})}\n"
},
{
"math_id": 23,
"text": "\\frac{\\sum (p_{c,t_n}\\cdot q_{c,t_{n-1}})}{\\sum (p_{c,t_{n-1}}\\cdot q_{c,t_{n-1}})}"
},
{
"math_id": 24,
"text": "t_{n-1}"
},
{
"math_id": 25,
"text": "I(P_{t_0}, P_{t_m}, Q_{t_0}, Q_{t_m})"
},
{
"math_id": 26,
"text": "P_{t_0}"
},
{
"math_id": 27,
"text": "P_{t_m}"
},
{
"math_id": 28,
"text": "Q_{t_0}"
},
{
"math_id": 29,
"text": "Q_{t_m}"
},
{
"math_id": 30,
"text": "I(p_{t_m},p_{t_n},\\alpha \\cdot q_{t_m},\\beta\\cdot q_{t_n})=1~~\\forall (\\alpha ,\\beta )\\in (0,\\infty )^2"
},
{
"math_id": 31,
"text": "\\alpha"
},
{
"math_id": 32,
"text": "\\beta"
},
{
"math_id": 33,
"text": "I(p_{t_m},\\alpha \\cdot p_{t_n},q_{t_m},q_{t_n})=\\alpha \\cdot I(p_{t_m},p_{t_n},q_{t_m},q_{t_n})"
},
{
"math_id": 34,
"text": "I(\\alpha \\cdot p_{t_m},\\alpha \\cdot p_{t_n},\\beta \\cdot q_{t_m}, \\gamma \\cdot q_{t_n})=I(p_{t_m},p_{t_n},q_{t_m},q_{t_n})~~\\forall (\\alpha,\\beta,\\gamma)\\in(0,\\infty )^3"
},
{
"math_id": 35,
"text": "I(p_{t_n},p_{t_m},q_{t_n},q_{t_m})=\\frac{1}{I(p_{t_m},p_{t_n},q_{t_m},q_{t_n})}"
},
{
"math_id": 36,
"text": "I(p_{t_m},p_{t_n},q_{t_m},q_{t_n}) \\le I(p_{t_m},p_{t_r},q_{t_m},q_{t_r})~~\\Leftarrow~~p_{t_n} \\le p_{t_r}"
},
{
"math_id": 37,
"text": "I(p_{t_m},p_{t_n},q_{t_m},q_{t_n}) \\cdot I(p_{t_n},p_{t_r},q_{t_n},q_{t_r})=I(p_{t_m},p_{t_r},q_{t_m},q_{t_r})~~\\Leftarrow~~t_m \\le t_n \\le t_r"
},
{
"math_id": 38,
"text": "t_m"
},
{
"math_id": 39,
"text": "t_r"
},
{
"math_id": 40,
"text": "P(M)_{t}"
},
{
"math_id": 41,
"text": "P(N)_{t+1}"
},
{
"math_id": 42,
"text": "{P(N)_{t+1}}"
},
{
"math_id": 43,
"text": "{P(N)_{t}}"
},
{
"math_id": 44,
"text": "P(M)_t"
}
]
| https://en.wikipedia.org/wiki?curid=1355939 |
1356209 | Certificate of Entitlement | Document entitling a person to own a motorised vehicle in Singapore
The Certificate of Entitlement (COE) is a quota licence, issued in several vehicle categories, required for owning a vehicle in Singapore. The licence is obtained through a successful bid in an open uniform-price auction and grants the holder the legal right to register, own and use a vehicle in Singapore for an initial period of 10 years. When demand is high, the cost of a COE can exceed the value of the car itself. The COE system was implemented in 1990 to regulate the number of vehicles on the road and control traffic congestion, especially in a land-constrained country such as Singapore.
History.
On 1 May 1990, the then transportation unit of Singapore's Public Works Department (PWD) instituted a quota limit on vehicles called the COE, as rising affluence in the country sharply increased land transport network usage and the previous measure of curbing vehicle ownership by simply increasing road taxes had proven ineffective in controlling vehicle population growth.
The premise was that the country had limited land resources, i.e. a limited supply of roads and car parks (with scarce land managed to place greater emphasis on providing an adequate supply of homes), and that demand for vehicle ownership spiralling out of control would push traffic conditions beyond what a healthy road network, sustained by developments in land transport infrastructure, could support, resulting in gridlock.
Along with a congestion tax called the Electronic Road Pricing (ERP), the COE system is one of the key pillars of Singapore's traffic management strategy, which aims to provide a sustainable urban quality of life. With the COE and the ERP in place, the government has encouraged its citizens and tourists alike to take advantage of the extensive public transportation network to get around the country instead, such as the Mass Rapid Transit (MRT), Light Rail Transit (LRT) or public buses, and to embrace a "car-lite society".
System.
Before buying a new vehicle, potential vehicle owners in Singapore are required by the Land Transport Authority (LTA) to first place a monetary bid for a Certificate of Entitlement (COE). The number of available COEs is governed by a quota system called the Vehicle Quota System (VQS) and is announced by LTA in April of each year with a review in October for possible adjustments for the period of one year starting from May. Approximately one-twelfth of the yearly quota is auctioned off each month in a sealed-bid, uniform price auction system and successful bidders pay the lowest winning bid.
Vehicle Quota System (VQS).
The number of COEs available to the public is regulated by the Vehicle Quota System (VQS) that is calculated every 6 months based on the following conditions:
Formula.
Since the change in the total motor vehicle population is given by the number of registrations minus the number of de-registrations and any unallocated quota in a given year may be carried over to the following year, the quota formula is as follows:
formula_0
In the formula above, the subscript formula_1 denotes calendar year and the subscript formula_2 denotes quota year (May to April). Initially, projected de-registrations for (calendar) year formula_1 were simply taken to be equal to actual de-registrations in formula_3 but from quota year 1999–2000 onwards, a projected number of de-registrations has been used.
Each year, the quota is set to allow for a targeted formula_4 percent growth in the total motor vehicle population, plus additional quota licenses to cover the number of motor vehicles that will be deregistered during the (calendar) year, plus any unallocated quota licenses from the previous quota year.
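A direct transcription of the quota formula into Python; the parameter names are ad hoc and the numbers below are illustrative only, not actual LTA figures:
```python
def coe_quota(vehicle_population_prev, projected_deregistrations,
              unallocated_prev, growth_rate):
    """Total COE quota for a quota year: targeted growth in the vehicle population,
    plus projected de-registrations, plus any unallocated quota carried over."""
    return (growth_rate * vehicle_population_prev
            + projected_deregistrations
            + unallocated_prev)

print(coe_quota(vehicle_population_prev=950_000,
                projected_deregistrations=90_000,
                unallocated_prev=500,
                growth_rate=0.005))   # 95250.0
```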
Validity.
The holder of a COE is allowed to own a vehicle for an initial period of 10 years, after which they must scrap or export their vehicle, or pay for another COE at the prevailing rate if they wish to continue using it for the remainder of its lifespan.
At the end of the 10-year COE period, vehicle owners may choose to deregister their vehicle or to revalidate their COEs for another 5-year or 10-year period by paying the Prevailing Quota Premium, which is the three-month moving average of the Quota Premium for the respective vehicle category. Owners do not need to bid for a new COE to renew the existing COE of their vehicle. A 5-year COE cannot be further renewed, which means that at the end of a 5-year COE, the vehicle has to be de-registered and either scrapped or exported out of Singapore.
Depending on the value of the COE at the time of renewal, vehicle owners face a dilemma: pay for a new COE, which can amount to more than the market value of the vehicle, or deregister the vehicle. The dilemma is sharpened when an owner is forced to deregister and scrap an otherwise roadworthy vehicle for lack of time or of funds to pay for the COE at the prevailing rate.
For a comparison of vehicle value to COE value, a second-hand 2007 Mercedes-Benz C200K with a COE expiring in 2017 was advertised at S$86,800. As of November 2013, the COE for a Category B car (engine displacement above 1,600 cc) was priced at S$84,578.
Auction process.
COE bidding starts on the first and third Monday of the month and typically lasts for three days, until the following Wednesday. The bidding period is extended in some circumstances, including public holidays. Bidding results can be obtained through the local media on the same day or on a website.
All COE bids made in the two car categories (Cat A and B COEs) and the motorcycle category (Cat D COEs) must be made in the name of the buyer. Once a COE is obtained, the vehicle has to be registered in the name of the bidder, i.e. Cat A, B and D COEs are non-transferable. To provide flexibility, successful COE bids in Cat C (goods vehicles and buses) and Cat E (Open Category) made in the name of individuals are transferable. However, these can only be transferred once within the first 3 months, while successful bids by companies are not transferable at all.
An additional restriction on car ownership applies to motor vehicles more than ten years old, known as "time expired" vehicles: the owner must either renew the COE for another 5 or 10 years or de-register the vehicle for scrapping or export from Singapore, usually to neighbouring countries in ASEAN. COEs renewed for 10 years are renewable indefinitely, but for vehicles with a renewed COE of only 5 years, the owner has to scrap the vehicle at the end of the period with no option to renew the COE, for a total ownership period of 15 years.
Some of these vehicles, especially luxury ones, have been exported further to other right-hand-drive countries such as Australia and New Zealand, which have traditionally imported such vehicles from Japan. The peculiarities of the Singapore car market have made Singapore the second largest exporter of used cars in the world after Japan. Cars are exported to many countries, including beyond Asia, such as Kenya and South Africa in Africa as well as Jamaica and Trinidad and Tobago in the Caribbean. As these cars are often only about ten years old, they are often in high demand as they remain in relatively good condition.
Owners of such vehicles are given financial incentives to do this, which include a Preferential Additional Registration Fee (PARF). This program was implemented to reduce traffic congestion and it complements other measures to curb road usage such as the Electronic Road Pricing (ERP) program.
COE Category Refinement in 2013.
In September 2013, the COE system was refined to include a new criterion for Category A cars. Under the change, the engine power of Cat A cars must not exceed 97 kilowatts (kW), equivalent to about 130 brake horsepower. This is in addition to the previous criterion that the engine capacity of Cat A cars not exceed 1600 cubic centimetres. From the February 2014 bidding exercises onwards, cars with engine power output exceeding 97 kW are classified under Category B even if their engine capacity is below 1600 cubic centimetres. The COE categories' criteria were reviewed because LTA wanted to differentiate and regulate the buying of mass-market and premium cars under Cat A, in a bid to control COE prices that were hovering ever closer to S$100,000.
Categories of COE.
Initially, COEs were divided into eight categories but after many revisions, the system has been simplified to five categories.
Categories A, B & D are non-transferable. Taxis used to be classed under category A but issuance of COEs became unrestricted from August 2012 onwards.
Reception.
In 1994, academics Winston Koh and David Lee of the National University of Singapore proposed to reform the bidding process. Instead of bidding in dollars, applicants for COEs would bid in percentage of the price of the vehicle. In 2003, economist Tan Ling Hui of the International Monetary Fund reiterated the idea. In 2023, with COE prices surging, the idea of percentage bidding resurfaced in the general media. Proponents of percentage bidding argued that it was more equitable than bidding in dollars. | [
{
"math_id": 0,
"text": "\\begin{align} (\\text{Total COE Quota})_{qy} = &g.(\\text{Motor vehicle population})_{y-1} \\\\ &+ (\\text{Projected de-registrations})_{y} \\\\&+ (\\text{Unallocated quota})_{qy-1} \\end{align}"
},
{
"math_id": 1,
"text": "y"
},
{
"math_id": 2,
"text": "qy"
},
{
"math_id": 3,
"text": "y-1"
},
{
"math_id": 4,
"text": "g"
}
]
| https://en.wikipedia.org/wiki?curid=1356209 |
1356272 | Fineness | Weight of fine metal in a precious metal object
The fineness of a precious metal object (coin, bar, jewelry, etc.) represents the weight of "fine metal" therein, in proportion to the total weight which includes alloying base metals and any impurities. Alloy metals are added to increase hardness and durability of coins and jewelry, alter colors, decrease the cost per weight, or avoid the cost of high-purity refinement. For example, copper is added to the precious metal silver to make a more durable alloy for use in coins, housewares and jewelry. Coin silver, which was used for making silver coins in the past, contains 90% silver and 10% copper, by mass. Sterling silver contains 92.5% silver and 7.5% of other metals, usually copper, by mass.
Various ways of expressing fineness have been used and two remain in common use: "millesimal fineness" expressed in units of parts per 1,000 and "karats" or "carats" used only for gold. Karats measure the parts per 24, so that 18 karat = 18⁄24 = 75% and 24 karat gold is considered 100% gold.
Millesimal fineness.
Millesimal fineness is a system of denoting the purity of platinum, gold and silver alloys by parts per thousand of pure metal by mass in the alloy. For example, an alloy containing 75% gold is denoted as "750". Many European countries use decimal hallmark stamps (i.e., "585", "750", etc.) rather than "14 k", "18 k", etc., which are used in the United Kingdom and United States.
It is an extension of the older karat system of denoting the purity of gold by fractions of 24, such as "18 karat" for an alloy with 75% (18 parts per 24) pure gold by mass.
The millesimal fineness is usually rounded to a three figure number, particularly where used as a hallmark, and the fineness may vary slightly from the traditional versions of purity.
Here are the most common millesimal finenesses used for precious metals and the most common terms associated with them.
Karat.
The karat (US spelling, symbol k or Kt) or carat (UK spelling, symbol c or Ct) is a fractional measure of purity for gold alloys, in parts fine per 24 parts whole. The karat system is a standard adopted by US federal law.
"K" = 24 × ("M"g / "M"m)
Mass.
where
"K" is the karat rating of the material,
"M"g is the mass of pure gold in the alloy, and
"M"m is the total mass of the material.
24-karat gold is pure (while 100% purity is very difficult to attain, 24-karat as a designation is permitted in commerce for a minimum of 99.95% purity), 18-karat gold is 18 parts gold, 6 parts another metal (forming an alloy with 75% gold), 12-karat gold is 12 parts gold (12 parts another metal), and so forth.
In England, the carat was divisible into four grains, and the grain was divisible into four quarts. For example, a gold alloy of 127⁄128 fineness (that is, 99.2% purity) could have been described as being "23-karat, 3-grain, 1-quart gold".
The karat fractional system is increasingly being complemented or superseded by the millesimal system, described above for bullion, though jewelry generally tends to still use the karat system.
Conversion between percentage of pure gold and karats:
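Since a karat is one part of gold per 24 parts of alloy by mass, the conversion in either direction is a one-line calculation; a small illustrative Python sketch (function names are ad hoc):
```python
def percent_from_karat(karat):
    """Mass percentage of gold corresponding to a karat rating (K parts per 24)."""
    return 100 * karat / 24

def karat_from_percent(percent_gold):
    """Karat rating K = 24 * (mass of gold / total mass)."""
    return 24 * percent_gold / 100

for k in (9, 14, 18, 22, 24):
    print(f"{k}-karat gold is {percent_from_karat(k):.2f}% gold by mass")
# 37.50%, 58.33%, 75.00%, 91.67% and 100.00% respectively
```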
Volume.
However, this system of calculation gives only the mass of pure gold contained in an alloy. The term "18-karat gold" means that the alloy's mass consists of 75% of gold and 25% of other metals. The quantity of gold "by volume" in a less-than-24-karat gold alloy differs according to the alloys used. For example, knowing that standard 18-karat yellow gold consists of 75% gold, 12.5% silver and the remaining 12.5% of copper (all by mass), the volume of pure gold in this alloy will be 60% since gold is much denser than the other metals used: 19.32 g/cm3 for gold, 10.49 g/cm3 for silver and 8.96 g/cm3 for copper.
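The 60% figure follows from converting mass fractions to volume fractions using the metals' densities; a short Python check, using the densities quoted above:
```python
def gold_volume_fraction(mass_fractions, densities):
    """Volume fraction of gold, given mass fractions and densities in g/cm^3."""
    volumes = {metal: mass_fractions[metal] / densities[metal] for metal in mass_fractions}
    return volumes["gold"] / sum(volumes.values())

mass_fractions = {"gold": 0.75, "silver": 0.125, "copper": 0.125}  # 18-karat yellow gold
densities      = {"gold": 19.32, "silver": 10.49, "copper": 8.96}
print(round(gold_volume_fraction(mass_fractions, densities), 3))   # 0.6, i.e. about 60% by volume
```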
Etymology.
"Karat" is a variant of "carat". First attested in English in the mid-15th century, the word "carat" came from Middle French , in turn derived either from Italian or Medieval Latin . These were borrowed into Medieval Europe from the Arabic meaning "fruit of the carob tree", also "weight of 5 grains", () and was a unit of mass though it was probably not used to measure gold in classical times. The Arabic term ultimately originates from the Greek () meaning carob seed (literally "small horn") (diminutive of – , "horn").
In 309 AD, Roman Emperor Constantine I began to mint a new gold coin "solidus" that was 1⁄72 of a "libra" (Roman pound) of gold equal to a mass of 24 "siliquae", where each siliqua (or carat) was 1⁄1728 of a libra. This is believed to be the origin of the value of the karat.
Verifying fineness.
While there are many methods of detecting fake precious metals, there are realistically only two options for verifying that the marked fineness of a metal object is reasonably accurate: assaying the metal (which requires destroying it), or using X-ray fluorescence (XRF). XRF measures only the outermost portion of the piece of metal and so may be misled by thick plating.
That becomes a concern because it would be possible for an unscrupulous refiner to produce precious metals bars that are slightly less pure than marked on the bar. A refiner doing $1 billion of business each year that marked .980 pure bars as .999 fine would make about an extra $20 million in profit. In the United States, the actual purity of gold articles must be no more than .003 less than the marked purity (e.g. .996 fine for gold marked .999 fine), and the actual purity of silver articles must be no more than .004 less than the marked purity.
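The rough figure can be reproduced with a one-line calculation, using the hypothetical sales volume from the example above:
```python
annual_sales   = 1_000_000_000   # dollars of bars sold, marked .999 fine (hypothetical)
marked, actual = 0.999, 0.980    # marked versus actual fineness
extra_profit   = annual_sales * (marked - actual) / marked
print(round(extra_profit))       # 19019019, i.e. roughly the $20 million cited above
```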
Fine weight.
A piece of alloy metal containing a precious metal may also have the weight of its precious component referred to as its "fine weight". For example, 1 troy ounce of 18 karat gold (which is 75% gold) may be said to have a fine weight of 0.75 troy ounces.
Most modern government-issued bullion coins specify their fine weight. For example, the American Gold Eagle is embossed "One Oz. Fine Gold" and weighs 1.091 troy oz.
Troy mass of silver content.
Fineness of silver in Britain was traditionally expressed as the mass of silver expressed in troy ounces and pennyweights (1⁄20 troy ounce) in one troy pound (12 troy ounces) of the resulting alloy. Britannia silver has a fineness of 11 ounces, 10 pennyweights, or about formula_0 silver, whereas sterling silver has a fineness of 11 ounces, 2 pennyweights, or exactly formula_1 silver.
| [
{
"math_id": 0,
"text": " \\frac{(11+\\frac{10}{20})}{12} = 95.833\\% "
},
{
"math_id": 1,
"text": "\\frac{(11+\\frac{2}{20})}{12} = 92.5\\%"
}
]
| https://en.wikipedia.org/wiki?curid=1356272 |
13563938 | Universal parabolic constant | Mathematical constant in conic sections
The universal parabolic constant is a mathematical constant.
It is defined as the ratio, for any parabola, of the arc length of the parabolic segment formed by the latus rectum to the focal parameter. The focal parameter is twice the focal length. The ratio is denoted "P".
In the diagram, the latus rectum is pictured in blue, the parabolic segment that it forms in red and the focal parameter in green. (The focus of the parabola is the point "F" and the directrix is the line "L".)
The value of "P" is
formula_1
(sequence in the OEIS). The circle and parabola are unique among conic sections in that they have a universal constant. The analogous ratios for ellipses and hyperbolas depend on their eccentricities. This means that all circles are similar and all parabolas are similar, whereas ellipses and hyperbolas are not.
Derivation.
Take formula_2 as the equation of the parabola. The focal parameter is formula_0 and the semilatus rectum is formula_3.
formula_4
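The closed form can be checked numerically by integrating the arc-length element directly; a small Python sketch using the midpoint rule (any focal length f gives the same ratio):
```python
from math import sqrt, log

P_exact = log(1 + sqrt(2)) + sqrt(2)   # the closed form derived above

def parabolic_constant(f=1.0, steps=100_000):
    """Arc length of y = x^2/(4f) over the latus rectum, divided by the focal parameter 2f."""
    a, b = -2 * f, 2 * f
    h = (b - a) / steps
    total = 0.0
    for i in range(steps):
        x = a + (i + 0.5) * h                      # midpoint rule
        total += sqrt(1 + (x / (2 * f)) ** 2) * h  # sqrt(1 + y'(x)^2)
    return total / (2 * f)

print(P_exact)                    # 2.2955871493926...
print(parabolic_constant())       # agrees to about six decimal places
print(parabolic_constant(f=3.7))  # independent of the focal length
```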
Properties.
"P" is a transcendental number.
Proof. Suppose that "P" is algebraic. Then formula_5 must also be algebraic. However, by the Lindemann–Weierstrass theorem, formula_6 would be transcendental, which is not the case. Hence "P" is transcendental.
Since "P" is transcendental, it is also irrational.
Applications.
The average distance from a point randomly selected in the unit square to its center is
formula_7
Proof.
formula_8
There is also an interesting geometrical reason why this constant appears in unit squares. The average distance between a center of a unit square and a point on the square's boundary is formula_9.
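Both averages are easy to check by Monte Carlo sampling; a rough Python sketch with one million samples each:
```python
import random
from math import hypot, log, sqrt

P = log(1 + sqrt(2)) + sqrt(2)
random.seed(0)
N = 1_000_000

# Average distance from a uniform random point in the unit square to its centre.
interior = sum(hypot(random.random() - 0.5, random.random() - 0.5) for _ in range(N)) / N

# Average distance from the centre to a uniform random point on the boundary;
# by symmetry it suffices to sample one side, y = 1/2 with -1/2 <= x <= 1/2.
boundary = sum(hypot(random.uniform(-0.5, 0.5), 0.5) for _ in range(N)) / N

print(P / 6, interior)   # both approximately 0.3826
print(P / 4, boundary)   # both approximately 0.5739
```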
If we uniformly sample every point on the perimeter of the square, take line segments (drawn from the center) corresponding to each point, add them together by joining each line segment next to the other, scaling them down, the curve obtained is a parabola. | [
{
"math_id": 0,
"text": "p=2f"
},
{
"math_id": 1,
"text": "P = \\ln(1 + \\sqrt2) + \\sqrt2 = 2.29558714939\\dots"
},
{
"math_id": 2,
"text": "y = \\frac{x^2}{4f}"
},
{
"math_id": 3,
"text": "\\ell=2f"
},
{
"math_id": 4,
"text": "\\begin{align}\nP & := \\frac{1}{p}\\int_{-\\ell}^\\ell \\sqrt{1+\\left(y'(x)\\right)^2}\\, dx \\\\\n & = \\frac{1}{2f}\\int_{-2f}^{2f}\\sqrt{1+\\frac{x^2}{4f^2}}\\, dx \\\\\n & = \\int_{-1}^{1}\\sqrt{1+t^2}\\, dt & (x = 2 f t) \\\\\n & = \\operatorname{arsinh}(1) + \\sqrt{2}\\\\\n & = \\ln(1+\\sqrt{2}) + \\sqrt{2}.\n\\end{align}"
},
{
"math_id": 5,
"text": " \\!\\ P - \\sqrt2 = \\ln(1 + \\sqrt2)"
},
{
"math_id": 6,
"text": " \\!\\ e^{\\ln(1+ \\sqrt2)} = 1 + \\sqrt2 "
},
{
"math_id": 7,
"text": " d_\\text{avg} = {P \\over 6}. "
},
{
"math_id": 8,
"text": "\n\\begin{align}\nd_\\text{avg} & := 8\\int_{0}^{1 \\over 2}\\int_{0}^{x}\\sqrt{x^2+y^2}\\, dy\\, dx \\\\\n & = 8\\int_{0}^{1 \\over 2}{1 \\over 2}x^2(\\ln(1 + \\sqrt2) + \\sqrt2)\\, dx \\\\\n & = 4P\\int_{0}^{1 \\over 2}x^2\\, dx \\\\\n & = {P \\over 6}\n\\end{align}\n"
},
{
"math_id": 9,
"text": " {P \\over 4} "
}
]
| https://en.wikipedia.org/wiki?curid=13563938 |
13564 | Homomorphism | Structure-preserving map between two algebraic structures of the same type
In algebra, a homomorphism is a structure-preserving map between two algebraic structures of the same type (such as two groups, two rings, or two vector spaces). The word "homomorphism" comes from the Ancient Greek language: () meaning "same" and () meaning "form" or "shape". However, the word was apparently introduced to mathematics due to a (mis)translation of German meaning "similar" to meaning "same". The term "homomorphism" appeared as early as 1892, when it was attributed to the German mathematician Felix Klein (1849–1925).
Homomorphisms of vector spaces are also called linear maps, and their study is the subject of linear algebra.
The concept of homomorphism has been generalized, under the name of morphism, to many other structures that either do not have an underlying set, or are not algebraic. This generalization is the starting point of category theory.
A homomorphism may also be an isomorphism, an endomorphism, an automorphism, etc. (see below). Each of those can be defined in a way that may be generalized to any class of morphisms.
Definition.
A homomorphism is a map between two algebraic structures of the same type (e.g. two groups, two fields, two vector spaces), that preserves the operations of the structures. This means a map formula_0 between two sets formula_1, formula_2 equipped with the same structure such that, if formula_3 is an operation of the structure (supposed here, for simplification, to be a binary operation), then
formula_4
for every pair formula_5, formula_6 of elements of formula_1. One often says that formula_7 preserves the operation or is compatible with the operation.
Formally, a map formula_8 preserves an operation formula_9 of arity formula_10, defined on both formula_1 and formula_2 if
formula_11
for all elements formula_12 in formula_1.
The operations that must be preserved by a homomorphism include 0-ary operations, that is the constants. In particular, when an identity element is required by the type of structure, the identity element of the first structure must be mapped to the corresponding identity element of the second structure.
For example:
An algebraic structure may have more than one operation, and a homomorphism is required to preserve each operation. Thus a map that preserves only some of the operations is not a homomorphism of the structure, but only a homomorphism of the substructure obtained by considering only the preserved operations. For example, a map between monoids that preserves the monoid operation and not the identity element, is not a monoid homomorphism, but only a semigroup homomorphism.
The notation for the operations does not need to be the same in the source and the target of a homomorphism. For example, the real numbers form a group for addition, and the positive real numbers form a group for multiplication. The exponential function
formula_13
satisfies
formula_14
and is thus a homomorphism between these two groups. It is even an isomorphism (see below), as its inverse function, the natural logarithm, satisfies
formula_15
and is also a group homomorphism.
Examples.
The real numbers are a ring, having both addition and multiplication. The set of all 2×2 matrices is also a ring, under matrix addition and matrix multiplication. If we define a function between these rings as follows:
formula_16
where r is a real number, then f is a homomorphism of rings, since f preserves both addition:
formula_17
and multiplication:
formula_18
For another example, the nonzero complex numbers form a group under the operation of multiplication, as do the nonzero real numbers. (Zero must be excluded from both groups since it does not have a multiplicative inverse, which is required for elements of a group.) Define a function formula_7 from the nonzero complex numbers to the nonzero real numbers by
formula_19
That is, formula_7 is the absolute value (or modulus) of the complex number formula_20. Then formula_7 is a homomorphism of groups, since it preserves multiplication:
formula_21
Note that "f" cannot be extended to a homomorphism of rings (from the complex numbers to the real numbers), since it does not preserve addition:
formula_22
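These two properties are easy to check numerically on random samples; a Python sketch using the built-in abs as the map (the helper names are ad hoc):
```python
import random

def preserves_multiplication(f, pairs, tol=1e-9):
    """Check f(z1 * z2) == f(z1) * f(z2) on a sample of pairs."""
    return all(abs(f(z1 * z2) - f(z1) * f(z2)) < tol for z1, z2 in pairs)

def preserves_addition(f, pairs, tol=1e-9):
    """Check f(z1 + z2) == f(z1) + f(z2) on a sample of pairs."""
    return all(abs(f(z1 + z2) - f(z1) - f(z2)) < tol for z1, z2 in pairs)

random.seed(1)
pairs = [(complex(random.uniform(-5, 5), random.uniform(-5, 5)),
          complex(random.uniform(-5, 5), random.uniform(-5, 5))) for _ in range(1000)]

print(preserves_multiplication(abs, pairs))  # True:  |z1 z2| = |z1| |z2|
print(preserves_addition(abs, pairs))        # False: |z1 + z2| != |z1| + |z2| in general
```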
As another example, the diagram shows a monoid homomorphism formula_7 from the monoid formula_23 to the monoid formula_24. Due to the different names of corresponding operations, the structure preservation properties satisfied by formula_7 amount to formula_25 and formula_26.
A composition algebra formula_1 over a field formula_27 has a quadratic form, called a "norm", formula_28, which is a group homomorphism from the multiplicative group of formula_1 to the multiplicative group of formula_27.
Special homomorphisms.
Several kinds of homomorphisms have a specific name, which is also defined for general morphisms.
Isomorphism.
An isomorphism between algebraic structures of the same type is commonly defined as a bijective homomorphism.
In the more general context of category theory, an isomorphism is defined as a morphism that has an inverse that is also a morphism. In the specific case of algebraic structures, the two definitions are equivalent, although they may differ for non-algebraic structures, which have an underlying set.
More precisely, if
formula_8
is a (homo)morphism, it has an inverse if there exists a homomorphism
formula_29
such that
formula_30
If formula_1 and formula_2 have underlying sets, and formula_0 has an inverse formula_31, then formula_7 is bijective. In fact, formula_7 is injective, as formula_32 implies formula_33, and formula_7 is surjective, as, for any formula_5 in formula_2, one has formula_34, and formula_5 is the image of an element of formula_1.
Conversely, if formula_0 is a bijective homomorphism between algebraic structures, let formula_35 be the map such that formula_36 is the unique element formula_5 of formula_1 such that formula_37. One has formula_38 and it remains only to show that "g" is a homomorphism. If formula_39 is a binary operation of the structure, for every pair formula_5, formula_6 of elements of formula_2, one has
formula_40
and formula_31 is thus compatible with formula_41 As the proof is similar for any arity, this shows that formula_31 is a homomorphism.
This proof does not work for non-algebraic structures. For example, for topological spaces, a morphism is a continuous map, and the inverse of a bijective continuous map is not necessarily continuous. An isomorphism of topological spaces, called homeomorphism or bicontinuous map, is thus a bijective continuous map, whose inverse is also continuous.
Endomorphism.
An endomorphism is a homomorphism whose domain equals the codomain, or, more generally, a morphism whose source is equal to its target.
The endomorphisms of an algebraic structure, or of an object of a category form a monoid under composition.
The endomorphisms of a vector space or of a module form a ring. In the case of a vector space or a free module of finite dimension, the choice of a basis induces a ring isomorphism between the ring of endomorphisms and the ring of square matrices of the same dimension.
Automorphism.
An automorphism is an endomorphism that is also an isomorphism.
The automorphisms of an algebraic structure or of an object of a category form a group under composition, which is called the automorphism group of the structure.
Many groups that have received a name are automorphism groups of some algebraic structure. For example, the general linear group formula_42 is the automorphism group of a vector space of dimension formula_43 over a field formula_10.
The automorphism groups of fields were introduced by Évariste Galois for studying the roots of polynomials, and are the basis of Galois theory.
Monomorphism.
For algebraic structures, monomorphisms are commonly defined as injective homomorphisms.
In the more general context of category theory, a monomorphism is defined as a morphism that is left cancelable. This means that a (homo)morphism formula_44 is a monomorphism if, for any pair formula_31, formula_45 of morphisms from any other object formula_46 to formula_1, then formula_47 implies formula_48.
These two definitions of "monomorphism" are equivalent for all common algebraic structures. More precisely, they are equivalent for fields, for which every homomorphism is a monomorphism, and for varieties of universal algebra, that is algebraic structures for which operations and axioms (identities) are defined without any restriction (the fields do not form a variety, as the multiplicative inverse is defined either as a unary operation or as a property of the multiplication, which are, in both cases, defined only for nonzero elements).
In particular, the two definitions of a monomorphism are equivalent for sets, magmas, semigroups, monoids, groups, rings, fields, vector spaces and modules.
A split monomorphism is a homomorphism that has a left inverse and thus it is itself a right inverse of that other homomorphism. That is, a homomorphism formula_49 is a split monomorphism if there exists a homomorphism formula_50 such that formula_51 A split monomorphism is always a monomorphism, for both meanings of "monomorphism". For sets and vector spaces, every monomorphism is a split monomorphism, but this property does not hold for most common algebraic structures.
Epimorphism.
In algebra, epimorphisms are often defined as surjective homomorphisms. On the other hand, in category theory, epimorphisms are defined as right cancelable morphisms. This means that a (homo)morphism formula_0 is an epimorphism if, for any pair formula_31, formula_45 of morphisms from formula_2 to any other object formula_46, the equality formula_53 implies formula_48.
A surjective homomorphism is always right cancelable, but the converse is not always true for algebraic structures. However, the two definitions of "epimorphism" are equivalent for sets, vector spaces, abelian groups, modules (see below for a proof), and groups. The importance of these structures in all mathematics, especially in linear algebra and homological algebra, may explain the coexistence of two non-equivalent definitions.
Algebraic structures for which there exist non-surjective epimorphisms include semigroups and rings. The most basic example is the inclusion of integers into rational numbers, which is a homomorphism of rings and of multiplicative semigroups. For both structures it is a monomorphism and a non-surjective epimorphism, but not an isomorphism.
A wide generalization of this example is the localization of a ring by a multiplicative set. Every localization is a ring epimorphism, which is not, in general, surjective. As localizations are fundamental in commutative algebra and algebraic geometry, this may explain why in these areas, the definition of epimorphisms as right cancelable homomorphisms is generally preferred.
A split epimorphism is a homomorphism that has a right inverse and thus it is itself a left inverse of that other homomorphism. That is, a homomorphism formula_49 is a split epimorphism if there exists a homomorphism formula_50 such that formula_54 A split epimorphism is always an epimorphism, for both meanings of "epimorphism". For sets and vector spaces, every epimorphism is a split epimorphism, but this property does not hold for most common algebraic structures.
In summary, one has
formula_55
the last implication is an equivalence for sets, vector spaces, modules, abelian groups, and groups; the first implication is an equivalence for sets and vector spaces.
Kernel.
Any homomorphism formula_56 defines an equivalence relation formula_57 on formula_58 by formula_59 if and only if formula_52. The relation formula_57 is called the kernel of formula_7. It is a congruence relation on formula_58. The quotient set formula_60 can then be given a structure of the same type as formula_58, in a natural way, by defining the operations of the quotient set by formula_61, for each operation formula_62 of formula_58. In that case the image of formula_58 in formula_63 under the homomorphism formula_7 is necessarily isomorphic to formula_64; this fact is one of the isomorphism theorems.
When the algebraic structure is a group for some operation, the equivalence class formula_65 of the identity element of this operation suffices to characterize the equivalence relation. In this case, the quotient by the equivalence relation is denoted by formula_66 (usually read as "formula_58 mod formula_65"). Also in this case, it is formula_65, rather than formula_57, that is called the kernel of formula_7. The kernels of homomorphisms of a given type of algebraic structure are naturally equipped with some structure. This structure type of the kernels is the same as the considered structure, in the case of abelian groups, vector spaces and modules, but is different and has received a specific name in other cases, such as normal subgroup for kernels of group homomorphisms and ideals for kernels of ring homomorphisms (in the case of non-commutative rings, the kernels are the two-sided ideals).
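As a concrete illustration, the group homomorphism from the integers under addition onto the integers modulo 6 has the multiples of 6 as its kernel, and two integers have the same image exactly when their difference lies in the kernel; a brief Python check:
```python
def f(x):
    """Reduction modulo 6: a group homomorphism from (Z, +) onto (Z/6Z, +)."""
    return x % 6

sample = range(-20, 21)
kernel = [x for x in sample if f(x) == 0]
print(kernel)  # [-18, -12, -6, 0, 6, 12, 18] -- the multiples of 6 in the sample

# f(x) = f(y) exactly when x - y lies in the kernel (i.e. is a multiple of 6).
print(all((f(x) == f(y)) == ((x - y) % 6 == 0) for x in sample for y in sample))  # True
```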
Relational structures.
In model theory, the notion of an algebraic structure is generalized to structures involving both operations and relations. Let "L" be a signature consisting of function and relation symbols, and "A", "B" be two "L"-structures. Then a homomorphism from "A" to "B" is a mapping "h" from the domain of "A" to the domain of "B" such that
In the special case with just one binary relation, we obtain the notion of a graph homomorphism.
Formal language theory.
Homomorphisms are also used in the study of formal languages and are often briefly referred to as "morphisms". Given alphabets formula_67 and formula_68, a function formula_69 such that formula_70 for all formula_71 is called a "homomorphism" on formula_72. If formula_45 is a homomorphism on formula_72 and formula_73 denotes the empty string, then formula_45 is called an formula_73"-free homomorphism" when formula_74 for all formula_75 in formula_72.
A homomorphism formula_69 on formula_72 that satisfies formula_76 for all formula_77 is called a formula_10"-uniform" homomorphism. If formula_78 for all formula_77 (that is, formula_45 is 1-uniform), then formula_45 is also called a "coding" or a "projection".
The set formula_79 of words formed from the alphabet formula_80 may be thought of as the free monoid generated by formula_80. Here the monoid operation is concatenation and the identity element is the empty word. From this perspective, a language homomorphism is precisely a monoid homomorphism.
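For concreteness, a short Python sketch of such a morphism is given below (the alphabet, letter images, and helper name are illustrative); it extends a map on single letters to all of formula_72 and checks the defining property.
```python
# Illustrative sketch: a string homomorphism is determined by its values on letters,
# since h(uv) = h(u) h(v) and h maps the empty word to the empty word.

def make_homomorphism(letter_map):
    """Extend a map on single letters to a monoid homomorphism on strings."""
    def h(word):
        return "".join(letter_map[c] for c in word)
    return h

# Example over the alphabet {a, b}: a 2-uniform homomorphism (|h(c)| = 2 for every letter c)
h = make_homomorphism({"a": "01", "b": "10"})

u, v = "ab", "ba"
assert h(u + v) == h(u) + h(v)   # the defining property
assert h("") == ""               # the empty word maps to the empty word
print(h("abba"))                 # 01101001
```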
Notes.
<templatestyles src="Reflist/styles.css" />
Citations.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "f: A \\to B"
},
{
"math_id": 1,
"text": "A"
},
{
"math_id": 2,
"text": "B"
},
{
"math_id": 3,
"text": "\\cdot"
},
{
"math_id": 4,
"text": "f(x\\cdot y)=f(x)\\cdot f(y)"
},
{
"math_id": 5,
"text": "x"
},
{
"math_id": 6,
"text": "y"
},
{
"math_id": 7,
"text": "f"
},
{
"math_id": 8,
"text": "f: A\\to B"
},
{
"math_id": 9,
"text": "\\mu"
},
{
"math_id": 10,
"text": "k"
},
{
"math_id": 11,
"text": "f(\\mu_A(a_1, \\ldots, a_k)) = \\mu_B(f(a_1), \\ldots, f(a_k)),"
},
{
"math_id": 12,
"text": "a_1, ..., a_k"
},
{
"math_id": 13,
"text": "x\\mapsto e^x"
},
{
"math_id": 14,
"text": "e^{x+y} = e^xe^y,"
},
{
"math_id": 15,
"text": "\\ln(xy)=\\ln(x)+\\ln(y), "
},
{
"math_id": 16,
"text": "f(r) = \\begin{pmatrix}\n r & 0 \\\\\n 0 & r\n\\end{pmatrix}"
},
{
"math_id": 17,
"text": "f(r+s) = \\begin{pmatrix}\n r+s & 0 \\\\\n 0 & r+s\n\\end{pmatrix} = \\begin{pmatrix}\n r & 0 \\\\\n 0 & r\n\\end{pmatrix} + \\begin{pmatrix}\n s & 0 \\\\\n 0 & s\n\\end{pmatrix} = f(r) + f(s)"
},
{
"math_id": 18,
"text": "f(rs) = \\begin{pmatrix}\n rs & 0 \\\\\n 0 & rs\n\\end{pmatrix} = \\begin{pmatrix}\n r & 0 \\\\\n 0 & r\n\\end{pmatrix} \\begin{pmatrix}\n s & 0 \\\\\n 0 & s\n\\end{pmatrix} = f(r)\\,f(s)."
},
{
"math_id": 19,
"text": "f(z) = |z| ."
},
{
"math_id": 20,
"text": "z"
},
{
"math_id": 21,
"text": "f(z_1 z_2) = |z_1 z_2| = |z_1| |z_2| = f(z_1) f(z_2)."
},
{
"math_id": 22,
"text": "|z_1 + z_2| \\ne |z_1| + |z_2|."
},
{
"math_id": 23,
"text": "(\\mathbb{N}, +, 0)"
},
{
"math_id": 24,
"text": "(\\mathbb{N}, \\times, 1)"
},
{
"math_id": 25,
"text": "f(x+y) = f(x) \\times f(y)"
},
{
"math_id": 26,
"text": "f(0) = 1"
},
{
"math_id": 27,
"text": "F"
},
{
"math_id": 28,
"text": "N: A \\to F"
},
{
"math_id": 29,
"text": "g: B\\to A"
},
{
"math_id": 30,
"text": "f\\circ g = \\operatorname{Id}_B \\qquad \\text{and} \\qquad g\\circ f = \\operatorname{Id}_A."
},
{
"math_id": 31,
"text": "g"
},
{
"math_id": 32,
"text": "f(x) = f(y)"
},
{
"math_id": 33,
"text": "x = g(f(x)) = g(f(y)) = y"
},
{
"math_id": 34,
"text": "x = f(g(x))"
},
{
"math_id": 35,
"text": "g: B \\to A"
},
{
"math_id": 36,
"text": "g(y)"
},
{
"math_id": 37,
"text": "f(x) = y"
},
{
"math_id": 38,
"text": "f \\circ g = \\operatorname{Id}_B \\text{ and } g \\circ f = \\operatorname{Id}_A,"
},
{
"math_id": 39,
"text": "*"
},
{
"math_id": 40,
"text": "g(x*_B y) = g(f(g(x))*_Bf(g(y))) = g(f(g(x)*_A g(y))) = g(x)*_A g(y),"
},
{
"math_id": 41,
"text": "*."
},
{
"math_id": 42,
"text": "\\operatorname{GL}_n(k)"
},
{
"math_id": 43,
"text": "n"
},
{
"math_id": 44,
"text": "f:A \\to B"
},
{
"math_id": 45,
"text": "h"
},
{
"math_id": 46,
"text": "C"
},
{
"math_id": 47,
"text": "f \\circ g = f \\circ h"
},
{
"math_id": 48,
"text": "g = h"
},
{
"math_id": 49,
"text": "f\\colon A \\to B"
},
{
"math_id": 50,
"text": "g\\colon B \\to A"
},
{
"math_id": 51,
"text": "g \\circ f = \\operatorname{Id}_A."
},
{
"math_id": 52,
"text": "f(a) = f(b)"
},
{
"math_id": 53,
"text": "g \\circ f = h \\circ f"
},
{
"math_id": 54,
"text": "f\\circ g = \\operatorname{Id}_B."
},
{
"math_id": 55,
"text": "\\text {split epimorphism} \\implies \\text{epimorphism (surjective)}\\implies \\text {epimorphism (right cancelable)};"
},
{
"math_id": 56,
"text": "f: X \\to Y"
},
{
"math_id": 57,
"text": "\\sim"
},
{
"math_id": 58,
"text": "X"
},
{
"math_id": 59,
"text": "a \\sim b"
},
{
"math_id": 60,
"text": "X/{\\sim}"
},
{
"math_id": 61,
"text": "[x] \\ast [y] = [x \\ast y]"
},
{
"math_id": 62,
"text": "\\ast"
},
{
"math_id": 63,
"text": "Y"
},
{
"math_id": 64,
"text": "X/\\!\\sim"
},
{
"math_id": 65,
"text": "K"
},
{
"math_id": 66,
"text": "X/K"
},
{
"math_id": 67,
"text": "\\Sigma_1"
},
{
"math_id": 68,
"text": "\\Sigma_2"
},
{
"math_id": 69,
"text": "h \\colon \\Sigma_1^* \\to \\Sigma_2^*"
},
{
"math_id": 70,
"text": "h(uv) = h(u) h(v)"
},
{
"math_id": 71,
"text": "u,v \\in \\Sigma_1"
},
{
"math_id": 72,
"text": "\\Sigma_1^*"
},
{
"math_id": 73,
"text": "\\varepsilon"
},
{
"math_id": 74,
"text": "h(x) \\neq \\varepsilon"
},
{
"math_id": 75,
"text": "x \\neq \\varepsilon"
},
{
"math_id": 76,
"text": "|h(a)| = k"
},
{
"math_id": 77,
"text": "a \\in \\Sigma_1"
},
{
"math_id": 78,
"text": "|h(a)| = 1"
},
{
"math_id": 79,
"text": "\\Sigma^*"
},
{
"math_id": 80,
"text": "\\Sigma"
}
]
| https://en.wikipedia.org/wiki?curid=13564 |
1356402 | Steric effects | Geometric aspects of ions and molecules affecting their shape and reactivity
Steric effects arise from the spatial arrangement of atoms. When atoms come close together there is generally a rise in the energy of the molecule. Steric effects are nonbonding interactions that influence the shape (conformation) and reactivity of ions and molecules. Steric effects complement electronic effects, which dictate the shape and reactivity of molecules. Steric repulsive forces between overlapping electron clouds result in structured groupings of molecules stabilized by the way that opposites attract and like charges repel.
Steric hindrance.
Steric hindrance is a consequence of steric effects. Steric hindrance is the slowing of chemical reactions due to steric bulk. It is usually manifested in "intermolecular reactions", whereas discussions of steric effects often focus on "intramolecular interactions". Steric hindrance is often exploited to control selectivity, such as slowing unwanted side-reactions.
Steric hindrance between adjacent groups can also affect torsional bond angles. Steric hindrance is responsible for the observed shape of rotaxanes and the low rates of racemization of 2,2'-disubstituted biphenyl and binaphthyl derivatives.
Measures of steric properties.
Because steric effects have profound impact on properties, the steric properties of substituents have been assessed by numerous methods.
Rate data.
Relative rates of chemical reactions provide useful insights into the effects of the steric bulk of substituents. Under standard conditions, methyl bromide solvolyzes 10^7 times faster than does neopentyl bromide. The difference reflects the inhibition of attack on the compound with the sterically bulky (CH3)3C group.
A-values.
A-values provide another measure of the bulk of substituents. A-values are derived from equilibrium measurements of monosubstituted cyclohexanes. The extent that a substituent favors the equatorial position gives a measure of its bulk.
Ceiling temperatures.
Ceiling temperature (formula_0) is a measure of the steric properties of the monomers that comprise a polymer. formula_0 is the temperature where the rate of polymerization and depolymerization are equal. Sterically hindered monomers give polymers with low formula_0's, which are usually not useful.
Cone angles.
Ligand cone angles are measures of the size of ligands in coordination chemistry. It is defined as the solid angle formed with the metal at the vertex and the hydrogen atoms at the perimeter of the cone (see figure).
Significance and applications.
Steric effects are critical to chemistry, biochemistry, and pharmacology. In organic chemistry, steric effects are nearly universal and affect the rates and activation energies of most chemical reactions to varying degrees.
In biochemistry, steric effects are often exploited in naturally occurring molecules such as enzymes, where the catalytic site may be buried within a large protein structure. In pharmacology, steric effects determine how and at what rate a drug will interact with its target bio-molecules.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "T_c"
}
]
| https://en.wikipedia.org/wiki?curid=1356402 |
1356536 | Reuleaux polygon | Constant-width curve of equal-radius arcs
In geometry, a Reuleaux polygon is a curve of constant width made up of circular arcs of constant radius. These shapes are named after their prototypical example, the Reuleaux triangle, which in turn is named after 19th-century German engineer Franz Reuleaux. The Reuleaux triangle can be constructed from an equilateral triangle by connecting each pair of adjacent vertices with a circular arc centered on the opposing vertex, and Reuleaux polygons can be formed by a similar construction from any regular polygon with an odd number of sides as well as certain irregular polygons. Every curve of constant width can be accurately approximated by Reuleaux polygons. They have been applied in coinage shapes.
Construction.
If formula_0 is a convex polygon with an odd number of sides, in which each vertex is equidistant to the two opposite vertices and closer to all other vertices, then replacing each side of formula_0 by an arc centered at its opposite vertex produces a Reuleaux polygon. As a special case, this construction is possible for every regular polygon with an odd number of sides.
Every Reuleaux polygon must have an odd number of circular-arc sides, and can be constructed in this way from a polygon, the convex hull of its arc endpoints. However, it is possible for other curves of constant width to be made of an even number of arcs with varying radii.
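The construction can be sketched numerically; the following Python example (illustrative, using a regular pentagon, with helper names invented here) replaces each side by an arc centred at the opposite vertex and checks that the resulting curve has constant width.
```python
import math

def reuleaux_boundary(n=5, samples_per_arc=200):
    """Boundary points of the regular Reuleaux polygon built on a regular n-gon
    (n odd): each side is replaced by a circular arc centred at the opposite vertex."""
    assert n % 2 == 1 and n >= 3
    # Vertices of a regular n-gon inscribed in the unit circle
    verts = [(math.cos(2 * math.pi * k / n), math.sin(2 * math.pi * k / n))
             for k in range(n)]
    width = math.dist(verts[0], verts[n // 2])  # distance to an "opposite" vertex
    points = []
    for k in range(n):
        cx, cy = verts[k]                       # arc centre: one vertex
        # The arc joins the two vertices opposite to verts[k]
        a = verts[(k + n // 2) % n]
        b = verts[(k + n // 2 + 1) % n]
        t0 = math.atan2(a[1] - cy, a[0] - cx)
        t1 = math.atan2(b[1] - cy, b[0] - cx)
        while t1 < t0:                          # sweep counter-clockwise from a to b
            t1 += 2 * math.pi
        for i in range(samples_per_arc):
            t = t0 + (t1 - t0) * i / samples_per_arc
            points.append((cx + width * math.cos(t), cy + width * math.sin(t)))
    return points, width

pts, width = reuleaux_boundary(5)
# Constant width: for every direction, the distance between the two supporting
# lines perpendicular to that direction equals the width.
for deg in range(0, 180, 5):
    c, s = math.cos(math.radians(deg)), math.sin(math.radians(deg))
    measured = (max(x * c + y * s for x, y in pts)
                + max(-x * c - y * s for x, y in pts))
    assert abs(measured - width) < 1e-3
print("constant width:", round(width, 6))
```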
Properties.
The Reuleaux polygons based on regular polygons are the only curves of constant width whose boundaries are formed by finitely many circular arcs of equal length.
Every curve of constant width can be approximated arbitrarily closely by a (possibly irregular) Reuleaux polygon of the same width.
A regular Reuleaux polygon has sides of equal length. More generally, when a Reuleaux polygon has sides that can be split into arcs of equal length, the convex hull of the arc endpoints is a Reinhardt polygon. These polygons are optimal in multiple ways: they have the largest possible perimeter for their diameter, the largest possible width for their diameter, and the largest possible width for their perimeter.
Applications.
The constant width of these shapes allows their use as coins that can be used in coin-operated machines. For instance, the United Kingdom has made 20-pence and 50-pence coins in the shape of a regular Reuleaux heptagon. The Canadian loonie dollar coin uses another regular Reuleaux polygon with 11 sides. However, some coins with rounded-polygon sides, such as the 12-sided 2017 British pound coin, do not have constant width and are not Reuleaux polygons.
Although Chinese inventor Guan Baihua has made a bicycle with Reuleaux polygon wheels, the invention has not caught on.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "P"
}
]
| https://en.wikipedia.org/wiki?curid=1356536 |
13566263 | Dukhin number | The Dukhin number (Du) is a dimensionless quantity that characterizes the contribution of the surface conductivity to various electrokinetic and electroacoustic effects, as well as to electrical conductivity and permittivity of fluid heterogeneous systems. The number was named after Stanislav and Andrei Dukhin.
Overview.
It was introduced by Lyklema in “Fundamentals of Interface and Colloid Science”. A recent IUPAC Technical Report used this term explicitly and detailed several means of measurement in physical systems.
The Dukhin number is a ratio of the surface conductivity formula_0 to the fluid bulk electrical conductivity Km multiplied by particle size "a":
formula_1
There is another expression of this number that is valid when the surface conductivity is associated only with the motion of ions above the slipping plane in the double layer. In this case, the value of the surface conductivity depends on the ζ-potential, which leads to the following expression for the Dukhin number for a symmetrical electrolyte whose ions have equal diffusion coefficients:
formula_2
where the parameter "m" characterizes the contribution of electro-osmosis to the motion of ions within the double layer:
formula_3
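A minimal numerical sketch of the first definition is given below (the function name and parameter values are illustrative only):
```python
def dukhin_number(surface_conductivity, bulk_conductivity, particle_radius):
    """Du = kappa_sigma / (K_m * a): surface conductivity (S) divided by the
    bulk conductivity (S/m) times the particle size (m)."""
    return surface_conductivity / (bulk_conductivity * particle_radius)

# Illustrative values: a 100 nm particle in a 1 mS/m electrolyte,
# with a surface conductivity of 1e-9 S.
Du = dukhin_number(surface_conductivity=1e-9,
                   bulk_conductivity=1e-3,
                   particle_radius=100e-9)
print(f"Du = {Du:.1f}")   # 10.0, i.e. surface conduction dominates for this particle
```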
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\kappa^{\\sigma}"
},
{
"math_id": 1,
"text": " {\\rm Du} = \\frac{\\kappa^{\\sigma}}{{\\Kappa_m}a}."
},
{
"math_id": 2,
"text": " {\\rm Du} = \\frac{2(1+3m/z^2)}{{\\kappa}a}\\left(\\mathrm{cosh}\\frac{zF\\zeta}{2RT}-1\\right),"
},
{
"math_id": 3,
"text": " m = \\frac{2\\varepsilon_0\\varepsilon_m R^2T^2}{3\\eta F^2 D}."
}
]
| https://en.wikipedia.org/wiki?curid=13566263 |
13566984 | Double layer (surface science) | Molecular interface between a surface and a fluid
In surface science, a double layer (DL, also called an electrical double layer, EDL) is a structure that appears on the surface of an object when it is exposed to a fluid. The object might be a solid particle, a gas bubble, a liquid droplet, or a porous body. The DL refers to two parallel layers of charge surrounding the object. The first layer, the surface charge (either positive or negative), consists of ions which are adsorbed onto the object due to chemical interactions. The second layer is composed of ions attracted to the surface charge via the Coulomb force, electrically screening the first layer. This second layer is loosely associated with the object. It is made of free ions that move in the fluid under the influence of electric attraction and thermal motion rather than being firmly anchored. It is thus called the "diffuse layer".
Interfacial DLs are most apparent in systems with a large surface-area-to-volume ratio, such as a colloid or porous bodies with particles or pores (respectively) on the scale of micrometres to nanometres. However, DLs are important to other phenomena, such as the electrochemical behaviour of electrodes.
DLs play a fundamental role in many everyday substances. For instance, homogenized milk exists only because fat droplets are covered with a DL that prevents their coagulation into butter. DLs exist in practically all heterogeneous fluid-based systems, such as blood, paint, ink and ceramic and cement slurry.
The DL is closely related to electrokinetic phenomena and electroacoustic phenomena.
Development of the (interfacial) double layer.
Helmholtz.
When an "electronic" conductor is brought in contact with a solid or liquid "ionic" conductor (electrolyte), a common boundary (interface) among the two phases appears. Hermann von Helmholtz was the first to realize that charged electrodes immersed in electrolyte solutions repel the co-ions of the charge while attracting counterions to their surfaces. Two layers of opposite polarity form at the interface between electrode and electrolyte. In 1853, he showed that an electrical double layer (DL) is essentially a molecular dielectric and stores charge electrostatically. Below the electrolyte's decomposition voltage, the stored charge is linearly dependent on the voltage applied.
This early model predicted a constant differential capacitance that is independent of the charge density and depends only on the dielectric constant of the electrolyte solvent and the thickness of the double layer.
This model, while a good foundation for the description of the interface, does not consider important factors including diffusion/mixing of ions in solution, the possibility of adsorption onto the surface, and the interaction between solvent dipole moments and the electrode.
Gouy–Chapman.
Louis Georges Gouy in 1910 and David Leonard Chapman in 1913 both observed that capacitance was not a constant and that it depended on the applied potential and the ionic concentration. The "Gouy–Chapman model" made significant improvements by introducing a diffuse model of the DL. In this model, the charge distribution of ions as a function of distance from the metal surface allows Maxwell–Boltzmann statistics to be applied. Thus the electric potential decreases exponentially away from the surface, toward the bulk of the fluid.
Gouy-Chapman layers may bear special relevance in bioelectrochemistry. The observation of long-distance inter-protein electron transfer through the aqueous solution has been attributed to a diffuse region between redox partner proteins (cytochromes "c" and "c"1) that is depleted of cations in comparison to the solution bulk, thereby leading to reduced screening, electric fields extending several nanometers, and currents decreasing quasi exponentially with the distance at rate ~1 nm−1. This region is termed "Gouy-Chapman conduit" and is strongly regulated by phosphorylation, which adds one negative charge to the protein surface that disrupts cationic depletion and prevents long-distance charge transport. Similar effects are observed at the redox active site of photosynthetic complexes.
Stern.
The Gouy-Chapman model fails for highly charged DLs. In 1924, Otto Stern suggested combining the Helmholtz model with the Gouy-Chapman model: in Stern's model, some ions adhere to the electrode as suggested by Helmholtz, giving an internal Stern layer, while some form a Gouy-Chapman diffuse layer.
The Stern layer accounts for ions' finite size and consequently an ion's closest approach to the electrode is on the order of the ionic radius. The Stern model has its own limitations, namely that it effectively treats ions as point charges, assumes all significant interactions in the diffuse layer are Coulombic, assumes dielectric permittivity to be constant throughout the double layer, and that fluid viscosity is constant above the slipping plane.
Grahame.
D. C. Grahame modified the Stern model in 1947. He proposed that some ionic or uncharged species can penetrate the Stern layer, although the closest approach to the electrode is normally occupied by solvent molecules. This could occur if ions lose their solvation shell as they approach the electrode. He called ions in direct contact with the electrode "specifically adsorbed ions". This model proposed the existence of three regions. The inner Helmholtz plane (IHP) passes through the centres of the specifically adsorbed ions. The outer Helmholtz plane (OHP) passes through the centres of solvated ions at the distance of their closest approach to the electrode. Finally the diffuse layer is the region beyond the OHP.
Bockris/Devanathan/Müller (BDM).
In 1963, J. O'M. Bockris, M. A. V. Devanathan and Klaus Müller proposed the BDM model of the double-layer that included the action of the solvent in the interface. They suggested that the attached molecules of the solvent, such as water, would have a fixed alignment to the electrode surface. This first layer of solvent molecules displays a strong orientation to the electric field depending on the charge. This orientation has great influence on the permittivity of the solvent that varies with field strength. The IHP passes through the centers of these molecules. Specifically adsorbed, partially solvated ions appear in this layer. The solvated ions of the electrolyte are outside the IHP. Through the centers of these ions pass the OHP. The diffuse layer is the region beyond the OHP.
Trasatti/Buzzanca.
Further research with double layers on ruthenium dioxide films in 1971 by Sergio Trasatti and Giovanni Buzzanca demonstrated that the electrochemical behavior of these electrodes at low voltages with specific adsorbed ions was like that of capacitors. The specific adsorption of the ions in this region of potential could also involve a partial charge transfer between the ion and the electrode. It was the first step towards understanding pseudocapacitance.
Conway.
Between 1975 and 1980, Brian Evans Conway conducted extensive fundamental and development work on ruthenium oxide electrochemical capacitors. In 1991, he described the difference between 'Supercapacitor' and 'Battery' behavior in electrochemical energy storage. In 1999, he coined the term supercapacitor to explain the increased capacitance by surface redox reactions with faradaic charge transfer between electrodes and ions.
His "supercapacitor" stored electrical charge partially in the Helmholtz double-layer and partially as the result of faradaic reactions with "pseudocapacitance" charge transfer of electrons and protons between electrode and electrolyte. The working mechanisms of pseudocapacitors are redox reactions, intercalation and electrosorption.
Marcus.
The physical and mathematical basics of electron charge transfer absent chemical bonds leading to pseudocapacitance was developed by Rudolph A. Marcus. Marcus Theory explains the rates of electron transfer reactions—the rate at which an electron can move from one chemical species to another. It was originally formulated to address outer sphere electron transfer reactions, in which two chemical species change only in their charge, with an electron jumping. For redox reactions without making or breaking bonds, Marcus theory takes the place of Henry Eyring's transition state theory which was derived for reactions with structural changes. Marcus received the Nobel Prize in Chemistry in 1992 for this theory.
Mathematical description.
There are detailed descriptions of the interfacial DL in many books on colloid and interface science and microscale fluid transport. There is also a recent IUPAC technical report on the subject of interfacial double layer and related electrokinetic phenomena.
As stated by Lyklema, "...the reason for the formation of a "relaxed" ("equilibrium") double layer is the non-electric affinity of charge-determining ions for a surface..." This process leads to the buildup of an electric surface charge, expressed usually in C/m2. This surface charge creates an electrostatic field that then affects the ions in the bulk of the liquid. This electrostatic field, in combination with the thermal motion of the ions, creates a counter charge, and thus screens the electric surface charge. The net electric charge in this screening diffuse layer is equal in magnitude to the net surface charge, but has the opposite polarity. As a result, the complete structure is electrically neutral.
The diffuse layer, or at least part of it, can move under the influence of tangential stress. There is a conventionally introduced slipping plane that separates mobile fluid from fluid that remains attached to the surface. Electric potential at this plane is called electrokinetic potential or zeta potential (also denoted as ζ-potential).
The electric potential on the external boundary of the Stern layer versus the bulk electrolyte is referred to as Stern potential. Electric potential difference between the fluid bulk and the surface is called the electric surface potential.
Usually zeta potential is used for estimating the degree of DL charge. A characteristic value of this electric potential in the DL is 25 mV with a maximum value around 100 mV (up to several volts on electrodes). The chemical composition of the sample at which the ζ-potential is 0 is called the point of zero charge or the iso-electric point. It is usually determined by the solution pH value, since protons and hydroxyl ions are the charge-determining ions for most surfaces.
Zeta potential can be measured using electrophoresis, electroacoustic phenomena, streaming potential, and electroosmotic flow.
The characteristic thickness of the DL is the Debye length, κ−1. It is inversely proportional to the square root of the ion concentration "C". In aqueous solutions it is typically on the scale of a few nanometers, and the thickness decreases with increasing concentration of the electrolyte.
The electric field strength inside the DL can be anywhere from zero to over 10^9 V/m. These steep electric potential gradients are the reason for the importance of the DLs.
The theory for a flat surface and a symmetrical electrolyte is usually referred to as the Gouy-Chapman theory. It yields a simple relationship between electric charge in the diffuse layer σd and the Stern potential Ψd:
formula_0
There is no general analytical solution for mixed electrolytes, curved surfaces or even spherical particles. There is an asymptotic solution for spherical particles with low charged DLs. In the case when electric potential over DL is less than 25 mV, the so-called Debye-Huckel approximation holds. It yields the following expression for electric potential" Ψ" in the spherical DL as a function of the distance "r" from the particle center:
formula_1
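The following Python sketch (with illustrative parameter values and helper names) evaluates the Debye length of a symmetric aqueous electrolyte and the corresponding Debye-Huckel potential profile formula_1 around a weakly charged sphere:
```python
import math

# Physical constants (SI)
e    = 1.602176634e-19     # elementary charge, C
kB   = 1.380649e-23        # Boltzmann constant, J/K
NA   = 6.02214076e23       # Avogadro constant, 1/mol
eps0 = 8.8541878128e-12    # vacuum permittivity, F/m

def debye_length(c_molar, z=1, eps_r=78.5, T=298.15):
    """Debye length (m) for a symmetric z:z electrolyte of molar concentration c_molar."""
    c = c_molar * 1e3 * NA                 # number density of each ion, 1/m^3
    kappa_sq = 2 * c * (z * e) ** 2 / (eps_r * eps0 * kB * T)
    return 1.0 / math.sqrt(kappa_sq)

def dh_potential(r, a, psi_d, kappa):
    """Debye-Huckel potential around a sphere of radius a with Stern potential psi_d:
    psi(r) = psi_d * (a / r) * exp(-kappa * (r - a)), valid for low potentials (< ~25 mV)."""
    return psi_d * (a / r) * math.exp(-kappa * (r - a))

lam = debye_length(0.001)                    # 1 mM 1:1 electrolyte
print(f"Debye length ~ {lam * 1e9:.1f} nm")  # about 9.6 nm

a, psi_d = 100e-9, 0.020                     # 100 nm particle, 20 mV Stern potential
for r_nm in (100, 110, 150, 200):
    psi = dh_potential(r_nm * 1e-9, a, psi_d, 1.0 / lam)
    print(f"r = {r_nm:4d} nm  psi ~ {psi * 1e3:5.2f} mV")
```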
There are several asymptotic models which play important roles in theoretical developments associated with the interfacial DL.
The first one is the "thin DL" model. It assumes that the DL is much thinner than the colloidal particle or capillary radius. This restricts the values of the Debye length and particle radius as follows:
formula_2
This model offers tremendous simplifications for many subsequent applications. Theory of electrophoresis is just one example. The theory of electroacoustic phenomena is another example.
The thin DL model is valid for most aqueous systems because the Debye length is only a few nanometers in such cases. It breaks down only for nano-colloids in solutions with ionic strength close to that of pure water.
The opposing "thick DL" model assumes that the Debye length is larger than particle radius:
formula_3
This model can be useful for some nano-colloids and non-polar fluids, where the Debye length is much larger.
The last model introduces "overlapped DLs". This is important in concentrated dispersions and emulsions when distances between particles become comparable with the Debye length.
Electrical double layers.
The electrical double layer (EDL) is the result of the variation of electric potential near a surface, and has a significant influence on the behaviour of colloids and other surfaces in contact with solutions or solid-state fast ion conductors.
The primary difference between a double layer on an electrode and one on an interface is the mechanism of surface charge formation. With an electrode, it is possible to regulate the surface charge by applying an external electric potential. This application, however, is impossible in colloidal and porous double layers, because for colloidal particles, one does not have access to the interior of the particle to apply a potential difference.
EDLs are analogous to the double layer in plasma.
Differential capacitance.
EDLs have an additional parameter defining their characterization: differential capacitance. Differential capacitance, denoted as "C", is described by the equation below:
formula_4
where σ is the surface charge and ψ is the electric surface potential.
Electron transfer in electrical double layer.
The formation of the electrical double layer (EDL) has traditionally been assumed to be entirely dominated by ion adsorption and redistribution. Given that contact electrification between two solids is dominated by electron transfer, Wang suggested that the EDL is formed by a two-step process. In the first step, when the molecules in the solution first approach a virgin surface that has no pre-existing surface charges, the atoms/molecules in the solution may directly interact with the atoms on the solid surface to form a strong overlap of electron clouds. Electron transfer occurs first, making the "neutral" atoms on the solid surface charged, i.e., forming ions. In the second step, if ions exist in the liquid, such as H+ and OH−, the loosely distributed negative ions in the solution are attracted toward the surface-bonded ions by electrostatic interactions, forming an EDL. Both electron transfer and ion transfer co-exist at the liquid–solid interface.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": " \\sigma^d = -\\sqrt{{8\\varepsilon_0}{\\varepsilon_m}CRT}\\sinh \\frac{F\\Psi^d}{2RT}"
},
{
"math_id": 1,
"text": " {\\Psi}(r) = {\\Psi^d}\\frac{a}{r}\\exp({-\\kappa}(r-a))"
},
{
"math_id": 2,
"text": " \\kappa a \\gg 1 "
},
{
"math_id": 3,
"text": " \\kappa a < 1 "
},
{
"math_id": 4,
"text": " C = \\frac{d \\sigma}{ d \\Psi}"
}
]
| https://en.wikipedia.org/wiki?curid=13566984 |
1356994 | S3 Texture Compression | Texture compression algorithm
S3 Texture Compression (S3TC) (sometimes also called DXTn, DXTC, or BCn) is a group of related lossy texture compression algorithms originally developed by Iourcha et al. of S3 Graphics, Ltd. for use in their Savage 3D computer graphics accelerator. The method of compression is strikingly similar to the previously published Color Cell Compression, which is in turn an adaptation of Block Truncation Coding published in the late 1970s. Unlike some image compression algorithms (e.g. JPEG), S3TC's fixed-rate data compression coupled with the single memory access (cf. Color Cell Compression and some VQ-based schemes) made it well-suited for use in compressing textures in hardware-accelerated 3D computer graphics. Its subsequent inclusion in Microsoft's DirectX 6.0 and OpenGL 1.3 (via the GL_EXT_texture_compression_s3tc extension) led to widespread adoption of the technology among hardware and software makers. While S3 Graphics is no longer a competitor in the graphics accelerator market, license fees have been levied and collected for the use of S3TC technology until October 2017, for example in game consoles and graphics cards. The wide use of S3TC has led to a de facto requirement for OpenGL drivers to support it, but the patent-encumbered status of S3TC presented a major obstacle to open source implementations, while implementation approaches which tried to avoid the patented parts existed.
Patent.
Some (e.g. US 5956431 A) of the multiple USPTO patents on S3 Texture Compression expired on October 2, 2017. At least one continuation patent, US6,775,417, however had a 165-day extension. This continuation patent expired on March 16, 2018.
Codecs.
There are five variations of the S3TC algorithm (named DXT1 through DXT5, referring to the FourCC code assigned by Microsoft to each format), each designed for specific types of image data. All convert a 4×4 block of pixels to a 64-bit or 128-bit quantity, resulting in compression ratios of 6:1 with 24-bit RGB input data or 4:1 with 32-bit RGBA input data. S3TC is a lossy compression algorithm, resulting in image quality degradation, an effect which is minimized by the ability to increase texture resolutions while maintaining the same memory requirements. Hand-drawn cartoon-like images do not compress well, nor do normal map data, both of which usually generate artifacts. ATI's 3Dc compression algorithm is a modification of DXT5 designed to overcome S3TC's shortcomings with regard to normal maps. id Software worked around the normalmap compression issues in Doom 3 by moving the red component into the alpha channel before compression and moving it back during rendering in the pixel shader.
Like many modern image compression algorithms, S3TC only specifies the method used to decompress images, allowing implementers to design the compression algorithm to suit their specific needs, although the patent still covers compression algorithms. The nVidia GeForce 256 through to GeForce 4 cards also used 16-bit interpolation to render DXT1 textures, which resulted in banding when unpacking textures with color gradients. Again, this created an unfavorable impression of texture compression, not related to the fundamentals of the codec itself.
DXT1.
DXT1 (also known as Block Compression 1 or BC1) is the smallest variation of S3TC, storing 16 input pixels in 64 bits of output, consisting of two 16-bit RGB 5:6:5 color values formula_0 and formula_1, and a 4×4 two-bit lookup table.
If formula_2 (compare these colors by interpreting them as two 16-bit unsigned numbers), then two other colors are calculated, such that for each component, formula_3 and formula_4.
This mode operates similarly to mode 0xC0 of the original Apple Video codec.
Otherwise, if formula_5, then formula_6 and formula_7 is transparent black, corresponding to a premultiplied alpha format. This color sometimes causes a black border surrounding the transparent area when linear texture filtering and alpha testing are used, due to colors being interpolated between the color of an opaque texel and a neighbouring black transparent texel.
The lookup table is then consulted to determine the color value for each pixel, with a value of 0 corresponding to formula_0 and a value of 3 corresponding to formula_7.
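A rough Python sketch of this decoding rule is shown below (the byte layout assumed here is the common little-endian convention; it is illustrative rather than a reference implementation):
```python
import struct

def decode_bc1_block(block):
    """Decode one 8-byte DXT1/BC1 block into a 4x4 grid of RGBA tuples (0-255)."""
    c0_raw, c1_raw, indices = struct.unpack("<HHI", block)

    def rgb565(v):                      # expand a 16-bit 5:6:5 colour to 8-bit RGB
        r, g, b = (v >> 11) & 0x1F, (v >> 5) & 0x3F, v & 0x1F
        return (r * 255 // 31, g * 255 // 63, b * 255 // 31)

    c0, c1 = rgb565(c0_raw), rgb565(c1_raw)
    if c0_raw > c1_raw:                 # 4-colour mode: two interpolated colours
        c2 = tuple((2 * a + b) // 3 for a, b in zip(c0, c1)) + (255,)
        c3 = tuple((a + 2 * b) // 3 for a, b in zip(c0, c1)) + (255,)
    else:                               # 3-colour mode plus transparent black
        c2 = tuple((a + b) // 2 for a, b in zip(c0, c1)) + (255,)
        c3 = (0, 0, 0, 0)
    palette = [c0 + (255,), c1 + (255,), c2, c3]

    # Two bits per texel select a palette entry, row by row, lowest bits first.
    return [[palette[(indices >> (2 * (4 * y + x))) & 0b11] for x in range(4)]
            for y in range(4)]

# Example block: endpoints pure red and pure blue, each row using indices 0, 1, 2, 3.
lookup = 0
for y in range(4):
    for x in range(4):
        lookup |= x << (2 * (4 * y + x))    # texel (x, y) uses palette entry x
block = struct.pack("<HHI", 0xF800, 0x001F, lookup)
for row in decode_bc1_block(block):
    print(row)
```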
DXT2 and DXT3.
DXT2 and DXT3 (collectively also known as Block Compression 2 or BC2) converts 16 input pixels (corresponding to a 4x4 pixel block) into 128 bits of output, consisting of 64 bits of alpha channel data (4 bits for each pixel) followed by 64 bits of color data, encoded the same way as DXT1 (with the exception that the 4-color version of the DXT1 algorithm is always used instead of deciding which version to use based on the relative values of formula_0 and formula_1).
In DXT2, the color data is interpreted as being premultiplied by alpha, in DXT3 it is interpreted as not having been premultiplied by alpha. Typically DXT2/3 are well suited to images with sharp alpha transitions, between translucent and opaque areas.
DXT4 and DXT5.
DXT4 and DXT5 (collectively also known as Block Compression 3 or BC3) converts 16 input pixels into 128 bits of output, consisting of 64 bits of alpha channel data (two 8-bit alpha values and a 4×4 3-bit lookup table) followed by 64 bits of color data (encoded the same way as DXT1).
If formula_8, then six other alpha values are calculated, such that formula_9, formula_10, formula_11, formula_12, formula_13, and formula_14.
Otherwise, if formula_15, four other alpha values are calculated such that formula_16, formula_17, formula_18, and formula_19 with formula_20 and formula_21.
The lookup table is then consulted to determine the alpha value for each pixel, with a value of 0 corresponding to formula_22 and a value of 7 corresponding to formula_23. DXT4's color data is premultiplied by alpha, whereas DXT5's is not. Because DXT4/5 use an interpolated alpha scheme, they generally produce superior results for alpha (transparency) gradients than DXT2/3.
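In the same illustrative spirit, the alpha half of a BC3/DXT5 block can be decoded as sketched below (again assuming the usual little-endian bit packing; the example endpoints are arbitrary):
```python
def decode_bc3_alpha(block8):
    """Decode the 8-byte alpha half of a DXT5/BC3 block into 16 alpha values (row major)."""
    a0, a1 = block8[0], block8[1]
    bits = int.from_bytes(block8[2:8], "little")   # 16 three-bit indices

    if a0 > a1:    # 8-value mode: six interpolated alpha values
        table = [a0, a1] + [((7 - i) * a0 + i * a1) // 7 for i in range(1, 7)]
    else:          # 6-value mode plus fully transparent (0) and fully opaque (255)
        table = [a0, a1] + [((5 - i) * a0 + i * a1) // 5 for i in range(1, 5)] + [0, 255]

    return [table[(bits >> (3 * i)) & 0b111] for i in range(16)]

# Example: endpoints 240 and 16, with texel i selecting table entry i % 8.
indices = 0
for i in range(16):
    indices |= (i % 8) << (3 * i)
block = bytes([240, 16]) + indices.to_bytes(6, "little")
print(decode_bc3_alpha(block))
```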
Further variants.
BC4 and BC5.
BC4 and BC5 (Block Compression 4 and 5) are added in Direct3D 10. They reuse the alpha channel encoding found in DXT4/5 (BC3).
BC6H and BC7.
BC6H (sometimes BC6) and BC7 (Block Compression 6H and 7) are added in Direct3D 11.
BC6H and BC7 have a much more complex algorithm with a selection of encoding modes. The quality is much better as a result. These two modes are also specified much more exactly, with ranges of accepted deviation. Earlier BCn modes decode slightly differently among GPU vendors.
Data preconditioning.
BCn textures can be further compressed for on-disk storage and distribution (texture supercompression). An application would decompress this extra layer and send the BCn data to the GPU as usual.
BCn can be combined with Oodle Texture, a lossy preprocessor that modifies the input texture so that the BCn output is more easily compressed by a LZ77 compressor (rate-distortion optimization). BC7 specifically can also use "bc7prep", a lossless pass to re-encode the texture in a more compressible form (requiring its inverse at decompression).
crunch is another tool that performs RDO and optionally further re-encoding.
In 2021, Microsoft produced a "BCPack" compression algorithm specifically for BCn-compressed textures. Xbox series X and S have hardware support for decompressing BCPack streams.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "c_0"
},
{
"math_id": 1,
"text": "c_1"
},
{
"math_id": 2,
"text": "c_0 > c_1"
},
{
"math_id": 3,
"text": "c_2 = {2 \\over 3} c_0 + {1 \\over 3} c_1"
},
{
"math_id": 4,
"text": "c_3 = {1 \\over 3} c_0 + {2 \\over 3} c_1"
},
{
"math_id": 5,
"text": "c_0 \\le c_1"
},
{
"math_id": 6,
"text": "c_2 = {1 \\over 2} c_0 + {1 \\over 2} c_1"
},
{
"math_id": 7,
"text": "c_3"
},
{
"math_id": 8,
"text": "\\alpha_0 > \\alpha_1"
},
{
"math_id": 9,
"text": "\\alpha_2 = {{6\\alpha_0 + 1\\alpha_1} \\over 7}"
},
{
"math_id": 10,
"text": "\\alpha_3 = {{5\\alpha_0 + 2\\alpha_1} \\over 7}"
},
{
"math_id": 11,
"text": "\\alpha_4 = {{4\\alpha_0 + 3\\alpha_1} \\over 7}"
},
{
"math_id": 12,
"text": "\\alpha_5 = {{3\\alpha_0 + 4\\alpha_1} \\over 7}"
},
{
"math_id": 13,
"text": "\\alpha_6 = {{2\\alpha_0 + 5\\alpha_1} \\over 7}"
},
{
"math_id": 14,
"text": "\\alpha_7 = {{1\\alpha_0 + 6\\alpha_1} \\over 7}"
},
{
"math_id": 15,
"text": "\\alpha_0 \\le \\alpha_1"
},
{
"math_id": 16,
"text": "\\alpha_2 = {{4\\alpha_0 + 1\\alpha_1} \\over 5}"
},
{
"math_id": 17,
"text": "\\alpha_3 = {{3\\alpha_0 + 2\\alpha_1} \\over 5}"
},
{
"math_id": 18,
"text": "\\alpha_4 = {{2\\alpha_0 + 3\\alpha_1} \\over 5}"
},
{
"math_id": 19,
"text": "\\alpha_5 = {{1\\alpha_0 + 4\\alpha_1} \\over 5}"
},
{
"math_id": 20,
"text": "\\alpha_6 = 0"
},
{
"math_id": 21,
"text": "\\alpha_7 = 255"
},
{
"math_id": 22,
"text": "\\alpha_0"
},
{
"math_id": 23,
"text": "\\alpha_7"
}
]
| https://en.wikipedia.org/wiki?curid=1356994 |
13576258 | Transmission delay | Time delay in networking caused by the data rate of a link
In a network based on packet switching, transmission delay (or store-and-forward delay, also known as packetization delay or serialization delay) is the amount of time required to push all the packet's bits into the wire. In other words, this is the delay caused by the data-rate of the link.
Transmission delay is a function of the packet's length and has nothing to do with the distance between the two nodes. This delay is proportional to the packet's length in bits. It is given by the following formula:
formula_0 seconds
where:
formula_1 is the transmission delay in seconds;
formula_2 is the number of bits;
formula_3 is the rate of transmission (say, in bits per second).
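As a minimal illustration (the values below are chosen arbitrarily):
```python
def transmission_delay(packet_bits, link_rate_bps):
    """D_T = N / R: the time needed to push all N bits of a packet onto the link."""
    return packet_bits / link_rate_bps

# Illustrative example: a 1500-byte frame on a 100 Mbit/s link
delay = transmission_delay(packet_bits=1500 * 8, link_rate_bps=100e6)
print(f"{delay * 1e6:.0f} microseconds")   # 120 microseconds
```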
Most packet switched networks use store-and-forward transmission at the input of the link. A switch using store-and-forward transmission will receive (save) the entire packet to the buffer and check it for CRC errors or other problems before sending the first bit of the packet into the outbound link. Thus, store-and-forward packet switches introduce a store-and-forward delay at the input to each link along the packet's route.
*Processing delay
*Queuing delay
*Propagation delay | [
{
"math_id": 0,
"text": "D_T = N/R"
},
{
"math_id": 1,
"text": "D_T"
},
{
"math_id": 2,
"text": "N"
},
{
"math_id": 3,
"text": "R"
}
]
| https://en.wikipedia.org/wiki?curid=13576258 |
13576575 | Isoionic point | Term used in protein sciences
The isoionic point is the pH value at which a zwitterion molecule has an equal number of positive and negative charges and no adherent ionic species. It was first defined by S.P.L. Sørensen, Kaj Ulrik Linderstrøm-Lang and Ellen Lund in 1926 and is mainly a term used in protein sciences.
It is different from the isoelectric point (p"I") in that p"I" is the pH value at which the net charge of the molecule, "including" bound ions, is zero, whereas the isoionic point is the pH at which the net charge is zero in a deionized solution. Thus, the isoelectric and isoionic points are equal when the concentration of charged species is zero.
For a diprotic acid, the hydrogen ion concentration can be found at the isoionic point using the following equation
formula_0
where:
formula_1 the hydrogen ion concentration at the isoionic point;
formula_2 the first acid dissociation constant;
formula_3 the second acid dissociation constant;
formula_4 the autoionization constant of water;
formula_5 the analytical concentration of the diprotic acid.
Note that if formula_6 then formula_7 and if formula_8 then formula_9. Therefore, under these conditions, the equation simplifies to
formula_10
The equation can be further simplified to calculate the pH by taking the negative logarithm of both sides to yield
formula_11
which shows that under certain conditions, the isoionic and isoelectric point are similar.
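A short numerical check of these simplifications is sketched below (the pK values are illustrative, roughly those of glycine):
```python
import math

def isoionic_hplus(K1, K2, C, Kw=1e-14):
    """Exact expression for [H+] at the isoionic point of a diprotic acid."""
    return math.sqrt((K1 * K2 * C + K1 * Kw) / (K1 + C))

pK1, pK2, C = 2.34, 9.60, 0.1          # illustrative values, 0.1 M concentration
K1, K2 = 10.0 ** -pK1, 10.0 ** -pK2

pH_exact  = -math.log10(isoionic_hplus(K1, K2, C))
pH_approx = (pK1 + pK2) / 2            # valid when K1*K2*C >> K1*Kw and C >> K1

print(f"exact pH   = {pH_exact:.2f}")   # about 5.98
print(f"approx. pH = {pH_approx:.2f}")  # 5.97
```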
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "[H^+]=\\sqrt{{K_1 K_2 C + K_1 K_w} \\over {K_1 + C}}"
},
{
"math_id": 1,
"text": "[H^+]="
},
{
"math_id": 2,
"text": "K_1="
},
{
"math_id": 3,
"text": "K_2="
},
{
"math_id": 4,
"text": "K_w="
},
{
"math_id": 5,
"text": "C="
},
{
"math_id": 6,
"text": "K_1 K_2 C \\gg K_1 K_w"
},
{
"math_id": 7,
"text": "K_1 K_2 C + K_1 K_w \\approx K_1 K_2 C"
},
{
"math_id": 8,
"text": "C \\gg K_1"
},
{
"math_id": 9,
"text": "K_1 + C \\approx C"
},
{
"math_id": 10,
"text": "[H^+]=\\sqrt{{K_1 K_2 C + K_1 K_w} \\over {K_1 + C}} \\approx \\sqrt{{K_1 K_2 C} \\over {C}} \\approx \\sqrt{K_1 K_2}"
},
{
"math_id": 11,
"text": "pH = {{pK_1 + pK_2} \\over {2}}"
}
]
| https://en.wikipedia.org/wiki?curid=13576575 |
13576645 | Eigendecomposition of a matrix | Matrix decomposition
In linear algebra, eigendecomposition is the factorization of a matrix into a canonical form, whereby the matrix is represented in terms of its eigenvalues and eigenvectors. Only diagonalizable matrices can be factorized in this way. When the matrix being factorized is a normal or real symmetric matrix, the decomposition is called "spectral decomposition", derived from the spectral theorem.
Fundamental theory of matrix eigenvectors and eigenvalues.
A (nonzero) vector v of dimension N is an eigenvector of a square "N" × "N" matrix A if it satisfies a linear equation of the form
formula_0
for some scalar λ. Then λ is called the eigenvalue corresponding to v. Geometrically speaking, the eigenvectors of A are the vectors that A merely elongates or shrinks, and the amount that they elongate/shrink by is the eigenvalue. The above equation is called the eigenvalue equation or the eigenvalue problem.
This yields an equation for the eigenvalues
formula_1
We call "p"("λ") the characteristic polynomial, and the equation, called the characteristic equation, is an Nth-order polynomial equation in the unknown λ. This equation will have Nλ distinct solutions, where 1 ≤ "Nλ" ≤ "N". The set of solutions, that is, the eigenvalues, is called the spectrum of A.
If the field of scalars is algebraically closed, then we can factor p as
formula_2
The integer ni is termed the algebraic multiplicity of eigenvalue λi. The algebraic multiplicities sum to N: formula_3
For each eigenvalue λi, we have a specific eigenvalue equation
formula_4
There will be 1 ≤ "m""i" ≤ "n""i" linearly independent solutions to each eigenvalue equation. The linear combinations of the "m""i" solutions (except the one which gives the zero vector) are the eigenvectors associated with the eigenvalue "λ""i". The integer "m""i" is termed the geometric multiplicity of "λ""i". It is important to keep in mind that the algebraic multiplicity "n""i" and geometric multiplicity "m""i" may or may not be equal, but we always have "m""i" ≤ "n""i". The simplest case is of course when "m""i" = "n""i" = 1. The total number of linearly independent eigenvectors, "N"v, can be calculated by summing the geometric multiplicities
formula_5
The eigenvectors can be indexed by eigenvalues, using a double index, with v"ij" being the jth eigenvector for the ith eigenvalue. The eigenvectors can also be indexed using the simpler notation of a single index v"k", with "k" = 1, 2, ..., "N"v.
Eigendecomposition of a matrix.
Let A be a square "n" × "n" matrix with n linearly independent eigenvectors qi (where "i" = 1, ..., "n"). Then A can be factorized as
formula_6
where Q is the square "n" × "n" matrix whose ith column is the eigenvector qi of A, and Λ is the diagonal matrix whose diagonal elements are the corresponding eigenvalues, "Λii" = "λi". Note that only diagonalizable matrices can be factorized in this way. For example, the defective matrix formula_7 (which is a shear matrix) cannot be diagonalized.
The n eigenvectors qi are usually normalized, but they don't have to be. A non-normalized set of n eigenvectors, vi can also be used as the columns of Q. That can be understood by noting that the magnitude of the eigenvectors in Q gets canceled in the decomposition by the presence of Q−1. If one of the eigenvalues "λi" has multiple linearly independent eigenvectors (that is, the geometric multiplicity of "λi" is greater than 1), then these eigenvectors for this eigenvalue "λi" can be chosen to be mutually orthogonal; however, if two eigenvectors belong to two different eigenvalues, it may be impossible for them to be orthogonal to each other (see Example below). One special case is that if A is a normal matrix, then by the spectral theorem, it's always possible to diagonalize A in an orthonormal basis {qi}.
The decomposition can be derived from the fundamental property of eigenvectors:
formula_8
The linearly independent eigenvectors qi with nonzero eigenvalues form a basis (not necessarily orthonormal) for all possible products "A"x, for x ∈ C"n", which is the same as the image (or range) of the corresponding matrix transformation, and also the column space of the matrix A. The number of linearly independent eigenvectors qi with nonzero eigenvalues is equal to the rank of the matrix A, and also the dimension of the image (or range) of the corresponding matrix transformation, as well as its column space.
The linearly independent eigenvectors qi with an eigenvalue of zero form a basis (which can be chosen to be orthonormal) for the null space (also known as the kernel) of the matrix transformation A.
Example.
The 2 × 2 real matrix A
formula_9
may be decomposed into a diagonal matrix through multiplication of a non-singular matrix Q
formula_10
Then
formula_11
for some real diagonal matrix formula_12.
Multiplying both sides of the equation on the left by Q:
formula_13
The above equation can be decomposed into two simultaneous equations:
formula_14
Factoring out the eigenvalues x and y:
formula_15
Letting
formula_16
this gives us two vector equations:
formula_17
And can be represented by a single vector equation involving two solutions as eigenvalues:
formula_18
where λ represents the two eigenvalues x and y, and u represents the vectors a and b.
Shifting "λ"u to the left hand side and factoring u out
formula_19
Since Q is non-singular, it is essential that u is nonzero. Therefore,
formula_20
Thus
formula_21
giving us the solutions of the eigenvalues for the matrix A as "λ" = 1 or "λ" = 3, and the resulting diagonal matrix from the eigendecomposition of A is thus formula_22.
Putting the solutions back into the above simultaneous equations
formula_23
Solving the equations, we have
formula_24
Thus the matrix Q required for the eigendecomposition of A is
formula_25
that is:
formula_26
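The decomposition can be checked numerically; the short NumPy sketch below uses an illustrative 2 × 2 matrix with the same eigenvalues (1 and 3), not necessarily the exact matrix of the example above.
```python
import numpy as np

# An illustrative diagonalizable matrix with eigenvalues 1 and 3
A = np.array([[1.0, 0.0],
              [1.0, 3.0]])

eigenvalues, Q = np.linalg.eig(A)     # columns of Q are (normalized) eigenvectors
Lam = np.diag(eigenvalues)

print(eigenvalues)                                  # [1. 3.] (order may vary)
print(np.allclose(Q @ Lam @ np.linalg.inv(Q), A))   # A = Q Lambda Q^{-1}  -> True
```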
Matrix inverse via eigendecomposition.
If a matrix A can be eigendecomposed and if none of its eigenvalues are zero, then A is invertible and its inverse is given by
formula_27
If formula_28 is a symmetric matrix, since formula_29 is formed from the eigenvectors of formula_28, formula_29 is guaranteed to be an orthogonal matrix, therefore formula_30. Furthermore, because Λ is a diagonal matrix, its inverse is easy to calculate:
formula_31
Practical implications.
When eigendecomposition is used on a matrix of measured, real data, the inverse may be less valid when all eigenvalues are used unmodified in the form above. This is because as eigenvalues become relatively small, their contribution to the inversion is large. Those near zero or at the "noise" of the measurement system will have undue influence and could hamper solutions (detection) using the inverse.
Two mitigations have been proposed: truncating small or zero eigenvalues, and extending the lowest reliable eigenvalue to those below it. See also Tikhonov regularization as a statistically motivated but biased method for rolling off eigenvalues as they become dominated by noise.
The first mitigation method is similar to a sparse sample of the original matrix, removing components that are not considered valuable. However, if the solution or detection process is near the noise level, truncating may remove components that influence the desired solution.
The second mitigation extends the eigenvalue so that lower values have much less influence over inversion, but do still contribute, such that solutions near the noise will still be found.
The reliable eigenvalue can be found by assuming that eigenvalues of extremely similar and low value are a good representation of measurement noise (which is assumed low for most systems).
If the eigenvalues are rank-sorted by value, then the reliable eigenvalue can be found by minimization of the Laplacian of the sorted eigenvalues:
formula_32
where the eigenvalues are subscripted with an s to denote being sorted. The position of the minimization is the lowest reliable eigenvalue. In measurement systems, the square root of this reliable eigenvalue is the average noise over the components of the system.
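A sketch of this heuristic on synthetic data (the spectrum and function name below are invented for illustration):
```python
import numpy as np

def lowest_reliable_eigenvalue(eigenvalues):
    """Rank-sort the eigenvalues and return the one at which the magnitude of the
    discrete Laplacian (second difference) of the sorted sequence is minimal."""
    lam = np.sort(eigenvalues)[::-1]                   # sorted, largest first
    laplacian = np.abs(lam[:-2] - 2 * lam[1:-1] + lam[2:])
    return lam[1 + np.argmin(laplacian)]               # +1: the Laplacian is centred

# Synthetic spectrum: three strong components on top of a nearly flat noise floor
rng = np.random.default_rng(0)
spectrum = np.concatenate([[100.0, 40.0, 10.0],
                           0.25 + 0.01 * rng.standard_normal(20)])
reliable = lowest_reliable_eigenvalue(spectrum)
print(reliable, "-> noise level about", np.sqrt(reliable))
```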
Functional calculus.
The eigendecomposition allows for much easier computation of power series of matrices. If "f" ("x") is given by
formula_33
then we know that
formula_34
Because Λ is a diagonal matrix, functions of Λ are very easy to calculate:
formula_35
The off-diagonal elements of "f" (Λ) are zero; that is, "f" (Λ) is also a diagonal matrix. Therefore, calculating "f" (A) reduces to just calculating the function on each of the eigenvalues.
A similar technique works more generally with the holomorphic functional calculus, using
formula_36
from above. Once again, we find that
formula_35
Examples.
formula_37
which are examples for the functions formula_38. Furthermore, formula_39 is the matrix exponential.
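This reduction to scalar functions of the eigenvalues can be sketched as follows (NumPy, with an illustrative symmetric matrix and helper name):
```python
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 2.0]])            # symmetric, hence orthogonally diagonalizable

eigenvalues, Q = np.linalg.eigh(A)    # A = Q diag(eigenvalues) Q^T

def apply_function(f):
    """Apply a scalar function to A through its eigendecomposition."""
    return Q @ np.diag(f(eigenvalues)) @ Q.T

cube_A = apply_function(lambda x: x ** 3)
sqrt_A = apply_function(np.sqrt)      # eigenvalues (1 and 3) are positive
exp_A  = apply_function(np.exp)       # the matrix exponential of A

print(np.allclose(cube_A, np.linalg.matrix_power(A, 3)))  # True
print(np.allclose(sqrt_A @ sqrt_A, A))                    # True
```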
Decomposition for spectral matrices.
Spectral matrices are matrices that possess distinct eigenvalues and a complete set of eigenvectors. This characteristic allows spectral matrices to be fully diagonalizable, meaning they can be decomposed into simpler forms using eigendecomposition. This decomposition process reveals fundamental insights into the matrix's structure and behavior, particularly in fields such as quantum mechanics, signal processing, and numerical analysis.
Normal matrices.
A complex-valued square matrix formula_40 is normal (meaning formula_41, where formula_42 is the conjugate transpose) if and only if it can be decomposed as formula_43, where formula_44 is a unitary matrix (meaning formula_45) and formula_46 diag(formula_47) is a diagonal matrix. The columns formula_48 of formula_44 form an orthonormal basis and are eigenvectors of formula_28 with corresponding eigenvalues formula_47.
For example, consider the 2 x 2 normal matrix formula_49.
The eigenvalues are formula_50 and formula_51.
The (normalized) eigenvectors corresponding to these eigenvalues are formula_52 and formula_53.
The diagonalization is formula_43, where formula_54, formula_46formula_55 and formula_56formula_57.
The verification is formula_58formula_57formula_55formula_57formula_59.
This example illustrates the process of diagonalizing a normal matrix formula_28 by finding its eigenvalues and eigenvectors, forming the unitary matrix formula_44, the diagonal matrix formula_60, and verifying the decomposition.
Real symmetric matrices.
As a special case, for every "n" × "n" real symmetric matrix, the eigenvalues are real and the eigenvectors can be chosen real and orthonormal. Thus a real symmetric matrix A can be decomposed as formula_61, where Q is an orthogonal matrix whose columns are the real, orthonormal eigenvectors of A, and Λ is a diagonal matrix whose entries are the eigenvalues of A.
Diagonalizable Matrices.
Diagonalizable matrices can be decomposed using eigendecomposition, provided they have a full set of linearly independent eigenvectors. They can be expressed as formula_62, where formula_63 is a matrix whose columns are eigenvectors of formula_28 and formula_64 is a diagonal matrix consisting of the corresponding eigenvalues of formula_28.
Positive Definite Matrices.
Positive definite matrices are matrices for which all eigenvalues are positive. They can be decomposed as formula_65 using the Cholesky decomposition, where formula_66 is a lower triangular matrix.
Unitary and Hermitian Matrices.
Unitary matrices satisfy formula_67 (real case) or formula_68 (complex case), where formula_69 denotes the transpose and formula_70 denotes the conjugate transpose. They can be diagonalized by unitary transformations.
Hermitian matrices satisfy formula_71, where formula_72 denotes the conjugate transpose. They can be diagonalized using unitary or orthogonal matrices.
Numerical computations.
Numerical computation of eigenvalues.
Suppose that we want to compute the eigenvalues of a given matrix. If the matrix is small, we can compute them symbolically using the characteristic polynomial. However, this is often impossible for larger matrices, in which case we must use a numerical method.
In practice, eigenvalues of large matrices are not computed using the characteristic polynomial. Computing the polynomial becomes expensive in itself, and exact (symbolic) roots of a high-degree polynomial can be difficult to compute and express: the Abel–Ruffini theorem implies that the roots of high-degree (5 or above) polynomials cannot in general be expressed simply using nth roots. Therefore, general algorithms to find eigenvectors and eigenvalues are iterative.
Iterative numerical algorithms for approximating roots of polynomials exist, such as Newton's method, but in general it is impractical to compute the characteristic polynomial and then apply these methods. One reason is that small round-off errors in the coefficients of the characteristic polynomial can lead to large errors in the eigenvalues and eigenvectors: the roots are an extremely ill-conditioned function of the coefficients.
A simple and accurate iterative method is the power method: a random vector v is chosen and a sequence of unit vectors is computed as
formula_77
This sequence will almost always converge to an eigenvector corresponding to the eigenvalue of greatest magnitude, provided that v has a nonzero component of this eigenvector in the eigenvector basis (and also provided that there is only one eigenvalue of greatest magnitude). This simple algorithm is useful in some practical applications; for example, Google uses it to calculate the page rank of documents in their search engine. Also, the power method is the starting point for many more sophisticated algorithms. For instance, by keeping not just the last vector in the sequence, but instead looking at the span of "all" the vectors in the sequence, one can get a better (faster converging) approximation for the eigenvector, and this idea is the basis of Arnoldi iteration. Alternatively, the important QR algorithm is also based on a subtle transformation of a power method.
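A minimal sketch of this iteration (the matrix and function name are illustrative):
```python
import numpy as np

def power_method(A, num_iterations=1000, seed=0):
    """Approximate the dominant eigenpair of A by repeated multiplication and normalization."""
    rng = np.random.default_rng(seed)
    v = rng.standard_normal(A.shape[0])
    v /= np.linalg.norm(v)
    for _ in range(num_iterations):
        w = A @ v
        v = w / np.linalg.norm(w)          # v_{k+1} = A v_k / ||A v_k||
    eigenvalue = v @ A @ v                 # Rayleigh quotient (v has unit norm)
    return eigenvalue, v

A = np.array([[2.0, 1.0],
              [1.0, 3.0]])
lam, v = power_method(A)
print(lam)                                 # about 3.618, the eigenvalue of largest magnitude
print(np.allclose(A @ v, lam * v))         # True
```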
Numerical computation of eigenvectors.
Once the eigenvalues are computed, the eigenvectors could be calculated by solving the equation
formula_78
using Gaussian elimination or any other method for solving matrix equations.
However, in practical large-scale eigenvalue methods, the eigenvectors are usually computed in other ways, as a byproduct of the eigenvalue computation. In power iteration, for example, the eigenvector is actually computed before the eigenvalue (which is typically computed by the Rayleigh quotient of the eigenvector). In the QR algorithm for a Hermitian matrix (or any normal matrix), the orthonormal eigenvectors are obtained as a product of the Q matrices from the steps in the algorithm. (For more general matrices, the QR algorithm yields the Schur decomposition first, from which the eigenvectors can be obtained by a backsubstitution procedure.) For Hermitian matrices, the Divide-and-conquer eigenvalue algorithm is more efficient than the QR algorithm if both eigenvectors and eigenvalues are desired.
Additional topics.
Generalized eigenspaces.
Recall that the "geometric" multiplicity of an eigenvalue can be described as the dimension of the associated eigenspace, the nullspace of "λ"I − A. The algebraic multiplicity can also be thought of as a dimension: it is the dimension of the associated generalized eigenspace (1st sense), which is the nullspace of the matrix ("λ"I − A)^"k" for any sufficiently large "k". That is, it is the space of "generalized eigenvectors" (first sense), where a generalized eigenvector is any vector which "eventually" becomes 0 if "λ"I − A is applied to it enough times successively. Any eigenvector is a generalized eigenvector, and so each eigenspace is contained in the associated generalized eigenspace. This provides an easy proof that the geometric multiplicity is always less than or equal to the algebraic multiplicity.
This usage should not be confused with the "generalized eigenvalue problem" described below.
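A small NumPy/SciPy illustration of the distinction just described, using a classic defective 2×2 matrix with a single eigenvalue of algebraic multiplicity two but geometric multiplicity one; the numerically computed null-space dimensions play the role of the two multiplicities.

```python
import numpy as np
from scipy.linalg import null_space

A = np.array([[1.0, 1.0],
              [0.0, 1.0]])                  # single eigenvalue 1, algebraic multiplicity 2
lam = 1.0
M = A - lam * np.eye(2)
eigenspace = null_space(M)                   # ordinary eigenvectors of A for lam
generalized = null_space(M @ M)              # generalized eigenvectors: nullspace of (A - lam I)^2
print(eigenspace.shape[1], generalized.shape[1])   # 1, 2 -> geometric <= algebraic multiplicity
```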
Conjugate eigenvector.
A conjugate eigenvector or coneigenvector is a vector sent after transformation to a scalar multiple of its conjugate, where the scalar is called the conjugate eigenvalue or coneigenvalue of the linear transformation. The coneigenvectors and coneigenvalues represent essentially the same information and meaning as the regular eigenvectors and eigenvalues, but arise when an alternative coordinate system is used. The corresponding equation is
formula_79
For example, in coherent electromagnetic scattering theory, the linear transformation A represents the action performed by the scattering object, and the eigenvectors represent polarization states of the electromagnetic wave. In optics, the coordinate system is defined from the wave's viewpoint, known as the Forward Scattering Alignment (FSA), and gives rise to a regular eigenvalue equation, whereas in radar, the coordinate system is defined from the radar's viewpoint, known as the Back Scattering Alignment (BSA), and gives rise to a coneigenvalue equation.
Generalized eigenvalue problem.
A generalized eigenvalue problem (second sense) is the problem of finding a (nonzero) vector v that obeys
formula_80
where A and B are matrices. If v obeys this equation, with some λ, then we call v the "generalized eigenvector" of A and B (in the second sense), and λ is called the "generalized eigenvalue" of A and B (in the second sense) which corresponds to the generalized eigenvector v. The possible values of λ must obey the following equation
formula_81
If "n" linearly independent vectors {v1, …, v"n"} can be found, such that for every "i" ∈ {1, …, "n"}, Av"i" = "λi"Bv"i", then we define the matrices P and D such that
formula_82
formula_83
Then the following equality holds
formula_84
And the proof is
formula_85
And since P is invertible, we multiply the equation from the right by its inverse, finishing the proof.
The set of matrices of the form A − "λ"B, where λ is a complex number, is called a "pencil"; the term "matrix pencil" can also refer to the pair (A, B) of matrices.
If B is invertible, then the original problem can be written in the form
formula_86
which is a standard eigenvalue problem. However, in most situations it is preferable not to perform the inversion, but rather to solve the generalized eigenvalue problem as stated originally. This is especially important if A and B are Hermitian matrices, since in this case B−1A is not generally Hermitian and important properties of the solution are no longer apparent.
If A and B are both symmetric or Hermitian, and B is also a positive-definite matrix, the eigenvalues "λi" are real and eigenvectors v1 and v2 with distinct eigenvalues are B-orthogonal (v1*Bv2 = 0). In this case, eigenvectors can be chosen so that the matrix P defined above satisfies
formula_87 or formula_88
and there exists a basis of generalized eigenvectors (it is not a defective problem). This case is sometimes called a "Hermitian definite pencil" or "definite pencil".
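A short SciPy sketch of the Hermitian definite pencil case: eigh solves A v = λ B v directly, without forming B−1A, and returns eigenvectors that are B-orthonormal as described above; the matrices are arbitrary symmetric examples with B positive definite.

```python
import numpy as np
from scipy.linalg import eigh

A = np.array([[2.0, 1.0],
              [1.0, 3.0]])        # symmetric
B = np.array([[2.0, 0.5],
              [0.5, 1.0]])        # symmetric positive definite

lam, P = eigh(A, B)               # generalized eigenvalues and eigenvectors of A v = lam B v
print(lam)                        # real eigenvalues
print(P.T @ B @ P)                # ~ identity: columns of P are B-orthonormal
print(np.allclose(A @ P, B @ P @ np.diag(lam)))   # A P = B P D holds
```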
Notes.
<templatestyles src="Reflist/styles.css" />
References.
<templatestyles src="Refbegin/styles.css" /> | [
{
"math_id": 0,
"text": "\\mathbf{A} \\mathbf{v} = \\lambda \\mathbf{v}"
},
{
"math_id": 1,
"text": " p\\left(\\lambda\\right) = \\det\\left(\\mathbf{A} - \\lambda \\mathbf{I}\\right) = 0. "
},
{
"math_id": 2,
"text": "p(\\lambda) = \\left(\\lambda - \\lambda_1\\right)^{n_1}\\left(\\lambda - \\lambda_2\\right)^{n_2} \\cdots \\left(\\lambda-\\lambda_{N_{\\lambda}}\\right)^{n_{N_{\\lambda}}} = 0. "
},
{
"math_id": 3,
"text": "\\sum_{i=1}^{N_\\lambda}{n_i} = N."
},
{
"math_id": 4,
"text": "\\left(\\mathbf{A} - \\lambda_i \\mathbf{I}\\right)\\mathbf{v} = 0. "
},
{
"math_id": 5,
"text": "\\sum_{i=1}^{N_{\\lambda}}{m_i} = N_{\\mathbf{v}}."
},
{
"math_id": 6,
"text": "\\mathbf{A}=\\mathbf{Q}\\mathbf{\\Lambda}\\mathbf{Q}^{-1} "
},
{
"math_id": 7,
"text": "\\left[ \\begin{smallmatrix} 1 & 1 \\\\ 0 & 1 \\end{smallmatrix} \\right]"
},
{
"math_id": 8,
"text": "\\begin{align}\n\\mathbf{A} \\mathbf{v} &= \\lambda \\mathbf{v} \\\\\n\\mathbf{A} \\mathbf{Q} &= \\mathbf{Q} \\mathbf{\\Lambda} \\\\\n\\mathbf{A} &= \\mathbf{Q}\\mathbf{\\Lambda}\\mathbf{Q}^{-1} .\n\\end{align}"
},
{
"math_id": 9,
"text": "\\mathbf{A} = \\begin{bmatrix} 1 & 0 \\\\ 1 & 3 \\\\ \\end{bmatrix}"
},
{
"math_id": 10,
"text": "\\mathbf{Q} = \\begin{bmatrix}\n a & b \\\\\n c & d\n \\end{bmatrix} \\in \\mathbb{R}^{2\\times2}.\n"
},
{
"math_id": 11,
"text": "\\begin{bmatrix}\n a & b \\\\\n c & d \n \\end{bmatrix}^{-1}\\begin{bmatrix}\n 1 & 0 \\\\\n 1 & 3\n \\end{bmatrix}\\begin{bmatrix}\n a & b \\\\\n c & d\n \\end{bmatrix} =\n \\begin{bmatrix}\n x & 0 \\\\\n 0 & y\n \\end{bmatrix},\n"
},
{
"math_id": 12,
"text": "\\left[ \\begin{smallmatrix} x & 0 \\\\ 0 & y \\end{smallmatrix} \\right]"
},
{
"math_id": 13,
"text": "\\begin{bmatrix} 1 & 0 \\\\ 1 & 3 \\end{bmatrix} \\begin{bmatrix} a & b \\\\ c & d \\end{bmatrix} = \\begin{bmatrix} a & b \\\\ c & d \\end{bmatrix} \\begin{bmatrix} x & 0 \\\\ 0 & y \\end{bmatrix}."
},
{
"math_id": 14,
"text": " \\begin{cases}\n\\begin{bmatrix} 1 & 0\\\\ 1 & 3 \\end{bmatrix} \\begin{bmatrix} a \\\\ c \\end{bmatrix} = \\begin{bmatrix} ax \\\\ cx \\end{bmatrix} \\\\[1.2ex]\n\\begin{bmatrix} 1 & 0\\\\ 1 & 3 \\end{bmatrix} \\begin{bmatrix} b \\\\ d \\end{bmatrix} = \\begin{bmatrix} by \\\\ dy \\end{bmatrix}\n\\end{cases} ."
},
{
"math_id": 15,
"text": " \\begin{cases}\n\\begin{bmatrix} 1 & 0\\\\ 1 & 3 \\end{bmatrix} \\begin{bmatrix} a \\\\ c \\end{bmatrix} = x\\begin{bmatrix} a \\\\ c \\end{bmatrix} \\\\[1.2ex]\n\\begin{bmatrix} 1 & 0\\\\ 1 & 3 \\end{bmatrix} \\begin{bmatrix} b \\\\ d \\end{bmatrix} = y\\begin{bmatrix} b \\\\ d \\end{bmatrix}\n\\end{cases} "
},
{
"math_id": 16,
"text": "\\mathbf{a} = \\begin{bmatrix} a \\\\ c \\end{bmatrix}, \\quad \\mathbf{b} = \\begin{bmatrix} b \\\\ d \\end{bmatrix},"
},
{
"math_id": 17,
"text": " \\begin{cases}\n\\mathbf{A} \\mathbf{a} = x \\mathbf{a} \\\\\n\\mathbf{A} \\mathbf{b} = y \\mathbf{b}\n\\end{cases}"
},
{
"math_id": 18,
"text": "\\mathbf{A} \\mathbf{u} = \\lambda \\mathbf{u}"
},
{
"math_id": 19,
"text": "\\left(\\mathbf{A} - \\lambda \\mathbf{I}\\right) \\mathbf{u} = \\mathbf{0}"
},
{
"math_id": 20,
"text": "\\det(\\mathbf{A} - \\lambda \\mathbf{I}) = 0"
},
{
"math_id": 21,
"text": "(1- \\lambda)(3 - \\lambda) = 0"
},
{
"math_id": 22,
"text": "\\left[ \\begin{smallmatrix} 1 & 0 \\\\ 0 & 3 \\end{smallmatrix} \\right]"
},
{
"math_id": 23,
"text": " \\begin{cases}\n\\begin{bmatrix} 1 & 0 \\\\ 1 & 3 \\end{bmatrix} \\begin{bmatrix} a \\\\ c \\end{bmatrix} = 1\\begin{bmatrix} a \\\\ c \\end{bmatrix} \\\\[1.2ex]\n\\begin{bmatrix} 1 & 0 \\\\ 1 & 3 \\end{bmatrix} \\begin{bmatrix} b \\\\ d \\end{bmatrix} = 3\\begin{bmatrix} b \\\\ d \\end{bmatrix}\n\\end{cases} "
},
{
"math_id": 24,
"text": "a = -2c \\quad\\text{and} \\quad b = 0, \\qquad c,d \\in \\mathbb{R}."
},
{
"math_id": 25,
"text": "\\mathbf{Q} = \\begin{bmatrix} -2c & 0 \\\\ c & d \\end{bmatrix},\\qquad c, d\\in \\mathbb{R}, "
},
{
"math_id": 26,
"text": "\\begin{bmatrix}\n-2c & 0 \\\\ c & d \\end{bmatrix}^{-1} \\begin{bmatrix} 1 & 0 \\\\ 1 & 3 \\end{bmatrix} \\begin{bmatrix} -2c & 0 \\\\ c & d \\end{bmatrix} = \\begin{bmatrix} 1 & 0 \\\\ 0 & 3 \\end{bmatrix},\\qquad c, d\\in \\mathbb{R}"
},
{
"math_id": 27,
"text": "\\mathbf{A}^{-1} = \\mathbf{Q}\\mathbf{\\Lambda}^{-1}\\mathbf{Q}^{-1}"
},
{
"math_id": 28,
"text": "\\mathbf{A}"
},
{
"math_id": 29,
"text": "\\mathbf{Q}"
},
{
"math_id": 30,
"text": "\\mathbf{Q}^{-1} = \\mathbf{Q}^\\mathrm{T}"
},
{
"math_id": 31,
"text": "\\left[\\mathbf{\\Lambda}^{-1}\\right]_{ii} = \\frac{1}{\\lambda_i}"
},
{
"math_id": 32,
"text": "\\min\\left|\\nabla^2 \\lambda_\\mathrm{s}\\right|"
},
{
"math_id": 33,
"text": "f(x) = a_0 + a_1 x + a_2 x^2 + \\cdots"
},
{
"math_id": 34,
"text": "f\\!\\left(\\mathbf{A}\\right) = \\mathbf{Q}\\,f\\!\\left(\\mathbf{\\Lambda}\\right)\\mathbf{Q}^{-1}"
},
{
"math_id": 35,
"text": "\\left[f\\left(\\mathbf{\\Lambda}\\right)\\right]_{ii} = f\\left(\\lambda_i\\right)"
},
{
"math_id": 36,
"text": "\\mathbf{A}^{-1} = \\mathbf{Q} \\mathbf{\\Lambda}^{-1} \\mathbf{Q}^{-1}"
},
{
"math_id": 37,
"text": "\\begin{align}\n \\mathbf{A}^2 &= \\left(\\mathbf{Q}\\mathbf{\\Lambda}\\mathbf{Q}^{-1}\\right) \\left(\\mathbf{Q}\\mathbf{\\Lambda}\\mathbf{Q}^{-1}\\right)\n = \\mathbf{Q}\\mathbf{\\Lambda}\\left(\\mathbf{Q}^{-1}\\mathbf{Q}\\right)\\mathbf{\\Lambda}\\mathbf{Q}^{-1}\n = \\mathbf{Q}\\mathbf{\\Lambda}^2\\mathbf{Q}^{-1} \\\\[1.2ex]\n \\mathbf{A}^n &= \\mathbf{Q}\\mathbf{\\Lambda}^n\\mathbf{Q}^{-1}\n \\\\[1.2ex]\n \\exp \\mathbf{A}\n &= \\mathbf{Q} \\exp(\\mathbf{\\Lambda}) \\mathbf{Q}^{-1}\n\\end{align}"
},
{
"math_id": 38,
"text": " f(x)=x^2, \\; f(x)=x^n, \\; f(x)=\\exp{x} "
},
{
"math_id": 39,
"text": " \\exp{\\mathbf{A}} "
},
{
"math_id": 40,
"text": "A"
},
{
"math_id": 41,
"text": "\\mathbf{A}^*\\mathbf{A}=\\mathbf{A} \\mathbf{A}^*"
},
{
"math_id": 42,
"text": "\\mathbf{A}^*"
},
{
"math_id": 43,
"text": "\\mathbf{A}=\\mathbf{U} \\mathbf\\Lambda\\mathbf{U}^*"
},
{
"math_id": 44,
"text": "\\mathbf{U}"
},
{
"math_id": 45,
"text": "\\mathbf{U}^*=\\mathbf{U}^{-1}"
},
{
"math_id": 46,
"text": "\\mathbf\\Lambda ="
},
{
"math_id": 47,
"text": "\\lambda_1, \\ldots,\\lambda_n"
},
{
"math_id": 48,
"text": "\\mathbf{u}_1,\\cdots,\\mathbf{u}_n "
},
{
"math_id": 49,
"text": "\\mathbf{A}=\\begin{bmatrix} 1 & 2 \\\\ 2 & 1 \\end{bmatrix}"
},
{
"math_id": 50,
"text": "\\lambda_1=3"
},
{
"math_id": 51,
"text": "\\lambda_2 = -1"
},
{
"math_id": 52,
"text": "\\mathbf{u}_1=\\frac{1}{\\sqrt{2}}\\begin{bmatrix}1\\\\1\\end{bmatrix}"
},
{
"math_id": 53,
"text": "\\mathbf{u}_2=\\frac{1}{\\sqrt{2}}\\begin{bmatrix}-1\\\\1\\end{bmatrix}"
},
{
"math_id": 54,
"text": "\\mathbf{U}=\\begin{bmatrix} 1/\\sqrt{2} & 1/\\sqrt{2} \\\\ 1/\\sqrt{2} & -1/\\sqrt{2} \\end{bmatrix}"
},
{
"math_id": 55,
"text": "\\begin{bmatrix} 3 & 0 \\\\ 0 & -1\\end{bmatrix}"
},
{
"math_id": 56,
"text": "\\mathbf{U}^*=\\mathbf{U}^{-1}="
},
{
"math_id": 57,
"text": "\\begin{bmatrix} 1/\\sqrt{2} & 1/\\sqrt{2} \\\\ 1/\\sqrt{2} & -1/\\sqrt{2} \\end{bmatrix}"
},
{
"math_id": 58,
"text": "\\mathbf{U} \\mathbf\\Lambda\\mathbf{U}^*="
},
{
"math_id": 59,
"text": "=\\begin{bmatrix} 1 & 2 \\\\ 2 & 1 \\end{bmatrix}=\\mathbf{A}"
},
{
"math_id": 60,
"text": "\\mathbf\\Lambda"
},
{
"math_id": 61,
"text": "\\mathbf{A}=\\mathbf{Q} \\mathbf{\\Lambda}\\mathbf{Q}^\\mathsf{T}"
},
{
"math_id": 62,
"text": "\\mathbf{A}=\\mathbf{P} \\mathbf{D}\\mathbf{P}^{-1}"
},
{
"math_id": 63,
"text": "\\mathbf{P}"
},
{
"math_id": 64,
"text": "\\mathbf{D}"
},
{
"math_id": 65,
"text": "\\mathbf{A}=\\mathbf{L} \\mathbf{L}^\\mathsf{T}"
},
{
"math_id": 66,
"text": "\\mathbf{L}"
},
{
"math_id": 67,
"text": "\\mathbf{U}\\mathbf{U}^*=\\mathbf{I}"
},
{
"math_id": 68,
"text": "\\mathbf{U}\\mathbf{U}^\\dagger=\\mathbf{I}"
},
{
"math_id": 69,
"text": "\\mathbf{U}^*"
},
{
"math_id": 70,
"text": "\\mathbf{U}^\\dagger"
},
{
"math_id": 71,
"text": "\\mathbf{H}=\\mathbf{H}^\\dagger"
},
{
"math_id": 72,
"text": "\\mathbf{H}^\\dagger"
},
{
"math_id": 73,
"text": "\\det\\left(\\mathbf{A}\\right) = \\prod_{i=1}^{N_\\lambda}{\\lambda_i^{n_i} }"
},
{
"math_id": 74,
"text": " \\operatorname{tr}\\left(\\mathbf{A}\\right) = \\sum_{i=1}^{N_\\lambda}{ {n_i}\\lambda_i}"
},
{
"math_id": 75,
"text": "N_\\lambda = N,"
},
{
"math_id": 76,
"text": "\\lambda_i \\ne 0 \\quad \\forall \\,i"
},
{
"math_id": 77,
"text": "\\frac{\\mathbf{A}\\mathbf{v}}{\\left\\|\\mathbf{A}\\mathbf{v}\\right\\|}, \\frac{\\mathbf{A}^2\\mathbf{v}}{\\left\\|\\mathbf{A}^2\\mathbf{v}\\right\\|}, \\frac{\\mathbf{A}^3\\mathbf{v}}{\\left\\|\\mathbf{A}^3\\mathbf{v}\\right\\|}, \\ldots"
},
{
"math_id": 78,
"text": "\\left(\\mathbf{A} - \\lambda_i \\mathbf{I}\\right)\\mathbf{v}_{i,j} = \\mathbf{0} "
},
{
"math_id": 79,
"text": "\\mathbf{A}\\mathbf{v} = \\lambda \\mathbf{v}^*."
},
{
"math_id": 80,
"text": " \\mathbf{A}\\mathbf{v} = \\lambda \\mathbf{B} \\mathbf{v}"
},
{
"math_id": 81,
"text": "\\det(\\mathbf{A} - \\lambda \\mathbf{B}) = 0. "
},
{
"math_id": 82,
"text": "P = \\begin{bmatrix}\n | & & | \\\\\n \\mathbf{v}_1 & \\cdots & \\mathbf{v}_n \\\\\n | & & | \n \\end{bmatrix} \\equiv\n \\begin{bmatrix}\n (\\mathbf{v}_1)_1 & \\cdots & (\\mathbf{v}_n)_1 \\\\\n \\vdots & & \\vdots \\\\\n (\\mathbf{v}_1)_n & \\cdots & (\\mathbf{v}_n)_n \n \\end{bmatrix}\n"
},
{
"math_id": 83,
"text": "(D)_{ij} = \\begin{cases}\n \\lambda_i, & \\text{if }i = j\\\\\n 0, & \\text{otherwise}\n\\end{cases}"
},
{
"math_id": 84,
"text": "\\mathbf{A} = \\mathbf{B}\\mathbf{P}\\mathbf{D}\\mathbf{P}^{-1}"
},
{
"math_id": 85,
"text": "\n \\mathbf{A}\\mathbf{P}= \\mathbf{A} \\begin{bmatrix}\n | & & | \\\\\n \\mathbf{v}_1 & \\cdots & \\mathbf{v}_n \\\\\n | & & | \n \\end{bmatrix} = \\begin{bmatrix}\n | & & | \\\\\n A\\mathbf{v}_1 & \\cdots & A\\mathbf{v}_n \\\\\n | & & | \n \\end{bmatrix} = \\begin{bmatrix}\n | & & | \\\\\n \\lambda_1B\\mathbf{v}_1 & \\cdots & \\lambda_nB\\mathbf{v}_n \\\\\n | & & | \n \\end{bmatrix} = \\begin{bmatrix}\n | & & | \\\\\n B\\mathbf{v}_1 & \\cdots & B\\mathbf{v}_n \\\\\n | & & | \n \\end{bmatrix}\n \\mathbf{D} =\n \\mathbf{B}\\mathbf{P}\\mathbf{D}\n"
},
{
"math_id": 86,
"text": "\\mathbf{B}^{-1}\\mathbf{A}\\mathbf{v} = \\lambda \\mathbf{v}"
},
{
"math_id": 87,
"text": "\\mathbf{P}^* \\mathbf B \\mathbf{P} = \\mathbf{I}"
},
{
"math_id": 88,
"text": "\\mathbf{P}\\mathbf{P}^*\\mathbf B = \\mathbf{I},"
}
]
| https://en.wikipedia.org/wiki?curid=13576645 |
13577327 | Atmosphere of Titan | The atmosphere of Titan is the dense layer of gases surrounding Titan, the largest moon of Saturn. Titan is the only natural satellite in the Solar System with an atmosphere that is denser than the atmosphere of Earth and is one of two moons with an atmosphere significant enough to drive weather (the other being the atmosphere of Triton). Titan's lower atmosphere is primarily composed of nitrogen (94.2%), methane (5.65%), and hydrogen (0.099%). There are trace amounts of other hydrocarbons, such as ethane, diacetylene, methylacetylene, acetylene, propane, PAHs and of other gases, such as cyanoacetylene, hydrogen cyanide, carbon dioxide, carbon monoxide, cyanogen, acetonitrile, argon and helium. The isotopic study of nitrogen isotopes ratio also suggests acetonitrile may be present in quantities exceeding hydrogen cyanide and cyanoacetylene. The surface pressure is about 50% higher than on Earth at 1.5 bars (147 kPa) which is near the triple point of methane and allows there to be gaseous methane in the atmosphere and liquid methane on the surface. The orange color as seen from space is produced by other more complex chemicals in small quantities, possibly tholins, tar-like organic precipitates.
Observational history.
The presence of a significant atmosphere was first suspected by Spanish astronomer Josep Comas i Solà, who observed distinct limb darkening on Titan in 1903 from the Fabra Observatory in Barcelona, Catalonia. This observation was confirmed by Dutch astronomer Gerard P. Kuiper in 1944 using a spectroscopic technique that yielded an estimate of an atmospheric partial pressure of methane of the order of 100 millibars (10 kPa). Subsequent observations in the 1970s showed that Kuiper's figures had been significant underestimates; methane abundances in Titan's atmosphere were ten times higher, and the surface pressure was at least double what he had predicted. The high surface pressure meant that methane could only form a small fraction of Titan's atmosphere. In 1980, "Voyager 1" made the first detailed observations of Titan's atmosphere, revealing that its surface pressure was higher than Earth's, at 1.5 bars (about 1.48 times that of Earth's).
The joint NASA/ESA "Cassini-Huygens" mission provided a wealth of information about Titan, and the Saturn system in general, since entering orbit on July 1, 2004. It was determined that Titan's atmospheric isotopic abundances were evidence that the abundant nitrogen in the atmosphere came from materials in the Oort cloud, associated with comets, and not from the materials that formed Saturn in earlier times. It was determined that complex organic chemicals could arise on Titan, including polycyclic aromatic hydrocarbons, propylene, and methane.
The "Dragonfly" mission by NASA is planning to land a large aerial vehicle on Titan in 2034. The mission will study Titan's habitability and prebiotic chemistry at various locations. The drone-like aircraft will perform measurements of geologic processes, and surface and atmospheric composition.
Overview.
Observations from the "Voyager" space probes have shown that the Titanean atmosphere is denser than Earth's, with a surface pressure about 1.48 times that of Earth's. Titan's atmosphere is about 1.19 times as massive as Earth's overall, or about 7.3 times more massive on a per surface area basis. It supports opaque haze layers that block most visible light from the Sun and other sources and renders Titan's surface features obscure. The atmosphere is so thick and the gravity so low that humans could fly through it by flapping "wings" attached to their arms. Titan's lower gravity means that its atmosphere is far more extended than Earth's; even at a distance of 975 km, the "Cassini" spacecraft had to make adjustments to maintain a stable trajectory against atmospheric drag. The atmosphere of Titan is opaque at many wavelengths and a complete reflectance spectrum of the surface is impossible to acquire from the outside. It was not until the arrival of "Cassini–Huygens" in 2004 that the first direct images of Titan's surface were obtained. The "Huygens" probe was unable to detect the direction of the Sun during its descent, and although it was able to take images from the surface, the "Huygens" team likened the process to "taking pictures of an asphalt parking lot at dusk".
Vertical structure.
<templatestyles src="Plain image with caption/styles.css"/>
Diagram of Titan's atmosphere
Titan's vertical atmospheric structure is similar to Earth's. Both have a troposphere, stratosphere, mesosphere, and thermosphere. However, Titan's lower surface gravity creates a more extended atmosphere, with scale heights of 15–50 km (9–31 mi) in comparison to 5–8 km (3.1–5 mi) on Earth. Voyager data, combined with data from "Huygens" and radiative-convective models, provide an increased understanding of Titan's atmospheric structure.
Atmospheric composition and chemistry.
Titan's atmospheric chemistry is diverse and complex. Each layer of the atmosphere has unique chemical interactions occurring within that are then interacting with other sub layers in the atmosphere. For instance, the hydrocarbons are thought to form in Titan's upper atmosphere in reactions resulting from the breakup of methane by the Sun's ultraviolet light, producing a thick orange smog. The table below highlights the production and loss mechanisms of the most abundant photochemically produced molecules in Titan's atmosphere.
Magnetic field.
Titan's internal magnetic field is negligible, and perhaps even nonexistent, although studies in 2008 showed that Titan retains remnants of Saturn's magnetic field on the brief occasions when it passes outside Saturn's magnetosphere and is directly exposed to the solar wind. This may ionize and carry away some molecules from the top of the atmosphere. One interesting case was detected as an example of a coronal mass ejection impacting Saturn's magnetosphere, causing Titan's orbit to be exposed to the shocked solar wind in the magnetosheath. This leads to increased particle precipitation and the formation of extreme electron densities in Titan's ionosphere. Its orbital distance of 20.3 Saturn radii does place it within Saturn's magnetosphere occasionally. However, the difference between Saturn's rotational period (10.7 hours) and Titan's orbital period (15.95 days) causes a large relative speed between Saturn's magnetized plasma and Titan. That can actually intensify reactions causing atmospheric loss, instead of guarding the atmosphere from the solar wind.
Chemistry of the ionosphere.
In November 2007, scientists uncovered evidence of negative ions with roughly 13 800 times the mass of hydrogen in Titan's ionosphere, which are thought to fall into the lower regions to form the orange haze which obscures Titan's surface. The smaller negative ions have been identified as linear carbon chain anions with larger molecules displaying evidence of more complex structures, possibly derived from benzene. These negative ions appear to play a key role in the formation of more complex molecules, which are thought to be tholins, and may form the basis for polycyclic aromatic hydrocarbons, cyanopolyynes and their derivatives. Remarkably, negative ions such as these have previously been shown to enhance the production of larger organic molecules in molecular clouds beyond our Solar System, a similarity which highlights the possible wider relevance of Titan's negative ions.
Atmospheric circulation.
There is a pattern of air circulation found flowing in the direction of Titan's rotation, from west to east. In addition, seasonal variation in the atmospheric circulation has also been detected. Observations by "Cassini" of the atmosphere made in 2004 also suggest that Titan is a "super rotator", like Venus, with an atmosphere that rotates much faster than its surface. The atmospheric circulation is explained by a big Hadley circulation that is occurring from pole to pole.
Methane cycle.
Similar to the hydrological cycle on Earth, Titan features a methane cycle. This methane cycle results in surface formations that resemble formations we find on Earth. Lakes of methane and ethane are found across Titan's polar regions. Methane condenses into clouds in the atmosphere, and then precipitates onto the surface. This liquid methane then flows into the lakes. Some of the methane in the lakes will evaporate over time, and form clouds in the atmosphere again, starting the process over. However, since methane is lost in the thermosphere, there has to be a source of methane to replenish atmospheric methane. Energy from the Sun should have converted all traces of methane in Titan's atmosphere into more complex hydrocarbons within 50 million years — a short time compared to the age of the Solar System. This suggests that methane must be somehow replenished by a reservoir on or within Titan itself. Most of the methane on Titan is in the atmosphere. Methane is transported through the cold trap at the tropopause. Therefore the circulation of methane in the atmosphere influences the radiation balance and chemistry of other layers in the atmosphere. If there is a reservoir of methane on Titan, the cycle would only be stable over geologic timescales.
Evidence that Titan's atmosphere contains over a thousand times more methane than carbon monoxide would appear to rule out significant contributions from cometary impacts, because comets are composed of more carbon monoxide than methane. That Titan might have accreted an atmosphere from the early Saturnian nebula at the time of formation also seems unlikely; in such a case, it ought to have atmospheric abundances similar to the solar nebula, including hydrogen and neon. Many astronomers have suggested that the ultimate origin for the methane in Titan's atmosphere is from within Titan itself, released via eruptions from cryovolcanoes.
Another possible source for methane replenishment in Titan's atmosphere is methane clathrates. Clathrates are compounds in which an ice lattice surrounds a gas particle, much like a cage. In this case, methane gas is surrounded by a water crystal cage. These methane clathrates could be present underneath Titan's icy surface, having formed much earlier in Titan's history. Through the dissociation of methane clathrates, methane could be outgassed into the atmosphere, replenishing the supply.
On December 1, 2022, astronomers reported viewing clouds, likely made of methane, moving across Titan, using the James Webb Space Telescope.
Daytime and twilight (sunrise/sunset) skies.
Sky brightness and viewing conditions are expected to be quite different from Earth and Mars due to Titan's farther distance from the Sun (~10 AU) and complex haze layers in its atmosphere. The sky brightness model videos show what a typical sunny day may look like standing on the surface of Titan based on radiative transfer models.
To an observer relying on visible light, the daytime sky has a distinctly dark orange color and appears uniform in all directions due to significant Mie scattering from the many high-altitude haze layers. The daytime sky is calculated to be ~100–1000 times dimmer than an afternoon on Earth, which is similar to the viewing conditions of a thick smog or dense fire smoke. The sunsets on Titan are expected to be "underwhelming events", where the Sun disappears about half-way up in the sky (~50° above the horizon) with no distinct change in color. After that, the sky will slowly darken until it reaches night. However, the surface is expected to remain as bright as the full Moon up to 1 Earth day after sunset.
In near-infrared light, the sunsets resemble a dusty desert sunset. Mie scattering has a weaker influence at longer infrared wavelengths, allowing for more colorful and variable sky conditions. During the daytime, the Sun has a noticeable solar corona that transitions color from white to "red" over the afternoon. The afternoon sky brightness is ~100 times dimmer than on Earth. As evening approaches, the Sun is expected to disappear fairly close to the horizon. Titan's atmospheric optical depth is lowest at 5 microns. So, the Sun at 5 microns may even be visible when it is below the horizon due to atmospheric refraction. Similar to images from Mars rovers, a fan-like corona is seen to develop above the Sun due to scattering from haze or dust at high altitudes.
In regards to Saturn, the planet is nearly fixed in its position in the sky because Titan's orbit is tidally locked around Saturn. However, there is a small 3° east-to-west motion over a Titan year due to the orbital eccentricity, similar to the analemma on Earth. Sunlight reflected off of Saturn, Saturnshine, is about 1000 times weaker than solar insolation on the surface of Titan. Even though Saturn appears several times bigger in the sky than the Moon in Earth's sky, the outline of Saturn is masked out by the brighter Sun during the daytime. Saturn may become discernible at night, but only at a wavelength of 5 microns. This is due to two factors: the small optical depth of Titan's atmosphere at 5 microns and the strong 5 μm emissions from Saturn's night side. In visible light, Saturn will make the sky on Titan's Saturn-facing side appear slightly brighter, similar to an overcast night with a full moon on Earth. Saturn's rings are hidden from view owing to the alignment of Titan's orbital plane and the plane of the rings. Saturn is expected to show phases, akin to the phases of Venus on Earth, that partially illuminate the surface of Titan at night, except for eclipses.
From outer space, "Cassini" images from near-infrared to UV wavelengths have shown that the twilight periods (phase angles > 150°) are "brighter" than the daytime on Titan. This observation has not been observed on any other planetary body with a thick atmosphere. The Titanean twilight outshining the dayside is due to a combination of Titan's atmosphere extending hundreds of kilometers above the surface and intense forward Mie scattering from the haze. Radiative transfer models have not reproduced this effect.
Anti-greenhouse effect.
The temperature of Titan is raised above its blackbody temperature by a strong greenhouse effect caused by infrared absorption due to the pressure-induced opacity of Titan's atmosphere. The greenhouse warming is somewhat reduced by an effect, termed by Pollack the anti-greenhouse effect, in which part of the incoming solar energy is absorbed high in the atmosphere before it can reach the surface, leading to cooler surface temperatures than would otherwise occur. The greenhouse effect increases the surface temperature by 21 K, while the anti-greenhouse effect takes away roughly half of this warming (about 9 K), reducing the net increase to 12 K.
When comparing the atmospheric temperature profiles of Earth and Titan, stark contrasts emerge. On Earth, the temperature typically increases as altitude decreases from 80 to 60 kilometers above the surface. In contrast, Titan’s temperature profile shows a decline over the same altitude range. This variation is largely due to the differing impacts of greenhouse and anti-greenhouse effects in Earth's and Titan's atmospheres, respectively.
Titan orbits within Saturn's magnetosphere for approximately 95% of its orbital period. During this time, charged particles trapped in the magnetosphere interact with Titan's upper atmosphere as the moon passes by, leading to the generation of a denser haze. Consequently, the variability of Saturn's magnetic field over its approximately 30-year orbital period could cause variations in these interactions, potentially increasing or decreasing the haze density. Although most observed variations in Titan's atmosphere during its orbital period are typically attributed to its direct interactions with sunlight, the influence of Saturn's magnetospheric changes is believed to play a non-negligible role. The interaction between Titan's atmosphere and Saturn's magnetic environment underscores the complex interplay between celestial bodies and their atmospheres, revealing a dynamic system shaped by both internal chemical processes and external astronomical conditions; future studies may help to confirm or rule out the impact of a changing magnetosphere on a dense atmosphere like Titan's.
Atmospheric evolution.
The persistence of a dense atmosphere on Titan has been enigmatic as the atmospheres of the structurally similar satellites of Jupiter, Ganymede and Callisto, are negligible. Although the disparity is still poorly understood, data from recent missions have provided basic constraints on the evolution of Titan's atmosphere.
Roughly speaking, at the distance of Saturn, solar insolation and solar wind flux are sufficiently low that elements and compounds that are volatile on the terrestrial planets tend to accumulate in all three phases. Titan's surface temperature is also quite low, about 94 K (−179 °C / −290 °F). Consequently, the mass fractions of substances that can become atmospheric constituents are much larger on Titan than on Earth. In fact, current interpretations suggest that only about 50% of Titan's mass is silicates, with the rest consisting primarily of various H2O (water) ices and NH3·H2O (ammonia hydrates). NH3, which may be the original source of Titan's atmospheric N2 (dinitrogen), may constitute as much as 8% of the NH3·H2O mass. Titan is most likely differentiated into layers, where the liquid water layer beneath ice Ih may be rich in NH3.
Tentative constraints are available: the current loss is mostly due to Titan's low gravity and the solar wind, aided by photolysis. The loss of Titan's early atmosphere can be estimated with the 14N–15N isotopic ratio, because the lighter 14N is preferentially lost from the upper atmosphere under photolysis and heating. Because Titan's original 14N–15N ratio is poorly constrained, the early atmosphere may have had more N2 by factors ranging from 1.5 to 100, with certainty only in the lower factor. Because N2 is the primary component (98%) of Titan's atmosphere, the isotopic ratio suggests that much of the atmosphere has been lost over geologic time. Nevertheless, atmospheric pressure on its surface remains nearly 1.5 times that of Earth, as it began with a proportionally greater volatile budget than Earth or Mars. It is possible that most of the atmospheric loss was within 50 million years of accretion, from a highly energetic escape of light atoms carrying away a large portion of the atmosphere (hydrodynamic escape). Such an event could be driven by heating and photolysis effects of the early Sun's higher output of X-ray and ultraviolet (XUV) photons.
Because Callisto and Ganymede are structurally similar to Titan, it is unclear why their atmospheres are insignificant relative to Titan's. Nevertheless, the origin of Titan's N2 via geologically ancient photolysis of accreted and degassed NH3, as opposed to degassing of N2 from accretionary clathrates, may be the key to a correct inference. Had N2 been released from clathrates, 36Ar and 38Ar that are inert primordial isotopes of the Solar System should also be present in the atmosphere, but neither has been detected in significant quantities. The insignificant concentration of 36Ar and 38Ar also indicates that the ~40 K temperature required to trap them and N2 in clathrates did not exist in the Saturnian sub-nebula. Instead, the temperature may have been higher than 75 K, limiting even the accumulation of NH3 as hydrates. Temperatures would have been even higher in the Jovian sub-nebula due to the greater gravitational potential energy release, mass, and proximity to the Sun, greatly reducing the NH3 inventory accreted by Callisto and Ganymede. The resulting N2 atmospheres may have been too thin to survive the atmospheric erosion effects that Titan has withstood.
An alternative explanation is that cometary impacts release more energy on Callisto and Ganymede than they do at Titan due to the higher gravitational field of Jupiter. That could erode the atmospheres of Callisto and Ganymede, whereas the cometary material would actually build Titan's atmosphere. However, the 2H–1H (i.e. D–H) ratio of Titan's atmosphere is nearly 1.5 times lower than that of comets. The difference suggests that cometary material is unlikely to be the major contributor to Titan's atmosphere. Titan's atmosphere also contains over a thousand times more methane than carbon monoxide, which supports the idea that cometary material is not a likely contributor, since comets are composed of more carbon monoxide than methane.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "H^+_2"
},
{
"math_id": 1,
"text": "H^+\n"
},
{
"math_id": 2,
"text": "O\n^+"
}
]
| https://en.wikipedia.org/wiki?curid=13577327 |
13578878 | Impulse excitation technique | Method to characterize materials
The impulse excitation technique (IET) is a non-destructive material characterization technique to determine the elastic properties and internal friction of a material of interest. It measures the resonant frequencies in order to calculate the Young's modulus, shear modulus, Poisson's ratio and internal friction of predefined shapes like rectangular bars, cylindrical rods and disc shaped samples. The measurements can be performed at room temperature or at elevated temperatures (up to 1700 °C) under different atmospheres.
The measurement principle is based on tapping the sample with a small projectile and recording the induced vibration signal with a piezoelectric sensor, microphone, laser vibrometer or accelerometer. To optimize the results a microphone or a laser vibrometer can be used as there is no contact between the test-piece and the sensor. Laser vibrometers are preferred to measure signals in vacuum. Afterwards, the acquired vibration signal in the time domain is converted to the frequency domain by a fast Fourier transformation. Dedicated software will determine the resonant frequency with high accuracy to calculate the elastic properties based on the classical beam theory.
Elastic properties.
Different resonant frequencies can be excited depending on the position of the support wires, the mechanical impulse and the microphone. The two most important resonant frequencies are the flexural, which is controlled by the Young's modulus of the sample, and the torsional, which is controlled by the shear modulus (for isotropic materials).
For predefined shapes like rectangular bars, discs, rods and grinding wheels, dedicated software calculates the sample's elastic properties using the sample dimensions, weight and resonant frequency (ASTM E1876-15).
Flexure mode.
The first figure gives an example of a test-piece vibrating in the flexure
mode. This induced vibration is also referred as the out-of-plane vibration mode. The in-plane vibration will be excited by turning the sample 90° on the axis parallel to its length. The natural frequency of this flexural vibration mode is characteristic for the dynamic Young's modulus.
To minimize the damping of the test-piece, it has to be supported at the nodes where the vibration amplitude is zero. The test-piece is mechanically excited at one of the anti-nodes to cause maximum vibration.
Torsion mode.
The second figure gives an example of a test-piece vibrating in the torsion mode. The natural frequency of this vibration is characteristic for the shear modulus.
To minimize the damping of the test-piece, it has to be supported at the center of both axis. The mechanical excitation has to be performed in one corner in order to twist the beam rather than flexing it.
Poisson's ratio.
The Poisson's ratio is a measure of the degree to which a material tends to expand in directions perpendicular to the direction of compression. After measuring the Young's modulus and the shear modulus, dedicated software determines the Poisson's ratio using Hooke's law, which can only be applied to isotropic materials according to the different standards.
Internal friction / Damping.
Material damping or internal friction is characterized by the decay of the vibration amplitude of the sample in free vibration as the logarithmic decrement. The damping behaviour originates from anelastic processes occurring in a strained solid i.e. thermoelastic damping, magnetic damping, viscous damping, defect damping, ... For example, different materials defects (dislocations, vacancies, ...) can contribute to an increase in the internal friction between the vibrating defects and the neighboring regions.
Dynamic vs. static methods.
Considering the importance of elastic properties for design and engineering applications, a number of experimental techniques have been developed, and these can be classified into two groups: static and dynamic methods. Static methods (like the four-point bending test and nanoindentation) are based on direct measurements of stresses and strains during mechanical tests. Dynamic methods (like ultrasound spectroscopy and impulse excitation technique) provide an advantage over static methods because the measurements are relatively quick and simple and involve small elastic strains. Therefore, IET is very suitable for porous and brittle materials like ceramics and refractories. The technique can also be easily modified for high temperature experiments and only a small amount of material needs to be available.
Accuracy and uncertainty.
The most important parameters to define the measurement uncertainty are the mass and dimensions of the sample. Therefore, each parameter has to be measured (and prepared) to a level of accuracy of 0.1%. The sample thickness is especially critical (third power in the equation for Young's modulus). In that case, an overall accuracy of 1% can be obtained practically in most applications.
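A first-order error-propagation sketch of that claim for the rectangular-bar Young's modulus formula (formula_0): each measured quantity contributes its relative error weighted by its exponent, and the small correction factor T is ignored here. The 0.1% figures are the tolerances quoted above.

```python
# Relative error budget for E ~ m * f_f^2 * L^3 / (b * t^3), ignoring the correction factor T.
rel_err = {"m": 0.001, "f_f": 0.001, "b": 0.001, "L": 0.001, "t": 0.001}   # 0.1% each
exponent = {"m": 1, "f_f": 2, "b": 1, "L": 3, "t": 3}
worst_case = sum(exponent[q] * rel_err[q] for q in rel_err)
print(f"worst-case relative error in E: {worst_case:.1%}")   # 1.0%, dominated by t and L
```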
Applications.
The impulse excitation technique can be used in a wide range of applications. Nowadays, IET equipment can perform measurements between −50 °C and 1700 °C in different atmospheres (air, inert, vacuum). IET is mostly used in research and as quality control tool to study the transitions as function of time and temperature.
A detailed insight into the material crystal structure can be obtained by studying the elastic and damping properties. For example, the interaction of dislocations and point defects in carbon steels are studied. Also the material damage accumulated during a thermal shock treatment can be determined for refractory materials. This can be an advantage in understanding the physical properties of certain materials.
Finally, the technique can be used to check the quality of systems. In this case, a reference piece is required to obtain a reference frequency spectrum. Engine blocks for example can be tested by tapping them and comparing the recorded signal with a pre-recorded signal of a reference engine block.
By using simple cluster analysis algorithms or principal component analysis, sample's pattern recognition is also achievable with a set of pre-recorded signals.
Experimental correlations.
Rectangular bar.
Young's modulus.
formula_0
with
formula_1
E the Young's modulus
m the mass
ff the flexural frequency
b the width
L the length
t the thickness
"T" the correction factor
The correction factor can only be used if L/t ≥ 20!
Shear modulus.
formula_2
with
formula_3
Note that we assume that b≥t
"G" the shear modulus
ft the torsional frequency
m the mass
b the width
L the length
t the thickness
"R" the correction factor
Cylindrical rod.
Young's modulus.
formula_4
with
formula_5
E the Young's modulus
m the mass
ff the flexural frequency
d the diameter
L the length
"T"' the correction factor
The correction factor can only be used if L/d ≥ 20!
Shear modulus.
formula_6
with
ft the torsional frequency
m the mass
d the diameter
L the length
Poisson ratio.
If the Young's modulus and shear modulus are known, the Poisson's ratio can be calculated according to:
formula_7
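A minimal Python sketch that evaluates the rectangular-bar correlations above (formula_0 to formula_3 and formula_7); the bar dimensions, mass and resonant frequencies below are hypothetical illustrative inputs in SI units, not measured data.

```python
import math

def young_modulus_rect(m, ff, b, L, t):
    """E from the flexural frequency of a rectangular bar (correction valid for L/t >= 20)."""
    T = 1.0 + 6.585 * (t / L) ** 2
    return 0.9465 * (m * ff ** 2 / b) * (L ** 3 / t ** 3) * T

def shear_modulus_rect(m, ft, b, L, t):
    """G from the torsional frequency of a rectangular bar (assumes b >= t)."""
    R = (
        (1 + (b / t) ** 2)
        / (4 - 2.521 * (t / b) * (1 - 1.991 / (math.exp(math.pi * b / t) + 1)))
        * (1 + 0.00851 * b ** 2 / L ** 2)
        - 0.060 * (b / L) ** 1.5 * (b / t - 1) ** 2
    )
    return 4 * L * m * ft ** 2 / (b * t) * R

# Hypothetical steel-like bar: 100 mm x 20 mm x 4 mm, 62 g, with assumed resonant frequencies.
m, b, L, t = 0.062, 0.020, 0.100, 0.004        # kg, m, m, m
ff, ft = 2200.0, 6200.0                        # Hz (flexural, torsional)
E = young_modulus_rect(m, ff, b, L, t)
G = shear_modulus_rect(m, ft, b, L, t)
nu = E / (2 * G) - 1                           # Poisson's ratio for an isotropic sample
print(f"E = {E / 1e9:.0f} GPa, G = {G / 1e9:.0f} GPa, nu = {nu:.2f}")
```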
Damping coefficient.
The induced vibration signal (in the time domain) is fitted as a sum of exponentially damped sinusoidal functions according to:
formula_8
with
f the natural frequency
δ = kt the logarithmic decrement
In this case, the damping parameter Q−1 can be defined as:
formula_9 with W the energy of the system
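The fit described above can be sketched with SciPy's curve_fit on a synthetic signal; the frequency, decay constant and noise level are made-up values, and in a real measurement the initial frequency guess would come from the FFT of the recorded signal.

```python
import numpy as np
from scipy.optimize import curve_fit

def damped_sine(t, A, k, f, phi):
    """Single exponentially damped sinusoid, one term of the sum above."""
    return A * np.exp(-k * t) * np.sin(2 * np.pi * f * t + phi)

# Synthetic "recorded" vibration: 2200 Hz resonance with light damping plus noise.
t = np.linspace(0.0, 0.05, 20000)
rng = np.random.default_rng(1)
x = damped_sine(t, 1.0, 30.0, 2200.0, 0.3) + 0.01 * rng.standard_normal(t.size)

p0 = [0.8, 20.0, 2200.0, 0.0]                 # rough initial guess (f from an FFT in practice)
(A, k, f, phi), _ = curve_fit(damped_sine, t, x, p0=p0)
Q_inv = k / (np.pi * f)                       # internal friction Q^-1 = k / (pi f)
print(f"f = {f:.1f} Hz, k = {k:.1f} 1/s, Q^-1 = {Q_inv:.2e}")
```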
Extended IET applications: the Resonalyser Method.
Isotropic versus orthotropic material behaviour.
Isotropic elastic properties can be found by IET using the above described empirical formulas for the Young's modulus E, the shear modulus G and Poisson's ratio v. For isotropic materials the relation between strains and stresses in any point of flat sheets is given by the flexibility matrix [S] in the following expression:
In this expression, ε1 and ε2 are normal strains in the 1- and 2-direction and γ12 is the shear strain. σ1 and σ2 are the normal stresses and τ12 is the shear stress. The orientation of the axes 1 and 2 in the above figure is arbitrary. This means that the values for E, G and v are the same in any material direction.
More complex material behaviour like orthotropic material behaviour can be identified by extended IET procedures. A material is called orthotropic when the elastic properties are symmetric with respect to a rectangular Cartesian system of axes. In case of a two dimensional state of stress, like in thin sheets, the stress-strain relations for orthotropic material become:
E1 and E2 are the Young's moduli in the 1- and 2-direction and G12 is the in-plane shear modulus. v12 is the major Poisson's ratio and v21 is the minor Poisson's ratio. The flexibility matrix [S] is symmetric. The minor Poisson's ratio can hence be found if E1, E2 and v12 are known.
The figure above shows some examples of common orthotropic materials: layered uni-directionally reinforced composites with fiber directions parallel to the plate edges, layered bi-directionally reinforced composites, short fiber reinforced composites with preference directions (like wooden particle boards), plastics with preference orientation, rolled metal sheets, and much more...
Extended IET for orthotropic material behaviour.
Standard methods for the identification of the two Young's moduli E1 and E2 require two tensile, bending or IET tests, one on a beam cut along the 1-direction and one on a beam cut along the 2-direction. Major and minor Poisson's ratios can be identified if the transverse strains are also measured during the tensile tests. The identification of the in-plane shear modulus requires an additional in-plane shearing test.
The "Resonalyser procedure" is an extension of the IET using an inverse method (also called "Mixed numerical experimental method"). The non destructive Resonalyser procedure allows a fast and accurate simultaneous identification of the 4 Engineering constants E1, E2, G12 and v12 for orthotropic materials. For the identification of the four orthotropic material constants, the first three natural frequencies of a rectangular test plate with constant thickness and the first natural frequency of two test beams with rectangular cross section must be measured. One test beam is cut along the longitudinal direction 1, the other one cut along the transversal direction 2 (see Figure on the right).
The Young's modulus of the test beams can be found using the bending IET formula for test beams with a rectangular cross section.
The ratio Width/Length of the test plate must be cut according to the following formula:
This ratio yields a so-called "Poisson plate". The interesting property of a Freely suspended Poisson plate is that the modal shapes that are associated with the 3 first resonance frequencies are fixed: the first resonance frequency is associated with a torsional modal shape, the second resonance frequency is associated with a saddle modal shape and the third resonance frequency is associated with a breathing modal shape.
So, without the need to investigate the nature of the modal shapes, IET on a Poisson plate reveals the vibrational behaviour of a Poisson plate.
The question is now how to extract the orthotropic Engineering constants from the frequencies measured with IET on the beams and Poisson plate. This problem can be solved by an inverse method (also called "mixed numerical/experimental method") based on a finite element (FE) computer model of the Poisson plate. An FE model allows computing resonance frequencies for a given set of material properties.
In an inverse method, the material properties in the finite element model are updated in such a way that the computed resonance frequencies match the measured resonance frequencies.
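The structure of such an inverse identification can be sketched with a least-squares update loop; the forward model below is a deliberately simple stand-in for the finite-element modal analysis of the Poisson plate (its coefficients are arbitrary), so only the shape of the procedure, not the physics, is represented.

```python
import numpy as np
from scipy.optimize import least_squares

def forward_model(v12, G12):
    """Toy stand-in for the FE modal analysis: returns three 'plate frequencies' (Hz).
    A real Resonalyser uses an orthotropic FE plate model here."""
    return np.array([
        90.0 * np.sqrt(G12 / 4.0e9),    # torsional mode, driven mainly by G12
        140.0 * np.sqrt(1.0 - v12),     # saddle mode
        180.0 * np.sqrt(1.0 + v12),     # breathing mode
    ])

# "Measured" frequencies, synthesised here from assumed true parameter values.
v12_true, G12_true = 0.30, 5.0e9
f_measured = forward_model(v12_true, G12_true)

# E1 and E2 stay fixed (taken from the beam tests); only v12 and G12 are updated.
def residuals(params):
    v12, G12 = params
    return forward_model(v12, G12) - f_measured

result = least_squares(residuals, x0=[0.25, 4.0e9],
                       bounds=([0.0, 1.0e8], [0.5, 1.0e11]),
                       x_scale=[0.1, 1.0e9])
print(result.x)                         # recovers approximately (0.30, 5.0e9)
```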
Problems with inverse methods are:
· The need of good starting values for the material properties
· Are the parameters converging to the correct physical solution?
· Is the solution unique?
The requirements to obtain good results are:
In the case that the Young's moduli (obtained by IET) are fixed (as non-variable parameters) in the inverse method procedure, and only the Poisson's ratio v12 and the in-plane shear modulus G12 are taken as variable parameters in the FE model, the Resonalyser procedure satisfies all of the above requirements.
Indeed,
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "E = 0.9465\\left( \\frac{m f^2_f} {b} \\right)\\left( \\frac{L^3} {t^3} \\right)T"
},
{
"math_id": 1,
"text": "T = 1+6.585\\left( \\frac{t} {L} \\right)^2"
},
{
"math_id": 2,
"text": "G = \\frac{4Lmf_t^2} {bt}R"
},
{
"math_id": 3,
"text": "R = \\left [ \\frac{1+\\left ( \\frac{b}{t} \\right )^2}{4-2.521\\frac{t}{b}\\left (1- \\frac{1.991}{e^{\\pi\\frac{b}{t}}+1} \\right )} \\right ]\n\\left [1+ \\frac{0.00851b^2}{L^2} \\right ]-0.060\\left ( \\frac{b}{L} \\right )^\\frac{3}{2}\\left ( \\frac{b}{t}-1 \\right )^2"
},
{
"math_id": 4,
"text": "E = 1.6067\\left( \\frac{L^3} {d^4} \\right)mf_f^2T'"
},
{
"math_id": 5,
"text": "T' = 1+4.939\\left( \\frac{d} {L} \\right)^2"
},
{
"math_id": 6,
"text": "G = 16 \\left(\\frac{L} {\\pi d^2}\\right)mf_t^2"
},
{
"math_id": 7,
"text": " \\nu = \\left(\\frac{E} {2G}\\right)-1 "
},
{
"math_id": 8,
"text": "x\\left(t\\right) = \\sum Ae^{-k t}\\sin\\left(2\\pi f t+\\phi\\right)"
},
{
"math_id": 9,
"text": "Q^{-1} = \\frac{\\Delta W}{2 \\pi W} = \\frac{k}{\\pi f}"
}
]
| https://en.wikipedia.org/wiki?curid=13578878 |
13580135 | Conditional short-circuit current | Conditional short-circuit current is the value of the alternating current component of a prospective current, which a switch without integral short-circuit protection, but protected by a suitable short circuit protective device (SCPD) in series, can withstand for the operating time of the current under specified test conditions. It may be understood to be the RMS value of the maximum permissible current over a specified time interval (t0,t1) and operating conditions.
The IEC definition has been criticized as being open to interpretation.
formula_0
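A numerical evaluation of that definition for a sampled current waveform; the 50 Hz waveform with a decaying DC offset below is purely illustrative.

```python
import numpy as np
from scipy.integrate import trapezoid

def rms_over_interval(t, i):
    """RMS of a sampled current i(t) over [t0, t1]: sqrt( 1/(t1-t0) * integral of i^2 dt )."""
    return np.sqrt(trapezoid(i ** 2, t) / (t[-1] - t[0]))

# Illustrative prospective current: 10 kA, 50 Hz component plus a decaying DC offset.
t = np.linspace(0.0, 0.1, 10001)                      # 100 ms observation window
i = 10e3 * (np.sin(2 * np.pi * 50 * t) + np.exp(-t / 0.02))
print(f"RMS over the interval: {rms_over_interval(t, i):.0f} A")
```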
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "I = \\sqrt {\\frac{1}{t_1-t_0} \\int_{t_0}^{t_1} i^2(t) dt }"
}
]
| https://en.wikipedia.org/wiki?curid=13580135 |
13580667 | JLO cocycle | In noncommutative geometry, the Jaffe- Lesniewski-Osterwalder (JLO) cocycle (named after Arthur Jaffe, Andrzej Lesniewski, and Konrad Osterwalder) is a cocycle in an entire cyclic cohomology group. It is a non-commutative version of the classic Chern character of the conventional differential geometry. In noncommutative geometry, the concept of a manifold is replaced by a noncommutative algebra formula_0 of "functions" on the putative noncommutative space. The cyclic cohomology of the algebra formula_0 contains the information about the topology of that noncommutative space, very much as the de Rham cohomology contains the information about the topology of a conventional manifold.
The JLO cocycle is associated with a metric structure of non-commutative differential geometry known as a formula_1-summable spectral triple (also known as a formula_1-summable Fredholm module). It was first introduced in a 1988 paper by Jaffe, Lesniewski, and Osterwalder.
formula_1-summable spectral triples.
The input to the JLO construction is a "formula_1-summable spectral triple." These triples consist of the following data:
(a) A Hilbert space formula_2 such that formula_0 acts on it as an algebra of bounded operators.
(b) A formula_3-grading formula_4 on formula_2, formula_5. We assume that the algebra formula_0 is even under the formula_3-grading, i.e. formula_6, for all formula_7.
(c) A self-adjoint (unbounded) operator formula_8, called the "Dirac operator" such that
(i) formula_8 is odd under formula_4, i.e. formula_9.
(ii) Each formula_7 maps the domain of formula_8, formula_10 into itself, and the operator formula_11 is bounded.
(iii) formula_12, for all formula_13.
A classic example of a formula_1-summable spectral triple arises as follows. Let formula_14 be a compact spin manifold, formula_15, the algebra of smooth functions on formula_14, formula_2 the Hilbert space of square integrable forms on formula_14, and formula_8 the standard Dirac operator.
The cocycle.
Given a formula_1-summable spectral triple, the JLO cocycle formula_16 associated to the triple is a sequence
formula_17
of functionals on the algebra formula_0, where
formula_18
formula_19
for formula_20. The cohomology class defined by formula_16 is independent of the value of formula_21.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\mathcal{A}"
},
{
"math_id": 1,
"text": "\\theta"
},
{
"math_id": 2,
"text": "\\mathcal{H}"
},
{
"math_id": 3,
"text": "\\mathbb{Z}_2"
},
{
"math_id": 4,
"text": "\\gamma"
},
{
"math_id": 5,
"text": "\\mathcal{H}=\\mathcal{H}_0\\oplus\\mathcal{H}_1"
},
{
"math_id": 6,
"text": "a\\gamma=\\gamma a"
},
{
"math_id": 7,
"text": "a\\in\\mathcal{A}"
},
{
"math_id": 8,
"text": "D"
},
{
"math_id": 9,
"text": "D\\gamma=-\\gamma D"
},
{
"math_id": 10,
"text": "\\mathrm{Dom}\\left(D\\right)"
},
{
"math_id": 11,
"text": "\\left[D,a\\right]:\\mathrm{Dom}\\left(D\\right)\\to\\mathcal{H}"
},
{
"math_id": 12,
"text": "\\mathrm{tr}\\left(e^{-tD^2}\\right)<\\infty"
},
{
"math_id": 13,
"text": "t>0"
},
{
"math_id": 14,
"text": "M"
},
{
"math_id": 15,
"text": "\\mathcal{A}=C^\\infty\\left(M\\right)"
},
{
"math_id": 16,
"text": "\\Phi_t\\left(D\\right)"
},
{
"math_id": 17,
"text": "\\Phi_t\\left(D\\right)=\\left(\\Phi_t^0\\left(D\\right),\\Phi_t^2\\left(D\\right),\\Phi_t^4\\left(D\\right),\\ldots\\right)"
},
{
"math_id": 18,
"text": "\\Phi_t^0\\left(D\\right)\\left(a_0\\right)=\\mathrm{tr}\\left(\\gamma a_0 e^{-tD^2}\\right),"
},
{
"math_id": 19,
"text": "\\Phi_t^n\\left(D\\right)\\left(a_0,a_1,\\ldots,a_n\\right)=\\int_{0\\leq s_1\\leq\\ldots s_n\\leq t}\\mathrm{tr}\\left(\\gamma a_0 e^{-s_1 D^2}\\left[D,a_1\\right]e^{-\\left(s_2-s_1\\right)D^2}\\ldots\\left[D,a_n\\right]e^{-\\left(t-s_n\\right)D^2}\\right)ds_1\\ldots ds_n,"
},
{
"math_id": 20,
"text": "n=2,4,\\dots"
},
{
"math_id": 21,
"text": "t"
}
]
| https://en.wikipedia.org/wiki?curid=13580667 |
1358112 | Lychrel number | <templatestyles src="Unsolved/styles.css" />
Unsolved problem in mathematics:
Do any base-10 Lychrel numbers exist?
A Lychrel number is a natural number that cannot form a palindrome through the iterative process of repeatedly reversing its digits and adding the resulting numbers. This process is sometimes called the "196-algorithm", after the most famous number associated with the process. In base ten, no Lychrel numbers have yet been proven to exist, but many, including 196, are suspected on heuristic and statistical grounds. The name "Lychrel" was coined by Wade Van Landingham as a rough anagram of "Cheryl", his girlfriend's first name.
Reverse-and-add process.
The reverse-and-add process produces the sum of a number and the number formed by reversing the order of its digits. For example, 56 + 65 = 121. As another example, 125 + 521 = 646.
Some numbers become palindromes quickly after repeated reversal and addition, and are therefore not Lychrel numbers. All one-digit and two-digit numbers eventually become palindromes after repeated reversal and addition.
About 80% of all numbers under 10,000 resolve into a palindrome in four or fewer steps; about 90% of those resolve in seven steps or fewer. Here are a few examples of non-Lychrel numbers:
The smallest number that is not known to form a palindrome is 196. It is therefore the smallest Lychrel number candidate.
The number resulting from the reversal of the digits of a Lychrel number not ending in zero is also a Lychrel number.
Formal definition of the process.
Let formula_0 be a natural number. We define the Lychrel function for a number base "b" > 1, formula_1, to be the following:
formula_2
where formula_3 is the number of digits in the number in base formula_4, and
formula_5
is the value of each digit of the number. A number is a Lychrel number if there does not exist a natural number formula_6 such that formula_7, where formula_8 is the formula_6-th iteration of formula_9
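A straightforward Python implementation of the reverse-and-add iteration for an arbitrary base; the iteration cap is arbitrary, so a None result only marks a Lychrel candidate, not a proof.

```python
def reverse_and_add(n, base=10):
    """One application of the Lychrel function: n plus its digit reversal in the given base."""
    digits = []
    m = n
    while m:
        m, d = divmod(m, base)
        digits.append(d)                 # least-significant digit first
    reversed_n = 0
    for d in digits:
        reversed_n = reversed_n * base + d
    return n + reversed_n

def is_palindrome(n, base=10):
    digits = []
    while n:
        n, d = divmod(n, base)
        digits.append(d)
    return digits == digits[::-1]

def first_palindrome_step(n, base=10, max_iters=1000):
    """Iteration count at which n first yields a palindrome, or None within max_iters."""
    for k in range(1, max_iters + 1):
        n = reverse_and_add(n, base)
        if is_palindrome(n, base):
            return k
    return None

print(first_palindrome_step(56))    # 1, since 56 + 65 = 121
print(first_palindrome_step(196))   # None: no palindrome within 1000 iterations (candidate only)
```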
Proof not found.
In other bases (these bases are powers of 2, like binary and hexadecimal), certain numbers can be proven to never form a palindrome after repeated reversal and addition, but no such proof has been found for 196 and other base 10 numbers.
It is conjectured that 196 and other numbers that have not yet yielded a palindrome are Lychrel numbers, but no number in base ten has yet been proven to be Lychrel. Numbers which have not been demonstrated to be non-Lychrel are informally called "candidate Lychrel" numbers. The first few candidate Lychrel numbers (sequence in the OEIS) are:
196, 295, 394, 493, 592, 689, 691, 788, 790, 879, 887, 978, 986, 1495, 1497, 1585, 1587, 1675, 1677, 1765, 1767, 1855, 1857, 1945, 1947, 1997.
The numbers in bold are suspected Lychrel seed numbers (see below). Computer programs by Jason Doucette, Ian Peters and Benjamin Despres have found other Lychrel candidates. Indeed, Benjamin Despres' program has identified all suspected Lychrel seed numbers of less than 17 digits. Wade Van Landingham's site lists the total number of found suspected Lychrel seed numbers for each digit length.
The brute-force method originally deployed by John Walker has been refined to take advantage of iteration behaviours. For example, Vaughn Suite devised a program that only saves the first and last few digits of each iteration, enabling testing of the digit patterns in millions of iterations to be performed without having to save each entire iteration to a file. However, so far no algorithm has been developed to circumvent the reversal and addition iterative process.
Threads, seed and kin numbers.
The term thread, coined by Jason Doucette, refers to the sequence of numbers that may or may not lead to a palindrome through the reverse and add process. Any given seed and its associated kin numbers will converge on the same thread. The thread does not include the original seed or kin number, but only the numbers that are common to both, after they converge.
Seed numbers are a subset of Lychrel numbers, that is the smallest number of each non palindrome producing thread. A seed number may be a palindrome itself. The first three examples are shown in bold in the list above.
Kin numbers are a subset of Lychrel numbers, that include all numbers of a thread, except the seed, or any number that will converge on a given thread after a single iteration. This term was introduced by Koji Yamashita in 1997.
196 palindrome quest.
Because 196 (base-10) is the smallest candidate Lychrel number, it has received the most attention.
In the 1980s, the 196 palindrome problem attracted the attention of microcomputer hobbyists, with search programs by Jim Butterfield and others appearing in several mass-market computing magazines. In 1985 a program by James Killman ran unsuccessfully for over 28 days, cycling through 12,954 passes and reaching a 5366-digit number.
John Walker began his 196 Palindrome Quest on 12 August 1987 on a Sun 3/260 workstation. He wrote a C program to perform the reversal and addition iterations and to check for a palindrome after each step. The program ran in the background with a low priority and produced a checkpoint to a file every two hours and when the system was shut down, recording the number reached so far and the number of iterations. It restarted itself automatically from the last checkpoint after every shutdown. It ran for almost three years, then terminated (as instructed) on 24 May 1990 with the message:
Stop point reached on pass 2,415,836.
Number contains 1,000,000 digits.
196 had grown to a number of one million digits after 2,415,836 iterations without reaching a palindrome. Walker published his findings on the internet along with the last checkpoint, inviting others to resume the quest using the number reached so far.
In 1995, Tim Irvin and Larry Simkins used a multiprocessor computer and reached the two million digit mark in only three months without finding a palindrome. Jason Doucette then followed suit and reached 12.5 million digits in May 2000. Wade VanLandingham used Jason Doucette's program to reach 13 million digits, a record published in Yes Mag: Canada's Science Magazine for Kids. Since June 2000, Wade VanLandingham has been carrying the flag using programs written by various enthusiasts. By 1 May 2006, VanLandingham had reached the 300 million digit mark (at a rate of one million digits every 5 to 7 days). Using distributed processing, in 2011 Romain Dolbeau completed a billion iterations to produce a number with 413,930,770 digits, and in February 2015 his calculations reached a number with a billion digits. A palindrome has yet to be found.
Other potential Lychrel numbers which have also been subjected to the same brute force method of repeated reversal addition include 879, 1997 and 7059: they have been taken to several million iterations with no palindrome being found.
Other bases.
In base 2, 10110 (22 in decimal) has been proven to be a Lychrel number, since after 4 steps it reaches 10110100, after 8 steps it reaches 1011101000, after 12 steps it reaches 101111010000, and in general after 4"n" steps it reaches a number consisting of 10, followed by "n" + 1 ones, followed by 01, followed by "n" + 1 zeros. This number obviously cannot be a palindrome, and none of the other numbers in the sequence are palindromes.
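The pattern described above is easy to check numerically for the first few multiples of four steps. A brief Python check (illustrative only) of the base-2 sequence for 10110:

```python
def step_base2(n: int) -> int:
    """One reverse-and-add step in base 2."""
    return n + int(bin(n)[:1:-1], 2)   # bin(n) is '0b...'; this slice reverses the digits

n = 0b10110  # 22 in decimal
for i in range(1, 13):
    n = step_base2(n)
    if i % 4 == 0:
        print(i, format(n, 'b'))
# prints 10110100 after 4 steps, 1011101000 after 8, 101111010000 after 12
```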
Lychrel numbers have been proven to exist in the following bases: 11, 17, 20, 26, and all powers of 2.
No base contains any Lychrel numbers smaller than the base. In fact, in any given base "b", no single-digit number takes more than two iterations to form a palindrome. For "b" > 4, if "k" < "b"/2 then "k" becomes palindromic after one iteration: "k" + "k" = 2"k", which is single-digit in base "b" (and thus a palindrome). If "k" > "b"/2, "k" becomes palindromic after two iterations.
The smallest numbers in each base which could possibly be Lychrel numbers are (sequence in the OEIS):
Extension to negative integers.
Lychrel numbers can be extended to the negative integers by use of a signed-digit representation to represent each integer.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "n"
},
{
"math_id": 1,
"text": "F_b : \\mathbb{N} \\rightarrow \\mathbb{N}"
},
{
"math_id": 2,
"text": "F_b(n) = n + \\sum_{i=0}^{k-1} d_i b^{k - i - 1}"
},
{
"math_id": 3,
"text": "k = \\lfloor \\log_{b} n \\rfloor + 1"
},
{
"math_id": 4,
"text": "b"
},
{
"math_id": 5,
"text": "d_i = \\frac{n \\bmod{b^{i+1}} - n \\bmod b^i}{b^i}"
},
{
"math_id": 6,
"text": "i"
},
{
"math_id": 7,
"text": "F_b^{i+1}(n) = 2 F_b^i(n)"
},
{
"math_id": 8,
"text": "F^i"
},
{
"math_id": 9,
"text": "F"
}
]
| https://en.wikipedia.org/wiki?curid=1358112 |
1358178 | Pulse-Doppler radar | Type of radar system
A pulse-Doppler radar is a radar system that determines the range to a target using pulse-timing techniques, and uses the Doppler effect of the returned signal to determine the target object's velocity. It combines the features of pulse radars and continuous-wave radars, which were formerly separate due to the complexity of the electronics.
The first operational pulse-Doppler radar was in the CIM-10 Bomarc, an American long range supersonic missile powered by ramjet engines, which was armed with a W40 nuclear weapon to destroy entire formations of attacking enemy aircraft. Pulse-Doppler systems were first widely used on fighter aircraft starting in the 1960s. Earlier radars had used pulse-timing in order to determine range and the angle of the antenna (or similar means) to determine the bearing. However, this only worked when the radar antenna was not pointed down; in that case the reflection off the ground overwhelmed any returns from other objects. As the ground moves at the same speed but opposite direction of the aircraft, Doppler techniques allow the ground return to be filtered out, revealing aircraft and vehicles. This gives pulse-Doppler radars "look-down/shoot-down" capability. A secondary advantage in military radar is the ability to reduce the transmitted power while achieving acceptable performance, which improves the safety of stealthy radar.
Pulse-Doppler techniques also find widespread use in meteorological radars, allowing the radar to determine wind speed from the velocity of any precipitation in the air. Pulse-Doppler radar is also the basis of synthetic aperture radar used in radar astronomy, remote sensing and mapping. In air traffic control, they are used for discriminating aircraft from clutter. Besides the above conventional surveillance applications, pulse-Doppler radar has been successfully applied in healthcare, such as fall risk assessment and fall detection, for nursing or clinical purposes.
History.
The earliest radar systems failed to operate as expected. The reason was traced to Doppler effects that degrade performance of systems not designed to account for moving objects. Fast-moving objects cause a phase-shift on the transmit pulse that can produce signal cancellation. Doppler has maximum detrimental effect on moving target indicator systems, which must use reverse phase shift for Doppler compensation in the detector.
Doppler weather effects (precipitation) were also found to degrade conventional radar and moving target indicator radar, which can mask aircraft reflections. This phenomenon was adapted for use with weather radar in the 1950s after declassification of some World War II systems.
Pulse-Doppler radar was developed during World War II to overcome limitations by increasing pulse repetition frequency. This required the development of the klystron, the traveling wave tube, and solid state devices. Early pulse-dopplers were incompatible with other high power microwave amplification devices that are not coherent, but more sophisticated techniques were developed that record the phase of each transmitted pulse for comparison to returned echoes.
Early examples of military systems include the AN/SPG-51B, developed during the 1950s specifically for the purpose of operating in hurricane conditions with no performance degradation.
The Hughes AN/ASG-18 Fire Control System was a prototype airborne radar/combination system for the planned North American XF-108 Rapier interceptor aircraft for the United States Air Force, and later for the Lockheed YF-12. The US's first pulse-Doppler radar, the system had look-down/shoot-down capability and could track one target at a time.
It became possible to use pulse-Doppler radar on aircraft after digital computers were incorporated in the design. Pulse-Doppler provided look-down/shoot-down capability to support air-to-air missile systems in most modern military aircraft by the mid 1970s.
Principle.
Range measurement.
Pulse-Doppler systems measure the range to objects by measuring the elapsed time between sending a pulse of radio energy and receiving a reflection of the object. Radio waves travel at the speed of light, so the distance to the object is the elapsed time multiplied by the speed of light, divided by two – there and back.
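As a minimal illustration (not from the article), the range computation is a one-line conversion from round-trip delay to distance:

```python
C = 299_792_458.0  # speed of light in vacuum, m/s

def range_from_delay(round_trip_s: float) -> float:
    """Distance to the target: elapsed time times the speed of light, divided by two."""
    return C * round_trip_s / 2.0

print(range_from_delay(200e-6))  # a 200-microsecond echo delay corresponds to ~30 km
```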
Velocity measurement.
Pulse-Doppler radar is based on the Doppler effect, where movement in range produces frequency shift on the signal reflected from the target.
formula_0
Radial velocity is essential for pulse-Doppler radar operation. As the reflector moves between each transmit pulse, the returned signal has a phase difference, or "phase shift", from pulse to pulse. This causes the reflector to produce Doppler modulation on the reflected signal.
Pulse-Doppler radars exploit this phenomenon to improve performance.
The amplitude of the successively returning pulse from the same scanned volume is
formula_1
where
So
formula_5
This allows the radar to separate the reflections from multiple objects located in the same volume of space by separating the objects using a spread spectrum to segregate different signals:
formula_6
where formula_7 is the phase shift induced by range motion.
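The relation between the pulse-to-pulse phase shift and radial speed can be sketched as follows; the carrier wavelength, PRF and phase value below are arbitrary illustrative numbers, not figures from the article:

```python
import math

def radial_speed(delta_phase_rad: float, wavelength_m: float, prf_hz: float) -> float:
    """Radial speed from the pulse-to-pulse phase shift:
    v = wavelength * delta_theta / (4 * pi * delta_t), with delta_t = 1 / PRF."""
    delta_t = 1.0 / prf_hz
    return wavelength_m * delta_phase_rad / (4.0 * math.pi * delta_t)

# 3 cm wavelength (X band), 10 kHz PRF, 0.5 rad of phase change per pulse interval
print(radial_speed(0.5, 0.03, 10_000))  # ~12 m/s
```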
Benefits.
Rejection speed is selectable on pulse-Doppler aircraft-detection systems so nothing below that speed will be detected. A one degree antenna beam illuminates millions of square feet of terrain at range, and this produces thousands of detections at or below the horizon if Doppler is not used.
Pulse-Doppler radar uses the following signal processing criteria to exclude unwanted signals from slow-moving objects. This is also known as clutter rejection. Rejection velocity is usually set just above the prevailing wind speed. The velocity threshold is much lower for weather radar.
formula_8
In airborne pulse-Doppler radar, the velocity threshold is offset by the speed of the aircraft relative to the ground.
formula_9
where formula_10 is the angle offset between the antenna position and the aircraft flight trajectory.
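The airborne rejection criterion above can be expressed as a simple test; the transmit frequency, platform speed, antenna offset and threshold used here are placeholder values for illustration:

```python
import math

C = 3.0e8  # approximate speed of light, m/s

def passes_clutter_filter(doppler_hz, tx_hz, ground_speed_mps, offset_deg, threshold_mps):
    """| f_d * C / (2 * f_t) - ground_speed * cos(theta) | > velocity threshold"""
    measured = doppler_hz * C / (2.0 * tx_hz)
    expected_ground = ground_speed_mps * math.cos(math.radians(offset_deg))
    return abs(measured - expected_ground) > threshold_mps

# A return whose Doppler matches the ground at the antenna offset angle is rejected.
print(passes_clutter_filter(12_000, 10e9, 200.0, 30.0, 25.0))  # False -> treated as clutter
```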
Surface reflections appear in almost all radar. Ground clutter generally appears in a circular region around ground-based radar. This distance extends much further in airborne and space radar. Clutter results from radio energy being reflected from the earth surface, buildings, and vegetation. Clutter includes weather in radar intended to detect and report aircraft and spacecraft.
Clutter creates a vulnerability region in pulse-amplitude time-domain radar. Non-Doppler radar systems cannot be pointed directly at the ground due to excessive false alarms, which overwhelm computers and operators. Sensitivity must be reduced near clutter to avoid overload. This vulnerability begins in the low-elevation region several beam widths above the horizon, and extends downward. This also exists throughout the volume of moving air associated with weather phenomenon.
Pulse-Doppler radar corrects this as follows.
Clutter rejection capability of about 60 dB is needed for look-down/shoot-down capability, and pulse-Doppler is the only strategy that can satisfy this requirement. This eliminates vulnerabilities associated with the low-elevation and below-horizon environment.
Pulse compression and moving target indicator (MTI) provide up to 25 dB sub-clutter visibility. An MTI antenna beam is aimed above the horizon to avoid an excessive false alarm rate, which renders systems vulnerable. Aircraft and some missiles exploit this weakness using a technique called flying below the radar to avoid detection (nap-of-the-earth). This flying technique is ineffective against pulse-Doppler radar.
Pulse-Doppler provides an advantage when attempting to detect missiles and low observability aircraft flying near terrain, sea surface, and weather.
Audible Doppler and target size support passive vehicle type classification when identification friend or foe is not available from a transponder signal. Medium pulse repetition frequency (PRF) reflected microwave signals fall between 1,500 and 15,000 cycles per second, which is audible. This means a helicopter sounds like a helicopter, a jet sounds like a jet, and propeller aircraft sound like propellers. Aircraft with no moving parts produce a tone. The actual size of the target can be calculated using the audible signal.
Detriments.
Ambiguity processing is required when target range is above the maximum unambiguous range, which increases scan time.
Scan time is a critical factor for some systems because vehicles moving at or above the speed of sound, like the Exocet, Harpoon, Kitchen, and air-to-air missiles, cover a significant distance every few seconds. The maximum time to scan the entire volume of the sky must be on the order of a dozen seconds or less for systems operating in that environment.
Pulse-Doppler radar by itself can be too slow to cover the entire volume of space above the horizon unless a fan beam is used. This approach is used with the AN/SPS-49(V)5 Very Long Range Air Surveillance Radar, which sacrifices elevation measurement to gain speed.
Pulse-Doppler antenna motion must be slow enough so that all the return signals from at least 3 different PRFs can be processed out to the maximum anticipated detection range. This is known as dwell time. Antenna motion for pulse-Doppler must be as slow as radar using MTI.
Search radars that include pulse-Doppler are usually dual mode because best overall performance is achieved when pulse-Doppler is used for areas with high false alarm rates (horizon or below and weather), while conventional radar will scan faster in free-space where false alarm rate is low (above horizon with clear skies).
The antenna type is an important consideration for multi-mode radar because undesirable phase shift introduced by the radar antenna can degrade performance measurements for sub-clutter visibility.
Signal processing.
The signal processing enhancement of pulse-Doppler allows small high-speed objects to be detected in close proximity to large slow moving reflectors. To achieve this, the transmitter must be coherent and should produce low phase noise during the detection interval, and the receiver must have large instantaneous dynamic range.
Pulse-Doppler signal processing also includes ambiguity resolution to identify true range and velocity.
The received signals from multiple PRF are compared to determine true range using the range ambiguity resolution process.
The received signals are also compared using the frequency ambiguity resolution process.
Range resolution.
The range resolution is the minimal range separation between two objects traveling at the same speed before the radar can detect two discrete reflections:
formula_11
In addition to this sampling limit, the duration of the transmitted pulse could mean that returns from two targets will be received simultaneously from different parts of the pulse.
Velocity resolution.
The velocity resolution is the minimal radial velocity difference between two objects traveling at the same range before the radar can detect two discrete reflections:
formula_12
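Both resolution formulas are simple ratios, as the short sketch below shows; the PRF, sample count, carrier frequency and filter size are assumed values for illustration only:

```python
C = 3.0e8  # approximate speed of light, m/s

def range_resolution(prf_hz: float, samples_between_pulses: int) -> float:
    """C / (PRF * number of samples between transmit pulses)."""
    return C / (prf_hz * samples_between_pulses)

def velocity_resolution(prf_hz: float, tx_hz: float, filter_size_pulses: int) -> float:
    """C * PRF / (2 * transmit frequency * filter size in transmit pulses)."""
    return C * prf_hz / (2.0 * tx_hz * filter_size_pulses)

# 10 kHz PRF, 200 range samples per pulse interval, 10 GHz carrier, 1024-pulse filter
print(range_resolution(10e3, 200))            # 150 m
print(velocity_resolution(10e3, 10e9, 1024))  # ~0.15 m/s
```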
Special consideration.
Pulse-Doppler radar has special requirements that must be satisfied to achieve acceptable performance.
Pulse repetition frequency.
Pulse-Doppler typically uses medium pulse repetition frequency (PRF) from about 3 kHz to 30 kHz. The range between transmit pulses is 5 km to 50 km.
Range and velocity cannot be measured directly using medium PRF, and ambiguity resolution is required to identify true range and speed. Doppler signals are generally above 1 kHz, which is audible, so audio signals from medium-PRF systems can be used for passive target classification.
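The ambiguity trade-off of medium PRF can be seen from the standard unambiguous-range and unambiguous-velocity relations (these are textbook relations rather than formulas given in this article, and the 10 GHz carrier is an assumed value):

```python
C = 3.0e8  # approximate speed of light, m/s

def unambiguous_range(prf_hz: float) -> float:
    """Maximum unambiguous range: C / (2 * PRF)."""
    return C / (2.0 * prf_hz)

def unambiguous_speed(prf_hz: float, tx_hz: float) -> float:
    """Maximum unambiguous radial speed: C * PRF / (4 * transmit frequency)."""
    return C * prf_hz / (4.0 * tx_hz)

for prf in (3e3, 30e3):
    print(prf, unambiguous_range(prf), unambiguous_speed(prf, 10e9))
# 3 kHz gives 50 km of range but only ~22 m/s of speed; 30 kHz gives ~225 m/s but only 5 km,
# which is why several PRFs and ambiguity resolution are needed.
```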
Angular measurement.
Radar systems require angular measurement. Transponders are not normally associated with pulse-Doppler radar, so sidelobe suppression is required for practical operation.
Tracking radar systems use angle error to improve accuracy by producing measurements perpendicular to the radar antenna beam. Angular measurements are averaged over a span of time and combined with radial movement to develop information suitable to predict target position for a short time into the future.
The two angle error techniques used with tracking radar are monopulse and conical scan.
Coherency.
Pulse-Doppler radar requires a coherent oscillator with very little noise. Phase noise reduces sub-clutter visibility performance by producing apparent motion on stationary objects.
Cavity magnetron and crossed-field amplifier are not appropriate because noise introduced by these devices interfere with detection performance. The only amplification devices suitable for pulse-Doppler are klystron, traveling wave tube, and solid state devices.
Scalloping.
Pulse-Doppler signal processing introduces a phenomenon called scalloping. The name is associated with a series of holes that are scooped-out of the detection performance.
Scalloping for pulse-Doppler radar involves blind velocities created by the clutter rejection filter. Every volume of space must be scanned using 3 or more different PRF. A two PRF detection scheme will have detection gaps with a pattern of discrete ranges, each of which has a blind velocity.
Windowing.
Ringing artifacts pose a problem with search, detection, and ambiguity resolution in pulse-Doppler radar.
Ringing is reduced in two ways.
First, the shape of the transmit pulse is adjusted to smooth the leading edge and trailing edge so that RF power is increased and decreased without an abrupt change. This creates a transmit pulse with smooth ends instead of a square wave, which reduces ringing phenomenon that is otherwise associated with target reflection.
Second, the shape of the receive pulse is adjusted using a window function that minimizes ringing that occurs any time pulses are applied to a filter. In a digital system, this adjusts the phase and/or amplitude of each sample before it is applied to the fast Fourier transform. The Dolph-Chebyshev window is the most effective because it produces a flat processing floor with no ringing that would otherwise cause false alarms.
Antenna.
Pulse-Doppler radar is generally limited to mechanically aimed antennas and active phased arrays.
Mechanical RF components, such as wave-guide, can produce Doppler modulation due to phase shift induced by vibration. This introduces a requirement to perform full spectrum operational tests using shake tables that can produce high power mechanical vibration across all anticipated audio frequencies.
Doppler is incompatible with most electronically steered phased-array antenna. This is because the phase-shifter elements in the antenna are non-reciprocal and the phase shift must be adjusted before and after each transmit pulse. Spurious phase shift is produced by the sudden impulse of the phase shift, and settling during the receive period between transmit pulses places Doppler modulation onto stationary clutter. That receive modulation corrupts the measure of performance for sub-clutter visibility. Phase shifter settling time on the order of 50ns is required. Start of receiver sampling needs to be postponed at least 1 phase-shifter settling time-constant (or more) for each 20 dB of sub-clutter visibility.
Most antenna phase shifters operating at PRF above 1 kHz introduce spurious phase shift unless special provisions are made, such as reducing phase shifter settling time to a few dozen nanoseconds.
The following gives the maximum permissible settling time for antenna phase shift modules.
formula_13
where
The antenna type and scan performance is a practical consideration for multi-mode radar systems.
Diffraction.
Choppy surfaces, like waves and trees, form a diffraction grating suitable for bending microwave signals. Pulse-Doppler can be so sensitive that diffraction from mountains, buildings or wave tops can be used to detect fast moving objects otherwise blocked by solid obstruction along the line of sight. This is a very lossy phenomenon that only becomes possible when radar has significant excess sub-clutter visibility.
Refraction and ducting use transmit frequency at L-band or lower to extend the horizon, which is very different from diffraction. Refraction for over-the-horizon radar uses variable density in the air column above the surface of the earth to bend RF signals. An inversion layer can produce a transient troposphere duct that traps RF signals in a thin layer of air like a wave-guide.
Subclutter visibility.
Subclutter visibility involves the maximum ratio of clutter power to target power, which is proportional to dynamic range. This determines performance in heavy weather and near the earth surface.
formula_14
Subclutter visibility is the ratio of the smallest signal that can be detected in the presence of a larger signal.
formula_15
A small fast-moving target reflection can be detected in the presence of larger slow-moving clutter reflections when the following is true:
formula_16
Performance.
The pulse-Doppler radar equation can be used to understand trade-offs between different design constraints, like power consumption, detection range, and microwave safety hazards. This is a very simple form of modeling that allows performance to be evaluated in a sterile environment.
The theoretical range performance is as follows.
formula_17
where
"R" = distance to the target,
"P"t = transmitter power,
"G"t = gain of the transmitting antenna,
"A"r = effective aperture (area) of the receiving antenna,
"σ" = radar cross section, or scattering coefficient, of the target,
"F" = antenna pattern propagation factor,
"D" = Doppler filter size (transmit pulses in each Fast Fourier transform),
"k"B = Boltzmann constant,
"T" = absolute temperature,
"B" = receiver bandwidth (band-pass filter),
"N" = noise figure.
This equation is derived by combining the radar equation with the noise equation and accounting for in-band noise distribution across multiple detection filters. The value "D" is added to the standard radar range equation to account for both pulse-Doppler signal processing and transmitter FM noise reduction.
Detection range is increased proportional to the fourth root of the number of filters for a given power consumption. Alternatively, power consumption is reduced by the number of filters for a given detection range.
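The fourth-root scaling is easy to demonstrate numerically. The sketch below evaluates the range equation with placeholder values (none of the numbers are from the article) and compares a single Doppler filter with a 1024-filter bank:

```python
import math

K_B = 1.380649e-23  # Boltzmann constant, J/K

def detection_range(p_t, g_t, a_r, sigma, f_prop, d_filters, temp_k, bandwidth_hz, noise_figure):
    """R = (Pt*Gt*Ar*sigma*F*D / (16*pi^2*kB*T*B*N)) ** (1/4)"""
    numerator = p_t * g_t * a_r * sigma * f_prop * d_filters
    denominator = 16.0 * math.pi ** 2 * K_B * temp_k * bandwidth_hz * noise_figure
    return (numerator / denominator) ** 0.25

base = dict(p_t=1e3, g_t=1e3, a_r=1.0, sigma=1.0, f_prop=1.0,
            temp_k=290.0, bandwidth_hz=1e3, noise_figure=3.0)
r_single = detection_range(d_filters=1, **base)
r_bank = detection_range(d_filters=1024, **base)
print(r_bank / r_single)  # ~5.66, i.e. 1024 ** 0.25
```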
Pulse-Doppler signal processing integrates all of the energy from all of the individual reflected pulses that enter the filter. This means a pulse-Doppler signal processing system with 1024 elements provides 30.103 dB of improvement due to the type of signal processing that must be used with pulse-Doppler radar. The energy of all of the individual pulses from the object is added together by the filtering process.
Signal processing for a 1024-point filter improves performance by 30.103 dB, assuming a compatible transmitter and antenna. This corresponds to an increase in maximal detection distance by a factor of about 5.7, the fourth root of 1024.
These improvements are the reason pulse-Doppler is essential for military and astronomical applications.
Aircraft tracking uses.
Pulse-Doppler radar for aircraft detection has two modes.
Scan mode involves frequency filtering, amplitude thresholding, and ambiguity resolution. Once a reflection has been detected and resolved, the pulse-Doppler radar automatically transitions to tracking mode for the volume of space surrounding the track.
Track mode works like a phase-locked loop, where Doppler velocity is compared with the range movement on successive scans. Lock indicates the difference between the two measurements is below a threshold, which can only occur with an object that satisfies Newtonian mechanics. Other types of electronic signals cannot produce a lock. Lock exists in no other type of radar.
The lock criterion needs to be satisfied during normal operation.
Lock eliminates the need for human intervention with the exception of helicopters and electronic jamming.
Weather phenomena obey the adiabatic processes associated with air masses, not Newtonian mechanics, so the lock criterion is not normally used for weather radar.
Pulse-Doppler signal processing selectively excludes low-velocity reflections so that no detection occurs below a threshold velocity. This eliminates terrain, weather, biologicals, and mechanical jamming, with the exception of decoy aircraft.
The target Doppler signal from the detection is converted from frequency domain back into time domain sound for the operator in track mode on some radar systems. The operator uses this sound for passive target classification, such as recognizing helicopters and electronic jamming.
Helicopters.
Special consideration is required for aircraft with large moving parts because pulse-Doppler radar operates like a phase-locked loop. Blade tips moving near the speed of sound produce the only signal that can be detected when a helicopter is moving slowly near terrain and weather.
A helicopter appears like a rapidly pulsing noise emitter except in a clear environment free from clutter. An audible signal is produced for passive identification of the type of airborne object. Microwave Doppler frequency shift produced by reflector motion falls into the audible sound range for human beings, which is used for target classification in addition to the kinds of conventional radar display used for that purpose, like A-scope, B-scope, C-scope, and RHI indicator. The human ear may be able to tell the difference better than electronic equipment.
A special mode is required because the Doppler velocity feedback information must be unlinked from radial movement so that the system can transition from scan to track with no lock.
Similar techniques are required to develop track information for jamming signals and interference that cannot satisfy the lock criterion.
Multi-mode.
Pulse-Doppler radar must be multi-mode to handle aircraft turning and crossing trajectory.
Once in track mode, pulse-Doppler radar must include a way to modify Doppler filtering for the volume of space surrounding a track when radial velocity falls below the minimum detection velocity. Doppler filter adjustment must be linked with a radar track function to automatically adjust Doppler rejection speed within the volume of space surrounding the track.
Tracking will cease without this feature because the target signal will otherwise be rejected by the Doppler filter when radial velocity approaches zero because there is no change in frequency.
Multi-mode operation may also include continuous wave illumination for semi-active radar homing.
References.
<templatestyles src="Reflist/styles.css" />
Bibliography.
<templatestyles src="Refbegin/styles.css" /> | [
{
"math_id": 0,
"text": "\\text{Doppler frequency} = \\frac{2 \\times \\text{transmit frequency} \\times \\text{radial velocity}}{C}."
},
{
"math_id": 1,
"text": "I = I_0 \\sin\\left(\\frac{4\\pi (x_0 + v \\Delta t)}{\\lambda}\\right) = I_0 \\sin(\\Theta_0 + \\Delta\\Theta),"
},
{
"math_id": 2,
"text": "x_0"
},
{
"math_id": 3,
"text": "\\lambda"
},
{
"math_id": 4,
"text": "\\Delta t"
},
{
"math_id": 5,
"text": "\\Delta\\Theta = \\frac{4\\pi v \\Delta t}{\\lambda}."
},
{
"math_id": 6,
"text": "v = \\text{target speed} = \\frac{\\lambda\\Delta\\Theta}{4\\pi \\Delta t},"
},
{
"math_id": 7,
"text": "\\Delta\\Theta"
},
{
"math_id": 8,
"text": " \\left\\vert \\frac{\\text{Doppler frequency} \\times C}{2 \\times \\text{transmit frequency}} \\right\\vert > \\text{velocity threshold}."
},
{
"math_id": 9,
"text": " \\left\\vert \\frac{\\text{Doppler frequency} \\times C}{2 \\times \\text{transmit frequency}} - \\text{ground speed} \\times \\cos\\Theta \\right\\vert > \\text{velocity threshold},"
},
{
"math_id": 10,
"text": "\\Theta"
},
{
"math_id": 11,
"text": "\\text{range resolution} = \\frac{C}{\\text{PRF} \\times (\\text{number of samples between transmit pulses})}."
},
{
"math_id": 12,
"text": "\\text{velocity resolution} = \\frac{C \\times \\text{PRF}}{2 \\times \\text{transmit frequency} \\times \\text{filter size in transmit pulses}}."
},
{
"math_id": 13,
"text": "T = \\frac{1}{e^\\frac{\\text{SCV}}{20} \\times S \\times \\text{PRF}},"
},
{
"math_id": 14,
"text": "\\text{dynamic range} = \\min \\begin{cases} \\tfrac{\\text{carrier power}}{\\text{noise power}} & \\text{transmit noise, where bandwidth is } \\tfrac{\\text{PRF}}{\\text{filter size}}\\\\ 2^{\\text{sample bits} + \\text{filter size}} & \\text{receiver dynamic range} \\end{cases}."
},
{
"math_id": 15,
"text": "\\text{subclutter visibility} = \\frac{\\text{dynamic range}}{\\text{CFAR detection threshold}}."
},
{
"math_id": 16,
"text": "\\text{target power} > \\frac{\\text{clutter power}}{\\text{subclutter visibility}}."
},
{
"math_id": 17,
"text": " R = \\left( \\frac{P_\\text{t} G_\\text{t} A_\\text{r} \\sigma F D}{16 \\pi^2 k_\\text{B} T B N} \\right)^\\frac{1}{4}, "
}
]
| https://en.wikipedia.org/wiki?curid=1358178 |
13581828 | Surface conductivity | Surface conductivity is an additional conductivity of an electrolyte in the vicinity of the charged interfaces. Surface and volume conductivity of liquids correspond to the electrically driven motion of ions in an electric field. A layer of counter ions of the opposite polarity to the surface charge exists close to the interface. It is formed due to attraction of counter-ions by the surface charges. This layer of higher ionic concentration is a part of the interfacial double layer. The concentration of the ions in this layer is higher as compared to the ionic strength of the liquid bulk. This leads to the higher electric conductivity of this layer.
Smoluchowski was the first to recognize the importance of surface conductivity at the beginning of the 20th century.
There is a detailed description of surface conductivity by Lyklema in "Fundamentals of Interface and Colloid Science".
The Double Layer (DL) has two regions, according to the well established Gouy-Chapman-Stern model. The upper level, which is in contact with the bulk liquid, is the diffuse layer. The inner layer, which is in contact with the interface, is the Stern layer.
It is possible that the lateral motion of ions in both parts of the DL contributes to the surface conductivity.
The contribution of the Stern layer is less well described. It is often called "additional surface conductivity".
The theory of the surface conductivity of the diffuse part of the DL was developed by Bikerman. He derived a simple equation that links surface conductivity κσ with the behaviour of ions at the interface. For a symmetrical electrolyte, and assuming identical ion diffusion coefficients D+ = D− = D, it is given in the reference:
formula_0
where
F is the Faraday constant
T is the absolute temperature
R is the gas constant
C is the ionic concentration in the bulk fluid
z is the ion valency
ζ is the electrokinetic potential
κ is the Debye parameter (the reciprocal of the Debye length)
The parameter "m" characterizes the contribution of electro-osmosis to the motion of ions within the DL:
formula_1
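A numerical sketch of Bikerman's equation is given below. The electrolyte concentration, diffusion coefficient, zeta potential, permittivity and viscosity are illustrative values for a dilute aqueous 1:1 electrolyte, and the Debye parameter κ is computed from its standard expression, which is an assumption since the article does not define it explicitly:

```python
import math

F = 96485.33212          # Faraday constant, C/mol
R = 8.314462618          # gas constant, J/(mol K)
EPS0 = 8.8541878128e-12  # vacuum permittivity, F/m

def bikerman_surface_conductivity(c_bulk, z, d_ion, zeta, temp_k, eps_m, eta):
    """Surface conductivity of the diffuse layer for a symmetric electrolyte
    with equal ion diffusion coefficients (SI units, c_bulk in mol/m^3)."""
    kappa = math.sqrt(2.0 * F**2 * z**2 * c_bulk / (EPS0 * eps_m * R * temp_k))
    m = 2.0 * EPS0 * eps_m * R**2 * temp_k**2 / (3.0 * eta * F**2 * d_ion)
    prefactor = 4.0 * F**2 * c_bulk * z**2 * d_ion * (1.0 + 3.0 * m / z**2) / (R * temp_k * kappa)
    return prefactor * (math.cosh(z * F * zeta / (2.0 * R * temp_k)) - 1.0)

# ~10 mM 1:1 electrolyte in water at 25 C with zeta = -50 mV
print(bikerman_surface_conductivity(c_bulk=10.0, z=1, d_ion=2e-9, zeta=-0.05,
                                    temp_k=298.15, eps_m=78.5, eta=0.89e-3))
# on the order of 1e-9 S, a typical magnitude for dilute aqueous systems
```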
The Dukhin number is a dimensionless parameter that characterizes the contribution of the surface conductivity to a variety of electrokinetic phenomena, such as electrophoresis and electroacoustic phenomena. This parameter and, consequently, surface conductivity can be calculated from the electrophoretic mobility using an appropriate theory. Electrophoretic instruments by Malvern and electroacoustic instruments by Dispersion Technology contain software for conducting such calculations.
Surface Science.
Surface conductivity may refer to the electrical conduction across a solid surface measured by surface probes. Experiments may be done to test this material property, such as measurements of n-type surface conductivity on a p-type substrate. Additionally, surface conductivity is measured in coupled phenomena such as photoconductivity, for example, for the metal oxide semiconductor ZnO. Surface conductivity differs from bulk conductivity for analogous reasons to the electrolyte solution case, where the charge carriers of holes (+1) and electrons (-1) play the role of ions in solution.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": " {\\kappa}^{\\sigma} = \\frac{4F^2Cz^2D(1+3m/z^2)}{RT\\kappa}\\left(\\cosh\\frac{zF\\zeta}{2RT}-1\\right)"
},
{
"math_id": 1,
"text": " m = \\frac{2\\varepsilon_0\\varepsilon_m R^2T^2}{3\\eta F^2 D}"
}
]
| https://en.wikipedia.org/wiki?curid=13581828 |
1358331 | Continuous-wave radar | Type of radar where a known stable frequency continuous wave radio energy is transmitted
Principle of a measurement with a continuous-wave radar: a transmitter sends out the transmitted energy, and a receiver collects the backscattered energy, which contains much information about the backscatterer.
Continuous-wave radar (CW radar) is a type of radar system where a known stable frequency continuous wave radio energy is transmitted and then received from any reflecting objects. Individual objects can be detected using the Doppler effect, which causes the received signal to have a different frequency from the transmitted signal, allowing it to be detected by filtering out the transmitted frequency.
Doppler-analysis of radar returns can allow the filtering out of slow or non-moving objects, thus offering immunity to interference from large stationary objects and slow-moving clutter. This makes it particularly useful for looking for objects against a background reflector, for instance, allowing a high-flying aircraft to look for aircraft flying at low altitudes against the background of the surface. Because the very strong reflection off the surface can be filtered out, the much smaller reflection from a target can still be seen.
CW radar systems are used at both ends of the range spectrum.
Operation.
The main advantage of CW radar is that energy is not pulsed so these are much simpler to manufacture and operate. They have no minimum or maximum range, although the broadcast power level imposes a practical limit on range. Continuous-wave radar maximize total power on a target because the transmitter is broadcasting continuously.
The military uses continuous-wave radar to guide semi-active radar homing (SARH) air-to-air missiles, such as the U.S. AIM-7 Sparrow and the Standard missile family. The launch aircraft "illuminates" the target with a CW radar signal, and the missile homes in on the reflected radio waves. Since the missile is moving at high velocities relative to the aircraft, there is a strong Doppler shift. Most modern air combat radars, even pulse Doppler sets, have a CW function for missile guidance purposes.
Maximum distance in a continuous-wave radar is determined by the overall bandwidth and transmitter power. This bandwidth is determined by two factors.
Doubling transmit power increases distance performance by about 20%. Reducing the total FM transmit noise by half has the same effect.
Frequency domain receivers used for continuous-wave Doppler radar receivers are very different from conventional radar receivers. The receiver consists of a bank of filters, usually more than 100. The number of filters determines the maximum distance performance.
Doubling the number of receiver filters increases distance performance by about 20%. Maximum distance performance is achieved when receiver filter size is equal to the maximum FM noise riding on the transmit signal. Reducing receiver filter size below average amount of FM transmit noise will not improve range performance.
A CW radar is said to be "matched" when the receiver filter size matches the RMS bandwidth of the FM noise on the transmit signal.
Types.
There are two types of continuous-wave radar: "unmodulated continuous-wave" and "modulated continuous-wave".
Unmodulated continuous-wave.
This kind of radar can cost less than $10 (2021). Return frequencies are shifted away from the transmitted frequency based on the Doppler effect when objects are moving. There is no way to evaluate distance. This type of radar is typically used with competition sports, like golf, tennis, baseball, NASCAR racing, and some smart-home appliances including light-bulbs and motion sensors.
The Doppler frequency change depends on the speed of light in the air ("c’ ≈ c/1.0003" is slightly slower than in vacuum) and "v" the speed of the target:
formula_0
The Doppler frequency is thus:
formula_1
Since the speed of a radar's targets is usually much smaller than formula_2, the expression can be simplified using formula_3:
formula_4
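As a minimal numeric illustration (values chosen for the example, not taken from the article), the simplified relation gives the Doppler shift produced by a moving target:

```python
C_AIR = 299_792_458.0 / 1.0003  # approximate speed of light in air, m/s

def doppler_shift(radial_speed_mps: float, tx_hz: float) -> float:
    """Approximate Doppler shift f_d = 2 * v * f_t / c', valid for v much less than c'."""
    return 2.0 * radial_speed_mps * tx_hz / C_AIR

# A target closing at 30 m/s seen by a 24 GHz radar produces a shift of about 4.8 kHz.
print(doppler_shift(30.0, 24e9))
```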
Continuous-wave radar without frequency modulation (FM) only detects moving targets, as stationary targets (along the line of sight) will not cause a Doppler shift. Reflected signals from stationary and slow-moving objects are masked by the transmit signal, which overwhelms reflections from slow-moving objects during normal operation.
Modulated continuous-wave.
"Frequency-modulated continuous-wave radar" (FM-CW) – also called continuous-wave frequency-modulated (CWFM) radar
– is a short-range measuring radar set capable of determining distance. This increases reliability by providing distance measurement along with speed measurement, which is essential when there is more than one source of reflection arriving at the radar antenna. This kind of radar is often used as "radar altimeter" to measure the exact height during the landing procedure of aircraft. It is also used as early-warning radar, wave radar, and proximity sensors. Doppler shift is not always required for detection when FM is used. While early implementations, such as the APN-1 Radar Altimeter of the 1940s, were designed for short ranges, Over The Horizon Radars (OTHR) such as the Jindalee Operational Radar Network (JORN) are designed to survey intercontinental distances of some thousands of kilometres.
In this system the transmitted signal of a known stable frequency continuous wave varies up and down in frequency over a fixed period of time by a modulating signal. Frequency difference between the receive signal and the transmit signal increases with delay, and hence with distance. This smears out, or blurs, the Doppler signal. Echoes from a target are then mixed with the transmitted signal to produce a beat signal which will give the distance of the target after demodulation.
A variety of modulations are possible; the transmitter frequency can slew up and down as follows:
Range demodulation is limited to 1/4 wavelength of the transmit modulation. Instrumented range for 100 Hz FM would be 500 km. That limit depends upon the type of modulation and demodulation. The following generally applies.
formula_5
The radar will report incorrect distance for reflections from distances beyond the instrumented range, such as from the moon. FMCW range measurements are only reliable to about 60% of the instrumented range, or about 300 km for 100 Hz FM.
Sawtooth frequency modulation.
Sawtooth modulation is the most used in FM-CW radars where range is desired for objects that lack rotating parts. Range information is mixed with the Doppler velocity using this technique. Modulation can be turned off on alternate scans to identify velocity using unmodulated carrier frequency shift. This allows range and velocity to be found with one radar set. Triangle wave modulation can be used to achieve the same goal.
As shown in the figure the received waveform (green) is simply a delayed replica of the transmitted waveform (red). The transmitted frequency is used to down-convert the receive signal to baseband, and the amount of frequency shift between the transmit signal and the reflected signal increases with time delay (distance). The time delay is thus a measure of the range; a small frequency spread is produced by nearby reflections, a larger frequency spread corresponds with more time delay and a longer range.
With the advent of modern electronics, digital signal processing is used for most detection processing. The beat signals are passed through an analog-to-digital converter, and digital processing is performed on the result. As explained in the literature, FM-CW ranging for a linear ramp waveform is given in the following set of equations:
formula_6
where formula_7 is the radar frequency sweep amount and formula_8 is the time to complete the frequency sweep.
Then, formula_9, rearrange to a more useful:
formula_10, where formula_11 is the round trip time of the radar energy.
It is then a trivial matter to calculate the physical one-way distance for an idealized typical case as:
formula_12
where formula_13 is the speed of light in any transparent medium of refractive index n (n=1 in vacuum and 1.0003 for air).
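Putting the ramp equations together gives a one-way distance directly from the measured beat frequency. The sweep bandwidth, sweep time and beat frequency below are illustrative placeholder values:

```python
C = 299_792_458.0  # speed of light, m/s (refractive index n = 1 assumed)

def fmcw_range(beat_hz: float, sweep_bandwidth_hz: float, sweep_time_s: float) -> float:
    """k = B / T, round-trip delay t_r = f_beat / k, one-way distance = c * t_r / 2."""
    k = sweep_bandwidth_hz / sweep_time_s
    t_r = beat_hz / k
    return C * t_r / 2.0

# A 150 MHz ramp over 1 ms with a 100 kHz beat frequency corresponds to roughly 100 m.
print(fmcw_range(100e3, 150e6, 1e-3))
```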
For practical reasons, receive samples are not processed for a brief period after the modulation ramp begins because incoming reflections will have modulation from the previous modulation cycle. This imposes a range limit and limits performance.
formula_14
Sinusoidal frequency modulation.
Sinusoidal FM is used when both range and velocity are required simultaneously for complex objects with multiple moving parts like turbine fan blades, helicopter blades, or propellers. This processing reduces the effect of complex spectra modulation produced by rotating parts that introduce errors into range measurement process.
This technique also has the advantage that the receiver never needs to stop processing incoming signals because the modulation waveform is continuous with no impulse modulation.
Sinusoidal FM is eliminated by the receiver for close in reflections because the transmit frequency will be the same as the frequency being reflected back into the receiver. The spectrum for more distant objects will contain more modulation. The amount of spectrum spreading caused by modulation riding on the receive signal is proportional to the distance to the reflecting object.
The time domain formula for FM is:
formula_15
where formula_16 (modulation index)
A time delay is introduced in transit between the radar and the reflector.
formula_17
where formula_18 time delay
The detection process down converts the receive signal using the transmit signal. This eliminates the carrier.
formula_19
formula_20
The Carson bandwidth rule can be seen in this equation, and that is a close approximation to identify the amount of spread placed on the receive spectrum:
formula_21
formula_22
Receiver demodulation is used with FMCW similar to the receiver demodulation strategy used with pulse compression. This takes place before Doppler CFAR detection processing. A large modulation index is needed for practical reasons.
Practical systems introduce reverse FM on the receive signal using digital signal processing before the fast Fourier transform process is used to produce the spectrum. This is repeated with several different demodulation values. Range is found by identifying the receive spectrum where width is minimum.
Practical systems also process receive samples for several cycles of the FM in order to reduce the influence of sampling artifacts.
Configurations.
There are two different antenna configurations used with continuous-wave radar: "monostatic radar", and "bistatic radar".
Monostatic.
The radar receive antenna is located nearby the radar transmit antenna in monostatic radar.
Feed-through null is typically required to eliminate bleed-through between the transmitter and receiver to increase sensitivity in practical systems. This is typically used with continuous-wave angle tracking (CWAT) radar receivers that are interoperable with surface-to-air missile systems.
Interrupted continuous-wave can be used to eliminate bleed-through between the transmit and receive antenna. This kind of system typically takes one sample between each pair of transmit pulses, and the sample rate is typically 30 kHz or more. This technique is used with the least expensive kinds of radar, such as those used for traffic monitoring and sports.
FM-CW radars can be built with one antenna using either a circulator, or circular polarization.
Bistatic.
The radar receive antenna is located far from the radar transmit antenna in bistatic radar. The transmitter is fairly expensive, while the receiver is fairly inexpensive and disposable.
This is typically used with semi-active radar homing including most surface-to-air missile systems. The transmit radar is typically located near the missile launcher. The receiver is located in the missile.
The transmit antenna "illuminates" the target in much the same way as a search light. The transmit antenna also issues an omnidirectional sample.
The receiver uses two antennas – one antenna aimed at the target and one antenna aimed at the transmit antenna. The receive antenna that is aimed at the transmit antenna is used to develop the feed-through null, which allows the target receiver to operate reliably in or near the main beam of the antenna.
The bistatic FM-CW receiver and transmitter pair may also take the form of an over-the-air deramping (OTAD) system. An OTAD transmitter broadcasts an FM-CW signal on two different frequency channels; one for synchronisation of the receiver with the transmitter, the other for illuminating the measurement scene. Using directive antennas, the OTAD receiver collects both signals simultaneously and mixes the synchronisation signal with the downconverted echo signal from the measurement scene in a process known as over-the-air deramping. The frequency of deramped signal is proportional to the bistatic range to the target less the baseline distance between the OTAD transmitter and the OTAD receiver.
Most modern FM-CW radar systems use one transmitter antenna and multiple receiver antennas. Because the transmitter is on continuously at effectively the same frequency as the receiver, special care must be exercised to avoid overloading the receiver stages.
Monopulse.
Monopulse antennas produce angular measurements without pulses or other modulation. This technique is used in semi-active radar homing.
Leakage.
The transmit signal will leak into the receiver on practical systems. Significant leakage will come from nearby environmental reflections even if antenna components are perfect. As much as 120 dB of leakage rejection is required to achieve acceptable performance.
Three approaches can be used to produce a practical system that will function correctly.
Null and filter approaches must be used with bistatic radar, like semi-active radar homing, for practical reasons because side-lobes from the illumination radar will illuminate the environment in addition to the main-lobe illumination on the target. Similar constraints apply to ground-based CW radar. This adds cost.
Interruption applies to cheap hand held mono-static radar systems (police radar and sporting goods). This is impractical for bistatic systems because of the cost and complexity associated with coordinating time with nanosecond precision in two different locations.
The design constraint that drives this requirement is the dynamic range limitation of practical receiver components that include band pass filters that take time to settle out.
Null.
The null approach takes two signals:
The actual transmit signal is rotated 180 degrees, attenuated, and fed into the receiver. The phase shift and attenuation are set using feedback obtained from the receiver to cancel most of the leakage. Typical improvement is on the order of 30 dB to 70 dB.
Filter.
The filter approach relies on using a very narrow band-reject filter that will eliminate low-velocity signals from nearby reflectors. The band-reject area spans 10 to 100 miles per hour, depending upon the anticipated environment. Typical improvement is on the order of 30 dB to 70 dB.
Interruption, FMICW.
While interrupted carrier systems are not considered to be CW systems, performance characteristics are sufficiently similar to group interrupted CW systems with pure CW radar because the pulse rate is high enough that range measurements cannot be done without frequency modulation (FM).
This technique turns the transmitter off for a period before receiver sampling begins. Receiver interference declines by about 8.7 dB per time constant. Leakage reduction of 120 dB requires 14 receiver bandwidth time constants between when the transmitter is turned off and receiver sampling begins.
The interruption concept is widely used, especially in long-range radar applications where the receiver sensitivity is very important. It is commonly known as "frequency modulated interrupted continuous wave", or FMICW.
Advantages.
Because of simplicity, CW radar are inexpensive to manufacture, relatively free from failure, cheap to maintain, and fully automated. Some are small enough to carry in a pocket. More sophisticated CW radar systems can reliably achieve accurate detections exceeding 100 km distance while providing missile illumination.
The FMCW ramp can be compressed, providing an extra signal-to-noise gain such that the additional power a pulse radar without FM modulation would need is not required. Combined with the fact that the system is coherent, this means that Fourier integration can be used rather than azimuth integration, providing a superior signal-to-noise ratio and a Doppler measurement.
Doppler processing allows signal integration between successive receiver samples. This means that the number of samples can be increased to extend the detection range without increasing transmit power. That technique can be used to produce inexpensive stealthy low-power radar.
CW performance is similar to Pulse-Doppler radar performance for this reason.
Limitations.
Unmodulated continuous wave radar cannot measure distance. Signal amplitude provides the only way to determine which object corresponds with which speed measurement when there is more than one moving object near the receiver, but amplitude information is not useful without range measurement to evaluate target size. Moving objects include birds flying near objects in front of the antenna. Reflections from small objects directly in front of the receiver can be overwhelmed by reflections entering antenna side-lobes from large object located to the side, above, or behind the radar, such as trees with wind blowing through the leaves, tall grass, sea surface, freight trains, busses, trucks, and aircraft.
Small radar systems that lack range modulation are only reliable when used with one object in a sterile environment free from vegetation, aircraft, birds, weather phenomenon, and other nearby vehicles.
With 20 dB antenna side-lobes, a truck or tree with 1,000 square feet of reflecting surface behind the antenna can produce a signal as strong as a car with 10 square feet of reflecting surface in front of a small hand-held antenna. An area survey is required to determine if hand-held devices will operate reliably because unobserved roadway traffic and trees behind the operator can interfere with observations made in front of the operator.
This is a typical problem with radar speed guns used by law enforcement officers, NASCAR events, and sports, like baseball, golf, and tennis. Interference from a second radar, automobile ignition, other moving objects, moving fan blades on the intended target, and other radio frequency sources will corrupt measurements. These systems are limited by wavelength, which is 0.02 meter at Ku band, so the beam spread exceeds 45 degrees if the antenna is smaller than 12 inches (0.3 meter). Significant antenna side-lobes extend in all directions unless the antenna is larger than the vehicle on which the radar is mounted.
Side-lobe suppression and FM range modulation are required for reliable operation. There is no way to know the direction of the arriving signal without side-lobe suppression, which requires two or more antennae, each with its own individual receiver. There is no way to know distance without FM range modulation.
Speed, direction, and distance are all required to pick out an individual object.
These limitations are due to the well known limitations of basic physics that cannot be overcome by design.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "f_r = f_t \\left( \\frac{1+v/c'}{1-v/c'} \\right)"
},
{
"math_id": 1,
"text": "f_d = f_r-f_t = 2v \\frac {f_t}{c'-v}"
},
{
"math_id": 2,
"text": "c', (v \\ll c')"
},
{
"math_id": 3,
"text": "c'-v \\approx c' "
},
{
"math_id": 4,
"text": "f_d \\approx 2v \\frac {f_t}{c'} "
},
{
"math_id": 5,
"text": "\\text{Instrumented Range} = F_r-F_t = \\frac {\\text{Speed of Light}}{(4 \\times \\text{Modulation Frequency})}"
},
{
"math_id": 6,
"text": "k = \\frac {\\Delta{f_{radar}}} {\\Delta{t_{radar}}}"
},
{
"math_id": 7,
"text": "\\Delta{f_{radar}}"
},
{
"math_id": 8,
"text": "\\Delta{t_{radar}}"
},
{
"math_id": 9,
"text": "\\Delta{f_{echo}} = t_rk"
},
{
"math_id": 10,
"text": "t_r = \\frac {\\Delta{f_{echo}}} {k}"
},
{
"math_id": 11,
"text": "t_r"
},
{
"math_id": 12,
"text": "\\text{dist}_{oneway} = \\frac {c' t_r}{2}"
},
{
"math_id": 13,
"text": "c'=c/n"
},
{
"math_id": 14,
"text": "\\text{Range Limit} = 0.5 \\ c' \\ t_{radar} "
},
{
"math_id": 15,
"text": " y(t) = \\cos \\left\\{ 2 \\pi [ f_{c} + \\Beta \\cos \\left( 2 \\pi f_{m} t \\right) ] t \\right\\}\\,"
},
{
"math_id": 16,
"text": "\\Beta = \\frac{f_{\\Delta}}{f_{m}}"
},
{
"math_id": 17,
"text": " y(t) = \\cos \\left\\{ 2 \\pi [ f_{c} + \\Beta \\cos \\left( 2 \\pi f_{m} (t + \\delta t) \\right) ] (t + \\delta t) \\right\\}\\,"
},
{
"math_id": 18,
"text": "\\delta t ="
},
{
"math_id": 19,
"text": " y(t) = \\cos \\left\\{ 2 \\pi [ f_{c} + \\Beta \\cos \\left( 2 \\pi f_{m} (t + \\delta t) \\right) ] (t + \\delta t) \\right\\}\\;\\cos \\left\\{ 2 \\pi [ f_{c} + \\Beta \\cos \\left( 2 \\pi f_{m} t \\right) ] t \\right\\}\\,"
},
{
"math_id": 20,
"text": " y(t) \\approx \\cos \\left\\{ -4 t \\pi \\Beta \\sin ( 2 \\pi f_{m} (2t + \\delta t) \\sin ( \\pi f_{m} \\delta t) + 2 \\delta t \\pi \\Beta \\cos (2 \\pi f_{m} ( t + \\delta t) ) \\right\\}\\,"
},
{
"math_id": 21,
"text": "\\text{Modulation Spectrum Spread} \\approx 2 (\\Beta + 1 ) f_m \\sin (\\delta t ) "
},
{
"math_id": 22,
"text": "\\text{Range} = 0.5 C / \\delta t "
}
]
| https://en.wikipedia.org/wiki?curid=1358331 |
1358431 | Surface brightness | Astronomical term for luminosity per area
In astronomy, surface brightness (SB) quantifies the apparent brightness or flux density per unit angular area of a spatially extended object such as a galaxy or nebula, or of the night sky background. An object's surface brightness depends on its surface luminosity density, i.e., its luminosity emitted per unit surface area. In visible and infrared astronomy, surface brightness is often quoted on a magnitude scale, in magnitudes per square arcsecond (MPSAS) in a particular filter band or photometric system.
Measurement of the surface brightnesses of celestial objects is called surface photometry.
General description.
The total magnitude is a measure of the brightness of an extended object such as a nebula, cluster, galaxy or comet. It can be obtained by summing up the luminosity over the area of the object. Alternatively, a photometer can be used by applying apertures or slits of different sizes of diameter. The background light is then subtracted from the measurement to obtain the total brightness. The resulting magnitude value is the same as a point-like source that is emitting the same amount of energy. The total magnitude of a comet is the combined magnitude of the coma and nucleus.
The apparent magnitude of an astronomical object is generally given as an integrated value—if a galaxy is quoted as having a magnitude of 12.5, it means we see the same total amount of light from the galaxy as we would from a star with magnitude 12.5. However, a star is so small it is effectively a point source in most observations (the largest angular diameter, that of R Doradus, is 0.057 ± 0.005 arcsec), whereas a galaxy may extend over several arcseconds or arcminutes. Therefore, the galaxy will be harder to see than the star against the airglow background light. Apparent magnitude is a good indication of visibility if the object is point-like or small, whereas surface brightness is a better indicator if the object is large. What counts as small or large depends on the specific viewing conditions and follows from Ricco's law. In general, in order to adequately assess an object's visibility one needs to know both parameters.
This is the reason the extreme naked eye limit for viewing a star is apparent magnitude 8, but only apparent magnitude 6.9 for galaxies.
Calculating surface brightness.
Surface brightnesses are usually quoted in magnitudes per square arcsecond. Because the magnitude is logarithmic, calculating surface brightness cannot be done by simple division of magnitude by area. Instead, for a source with a total or integrated magnitude "m" extending over a visual area of "A" square arcseconds, the surface brightness "S" is given by
formula_0
For astronomical objects, surface brightness is analogous to photometric luminance and is therefore constant with distance: as an object becomes fainter with distance, it also becomes correspondingly smaller in visual area. In geometrical terms, for a nearby object emitting a given amount of light, radiative flux decreases with the square of the distance to the object, but the physical area corresponding to a given solid angle or visual area (e.g. 1 square arcsecond) decreases by the same proportion, resulting in the same surface brightness. For extended objects such as nebulae or galaxies, this allows the estimation of spatial distance from surface brightness by means of the distance modulus or luminosity distance.
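A minimal Python sketch of this formula (the numbers below are purely illustrative, not taken from a catalogue):

```python
import math

def surface_brightness(m_total, area_arcsec2):
    # S = m + 2.5 * log10(A), with the visual area A in square arcseconds
    return m_total + 2.5 * math.log10(area_arcsec2)

# a hypothetical galaxy: integrated magnitude 12.5 spread over 1 square arcminute
print(surface_brightness(12.5, 3600.0))  # about 21.4 mag/arcsec^2
```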
Relationship to physical units.
The surface brightness in magnitude units is related to the surface brightness in physical units of solar luminosity per square parsec by
formula_1
where formula_2 and formula_3 are the absolute magnitude and the luminosity of the Sun in chosen color-band respectively.
Surface brightness can also be expressed in candela per square metre using the formula [value in cd/m2] = 10.8×10^4 × 10^(−0.4×[value in mag/arcsec2]).
Examples.
A truly dark sky has a surface brightness of about 2×10^−4 cd m−2 or 21.8 mag arcsec−2.
The peak surface brightness of the central region of the Orion Nebula is about 17 Mag/arcsec2 (about 14 millinits) and the outer bluish glow has a peak surface brightness of 21.3 Mag/arcsec2 (about 0.27 millinits).
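A short Python sketch of the magnitude-to-luminance conversion quoted above; the constant 10.8×10^4 is the approximate V-band value, and the printed result reproduces the dark-sky figure to within rounding:

```python
import math

K = 10.8e4  # approximate V-band conversion constant: cd/m^2 at S = 0 mag/arcsec^2

def mag_arcsec2_to_cd_m2(s):
    return K * 10 ** (-0.4 * s)

def cd_m2_to_mag_arcsec2(luminance):
    return -2.5 * math.log10(luminance / K)

dark_sky = mag_arcsec2_to_cd_m2(21.8)
print(dark_sky)                        # roughly 2e-4 cd/m^2
print(cd_m2_to_mag_arcsec2(dark_sky))  # back to 21.8 mag/arcsec^2
```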
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "S = m + 2.5 \\cdot \\log_{10} A."
},
{
"math_id": 1,
"text": "S(\\mathrm{mag/arcsec^2}) = M_{\\odot} + 21.572-2.5\\log_{10} S (L_{\\odot}/\\mathrm{pc}^2),"
},
{
"math_id": 2,
"text": "M_{\\odot}"
},
{
"math_id": 3,
"text": " L_{\\odot} "
}
]
| https://en.wikipedia.org/wiki?curid=1358431 |
1358592 | Fibered knot | Mathematical knot
In knot theory, a branch of mathematics, a knot or link formula_0
in the 3-dimensional sphere formula_1 is called fibered or fibred (sometimes Neuwirth knot in older texts, after Lee Neuwirth) if there is a 1-parameter family formula_2 of Seifert surfaces for formula_0, where the parameter formula_3 runs through the points of the unit circle formula_4, such that if formula_5 is not equal to formula_3
then the intersection of formula_6 and formula_2 is exactly formula_0.
Examples.
Knots that are fibered.
For example: the unknot, the trefoil knot and the figure-eight knot are fibered knots, and more generally all torus knots are fibered; the Hopf link is a fibered link.
Knots that are not fibered.
The Alexander polynomial of a fibered knot is monic, i.e. the coefficients of the highest and lowest powers of "t" are plus or minus 1. Examples of knots with nonmonic Alexander polynomials abound, for example the twist knots have Alexander polynomials formula_7, where "q" is the number of half-twists. In particular the stevedore knot is not fibered.
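The monic condition is only necessary, but it already rules out most twist knots. A small sympy sketch, using the twist-knot polynomial quoted above (the stevedore knot corresponds to "q" = 2 here):

```python
import sympy as sp

t = sp.symbols('t')

def twist_alexander(q):
    # Alexander polynomial q*t - (2q+1) + q*t^(-1) of a twist knot with q half-twists
    return q * t - (2 * q + 1) + q / t

for q in range(1, 5):
    coeffs = sp.Poly(sp.expand(twist_alexander(q) * t), t).all_coeffs()  # clear the t^(-1)
    monic = abs(coeffs[0]) == 1 and abs(coeffs[-1]) == 1
    print(q, coeffs, "monic (necessary condition holds)" if monic else "not monic, so not fibered")
```

Only "q" = 1 gives a monic polynomial; every larger "q", including the value corresponding to the stevedore knot, fails the test.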
Related constructions.
Fibered knots and links arise naturally, but not exclusively, in complex algebraic geometry. For instance, each singular point of a complex plane curve can be described
topologically as the cone on a fibered knot or link called the link of the singularity. The trefoil knot is the link of the cusp singularity formula_8; the Hopf link (oriented correctly) is the link of the node singularity formula_9. In these cases, the family of Seifert surfaces is an aspect of the Milnor fibration of the singularity.
A knot is fibered if and only if it is the binding of some open book decomposition of formula_1.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "K"
},
{
"math_id": 1,
"text": "S^3"
},
{
"math_id": 2,
"text": "F_t"
},
{
"math_id": 3,
"text": "t"
},
{
"math_id": 4,
"text": "S^1"
},
{
"math_id": 5,
"text": "s"
},
{
"math_id": 6,
"text": "F_s"
},
{
"math_id": 7,
"text": "qt-(2q+1)+qt^{-1}"
},
{
"math_id": 8,
"text": "z^2+w^3"
},
{
"math_id": 9,
"text": "z^2+w^2"
}
]
| https://en.wikipedia.org/wiki?curid=1358592 |
13587617 | Cameron–Erdős conjecture | Theorem in combinatorics
In combinatorics, the Cameron–Erdős conjecture (now a theorem) is the statement that the number of sum-free sets contained in formula_0 is formula_1
The sum of two odd numbers is even, so a set of odd numbers is always sum-free. There are formula_2 odd numbers in ["N"], and so formula_3 subsets of odd numbers in ["N"]. The Cameron–Erdős conjecture says that this counts a constant proportion of the sum-free sets.
The conjecture was stated by Peter Cameron and Paul Erdős in 1988. It was proved by Ben Green and independently by Alexander Sapozhenko in 2003.
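A brute-force Python check for small "N" (it enumerates every subset, so it is only feasible for small "N"): the count of sum-free sets divided by 2^(N/2) stays of modest size, consistent with the theorem.

```python
from itertools import combinations

def is_sum_free(subset):
    s = set(subset)
    return all(a + b not in s for a in s for b in s)

def count_sum_free(n):
    universe = range(1, n + 1)
    return sum(1 for r in range(n + 1)
                 for subset in combinations(universe, r)
                 if is_sum_free(subset))

for n in range(1, 15):
    c = count_sum_free(n)
    print(n, c, round(c / 2 ** (n / 2), 2))
```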
Notes.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "[N] = \\{1,\\ldots,N\\}"
},
{
"math_id": 1,
"text": "O\\big({2^{N/2}}\\big)."
},
{
"math_id": 2,
"text": "\\lceil N/2\\rceil"
},
{
"math_id": 3,
"text": "2^{N/2}"
}
]
| https://en.wikipedia.org/wiki?curid=13587617 |
13588803 | Infinite divisibility (probability) | In probability theory, a probability distribution is infinitely divisible if it can be expressed as the probability distribution of the sum of an arbitrary number of independent and identically distributed (i.i.d.) random variables. The characteristic function of any infinitely divisible distribution is then called an infinitely divisible characteristic function.
More rigorously, the probability distribution "F" is infinitely divisible if, for every positive integer "n", there exist "n" i.i.d. random variables "X""n"1, ..., "X""nn" whose sum "S""n" = "X""n"1 + ... + "X""nn" has the same distribution "F".
The concept of infinite divisibility of probability distributions was introduced in 1929 by Bruno de Finetti. This type of decomposition of a distribution is used in probability and statistics to find families of probability distributions that might be natural choices for certain models or applications. Infinitely divisible distributions play an important role in probability theory in the context of limit theorems.
Examples.
Examples of continuous distributions that are infinitely divisible are the normal distribution, the Cauchy distribution, the Lévy distribution, and all other members of the stable distribution family, as well as the Gamma distribution, the chi-square distribution, the Wald distribution, the Log-normal distribution and the Student's t-distribution.
Among the discrete distributions, examples are the Poisson distribution and the negative binomial distribution (and hence the geometric distribution also). The one-point distribution whose only possible outcome is 0 is also (trivially) infinitely divisible.
The uniform distribution and the binomial distribution are "not" infinitely divisible, nor are any other distributions with bounded support (≈ finite-sized domain), other than the one-point distribution mentioned above. The distribution of the reciprocal of a random variable having a Student's t-distribution is also not infinitely divisible.
Any compound Poisson distribution is infinitely divisible; this follows immediately from the definition.
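A small simulation (a sketch, not part of the formal theory) makes the definition concrete for the Poisson case: a Poisson(λ) variable has the same law as the sum of "n" i.i.d. Poisson(λ/"n") variables.

```python
import numpy as np

rng = np.random.default_rng(0)
n_samples, n, lam = 200_000, 10, 3.0

direct = rng.poisson(lam, size=n_samples)                       # X ~ Poisson(lam)
as_sum = rng.poisson(lam / n, size=(n_samples, n)).sum(axis=1)  # sum of n i.i.d. Poisson(lam/n)

for k in range(8):  # the empirical probabilities of the two samples agree closely
    print(k, round((direct == k).mean(), 4), round((as_sum == k).mean(), 4))
```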
Limit theorem.
Infinitely divisible distributions appear in a broad generalization of the central limit theorem: the limit as "n" → +∞ of the sum "S""n" = "X""n"1 + ... + "X""nn" of independent uniformly asymptotically negligible (u.a.n.) random variables within a triangular array
formula_0
approaches — in the weak sense — an infinitely divisible distribution. The uniformly asymptotically negligible (u.a.n.) condition is given by
formula_1
Thus, for example, if the uniform asymptotic negligibility (u.a.n.) condition is satisfied via an appropriate scaling of identically distributed random variables with finite variance, the weak convergence is to the normal distribution in the classical version of the central limit theorem. More generally, if the u.a.n. condition is satisfied via a scaling of identically distributed random variables (with not necessarily finite second moment), then the weak convergence is to a stable distribution. On the other hand, for a triangular array of independent (unscaled) Bernoulli random variables where the u.a.n. condition is satisfied through
formula_2
the weak convergence of the sum is to the Poisson distribution with mean "λ" as shown by the familiar proof of the law of small numbers.
Lévy process.
Every infinitely divisible probability distribution corresponds in a natural way to a Lévy process. A Lévy process is a stochastic process { "Lt" : "t" ≥ 0 } with stationary independent increments, where "stationary" means that for "s" < "t", the probability distribution of "L""t" − "L""s" depends only on "t" − "s" and where "independent increments" means that the difference "L""t" − "L""s" is independent of the corresponding difference on any interval not overlapping with ["s", "t"], and similarly for any finite number of mutually non-overlapping intervals.
If { "Lt" : "t" ≥ 0 } is a Lévy process then, for any "t" ≥ 0, the random variable "L""t" will be infinitely divisible: for any "n", we can choose ("X""n"1, "X""n"2, ..., "X""nn") = ("L""t"/"n" − "L"0, "L"2"t"/"n" − "L""t"/"n", ..., "L""t" − "L"("n"−1)"t"/"n"). Similarly, "L""t" − "L""s" is infinitely divisible for any "s" < "t".
On the other hand, if "F" is an infinitely divisible distribution, we can construct a Lévy process { "Lt" : "t" ≥ 0 } from it. For any interval ["s", "t"] where "t" − "s" > 0 equals a rational number "p"/"q", we can define "L""t" − "L""s" to have the same distribution as "X""q"1 + "X""q"2 + ... + "X""qp". Irrational values of "t" − "s" > 0 are handled via a continuity argument.
Additive process.
An additive process formula_3 (a cadlag, continuous in probability stochastic process with independent increments) has an infinitely divisible distribution for any formula_4. Let formula_5 be its family of infinitely divisible distributions.
formula_5 satisfies a number of conditions of continuity and monotonicity. Moreover, if a family of infinitely divisible distributions formula_5 satisfies these continuity and monotonicity conditions, there exists (uniquely in law) an additive process with this family of distributions.
Footnotes.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\n\\begin{array}{cccc}\nX_{11} \\\\\nX_{21} & X_{22} \\\\\nX_{31} & X_{32} & X_{33} \\\\\n\\vdots & \\vdots & \\vdots & \\ddots\n\\end{array}\n"
},
{
"math_id": 1,
"text": "\\lim_{n\\to\\infty} \\, \\max_{1 \\le k \\le n} \\; P( \\left| X_{nk} \\right| > \\varepsilon ) = 0 \\text{ for every }\\varepsilon > 0."
},
{
"math_id": 2,
"text": "\\lim_{n\\rightarrow\\infty} np_n = \\lambda,"
},
{
"math_id": 3,
"text": "\\{X_t\\}_{t \\geq 0}"
},
{
"math_id": 4,
"text": "t\\geq 0"
},
{
"math_id": 5,
"text": "\\{\\mu_t\\}_{t\\geq0}"
}
]
| https://en.wikipedia.org/wiki?curid=13588803 |
1358940 | Monte Carlo methods in finance | Probabilistic measurement methods
Monte Carlo methods are used in corporate finance and mathematical finance to value and analyze (complex) instruments, portfolios and investments by simulating the various sources of uncertainty affecting their value, and then determining the distribution of their value over the range of resultant outcomes. This is usually done by help of stochastic asset models. The advantage of Monte Carlo methods over other techniques increases as the dimensions (sources of uncertainty) of the problem increase.
Monte Carlo methods were first introduced to finance in 1964 by David B. Hertz through his "Harvard Business Review" article, discussing their application in Corporate Finance. In 1977, Phelim Boyle pioneered the use of simulation in derivative valuation in his seminal "Journal of Financial Economics" paper.
This article discusses typical financial problems in which Monte Carlo methods are used. It also touches on the use of so-called "quasi-random" methods such as the use of Sobol sequences.
Overview.
The Monte Carlo method encompasses any technique of statistical sampling employed to approximate solutions to quantitative problems. Essentially, the Monte Carlo method solves a problem by directly simulating the underlying (physical) process and then calculating the (average) result of the process. This very general approach is valid in areas such as physics, chemistry, computer science etc.
In finance, the Monte Carlo method is used to simulate the various sources of uncertainty that affect the value of the instrument, portfolio or investment in question, and to then calculate a representative value given these possible values of the underlying inputs. ("Covering all conceivable real world contingencies in proportion to their likelihood.") In terms of financial theory, this, essentially, is an application of risk neutral valuation; see also risk neutrality.
Applications include the valuation of projects in corporate finance, the valuation of options and other derivatives, portfolio evaluation and risk management, and personal financial planning.
Although Monte Carlo methods provide flexibility, and can handle multiple sources of uncertainty, the use of these techniques is nevertheless not always appropriate. In general, simulation methods are preferred to other valuation techniques only when there are several state variables (i.e. several sources of uncertainty). These techniques are also of limited use in valuing American style derivatives. See below.
Applicability.
Level of complexity.
Many problems in mathematical finance entail the computation of a particular integral (for instance the problem of finding the arbitrage-free value of a particular derivative). In many cases these integrals can be valued analytically, and in still more cases they can be valued using numerical integration, or computed using a partial differential equation (PDE). However, when the number of dimensions (or degrees of freedom) in the problem is large, PDEs and numerical integrals become intractable, and in these cases Monte Carlo methods often give better results.
For more than three or four state variables, formulae such as Black–Scholes (i.e. analytic solutions) do not exist, while other numerical methods such as the Binomial options pricing model and finite difference methods face several difficulties and are not practical. In these cases, Monte Carlo methods converge to the solution more quickly than numerical methods, require less memory and are easier to program. For simpler situations, however, simulation is not the better solution because it is very time-consuming and computationally intensive.
Monte Carlo methods can deal with derivatives which have path dependent payoffs in a fairly straightforward manner. On the other hand, Finite Difference (PDE) solvers struggle with path dependence.
American options.
Monte-Carlo methods are harder to use with American options. This is because, in contrast to a partial differential equation, the Monte Carlo method really only estimates the option value assuming a given starting point and time.
However, for early exercise, we would also need to know the option value at the intermediate times between the simulation start time and the option expiry time. In the Black–Scholes PDE approach these prices are easily obtained, because the simulation runs backwards from the expiry date. In Monte-Carlo this information is harder to obtain, but it can be done for example using the least squares algorithm of Carriere (see link to original paper) which was made popular a few years later by Longstaff and Schwartz (see link to original paper).
Monte Carlo methods.
Mathematically.
The fundamental theorem of arbitrage-free pricing states that the value of a derivative is equal to the discounted expected value of the derivative payoff where the expectation is taken under the risk-neutral measure [1]. An expectation is, in the language of pure mathematics, simply an integral with respect to the measure. Monte Carlo methods are ideally suited to evaluating difficult integrals (see also Monte Carlo method).
Thus suppose that our risk-neutral probability space is formula_0 and that we have a derivative "H" that depends on a set of underlying instruments formula_1. Then, given a sample formula_2 from the probability space, the value of the derivative is formula_3. Today's value of the derivative is found by taking the expectation over all possible samples and discounting at the risk-free rate. I.e. the derivative has value:
formula_4
where formula_5 is the discount factor corresponding to the risk-free rate to the final maturity date "T" years into the future.
Now suppose the integral is hard to compute. We can approximate the integral by generating sample paths and then taking an average. Suppose we generate N samples then
formula_6
which is much easier to compute.
Sample paths for standard models.
In finance, underlying random variables (such as an underlying stock price) are usually assumed to follow a path that is a function of a Brownian motion. For example, in the standard Black–Scholes model, the stock price evolves as
formula_7
To sample a path following this distribution from time 0 to T, we chop the time interval into M units of length formula_8, and approximate the Brownian motion over the interval formula_9 by a single normal variable of mean 0 and variance formula_8. This leads to a sample path of
formula_10
for each "k" between 1 and "M". Here each formula_11 is a draw from a standard normal distribution.
Let us suppose that a derivative "H" pays the average value of "S" between 0 and "T". Then a sample path formula_2 corresponds to a set formula_12 and
formula_13
We obtain the Monte-Carlo value of this derivative by generating "N" lots of "M" normal variables, creating "N" sample paths and so "N" values of "H", and then taking the average.
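A compact Python sketch of this procedure for the average-value payoff described above, using the risk-neutral drift μ = "r" (the parameter values are purely illustrative):

```python
import numpy as np

def mc_average_value(S0, r, sigma, T, M, N, seed=0):
    """Monte Carlo value of a claim paying the average of S over M dates in (0, T]."""
    rng = np.random.default_rng(seed)
    dt = T / M
    eps = rng.standard_normal((N, M))                        # N paths of M normal draws
    log_increments = (r - 0.5 * sigma**2) * dt + sigma * np.sqrt(dt) * eps
    paths = S0 * np.exp(np.cumsum(log_increments, axis=1))   # S at dt, 2*dt, ..., T
    payoffs = paths.mean(axis=1)                             # H(omega) = average of S along the path
    return np.exp(-r * T) * payoffs.mean()                   # discounted sample average

print(mc_average_value(S0=100.0, r=0.05, sigma=0.2, T=1.0, M=50, N=100_000))
```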
Commonly the derivative will depend on two or more (possibly correlated) underlyings. The method here can be extended to generate sample paths of several variables, where the normal variables building up the sample paths are appropriately correlated.
It follows from the central limit theorem that quadrupling the number of sample paths approximately halves the error in the simulated price (i.e. the error has order formula_14 convergence in the sense of standard deviation of the solution).
In practice Monte Carlo methods are used for European-style derivatives involving at least three variables (more direct methods involving numerical integration can usually be used for those problems with only one or two underlyings). "See" Monte Carlo option model.
Greeks.
Estimates for the "Greeks" of an option i.e. the (mathematical) derivatives of option value with respect to input parameters, can be obtained by numerical differentiation. This can be a time-consuming process (an entire Monte Carlo run must be performed for each "bump" or small change in input parameters). Further, taking numerical derivatives tends to emphasize the error (or noise) in the Monte Carlo value – making it necessary to simulate with a large number of sample paths. Practitioners regard these points as a key problem with using Monte Carlo methods.
Variance reduction.
Square root convergence is slow, and so using the naive approach described above requires using a very large number of sample paths (1 million, say, for a typical problem) in order to obtain an accurate result. Remember that an estimator for the price of a derivative is a random variable, and in the framework of a risk-management activity, uncertainty on the price of a portfolio of derivatives and/or on its risks can lead to suboptimal risk-management decisions.
This state of affairs can be mitigated by variance reduction techniques.
Antithetic paths.
A simple technique is, for every sample path obtained, to take its antithetic path — that is, given a path formula_12 to also take formula_15. Since the variables formula_11 and formula_16 form an antithetic pair, a large value of one is accompanied by a small value of the other. This suggests that an unusually large or small output computed from the first path may be balanced by the value computed from the antithetic path, resulting in a reduction in variance. Not only does this reduce the number of normal samples to be taken to generate "N" paths, but also, under conditions such as negative correlation between the two estimates, it reduces the variance of the sample paths, improving the accuracy.
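A minimal Python illustration: the same normal draws are re-used with their signs flipped, and the paired estimator shows a visibly smaller standard error for this call payoff (for a strictly fair comparison the plain estimator would be given twice as many draws):

```python
import numpy as np

def discounted_call_payoff(Z, S0=100.0, K=100.0, r=0.05, sigma=0.2, T=1.0):
    ST = S0 * np.exp((r - 0.5 * sigma**2) * T + sigma * np.sqrt(T) * Z)
    return np.exp(-r * T) * np.maximum(ST - K, 0.0)

rng = np.random.default_rng(0)
Z = rng.standard_normal(100_000)

plain = discounted_call_payoff(Z)
antithetic = 0.5 * (discounted_call_payoff(Z) + discounted_call_payoff(-Z))  # pair each path with its mirror

print(plain.mean(), plain.std(ddof=1) / np.sqrt(plain.size))
print(antithetic.mean(), antithetic.std(ddof=1) / np.sqrt(antithetic.size))
```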
Control variate method.
It is also natural to use a control variate. Let us suppose that we wish to obtain the Monte Carlo value of a derivative "H", but know the value analytically of a similar derivative "I". Then "H"* = (Value of "H" according to Monte Carlo) + B*[(Value of "I" analytically) − (Value of "I" according to same Monte Carlo paths)] is a better estimate, where the variance-minimising coefficient B is covar("H","I")/var("I").
The intuition behind that technique, when applied to derivatives, is the following: note that the source of the variance of a derivative will be directly dependent on the risks (e.g. delta, vega) of this derivative. This is because any error on, say, the estimator for the forward value of an underlier, will generate a corresponding error depending on the delta of the derivative with respect to this forward value. The simplest example to demonstrate this consists in comparing the error when pricing an at-the-money call and an at-the-money straddle (i.e. call+put), which has a much lower delta.
Therefore, a standard way of choosing the derivative "I" consists in choosing a replicating portfolio of options for "H". In practice, one will price "H" without variance reduction, calculate deltas and vegas, and then use a combination of calls and puts that have the same deltas and vegas as control variate.
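A simple sketch with an artificial but convenient control: the discounted terminal stock price, whose expectation under the risk-neutral measure is known to be S0. The coefficient B is estimated from the same paths as cov("H","I")/var("I"):

```python
import numpy as np

rng = np.random.default_rng(0)
S0, K, r, sigma, T, N = 100.0, 100.0, 0.05, 0.2, 1.0, 100_000

Z = rng.standard_normal(N)
ST = S0 * np.exp((r - 0.5 * sigma**2) * T + sigma * np.sqrt(T) * Z)

H = np.exp(-r * T) * np.maximum(ST - K, 0.0)  # derivative to be priced
I = np.exp(-r * T) * ST                       # control variate with known mean E[I] = S0

B = np.cov(H, I)[0, 1] / np.var(I, ddof=1)    # variance-minimising coefficient
H_cv = H + B * (S0 - I)                       # control-variate adjusted estimator

print(H.mean(), H.std(ddof=1) / np.sqrt(N))        # plain estimate and its standard error
print(H_cv.mean(), H_cv.std(ddof=1) / np.sqrt(N))  # adjusted estimate, smaller standard error
```

In practice, as described above, the control would instead be a replicating portfolio of analytically priced options chosen to match the deltas and vegas of "H".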
Importance sampling.
Importance sampling consists of simulating the Monte Carlo paths using a different probability distribution (also known as a change of measure) that will give more likelihood for the simulated underlier to be located in the area where the derivative's payoff has the most convexity (for example, close to the strike in the case of a simple option). The simulated payoffs are then not simply averaged as in the case of a simple Monte Carlo, but are first multiplied by the likelihood ratio between the modified probability distribution and the original one (which is obtained by analytical formulas specific for the probability distribution). This will ensure that paths whose probability have been arbitrarily enhanced by the change of probability distribution are weighted with a low weight (this is how the variance gets reduced).
This technique can be particularly useful when calculating risks on a derivative. When calculating the delta using a Monte Carlo method, the most straightforward way is the "black-box" technique consisting in doing a Monte Carlo on the original market data and another one on the changed market data, and calculate the risk by doing the difference. Instead, the importance sampling method consists in doing a Monte Carlo in an arbitrary reference market data (ideally one in which the variance is as low as possible), and calculate the prices using the weight-changing technique described above. This results in a risk that will be much more stable than the one obtained through the "black-box" approach.
Quasi-random (low-discrepancy) methods.
Instead of generating sample paths randomly, it is possible to systematically (and in fact completely deterministically, despite the "quasi-random" in the name) select points in a probability space so as to optimally "fill up" the space. The selection of points is a low-discrepancy sequence such as a Sobol sequence. Taking averages of derivative payoffs at points in a low-discrepancy sequence is often more efficient than taking averages of payoffs at random points.
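A hedged Python sketch using the Sobol’ generator in scipy.stats.qmc (available in SciPy 1.7 and later); both estimators target the same one-dimensional integral, the Black–Scholes call value of roughly 10.45 for these parameters, and the quasi-random estimate is typically the closer of the two:

```python
import numpy as np
from scipy.stats import norm, qmc

S0, K, r, sigma, T = 100.0, 100.0, 0.05, 0.2, 1.0

def price_from_uniforms(U):
    Z = norm.ppf(U)  # map uniforms on (0, 1) to standard normals
    ST = S0 * np.exp((r - 0.5 * sigma**2) * T + sigma * np.sqrt(T) * Z)
    return np.exp(-r * T) * np.maximum(ST - K, 0.0).mean()

n = 2 ** 14  # a power of two, as Sobol' points prefer
u_pseudo = np.random.default_rng(0).random(n)
u_sobol = qmc.Sobol(d=1, scramble=True, seed=0).random(n).ravel()

print(price_from_uniforms(u_pseudo))  # plain Monte Carlo
print(price_from_uniforms(u_sobol))   # quasi-Monte Carlo
```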
External links.
General
Derivative valuation
Corporate Finance
Personal finance | [
{
"math_id": 0,
"text": "\\mathbb{P}"
},
{
"math_id": 1,
"text": "S_1,...,S_n"
},
{
"math_id": 2,
"text": "\\omega"
},
{
"math_id": 3,
"text": "H( S_1(\\omega),S_2(\\omega),\\dots, S_n(\\omega)) =: H(\\omega) "
},
{
"math_id": 4,
"text": " H_0 = {DF}_T \\int_\\omega H(\\omega)\\, d\\mathbb{P}(\\omega) "
},
{
"math_id": 5,
"text": "{DF}_T"
},
{
"math_id": 6,
"text": " H_0 \\approx {DF}_T \\frac{1}{N} \\sum_{\\omega\\in \\text{sample set}} H(\\omega)"
},
{
"math_id": 7,
"text": " dS = \\mu S \\,dt + \\sigma S \\,dW_t. "
},
{
"math_id": 8,
"text": "\\delta t"
},
{
"math_id": 9,
"text": "dt"
},
{
"math_id": 10,
"text": " S( k\\delta t) = S(0) \\exp\\left( \\sum_{i=1}^{k} \\left[\\left(\\mu - \\frac{\\sigma^2}{2}\\right)\\delta t + \\sigma\\varepsilon_i\\sqrt{\\delta t}\\right] \\right)"
},
{
"math_id": 11,
"text": "\\varepsilon_i"
},
{
"math_id": 12,
"text": "\\{\\varepsilon_1,\\dots,\\varepsilon_M\\}"
},
{
"math_id": 13,
"text": " H(\\omega) = \\frac1{M} \\sum_{k=1}^{M} S( k \\delta t)."
},
{
"math_id": 14,
"text": "\\epsilon=\\mathcal{O}\\left(N^{-1/2}\\right)"
},
{
"math_id": 15,
"text": "\\{-\\varepsilon_1,\\dots,-\\varepsilon_M\\}"
},
{
"math_id": 16,
"text": "-\\varepsilon_i"
}
]
| https://en.wikipedia.org/wiki?curid=1358940 |
1358959 | Sobol sequence | Type of sequence in numerical analysis
Sobol’ sequences (also called LPτ sequences or ("t", "s") sequences in base 2) are an example of quasi-random low-discrepancy sequences. They were first introduced by the Russian mathematician Ilya M. Sobol’ (Илья Меерович Соболь) in 1967.
These sequences use a base of two to form successively finer uniform partitions of the unit interval and then reorder the coordinates in each dimension.
Good distributions in the "s"-dimensional unit hypercube.
Let "Is" = [0,1]"s" be the "s"-dimensional unit hypercube, and "f" a real integrable function over "Is". The original motivation of Sobol’ was to construct a sequence "xn" in "Is" so that
formula_0
and the convergence be as fast as possible.
It is more or less clear that for the sum to converge towards the integral, the points "xn" should fill "Is" minimizing the holes. Another good property would be that the projections of "xn" on a lower-dimensional face of "Is" leave very few holes as well. Hence the homogeneous filling of "Is" does not qualify because in lower dimensions many points will be at the same place, therefore useless for the integral estimation.
These good distributions are called ("t","m","s")-nets and ("t","s")-sequences in base "b". To introduce them, define first an elementary "s"-interval in base "b" a subset of "Is" of the form
formula_1
where "aj" and "dj" are non-negative integers, and formula_2 for all "j" in {1, ...,s}.
Given two integers formula_3, a ("t","m","s")-net in base "b" is a sequence "xn" of "b"^"m" points of "Is" such that formula_4 for every elementary interval "P" in base "b" of hypervolume "λ"("P") = "b"^("t"−"m").
Given a non-negative integer "t", a ("t","s")-sequence in base "b" is an infinite sequence of points "xn" such that for all integers formula_5, the sequence formula_6 is a ("t","m","s")-net in base "b".
In his article, Sobol’ described "Πτ-meshes" and "LPτ sequences", which are ("t","m","s")-nets and ("t","s")-sequences in base 2 respectively. The terms ("t","m","s")-nets and ("t","s")-sequences in base "b" (also called Niederreiter sequences) were coined in 1988 by Harald Niederreiter. The term "Sobol’ sequences" was introduced in later English-language papers, in comparison with Halton, Faure and other low-discrepancy sequences.
A fast algorithm.
A more efficient Gray code implementation was proposed by Antonov and Saleev.
As for the generation of Sobol’ numbers, they are clearly aided by the use of Gray code formula_7 instead of "n" for constructing the "n"-th point draw.
Suppose we have already generated all the Sobol’ sequence draws up to "n" − 1 and kept in memory the values "x""n"−1,"j" for all the required dimensions. Since the Gray code "G"("n") differs from that of the preceding one "G"("n" − 1) in just a single bit, say the "k"-th (which is the rightmost zero bit of "n" − 1), all that needs to be done is a single XOR operation for each dimension in order to propagate all of the "x""n"−1 to "x""n", i.e.
formula_8
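A minimal Python sketch of this update for the first (one-dimensional) Sobol’ sequence only, where the direction numbers are simply "v""k" = 2−"k"; a real implementation derives the direction numbers of every further dimension from primitive polynomials and initialisation numbers:

```python
def sobol_1d(n, bits=32):
    """First n points of the one-dimensional Sobol' sequence via the Gray-code update."""
    v = [1 << (bits - k) for k in range(1, bits + 1)]  # direction numbers v_k = 2^(-k), stored as integers
    x, points = 0, []
    for i in range(1, n + 1):
        k, m = 1, i - 1
        while m & 1:          # k = position of the rightmost zero bit of i - 1
            m >>= 1
            k += 1
        x ^= v[k - 1]         # a single XOR per point
        points.append(x / 2.0 ** bits)
    return points

print(sobol_1d(8))  # [0.5, 0.75, 0.25, 0.375, 0.875, 0.625, 0.125, 0.1875]
```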
Additional uniformity properties.
Sobol’ introduced additional uniformity conditions known as property A and A’.
There are mathematical conditions that guarantee properties A and A'. A Sobol’ sequence in "d" dimensions possesses Property A if and only if
formula_9
where V"d" is the "d" × "d" binary matrix defined by
formula_10
with "v""k","j","m" denoting the "m"-th digit after the binary point of the direction number "v""k","j" = (0."v""k","j",1"v""k","j",2...)2.
formula_11
where U"d" is the 2"d" × 2"d" binary matrix defined by
formula_12
with "v""k","j","m" denoting the "m"-th digit after the binary point of the direction number "v""k","j" = (0."v""k","j",1"v""k","j",2...)2.
Tests for properties A and A’ are independent. Thus it is possible to construct the Sobol’ sequence that satisfies both properties A and A’ or only one of them.
The initialisation of Sobol’ numbers.
To construct a Sobol’ sequence, a set of direction numbers "v""i","j" needs to be selected. There is some freedom in the selection of initial direction numbers. Therefore, it is possible to receive different realisations of the Sobol’ sequence for selected dimensions. A bad selection of initial numbers can considerably reduce the efficiency of Sobol’ sequences when used for computation.
Arguably the easiest choice for the initialisation numbers is just to have the "l"-th leftmost bit set, and all other bits to be zero, i.e. "m""k","j" = 1 for all "k" and "j". This initialisation is usually called "unit initialisation". However, such a sequence fails the test for Property A and A’ even for low dimensions and hence this initialisation is bad.
Implementation and availability.
Good initialisation numbers for different numbers of dimensions are provided by several authors. For example, Sobol’ provides initialisation numbers for dimensions up to 51. The same set of initialisation numbers is used by Bratley and Fox.
Initialisation numbers for high dimensions are available on Joe and Kuo. Peter Jäckel provides initialisation numbers up to dimension 32 in his book "Monte Carlo methods in finance".
Other implementations are available as C, Fortran 77, or Fortran 90 routines in the Numerical Recipes collection of software. A free/open-source implementation in up to 1111 dimensions, based on the Joe and Kuo initialisation numbers, is available in C, and up to 21201 dimensions in Python and Julia. A different free/open-source implementation in up to 1111 dimensions is available for C++, Fortran 90, Matlab, and Python.
Commercial Sobol’ sequence generators are available within, for example, the NAG Library. BRODA Ltd. provides Sobol' and scrambled Sobol' sequence generators with the additional uniformity properties A and A' up to a maximum dimension of 131072. These generators were co-developed with Prof. I. Sobol'. MATLAB contains Sobol' sequence generators up to dimension 1111 as part of its Statistics Toolbox.
{
"math_id": 0,
"text": " \\lim_{n\\to\\infty} \\frac{1}{n} \\sum_{i=1}^n f(x_i) = \\int_{I^s} f "
},
{
"math_id": 1,
"text": " \\prod_{j=1}^s \\left[ \\frac{a_j}{b^{d_j}}, \\frac{a_j+1}{b^{d_j}} \\right], "
},
{
"math_id": 2,
"text": " a_j < b^{d_j} "
},
{
"math_id": 3,
"text": "0\\leq t\\leq m"
},
{
"math_id": 4,
"text": "\\operatorname{Card} P \\cap \\{x_1, ..., x_{b^m}\\} = b^t"
},
{
"math_id": 5,
"text": "k \\geq 0, m \\geq t"
},
{
"math_id": 6,
"text": "\\{x_{kb^m}, ..., x_{(k+1)b^m-1}\\}"
},
{
"math_id": 7,
"text": "G(n)=n \\oplus \\lfloor n/2 \\rfloor"
},
{
"math_id": 8,
"text": "\nx_{n,i} = x_{n-1,i} \\oplus v_{k,i}.\n"
},
{
"math_id": 9,
"text": "\n\\det(\\mathbf{V}_d) \\equiv 1 (\\mod 2),\n"
},
{
"math_id": 10,
"text": "\n\\mathbf{V}_d := \\begin{pmatrix}\n{v_{1,1,1}}&{v_{2,1,1}}&{\\dots}&{v_{d,1,1}}\\\\ \n{v_{1,2,1}}&{v_{2,2,1}}&{\\dots}&{v_{d,2,1}}\\\\ \n{\\vdots}&{\\vdots}&{\\ddots}&{\\vdots}\\\\ \n{v_{1,d,1}}&{v_{2,d,1}}&{\\dots}&{v_{d,d,1}}\n\\end{pmatrix},\n"
},
{
"math_id": 11,
"text": "\n\\det(\\mathbf{U}_d) \\equiv 1 \\mod 2,\n"
},
{
"math_id": 12,
"text": "\n\\mathbf{U}_d := \\begin{pmatrix}\n{v_{1,1,1}}&{v_{1,1,2}}&{v_{2,1,1}}&{v_{2,1,2}}&{\\dots}&{v_{d,1,1}}&{v_{d,1,2}}\\\\ \n{v_{1,2,1}}&{v_{1,2,2}}&{v_{2,2,1}}&{v_{2,2,2}}&{\\dots}&{v_{d,2,1}}&{v_{d,2,2}}\\\\ \n{\\vdots}&{\\vdots}&{\\vdots}&{\\vdots}&{\\ddots}&{\\vdots}&{\\vdots}\\\\ \n{v_{1,2d,1}}&{v_{1,2d,2}}&{v_{2,2d,1}}&{v_{2,2d,2}}&{\\dots}&{v_{d,2d,1}}&{v_{d,2d,2}}\n\\end{pmatrix},\n"
}
]
| https://en.wikipedia.org/wiki?curid=1358959 |
13589753 | Takebe Kenkō | Japanese mathematician and cartographer
, also known as Takebe Kenkō, was a Japanese mathematician and cartographer during the Edo period.
Biography.
Takebe was the favorite student of the Japanese mathematician Seki Takakazu. Takebe is considered to have extended and disseminated Seki's work.
In 1706, Takebe was offered a position in the Tokugawa shogunate's department of ceremonies.
In 1719, Takebe's new map of Japan was completed; and the work was highly valued for its quality and detail.
"Shōgun" Yoshimune honored Takebe with rank and successively better positions in the shogunate.
Legacy.
Takebe played a critical role in the development of the Enri ("circle principle"), a crude analogue of the Western calculus. He also created charts for the values of trigonometric functions.
He obtained the power series expansion of formula_0 in 1722, 15 years earlier than Euler.
This was the first power series expansion obtained in Wasan. The result was first conjectured through heavy numerical computation.
He used Richardson extrapolation in 1695, about 200 years earlier than Richardson.
He also computed 41 digits of formula_1, based on polygon approximation and Richardson extrapolation.
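In modern notation, the expansion can be written as (arcsin "x")^2 = (1/2) Σ"n"≥1 (2"x")^(2"n") / ("n"^2 C(2"n","n")); the historical notation differed, but a short numerical check in Python confirms the series:

```python
from math import asin, comb

def takebe_series(x, terms=20):
    # partial sum of (arcsin x)^2 = (1/2) * sum_{n>=1} (2x)^(2n) / (n^2 * C(2n, n))
    return 0.5 * sum((2 * x) ** (2 * n) / (n ** 2 * comb(2 * n, n))
                     for n in range(1, terms + 1))

x = 0.3
print(takebe_series(x), asin(x) ** 2)  # the two values agree to many digits
```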
Takebe Prizes.
In the context of its 50th anniversary celebrations, the Mathematical Society of Japan established the Takebe Prize and the Takebe Prizes for the encouragement of young people who show promise as mathematicians.
Selected works.
In a statistical overview derived from writings by and about Takebe Kenko, OCLC/WorldCat encompasses roughly 10+ works in 10+ publications in 3 languages and 10+ library holdings.
Notes.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "(\\arcsin(x))^2"
},
{
"math_id": 1,
"text": "\\pi"
}
]
| https://en.wikipedia.org/wiki?curid=13589753 |
13595037 | Variance inflation factor | Statistical measure in mathematical model
In statistics, the variance inflation factor (VIF) is the ratio (quotient) of the variance of a parameter estimate when fitting a full model that includes other parameters to the variance of the parameter estimate if the model is fit with only the parameter on its own. The VIF provides an index that measures how much the variance (the square of the estimate's standard deviation) of an estimated regression coefficient is increased because of collinearity.
Cuthbert Daniel claims to have invented the concept behind the variance inflation factor, but did not come up with the name.
Definition.
Consider the following linear model with "k" independent variables:
"Y" = "β"0 + "β"1 "X"1 + "β"2 "X" 2 + ... + "β""k" "X""k" + "ε".
The standard error of the estimate of "β""j" is the square root of the "j" + 1 diagonal element of "s"2("X"′"X")−1, where "s" is the root mean squared error (RMSE) (note that RMSE2 is a consistent estimator of the true variance of the error term, formula_0); "X" is the regression design matrix — a matrix such that "X""i", "j"+1 is the value of the "j"th independent variable for the "i"th case or observation, and such that "X""i",1, the predictor vector associated with the intercept term, equals 1 for all "i". It turns out that the square of this standard error, the estimated variance of the estimate of "β""j", can be equivalently expressed as:
formula_1
where "R""j"2 is the multiple "R"2 for the regression of "X""j" on the other covariates (a regression that does not involve the response variable "Y") and formula_2 are the coefficient estimates, id est, the estimates of formula_3. This identity separates the influences of several distinct factors on the variance of the coefficient estimate:
The remaining term, 1 / (1 − "R""j"2) is the VIF. It reflects all other factors that influence the uncertainty in the coefficient estimates. The VIF equals 1 when the vector "X""j" is orthogonal to each column of the design matrix for the regression of "X""j" on the other covariates. By contrast, the VIF is greater than 1 when the vector "X""j" is not orthogonal to all columns of the design matrix for the regression of "X""j" on the other covariates. Finally, note that the VIF is invariant to the scaling of the variables (that is, we could scale each variable "X""j" by a constant "c""j" without changing the VIF).
formula_5
Now let formula_6, and without losing generality, we reorder the columns of "X" to set the first column to be formula_7
formula_8
formula_9.
By using Schur complement, the element in the first row and first column in formula_10 is,
formula_11
Then we have,
formula_12
Here formula_13 is the coefficient of regression of dependent variable formula_14 over covariate formula_15. formula_16 is the corresponding residual sum of squares.
Calculation and analysis.
We can calculate "k" different VIFs (one for each "X""i") in three steps:
Step one.
First we run an ordinary least squares regression that has "X""i" as a function of all the other explanatory variables in the first equation. If "i" = 1, for example, the equation would be
formula_17
where formula_18 is a constant and formula_19 is the error term.
Step two.
Then, calculate the VIF factor for formula_20 with the following formula :
formula_21
where "R"2"i" is the coefficient of determination of the regression equation in step one, with formula_22 on the left hand side, and all other predictor variables (all the other X variables) on the right hand side.
Step three.
Analyze the magnitude of multicollinearity by considering the size of the formula_23. A rule of thumb is that if formula_24 then multicollinearity is high (a cutoff of 5 is also commonly used). However, any VIF greater than 1 means that the variance of the corresponding slope estimate is inflated to some degree. As a result, including two or more variables in a multiple regression that are not orthogonal (i.e. that have nonzero correlation) will alter each other's slope, SE of the slope, and P-value, because there is shared variance between the predictors that can't be uniquely attributed to any one of them.
Some software instead calculates the tolerance which is just the reciprocal of the VIF. The choice of which to use is a matter of personal preference.
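A self-contained numpy sketch of the three steps on simulated data (the variable names and data are invented for illustration); the first two columns are deliberately collinear and receive large VIFs, while the third stays near 1:

```python
import numpy as np

def vifs(X):
    """VIF for each column of the predictor matrix X (n observations x k predictors)."""
    n, k = X.shape
    out = []
    for i in range(k):
        y = X[:, i]
        others = np.column_stack([np.ones(n), np.delete(X, i, axis=1)])  # step 1: regress X_i on the rest
        beta, *_ = np.linalg.lstsq(others, y, rcond=None)
        resid = y - others @ beta
        r2 = 1.0 - resid.var() / y.var()                                 # step 2: R^2 of that regression
        out.append(1.0 / (1.0 - r2))                                     # step 3: VIF_i = 1 / (1 - R^2_i)
    return out

rng = np.random.default_rng(0)
x1 = rng.normal(size=200)
x2 = x1 + 0.3 * rng.normal(size=200)   # strongly correlated with x1
x3 = rng.normal(size=200)              # roughly independent of the others
print([round(v, 1) for v in vifs(np.column_stack([x1, x2, x3]))])
```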
Interpretation.
The square root of the variance inflation factor indicates how much larger the standard error is, compared with what it would be if that variable had zero correlation with the other predictor variables in the model.
Example
If the variance inflation factor of a predictor variable were 5.27 (√5.27 = 2.3), this means that the standard error for the coefficient of that predictor variable is 2.3 times larger than if that predictor variable had 0 correlation with the other predictor variables.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": " \\sigma^2 "
},
{
"math_id": 1,
"text": "\n\\widehat{\\operatorname{var}}(\\hat{\\beta}_j) = \\frac{s^2}{(n-1)\\widehat{\\operatorname{var}}(X_j)}\\cdot \\frac{1}{1-R_j^2},\n"
},
{
"math_id": 2,
"text": "\\hat{\\beta}_j"
},
{
"math_id": 3,
"text": "{\\beta}_j"
},
{
"math_id": 4,
"text": "\\widehat\\operatorname{var}(X_j)"
},
{
"math_id": 5,
"text": "\n\\widehat{\\operatorname{var}}(\\hat{\\beta}_j) = s^2 [(X^T X)^{-1}]_{jj}\n"
},
{
"math_id": 6,
"text": "r= X^T X"
},
{
"math_id": 7,
"text": "X_j"
},
{
"math_id": 8,
"text": "\nr^{-1} = \\begin{bmatrix} r_{j,j} & r_{j,-j} \\\\ r_{-j,j} & r_{-j,-j}\\end{bmatrix}^{-1}\n"
},
{
"math_id": 9,
"text": " r_{j,j} = X_j^T X_j, r_{j,-j} = X_j^T X_{-j}, r_{-j,j} = X_{-j}^T X_j, r_{-j,-j} = X_{-j}^T X_{-j}"
},
{
"math_id": 10,
"text": " r^{-1} "
},
{
"math_id": 11,
"text": "r^{-1}_{1,1} = [r_{j,j} - r_{j,-j} r_{-j,-j}^{-1} r_{-j,j} ]^{-1} "
},
{
"math_id": 12,
"text": "\n\\begin{align}\n& \\widehat{\\operatorname{var}}(\\hat{\\beta}_j) = s^2 [(X^T X)^{-1}]_{jj} = s^2 r^{-1}_{1,1} \\\\\n= {} & s^2 [X_j^T X_j - X_j^T X_{-j} (X_{-j}^T X_{-j})^{-1} X_{-j}^T X_j ]^{-1} \\\\ \n= {} & s^2 [X_j^T X_j - X_j^T X_{-j} (X_{-j}^T X_{-j})^{-1} (X_{-j}^T X_{-j}) (X_{-j}^T X_{-j})^{-1} X_{-j}^T X_j ]^{-1} \\\\\n= {} & s^2 [X_j^T X_j - \\hat{\\beta}_{*j}^T(X_{-j}^T X_{-j}) \\hat{\\beta}_{*j} ]^{-1} \\\\\n= {} & s^2 \\frac{1}{\\mathrm{RSS}_j} \\\\\n= {} & \\frac{s^2}{(n-1)\\widehat\\operatorname{var}(X_j)}\\cdot \\frac{1}{1-R_j^2}\n\\end{align}\n"
},
{
"math_id": 13,
"text": " \\hat{\\beta}_{*j} "
},
{
"math_id": 14,
"text": "X_j "
},
{
"math_id": 15,
"text": "X_{-j} "
},
{
"math_id": 16,
"text": "\\mathrm{RSS}_j "
},
{
"math_id": 17,
"text": "X_1=\\alpha_0 + \\alpha_2 X_2 + \\alpha_3 X_3 + \\cdots + \\alpha_k X_k +\\varepsilon"
},
{
"math_id": 18,
"text": "\\alpha_0"
},
{
"math_id": 19,
"text": "\\varepsilon"
},
{
"math_id": 20,
"text": "\\hat\\alpha_i"
},
{
"math_id": 21,
"text": "\\mathrm{VIF}_i = \\frac{1}{1-R^2_i}"
},
{
"math_id": 22,
"text": " X_i "
},
{
"math_id": 23,
"text": "\\operatorname{VIF}(\\hat \\alpha_i)"
},
{
"math_id": 24,
"text": "\\operatorname{VIF}(\\hat \\alpha_i) > 10"
}
]
| https://en.wikipedia.org/wiki?curid=13595037 |
1359541 | Solar rotation | Differential rotation of the Sun
Solar rotation varies with latitude. The Sun is not a solid body, but is composed of a gaseous plasma. Different latitudes rotate at different periods. The source of this differential rotation is an area of current research in solar astronomy. The rate of surface rotation is observed to be the fastest at the equator (latitude "φ" = 0°) and to decrease as latitude increases. The solar rotation period is 25.67 days at the equator and 33.40 days at 75 degrees of latitude.
Surface rotation as an equation.
The differential rotation rate of the photosphere can be approximated by the equation:
formula_0
where formula_1 is the angular velocity in degrees per day, formula_2 is the solar latitude, A is angular velocity at the equator, and B, C are constants controlling the decrease in velocity with increasing latitude. The values of A, B, and C differ depending on the techniques used to make the measurement, as well as the time period studied. A current set of accepted average values is:
formula_3
formula_4
formula_5
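A short Python sketch evaluating this fit: at the equator it reproduces the sidereal period of about 24.47 days quoted below, and near 26° latitude the synodic period comes out close to the 27.2753-day Carrington value (365.25 days is used for Earth's orbital period):

```python
import math

A, B, C = 14.713, -2.396, -1.787   # degrees per day

def omega(lat_deg):
    s = math.sin(math.radians(lat_deg))
    return A + B * s**2 + C * s**4

for lat in (0, 26, 45, 75):
    w = omega(lat)
    t_sidereal = 360.0 / w
    t_synodic = 1.0 / (1.0 / t_sidereal - 1.0 / 365.25)  # correct for Earth's orbital motion
    print(lat, round(w, 3), round(t_sidereal, 2), round(t_synodic, 2))
```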
Sidereal rotation.
At the equator, the solar rotation period is 24.47 days. This is called the sidereal rotation period, and should not be confused with the synodic rotation period of 26.24 days, which is the time for a fixed feature on the Sun to rotate to the same apparent position as viewed from Earth (the Earth's orbital rotation is in the same direction as the Sun's rotation). The synodic period is longer because the Sun must rotate for a sidereal period plus an extra amount due to the orbital motion of Earth around the Sun. Note that astrophysical literature does not typically use the equatorial rotation period, but instead often uses the definition of a Carrington rotation: a synodic rotation period of 27.2753 days or a sidereal period of 25.38 days. This chosen period roughly corresponds to the prograde rotation at a latitude of 26° north or south, which is consistent with the typical latitude of sunspots and corresponding periodic solar activity. When the Sun is viewed from the "north" (above Earth's north pole), solar rotation is counterclockwise (eastward). To a person standing on Earth's North Pole at the time of equinox, sunspots would appear to move from left to right across the Sun's face.
In Stonyhurst heliographic coordinates, the left side of the Sun's face is called East, and the right side of the Sun's face is called West. Therefore, sunspots are said to move across the Sun's face from east to west.
Bartels' Rotation Number.
Bartels' Rotation Number is a serial count that numbers the apparent rotations of the Sun as viewed from Earth, and is used to track certain recurring or shifting patterns of solar activity. For this purpose, each rotation has a length of exactly 27 days, close to the synodic Carrington rotation rate. Julius Bartels arbitrarily assigned rotation day one to 8 February 1832. The serial number serves as a kind of calendar to mark the recurrence periods of solar and geophysical parameters.
Carrington rotation.
The Carrington rotation is a system for comparing locations on the Sun over a period of time, allowing the following of sunspot groups or reappearance of eruptions at a later time.
Because solar rotation is variable with latitude, depth and time, any such system is necessarily arbitrary and only makes comparison meaningful over moderate periods of time. Solar rotation is taken to be 27.2753 days (see below) for the purpose of Carrington rotations. Each rotation of the Sun under this scheme is given a unique number called the Carrington Rotation Number, starting from November 9, 1853. (The Bartels Rotation Number is a similar numbering scheme that uses a period of exactly 27 days and starts from February 8, 1832.)
The heliographic longitude of a solar feature conventionally refers to its angular distance relative to the central meridian crossed by the Sun-Earth radial line.
The "Carrington longitude" of the same feature refers to an arbitrary fixed reference point of an imagined rigid rotation, as defined originally by Richard Christopher Carrington.
Carrington determined the solar rotation rate from low latitude sunspots in the 1850s and arrived at 25.38 days for the sidereal rotation period. Sidereal rotation is measured relative to the stars, but because the Earth is orbiting the Sun, we see this period as 27.2753 days.
It is possible to construct a diagram with the longitude of sunspots horizontally and time vertically. The longitude is measured by the time of crossing the central meridian and based on the Carrington rotations. In each rotation, plotted under the preceding ones, most sunspots or other phenomena will reappear directly below the same phenomenon on the previous rotation. There may be slight drifts left or right over longer periods of time.
The Bartels "musical diagram" or the Condegram spiral plot are other techniques for expressing the approximate 27-day periodicity of various phenomena originating at the solar surface.
Start of Carrington Rotation.
Start dates of a new synodical solar rotation number according to Carrington.
Using sunspots to measure rotation.
The rotation constants have been measured by measuring the motion of various features ("tracers") on the solar surface. The first and most widely used tracers are sunspots. Though sunspots had been observed since ancient times, it was only when the telescope came into use that they were observed to turn with the Sun, and thus the period of the solar rotation could be defined. The English scholar Thomas Harriot was probably the first to observe sunspots telescopically as evidenced by a drawing in his notebook dated December 8, 1610, and the first published observations (June 1611) entitled “De Maculis in Sole Observatis, et Apparente earum cum Sole Conversione Narratio” ("Narration on Spots Observed on the Sun and their Apparent Rotation with the Sun") were by Johannes Fabricius who had been systematically observing the spots for a few months and had noted also their movement across the solar disc. This can be considered the first observational evidence of the solar rotation. Christoph Scheiner (“Rosa Ursine sive solis”, book 4, part 2, 1630) was the first to measure the equatorial rotation rate of the Sun and noticed that the rotation at higher latitudes is slower, so he can be considered the discoverer of solar differential rotation.
Each measurement gives a slightly different answer, yielding the above standard deviations (shown as +/−). St. John (1918) was perhaps the first to summarise the published solar rotation rates, and concluded that the differences in series measured in different years can hardly be attributed to personal observation or to local disturbances on the Sun, and are probably due to time variations in the rate of rotation, and Hubrecht (1915) was the first one to find that the two solar hemispheres rotate differently. A study of magnetograph data showed a synodic period in agreement with other studies of 26.24 days at the equator and almost 38 days at the poles.
Internal solar rotation.
Until the advent of helioseismology, the study of wave oscillations in the Sun, very little was known about the internal rotation of the Sun. The differential profile of the surface was thought to extend into the solar interior as rotating cylinders of constant angular momentum. Through helioseismology this is now known not to be the case and the rotation profile of the Sun has been found. On the surface, the Sun rotates slowly at the poles and quickly at the equator. This profile extends on roughly radial lines through the solar convection zone to the interior. At the tachocline the rotation abruptly changes to solid-body rotation in the solar radiation zone.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\omega=A+B\\,\\sin^2(\\varphi)+C\\,\\sin^4(\\varphi)"
},
{
"math_id": 1,
"text": "\\omega"
},
{
"math_id": 2,
"text": "\\varphi"
},
{
"math_id": 3,
"text": "A= 14.713 \\pm 0.0491\\ ^{\\circ}/\\text{day}"
},
{
"math_id": 4,
"text": "B= -2.396 \\pm 0.188\\ ^{\\circ}/\\text{day}"
},
{
"math_id": 5,
"text": "C= -1.787 \\pm 0.253\\ ^{\\circ}/\\text{day}"
}
]
| https://en.wikipedia.org/wiki?curid=1359541 |
1359832 | Pregeometry (model theory) | Formulation of matroids using closure operators
Pregeometry, and in full combinatorial pregeometry, are essentially synonyms for "matroid". They were introduced by Gian-Carlo Rota with the intention of providing a less "ineffably cacophonous" alternative term. Also, the term combinatorial geometry, sometimes abbreviated to geometry, was intended to replace "simple matroid". These terms are now infrequently used in the study of matroids.
It turns out that many fundamental concepts of linear algebra – closure, independence, subspace, basis, dimension – are available in the general framework of pregeometries.
In the branch of mathematical logic called model theory, infinite finitary matroids, there called "pregeometries" (and "geometries" if they are simple matroids), are used in the discussion of independence phenomena. The study of how pregeometries, geometries, and abstract closure operators influence the structure of first-order models is called geometric stability theory.
Motivation.
If formula_0 is a vector space over some field and formula_1, we define formula_2 to be the set of all linear combinations of vectors from formula_3, also known as the span of formula_3. Then we have formula_4 and formula_5 and formula_6. The Steinitz exchange lemma is equivalent to the statement: if formula_7, then formula_8
The linear algebra concepts of independent set, generating set, basis and dimension can all be expressed using the formula_9-operator alone. A pregeometry is an abstraction of this situation: we start with an arbitrary set formula_10 and an arbitrary operator formula_9 which assigns to each subset formula_3 of formula_10 a subset formula_2 of formula_10, satisfying the properties above. Then we can define the "linear algebra" concepts also in this more general setting.
This generalized notion of dimension is very useful in model theory, where in certain situation one can argue as follows: two models with the same cardinality must have the same dimension and two models with the same dimension must be isomorphic.
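A small Python sketch of the vector-space case (over the two-element field, with vectors encoded as bitmasks) shows how independence, bases and dimension can be computed from the closure operator alone:

```python
def cl(A):
    """Closure of A in the pregeometry of GF(2)-vectors: the span under bitwise XOR."""
    span = {0}
    for v in A:
        span |= {x ^ v for x in span}
    return span

def is_independent(A):
    return all(a not in cl(set(A) - {a}) for a in A)

def basis_of(A):
    basis = []                      # greedy construction using only cl
    for v in A:
        if v not in cl(basis):
            basis.append(v)
    return basis

S = [0b0011, 0b0101, 0b0110, 0b1000]    # 0b0110 = 0b0011 XOR 0b0101
print(is_independent(S))                # False: the set is dependent
print(basis_of(S), len(basis_of(S)))    # a basis of cl(S); the dimension is 3
```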
Definitions.
Pregeometries and geometries.
A combinatorial pregeometry (also known as a finitary matroid) is a pair formula_11, where formula_10 is a set and formula_12 (called the closure map) satisfies the following axioms. For all formula_13 and formula_14:
formula_15 is monotone increasing and dominates formula_16 (i.e. formula_17 implies formula_18) and is idempotent;
finite character: formula_19 if and only if there is a finite formula_20 with formula_21;
exchange principle: if formula_23 (i.e. formula_22 holds but "c" is not in the closure of formula_3), then "b" lies in the closure of formula_3 together with "c".
Sets of the form formula_2 for some formula_24 are called closed. It is then clear that finite intersections of closed sets are closed and that formula_2 is the smallest closed set containing formula_3.
A geometry is a pregeometry in which the closure of each singleton is a singleton and the closure of the empty set is the empty set.
Independence, bases and dimension.
Given sets formula_25, formula_3 is independent over formula_26 if formula_27 for any formula_28. We say that formula_3 is independent if it is independent over the empty set.
A set formula_29 is a basis for formula_3 over formula_26 if it is independent over formula_26 and formula_30.
A basis is the same as a maximal independent subset, and using Zorn's lemma one can show that every set has a basis. Since a pregeometry satisfies the Steinitz exchange property all bases are of the same cardinality, hence we may define the dimension of formula_3 over formula_26, written as formula_31, as the cardinality of any basis of formula_3 over formula_26. Again, the dimension formula_32 of formula_3 is defined to be the dimension over the empty set.
The sets formula_33 are independent over formula_26 if formula_34 whenever formula_35 is a finite subset of formula_3. Note that this relation is symmetric.
Automorphisms and homogeneous pregeometries.
An automorphism of a pregeometry formula_11 is a bijection formula_36 such that formula_37 for any formula_38.
A pregeometry formula_10 is said to be homogeneous if for any closed formula_38 and any two elements formula_39 there is an automorphism of formula_10 which maps formula_40 to formula_41 and fixes formula_42 pointwise.
The associated geometry and localizations.
Given a pregeometry formula_11 its associated geometry (sometimes referred in the literature as the canonical geometry) is the geometry formula_43 where
It is easy to see that the associated geometry of a homogeneous pregeometry is homogeneous.
Given formula_24 the localization of formula_10 is the pregeometry formula_46 where formula_47.
Types of pregeometries.
The pregeometry formula_11 is said to be:
trivial (or degenerate) if formula_48 for every non-empty formula_38;
modular if for any pair of closed finite-dimensional sets formula_49 the equality formula_50 holds (equivalently, if formula_42 is independent from formula_51 over formula_52);
locally modular if it has a localization at a singleton which is modular;
locally finite if closures of finite sets are finite.
Triviality, modularity and local modularity pass to the associated geometry and are preserved under localization.
If formula_10 is a locally modular homogeneous pregeometry and formula_53 then the localization of formula_10 in formula_41 is modular.
The geometry formula_10 is modular if and only if whenever formula_54, formula_24, formula_55 and formula_56 then formula_57.
Examples.
The trivial example.
If formula_10 is any set we may define formula_58 for all formula_24. This pregeometry is a trivial, homogeneous, locally finite geometry.
Vector spaces and projective spaces.
Let formula_59 be a field (a division ring actually suffices) and let formula_0 be a vector space over formula_59. Then formula_0 is a pregeometry where closures of sets are defined to be their span. The closed sets are the linear subspaces of formula_0 and the notion of dimension from linear algebra coincides with the pregeometry dimension.
This pregeometry is homogeneous and modular. Vector spaces are considered to be the prototypical example of modularity.
formula_0 is locally finite if and only if formula_59 is finite.
formula_0 is not a geometry, as the closure of any nontrivial vector is a subspace of size at least formula_60.
The associated geometry of a formula_61-dimensional vector space over formula_59 is the formula_62-dimensional projective space over formula_59. It is easy to see that this pregeometry is a projective geometry.
Affine spaces.
Let formula_0 be a formula_61-dimensional affine space over a field formula_59. Given a set define its closure to be its affine hull (i.e. the smallest affine subspace containing it).
This forms a homogeneous formula_63-dimensional geometry.
An affine space is not modular (for example, if formula_42 and formula_51 are parallel lines then the formula in the definition of modularity fails). However, it is easy to check that all localizations are modular.
Field extensions and transcendence degree.
Let formula_64 be a field extension. The set formula_65 becomes a pregeometry if we define formula_66 for formula_67. The set formula_3 is independent in this pregeometry if and only if it is algebraically independent over formula_68. The dimension of formula_3 coincides with the transcendence degree formula_69.
In model theory, the case of formula_65 being algebraically closed and formula_68 its prime field is especially important.
While vector spaces are modular and affine spaces are "almost" modular (i.e. everywhere locally modular), algebraically closed fields are examples of the opposite extreme, being not even locally modular (i.e. none of the localizations is modular).
Strongly minimal sets in model theory.
Given a countable first-order language "L" and an "L-"structure "M," any definable subset "D" of "M" that is strongly minimal gives rise to a pregeometry on the set "D". The closure operator here is given by the algebraic closure in the model-theoretic sense.
A model of a strongly minimal theory is determined up to isomorphism by its dimension as a pregeometry; this fact is used in the proof of Morley's categoricity theorem.
In minimal sets over stable theories the independence relation coincides with the notion of forking independence. | [
{
"math_id": 0,
"text": "V"
},
{
"math_id": 1,
"text": "A\\subseteq V"
},
{
"math_id": 2,
"text": "\\text{cl}(A)"
},
{
"math_id": 3,
"text": "A"
},
{
"math_id": 4,
"text": "A\\subseteq \\text{cl}(A)"
},
{
"math_id": 5,
"text": "\\text{cl}(\\text{cl}(A))=\\text{cl}(A)"
},
{
"math_id": 6,
"text": "A\\subseteq B \\Rightarrow \\text{cl}(A)\\subseteq\\text{cl}(B)"
},
{
"math_id": 7,
"text": "b\\in\\text{cl}(A\\cup\\{c\\})\\smallsetminus\\text{cl}(A)"
},
{
"math_id": 8,
"text": "c\\in\\text{cl}(A\\cup\\{b\\})."
},
{
"math_id": 9,
"text": "\\text{cl}"
},
{
"math_id": 10,
"text": "S"
},
{
"math_id": 11,
"text": "(S,\\text{cl})"
},
{
"math_id": 12,
"text": "\\text{cl}:\\mathcal{P}(S)\\to\\mathcal{P}(S)"
},
{
"math_id": 13,
"text": "a, b, c\\in S"
},
{
"math_id": 14,
"text": "A, B\\subseteq S"
},
{
"math_id": 15,
"text": "\\text{cl}:(\\mathcal{P}(S),\\subseteq)\\to(\\mathcal{P}(S),\\subseteq)"
},
{
"math_id": 16,
"text": "\\text{id}"
},
{
"math_id": 17,
"text": "A\\subseteq B"
},
{
"math_id": 18,
"text": "A\\subseteq\\text{cl}(A)\\subseteq\\text{cl}(B)"
},
{
"math_id": 19,
"text": "a\\in\\text{cl}(A)"
},
{
"math_id": 20,
"text": "F\\subseteq A"
},
{
"math_id": 21,
"text": "a\\in\\text{cl}(F)"
},
{
"math_id": 22,
"text": "c\\in\\text{cl}(A\\cup\\{b\\}) "
},
{
"math_id": 23,
"text": "c\\in\\text{cl}(A\\cup\\{b\\})\\smallsetminus\\text{cl}(A)"
},
{
"math_id": 24,
"text": "A\\subseteq S"
},
{
"math_id": 25,
"text": "A,D\\subseteq S"
},
{
"math_id": 26,
"text": "D"
},
{
"math_id": 27,
"text": "a\\notin \\text{cl}((A\\setminus\\{a\\})\\cup D)"
},
{
"math_id": 28,
"text": "a\\in A"
},
{
"math_id": 29,
"text": "B \\subseteq A"
},
{
"math_id": 30,
"text": "A\\subseteq \\text{cl}(B\\cup D)"
},
{
"math_id": 31,
"text": "\\text{dim}_D A"
},
{
"math_id": 32,
"text": "\\text{dim} A"
},
{
"math_id": 33,
"text": "A,B"
},
{
"math_id": 34,
"text": "\\text{dim}_{B\\cup D} A' = \\dim_D A'"
},
{
"math_id": 35,
"text": "A'"
},
{
"math_id": 36,
"text": "\\sigma:S\\to S"
},
{
"math_id": 37,
"text": "\\sigma(\\text{cl}(X))=\\text{cl}(\\sigma (X))"
},
{
"math_id": 38,
"text": "X\\subseteq S"
},
{
"math_id": 39,
"text": "a,b\\in S\\setminus X"
},
{
"math_id": 40,
"text": "a"
},
{
"math_id": 41,
"text": "b"
},
{
"math_id": 42,
"text": "X"
},
{
"math_id": 43,
"text": "(S',\\text{cl}')"
},
{
"math_id": 44,
"text": "S'=\\{\\text{cl}(a)\\mid a\\in S\\setminus \\text{cl} (\\varnothing)\\}"
},
{
"math_id": 45,
"text": "\\text{cl}'(\\{\\text{cl}(a)\\mid a\\in X\\}) = \\{\\text{cl}(b)\\mid b\\in\\text{cl}(X)\\}"
},
{
"math_id": 46,
"text": "(S,\\text{cl}_A)"
},
{
"math_id": 47,
"text": "\\text{cl}_A(X)=\\text{cl}(X\\cup A)"
},
{
"math_id": 48,
"text": "\\text{cl}(X)=\\bigcup\\{\\text{cl}(a)\\mid a\\in X\\}"
},
{
"math_id": 49,
"text": "X,Y\\subseteq S"
},
{
"math_id": 50,
"text": "\n\\text{dim}(X\\cup Y) = \\text{dim}(X) + \\text{dim}(Y) - \\text{dim}(X\\cap Y)\n"
},
{
"math_id": 51,
"text": "Y"
},
{
"math_id": 52,
"text": "X\\cap Y"
},
{
"math_id": 53,
"text": "a\\in S\\setminus\\text{cl}(\\varnothing)"
},
{
"math_id": 54,
"text": "a,b\\in S"
},
{
"math_id": 55,
"text": "\\text{dim}\\{a,b\\}=2"
},
{
"math_id": 56,
"text": "\\text{dim}_A\\{a,b\\} \\le 1"
},
{
"math_id": 57,
"text": "(\\text{cl}\\{a,b\\}\\cap\\text{cl}(A))\\setminus\\text{cl}(\\varnothing)\\ne\\varnothing"
},
{
"math_id": 58,
"text": "\\text{cl}(A)=A"
},
{
"math_id": 59,
"text": "F"
},
{
"math_id": 60,
"text": "2"
},
{
"math_id": 61,
"text": "\\kappa"
},
{
"math_id": 62,
"text": "(\\kappa-1)"
},
{
"math_id": 63,
"text": "(\\kappa+1)"
},
{
"math_id": 64,
"text": "L/K"
},
{
"math_id": 65,
"text": "L"
},
{
"math_id": 66,
"text": "\\text{cl}(A)=\\{x\\in L : x \\text{ is algebraic over } K(A)\\}"
},
{
"math_id": 67,
"text": "A\\subseteq L"
},
{
"math_id": 68,
"text": "K"
},
{
"math_id": 69,
"text": "\\text{trdeg}(K(A)/K)"
}
]
| https://en.wikipedia.org/wiki?curid=1359832 |
13600 | Hipparchus | 2nd-century BC Greek astronomer, geographer and mathematician
Hipparchus (c. 190 – c. 120 BC) was a Greek astronomer, geographer, and mathematician. He is considered the founder of trigonometry, but is most famous for his incidental discovery of the precession of the equinoxes. Hipparchus was born in Nicaea, Bithynia, and probably died on the island of Rhodes, Greece. He is known to have been a working astronomer between 162 and 127 BC.
Hipparchus is considered the greatest ancient astronomical observer and, by some, the greatest overall astronomer of antiquity. He was the first whose quantitative and accurate models for the motion of the Sun and Moon survive. For this he certainly made use of the observations and perhaps the mathematical techniques accumulated over centuries by the Babylonians and by Meton of Athens (fifth century BC), Timocharis, Aristyllus, Aristarchus of Samos, and Eratosthenes, among others.
He developed trigonometry and constructed trigonometric tables, and he solved several problems of spherical trigonometry. With his solar and lunar theories and his trigonometry, he may have been the first to develop a reliable method to predict solar eclipses.
His other reputed achievements include the discovery and measurement of Earth's precession, the compilation of the first known comprehensive star catalog from the western world, and possibly the invention of the astrolabe, as well as of the armillary sphere that he may have used in creating the star catalogue. Hipparchus is sometimes called the "father of astronomy", a title conferred on him by Jean Baptiste Joseph Delambre in 1817.
Life and work.
Hipparchus was born in Nicaea, in Bithynia. The exact dates of his life are not known, but Ptolemy attributes astronomical observations to him in the period from 147 to 127 BC, and some of these are stated as made in Rhodes; earlier observations since 162 BC might also have been made by him. His birth date (c. 190 BC) was calculated by Delambre based on clues in his work. Hipparchus must have lived some time after 127 BC because he analyzed and published his observations from that year. Hipparchus obtained information from Alexandria as well as Babylon, but it is not known when or if he visited these places. He is believed to have died on the island of Rhodes, where he seems to have spent most of his later life.
In the second and third centuries, coins were made in his honour in Bithynia that bear his name and show him with a globe.
Relatively little of Hipparchus's direct work survives into modern times. Although he wrote at least fourteen books, only his commentary on the popular astronomical poem by Aratus was preserved by later copyists. Most of what is known about Hipparchus comes from Strabo's "Geography" and Pliny's "Natural History" in the first century; Ptolemy's second-century "Almagest"; and additional references to him in the fourth century by Pappus and Theon of Alexandria in their commentaries on the "Almagest".
Hipparchus's only preserved work is "Commentary on the Phaenomena of Eudoxus and Aratus". This is a highly critical commentary in the form of two books on a popular poem by Aratus based on the work by Eudoxus. Hipparchus also made a list of his major works that apparently mentioned about fourteen books, but which is only known from references by later authors. His famous star catalog was incorporated into the one by Ptolemy and may be almost perfectly reconstructed by subtraction of two and two-thirds degrees from the longitudes of Ptolemy's stars. The first trigonometric table was apparently compiled by Hipparchus, who is consequently now known as "the father of trigonometry".
Babylonian sources.
Earlier Greek astronomers and mathematicians were influenced by Babylonian astronomy to some extent, for instance the period relations of the Metonic cycle and Saros cycle may have come from Babylonian sources (see "Babylonian astronomical diaries"). Hipparchus seems to have been the first to exploit Babylonian astronomical knowledge and techniques systematically. Eudoxus in the 4th century BC and Timocharis and Aristillus in the 3rd century BC already divided the ecliptic in 360 parts (our degrees, Greek: moira) of 60 arcminutes and Hipparchus continued this tradition. It was only in Hipparchus's time (2nd century BC) when this division was introduced (probably by Hipparchus's contemporary Hypsikles) for all circles in mathematics. Eratosthenes (3rd century BC), in contrast, used a simpler sexagesimal system dividing a circle into 60 parts. Hipparchus also adopted the Babylonian astronomical "cubit" unit (Akkadian "ammatu", Greek πῆχυς "pēchys") that was equivalent to 2° or 2.5° ('large cubit').
Hipparchus probably compiled a list of Babylonian astronomical observations; Gerald J. Toomer, a historian of astronomy, has suggested that Ptolemy's knowledge of eclipse records and other Babylonian observations in the "Almagest" came from a list made by Hipparchus. Hipparchus's use of Babylonian sources has always been known in a general way, because of Ptolemy's statements, but the only text by Hipparchus that survives does not provide sufficient information to decide whether Hipparchus's knowledge (such as his usage of the units cubit and finger, degrees and minutes, or the concept of hour stars) was based on Babylonian practice. However, Franz Xaver Kugler demonstrated that the synodic and anomalistic periods that Ptolemy attributes to Hipparchus had already been used in Babylonian ephemerides, specifically the collection of texts nowadays called "System B" (sometimes attributed to Kidinnu).
Hipparchus's long draconitic lunar period (5,458 months = 5,923 lunar nodal periods) also appears a few times in Babylonian records. But the only such tablet that is explicitly dated is post-Hipparchus, so the direction of transmission is not settled by the tablets.
Geometry, trigonometry and other mathematical techniques.
Hipparchus was recognized as the first mathematician known to have possessed a trigonometric table, which he needed when computing the eccentricity of the orbits of the Moon and Sun. He tabulated values for the chord function, which for a central angle in a circle gives the length of the straight line segment between the points where the angle intersects the circle. He may have computed this for a circle with a circumference of 21,600 units and a radius (rounded) of 3,438 units; this circle has a unit length for each arcminute along its perimeter. (This was "proven" by Toomer, but he later "cast doubt" upon his earlier affirmation. Other authors have argued that a circle of radius 3,600 units may instead have been used by Hipparchus.) He tabulated the chords for angles with increments of 7.5°. In modern terms, the chord subtended by a central angle in a circle of given radius R equals R times twice the sine of half of the angle, i.e.:
formula_0
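As a rough numerical illustration (not from the source; the 3,438-unit radius is the reconstruction discussed above), the following sketch tabulates chords at the presumed 7.5° increments using the modern identity chord(θ) = 2R·sin(θ/2):

```python
import math

R = 3438  # radius in arcminute units (circumference 21,600), per the reconstruction above

def chord(theta_deg, radius=R):
    """Length of the chord subtending a central angle theta (degrees)."""
    return 2 * radius * math.sin(math.radians(theta_deg) / 2)

# Chord table at 7.5-degree steps, as Hipparchus is presumed to have tabulated it.
for step in range(1, 25):  # 7.5 deg, 15 deg, ..., 180 deg
    theta = 7.5 * step
    print(f"{theta:6.1f} deg  chord = {chord(theta):8.1f}")

# Sanity checks: chord(60 deg) = R, chord(180 deg) = 2R (the diameter).
assert abs(chord(60) - R) < 1e-9
assert abs(chord(180) - 2 * R) < 1e-9
```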
The now-lost work in which Hipparchus is said to have developed his chord table, is called "Tōn en kuklōi eutheiōn" ("Of Lines Inside a Circle") in Theon of Alexandria's fourth-century commentary on section I.10 of the "Almagest". Some claim the table of Hipparchus may have survived in astronomical treatises in India, such as the "Surya Siddhanta". Trigonometry was a significant innovation, because it allowed Greek astronomers to solve any triangle, and made it possible to make quantitative astronomical models and predictions using their preferred geometric techniques.
Hipparchus must have used a better approximation for π than the one given by Archimedes of between 3+10⁄71 (≈ 3.1408) and 3+1⁄7 (≈ 3.1429). Perhaps he had the approximation later used by Ptolemy, sexagesimal 3;08,30 (≈ 3.1417) ("Almagest" VI.7).
Hipparchus could have constructed his chord table using the Pythagorean theorem and a theorem known to Archimedes. He also might have used the relationship between sides and diagonals of a cyclic quadrilateral, today called Ptolemy's theorem because its earliest extant source is a proof in the "Almagest" (I.10).
The stereographic projection was ambiguously attributed to Hipparchus by Synesius (c. 400 AD), and on that basis Hipparchus is often credited with inventing it or at least knowing of it. However, some scholars believe this conclusion to be unjustified by available evidence. The oldest extant description of the stereographic projection is found in Ptolemy's "Planisphere" (2nd century AD).
Besides geometry, Hipparchus also used arithmetic techniques developed by the Chaldeans. He was one of the first Greek mathematicians to do this and, in this way, expanded the techniques available to astronomers and geographers.
There are several indications that Hipparchus knew spherical trigonometry, but the first surviving text discussing it is by Menelaus of Alexandria in the first century, who now, on that basis, commonly is credited with its discovery. (Previous to the finding of the proofs of Menelaus a century ago, Ptolemy was credited with the invention of spherical trigonometry.) Ptolemy later used spherical trigonometry to compute things such as the rising and setting points of the ecliptic, or to take account of the lunar parallax. If he did not use spherical trigonometry, Hipparchus may have used a globe for these tasks, reading values off coordinate grids drawn on it, or he may have made approximations from planar geometry, or perhaps used arithmetical approximations developed by the Chaldeans.
Lunar and solar theory.
Motion of the Moon.
Hipparchus also studied the motion of the Moon and confirmed the accurate values for two periods of its motion that Chaldean astronomers are widely presumed to have possessed before him. The traditional value (from Babylonian System B) for the mean synodic month is 29 days; 31,50,8,20 (sexagesimal) = 29.5305941... days. Expressed as 29 days + 12 hours + 793⁄1080 hours, this value has been used later in the Hebrew calendar. The Chaldeans also knew that 251 synodic months ≈ 269 anomalistic months. Hipparchus multiplied this period by a factor of 17, because that interval is also an eclipse period, and is also close to an integer number of years (4,267 moons : 4,573 anomalistic periods : 4,630.53 nodal periods : 4,611.98 lunar orbits : 344.996 years : 344.982 solar orbits : 126,007.003 days : 126,351.985 rotations). What was so exceptional and useful about the cycle was that all 345-year-interval eclipse pairs occur slightly more than 126,007 days apart within a tight range of only approximately ±1⁄2 hour, guaranteeing (after division by 4,267) an estimate of the synodic month correct to one part in order of magnitude 10 million.
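The figures in this paragraph can be verified with a few lines of arithmetic; the sketch below (an assumed illustration, not part of the historical record) converts the Babylonian sexagesimal month to decimal days and checks the 345-year eclipse interval:

```python
# Babylonian System B mean synodic month: 29;31,50,8,20 days (sexagesimal).
month = 29 + 31/60 + 50/60**2 + 8/60**3 + 20/60**4
print(f"mean synodic month = {month:.7f} days")   # 29.5305941...

# Hipparchus's eclipse interval: 17 x 251 synodic months = 4,267 months,
# close to 345 years and slightly more than 126,007 days.
months_4267 = 4267 * month
print(f"4267 months = {months_4267:.2f} days")                    # about 126,007 days
print(f"             = {months_4267 / 365.25:.3f} Julian years")  # about 345 years
```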
Hipparchus could confirm his computations by comparing eclipses from his own time (presumably 27 January 141 BC and 26 November 139 BC according to Toomer) with eclipses from Babylonian records 345 years earlier ("Almagest" IV.2).
Later al-Biruni ("Qanun" VII.2.II) and Copernicus ("de revolutionibus" IV.4) noted that the period of 4,267 moons is approximately five minutes longer than the value for the eclipse period that Ptolemy attributes to Hipparchus. However, the timing methods of the Babylonians had an error of no fewer than eight minutes. Modern scholars agree that Hipparchus rounded the eclipse period to the nearest hour, and used it to confirm the validity of the traditional values, rather than to try to derive an improved value from his own observations. From modern ephemerides and taking account of the change in the length of the day (see ΔT) we estimate that the error in the assumed length of the synodic month was less than 0.2 second in the fourth century BC and less than 0.1 second in Hipparchus's time.
Orbit of the Moon.
It had been known for a long time that the motion of the Moon is not uniform: its speed varies. This is called its "anomaly" and it repeats with its own period, the anomalistic month. The Chaldeans took account of this arithmetically, and used a table giving the daily motion of the Moon according to the date within a long period. However, the Greeks preferred to think in geometrical models of the sky. At the end of the third century BC, Apollonius of Perga had proposed two models for lunar and planetary motion: in the first (the eccentric model), the body moves uniformly on a circle whose center is offset from the Earth; in the second (the epicycle model), it moves on a small circle (the epicycle) that is itself carried uniformly on a larger circle (the deferent) centered on the Earth.
Apollonius demonstrated that these two models were in fact mathematically equivalent. However, all this was theory and had not been put to practice. Hipparchus is the first astronomer known to attempt to determine the relative proportions and actual sizes of these orbits. Hipparchus devised a geometrical method to find the parameters from three positions of the Moon at particular phases of its anomaly. In fact, he did this separately for the eccentric and the epicycle model. Ptolemy describes the details in the "Almagest" IV.11. Hipparchus used two sets of three lunar eclipse observations that he carefully selected to satisfy the requirements. The eccentric model he fitted to these eclipses from his Babylonian eclipse list: 22/23 December 383 BC, 18/19 June 382 BC, and 12/13 December 382 BC. The epicycle model he fitted to lunar eclipse observations made in Alexandria at 22 September 201 BC, 19 March 200 BC, and 11 September 200 BC.
The awkward figures he obtained for the two models are due to the cumbersome unit he used in his chord table, and may partly be due to some sloppy rounding and calculation errors by Hipparchus, for which Ptolemy criticised him while also making rounding errors himself. A simpler alternate reconstruction agrees with all four numbers. Hipparchus found inconsistent results; he later used the ratio of the epicycle model (3122+1⁄2 : 247+1⁄2), which is too small (60 : 4;45 sexagesimal). Ptolemy established a ratio of 60 : 5+1⁄4. (The maximum angular deviation producible by this geometry is the arcsin of 5+1⁄4 divided by 60, or approximately 5° 1', a figure that is sometimes therefore quoted as the equivalent of the Moon's equation of the center in the Hipparchan model.)
Apparent motion of the Sun.
Before Hipparchus, Meton, Euctemon, and their pupils at Athens had made a solstice observation (i.e., timed the moment of the summer solstice) on 27 June 432 BC (proleptic Julian calendar). Aristarchus of Samos is said to have done so in 280 BC, and Hipparchus also had an observation by Archimedes. He observed the summer solstices in 146 and 135 BC both accurately to a few hours, but observations of the moment of equinox were simpler, and he made twenty during his lifetime. Ptolemy gives an extensive discussion of Hipparchus's work on the length of the year in the "Almagest" III.1, and quotes many observations that Hipparchus made or used, spanning 162–128 BC, including an equinox timing by Hipparchus (at 24 March 146 BC at dawn) that differs by 5 hours from the observation made on Alexandria's large public equatorial ring that same day (at 1 hour before noon). Ptolemy claims his solar observations were on a transit instrument set in the meridian.
At the end of his career, Hipparchus wrote a book entitled "Peri eniausíou megéthous" ("On the Length of the Year") regarding his results. The established value for the tropical year, introduced by Callippus in or before 330 BC, was 365+1⁄4 days. Speculating a Babylonian origin for the Callippic year is difficult to defend, since Babylon did not observe solstices; thus the only extant System B year length was based on Greek solstices (see below). Hipparchus's equinox observations gave varying results, but he points out (quoted in "Almagest" III.1(H195)) that the observation errors by him and his predecessors may have been as large as 1⁄4 day. He used old solstice observations and determined a difference of approximately one day in approximately 300 years. So he set the length of the tropical year to 365+1⁄4 − 1⁄300 days (= 365.24666... days = 365 days 5 hours 55 min), which differs from the modern estimate of the value (including earth spin acceleration) in his time of approximately 365.2425 days, an error of approximately 6 min per year, an hour per decade, and ten hours per century.
Between the solstice observation of Meton and his own, there were 297 years spanning 108,478 days; this implies a tropical year of 365.24579... days = 365 days;14,44,51 (sexagesimal; = 365 days + 14⁄60 + 44⁄3,600 + 51⁄216,000), a year length found on one of the few Babylonian clay tablets which explicitly specifies the System B month. Whether Babylonians knew of Hipparchus's work or the other way around is debatable.
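A short arithmetic check of the year lengths quoted in the two preceding paragraphs (an illustrative sketch, not from the source):

```python
# Hipparchus's tropical year: 365 + 1/4 - 1/300 days.
tropical = 365 + 1/4 - 1/300
print(f"tropical year  = {tropical:.5f} days")                # 365.24667
error_min = (tropical - 365.2425) * 24 * 60
print(f"error vs. ~365.2425 days: {error_min:.1f} min/year")  # about 6 minutes

# Meton (432 BC) to Hipparchus: 297 years spanning 108,478 days.
meton_year = 108478 / 297
print(f"Meton-to-Hipparchus year = {meton_year:.5f} days")    # 365.24579

# The same value in sexagesimal: 365;14,44,51 days.
sexagesimal = 365 + 14/60 + 44/60**2 + 51/60**3
print(f"365;14,44,51 = {sexagesimal:.5f} days")               # 365.24579
```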
Hipparchus also gave the value for the sidereal year to be 365 + 1⁄4 + 1⁄144 days (= 365.25694... days = 365 days 6 hours 10 min). Another value for the sidereal year that is attributed to Hipparchus (by the physician Galen in the second century AD) is 365 + 1⁄4 + 1⁄288 days (= 365.25347... days = 365 days 6 hours 5 min), but this may be a corruption of another value attributed to a Babylonian source: 365 + 1⁄4 + 1⁄144 days (= 365.25694... days = 365 days 6 hours 10 min). It is not clear whether Hipparchus got the value from Babylonian astronomers or calculated it himself.
Orbit of the Sun.
Before Hipparchus, astronomers knew that the lengths of the seasons are not equal. Hipparchus made observations of equinox and solstice, and according to Ptolemy ("Almagest" III.4) determined that spring (from spring equinox to summer solstice) lasted 94+1⁄2 days, and summer (from summer solstice to autumn equinox) 92+1⁄2 days. This is inconsistent with a premise of the Sun moving around the Earth in a circle at uniform speed. Hipparchus's solution was to place the Earth not at the center of the Sun's motion, but at some distance from the center. This model described the apparent motion of the Sun fairly well. It is known today that the planets, including the Earth, move in approximate ellipses around the Sun, but this was not discovered until Johannes Kepler published his first two laws of planetary motion in 1609. The value for the eccentricity attributed to Hipparchus by Ptolemy is that the offset is 1⁄24 of the radius of the orbit (which is a little too large), and the direction of the apogee would be at longitude 65.5° from the vernal equinox. Hipparchus may also have used other sets of observations, which would lead to different values. One of his two eclipse trios' solar longitudes is consistent with his having initially adopted inaccurate lengths for spring and summer of 95+3⁄4 and 91+1⁄4 days. His other triplet of solar positions is consistent with 94+1⁄4 and 92+1⁄2 days, an improvement on the results (94+1⁄2 and 92+1⁄2 days) attributed to Hipparchus by Ptolemy. Ptolemy made no change three centuries later, and expressed lengths for the autumn and winter seasons which were already implicit (as shown, e.g., by A. Aaboe).
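The consistency of these numbers can be illustrated with a small numerical sketch (an assumed reconstruction, not Hipparchus's own method): placing the Sun on a circle whose center is offset from the Earth by 1⁄24 of the radius in the direction of longitude 65.5°, and letting it move uniformly, reproduces season lengths close to 94+1⁄2 and 92+1⁄2 days.

```python
import math

e = 1 / 24                    # offset of the circle's center, in units of its radius
apogee = math.radians(65.5)   # direction of the apogee from the vernal equinox
year = 365.25                 # length of the year in days
n = 360 / year                # mean motion in degrees per day

def true_longitude(L_deg):
    """Geocentric longitude of the Sun for mean longitude L on the eccentric circle."""
    L = math.radians(L_deg)
    x = e * math.cos(apogee) + math.cos(L)
    y = e * math.sin(apogee) + math.sin(L)
    return math.degrees(math.atan2(y, x)) % 360

def mean_longitude_at(target_deg):
    """Solve true_longitude(L) = target by bisection near the target."""
    lo, hi = target_deg - 10, target_deg + 10
    for _ in range(60):
        mid = (lo + hi) / 2
        diff = (true_longitude(mid) - target_deg + 180) % 360 - 180
        if diff < 0:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

L_ve = mean_longitude_at(0)     # vernal equinox
L_ss = mean_longitude_at(90)    # summer solstice
L_ae = mean_longitude_at(180)   # autumn equinox

print(f"spring: {(L_ss - L_ve) / n:.2f} days")  # about 94.5
print(f"summer: {(L_ae - L_ss) / n:.2f} days")  # about 92.5
```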
Distance, parallax, size of the Moon and the Sun.
Hipparchus also undertook to find the distances and sizes of the Sun and the Moon, in the now-lost work "On Sizes and Distances" (Greek: ). His work is mentioned in Ptolemy's "Almagest" V.11, and in a commentary thereon by Pappus; Theon of Smyrna (2nd century) also mentions the work, under the title "On Sizes and Distances of the Sun and Moon".
Hipparchus measured the apparent diameters of the Sun and Moon with his "diopter". Like others before and after him, he found that the Moon's size varies as it moves on its (eccentric) orbit, but he found no perceptible variation in the apparent diameter of the Sun. He found that at the "mean" distance of the Moon, the Sun and Moon had the same apparent diameter; at that distance, the Moon's diameter fits 650 times into the circle, i.e., the mean apparent diameters are 360⁄650 = 0°33′14″.
Like others before and after him, he also noticed that the Moon has a noticeable parallax, i.e., that it appears displaced from its calculated position (compared to the Sun or stars), and the difference is greater when closer to the horizon. He knew that this is because in the then-current models the Moon circles the center of the Earth, but the observer is at the surface—the Moon, Earth and observer form a triangle with a sharp angle that changes all the time. From the size of this parallax, the distance of the Moon as measured in Earth radii can be determined. For the Sun however, there was no observable parallax (we now know that it is about 8.8", several times smaller than the resolution of the unaided eye).
In the first book, Hipparchus assumes that the parallax of the Sun is 0, as if it is at infinite distance. He then analyzed a solar eclipse, which Toomer presumes to be the eclipse of 14 March 190 BC. It was total in the region of the Hellespont (and in his birthplace, Nicaea); at the time Toomer proposes the Romans were preparing for war with Antiochus III in the area, and the eclipse is mentioned by Livy in his "Ab Urbe Condita Libri" VIII.2. It was also observed in Alexandria, where the Sun was reported to be obscured 4/5ths by the Moon. Alexandria and Nicaea are on the same meridian. Alexandria is at about 31° North, and the region of the Hellespont about 40° North. (It has been contended that authors like Strabo and Ptolemy had fairly decent values for these geographical positions, so Hipparchus must have known them too. However, Strabo's Hipparchus dependent latitudes for this region are at least 1° too high, and Ptolemy appears to copy them, placing Byzantium 2° high in latitude.) Hipparchus could draw a triangle formed by the two places and the Moon, and from simple geometry was able to establish a distance of the Moon, expressed in Earth radii. Because the eclipse occurred in the morning, the Moon was not in the meridian, and it has been proposed that as a consequence the distance found by Hipparchus was a lower limit. In any case, according to Pappus, Hipparchus found that the least distance is 71 (from this eclipse), and the greatest 83 Earth radii.
In the second book, Hipparchus starts from the opposite extreme assumption: he assigns a (minimum) distance to the Sun of 490 Earth radii. This would correspond to a parallax of 7′, which is apparently the greatest parallax that Hipparchus thought would not be noticed (for comparison: the typical resolution of the human eye is about 2′; Tycho Brahe made naked eye observation with an accuracy down to 1′). In this case, the shadow of the Earth is a cone rather than a cylinder as under the first assumption. Hipparchus observed (at lunar eclipses) that at the mean distance of the Moon, the diameter of the shadow cone is 2+1⁄2 lunar diameters. That apparent diameter is, as he had observed, 360⁄650 degrees. With these values and simple geometry, Hipparchus could determine the mean distance; because it was computed for a minimum distance of the Sun, it is the maximum mean distance possible for the Moon. With his value for the eccentricity of the orbit, he could compute the least and greatest distances of the Moon too. According to Pappus, he found a least distance of 62, a mean of 67+1⁄3, and consequently a greatest distance of 72+2⁄3 Earth radii. With this method, as the parallax of the Sun decreases (i.e., its distance increases), the minimum limit for the mean distance is 59 Earth radii—exactly the mean distance that Ptolemy later derived.
Hipparchus thus had the problematic result that his minimum distance (from book 1) was greater than his maximum mean distance (from book 2). He was intellectually honest about this discrepancy, and probably realized that especially the first method is very sensitive to the accuracy of the observations and parameters. (In fact, modern calculations show that the size of the 189 BC solar eclipse at Alexandria must have been closer to 9⁄10ths and not the reported 4⁄5ths, a fraction more closely matched by the degree of totality at Alexandria of eclipses occurring in 310 and 129 BC which were also nearly total in the Hellespont and are thought by many to be more likely possibilities for the eclipse Hipparchus used for his computations.)
Ptolemy later measured the lunar parallax directly ("Almagest" V.13), and used the second method of Hipparchus with lunar eclipses to compute the distance of the Sun ("Almagest" V.15). He criticizes Hipparchus for making contradictory assumptions, and obtaining conflicting results ("Almagest" V.11): but apparently he failed to understand Hipparchus's strategy to establish limits consistent with the observations, rather than a single value for the distance. His results were the best so far: the actual mean distance of the Moon is 60.3 Earth radii, within his limits from Hipparchus's second book.
Theon of Smyrna wrote that according to Hipparchus, the Sun is 1,880 times the size of the Earth, and the Earth twenty-seven times the size of the Moon; apparently this refers to volumes, not diameters. From the geometry of book 2 it follows that the Sun is at 2,550 Earth radii, and the mean distance of the Moon is 60+1⁄2 radii. Similarly, Cleomedes quotes Hipparchus for the sizes of the Sun and Earth as 1050:1; this leads to a mean lunar distance of 61 radii. Apparently Hipparchus later refined his computations, and derived accurate single values that he could use for predictions of solar eclipses.
See Toomer (1974) for a more detailed discussion.
Eclipses.
Pliny ("Naturalis Historia" II.X) tells us that Hipparchus demonstrated that lunar eclipses can occur five months apart, and solar eclipses seven months (instead of the usual six months); and the Sun can be hidden twice in thirty days, but as seen by different nations. Ptolemy discussed this a century later at length in "Almagest" VI.6. The geometry, and the limits of the positions of Sun and Moon when a solar or lunar eclipse is possible, are explained in "Almagest" VI.5. Hipparchus apparently made similar calculations. The result that two solar eclipses can occur one month apart is important, because this can not be based on observations: one is visible on the northern and the other on the southern hemisphere—as Pliny indicates—and the latter was inaccessible to the Greek.
Prediction of a solar eclipse, i.e., exactly when and where it will be visible, requires a solid lunar theory and proper treatment of the lunar parallax. Hipparchus must have been the first to be able to do this. A rigorous treatment requires spherical trigonometry, thus those who remain certain that Hipparchus lacked it must speculate that he may have made do with planar approximations. He may have discussed these things in "Perí tēs katá plátos mēniaías tēs selēnēs kinēseōs" ("On the monthly motion of the Moon in latitude"), a work mentioned in the "Suda".
Pliny also remarks that "he also discovered for what exact reason, although the shadow causing the eclipse must from sunrise onward be below the earth, it happened once in the past that the Moon was eclipsed in the west while both luminaries were visible above the earth" (translation H. Rackham (1938), Loeb Classical Library 330 p. 207). Toomer argued that this must refer to the large total lunar eclipse of 26 November 139 BC, when over a clean sea horizon as seen from Rhodes, the Moon was eclipsed in the northwest just after the Sun rose in the southeast. This would be the second eclipse of the 345-year interval that Hipparchus used to verify the traditional Babylonian periods: this puts a late date to the development of Hipparchus's lunar theory. We do not know what "exact reason" Hipparchus found for seeing the Moon eclipsed while apparently it was not in exact opposition to the Sun. Parallax lowers the altitude of the luminaries; refraction raises them, and from a high point of view the horizon is lowered.
Astronomical instruments and astrometry.
Hipparchus and his predecessors used various instruments for astronomical calculations and observations, such as the gnomon, the astrolabe, and the armillary sphere.
Hipparchus is credited with the invention or improvement of several astronomical instruments, which were used for a long time for naked-eye observations. According to Synesius of Ptolemais (4th century) he made the first "astrolabion": this may have been an armillary sphere (which Ptolemy however says he constructed, in "Almagest" V.1); or the predecessor of the planar instrument called astrolabe (also mentioned by Theon of Alexandria). With an astrolabe Hipparchus was the first to be able to measure the geographical latitude and time by observing fixed stars. Previously this was done at daytime by measuring the shadow cast by a gnomon, by recording the length of the longest day of the year or with the portable instrument known as a "scaphe".
Ptolemy mentions ("Almagest" V.14) that he used a similar instrument as Hipparchus, called "dioptra", to measure the apparent diameter of the Sun and Moon. Pappus of Alexandria described it (in his commentary on the "Almagest" of that chapter), as did Proclus ("Hypotyposis" IV). It was a four-foot rod with a scale, a sighting hole at one end, and a wedge that could be moved along the rod to exactly obscure the disk of Sun or Moon.
Hipparchus also observed solar equinoxes, which may be done with an equatorial ring: its shadow falls on itself when the Sun is on the equator (i.e., in one of the equinoctial points on the ecliptic), but the shadow falls above or below the opposite side of the ring when the Sun is south or north of the equator. Ptolemy quotes (in "Almagest" III.1 (H195)) a description by Hipparchus of an equatorial ring in Alexandria; a little further he describes two such instruments present in Alexandria in his own time.
Hipparchus applied his knowledge of spherical angles to the problem of denoting locations on the Earth's surface. Before him a grid system had been used by Dicaearchus of Messana, but Hipparchus was the first to apply mathematical rigor to the determination of the latitude and longitude of places on the Earth. Hipparchus wrote a critique in three books on the work of the geographer Eratosthenes of Cyrene (3rd century BC), called "Pròs tèn Eratosthénous geographían" ("Against the Geography of Eratosthenes"). It is known to us from Strabo of Amaseia, who in his turn criticised Hipparchus in his own "Geographia". Hipparchus apparently made many detailed corrections to the locations and distances mentioned by Eratosthenes. It seems he did not introduce many improvements in methods, but he did propose a means to determine the geographical longitudes of different cities at lunar eclipses (Strabo, "Geographia", 1.1.12). A lunar eclipse is visible simultaneously on half of the Earth, and the difference in longitude between places can be computed from the difference in local time when the eclipse is observed. His approach would give accurate results if it were correctly carried out, but the limitations of timekeeping accuracy in his era made this method impractical.
Star catalog.
Late in his career (possibly about 135 BC) Hipparchus compiled his star catalog. Scholars have been searching for it for centuries. In 2022, it was announced that a part of it was discovered in a medieval parchment manuscript, Codex Climaci Rescriptus, from Saint Catherine's Monastery in the Sinai Peninsula, Egypt as hidden text (palimpsest).
Hipparchus also constructed a celestial globe depicting the constellations, based on his observations. His interest in the fixed stars may have been inspired by the observation of a supernova (according to Pliny), or by his discovery of precession, according to Ptolemy, who says that Hipparchus could not reconcile his data with earlier observations made by Timocharis and Aristillus. For more information see Discovery of precession. In Raphael's painting "The School of Athens", Hipparchus may be depicted holding his celestial globe, as the representative figure for astronomy. It is not certain that the figure is meant to represent him.
Previously, Eudoxus of Cnidus in the fourth century BC had described the stars and constellations in two books called "Phaenomena" and "Enoptron". Aratus wrote a poem called "Phaenomena" or "Arateia" based on Eudoxus's work. Hipparchus wrote a commentary on the "Arateia"—his only preserved work—which contains many stellar positions and times for rising, culmination, and setting of the constellations, and these are likely to have been based on his own measurements.
According to Roman sources, Hipparchus made his measurements with a scientific instrument and he obtained the positions of roughly 850 stars. Pliny the Elder writes in book II, 24–26 of his Natural History:
<templatestyles src="Template:Blockquote/styles.css" />This same Hipparchus, who can never be sufficiently commended, ... discovered a new star that was produced in his own age, and, by observing its motions on the day in which it shone, he was led to doubt whether it does not often happen, that those stars have motion which we suppose to be fixed. And the same individual attempted, what might seem presumptuous even in a deity, viz. to number the stars for posterity and to express their relations by appropriate names; having previously devised instruments, by which he might mark the places and the magnitudes of each individual star. In this way it might be easily discovered, not only whether they were destroyed or produced, but whether they changed their relative positions, and likewise, whether they were increased or diminished; the heavens being thus left as an inheritance to any one, who might be found competent to complete his plan.
This passage reports that Hipparchus, prompted by the appearance of a new star, came to suspect that the fixed stars might move, and therefore undertook to record the positions and magnitudes of the stars with instruments he had devised, so that later observers could detect any changes.
It is unknown what instrument he used. The armillary sphere was probably invented only later—maybe by Ptolemy 265 years after Hipparchus. The historian of science S. Hoffmann found clues that Hipparchus may have observed the longitudes and latitudes in different coordinate systems and, thus, with different instrumentation. Right ascensions, for instance, could have been observed with a clock, while angular separations could have been measured with another device.
Stellar magnitude.
Hipparchus is conjectured to have ranked the apparent magnitudes of stars on a numerical scale from 1, the brightest, to 6, the faintest. This hypothesis is based on the vague statement by Pliny the Elder but cannot be proven by the data in Hipparchus's commentary on Aratus's poem. In this only work by his hand that has survived until today, he does not use the magnitude scale but estimates brightnesses unsystematically. However, this does not prove or disprove anything because the commentary might be an early work while the magnitude scale could have been introduced later.
Nevertheless, this system certainly precedes Ptolemy, who used it extensively about AD 150. This system was made more precise and extended by N. R. Pogson in 1856, who placed the magnitudes on a logarithmic scale, making magnitude 1 stars 100 times brighter than magnitude 6 stars; thus each magnitude is the fifth root of 100, or about 2.512, times brighter than the next faintest magnitude.
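A small sketch (illustrative, not from the source) of the Pogson relation implied here, in which a difference of Δm magnitudes corresponds to a brightness ratio of 100^(Δm/5):

```python
def brightness_ratio(delta_mag):
    """Brightness ratio corresponding to a magnitude difference (Pogson, 1856)."""
    return 100 ** (delta_mag / 5)

print(brightness_ratio(1))  # ~2.512: one magnitude step
print(brightness_ratio(5))  # 100.0:  magnitude 1 vs. magnitude 6
```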
Coordinate System.
It is disputed which coordinate system(s) he used. Ptolemy's catalog in the "Almagest", which is derived from Hipparchus's catalog, is given in ecliptic coordinates. Although Hipparchus strictly distinguishes between "signs" (30° sections of the zodiac) and "constellations" in the zodiac, it is highly questionable whether he had an instrument to directly observe or measure units on the ecliptic. He probably marked them as a unit on his celestial globe, but the instrumentation for his observations is unknown.
Delambre, in 1817, concluded that Hipparchus knew and used the equatorial coordinate system, a conclusion challenged by Otto Neugebauer in his "History of Ancient Mathematical Astronomy" (1975). Hipparchus seems to have used a mix of ecliptic coordinates and equatorial coordinates: in his commentary on Eudoxus he provides stars' polar distance (equivalent to the declination in the equatorial system), right ascension (equatorial), longitude (ecliptic), and polar longitude (hybrid), but not celestial latitude. This opinion was confirmed by the careful investigation of Hoffmann, who independently studied the material, potential sources, techniques and results of Hipparchus and reconstructed his celestial globe and its making.
As with most of his work, Hipparchus's star catalog was adopted and perhaps expanded by Ptolemy, who has (since Brahe in 1598) been accused by some of fraud for stating ("Syntaxis", book 7, chapter 4) that he observed all 1025 stars—critics claim that, for almost every star, he used Hipparchus's data and precessed it to his own epoch 2+2⁄3 centuries later by adding 2°40' to the longitude, using an erroneously small precession constant of 1° per century. This claim is highly exaggerated because it applies modern standards of citation to an ancient author. It is true only that "the ancient star catalogue" that was initiated by Hipparchus in the second century BC was reworked and improved multiple times in the 265 years to the Almagest (which is good scientific practice even today). Although the Almagest star catalogue is based upon Hipparchus's, it is not a blind copy but was enriched, enhanced, and thus (at least partially) re-observed.
Celestial globe.
Hipparchus's celestial globe was an instrument similar to modern electronic computers. He used it to determine risings, settings and culminations (cf. also Almagest, book VIII, chapter 3). Therefore, his globe was mounted in a horizontal plane and had a meridian ring with a scale. In combination with a grid that divided the celestial equator into 24 hour lines (longitudes equalling our right ascension hours), the instrument allowed him to determine the hours. The ecliptic was marked and divided into 12 sections of equal length (the "signs"), which he designated by a separate term in order to distinguish them from the constellations. The globe was virtually reconstructed by a historian of science.
Arguments for and against Hipparchus's star catalog in the Almagest.
For:
Against:
Conclusion: Hipparchus's star catalogue is one of the sources of the Almagest star catalogue but not the only source.
Precession of the equinoxes (146–127 BC).
Hipparchus is generally recognized as discoverer of the precession of the equinoxes in 127 BC. His two books on precession, "On the Displacement of the Solstitial and Equinoctial Points" and "On the Length of the Year", are both mentioned in the "Almagest" of Claudius Ptolemy. According to Ptolemy, Hipparchus measured the longitude of Spica and Regulus and other bright stars. Comparing his measurements with data from his predecessors, Timocharis and Aristillus, he concluded that Spica had moved 2° relative to the autumnal equinox. He also compared the lengths of the tropical year (the time it takes the Sun to return to an equinox) and the sidereal year (the time it takes the Sun to return to a fixed star), and found a slight discrepancy. Hipparchus concluded that the equinoxes were moving ("precessing") through the zodiac, and that the rate of precession was not less than 1° in a century.
Geography.
Hipparchus's treatise "Against the Geography of Eratosthenes" in three books is not preserved.
Most of our knowledge of it comes from Strabo, according to whom Hipparchus thoroughly and often unfairly criticized Eratosthenes, mainly for internal contradictions and inaccuracy in determining positions of geographical localities. Hipparchus insists that a geographic map must be based only on astronomical measurements of latitudes and longitudes and triangulation for finding unknown distances.
In geographic theory and methods Hipparchus introduced three main innovations.
He was the first to use the grade grid; to determine geographic latitude from star observations, and not only from the Sun's altitude, a method known long before him; and to suggest that geographic longitude could be determined by means of simultaneous observations of lunar eclipses in distant places. In the practical part of his work, the so-called "table of climata", Hipparchus listed latitudes for several tens of localities. In particular, he improved Eratosthenes' values for the latitudes of Athens, Sicily, and the southern extremity of India. In calculating latitudes of climata (latitudes correlated with the length of the longest solstitial day), Hipparchus used an unexpectedly accurate value for the obliquity of the ecliptic, 23°40' (the actual value in the second half of the second century BC was approximately 23°43'), whereas all other ancient authors knew only a roughly rounded value 24°, and even Ptolemy used a less accurate value, 23°51'.
Hipparchus opposed the view generally accepted in the Hellenistic period that the Atlantic and Indian Oceans and the Caspian Sea are parts of a single ocean. At the same time he extends the limits of the oikoumene, i.e. the inhabited part of the land, up to the equator and the Arctic Circle. Hipparchus's ideas found their reflection in the "Geography" of Ptolemy. In essence, Ptolemy's work is an extended attempt to realize Hipparchus's vision of what geography ought to be.
Modern speculation.
Hipparchus was in the international news in 2005, when it was again proposed (as in 1898) that the data on the celestial globe of Hipparchus or in his star catalog may have been preserved in the only surviving large ancient celestial globe which depicts the constellations with moderate accuracy, the globe carried by the Farnese Atlas. Evidence suggests that the Farnese globe may show constellations in the Aratean tradition and deviate from the constellations used by Hipparchus.
A line in Plutarch's "Table Talk" states that Hipparchus counted 103,049 compound propositions that can be formed from ten simple propositions. 103,049 is the tenth Schröder–Hipparchus number, which counts the number of ways of adding one or more pairs of parentheses around consecutive subsequences of two or more items in any sequence of ten symbols. This has led to speculation that Hipparchus knew about enumerative combinatorics, a field of mathematics that developed independently in modern mathematics.
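For illustration (a sketch based on the standard recurrence for these numbers, not on Plutarch's text), the tenth Schröder–Hipparchus number can be computed as follows:

```python
def schroeder_hipparchus(n):
    """Return the first n Schroeder-Hipparchus (little Schroeder) numbers,
    using the recurrence (k+1)*s(k+1) = (6k-3)*s(k) - (k-2)*s(k-1)."""
    s = [1, 1]                      # s(1) = 1, s(2) = 1
    for k in range(2, n):
        s.append(((6 * k - 3) * s[-1] - (k - 2) * s[-2]) // (k + 1))
    return s[:n]

print(schroeder_hipparchus(10))
# [1, 1, 3, 11, 45, 197, 903, 4279, 20793, 103049] -- Plutarch's 103,049 is the tenth
```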
A 2013 paper suggested that Hipparchus may have accidentally observed the planet Uranus in 128 BC and catalogued it as a star, over a millennium and a half before its formal discovery in 1781.
Legacy.
Hipparchus may be depicted opposite Ptolemy in Raphael's 1509–1511 painting "The School of Athens", although this figure is usually identified as Zoroaster.
The formal name for the ESA's Hipparcos Space Astrometry Mission is High Precision Parallax Collecting Satellite, making a backronym, HiPParCoS, that echoes and commemorates the name of Hipparchus.
The lunar crater Hipparchus, the Martian crater Hipparchus, and the asteroid 4000 Hipparchus are named after him.
He was inducted into the International Space Hall of Fame in 2004.
Jean Baptiste Joseph Delambre, historian of astronomy, mathematical astronomer and director of the Paris Observatory, in his history of astronomy in the 18th century (1821), considered Hipparchus along with Johannes Kepler and James Bradley the greatest astronomers of all time.
The "Astronomers Monument" at the Griffith Observatory in Los Angeles, California, United States features a relief of Hipparchus as one of six of the greatest astronomers of all time and the only one from Antiquity.
Johannes Kepler had great respect for Tycho Brahe's methods and the accuracy of his observations, and considered him to be the new Hipparchus, who would provide the foundation for a restoration of the science of astronomy.
{
"math_id": 0,
"text": "\\operatorname{chord} \\theta = 2R \\cdot \\sin\\tfrac12\\theta"
}
]
| https://en.wikipedia.org/wiki?curid=13600 |
1360091 | Backpropagation | Optimization algorithm for artificial neural networks
<templatestyles src="Machine learning/styles.css"/>
In machine learning, backpropagation is a gradient estimation method commonly used to train neural networks; it computes the gradient of the loss that is used for the network parameter updates.
It is an efficient application of the chain rule to neural networks. Backpropagation computes the gradient of a loss function with respect to the weights of the network for a single input–output example, and does so efficiently, computing the gradient one layer at a time, iterating backward from the last layer to avoid redundant calculations of intermediate terms in the chain rule; this can be derived through dynamic programming.
Strictly speaking, the term "backpropagation" refers only to an algorithm for efficiently computing the gradient, not how the gradient is used; but the term is often used loosely to refer to the entire learning algorithm – including how the gradient is used, such as by stochastic gradient descent, or as an intermediate step in a more complicated optimizer, such as Adam.
Backpropagation had multiple discoveries and partial discoveries, with a tangled history and terminology. See the history section for details. Some other names for the technique include "reverse mode of automatic differentiation" or "reverse accumulation".
Overview.
Backpropagation computes the gradient in weight space of a feedforward neural network, with respect to a loss function. Denote:
In the derivation of backpropagation, other intermediate quantities are used by introducing them as needed below. Bias terms are not treated specially since they correspond to a weight with a fixed input of 1. For backpropagation the specific loss function and activation functions do not matter as long as they and their derivatives can be evaluated efficiently. Traditional activation functions include the sigmoid, tanh, and ReLU; Swish, Mish, and other activation functions have since been proposed as well.
The overall network is a combination of function composition and matrix multiplication:
formula_15
For a training set there will be a set of input–output pairs, formula_16. For each input–output pair formula_17 in the training set, the loss of the model on that pair is the cost of the difference between the predicted output formula_18 and the target output formula_19:
formula_20
Note the distinction: during model evaluation the weights are fixed while the inputs vary (and the target output may be unknown), and the network ends with the output layer (it does not include the loss function). During model training the input–output pair is fixed while the weights vary, and the network ends with the loss function.
Backpropagation computes the gradient for a "fixed" input–output pair formula_17, where the weights formula_9 can vary. Each individual component of the gradient, formula_21 can be computed by the chain rule; but doing this separately for each weight is inefficient. Backpropagation efficiently computes the gradient by avoiding duplicate calculations and not computing unnecessary intermediate values, by computing the gradient of each layer – specifically the gradient of the weighted "input" of each layer, denoted by formula_22 – from back to front.
Informally, the key point is that since the only way a weight in formula_12 affects the loss is through its effect on the "next" layer, and it does so "linearly", formula_22 are the only data you need to compute the gradients of the weights at layer formula_8; the gradient for the previous layer, formula_23, can then be computed from it, and so on recursively. This avoids inefficiency in two ways. First, it avoids duplication because when computing the gradient at layer formula_8, it is unnecessary to recompute all derivatives on later layers formula_24 each time. Second, it avoids unnecessary intermediate calculations, because at each stage it directly computes the gradient of the weights with respect to the ultimate output (the loss), rather than unnecessarily computing the derivatives of the values of hidden layers with respect to changes in weights formula_25.
Backpropagation can be expressed for simple feedforward networks in terms of matrix multiplication, or more generally in terms of the adjoint graph.
Matrix multiplication.
For the basic case of a feedforward network, where nodes in each layer are connected only to nodes in the immediate next layer (without skipping any layers), and there is a loss function that computes a scalar loss for the final output, backpropagation can be understood simply by matrix multiplication. Essentially, backpropagation evaluates the expression for the derivative of the cost function as a product of derivatives between each layer "from right to left" – "backwards" – with the gradient of the weights between each layer being a simple modification of the partial products (the "backwards propagated error").
Given an input–output pair formula_26, the loss is:
formula_27
To compute this, one starts with the input formula_0 and works forward; denote the weighted input of each hidden layer as formula_28 and the output of hidden layer formula_8 as the activation formula_29. For backpropagation, the activation formula_29 as well as the derivatives formula_30 (evaluated at formula_28) must be cached for use during the backwards pass.
The derivative of the loss in terms of the inputs is given by the chain rule; note that each term is a total derivative, evaluated at the value of the network (at each node) on the input formula_0:
formula_31
where formula_32 is a diagonal matrix.
These terms are: the derivative of the loss function; the derivatives of the activation functions; and the matrices of weights:
formula_33
The gradient formula_34 is the transpose of the derivative of the output in terms of the input, so the matrices are transposed and the order of multiplication is reversed, but the entries are the same:
formula_35
Backpropagation then consists essentially of evaluating this expression from right to left (equivalently, multiplying the previous expression for the derivative from left to right), computing the gradient at each layer on the way; there is an added step, because the gradient of the weights is not just a subexpression: there's an extra multiplication.
Introducing the auxiliary quantity formula_22 for the partial products (multiplying from right to left), interpreted as the "error at level formula_8" and defined as the gradient of the input values at level formula_8:
formula_36
Note that formula_22 is a vector, of length equal to the number of nodes in level formula_8; each component is interpreted as the "cost attributable to (the value of) that node".
The gradient of the weights in layer formula_8 is then:
formula_37
The factor of formula_38 is because the weights formula_12 between level formula_7 and formula_8 affect level formula_8 proportionally to the inputs (activations): the inputs are fixed, the weights vary.
The formula_22 can easily be computed recursively, going from right to left, as:
formula_39
The gradients of the weights can thus be computed using a few matrix multiplications for each level; this is backpropagation.
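As a concrete illustration, here is a minimal NumPy sketch of these matrix-multiplication formulas for a small fully connected network with a squared-error loss and sigmoid activations (the layer sizes, names, and data are assumptions for the example, not taken from the article); a finite-difference check at the end confirms one of the computed gradients.

```python
import numpy as np

rng = np.random.default_rng(0)
sizes = [3, 4, 2]                       # input, hidden, output layer widths
W = [rng.normal(size=(m, n)) for n, m in zip(sizes[:-1], sizes[1:])]
b = [np.zeros(m) for m in sizes[1:]]

sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))
dsigmoid = lambda z: sigmoid(z) * (1.0 - sigmoid(z))

x = rng.normal(size=3)                  # one input example
y = np.array([0.0, 1.0])                # its target output

# Forward pass: cache weighted inputs z[l] and activations a[l].
a, zs = [x], []
for Wl, bl in zip(W, b):
    zs.append(Wl @ a[-1] + bl)
    a.append(sigmoid(zs[-1]))

# Backward pass: delta at the output layer, then propagate right to left.
delta = (a[-1] - y) * dsigmoid(zs[-1])          # dC/dz for C = 1/2 ||a - y||^2
grad_W = [None] * len(W)
grad_b = [None] * len(W)
for l in range(len(W) - 1, -1, -1):
    grad_W[l] = np.outer(delta, a[l])           # dC/dW[l] = delta (a^{l-1})^T
    grad_b[l] = delta
    if l > 0:
        delta = (W[l].T @ delta) * dsigmoid(zs[l - 1])

# Numerical check of one weight's gradient by finite differences.
def loss(weights):
    out = x
    for Wl, bl in zip(weights, b):
        out = sigmoid(Wl @ out + bl)
    return 0.5 * np.sum((out - y) ** 2)

eps = 1e-6
W_pert = [w.copy() for w in W]
W_pert[0][0, 0] += eps
print(grad_W[0][0, 0], (loss(W_pert) - loss(W)) / eps)   # the two numbers should agree
```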
Compared with naively computing forwards (using the formula_22 for illustration):
formula_40
There are two key differences with backpropagation: first, computing formula_23 in terms of formula_22 avoids the obvious duplicate multiplication of the later layers for every weight; second, multiplying starting from the loss gradient means that each step multiplies a vector by a matrix, rather than a matrix by a matrix, which is far cheaper and avoids forming the unneeded derivatives of the hidden-layer values with respect to the weights.
Adjoint graph.
For more general graphs, and other advanced variations, backpropagation can be understood in terms of automatic differentiation, where backpropagation is a special case of reverse accumulation (or "reverse mode").
Intuition.
Motivation.
The goal of any supervised learning algorithm is to find a function that best maps a set of inputs to their correct output. The motivation for backpropagation is to train a multi-layered neural network such that it can learn the appropriate internal representations to allow it to learn any arbitrary mapping of input to output.
Learning as an optimization problem.
To understand the mathematical derivation of the backpropagation algorithm, it helps to first develop some intuition about the relationship between the actual output of a neuron and the correct output for a particular training example. Consider a simple neural network with two input units, one output unit and no hidden units, and in which each neuron uses a linear output (unlike most work on neural networks, in which mapping from inputs to outputs is non-linear) that is the weighted sum of its input.
Initially, before training, the weights will be set randomly. Then the neuron learns from training examples, which in this case consist of a set of tuples formula_47 where formula_48 and formula_49 are the inputs to the network and t is the correct output (the output the network should produce given those inputs, when it has been trained). The initial network, given formula_48 and formula_49, will compute an output y that likely differs from t (given random weights). A loss function formula_50 is used for measuring the discrepancy between the target output t and the computed output y. For regression analysis problems the squared error can be used as a loss function; for classification, the categorical cross-entropy can be used.
As an example consider a regression problem using the square error as a loss:
formula_51
where E is the discrepancy or error.
Consider the network on a single training case: formula_52. Thus, the input formula_48 and formula_49 are 1 and 1 respectively and the correct output, t is 0. Now if the relation is plotted between the network's output y on the horizontal axis and the error E on the vertical axis, the result is a parabola. The minimum of the parabola corresponds to the output y which minimizes the error E. For a single training case, the minimum also touches the horizontal axis, which means the error will be zero and the network can produce an output y that exactly matches the target output t. Therefore, the problem of mapping inputs to outputs can be reduced to an optimization problem of finding a function that will produce the minimal error.
However, the output of a neuron depends on the weighted sum of all its inputs:
formula_53
where formula_54 and formula_55 are the weights on the connection from the input units to the output unit. Therefore, the error also depends on the incoming weights to the neuron, which is ultimately what needs to be changed in the network to enable learning.
In this example, upon injecting the training data formula_52, the loss function becomes
formula_56
Then, the loss function formula_57 takes the form of a parabolic cylinder with its base directed along formula_58. Since all sets of weights that satisfy formula_58 minimize the loss function, in this case additional constraints are required to converge to a unique solution. Additional constraints could either be generated by setting specific conditions to the weights, or by injecting additional training data.
One commonly used algorithm to find the set of weights that minimizes the error is gradient descent. By backpropagation, the steepest descent direction is calculated of the loss function versus the present synaptic weights. Then, the weights can be modified along the steepest descent direction, and the error is minimized in an efficient way.
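To make this concrete, the following Python sketch (an illustration, not code from any cited source) runs plain gradient descent on the two-weight linear neuron above, using the training case formula_52 so that the loss is formula_56; the initial weights and learning rate are arbitrary assumptions.

```python
import numpy as np

x1, x2, t = 1.0, 1.0, 0.0              # the single training case (1, 1, 0)
w = np.array([0.3, -0.8])              # arbitrary (assumed) initial weights
eta = 0.1                              # learning rate

for _ in range(50):
    y = x1 * w[0] + x2 * w[1]                    # linear neuron: y = x1*w1 + x2*w2
    grad = 2.0 * (y - t) * np.array([x1, x2])    # dE/dw for E = (t - y)^2
    w = w - eta * grad                           # step along the steepest-descent direction

print(w, (t - (x1 * w[0] + x2 * w[1])) ** 2)     # error ~ 0, with w1 + w2 ~ 0
```

Consistent with the parabolic-cylinder picture, the run converges to a pair of weights satisfying formula_58 rather than to a unique weight vector.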
Derivation.
The gradient descent method involves calculating the derivative of the loss function with respect to the weights of the network. This is normally done using backpropagation. Assuming one output neuron, the squared error function is
formula_59
where
formula_5 is the loss for the output formula_1 and target value formula_60,
formula_60 is the target output for a training sample, and
formula_1 is the actual output of the output neuron.
For each neuron formula_11, its output formula_61 is defined as
formula_62
where the activation function formula_63 is non-linear and differentiable over the activation region (the ReLU is not differentiable at one point). A historically used activation function is the logistic function:
formula_64
which has a convenient derivative of:
formula_65
The input formula_66 to a neuron is the weighted sum of outputs formula_67 of previous neurons. If the neuron is in the first layer after the input layer, the formula_67 of the input layer are simply the inputs formula_68 to the network. The number of input units to the neuron is formula_69. The variable formula_70 denotes the weight between neuron formula_10 of the previous layer and neuron formula_11 of the current layer.
Finding the derivative of the error.
Calculating the partial derivative of the error with respect to a weight formula_71 is done using the chain rule twice:
In the last factor of the right-hand side of the above, only one term in the sum formula_66 depends on formula_71, so that
If the neuron is in the first layer after the input layer, formula_72 is just formula_73.
The derivative of the output of neuron formula_11 with respect to its input is simply the partial derivative of the activation function:
which for the logistic activation function
formula_74
This is the reason why backpropagation requires that the activation function be differentiable. (Nevertheless, the ReLU activation function, which is non-differentiable at 0, has become quite popular, e.g. in AlexNet.)
The first factor is straightforward to evaluate if the neuron is in the output layer, because then formula_75 and
If half of the square error is used as the loss function, we can rewrite it as
formula_76
However, if formula_11 is in an arbitrary inner layer of the network, finding the derivative formula_57 with respect to formula_61 is less obvious.
Considering formula_57 as a function with the inputs being all neurons formula_77 receiving input from neuron formula_11,
formula_78
and taking the total derivative with respect to formula_61, a recursive expression for the derivative is obtained:
Therefore, the derivative with respect to formula_61 can be calculated if all the derivatives with respect to the outputs formula_79 of the next layer – the ones closer to the output neuron – are known. [Note, if any of the neurons in set formula_5 were not connected to neuron formula_11, they would be independent of formula_71 and the corresponding partial derivative under the summation would vanish to 0.]
Substituting Eq. 2, Eq. 3, Eq. 4 and Eq. 5 in Eq. 1 we obtain:
formula_80
formula_81
with
formula_82
if formula_63 is the logistic function, and the error is the square error:
formula_83
To update the weight formula_71 using gradient descent, one must choose a learning rate, formula_84. The change in weight needs to reflect the impact on formula_57 of an increase or decrease in formula_71. If formula_85, an increase in formula_71 increases formula_57; conversely, if formula_86, an increase in formula_71 decreases formula_57. The new formula_87 is added to the old weight, and the product of the learning rate and the gradient, multiplied by formula_88 guarantees that formula_71 changes in a way that always decreases formula_57. In other words, in the equation immediately below, formula_89 always changes formula_71 in such a way that formula_57 is decreased:
formula_90
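As an illustration of this update rule, the following Python sketch (not from any cited source) trains a single logistic output neuron with the delta rule formula_90; the input vector, target, initial weights, learning rate and iteration count are assumptions chosen only for demonstration.

```python
import numpy as np

def logistic(z):
    return 1.0 / (1.0 + np.exp(-z))

x = np.array([0.5, -1.0, 0.25])        # assumed outputs o_i of the previous layer
t = 0.2                                # assumed target output
w = np.zeros(3)                        # weights w_ij into output neuron j
eta = 2.0                              # learning rate

for _ in range(500):
    net = w @ x                        # net_j = sum_i w_ij * o_i
    o = logistic(net)                  # o_j = phi(net_j)
    delta = (o - t) * o * (1.0 - o)    # delta_j for an output neuron with half square error
    w -= eta * delta * x               # Delta w_ij = -eta * o_i * delta_j

print(logistic(w @ x))                 # close to the target 0.2
```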
Second-order gradient descent.
Using a Hessian matrix of second-order derivatives of the error function, the Levenberg–Marquardt algorithm often converges faster than first-order gradient descent, especially when the topology of the error function is complicated. It may also find solutions in smaller node counts for which other methods might not converge. The Hessian can be approximated by the Fisher information matrix.
Loss function.
The loss function is a function that maps values of one or more variables onto a real number intuitively representing some "cost" associated with those values. For backpropagation, the loss function calculates the difference between the network output and its expected output, after a training example has propagated through the network.
Assumptions.
The mathematical expression of the loss function must fulfill two conditions in order for it to be possibly used in backpropagation. The first is that it can be written as an average formula_91 over error functions formula_92, for formula_93 individual training examples, formula_94. The reason for this assumption is that the backpropagation algorithm calculates the gradient of the error function for a single training example, which needs to be generalized to the overall error function. The second assumption is that it can be written as a function of the outputs from the neural network.
Example loss function.
Let formula_95 be vectors in formula_96.
Select an error function formula_97 measuring the difference between two outputs. The standard choice is the square of the Euclidean distance between the vectors formula_1 and formula_98:formula_99The error function over formula_93 training examples can then be written as an average of losses over individual examples:formula_100
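A brief Python sketch of these two assumptions, averaging a per-example loss written in terms of the network outputs, using the squared Euclidean distance formula_99; the outputs and targets below are made-up values.

```python
import numpy as np

def example_loss(y, y_prime):
    """Half the squared Euclidean distance between an output vector and its target."""
    return 0.5 * np.sum((y - y_prime) ** 2)

# made-up network outputs and target outputs for n = 2 training examples
Y = np.array([[0.1, 0.7, 0.2], [0.3, 0.3, 0.4]])
T = np.array([[0.0, 1.0, 0.0], [0.0, 0.0, 1.0]])
E = np.mean([example_loss(y, t) for y, t in zip(Y, T)])   # average over the n examples
print(E)   # 0.17
```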
History.
Precursors.
Backpropagation had been derived repeatedly, as it is essentially an efficient application of the chain rule (first written down by Gottfried Wilhelm Leibniz in 1676) to neural networks.
The terminology "back-propagating error correction" was introduced in 1962 by Frank Rosenblatt, but he did not know how to implement this. In any case, he only studied neurons whose outputs were discrete levels, which only had zero derivatives, making backpropagation impossible.
Precursors to backpropagation appeared in optimal control theory from the 1950s onward. Yann LeCun et al. credit 1950s work by Pontryagin and others in optimal control theory, especially the adjoint state method, for being a continuous-time version of backpropagation. Hecht-Nielsen credits the Robbins–Monro algorithm (1951) and Arthur Bryson and Yu-Chi Ho's "Applied Optimal Control" (1969) as presages of backpropagation. Other precursors were Henry J. Kelley (1960) and Arthur E. Bryson (1961). In 1962, Stuart Dreyfus published a simpler derivation based only on the chain rule. In 1973, he adapted parameters of controllers in proportion to error gradients. Unlike modern backpropagation, these precursors used standard Jacobian matrix calculations from one stage to the previous one, neither addressing direct links across several stages nor potential additional efficiency gains due to network sparsity.
The ADALINE (1960) learning algorithm was gradient descent with a squared error loss for a single layer. The first multilayer perceptron (MLP) with more than one layer trained by stochastic gradient descent was published in 1967 by Shun'ichi Amari. The MLP had 5 layers, with 2 learnable layers, and it learned to classify patterns not linearly separable.
Modern backpropagation.
Modern backpropagation was first published by Seppo Linnainmaa as "reverse mode of automatic differentiation" (1970) for discrete connected networks of nested differentiable functions.
In 1982, Paul Werbos applied backpropagation to MLPs in the way that has become standard. Werbos described how he developed backpropagation in an interview. In 1971, during his PhD work, he developed backpropagation to mathematicize Freud's "flow of psychic energy". He faced repeated difficulty in publishing the work, only managing in 1981.
Around 1982, David E. Rumelhart independently developed backpropagation and taught the algorithm to others in his research circle. He did not cite previous work, as he was unaware of it. He published the algorithm first in a 1985 paper, then presented an experimental analysis of the technique in a 1986 "Nature" paper. These papers became highly cited, contributed to the popularization of backpropagation, and coincided with the resurging research interest in neural networks during the 1980s.
In 1985, the method was also described by David Parker. Yann LeCun proposed an alternative form of backpropagation for neural networks in his PhD thesis in 1987.
Gradient descent took a considerable amount of time to reach acceptance. Some early objections were: there were no guarantees that gradient descent could reach a global minimum rather than only a local minimum; and neurons were "known" by physiologists to emit discrete signals (0/1), not continuous ones, and with discrete signals there is no gradient to take. See the interview with Geoffrey Hinton.
Early successes.
Contributing to the acceptance were several applications in training neural networks via backpropagation, sometimes achieving popularity outside the research circles.
In 1987, NETtalk learned to convert English text into pronunciation. Sejnowski tried training it with both backpropagation and a Boltzmann machine, but found backpropagation significantly faster, so he used it for the final NETtalk. The NETtalk program became a popular success, appearing on the "Today" show.
In 1989, Dean A. Pomerleau published ALVINN, a neural network trained to drive autonomously using backpropagation.
The LeNet was published in 1989 to recognize handwritten zip codes.
In 1992, TD-Gammon achieved top human level play in backgammon. It was a reinforcement learning agent with a neural network with two layers, trained by backpropagation.
In 1993, Eric Wan won an international pattern recognition contest through backpropagation.
After backpropagation.
During the 2000s it fell out of favour, but returned in the 2010s, benefiting from cheap, powerful GPU-based computing systems. This has been especially so in speech recognition, machine vision, natural language processing, and language structure learning research (in which it has been used to explain a variety of phenomena related to first and second language learning.)
Error backpropagation has been suggested to explain human brain event-related potential (ERP) components like the N400 and P600.
In 2023, a backpropagation algorithm was implemented on a photonic processor by a team at Stanford University.
Notes.
References.
| [
{
"math_id": 0,
"text": "x"
},
{
"math_id": 1,
"text": "y"
},
{
"math_id": 2,
"text": "(0.1, 0.7, 0.2)"
},
{
"math_id": 3,
"text": "(0, 1, 0)"
},
{
"math_id": 4,
"text": "C"
},
{
"math_id": 5,
"text": "L"
},
{
"math_id": 6,
"text": "W^l = (w^l_{jk})"
},
{
"math_id": 7,
"text": "l - 1"
},
{
"math_id": 8,
"text": "l"
},
{
"math_id": 9,
"text": "w^l_{jk}"
},
{
"math_id": 10,
"text": "k"
},
{
"math_id": 11,
"text": "j"
},
{
"math_id": 12,
"text": "W^l"
},
{
"math_id": 13,
"text": "f^l"
},
{
"math_id": 14,
"text": "a^l_j"
},
{
"math_id": 15,
"text": "g(x) := f^L(W^L f^{L-1}(W^{L-1} \\cdots f^1(W^1 x)\\cdots))"
},
{
"math_id": 16,
"text": "\\left\\{(x_i, y_i)\\right\\}"
},
{
"math_id": 17,
"text": "(x_i, y_i)"
},
{
"math_id": 18,
"text": "g(x_i)"
},
{
"math_id": 19,
"text": "y_i"
},
{
"math_id": 20,
"text": "C(y_i, g(x_i))"
},
{
"math_id": 21,
"text": "\\partial C/\\partial w^l_{jk},"
},
{
"math_id": 22,
"text": "\\delta^l"
},
{
"math_id": 23,
"text": "\\delta^{l-1}"
},
{
"math_id": 24,
"text": "l+1, l+2, \\ldots"
},
{
"math_id": 25,
"text": "\\partial a^{l'}_{j'}/\\partial w^l_{jk}"
},
{
"math_id": 26,
"text": "(x, y)"
},
{
"math_id": 27,
"text": "C(y, f^L(W^L f^{L-1}(W^{L-1} \\cdots f^2(W^2 f^1(W^1 x))\\cdots)))"
},
{
"math_id": 28,
"text": "z^l"
},
{
"math_id": 29,
"text": "a^l"
},
{
"math_id": 30,
"text": "(f^l)'"
},
{
"math_id": 31,
"text": "\\frac{d C}{d a^L}\\cdot \\frac{d a^L}{d z^L} \\cdot \\frac{d z^L}{d a^{L-1}} \\cdot \\frac{d a^{L-1}}{d z^{L-1}}\\cdot \\frac{d z^{L-1}}{d a^{L-2}} \\cdot \\ldots \\cdot \\frac{d a^1}{d z^1} \\cdot \\frac{\\partial z^1}{\\partial x},"
},
{
"math_id": 32,
"text": "\\frac{d a^L}{d z^L}"
},
{
"math_id": 33,
"text": "\\frac{d C}{d a^L}\\circ (f^L)' \\cdot W^L \\circ (f^{L-1})' \\cdot W^{L-1} \\circ \\cdots \\circ (f^1)' \\cdot W^1."
},
{
"math_id": 34,
"text": "\\nabla"
},
{
"math_id": 35,
"text": "\\nabla_x C = (W^1)^T \\cdot (f^1)' \\circ \\ldots \\circ (W^{L-1})^T \\cdot (f^{L-1})' \\circ (W^L)^T \\cdot (f^L)' \\circ \\nabla_{a^L} C."
},
{
"math_id": 36,
"text": "\\delta^l := (f^l)' \\circ (W^{l+1})^T\\cdot(f^{l+1})' \\circ \\cdots \\circ (W^{L-1})^T \\cdot (f^{L-1})' \\circ (W^L)^T \\cdot (f^L)' \\circ \\nabla_{a^L} C."
},
{
"math_id": 37,
"text": "\\nabla_{W^l} C = \\delta^l(a^{l-1})^T."
},
{
"math_id": 38,
"text": "a^{l-1}"
},
{
"math_id": 39,
"text": "\\delta^{l-1} := (f^{l-1})' \\circ (W^l)^T \\cdot \\delta^l."
},
{
"math_id": 40,
"text": "\\begin{align}\n\\delta^1 &= (f^1)' \\circ (W^2)^T \\cdot (f^2)' \\circ \\cdots \\circ (W^{L-1})^T \\cdot (f^{L-1})' \\circ (W^L)^T \\cdot (f^L)' \\circ \\nabla_{a^L} C\\\\\n\\delta^2 &= (f^2)' \\circ \\cdots \\circ (W^{L-1})^T \\cdot (f^{L-1})' \\circ (W^L)^T \\cdot (f^L)' \\circ \\nabla_{a^L} C\\\\\n&\\vdots\\\\\n\\delta^{L-1} &= (f^{L-1})' \\circ (W^L)^T \\cdot (f^L)' \\circ \\nabla_{a^L} C\\\\\n\\delta^L &= (f^L)' \\circ \\nabla_{a^L} C,\n\\end{align}"
},
{
"math_id": 41,
"text": "\\nabla_{a^L} C"
},
{
"math_id": 42,
"text": "(W^l)^T"
},
{
"math_id": 43,
"text": "(f^{l-1})'"
},
{
"math_id": 44,
"text": "l+2"
},
{
"math_id": 45,
"text": "W^{l+1}"
},
{
"math_id": 46,
"text": "W^{l+2}"
},
{
"math_id": 47,
"text": "(x_1, x_2, t)"
},
{
"math_id": 48,
"text": "x_1"
},
{
"math_id": 49,
"text": "x_2"
},
{
"math_id": 50,
"text": " L(t, y) "
},
{
"math_id": 51,
"text": "L(t, y)= (t-y)^2 = E, "
},
{
"math_id": 52,
"text": "(1, 1, 0)"
},
{
"math_id": 53,
"text": "y=x_1w_1 + x_2w_2,"
},
{
"math_id": 54,
"text": "w_1"
},
{
"math_id": 55,
"text": "w_2"
},
{
"math_id": 56,
"text": " E = (t-y)^2 = y^2 = (x_1w_1 + x_2w_2)^2 = (w_1 + w_2)^2."
},
{
"math_id": 57,
"text": "E"
},
{
"math_id": 58,
"text": "w_1 = -w_2"
},
{
"math_id": 59,
"text": "E = L(t, y)"
},
{
"math_id": 60,
"text": "t"
},
{
"math_id": 61,
"text": "o_j"
},
{
"math_id": 62,
"text": "o_j = \\varphi(\\text{net}_j) = \\varphi\\left(\\sum_{k=1}^n w_{kj}x_k\\right),"
},
{
"math_id": 63,
"text": "\\varphi"
},
{
"math_id": 64,
"text": " \\varphi(z) = \\frac 1 {1+e^{-z}}"
},
{
"math_id": 65,
"text": " \\frac {d \\varphi}{d z} = \\varphi(z)(1-\\varphi(z)) "
},
{
"math_id": 66,
"text": "\\text{net}_j"
},
{
"math_id": 67,
"text": "o_k"
},
{
"math_id": 68,
"text": "x_k"
},
{
"math_id": 69,
"text": "n"
},
{
"math_id": 70,
"text": "w_{kj}"
},
{
"math_id": 71,
"text": "w_{ij}"
},
{
"math_id": 72,
"text": "o_i"
},
{
"math_id": 73,
"text": "x_i"
},
{
"math_id": 74,
"text": "\\frac{\\partial o_j}{\\partial\\text{net}_j} = \\frac {\\partial}{\\partial \\text{net}_j} \\varphi(\\text{net}_j) = \\varphi(\\text{net}_j)(1-\\varphi(\\text{net}_j)) = o_j(1-o_j)"
},
{
"math_id": 75,
"text": "o_j = y"
},
{
"math_id": 76,
"text": "\\frac{\\partial E}{\\partial o_j} = \\frac{\\partial E}{\\partial y} = \\frac{\\partial}{\\partial y} \\frac{1}{2}(t - y)^2 = y - t "
},
{
"math_id": 77,
"text": "L = \\{u, v, \\dots, w\\}"
},
{
"math_id": 78,
"text": "\\frac{\\partial E(o_j)}{\\partial o_j} = \\frac{\\partial E(\\mathrm{net}_u, \\text{net}_v, \\dots, \\mathrm{net}_w)}{\\partial o_j}"
},
{
"math_id": 79,
"text": "o_\\ell"
},
{
"math_id": 80,
"text": "\\frac{\\partial E}{\\partial w_{ij}} \n= \\frac{\\partial E}{\\partial o_{j}} \\frac{\\partial o_{j}}{\\partial \\text{net}_{j}} \\frac{\\partial \\text{net}_{j}}{\\partial w_{ij}}\n\n= \\frac{\\partial E}{\\partial o_{j}} \\frac{\\partial o_{j}}{\\partial \\text{net}_{j}} o_i"
},
{
"math_id": 81,
"text": " \\frac{\\partial E}{\\partial w_{ij}} = o_i \\delta_j"
},
{
"math_id": 82,
"text": "\\delta_j \n = \n \\frac{\\partial E}{\\partial o_j} \\frac{\\partial o_j}{\\partial\\text{net}_j} \n = \\begin{cases}\n \\frac{\\partial L(t, o_j)}{\\partial o_j} \\frac {d \\varphi(\\text{net}_j)}{d \\text{net}_j} & \\text{if } j \\text{ is an output neuron,}\\\\\n (\\sum_{\\ell\\in L} w_{j \\ell} \n \\delta_\\ell)\\frac {d \\varphi(\\text{net}_j)}{d \\text{net}_j} & \\text{if } j \\text{ is an inner neuron.}\n\\end{cases}"
},
{
"math_id": 83,
"text": "\\delta_j = \\frac{\\partial E}{\\partial o_j} \\frac{\\partial o_j}{\\partial\\text{net}_j} = \\begin{cases}\n(o_j-t_j)o_j(1-o_{j}) & \\text{if } j \\text{ is an output neuron,}\\\\\n(\\sum_{\\ell\\in L} w_{j \\ell} \\delta_\\ell)o_j(1-o_j) & \\text{if } j \\text{ is an inner neuron.}\n\\end{cases}"
},
{
"math_id": 84,
"text": "\\eta >0"
},
{
"math_id": 85,
"text": "\\frac{\\partial E}{\\partial w_{ij}} > 0"
},
{
"math_id": 86,
"text": "\\frac{\\partial E}{\\partial w_{ij}} < 0"
},
{
"math_id": 87,
"text": "\\Delta w_{ij}"
},
{
"math_id": 88,
"text": "-1"
},
{
"math_id": 89,
"text": "- \\eta \\frac{\\partial E}{\\partial w_{ij}}"
},
{
"math_id": 90,
"text": " \\Delta w_{ij} = - \\eta \\frac{\\partial E}{\\partial w_{ij}} = - \\eta o_i \\delta_j"
},
{
"math_id": 91,
"text": "E=\\frac{1}{n}\\sum_xE_x"
},
{
"math_id": 92,
"text": "E_x"
},
{
"math_id": 93,
"text": "n"
},
{
"math_id": 94,
"text": "x"
},
{
"math_id": 95,
"text": "y,y'"
},
{
"math_id": 96,
"text": "\\mathbb{R}^n"
},
{
"math_id": 97,
"text": "E(y,y')"
},
{
"math_id": 98,
"text": "y'"
},
{
"math_id": 99,
"text": "E(y,y') = \\tfrac{1}{2} \\lVert y-y'\\rVert^2"
},
{
"math_id": 100,
"text": "E=\\frac{1}{2n}\\sum_x\\lVert (y(x)-y'(x)) \\rVert^2"
}
]
| https://en.wikipedia.org/wiki?curid=1360091 |
13606 | Half-life | Time for exponential decay to remove half of a quantity
Half-life (symbol "t"½) is the time required for a quantity (of substance) to reduce to half of its initial value. The term is commonly used in nuclear physics to describe how quickly unstable atoms undergo radioactive decay or how long stable atoms survive. The term is also used more generally to characterize any type of exponential (or, rarely, non-exponential) decay. For example, the medical sciences refer to the biological half-life of drugs and other chemicals in the human body. The converse of half-life (in exponential growth) is doubling time.
The original term, "half-life period", dating to Ernest Rutherford's discovery of the principle in 1907, was shortened to "half-life" in the early 1950s. Rutherford applied the principle of a radioactive element's half-life in studies of age determination of rocks by measuring the decay period of radium to lead-206.
Half-life is constant over the lifetime of an exponentially decaying quantity, and it is a characteristic unit for the exponential decay equation. The accompanying table shows the reduction of a quantity as a function of the number of half-lives elapsed.
Probabilistic nature.
A half-life often describes the decay of discrete entities, such as radioactive atoms. In that case, it does not work to use the definition that states "half-life is the time required for exactly half of the entities to decay". For example, if there is just one radioactive atom, and its half-life is one second, there will "not" be "half of an atom" left after one second.
Instead, the half-life is defined in terms of probability: "Half-life is the time required for exactly half of the entities to decay "on average"". In other words, the "probability" of a radioactive atom decaying within its half-life is 50%.
For example, the accompanying image is a simulation of many identical atoms undergoing radioactive decay. Note that after one half-life there are not "exactly" one-half of the atoms remaining, only "approximately", because of the random variation in the process. Nevertheless, when there are many identical atoms decaying (right boxes), the law of large numbers suggests that it is a "very good approximation" to say that half of the atoms remain after one half-life.
Various simple exercises can demonstrate probabilistic decay, for example involving flipping coins or running a statistical computer program.
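One such exercise, sketched below in Python (an illustration only), treats each atom as decaying within one half-life with probability 1/2 and shows the random variation around the expected halving.

```python
import numpy as np

rng = np.random.default_rng(0)
n_atoms = 10_000                       # start with many identical atoms
remaining = [n_atoms]
for _ in range(5):                     # simulate five half-lives
    # each surviving atom decays within one half-life with probability 1/2
    survivors = rng.random(remaining[-1]) < 0.5
    remaining.append(int(survivors.sum()))

print(remaining)   # roughly [10000, 5000, 2500, 1250, 625, 312], with random variation
```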
Formulas for half-life in exponential decay.
An exponential decay can be described by any of the following four equivalent formulas:formula_0
where
"N"0 is the initial quantity of the substance that will decay,
"N"("t") is the quantity that still remains and has not yet decayed after a time "t",
"t"½ is the half-life of the decaying quantity,
τ is a positive number called the mean lifetime of the decaying quantity,
λ is a positive number called the decay constant of the decaying quantity.
The three parameters "t"½, τ, and λ are directly related in the following way:formula_1where ln(2) is the natural logarithm of 2 (approximately 0.693).
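A short Python sketch (illustrative only) of these equivalent formulas and of the conversions between "t"½, τ, and λ; the half-life value below is an arbitrary example.

```python
import numpy as np

def remaining(n0, t, t_half):
    """Quantity left after time t for an exponential decay with half-life t_half."""
    return n0 * 0.5 ** (t / t_half)

t_half = 5730.0                              # an illustrative half-life (arbitrary units)
lam = np.log(2) / t_half                     # decay constant lambda
tau = t_half / np.log(2)                     # mean lifetime tau
print(remaining(1.0, 2 * t_half, t_half))                                   # 0.25 after two half-lives
print(np.isclose(remaining(1.0, 1000.0, t_half), np.exp(-lam * 1000.0)))    # True: same decay law
print(np.isclose(remaining(1.0, 1000.0, t_half), np.exp(-1000.0 / tau)))    # True: same decay law
```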
Half-life and reaction orders.
In chemical kinetics, the value of the half-life depends on the reaction order:
Zero order kinetics.
The rate of this kind of reaction does not depend on the substrate concentration, [A]. Thus the concentration decreases linearly.
formula_2
The integrated rate law of zero order kinetics is:
formula_3
To find the half-life, we substitute half of the initial concentration for the concentration:
formula_4
and isolate the time:
formula_5
This "t"½ formula indicates that the half-life of a zero order reaction depends on the initial concentration and the rate constant.
First order kinetics.
In first order reactions, the rate of reaction will be proportional to the concentration of the reactant. Thus the concentration will decrease exponentially.
formula_6
as time progresses until it reaches zero, and the half-life will be constant, independent of concentration.
The time "t"½ for [A] to decrease from [A]0 to [A]0/2 in a first-order reaction is given by the following equation:
formula_7
It can be solved for
formula_8
For a first-order reaction, the half-life of a reactant is independent of its initial concentration. Therefore, if the concentration of A at some arbitrary stage of the reaction is [A], then it will have fallen to [A]/2 after a further interval of "t"½. Hence, the half-life of a first order reaction is given as the following:
formula_9
The half-life of a first order reaction is independent of its initial concentration and depends solely on the reaction rate constant, k.
Second order kinetics.
In second order reactions, the rate of reaction is proportional to the square of the concentration. By integrating this rate, it can be shown that the concentration [A] of the reactant decreases following this formula:
formula_10
We replace [A] by [A]0/2 in order to calculate the half-life of the reactant A:
formula_11
and isolate the time of the half-life ("t"½):
formula_12
This shows that the half-life of second order reactions depends on the initial concentration and rate constant.
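The three cases can be collected in a small Python helper (a sketch, not from any cited source); the rate constant and initial concentration below are arbitrary example values.

```python
import numpy as np

def half_life(order, k, A0):
    """Half-life of reactant A for zero-, first- and second-order kinetics."""
    if order == 0:
        return A0 / (2 * k)          # depends on the initial concentration
    if order == 1:
        return np.log(2) / k         # independent of the initial concentration
    if order == 2:
        return 1.0 / (k * A0)        # inversely proportional to the initial concentration
    raise ValueError("order must be 0, 1 or 2")

for order in (0, 1, 2):
    print(order, half_life(order, k=0.1, A0=2.0))
```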
Decay by two or more processes.
Some quantities decay by two exponential-decay processes simultaneously. In this case, the actual half-life "T"½ can be related to the half-lives "t"1 and "t"2 that the quantity would have if each of the decay processes acted in isolation:
formula_13
For three or more processes, the analogous formula is:
formula_14
For a proof of these formulas, see Exponential decay § Decay by two or more processes.
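In code this is a simple reciprocal sum; the sketch below (illustrative only) combines two example half-lives.

```python
def combined_half_life(*half_lives):
    """Overall half-life when several exponential decay processes act simultaneously."""
    return 1.0 / sum(1.0 / t for t in half_lives)

print(combined_half_life(3.0, 6.0))   # 2.0: the combined decay is faster than either process alone
```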
Examples.
There is a half-life describing any exponential-decay process. For example:
In non-exponential decay.
The term "half-life" is almost exclusively used for decay processes that are exponential (such as radioactive decay or the other examples above), or approximately exponential (such as biological half-life discussed below). In a decay process that is not even close to exponential, the half-life will change dramatically while the decay is happening. In this situation it is generally uncommon to talk about half-life in the first place, but sometimes people will describe the decay in terms of its "first half-life", "second half-life", etc., where the first half-life is defined as the time required for decay from the initial value to 50%, the second half-life is from 50% to 25%, and so on.
In biology and pharmacology.
A biological half-life or elimination half-life is the time it takes for a substance (drug, radioactive nuclide, or other) to lose one-half of its pharmacologic, physiologic, or radiological activity. In a medical context, the half-life may also describe the time that it takes for the concentration of a substance in blood plasma to reach one-half of its steady-state value (the "plasma half-life").
The relationship between the biological and plasma half-lives of a substance can be complex, due to factors including accumulation in tissues, active metabolites, and receptor interactions.
While a radioactive isotope decays almost perfectly according to so-called "first order kinetics" where the rate constant is a fixed number, the elimination of a substance from a living organism usually follows more complex chemical kinetics.
For example, the biological half-life of water in a human being is about 9 to 10 days, though this can be altered by behavior and other conditions. The biological half-life of caesium in human beings is between one and four months.
The concept of a half-life has also been utilized for pesticides in plants, and certain authors maintain that pesticide risk and impact assessment models rely on and are sensitive to information describing dissipation from plants.
In epidemiology, the concept of half-life can refer to the length of time for the number of incident cases in a disease outbreak to drop by half, particularly if the dynamics of the outbreak can be modeled exponentially.
References.
| [
{
"math_id": 0,
"text": "\\begin{align}\n N(t) &= N_0 \\left(\\frac {1}{2}\\right)^{\\frac{t}{t_{1/2}}} \\\\\n N(t) &= N_0 2^{-\\frac{t}{t_{1/2}}} \\\\\n N(t) &= N_0 e^{-\\frac{t}{\\tau}} \\\\\n N(t) &= N_0 e^{-\\lambda t}\n\\end{align}"
},
{
"math_id": 1,
"text": "t_{1/2} = \\frac{\\ln (2)}{\\lambda} = \\tau \\ln(2)"
},
{
"math_id": 2,
"text": "d[\\ce A]/dt = - k"
},
{
"math_id": 3,
"text": "[\\ce A] = [\\ce A]_0 - kt"
},
{
"math_id": 4,
"text": "[\\ce A]_{0}/2 = [\\ce A]_0 - kt_{1/2}"
},
{
"math_id": 5,
"text": "t_{1/2} = \\frac{[\\ce A]_0}{2k}"
},
{
"math_id": 6,
"text": "[\\ce A] = [\\ce A]_0 \\exp(-kt)"
},
{
"math_id": 7,
"text": "[\\ce A]_0 /2 = [\\ce A]_0 \\exp(-kt_{1/2})"
},
{
"math_id": 8,
"text": "kt_{1/2} = -\\ln \\left(\\frac{[\\ce A]_0 /2}{[\\ce A]_0}\\right) = -\\ln\\frac{1}{2} = \\ln 2"
},
{
"math_id": 9,
"text": "t_{1/2} = \\frac{\\ln 2}{k}"
},
{
"math_id": 10,
"text": "\\frac{1}{[\\ce A]} = kt + \\frac{1}{[\\ce A]_0}"
},
{
"math_id": 11,
"text": "\\frac{1}{[\\ce A]_0 /2} = kt_{1/2} + \\frac{1}{[\\ce A]_0}"
},
{
"math_id": 12,
"text": "t_{1/2} = \\frac{1}{[\\ce A]_0 k}"
},
{
"math_id": 13,
"text": "\\frac{1}{T_{1/2}} = \\frac{1}{t_1} + \\frac{1}{t_2}"
},
{
"math_id": 14,
"text": "\\frac{1}{T_{1/2}} = \\frac{1}{t_1} + \\frac{1}{t_2} + \\frac{1}{t_3} + \\cdots"
}
]
| https://en.wikipedia.org/wiki?curid=13606 |
13606026 | Shell balance | Method of analyzing fluid velocity across a flow
In fluid mechanics, a shell balance can be used to determine the velocity profile, i.e. how fluid velocity changes with position across a flow cross section.
A "shell" is a differential element of the flow. By looking at the momentum and forces on one small portion, it is possible to integrate over the flow to see the larger picture of the flow as a whole. The balance is determining what goes into and out of the shell. Momentum is created within the shell through fluid entering and leaving the shell and by shear stress. In addition, there are pressure and gravitational forces on the shell. From this, it is possible to find a velocity for any point across the flow.
Applications.
Shell Balances can be used in many situations. For example, flow in a pipe, the flow of multiple fluids around each other, or flow due to pressure difference. Although terms in the shell balance and boundary conditions will change, the basic set up and process is the same.
Requirements for Shell Balance Calculations.
The fluid must exhibit:
Boundary Conditions are used to find constants of integration.
Performing shell balances.
A fluid is flowing between and in contact with two horizontal surfaces of contact area A. A differential shell of height Δy is utilized (see diagram below).
The top surface is moving at velocity U and the bottom surface is stationary.
Conservation of momentum is the key to a shell balance.
To perform a shell balance, follow these basic steps:
Boundary 1: Top Surface: y = 0 and Vx = U
Boundary 2: Bottom Surface: y = D and Vx = 0
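For this geometry the balance reduces, assuming a steady Newtonian flow with no applied pressure gradient (assumptions not stated above), to a constant shear stress and hence a linear velocity profile. The SymPy sketch below (illustrative only) applies the two boundary conditions to the general linear solution and recovers Vx(y) = U(1 - y/D).

```python
import sympy as sp

y, U, D, C1, C2 = sp.symbols('y U D C1 C2')
Vx = C1 * y + C2                          # general solution of d^2(Vx)/dy^2 = 0 (constant shear)
bcs = [sp.Eq(Vx.subs(y, 0), U),           # Boundary 1: top surface, y = 0, Vx = U
       sp.Eq(Vx.subs(y, D), 0)]           # Boundary 2: bottom surface, y = D, Vx = 0
consts = sp.solve(bcs, [C1, C2])
print(sp.simplify(Vx.subs(consts)))       # U*(D - y)/D, i.e. Vx = U*(1 - y/D)
```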
For examples of performing shell balances, visit the resources listed below. | [
{
"math_id": 0,
"text": "V_x"
}
]
| https://en.wikipedia.org/wiki?curid=13606026 |
1360654 | Gauss–Kuzmin–Wirsing operator | In mathematics, the Gauss–Kuzmin–Wirsing operator is the transfer operator of the Gauss map that takes a positive number to the fractional part of its reciprocal. (This is not the same as the Gauss map in differential geometry.) It is named after Carl Gauss, Rodion Kuzmin, and Eduard Wirsing. It occurs in the study of continued fractions; it is also related to the Riemann zeta function.
Relationship to the maps and continued fractions.
The Gauss map.
The Gauss function (map) "h" is:
formula_0
where formula_1 denotes the floor function.
It has an infinite number of jump discontinuities at "x" = 1/"n", for positive integers "n". It is hard to approximate it by a single smooth polynomial.
Operator on the maps.
The Gauss–Kuzmin–Wirsing operator formula_2 acts on functions formula_3 as
formula_4
it has the fixed point formula_5, unique up to scaling, which is the density of the measure invariant under the Gauss map.
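A quick numerical check of this fixed point, sketched below in Python (illustrative; the truncation length is an arbitrary choice), evaluates the operator sum directly and compares it with the density formula_5.

```python
import numpy as np

def gkw(f, x, nterms=100_000):
    """Apply the Gauss-Kuzmin-Wirsing operator to f at points x, truncating the infinite sum."""
    n = np.arange(1, nterms + 1)[:, None]
    return np.sum(f(1.0 / (x + n)) / (x + n) ** 2, axis=0)

rho = lambda x: 1.0 / (np.log(2) * (1.0 + x))     # Gauss-Kuzmin density
x = np.linspace(0.01, 0.99, 5)
print(np.max(np.abs(gkw(rho, x) - rho(x))))       # ~1e-5, at the level of the truncation error
```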
Eigenvalues of the operator.
The first eigenfunction of this operator is
formula_6
which corresponds to an eigenvalue of "λ"1 = 1. This eigenfunction gives the probability of the occurrence of a given integer in a continued fraction expansion, and is known as the Gauss–Kuzmin distribution. This follows in part because the Gauss map acts as a truncating shift operator for the continued fractions: if
formula_7
is the continued fraction representation of a number 0 < "x" < 1, then
formula_8
Because formula_9 is conjugate to a Bernoulli shift, the eigenvalue formula_10 is simple, and since the operator leaves invariant the Gauss–Kuzmin measure, the operator is ergodic with respect to the measure. This fact allows a short proof of the existence of Khinchin's constant.
Additional eigenvalues can be computed numerically; the next eigenvalue is "λ"2 = −0.3036630029... (sequence in the OEIS)
and its absolute value is known as the Gauss–Kuzmin–Wirsing constant. Analytic forms for additional eigenfunctions are not known. It is not known if the eigenvalues are irrational.
Let us arrange the eigenvalues of the Gauss–Kuzmin–Wirsing operator according to an absolute value:
formula_11
It was conjectured in 1995 by Philippe Flajolet and Brigitte Vallée that
formula_12
In 2018, Giedrius Alkauskas gave a convincing argument that this conjecture can be refined to a much stronger statement:
formula_13
here the function formula_14 is bounded, and formula_15 is the Riemann zeta function.
Continuous spectrum.
The eigenvalues form a discrete spectrum, when the operator is limited to act on functions on the unit interval of the real number line. More broadly, since the Gauss map is the shift operator on Baire space formula_16, the GKW operator can also be viewed as an operator on the function space formula_17 (considered as a Banach space, with basis functions taken to be the indicator functions on the cylinders of the product topology). In the latter case, it has a continuous spectrum, with eigenvalues in the unit disk formula_18 of the complex plane. That is, given the cylinder formula_19, the operator G shifts it to the left: formula_20. Taking formula_21 to be the indicator function which is 1 on the cylinder (when formula_22), and zero otherwise, one has that formula_23. The series
formula_24
then is an eigenfunction with eigenvalue formula_25. That is, one has formula_26 whenever the summation converges: that is, when formula_18.
A special case arises when one wishes to consider the Haar measure of the shift operator, that is, a function that is invariant under shifts. This is given by the Minkowski measure formula_27. That is, one has that formula_28.
Ergodicity.
The Gauss map is in fact much more than ergodic: it is exponentially mixing, but the proof is not elementary.
Entropy.
The Gauss map, over the Gauss measure, has entropy formula_29. This can be proved by the Rokhlin formula for entropy. Then using the Shannon–McMillan–Breiman theorem, with its equipartition property, we obtain Lochs' theorem.
Measure-theoretic preliminaries.
A covering family formula_30 is a set of measurable sets, such that any open set is a "disjoint" union of sets in it. Compare this with a base in topology, which is less restrictive as it allows non-disjoint unions.
Knopp's lemma. Let formula_31 be measurable, let formula_30 be a covering family and suppose that formula_32. Then formula_33.
Proof. Since any open set is a disjoint union of sets in formula_30, we have formula_34 for any open set formula_35, not just any set in formula_30.
Take the complement formula_36. Since the Lebesgue measure is outer regular, we can take an open set formula_37 that is close to formula_36, meaning the symmetric difference has arbitrarily small measure formula_38.
Taking the limit formula_64, formula_39 becomes formula_40.
The Gauss map is ergodic.
Fix a sequence formula_41 of positive integers. Let formula_42. Let the interval formula_43 be the open interval with end-points formula_44.
Lemma. For any open interval formula_45, we have
formula_46
Proof. For any formula_47 we have formula_48 by standard continued fraction theory. By expanding the definition, formula_49 is an interval with end points formula_50. Now compute directly. To show the fraction is formula_51, use the fact that formula_52.
Theorem. The Gauss map is ergodic.
Proof. Consider the set of all open intervals in the form formula_53. Collect them into a single family formula_30. This formula_30 is a covering family, because any open interval formula_54 where formula_55 are rational, is a disjoint union of finitely many sets in formula_30.
Suppose a set formula_56 is formula_57-invariant and has positive measure. Pick any formula_58. Since Lebesgue measure is outer regular, there exists an open set formula_59 which differs from formula_56 by only formula_60. Since formula_56 is formula_57-invariant, we also have formula_61. Therefore,
formula_62
By the previous lemma, we have
formula_63
Taking the limit formula_64, we have formula_65. By Knopp's lemma, formula_56 has full measure.
Relationship to the Riemann zeta function.
The GKW operator is related to the Riemann zeta function. Note that the zeta function can be written as
formula_66
which implies that
formula_67
by change-of-variable.
Matrix elements.
Consider the Taylor series expansions at "x" = 1 for a function "f"("x") and formula_68. That is, let
formula_69
and write likewise for "g"("x"). The expansion is made about "x" = 1 because the GKW operator is poorly behaved at "x" = 0. The expansion is made about 1 − "x" so that we can keep "x" a positive number, 0 ≤ "x" ≤ 1. Then the GKW operator acts on the Taylor coefficients as
formula_70
where the matrix elements of the GKW operator are given by
formula_71
This operator is extremely well formed, and thus very numerically tractable. The Gauss–Kuzmin constant is easily computed to high precision by numerically diagonalizing the upper-left "n" by "n" portion. There is no known closed-form expression that diagonalizes this operator; that is, there are no closed-form expressions known for the eigenvectors.
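The following Python sketch (illustrative; the truncation size and the use of SciPy are assumptions) builds the upper-left block of the matrix formula_71 given above and diagonalizes it numerically; the two largest eigenvalues should come out close to 1 and to the value of "λ"2 quoted earlier.

```python
import numpy as np
from scipy.special import comb, zeta

N = 20  # truncate the infinite operator matrix to its upper-left N x N block

def G_entry(m, n):
    k = np.arange(n + 1)
    return np.sum((-1.0) ** k * comb(n, k) * comb(k + m + 1, m) * (zeta(k + m + 2) - 1.0))

G = np.array([[G_entry(m, n) for n in range(N)] for m in range(N)])
eigs = np.linalg.eigvals(G)
eigs = eigs[np.argsort(-np.abs(eigs))]
print(eigs[0].real)   # approximately 1 (leading eigenvalue)
print(eigs[1].real)   # approximately -0.3037; its absolute value is the Gauss-Kuzmin-Wirsing constant
```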
Riemann zeta.
The Riemann zeta can be written as
formula_72
where the formula_73 are given by the matrix elements above:
formula_74
Performing the summations, one gets:
formula_75
where formula_76 is the Euler–Mascheroni constant. These formula_73 play the analog of the Stieltjes constants, but for the falling factorial expansion. By writing
formula_77
one gets: "a"0 = −0.0772156... and "a"1 = −0.00474863... and so on. The values get small quickly but are oscillatory. Some explicit sums on these values can be performed. They can be explicitly related to the Stieltjes constants by re-expressing the falling factorial as a polynomial with Stirling number coefficients, and then solving. More generally, the Riemann zeta can be re-expressed as an expansion in terms of Sheffer sequences of polynomials.
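These values are straightforward to reproduce numerically; the Python sketch below (illustrative only) evaluates formula_75 and formula_77 directly for the first few "n".

```python
import numpy as np
from scipy.special import comb, zeta

def t_coeff(n):
    k = np.arange(1, n + 1)
    return 1.0 - np.euler_gamma + np.sum((-1.0) ** k * comb(n, k) * (1.0 / k - zeta(k + 1) / (k + 1)))

for n in range(4):
    a_n = t_coeff(n) - 1.0 / (2 * (n + 1))
    print(n, a_n)   # a_0 ~ -0.0772156..., a_1 ~ -0.00474863..., and so on
```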
This expansion of the Riemann zeta is investigated in the following references. The coefficients are decreasing as
formula_78 | [
{
"math_id": 0,
"text": "h(x)=1/x-\\lfloor 1/x \\rfloor."
},
{
"math_id": 1,
"text": "\\lfloor 1/x \\rfloor"
},
{
"math_id": 2,
"text": " G"
},
{
"math_id": 3,
"text": "f"
},
{
"math_id": 4,
"text": "[Gf](x) = \\int_0^1 \\delta(x-h(y)) f(y) \\, dy = \\sum_{n=1}^\\infty \\frac {1}{(x+n)^2} f \\left(\\frac {1}{x+n}\\right)."
},
{
"math_id": 5,
"text": "\\rho(x) = \\frac{1}{\\ln 2 (1+x)}"
},
{
"math_id": 6,
"text": "\\frac 1{\\ln 2}\\ \\frac 1{1+x}"
},
{
"math_id": 7,
"text": "x=[0;a_1,a_2,a_3,\\dots]"
},
{
"math_id": 8,
"text": "h(x)=[0;a_2,a_3,\\dots]."
},
{
"math_id": 9,
"text": "h"
},
{
"math_id": 10,
"text": "\\lambda_1=1"
},
{
"math_id": 11,
"text": "1=|\\lambda_1|> |\\lambda_2|\\geq|\\lambda_3|\\geq\\cdots."
},
{
"math_id": 12,
"text": "\n\\lim_{n\\to\\infty} \\frac{\\lambda_n}{\\lambda_{n+1}} = -\\varphi^2, \\text{ where } \\varphi=\\frac{1+\\sqrt 5} 2.\n"
},
{
"math_id": 13,
"text": "\n\\begin{align}\n& (-1)^{n+1}\\lambda_n=\\varphi^{-2n} + C\\cdot\\frac{\\varphi^{-2n}}{\\sqrt{n}}+d(n)\\cdot\\frac{\\varphi^{-2n}}{n}, \\\\[4pt]\n& \\text{where } C=\\frac{\\sqrt[4]{5}\\cdot\\zeta(3/2)}{2\\sqrt{\\pi}}=1.1019785625880999_{+};\n\\end{align}\n"
},
{
"math_id": 14,
"text": "d(n)"
},
{
"math_id": 15,
"text": "\\zeta(\\star)"
},
{
"math_id": 16,
"text": "\\mathbb{N}^\\omega"
},
{
"math_id": 17,
"text": "\\mathbb{N}^\\omega\\to\\mathbb{C}"
},
{
"math_id": 18,
"text": "|\\lambda|<1"
},
{
"math_id": 19,
"text": "C_n[b]= \\{(a_1,a_2,\\cdots) \\in \\mathbb{N}^\\omega : a_n = b \\}"
},
{
"math_id": 20,
"text": "GC_n[b] = C_{n-1}[b]"
},
{
"math_id": 21,
"text": "r_{n,b}(x)"
},
{
"math_id": 22,
"text": "x\\in C_n[b]"
},
{
"math_id": 23,
"text": "Gr_{n,b}=r_{n-1,b}"
},
{
"math_id": 24,
"text": "f(x)=\\sum_{n=1}^\\infty \\lambda^{n-1} r_{n,b}(x)"
},
{
"math_id": 25,
"text": "\\lambda"
},
{
"math_id": 26,
"text": "[Gf](x)=\\lambda f(x)"
},
{
"math_id": 27,
"text": "?^\\prime"
},
{
"math_id": 28,
"text": "G?^\\prime = ?^\\prime"
},
{
"math_id": 29,
"text": "\\frac{\\pi^2}{6\\ln 2} "
},
{
"math_id": 30,
"text": "\\mathcal C"
},
{
"math_id": 31,
"text": "B \\subset [0, 1)"
},
{
"math_id": 32,
"text": "\\exists \\gamma > 0, \\forall A \\in \\mathcal C, \\mu(A \\cap B) \\geq \\gamma \\mu(A)"
},
{
"math_id": 33,
"text": "\\mu(B) = 1"
},
{
"math_id": 34,
"text": "\\mu(A \\cap B) \\geq \\gamma \\mu(A)"
},
{
"math_id": 35,
"text": "A"
},
{
"math_id": 36,
"text": "B^c"
},
{
"math_id": 37,
"text": "B'"
},
{
"math_id": 38,
"text": "\\mu(B' \\Delta B^c) < \\epsilon"
},
{
"math_id": 39,
"text": "\\mu(B' \\cap B) \\geq \\gamma \\mu(B')"
},
{
"math_id": 40,
"text": "0 \\geq \\gamma \\mu(B^c)"
},
{
"math_id": 41,
"text": "a_1, \\dots, a_n"
},
{
"math_id": 42,
"text": "\\frac{q_n}{p_n} = [0;a_1, \\dots, a_n]"
},
{
"math_id": 43,
"text": "\\Delta_n"
},
{
"math_id": 44,
"text": "[0;a_1, \\dots, a_n], [0;a_1, \\dots, a_n+1]"
},
{
"math_id": 45,
"text": "(a, b) \\subset (0, 1)"
},
{
"math_id": 46,
"text": "\\mu(T^{-n}(a,b) \\cap \\Delta_n) = \\mu((a,b)) \\mu(\\Delta_n) \\underbrace{\\left(\\frac{q_n(q_n + q_{n-1})}{(q_n + q_{n-1}b)(q_n + q_{n-1}a) } \\right)}_{\\geq 1/2} "
},
{
"math_id": 47,
"text": "t \\in (0, 1)"
},
{
"math_id": 48,
"text": "[0;a_1, \\dots, a_n + t] = \\frac{q_n + q_{n-1}t}{p_n + p_{n-1}t}"
},
{
"math_id": 49,
"text": "T^{-n}(a,b) \\cap \\Delta_n"
},
{
"math_id": 50,
"text": "[0;a_1, \\dots, a_n + a], [0;a_1, \\dots, a_n+ b]"
},
{
"math_id": 51,
"text": "\\geq 1/2"
},
{
"math_id": 52,
"text": "q_n \\geq q_{n-1}"
},
{
"math_id": 53,
"text": "([0;a_1, \\dots, a_n], [0;a_1, \\dots, a_n+1])"
},
{
"math_id": 54,
"text": "(a, b)\\setminus \\Q"
},
{
"math_id": 55,
"text": "a, b"
},
{
"math_id": 56,
"text": "B"
},
{
"math_id": 57,
"text": "T"
},
{
"math_id": 58,
"text": "\\Delta_n \\in \\mathcal C"
},
{
"math_id": 59,
"text": "B_0"
},
{
"math_id": 60,
"text": "\\mu(B_0 \\Delta B) < \\epsilon"
},
{
"math_id": 61,
"text": "\\mu(T^{-n}B_0 \\Delta B) = \\mu(B_0 \\Delta B) < \\epsilon"
},
{
"math_id": 62,
"text": "\\mu(T^{-n}B_0 \\cap \\Delta_n) \\in \\mu(B\\cap \\Delta_n) \\pm \\epsilon"
},
{
"math_id": 63,
"text": "\\mu(T^{-n}B_0 \\cap \\Delta_n) \\geq \\frac 12 \\mu(B_0) \\mu(\\Delta_n) \\in \\frac 12 (\\mu(B) \\pm \\epsilon) \\mu(\\Delta_n) "
},
{
"math_id": 64,
"text": "\\epsilon \\to 0"
},
{
"math_id": 65,
"text": "\\mu(B \\cap \\Delta_n) \\geq \\frac 12 \\mu(B) \\mu(\\Delta_n)"
},
{
"math_id": 66,
"text": "\\zeta(s)=\\frac{1}{s-1}-s\\int_0^1 h(x) x^{s-1} \\; dx"
},
{
"math_id": 67,
"text": "\\zeta(s)=\\frac{s}{s-1}-s\\int_0^1 x \\left[Gx^{s-1} \\right]\\, dx "
},
{
"math_id": 68,
"text": "g(x)=[Gf](x)"
},
{
"math_id": 69,
"text": "f(1-x)=\\sum_{n=0}^\\infty (-x)^n \\frac{f^{(n)}(1)}{n!}"
},
{
"math_id": 70,
"text": "(-1)^m \\frac{g^{(m)}(1)}{m!} = \\sum_{n=0}^\\infty G_{mn} (-1)^n \\frac{f^{(n)}(1)}{n!},"
},
{
"math_id": 71,
"text": "G_{mn}=\\sum_{k=0}^n (-1)^k {n \\choose k} {k+m+1 \\choose m} \\left[ \\zeta (k+m+2)- 1\\right]."
},
{
"math_id": 72,
"text": "\\zeta(s)=\\frac{s}{s-1}-s \\sum_{n=0}^\\infty (-1)^n {s-1 \\choose n} t_n"
},
{
"math_id": 73,
"text": "t_n"
},
{
"math_id": 74,
"text": "t_n=\\sum_{m=0}^\\infty \\frac{G_{mn}} {(m+1)(m+2)}."
},
{
"math_id": 75,
"text": "t_n=1-\\gamma + \\sum_{k=1}^n (-1)^k {n \\choose k} \\left[ \\frac{1}{k} - \\frac {\\zeta(k+1)} {k+1} \\right]"
},
{
"math_id": 76,
"text": "\\gamma"
},
{
"math_id": 77,
"text": "a_n=t_n - \\frac{1}{2(n+1)}"
},
{
"math_id": 78,
"text": "\\left(\\frac{2n}{\\pi}\\right)^{1/4}e^{-\\sqrt{4\\pi n}}\n\\cos\\left(\\sqrt{4\\pi n}-\\frac{5\\pi}{8}\\right) +\n\\mathcal{O} \\left(\\frac{e^{-\\sqrt{4\\pi n}}}{n^{1/4}}\\right)."
}
]
| https://en.wikipedia.org/wiki?curid=1360654 |
13609399 | Least-squares spectral analysis | Periodicity computation method
Least-squares spectral analysis (LSSA) is a method of estimating a frequency spectrum based on a least-squares fit of sinusoids to data samples, similar to Fourier analysis. Fourier analysis, the most used spectral method in science, generally boosts long-periodic noise in the long and gapped records; LSSA mitigates such problems. Unlike in Fourier analysis, data need not be equally spaced to use LSSA.
Developed in 1969 and 1971, LSSA is also known as the Vaníček method and the Gauss-Vaniček method after Petr Vaníček, and as the Lomb method or the Lomb–Scargle periodogram, based on the simplifications first by Nicholas R. Lomb and then by Jeffrey D. Scargle.
Historical background.
The close connections between Fourier analysis, the periodogram, and the least-squares fitting of sinusoids have been known for a long time.
However, most developments are restricted to complete data sets of equally spaced samples. In 1963, Freek J. M. Barning of Mathematisch Centrum, Amsterdam, handled unequally spaced data by similar techniques, including both a periodogram analysis equivalent to what nowadays is called the Lomb method and least-squares fitting of selected frequencies of sinusoids determined from such periodograms — and connected by a procedure known today as the matching pursuit with post-back fitting or the orthogonal matching pursuit.
Petr Vaníček, a Canadian geophysicist and geodesist of the University of New Brunswick, proposed in 1969 also the matching-pursuit approach for equally and unequally spaced data, which he called "successive spectral analysis" and the result a "least-squares periodogram". He generalized this method to account for any systematic components beyond a simple mean, such as a "predicted linear (quadratic, exponential, ...) secular trend of unknown magnitude", and applied it to a variety of samples, in 1971.
Vaníček's strictly least-squares method was then simplified in 1976 by Nicholas R. Lomb of the University of Sydney, who pointed out its close connection to periodogram analysis. Subsequently, the definition of a periodogram of unequally spaced data was modified and analyzed by Jeffrey D. Scargle of NASA Ames Research Center, who showed that, with minor changes, it becomes identical to Lomb's least-squares formula for fitting individual sinusoid frequencies.
Scargle states that his paper "does not introduce a new detection technique, but instead studies the reliability and efficiency of detection with the most commonly used technique, the periodogram, in the case where the observation times are unevenly spaced," and further points out regarding least-squares fitting of sinusoids compared to periodogram analysis, that his paper "establishes, apparently for the first time, that (with the proposed modifications) these two methods are exactly equivalent."
Press summarizes the development this way:
A completely different method of spectral analysis for unevenly sampled data, one that mitigates these difficulties and has some other very desirable properties, was developed by Lomb, based in part on earlier work by Barning and Vanicek, and additionally elaborated by Scargle.
In 1989, Michael J. Korenberg of Queen's University in Kingston, Ontario, developed the "fast orthogonal search" method of more quickly finding a near-optimal decomposition of spectra or other problems, similar to the technique that later became known as the orthogonal matching pursuit.
Development of LSSA and variants.
The Vaníček method.
In the Vaníček method, a discrete data set is approximated by a weighted sum of sinusoids of progressively determined frequencies using a standard linear regression or least-squares fit. The frequencies are chosen using a method similar to Barning's, but going further in optimizing the choice of each successive new frequency by picking the frequency that minimizes the residual after least-squares fitting (equivalent to the fitting technique now known as matching pursuit with pre-backfitting). The number of sinusoids must be less than or equal to the number of data samples (counting sines and cosines of the same frequency as separate sinusoids).
A data vector "Φ" is represented as a weighted sum of sinusoidal basis functions, tabulated in a matrix A by evaluating each function at the sample times, with weight vector "x":
formula_0,
where the weights vector "x" is chosen to minimize the sum of squared errors in approximating "Φ". The solution for "x" is closed-form, using standard linear regression:
formula_1
Here the matrix A can be based on any set of functions mutually independent (not necessarily orthogonal) when evaluated at the sample times; functions used for spectral analysis are typically sines and cosines evenly distributed over the frequency range of interest. If we choose too many frequencies in a too-narrow frequency range, the functions will be insufficiently independent, the matrix ill-conditioned, and the resulting spectrum meaningless.
When the basis functions in A are orthogonal (that is, not correlated, meaning the columns have zero pair-wise dot products), the matrix ATA is diagonal; when the columns all have the same power (sum of squares of elements), then that matrix is an identity matrix times a constant, so the inversion is trivial. The latter is the case when the sample times are equally spaced and sinusoids chosen as sines and cosines equally spaced in pairs on the frequency interval 0 to a half cycle per sample (spaced by 1/N cycles per sample, omitting the sine phases at 0 and maximum frequency where they are identically zero). This case is known as the discrete Fourier transform, slightly rewritten in terms of measurements and coefficients.
formula_2 — DFT case for "N" equally spaced samples and frequencies, within a scalar factor.
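A minimal Python sketch of this least-squares spectrum (not code from Vaníček or from any cited source): it builds the matrix A from sine and cosine columns evaluated at unequally spaced sample times, solves for the weights with a standard least-squares routine, and reports the squared amplitude at each trial frequency. The synthetic signal, noise level, and frequency grid are assumptions made for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
t = np.sort(rng.uniform(0, 100, 200))       # unequally spaced sample times
phi = np.sin(2 * np.pi * 0.13 * t) + 0.3 * rng.standard_normal(t.size)

def lssa_power(t, phi, freqs):
    """Simultaneous least-squares fit of sine/cosine pairs (plus a constant) at the given frequencies."""
    cols = [np.ones_like(t)]                                      # a constant systematic component
    for f in freqs:
        cols += [np.cos(2 * np.pi * f * t), np.sin(2 * np.pi * f * t)]
    A = np.column_stack(cols)
    x, *_ = np.linalg.lstsq(A, phi, rcond=None)                   # x = (A^T A)^{-1} A^T phi
    return np.sum(x[1:].reshape(-1, 2) ** 2, axis=1)              # squared amplitude per frequency

freqs = np.arange(0.01, 0.50, 0.01)
power = lssa_power(t, phi, freqs)
print(freqs[np.argmax(power)])                                    # ~0.13, the injected frequency
```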
The Lomb method.
Trying to lower the computational burden of the Vaníček method in 1976 (no longer an issue), Lomb proposed using the above simplification in general, except for pair-wise correlations between sine and cosine bases of the same frequency, since the correlations between pairs of sinusoids are often small, at least when they are not tightly spaced. This formulation is essentially that of the traditional periodogram but adapted for use with unevenly spaced samples. The vector "x" is a reasonably good estimate of an underlying spectrum, but since we ignore any correlations, A"x" is no longer a good approximation to the signal, and the method is no longer a least-squares method — yet in the literature continues to be referred to as such.
Rather than just taking dot products of the data with sine and cosine waveforms directly, Scargle modified the standard periodogram formula so as to first find a time delay formula_3 such that this pair of sinusoids would be mutually orthogonal at sample times formula_4, and also adjusted for the potentially unequal powers of these two basis functions, to obtain a better estimate of the power at a frequency. This procedure made his modified periodogram method exactly equivalent to Lomb's method. The time delay formula_3 by definition equals
Then the periodogram at frequency formula_6 is estimated as:
formula_7,
which, as Scargle reports, has the same statistical distribution as the periodogram in the evenly sampled case.
At any individual frequency formula_6, this method gives the same power as does a least-squares fit to sinusoids of that frequency and of the form:
formula_8
In practice, it is always difficult to judge if a given Lomb peak is significant or not, especially when the nature of the noise is unknown, so for example a false-alarm spectral peak in the Lomb periodogram analysis of a noisy periodic signal may result from noise in turbulence data. Fourier methods can also report false spectral peaks when analyzing patched-up or otherwise edited data.
The generalized Lomb–Scargle periodogram.
The standard Lomb–Scargle periodogram is only valid for a model with a zero mean. Commonly, this is approximated by subtracting the mean of the data before calculating the periodogram. However, this is an inaccurate assumption when the mean of the model (the fitted sinusoids) is non-zero. The "generalized" Lomb–Scargle periodogram removes this assumption and explicitly solves for the mean. In this case, the function fitted is
formula_9
The generalized Lomb–Scargle periodogram has also been referred to in the literature as a "floating mean periodogram".
Korenberg's "fast orthogonal search" method.
Michael Korenberg of Queen's University in Kingston, Ontario, developed a method for choosing a sparse set of components from an over-complete set — such as sinusoidal components for spectral analysis — called the fast orthogonal search (FOS). Mathematically, FOS uses a slightly modified Cholesky decomposition in a mean-square error reduction (MSER) process, implemented as a sparse matrix inversion. As with the other LSSA methods, FOS avoids the major shortcoming of discrete Fourier analysis, so it can accurately identify embedded periodicities and excel with unequally spaced data. The fast orthogonal search method was also applied to other problems, such as nonlinear system identification.
Palmer's Chi-squared method.
Palmer has developed a method for finding the best-fit function to any chosen number of harmonics, allowing more freedom to find non-sinusoidal harmonic functions.
His is a fast (FFT-based) technique for weighted least-squares analysis on arbitrarily spaced data with non-uniform standard errors. Source code that implements this technique is available.
Because data are often not sampled at uniformly spaced discrete times, this method "grids" the data by sparsely filling a time series array at the sample times. All intervening grid points receive zero statistical weight, equivalent to having infinite error bars at times between samples.
Applications.
The most useful feature of LSSA is enabling incomplete records to be spectrally analyzed — without the need to manipulate data or to invent otherwise non-existent data.
Magnitudes in the LSSA spectrum depict the contribution of a frequency or period to the variance of the time series. Generally, spectral magnitudes thus defined enable the output's straightforward significance level regime. Alternatively, spectral magnitudes in the Vaníček spectrum can also be expressed in dB. Note that spectral magnitudes in the Vaníček spectrum follow β-distribution.
Inverse transformation of Vaníček's LSSA is possible, as is most easily seen by writing the forward transform as a matrix; the matrix inverse (when the matrix is not singular) or pseudo-inverse will then be an inverse transformation; the inverse will exactly match the original data if the chosen sinusoids are mutually independent at the sample points and their number is equal to the number of data points. No such inverse procedure is known for the periodogram method.
Implementation.
The LSSA can be implemented in less than a page of MATLAB code. In essence:
"to compute the least-squares spectrum we must compute "m" spectral values ... which involves performing the least-squares approximation "m" times, each time to get [the spectral power] for a different frequency"
I.e., for each frequency in a desired set of frequencies, sine and cosine functions are evaluated at the times corresponding to the data samples, and dot products of the data vector with the sinusoid vectors are taken and appropriately normalized; following the method known as Lomb/Scargle periodogram, a time shift is calculated for each frequency to orthogonalize the sine and cosine components before the dot product; finally, a power is computed from those two amplitude components. This same process implements a discrete Fourier transform when the data are uniformly spaced in time and the frequencies chosen correspond to integer numbers of cycles over the finite data record.
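The per-frequency procedure just described can be sketched in a few lines of Python (an illustration under assumed synthetic data, not a production implementation); library routines such as scipy.signal.lombscargle provide equivalent functionality.

```python
import numpy as np

def lomb_scargle(t, x, freqs):
    """Lomb-Scargle periodogram for unevenly sampled data (a zero-mean model is assumed)."""
    x = x - x.mean()                       # the classic form assumes a zero-mean model
    power = np.empty(len(freqs))
    for i, f in enumerate(freqs):
        w = 2 * np.pi * f
        # time shift tau making the sine and cosine terms orthogonal at the sample times
        tau = np.arctan2(np.sum(np.sin(2 * w * t)), np.sum(np.cos(2 * w * t))) / (2 * w)
        c, s = np.cos(w * (t - tau)), np.sin(w * (t - tau))
        power[i] = 0.5 * ((x @ c) ** 2 / (c @ c) + (x @ s) ** 2 / (s @ s))
    return power

rng = np.random.default_rng(0)
t = np.sort(rng.uniform(0, 100, 200))
x = np.sin(2 * np.pi * 0.13 * t) + 0.3 * rng.standard_normal(t.size)
freqs = np.arange(0.01, 0.50, 0.01)
print(freqs[np.argmax(lomb_scargle(t, x, freqs))])   # ~0.13
```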
This method treats each sinusoidal component independently, or out of context, even though the components may not be mutually orthogonal at the sample points; it is Vaníček's original method. In addition, it is possible to perform a full simultaneous or in-context least-squares fit by solving a matrix equation and partitioning the total data variance between the specified sinusoid frequencies. Such a matrix least-squares solution is natively available in MATLAB as the backslash operator.
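A sketch of that simultaneous (in-context) fit in NumPy might look as follows, with np.linalg.lstsq playing the role the text assigns to MATLAB's backslash operator (the frequencies, data, and names are hypothetical):

import numpy as np

t = np.sort(np.random.uniform(0.0, 10.0, 80))             # unequally spaced sample times
y = 1.5 * np.sin(2.0 * t) + 0.7 * np.cos(3.5 * t)          # hypothetical data
freqs = [2.0, 3.5]                                         # chosen angular frequencies

# design matrix: one cosine and one sine column per frequency
A = np.hstack([np.column_stack([np.cos(w * t), np.sin(w * t)]) for w in freqs])
coeffs, *_ = np.linalg.lstsq(A, y, rcond=None)             # all components fitted together

y_fit = A @ coeffs                                         # reconstruction (inverse transform)
for k, w in enumerate(freqs):
    a, b = coeffs[2 * k], coeffs[2 * k + 1]
    print(w, np.hypot(a, b))                               # fitted amplitude at each frequency

Because the columns of the design matrix are fitted jointly, the total data variance is partitioned among the specified frequencies rather than attributed to each one independently, and the row y_fit = A @ coeffs illustrates the inverse transformation mentioned above.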
Furthermore, the simultaneous or in-context method, as opposed to the independent or out-of-context version (as well as the periodogram version due to Lomb), cannot fit more components (sines and cosines) than there are data samples, so that:
"...serious repercussions can also arise if the selected frequencies result in some of the Fourier components (trig functions) becoming nearly linearly dependent with each other, thereby producing an ill-conditioned or near singular N. To avoid such ill conditioning it becomes necessary to either select a different set of frequencies to be estimated (e.g., equally spaced frequencies) or simply neglect the correlations in N (i.e., the off-diagonal blocks) and estimate the inverse least squares transform separately for the individual frequencies..."
Lomb's periodogram method, on the other hand, can use an arbitrarily high number, or density, of frequency components, as in a standard periodogram; that is, the frequency domain can be over-sampled by an arbitrary factor. However, as mentioned above, one should keep in mind that Lomb's simplification, which departs from the least-squares criterion, exposes his technique to serious sources of error, even producing false spectral peaks.
In Fourier analysis, such as the Fourier transform and the discrete Fourier transform, the sinusoids fitted to data are all mutually orthogonal, so there is no distinction between the simple out-of-context dot-product-based projection onto basis functions and an in-context simultaneous least-squares fit; that is, no matrix inversion is required to least-squares partition the variance between orthogonal sinusoids of different frequencies. In the past, Fourier analysis was for many the method of choice thanks to its processing-efficient fast Fourier transform implementation, provided complete data records with equally spaced samples were available; the Fourier family of techniques was also used to analyze gapped records, which, however, required manipulating and even inventing non-existent data just to be able to run a Fourier-based algorithm.
References.
| [
{
"math_id": 0,
"text": "\\phi \\approx \\textbf{A}x"
},
{
"math_id": 1,
"text": "x = (\\textbf{A}^{\\mathrm{T}}\\textbf{A})^{-1}\\textbf{A}^{\\mathrm{T}}\\phi."
},
{
"math_id": 2,
"text": "x = \\textbf{A}^{\\mathrm{T}}\\phi"
},
{
"math_id": 3,
"text": "\\tau"
},
{
"math_id": 4,
"text": "t_j"
},
{
"math_id": 5,
"text": "\\tan{2 \\omega \\tau} = \\frac{\\sum_j \\sin 2 \\omega t_j}{\\sum_j \\cos 2 \\omega t_j}.\n"
},
{
"math_id": 6,
"text": "\\omega"
},
{
"math_id": 7,
"text": "P_x(\\omega) = \\frac{1}{2} \n\\left(\n \\frac { \\left[ \\sum_j X_j \\cos \\omega ( t_j - \\tau ) \\right] ^ 2}\n { \\sum_j \\cos^2 \\omega ( t_j - \\tau ) }\n+\n \\frac {\\left[ \\sum_j X_j \\sin \\omega ( t_j - \\tau ) \\right] ^ 2}\n { \\sum_j \\sin^2 \\omega ( t_j - \\tau ) }\n\\right) "
},
{
"math_id": 8,
"text": "\\phi(t) = A \\sin \\omega t + B \\cos \\omega t."
},
{
"math_id": 9,
"text": "\\phi(t) = A \\sin \\omega t + B \\cos \\omega t + C."
}
]
| https://en.wikipedia.org/wiki?curid=13609399 |
1361076 | Klein transformation | Type of field redefinition
In quantum field theory, the Klein transformation is a redefinition of the fields to amend the spin-statistics theorem.
Bose–Einstein.
Suppose φ and χ are fields such that, if "x" and "y" are spacelike-separated points and "i" and "j" represent the spinor/tensor indices,
formula_0
Also suppose the theory is invariant under a Z2 parity (nothing to do with spatial reflections!) that maps χ to −χ but leaves φ invariant. Free field theories always satisfy this property. Then, the Z2 parity of the number of χ particles is well defined and is conserved in time. Let's denote this parity by the operator Kχ, which maps χ-even states to themselves and χ-odd states to their negatives. Then, Kχ is involutive, Hermitian and unitary.
The fields φ and χ above don't have the proper statistics relations for either a boson or a fermion: they are bosonic with respect to themselves but fermionic with respect to each other, even though each field, viewed on its own, obeys ordinary Bose–Einstein statistics.
Define two new fields φ' and χ' as follows:
formula_1
and
formula_2
This redefinition is invertible (because Kχ is). The spacelike commutation relations become
formula_3
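To see why the mixed commutator vanishes after the redefinition, note that Kχ commutes with φ, anticommutes with χ, and squares to the identity; suppressing indices and spacetime arguments, a short check (not part of the original text) gives
φ'χ' = (iKχφ)(Kχχ) = iKχKχφχ = iφχ, while χ'φ' = (Kχχ)(iKχφ) = −iKχKχχφ = −iχφ,
so [φ', χ'] = φ'χ' − χ'φ' = i(φχ + χφ) = i{φ, χ} = 0 at spacelike separation, by the assumed anticommutation relation. The commutators of φ' with φ' and of χ' with χ' remain zero as well, since the factors of Kχ cancel there.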
Fermi–Dirac.
Consider the example where
formula_4
(spacelike-separated as usual).
Assume there is a conserved Z2 parity operator Kχ acting upon χ alone.
Let
formula_5
and
formula_2
Then
formula_6 | [
{
"math_id": 0,
"text": "[\\varphi_i(x),\\varphi_j(y)]=[\\chi_i(x),\\chi_j(y)]=\\{\\varphi_i(x),\\chi_j(y)\\}=0."
},
{
"math_id": 1,
"text": "\\varphi'=iK_{\\chi}\\varphi\\,"
},
{
"math_id": 2,
"text": "\\chi'=K_{\\chi}\\chi.\\,"
},
{
"math_id": 3,
"text": "[\\varphi'_i(x),\\varphi'_j(y)]=[\\chi'_i(x),\\chi'_j(y)]=[\\varphi'_i(x),\\chi'_j(y)]=0.\\,"
},
{
"math_id": 4,
"text": "\\{\\phi^i(x),\\phi^j(y)\\}=\\{\\chi^i(x),\\chi^j(y)\\}=[\\phi^i(x),\\chi^j(y)]=0"
},
{
"math_id": 5,
"text": "\\phi'=iK_{\\chi}\\phi\\,"
},
{
"math_id": 6,
"text": "\\{\\phi'^i(x),\\phi'^j(y)\\}=\\{\\chi'^i(x),\\chi'^j(y)\\}=\\{\\phi'^i(x),\\chi'^j(y)\\}=0."
}
]
| https://en.wikipedia.org/wiki?curid=1361076 |
1361116 | Control volume | Imaginary volume through which a substance's flow is modeled and analyzed
In continuum mechanics and thermodynamics, a control volume (CV) is a mathematical abstraction employed in the process of creating mathematical models of physical processes. In an inertial frame of reference, it is a fictitious region of a given volume fixed in space or moving with constant flow velocity through which the "continuum" (a continuous medium such as gas, liquid or solid) flows. The closed surface enclosing the region is referred to as the control surface.
At steady state, a control volume can be thought of as an arbitrary volume in which the mass of the continuum remains constant. As a continuum moves through the control volume, the mass entering the control volume is equal to the mass leaving the control volume. At steady state, and in the absence of work and heat transfer, the energy within the control volume remains constant. It is analogous to the classical mechanics concept of the free body diagram.
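As a toy illustration of the steady-state mass balance (the density, areas, and velocities below are made-up numbers, not from the text), consider incompressible flow through a duct that narrows from inlet to outlet:

rho = 1000.0              # density of the continuum, kg/m^3 (illustrative)
A_in, v_in = 0.05, 2.0    # inlet cross-section (m^2) and inlet velocity (m/s)
A_out = 0.02              # outlet cross-section (m^2)

m_dot = rho * A_in * v_in       # mass flow entering the control volume, kg/s
v_out = m_dot / (rho * A_out)   # mass in = mass out at steady state, giving the outlet velocity
print(m_dot, v_out)             # 100.0 kg/s, 5.0 m/s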
Overview.
Typically, to understand how a given physical law applies to the system under consideration, one first considers how it applies to a small control volume, or "representative volume". There is nothing special about a particular control volume; it simply represents a small part of the system to which physical laws can be easily applied. This gives rise to what is termed a volumetric, or volume-wise, formulation of the mathematical model.
One can then argue that since the physical laws behave in a certain way on a particular control volume, they behave the same way on all such volumes, since that particular control volume was not special in any way. In this way, the corresponding point-wise formulation of the mathematical model can be developed so it can describe the physical behaviour of an entire (and maybe more complex) system.
In continuum mechanics, the conservation equations (for instance, the Navier–Stokes equations) are in integral form. They therefore apply on volumes. Finding forms of the equation that are "independent" of the control volumes allows simplification of the integral signs. The control volumes can be stationary or they can move with an arbitrary velocity.
Substantive derivative.
Computations in continuum mechanics often require that the regular time derivative operator
formula_0
is replaced by the substantive derivative operator
formula_1.
This can be seen as follows.
Consider a bug that is moving through a volume where there is some scalar,
e.g. pressure, that varies with time and position:
formula_2.
If the bug during the time interval from
formula_3
to
formula_4
moves from
formula_5
to
formula_6
then the bug experiences a change formula_7 in the scalar value,
formula_8
(the total differential). If the bug is moving with a velocity
formula_9
the change in particle position is
formula_10
and we may write
formula_11
where formula_12 is the gradient of the scalar field "p". So:
formula_13
If the bug is just moving with the flow, the same formula applies, but now the velocity vector, "v", is that of the flow, "u".
The last parenthesized expression is the substantive derivative of the scalar pressure.
Since the pressure p in this computation is an arbitrary scalar field, we may abstract it and write the substantive derivative operator as
formula_14
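The chain-rule argument above can be spot-checked symbolically; the following sketch (the scalar field, particle path, and variable names are invented for illustration) verifies that the ordinary time derivative seen by a particle moving with velocity v equals ∂p/∂t + v·∇p along its path:

import sympy as sp

t, x, y, z = sp.symbols('t x y z')

# an illustrative scalar field p(t, x, y, z)
p = sp.exp(-t) * sp.sin(x) * sp.cos(y) + z**2

# a particle path X(t) and its velocity v = dX/dt
X = (1 - sp.cos(t), sp.sin(t), t**2 / 2)
v = tuple(sp.diff(Xi, t) for Xi in X)            # (sin t, cos t, t)

# left side: d/dt of p evaluated along the path
lhs = sp.diff(p.subs({x: X[0], y: X[1], z: X[2]}), t)

# right side: substantive derivative Dp/Dt = dp/dt + v . grad p, restricted to the path
Dp_Dt = sp.diff(p, t) + v[0] * sp.diff(p, x) + v[1] * sp.diff(p, y) + v[2] * sp.diff(p, z)
rhs = Dp_Dt.subs({x: X[0], y: X[1], z: X[2]})

print(sp.simplify(lhs - rhs))                    # prints 0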
References.
Notes.
| [
{
"math_id": 0,
"text": "d/dt\\;"
},
{
"math_id": 1,
"text": "D/Dt"
},
{
"math_id": 2,
"text": "p=p(t,x,y,z)\\;"
},
{
"math_id": 3,
"text": "t\\;"
},
{
"math_id": 4,
"text": "t+dt\\;"
},
{
"math_id": 5,
"text": "(x,y,z)\\;"
},
{
"math_id": 6,
"text": "(x+dx, y+dy, z+dz),\\;"
},
{
"math_id": 7,
"text": "dp\\;"
},
{
"math_id": 8,
"text": "dp = \\frac{\\partial p}{\\partial t}dt \n+ \\frac{\\partial p}{\\partial x}dx \n+ \\frac{\\partial p}{\\partial y}dy \n+ \\frac{\\partial p}{\\partial z}dz"
},
{
"math_id": 9,
"text": "\\mathbf v = (v_x, v_y, v_z),"
},
{
"math_id": 10,
"text": "\\mathbf v dt = (v_xdt, v_ydt, v_zdt),"
},
{
"math_id": 11,
"text": "\\begin{alignat}{2}\ndp & \n= \\frac{\\partial p}{\\partial t}dt \n+ \\frac{\\partial p}{\\partial x}v_xdt \n+ \\frac{\\partial p}{\\partial y}v_ydt \n+ \\frac{\\partial p}{\\partial z}v_zdt \\\\ &\n= \\left(\n\\frac{\\partial p}{\\partial t}\n+ \\frac{\\partial p}{\\partial x}v_x \n+ \\frac{\\partial p}{\\partial y}v_y \n+ \\frac{\\partial p}{\\partial z}v_z\n\\right)dt \\\\ &\n= \\left(\n\\frac{\\partial p}{\\partial t} \n+ \\mathbf v \\cdot\\nabla p\n\\right)dt. \\\\\n\\end{alignat}"
},
{
"math_id": 12,
"text": "\\nabla p"
},
{
"math_id": 13,
"text": "\\frac{d}{dt} = \\frac{\\partial}{\\partial t} + \\mathbf v \\cdot\\nabla."
},
{
"math_id": 14,
"text": "\\frac{D}{Dt} = \\frac{\\partial}{\\partial t} + \\mathbf u \\cdot\\nabla."
}
]
| https://en.wikipedia.org/wiki?curid=1361116 |