id (string, 2–8 chars) | title (string, 1–130 chars) | text (string, 0–252k chars) | formulas (list, 1–823 items) | url (string, 38–44 chars)
---|---|---|---|---
6688374
|
United States Consumer Price Index
|
Statistics of the U.S. Bureau of Labor Statistics
The United States Consumer Price Index (CPI) is a family of various consumer price indices published monthly by the United States Bureau of Labor Statistics (BLS). The most commonly used indices are the CPI-U and the CPI-W, though many alternative versions exist for different uses. For example, the CPI-U is the most popularly cited measure of consumer inflation in the United States, while the CPI-W is used to index Social Security benefit payments.
Methodology.
Item coverage.
The CPI measures the monthly price change of a basket of consumption goods and services whose prices are borne directly by the consumer. Eight major categories of items are included in the CPI's coverage, each containing both goods and services: food and beverages; housing; apparel; transportation; medical care; recreation; education and communication; and other goods and services.
In line with this framework, the CPI excludes items such as life insurance, investment securities, financing costs, and house prices (though the consumption value of owned housing, as distinct from its purchase price, is included in the CPI), as these are considered investment items rather than consumption. Also excluded are income and property taxes, employer-provided benefits, and the portion of healthcare costs paid by insurance plans or government programs such as Medicare, since these prices are not borne directly by consumers. However, sales and excise taxes, out-of-pocket healthcare costs, and health insurance premiums paid by the consumer (including Medicare Part B) are all included in the CPI, because consumers bear these costs directly. Finally, the prices of illegal goods such as marijuana are not measured, and so are also excluded. Some items, such as pleasure boats and pleasure aircraft, do fall within the scope of the CPI but are impractical to price; for these items, the price change is imputed from a larger relevant category (e.g. pleasure vehicles), while the weight is measured as usual in the Consumer Expenditure Survey.
Population coverage and geographic sample.
The CPI-U measures inflation as experienced by a representative household in a metropolitan statistical area. Rural (non-metropolitan) households, farm households, military members, and the institutionalized (e.g. those in prisons or hospitals) are excluded from consideration; with this exclusion, the CPI-U covers about 93 percent of the US population. The items considered, the prices collected, and the locations where the prices are collected are all designed to represent the spending habits of such households.
The BLS divides the urban population into Primary Sampling Units (PSUs), equivalent to core-based statistical areas (CBSAs) from the 2010 United States census. Prices are measured in only 75 of these PSUs. 23 of them are known as self-representing PSUs, whose measured price changes apply only to that PSU. Of these, 21 are metropolitan statistical areas with a population greater than 2.5 million (such as the Detroit-Warren-Dearborn, MI Metropolitan Statistical Area), while the remaining two are Anchorage, Alaska and Honolulu, Hawaii (which represent all of the CBSAs in Alaska and in Hawaii, respectively). These self-representing PSUs represent 42 percent of the CPI-U target population. The remaining 52 sampled PSUs are either metropolitan areas or micropolitan statistical areas; price changes for all other PSUs are not measured in those PSUs themselves, but are imputed to be the price change of whichever of the 52 sampled PSUs is deemed equivalent. The clustering of all PSUs into such equivalence classes is called stratification; non-self-representing PSUs are stratified using a variant of the k-means clustering algorithm on four variables: latitude, longitude, median property value, and median household income. Within each stratum, the PSU actually priced was selected randomly. By sampling both large areas (self-representing PSUs) and smaller areas (non-self-representing PSUs), the CPI-U sample represents the full urban population of the United States, from areas with large populations (such as the Houston metropolitan area) to areas with small populations (such as the Paris, Texas micropolitan area).
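To make the stratification step concrete, here is a minimal sketch (not BLS code; the PSU names, variable values, and number of strata are invented) that clusters hypothetical non-self-representing PSUs on the four variables named above, using an ordinary k-means routine rather than the BLS variant:

```python
# Minimal sketch of PSU stratification with k-means (illustrative only; not the BLS variant).
# The PSU names, variable values, and number of strata below are hypothetical.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

psus = {
    "PSU_A": [33.7, -84.4, 290_000, 71_000],  # latitude, longitude, median property value, median household income
    "PSU_B": [35.2, -80.8, 260_000, 66_000],
    "PSU_C": [41.5, -81.7, 180_000, 55_000],
    "PSU_D": [29.4, -98.5, 220_000, 60_000],
}
X = StandardScaler().fit_transform(np.array(list(psus.values())))  # put the four variables on a common scale

strata = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)
for name, stratum in zip(psus, strata):
    print(name, "-> stratum", stratum)
# Within each stratum, one PSU would then be drawn at random and priced on behalf of the others.
```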
Measuring Prices.
Each month, BLS data collectors collect about 94,000 prices. For most items, these prices are collected by contacting the stores where the consumer purchases were made and measuring the price directly. This is usually done by visiting the outlet in person (about two-thirds of quotes), but sometimes by telephone or through the store's website: as of 2017 about 8 percent of price quotes were taken from an online store, roughly in line with the fraction of consumer retail spending that goes through e-commerce. Other specific items, such as airfares, are priced via alternative, industry-specific data sources. Most prices are quoted on a per-unit basis, though some commodities such as gasoline or food items are priced by unit weight.
Some items are quality adjusted, meaning that the measured prices are adjusted to remove any price change caused by a change in the quality or features of the item since the previous price was measured, a process called "hedonic regression" or "hedonic adjustment". This is necessary to ensure that a constant, fixed item with fixed characteristics is being priced every period; when such an item no longer exists because of technological change in the consumer product landscape, a price is imputed by taking a new, existing product and estimating what its price would have been if it had the characteristics of the old product. Each category of quality-adjusted item is associated with a set of characteristics that are priced and a model for the marginal value of each of those characteristics. For example, the price of a smartphone is adjusted to remove (estimated) price changes due to the number of cameras, storage space, physical size, and other attributes of the smartphone.
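To illustrate the hedonic-adjustment idea, here is a minimal sketch, assuming a simple log-price regression on a handful of invented smartphone characteristics; it is not the BLS specification, and all prices and coefficients are hypothetical:

```python
# Hedonic quality adjustment, minimal sketch (hypothetical data; not the BLS specification).
import numpy as np

# Columns: intercept, number of cameras, storage (GB), screen size (inches)
X = np.array([
    [1, 2,  64, 6.1],
    [1, 3, 128, 6.4],
    [1, 2, 128, 6.1],
    [1, 4, 256, 6.7],
    [1, 3, 256, 6.4],
], dtype=float)
log_price = np.log(np.array([499.0, 699.0, 549.0, 999.0, 849.0]))

# Estimate the marginal (log) value of each characteristic from a cross-section of observed models.
beta, *_ = np.linalg.lstsq(X, log_price, rcond=None)

# Suppose an old model (2 cameras, 64 GB) is replaced by a new one (3 cameras, 128 GB) priced at 649.
old = np.array([1, 2, 64, 6.1])
new = np.array([1, 3, 128, 6.1])
quality_ratio = np.exp(beta @ (new - old))   # estimated value of the feature upgrade
adjusted_new_price = 649.0 / quality_ratio   # price of the new model "as if" it had the old features
print(round(adjusted_new_price, 2))          # this, not 649, is compared with the old model's last price
```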
Measuring Prices of Owned Housing.
Beginning in 1983 for the CPI-U (and 1985 for the CPI-W), the pricing method for owned housing has used a framework called rental equivalence. Prior to this, the CPI measured home prices, monthly mortgage payments, property taxes, insurance, and maintenance. However, this method conflated the investment portion of owned housing (an investment financed via all of those expenditures) with the consumption portion (the flow of services, i.e. shelter, to the occupant household). In addition, new financial developments at the time meant that the data sources in the old method were becoming unrepresentative of consumers' total costs of housing. For example, mortgages with shorter durations or variable rates, and financing at below-bank rates, were increasingly not reflected in federal data. The previous method was also unable to account for changes in the quality of the sampled housing stock, while the CPI conceptual framework measures the price of a fixed-quality basket of goods.
The current method instead measures only the rental payments for rented units and, for owned housing, imputes the rental equivalence: the monthly amount the house would rent for if it were rented out instead of occupied by the owner. First, rental payments are priced and adjusted for depreciation of the property (via the age-bias regression model). Then, the "economic rent" is calculated, which adjusts for any changes in the structure or facilities. Next, a "pure rent" is calculated, which removes from the economic rent the actual provision of utilities such as electricity and gas; these are measured in a separate index, so that a rental contract's provided services are divided between shelter and utilities provision. Finally, the CPI for Rent measures the change in economic rents, because utilities are often provided in rental agreements, while the CPI for Owners' Equivalent Rent (OER) measures the change in pure rents.
Aggregation.
For each entry-level item (e.g. apples in the Philadelphia metropolitan area), many different price quotes are collected each month. To aggregate these individual price measurements into an index for the item-location combination, a geometric-means formula is usually used:
formula_0 where formula_1 is the growth rate of the index, formula_2 enumerates all relevant price measurements that are present in "both" months, formula_3 represents a fractional weight of the item as measured in a base period, formula_4 represents the price of item formula_2 as measured in the current month, and formula_5 represents the price of the same item as measured in the previous month. For some shelter services, some utilities and government fees, and medical services, a Laspeyres formula is used instead: formula_6 where formula_7 represents the price of the item in the base period. The geometric formula implicitly assumes that consumers exhibit substitution behavior among the various quoted items (such as apples purchased from one particular grocery store versus another grocery store), whereas the Laspeyres formula does not assume such substitution behavior; indeed, the Laspeyres items are not considered to be exactly uniform or substitutable with each other, even if they represent the same category of item in the same PSU.
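The two aggregation formulas above can be written out in a few lines. The sketch below applies the geometric-means and Laspeyres formulas exactly as stated, using made-up quotes and weights; it illustrates the arithmetic only and is not BLS production code:

```python
# Item-level aggregation, minimal sketch with hypothetical quotes and weights.
import numpy as np

w      = np.array([0.5, 0.3, 0.2])      # base-period weights of the matched quotes
p_prev = np.array([2.00, 3.50, 1.25])   # prices in the previous month
p_curr = np.array([2.10, 3.45, 1.30])   # prices in the current month
p_base = np.array([1.90, 3.40, 1.20])   # base-period prices (Laspeyres only)

# Geometric-means formula: ln G = sum_j w_j * ln(P_{j,t} / P_{j,t-1})
G_geometric = np.exp(np.sum(w * np.log(p_curr / p_prev)))

# Laspeyres formula: G = sum_j w_j (P_{j,t}/P_{j,b}) / sum_j w_j (P_{j,t-1}/P_{j,b})
G_laspeyres = np.sum(w * p_curr / p_base) / np.sum(w * p_prev / p_base)

print(round(G_geometric, 5), round(G_laspeyres, 5))  # one-month growth factors of the item-area index
```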
Weights.
Each item is given a weight, used in the above formulas, so that items on which consumers spend more are weighted higher (i.e. with more importance) than items on which consumers spend less. These weights derive from the Consumer Expenditure Survey, which collates how much money consumers spend on various consumption items. The various CPI programs (such as CPI-U and CPI-W) are designed to represent different groups of US consumers, and typically achieve this goal by using a different set of weights, derived from a different segment of the Consumer Expenditure Survey. For most programs, the weights are updated annually using the results of the previous year's Consumer Expenditure Survey, though prior to the 2023 data the weights were updated biennially (once every two years), and prior to the 1980s, the weights were updated approximately once per decade.
The different CPI indices.
CPI for All Urban Consumers.
Introduced in 1978, the CPI-U is the most widely used CPI measure in popular media and understanding. Its sampling is designed to represent the consumption baskets of residents of urban and metropolitan areas, which collectively account for over 90 percent of the US population.
CPI for urban wage earners and clerical workers (CPI-W).
The urban wage earner and clerical worker population consists of consumer units whose members are clerical workers, sales workers, craft workers, operatives, service workers, or laborers. (Excluded from this population are professional, managerial, and technical workers; the self-employed; short-term workers; the unemployed; and retirees and others not in the labor force.) More than half of the consumer unit's income has to be earned from the above occupations, and at least one of the members must be employed for 37 weeks or more in an eligible occupation. The Consumer Price Index for Urban Wage Earners and Clerical Workers (CPI-W) is a continuation of the historical index that was introduced after World War I for use in wage negotiation. As new uses were developed for the CPI, the need for a broader and more representative index became apparent. The Social Security Administration uses the CPI-W as the basis for its periodic COLA (cost-of-living adjustment).
Core CPI.
A Core CPI index is a CPI that excludes goods with high price volatility, typically food and energy, so as to gauge the more underlying, widespread, or fundamental inflation that affects broader sets of items. More specifically, food and energy prices are subject to large changes that often fail to persist and do not represent relative price changes. In many instances, large movements in food and energy prices arise because of supply disruptions such as drought or OPEC-led cutbacks in production. This metric was introduced by Arthur F. Burns in the early 1970s, when food and especially oil prices were quite volatile, as an inflation metric less subject to short-term shocks. Today, however, the Federal Reserve targets the personal consumption expenditures (PCE) price index, not Core CPI, primarily because the PCE covers a larger portion of the economy and so is a more general measure of price inflation than the CPI.
Chained CPI for all urban consumers (C-CPI-U).
This index applies to the same target population as the CPI-U, but the weights are updated each month. This allows the weights to evolve more gracefully with people's consumption patterns; the CPI-U weights, by contrast, were historically changed only in January of even-numbered years and held constant for the next two years (they are now updated annually, as noted above).
CPI for the elderly (CPI-E).
Since at least 1982, the BLS has also computed a consumer price index for the elderly to account for the fact that the consumption patterns of seniors are different from those of younger people. For the BLS, "elderly" means that the reference person or a spouse is at least 62 years of age; approximately 24 percent of all consumer units meet this definition. Individuals in this group consume roughly double the amount of medical care as all consumers in CPI-U or employees in CPI-W.
In January of each year, Social Security recipients receive a cost of living adjustment (COLA) "to ensure that the purchasing power of Social Security and Supplemental Security Income (SSI) benefits is not eroded by inflation. It is based on the percentage increase in the consumer price index for urban wage earners and clerical workers (CPI-W)".
However, from December 1982 through December 2011, the all-items CPI-E rose at an annual average rate of 3.1 percent, compared with increases of 2.9 percent for both the CPI-U and CPI-W. This suggests that the elderly have been losing purchasing power at the rate of roughly 0.2 (=3.1–2.9) percentage points per year.
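As a rough worked example of that gap, using only the figures quoted above (illustrative arithmetic, not an official calculation):

```latex
% Cumulative effect of 3.1% vs 2.9% average annual increases over the 29 years from 1982 to 2011.
\left(\frac{1.031}{1.029}\right)^{29} \approx 1.058
```

That is, the CPI-E rose roughly 6 percent more than the CPI-W over the whole period, which is the cumulative purchasing-power shortfall implied for benefits indexed to the CPI-W rather than the CPI-E.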
In 2003 Hobijn and Lagakos estimated that the social security trust fund would run out of money in 40 years using CPI-W and in 35 years using CPI-E.
History.
The Consumer Price Index was initiated during World War I, when rapid increases in prices, particularly in shipbuilding centers, made an index essential for calculating cost-of-living adjustments in wages. To provide appropriate weighting patterns, the index reflected the relative importance of goods and services purchased in 92 different industrial centers in 1917–1919. Periodic collection of prices was started, and in 1919 the Bureau of Labor Statistics began publication of separate indexes for 32 cities. Regular publication of a national index, the U.S. city average, began in 1921, and indexes were estimated back to 1913 using records of food prices.
Because people's buying habits had changed substantially, a new study was made covering expenditures in the years 1934–1936, which provided the basis for a comprehensively revised index introduced in 1940. During World War II, when many commodities were scarce and goods were rationed, the index weights were adjusted temporarily to reflect these shortages. In 1951, the BLS again made interim adjustments, based on surveys of consumer expenditures in seven cities between 1947 and 1949, to reflect the most important effects of immediate postwar changes in buying patterns. The index was again revised in 1953 and 1964.
In 1978, the index was revised to reflect the spending patterns based upon the surveys of consumer expenditures conducted in 1972–1974. A new and expanded 85-area sample was selected based on the 1970 Census of Population. The Point-of-Purchase Survey (POPS) was also introduced. POPS eliminated reliance on outdated secondary sources for screening samples of establishments or outlets where prices are collected. A second, more broadly based CPI for All Urban Consumers, the CPI-U, was also introduced. The CPI-U took into account the buying patterns of professional and salaried workers, part-time workers, the self-employed, the unemployed, and retired people, in addition to wage earners and clerical workers.
Perceived errors in estimation.
Perceived overestimation of inflation.
In 1995, the Senate Finance Committee appointed a commission to study the CPI's ability to estimate inflation. The commission found in its study that the index overestimated the increase in the cost of living by between 0.8 and 1.6 percentage points per year.
If CPI overestimates inflation, then claims that real wages have fallen over time could be unfounded. An overestimation of only a few tenths of a percentage point per annum compounds dramatically over time. In the 1970s and 80s the federal government began indexing several transfers and taxes including social security (see below "Uses of the CPI"). The overestimation of CPI would imply that the increases in these taxes and transfers have been greater than necessary, meaning the government and taxpayers have overpaid for them.
The Commission concluded that more than half of the overestimation was due to slow adjustments in the index to new products or changes in product quality. At that time, the weights for indices like the CPI-U and CPI-W were updated only once per decade; they are now updated far more frequently (see "Weights" above). However, even with more frequent updates, the CPI-U and CPI-W might still be excessively slow in responding to new technologies. For example, by 1996 there were over 47 million cellular phone users in the United States, but the weights for the CPI did not account for this new product until 1998. This new product lowered the cost of communication when away from home. The commission recommended that the BLS update weights more frequently to prevent upward bias in the index from a failure to properly account for the benefits of new products.
Additional upward biases were said to come from several other sources. Fixed weights do not accommodate consumer substitution among commodities, such as buying more chicken when the price of beef increases. Because the CPI assumes that people continue to buy beef, it would increase even if people are buying chicken instead. However, this is by design: the CPI measures the change in expenses required for people to maintain the same standard of living. The Commission also found that 99% of all data were collected during the week, although an increasing share of purchases happens on weekends. Additional bias was said to stem from changes in retailing that were unaccounted for in the CPI.
Perceived underestimation of inflation.
Some critics believe, however, that inflation is being dramatically underestimated, both because of changes to the way the CPI is calculated and because energy and food price changes are excluded from the Federal Reserve's calculation of "core inflation". The second argument is unrelated to the CPI, except insofar as the calculation of the CPI has been modified in response to a perceived overstatement of inflation.
The Federal Reserve's policy of ignoring food and energy prices when making interest rate decisions is often confused with the measurement of the CPI by the Bureau of Labor Statistics. The BLS publishes both a headline CPI which "counts" food and energy prices, and also a CPI for "all items less food and energy", or "core" CPI. None of the prominent legislated uses of the CPI excludes food and energy. However, with regard to calculating inflation, the Federal Reserve no longer uses the CPI, preferring to use core PCE instead.
Some critics believe that changes in CPI calculation due to the Boskin Commission have led to dramatic cuts in inflation estimates. They believe that using pre-Boskin methods, which they also think are still used by most other countries, the current U.S. inflation is estimated to be around 7% per year. The BLS maintains that these beliefs are based on misunderstandings of the CPI. For example, the BLS has stated that changes made due to the introduction of the geometric mean formula to account for product substitution (one of the Boskin recommended changes) have lowered the measured rate of inflation by less than 0.3% per year, and the methods now used are commonly employed in the CPIs of developed nations.
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "\\ln{G} = \\sum_{j} w_j \\ln{\\frac{P_{j, t}}{P_{j, t-1}}}"
},
{
"math_id": 1,
"text": "G "
},
{
"math_id": 2,
"text": "j "
},
{
"math_id": 3,
"text": "w_j "
},
{
"math_id": 4,
"text": "P_{j, t} "
},
{
"math_id": 5,
"text": "P_{j, t-1} "
},
{
"math_id": 6,
"text": "G = \\frac{\\sum_j w_j \\frac{P_{j, t}}{P_{j, b}}}{\\sum_j w_j \\frac{P_{j, t-1}}{P_{j, b}}}"
},
{
"math_id": 7,
"text": "P_{j, b}"
}
] |
https://en.wikipedia.org/wiki?curid=6688374
|
66885987
|
Jeonggamnok
|
Collection of prophetic writings
The Jeonggamnok (정감록; 鄭鑑錄) (also known as "Chŏng Kam nok") is a compilation of prophetic works which foretold the downfall of the Korean Yi (Joseon) dynasty and the establishment of a new utopian dynasty by a messianic "True Man" with the surname Jeong (Chŏng). Ideologies expressed in this work inspired many insurrectionist movements and claims of political legitimacy from the Joseon period to the present. The contents circulated orally and in handwritten manuscripts from the middle of the Joseon period onward. They were copied and recopied many times, and the copyists often updated the text to conform to the latest events and trends. Historical compilations and manuscripts related to the Jeonggamnok are stored at the Kyujanggak Archive.
Narrowly construed "Jeonggam Record".
Nowadays, "Jeonggamnok" is the name of a large corpus, composed of numerous works, most from the late 19th and early 20th centuries. However, some of the texts may have been written as early as 1390. Being targeted by a global ban during the late Joseon period, they have circulated underground, being hand-copied again and again. This process is actually under scholarly review and the current consensus is to use another name to design the eponymous text of this corpus, e.g. "Gam's revelations" (Gam Gyeol, 감결, 鑑訣) as done by Han Sung-Hoon.
These Revelations were written as a dialogue between two legendary characters, named Jeong Gam 정감(鄭鑑) and
Yi Sim 이심(李沁) (shortened as Jeong and Sim in §1, and as Gam and Sim in §26). In this Gamgyeol, the fall of the Yi dynasty is predicted. The Yi dynasty was to be succeeded by the Jeong (Chŏng) dynasty, destined to last 800 years. This would be accomplished by a messianic "True Man" (眞人 i.e. awakened) who would lead an army from a sea island. The Jeong dynasty would establish a nearly utopian political order, but it was not to be everlastingly utopian. In the end, like all dynasties, it was predicted to become weak and corrupt. It was to be followed by other dynasties (Jo, Beom and so on).
It is generally agreed that some elements of the text were written just after the Imjin War (1592–1598) and the Qing invasion (1636), because it contains after-the-fact "predictions" of these events.
Moreover, the fact that, circa 1750, "Jeonggamnok" referred precisely to that text rather than to a larger corpus can be inferred from various quotations in the Seungjeongwon Ilgi (the Diary of the Royal Secretariat of the Joseon dynasty). As emphasized by Han Sung-Hoon, King Jeongjo describes this source as "Questions and Answers" from start to end.
Broadly construed "Jeonggamnok corpus".
The Jeonggam Record addressed the grievances of the Korean people over the failure of the government to prevent foreign invasions and over the widespread corruption among the ruling class. Concurrently, other texts of the same kind appeared, often attributed to historical people.
Among these "secrets" (비결) are the purported prophecies of the Silla monk Doseon (827-898) §8, the Koryo-Choson monk Muhak (무학, 1327-1405) §5, and the Joseon period seers Nam Sago (1509-1571) §10,§11 and Yi Ji-ham (이지함, 李之菡, 1517-1578) §23,§24.
Taken together, they can be described as a Jeonggamnok galaxy of texts.
These "Jeonggamnok" prophecies appear to have played an important role in various revolutionary movements. Furthermore, many of the numerous rebellions against the throne in Joseon, over its five centuries, were justified with references to fortune-telling.
Consequently, there were attempts from the central power to suppress such works. One notable event in this regard was the order by King Sejo in 1458 that books of prophecy be collected and incinerated. Nevertheless, such works continued to circulate.
Those suspected of resistance to the government were interrogated and often forced to admit they had been influenced by some sort of prophecy. An early example of such an event occurred in 1739. Another is the 1782-12-10 art.3 entry of the Veritable Records of the Joseon Dynasty (Jeongjo sillok), which made it clear that the "Jeonggamnok" was banned.
Also, at that time it appears that copies in Korean script were circulating and were disseminated to groups by being read out loud.
The first full compilation (handwritten) of this galaxy of texts was by the Japanese scholar Ayukai Fusanoshin 鮎貝房之進 which he transcribed in 1913. The name chosen for this compilation was "Jeonggamnok", enlarging the meaning of the Gamgyeol's title.
For this work, Ayukai consulted manuscripts held by the Japanese Governor General of Korea. These are now part of the Kyujanggak Archive. His transcription was subsequently printed in Japanese by Hosoi Hajime in February 1923. The Japanese version, first distributed in Tokyo, was brought back to Korea but a Korean compilation by Kim Yongju (金用柱) came out two weeks after the Hosoi version was published and was far more popular in Korea.
According to Pratt, this period was the moment when these various elements were taken as an interrelated corpus.
The Hosoi compilation contained 35 titles; the Kim Yongju compilation contained 51 titles. The English edition provided by Jorgensen mostly follows Hosoi and contains 32 titles.
Obscure writing.
Most of the Jeonggamnok corpus was originally written in Chinese script and was not composed to be understood at first sight. In fact it was deliberately written in code. One of the ways the meaning was partially hidden was glyphomancy: the deconstruction of a Chinese character into elements to form other characters, or the combination of elements of characters to form a phrase, in a kind of cryptic crossword. For example:
士者橫冠 The gentleman will wear a hat,
砷人脫衣 A divine man will take off his clothes,
走遢橫己 ki will be attached to the edge of chu
聖諱加八 eight will be added to the name of the sage
can be deciphered as
The gentleman 士 will wear a hat formula_0 壬
A divine man 砷人 will take off his clothes formula_0 申
ki 己 will be attached to the edge of chu 走 formula_0 起
Eight 八 will be added to Confucius' name 丘 formula_0 兵
leading to 壬申起兵 i.e. "troops will be raised in the year of imsin". This interpretation was used during the 1812 rebellion led by Hong Gyeong-nae 홍경래 洪景來, to legitimate the movement (1812 was an imsin year). As noted by Jorgensen, any slight alteration of the text by a copyist would undermine any interpretation. The Hosoi text has 聖諱横入 instead of 聖諱加八, leading to "the sage will cross into".
Another method of partially hiding meaning was by use of allegorical references. Baker in his review of Jorgensen noted the following example: "where the high-flying dragon arrives, the fallen wild goose will have regrets" was interpreted to mean that rulers who have risen to the heights of power need to be careful lest they lose their throne and become filled with regret. However, some passages appear impenetrable, e.g., "in one pitcher, a heaven (paradise) will be built and the hunting horse still loves". Furthermore, much of the text includes far more arcane codes based on geomancy, divination, and the like.
Influences on Korean culture and history.
Joseon period.
The Korean scholar Kim Tak documented many instances in which the work was an important component of new religious and insurrectionist ideology, and Jorgensen referenced many of Kim Tak's textual interpretations in his English-language translation. Religious sects with various ideologies inspired by the "Jeonggamnok" include: "Bocheongyo" (Poch'ŏn'gyo), Jeungsangyo (Chŭngsan'gyo), Baekbaekkyo (Paekpaekkyo), and Cheongnimgyo (Ch'ŏngnimgyo).
The Veritable Records of the Joseon Dynasty (Jeongjo sillok) explicitly mentions the so-called Mun Inbang treason case (Jeongjo 1782). The conspirators, led by Mun Inbang, tried to incite an insurrection by "deceiving the people" through dissemination of the "Jeonggamnok".
The Hong Gyeong-nae (Hong Kyŏngnae) rebellion (December 1811 to April 1812) was one of the largest and most serious during the Yi dynasty up to that point. It was fueled by deep popular resentment of the corrupt rulers. Its ideology took inspiration from the "Jeonggamnok", in its claim that the True Man Jeong would lead an army to establish a new dynasty; Hong Gyeong-nae's propaganda claimed that their army was his vanguard force. In preparation for the rebellion the instigators spread the "song foretelling the future", which had lines nearly identical to the "gentleman will wear a hat" text of the "Jeonggamnok" quoted above. Geomancy was a key element of the "Jeonggamnok", and Hong Gyeong-nae, one of the chief leaders of the rebellion, was a professional geomancer from Pyongan province who claimed that the gravesite he had chosen for his father was a very auspicious site that would protect him. In the end the rebellion he instigated was brutally put down. Hong Gyeong-nae was shot and killed in the fighting, along with most other leaders, who either died in battle or were captured and executed. Thousands of others were also arrested and executed, including boys as young as 10. Nevertheless, the rebellion provided momentum for other popular armed uprisings in different parts of Korea seeking a more just society.
Choe Je-u (Ch'oe Cheu) (1824-1864) was the founder of the Donghak religion (Eastern Learning), which opposed "Western Learning" (Catholicism). In a section of his book titled "Ch'oe Cheu, the Tonghak religion, and the Chong Kam nok", Jorgensen noted that Choe Je-u was familiar with the Jeonggamnok and that passages in his writings were quite similar to those found there. At the time, Yi dynasty officials were trying to eliminate Catholicism from Korea. Due to the textual similarities with the "Jeonggamnok" and his use of the Catholic translation for the word God, the authorities became suspicious of Donghak. Choe Je-u and other leaders were arrested and executed and the Donghak religion was banned. These actions further inflamed the peasant followers of the religion and helped to instigate the Donghak Revolution.
Colonial period.
The Japanese considered the "Jeonggamnok" as an example of what they viewed as the backward, superstitious nature of the Korean people. They initially promoted its distribution because it seemed to them to condone their overthrow of the Yi dynasty. However, the Korean people continued to be inspired by its revolutionary ideology which led to acts of resistance (many incited by religious sects) and these movements began to alarm Japanese officials.
Among the religious sects inspired by the "Jeonggamnok", the Cheongnimgyo (founded in 1900) was of greatest concern. Its leader had predicted that Japanese rule would end with a war in 1914, three years after annexation. During the March 1st Movement of 1919, many followers of the religious groups inspired by the "Jeonggamnok" moved to Mount Kyeryong, the predicted site of the new capital of the Jeong dynasty, and built villages there to prepare themselves for a "great calamity". Their expectations were based on text such as "the flow of blood becomes a river; for a hundred leagues to the south of the Han [River] there will be no sounds of chickens and dogs, and the shadows of people will be eliminated forever." Non-religious people also moved to the area, and as a result the population there doubled. Some newspapers dispatched undercover reporters to the area to investigate what were viewed as heretical sects. After the March 1st demonstrations, there was a crackdown on free speech. Editors of the "Jeonggamnok" were blamed by Japanese government officials, notwithstanding the fact that they themselves had initially promoted it. During the Pacific War the work helped fuel hope that the Japanese would be defeated and that Korean liberation was at hand.
A lesser-known aspect of the cultural clash between the Koreans and the Japanese was the Colored Clothes Campaign. Prior to colonization Korean people did not dye their clothes, perhaps because the cost was prohibitive. The Japanese claimed that this practice illustrated the weakness of the Korean people and initiated a campaign to force the wearing of colored clothes. Koreans were naturally reluctant to comply, and the Japanese then viewed the wearing of white as a symbol of resistance. Those Koreans with firm beliefs in the "Jeonggamnok" were apparently particularly resistant. Kim Sa-Ryang wrote a novel ("Deep in the Grass" 풀숲 깊숙이, 1940) about the Colored Clothes Campaign that was sympathetic to the Korean perspective. Once, while staying at a Buddhist temple, he observed a group of men and women in the front yard chanting. This is his report of what he heard: "We the white-wearing Joseon people cannot be saved without the power of Jeong-gam-rok. That book foretells, it's not difficult to understand it at all. According to "Jeonggamnok", if one wears white clothes and chants a spell . . . he or she could be saved...".
Post-liberation period.
Even after liberation from Japanese colonial rule, belief in the predictions of "Jeonggamnok" continued to be influential. Prominent politicians claimed to be destined for high office based on the texts. Those making such claims include: former Presidents Roh Tae-woo, Kim Young-sam and Kim Dae-jung, and a former governor of the Nationalist Party Chung Ju-young.
In popular culture, "Gyeogam Yurok", a book with a prophetic theme was published in 1977. Similar to "Jeonggamnok", it utilized the technique of after-the-fact "prediction" to help establish fake authenticity.
The "Jeonggamnok" is the basis of the novel "For the Emperor", by the Korean writer Yi Mun-yol who won the Republic of Korea Literature Prize for this work. The protagonist, always referred to as The Emperor, is a Don Quixote-esque hero who believes that he is ordained by heaven to found a new dynasty to replace the Yi (Joseon) dynasty and that his new dynasty would prosper for 800 years as predicted in the "Jeonggamnok". His dream is to be a ruler who will free the kingdom from foreign domination, military and cultural. The latter is presented as a seemingly impossible task, a struggle that would require "madness" to sustain for a lifetime. Sol Sun-bung, author of the preface to his English translation, noted that although the Emperor's dream of becoming a ruler of the people failed in a practical sense, nonetheless at his death, he achieves "greater eminence by transcending all worldly preoccupations".
Ten superior sites of refuge.
According to several texts of the Jeonggamnok, the sipseungji 十勝地 are ten places where one can live in peace and take refuge from hunger and war. The modern place names here are from the Chosun Ilbo, and the comments are from the Nam Sago Secret as translated by Jorgensen. (On the accompanying map, the blue line is 백두대간, Baekdu-daegan, the largest and longest mountain range on the Korean Peninsula, running from Mt. Baekdu to Mt. Jirisan.)
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "\\longrightarrow"
}
] |
https://en.wikipedia.org/wiki?curid=66885987
|
66889008
|
1 Chronicles 14
|
First Book of Chronicles, chapter 14
1 Chronicles 14 is the fourteenth chapter of the Books of Chronicles in the Hebrew Bible or the First Book of Chronicles in the Old Testament of the Christian Bible. The book is compiled from older sources by an unknown person or group, designated by modern scholars as "the Chronicler", and had the final shape established in late fifth or fourth century BCE. This chapter contains the successes of David as he established himself in Jerusalem and defeated the Philistines. The whole chapter belongs to the section focusing on the kingship of David (1 Chronicles 9:35 to 29:30).
Text.
This chapter was originally written in the Hebrew language. It is divided into 17 verses.
Textual witnesses.
Some early manuscripts containing the text of this chapter in Hebrew are of the Masoretic Text tradition, which includes the Aleppo Codex (10th century), and Codex Leningradensis (1008).
There is also a translation into Koine Greek known as the Septuagint, made in the last few centuries BCE. Extant ancient manuscripts of the Septuagint version include Codex Vaticanus (B; formula_0B; 4th century), Codex Sinaiticus (S; BHK: formula_0S; 4th century), Codex Alexandrinus (A; formula_0A; 5th century) and Codex Marchalianus (Q; formula_0Q; 6th century).
David Established at Jerusalem (14:1–7).
This passage emphasizes the greatness of David's reign for the sake of Israel after the transportation of the ark (whereas in 2 Samuel 5, the account was placed after the conquest of Jerusalem). The accumulation of wives and sons is seen as a 'positive sign of stature' in the books of Chronicles (1 Chronicles 25:5; 26:4–5; 2 Chronicles 11:18–23; 13:21; 14:3–7).
"Now Hiram king of Tyre sent messengers to David, and cedar trees, with masons and carpenters, to build him a house."
David Defeats the Philistines (14:8–17).
The passage has a structure similar to the parallel account in 2 Samuel 5 ('the advance of the Philistines, an enquiry to God with a positive response, and the Philistines' defeat'), with a change of place-name from "Geba" to "Gibeon" (verse 16), apparently to harmonize with another biblical passage that refers to these battles. The military successes had the astonishing effect of increasing David's fame (and name) internationally, denoting divine blessings for David.
"So they went up to Baal Perazim, and David defeated them there. Then David said, “God has broken through my enemies by my hand like a breakthrough of water.” Therefore they called the name of that place Baal Perazim."
"So David did as God commanded him, and they drove back the army of the Philistines from Gibeon as far as Gezer."
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": " \\mathfrak{G}"
}
] |
https://en.wikipedia.org/wiki?curid=66889008
|
66892182
|
1 Chronicles 15
|
First Book of Chronicles, chapter 15
1 Chronicles 15 is the fifteenth chapter of the Books of Chronicles in the Hebrew Bible or the First Book of Chronicles in the Old Testament of the Christian Bible. The book is compiled from older sources by an unknown person or group, designated by modern scholars as "the Chronicler", and had the final shape established in late fifth or fourth century BCE. This chapter contains the account of successful transportation of the Ark of the Covenant to the City of David in Jerusalem. The whole chapter belongs to the section focusing on the kingship of David (1 Chronicles 9:35 to 29:30).
Text.
This chapter was originally written in the Hebrew language. It is divided into 29 verses.
Textual witnesses.
Some early manuscripts containing the text of this chapter in Hebrew are of the Masoretic Text tradition, which includes the Aleppo Codex (10th century), and Codex Leningradensis (1008).
There is also a translation into Koine Greek known as the Septuagint, made in the last few centuries BCE. Extant ancient manuscripts of the Septuagint version include Codex Vaticanus (B; formula_0B; 4th century), Codex Sinaiticus (S; BHK: formula_0S; 4th century), Codex Alexandrinus (A; formula_0A; 5th century) and Codex Marchalianus (Q; formula_0Q; 6th century).
Preparing to move the Ark (15:1–13).
This section combines the parallel narrative with a list of participating priests and Levites (verses 4–10) to highlight their roles in carrying the ark, as prescribed in the Torah (Deuteronomy 10:8; 31:25). The three traditional priestly families, Gershom, Kohath, and Merari, are listed in a different order, together with the families of Hebron and Uzziel (sons of Kohath) and of Elizaphan. David announced his intentions to the head priests and Levites (verse 11), calling upon them to sanctify themselves (verse 12; cf. Exodus 19:14-15) while referring back to the failed first attempt (verse 13).
"And David called for Zadok and Abiathar the priests, and for the Levites, for Uriel, Asaiah, and Joel, Shemaiah, and Eliel, and Amminadab,"
"For because you did not do it the first time, the Lord our God broke out against us, because we did not consult Him about the proper order.”"
Moving the Ark to Jerusalem (15:14–29).
The passage includes details of Levitical duties (verses 16–24) and the Chronicler emphasizes that the relevant instructions were carried out carefully. Musical instruments are prominently described in this passage (cf. 2 Samuel 6:12–15) as well as in ritual liturgies throughout the Chronicles (1 Chronicles 16:42; 2 Chronicles 5:13; 7:6; 23:13; 34:12). The number of sacrifices corresponds with the contemporary practices (see e.g. Numbers 23:1; Ezekiel 45:23; Job 42:8).
"And it came to pass, as the ark of the covenant of the Lord came to the city of David, that Michal, the daughter of Saul looking out at a window saw king David dancing and playing: and she despised him in her heart."
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": " \\mathfrak{G}"
}
] |
https://en.wikipedia.org/wiki?curid=66892182
|
668934
|
Universal C*-algebra
|
In mathematics, a universal C*-algebra is a C*-algebra described in terms of generators and relations. In contrast to rings or algebras, where universal objects can be constructed as quotients of free rings or free algebras, a C*-algebra must be realizable as an algebra of bounded operators on a Hilbert space (via the Gelfand-Naimark-Segal construction), so the relations must prescribe a uniform bound on the norm of each generator. This means that, depending on the generators and relations, a universal C*-algebra may not exist. In particular, free C*-algebras do not exist.
C*-Algebra Relations.
There are several problems with defining relations for C*-algebras. One is that, as previously mentioned, because free C*-algebras do not exist, not every set of relations defines a C*-algebra. Another problem is that one would often want to include order relations, formulas involving continuous functional calculus, and spectral data among the relations. For that reason, we use a relatively roundabout way of defining C*-algebra relations. The basic motivation behind the following definitions is that we will define relations as the category of their representations.
Given a set "X", the "null C*-relation" on "X" is the category formula_0 with objects consisting of pairs ("j", "A"), where "A" is a C*-algebra and "j" is a function from "X" to "A" and with morphisms from ("j", "A") to ("k", "B") consisting of *-homomorphisms φ from "A" to "B" satisfying φ ∘ "j" = "k". A "C*-relation" on "X" is a full subcategory of formula_0 satisfying:
Given a C*-relation "R" on a set "X". then a function ι from "X" to a C*-algebra "U" is called a "universal representation" for "R" if
A C*-relation "R" has a universal representation if and only if "R" is compact.
Given a *-polynomial "p" on a set "X", we can define a full subcategory of formula_0 with objects ("j", "A") such that "p" ∘ "j" = 0. For convenience, we can call "p" a relation, and we can recover the classical concept of relations. Unfortunately, not every *-polynomial will define a compact C*-relation.
Alternative Approach.
Alternatively, one can use a more concrete characterization of universal C*-algebras that more closely resembles the construction in abstract algebra. Unfortunately, this restricts the types of relations that are possible. Given a set "G", a "relation" on "G" is a set "R" consisting of pairs ("p", η), where "p" is a *-polynomial on "G" and η is a non-negative real number. A "representation" of ("G", "R") on a Hilbert space "H" is a function ρ from "G" to the algebra of bounded operators on "H" such that formula_3 for all ("p", η) in "R". The pair ("G", "R") is called "admissible" if a representation exists and the direct sum of representations is also a representation. Then
formula_4
is finite and defines a seminorm satisfying the C*-norm condition on the free algebra on "G". The completion of the quotient of the free algebra by the ideal formula_5 is called the "universal C*-algebra" of ("G", "R").
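As a brief illustration of this construction (a standard textbook example, sketched here rather than taken from the passage above), consider a single generator u subject to the unitary relation formula_6. Every representation sends u to a unitary operator, so the supremum defining the seminorm equals 1 and the pair is admissible; the resulting universal C*-algebra is the algebra of continuous functions on the unit circle:

```latex
% Universal C*-algebra of one unitary generator (standard example, stated as a sketch).
C^*\langle u \mid u^*u = uu^* = 1 \rangle \;\cong\; C(\mathbb{T}), \qquad u \mapsto (z \mapsto z),
```

since the spectrum of a unitary operator lies in the unit circle and, by universality, every point of the circle can occur.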
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "\\mathcal{F}_{X}"
},
{
"math_id": 1,
"text": "\\prod_{i=1}^{n} f_i: X\\to \\prod_{i=1}^{n} A_i"
},
{
"math_id": 2,
"text": "\\prod_{i\\in I} f_i : X \\to \\prod A_i"
},
{
"math_id": 3,
"text": "\\lVert p\\circ \\rho(X) \\rVert \\leq \\eta"
},
{
"math_id": 4,
"text": "\\lVert z \\rVert_{u} = \\sup\\{ \\lVert \\rho(z)\\rVert \\colon \\rho \\text{ is a representation of } (G,R)\\}"
},
{
"math_id": 5,
"text": "\\{ z \\colon \\lVert z \\rVert_{u} = 0\\}"
},
{
"math_id": 6,
"text": "\\langle u \\mid u^*u = uu^* = 1\\rangle"
}
] |
https://en.wikipedia.org/wiki?curid=668934
|
66895075
|
Pestov–Ionin theorem
|
Theorem that curves of bounded curvature contain a unit disk
The Pestov–Ionin theorem in the differential geometry of plane curves states that every simple closed curve of curvature at most one encloses a unit disk.
History and generalizations.
Although a version of this was published for convex curves by Wilhelm Blaschke in 1916, it is named for German Gavrilovich Pestov and Vladimir Kuzmich Ionin, who published a version of this theorem in 1959 for non-convex doubly differentiable (formula_0) curves, the curves for which the curvature is well-defined at every point. The theorem has been generalized further, to curves of bounded average curvature (singly differentiable, and satisfying a Lipschitz condition on the derivative), and to curves of bounded convex curvature (each point of the curve touches a unit disk that, within some small neighborhood of the point, remains interior to the curve).
Applications.
The theorem has been applied in algorithms for motion planning. In particular it has been used for finding Dubins paths, shortest routes for vehicles that can move only in a forwards direction and that can turn left or right with a bounded turning radius. It has also been used for planning the motion of the cutter in a milling machine for pocket machining, and in reconstructing curves from scattered data points.
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "C^2"
}
] |
https://en.wikipedia.org/wiki?curid=66895075
|
66897509
|
Arc spring
|
Helical spring which is pre-curved in an arc shape
The arc spring (also known as a bow spring, curved spring, circular spring or "banana" spring) is a special form of coil spring which was originally developed for use in the dual-mass flywheel of internal combustion engine drive trains. The term "arc spring" is used to describe pre-curved or arc-shaped helical compression springs; they have an arc-shaped coil axis.
Function.
Like other technical springs, arc springs are based on the fundamental principle of storing mechanical work in the form of potential energy and the ability to release this energy again. The force is applied through the ends of the spring. A torque formula_0 can be transmitted around an axis via the force formula_1, directed along the helical axis, and the lever arm formula_2 to the system center point. The wire of the arc spring is mainly subjected to torsional stress.
Support.
An arc spring requires suitable support to transmit torque. The support is usually provided from the outside in the form of an arcuate channel (sliding shell) or radially shaped support plates. This prevents buckling of the arc spring. Another result of this support is a hysteresis between the loading and unloading curves in the characteristic curve. This results from the friction of the spring on the radial support and is an intended effect to achieve damping in the system.
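The friction-induced hysteresis can be sketched with a deliberately simple model, assuming a linear torsional rate and a constant Coulomb friction torque from the sliding shell; all parameter values below are hypothetical, and real arc-spring characteristics are more involved:

```python
# Idealized arc-spring characteristic with friction hysteresis (hypothetical parameters).
import numpy as np

k_t   = 12.0                        # torsional spring rate, N*m per rad (made-up value)
T_f   = 3.0                         # friction torque from the sliding shell, N*m (made-up value)
theta = np.linspace(0.0, 0.8, 9)    # twist angle, rad

T_load   = k_t * theta + T_f        # loading branch: friction opposes winding up
T_unload = k_t * theta - T_f        # unloading branch: friction opposes release

# For this constant-friction model, the area enclosed by the two branches is simply 2*T_f*delta_theta,
# i.e. the energy dissipated per load/unload cycle (the intended damping effect).
hysteresis_work = 2.0 * T_f * (theta[-1] - theta[0])
print(round(T_load[-1], 2), round(T_unload[-1], 2), round(hysteresis_work, 2))
```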
Arc spring systems.
As with compression springs, spring systems can also be used for arc springs. The main designs are series and parallel connection. With these, single-stage or multi-stage spring characteristics can be achieved. In order to make optimum use of the available space, systems consisting of inner and outer arc springs are often used.
In addition, the spring characteristic can be influenced by other parameters such as the cross-sectional geometry of the wire, the coil diameter or the number of coils. CAD configurators, which generate a CAD model after entering certain parameters, can contribute to optimal design.
Applications.
The arc spring is suitable for static and quasi-static as well as dynamic applications. Examples include:
Materials and their standardization.
In principle, the spring steels used for ordinary coil springs can also be used for arc springs. These are:
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "M=F\\cdot r"
},
{
"math_id": 1,
"text": "F"
},
{
"math_id": 2,
"text": "r"
}
] |
https://en.wikipedia.org/wiki?curid=66897509
|
66903099
|
1 Chronicles 16
|
First Book of Chronicles, chapter 16
1 Chronicles 16 is the sixteenth chapter of the Books of Chronicles in the Hebrew Bible or the First Book of Chronicles in the Old Testament of the Christian Bible. The book is compiled from older sources by an unknown person or group, designated by modern scholars as "the Chronicler", and had the final shape established in late fifth or fourth century BCE. This chapter describes the last act of transporting the Ark of the Covenant into the City of David in Jerusalem and the great religious festival for the occasion. The whole chapter belongs to the section focusing on the kingship of David (1 Chronicles 9:35 to 29:30).
Text.
This chapter was originally written in the Hebrew language. It is divided into 43 verses.
Textual witnesses.
Some early manuscripts containing the text of this chapter in Hebrew are of the Masoretic Text tradition, which includes the Aleppo Codex (10th century), and Codex Leningradensis (1008).
There is also a translation into Koine Greek known as the Septuagint, made in the last few centuries BCE. Extant ancient manuscripts of the Septuagint version include Codex Vaticanus (B; formula_0B; 4th century), Codex Sinaiticus (S; BHK: formula_0S; 4th century), Codex Alexandrinus (A; formula_0A; 5th century) and Codex Marchalianus (Q; formula_0Q; 6th century).
The Ark placed in a tent (16:1–6).
Verses 1–3 in this section closely resemble the parallel account and here serve as an introduction to the festival to praise and thank God (cf. 2 Chronicles 20:26, 28; 29:30; 30:21, 27). After David successfully arranged to place the Ark inside the specially prepared tent, he designates certain Levites and priests to lead the musical service (verses 4–6; cf. 1 Chronicles 16:37).
"and Benaiah and Jahaziel the priests were to blow trumpets regularly before the ark of the covenant of God."
Verse 6.
The trumpets here are blown by the priests before the ark, although the term can also be used for secular trumpets in the music service (cf. verse 42).
David’s Psalm of Thanksgiving (16:7–36).
The festive psalm that David instructed the Levites to sing is a medley composed of parts (with variations) from some known psalms. At this time there could already have existed some form of the Book of Psalms as a 'liturgical collection', which may already have been 'attributed to David'.
The composition initially looks back at the history of events up to that point (verses 8–22; Psalm 105:1–15), then praises YHWH (verses 23–33; Psalm 96), and finally asks for deliverance from enemies (verses 34–36; Psalm 106:1, 47–48). The Chronicler refers to the foreign nations seven times (vv. 8, 20, 24, 26, 28, 31, 35; cf. also 'all the earth', v. 30) to show the greatness of YHWH (in contrast to other gods).
"Blessed be the Lord God of Israel for ever and ever."
"And all the people said, Amen, and praised the Lord."
David appoints worship leaders (16:37–43).
David appointed worship leaders to minister before the Ark of the Covenant in Jerusalem, and also for the tabernacle at Gibeon (verses 39–42). Although the regular ceremony in Gibeon is not mentioned in other parts of the Hebrew Bible, its historical authenticity is supported by the confirmation of its existence in 1 Kings 3:3–4. This is the first mention of the ark and the tabernacle being in two separate places; nevertheless, the ordinary sacrifices and services, "all that is written in the Law of the Lord" (verse 40; cf. Exodus 29:38-39; Numbers 28:3-4), were carefully observed on the original altar (Exodus 38:2) in the tabernacle, whereas other and special sacrifices evidently were offered in the presence of the ark. The tabernacle constructed in the wilderness was first stationed at Shiloh (Joshua 18:1; 1 Samuel 4:3, 4), then removed to Nob (1 Samuel 21:1; 1 Samuel 22:19) until the slaughter of the priests there by Doeg the Edomite at Saul's command, before this passage places it at Gibeon (cf. 1 Chronicles 21:29; 2 Chronicles 1:3). The uninterrupted and legitimate (sacrificial) services are portrayed in the Chronicles as spanning the entire period from the wilderness era, including the positioning of the tabernacle at Gibeon (underlined by its priests, musicians, and gatekeepers), until Solomon established the temple in Jerusalem.
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": " \\mathfrak{G}"
}
] |
https://en.wikipedia.org/wiki?curid=66903099
|
66904553
|
Cellular deconvolution
|
Set of computational techniques
Cellular deconvolution (also referred to as cell type composition or cell proportion estimation) refers to computational techniques aiming at estimating the proportions of different cell types in samples collected from a tissue. For example, samples collected from the human brain are a mixture of various neuronal and glial cell types (e.g. microglia and astrocytes) in different proportions, where each cell type has a diverse gene expression profile. Since most high-throughput technologies use bulk samples and measure the aggregated levels of molecular information (e.g. expression levels of genes) for all cells in a sample, the measured values would be an aggregate of the values pertaining to the expression landscape of different cell types. Therefore, many downstream analyses such as differential gene expression might be confounded by the variations in cell type proportions when using the output of high-throughput technologies applied to bulk samples. The development of statistical methods to identify cell type proportions in large-scale bulk samples is an important step for better understanding of the relationship between cell type composition and diseases.
Cellular deconvolution algorithms have been applied to a variety of samples collected from saliva, buccal, cervical, PBMC, brain, kidney, and pancreatic cells, and many studies have shown that estimating and incorporating the proportions of cell types into various analyses improves the interpretability of high-throughput omics data and reduces the confounding effects of cellular heterogeneity, also known as tissue heterogeneity, in functional analysis of omics data.
Mathematical Formulation.
Most cellular deconvolution algorithms consider input data in the form of a matrix formula_0, which represents some molecular information (e.g. gene expression data or DNA methylation data) measured over a group of formula_1 samples and formula_2 markers (e.g. genes or CpG sites). The goal of the algorithm is to use these data and return an output matrix formula_3, representing the proportions of formula_4 distinct cell types in each of the formula_1 samples. Some methods constrain the sum of each column of the formula_5 matrix to be less than or equal to one, so that the estimated proportions account for the composition of each sample (summing to less than one when some unknown cell types are present in the samples). Moreover, the values of the formula_5 matrix are assumed to be non-negative, as they pertain to proportions of cell types.
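Because the formula placeholders in this excerpt are not resolved, the model can be restated with generic symbols chosen here (they are not necessarily the article's own): with Y the marker-by-sample data matrix, S a marker-by-cell-type matrix of profiles, and P the cell-type-by-sample matrix of proportions, most methods assume

```latex
% Generic restatement of the deconvolution model (symbol names chosen for this sketch).
Y \approx S\,P, \qquad P_{ij} \ge 0, \qquad \sum_{i=1}^{k} P_{ij} \le 1 \quad \text{for each sample } j.
```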
Current strategies.
There are two broad categories of methods that aim to estimate the proportions of cell types in samples using some type of omics data (bulk gene expression or DNA methylation data). These approaches are labeled as reference-based (also called supervised) and reference-free (also called unsupervised) methods.
Reference-based methods.
Reference-based methods require an "a priori" defined reference matrix consisting of the expected values (also called profiles or signatures) of gene expression (or DNA methylation) for a group of genes (or CpG sites) known to be differentially expressed (or methylated) across the cell types. A reference matrix can be represented by a matrix formula_6, containing the expected value of each of the formula_2 markers (genes or CpG sites) for each of the formula_4 cell types known to be present in the samples. These references can be derived from external single-cell epigenomics or transcriptomics datasets generated for a group of samples similar (e.g. in terms of biological condition, sex and age) to the samples to which the deconvolution method will be applied. These methods use statistical approaches such as non-negative or constrained linear regression to dissect the contribution of each cell type to the aggregated bulk signal of each gene or CpG site. Constrained regression is the basis of many of the reference-based cellular deconvolution methods in the literature, which estimate the cell proportion values (formula_3) that maximize the similarity between formula_7 and formula_8. The performance of reference-based methods depends critically on the quality of the reference profiles.
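As an illustration of the constrained-regression idea, the sketch below estimates cell proportions for a single bulk sample from a known reference matrix using non-negative least squares; the function name, the toy data, and the renormalization step are illustrative assumptions rather than the recipe of any specific published method.

```python
import numpy as np
from scipy.optimize import nnls

def deconvolve_sample(x, H):
    """Estimate cell-type proportions for one bulk profile.

    x : (m,) measured bulk signal for m marker genes/CpGs
    H : (m, k) reference matrix of expected signals for k cell types
    Returns a length-k vector of non-negative proportions summing to <= 1.
    """
    w, _ = nnls(H, x)          # solve min ||H w - x||_2 subject to w >= 0
    total = w.sum()
    if total > 1.0:            # rescale so proportions sum to at most one
        w = w / total
    return w

# Toy example with k = 3 cell types and m = 5 markers (made-up numbers)
rng = np.random.default_rng(0)
H = rng.uniform(0.0, 10.0, size=(5, 3))
true_w = np.array([0.5, 0.3, 0.2])
x = H @ true_w + rng.normal(0.0, 0.1, size=5)   # noisy bulk mixture
print(deconvolve_sample(x, H))                  # approximately [0.5, 0.3, 0.2]
```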
Construction of reference profiles.
There are a variety of approaches for isolating different cell types in order to measure their gene expression or DNA methylation levels for use as references in deconvolution algorithms. Earlier approaches used cell sorting techniques such as FACS (fluorescence-activated cell sorting), based on flow cytometry, which separates populations of cells belonging to different cell types based on their sizes, morphologies (shapes), and surface protein expression. With the advance of single-cell technologies, newer approaches have started to incorporate references for cell types measured at single-cell resolution, obtained either for a subset of subjects in the study or for external subjects from a similar biological condition.
Reference-free methods.
Reference-free methods do not need the reference profiles of cell-type-specific genes (or CpGs), although they might still require the identity (name) of cell-type-specific genes (or CpGs). These methods can be considered a modification of reference-based methods in which both formula_9 and formula_5 are unknown, and the goal is to estimate both matrices jointly so that the similarity between formula_7 and formula_8 is maximized. Many of the reference-free methods are based on the mathematical framework of non-negative matrix factorization, which imposes a non-negativity constraint on the elements of formula_9 and formula_5. Additional constraints, such as the assumption of orthogonality between the columns of formula_9, might be incorporated to improve the interpretability of the results and prevent overfitting.
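A minimal sketch of this joint estimation, using the non-negative matrix factorization implementation in scikit-learn; the number of cell types must be supplied by the analyst, and the column renormalization is an illustrative convention rather than part of any particular published algorithm.

```python
import numpy as np
from sklearn.decomposition import NMF

def reference_free_deconvolution(X, k):
    """Jointly estimate reference profiles H (m x k) and proportions W (k x n)
    from a bulk data matrix X (m markers x n samples) via NMF."""
    model = NMF(n_components=k, init="nndsvda", max_iter=500, random_state=0)
    H = model.fit_transform(X)                   # (m, k) estimated cell-type profiles
    W = model.components_                        # (k, n) estimated loadings per sample
    W = W / W.sum(axis=0, keepdims=True)         # normalize columns to proportions
    return H, W

# Toy usage with simulated, non-negative data (numbers are made up)
rng = np.random.default_rng(1)
H_true = rng.uniform(0, 10, size=(50, 3))
W_true = rng.dirichlet(np.ones(3), size=20).T    # (3, 20) true proportions
X = np.clip(H_true @ W_true + rng.normal(0, 0.05, size=(50, 20)), 0.0, None)
H_est, W_est = reference_free_deconvolution(X, k=3)
print(W_est.shape)                               # (3, 20)
```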
Advantages and limitations.
Advantages.
In silico cell-type level resolution.
The advance of single-cell technologies enables the profiling of each individual cell in a sample, which helps elucidate the issue of cellular heterogeneity by measuring the proportions of different cells in samples. Even though the quality of single-cell profiling technologies has been rising in recent years, these technologies are still costly, which limits their application to large populations of samples. Single-cell technologies such as single-cell transcriptomic methods also tend to have higher error rates due to factors such as frequent dropout events. Cellular deconvolution methods provide a robust and cost-effective "in silico" alternative for understanding samples at cell-type-level resolution, by relying on single-cell information from only a small subset of cells in the sample, on reference profiles generated by external sources, or even on no reference profile at all.
(Re)analysis of old data.
There are large amounts of older bulk data from studies concerning various diseases and biological conditions. These datasets are important resources for studying rare diseases, for long follow-up studies, and for samples and tissues that are difficult to extract. Since the biological samples for many of these studies are no longer available or accessible, reprofiling the data using single-cell technologies might not be possible. The invention of more advanced cellular deconvolution methods gives researchers the opportunity to return to old omics studies, reanalyze their datasets, and scrutinize their findings.
Limitations.
Reliability of reference.
Reference-based approaches rely on the availability of accurate references to estimate cell proportions. A discrepancy between the biology of the samples underlying the references and the samples for which the cell proportions are being estimated can introduce bias into the estimated cell proportions. Studies have shown that using references obtained from samples whose phenotypes (such as age, gender, and disease status) differ from those of the population of interest reduces the performance of reference-based methods to levels below that of their reference-free counterparts.
Lack of reference for rare, unknown, or uncharacterized cell types.
Reference-based approaches assume the existence of prior knowledge on the types of cells existing in a sample. Therefore, these methods may fail to perform accurately when the data includes rare or otherwise unknown cell types with no references incorporated in the algorithm. For example, cancer tumors consist of heterogeneous mixtures of various healthy cells of different types such as immune cells and cells related to affected tissues in addition to tumor cells. Although it might be possible to provide references for the immune cells, we do not usually have access to references or signatures for cancer cells due to the unique patterns of mutations and distributions of molecular information in each individual. These situations have been addressed in some studies under the label of deconvolution methods with partial reference availability.
Applications.
Relationship between cell proportions and phenotypes.
Studies have shown that the proportions of different cell types can correlate with various phenotypes such as diseases. For example, the proportions of parathyroid oxyphil cells in samples collected from the parathyroid gland of groups of patients show a significant correlation with the presence of clinical characteristics of chronic kidney disease (CKD). Another study, applying cellular deconvolution algorithms to gene expression data of Alzheimer's patients, found that patients with lower proportions of neuronal cells in samples collected from their cerebral cortex are more likely to show the clinical characteristics of dementia. Cellular deconvolution algorithms could thus enable researchers to investigate the interactions between cell proportions and various diseases or biological phenotypes.
Dissecting the confounding effects of cell proportions in EWAS and TWAS studies.
Epigenome-wide association studies (EWAS) and transcriptome-wide association studies (TWAS) aim to find molecular markers, such as genes or methylation CpG sites, whose expression or methylation levels show significant correlations with a biological phenotype of interest such as a disease. Since the proportions of cell types in samples vary and might show significant correlations with the disease or phenotype of interest, these correlations may confound the functional relationships between genes or CpG sites and the disease or phenotype under study. For example, studies aimed at finding genes involved in Alzheimer's disease may end up selecting genes that are exclusively expressed in neurons and therefore have lower expression levels in Alzheimer's patients simply because of compositional changes of cell types during neurodegeneration. Such genes are not actionable targets for the treatment of Alzheimer's, since they are not causally involved in the biological mechanism underlying Alzheimer's disease, but are only brought up by the confounding effects of cell types.
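To illustrate how estimated cell proportions can be used to reduce this confounding, the sketch below includes them as covariates in a per-gene regression; the use of ordinary least squares and all variable names are illustrative assumptions, not a prescription taken from any specific EWAS/TWAS pipeline.

```python
import numpy as np
import statsmodels.api as sm

def test_gene(expression, phenotype, cell_proportions):
    """Association test for one gene, adjusting for cell composition.

    expression       : (n,) expression of the gene across n bulk samples
    phenotype        : (n,) disease status or trait of interest
    cell_proportions : (n, k) estimated proportions of k cell types per sample
    Returns the phenotype coefficient and its p-value after adjustment.
    """
    # Drop one cell type to avoid collinearity (proportions sum to ~1)
    covariates = cell_proportions[:, :-1]
    design = sm.add_constant(np.column_stack([phenotype, covariates]))
    fit = sm.OLS(expression, design).fit()
    return fit.params[1], fit.pvalues[1]
```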
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "X_{m\\times n}"
},
{
"math_id": 1,
"text": "n"
},
{
"math_id": 2,
"text": "m"
},
{
"math_id": 3,
"text": "W_{k\\times n}"
},
{
"math_id": 4,
"text": "k"
},
{
"math_id": 5,
"text": "W"
},
{
"math_id": 6,
"text": "H_{m\\times k}"
},
{
"math_id": 7,
"text": "HW^T"
},
{
"math_id": 8,
"text": "X"
},
{
"math_id": 9,
"text": "H"
}
] |
https://en.wikipedia.org/wiki?curid=66904553
|
6690773
|
Compact operator on Hilbert space
|
In the mathematical discipline of functional analysis, the concept of a compact operator on Hilbert space is an extension of the concept of a matrix acting on a finite-dimensional vector space; in Hilbert space, compact operators are precisely the closure of finite-rank operators (representable by finite-dimensional matrices) in the topology induced by the operator norm. As such, results from matrix theory can sometimes be extended to compact operators using similar arguments. By contrast, the study of general operators on infinite-dimensional spaces often requires a genuinely different approach.
For example, the spectral theory of compact operators on Banach spaces takes a form that is very similar to the Jordan canonical form of matrices. In the context of Hilbert spaces, a square matrix is unitarily diagonalizable if and only if it is normal. A corresponding result holds for normal compact operators on Hilbert spaces. More generally, the compactness assumption can be dropped. As stated above, the techniques used to prove results, e.g., the spectral theorem, in the non-compact case are typically different, involving operator-valued measures on the spectrum.
Some results for compact operators on Hilbert space will be discussed, starting with general properties before considering subclasses of compact operators.
Definition.
Let formula_0 be a Hilbert space and formula_1 be the set of bounded operators on "formula_0". Then, an operator formula_2 is said to be a compact operator if the image of each bounded set under formula_3 is relatively compact.
Some general properties.
We list in this section some general properties of compact operators.
If "X" and "Y" are separable Hilbert spaces (in fact, "X" Banach and "Y" normed will suffice), then "T" : "X" → "Y" is compact if and only if it is sequentially continuous when viewed as a map from "X" with the weak topology to "Y" (with the norm topology). (See , and note in this reference that the uniform boundedness will apply in the situation where "F" ⊆ "X" satisfies (∀φ ∈ Hom("X", "K")) sup{"x**"(φ) = φ("x") : "x"} < ∞, where "K" is the underlying field. The uniform boundedness principle applies since Hom("X", "K") with the norm topology will be a Banach space, and the maps "x**" : Hom("X","K") → "K" are continuous homomorphisms with respect to this topology.)
The family of compact operators is a norm-closed, two-sided, *-ideal in "L"("H"). Consequently, a compact operator "T" cannot have a bounded inverse if "H" is infinite-dimensional. If "ST" = "TS" = "I", then the identity operator would be compact, a contradiction.
If sequences of bounded operators "Bn" → "B", "Cn" → "C" in the strong operator topology and "T" is compact, then formula_4 converges to formula_5 in norm. For example, consider the Hilbert space formula_6 with standard basis {"en"}. Let "Pm" be the orthogonal projection on the linear span of {"e"1, ..., "em"}. The sequence {"Pm"} converges to the identity operator "I" strongly but not uniformly. Define "T" by formula_7 "T" is compact, and, as claimed above, "PmT" → "IT" = "T" in the uniform operator topology: for all "x",
formula_8
Notice each "Pm" is a finite-rank operator. Similar reasoning shows that if "T" is compact, then "T" is the uniform limit of some sequence of finite-rank operators.
By the norm-closedness of the ideal of compact operators, the converse is also true.
The quotient C*-algebra of "L"("H") modulo the compact operators is called the Calkin algebra, in which one can consider properties of an operator up to compact perturbation.
Compact self-adjoint operator.
A bounded operator "T" on a Hilbert space "H" is said to be self-adjoint if "T" = "T*", or equivalently,
formula_9
It follows that ⟨"Tx", "x"⟩ is real for every "x" ∈ "H", thus eigenvalues of "T", when they exist, are real. When a closed linear subspace "L" of "H" is invariant under "T", then the restriction of "T" to "L" is a self-adjoint operator on "L", and furthermore, the orthogonal complement "L"⊥ of "L" is also invariant under "T". For example, the space "H" can be decomposed as the orthogonal direct sum of two "T"–invariant closed linear subspaces: the kernel of "T", and the orthogonal complement (ker "T")⊥ of the kernel (which is equal to the closure of the range of "T", for any bounded self-adjoint operator). These basic facts play an important role in the proof of the spectral theorem below.
The classification result for Hermitian "n" × "n" matrices is the spectral theorem: If "M" = "M*", then "M" is unitarily diagonalizable, and the diagonalization of "M" has real entries. Let "T" be a compact self-adjoint operator on a Hilbert space "H". We will prove the same statement for "T": the operator "T" can be diagonalized by an orthonormal set of eigenvectors, each of which corresponds to a real eigenvalue.
Spectral theorem.
Theorem For every compact self-adjoint operator "T" on a real or complex Hilbert space "H", there exists an orthonormal basis of "H" consisting of eigenvectors of "T". More specifically, the orthogonal complement of the kernel of "T" admits either a finite orthonormal basis of eigenvectors of "T", or a countably infinite orthonormal basis {"en"} of eigenvectors of "T", with corresponding eigenvalues {"λn"} ⊂ R, such that "λn" → 0.
In other words, a compact self-adjoint operator can be unitarily diagonalized. This is the spectral theorem.
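Concretely, with respect to such an orthonormal family of eigenvectors, "T" acts diagonally; the display below is a standard restatement of the theorem, added here for illustration.

```latex
% Diagonal action of a compact self-adjoint operator T:
% {e_n} is the orthonormal family of eigenvectors of T with eigenvalues {\lambda_n},
% and T vanishes on its kernel, so for every x in H
Tx \;=\; \sum_{n} \lambda_n \,\langle x, e_n\rangle\, e_n .
```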
When "H" is separable, one can mix the basis {"en"} with a countable orthonormal basis for the kernel of "T", and obtain an orthonormal basis {"fn"} for "H", consisting of eigenvectors of "T" with real eigenvalues {"μn"} such that "μn" → 0.
Corollary For every compact self-adjoint operator "T" on a real or complex separable infinite-dimensional Hilbert space "H", there exists a countably infinite orthonormal basis {"fn"} of "H" consisting of eigenvectors of "T", with corresponding eigenvalues {"μn"} ⊂ R, such that "μn" → 0.
The idea.
Let us discuss first the finite-dimensional proof. Proving the spectral theorem for a Hermitian "n" × "n" matrix "T" hinges on showing the existence of one eigenvector "x". Once this is done, Hermiticity implies that both the linear span and orthogonal complement of "x" (of dimension "n" − 1) are invariant subspaces of "T". The desired result is then obtained by induction for formula_10.
The existence of an eigenvector can be shown in (at least) two alternative ways:
Note. In the finite-dimensional case, part of the first approach works in much greater generality; any square matrix, not necessarily Hermitian, has an eigenvector. This is simply not true for general operators on Hilbert spaces. In infinite dimensions, it is also not immediate how to generalize the concept of the characteristic polynomial.
The spectral theorem for the compact self-adjoint case can be obtained analogously: one finds an eigenvector by extending the second finite-dimensional argument above, then applies induction. We first sketch the argument for matrices.
Consider the real-valued function "f"("x") = ⟨"Tx", "x"⟩. Since the closed unit sphere "S" in R2"n" is compact and "f" is continuous, "f"("S") is compact on the real line; therefore "f" attains a maximum on "S", at some unit vector "y". By Lagrange's multiplier theorem, "y" satisfies
formula_11
for some λ. By Hermiticity, "Ty" = λ"y".
Alternatively, let "z" ∈ C"n" be any vector. Notice that if a unit vector "y" maximizes ⟨"Tx", "x"⟩ on the unit sphere (or on the unit ball), it also maximizes the Rayleigh quotient:
formula_12
Consider the function:
formula_13
By calculus, "h"′(0) = 0, i.e.,
formula_14
Define:
formula_15
After some algebra the above expression becomes (Re denotes the real part of a complex number)
formula_16
But "z" is arbitrary, therefore "Ty" − "my" = 0. This is the crux of proof for spectral theorem in the matricial case.
Note that while the Lagrange multipliers generalize to the infinite-dimensional case, the compactness of the unit sphere is lost. This is where the assumption that the operator "T" be compact is useful.
Details.
Claim If "T" is a compact self-adjoint operator on a non-zero Hilbert space "H" and
formula_17
then "m"("T") or −"m"("T") is an eigenvalue of "T".
If "m"("T") = 0, then "T" = 0 by the polarization identity, and this case is clear. Consider the function
formula_18
Replacing "T" by −"T" if necessary, one may assume that the supremum of "f" on the closed unit ball "B" ⊂ "H" is equal to "m"("T") > 0. If "f" attains its maximum "m"("T") on "B" at some unit vector "y", then, by the same argument used for matrices, "y" is an eigenvector of "T", with corresponding eigenvalue λ = ⟨"λy", "y"⟩ = ⟨"Ty", "y"⟩ = "f"("y") = "m"("T").
By the Banach–Alaoglu theorem and the reflexivity of "H", the closed unit ball "B" is weakly compact. Also, the compactness of "T" means (see above) that "T" : "H" with the weak topology → "H" with the norm topology is continuous. These two facts imply that "f" is continuous on "B" equipped with the weak topology, and "f" therefore attains its maximum "m"("T") on "B" at some "y" ∈ "B". By maximality, formula_19 which in turn implies that "y" also maximizes the Rayleigh quotient "g"("x") (see above). This shows that "y" is an eigenvector of "T", and ends the proof of the claim.
Note. The compactness of "T" is crucial. In general, "f" need not be continuous for the weak topology on the unit ball "B". For example, let "T" be the identity operator, which is not compact when "H" is infinite-dimensional. Take any orthonormal sequence {"yn"}. Then "yn" converges to 0 weakly, but lim "f"("yn") = 1 ≠ 0 = "f"(0).
Let "T" be a compact operator on a Hilbert space "H". A finite (possibly empty) or countably infinite orthonormal sequence {"en"} of eigenvectors of "T", with corresponding non-zero eigenvalues, is constructed by induction as follows. Let "H"0 = "H" and "T"0 = "T". If "m"("T"0) = 0, then "T" = 0 and the construction stops without producing any eigenvector "en". Suppose that orthonormal eigenvectors "e"0, ..., "e""n" − 1 of "T" have been found. Then "En" := span("e"0, ..., "e""n" − 1) is invariant under "T", and by self-adjointness, the orthogonal complement "Hn" of "E""n" is an invariant subspace of "T". Let "Tn" denote the restriction of "T" to "Hn". If "m"("Tn") = 0, then "Tn" = 0, and the construction stops. Otherwise, by the "claim" applied to "Tn", there is a norm one eigenvector "en" of "T" in "Hn", with corresponding non-zero eigenvalue λ"n" = ± "m"("Tn").
Let "F" = (span{"en"})⊥, where {"en"} is the finite or infinite sequence constructed by the inductive process; by self-adjointness, "F" is invariant under "T". Let "S" denote the restriction of "T" to "F". If the process was stopped after finitely many steps, with the last vector "e""m"−1, then "F"= "Hm" and "S" = "Tm" = 0 by construction. In the infinite case, compactness of "T" and the weak-convergence of "en" to 0 imply that "Ten" = "λnen" → 0, therefore "λn" → 0. Since "F" is contained in "Hn" for every "n", it follows that "m"("S") ≤ "m"({"Tn"}) = |"λn"| for every "n", hence "m"("S") = 0. This implies again that "S" = 0.
The fact that "S" = 0 means that "F" is contained in the kernel of "T". Conversely, if "x" ∈ ker("T") then by self-adjointness, "x" is orthogonal to every eigenvector {"en"} with non-zero eigenvalue. It follows that "F" = ker("T"), and that {"en"} is an orthonormal basis for the orthogonal complement of the kernel of "T". One can complete the diagonalization of "T" by selecting an orthonormal basis of the kernel. This proves the spectral theorem.
A shorter but more abstract proof goes as follows: by Zorn's lemma, select "U" to be a maximal subset of "H" with the following three properties: all elements of "U" are eigenvectors of "T", they have norm one, and any two distinct elements of "U" are orthogonal. Let "F" be the orthogonal complement of the linear span of "U". If "F" ≠ {0}, it is a non-trivial invariant subspace of "T", and by the initial claim, there must exist a norm one eigenvector "y" of "T" in "F". But then "U" ∪ {"y"} contradicts the maximality of "U". It follows that "F" = {0}, hence span("U") is dense in "H". This shows that "U" is an orthonormal basis of "H" consisting of eigenvectors of "T".
Functional calculus.
If "T" is compact on an infinite-dimensional Hilbert space "H", then "T" is not invertible, hence σ("T"), the spectrum of "T", always contains 0. The spectral theorem shows that σ("T") consists of the eigenvalues {"λn"} of "T" and of 0 (if 0 is not already an eigenvalue). The set σ("T") is a compact subset of the complex numbers, and the eigenvalues are dense in σ("T").
Any spectral theorem can be reformulated in terms of a functional calculus. In the present context, we have:
Theorem. Let "C"(σ("T")) denote the C*-algebra of continuous functions on σ("T"). There exists a unique isometric homomorphism Φ : "C"(σ("T")) → "L"("H") such that Φ(1) = "I" and, if "f" is the identity function "f"("λ") = "λ", then Φ("f") = "T". Moreover, σ("f"("T")) = "f"(σ("T")).
The functional calculus map Φ is defined in a natural way: let {"en"} be an orthonormal basis of eigenvectors for "H", with corresponding eigenvalues {"λn"}; for "f" ∈ "C"(σ("T")), the operator Φ("f"), diagonal with respect to the orthonormal basis {"en"}, is defined by setting
formula_20
for every "n". Since Φ("f") is diagonal with respect to an orthonormal basis, its norm is equal to the supremum of the modulus of diagonal coefficients,
formula_21
The other properties of Φ can be readily verified. Conversely, any homomorphism Ψ satisfying the requirements of the theorem must coincide with Φ when "f" is a polynomial. By the Weierstrass approximation theorem, polynomial functions are dense in "C"(σ("T")), and it follows that Ψ = Φ. This shows that Φ is unique.
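As a simple illustration of the functional calculus (not part of the theorem's statement), take "f"("λ") = "λ"2; then Φ("f") agrees with "T"2 on the basis {"en"}, hence Φ("f") = "T"2.

```latex
% Worked special case of the functional calculus with f(\lambda) = \lambda^2:
\Phi(f)(e_n) \;=\; f(\lambda_n)\, e_n \;=\; \lambda_n^2\, e_n \;=\; T^2 e_n
\quad \text{for every } n, \qquad \text{hence} \quad \Phi(f) = T^2 .
```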
The more general continuous functional calculus can be defined for any self-adjoint (or even normal, in the complex case) bounded linear operator on a Hilbert space. The compact case described here is a particularly simple instance of this functional calculus.
Simultaneous diagonalization.
Consider a Hilbert space "H" (e.g. the finite-dimensional C"n") and a commuting set formula_22 of self-adjoint operators. Then, under suitable conditions, this set can be simultaneously (unitarily) diagonalized. "Viz.", there exists an orthonormal basis "Q" consisting of common eigenvectors for the operators — i.e.,
formula_23
<templatestyles src="Math_theorem/styles.css" />
Lemma — Suppose all the operators in formula_24 are compact. Then every closed non-zero formula_24-invariant sub-space formula_25 has a common eigenvector for formula_24.
<templatestyles src="Math_proof/styles.css" />Proof
"Case I:" all the operators have each exactly one eigenvalue on formula_26.
Take any formula_27 of unit length. It is a common eigenvector.
"Case II:" there is some operator formula_28 with at least 2 eigenvalues on formula_26 and let formula_29. Since "T" is compact and α is non-zero, we have formula_30 is a finite-dimensional (and therefore closed) non-zero formula_24-invariant sub-space (because the operators all commute with "T", we have for formula_31 and formula_32, that formula_33). In particular, since α is just one of the eigenvalues of formula_3 on formula_26, we definitely have formula_34. Thus we could in principle argue by induction over dimension, yielding that formula_35 has a common eigenvector for formula_24.
<templatestyles src="Math_theorem/styles.css" />
Theorem 1 — If all the operators in formula_24 are compact then the operators can be simultaneously (unitarily) diagonalized.
<templatestyles src="Math_proof/styles.css" />Proof
The following set
formula_36
is partially ordered by inclusion. This clearly has the Zorn property. So, taking "Q" to be a maximal member, if "Q" is a basis for the whole Hilbert space "H", we are done. If this were not the case, then letting formula_37, it is easy to see that this would be an formula_24-invariant non-trivial closed subspace; and thus, by the lemma above, therein would lie a common eigenvector for the operators (necessarily orthogonal to "Q"). But then there would be a proper extension of "Q" within P; a contradiction to its maximality.
<templatestyles src="Math_theorem/styles.css" />
Theorem 2 — If there is an injective compact operator in formula_24; then the operators can be simultaneously (unitarily) diagonalized.
<templatestyles src="Math_proof/styles.css" />Proof
Fix formula_38 compact injective. Then we have, by the spectral theory of compact symmetric operators on Hilbert spaces:
formula_39
where formula_40 is a discrete, countable subset of positive real numbers, and all the eigenspaces are finite-dimensional. Since formula_24 is a commuting set, all the eigenspaces are invariant. Since the operators restricted to the eigenspaces (which are finite-dimensional) are automatically all compact, we can apply Theorem 1 to each of these, and find orthonormal bases "Q"σ for the formula_41. Since "T"0 is symmetric, we have that
formula_42
is a (countable) orthonormal set. It is also, by the decomposition we first stated, a basis for "H".
<templatestyles src="Math_theorem/styles.css" />
Theorem 3 — If "H" is a finite-dimensional Hilbert space, and formula_22 is a commutative set of operators, each of which is diagonalisable, then the operators can be simultaneously diagonalized.
<templatestyles src="Math_proof/styles.css" />Proof
"Case I:" all operators have exactly one eigenvalue. Then any basis for "H" will do.
"Case II:" Fix formula_38 an operator with at least two eigenvalues, and let formula_43 so that formula_44 is a symmetric operator. Now let α be an eigenvalue of formula_44. Then it is easy to see that both:
formula_45
are non-trivial formula_46-invariant subspaces. By induction over dimension, there are linearly independent bases "Q"1, "Q"2 for the subspaces, which demonstrate that the operators in formula_46 can be simultaneously diagonalised on the subspaces. Clearly then formula_47 demonstrates that the operators in formula_24 can be simultaneously diagonalised.
Notice we did not have to directly use the machinery of matrices at all in this proof. There are other versions which do.
We can strengthen the above to the case where all the operators merely commute with their adjoint; in this case we remove the term "orthogonal" from the diagonalisation. There are weaker results for operators arising from representations, due to the Peter–Weyl theorem. Let "G" be a fixed locally compact Hausdorff group, and formula_48 (the space of square integrable measurable functions with respect to the unique-up-to-scale Haar measure on "G"). Consider the continuous shift action:
formula_49
Then if "G" were compact then there is a unique decomposition of "H" into a countable direct sum of finite-dimensional, irreducible, invariant subspaces (this is essentially diagonalisation of the family of operators formula_50). If "G" were not compact, but were abelian, then diagonalisation is not achieved, but we get a unique "continuous" decomposition of "H" into 1-dimensional invariant subspaces.
Compact normal operator.
The family of Hermitian matrices is a proper subset of matrices that are unitarily diagonalizable. A matrix "M" is unitarily diagonalizable if and only if it is normal, i.e., "M*M" = "MM*". Similar statements hold for compact normal operators.
Let "T" be compact and "T*T" = "TT*". Apply the "Cartesian decomposition" to "T": define
formula_51
The self-adjoint compact operators "R" and "J" are called the real and imaginary parts of "T," respectively. That "T" is compact implies that "T*" and, consequently, "R" and "J" are compact. Furthermore, the normality of "T" implies that "R" and "J" commute. Therefore they can be simultaneously diagonalized, from which follows the claim.
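The claim that "R" and "J" commute can be checked by a short computation (added here for illustration):

```latex
% Commutator of the real and imaginary parts of T:
RJ - JR
 \;=\; \frac{(T+T^*)(T-T^*) - (T-T^*)(T+T^*)}{4i}
 \;=\; \frac{T^*T - TT^*}{2i},
% which vanishes precisely when T is normal (T^*T = TT^*).
```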
A hyponormal compact operator (in particular, a subnormal operator) is normal.
Unitary operator.
The spectrum of a unitary operator "U" lies on the unit circle in the complex plane; it could be the entire unit circle. However, if "U" is identity plus a compact perturbation, "U" has only a countable spectrum, containing 1 and possibly, a finite set or a sequence tending to 1 on the unit circle. More precisely, suppose "U" = "I" + "C" where "C" is compact. The equations "UU*" = "U*U" = "I" and "C" = "U" − "I" show that "C" is normal. The spectrum of "C" contains 0, and possibly, a finite set or a sequence tending to 0. Since "U" = "I" + "C", the spectrum of "U" is obtained by shifting the spectrum of "C" by 1.
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "H"
},
{
"math_id": 1,
"text": "L(H)"
},
{
"math_id": 2,
"text": "T\\in L(H)"
},
{
"math_id": 3,
"text": "T"
},
{
"math_id": 4,
"text": "B_nTC_n^*"
},
{
"math_id": 5,
"text": "BTC^*"
},
{
"math_id": 6,
"text": "\\ell^2(\\mathbf{N}),"
},
{
"math_id": 7,
"text": "Te_n = \\tfrac{1}{n^2} e_n."
},
{
"math_id": 8,
"text": "\\left\\| P_m T x - T x \\right \\| \\leq \\left( \\frac{1}{m+1}\\right)^2 \\| x \\|."
},
{
"math_id": 9,
"text": "\\langle T x, y \\rangle = \\langle x, T y \\rangle, \\quad x, y \\in H."
},
{
"math_id": 10,
"text": "T_{x^\\perp}"
},
{
"math_id": 11,
"text": "\\nabla f = \\nabla y^* T y = \\lambda \\cdot \\nabla y^* y"
},
{
"math_id": 12,
"text": "g(x) = \\frac{\\langle Tx, x \\rangle}{\\|x\\|^2}, \\qquad 0 \\ne x \\in \\mathbf{C}^n."
},
{
"math_id": 13,
"text": "\\begin{cases} h : \\mathbf{R} \\to \\mathbf{R} \\\\ h(t) = g(y+tz) \\end{cases}"
},
{
"math_id": 14,
"text": "\\begin{align}\nh'(0) &= \\lim_{t \\to 0} \\frac{h(t)-h(0)}{t - 0} \\\\\n&= \\lim_{t \\to 0} \\frac{g(y+tz)-g(y)}{t} \\\\\n&= \\lim_{t \\to 0} \\frac{1}{t} \\left (\\frac{\\langle T(y+tz), y+tz \\rangle}{\\|y+tz\\|^2}-\\frac{\\langle Ty, y \\rangle}{\\|y\\|^2} \\right ) \\\\\n&= \\lim_{t \\to 0} \\frac{1}{t} \\left (\\frac{\\langle T(y+tz), y+tz \\rangle - \\langle Ty, y \\rangle}{\\|y\\|^2} \\right ) \\\\\n&= \\frac{1}{\\|y\\|^2} \\lim_{t \\to 0} \\frac{\\langle T(y+tz), y+tz \\rangle - \\langle Ty, y \\rangle}{t} \\\\\n&= \\frac{1}{\\|y\\|^2} \\left (\\frac{d}{dt} \\frac{\\langle T (y + t z), y + tz \\rangle}{\\langle y + tz, y + tz \\rangle} \\right)(0) \\\\\n&= 0.\n\\end{align}"
},
{
"math_id": 15,
"text": "m=\\frac{\\langle Ty, y \\rangle}{\\langle y, y \\rangle}"
},
{
"math_id": 16,
"text": "\\operatorname{Re}(\\langle T y - m y, z \\rangle) = 0."
},
{
"math_id": 17,
"text": "m(T) := \\sup \\bigl\\{ |\\langle T x, x \\rangle| : x \\in H, \\, \\|x\\| \\le 1 \\bigr\\},"
},
{
"math_id": 18,
"text": "\\begin{cases} f : H \\to \\mathbf{R} \\\\ f(x) = \\langle T x, x \\rangle \\end{cases}"
},
{
"math_id": 19,
"text": "\\|y\\|=1,"
},
{
"math_id": 20,
"text": "\\Phi(f)(e_n) = f(\\lambda_n) e_n"
},
{
"math_id": 21,
"text": "\\|\\Phi(f)\\| = \\sup_{\\lambda_n \\in \\sigma(T)} |f(\\lambda_n)| = \\|f\\|_{C(\\sigma(T))}."
},
{
"math_id": 22,
"text": "\\mathcal{F}\\subseteq\\operatorname{Hom}(H,H)"
},
{
"math_id": 23,
"text": "(\\forall{q\\in Q,T\\in\\mathcal{F}})(\\exists{\\sigma\\in\\mathbf{C}})(T-\\sigma)q=0"
},
{
"math_id": 24,
"text": "\\mathcal{F}"
},
{
"math_id": 25,
"text": "S\\subseteq H"
},
{
"math_id": 26,
"text": "S"
},
{
"math_id": 27,
"text": "s\\in S"
},
{
"math_id": 28,
"text": "T\\in\\mathcal{F}"
},
{
"math_id": 29,
"text": "0 \\neq \\alpha \\in \\sigma(T\\upharpoonright S)"
},
{
"math_id": 30,
"text": "S' := \\ker(T \\upharpoonright S - \\alpha)"
},
{
"math_id": 31,
"text": "T'\\in\\mathcal{F}"
},
{
"math_id": 32,
"text": "x\\in\\ker(T\\upharpoonright S - \\alpha)"
},
{
"math_id": 33,
"text": "(T-\\alpha)(T'x)=(T'(T~x)-\\alpha T'x)=0"
},
{
"math_id": 34,
"text": "\\dim S' < \\dim S"
},
{
"math_id": 35,
"text": "S'\\subseteq S"
},
{
"math_id": 36,
"text": "\\mathbf{P}=\\{ A \\subseteq H : A \\text{ is an orthonormal set of common eigenvectors for } \\mathcal{F}\\},"
},
{
"math_id": 37,
"text": "S=\\langle Q\\rangle^{\\bot}"
},
{
"math_id": 38,
"text": "T_0\\in\\mathcal{F}"
},
{
"math_id": 39,
"text": "H=\\overline{\\bigoplus_{\\lambda\\in\\sigma(T_0)} \\ker(T_0-\\sigma)},"
},
{
"math_id": 40,
"text": "\\sigma(T_0)"
},
{
"math_id": 41,
"text": "\\ker(T_0-\\sigma)"
},
{
"math_id": 42,
"text": "Q:=\\bigcup_{\\sigma\\in\\sigma(T_0)} Q_{\\sigma}"
},
{
"math_id": 43,
"text": "P\\in\\operatorname{Hom}(H,H)^{\\times}"
},
{
"math_id": 44,
"text": "P^{-1}T_0P"
},
{
"math_id": 45,
"text": "\\ker\\left(P^{-1}~T_0(P-\\alpha)\\right), \\quad \\ker\\left(P^{-1}~T_0(P-\\alpha) \\right)^{\\bot}"
},
{
"math_id": 46,
"text": "P^{-1}\\mathcal{F}P"
},
{
"math_id": 47,
"text": "P(Q_1\\cup Q_2)"
},
{
"math_id": 48,
"text": "H=L^2(G)"
},
{
"math_id": 49,
"text": "\\begin{cases} G\\times H\\to H \\\\ (gf)(x)=f(g^{-1}x) \\end{cases}"
},
{
"math_id": 50,
"text": "G\\subseteq U(H)"
},
{
"math_id": 51,
"text": "R = \\frac{T + T^*}{2}, \\quad J = \\frac{T - T^*}{2i}."
},
{
"math_id": 52,
"text": "(M f)(x) = x f(x), \\quad f \\in H, \\, \\, x \\in [0, 1]"
},
{
"math_id": 53,
"text": "f\\in L^2([0,1])"
},
{
"math_id": 54,
"text": "t \\in [0,1]"
},
{
"math_id": 55,
"text": "V(f)(t) = \\int_{0}^{t} f(s)\\, ds."
},
{
"math_id": 56,
"text": "K: \\Omega \\times \\Omega \\to \\mathbb{C}"
},
{
"math_id": 57,
"text": "\\Omega = [0,1]"
},
{
"math_id": 58,
"text": "T_K : L^{2}(\\Omega)\\to L^2(\\Omega)"
},
{
"math_id": 59,
"text": "(T_K f)(x) = \\int_0^1 K(x, y) f(y) \\, \\mathrm{d} y."
},
{
"math_id": 60,
"text": "T_K"
},
{
"math_id": 61,
"text": "\\|T_k\\|_{\\mathrm{HS}}=\\|K\\|_{L^2}"
},
{
"math_id": 62,
"text": "K(x,y)"
},
{
"math_id": 63,
"text": "K(x, y) = \\sum \\lambda_n \\varphi_n(x) \\overline{\\varphi_n(y)},"
},
{
"math_id": 64,
"text": "\\{\\varphi_n\\}"
},
{
"math_id": 65,
"text": "\\{\\lambda_n\\}"
},
{
"math_id": 66,
"text": "[0,1]"
}
] |
https://en.wikipedia.org/wiki?curid=6690773
|
6690902
|
Exposure assessment
|
Measuring toxic or environmental exposure
Exposure assessment is a branch of environmental science and occupational hygiene that focuses on the processes that take place at the interface between the environment containing the contaminant of interest and the organism being considered. These are the final steps in the path from the release of an environmental contaminant, through its transport, to its effect in a biological system. Exposure assessment tries to measure how much of a contaminant can be absorbed by an exposed target organism, in what form, at what rate, and how much of the absorbed amount is actually available to produce a biological effect. Although the same general concepts apply to other organisms, the overwhelming majority of applications of exposure assessment are concerned with human health, making it an important tool in public health.
Definition.
Exposure assessment is the process of estimating or measuring the magnitude, frequency and duration of exposure to an agent, along with the number and characteristics of the population exposed. Ideally, it describes the sources, pathways, routes, and the uncertainties in the assessment. It is a necessary part of risk analysis and hence risk assessment.
Exposure analysis is the science that describes how an individual or population comes in contact with a contaminant, including quantification of the amount of contact across space and time. 'Exposure assessment' and 'exposure analysis' are often used as synonyms in many practical contexts. Risk is a function of exposure and hazard. For example, even for an extremely toxic (high hazard) substance, the risk of an adverse outcome is unlikely if exposures are near zero. Conversely, a moderately toxic substance may present substantial risk if an individual or a population is highly exposed.
Applications.
Quantitative measures of exposure are used: in risk assessment, together with inputs from toxicology, to determine risk from substances released to the environment; to establish protective standards; in epidemiology, to distinguish between exposed and control groups; and to protect workers from occupational hazards.
Receptor-based approach.
The receptor-based approach is used in exposure science. It starts by looking at the contaminants and concentrations that reach people. An exposure analyst can use direct or indirect measurements to determine whether a person has been in contact with a specific contaminant or has been exposed to a specific risk (e.g. an accident). Once a contaminant has been shown to reach people, exposure analysts work backwards to determine its source. After the source has been identified, it is important to find the most efficient way to reduce adverse health effects. Once the contaminant reaches a person, it is very hard to reduce the associated adverse effects; therefore, reducing exposure is essential to diminishing the risk of adverse health effects. Both regulatory and non-regulatory approaches should be used to decrease people's exposure to contaminants. In many cases, it is better to change people's activities in order to reduce their exposures than to regulate a source of contaminants.
The receptor-based approach can be contrasted with the source-based approach, which begins by looking at different sources of contaminants, such as industries and power plants. It then asks whether the contaminant of interest has reached a receptor (usually humans). With this approach, it is very hard to prove that a pollutant from a given source has reached a target.
Exposure.
In this context "exposure" is defined as the contact between an agent and a target. Contact takes place at an exposure surface over an exposure period.
Mathematically, exposure is defined as
formula_0
where "E" is exposure, "C"("t") is a concentration that varies with time between the beginning and end of exposure. It has dimensions of mass times time divided by volume. This quantity is related to the potential dose of contaminant by multiplying it by the relevant contact rate, such as breathing rate, food intake rate etc. The contact rate itself may be a function of time.
Routes of exposure.
Contact between a contaminant and an organism can occur through any route. The possible routes of exposure are: inhalation, if the contaminant is present in the air; ingestion, through food, drinking or hand-to-mouth behavior; and dermal absorption, if the contaminant can be absorbed through the skin.
Exposure to a contaminant can and does occur through multiple routes, simultaneously or at different times. In many cases the main route of exposure is not obvious and needs to be investigated carefully. For example, exposure to byproducts of water chlorination can obviously occur by drinking, but also through the skin, while swimming or washing, and even through inhalation from droplets aerosolized during a shower. The relative proportion of exposure from these different routes cannot be determined "a priori". Therefore, the equation in the previous section is correct in a strict mathematical sense, but it is a gross oversimplification of actual exposures, which are the sum of the integrals of all activities in all microenvironments. For example, the equation would have to be calculated with the specific concentration of a compound in the air in the room during the time interval. Similarly, the concentration in the ambient air would apply to the time that the person spends outdoors, whereas the concentration in the food that the person ingests would be added. The concentration integrals via all routes would be added for the exposure duration, e.g. hourly, daily or annually as
formula_1
where "y" is the initial time and "z" the ending time of last in the series of time periods spent in each microenvironment over the exposure duration.
Measurement of exposure.
To quantify the exposure of particular individuals or populations two approaches are used, primarily based on practical considerations:
Direct approach.
The direct approach measures the exposures to pollutants by monitoring the pollutant concentrations reaching the respondents. The pollutant concentrations are directly monitored on or within the person through point of contact, biological monitoring, or biomarkers. In a workplace setting, methods of workplace exposure monitoring are used.
The point of contact approach indicates the total concentration reaching the host, while biological monitoring and the use of biomarkers infer the dosage of the pollutant through the determination of the body burden. The respondents often record their daily activities and locations during the measurement of the pollutants to identify the potential sources, microenvironments, or human activities contributing to the pollutant exposure. An advantage of the direct approach is that exposures through multiple media (air, soil, water, food, etc.) are accounted for through one study technique. The disadvantages include the invasive nature of the data collection and the associated costs. Point-of-contact measurement is a continuous measure of the contaminant reaching the target through all routes.
Biological monitoring, another approach to measuring exposure, measures the amount of a pollutant within body tissues or fluids (such as blood or urine). Biological monitoring measures the body burden of a pollutant but not the source from which it came. The substance measured may be either the contaminant itself or a biomarker that is specific to and indicative of an exposure to the contaminant. A biomarker of exposure is a measure of the contaminant or another proportionally related variable in the body.
Air sampling measures the contaminant in the air as concentration units of ppmv (parts per million by volume), mg/m3 (milligrams per cubic meter) or other mass per unit volume of air. Samplers can be worn by workers or researchers to estimate concentrations found in the breathing zone (personal) or samples collected in general areas can be used to estimate human exposure by integrating time and activity patterns. Validated and semi-validated air sampling methods are published by NIOSH, OSHA, ISO and other bodies.
Surface or dermal sampling measures the contaminant on touchable surfaces or on skin. Concentrations are typically reported in mass per unit surface area, such as mg/100 cm2.
In general, direct methods tend to be more accurate but more costly in terms of resources and demands placed on the subject being measured and may not always be feasible, especially for a population exposure study.
Examples of direct methods include air sampling though a personal portable pump, split food samples, hand rinses, breath samples or blood samples.
Indirect approach.
The indirect approach measures the pollutant concentrations in various locations or during specific human activities to predict the exposure distributions within a population. The indirect approach focuses on the pollutant concentrations within microenvironments or activities rather than the concentrations directly reaching the respondents. The measured concentrations are correlated with large-scale activity pattern data, such as the National Human Activity Pattern Survey (NHAPS), to determine the predicted exposure, either by multiplying the pollutant concentrations by the time spent in each microenvironment or activity, or by multiplying the pollutant concentrations by the contact rate with each medium. The advantage is that the process is minimally invasive for the population and is associated with lower costs than the direct approach. A disadvantage of the indirect approach is that the results are determined independently of any actual exposures, so the exposure distribution is open to errors from any inaccuracies in the assumptions made during the study, the time-activity data, or the measured pollutant concentrations. Examples of indirect methods include environmental water, air, dust, soil or consumer product sampling coupled with information such as activity/location diaries.
Mathematical exposure models may also be used to explore hypothetical situations of exposure.
Exposure factors.
Especially when determining the exposure of a population rather than of individuals, indirect methods can often make use of relevant statistics about the activities that can lead to an exposure. These statistics are called "exposure factors". They are generally drawn from the scientific literature or governmental statistics. For example, they may report information such as the amounts of different foods eaten by specific populations, divided by location or age, breathing rates, time spent in different modes of commuting, showering or vacuuming, as well as information on types of residences. Such information can be combined with contaminant concentrations from "ad-hoc" studies or monitoring networks to produce estimates of the exposure in the population of interest. These estimates are especially useful in establishing protective standards.
Exposure factor values can be used to obtain a range of exposure estimates such as average, high-end and bounding estimates. For example, to calculate the lifetime average daily dose one would use the equation below:
formula_2
All of the variables in the above equation, with the exception of contaminant concentration, are considered exposure factors. Each of the exposure factors involves humans, either in terms of their characteristics (e.g., body weight) or their behaviors (e.g., amount of time spent in a specific location, which affects exposure duration). These characteristics and behaviors can carry a great deal of variability and uncertainty. In the case of the lifetime average daily dose, variability pertains to the distribution and range of LADDs among individuals in the population. Uncertainty, on the other hand, refers to the exposure analyst's lack of knowledge of the standard deviation, mean, and general shape of the underlying distribution when calculating the LADD.
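A minimal numerical sketch of the lifetime average daily dose calculation above; the function name and all input values are hypothetical placeholders, not recommended defaults from any handbook.

```python
def lifetime_average_daily_dose(concentration_mg_per_kg, intake_rate_kg_per_day,
                                exposure_duration_days, body_weight_kg,
                                average_lifetime_days):
    """LADD = (C x IR x ED) / (BW x AT), in mg per kg body weight per day."""
    return (concentration_mg_per_kg * intake_rate_kg_per_day * exposure_duration_days) / \
           (body_weight_kg * average_lifetime_days)

# Hypothetical example: a contaminant present in a food at 0.5 mg/kg,
# 0.3 kg of that food eaten per day for 30 years, 70 kg adult, 70-year lifetime.
ladd = lifetime_average_daily_dose(0.5, 0.3, 30 * 365, 70.0, 70 * 365)
print(f"{ladd:.1e} mg/(kg*day)")   # roughly 9e-04 mg/(kg*day)
```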
The U.S. Environmental Protection Agency's "Exposure Factors Handbook" provides solutions when confronting variability and reducing uncertainty. The general points are summarized below:
Defining acceptable exposure for occupational environments.
Occupational exposure limits are based on available toxicology and epidemiology data and are intended to protect nearly all workers over a working lifetime. Exposure assessments in occupational settings are most often performed by occupational/industrial hygiene (OH/IH) professionals, who gather a "basic characterization" consisting of all relevant information and data related to workers, agents of concern, materials, equipment and available exposure controls. The exposure assessment is initiated by selecting the appropriate exposure limit averaging time and "decision statistic" for the agent. Typically, exposure is judged acceptable when a chosen majority (90%, 95% or 99%) of all exposures falls below the selected occupational exposure limit. For retrospective exposure assessments performed in occupational environments, the "decision statistic" is typically a central tendency such as the arithmetic mean, geometric mean or median for each worker or group of workers. Methods for performing occupational exposure assessments can be found in "A Strategy for Assessing and Managing Occupational Exposures".
Exposure assessment is a continuous process that is updated as new information and data becomes available.
Systemic errors.
In the estimation of human exposures to environmental chemicals, the following systemic errors have been known to occur:
References.
<templatestyles src="Reflist/styles.css" />
Further reading.
<templatestyles src="Refbegin/styles.css" />
|
[
{
"math_id": 0,
"text": "E=\\int_{t_1}^{t_2} C(t)\\, dt"
},
{
"math_id": 1,
"text": "E=sum(\\int_{t_1}^{t_2} C(t)\\, dt ... \\int_{t_y}^{t_z} C(t)\\, dt)"
},
{
"math_id": 2,
"text": "LADD = (Contaminant Concentration)(Intake Rate)(Exposure Duration)/(Body Weight)(Average Lifetime)"
}
] |
https://en.wikipedia.org/wiki?curid=6690902
|
6691212
|
DNA adenine methylase
|
Prokaryotic enzyme
DNA adenine methylase (Dam) (also site-specific DNA-methyltransferase (adenine-specific), EC 2.1.1.72, "modification methylase", "restriction-modification system") is an enzyme that adds a methyl group to the adenine of the sequence 5'-GATC-3' in newly synthesized DNA. Immediately after DNA synthesis, the daughter strand remains unmethylated for a short time. Dam is an orphan methyltransferase that is not part of a restriction-modification system and regulates gene expression. This enzyme catalyses the following chemical reaction:
S-adenosyl-L-methionine + DNA adenine formula_0 S-adenosyl-L-homocysteine + DNA 6-methylaminopurine
This is a large group of enzymes unique to prokaryotes and bacteriophages.
The "E. coli" DNA adenine methyltransferase enzyme (Dam), is widely used for the chromatin profiling technique DamID, in which the Dam is fused to a DNA-binding protein of interest and expressed as a transgene in a genetically tractable model organism to identify protein binding sites.
Role in mismatch repair of DNA.
When DNA polymerase makes an error resulting in a mismatched base-pair or a small insertion or deletion during DNA synthesis, the cell will repair the DNA by a pathway called mismatch repair. However, the cell must be able to differentiate between the template strand and the newly synthesized strand. In some bacteria, DNA strands are methylated by Dam methylase, and therefore, "immediately" after replication, the DNA will be hemimethylated. A repair enzyme, MutS, binds to mismatches in DNA and recruits MutL, which subsequently activates the endonuclease MutH. MutH binds hemimethylated GATC sites and when activated will selectively cleave the unmethylated daughter strand, allowing helicase and exonucleases to excise the nascent strand in the region surrounding the mismatch. The strand is then re-synthesized by DNA polymerase III.
Role in regulation of replication.
The firing of the origin of replication (oriC) in bacteria cells is highly controlled to ensure DNA replication occurs only once during each cell division. Part of this can be explained by the slow hydrolysis of ATP by DnaA, a protein that binds to repeats in the oriC to initiate replication. Dam methylase also plays a role because the oriC has 11 5'-GATC-3' sequences (in "E. coli"). Immediately after DNA replication, the oriC is hemimethylated and sequestered for a period of time. Only after this, the oriC is released and must be fully methylated by Dam methylase before DnaA binding occurs.
Role in regulation of protein expression.
Dam also plays a role in the promotion and repression of RNA transcription. In "E. coli", downstream GATC sequences are methylated, promoting transcription. For example, pyelonephritis-associated pili (PAP) phase variation in uropathogenic "E. coli" is controlled by Dam through the methylation of the two GATC sites proximal and distal to the PAP promoter. Given its role in protein regulation in "E. coli", the Dam methylase gene is nonessential, as a knockout of the gene still leaves the bacteria viable. The retention of viability despite a "dam" gene knockout is also seen in "Salmonella" and "Aggregatibacter actinomycetemcomitans". However, in organisms like "Vibrio cholerae" and "Yersinia pseudotuberculosis", the "dam" gene is essential for viability. A knockout of the "dam" gene in "Aggregatibacter actinomycetemcomitans" resulted in dysregulated levels of the protein leukotoxin and also reduced the microbe's ability to invade oral epithelial cells. Additionally, a study of Dam methylase-deficient "Streptococcus mutans", a dental pathogen, revealed the dysregulation of 103 genes, some of which have cariogenic potential.
Structural features.
The similarity in the catalytic domains of C5-cytosine methyltransferases and N6- and N4-adenine methyltransferases has generated great interest in understanding the basis for their functional similarities and dissimilarities. The methyltransferases, or methylases, are classified into three groups (Groups α, β, and γ) based on the sequential order of nine conserved motifs and the Target Recognition Domain (TRD). Motif I consists of a Gly-X-Gly tripeptide, is referred to as the G-loop, and is implicated in the binding of the S-adenosyl methionine (AdoMet) cofactor. Motif II is highly conserved among N4- and N6-adenine methylases and contains a negatively charged amino acid followed by a hydrophobic side chain in the last positions of the β2 strand to bind AdoMet. Motif III is also implicated in the binding of AdoMet. Motif IV is especially important and well known in methylase characterization. It consists of a diprolyl component and is highly conserved among N6-adenine methyltransferases as the DPPY motif; however, this motif can vary for N4-adenine and C5-cytosine methyltransferases. The DPPY motif has been found to be essential for AdoMet binding. Motifs IV–VIII play a role in the catalytic activity, while motifs I–III and X play a role in binding of the cofactor. For N6-adenine methylases the sequential order of these motifs is: N-terminal - X - I - II - III - TRD - IV - V - VI - VII - VIII - C-terminal, and "E. coli" Dam methylase follows this structural sequence. A 2015 crystallography experiment showed that "E. coli" Dam methylase was able to bind non-GATC DNA with the same arrangement of motifs discussed; the authors posit that the obtained structure could serve as grounds for repression of transcription that is not methylation-based.
Orphan bacterial and bacteriophage methylases.
Dam methylase is an orphan methyltransferase that is not part of a restriction-modification system but operates independently to regulate gene expression, mismatch repair, and bacterial replication, among many other functions. It is not the only example of an orphan methyltransferase: the cell-cycle-regulated methyltransferase (CcrM) methylates hemimethylated 5'-GANTC-3' DNA to control the life cycle of "Caulobacter crescentus" and other related species.
Distinct from their bacterial counterparts, phage orphan methyltransferases also exist, most notably in the T2, T4, and other T-even bacteriophages that infect "E. coli". One study found that, despite a lack of overall sequence homology, the "E. coli" and T4 Dam methylase amino acid sequences share sequence identity of up to 64% in four regions of 11 to 33 residues, which suggests a common evolutionary origin for the bacterial and phage methylase genes. The T2 and T4 methylases differ from "E. coli" Dam methylase not only in their ability to methylate DNA containing 5-hydroxymethylcytosine but also in their ability to methylate non-canonical DNA sites. Despite extensive "in vitro" characterization of these select few phage orphan methyltransferases, their biological purpose is still not clear.
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "\\rightleftharpoons"
}
] |
https://en.wikipedia.org/wiki?curid=6691212
|
6691213
|
Basic pitch count estimator
|
In baseball statistics, the basic pitch count estimator is a statistic used to estimate the number of pitches thrown by a pitcher when no pitch count data are available. The formula, first derived by Tom Tango, is formula_0, where PA refers to the number of plate appearances against the pitcher, SO to strikeouts, and BB to bases on balls.
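A small sketch of the estimator as a function; the function name and the sample inputs are illustrative, not part of the original definition.

```python
def basic_pitch_count(pa, so, bb):
    """Estimated pitches thrown: 3.3*PA + 1.5*SO + 2.2*BB."""
    return 3.3 * pa + 1.5 * so + 2.2 * bb

# Hypothetical start: 27 plate appearances, 7 strikeouts, 2 walks.
print(round(basic_pitch_count(27, 7, 2)))   # 104
```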
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "3.3 PA + 1.5 SO + 2.2 BB"
}
] |
https://en.wikipedia.org/wiki?curid=6691213
|
66914169
|
AltaRica
|
Modeling language
AltaRica is an object-oriented modeling language dedicated to probabilistic risk and safety analyses. It is a representative of the so-called model-based approach in engineering. Since its version 3.0, it has been developed by the non-profit AltaRica Association, which also develops the associated modeling environment AltaRica Wizard.
History.
The design of AltaRica started at the end of the nineties at the computer science department of Bordeaux University (LaBRI). The rationale for the creation of a new modeling language was to overcome difficulties encountered by safety analysts (in the avionic, nuclear, automotive and oil and gas industries) with "classical" modeling formalisms, which lack either expressive power, or structuring constructs, or both. The first scientific articles about the language were published from 1998 to 2008. The original version of the language relied on three technologies: finite state automata, which had been extensively studied by the LaBRI team working on formal methods for software verification; structured programming, taking inspiration from the modeling language Lustre; and constraint programming. This last technology, though elegant and powerful, proved inefficient in practice: constraint resolution was too computationally expensive to scale to industrial-size systems. The LaBRI team nevertheless went on working on this original version, mainly for educational purposes, improving tools over the years. A first turning point was therefore the design of a data-flow version of the language. In AltaRica Data-Flow, variables are updated by propagating values in a fixed order, determined at compile time from the annotations given in the model. AltaRica Data-Flow raised significant academic and industrial interest. Integrated modeling environments have been developed for the language: Cecilia OCAS by Dassault Aviation, Simfia v2 by Airbus-Apsys and Safety Designer by Dassault Systèmes (this latter tool was initially a clone of Cecilia OCAS, but evolved separately afterward). Successful industrial applications have been realized; for example, AltaRica Data-Flow was used to certify the flight control system of the Falcon 7X aircraft (Dassault Aviation). A number of PhD theses were also dedicated to the language and its use in various contexts. In a word, AltaRica Data-Flow reached scientific and industrial maturity. It is still used daily for a wide variety of applications.
Experience showed, however, that AltaRica Data-Flow could be improved in several ways, justifying a serious rework of the language. This rework eventually gave rise to AltaRica 3.0, which improves on AltaRica Data-Flow in several directions. The syntax of AltaRica 3.0 is closer to Modelica than to AltaRica Data-Flow, so as to facilitate bridges between multiphysics modeling and simulation on one hand and probabilistic risk and safety analyses on the other. Object-oriented and prototype-oriented structuring constructs have been assembled into a versatile and coherent set via S2ML (System Structure Modeling Language), probably the most complete such set among existing behavioral modeling languages. Moreover, the semantics of AltaRica 3.0 has been reinforced via GTS (Guarded Transition Systems), which opens new opportunities in terms of the assessment of models.
Guarded transition systems.
Guarded transition systems belong to the family of mathematical models of computation gathered under the generic term of (stochastic) finite-state machines or (stochastic) finite-state automata. They were introduced in 2008 and later refined.
To illustrate the ideas behind guarded transition systems, consider a motor pump that is normally in stand-by, that can be started on demand and stopped when there is no more demand. Assume moreover that this pump may fail in operation with a certain failure rate λ, and that it can also fail on demand with a certain probability γ. Assume finally that the pump can be repaired, with a certain mean time to repair τ.
We can then represent the behavior of this pump by means of a (stochastic) finite state automaton pictured hereafter.
From the outside, the motor pump can be seen as a black box with an input flow of liquid "in", an input flow of information "demand" and an output flow of liquid "out", i.e. as a transfer function that, given the values of "in" and "demand", calculates the value of "out". In the framework of reliability studies, the behavior of systems must be abstracted to avoid a combinatorial explosion of the situations to consider. Flows are thus typically abstracted as Boolean values, true being interpreted as the presence of the flow and false as its absence.
The equation linking "in" and "demand" to "out" cannot be written directly since the motor pump has an internal state. Namely, we can consider that the pump can be in three states: "STANDBY", "WORKING" or "FAILED". In the figure above, states are represented as rounded rectangles. The output flow "out" takes the value true if and only if the pump is working and the input flow "in" is true (hence the equation on the right-hand side of the figure).
A fundamental abstraction made by finite state automata consists in considering that the system under study can change state only upon the occurrence of an event. In between two occurrences of events, nothing changes. Occurrences of events are described by means of transitions, represented as arrows in the figure. In guarded transition systems, a transition is labeled with an event, has a pre-condition called the guard of the transition and an effect called the action of the transition. For instance, the event "failure" can only occur in the state "WORKING"; its effect is to make the pump pass from the state "WORKING" to the state "FAILED". The event "start" can occur if the pump is in the state "STANDBY" and the input flow "demand" is true; its effect is to make the pump pass from the state "STANDBY" to the state "WORKING". And so on.
Now, some changes of state may take time, while others happen as soon as they are possible. For instance, a failure takes a certain time before occurring, while the pump is started as soon as it is needed (at least at the level of abstraction of reliability models). Guarded transition systems associate delays with events, and thus with transitions. These delays can be either deterministic, as for the event "start", or stochastic, as for the event "failure". In the figure, deterministic delays are represented by dashed arrows while stochastic ones are represented by plain arrows.
Finally, transitions can be in competition in a state. For instance, the transition "stop" is in competition with the transition "failure" in the state "WORKING". This competition is however not a real one, as the transition "stop" is fired (performed) immediately when the input flow "demand" ceases to be true. A real competition occurs between the transitions "start" and "failureOnDemand" in the state "STANDBY": both are fired immediately when the input flow "demand" becomes true. In guarded transition systems, it is possible to associate a probability of occurrence with each transition in competition, namely γ with "failureOnDemand" and formula_0 with "start" in our example.
Finally, the AltaRica code for the guarded transition system sketched above is given in the figure hereafter. The motor pump is represented as a "block", i.e. as a container for basic elements. The block declares four variables: a state variable "_state" that takes its value in the domain (set of symbolic constants) "MotorPumpState", and three Boolean flow variables "demand", "in", and "out". Initially, "_state" takes the value "STANDBY". The transfer function is represented by means of the assertion. Assertions tell how to calculate the values of output flow variables from the values of input flow variables and state variables.
block MotorPump
MotorPumpState _state (init = STANDBY);
Boolean demand, in, out (reset = false);
event start (delay = Dirac(0), expectation=gamma);
event failureOnDemand (delay = Dirac(0), expectation=1-gamma);
event stop (delay = Dirac(0));
event failure (delay = exponential(lambda));
event repair (delay = exponential(1/tau));
parameter Real lambda = 1.0e-4;
parameter Real tau = 8;
parameter Real gamma = 0.02;
transition
start: demand and _state==STANDBY -> _state := WORKING;
failureOnDemand: demand and _state==STANDBY -> _state := FAILED;
stop: not demand and _state==WORKING -> _state := STANDBY;
failure: _state==WORKING -> _state := FAILED;
repair: _state==FAILED -> _state := STANDBY;
assertion
out := in and _state==WORKING;
end
The block "MotorPump" also declares five events and as many transitions. Guards of transitions are Boolean conditions on state and flow variables. Actions of transitions modify the values of state variables. Events are associated with delays and possibly with expectations (which are used to calculate the probabilities of occurrence of transitions in competition). The description of both delays and expectations may involve parameters.
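For readers who prefer a general-purpose language, the following Python sketch approximates the behaviour of the same guarded transition system by a crude time-stepped Monte Carlo simulation. It is not AltaRica and does not reproduce its exact semantics; the step size, the demand profile and the availability measure are illustrative assumptions, while the parameter values are those of the block above.
import random
LAMBDA, TAU, GAMMA = 1.0e-4, 8.0, 0.02   # failure rate, mean time to repair, failure-on-demand probability
def simulate(demand_profile, horizon, dt=0.1):
    """Crude time-stepped approximation of the motor-pump guarded transition system."""
    state, t, ok_time, demand_time = "STANDBY", 0.0, 0.0, 0.0
    while t < horizon:
        demand = demand_profile(t)
        if state == "STANDBY" and demand:
            # Immediate (Dirac(0)) transitions in competition, resolved by their expectations.
            state = "FAILED" if random.random() < GAMMA else "WORKING"
        elif state == "WORKING" and not demand:
            state = "STANDBY"                               # immediate "stop" transition
        elif state == "WORKING" and random.random() < LAMBDA * dt:
            state = "FAILED"                                # stochastic "failure" transition
        elif state == "FAILED" and random.random() < dt / TAU:
            state = "STANDBY"                               # stochastic "repair" transition
        if demand:
            demand_time += dt
            if state == "WORKING":
                ok_time += dt                               # assertion: out = in and (state == WORKING)
        t += dt
    return ok_time / demand_time if demand_time else 0.0
# Hypothetical demand profile: on for 50 time units out of every 100.
print(simulate(lambda t: (t % 100.0) < 50.0, horizon=10_000.0))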
System Structure Modeling Language.
In general, systems under study are not made of a single, simple component such as the above motor-operated pump. Rather, they consist of a network of such components that interact in an organized, hierarchical way.
To reflect the architecture of the system in the model, one needs dedicated constructs. This is where S2ML (System Structure Modeling Language) comes into play. S2ML emerged first as the set of structuring constructs for AltaRica 3.0, and has since been studied in its own right. As of today, S2ML gathers in a coherent way a versatile set of structuring constructs stemming from object-oriented and prototype-oriented programming.
S2ML is built around the following key concepts: port, connection, container, prototype, class, cloning, instantiation, inheritance and aggregation.
block System
block Line1
// description of line 1
end
clones Line1 as Line2;
assertion
out := Line1.out or Line2.out;
end
class MotorPump
// description of the motor pump.
end
block System
MotorPump P1; // 1st instance
MotorPump P2; // 2nd instance
end
AltaRica 3.0 involves a few other constructs, such as a powerful mechanism to synchronize events. The essentials have, however, been presented above.
Adding S2ML on top of a mathematical framework (GTS in the case of AltaRica) makes it possible to pass automatically, and at no cost, from the model as designed, which reflects the architecture of the system under study, to the model as assessed, from which calculations of indicators and simulations can be performed efficiently.
The transformation preserves the semantics of the models and is for the most part reversible: results of calculations and simulations are directly interpretable in the model as designed.
A recent trend in the AltaRica community is the design of modeling patterns. Patterns are pervasive in engineering. They have been developed for instance in the field of technical system architecture, as well as in software engineering. They are useful in reliability engineering also, as they ease design and maintenance of models. They are also a tool for risk analysts to communicate about the models they develop and share.
Tooling and applications.
In industrial practice, AltaRica models serve four main functions:
As these applications require different types of simulations and calculations, several tools have been developed, including:
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "1-\\gamma"
}
] |
https://en.wikipedia.org/wiki?curid=66914169
|
66918127
|
1 Chronicles 17
|
First Book of Chronicles, chapter 17
1 Chronicles 17 is the seventeenth chapter of the Books of Chronicles in the Hebrew Bible or the First Book of Chronicles in the Old Testament of the Christian Bible. The book is compiled from older sources by an unknown person or group, designated by modern scholars as "the Chronicler", and had its final shape established in the late fifth or fourth century BCE. This chapter contains God's covenant with David through the prophet Nathan and David's response in the form of a thanksgiving prayer. The whole chapter belongs to the section focusing on the kingship of David (1 Chronicles 9:35 to 29:30).
Text.
This chapter was originally written in the Hebrew language. It is divided into 27 verses.
Textual witnesses.
Some early manuscripts containing the text of this chapter in Hebrew are of the Masoretic Text tradition, which includes the Aleppo Codex (10th century), and Codex Leningradensis (1008).
There is also a translation into Koine Greek known as the Septuagint, made in the last few centuries BCE. Extant ancient manuscripts of the Septuagint version include Codex Vaticanus (B; formula_0B; 4th century), Codex Sinaiticus (S; BHK: formula_0S; 4th century), Codex Alexandrinus (A; formula_0A; 5th century) and Codex Marchalianus (Q; formula_0Q; 6th century).
God’s covenant with David (17:1–15).
This section closely follows the parallel account in 2 Samuel 7, with minor redaction to suit the context. Nathan's personal opinion (verse 2) was corrected by God in the subsequent prophecy, without mentioning David's lack of suitability for building the temple (which is explained later in the book).
"Now when David lived in his house, David said to Nathan the prophet, “Behold, I dwell in a house of cedar, but the ark of the covenant of the LORD is under a tent.""
Verse 1.
The statement "and the Lord had given him rest from all his enemies around him" in 2 Samuel 7:1 is not copied by the Chronicler, because David's wars have yet to be described (1 Chronicles 18–20).
"And I will establish him in My house and in My kingdom forever; and his throne shall be established forever."
Verse 14.
Here the Chronicler portrays 'the seed after David', arising from his sons, as the Messiah, whom the prophets announced as the "Son of David", in a divergence from 2 Samuel 7:14–16. The Chronicler therefore omits "If he commit iniquity, I will chasten him with the rod of men" (2 Samuel 7:14), because the chastisement would apply to the immediate sons of David and the kings of Judah, but not to the Messiah, from whom God will never withdraw His grace.
David’s Prayer of Thanksgiving (17:16–27).
This passage contains David's prayer as a reply to the promise given by God through Nathan. Apart from a slight change in the name used for God, the section closely follows 2 Samuel 7:17–29.
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": " \\mathfrak{G}"
}
] |
https://en.wikipedia.org/wiki?curid=66918127
|
66918380
|
1 Chronicles 18
|
First Book of Chronicles, chapter 18
1 Chronicles 18 is the eighteenth chapter of the Books of Chronicles in the Hebrew Bible or the First Book of Chronicles in the Old Testament of the Christian Bible. The book is compiled from older sources by an unknown person or group, designated by modern scholars as "the Chronicler", and had its final shape established in the late fifth or fourth century BCE. This chapter records the account of David's wars against the neighboring nations and a list of his officials. The whole chapter belongs to the section focusing on the kingship of David (1 Chronicles 9:35 to 29:30).
Text.
This chapter was originally written in the Hebrew language. It is divided into 17 verses.
Textual witnesses.
Some early manuscripts containing the text of this chapter in Hebrew are of the Masoretic Text tradition, which includes the Aleppo Codex (10th century), and Codex Leningradensis (1008).
There is also a translation into Koine Greek known as the Septuagint, made in the last few centuries BCE. Extant ancient manuscripts of the Septuagint version include Codex Vaticanus (B; formula_0B; 4th century), Codex Sinaiticus (S; BHK: formula_0S; 4th century), Codex Alexandrinus (A; formula_0A; 5th century) and Codex Marchalianus (Q; formula_0Q; 6th century).
David conquers the neighboring nations (18:1–13).
This section is a summary as well as an interpretation of its parallel account, forming a single unified narrative. The condensation of multiple wars into one narrative gives the impression of David as a warrior, which would disqualify him from the task of building the temple, because that task requires peace. David was successful in his wars against the Philistines to the west (verse 1), against Edom to the southeast (verses 12–13), against Moab (verse 2) and Ammon to the east, and against a number of Aramean kings to the northeast (verses 3–8), as a fulfillment of Nathan's prophecy that David would subjugate all his enemies.
"And David took from him a thousand chariots, and seven thousand horsemen, and twenty thousand footmen: David also houghed all the chariot horses, but reserved of them an hundred chariots."
"And Abishai, the son of Zeruiah, killed 18,000 Edomites in the Valley of Salt."
David’s officials (18:14–17).
This passage contains a list of David's highest officers after the wars, because of the significant role of military ranks during the conquests. It reflects the growth of bureaucracy accompanying the expansion of the kingdom.
"Benaiah the son of Jehoiada was over the Cherethites and the Pelethites; and David’s sons were chief ministers at the king’s side."
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": " \\mathfrak{G}"
}
] |
https://en.wikipedia.org/wiki?curid=66918380
|
6693
|
Cofinality
|
Size of subsets in order theory
In mathematics, especially in order theory, the cofinality cf("A") of a partially ordered set "A" is the least of the cardinalities of the cofinal subsets of "A".
This definition of cofinality relies on the axiom of choice, as it uses the fact that every non-empty set of cardinal numbers has a least member. The cofinality of a partially ordered set "A" can alternatively be defined as the least ordinal "x" such that there is a function from "x" to "A" with cofinal image. This second definition makes sense without the axiom of choice. If the axiom of choice is assumed, as will be the case in the rest of this article, then the two definitions are equivalent.
Cofinality can be similarly defined for a directed set and is used to generalize the notion of a subsequence in a net.
Properties.
If formula_0 admits a totally ordered cofinal subset, then we can find a subset formula_12 that is well-ordered and cofinal in formula_13 Any subset of formula_12 is also well-ordered. Two cofinal subsets of formula_12 with minimal cardinality (that is, their cardinality is the cofinality of formula_12) need not be order isomorphic (for example if formula_14 then both formula_15 and formula_16 viewed as subsets of formula_12 have the countable cardinality of the cofinality of formula_12 but are not order isomorphic). But cofinal subsets of formula_12 with minimal order type will be order isomorphic.
Cofinality of ordinals and other well-ordered sets.
The cofinality of an ordinal formula_17 is the smallest ordinal formula_18 that is the order type of a cofinal subset of formula_19 The cofinality of a set of ordinals or any other well-ordered set is the cofinality of the order type of that set.
Thus for a limit ordinal formula_20 there exists a formula_18-indexed strictly increasing sequence with limit formula_19 For example, the cofinality of formula_21 is formula_22 because the sequence formula_23 (where formula_2 ranges over the natural numbers) tends to formula_24 but, more generally, any countable limit ordinal has cofinality formula_25 An uncountable limit ordinal may have either cofinality formula_26 as does formula_27 or an uncountable cofinality.
The cofinality of 0 is 0. The cofinality of any successor ordinal is 1. The cofinality of any nonzero limit ordinal is an infinite regular cardinal.
Regular and singular ordinals.
A regular ordinal is an ordinal that is equal to its cofinality. A singular ordinal is any ordinal that is not regular.
Every regular ordinal is the initial ordinal of a cardinal. Any limit of regular ordinals is a limit of initial ordinals and thus is also initial but need not be regular. Assuming the axiom of choice, formula_28 is regular for each formula_19 In this case, the ordinals formula_29 and formula_30 are regular, whereas formula_31 and formula_32 are initial ordinals that are not regular.
The cofinality of any ordinal formula_17 is a regular ordinal, that is, the cofinality of the cofinality of formula_17 is the same as the cofinality of formula_19 So the cofinality operation is idempotent.
Cofinality of cardinals.
If formula_33 is an infinite cardinal number, then formula_34 is the least cardinal such that there is an unbounded function from formula_34 to formula_35 formula_34 is also the cardinality of the smallest set of strictly smaller cardinals whose sum is formula_35 more precisely
formula_36
That the set above is nonempty comes from the fact that
formula_37
that is, the disjoint union of formula_33 singleton sets. This implies immediately that formula_38
The cofinality of any totally ordered set is regular, so formula_39
Using König’s theorem, one can prove formula_40 and formula_41 for any infinite cardinal formula_42
The last inequality implies that the cofinality of the cardinality of the continuum must be uncountable. On the other hand,
formula_43
the ordinal number ω being the first infinite ordinal, so that the cofinality of formula_44 is card(ω) = formula_7 (In particular, formula_44 is singular.) Therefore,
formula_45
Generalizing this argument, one can prove that for a limit ordinal formula_18
formula_47
On the other hand, if the axiom of choice holds, then for a successor or zero ordinal formula_18
formula_48
References.
<templatestyles src="Reflist/styles.css" />
<templatestyles src="Refbegin/styles.css" />
|
[
{
"math_id": 0,
"text": "A"
},
{
"math_id": 1,
"text": "n,"
},
{
"math_id": 2,
"text": "m"
},
{
"math_id": 3,
"text": "n"
},
{
"math_id": 4,
"text": "m."
},
{
"math_id": 5,
"text": "\\N"
},
{
"math_id": 6,
"text": "\\aleph_0"
},
{
"math_id": 7,
"text": "\\aleph_0."
},
{
"math_id": 8,
"text": "\\aleph_0,"
},
{
"math_id": 9,
"text": "\\R."
},
{
"math_id": 10,
"text": "\\R"
},
{
"math_id": 11,
"text": "c,"
},
{
"math_id": 12,
"text": "B"
},
{
"math_id": 13,
"text": "A."
},
{
"math_id": 14,
"text": "B = \\omega + \\omega,"
},
{
"math_id": 15,
"text": "\\omega + \\omega"
},
{
"math_id": 16,
"text": "\\{\\omega + n : n < \\omega\\}"
},
{
"math_id": 17,
"text": "\\alpha"
},
{
"math_id": 18,
"text": "\\delta"
},
{
"math_id": 19,
"text": "\\alpha."
},
{
"math_id": 20,
"text": "\\alpha,"
},
{
"math_id": 21,
"text": "\\omega^2"
},
{
"math_id": 22,
"text": "\\omega,"
},
{
"math_id": 23,
"text": "\\omega \\cdot m"
},
{
"math_id": 24,
"text": "\\omega^2;"
},
{
"math_id": 25,
"text": "\\omega."
},
{
"math_id": 26,
"text": "\\omega"
},
{
"math_id": 27,
"text": "\\omega_\\omega"
},
{
"math_id": 28,
"text": "\\omega_{\\alpha+1}"
},
{
"math_id": 29,
"text": "0, 1, \\omega, \\omega_1,"
},
{
"math_id": 30,
"text": "\\omega_2"
},
{
"math_id": 31,
"text": "2, 3, \\omega_\\omega,"
},
{
"math_id": 32,
"text": "\\omega_{\\omega \\cdot 2}"
},
{
"math_id": 33,
"text": "\\kappa"
},
{
"math_id": 34,
"text": "\\operatorname{cf}(\\kappa)"
},
{
"math_id": 35,
"text": "\\kappa;"
},
{
"math_id": 36,
"text": "\\operatorname{cf}(\\kappa) = \\min \\left\\{ |I|\\ :\\ \\kappa = \\sum_{i \\in I} \\lambda_i\\ \\land \\forall i \\in I \\colon \\lambda_i < \\kappa\\right\\}."
},
{
"math_id": 37,
"text": "\\kappa = \\bigcup_{i \\in \\kappa} \\{i\\}"
},
{
"math_id": 38,
"text": "\\operatorname{cf}(\\kappa) \\leq \\kappa."
},
{
"math_id": 39,
"text": "\\operatorname{cf}(\\kappa) = \\operatorname{cf}(\\operatorname{cf}(\\kappa))."
},
{
"math_id": 40,
"text": "\\kappa < \\kappa^{\\operatorname{cf}(\\kappa)}"
},
{
"math_id": 41,
"text": "\\kappa < \\operatorname{cf}\\left(2^\\kappa\\right)"
},
{
"math_id": 42,
"text": "\\kappa."
},
{
"math_id": 43,
"text": "\\aleph_\\omega = \\bigcup_{n < \\omega} \\aleph_n,"
},
{
"math_id": 44,
"text": "\\aleph_\\omega"
},
{
"math_id": 45,
"text": "2^{\\aleph_0} \\neq \\aleph_\\omega."
},
{
"math_id": 46,
"text": "2^{\\aleph_0} = \\aleph_1."
},
{
"math_id": 47,
"text": "\\operatorname{cf} (\\aleph_\\delta) = \\operatorname{cf} (\\delta)."
},
{
"math_id": 48,
"text": "\\operatorname{cf} (\\aleph_\\delta) = \\aleph_\\delta."
}
] |
https://en.wikipedia.org/wiki?curid=6693
|
66930267
|
Adaptive noise cancelling
|
Signal processing technique to reduce noise
Adaptive noise cancelling is a signal processing technique that is highly effective in suppressing additive "interference" or "noise" corrupting a received "target signal" at the main or "primary" sensor, in certain common situations where the interference is known and accessible but unavoidable, and where the target signal and the interference are unrelated, that is, "uncorrelated". Examples of such situations include:
Conventional signal processing techniques pass the received signal, consisting of the target signal and the corrupting interference, through a filter that is designed to minimise the effect of the interference. The objective of optimal filtering is to maximise the signal-to-noise ratio at the receiver output or to produce the optimal estimate of the target signal in the presence of interference (Wiener filter).
In contrast, adaptive noise cancelling relies on a second sensor, usually located near the source of the known interference, to obtain a relatively 'pure' version of the interference free from the target signal and other interference. This second version of the interference and the sensor receiving it are called the "reference".
The adaptive noise canceller consists of a self-adjusting "adaptive filter" which automatically transforms the reference signal into an optimal estimate of the interference corrupting the target signal before subtracting it from the received signal thereby cancelling (or minimising) the effect of the interference at the noise canceller output. The adaptive filter adjusts itself continuously and automatically to minimise the residual interference affecting the target signal at its output. The power of the adaptive noise cancelling concept is that it requires no detailed a priori knowledge of the target signal or the interference. The adaptive algorithm that optimises the filter relies only on ongoing sampling of the reference input and the noise canceller output.
Adaptive noise cancelling can be effective even when the target signal and the interference are similar in nature and the interference is considerably stronger than the target signal. The key requirement is that the target signal and the interference are unrelated, that is "uncorrelated". Meeting this requirement is normally not an issue in situations where adaptive noise cancelling is used.
The adaptive noise cancelling approach and the proof of the concept, the first striking demonstrations that general broadband interference can be eliminated from a target signal in practical situations using adaptive noise cancelling, were set out and demonstrated during 1971–72 at the Adaptive Systems Laboratory at the Stanford School of Electrical Engineering by Professor Bernard Widrow and John Kaunitz, an Australian doctoral student, and documented in the latter's PhD dissertation "Adaptive Filtering of Broadband signals as Applied to Noise Cancelling" (1972) (also available here). The work was also published as a Stanford Electronics Labs report by Kaunitz and Widrow, "Noise Cancelling Filter Study" (1973). The initial proof of concept demonstrations of the noise cancelling concept (see below) for eliminating broadband interference were carried out by means of a prototype hybrid adaptive signal processor designed and built by Kaunitz and described in a Stanford Electronics Labs report "General Purpose Hybrid Adaptive Signal Processor (1971)".
Adaptive noise cancelling configuration and concept.
The adaptive noise canceller configuration diagram above shows the "target signal" s(t) present at the primary sensor, the interference or "noise source" n(t), and its manifestations "np(t)" and "nr(t)" at the primary and reference sensors respectively.
As np(t) and nr(t) are the manifestations of the same interference source in different locations, these will usually differ significantly in an unpredictable fashion due to different transmission paths through the environment to the two sensors. So the reference nr(t) cannot be used directly to cancel or reduce the interference corrupting the target signal np(t). It must first be appropriately processed before it can be used to minimise, by subtraction, the overall effect of the interference at the noise canceller output.
An adaptive noise canceller is based on a self-optimising adaptive filter that has a variable transfer function shaped by adjustable parameters called "weights". Using an iterative adaptive algorithm, the adaptive filter transforms the reference nr(t) into an optimal estimate ñp(t) of the interference np(t) corrupting the target signal and "cancels" the latter by subtraction, whilst leaving the target signal unchanged. So the output of the adaptive noise canceller shown above is:
z(t) = s(t)+np(t)-ñp(t).
The power of the adaptive noise cancelling approach stems from the fact that the algorithm driving the iterative adjustment of the weights in an adaptive filter is a simple, fully automatic iterative process that relies only on an ongoing sequence of sampling measurements of the noise canceller output and the reference r(t) = nr(t). For example, the LMS (Least Mean Squares) algorithm in the context of the usual tapped-delay-line digital adaptive filter (see below) leads to:
Wk+1 = Wk - μzkRk = Wk - μzkNr,k
where the vector Wk represents the set of filter weights at the kth iteration and the vector Rk represents the last set of samples of the reference which are the weight inputs. The adaptation constant μ determines the rate of adaptation and the stability of the optimal configuration.
Apart from the availability of a suitable reference signal, the only other essential requirement is that the target signal and the corrupting noise source are unrelated, that is "uncorrelated", so that their time-averaged cross-correlation is zero for all values of the time shift formula_0.
Adaptive noise cancelling does not require detailed a priori knowledge of the interference or the target signal. However, the physical characteristics of the adaptive filter must be generally suitable for producing an adjustable frequency response or transfer function that will transform the reference signal nr(t) into a close estimate of the corrupting interference, ñp(t), through the iterative adjustment of the filter weights.
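To make the configuration concrete, the following is a minimal Python sketch of a tapped-delay-line LMS noise canceller. The signals, the filter length, and the step size μ are illustrative assumptions; the update is written with the sign convention in which the error is the canceller output z(t) = primary minus estimate.
import numpy as np
def lms_noise_canceller(primary, reference, n_taps=32, mu=0.01):
    """Tapped-delay-line LMS noise canceller: returns z(t) = primary(t) - interference estimate."""
    w = np.zeros(n_taps)                                  # adaptive filter weights
    z = np.zeros(len(primary))
    for k in range(n_taps - 1, len(primary)):
        r_k = reference[k - n_taps + 1:k + 1][::-1]       # current and past reference samples
        n_hat = w @ r_k                                   # estimate of the interference at the primary sensor
        z[k] = primary[k] - n_hat                         # canceller output, used as the LMS error
        w += mu * z[k] * r_k                              # LMS update (error defined as primary - estimate)
    return z
# Illustrative signals (assumptions): a weak sinusoidal target buried in filtered broadband noise.
rng = np.random.default_rng(0)
n = 20_000
noise = rng.standard_normal(n)                                   # reference n_r(t)
path = np.convolve(noise, [0.5, 0.8, -0.3], "full")[:n]          # interference n_p(t) at the primary sensor
primary = 0.1 * np.sin(2 * np.pi * 0.01 * np.arange(n)) + path   # s(t) + n_p(t)
output = lms_noise_canceller(primary, noise)                     # approximately s(t) after convergence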
A 1975 paper published in the Proceedings of the IEEE by Widrow et al., "Adaptive Noise Cancelling: Principles and Applications", is now the generally referenced introductory publication in the field. This paper sets out the basic concepts of adaptive noise cancelling and summarises subsequent early work and applications. Earlier unpublished efforts to eliminate interference using a second input are also mentioned. This paper remains the main reference for the adaptive noise cancelling concept and to date has been cited by over 2800 scientific papers and 380 patents. The topic is also covered by a number of more recent books.
Genesis.
Adaptive noise cancelling evolved from the pioneering work on adaptive systems, adaptive filtering and signal processing carried out at the Adaptive Systems Laboratories in the School of Electrical Engineering at Stanford University during the 1960s and 70's under the leadership of Professor Bernard Widrow. Adaptive filters incorporate adjustable parameters called "weights," controlled by iterative "adaptive algorithms", to produce a desired transfer function.
Adaptive filters were originally conceived to produce the optimal filters prescribed by optimal filter theory during a "training phase" by adjusting the filter weights according to an iterative adaptive algorithm such as the Least-Means-Square (LMS) algorithm. During the training phase, the filter is presented with a known input and a training signal called a "desired response."
The filter weights are adjusted by the adaptive algorithm, which is designed to minimise the "mean-squared error" ξ, the time-averaged square of the difference between the desired response and the adaptive filter output y(t) = X(t)TW, where W represents the set of weights in vector notation and X(t) the set of weight inputs.
Viewed as a function of the weight vector W, ξ is quadratic: a multi-dimensional paraboloid with a single minimum that can be reached from any point by descending along the gradient. Gradient descent algorithms, such as the original Least Mean Squares (LMS) algorithm, iteratively adjust the filter weights in small steps in the direction opposite to the gradient. In the case of the usual digital tapped-delay-line filter, the vector Xk is simply the last set of samples of the filter input x(t) and the LMS algorithm results in:
Wk+1 = Wk - μekXk
where k represents the kth step in the iteration process, μ is the adaptation constant that controls the rate and stability of the adaptation process, and ek and Xk are samples of the error and the input vector, respectively.
At the completion of the training phase the adaptive filter has been optimised to produce the desired optimal transfer function. In its normal "operating phase" such an optimised adaptive filter is then used passively to process received signals to improve the signal-to-noise ratio at the filter output under the assumed conditions. The theory and analysis of adaptive filters is largely based on this concept, model and terminology and took place before the introduction of the adaptive noise cancelling concept around 1970.
"Adaptive noise cancelling" is an innovation that represents a fundamentally different configuration and application of adaptive filtering in those common situations where a reference signal is available by:
Whilst the discussion of adaptive noise cancelling reflects the above terminology, it is clear from the above diagrams that the two are equivalent and the previously developed extensive adaptive filter theory therefore continues to apply in both situations.
In the adaptive noise cancelling situation the received signal does not pass through the adaptive filter but instead becomes the 'desired response' for adaptation purposes. Since the adaptation process will aim to minimise the error, it follows that, in the noise canceller configuration, the adaptation process in effect aims to minimise the overall signal power at the noise canceller output - the "error". So the adaptive filtering of the "reference" actually strives to suppress the "overall signal power" at the noise canceller output.
This counterintuitive concept can be understood by keeping in mind that the target signal s(t) and the interference n(t) are uncorrelated. So, in aiming to minimise the "error" using as its input a reference related only to the interference, the best the adaptive filter can do in estimating the primary input (the desired response) is to generate the optimal estimate of the interference at the primary sensor, ñp(t). This minimises the overall effect of the interference at the noise canceller output whilst leaving the target signal s(t) unchanged.
The iterative adaptive algorithms used in adaptive filtering require only an ongoing sequence of sampling measurements at the weight inputs and of the error. As digital adaptive filters are in effect tapped-delay-line filters, the operation of an adaptive noise canceller requires only an ongoing sequence of sampling measurements of the reference and the noise canceller output.
Adaptive filtering theory was developed in the domain of stochastic signals and statistical signal processing. However, the repetitive interference typical of noise cancelling applications, such as machinery noise or ECGs, is more appropriately treated as a bounded time-varying signal. A comprehensive analysis of adaptive filters applied to stochastic signals is presented by Widrow and Stearns in their book "Adaptive Signal Processing"; in this context, averaging is interpreted as statistical expectation. An analysis of noise cancelling where s(t) and n(t) are assumed to be bounded deterministic signals, using time averaging, was presented by Kaunitz in his PhD dissertation.
Original proof of concept demonstrations.
The first practical demonstration of the adaptive noise cancelling concept, typical of general practical situations involving broadband signals, was carried out in 1971 at the Stanford School of Electrical Engineering Adaptive Systems Laboratory by Kaunitz using a prototype hybrid adaptive signal processor. The ambient noise from the output of a microphone used by a speaker (the primary sensor) in a very noisy room was largely eliminated using adaptive noise cancellation.
A triangular signal, representing a typical broadband signal, emitted by a loudspeaker situated in the room, was used as the interfering noise source. A second microphone situated near this loudspeaker served to provide the reference input. The output of the noise canceller was channeled to the earphones of a listener outside the room.
The adaptive filter used in these experiments was a hybrid adaptive filter consisting of a preprocessor of 16 RC-filter circuits feeding 16 digitally controlled analogue amplifiers acting as weights, whose outputs were summed by a linear combiner to produce the adaptive filter output. This linear combiner was interfaced to a small HP 2116B digital computer that ran a version of the LMS algorithm.
The experimental arrangement used by Kaunitz in the photo below shows the loudspeaker emitting the interference, the two microphones used to provide the primary and reference signals, the equipment rack, containing the hybrid adaptive filter and the digital interface, and the HP 2116B minicomputer on the right of the picture. (Only some of the equipment in the photo is part of the adaptive noise cancelling demonstration).
The noise canceller effectively reduced the ambient noise overlaying the speech signal from an initially almost overwhelming level to barely audible and successfully re-adapted to the change in frequency of the triangular noise source and to changes in the environment when people moved around in the room. Recordings of these demonstrations are still available here and here.
The second application of this original noise canceller was to process ECGs from heart transplant animals studied by the pioneering heart transplant team at the Stanford Medical Centre at the time led by Dr Norman Shumway. Data was provided by Drs Eugene Dong and Walter B Cannon in the form of a multi-track magnetic tape recording of electrocardiograms.
In heart transplant recipients the part of the heart stem that contains the recipient's pacemaker (called the sinoatrial or SA node) remains in place and continues to fire controlled by the brain and the nervous system. Normally this pacemaker controls the rate at which the heart is beating by triggering the atrioventricular (AV) nodes and thus controlling heart rate to respond to the demands of the body. (See diagram below). In normal patients, this represents a feedback loop, but in transplant patients, the connection between the remnant SA node and the implanted AV node is not re-established and the remnant pacemaker and the implanted heart are beating independently, at differing rates.
The behavior of the remnant pacemaker in the open loop situation of a heart transplant patient was of considerable interest to researchers, but studying the ECG of the pacemaker (the p-wave) was made difficult because the weaker signal from the pacemaker was swamped by the signal from the implanted heart even when a bipolar catheter sensor (primary sensor) is inserted through the jugular vein close to the SA-node. (See the third trace from top in the diagram below). The noise cancelling arrangement to eliminate the effect of the donor heart from the ECG of the p-wave is shown below.
A reference signal was obtained through a limb-to-limb ECG of the patient (See top trace in the diagram below), which provided the main ECG of the donor heart largely free from the pacemaker p-wave. Adaptive noise cancelling was used to transform the reference into an estimate of the donor heart signal present at the primary input (see second trace from top) and used to substantially reduce the effect of the donor heart from the primary ECG (third trace), providing a substantially cleaned up version of the p-wave at the noise canceller output (see bottom trace) suitable for further study and analysis.
Applications.
Adaptive noise cancelling techniques have found use in a wide range of situations, including the following:
In these situations a suitable reference signal can be readily obtained by placing a sensor near the source of the interference or by other means (e.g. a version of the interfering ECG free from the target signal).
Adaptive noise cancelling can be effective even when the target signal and the interference are similar in nature and the interference is considerably stronger than the target signal. Apart from the availability of a suitable reference signal, the only other critical requirement is that the target signal and the corrupting noise source are unrelated, that is "uncorrelated", so that their time-averaged cross-correlation is zero for all values of the time shift formula_0.
Adaptive noise cancelling does not require detailed a priori knowledge of the interference or the target signal. However, the characteristics of the adaptive filter must be generally suitable for producing an adjustable frequency response or transfer function that is able to transform the reference signal nr(t) into an estimate of the corrupting interference, ñp(t), through the iterative adjustment of the filter weights. The interference in the above examples is usually an irregular "repetitive" signal. Although the theory of adaptive filtering does not rely on this assumption, in practice this characteristic is very helpful, as it reduces the adaptive filter's task from compensating for arbitrary time shifts between the versions of the interference at the primary and reference sensors to compensating appropriately for phase shifts.
Adaptive noise cancelling and active noise control.
Adaptive Noise Cancelling is not to be confused with active noise control. These terms refer to different areas of scientific investigation in two different disciplines and the term "noise" has a different meaning in the two contexts.
Active noise control is a method in acoustics to reduce unwanted sound in physical spaces and an area of research that preceded the development of adaptive noise cancelling. The term "noise" is used here with its common meaning of unwanted audible sound.
As explained above, adaptive noise cancelling is a technique used in communication and control to reduce the effect of additive interference corrupting an electric or electromagnetic target signal. In this context "noise" refers to such interference and the two terms are used interchangeably. In the book by Widrow and Stearns the relevant chapter is in fact entitled "Adaptive Interference Cancelling". However, "adaptive noise cancelling" is the term that prevailed and is now in common usage.
After its development in signal processing, the adaptive noise-cancelling approach was also adopted in active noise control, for example in some (but not all), noise-cancelling headphones. So the two areas in fact significantly intersect. Nevertheless, active noise control is just one of the many applications of adaptive noise cancelling and, conversely, adaptive noise cancelling is just one technique used in the field of active noise control.
References.
<templatestyles src="Refbegin/styles.css" />
|
[
{
"math_id": 0,
"text": "\\tau"
}
] |
https://en.wikipedia.org/wiki?curid=66930267
|
669402
|
Zonohedron
|
Convex polyhedron projected from hypercube
In geometry, a zonohedron is a convex polyhedron that is centrally symmetric, every face of which is a polygon that is centrally symmetric (a zonogon). Any zonohedron may equivalently be described as the Minkowski sum of a set of line segments in three-dimensional space, or as a three-dimensional projection of a hypercube. Zonohedra were originally defined and studied by E. S. Fedorov, a Russian crystallographer. More generally, in any dimension, the Minkowski sum of line segments forms a polytope known as a zonotope.
Zonohedra that tile space.
The original motivation for studying zonohedra is that the Voronoi diagram of any lattice forms a convex uniform honeycomb in which the cells are zonohedra. Any zonohedron formed in this way can tessellate 3-dimensional space and is called a primary parallelohedron. Each primary parallelohedron is combinatorially equivalent to one of five types: the rhombohedron (including the cube), hexagonal prism, truncated octahedron, rhombic dodecahedron, and the rhombo-hexagonal dodecahedron.
Zonohedra from Minkowski sums.
Let formula_0 be a collection of three-dimensional vectors. With each vector formula_1 we may associate a line segment formula_2. The Minkowski sum formula_3 forms a zonohedron, and all zonohedra that contain the origin have this form. The vectors from which the zonohedron is formed are called its generators. This characterization allows the definition of zonohedra to be generalized to higher dimensions, giving zonotopes.
Each edge in a zonohedron is parallel to at least one of the generators, and has length equal to the sum of the lengths of the generators to which it is parallel. Therefore, by choosing a set of generators with no parallel pairs of vectors, and by setting all vector lengths equal, we may form an equilateral version of any combinatorial type of zonohedron.
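Computationally, a zonohedron can be obtained from its generators by brute force: the candidate vertices are the 2^k sums of subsets of the generators, and the zonohedron is their convex hull. The Python sketch below uses SciPy's ConvexHull; the example generators (the long diagonals of a cube, which, as noted below, produce a rhombic dodecahedron) are chosen purely for illustration.
from itertools import product
import numpy as np
from scipy.spatial import ConvexHull
def zonotope_hull(generators):
    """Brute-force zonotope: convex hull of all {0,1}-combinations of the generators."""
    generators = np.asarray(generators, dtype=float)
    corners = [np.asarray(x) @ generators for x in product((0.0, 1.0), repeat=len(generators))]
    return ConvexHull(np.array(corners))
# Generators parallel to the long diagonals of a cube (see below: a rhombic dodecahedron).
hull = zonotope_hull([(1, 1, 1), (1, -1, -1), (-1, 1, -1), (-1, -1, 1)])
print(len(hull.vertices), "vertices")   # 14 vertices, as expected for the rhombic dodecahedron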
By choosing sets of vectors with high degrees of symmetry, we can form in this way zonohedra with at least as much symmetry. For instance, generators equally spaced around the equator of a sphere, together with another pair of generators through the poles of the sphere, form zonohedra in the form of prisms over regular formula_4-gons: the cube, hexagonal prism, octagonal prism, decagonal prism, dodecagonal prism, etc.
Generators parallel to the edges of an octahedron form a truncated octahedron, and generators parallel to the long diagonals of a cube form a rhombic dodecahedron.
The Minkowski sum of any two zonohedra is another zonohedron, generated by the union of the generators of the two given zonohedra. Thus, the Minkowski sum of a cube and a truncated octahedron forms the truncated cuboctahedron, while the Minkowski sum of the cube and the rhombic dodecahedron forms the truncated rhombic dodecahedron. Both of these zonohedra are simple (three faces meet at each vertex), as is the truncated small rhombicuboctahedron formed from the Minkowski sum of the cube, truncated octahedron, and rhombic dodecahedron.
Zonohedra from arrangements.
The Gauss map of any convex polyhedron maps each face of the polyhedron to a point on the unit sphere, and maps each edge of the polyhedron separating a pair of faces to a great circle arc connecting the corresponding two points. In the case of a zonohedron, the edges surrounding each face can be grouped into pairs of parallel edges, and when translated via the Gauss map any such pair becomes a pair of contiguous segments on the same great circle. Thus, the edges of the zonohedron can be grouped into zones of parallel edges, which correspond to the segments of a common great circle on the Gauss map, and the 1-skeleton of the zonohedron can be viewed as the planar dual graph to an arrangement of great circles on the sphere. Conversely, any arrangement of great circles may be formed from the Gauss map of a zonohedron generated by vectors perpendicular to the planes through the circles.
Any simple zonohedron corresponds in this way to a simplicial arrangement, one in which each face is a triangle. Simplicial arrangements of great circles correspond via central projection to simplicial arrangements of lines in the projective plane. There are three known infinite families of simplicial arrangements, one of which leads to the prisms when converted to zonohedra, and the other two of which correspond to additional infinite families of simple zonohedra. There are also many sporadic examples that do not fit into these three families.
It follows from the correspondence between zonohedra and arrangements, and from the Sylvester–Gallai theorem which (in its projective dual form) proves the existence of crossings of only two lines in any arrangement, that every zonohedron has at least one pair of opposite parallelogram faces. (Squares, rectangles, and rhombuses count for this purpose as special cases of parallelograms.) More strongly, every zonohedron has at least six parallelogram faces, and every zonohedron has a number of parallelogram faces that is linear in its number of generators.
Types of zonohedra.
Any prism over a regular polygon with an even number of sides forms a zonohedron. These prisms can be formed so that all faces are regular: two opposite faces are equal to the regular polygon from which the prism was formed, and these are connected by a sequence of square faces. Zonohedra of this type are the cube, hexagonal prism, octagonal prism, decagonal prism, dodecagonal prism, etc.
In addition to this infinite family of regular-faced zonohedra, there are three Archimedean solids, all omnitruncations of the regular forms:
In addition, certain Catalan solids (duals of Archimedean solids) are again zonohedra:
Others with congruent rhombic faces:
There are infinitely many zonohedra with rhombic faces that are not all congruent to each other. They include:
Dissection of zonohedra.
Every zonohedron with formula_5 zones can be partitioned into formula_6 parallelepipeds, each having three of the same zones, and with one parallelepiped for each triple of zones.
The Dehn invariant of any zonohedron is zero. This implies that any two zonohedra with the same volume can be dissected into each other. This means that it is possible to cut one of the two zonohedra into polyhedral pieces that can be reassembled into the other.
Zonohedrification.
Zonohedrification is a process defined by George W. Hart for creating a zonohedron from another polyhedron.
First the vertices of any seed polyhedron are considered vectors from the polyhedron center. These vectors create the zonohedron which we call the zonohedrification of the original polyhedron. If the seed polyhedron has central symmetry, opposite points define the same direction, so the number of zones in the zonohedron is half the number of vertices of the seed. For any two vertices of the original polyhedron, there are two opposite planes of the zonohedrification which each have two edges parallel to the vertex vectors.
Zonotopes.
The Minkowski sum of line segments in any dimension forms a type of polytope called a zonotope. Equivalently, a zonotope formula_7 generated by vectors formula_8 is given by formula_9. Note that in the special case where formula_10, the zonotope formula_7 is a (possibly degenerate) parallelotope.
The facets of any zonotope are themselves zonotopes of one lower dimension; for instance, the faces of zonohedra are zonogons. Examples of four-dimensional zonotopes include the tesseract (Minkowski sums of "d" mutually perpendicular equal length line segments), the omnitruncated 5-cell, and the truncated 24-cell. Every permutohedron is a zonotope.
Zonotopes and Matroids.
Fix a zonotope formula_7 defined from the set of vectors formula_11 and let formula_12 be the formula_13 matrix whose columns are the formula_1. Then the vector matroid formula_14 on the columns of formula_12 encodes a wealth of information about formula_7, that is, many properties of formula_7 are purely combinatorial in nature.
For example, pairs of opposite facets of formula_7 are naturally indexed by the cocircuits of formula_15 and if we consider the oriented matroid formula_15 represented by formula_16, then we obtain a bijection between facets of formula_7 and signed cocircuits of formula_15 which extends to a poset anti-isomorphism between the face lattice of formula_7 and the covectors of formula_15 ordered by component-wise extension of formula_17. In particular, if formula_12 and formula_18 are two matrices that differ by a projective transformation then their respective zonotopes are combinatorially equivalent. The converse of the previous statement does not hold: the segment formula_19 is a zonotope and is generated by both formula_20 and by formula_21 whose corresponding matrices, formula_22 and formula_23, do not differ by a projective transformation.
Tilings.
Tiling properties of the zonotope formula_7 are also closely related to the oriented matroid formula_15 associated to it. First we consider the space-tiling property. The zonotope formula_7 is said to "tile" formula_24 if there is a set of vectors formula_25 such that the union of all translates formula_26 (formula_27) is formula_24 and any two translates intersect in a (possibly empty) face of each. Such a zonotope is called a "space-tiling zonotope." The following classification of space-tiling zonotopes is due to McMullen: The zonotope formula_7 generated by the vectors formula_28 tiles space if and only if the corresponding oriented matroid is regular. So the seemingly geometric condition of being a space-tiling zonotope actually depends only on the combinatorial structure of the generating vectors.
Another family of tilings associated to the zonotope formula_7 are the "zonotopal tilings" of formula_7. A collection of zonotopes is a zonotopal tiling of formula_7 if it is a polyhedral complex with support formula_7, that is, if the union of all zonotopes in the collection is formula_7 and any two intersect in a common (possibly empty) face of each. Many of the images of zonohedra on this page can be viewed as zonotopal tilings of a 2-dimensional zonotope by simply considering them as planar objects (as opposed to planar representations of three-dimensional objects). The Bohne-Dress Theorem states that there is a bijection between zonotopal tilings of the zonotope formula_7 and "single-element lifts" of the oriented matroid formula_15 associated to formula_7.
Volume.
Zonohedra, and "n"-dimensional zonotopes in general, are noteworthy for admitting a simple analytic formula for their volume.
Let formula_29 be the zonotope formula_9 generated by a set of vectors formula_30. Then the n-dimensional volume of formula_29 is given by
formula_31
The determinant in this formula makes sense because (as noted above) when the set formula_32 has cardinality equal to the dimension formula_5 of the ambient space, the zonotope is a parallelotope.
Note that when formula_33, this formula simply states that the zonotope has n-volume zero.
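The formula translates directly into code. The sketch below is a straightforward Python implementation that sums |det| over all n-element subsets of the generators; the example generator sets are illustrative.
from itertools import combinations
import numpy as np
def zonotope_volume(generators):
    """n-volume of the zonotope generated by the given vectors in R^n."""
    generators = np.asarray(generators, dtype=float)
    n = generators.shape[1]
    return sum(abs(np.linalg.det(np.array(subset)))
               for subset in combinations(generators, n))
# A unit cube (three orthonormal generators) has volume 1; adding a fourth
# generator (1,1,1) adds the volumes of the three new parallelepipeds it spans.
print(zonotope_volume([(1, 0, 0), (0, 1, 0), (0, 0, 1)]))             # 1.0
print(zonotope_volume([(1, 0, 0), (0, 1, 0), (0, 0, 1), (1, 1, 1)]))  # 4.0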
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "\\{v_0, v_1, \\dots\\}"
},
{
"math_id": 1,
"text": "v_i"
},
{
"math_id": 2,
"text": "\\{ x_i v_i \\mid 0 \\leq x_i \\leq 1 \\}"
},
{
"math_id": 3,
"text": "\\{ \\textstyle \\sum_i x_i v_i \\mid 0 \\leq x_i \\leq 1 \\}"
},
{
"math_id": 4,
"text": "2k"
},
{
"math_id": 5,
"text": "n"
},
{
"math_id": 6,
"text": "\\tbinom{n}{3}"
},
{
"math_id": 7,
"text": "Z"
},
{
"math_id": 8,
"text": "v_1,...,v_k\\in\\mathbb{R}^n"
},
{
"math_id": 9,
"text": "Z = \\{a_1 v_1 + \\cdots + a_k v_k | \\; \\forall(j) a_j\\in [0,1]\\}"
},
{
"math_id": 10,
"text": "k \\leq n"
},
{
"math_id": 11,
"text": "V = \\{v_1,\\dots,v_n\\}\\subset\\mathbb{R}^d"
},
{
"math_id": 12,
"text": "M"
},
{
"math_id": 13,
"text": "d \\times n"
},
{
"math_id": 14,
"text": "\\underline{\\mathcal{M}}"
},
{
"math_id": 15,
"text": "\\mathcal{M}"
},
{
"math_id": 16,
"text": "{M}"
},
{
"math_id": 17,
"text": "0 \\prec +, -"
},
{
"math_id": 18,
"text": "N"
},
{
"math_id": 19,
"text": "[0,2] \\subset \\mathbb{R}"
},
{
"math_id": 20,
"text": "\\{2\\mathbf{e}_1\\}"
},
{
"math_id": 21,
"text": "\\{\\mathbf{e}_1, \\mathbf{e}_1\\}"
},
{
"math_id": 22,
"text": "[2]"
},
{
"math_id": 23,
"text": "[1~1]"
},
{
"math_id": 24,
"text": "\\mathbb{R}^d"
},
{
"math_id": 25,
"text": "\\Lambda \\subset \\mathbb{R}^d"
},
{
"math_id": 26,
"text": "Z + \\lambda"
},
{
"math_id": 27,
"text": "\\lambda \\in \\Lambda"
},
{
"math_id": 28,
"text": "V"
},
{
"math_id": 29,
"text": "Z(S)"
},
{
"math_id": 30,
"text": "S = \\{v_1,\\dots,v_k\\in\\mathbb{R}^n\\}"
},
{
"math_id": 31,
"text": "\\sum_{T\\subset S \\; : \\; |T| = n} |\\det(Z(T))|"
},
{
"math_id": 32,
"text": "T"
},
{
"math_id": 33,
"text": "k<n"
}
] |
https://en.wikipedia.org/wiki?curid=669402
|
66940304
|
Liquid phase sintering
|
Liquid phase sintering is a sintering technique that uses a liquid phase to accelerate the interparticle bonding of the solid phase. In addition to rapid initial particle rearrangement due to capillary forces, mass transport through liquid is generally orders of magnitude faster than through solid, enhancing the diffusional mechanisms that drive densification. The liquid phase can be obtained either through mixing different powders—melting one component or forming a eutectic—or by sintering at a temperature between the liquidus and solidus. Additionally, since the softer phase is generally the first to melt, the resulting microstructure typically consists of hard particles in a ductile matrix, increasing the toughness of an otherwise brittle component. However, liquid phase sintering is inherently less predictable than solid phase sintering due to the complexity added by the presence of additional phases and rapid solidification rates. Activated sintering is the solid-state analog to the process of liquid phase sintering.
Process.
Historically, liquid phase sintering was used to process ceramic materials like clay bricks, earthenware, and porcelain. Modern liquid phase sintering was first applied in the 1930s to materials like cemented carbides (e.g. WC-Co) for cutting tools, porous brass (Cu-Sn) for oil-less bearings, and tungsten-heavy alloys (W-Ni-Cu), but now finds applications ranging from superalloys to dental ceramics to capacitors. Liquid phase sintering occurs in three overlapping stages.
Rearrangement.
Two powders, a base and an additive, are mixed and pressed into a green compact. The green compact is then heated to a temperature where a liquid forms; volume fractions between 5-15% liquid are typical. The capillary force due to the wetting of the solid particles by the liquid rapidly pulls the liquid into interparticle voids and causes particles to rearrange. Wettability is described by the contact angle, formula_0, which can be given as a difference of relative surface energies between the solid, liquid, and vapor (formula_1, formula_2, formula_3, respectively):
formula_4
Low contact angles indicate good wettability, and will result in a capillary force pulling the compact together. High contact angles indicate poor wettability, which will result in compact swelling. Wettability can be improved by alloying or by increasing temperature, and is also aided by small, regularly shaped particles and a homogeneous green compact. An extremely effective approach is to directly coat powders with the liquid-forming component, allowing the liquid phase to form directly on the particle boundaries. However, components can experience “slumping”, or shape distortion, if too much liquid is formed during this stage. The rearrangement stage proceeds very rapidly, with the majority of densification occurring within three minutes of melt formation.
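As a numerical illustration of the wettability criterion, the sketch below assumes the contact-angle relation takes the familiar Young form, cos θ = (γSV − γSL)/γLV, which is consistent with, but not quoted from, the expression above; the interfacial-energy values are hypothetical.
import math
def contact_angle_deg(gamma_sv, gamma_sl, gamma_lv):
    """Contact angle from the assumed Young relation cos(theta) = (gamma_sv - gamma_sl) / gamma_lv."""
    cos_theta = (gamma_sv - gamma_sl) / gamma_lv
    return math.degrees(math.acos(max(-1.0, min(1.0, cos_theta))))
# Hypothetical interfacial energies (J/m^2): good wetting (low angle) vs. poor wetting (high angle).
print(contact_angle_deg(gamma_sv=2.0, gamma_sl=1.6, gamma_lv=0.5))   # ~36.9 degrees -> capillary densification
print(contact_angle_deg(gamma_sv=1.0, gamma_sl=1.3, gamma_lv=0.5))   # ~126.9 degrees -> compact swelling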
Solution-Reprecipitation.
As porosity is eliminated and rearrangement slows, diffusive mechanisms, analogous to those present in diffusional creep, become dominant and change the sizes and shapes of powder particles. These mechanisms proceed via the dissolution of solid into the liquid phase, diffusion through the liquid, and reprecipitation; hence, the solubility and diffusivity of the solid in the liquid control the rates of these processes. The process of grain growth or particle coarsening is called Ostwald ripening and occurs because smaller grains are more soluble in the liquid than larger grains. The resulting concentration gradient causes material to diffuse through the liquid, causing larger grains to grow at the expense of smaller grains. Shape change proceeds similarly; in a process termed “contact flattening”, solid preferentially dissolves in areas with high capillary pressure (i.e. where particles are close together) and reprecipitates elsewhere. Thus, two curved surfaces in close proximity will flatten over time. Shape change can also be driven by anisotropy in the surface energy of the solid and/or differences in the magnitudes of the solid-solid and solid-liquid interfacial energies. These shape changes allow the grains to pack more tightly, further eliminating porosity and densifying the compact. Early models of solution-reprecipitation demonstrate that the rate of densification can be increased by increasing temperature, decreasing the grain size, and increasing the solid solubility in the liquid.
Final Densification.
In the final stage, densification is slowed even further because the compact strengthens with neck growth and the formation of a solid skeletal microstructure. This regime is typically best described by classical solid phase sintering. Rearrangement is inhibited, but coarsening continues to occur via diffusion. Additionally, pores containing trapped gas can expand until the pore pressure, formula_5, is balanced against the liquid-vapor surface energy. For spherical pores with diameter formula_6, this is described by
formula_7
where formula_8 is the liquid/vapor interfacial energy. Generally, due to coarsening and pore expansion, extensive time in this final stage tends to degrade the properties of compacts.
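As a rough worked example of this pressure balance, the short Python sketch below (the interfacial energy and pore diameter are assumed, illustrative values) evaluates the equilibrium pore pressure:

```python
def pore_pressure(gamma_lv, d_pore):
    """Equilibrium gas pressure (Pa) in a spherical pore of diameter d_pore (m)
    balanced by the liquid-vapor interfacial energy gamma_lv (J/m^2)."""
    return 4.0 * gamma_lv / d_pore

# Illustrative values: gamma_LV = 1 J/m^2 and a 1-micrometre pore.
print(pore_pressure(1.0, 1e-6))  # 4.0e6 Pa, i.e. about 4 MPa
```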
Properties.
Generally, the liquid phase will solidify into a continuous ductile matrix that encapsulates the harder, brittle particles. Mechanical properties are typically the primary concern for sintered components, which are composites with the hard phase providing strength and the matrix providing toughness. The mechanical properties are largely dictated by the residual porosity, but in fully dense components, the dominant factor is the microstructure that forms as a result of sintering. As a first approximation, many mechanical properties, such as hardness and elastic modulus, can be linked to the volume fraction of each phase, with the rule of mixtures giving an upper bound and the inverse rule of mixtures giving a lower bound. High-temperature mechanical properties are typically controlled by the creep behavior of the matrix, due to its lower melting point. Thus, property optimization can be difficult, as reducing the volume fraction of matrix improves creep behavior, but may negatively impact the sintering behavior. For high-temperature materials, a variation of the process termed "transient liquid phase sintering" is typically used, in which the liquid is highly soluble in the solid phase and disappears over time.
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "\\theta"
},
{
"math_id": 1,
"text": "S"
},
{
"math_id": 2,
"text": "L"
},
{
"math_id": 3,
"text": "V"
},
{
"math_id": 4,
"text": " \\theta = \\arccos \\left( \\frac{\\gamma_{SV}}{\\gamma_{LV}} - \\frac{\\gamma_{SL}}{\\gamma_{LV}} \\right) "
},
{
"math_id": 5,
"text": "P_{pore}"
},
{
"math_id": 6,
"text": "d_{pore}"
},
{
"math_id": 7,
"text": "P_{pore}=\\frac{4\\gamma_{LV}}{d_{pore}}"
},
{
"math_id": 8,
"text": "\\gamma_{LV}"
}
] |
https://en.wikipedia.org/wiki?curid=66940304
|
66943476
|
1 Chronicles 19
|
First Book of Chronicles, chapter 19
1 Chronicles 19 is the nineteenth chapter of the Books of Chronicles in the Hebrew Bible or the First Book of Chronicles in the Old Testament of the Christian Bible. The book is compiled from older sources by an unknown person or group, designated by modern scholars as "the Chronicler", and had the final shape established in late fifth or fourth century BCE. This chapter records the account of David's wars against the neighboring nations, especially the Ammonites and the Arameans. The whole chapter belongs to the section focusing on the kingship of David (1 Chronicles 9:35 to 29:30).
Text.
This chapter was originally written in the Hebrew language. It is divided into 19 verses.
Textual witnesses.
Some early manuscripts containing the text of this chapter in Hebrew are of the Masoretic Text tradition, which includes the Aleppo Codex (10th century), and Codex Leningradensis (1008).
There is also a translation into Koine Greek known as the Septuagint, made in the last few centuries BCE, whose extant ancient manuscripts include Codex Vaticanus (B; formula_0B; 4th century), Codex Sinaiticus (S; BHK: formula_0S; 4th century), Codex Alexandrinus (A; formula_0A; 5th century) and Codex Marchalianus (Q; formula_0Q; 6th century).
David's messengers disgraced (19:1–9).
This section is part of the accounts largely corresponding with 2 Samuel 10:1–11:1; 12:26–31, omitting the episode of David, Bathsheba and Uriah the Hittite and . The death of a king, such as Nahash the Ammonite, could signal the end of international arrangements with other kingdoms, so David wanted to confirm a good relationship with Nahash's successor, Hanun, but David's successive victories against the Philistines, Moabites, Edomites, and Arameans made Hanun's counselors suspicious (verse 3). 1 Chronicles 19:4-8 and 2 Samuel 10:4-7 have a parallel in the Qumran (Dead Sea Scrolls) text (4Q51; 4Q Samuela or 4QSama, dating from c. 200 BCE), which shows that the 'relationship between Samuel and Chronicles was not one of unilateral or unambiguous independence', with distinctive differences such as the spelling of "David" in the books of Samuel (), which differs from that in the Chronicles and 4Q51 (), as well as some details in numbers.
"When the Ammonites saw that they had become a stench to David, Hanun and the Ammonites sent one thousand talents of silver to hire chariots and horsemen from Aram Naharaim, Aram Maakah, and Zobah."
"So they hired thirty and two thousand chariots, and the king of Maachah and his people; who came and pitched before Medeba. And the children of Ammon gathered themselves together from their cities, and came to battle."
David defeated the Ammonites and Arameans (19:10–19).
This passage parallels 2 Samuel 10:9–19 with a few differences. The victory of David's army against the Arameans (Syrians) left the Ammonites isolated from their allies.
"But the Arameans fled before Israel, and David killed seven thousand chariot drivers and forty thousand infantry men of the Arameans, and killed Shophak, the commander of the army."
Notes.
<templatestyles src="Reflist/styles.css" />
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": " \\mathfrak{G}"
}
] |
https://en.wikipedia.org/wiki?curid=66943476
|
669440
|
Fundamental class
|
In mathematics, the fundamental class is a homology class ["M"] associated to a connected orientable compact manifold of dimension "n", which corresponds to the generator of the homology group formula_0 . The fundamental class can be thought of as the orientation of the top-dimensional simplices of a suitable triangulation of the manifold.
Definition.
Closed, orientable.
When "M" is a connected orientable closed manifold of dimension "n", the top homology group is infinite cyclic: formula_1, and an orientation is a choice of generator, a choice of isomorphism formula_2. The generator is called the fundamental class.
If "M" is disconnected (but still orientable), a fundamental class is the direct sum of the fundamental classes for each connected component (corresponding to an orientation for each component).
In relation with de Rham cohomology it represents "integration over M"; namely for "M" a smooth manifold, an "n"-form ω can be paired with the fundamental class as
formula_3
which is the integral of ω over "M", and depends only on the cohomology class of ω.
Stiefel-Whitney class.
If "M" is not orientable, formula_4, and so one cannot define a fundamental class "M" living inside the integers. However, every closed manifold is formula_5-orientable, and
formula_6 (for "M" connected). Thus, every closed manifold is formula_5-oriented (not just orient"able": there is no ambiguity in choice of orientation), and has a formula_5-fundamental class.
This formula_5-fundamental class is used in defining Stiefel–Whitney class.
With boundary.
If "M" is a compact orientable manifold with boundary, then the top relative homology group is again infinite cyclic formula_7, and so the notion of the fundamental class can be extended to the manifold with boundary case.
Poincaré duality.
The Poincaré duality theorem relates the homology and cohomology groups of "n"-dimensional oriented closed manifolds: if "R" is a commutative ring and "M" is an "n"-dimensional "R"-orientable closed manifold with fundamental class "[M]", then for all "k", the map
formula_8
given by
formula_9
is an isomorphism.
Using the notion of fundamental class for manifolds with boundary, we can extend Poincaré duality to that case too (see Lefschetz duality). In fact, the cap product with a fundamental class gives a stronger duality result saying that we have isomorphisms formula_10, assuming we have that formula_11 are formula_12-dimensional manifolds with formula_13 and formula_14.
See also Twisted Poincaré duality
Applications.
In the Bruhat decomposition of the flag variety of a Lie group, the fundamental class corresponds to the top-dimension Schubert cell, or equivalently the longest element of a Coxeter group.
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "H_n(M,\\partial M;\\mathbf{Z})\\cong\\mathbf{Z}"
},
{
"math_id": 1,
"text": "H_n(M;\\mathbf{Z}) \\cong \\mathbf{Z}"
},
{
"math_id": 2,
"text": "\\mathbf{Z} \\to H_n(M;\\mathbf{Z})"
},
{
"math_id": 3,
"text": "\\langle\\omega, [M]\\rangle = \\int_M \\omega\\ ,"
},
{
"math_id": 4,
"text": "H_n(M;\\mathbf{Z}) \\ncong \\mathbf{Z}"
},
{
"math_id": 5,
"text": "\\mathbf{Z}_2"
},
{
"math_id": 6,
"text": "H_n(M;\\mathbf{Z}_2)=\\mathbf{Z}_2"
},
{
"math_id": 7,
"text": "H_n(M,\\partial M)\\cong \\mathbf{Z}"
},
{
"math_id": 8,
"text": " H^k(M;R) \\to H_{n-k}(M;R) "
},
{
"math_id": 9,
"text": " \\alpha \\mapsto [M] \\frown \\alpha "
},
{
"math_id": 10,
"text": "H^q(M, A;R) \\cong H_{n-q}(M, B;R)"
},
{
"math_id": 11,
"text": "A, B"
},
{
"math_id": 12,
"text": "(n-1)"
},
{
"math_id": 13,
"text": "\\partial A=\\partial B= A\\cap B"
},
{
"math_id": 14,
"text": "\\partial M=A\\cup B"
}
] |
https://en.wikipedia.org/wiki?curid=669440
|
66946224
|
Gamma ray cross section
|
Probability that a gamma ray interacts with matter
A gamma ray cross section is a measure of the probability that a gamma ray interacts with matter. The total cross section of gamma ray interactions is composed of several independent processes: photoelectric effect, Compton (incoherent) scattering, electron-positron pair production in the nucleus field and electron-positron pair production in the electron field (triplet production). The cross section of each single process listed above contributes a part of the total gamma ray cross section.
Other effects, like photonuclear absorption and Thomson or Rayleigh (coherent) scattering, can be omitted because of their insignificant contribution in the gamma ray range of energies.
The detailed equations for cross sections (barn/atom) of all mentioned effects connected with gamma ray interaction with matter are listed below.
Photoelectric effect cross section.
The photoelectric effect phenomenon describes the interaction of a gamma photon with an electron located in the atomic structure. This results in the ejection of that electron from the atom. The photoelectric effect is the dominant energy transfer mechanism for X-ray and gamma ray photons with energies below 50 keV. It is much less important at higher energies, but still needs to be taken into consideration.
Usually, the cross section of the photoeffect can be approximated by the simplified equation
formula_0
where "k = Eγ / Ee", and where "Eγ = hν" is the photon energy given in eV and "Ee = me c2" ≈ 5,11∙105 eV is the electron rest mass energy, "Z" is an atomic number of the absorber's element, "α = e2/(ħc)" ≈ 1/137 is the fine structure constant, and "re2 = e4/Ee2" ≈ 0.07941 b is the square of the classical electron radius in barns.
For higher precision, however, the Sauter equation is more appropriate:
formula_1
where
formula_2
and "EB" is a binding energy of electron, and ϕ"0" is a Thomson cross section (ϕ"0" = 8"πe4/(3Ee2)" ≈ 0.66526 barn).
For higher energies (>0.5 MeV) the cross section of the photoelectric effect is very small because other effects (especially Compton scattering) dominate. However, for precise calculations of the photoeffect cross section in the high energy range, the Sauter equation should be replaced by the Pratt-Scofield equation
formula_3
where all input parameters are presented in the Table below.
Compton scattering cross section.
Compton scattering (or the Compton effect) is an interaction in which an incident gamma photon interacts with an atomic electron, causing its ejection and the scattering of the original photon with lower energy. The probability of Compton scattering decreases with increasing photon energy. Compton scattering is thought to be the principal absorption mechanism for gamma rays in the intermediate energy range of 100 keV to 10 MeV.
The cross section of the Compton effect is described by the Klein-Nishina equation:
formula_4
for energies higher than 100 keV (k>0.2). For lower energies, however, this equation should be replaced by:
formula_5
which is proportional to the absorber's atomic number, "Z".
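The Klein-Nishina formula above translates directly into code; the following Python sketch (illustrative only, valid for the k>0.2 regime as noted) evaluates the per-atom Compton cross section:

```python
import math

RE2_BARN = 0.07941  # classical electron radius squared, in barns

def sigma_compton(k, Z):
    """Klein-Nishina Compton cross section per atom (barns), valid for k > ~0.2,
    where k = E_gamma / (m_e c^2) and Z is the atomic number of the absorber."""
    log_term = math.log(1.0 + 2.0 * k)
    per_electron = 2.0 * math.pi * RE2_BARN * (
        (1.0 + k) / k**2 * (2.0 * (1.0 + k) / (1.0 + 2.0 * k) - log_term / k)
        + log_term / (2.0 * k)
        - (1.0 + 3.0 * k) / (1.0 + 2.0 * k)**2
    )
    return Z * per_electron

# Example: 1 MeV photon (k ~ 1.96) on aluminium (Z = 13); about 2.75 barn/atom.
print(f"{sigma_compton(1.0e6 / 5.11e5, 13):.2f} barn")
```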
An additional cross section connected with the Compton effect can be calculated for the energy transfer coefficient only – i.e., for the part of the photon energy absorbed by the electron:
formula_6
which is often used in radiation protection calculations.
Pair production (in nucleus field) cross section.
By interaction with the electric field of a nucleus, the energy of the incident photon is converted into the mass of an electron-positron (e−e+) pair. The cross section for the pair production effect is usually described by the Maximon equation:
formula_7 for low energies ("k"<4),
where
formula_8.
However, for higher energies ("k">4) the Maximon equation takes the form
formula_9
where ζ(3)≈1.2020569 is the Riemann zeta function. The energy threshold for the pair production effect is "k"=2 (the positron and electron rest mass energy).
Triplet production cross section.
The triplet production effect, in which an electron-positron pair is produced in the field of another electron, is similar to pair production, with the threshold at "k"=4. This effect, however, is much less probable than pair production in the nucleus field. The most popular form of the triplet cross section was formulated as the Borsellino-Ghizzetti equation
formula_10
where "a"=-2.4674 and "b"=-1.8031. This equation is quite long, so Haug proposed simpler analytical forms of triplet cross section. Especially for the lowest energies 4<"k"<4.6:
formula_11
For 4.6<"k"<6:
formula_12
For 6<"k"<18:
formula_13
For "k">14 Haug proposed to use a shorter form of Borsellino equation:
formula_14
Total cross section.
One can present the total cross section per atom as a simple sum of the cross sections of each effect:
formula_15
Next, using the Beer–Lambert–Bouguer law, one can calculate the linear attenuation coefficient for the photon interaction with an absorber of atomic density "N":
formula_16
or the mass attenuation coefficient:
formula_17
where "ρ" is mass density, "u" is an atomic mass unit, a "A" is the atomic mass of the absorber.
This can be directly used in practice, e.g. in the radiation protection.
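A minimal sketch of this last step in Python (the total cross section and atomic density used below are assumed, illustrative inputs; in practice they come from the equations above or from tabulated data such as XCOM):

```python
U_GRAMS = 1.66054e-24   # atomic mass unit in grams
BARN_CM2 = 1.0e-24      # 1 barn in cm^2

def linear_attenuation(sigma_total_barn, atomic_density_per_cm3):
    """Linear attenuation coefficient mu (1/cm) from the total cross section per atom."""
    return sigma_total_barn * BARN_CM2 * atomic_density_per_cm3

def mass_attenuation(sigma_total_barn, atomic_mass):
    """Mass attenuation coefficient mu/rho (cm^2/g): sigma_total / (u * A)."""
    return sigma_total_barn * BARN_CM2 / (U_GRAMS * atomic_mass)

# Illustrative example: assume sigma_total ~ 2.75 barn/atom for 1 MeV photons in aluminium.
sigma_total = 2.75            # barn/atom (assumed value for this example)
n_al = 6.03e22                # atoms/cm^3 for aluminium
print(linear_attenuation(sigma_total, n_al))   # ~0.17 1/cm
print(mass_attenuation(sigma_total, 26.98))    # ~0.061 cm^2/g
```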
The analytical calculation of the cross section of each specific phenomenon is rather difficult because appropriate equations are long and complicated. Thus, the total cross section of gamma interaction can be presented in one phenomenological equation formulated by Fornalski, which can be used instead:
formula_18
where ai,j parameters are presented in Table below. This formula is an approximation of the total cross section of gamma rays interaction with matter, for different energies (from 1 MeV to 10 GeV, namely 2<"k"<20,000) and absorber's atomic numbers (from "Z"=1 to 100).
For the lower energy region (<1 MeV) the Fornalski equation is more complicated due to the larger variability of the function for different elements. Therefore, the modified equation
formula_19
is a good approximation for photon energies from 150 keV to 10 MeV, where the photon energy "E" is given in MeV, and the ai,j parameters, presented in the Table below, provide much better precision. Analogously, the equation is valid for all "Z" from 1 to 100.
XCOM Database of cross sections.
The US National Institute of Standards and Technology has published online a complete and detailed database of cross section values of X-ray and gamma ray interactions with different materials at different energies. The database, called XCOM, also contains linear and mass attenuation coefficients, which are useful for practical applications.
See also.
<templatestyles src="Div col/styles.css"/>
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "\\sigma_{ph} = \\frac{16}{3}\\sqrt{2}\\pi r_e^2 \\alpha^4 \\frac{Z^5}{k^{3.5}} \\approx 5 \\cdot 10^{11} \\frac{Z^5}{E_\\gamma^{3.5}}\\, \\mathrm{b} "
},
{
"math_id": 1,
"text": "\\sigma_{ph} = \\frac{3}{2} \\phi_0 \\alpha^4 \\biggl(Z \\frac{E_e}{E_\\gamma} \\biggr)^5 (\\gamma^2-1)^{3/2} \\Biggl[\\frac{4}{3}+\\frac{\\gamma(\\gamma-2)}{\\gamma+1} \\Biggl(1-\\frac{1}{2\\gamma(\\gamma^2-1)^{1/2} } \\ln \\frac{\\gamma+(\\gamma^2-1)^{1/2}}{\\gamma-(\\gamma^2-1)^{1/2}}\\Biggr)\\Biggr]"
},
{
"math_id": 2,
"text": "\\gamma=\\frac{E_\\gamma-E_B+E_e}{E_e} "
},
{
"math_id": 3,
"text": "\\sigma_{ph}=Z^5 \\Biggl(\\sum_{n=1}^4 \\frac{a_n+b_n Z}{1+c_n Z} k^{-p_n} \\Biggr)"
},
{
"math_id": 4,
"text": "\\sigma_C = Z 2 \\pi r_e^2 \\Biggl\\{ \\frac{1+k}{k^2} \\Biggl[ \\frac{2(1+k)}{1+2k}-\\frac{\\ln{(1+2k)}}{k} \\Biggr] + \\frac{\\ln{(1+2k)}}{2k} - \\frac{1+3k}{(1+2k)^2} \\Biggr\\}"
},
{
"math_id": 5,
"text": "\\sigma_C=Z \\frac{8}{3} \\pi r_e^2 \\frac{1}{(1+2k)^2} \\biggl(1 + 2k + \\frac{6}{5} k^2 - \\frac{1}{2} k^3+\\frac{2}{7} k^4-\\frac{6}{35} k^5+\\frac{8}{105} k^6+\\frac{4}{105} k^7 \\biggr)"
},
{
"math_id": 6,
"text": "\\sigma_{C,abs}=Z 2 \\pi r_e^2 \\biggl[ \\frac{2(1+k)^2}{k^2 (1+2k) } - \\frac{1+3k}{(1+2k)^2} -\\frac{(1+k)(2k^2-2k-1)}{k^2(1+2k)^2}-\\frac{4k^2}{3(1+2k)^3}- \\Bigl( \\frac{1+k}{k^3} -\\frac{1}{2k} + \\frac{1}{2k^3} \\Bigr) \\ln{(1+2k)} \\biggr]"
},
{
"math_id": 7,
"text": "\\sigma_{pair}=Z^2 \\alpha r_e^2 \\frac{2\\pi}{3} \\biggl(\\frac{k-2}{k}\\biggr)^3 \\biggl( 1 + \\frac{1}{2}\\rho + \\frac{23}{40}\\rho^2 + \\frac{11}{60}\\rho^3 + \\frac{29}{960}\\rho^4 \\biggr) "
},
{
"math_id": 8,
"text": "\\rho = \\frac{2k-4}{2+k+2\\sqrt{2k}}"
},
{
"math_id": 9,
"text": "\\sigma_{pair}=Z^2 \\alpha r_e^2 \\Biggl\\{ \\frac{28}{9} \\ln{2k}-\\frac{218}{27}\n+(\\frac{2}{k})^2 \\biggl[6 \\ln{2k}-\\frac{7}{2}+\\frac{2}{3} \\ln^3{2k}-\\ln^2{2k} -\\frac{1}{3} \\pi^2 \\ln{2k}+2 \\zeta (3)+\\frac{\\pi^2}{6} \\biggr]\n-(\\frac{2}{k})^4 \\biggl[\\frac{3}{16} \\ln{2k}+\\frac{1}{8} \\biggr]-(\\frac{2}{k})^6 \\biggl[\\frac{29}{9\\cdot256} \\ln{2k}-\\frac{77}{27\\cdot512} \\biggr] \\Biggr\\}"
},
{
"math_id": 10,
"text": "\\begin{align}\\sigma_{trip}=Z \\alpha r_e^2 \\Biggl[ \\frac{28}{9} \\ln{2k}-\\frac{218}{27}\n&+ \\frac{1}{k}\\biggl(-\\frac{4}{3}\\ln^3{2k} + 3\\ln^2{2k} - \\frac{60+16a}{3} \\ln{2k} + \\frac{123+12a+16b}{3} \\biggr)\\\\\n&+\\frac{1}{k^2} \\biggl( \\frac{8}{3}\\ln^3{2k} - 4\\ln^2{2k} + \\frac{51+32a}{3} \\ln{2k} - \\frac{123+32a+64b}{6} \\biggr)\\\\\n&+\\frac{1}{k^3} \\biggl( \\ln^2{2k} - \\frac{53}{9} \\ln{2k} - \\frac{2915-288a}{216} \\biggr)\\\\\n&+\\frac{1}{k^4} \\biggl( -\\frac{49}{18} \\ln{2k} - \\frac{115}{432} \\biggr)\\\\\n&+\\frac{1}{k^5} \\biggl( -\\frac{77}{36} \\ln{2k} - \\frac{10831}{8640} \\biggr)\\\\\n&+\\frac{1}{k^6} \\biggl( -\\frac{641}{300} \\ln{2k} - \\frac{64573}{36000} \\biggr)\\\\\n&+\\frac{1}{k^7} \\biggl( -\\frac{4423}{1800} \\ln{2k} - \\frac{394979}{216000} \\biggr)\n\\Biggl]\\end{align}\n"
},
{
"math_id": 11,
"text": "\\sigma_{trip,H}=Z \\alpha r_e^2 [5.6+20.4 (k-4)-10.9 (k-4)^2-3.6 (k-4)^3+7.4 (k-4)^4] 10^{-3} (k-4)^2 "
},
{
"math_id": 12,
"text": "\\sigma_{trip,H}=Z \\alpha r_e^2 (0.582814-0.29842 k+0.04354 k^2-0.0012977 k^3 ) "
},
{
"math_id": 13,
"text": "\\sigma_{trip,H}=Z \\alpha r_e^2 \\biggl( \\frac{3.1247-1.3394 k+0.14612 k^2}{1+0.4648 k+0.016683 k^2} \\biggr) "
},
{
"math_id": 14,
"text": "\\sigma_{trip,H}=Z \\alpha r_e^2 \\Biggl[ \\frac{28}{9} \\ln{2k}-\\frac{218}{27}\n+ \\frac{1}{k}\\biggl(-\\frac{4}{3}\\ln^3{2k} + 3.863\\ln^2{2k} - 11 \\ln{2k} + 27.9 \\biggr) \\Biggr]\n"
},
{
"math_id": 15,
"text": "\\sigma_{total}= \\sigma_{ph}+\\sigma_C+\\sigma_{pair}+\\sigma_{trip} \n"
},
{
"math_id": 16,
"text": "\\mu= \\sigma_{total} N\n"
},
{
"math_id": 17,
"text": "\\mu_d= \\frac{\\mu}{\\rho} = \\frac{\\sigma_{total}}{u A}\n"
},
{
"math_id": 18,
"text": "\\sigma_{total} (k,Z)= \\sum_{i=0}^6 \\biggl[ (\\ln{k})^i \\sum_{j=0}^4 a_{i,j} Z^j \\biggr] \n"
},
{
"math_id": 19,
"text": "\\sigma_{total} (E,Z)= \\exp \\sum_{i=0}^6 \\biggl[ (\\ln{E})^i \\sum_{j=0}^6 a_{i,j} Z^j \\biggr] \n"
}
] |
https://en.wikipedia.org/wiki?curid=66946224
|
6694711
|
Skolem arithmetic
|
In mathematical logic, Skolem arithmetic is the first-order theory of the natural numbers with multiplication, named in honor of Thoralf Skolem. The signature of Skolem arithmetic contains only the multiplication operation and equality, omitting the addition operation entirely.
Skolem arithmetic is weaker than Peano arithmetic, which includes both addition and multiplication operations. Unlike Peano arithmetic, Skolem arithmetic is a decidable theory. This means it is possible to effectively determine, for any sentence in the language of Skolem arithmetic, whether that sentence is provable from the axioms of Skolem arithmetic. The asymptotic running-time computational complexity of this decision problem is triply exponential.
Axioms.
We define the following abbreviations.
The axioms of Skolem arithmetic are:
Expressive power.
First-order logic with equality and multiplication of positive integers can express the relation
formula_0. Using this relation and equality, we can define the following relations on positive integers:
Idea of decidability.
The truth value of formulas of Skolem arithmetic can be reduced to the truth value of sequences of non-negative integers constituting their prime factor decomposition, with multiplication becoming point-wise addition of sequences. The decidability then follows from the Feferman–Vaught theorem, which can be shown using quantifier elimination. Another way of stating this is that the first-order theory of positive integers is isomorphic to the first-order theory of finite multisets of non-negative integers with the multiset sum operation, whose decidability reduces to the decidability of the theory of elements.
In more detail, according to the fundamental theorem of arithmetic, a positive integer formula_12 can be represented as a product of prime powers:
formula_13
If a prime number formula_14 does not appear as a factor, we define its exponent formula_15 to be zero. Thus, only finitely many exponents are non-zero in the infinite sequence formula_16. Denote such sequences of non-negative integers by formula_17.
Now consider the decomposition of another positive number,
formula_18
The multiplication formula_19 corresponds to point-wise addition of the exponents:
formula_20
Define the corresponding point-wise addition on sequences by:
formula_21
Thus we have an isomorphism between the structure of positive integers with multiplication, formula_22 and of point-wise addition of the sequences of non-negative integers in which only finitely many elements are non-zero, formula_23.
By the Feferman–Vaught theorem for first-order logic, the truth value of a first-order logic formula over sequences and pointwise addition on them reduces, in an algorithmic way, to the truth value of formulas in the theory of elements of the sequence with addition, which, in this case, is Presburger arithmetic. Because Presburger arithmetic is decidable, Skolem arithmetic is also decidable.
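A small Python sketch of this isomorphism (purely illustrative, using trial division on small integers): each positive integer is mapped to its multiset of prime exponents, and multiplication of integers becomes point-wise addition of exponents.

```python
from collections import Counter

def exponents(n):
    """Prime-factor exponent multiset of a positive integer, e.g. 12 -> {2: 2, 3: 1}.
    Only finitely many primes get a non-zero exponent, matching the sequences in N*."""
    factors = Counter()
    d = 2
    while d * d <= n:
        while n % d == 0:
            factors[d] += 1
            n //= d
        d += 1
    if n > 1:
        factors[n] += 1
    return factors

def pointwise_add(e1, e2):
    """Point-wise addition of exponent sequences (the image of multiplication)."""
    return e1 + e2  # Counter addition adds counts component-wise

a, b = 12, 45
assert pointwise_add(exponents(a), exponents(b)) == exponents(a * b)
print(exponents(a), exponents(b), exponents(a * b))
```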
Complexity.
establish, using Ehrenfeucht–Fraïssé games, a method to prove upper bounds on decision problem complexity of weak direct powers of theories. They apply this method to obtain triply exponential space complexity for formula_23, and thus of Skolem arithmetic.
proves that the satisfiability problem for the "quantifier-free" fragment of Skolem arithmetic belongs to the NP complexity class.
Decidable extensions.
Thanks to the above reduction using the Feferman–Vaught theorem, we can obtain first-order theories whose open formulas define a larger set of relations if we strengthen the theory of multisets of prime factors. For example, consider the relation formula_24 that is true if and only if formula_25 and formula_7 have an equal number of distinct prime factors:
formula_26
For example, formula_27 because both sides denote a number that has two distinct prime factors.
If we add the relation formula_28 to Skolem arithmetic, it remains decidable. This is because the theory of sets of indices remains decidable in the presence of the equinumerosity operator on sets, as shown by the Feferman–Vaught theorem.
Undecidable extensions.
An extension of Skolem arithmetic with the successor predicate, formula_29, can define the addition relation using Tarski's identity:
formula_30
and defining the relation formula_31 on positive integers by
formula_32
Because it can express both multiplication and addition, the resulting theory is undecidable.
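Tarski's identity can be checked mechanically; the following Python sketch (an illustration, not part of the theory) verifies the equivalence on a range of small non-negative integers.

```python
def tarski_lhs(a, b, c):
    # Left-hand side of the identity: c = 0 or c = a + b.
    return c == 0 or c == a + b

def tarski_rhs(a, b, c):
    # Right-hand side: (ac + 1)(bc + 1) = c^2 (ab + 1) + 1.
    return (a * c + 1) * (b * c + 1) == c * c * (a * b + 1) + 1

# Exhaustive check of the equivalence on a small sample range.
assert all(
    tarski_lhs(a, b, c) == tarski_rhs(a, b, c)
    for a in range(20) for b in range(20) for c in range(40)
)
print("Tarski's identity verified on the sample range")
```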
If we have an ordering predicate on natural numbers (less than, formula_33), we can express formula_34 by
formula_35
so the extension with formula_33 is also undecidable.
References.
<templatestyles src="Reflist/styles.css" />
Bibliography.
<templatestyles src="Refbegin/styles.css" />
|
[
{
"math_id": 0,
"text": "c = a \\cdot b"
},
{
"math_id": 1,
"text": "b | c \\ \\Leftrightarrow \\ \\exists a. c = a \\cdot b"
},
{
"math_id": 2,
"text": "d = \\gcd(a,b) \\ \\Leftrightarrow \\ d | a \\land d | b \\land \\forall d'. (d' | a \\land d' | b) \\Rightarrow d'|d "
},
{
"math_id": 3,
"text": "m = \\mathrm{lcm}(a,b) \\ \\Leftrightarrow \\ a | m \\land b | m \\land \\forall m'. (a | m' \\land b | m') \\Rightarrow m|m' "
},
{
"math_id": 4,
"text": "1"
},
{
"math_id": 5,
"text": "\\forall a. 1|a"
},
{
"math_id": 6,
"text": "\\mathrm{prime}(p) \\ \\Leftrightarrow\\ p \\neq 1 \\land \\forall a. a|p \\Rightarrow (a = 1 \\lor a = p)"
},
{
"math_id": 7,
"text": "b"
},
{
"math_id": 8,
"text": "k"
},
{
"math_id": 9,
"text": "\\exists a_1,...a_k.\\ \\mathrm{prime}(a_1) \\land \\ldots \\land \\mathrm{prime}(a_k) \\land b = a_1 \\cdot \\ldots \\cdot a_k "
},
{
"math_id": 10,
"text": "\\mathrm{ppower}(b) \\ \\Leftrightarrow\\ \\exists p. \\mathrm{prime}(p) \\land \\forall a. (a \\neq 1 \\land a|b) \\Rightarrow p|a"
},
{
"math_id": 11,
"text": "\\exists a_1,...a_k.\\ \\mathrm{ppower}(a_1) \\land \\ldots \\land \\mathrm{ppower}(a_k) \\land b = a_1 \\cdot \\ldots \\cdot a_k "
},
{
"math_id": 12,
"text": "a > 1"
},
{
"math_id": 13,
"text": "\na = p_1^{a_1}p_2^{a_2} \\cdots \n"
},
{
"math_id": 14,
"text": "p_k"
},
{
"math_id": 15,
"text": "a_k"
},
{
"math_id": 16,
"text": "a_1,a_2,\\ldots"
},
{
"math_id": 17,
"text": "N^*"
},
{
"math_id": 18,
"text": "\nb = p_1^{b_1}p_2^{b_2} \\cdots \n"
},
{
"math_id": 19,
"text": "a b"
},
{
"math_id": 20,
"text": "\na b = p_1^{a_1 + b_1} p_2^{a_2 + b_2} \\cdots \n"
},
{
"math_id": 21,
"text": "\n(a_1,a_2,\\ldots) \\bar{+} (b_1, b_2, \\ldots) = (a_1 + b_1, a_2 + b_2, \\ldots)\n"
},
{
"math_id": 22,
"text": "(N,\\cdot)"
},
{
"math_id": 23,
"text": "(N^*, \\bar{+})"
},
{
"math_id": 24,
"text": "a \\sim b"
},
{
"math_id": 25,
"text": "a"
},
{
"math_id": 26,
"text": "\n | \\{ p \\mid \\mathrm{prime}(p) \\land (p|a) \\} | \\ = \\ | \\{ p \\mid \\mathrm{prime}(p) \\land (p|b) \\} |\n"
},
{
"math_id": 27,
"text": "2^{10} \\cdot 3^{100} \\sim 5^8 \\cdot 19^9"
},
{
"math_id": 28,
"text": "\\sim"
},
{
"math_id": 29,
"text": "succ(n)=n+1"
},
{
"math_id": 30,
"text": "\n(c = 0 \\lor c = a + b) \\Leftrightarrow (ac + 1)(bc + 1) = c^2 (ab + 1) + 1\n"
},
{
"math_id": 31,
"text": "c = a + b"
},
{
"math_id": 32,
"text": "\n\\mathrm{succ}(ac)\\, \\mathrm{succ}(bc) = \\mathrm{succ}(c^2 \\mathrm{succ}(ab))\n"
},
{
"math_id": 33,
"text": "<"
},
{
"math_id": 34,
"text": "\\mathrm{succ}"
},
{
"math_id": 35,
"text": "\n\\mathrm{succ}(a) = b \\ \\ \\Leftrightarrow \\ \\ a < b \\land \\forall c. \\big(a < c \\Rightarrow (b = c \\lor b < c)\\big)\n"
}
] |
https://en.wikipedia.org/wiki?curid=6694711
|
669475
|
Closed manifold
|
Topological concept in mathematics
In mathematics, a closed manifold is a manifold without boundary that is compact.
In comparison, an open manifold is a manifold without boundary that has only "non-compact" components.
Examples.
The only connected one-dimensional example is a circle.
The sphere, torus, and the Klein bottle are all closed two-dimensional manifolds. The real projective space RPn is a closed n-dimensional manifold. The complex projective space CPn is a closed 2n-dimensional manifold.
A line is not closed because it is not compact.
A closed disk is a compact two-dimensional manifold, but it is not closed because it has a boundary.
Properties.
Every closed manifold is a Euclidean neighborhood retract and thus has finitely generated homology groups.
If formula_0 is a closed connected n-manifold, the n-th homology group formula_1 is formula_2 or 0 depending on whether formula_0 is orientable or not. Moreover, the torsion subgroup of the (n-1)-th homology group formula_3 is 0 or formula_4 depending on whether formula_0 is orientable or not. This follows from an application of the universal coefficient theorem.
Let formula_5 be a commutative ring. For formula_5-orientable formula_0 with
fundamental class formula_6, the map formula_7 defined by formula_8 is an isomorphism for all k. This is the Poincaré duality. In particular, every closed manifold is formula_4-orientable. So there is always an isomorphism formula_9.
Open manifolds.
For a connected manifold, "open" is equivalent to "without boundary and non-compact", but for a disconnected manifold, open is stronger. For instance, the disjoint union of a circle and a line is non-compact since a line is non-compact, but this is not an open manifold since the circle (one of its components) is compact.
Abuse of language.
Most books generally define a manifold as a space that is, locally, homeomorphic to Euclidean space (along with some other technical conditions), thus by this definition a manifold does not include its boundary when it is embedded in a larger space. However, this definition doesn’t cover some basic objects such as a closed disk, so authors sometimes define a manifold with boundary and abusively say "manifold" without reference to the boundary. With the usual definition of a manifold, however, a compact manifold (compact with respect to its underlying topology) can be used synonymously with a closed manifold.
The notion of a closed manifold is unrelated to that of a closed set. A line is a closed subset of the plane, and a manifold, but not a closed manifold.
Use in physics.
The notion of a "closed universe" can refer to the universe being a closed manifold but more likely refers to the universe being a manifold of constant positive Ricci curvature.
References.
<templatestyles src="Reflist/styles.css" />
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "M"
},
{
"math_id": 1,
"text": "H_{n}(M;\\mathbb{Z})"
},
{
"math_id": 2,
"text": "\\mathbb{Z}"
},
{
"math_id": 3,
"text": "H_{n-1}(M;\\mathbb{Z}) "
},
{
"math_id": 4,
"text": "\\mathbb{Z}_2"
},
{
"math_id": 5,
"text": "R"
},
{
"math_id": 6,
"text": "[M]\\in H_{n}(M;R) "
},
{
"math_id": 7,
"text": "D: H^k(M;R) \\to H_{n-k}(M;R)"
},
{
"math_id": 8,
"text": "D(\\alpha)=[M]\\cap\\alpha"
},
{
"math_id": 9,
"text": "H^k(M;\\mathbb{Z}_2) \\cong H_{n-k}(M;\\mathbb{Z}_2)"
}
] |
https://en.wikipedia.org/wiki?curid=669475
|
669532
|
Local class field theory
|
In mathematics, local class field theory, introduced by Helmut Hasse, is the study of abelian extensions of local fields; here, "local field" means a field which is complete with respect to an absolute value or a discrete valuation with a finite residue field: hence every local field is isomorphic (as a topological field) to the real numbers R, the complex numbers C, a finite extension of the "p"-adic numbers Q"p" (where "p" is any prime number), or the field of formal Laurent series F"q"(("T")) over a finite field F"q".
Approaches to local class field theory.
Local class field theory gives a description of the Galois group "G" of the maximal abelian extension of a local field "K" via the reciprocity map which acts from the multiplicative group "K"×="K"\{0}. For a finite abelian extension "L" of "K" the reciprocity map induces an isomorphism of the quotient group "K"×/"N"("L"×) of "K"× by the norm group "N"("L"×) of the extension "L"× to the Galois group Gal("L"/"K")
of the extension.
The existence theorem in local class field theory establishes a one-to-one correspondence between open subgroups of finite index in the multiplicative group "K"× and finite abelian extensions of the field "K". For a finite abelian extension "L" of "K" the corresponding open subgroup of finite index is the norm group "N"("L"×). The reciprocity map sends higher groups of units to higher ramification subgroups, see e.g. Ch. IV of.
Using the local reciprocity map, one defines the Hilbert symbol and its generalizations. Finding explicit formulas for it is one of the subdirections of the theory of local fields; it has a long and rich history, see e.g. Sergei Vostokov's review.
There are cohomological approaches and non-cohomological approaches to local class field theory. Cohomological approaches tend to be non-explicit, since they use the cup-product of the first Galois cohomology groups.
For various approaches to local class field theory see Ch. IV and sect. 7 Ch. IV of. They include the Hasse approach of using the Brauer group, cohomological approaches, the explicit methods of Jürgen Neukirch, Michiel Hazewinkel, the Lubin-Tate theory and others.
Generalizations of local class field theory.
Generalizations of local class field theory to local fields with quasi-finite residue field were easy extensions of the theory, obtained by G. Whaples in the 1950s, see chapter V of.
Explicit p-class field theory for local fields with perfect and imperfect residue fields which are not finite has to deal with the new issue of norm groups of infinite index. Appropriate theories were constructed by Ivan Fesenko.
Fesenko's noncommutative local class field theory for arithmetically profinite Galois extensions of local fields studies appropriate local reciprocity cocycle map and its properties. This arithmetic theory can be viewed as an alternative to the representation theoretical local Langlands correspondence.
Higher local class field theory.
For a higher-dimensional local field formula_0 there is a higher local reciprocity map which describes abelian extensions of the field in terms of open subgroups of finite index in the Milnor K-group of the field. Namely, if formula_0 is an formula_1-dimensional local field then one uses formula_2 or its separated quotient endowed with a suitable topology. When formula_3 the theory becomes the usual local class field theory. Unlike the classical case, Milnor K-groups do not satisfy Galois module descent if formula_4. General higher-dimensional local class field theory was developed by K. Kato and I. Fesenko.
Higher local class field theory is part of higher class field theory which studies abelian extensions (resp. abelian covers) of rational function fields of proper regular schemes flat over integers.
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "K"
},
{
"math_id": 1,
"text": "n"
},
{
"math_id": 2,
"text": "\\mathrm{K}^{\\mathrm{M}}_n(K)"
},
{
"math_id": 3,
"text": "n=1"
},
{
"math_id": 4,
"text": "n>1"
}
] |
https://en.wikipedia.org/wiki?curid=669532
|
669552
|
Modular curve
|
Algebraic variety
In number theory and algebraic geometry, a modular curve "Y"(Γ) is a Riemann surface, or the corresponding algebraic curve, constructed as a quotient of the complex upper half-plane H by the action of a congruence subgroup Γ of the modular group of integral 2×2 matrices SL(2, Z). The term modular curve can also be used to refer to the compactified modular curves "X"(Γ) which are compactifications obtained by adding finitely many points (called the cusps of Γ) to this quotient (via an action on the extended complex upper-half plane). The points of a modular curve parametrize isomorphism classes of elliptic curves, together with some additional structure depending on the group Γ. This interpretation allows one to give a purely algebraic definition of modular curves, without reference to complex numbers, and, moreover, prove that modular curves are defined either over the field of rational numbers Q or a cyclotomic field Q(ζ"n"). The latter fact and its generalizations are of fundamental importance in number theory.
Analytic definition.
The modular group SL(2, Z) acts on the upper half-plane by fractional linear transformations. The analytic definition of a modular curve involves a choice of a congruence subgroup Γ of SL(2, Z), i.e. a subgroup containing the principal congruence subgroup of level "N" for some positive integer "N", which is defined to be
formula_0
The minimal such "N" is called the level of Γ. A complex structure can be put on the quotient Γ\H to obtain a noncompact Riemann surface called a modular curve, and commonly denoted "Y"(Γ).
Compactified modular curves.
A common compactification of "Y"(Γ) is obtained by adding finitely many points called the cusps of Γ. Specifically, this is done by considering the action of Γ on the extended complex upper-half plane H* = H ∪ Q ∪ {∞}. We introduce a topology on H* by taking as a basis any open subset of H, the sets formula_1 for all "r" > 0, and, for all coprime integers "a", "c" and all "r" > 0, the images of formula_1 under the action of
formula_2
where "m", "n" are integers such that "an" + "cm" = 1.
This turns H* into a topological space which is a subset of the Riemann sphere P1(C). The group Γ acts on the subset Q ∪ {∞}, breaking it up into finitely many orbits called the cusps of Γ. If Γ acts transitively on Q ∪ {∞}, the space Γ\H* becomes the Alexandroff compactification of Γ\H. Once again, a complex structure can be put on the quotient Γ\H* turning it into a Riemann surface denoted "X"(Γ) which is now compact. This space is a compactification of "Y"(Γ).
Examples.
The most common examples are the curves "X"("N"), "X"0("N"), and "X"1("N") associated with the subgroups Γ("N"), Γ0("N"), and Γ1("N").
The modular curve "X"(5) has genus 0: it is the Riemann sphere with 12 cusps located at the vertices of a regular icosahedron. The covering "X"(5) → "X"(1) is realized by the action of the icosahedral group on the Riemann sphere. This group is a simple group of order 60 isomorphic to "A"5 and PSL(2, 5).
The modular curve "X"(7) is the Klein quartic of genus 3 with 24 cusps. It can be interpreted as a surface with three handles tiled by 24 heptagons, with a cusp at the center of each face. These tilings can be understood via dessins d'enfants and Belyi functions – the cusps are the points lying over ∞ (red dots), while the vertices and centers of the edges (black and white dots) are the points lying over 0 and 1. The Galois group of the covering "X"(7) → "X"(1) is a simple group of order 168 isomorphic to PSL(2, 7).
There is an explicit classical model for "X"0("N"), the classical modular curve; this is sometimes called "the" modular curve. The definition of Γ("N") can be restated as follows: it is the subgroup of the modular group which is the kernel of the reduction modulo "N". Then Γ0("N") is the larger subgroup of matrices which are upper triangular modulo "N":
formula_3
and Γ1("N") is the intermediate group defined by:
formula_4
These curves have a direct interpretation as moduli spaces for elliptic curves with "level structure" and for this reason they play an important role in arithmetic geometry. The level "N" modular curve "X"("N") is the moduli space for elliptic curves with a basis for the "N"-torsion. For "X"0("N") and "X"1("N"), the level structure is, respectively, a cyclic subgroup of order "N" and a point of order "N". These curves have been studied in great detail, and in particular, it is known that "X"0("N") can be defined over Q.
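The congruence conditions defining Γ("N"), Γ0("N") and Γ1("N") translate directly into membership tests; the following Python sketch (illustrative, representing an integer matrix as a nested tuple) checks them for a sample matrix:

```python
def in_gamma0(m, N):
    """Gamma_0(N): determinant 1 and lower-left entry congruent to 0 mod N."""
    (a, b), (c, d) = m
    return a * d - b * c == 1 and c % N == 0

def in_gamma1(m, N):
    """Gamma_1(N): additionally a and d congruent to 1 mod N."""
    (a, b), (c, d) = m
    return in_gamma0(m, N) and a % N == 1 and d % N == 1

def in_gamma(m, N):
    """Principal congruence subgroup Gamma(N): additionally b congruent to 0 mod N."""
    (a, b), (c, d) = m
    return in_gamma1(m, N) and b % N == 0

m = ((1, 3), (5, 16))   # determinant 16 - 15 = 1
print(in_gamma0(m, 5), in_gamma1(m, 5), in_gamma(m, 5))  # True True False
```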
The equations defining modular curves are the best-known examples of modular equations. The "best models" can be very different from those taken directly from elliptic function theory. Hecke operators may be studied geometrically, as correspondences connecting pairs of modular curves.
Quotients of H that "are" compact do occur for Fuchsian groups Γ other than subgroups of the modular group; a class of them constructed from quaternion algebras is also of interest in number theory.
Genus.
The covering "X"("N") → "X"(1) is Galois, with Galois group SL(2, "N")/{1, −1}, which is equal to PSL(2, "N") if "N" is prime. Applying the Riemann–Hurwitz formula and Gauss–Bonnet theorem, one can calculate the genus of "X"("N"). For a prime level "p" ≥ 5,
formula_5
where χ = 2 − 2"g" is the Euler characteristic, |"G"| = ("p"+1)"p"("p"−1)/2 is the order of the group PSL(2, "p"), and "D" = π − π/2 − π/3 − π/"p" is the angular defect of the spherical (2,3,"p") triangle. This results in a formula
formula_6
Thus "X"(5) has genus 0, "X"(7) has genus 3, and "X"(11) has genus 26. For "p" = 2 or 3, one must additionally take into account the ramification, that is, the presence of order "p" elements in PSL(2, Z), and the fact that PSL(2, 2) has order 6, rather than 3. There is a more complicated formula for the genus of the modular curve "X"("N") of any level "N" that involves divisors of "N".
Genus zero.
In general a modular function field is a function field of a modular curve (or, occasionally, of some other moduli space that turns out to be an irreducible variety). Genus zero means such a function field has a single transcendental function as generator: for example the j-function generates the function field of "X"(1) = PSL(2, Z)\H*. The traditional name for such a generator, which is unique up to a Möbius transformation and can be appropriately normalized, is a Hauptmodul (main or principal modular function, plural Hauptmoduln).
The spaces "X"1("n") have genus zero for "n" = 1, ..., 10 and "n" = 12. Since each of these curves is defined over Q and has a Q-rational point, it follows that there are infinitely many rational points on each such curve, and hence infinitely many elliptic curves defined over Q with "n"-torsion for these values of "n". The converse statement, that only these values of "n" can occur, is Mazur's torsion theorem.
"X"0("N") of genus one.
The modular curves formula_7 are of genus one if and only if formula_8 equals one of the 12 values listed in the following table. As elliptic curves over formula_9, they have minimal, integral Weierstrass models formula_10. That is, formula_11 and the absolute value of the discriminant formula_12 is minimal among all integral Weierstrass models for the same curve. The following table contains the unique "reduced", minimal, integral Weierstrass models, which means formula_13 and formula_14. The last column of this table refers to the home page of the respective elliptic modular curve formula_7 on "The L-functions and modular forms database (LMFDB)".
Relation with the Monster group.
Modular curves of genus 0, which are quite rare, turned out to be of major importance in relation with the monstrous moonshine conjectures. The first several coefficients of the "q"-expansions of their Hauptmoduln were computed already in the 19th century, but it came as a shock that the same large integers show up as dimensions of representations of the largest sporadic simple group, the Monster.
Another connection is that the modular curve corresponding to the normalizer Γ0("p")+ of Γ0("p") in SL(2, R) has genus zero if and only if "p" is 2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 41, 47, 59 or 71, and these are precisely the prime factors of the order of the monster group. The result about Γ0("p")+ is due to Jean-Pierre Serre, Andrew Ogg and John G. Thompson in the 1970s, and the subsequent observation relating it to the monster group is due to Ogg, who wrote up a paper offering a bottle of Jack Daniel's whiskey to anyone who could explain this fact, which was a starting point for the theory of monstrous moonshine.
The relation runs very deep and, as demonstrated by Richard Borcherds, it also involves generalized Kac–Moody algebras. Work in this area underlined the importance of modular "functions" that are meromorphic and can have poles at the cusps, as opposed to modular "forms", that are holomorphic everywhere, including the cusps, and had been the main objects of study for the better part of the 20th century.
References.
<templatestyles src="Reflist/styles.css" />
<templatestyles src="Refbegin/styles.css" />
|
[
{
"math_id": 0,
"text": "\\Gamma(N)=\\left\\{\n\\begin{pmatrix}\na & b\\\\\nc & d\\\\\n\\end{pmatrix} : \\ a \\equiv d \\equiv 1 \\mod N \\text{ and } b, c \\equiv0 \\mod N \\right\\}."
},
{
"math_id": 1,
"text": "\\{\\infty\\}\\cup\\{\\tau\\in \\mathbf{H} \\mid\\text{Im}(\\tau)>r\\}"
},
{
"math_id": 2,
"text": "\\begin{pmatrix}a & -m\\\\c & n\\end{pmatrix}"
},
{
"math_id": 3,
"text": "\\left \\{ \\begin{pmatrix} a & b \\\\ c & d\\end{pmatrix} : \\ c\\equiv 0 \\mod N \\right \\},"
},
{
"math_id": 4,
"text": "\\left \\{ \\begin{pmatrix} a & b \\\\ c & d\\end{pmatrix} : \\ a\\equiv d\\equiv 1\\mod N, c\\equiv 0 \\mod N \\right \\}."
},
{
"math_id": 5,
"text": "-\\pi\\chi(X(p)) = |G|\\cdot D,"
},
{
"math_id": 6,
"text": "g = \\tfrac{1}{24}(p+2)(p-3)(p-5)."
},
{
"math_id": 7,
"text": "\\textstyle X_0(N)"
},
{
"math_id": 8,
"text": "\\textstyle N"
},
{
"math_id": 9,
"text": "\\mathbb{Q}"
},
{
"math_id": 10,
"text": "y^2 + a_1 x y + a_3 y = x^3 + a_2 x^2 + a_4 x + a_6"
},
{
"math_id": 11,
"text": "\\textstyle a_j\\in\\mathbb{Z}"
},
{
"math_id": 12,
"text": "\\Delta"
},
{
"math_id": 13,
"text": "\\textstyle a_1, a_3\\in\\{0,1\\}"
},
{
"math_id": 14,
"text": "\\textstyle a_2\\in\\{-1,0,1\\}"
}
] |
https://en.wikipedia.org/wiki?curid=669552
|
66964707
|
1 Chronicles 20
|
First Book of Chronicles, chapter 20
1 Chronicles 20 is the twentieth chapter of the Books of Chronicles in the Hebrew Bible or the First Book of Chronicles in the Old Testament of the Christian Bible. The book is compiled from older sources by an unknown person or group, designated by modern scholars as "the Chronicler", and had the final shape established in late fifth or fourth century BCE. This chapter records the account of David's wars against the neighboring nations, especially the Ammonites and the Philistines. The whole chapter belongs to the section focusing on the kingship of David (1 Chronicles 9:35 to 29:30).
Text.
This chapter was originally written in the Hebrew language. It is divided into 8 verses.
Textual witnesses.
Some early manuscripts containing the text of this chapter in Hebrew are of the Masoretic Text tradition, which includes the Aleppo Codex (10th century), and Codex Leningradensis (1008).
Extant manuscripts of a Koine Greek translation known as the Septuagint, made in the last few centuries BCE, include Codex Vaticanus (B; formula_0B; 4th century), Codex Alexandrinus (A; formula_0A; 5th century) and Codex Marchalianus (Q; formula_0Q; 6th century).
The capture of Rabbah (20:1–3).
The passage parallels 2 Samuel 11:1; 12:26a, 30–31, leaving out the episodes involving David, Bathsheba and Uriah the Hittite as well as , which would be between verse 1 and 2.
"And David took the crown of their king from off his head, and found it to weigh a talent of gold, and there were precious stones in it; and it was set upon David's head: and he brought also exceeding much spoil out of the city."
Battles against the Philistines (20:4–8).
This passage contains the accounts of three battles against the Philistines involving David's mighty warriors out of the four reported in . The episode where Abishai, the son of Zeruiah, saved David from being killed by Ishbi-benob is not included in the Chronicles, probably to avoid the unpleasant impression of a Philistine endangering David, so the number "four" appearing in is also removed in the corresponding verse 8. The Chronicles also harmonizes the confusing claims in the books of Samuel (, and ) into a clearer statement in verse 5.
Verse 5.
King James Version
"And there was war again with the Philistines; and Elhanan the son of Jair slew Lahmi the brother of Goliath the Gittite, whose spear staff was like a weaver's beam."
New English Translation
"There was another battle with the Philistines in which Elhanan son of Jair the Bethlehemite killed the brother of Goliath the Gittite, whose spear had a shaft as big as the crossbeam of a weaver’s loom."
The Hebrew text comparison with the corresponding verse demonstrates that the Chronicles (composed after the Babylonian exile) provides clarification to the older text written before the exile, as can be seen here (Hebrew text is read from right to left):
2 Samuel 21:19: ויך אלחנן בן־יערי ארגים בית הלחמי את גלית
transliteration: wa·yaḵ ’el·ḥā·nān ben-ya‘·rê ’ō·rə·ḡîm bêṯ ha·laḥ·mî, ’êṯ gā·lə·yāṯ
English: "and killed Elhanan ben Jaare-Oregim "bet-ha-"Lahmi, (brother) of Goliath"
1 Chronicles 20:5: ויך אלחנן בן־יעיר את־לחמי אחי גלית
transliteration: wa·yaḵ ’el·ḥā·nān ben-yā·‘îr ’eṯ-laḥ·mî, ’ă·ḥî gā·lə·yāṯ
English: "and killed Elhanan ben Jair Lahmi, brother of Goliath"
The relation of Lahmi to Goliath in the older text (Samuel) is only given using the word "’êṯ" which can be rendered as "together with; related to", whereas in the newer version (Chronicles), it is given using the word "’ă·ḥî" meaning "brother". Therefore it is clear in the Chronicles that David killed Goliath (as recorded in 1 Samuel 17), then Elhanan killed the brother of Goliath.
It is also noted that the word "’ōregîm" (meaning "weaver") is written only once in this verse, but it is found twice in 2 Samuel, the first of which is attached to the proper name "Jaare" to be "Jaare-oregim", which may create confusion with the second use of the word to describe the weapon of the Philistine.
"These were born to the giant in Gath, and they fell by the hand of David and by the hand of his servants."
Notes.
<templatestyles src="Reflist/styles.css" />
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": " \\mathfrak{G}"
}
] |
https://en.wikipedia.org/wiki?curid=66964707
|
669675
|
Cluster analysis
|
Grouping a set of objects by similarity
<templatestyles src="Machine learning/styles.css"/>
Cluster analysis or clustering is the task of grouping a set of objects in such a way that objects in the same group (called a cluster) are more similar (in some specific sense defined by the analyst) to each other than to those in other groups (clusters). It is a main task of exploratory data analysis, and a common technique for statistical data analysis, used in many fields, including pattern recognition, image analysis, information retrieval, bioinformatics, data compression, computer graphics and machine learning.
Cluster analysis refers to a family of algorithms and tasks rather than one specific algorithm. It can be achieved by various algorithms that differ significantly in their understanding of what constitutes a cluster and how to efficiently find them. Popular notions of clusters include groups with small distances between cluster members, dense areas of the data space, intervals or particular statistical distributions. Clustering can therefore be formulated as a multi-objective optimization problem. The appropriate clustering algorithm and parameter settings (including parameters such as the distance function to use, a density threshold or the number of expected clusters) depend on the individual data set and intended use of the results. Cluster analysis as such is not an automatic task, but an iterative process of knowledge discovery or interactive multi-objective optimization that involves trial and failure. It is often necessary to modify data preprocessing and model parameters until the result achieves the desired properties.
Besides the term "clustering", there is a number of terms with similar meanings, including "automatic classification", "numerical taxonomy", "botryology" (from 'grape'), "typological analysis", and "community detection". The subtle differences are often in the use of the results: while in data mining, the resulting groups are the matter of interest, in automatic classification the resulting discriminative power is of interest.
Cluster analysis originated in anthropology with Driver and Kroeber in 1932, was introduced to psychology by Joseph Zubin in 1938 and Robert Tryon in 1939, and was famously used by Cattell beginning in 1943 for trait theory classification in personality psychology.
Definition.
The notion of a "cluster" cannot be precisely defined, which is one of the reasons why there are so many clustering algorithms. There is a common denominator: a group of data objects. However, different researchers employ different cluster models, and for each of these cluster models again different algorithms can be given. The notion of a cluster, as found by different algorithms, varies significantly in its properties. Understanding these "cluster models" is key to understanding the differences between the various algorithms. Typical cluster models include:
A "clustering" is essentially a set of such clusters, usually containing all objects in the data set. Additionally, it may specify the relationship of the clusters to each other, for example, a hierarchy of clusters embedded in each other. Clusterings can be roughly distinguished as:
There are also finer distinctions possible, for example:
Algorithms.
As listed above, clustering algorithms can be categorized based on their cluster model. The following overview will only list the most prominent examples of clustering algorithms, as there are possibly over 100 published clustering algorithms. Not all of them provide models for their clusters and thus cannot easily be categorized. An overview of algorithms explained in Wikipedia can be found in the list of statistics algorithms.
There is no objectively "correct" clustering algorithm, but as it was noted, "clustering is in the eye of the beholder." The most appropriate clustering algorithm for a particular problem often needs to be chosen experimentally, unless there is a mathematical reason to prefer one cluster model over another. An algorithm that is designed for one kind of model will generally fail on a data set that contains a radically different kind of model. For example, k-means cannot find non-convex clusters. Most traditional clustering methods assume the clusters exhibit a spherical, elliptical or convex shape.
Connectivity-based clustering (hierarchical clustering).
Connectivity-based clustering, also known as "hierarchical clustering", is based on the core idea of objects being more related to nearby objects than to objects farther away. These algorithms connect "objects" to form "clusters" based on their distance. A cluster can be described largely by the maximum distance needed to connect parts of the cluster. At different distances, different clusters will form, which can be represented using a dendrogram, which explains where the common name "hierarchical clustering" comes from: these algorithms do not provide a single partitioning of the data set, but instead provide an extensive hierarchy of clusters that merge with each other at certain distances. In a dendrogram, the y-axis marks the distance at which the clusters merge, while the objects are placed along the x-axis such that the clusters don't mix.
Connectivity-based clustering is a whole family of methods that differ by the way distances are computed. Apart from the usual choice of distance functions, the user also needs to decide on the linkage criterion (since a cluster consists of multiple objects, there are multiple candidates to compute the distance) to use. Popular choices are known as single-linkage clustering (the minimum of object distances), complete linkage clustering (the maximum of object distances), and UPGMA or WPGMA ("Unweighted or Weighted Pair Group Method with Arithmetic Mean", also known as average linkage clustering). Furthermore, hierarchical clustering can be agglomerative (starting with single elements and aggregating them into clusters) or divisive (starting with the complete data set and dividing it into partitions).
These methods will not produce a unique partitioning of the data set, but a hierarchy from which the user still needs to choose appropriate clusters. They are not very robust towards outliers, which will either show up as additional clusters or even cause other clusters to merge (known as "chaining phenomenon", in particular with single-linkage clustering). In the general case, the complexity is formula_0 for agglomerative clustering and formula_1 for divisive clustering, which makes them too slow for large data sets. For some special cases, optimal efficient methods (of complexity formula_2) are known: SLINK for single-linkage and CLINK for complete-linkage clustering.
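As a concrete illustration, the following minimal sketch (Python, assuming NumPy and SciPy are available) builds a single-linkage hierarchy on a small two-dimensional point set and then cuts the dendrogram at a fixed distance to obtain a flat clustering; the data and the cut-off distance are illustrative choices only.
<syntaxhighlight lang="python">
# Minimal sketch of agglomerative (connectivity-based) clustering with SciPy.
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

points = np.array([[0.0, 0.0], [0.1, 0.2], [0.2, 0.1],   # one tight group
                   [5.0, 5.0], [5.1, 4.9], [4.9, 5.2]])  # another tight group

# Single linkage merges clusters by their minimum pairwise distance;
# 'complete' and 'average' (UPGMA) are the other common linkage criteria.
Z = linkage(points, method='single', metric='euclidean')

# Cut the dendrogram at a chosen distance to obtain a flat clustering.
labels = fcluster(Z, t=1.0, criterion='distance')
print(labels)  # e.g. [1 1 1 2 2 2]
</syntaxhighlight>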
Centroid-based clustering.
In centroid-based clustering, each cluster is represented by a central vector, which is not necessarily a member of the data set. When the number of clusters is fixed to "k", "k"-means clustering gives a formal definition as an optimization problem: find the "k" cluster centers and assign the objects to the nearest cluster center, such that the squared distances from the cluster centers are minimized.
The optimization problem itself is known to be NP-hard, and thus the common approach is to search only for approximate solutions. A particularly well-known approximate method is Lloyd's algorithm, often just referred to as "k-means algorithm" (although another algorithm introduced this name). It does however only find a local optimum, and is commonly run multiple times with different random initializations. Variations of "k"-means often include such optimizations as choosing the best of multiple runs, but also restricting the centroids to members of the data set ("k"-medoids), choosing medians ("k"-medians clustering), choosing the initial centers less randomly ("k"-means++) or allowing a fuzzy cluster assignment (fuzzy c-means).
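The following is a minimal sketch of Lloyd's algorithm in Python (NumPy assumed); the synthetic data, the number of clusters and the simple random initialization are illustrative assumptions rather than a reference implementation. Library versions, such as scikit-learn's KMeans, typically add refinements like k-means++ initialization and multiple restarts.
<syntaxhighlight lang="python">
# Minimal sketch of Lloyd's algorithm for k-means clustering.
import numpy as np

def lloyd_kmeans(X, k, n_iter=100, seed=0):
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), size=k, replace=False)]  # random initialization
    for _ in range(n_iter):
        # Assignment step: each point goes to its nearest center.
        dists = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
        labels = dists.argmin(axis=1)
        # Update step: each center becomes the mean of its assigned points.
        new_centers = np.array([X[labels == j].mean(axis=0) if np.any(labels == j)
                                else centers[j] for j in range(k)])
        if np.allclose(new_centers, centers):  # converged to a local optimum
            break
        centers = new_centers
    return centers, labels

rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0, 1, (50, 2)), rng.normal(5, 1, (50, 2))])
centers, labels = lloyd_kmeans(X, k=2)
</syntaxhighlight>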
Most "k"-means-type algorithms require the number of clusters – "k" – to be specified in advance, which is considered to be one of the biggest drawbacks of these algorithms. Furthermore, the algorithms prefer clusters of approximately similar size, as they will always assign an object to the nearest centroid. This often leads to incorrectly cut borders of clusters (which is not surprising since the algorithm optimizes cluster centers, not cluster borders).
K-means has a number of interesting theoretical properties. First, it partitions the data space into a structure known as a Voronoi diagram. Second, it is conceptually close to nearest neighbor classification, and as such is popular in machine learning. Third, it can be seen as a variation of model-based clustering, and Lloyd's algorithm as a variation of the Expectation-maximization algorithm for this model discussed below.
Centroid-based clustering problems such as "k"-means and "k"-medoids are special cases of the uncapacitated, metric facility location problem, a canonical problem in the operations research and computational geometry communities. In a basic facility location problem (of which there are numerous variants that model more elaborate settings), the task is to find the best warehouse locations to optimally service a given set of consumers. One may view "warehouses" as cluster centroids and "consumer locations" as the data to be clustered. This makes it possible to apply the well-developed algorithmic solutions from the facility location literature to the presently considered centroid-based clustering problem.
Model-based clustering.
The clustering framework most closely related to statistics is model-based clustering, which is based on distribution models. This approach models the data as arising from a mixture of probability distributions. It has the advantages of providing principled statistical answers to questions such as how many clusters there are, what clustering method or model to use, and how to detect and deal with outliers.
While the theoretical foundation of these methods is excellent, they suffer from overfitting unless constraints are put on the model complexity. A more complex model will usually be able to explain the data better, which makes choosing the appropriate model complexity inherently difficult. Standard model-based clustering methods include more parsimonious models based on the eigenvalue decomposition of the covariance matrices, that provide a balance between overfitting and fidelity to the data.
One prominent method is known as Gaussian mixture models (using the expectation-maximization algorithm). Here, the data set is usually modeled with a fixed (to avoid overfitting) number of Gaussian distributions that are initialized randomly and whose parameters are iteratively optimized to better fit the data set. This will converge to a local optimum, so multiple runs may produce different results. In order to obtain a hard clustering, objects are often then assigned to the Gaussian distribution they most likely belong to; for soft clusterings, this is not necessary.
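A minimal sketch of this approach, assuming scikit-learn is available; the synthetic data and the fixed number of mixture components are illustrative choices.
<syntaxhighlight lang="python">
# Minimal sketch of Gaussian mixture clustering fitted by expectation-maximization.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 1.0, size=(100, 2)),
               rng.normal(4, 1.5, size=(100, 2))])

# n_init > 1 reruns EM from several random initializations and keeps the best fit.
gmm = GaussianMixture(n_components=2, n_init=5, random_state=0).fit(X)
soft = gmm.predict_proba(X)  # soft clustering: membership probabilities
hard = gmm.predict(X)        # hard clustering: most likely component per object
</syntaxhighlight>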
Distribution-based clustering produces complex models for clusters that can capture correlation and dependence between attributes. However, these algorithms put an extra burden on the user: for many real data sets, there may be no concisely defined mathematical model (e.g. assuming Gaussian distributions is a rather strong assumption on the data).
Density-based clustering.
In density-based clustering, clusters are defined as areas of higher density than the remainder of the data set. Objects in sparse areas – that are required to separate clusters – are usually considered to be noise and border points.
The most popular density-based clustering method is DBSCAN. In contrast to many newer methods, it features a well-defined cluster model called "density-reachability". Similar to linkage-based clustering, it is based on connecting points within certain distance thresholds. However, it only connects points that satisfy a density criterion, in the original variant defined as a minimum number of other objects within this radius. A cluster consists of all density-connected objects (which can form a cluster of an arbitrary shape, in contrast to many other methods) plus all objects that are within these objects' range. Another interesting property of DBSCAN is that its complexity is fairly low – it requires a linear number of range queries on the database – and that it will discover essentially the same results (it is deterministic for core and noise points, but not for border points) in each run, therefore there is no need to run it multiple times. OPTICS is a generalization of DBSCAN that removes the need to choose an appropriate value for the range parameter formula_3, and produces a hierarchical result related to that of linkage clustering. DeLi-Clu, Density-Link-Clustering combines ideas from single-linkage clustering and OPTICS, eliminating the formula_3 parameter entirely and offering performance improvements over OPTICS by using an R-tree index.
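A minimal sketch of DBSCAN usage, assuming scikit-learn is available; the radius ("eps") and the density threshold ("min_samples") are illustrative values that would normally be tuned to the data.
<syntaxhighlight lang="python">
# Minimal sketch of density-based clustering with DBSCAN.
import numpy as np
from sklearn.cluster import DBSCAN

rng = np.random.default_rng(0)
dense_blob = rng.normal(0, 0.3, size=(100, 2))   # one dense region
scatter = rng.uniform(-5, 5, size=(20, 2))       # sparse background points
X = np.vstack([dense_blob, scatter])

labels = DBSCAN(eps=0.5, min_samples=5).fit_predict(X)
# Points labelled -1 are treated as noise rather than assigned to a cluster.
</syntaxhighlight>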
The key drawback of DBSCAN and OPTICS is that they expect some kind of density drop to detect cluster borders. On data sets with, for example, overlapping Gaussian distributions – a common use case in artificial data – the cluster borders produced by these algorithms will often look arbitrary, because the cluster density decreases continuously. On a data set consisting of mixtures of Gaussians, these algorithms are nearly always outperformed by methods such as EM clustering that are able to precisely model this kind of data.
Mean-shift is a clustering approach where each object is moved to the densest area in its vicinity, based on kernel density estimation. Eventually, objects converge to local maxima of density. Similar to k-means clustering, these "density attractors" can serve as representatives for the data set, but mean-shift can detect arbitrary-shaped clusters similar to DBSCAN. Due to the expensive iterative procedure and density estimation, mean-shift is usually slower than DBSCAN or k-Means. Besides that, the applicability of the mean-shift algorithm to multidimensional data is hindered by the unsmooth behaviour of the kernel density estimate, which results in over-fragmentation of cluster tails.
Grid-based clustering.
The grid-based technique is used for multi-dimensional data sets. In this technique, a grid structure is created, and the comparison is performed on grid cells. The grid-based technique is fast and has low computational complexity. Two well-known grid-based clustering methods are STING and CLIQUE. A typical grid-based clustering algorithm partitions the data space into a finite number of cells, computes the density of each cell, discards low-density cells, and merges adjacent dense cells into clusters, as in the sketch below.
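A minimal sketch of this idea (Python, NumPy and SciPy assumed): points are binned into grid cells, low-density cells are discarded, and adjacent dense cells are merged into clusters. The grid resolution and the density threshold are illustrative assumptions.
<syntaxhighlight lang="python">
# Minimal sketch of grid-based clustering on two-dimensional data.
import numpy as np
from scipy import ndimage

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 0.5, (200, 2)), rng.normal(5, 0.5, (200, 2))])

# Step 1: impose a grid and count the points falling into each cell.
counts, xedges, yedges = np.histogram2d(X[:, 0], X[:, 1], bins=20)
# Step 2: keep only cells that satisfy the density criterion.
dense = counts >= 5
# Step 3: merge adjacent dense cells into connected regions (the clusters).
cell_labels, n_clusters = ndimage.label(dense)
</syntaxhighlight>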
Recent developments.
In recent years, considerable effort has been put into improving the performance of existing algorithms. Among them are "CLARANS" and "BIRCH". With the recent need to process larger and larger data sets (also known as big data), the willingness to trade semantic meaning of the generated clusters for performance has been increasing. This led to the development of pre-clustering methods such as canopy clustering, which can process huge data sets efficiently, but the resulting "clusters" are merely a rough pre-partitioning of the data set, to be analyzed afterwards with existing slower methods such as k-means clustering.
For high-dimensional data, many of the existing methods fail due to the curse of dimensionality, which renders particular distance functions problematic in high-dimensional spaces. This led to new clustering algorithms for high-dimensional data that focus on subspace clustering (where only some attributes are used, and cluster models include the relevant attributes for the cluster) and correlation clustering that also looks for arbitrary rotated ("correlated") subspace clusters that can be modeled by giving a correlation of their attributes. Examples for such clustering algorithms are CLIQUE and SUBCLU.
Ideas from density-based clustering methods (in particular the DBSCAN/OPTICS family of algorithms) have been adapted to subspace clustering (HiSC, hierarchical subspace clustering and DiSH) and correlation clustering (HiCO, hierarchical correlation clustering, 4C using "correlation connectivity" and ERiC exploring hierarchical density-based correlation clusters).
Several different clustering systems based on mutual information have been proposed. One is Marina Meilă's "variation of information" metric; another provides hierarchical clustering. Using genetic algorithms, a wide range of different fit-functions can be optimized, including mutual information. Also belief propagation, a recent development in computer science and statistical physics, has led to the creation of new types of clustering algorithms.
Evaluation and assessment.
Evaluation (or "validation") of clustering results is as difficult as the clustering itself. Popular approaches involve ""internal" evaluation, where the clustering is summarized to a single quality score, "external"" evaluation, where the clustering is compared to an existing "ground truth" classification, ""manual" evaluation by a human expert, and "indirect"" evaluation by evaluating the utility of the clustering in its intended application.
Internal evaluation measures suffer from the problem that they represent functions that can themselves be seen as a clustering objective. For example, one could cluster the data set by directly optimizing the Silhouette coefficient, except that there is no known efficient algorithm for this. By using such an internal measure for evaluation, one is really comparing the similarity of the optimization problems, and not necessarily how useful the clustering is.
External evaluation has similar problems: if we have such "ground truth" labels, then we would not need to cluster; and in practical applications we usually do not have such labels. On the other hand, the labels only reflect one possible partitioning of the data set, which does not imply that there does not exist a different, and maybe even better, clustering.
Neither of these approaches can therefore ultimately judge the actual quality of a clustering; that requires human evaluation, which is highly subjective. Nevertheless, such statistics can be quite informative in identifying bad clusterings, but one should not dismiss subjective human evaluation.
Internal evaluation.
When a clustering result is evaluated based on the data that was clustered itself, this is called internal evaluation. These methods usually assign the best score to the algorithm that produces clusters with high similarity within a cluster and low similarity between clusters. One drawback of using internal criteria in cluster evaluation is that high scores on an internal measure do not necessarily result in effective information retrieval applications. Additionally, this evaluation is biased towards algorithms that use the same cluster model. For example, k-means clustering naturally optimizes object distances, and a distance-based internal criterion will likely overrate the resulting clustering.
Therefore, the internal evaluation measures are best suited to get some insight into situations where one algorithm performs better than another, but this shall not imply that one algorithm produces more valid results than another. Validity as measured by such an index depends on the claim that this kind of structure exists in the data set. An algorithm designed for one kind of model has no chance if the data set contains a radically different kind of model, or if the evaluation measures a radically different criterion. For example, k-means clustering can only find convex clusters, and many evaluation indexes assume convex clusters. On a data set with non-convex clusters neither the use of "k"-means, nor of an evaluation criterion that assumes convexity, is sound.
More than a dozen internal evaluation measures exist, usually based on the intuition that items in the same cluster should be more similar than items in different clusters. For example, the following methods can be used to assess the quality of clustering algorithms based on internal criteria:
The Davies–Bouldin index can be calculated by the following formula:
formula_4
where "n" is the number of clusters, formula_5 is the centroid of cluster formula_6, formula_7 is the average distance of all elements in cluster formula_6 to centroid formula_5, and formula_8 is the distance between centroids formula_5 and formula_9. Since algorithms that produce clusters with low intra-cluster distances (high intra-cluster similarity) and high inter-cluster distances (low inter-cluster similarity) will have a low Davies–Bouldin index, the clustering algorithm that produces a collection of clusters with the smallest Davies–Bouldin index is considered the best algorithm based on this criterion.
The Dunn index aims to identify dense and well-separated clusters. It is defined as the ratio between the minimal inter-cluster distance to maximal intra-cluster distance. For each cluster partition, the Dunn index can be calculated by the following formula:
formula_10
where "d"("i","j") represents the distance between clusters "i" and "j", and "d" '("k") measures the intra-cluster distance of cluster "k". The inter-cluster distance "d"("i","j") between two clusters may be any number of distance measures, such as the distance between the centroids of the clusters. Similarly, the intra-cluster distance "d" '("k") may be measured in a variety of ways, such as the maximal distance between any pair of elements in cluster "k". Since internal criterion seek clusters with high intra-cluster similarity and low inter-cluster similarity, algorithms that produce clusters with high Dunn index are more desirable.
The silhouette coefficient contrasts the average distance to elements in the same cluster with the average distance to elements in other clusters. Objects with a high silhouette value are considered well clustered, objects with a low value may be outliers. This index works well with "k"-means clustering, and is also used to determine the optimal number of clusters.
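A minimal sketch of this use, assuming scikit-learn is available: k-means is run for several candidate values of "k" and the value with the highest average silhouette is retained. The data and the candidate range are illustrative.
<syntaxhighlight lang="python">
# Minimal sketch of choosing k for k-means via the silhouette coefficient.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_score

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(i * 5, 1.0, size=(50, 2)) for i in range(3)])

scores = {}
for k in range(2, 7):
    labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(X)
    scores[k] = silhouette_score(X, labels)
best_k = max(scores, key=scores.get)  # k with the highest average silhouette
</syntaxhighlight>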
External evaluation.
In external evaluation, clustering results are evaluated based on data that was not used for clustering, such as known class labels and external benchmarks. Such benchmarks consist of a set of pre-classified items, and these sets are often created by (expert) humans. Thus, the benchmark sets can be thought of as a gold standard for evaluation. These types of evaluation methods measure how close the clustering is to the predetermined benchmark classes. However, it has recently been discussed whether this is adequate for real data, or only on synthetic data sets with a factual ground truth, since classes can contain internal structure, the attributes present may not allow separation of clusters or the classes may contain anomalies. Additionally, from a knowledge discovery point of view, the reproduction of known knowledge may not necessarily be the intended result. In the special scenario of constrained clustering, where meta information (such as class labels) is used already in the clustering process, the hold-out of information for evaluation purposes is non-trivial.
A number of measures are adapted from variants used to evaluate classification tasks. In place of counting the number of times a class was correctly assigned to a single data point (known as true positives), such "pair counting" metrics assess whether each pair of data points that is truly in the same cluster is predicted to be in the same cluster.
As with internal evaluation, several external evaluation measures exist, for example:
formula_14
This measure doesn't penalize having many clusters, and more clusters make it easier to produce a high purity. A purity score of 1 is always possible by putting each data point in its own cluster. Also, purity doesn't work well for imbalanced data, where even poorly performing clustering algorithms will give a high purity value. For example, if a dataset of size 1000 consists of two classes, one containing 999 points and the other containing 1 point, then every possible partition will have a purity of at least 99.9%.
The Rand index computes how similar the clusters (returned by the clustering algorithm) are to the benchmark classifications. It can be computed using the following formula:
formula_15
where formula_16 is the number of true positives, formula_17 is the number of true negatives, formula_18 is the number of false positives, and formula_19 is the number of false negatives. The instances being counted here are the number of correct "pairwise" assignments. That is, formula_16 is the number of pairs of points that are clustered together in the predicted partition and in the ground truth partition, formula_18 is the number of pairs of points that are clustered together in the predicted partition but not in the ground truth partition etc. If the dataset is of size N, then formula_20.
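A minimal sketch of the Rand index by explicit pair counting (Python), following the pairwise definition above; the two label vectors are illustrative.
<syntaxhighlight lang="python">
# Minimal sketch of the Rand index computed over all pairs of data points.
from itertools import combinations

def rand_index(labels_pred, labels_true):
    tp = tn = fp = fn = 0
    for i, j in combinations(range(len(labels_true)), 2):
        same_pred = labels_pred[i] == labels_pred[j]
        same_true = labels_true[i] == labels_true[j]
        if same_pred and same_true:
            tp += 1        # pair together in both partitions
        elif not same_pred and not same_true:
            tn += 1        # pair separated in both partitions
        elif same_pred and not same_true:
            fp += 1        # together in the prediction only
        else:
            fn += 1        # together in the ground truth only
    return (tp + tn) / (tp + tn + fp + fn)

print(rand_index([1, 1, 2, 2], [0, 0, 0, 1]))  # 0.5
</syntaxhighlight>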
One issue with the Rand index is that false positives and false negatives are equally weighted. This may be an undesirable characteristic for some clustering applications. The F-measure addresses this concern, as does the chance-corrected adjusted Rand index.
The F-measure can be used to balance the contribution of false negatives by weighting recall through a parameter formula_21. Let precision and recall (both external evaluation measures in themselves) be defined as follows:
formula_22
formula_23
where formula_24 is the precision rate and formula_25 is the recall rate. We can calculate the F-measure by using the following formula:
formula_26
When formula_27, formula_28. In other words, recall has no impact on the F-measure when formula_27, and increasing formula_29 allocates an increasing amount of weight to recall in the final F-measure.
Also formula_17 is not taken into account and can vary from 0 upward without bound.
The Jaccard index is used to quantify the similarity between two datasets. The Jaccard index takes on a value between 0 and 1. An index of 1 means that the two datasets are identical, and an index of 0 indicates that the datasets have no common elements. The Jaccard index is defined by the following formula:
formula_30
This is simply the number of unique elements common to both sets divided by the total number of unique elements in both sets.
Note that formula_17 is not taken into account.
The Dice symmetric measure doubles the weight on formula_16 while still ignoring formula_17:
formula_31
The Fowlkes–Mallows index computes the similarity between the clusters returned by the clustering algorithm and the benchmark classifications. The higher the value of the Fowlkes–Mallows index the more similar the clusters and the benchmark classifications are. It can be computed using the following formula:
formula_32
where formula_16 is the number of true positives, formula_18 is the number of false positives, and formula_19 is the number of false negatives. The formula_33 index is the geometric mean of the precision and recall formula_24 and formula_25, and is thus also known as the G-measure, while the F-measure is their harmonic mean. Moreover, precision and recall are also known as Wallace's indices formula_34 and formula_35. Chance normalized versions of recall, precision and G-measure correspond to Informedness, Markedness and Matthews Correlation and relate strongly to Kappa.
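For illustration, all of the pair-counting measures above can be derived from the same four pair counts; the following sketch (Python) uses illustrative values of the counts.
<syntaxhighlight lang="python">
# Minimal sketch relating the pair-counting evaluation measures; tp, fp, fn, tn
# are pairwise counts as defined for the Rand index.
import math

tp, fp, fn, tn = 20, 5, 10, 65   # illustrative values

precision = tp / (tp + fp)
recall = tp / (tp + fn)
beta = 1.0
f_beta = (beta**2 + 1) * precision * recall / (beta**2 * precision + recall)
rand = (tp + tn) / (tp + tn + fp + fn)
jaccard = tp / (tp + fp + fn)
dice = 2 * tp / (2 * tp + fp + fn)
fowlkes_mallows = math.sqrt(precision * recall)  # geometric mean of P and R
</syntaxhighlight>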
A confusion matrix can be used to quickly visualize the results of a classification (or clustering) algorithm. It shows how different a cluster is from the gold standard cluster.
Cluster tendency.
To measure cluster tendency is to measure to what degree clusters exist in the data to be clustered, and may be performed as an initial test, before attempting clustering. One way to do this is to compare the data against random data. On average, random data should not have clusters.
There are multiple formulations of the Hopkins statistic. A typical one is as follows. Let formula_36 be the set of formula_37 data points in formula_38 dimensional space. Consider a random sample (without replacement) of formula_39 data points with members formula_40. Also generate a set formula_41 of formula_42 uniformly randomly distributed data points. Now define two distance measures, formula_43 to be the distance of formula_44 from its nearest neighbor in X and formula_45 to be the distance of formula_46 from its nearest neighbor in X. We then define the Hopkins statistic as:
formula_47
With this definition, uniform random data should tend to have values near to 0.5, and clustered data should tend to have values nearer to 1.
However, data containing just a single Gaussian will also score close to 1, as this statistic measures deviation from a "uniform" distribution, not multimodality, making this statistic largely useless in application (as real data is never remotely uniform).
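A minimal sketch of the Hopkins statistic as formulated above (Python, NumPy and SciPy assumed); the sample size "m" and the axis-aligned uniform sampling window are illustrative simplifications.
<syntaxhighlight lang="python">
# Minimal sketch of the Hopkins statistic for cluster tendency.
import numpy as np
from scipy.spatial import cKDTree

def hopkins(X, m=50, seed=0):
    rng = np.random.default_rng(seed)
    n, d = X.shape
    tree = cKDTree(X)

    # u_i: distance of each uniformly generated point to its nearest neighbour in X.
    Y = rng.uniform(X.min(axis=0), X.max(axis=0), size=(m, d))
    u = tree.query(Y, k=1)[0]

    # w_i: distance of each sampled data point to its nearest other point in X
    # (k=2 because the nearest neighbour of a data point is the point itself).
    idx = rng.choice(n, size=m, replace=False)
    w = tree.query(X[idx], k=2)[0][:, 1]

    return (u**d).sum() / ((u**d).sum() + (w**d).sum())

rng = np.random.default_rng(1)
clustered = np.vstack([rng.normal(0, 0.2, (100, 2)), rng.normal(3, 0.2, (100, 2))])
print(hopkins(clustered))  # close to 1 for strongly clustered data
</syntaxhighlight>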
Applications.
Cluster analysis is used to describe and to make spatial and temporal comparisons of communities (assemblages) of organisms in heterogeneous environments. It is also used in plant systematics to generate artificial phylogenies or clusters of organisms (individuals) at the species, genus or higher level that share a number of attributes.
Clustering is used to build groups of genes with related expression patterns (also known as coexpressed genes) as in HCS clustering algorithm. Often such groups contain functionally related proteins, such as enzymes for a specific pathway, or genes that are co-regulated. High throughput experiments using expressed sequence tags (ESTs) or DNA microarrays can be a powerful tool for genome annotation – a general aspect of genomics.
Sequence clustering is used to group homologous sequences into gene families. This is a very important concept in bioinformatics, and evolutionary biology in general. See evolution by gene duplication.
Clustering algorithms are used to automatically assign genotypes.
The similarity of genetic data is used in clustering to infer population structures.
On PET scans, cluster analysis can be used to differentiate between different types of tissue in a three-dimensional image for many different purposes.
Cluster analysis can be used to analyse patterns of antibiotic resistance, to classify antimicrobial compounds according to their mechanism of action, to classify antibiotics according to their antibacterial activity.
Clustering can be used to divide a fluence map into distinct regions for conversion into deliverable fields in MLC-based Radiation Therapy.
Cluster analysis is widely used in market research when working with multivariate data from surveys and test panels. Market researchers use cluster analysis to partition the general population of consumers into market segments and to better understand the relationships between different groups of consumers/potential customers, and for use in market segmentation, product positioning, new product development and selecting test markets.
Clustering can be used to group all the shopping items available on the web into a set of unique products. For example, all the items on eBay can be grouped into unique products (eBay does not have the concept of a SKU).
In the study of social networks, clustering may be used to recognize communities within large groups of people.
In the process of intelligent grouping of the files and websites, clustering may be used to create a more relevant set of search results compared to normal search engines like Google. There are currently a number of web-based clustering tools such as Clusty. It also may be used to return a more comprehensive set of results in cases where a search term could refer to vastly different things. Each distinct use of the term corresponds to a unique cluster of results, allowing a ranking algorithm to return comprehensive results by picking the top result from each cluster.
Flickr's map of photos and other map sites use clustering to reduce the number of markers on a map. This both speeds up the map and reduces the amount of visual clutter.
Clustering is useful in software evolution as it helps to reduce legacy properties in code by reforming functionality that has become dispersed. It is a form of restructuring and hence is a way of direct preventative maintenance.
Clustering can be used to divide a digital image into distinct regions for border detection or object recognition.
Clustering may be used to identify different niches within the population of an evolutionary algorithm so that reproductive opportunity can be distributed more evenly amongst the evolving species or subspecies.
Recommender systems are designed to recommend new items based on a user's tastes. They sometimes use clustering algorithms to predict a user's preferences based on the preferences of other users in the user's cluster.
Clustering is often utilized to locate and characterize extrema in the target distribution.
Anomalies/outliers are typically – be it explicitly or implicitly – defined with respect to clustering structure in data.
Clustering can be used to resolve lexical ambiguity.
Clustering has been used to analyse the effectiveness of DevOps teams.
Cluster analysis is used to identify patterns of family life trajectories, professional careers, and daily or weekly time use for example.
Cluster analysis can be used to identify areas where there are greater incidences of particular types of crime. By identifying these distinct areas or "hot spots" where a similar crime has happened over a period of time, it is possible to manage law enforcement resources more effectively.
Cluster analysis is for example used to identify groups of schools or students with similar properties.
From poll data, projects such as those undertaken by the Pew Research Center use cluster analysis to discern typologies of opinions, habits, and demographics that may be useful in politics and marketing.
Clustering algorithms are used for robotic situational awareness to track objects and detect outliers in sensor data.
Clustering can be used to find structural similarity between compounds; for example, 3000 chemical compounds were clustered in the space of 90 topological indices.
Cluster analysis can be used to find weather regimes or preferred sea level pressure atmospheric patterns.
Cluster analysis has been used to cluster stocks into sectors.
Cluster analysis is used to reconstruct missing bottom hole core data or missing log curves in order to evaluate reservoir properties.
Clustering can be used to group chemical properties across different sample locations.
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "\\mathcal{O}(n^3)"
},
{
"math_id": 1,
"text": "\\mathcal{O}(2^{n-1})"
},
{
"math_id": 2,
"text": "\\mathcal{O}(n^2)"
},
{
"math_id": 3,
"text": "\\varepsilon"
},
{
"math_id": 4,
"text": "\nDB = \\frac {1} {n} \\sum_{i=1}^{n} \\max_{j\\neq i}\\left(\\frac{\\sigma_i + \\sigma_j} {d(c_i,c_j)}\\right)\n"
},
{
"math_id": 5,
"text": "c_i"
},
{
"math_id": 6,
"text": "i"
},
{
"math_id": 7,
"text": "\\sigma_i"
},
{
"math_id": 8,
"text": "d(c_i,c_j)"
},
{
"math_id": 9,
"text": "c_j"
},
{
"math_id": 10,
"text": "\nD = \\frac{\\min_{1 \\leq i < j \\leq n} d(i,j)}{\\max_{1 \\leq k \\leq n} d^{\\prime}(k)} \\,,\n"
},
{
"math_id": 11,
"text": "M"
},
{
"math_id": 12,
"text": "D"
},
{
"math_id": 13,
"text": "N"
},
{
"math_id": 14,
"text": "\n\\frac{1}{N}\\sum_{m\\in M}\\max_{d\\in D}{|m \\cap d|}\n"
},
{
"math_id": 15,
"text": "\nRI = \\frac {TP + TN} {TP + FP + FN + TN}\n"
},
{
"math_id": 16,
"text": "TP"
},
{
"math_id": 17,
"text": "TN"
},
{
"math_id": 18,
"text": "FP"
},
{
"math_id": 19,
"text": "FN"
},
{
"math_id": 20,
"text": "TP + TN + FP + FN = \\binom{N}{2}"
},
{
"math_id": 21,
"text": "\\beta \\geq 0"
},
{
"math_id": 22,
"text": "\nP = \\frac {TP } {TP + FP }\n"
},
{
"math_id": 23,
"text": "\nR = \\frac {TP } {TP + FN}\n"
},
{
"math_id": 24,
"text": "P"
},
{
"math_id": 25,
"text": "R"
},
{
"math_id": 26,
"text": "\nF_{\\beta} = \\frac {(\\beta^2 + 1)\\cdot P \\cdot R } {\\beta^2 \\cdot P + R}\n"
},
{
"math_id": 27,
"text": "\\beta=0"
},
{
"math_id": 28,
"text": "F_{0}=P"
},
{
"math_id": 29,
"text": "\\beta"
},
{
"math_id": 30,
"text": "\nJ(A,B) = \\frac {|A \\cap B| } {|A \\cup B|} = \\frac{TP}{TP + FP + FN}\n"
},
{
"math_id": 31,
"text": "\nDSC = \\frac{2TP}{2TP + FP + FN}\n"
},
{
"math_id": 32,
"text": "\nFM = \\sqrt{ \\frac {TP}{TP+FP} \\cdot \\frac{TP}{TP+FN} }\n"
},
{
"math_id": 33,
"text": "FM"
},
{
"math_id": 34,
"text": "B^I"
},
{
"math_id": 35,
"text": "B^{II}"
},
{
"math_id": 36,
"text": "X"
},
{
"math_id": 37,
"text": "n"
},
{
"math_id": 38,
"text": "d"
},
{
"math_id": 39,
"text": "m \\ll n"
},
{
"math_id": 40,
"text": "x_i"
},
{
"math_id": 41,
"text": "Y"
},
{
"math_id": 42,
"text": "m"
},
{
"math_id": 43,
"text": "u_i"
},
{
"math_id": 44,
"text": "y_i \\in Y"
},
{
"math_id": 45,
"text": "w_i"
},
{
"math_id": 46,
"text": "x_i \\in X"
},
{
"math_id": 47,
"text": "\nH=\\frac{\\sum_{i=1}^m{u_i^d}}{\\sum_{i=1}^m{u_i^d}+\\sum_{i=1}^m{w_i^d}} \\,,\n"
}
] |
https://en.wikipedia.org/wiki?curid=669675
|
66968094
|
1 Chronicles 21
|
First Book of Chronicles, chapter 21
1 Chronicles 21 is the twenty-first chapter of the Books of Chronicles in the Hebrew Bible or the First Book of Chronicles in the Old Testament of the Christian Bible. The book is compiled from older sources by an unknown person or group, designated by modern scholars as "the Chronicler", and had the final shape established in late fifth or fourth century BCE. This chapter records the account of David's census, its consequences and the purchase of a site for the temple. The whole chapter belongs to the section focusing on the kingship of David (1 Chronicles 9:35 to 29:30).
Text.
This chapter was originally written in the Hebrew language. It is divided into 30 verses.
Textual witnesses.
Some early manuscripts containing the text of this chapter in Hebrew are of the Masoretic Text tradition, which includes the Aleppo Codex (10th century), and Codex Leningradensis (1008).
Extant manuscripts of a Koine Greek translation known as the Septuagint, made in the last few centuries BCE, include Codex Vaticanus (B; formula_0B; 4th century), Codex Alexandrinus (A; formula_0A; 5th century) and Codex Marchalianus (Q; formula_0Q; 6th century).
David’s military census (21:1–6).
The Chronicler reinterprets and supplements the account in 2 Samuel 24, taking the perspective of Job chapter 1. Instead of "the anger of the LORD" (), the one who persuaded David to carry out a census is "Satan", a Hebrew word which should be translated as "an adversary" rather than as a personal name, and which more likely refers to the same figure mentioned in ff and Zechariah 3:1ff.
David's guilt is pronounced strongly by Joab (more than in 2 Samuel 24) as the word 'trespass' (verse 3; NRSV, 'guilt') is used to emphasize David's responsibility. The Chronicler simply documents the result of the census, excluding the individual stages recorded in 2 Samuel 24 (due to their insignificance or incomprehensibility).
"And Joab gave the sum of the numbering of the people to David. In all Israel there were 1,100,000 men who drew the sword, and in Judah 470,000 who drew the sword."
"But he did not count Levi and Benjamin among them, for the king’s word was abominable to Joab."
Verse 6.
forbids taking a military census among the Levites, whereas the tribe of Benjamin was probably excluded because 'the tabernacle resided upon its territory'.
Judgment for David’s sin (21:7–13).
The passage emphasizes YHWH's disapproval, not David's remorse (as in 2 Samuel 24), because David was persuaded by Satan; hence the statement 'he struck Israel', which forecasts the events reported in verse 14.
A plague on Israel (21:14–17).
The sin of David resulted in the death of Israelites (verse 14; cf. ; ; ).
"So the Lord sent a plague throughout Israel, and seventy thousand men of Israel fell."
Verse 14.
This sentence is followed in by "from the morning even to the time appointed," so if "the time appointed" means 'the time of the evening sacrifice', then God shortened the three days to a single day.
"David lifted up his eyes and saw the angel of the Lord standing between earth and heaven with his sword drawn in his hand stretched out over Jerusalem. So David and the elders, covered in sackcloth, fell on their faces."
Verse 16.
The Chronicler describes the angel hanging in the air, recalling the descriptions in Numbers 22:31 and Joshua 5:13-15 (cf. also verse 18); furthermore cf. Daniel 8:15; 12:6.
David builds an altar (21:18–30).
In verses 21–25, the purchase of Ornan's threshing floor is patterned after Abraham's purchase of the cave of Machpelah (Genesis 23), including the insistence on paying the full price (an expression used only in Genesis 23:9 and verses 22, 24). The 600 silver shekels David pays is more than Abraham's 400 silver shekels for the cave of Machpelah, alluding to the higher value of the temple site compared with Sarah's burial site (600 is also a multiple of 12, an important number in various ways in the Chronicles).
Verses 29–30 explain that because an angel obstructed his way, David had to make sacrifices on Ornan's threshing-floor, instead of at the high place at Gibeon.
"Then the angel of the Lord commanded Gad to tell David that David should go up and raise an altar to the Lord on the threshing floor of Ornan the Jebusite. "
Verse 18.
The command to erect an altar on the threshing-floor of Ornan (the later name for Araunah), which in 2 Samuel 24 was given only by Gad, is clarified in Chronicles as originating from the angel of YHWH.
Notes.
<templatestyles src="Reflist/styles.css" />
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": " \\mathfrak{G}"
}
] |
https://en.wikipedia.org/wiki?curid=66968094
|
669713
|
Capillary wave
|
Wave on the surface of a fluid, dominated by surface tension
A capillary wave is a wave traveling along the phase boundary of a fluid, whose dynamics and phase velocity are dominated by the effects of surface tension.
Capillary waves are common in nature, and are often referred to as ripples. The wavelength of capillary waves on water is typically less than a few centimeters, with a phase speed in excess of 0.2–0.3 meter/second.
A longer wavelength on a fluid interface will result in gravity–capillary waves which are influenced by both the effects of surface tension and gravity, as well as by fluid inertia. Ordinary gravity waves have a still longer wavelength.
When generated by light wind in open water, a nautical name for them is cat's paw waves. Light breezes which stir up such small ripples are also sometimes referred to as cat's paws. On the open ocean, much larger ocean surface waves (seas and swells) may result from coalescence of smaller wind-caused ripple-waves.
Dispersion relation.
The dispersion relation describes the relationship between wavelength and frequency in waves. Distinction can be made between pure capillary waves – fully dominated by the effects of surface tension – and gravity–capillary waves which are also affected by gravity.
Capillary waves, proper.
The dispersion relation for capillary waves is
formula_0
where formula_1 is the angular frequency, formula_2 the surface tension, formula_3 the density of the
heavier fluid, formula_4 the density of the lighter fluid and formula_5 the wavenumber. The wavelength is
formula_6
For the boundary between fluid and vacuum (free surface), the dispersion relation reduces to
formula_7
Gravity–capillary waves.
When capillary waves are also affected substantially by gravity, they are called gravity–capillary waves. Their dispersion relation reads, for waves on the interface between two fluids of infinite depth:
formula_8
where formula_9 is the acceleration due to gravity, formula_3 and formula_4 are the mass density of the two fluids formula_10. The factor formula_11 in the first term is the Atwood number.
Gravity wave regime.
For large wavelengths (small formula_12), only the first term is relevant and one has gravity waves.
In this limit, the waves have a group velocity half the phase velocity: following a single wave's crest in a group one can see the wave appearing at the back of the group, growing and finally disappearing at the front of the group.
Capillary wave regime.
Shorter (large formula_5) waves (e.g. 2 mm for the water–air interface), which are proper capillary waves, do the opposite: an individual wave appears at the front of the group, grows when moving towards the group center and finally disappears at the back of the group. Phase velocity is two thirds of group velocity in this limit.
Phase velocity minimum.
Between these two limits is a point at which the dispersion caused by gravity cancels out the dispersion due to the capillary effect. At a certain wavelength, the group velocity equals the phase velocity, and there is no dispersion. At precisely this same wavelength, the phase velocity of gravity–capillary waves as a function of wavelength (or wave number) has a minimum. Waves with wavelengths much smaller than this critical wavelength formula_13 are dominated by surface tension, and much above by gravity. The value of this wavelength and the associated minimum phase speed formula_14 are:
formula_15
For the air–water interface, formula_13 is found to be about 1.7 cm, and formula_14 is about 0.23 m/s.
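These values follow directly from the formulas above; the following sketch (Python) evaluates them with typical fluid properties for water and air near 20 °C, which are illustrative assumptions.
<syntaxhighlight lang="python">
# Minimal sketch evaluating the minimum-phase-speed formulas for air and water.
import math

sigma = 0.0728            # surface tension of water against air, N/m (approximate)
rho, rho_p = 998.0, 1.2   # densities of water and air, kg/m^3 (approximate)
g = 9.81                  # gravitational acceleration, m/s^2

lambda_m = 2 * math.pi * math.sqrt(sigma / ((rho - rho_p) * g))
c_m = math.sqrt(2 * math.sqrt((rho - rho_p) * g * sigma) / (rho + rho_p))
print(lambda_m, c_m)      # roughly 0.017 m and 0.23 m/s
</syntaxhighlight>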
If one drops a small stone or droplet into liquid, the waves then propagate outside an expanding circle of fluid at rest; this circle is a caustic which corresponds to the minimal group velocity.
Derivation.
As Richard Feynman put it, "[water waves] that are easily seen by everyone and which are usually used as an example of waves in elementary courses [...] are the worst possible example [...]; they have all the complications that waves can have." The derivation of the general dispersion relation is therefore quite involved.
There are three contributions to the energy, due to gravity, to surface tension, and to hydrodynamics. The first two are potential energies, and responsible for the two terms inside the parenthesis, as is clear from the appearance of formula_9 and formula_2. For gravity, an assumption is made of the density of the fluids being constant (i.e., incompressibility), and likewise formula_9 (waves are not high enough for gravitation to change appreciably). For surface tension, the deviations from planarity (as measured by derivatives of the surface) are supposed to be small. For common waves both approximations are good enough.
The third contribution involves the kinetic energies of the fluids. It is the most complicated and calls for a hydrodynamic framework. Incompressibility is again involved (which is satisfied if the speed of the waves is much less than the speed of sound in the media), together with the flow being irrotational – the flow is then potential. These are typically also good approximations for common situations.
The resulting equation for the potential (which is the Laplace equation) can be solved with the proper boundary conditions. On one hand, the velocity must vanish well below the surface (in the "deep water" case, which is the one considered here; otherwise a more involved result is obtained, see Ocean surface waves). On the other, its vertical component must match the motion of the surface. This contribution ends up being responsible for the extra formula_5 outside the parenthesis, which causes all regimes to be dispersive, both at low values of formula_5 and at high ones (except around the one value at which the two dispersions cancel out).
Notes.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "\n\\omega^2=\\frac{\\sigma}{\\rho+\\rho'}\\, |k|^3,"
},
{
"math_id": 1,
"text": "\\omega"
},
{
"math_id": 2,
"text": "\\sigma"
},
{
"math_id": 3,
"text": "\\rho"
},
{
"math_id": 4,
"text": "\\rho'"
},
{
"math_id": 5,
"text": "k"
},
{
"math_id": 6,
"text": "\n\\lambda=\\frac{2 \\pi}{k}."
},
{
"math_id": 7,
"text": "\n\\omega^2=\\frac{\\sigma}{\\rho}\\, |k|^3."
},
{
"math_id": 8,
"text": "\n\\omega^2=|k|\\left( \\frac{\\rho-\\rho'}{\\rho+\\rho'}g+\\frac{\\sigma}{\\rho+\\rho'}k^2\\right),\n"
},
{
"math_id": 9,
"text": "g"
},
{
"math_id": 10,
"text": "(\\rho > \\rho')"
},
{
"math_id": 11,
"text": "(\\rho-\\rho')/(\\rho+\\rho')"
},
{
"math_id": 12,
"text": "k = 2\\pi/\\lambda"
},
{
"math_id": 13,
"text": "\\lambda_{m}"
},
{
"math_id": 14,
"text": "c_{m}"
},
{
"math_id": 15,
"text": "\n \\lambda_m = 2 \\pi \\sqrt{ \\frac{\\sigma}{(\\rho-\\rho') g}}\n \\quad \\text{and} \\quad\n c_m = \\sqrt{ \\frac{2 \\sqrt{ (\\rho - \\rho') g \\sigma }}{\\rho+\\rho'} }.\n"
}
] |
https://en.wikipedia.org/wiki?curid=669713
|
6697285
|
High-speed craft
|
High speed water vessel for civilian use
A high-speed craft (HSC) is a high-speed water vessel for civilian use, also called a fastcraft or fast ferry.
The first high-speed craft were often hydrofoils or hovercraft, but in the 1990s catamaran and monohull designs became more popular.
Most high-speed craft serve as passenger ferries, but the largest catamarans and monohulls also carry cars, buses, large trucks and freight.
In the 1990s there were a variety of builders, but due to the high fuel consumption of high-speed craft, many shipbuilders have withdrawn from this market, so the construction of the largest fast ferries, up to 127 metres, has been consolidated to two Australian companies, Austal of Perth and Incat of Hobart. There is still a wide variety of builders for smaller fast catamaran ferries between 24 and 60 metres.
Hulled designs are often powered by pump-jets coupled to medium-speed diesel engines. Hovercraft are usually powered by gas turbines or diesel engines driving propellers and impellers.
The design and safety of high-speed craft is regulated by the International Convention for the Safety of Life at Sea (SOLAS) Convention, Chapter 10, High-Speed Craft (HSC) Codes of 1994 and 2000, adopted by the Maritime Safety Committee of the International Maritime Organization (IMO).
In accordance with SOLAS Chapter 10 Reg. 1.3, high-speed craft are craft capable of a maximum speed, in metres per second (m/s), equal to or exceeding:
formula_0
where formula_1 = volume of displacement in cubic metres corresponding to the design waterline, excluding craft of which the hull is supported clear above the water surface in non-displacement mode by aerodynamic forces generated by ground effect.
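For illustration, the following sketch (Python) evaluates this speed threshold for an assumed displacement volume; the example volume is not taken from the Convention.
<syntaxhighlight lang="python">
# Minimal sketch of the SOLAS Chapter X high-speed craft speed threshold.
displacement_m3 = 1000.0                      # assumed volume of displacement, m^3
threshold_ms = 3.7 * displacement_m3**0.1667  # threshold speed in m/s
threshold_knots = threshold_ms / 0.514444     # 1 knot = 0.514444 m/s
print(threshold_ms, threshold_knots)          # about 11.7 m/s, i.e. about 22.7 knots
</syntaxhighlight>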
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "3.7 \\times \\triangledown^{0.1667}"
},
{
"math_id": 1,
"text": "\\triangledown"
}
] |
https://en.wikipedia.org/wiki?curid=6697285
|
66983084
|
1 Chronicles 22
|
First Book of Chronicles, chapter 22
1 Chronicles 22 is the twenty-second chapter of the Books of Chronicles in the Hebrew Bible or the First Book of Chronicles in the Old Testament of the Christian Bible. The book is compiled from older sources by an unknown person or group, designated by modern scholars as "the Chronicler", and had the final shape established in late fifth or fourth century BCE. This chapter records David's preparation to build the temple, consisting of three parts: (1) David's (own) preparations for the temple's construction (verses 2–5); (2) David's speech to Solomon (verses 6–16); (3) David's speech to Israel's rulers (verses 17–19). The whole chapter belongs to the section focusing on the kingship of David (1 Chronicles 9:35 to 29:30), which from this chapter to the end has no parallel in 2 Samuel.
Text.
This chapter was originally written in the Hebrew language. It is divided into 19 verses.
Textual witnesses.
Some early manuscripts containing the text of this chapter in Hebrew are of the Masoretic Text tradition, which includes the Aleppo Codex (10th century), and Codex Leningradensis (1008).
Extant manuscripts of a Koine Greek translation known as the Septuagint, made in the last few centuries BCE, include Codex Vaticanus (B; formula_0B; 4th century), Codex Alexandrinus (A; formula_0A; 5th century) and Codex Marchalianus (Q; formula_0Q; 6th century).
Preparations for the temple (22:1–5).
This section records the material and spiritual preparations for the construction of the temple, which David wished to be "famous and glorified throughout all the lands" (verse 5), because the quality of the palace-temple complex projects the power of a nation, its god, and its king to other nations, gods, vassals, or foreign emissaries. The Chronicler is very particular in explaining how the temple site was selected (verse 1 and 2 Chronicles 3:1).
"Then David said, This is the house of the LORD God, and this is the altar of the burnt offering for Israel."
Verse 1.
The verse becomes the climax of the preceding and subsequent sections in that the future site of YHWH's temple (and the place for sacrifices) is gloriously announced, regarded as 'synonymous' with the desert tabernacle, the high place at Gibeon, or 'all legitimate cultic sites and buildings that play an important part in Israel's history'. The selection of the site is very important for the Chronicler, as repeated in . The language is very similar to Genesis 28:17, pertaining to the construction of the holy site at Bethel.
"and cedar logs without number, for the Sidonians and the Tyrians brought much cedar wood to David."
"And David said, Solomon my son is young and tender, and the house that is to be builded for the LORD must be exceeding magnifical, of fame and of glory throughout all countries: I will therefore now make preparation for it. So David prepared abundantly before his death."
Solomon anointed to build the temple (22:6–19).
The section contains two speeches by David, the first one to Solomon (verses 6–16) and the second to the leaders of Israel (verses 17–19). The speech to Solomon parallels David's final decrees in 1 Kings 2 and quotes the dynastic promise in 1 Chronicles 17 (cf. 2 Samuel 7), with the explanation why David was not permitted to build the temple (verse 8). Only David's call to 'abide by the law and act courageously' (1 Kings 2:2–3) is transmitted here. The relationship between David and Solomon in the Chronicles resembles that of Moses and Joshua. The encouragement given by David to Solomon for the forthcoming work, forecasting success if he faithfully follows God and confirms God's presence (verses 11–13) resembles the message in Joshua 1 regarding Joshua's succession to Moses (also using the terms 'the LORD be with you' and 'success').
"but the word of the Lord came to me, saying, ‘You have shed much blood and have made great wars; you shall not build a house for My name, because you have shed much blood on the earth in My sight."
Verse 8.
Nathan's prophecy in 2 Samuel 7 and 1 Chronicles 17 does not provide the explanation why David was not allowed to build the temple. In Solomon stated that David was impeded from carrying out his plan, because of his long warfare with the surrounding nations. In the Chronicles, the statement is transformed to a greater principle, that is, because David as a warrior had shed much blood, so he was forbidden to build the temple. The reason is simply to exclude the blemish of bloodshed from the temple's construction.
"Behold, a son shall be born to you who shall be a man of rest. I will give him rest from all his surrounding enemies. For his name shall be Solomon, and I will give peace and quiet to Israel in his days."
Verse 9.
Using wordplay, "Solomon" (, "shə-lō-mōh", meaning: "peaceful") was to be given "peace" (, "shā-lōm"), and, as a "man of rest" (, "’îsh mə-nū-chāh"), was to be given "rest" (, "nuach"), so he could build the temple.
This was to fulfill the precondition in that the sacrificial services could take place when Israel had "rest" from its enemies.
Notes.
<templatestyles src="Reflist/styles.css" />
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": " \\mathfrak{G}"
}
] |
https://en.wikipedia.org/wiki?curid=66983084
|
669864
|
Grand Riemann hypothesis
|
In mathematics, the grand Riemann hypothesis is a generalisation of the Riemann hypothesis and generalized Riemann hypothesis. It states that the nontrivial zeros of all automorphic "L"-functions lie on the critical line formula_0 with formula_1 a real number variable and formula_2 the imaginary unit.
The modified grand Riemann hypothesis is the assertion that the nontrivial zeros of all automorphic "L"-functions lie on the critical line or the real line.
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "\\frac{1}{2} + it"
},
{
"math_id": 1,
"text": "t"
},
{
"math_id": 2,
"text": "i"
}
] |
https://en.wikipedia.org/wiki?curid=669864
|
669899
|
Fair division
|
Problem of sharing resources
Fair division is the problem in game theory of dividing a set of resources among several people who have an entitlement to them so that each person receives their due share. That problem arises in various real-world settings such as division of inheritance, partnership dissolutions, divorce settlements, electronic frequency allocation, airport traffic management, and exploitation of Earth observation satellites. It is an active research area in mathematics, economics (especially social choice theory), and dispute resolution. The central tenet of fair division is that such a division should be performed by the players themselves, without the need for external arbitration, as only the players themselves really know how they value the goods.
The archetypal fair division algorithm is divide and choose. It demonstrates that two agents with different tastes can divide a cake such that each of them believes that he got the best piece. The research in fair division can be seen as an extension of this procedure to various more complex settings.
There are many different kinds of fair division problems, depending on the nature of goods to divide, the criteria for fairness, the nature of the players and their preferences, and other criteria for evaluating the quality of the division.
Things that can be divided.
Formally, a fair division problem is defined by a set formula_0 (often called "the cake") and a group of formula_1 players. A division is a partition of formula_0 into formula_1 disjoint subsets: formula_2, one subset per player.
The set formula_0 can be of various types:
Additionally, the set to be divided may be:
Finally, it is common to make some assumptions about whether the items to be divided are:
Based on these distinctions, several general types of fair division problems have been studied:
Combinations and special cases are also common:
Definitions of fairness.
Most of what is normally called a fair division is not considered so by the theory because of the use of arbitration. This kind of situation happens quite often with mathematical theories named after real life problems. The decisions in the Talmud on entitlement when an estate is bankrupt reflect the development of complex ideas regarding fairness. However, they are the result of legal debates by rabbis rather than divisions according to the valuations of the claimants.
According to the subjective theory of value, there cannot be an objective measure of the value of each item. Therefore, "objective fairness" is not possible, as different people may assign different values to each item. Empirical experiments on how people define the concept of fairness have given inconclusive results.
Therefore, most current research on fairness focuses on concepts of "subjective fairness". Each of the formula_1 people is assumed to have a personal, subjective "utility function" or "value function", formula_4, which assigns a numerical value to each subset of formula_0. Often the functions are assumed to be normalized, so that every person values the empty set as 0 (formula_5 for all i), and the entire set of items as 1 (formula_6 for all i) if the items are desirable, and -1 if the items are undesirable. Examples are:
Based on these subjective value functions, there are a number of widely used criteria for a fair division. Some of these conflict with each other but often they can be combined. The criteria described here are only for when each player is entitled to the same amount:
All the above criteria assume that the participants have equal entitlements. If different participants have different entitlements (e.g., in a partnership where each partner invested a different amount), then the fairness criteria should be adapted accordingly. See proportional cake-cutting with different entitlements.
Additional requirements.
In addition to fairness, it is sometimes desired that the division be Pareto optimal, i.e., no other allocation would make someone better off without making someone else worse off. The term efficiency comes from the economics idea of the efficient market. A division where one player gets everything is optimal by this definition so on its own this does not guarantee even a fair share. See also efficient cake-cutting and the price of fairness.
In the real world people sometimes have a very accurate idea of how the other players value the goods and they may care very much about it. The case where they have complete knowledge of each other's valuations can be modeled by game theory. Partial knowledge is very hard to model. A major part of the practical side of fair division is the devising and study of procedures that work well despite such partial knowledge or small mistakes.
An additional requirement is that the fair division procedure be strategyproof, i.e. it should be a dominant strategy for the participants to report their true valuations. This requirement is usually very hard to satisfy, especially in combination with fairness and Pareto-efficiency. As a result, it is often weakened to incentive compatibility, which only requires players to report their true valuations if they behave according to a specified solution concept.
Procedures.
A fair division procedure lists actions to be performed by the players in terms of the visible data and their valuations. A valid procedure is one that guarantees a fair division for every player who acts rationally according to their valuation. Where an action depends on a player's valuation the procedure is describing the strategy a rational player will follow. A player may act as if a piece had a different value but must be consistent. For instance if a procedure says the first player cuts the cake in two equal parts then the second player chooses a piece, then the first player cannot claim that the second player got more.
What the players do is:
It is assumed the aim of each player is to maximize the minimum amount they might get, or in other words, to achieve the maximin.
Procedures can be divided into "discrete" vs. "continuous" procedures. A discrete procedure would for instance only involve one person at a time cutting or marking a cake. Continuous procedures involve things like one player moving a knife and the other saying "stop". Another type of continuous procedure involves a person assigning a value to every part of the cake.
For a list of fair division procedures, see .
No finite protocol (even if unbounded) can guarantee an envy-free division of a cake among three or more players, if each player is to receive a single connected piece. However, this result applies only to the model presented in that work and not for cases where, for example, a mediator has full information of the players' valuation functions and proposes a division based on this information.
Extensions.
Recently, the model of fair division has been extended from individual agents to "families" (pre-determined groups) of agents. See fair division among groups.
History.
According to Sol Garfunkel, the cake-cutting problem had been one of the most important open problems in 20th century mathematics, when the most important variant of the problem was finally solved with the Brams-Taylor procedure by Steven Brams and Alan Taylor in 1995.
Divide and choose's origins are undocumented. The related activities of bargaining and barter are also ancient. Negotiations involving more than two people are also quite common; the Potsdam Conference is a notable recent example.
The theory of fair division dates back only to the end of the Second World War. It was devised by a group of Polish mathematicians, Hugo Steinhaus, Bronisław Knaster and Stefan Banach, who used to meet in the Scottish Café in Lvov (then in Poland). A proportional (fair) division procedure for any number of players, called 'last diminisher', was devised in 1944. This was attributed to Banach and Knaster by Steinhaus when he made the problem public for the first time at a meeting of the Econometric Society in Washington, D.C., on 17 September 1947. At that meeting he also proposed the problem of finding the smallest number of cuts necessary for such divisions.
For the history of envy-free cake-cutting, see envy-free cake-cutting.
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "C"
},
{
"math_id": 1,
"text": "n"
},
{
"math_id": 2,
"text": "C = X_1 \\sqcup X_2 \\sqcup\\cdots \\sqcup X_n"
},
{
"math_id": 3,
"text": "C = \\{\\text{piano}, \\text{car}, \\text{apartment}\\}"
},
{
"math_id": 4,
"text": "V_i"
},
{
"math_id": 5,
"text": "V_i (\\empty) = 0"
},
{
"math_id": 6,
"text": "V_i (C) = 1"
},
{
"math_id": 7,
"text": "V_i(X_i) \\ge V_i(C)/n"
},
{
"math_id": 8,
"text": "V_i(X_i) > V_i(C)/n"
},
{
"math_id": 9,
"text": "V_i(X_i) \\ge V_i(X_j)"
},
{
"math_id": 10,
"text": "V_i(X_i) = V_j(X_j)"
},
{
"math_id": 11,
"text": "V_i(X_i) = V_j(X_i)"
}
] |
https://en.wikipedia.org/wiki?curid=669899
|
66996
|
Kin selection
|
Evolutionary strategy favoring relatives
Kin selection is a process whereby natural selection favours a trait due to its positive effects on the reproductive success of an organism's relatives, even when at a cost to the organism's own survival and reproduction. Kin selection can lead to the evolution of altruistic behaviour. It is related to inclusive fitness, which combines the number of offspring produced with the number an individual can ensure the production of by supporting others (weighted by the relatedness between individuals). A broader definition of kin selection includes selection acting on interactions between individuals who share a gene of interest even if the gene is not shared due to common ancestry.
Charles Darwin discussed the concept of kin selection in his 1859 book, "On the Origin of Species", where he reflected on the puzzle of sterile social insects, such as honey bees, which leave reproduction to their mothers, arguing that a selection benefit to related organisms (the same "stock") would allow the evolution of a trait that confers the benefit but destroys an individual at the same time. J.B.S. Haldane in 1955 briefly alluded to the principle in limited circumstances (Haldane famously joked that he would willingly die for two brothers or eight cousins), and R.A. Fisher mentioned a similar principle even more briefly in 1930. However, it was not until W.D. Hamilton generalised the concept and developed it mathematically in 1964 (resulting in Hamilton's rule) that it began to be widely accepted. The mathematical treatment was made more elegant in 1970 through advances made by George R. Price. The term "kin selection" was first used by John Maynard Smith in 1964.
According to Hamilton's rule, kin selection causes genes to increase in frequency when the genetic relatedness of a recipient to an actor multiplied by the benefit to the recipient is greater than the reproductive cost to the actor. Hamilton proposed two mechanisms for kin selection. First, kin recognition allows individuals to be able to identify their relatives. Second, in viscous populations, populations in which the movement of organisms from their place of birth is relatively slow, local interactions tend to be among relatives by default. The viscous population mechanism makes kin selection and social cooperation possible in the absence of kin recognition. In this case, nurture kinship, the interaction between related individuals, simply as a result of living in each other's proximity, is sufficient for kin selection, given reasonable assumptions about population dispersal rates. Kin selection is not the same thing as group selection, where natural selection is believed to act on the group as a whole.
In humans, altruism is both more likely and on a larger scale with kin than with unrelated individuals; for example, humans give presents according to how closely related they are to the recipient. In other species, vervet monkeys use allomothering, where related females such as older sisters or grandmothers often care for young, according to their relatedness. The social shrimp "Synalpheus regalis" protects juveniles within highly related colonies.
Historical overview.
Charles Darwin was the first to discuss the concept of kin selection (without using that term). In "On the Origin of Species", he wrote about the conundrum represented by altruistic sterile social insects that:
<templatestyles src="Template:Blockquote/styles.css" />This difficulty, though appearing insuperable, is lessened, or, as I believe, disappears, when it is remembered that selection may be applied to the family, as well as to the individual, and may thus gain the desired end. Breeders of cattle wish the flesh and fat to be well marbled together. An animal thus characterised has been slaughtered, but the breeder has gone with confidence to the same stock and has succeeded.
In this passage "the family" and "stock" stand for a kin group. These passages and others by Darwin about kin selection are highlighted in D.J. Futuyma's reference textbook "Evolutionary Biology" and in E. O. Wilson's "".
Kin selection was briefly referred to by R.A. Fisher in 1930 and J.B.S. Haldane in 1932 and 1955. J.B.S. Haldane grasped the basic quantities in kin selection, famously writing "I would lay down my life for two brothers or eight cousins". Haldane's remark alluded to the fact that if an individual loses its life to save two siblings, four nephews, or eight cousins, it is a "fair deal" in evolutionary terms, as siblings are on average 50% identical by descent, nephews 25%, and cousins 12.5% (in a diploid population that is randomly mating and previously outbred). But Haldane also joked that he would truly die only to save more than a single identical twin of his or more than two full siblings. In 1955 he clarified:
<templatestyles src="Template:Blockquote/styles.css" />Let us suppose that you carry a rare gene that affects your behaviour so that you jump into a flooded river and save a child, but you have one chance in ten of being drowned, while I do not possess the gene, and stand on the bank and watch the child drown. If the child's your own child or your brother or sister, there is an even chance that this child will also have this gene, so five genes will be saved in children for one lost in an adult. If you save a grandchild or a nephew, the advantage is only two and a half to one. If you only save a first cousin, the effect is very slight. If you try to save your first cousin once removed the population is more likely to lose this valuable gene than to gain it. … It is clear that genes making for conduct of this kind would only have a chance of spreading in rather small populations when most of the children were fairly near relatives of the man who risked his life.
W. D. Hamilton, in 1963 and especially in 1964, generalised the concept and developed it mathematically, showing that it holds for genes even when they are not rare, deriving Hamilton's rule and defining a new quantity known as an individual's inclusive fitness. He is widely credited as the founder of the field of social evolution. A more elegant mathematical treatment was made possible by George Price in 1970.
John Maynard Smith may have coined the actual term "kin selection" in 1964:
<templatestyles src="Template:Blockquote/styles.css" />These processes I will call kin selection and group selection respectively. Kin selection has been discussed by Haldane and by Hamilton. … By kin selection I mean the evolution of characteristics which favour the survival of close relatives of the affected individual, by processes which do not require any discontinuities in the population breeding structure.
Kin selection causes changes in gene frequency across generations, driven by interactions between related individuals. This dynamic forms the conceptual basis of the theory of sociobiology. Some cases of evolution by natural selection can only be understood by considering how biological relatives influence each other's fitness. Under natural selection, a gene encoding a trait that enhances the fitness of each individual carrying it should increase in frequency within the population; and conversely, a gene that lowers the individual fitness of its carriers should be eliminated. However, a hypothetical gene that prompts behaviour which enhances the fitness of relatives but lowers that of the individual displaying the behaviour, may nonetheless increase in frequency, because relatives often carry the same gene. According to this principle, the enhanced fitness of relatives can at times more than compensate for the fitness loss incurred by the individuals displaying the behaviour, making kin selection possible. This is a special case of a more general model, "inclusive fitness". This analysis has been challenged, Wilson writing that "the foundations of the general theory of inclusive fitness based on the theory of kin selection have crumbled" and that he now relies instead on the theory of eusociality and "gene-culture co-evolution" for the underlying mechanics of sociobiology. Inclusive fitness theory is still generally accepted however, as demonstrated by the publication of a rebuttal to Wilson's claims in "Nature" from over a hundred researchers.
Kin selection is contrasted with group selection, according to which a genetic trait can become prevalent within a group because it benefits the group as a whole, regardless of any benefit to individual organisms. All known forms of group selection conform to the principle that an individual behaviour can be evolutionarily successful only if the genes responsible for this behaviour conform to Hamilton's Rule, and hence, on balance and in the aggregate, benefit from the behaviour.
Hamilton's rule.
Formally, genes should increase in frequency when
formula_0
where
"r" = the genetic relatedness of the recipient to the actor, often defined as the probability that a gene picked randomly from each at the same locus is identical by descent,
"B" = the additional reproductive benefit gained by the recipient of the altruistic act, and
"C" = the reproductive cost to the individual performing the act.
This inequality is known as Hamilton's rule after W. D. Hamilton who in 1964 published the first formal quantitative treatment of kin selection.
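As a simple numerical illustration (not from the source; the benefit and cost figures are invented, while the relatedness coefficients are the standard values for an outbred diploid population), Hamilton's rule can be evaluated directly:
<syntaxhighlight lang="python">
# Minimal sketch, not from the source: evaluating Hamilton's rule rB > C.

def favoured_by_kin_selection(r, benefit, cost):
    """True if an altruistic act satisfies Hamilton's rule rB > C."""
    return r * benefit > cost

# Standard relatedness coefficients for an outbred diploid population.
RELATEDNESS = {"identical twin": 1.0, "full sibling": 0.5,
               "nephew or niece": 0.25, "first cousin": 0.125}

# Haldane's quip: dying (cost = one life) to save exactly two siblings or
# eight cousins only breaks even, so the strict inequality is not satisfied.
print(favoured_by_kin_selection(RELATEDNESS["full sibling"], benefit=2, cost=1))  # False (0.5*2 == 1)
print(favoured_by_kin_selection(RELATEDNESS["full sibling"], benefit=3, cost=1))  # True  (0.5*3 > 1)
print(favoured_by_kin_selection(RELATEDNESS["first cousin"], benefit=8, cost=1))  # False (0.125*8 == 1)
</syntaxhighlight>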
The relatedness parameter ("r") in Hamilton's rule was introduced in 1922 by Sewall Wright as a coefficient of relationship that gives the probability that at a random locus, the alleles there will be identical by descent. Modern formulations of the rule use Alan Grafen's definition of relatedness based on the theory of linear regression.
A 2014 review of many lines of evidence for Hamilton's rule found that its predictions were confirmed in a wide variety of social behaviours across a broad phylogenetic range of birds, mammals and insects, in each case comparing social and non-social taxa. Among the experimental findings, a 2010 study used a wild population of red squirrels in Yukon, Canada. Surrogate mothers adopted related orphaned squirrel pups but not unrelated orphans. The cost of adoption was calculated by measuring a decrease in the survival probability of the entire litter after increasing the litter by one pup, while benefit was measured as the increased chance of survival of the orphan. The degree of relatedness of the orphan and surrogate mother for adoption to occur depended on the number of pups the surrogate mother already had in her nest, as this affected the cost of adoption. Females always adopted orphans when "rB" was greater than "C", but never adopted when "rB" was less than "C", supporting Hamilton's rule.
Mechanisms.
Altruism occurs where the instigating individual suffers a fitness loss while the receiving individual experiences a fitness gain. The sacrifice of one individual to help another is an example.
Hamilton outlined two ways in which kin selection altruism could be favoured; they are described in turn below.
Kin recognition and the green beard effect.
First, if individuals have the capacity to recognise kin and to discriminate (positively) on the basis of kinship, then the average relatedness of the recipients of altruism could be high enough for kin selection. Because of the facultative nature of this mechanism, kin recognition and discrimination were expected to be unimportant except among 'higher' forms of life. However, as molecular recognition mechanisms have been shown to operate in organisms such as slime moulds, kin recognition has much wider importance than previously recognised. Kin recognition may be selected for inbreeding avoidance, and little evidence indicates that 'innate' kin recognition plays a role in mediating altruism. A thought experiment on the kin recognition/discrimination distinction is the hypothetical 'green beard', where a gene for social behaviour is imagined also to cause a distinctive phenotype that can be recognised by other carriers of the gene. Due to conflicting genetic similarity in the rest of the genome, there should be selection pressure for green-beard altruistic sacrifices to be suppressed, making common ancestry the most likely form of inclusive fitness. This suppression is overcome if new phenotypes (other beard colours) are formed through mutation or introduced into the population from time to time. This proposed mechanism goes by the name of 'beard chromodynamics'.
Viscous populations.
Secondly, indiscriminate altruism may be favoured in "viscous" populations, those with low rates or short ranges of dispersal. Here, social partners are typically related, and so altruism can be selectively advantageous without the need for kin recognition and kin discrimination faculties: spatial proximity, together with limited dispersal, ensures that social interactions are more often with related individuals. This suggests a rather general explanation for altruism. Directional selection always favours those with higher rates of fecundity within a certain population. Social individuals can often enhance the survival of their own kin by participating in and following the rules of their own group.
Hamilton later modified his thinking to suggest that an innate ability to recognise actual genetic relatedness was unlikely to be the dominant mediating mechanism for kin altruism:
<templatestyles src="Template:Blockquote/styles.css" />But once again, we do not expect anything describable as an innate kin recognition adaptation, used for social behaviour other than mating, for the reasons already given in the hypothetical case of the trees.
Hamilton's later clarifications often go unnoticed. Stuart West and colleagues have countered the long-standing assumption that kin selection requires innate powers of kin recognition. Another doubtful assumption is that social cooperation must be based on limited dispersal and shared developmental context. Such ideas have obscured the progress made in applying kin selection to species including humans, on the basis of cue-based mediation of social bonding and social behaviours.
Special cases.
Eusociality.
Eusociality (true sociality) occurs in social systems with three characteristics: an overlap in generations between parents and their offspring, cooperative brood care, and specialised castes of non-reproductive individuals. The social insects provide good examples of organisms with what appear to be kin selected traits. The workers of some species are sterile, a trait that would not occur if individual selection was the only process at work. The relatedness coefficient "r" is abnormally high between the worker sisters in a colony of Hymenoptera due to haplodiploidy. Hamilton's rule is presumed to be satisfied because the benefits in fitness for the workers are believed to exceed the costs in terms of lost reproductive opportunity, though this has never been demonstrated empirically. Competing hypotheses have been offered to explain the evolution of social behaviour in such organisms.
The eusocial shrimp "Synalpheus regalis" protects juveniles in the colony. By defending the young, the large defender shrimp can increase its inclusive fitness. Allozyme data demonstrated high relatedness within colonies, averaging 0.50. This means that colonies represent close kin groups, supporting the hypothesis of kin selection.
Allomothering.
Vervet monkeys utilise allomothering, parenting by group members other than the actual mother or father, where the allomother is typically an older female sibling or a grandmother. Individuals act aggressively toward other individuals that were aggressive toward their relatives. The behaviour implies kin selection between siblings, between mothers and offspring, and between grandparents and grandchildren.
In humans.
Whether or not Hamilton's rule always applies, relatedness is often important for human altruism, in that humans are inclined to behave more altruistically toward kin than toward unrelated individuals. Many people choose to live near relatives, exchange sizeable gifts with relatives, and favour relatives in wills in proportion to their relatedness.
Experimental studies, interviews, and surveys.
Interviews of several hundred women in Los Angeles showed that while non-kin friends were willing to help one another, their assistance was far more likely to be reciprocal. The largest amounts of non-reciprocal help, however, were reportedly provided by kin. Additionally, more closely related kin were considered more likely sources of assistance than distant kin. Similarly, several surveys of American college students found that individuals were more likely to incur the cost of assisting kin when there was a high probability that the benefit, weighted by relatedness, would be greater than the cost. Participants' feelings of helpfulness were stronger toward family members than toward non-kin. Additionally, participants were found to be most willing to help those individuals most closely related to them. Interpersonal relationships between kin in general were more supportive and less Machiavellian than those between non-kin.
In one experiment, the longer participants (from both the UK and the South African Zulus) held a painful skiing position, the more money or food was presented to a given relative. Participants repeated the experiment for individuals of different relatedness (parents and siblings at r=.5, grandparents, nieces, and nephews at r=.25, etc.). The results showed that participants held the position for longer intervals the greater the degree of relatedness between themselves and those receiving the reward.
Observational studies.
A study of food-sharing practices on the West Caroline islets of Ifaluk determined that food-sharing was more common among people from the same islet, possibly because the degree of relatedness between inhabitants of the same islet would be higher than relatedness between inhabitants of different islets. When food was shared between islets, the distance the sharer was required to travel correlated with the relatedness of the recipient—a greater distance meant that the recipient needed to be a closer relative. The relatedness of the individual and the potential inclusive fitness benefit needed to outweigh the energy cost of transporting the food over distance.
Humans may use the inheritance of material goods and wealth to maximise their inclusive fitness. By providing close kin with inherited wealth, an individual may improve his or her kin's reproductive opportunities and thus increase his or her own inclusive fitness even after death. A study of a thousand wills found that the beneficiaries who received the most inheritance were generally those most closely related to the will's writer. Distant kin received proportionally less inheritance, with the least amount of inheritance going to non-kin.
A study of childcare practices among Canadian women found that respondents with children provide childcare reciprocally with non-kin. The cost of caring for non-kin was balanced by the benefit a woman received—having her own offspring cared for in return. However, respondents without children were significantly more likely to offer childcare to kin. For individuals without their own offspring, the inclusive fitness benefits of providing care to closely related children might outweigh the time and energy costs of childcare.
Family investment in offspring among black South African households also appears consistent with an inclusive fitness model. A higher degree of relatedness between children and their caregivers was correlated with a higher degree of investment in the children, with more food, health care, and clothing. Relatedness was also associated with the regularity of a child's visits to local medical practitioners and with the highest grade the child had completed in school, and negatively associated with children being behind in school for their age.
Observation of the Dolgan hunter-gatherers of northern Russia suggested that there are larger and more frequent asymmetrical transfers of food to kin. Kin are more likely to be welcomed to non-reciprocal meals, while non-kin are discouraged from attending. Finally, when reciprocal food-sharing occurs between families, these families are often closely related, and the primary beneficiaries are the offspring.
Violence in families is more likely when step-parents are present, and "genetic relationship is associated with a softening of conflict, and people's evident valuations of themselves and of others are systematically related to the parties' reproductive values". Numerous studies suggest how inclusive fitness may work amongst different peoples, such as the Ye'kwana of southern Venezuela, the Gypsies of Hungary, and the doomed Donner Party of the United States.
Human social patterns.
Evolutionary psychologists, following early human sociobiologists' interpretation of kin selection theory, initially attempted to explain human altruistic behaviour through kin selection by stating that "behaviors that help a genetic relative are favored by natural selection." However, many evolutionary psychologists recognise that this common shorthand formulation is inaccurate:
<templatestyles src="Template:Blockquote/styles.css" />Many misunderstandings persist. In many cases, they result from conflating "coefficient of relatedness" and "proportion of shared genes", which is a short step from the intuitively appealing—but incorrect—interpretation that "animals tend to be altruistic toward those with whom they share a lot of genes." These misunderstandings don't just crop up occasionally; they are repeated in many writings, including undergraduate psychology textbooks—most of them in the field of social psychology, within sections describing evolutionary approaches to altruism.
As with the earlier sociobiological forays into the cross-cultural data, typical approaches are not able to find explanatory fit with the findings of ethnographers insofar as human kinship patterns are not necessarily built upon blood-ties. However, as Hamilton's later refinements of his theory make clear, it does not simply predict that genetically related individuals will inevitably recognise and engage in positive social behaviours with genetic relatives: rather, indirect context-based mechanisms may have evolved which, in historical environments, have met the inclusive fitness criterion. Consideration of the demographics of the typical evolutionary environment of any species is crucial to understanding the evolution of social behaviours. As Hamilton himself put it, "Altruistic or selfish acts are only possible when a suitable social object is available. In this sense behaviours are conditional from the start".
Under this perspective, and noting the necessity of a reliable context of interaction being available, the data on how altruism is mediated in social mammals is readily made sense of. In social mammals, primates and humans, altruistic acts that meet the kin selection criterion are typically mediated by circumstantial cues such as shared developmental environment, familiarity and social bonding. That is, it is the context that mediates the development of the bonding process and the expression of the altruistic behaviours, not genetic relatedness as such. This interpretation is compatible with the cross-cultural ethnographic data and has been called nurture kinship.
In plants.
Observations.
Though originally thought unique to the animal kingdom, evidence of kin selection has been identified in the plant kingdom.
Competition for resources between developing zygotes in plant ovaries increases when the ovules have been fertilized by male gametes from different plants. How developing zygotes differentiate between full siblings and half-siblings in the ovary is undetermined, but genetic interactions are thought to play a role. Nonetheless, competition between zygotes in the ovary is detrimental to the reproductive success of the (female) plant, and fewer zygotes mature into seeds. As such, the reproductive traits and behaviors of plants suggest the evolution of characteristics that increase the genetic relatedness of fertilized eggs in the plant ovary, thereby fostering kin selection and cooperation among the seeds as they develop. These traits differ among plant species. Some species have evolved to have fewer ovules per ovary, commonly one ovule per ovary, thereby decreasing the chance of developing multiple, differently fathered seeds within the same ovary. Multi-ovulated plants have developed mechanisms that increase the chances of all ovules within the ovary being fathered by the same parent. Such mechanisms include dispersal of pollen in aggregated packets and closure of the stigmatic lobes after pollen is introduced. The aggregated pollen packet releases pollen gametes in the ovary, thereby increasing the likelihood that all ovules are fertilized by pollen from the same parent. Likewise, the closure of the ovary pore prevents entry of new pollen. Other multi-ovulated plants have evolved mechanisms that mimic the evolutionary adaptation of single-ovulated ovaries; the ovules are fertilized by pollen from different individuals, but the mother ovary then selectively aborts fertilized ovules, either at the zygotic or embryonic stage.
After seeds are dispersed, kin recognition and cooperation affect root formation in developing plants. Studies have found that the total root mass developed by "Ipomoea hederacea" (a morning glory) grown next to kin is significantly smaller than that of plants grown next to non-kin; plants grown next to kin thus allocate less energy and fewer resources to growing the large root systems needed for competitive growth. When seedlings were grown in individual pots placed next to kin or non-kin relatives, no difference in root growth was observed. This indicates that kin recognition occurs via signals received by the roots. Further, groups of "I. hederacea" plants are more varied in height when grown with kin than when grown with non-kin. The evolutionary benefit provided by this was further investigated by researchers at the Université de Montpellier. They found that the alternating heights seen in kin-grouped crops allowed optimal light availability to all plants in the group; shorter plants next to taller plants had access to more light than those surrounded by plants of similar height.
The above examples illustrate the effect of kin selection in the equitable allocation of light, nutrients, and water. The evolutionary emergence of single-ovulated ovaries in plants has eliminated the need for a developing seed to compete for nutrients, thus increasing its chance of survival and germination. Likewise, the fathering of all ovules in multi-ovulated ovaries by one father, decreases the likelihood of competition between developing seeds, thereby also increasing the seeds' chances of survival and germination. The decreased root growth in plants grown with kin increases the amount of energy available for reproduction; plants grown with kin produced more seeds than those grown with non-kin. Similarly, the increase in light made available by alternating heights in groups of related plants is associated with higher fecundity.
Kin selection has also been observed in plant responses to herbivory. In an experiment done by Richard Karban et al., leaves of potted "Artemisia tridentata" (sagebrushes) were clipped with scissors to simulate herbivory. The gaseous volatiles emitted by the clipped leaves were captured in a plastic bag. When these volatiles were transferred to leaves of a closely related sagebrush, the recipient experienced lower levels of herbivory than those that had been exposed to volatiles released by non-kin plants. Sagebrushes do not uniformly emit the same volatiles in response to herbivory: the chemical ratios and composition of emitted volatiles vary from one sagebrush to another. Closely related sagebrushes emit similar volatiles, and the similarities decrease as relatedness decreases. This suggests that the composition of volatile gases plays a role in kin selection among plants. Volatiles from a distantly related plant are less likely to induce a protective response against herbivory in a neighboring plant than volatiles from a closely related plant. This fosters kin selection, as the volatiles emitted by a plant will activate the defense response against herbivory only in related plants, thus increasing their chance of survival and reproduction.
Kin selection may play a role in plant-pollinator interactions, especially because pollinator attraction is influenced not only by floral displays, but by the spatial arrangement of plants in a group, which is referred to as the "magnet effect". For example, in an experiment performed on "Moricandia moricandioides", Torices et al. demonstrated that focal plants in the presence of kin show increased advertising effort (defined as the total petal mass of plants in a group divided by the plant biomass) compared to those in the presence of non-kin, and that this effect is greater in larger groups. "M. moricandioides" is a good model organism for the study of plant-pollinator interactions because it relies on pollinators for reproduction, as it is self-incompatible. The study design for this experiment involved establishing pots of "M. moricandioides" with zero, three or six neighbors (either unrelated plants or half-sib progeny of the same mother), and advertising effort was calculated after 26 days of flowering. The exact mechanism of kin recognition in "M. moricandioides" is unknown, but possible mechanisms include above-ground communication with volatile compounds, or below-ground communication with root exudates.
Mechanisms in plants.
The ability to differentiate between kin and non-kin is not necessary for kin selection in many animals. However, because plants do not reliably germinate in close proximity to kin, kin recognition is thought to be especially important for kin selection in the plant kingdom, though the mechanism remains unknown.
One proposed mechanism for kin recognition involves communication through roots, with secretion and reception of root exudates. This would require exudates to be actively secreted by the roots of one plant, and detected by the roots of neighboring plants. Rice plants, "Oryza sativa", have been documented to produce more of the root exudate allantoin when growing next to cultivars that are largely unrelated. High production of allantoin correlated with upregulation of auxin and auxin transporters, resulting in increased lateral root development and directional growth of roots towards non-kin, maximizing competition. This response is largely absent when "O. sativa" is surrounded by kin, consistent with altruistic restraint that promotes inclusive fitness. However, the root receptors responsible for recognition of kin exudates, and the pathway induced by receptor activation, remain unknown. The mycorrhiza associated with roots might facilitate reception of exudates, but again the mechanism is unknown.
Another possibility is communication through green leaf volatiles. Karban et al. studied kin recognition in sagebrushes, "Artemisia tridentata". The volatile-donating sagebrushes were kept in individual pots, separate from the plants that received the volatiles; the recipient plants nonetheless responded to herbivore damage to a neighbour's leaves. This suggests that root signalling is not necessary to induce a protective response against herbivory in neighbouring kin plants. Karban et al. suggest that plants may be able to differentiate between kin and non-kin based on the composition of volatiles. Because only the recipient sagebrush's leaves were exposed, the volatiles presumably activated a receptor protein in the plant's leaves. The identity of this receptor, and the signalling pathway triggered by its activation, both remain to be discovered.
Objections.
The theory of kin selection has been criticised by W. J. Alonso (in 1998) and by Alonso and C. Schuck-Paim (in 2002). They argue that the behaviours which kin selection attempts to explain are not altruistic (in pure Darwinian terms) because: (1) they may directly favour the performer as an individual aiming to maximise its progeny (so the behaviours can be explained as ordinary individual selection); (2) these behaviours benefit the group (so they can be explained as group selection); or (3) they are by-products of a developmental system of many "individuals" performing different tasks (like a colony of bees, or the cells of multicellular organisms, which are the focus of selection). They also argue that the genes involved in sex ratio conflicts could be treated as "parasites" of (already established) social colonies, not as their "promoters", and, therefore the sex ratio in colonies would be irrelevant to the transition to eusociality. Those ideas were mostly ignored until they were put forward again in a series of controversial papers by E. O. Wilson, Bert Hölldobler, Martin Nowak and Corina Tarnita. Nowak, Tarnita and Wilson argued that
<templatestyles src="Template:Blockquote/styles.css" />Inclusive fitness theory is not a simplification over the standard approach. It is an alternative accounting method, but one that works only in a very limited domain. Whenever inclusive fitness does work, the results are identical to those of the standard approach. Inclusive fitness theory is an unnecessary detour, which does not provide additional insight or information.
They, like Alonso and Schuck-Paim, argue for a multi-level selection model instead. This aroused a strong response, including a rebuttal published in "Nature" from over a hundred researchers.
Notes.
<templatestyles src="Reflist/styles.css" />
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "rB > C"
}
] |
https://en.wikipedia.org/wiki?curid=66996
|
66997
|
Epidemiology
|
Study of health and disease within a population
Epidemiology is the study and analysis of the distribution (who, when, and where), patterns and determinants of health and disease conditions in a defined population.
It is a cornerstone of public health, and shapes policy decisions and evidence-based practice by identifying risk factors for disease and targets for preventive healthcare. Epidemiologists help with study design, collection and statistical analysis of data, and interpretation and dissemination of results (including peer review and occasional systematic review). Epidemiology has helped develop methodology used in clinical research, public health studies, and, to a lesser extent, basic research in the biological sciences.
Major areas of epidemiological study include disease causation, transmission, outbreak investigation, disease surveillance, environmental epidemiology, forensic epidemiology, occupational epidemiology, screening, biomonitoring, and comparisons of treatment effects such as in clinical trials. Epidemiologists rely on other scientific disciplines like biology to better understand disease processes, statistics to make efficient use of the data and draw appropriate conclusions, social sciences to better understand proximate and distal causes, and engineering for exposure assessment.
"Epidemiology", literally meaning "the study of what is upon the people", is derived from gre " epi" 'upon, among' " demos" 'people, district' and " logos" 'study, word, discourse', suggesting that it applies only to human populations. However, the term is widely used in studies of zoological populations (veterinary epidemiology), although the term "epizoology" is available, and it has also been applied to studies of plant populations (botanical or plant disease epidemiology).
The distinction between "epidemic" and "endemic" was first drawn by Hippocrates, to distinguish between diseases that are "visited upon" a population (epidemic) from those that "reside within" a population (endemic). The term "epidemiology" appears to have first been used to describe the study of epidemics in 1802 by the Spanish physician Joaquín de Villalba in "Epidemiología Española". Epidemiologists also study the interaction of diseases in a population, a condition known as a syndemic.
The term epidemiology is now widely applied to cover the description and causation of not only epidemic, infectious disease, but of disease in general, including related conditions. Some examples of topics examined through epidemiology include high blood pressure, mental illness and obesity. Epidemiology is therefore concerned with how the pattern of disease alters the functioning of human populations.
History.
The Greek physician Hippocrates, taught by Democritus and known as the father of medicine, sought a logic to sickness; he is the first person known to have examined the relationships between the occurrence of disease and environmental influences. Hippocrates believed sickness of the human body to be caused by an imbalance of the four humors (black bile, yellow bile, blood, and phlegm). The cure to the sickness was to remove or add the humor in question to balance the body. This belief led to the application of bloodletting and dieting in medicine. He coined the terms "endemic" (for diseases usually found in some places but not in others) and "epidemic" (for diseases that are seen at some times but not others).
Modern era.
In the middle of the 16th century, a doctor from Verona named Girolamo Fracastoro was the first to propose a theory that the very small, unseeable, particles that cause disease were alive. They were considered to be able to spread by air, multiply by themselves and to be destroyable by fire. In this way he refuted Galen's miasma theory (poison gas in sick people). In 1543 he wrote a book "De contagione et contagiosis morbis", in which he was the first to promote personal and environmental hygiene to prevent disease. The development of a sufficiently powerful microscope by Antonie van Leeuwenhoek in 1675 provided visual evidence of living particles consistent with a germ theory of disease.
During the Ming dynasty, Wu Youke (1582–1652) developed the idea that some diseases were caused by transmissible agents, which he called "Li Qi" (戾气 or pestilential factors) when he observed various epidemics rage around him between 1641 and 1644. His book "Wen Yi Lun" (瘟疫论, Treatise on Pestilence/Treatise of Epidemic Diseases) can be regarded as the main etiological work that brought forward the concept. His concepts were still being considered by the WHO in its 2004 analysis of the SARS outbreak, in the context of traditional Chinese medicine.
Another pioneer, Thomas Sydenham (1624–1689), was the first to distinguish the fevers of Londoners in the later 1600s. His theories on cures of fevers met with much resistance from traditional physicians at the time. He was not able to find the initial cause of the smallpox fever he researched and treated.
John Graunt, a haberdasher and amateur statistician, published "Natural and Political Observations ... upon the Bills of Mortality" in 1662. In it, he analysed the mortality rolls in London before the Great Plague, presented one of the first life tables, and reported time trends for many diseases, new and old. He provided statistical evidence for many theories on disease, and also refuted some widespread ideas on them.
John Snow is famous for his investigations into the causes of the 19th-century cholera epidemics, and is also known as the father of (modern) epidemiology. He began by noticing the significantly higher death rates in two areas supplied by the Southwark Company. His identification of the Broad Street pump as the cause of the Soho epidemic is considered the classic example of epidemiology. Snow used chlorine in an attempt to clean the water and removed the pump handle; this ended the outbreak. This has been perceived as a major event in the history of public health and regarded as the founding event of the science of epidemiology, having helped shape public health policies around the world. However, Snow's research and preventive measures to avoid further outbreaks were not fully accepted or put into practice until after his death due to the prevailing Miasma Theory of the time, a model of disease in which poor air quality was blamed for illness. This was used to rationalize high rates of infection in impoverished areas instead of addressing the underlying issues of poor nutrition and sanitation, and was proven false by his work.
Other pioneers include Danish physician Peter Anton Schleisner, who in 1849 related his work on the prevention of the epidemic of neonatal tetanus on the Vestmanna Islands in Iceland. Another important pioneer was Hungarian physician Ignaz Semmelweis, who in 1847 brought down infant mortality at a Vienna hospital by instituting a disinfection procedure. His findings were published in 1850, but his work was ill-received by his colleagues, who discontinued the procedure. Disinfection did not become widely practiced until British surgeon Joseph Lister 'discovered' antiseptics in 1865 in light of the work of Louis Pasteur.
In the early 20th century, mathematical methods were introduced into epidemiology by Ronald Ross, Janet Lane-Claypon, Anderson Gray McKendrick, and others. In a parallel development during the 1920s, German-Swiss pathologist Max Askanazy and others founded the International Society for Geographical Pathology to systematically investigate the geographical pathology of cancer and other non-infectious diseases across populations in different regions. After World War II, Richard Doll and other non-pathologists joined the field and advanced methods to study cancer, a disease with patterns and modes of occurrence that could not be suitably studied with the methods developed for epidemics of infectious diseases. Geographical pathology eventually combined with infectious disease epidemiology to make the field that is epidemiology today.
Another breakthrough was the 1954 publication of the results of the British Doctors Study, led by Richard Doll and Austin Bradford Hill, which lent very strong statistical support to the link between tobacco smoking and lung cancer.
In the late 20th century, with the advancement of biomedical sciences, a number of molecular markers in blood, other biospecimens and environment were identified as predictors of development or risk of a certain disease. Epidemiology research to examine the relationship between these biomarkers analyzed at the molecular level and disease was broadly named "molecular epidemiology". Specifically, "genetic epidemiology" has been used for epidemiology of germline genetic variation and disease. Genetic variation is typically determined using DNA from peripheral blood leukocytes.
21st century.
Since the 2000s, genome-wide association studies (GWAS) have been commonly performed to identify genetic risk factors for many diseases and health conditions.
While most molecular epidemiology studies are still using conventional disease diagnosis and classification systems, it is increasingly recognized that disease progression represents inherently heterogeneous processes differing from person to person. Conceptually, each individual has a unique disease process different from any other individual ("the unique disease principle"), considering uniqueness of the exposome (a totality of endogenous and exogenous / environmental exposures) and its unique influence on the molecular pathologic process in each individual. Studies to examine the relationship between an exposure and the molecular pathologic signature of disease (particularly cancer) became increasingly common throughout the 2000s. However, the use of molecular pathology in epidemiology posed unique challenges, including a lack of research guidelines and standardized statistical methodologies, and a paucity of interdisciplinary experts and training programs. Furthermore, the concept of disease heterogeneity appears to conflict with the long-standing premise in epidemiology that individuals with the same disease name have similar etiologies and disease processes. To resolve these issues and advance population health science in the era of molecular precision medicine, "molecular pathology" and "epidemiology" were integrated to create a new interdisciplinary field of "molecular pathological epidemiology" (MPE), defined as "epidemiology of molecular pathology and heterogeneity of disease". In MPE, investigators analyze the relationships between (A) environmental, dietary, lifestyle and genetic factors; (B) alterations in cellular or extracellular molecules; and (C) evolution and progression of disease. A better understanding of the heterogeneity of disease pathogenesis will further contribute to elucidating the etiologies of disease. The MPE approach can be applied not only to neoplastic diseases but also to non-neoplastic diseases. The concept and paradigm of MPE became widespread in the 2010s.
By 2012, it was recognized that many pathogens' evolution is rapid enough to be highly relevant to epidemiology, and that therefore much could be gained from an interdisciplinary approach to infectious disease integrating epidemiology and molecular evolution to "inform control strategies, or even patient treatment." Modern epidemiological studies can use advanced statistics and machine learning to create predictive models as well as to define treatment effects. There is increasing recognition that a wide range of modern data sources, many not originating from healthcare or epidemiology, can be used for epidemiological study. Such digital epidemiology can include data from internet searching, mobile phone records and retail sales of drugs.
Types of studies.
Epidemiologists employ a range of study designs, from the observational to the experimental, generally categorized as descriptive (involving the assessment of data covering time, place, and person), analytic (aiming to further examine known associations or hypothesized relationships), and experimental (a term often equated with clinical or community trials of treatments and other interventions). In observational studies, nature is allowed to "take its course", as epidemiologists observe from the sidelines. Conversely, in experimental studies, the epidemiologist is the one in control of all of the factors entering a certain case study. Epidemiological studies are aimed, where possible, at revealing unbiased relationships between exposures such as alcohol or smoking, biological agents, stress, or chemicals and mortality or morbidity. The identification of causal relationships between these exposures and outcomes is an important aspect of epidemiology. Modern epidemiologists use informatics and infodemiology as tools.
Observational studies have two components, descriptive and analytical. Descriptive observations pertain to the "who, what, where and when of health-related state occurrence". However, analytical observations deal more with the 'how' of a health-related event. Experimental epidemiology contains three case types: randomized controlled trials (often used for new medicine or drug testing), field trials (conducted on those at a high risk of contracting a disease), and community trials (research on diseases of social origin).
The term 'epidemiologic triad' is used to describe the intersection of "Host", "Agent", and "Environment" in analyzing an outbreak.
Case series.
Case-series may refer to the qualitative study of the experience of a single patient, or of a small group of patients with a similar diagnosis, or to a statistical technique that compares periods during which patients are exposed to a factor with the potential to produce illness with periods when they are unexposed.
The former type of study is purely descriptive and cannot be used to make inferences about the general population of patients with that disease. These types of studies, in which an astute clinician identifies an unusual feature of a disease or a patient's history, may lead to a formulation of a new hypothesis. Using the data from the series, analytic studies could be done to investigate possible causal factors. These can include case-control studies or prospective studies. A case-control study would involve matching comparable controls without the disease to the cases in the series. A prospective study would involve following the case series over time to evaluate the disease's natural history.
The latter type, more formally described as self-controlled case-series studies, divide individual patient follow-up time into exposed and unexposed periods and use fixed-effects Poisson regression processes to compare the incidence rate of a given outcome between exposed and unexposed periods. This technique has been extensively used in the study of adverse reactions to vaccination and has been shown in some circumstances to provide statistical power comparable to that available in cohort studies.
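As a simplified, hypothetical illustration of the idea (not the full fixed-effects Poisson regression described above, and with invented counts), the crude incidence rates in exposed and unexposed follow-up time can be compared directly:
<syntaxhighlight lang="python">
# Simplified sketch, not from the source: comparing the crude incidence rate
# of an outcome between exposed and unexposed person-time, as in a
# self-controlled case-series layout. All numbers are illustrative.

def incidence_rate(events, person_time):
    return events / person_time

# (events, person-time in years) pooled over patients, split by exposure window
exposed = (12, 40.0)      # events during exposed periods
unexposed = (9, 160.0)    # events during unexposed periods

rate_exposed = incidence_rate(*exposed)
rate_unexposed = incidence_rate(*unexposed)
rate_ratio = rate_exposed / rate_unexposed
print(f"exposed rate={rate_exposed:.3f}/yr, unexposed rate={rate_unexposed:.3f}/yr, "
      f"rate ratio={rate_ratio:.2f}")   # rate ratio ~5.33
</syntaxhighlight>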
Case-control studies.
Case-control studies select subjects based on their disease status. It is a retrospective study. A group of individuals that are disease positive (the "case" group) is compared with a group of disease negative individuals (the "control" group). The control group should ideally come from the same population that gave rise to the cases. The case-control study looks back through time at potential exposures that both groups (cases and controls) may have encountered. A 2×2 table is constructed, displaying exposed cases (A), exposed controls (B), unexposed cases (C) and unexposed controls (D). The statistic generated to measure association is the odds ratio (OR), which is the ratio of the odds of exposure in the cases (A/C) to the odds of exposure in the controls (B/D), i.e. OR = (AD/BC).
If the OR is significantly greater than 1, then the conclusion is "those with the disease are more likely to have been exposed", whereas if it is close to 1 then the exposure and disease are not likely associated. If the OR is far less than one, then this suggests that the exposure is a protective factor in the causation of the disease.
Case-control studies are usually faster and more cost-effective than cohort studies but are sensitive to bias (such as recall bias and selection bias). The main challenge is to identify the appropriate control group; the distribution of exposure among the control group should be representative of the distribution in the population that gave rise to the cases. This can be achieved by drawing a random sample from the original population at risk. One consequence of this is that the control group can contain people with the disease under study when the disease has a high attack rate in the population.
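As a sketch of the calculation described above (the counts are invented, and the confidence interval uses the standard log-odds approximation rather than anything specified in the source):
<syntaxhighlight lang="python">
# Minimal sketch, not from the source: the odds ratio from the 2x2 table
# described above, with an approximate 95% confidence interval on the log
# scale (Woolf method). The counts are illustrative.
import math

def odds_ratio(a, b, c, d):
    """a=exposed cases, b=exposed controls, c=unexposed cases, d=unexposed controls."""
    return (a * d) / (b * c)

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Approximate 95% CI: exp(log(OR) +/- z*sqrt(1/a + 1/b + 1/c + 1/d))."""
    log_or = math.log(odds_ratio(a, b, c, d))
    se = math.sqrt(1/a + 1/b + 1/c + 1/d)
    return math.exp(log_or - z * se), math.exp(log_or + z * se)

a, b, c, d = 30, 70, 10, 90   # illustrative counts
print(odds_ratio(a, b, c, d))        # (30*90)/(70*10) ~= 3.86
print(odds_ratio_ci(a, b, c, d))     # roughly (1.8, 8.4)
</syntaxhighlight>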
A major drawback for case control studies is that, in order to be considered to be statistically significant, the minimum number of cases required at the 95% confidence interval is related to the odds ratio by the equation:
formula_0
where N is the ratio of cases to controls.
As the odds ratio approaches 1, the number of cases required for statistical significance grows towards infinity, rendering case-control studies all but useless for low odds ratios. For instance, an odds ratio of 1.5 with equal numbers of cases and controls already requires far more cases than a large odds ratio, and an odds ratio of 1.1 requires more still.
Cohort studies.
Cohort studies select subjects based on their exposure status. The study subjects should be at risk of the outcome under investigation at the beginning of the cohort study; this usually means that they should be disease free when the cohort study starts. The cohort is followed through time to assess their later outcome status. An example of a cohort study would be the investigation of a cohort of smokers and non-smokers over time to estimate the incidence of lung cancer. The same 2×2 table is constructed as with the case control study. However, the point estimate generated is the relative risk (RR), which is the probability of disease for a person in the exposed group, "P"e = "A" / ("A" + "B") over the probability of disease for a person in the unexposed group, "P""u" = "C" / ("C" + "D"), i.e. "RR" = "P"e / "P"u.
As with the OR, a RR greater than 1 shows association, where the conclusion can be read "those with the exposure were more likely to develop the disease."
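A corresponding sketch for the cohort layout (again with invented counts) computes the relative risk from the same 2×2 table; when the outcome is rare, the odds ratio from a case-control design approximates this quantity.
<syntaxhighlight lang="python">
# Minimal sketch, not from the source: the relative risk following the
# definitions above (Pe = A/(A+B), Pu = C/(C+D)). In a cohort, A and B are
# exposed subjects with and without the outcome, C and D the unexposed
# subjects with and without it. Counts are illustrative.

def relative_risk(a, b, c, d):
    p_exposed = a / (a + b)       # risk of disease in the exposed group
    p_unexposed = c / (c + d)     # risk of disease in the unexposed group
    return p_exposed / p_unexposed

print(relative_risk(30, 70, 10, 90))   # (30/100) / (10/100) = 3.0
</syntaxhighlight>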
Prospective studies have many benefits over case control studies. The RR is a more powerful effect measure than the OR, as the OR is just an estimation of the RR, since true incidence cannot be calculated in a case control study where subjects are selected based on disease status. Temporality can be established in a prospective study, and confounders are more easily controlled for. However, they are more costly, and there is a greater chance of losing subjects to follow-up based on the long time period over which the cohort is followed.
Cohort studies are also limited by the same equation for the number of cases as case-control studies, but, if the base incidence rate in the study population is very low, the number of cases required is reduced by half.
Causal inference.
Although epidemiology is sometimes viewed as a collection of statistical tools used to elucidate the associations of exposures to health outcomes, a deeper understanding of this science is that of discovering "causal" relationships.
"Correlation does not imply causation" is a common theme for much of the epidemiological literature. For epidemiologists, the key is in the term inference. Correlation, or at least association between two variables, is a necessary but not sufficient criterion for the inference that one variable causes the other. Epidemiologists use gathered data and a broad range of biomedical and psychosocial theories in an iterative way to generate or expand theory, to test hypotheses, and to make educated, informed assertions about which relationships are causal, and about exactly how they are causal.
Epidemiologists emphasize that the "one cause – one effect" understanding is a simplistic mis-belief. Most outcomes, whether disease or death, are caused by a chain or web consisting of many component causes. Causes can be distinguished as necessary, sufficient or probabilistic conditions. If a necessary condition can be identified and controlled (e.g., antibodies to a disease agent, energy in an injury), the harmful outcome can be avoided (Robertson, 2015). One tool regularly used to conceptualize the multicausality associated with disease is the causal pie model.
Bradford Hill criteria.
In 1965, Austin Bradford Hill proposed a series of considerations to help assess evidence of causation, which have come to be commonly known as the "Bradford Hill criteria". In contrast to the explicit intentions of their author, Hill's considerations are now sometimes taught as a checklist to be implemented for assessing causality. Hill himself said "None of my nine viewpoints can bring indisputable evidence for or against the cause-and-effect hypothesis and none can be required "sine qua non"."
Legal interpretation.
Epidemiological studies can only go to prove that an agent could have caused, but not that it did cause, an effect in any particular case:
<templatestyles src="Template:Blockquote/styles.css" />Epidemiology is concerned with the incidence of disease in populations and does not address the question of the cause of an individual's disease. This question, sometimes referred to as specific causation, is beyond the domain of the science of epidemiology. Epidemiology has its limits at the point where an inference is made that the relationship between an agent and a disease is causal (general causation) and where the magnitude of excess risk attributed to the agent has been determined; that is, epidemiology addresses whether an agent can cause disease, not whether an agent did cause a specific plaintiff's disease.
In United States law, epidemiology alone cannot prove that a causal association does not exist in general. Conversely, it can be (and is in some circumstances) taken by US courts, in an individual case, to justify an inference that a causal association does exist, based upon a balance of probability.
The subdiscipline of forensic epidemiology is directed at the investigation of specific causation of disease or injury in individuals or groups of individuals in instances in which causation is disputed or is unclear, for presentation in legal settings.
Population-based health management.
Epidemiological practice and the results of epidemiological analysis make a significant contribution to emerging population-based health management frameworks.
Population-based health management encompasses the ability to:
Modern population-based health management is complex, requiring a multiple set of skills (medical, political, technological, mathematical, etc.) of which epidemiological practice and analysis is a core component, that is unified with management science to provide efficient and effective health care and health guidance to a population. This task requires the forward-looking ability of modern risk management approaches that transform health risk factors, incidence, prevalence and mortality statistics (derived from epidemiological analysis) into management metrics that not only guide how a health system responds to current population health issues but also how a health system can be managed to better respond to future potential population health issues.
Examples of organizations that use population-based health management and that leverage the work and results of epidemiological practice include the Canadian Strategy for Cancer Control, the Health Canada Tobacco Control Programs, the Rick Hansen Foundation, and the Canadian Tobacco Control Research Initiative.
Each of these organizations uses a population-based health management framework called Life at Risk that combines epidemiological quantitative analysis with demographics, health agency operational research and economics to perform:
Applied field epidemiology.
Applied epidemiology is the practice of using epidemiological methods to protect or improve the health of a population. Applied field epidemiology can include investigating communicable and non-communicable disease outbreaks, mortality and morbidity rates, and nutritional status, among other indicators of health, with the purpose of communicating the results to those who can implement appropriate policies or disease control measures.
Humanitarian context.
As the surveillance and reporting of diseases and other health factors become increasingly difficult in humanitarian crisis situations, the methodologies used to report the data are compromised. One study found that less than half (42.4%) of nutrition surveys sampled from humanitarian contexts correctly calculated the prevalence of malnutrition and only one-third (35.3%) of the surveys met the criteria for quality. Among the mortality surveys, only 3.2% met the criteria for quality. As nutritional status and mortality rates help indicate the severity of a crisis, the tracking and reporting of these health factors is crucial.
Vital registries are usually the most effective ways to collect data, but in humanitarian contexts these registries can be non-existent, unreliable, or inaccessible. As such, mortality is often inaccurately measured using either prospective demographic surveillance or retrospective mortality surveys. Prospective demographic surveillance requires much manpower and is difficult to implement in a spread-out population. Retrospective mortality surveys are prone to selection and reporting biases. Other methods are being developed, but are not common practice yet.
Characterization, validity, and bias.
Epidemic wave.
The concept of waves in epidemics has implications especially for communicable diseases. A working definition for the term "epidemic wave" is based on two key features: 1) it comprises periods of upward or downward trends, and 2) these increases or decreases must be substantial and sustained over a period of time, in order to distinguish them from minor fluctuations or reporting errors. A consistent scientific definition provides a common language for communicating about and understanding the progression of the COVID-19 pandemic, which aids healthcare organizations and policymakers in resource planning and allocation.
Validities.
Different fields in epidemiology have different levels of validity. One way to assess the validity of findings is the ratio of false-positives (claimed effects that are not correct) to false-negatives (studies which fail to support a true effect). In genetic epidemiology, candidate-gene studies may produce over 100 false-positive findings for each false-negative. By contrast, genome-wide association studies appear close to the reverse, with only one false-positive for every 100 or more false-negatives. This ratio has improved over time in genetic epidemiology, as the field has adopted stringent criteria. By contrast, other epidemiological fields have not required such rigorous reporting and are much less reliable as a result.
Random error.
Random error is the result of fluctuations around a true value because of sampling variability. Random error is just that: random. It can occur during data collection, coding, transfer, or analysis. Examples of random errors include poorly worded questions, a misunderstanding in interpreting an individual answer from a particular respondent, or a typographical error during coding. Random error affects measurement in a transient, inconsistent manner and it is impossible to correct for random error. There is a random error in all sampling procedures – sampling error.
Precision in epidemiological variables is a measure of random error. Precision is also inversely related to random error, so that to reduce random error is to increase precision. Confidence intervals are computed to demonstrate the precision of relative risk estimates. The narrower the confidence interval, the more precise the relative risk estimate.
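For example, an approximate 95% confidence interval for a relative risk can be computed with the standard log-transformation method; the sketch below reuses the hypothetical 2×2 cell counts from above and is only illustrative:

```python
import math

def rr_confidence_interval(a, b, c, d, z=1.96):
    """Approximate 95% CI for the relative risk from a 2x2 cohort table,
    using the usual standard error of log(RR)."""
    rr = (a / (a + b)) / (c / (c + d))
    se_log_rr = math.sqrt(1/a - 1/(a + b) + 1/c - 1/(c + d))
    lower = math.exp(math.log(rr) - z * se_log_rr)
    upper = math.exp(math.log(rr) + z * se_log_rr)
    return rr, (lower, upper)

# Larger samples shrink se_log_rr and therefore narrow the interval.
print(rr_confidence_interval(30, 970, 10, 990))
```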
There are two basic ways to reduce random error in an epidemiological study. The first is to increase the sample size of the study. In other words, add more subjects to your study. The second is to reduce the variability in measurement in the study. This might be accomplished by using a more precise measuring device or by increasing the number of measurements.
Note that if the sample size or the number of measurements is increased, or a more precise measuring tool is purchased, the costs of the study usually increase. There is usually an uneasy balance between the need for adequate precision and the practical issue of study cost.
Systematic error.
A systematic error or bias occurs when there is a difference between the true value (in the population) and the observed value (in the study) from any cause other than sampling variability. An example of systematic error is if, unknown to you, the pulse oximeter you are using is set incorrectly and adds two points to the true value each time a measurement is taken. The measuring device could be precise but not accurate. Because the error happens in every instance, it is systematic. Conclusions you draw based on that data will still be incorrect. But the error can be reproduced in the future (e.g., by using the same mis-set instrument).
A mistake in coding that affects "all" responses for that particular question is another example of a systematic error.
The validity of a study is dependent on the degree of systematic error. Validity is usually separated into two components:
Selection bias.
Selection bias occurs when study subjects are selected or become part of the study as a result of a third, unmeasured variable which is associated with both the exposure and outcome of interest. For instance, it has repeatedly been noted that cigarette smokers and non-smokers tend to differ in their study participation rates. (Sackett D cites the example of Seltzer et al., in which 85% of non-smokers and 67% of smokers returned mailed questionnaires.) Such a difference in response will not lead to bias if it is not also associated with a systematic difference in outcome between the two response groups.
Information bias.
Information bias is bias arising from systematic error in the assessment of a variable. An example of this is recall bias. A typical example is again provided by Sackett in his discussion of a study examining the effect of specific exposures on fetal health: "in questioning mothers whose recent pregnancies had ended in fetal death or malformation (cases) and a matched group of mothers whose pregnancies ended normally (controls) it was found that 28% of the former, but only 20% of the latter, reported exposure to drugs which could not be substantiated either in earlier prospective interviews or in other health records". In this example, recall bias probably occurred as a result of women who had had miscarriages having an apparent tendency to better recall and therefore report previous exposures.
Confounding.
Confounding has traditionally been defined as bias arising from the co-occurrence or mixing of effects of extraneous factors, referred to as confounders, with the main effect(s) of interest. A more recent definition of confounding invokes the notion of "counterfactual" effects. According to this view, when one observes an outcome of interest, say Y=1 (as opposed to Y=0), in a given population A which is entirely exposed (i.e. exposure "X" = 1 for every unit of the population) the risk of this event will be "R"A1. The counterfactual or unobserved risk "R"A0 corresponds to the risk which would have been observed if these same individuals had been unexposed (i.e. "X" = 0 for every unit of the population). The true effect of exposure therefore is: "R"A1 − "R"A0 (if one is interested in risk differences) or "R"A1/"R"A0 (if one is interested in relative risk). Since the counterfactual risk "R"A0 is unobservable we approximate it using a second population B and we actually measure the following relations: "R"A1 − "R"B0 or "R"A1/"R"B0. In this situation, confounding occurs when "R"A0 ≠ "R"B0. (NB: Example assumes binary outcome and exposure variables.)
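The counterfactual definition can be made concrete with a small simulation; the data-generating process below is entirely hypothetical and only illustrates how "R"A1 − "R"B0 can differ from the causal effect "R"A1 − "R"A0 when a confounder influences both exposure and outcome:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Hypothetical confounder Z raises both the chance of exposure X and the baseline risk.
Z = rng.binomial(1, 0.5, n)
X = rng.binomial(1, 0.2 + 0.6 * Z)        # exposure is more likely when Z = 1
baseline = 0.05 + 0.10 * Z                # risk if unexposed
Y1 = rng.binomial(1, baseline + 0.05)     # potential outcome under exposure
Y0 = rng.binomial(1, baseline)            # potential outcome without exposure

exposed = X == 1
R_A1 = Y1[exposed].mean()     # observed risk in the exposed population A
R_A0 = Y0[exposed].mean()     # counterfactual risk of those same people, unexposed
R_B0 = Y0[~exposed].mean()    # observed risk in the unexposed comparison population B

print("causal risk difference   R_A1 - R_A0:", round(R_A1 - R_A0, 3))   # ~0.05
print("observed risk difference R_A1 - R_B0:", round(R_A1 - R_B0, 3))   # larger
# The discrepancy arises because R_A0 != R_B0, i.e. confounding by Z.
```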
Some epidemiologists prefer to think of confounding separately from common categorizations of bias since, unlike selection and information bias, confounding stems from real causal effects.
The profession.
Few universities have offered epidemiology as a course of study at the undergraduate level. An undergraduate program exists at Johns Hopkins University in which students who major in public health can take graduate-level courses—including epidemiology—during their senior year at the Bloomberg School of Public Health. In addition to its master's and doctoral degrees in epidemiology, the University of Michigan School of Public Health has offered undergraduate degree programs since 2017 that include coursework in epidemiology.
Although epidemiologic research is conducted by individuals from diverse disciplines, variable levels of training in epidemiologic methods are provided during pharmacy, medical, veterinary, social work, podiatry, nursing, physical therapy, and clinical psychology doctoral programs in addition to the formal training master's and doctoral students in public health fields receive.
As public health practitioners, epidemiologists work in a number of different settings. Some epidemiologists work "in the field" (i.e., in the community; commonly in a public health service), and are often at the forefront of investigating and combating disease outbreaks. Others work for non-profit organizations, universities, hospitals, or larger government entities (e.g., state and local health departments in the United States), ministries of health, Doctors without Borders, the Centers for Disease Control and Prevention (CDC), the Health Protection Agency, the World Health Organization (WHO), or the Public Health Agency of Canada. Epidemiologists can also work in for-profit organizations (e.g., pharmaceutical and medical device companies) in groups such as market research or clinical development.
COVID-19.
An April 2020 University of Southern California article noted that, "The coronavirus epidemic... thrust epidemiology – the study of the incidence, distribution and control of disease in a population – to the forefront of scientific disciplines across the globe and even made temporary celebrities out of some of its practitioners."
See also.
<templatestyles src="Div col/styles.css"/>
References.
Citations.
<templatestyles src="Reflist/styles.css" />
Sources.
<templatestyles src="Refbegin/styles.css" />
External links.
<templatestyles src="Refbegin/styles.css" />
|
[
{
"math_id": 0,
"text": "\\text{total cases} = A+C = 1.96^2 (1+N) \\left(\\frac{1}{\\ln(OR)}\\right)^2 \\left(\\frac{OR+2\\sqrt{OR}+1}{\\sqrt{OR}}\\right) \\approx 15.5 (1+N) \\left(\\frac{1}{\\ln(OR)}\\right)^2"
}
] |
https://en.wikipedia.org/wiki?curid=66997
|
669992
|
Star height
|
In theoretical computer science, more precisely in the theory of formal languages, the star height is a measure for the structural complexity
of regular expressions and regular languages. The star height of a regular "expression" equals the maximum nesting depth of stars appearing in that expression. The star height of a regular "language" is the least star height of any regular expression for that language.
The concept of star height was first defined and studied by Eggan (1963).
Formal definition.
More formally, the star height of a regular expression
"E" over a finite alphabet "A" is inductively defined as follows:
Here, formula_5 is the special regular expression denoting the empty set and ε the special one denoting the empty word;
"E" and "F" are arbitrary regular expressions.
The star height "h"("L") of a regular language "L" is defined as the minimum star height among all regular expressions representing "L".
The intuition is here that if the language "L" has large star height, then it is in some sense inherently complex, since it cannot be described
by means of an "easy" regular expression, of low star height.
Examples.
While computing the star height of a regular expression is easy, determining the star height of a language can sometimes be tricky.
For illustration, the regular expression
formula_6
over the alphabet "A = {a,b}"
has star height 2. However, the described language is just the set of all words ending in an "a": thus the language can also be described by the expression
formula_7
which is only of star height 1. To prove that this language indeed has star height 1, one still needs to rule out that it could be described by a regular
expression of lower star height. For our example, this can be done by an indirect proof: One proves that a language of star height 0
contains only finitely many words. Since the language under consideration is infinite, it cannot be of star height 0.
The star height of a group language is computable: for example, the star height of the language over {"a","b"} in which the number of occurrences of "a" and "b" are congruent modulo 2"n" is "n".
Eggan's theorem.
In his seminal study of the star height of regular languages, Eggan established a relation between the theories of regular expressions, finite automata, and directed graphs. In subsequent years, this relation became known as "Eggan's theorem". We recall a few concepts from graph theory and automata theory.
In graph theory, the cycle rank "r"("G") of a directed graph (digraph) "G" = ("V", "E") is inductively defined as follows:
If "G" is acyclic, then "r"("G") = 0. If "G" is strongly connected and "E" is nonempty, then formula_8 where "G" − "v" is the digraph resulting from the deletion of vertex "v" and all edges beginning or ending at "v". If "G" is not strongly connected, then "r"("G") equals the maximum cycle rank among all strongly connected components of "G".
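A direct (and deliberately naive) implementation of this inductive definition is sketched below using the networkx library; it recurses over vertex deletions and strongly connected components, so it is exponential in the worst case and is meant only as an illustration:

```python
import networkx as nx

def cycle_rank(G):
    """Cycle rank r(G) of a digraph, following the inductive definition."""
    if nx.is_directed_acyclic_graph(G):
        return 0
    # non-trivial strongly connected components (more than one vertex, or a self-loop)
    sccs = [c for c in nx.strongly_connected_components(G)
            if len(c) > 1 or any(G.has_edge(v, v) for v in c)]
    if len(sccs) == 1 and len(sccs[0]) == G.number_of_nodes():
        # G is strongly connected: r(G) = 1 + minimum over vertex deletions
        return 1 + min(cycle_rank(G.subgraph(set(G.nodes) - {v}).copy()) for v in G)
    # otherwise the rank is the maximum over the non-trivial components
    return max(cycle_rank(G.subgraph(c).copy()) for c in sccs)

# A single directed cycle has cycle rank 1; a complete digraph on n vertices has rank n.
print(cycle_rank(nx.cycle_graph(4, create_using=nx.DiGraph)))  # 1
```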
In automata theory, a nondeterministic finite automaton with ε-transitions (ε-NFA) is defined as a 5-tuple, ("Q", Σ, "δ", "q0", "F"), consisting of a finite set of states "Q", a finite set of input symbols Σ, a transition relation "δ" ⊆ "Q" × (Σ ∪ {ε}) × "Q", an initial state "q0" ∈ "Q", and a set of final states "F" ⊆ "Q".
A word "w" ∈ Σ* is accepted by the ε-NFA if there exists a directed path from the initial state "q"0 to some final state in "F" using edges from "δ", such that the concatenation of all labels visited along the path yields the word "w". The set of all words over Σ* accepted by the automaton is the "language" accepted by the automaton "A".
When speaking of digraph properties of a nondeterministic finite automaton "A" with state set "Q", we naturally address the digraph with vertex set "Q" induced by its transition relation. Now the theorem is stated as follows.
Eggan's Theorem: The star height of a regular language "L" equals the minimum cycle rank among all nondeterministic finite automata with ε-transitions accepting "L".
A proof of this theorem was given by Eggan; more recent proofs also appear in the literature.
Generalized star height.
The above definition assumes that regular expressions are built from the elements of the alphabet "A"
using only the standard operators set union, concatenation, and Kleene star. "Generalized regular expressions" are defined just as regular expressions, but here also the set complement operator is allowed
(the complement is always taken with respect to the set of all words over A). If we alter the definition such that taking complements does not increase the star height, that is,
formula_9
we can define the generalized star height of a regular language "L" as the minimum star height among all "generalized" regular expressions
representing "L". It is an open problem whether some languages can only be expressed with a generalized star height greater than one: this is the generalized star-height problem.
Note that, whereas it is immediate that a language of (ordinary) star height 0 can contain only finitely many words, there exist infinite
languages having generalized star height 0. For instance, the regular expression
formula_10
which we saw in the example above, can be equivalently described by the generalized regular expression
formula_11,
since the complement of the empty set is precisely the set of all words over "A". Thus the set of all words over the alphabet "A" ending in the letter "a" has star height one, while its
generalized star height equals zero.
Languages of generalized star height zero are also called star-free languages. It can be shown that a language "L" is star-free if and only if its syntactic monoid is aperiodic.
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "\\textstyle h\\left(\\emptyset\\right)\\,=\\,0"
},
{
"math_id": 1,
"text": "\\textstyle h\\left(\\varepsilon\\right)\\,=\\,0"
},
{
"math_id": 2,
"text": "\\textstyle h\\left(a\\right)\\,=\\,0"
},
{
"math_id": 3,
"text": "\\textstyle h\\left(E F\\right)\\,=\\, h\\left(E\\, \\mid\\, F\\right)\\,=\\,\\max \\left(\\, h(E), h(F)\\,\\right)"
},
{
"math_id": 4,
"text": "\\textstyle h\\left(E^*\\right)\\,=\\,h(E)+1."
},
{
"math_id": 5,
"text": "\\scriptstyle \\emptyset"
},
{
"math_id": 6,
"text": "\\textstyle \\left(b\\, \\mid\\, a a^*b\\right)^*a a^* "
},
{
"math_id": 7,
"text": "\\textstyle (a\\, \\mid\\, b)^*a"
},
{
"math_id": 8,
"text": "r(G) = 1 + \\min_{v\\in V} r(G-v),\\,"
},
{
"math_id": 9,
"text": "\\textstyle h\\left(E^c\\right)\\,=\\,h(E)"
},
{
"math_id": 10,
"text": "\\textstyle (a\\, \\mid\\, b)^*a,"
},
{
"math_id": 11,
"text": "\\textstyle \\emptyset^c a"
}
] |
https://en.wikipedia.org/wiki?curid=669992
|
670026
|
Syntactic monoid
|
Smallest monoid that recognizes a formal language
In mathematics and computer science, the syntactic monoid formula_0 of a formal language formula_1 is the smallest monoid that recognizes the language formula_1.
Syntactic quotient.
The free monoid on a given set is the monoid whose elements are all the strings of zero or more elements from that set, with string concatenation as the monoid operation and the empty string as the identity element. Given a subset formula_2 of a free monoid formula_3, one may define sets that consist of formal left or right inverses of elements in formula_2. These are called quotients, and one may define right or left quotients, depending on which side one is concatenating. Thus, the right quotient of formula_2 by an element formula_4 from formula_3 is the set
formula_5
Similarly, the left quotient is
formula_6
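For a finite language given as a set of strings, the two quotients can be computed directly; the following sketch is illustrative only (real regular languages are usually infinite and would be handled through their automata instead):

```python
def right_quotient(S, m):
    """S / m = { u : u + m is in S }, for a finite set of strings S."""
    return {u[:len(u) - len(m)] for u in S if u.endswith(m)}

def left_quotient(m, S):
    """m \\ S = { u : m + u is in S }."""
    return {u[len(m):] for u in S if u.startswith(m)}

S = {"ab", "aab", "ba"}
print(right_quotient(S, "b"))  # {'a', 'aa'}
print(left_quotient("a", S))   # {'b', 'ab'}
```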
Syntactic equivalence.
The syntactic quotient induces an equivalence relation on formula_3, called the syntactic relation, or syntactic equivalence (induced by formula_2).
The "right syntactic equivalence" is the equivalence relation
formula_7.
Similarly, the "left syntactic equivalence" is
formula_8.
Observe that the "right" syntactic equivalence is a "left" congruence with respect to string concatenation and vice versa; i.e., formula_9 for all formula_10.
The syntactic congruence or Myhill congruence is defined as
formula_11.
The definition extends to a congruence defined by a subset formula_2 of a general monoid formula_3. A disjunctive set is a subset formula_2 such that the syntactic congruence defined by formula_2 is the equality relation.
Let us call formula_12 the equivalence class of formula_13 for the syntactic congruence.
The syntactic congruence is compatible with concatenation in the monoid, in that one has
formula_14
for all formula_15. Thus, the syntactic quotient is a monoid morphism, and induces a quotient monoid
formula_16.
This monoid formula_17 is called the syntactic monoid of formula_2.
It can be shown that it is the smallest monoid that recognizes formula_2; that is, formula_17 recognizes formula_2, and for every monoid formula_18 recognizing formula_2, formula_17 is a quotient of a submonoid of formula_18. The syntactic monoid of formula_2 is also the transition monoid of the minimal automaton of formula_2.
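Since the syntactic monoid coincides with the transition monoid of the minimal automaton, it can be computed for a concrete regular language by closing the letter-induced state maps of a complete, minimal DFA under composition. The sketch below is only illustrative; the DFA and all names in it are assumptions chosen for the example:

```python
def transition_monoid(states, alphabet, delta):
    """Transition monoid of a complete DFA; delta maps (state, letter) -> state.
    Each monoid element is the state map induced by some word, stored as a tuple
    giving the images of states[0], states[1], ..."""
    identity = tuple(states)                                   # map of the empty word
    letter_maps = {a: tuple(delta[(q, a)] for q in states) for a in alphabet}
    monoid, frontier = {identity}, [identity]
    while frontier:
        f = frontier.pop()
        for a in alphabet:
            g = letter_maps[a]
            # first apply f (the word read so far), then the letter a
            h = tuple(g[states.index(f[i])] for i in range(len(states)))
            if h not in monoid:
                monoid.add(h)
                frontier.append(h)
    return monoid

# Minimal DFA over {a, b} for "even number of a's"; its syntactic monoid is Z/2Z.
states = (0, 1)
delta = {(0, 'a'): 1, (1, 'a'): 0, (0, 'b'): 0, (1, 'b'): 1}
print(transition_monoid(states, ('a', 'b'), delta))  # {(0, 1), (1, 0)}
```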
A group language is one for which the syntactic monoid is a group.
Myhill–Nerode theorem.
The Myhill–Nerode theorem states: a language formula_1 is regular if and only if the family of quotients formula_19 is finite, or equivalently, the left syntactic equivalence formula_20 has "finite index" (meaning it partitions formula_3 into finitely many equivalence classes).
This theorem was first proved by Anil Nerode and the relation formula_20 is thus referred to as Nerode congruence by some authors.
Proof.
The proof of the "only if" part is as follows. Assume that a finite automaton recognizing formula_1 reads input formula_21, which leads to state formula_22. If formula_23 is another string read by the machine, also terminating in the same state formula_22, then clearly one has formula_24. Thus, the number of elements in formula_25 is at most equal to the number of states of the automaton and formula_26 is at most the number of final states.
For a proof of the "if" part, assume that the number of elements in formula_25 is finite. One can then construct an automaton where formula_27 is the set of states, formula_28 is the set of final states, the language formula_1 is the initial state, and the transition function is given by formula_29. Clearly, this automaton recognizes formula_1.
Thus, a language formula_1 is recognizable if and only if the set formula_25 is finite. Note that this proof also builds the minimal automaton.
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "M(L)"
},
{
"math_id": 1,
"text": "L"
},
{
"math_id": 2,
"text": "S"
},
{
"math_id": 3,
"text": "M"
},
{
"math_id": 4,
"text": "m"
},
{
"math_id": 5,
"text": "S \\ / \\ m=\\{u\\in M \\;\\vert\\; um\\in S \\}."
},
{
"math_id": 6,
"text": "m \\setminus S=\\{u\\in M \\;\\vert\\; mu\\in S \\}."
},
{
"math_id": 7,
"text": "s \\sim_S t \\ \\Leftrightarrow\\ S \\,/ \\,s \\;=\\; S \\,/ \\,t \\ \\Leftrightarrow\\ (\\forall x\\in M\\colon\\ xs \\in S \\Leftrightarrow xt \\in S)"
},
{
"math_id": 8,
"text": "s \\;{}_S{\\sim}\\; t \\ \\Leftrightarrow\\ s \\setminus S \\;=\\; t \\setminus S \\ \\Leftrightarrow\\ (\\forall y\\in M\\colon\\ sy \\in S \\Leftrightarrow ty \\in S)"
},
{
"math_id": 9,
"text": "s \\sim_S t \\ \\Rightarrow\\ xs \\sim_S xt\\ "
},
{
"math_id": 10,
"text": "x \\in M"
},
{
"math_id": 11,
"text": "s \\equiv_S t \\ \\Leftrightarrow\\ (\\forall x, y\\in M\\colon\\ xsy \\in S \\Leftrightarrow xty \\in S)"
},
{
"math_id": 12,
"text": "[s]_S"
},
{
"math_id": 13,
"text": "s"
},
{
"math_id": 14,
"text": "[s]_S[t]_S=[st]_S"
},
{
"math_id": 15,
"text": "s,t\\in M"
},
{
"math_id": 16,
"text": "M(S)= M \\ / \\ {\\equiv_S}"
},
{
"math_id": 17,
"text": "M(S)"
},
{
"math_id": 18,
"text": "N"
},
{
"math_id": 19,
"text": "\\{m \\setminus L \\,\\vert\\; m\\in M\\}"
},
{
"math_id": 20,
"text": "{}_S{\\sim}"
},
{
"math_id": 21,
"text": "x"
},
{
"math_id": 22,
"text": "p"
},
{
"math_id": 23,
"text": "y"
},
{
"math_id": 24,
"text": "x \\setminus L\\,= y \\setminus L"
},
{
"math_id": 25,
"text": "\\{m \\setminus L \\,\\vert\\; m\\in M\\}"
},
{
"math_id": 26,
"text": "\\{m \\setminus L \\,\\vert\\; m\\in L\\}"
},
{
"math_id": 27,
"text": "Q=\\{m \\setminus L \\,\\vert\\; m\\in M\\}"
},
{
"math_id": 28,
"text": "F=\\{m \\setminus L \\,\\vert\\; m\\in L\\}"
},
{
"math_id": 29,
"text": "\\delta_y \\colon x \\setminus L \\to y\\setminus(x \\setminus L) =(xy) \\setminus L"
},
{
"math_id": 30,
"text": "A = \\{a, b\\}"
},
{
"math_id": 31,
"text": "L_1"
},
{
"math_id": 32,
"text": "\\{L, L_1\\}"
},
{
"math_id": 33,
"text": "(ab+ba)^*"
},
{
"math_id": 34,
"text": "A"
},
{
"math_id": 35,
"text": "\\left|A\\right| > 1"
},
{
"math_id": 36,
"text": "\\{ww^R \\mid w \\in A^*\\}"
},
{
"math_id": 37,
"text": "w^R"
},
{
"math_id": 38,
"text": "w"
},
{
"math_id": 39,
"text": "\\left|A\\right| = 1"
},
{
"math_id": 40,
"text": "\\{a, b\\}"
},
{
"math_id": 41,
"text": "a"
},
{
"math_id": 42,
"text": "b"
},
{
"math_id": 43,
"text": "2^n"
},
{
"math_id": 44,
"text": "\\mathbb{Z} / 2^n\\mathbb{Z}"
}
] |
https://en.wikipedia.org/wiki?curid=670026
|
67004382
|
Market equilibrium computation
|
Economical computational problem
Market equilibrium computation (also called competitive equilibrium computation or clearing-prices computation) is a computational problem in the intersection of economics and computer science. The input to this problem is a "market", consisting of a set of "resources" and a set of "agents". There are various kinds of markets, such as Fisher market and Arrow–Debreu market, with divisible or indivisible resources. The required output is a "competitive equilibrium", consisting of a "price-vector" (a price for each resource), and an "allocation" (a resource-bundle for each agent), such that each agent gets the best bundle possible (for him) given the budget, and the market "clears" (all resources are allocated).
Market equilibrium computation is interesting due to the fact that a competitive equilibrium is always Pareto efficient. The special case of a Fisher market, in which all buyers have equal incomes, is particularly interesting, since in this setting a competitive equilibrium is also envy-free. Therefore, market equilibrium computation is a way to find an allocation which is both fair and efficient.
Definitions.
The input to the market-equilibrium-computation consists of the following ingredients:
The required output should contain the following ingredients:
The output should satisfy the following requirements:
A price and allocation satisfying these requirements are called "a competitive equilibrium" (CE) or a "market equilibrium"; the prices are also called "equilibrium prices" or "clearing prices".
Kinds of utility functions.
Market equilibrium computation has been studied under various assumptions regarding the agents' utility functions.
Utilities that are piecewise-linear and concave are often called PLC; if they are also separable, then they are called SPLC.
Main results.
Approximate algorithms.
Scarf was the first to show the existence of a CE using Sperner's lemma (see Fisher market). He also gave an algorithm for computing an approximate CE.
Merrill gave an extended algorithm for approximate CE.
Kakade, Kearns and Ortiz gave algorithms for approximate CE in a generalized Arrow-Debreu market in which agents are located on a graph and trade may occur only between neighboring agents. They considered non-linear utilities.
Newman and Primak studied two variants of the ellipsoid method for finding a CE in an Arrow-Debreu market with linear utilities. They prove that the inscribed ellipsoid method is more computationally efficient than the circumscribed ellipsoid method.
Hardness results.
In some cases, computing an approximate CE is PPAD-hard:
Exact algorithms.
Devanur, Papadimitriou, Saberi and Vazirani gave a polynomial-time algorithm for exactly computing an equilibrium for "Fisher" markets with "linear" utility functions. Their algorithm uses the primal–dual paradigm in the enhanced setting of KKT conditions and convex programs. Their algorithm is weakly-polynomial: it solves formula_21 maximum flow problems, and thus it runs in time formula_22, where "u"max and "B"max are the maximum utility and budget, respectively.
Orlin gave an improved algorithm for a Fisher market model with linear utilities, running in time formula_23. He then improved his algorithm to run in strongly-polynomial time: formula_24.
Devanur and Kannan gave algorithms for "Arrow-Debreu" markets with "concave" utility functions, where all resources are goods (the utilities are positive):
Codenotti, McCune, Penumatcha and Varadarajan gave an algorithm for Arrow-Debreu markets with CES utilities where the elasticity of substitution is at least 1/2.
Bads and mixed manna.
Bogomolnaia and Moulin and Sandomirskiy and Yanovskaia studied the existence and properties of CE in a Fisher market with bads (items with negative utilities) and with a mixture of goods and bads. In contrast to the setting with goods, when the resources are bads the CE does not solve any convex optimization problem even with linear utilities. CE allocations correspond to local minima, local maxima, and saddle points of the product of utilities on the Pareto frontier of the set of feasible utilities. The CE rule becomes multivalued. This work has led to several works on algorithms of finding CE in such markets:
If both "n" and "m" are variable, the problem becomes computationally hard:
Main techniques.
Bang-for-buck.
When the utilities are linear, the "bang-per-buck" of agent "i" (also called BPB or "utility-per-coin") is defined as the utility of "i" divided by the price paid. The BPB of a single resource is formula_25; the total BPB is formula_26.
A key observation for finding a CE in a Fisher market with linear utilities is that, in any CE and for any agent "i": formula_27 and formula_28; that is, the agent spends his budget only on products that maximize his bang-per-buck.
Assume that every product formula_3 has a potential buyer - a buyer formula_5 with formula_29. Then, the above inequalities imply that formula_30, i.e., all prices are positive.
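The two conditions above can be checked mechanically for a candidate allocation and price-vector; the following small sketch (with illustrative names and numbers) tests whether an agent with linear utilities spends money only on maximum bang-per-buck products:

```python
def spends_only_on_max_bpb(u_i, x_i, prices, tol=1e-9):
    """True iff agent i buys only products whose bang-per-buck is maximal."""
    bpb = [u / p for u, p in zip(u_i, prices)]
    best = max(bpb)
    return all(x == 0 or bpb[j] >= best - tol for j, x in enumerate(x_i))

print(spends_only_on_max_bpb([2.0, 1.0], [0.5, 0.0], [1.0, 1.0]))  # True
print(spends_only_on_max_bpb([2.0, 1.0], [0.0, 1.0], [1.0, 1.0]))  # False
```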
Cell decomposition.
Cell decomposition is a process of partitioning the space of possible prices formula_31 into small "cells", either by hyperplanes or, more generally, by polynomial surfaces. A cell is defined by specifying on which side of each of these surfaces it lies (with polynomial surfaces, the cells are also known as "semialgebraic sets"). For each cell, we either find a market-clearing price-vector (i.e., a price in that cell for which a market-clearing allocation exists), or verify that the cell does not contain a market-clearing price-vector. The challenge is to find a decomposition with the following properties:
Convex optimization: homogeneous utilities.
If the utilities of all agents are homogeneous functions, then the equilibrium conditions in the Fisher model can be written as solutions to a convex optimization program called the Eisenberg-Gale convex program. This program finds an allocation that maximizes the "weighted geometric mean" of the buyers' utilities, where the weights are determined by the budgets. Equivalently, it maximizes the weighted arithmetic mean of the logarithms of the utilities:
Maximize formula_34
Subject to:
"Non-negative quantities": For every buyer formula_5 and product formula_3: formula_35
"Sufficient supplies": For every product formula_3: formula_36
(since supplies are normalized to 1).
This optimization problem can be solved using the Karush–Kuhn–Tucker conditions (KKT). These conditions introduce Lagrangian multipliers that can be interpreted as the "prices", formula_37. In every allocation that maximizes the Eisenberg-Gale program, every buyer receives a demanded bundle. That is, a solution to the Eisenberg-Gale program represents a market equilibrium.
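For linear utilities, the Eisenberg-Gale program can be written down and solved directly with an off-the-shelf convex solver, and the equilibrium prices can be read off as the dual variables of the supply constraints. The sketch below uses cvxpy with made-up valuations and budgets; it illustrates only the formulation, not any of the specialized algorithms discussed in this article:

```python
import numpy as np
import cvxpy as cp

# Hypothetical linear Fisher market: 2 buyers, 3 goods, unit supplies.
U = np.array([[1.0, 2.0, 0.0],    # buyer 0's valuations u_{0,j}
              [0.0, 1.0, 3.0]])   # buyer 1's valuations u_{1,j}
B = np.array([1.0, 1.0])          # budgets

n, m = U.shape
x = cp.Variable((n, m), nonneg=True)            # allocation x_{i,j}
utilities = cp.sum(cp.multiply(U, x), axis=1)   # linear utilities u_i(x)
supply = cp.sum(x, axis=0) <= 1                 # each good has supply 1

# Maximize the budget-weighted sum of log-utilities (Eisenberg-Gale objective).
problem = cp.Problem(cp.Maximize(B @ cp.log(utilities)), [supply])
problem.solve()

prices = supply.dual_value   # KKT multipliers of the supply constraints
print("allocation:\n", np.round(x.value, 3))
print("clearing prices:", np.round(prices, 3))
```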
Vazirani's algorithm: linear utilities, weakly polynomial-time.
A special case of homogeneous utilities is when all buyers have linear utility functions. We assume that each resource has a "potential buyer" - a buyer that derives positive utility from that resource. Under this assumption, market-clearing prices exist and are unique. The proof is based on the Eisenberg-Gale program. The KKT conditions imply that the optimal solutions (allocations formula_38 and prices formula_39) satisfy the following inequalities:
1. All prices are non-negative: formula_40.
2. If a product has a positive price, then all its supply is exhausted: formula_41.
3. For every buyer and product, the bang-per-buck of the product is at most the buyer's total bang-per-buck: formula_27.
4. A buyer purchases a product only if its bang-per-buck equals his total bang-per-buck: formula_28.
Assume that every product formula_3 has a potential buyer - a buyer formula_5 with formula_29. Then, inequality 3 implies that formula_30, i.e., all prices are positive. Then, inequality 2 implies that all supplies are exhausted. Inequality 4 implies that all buyers' budgets are exhausted, i.e., the market clears. Since the log function is a strictly concave function, if there is more than one equilibrium allocation then the utility derived by each buyer in both allocations must be the same (a decrease in the utility of a buyer cannot be compensated by an increase in the utility of another buyer). This, together with inequality 4, implies that the prices are unique.
Vazirani presented an algorithm for finding equilibrium prices and allocations in a linear Fisher market. The algorithm is based on condition 4 above. The condition implies that, in equilibrium, every buyer buys only products that give him maximum BPB. Let's say that a buyer "likes" a product, if that product gives him maximum BPB in the current prices. Given a price-vector, construct a flow network in which the capacity of each edge represents the total money "flowing" through that edge. The network is as follows: there is a source node "s", a sink node "t", a node for each product and a node for each buyer; an edge from "s" to each product "j" with capacity "p"j; an edge from each product "j" to each buyer "i" who likes "j", with infinite capacity; and an edge from each buyer "i" to "t" with capacity "B"i.
The price-vector "p" is an equilibrium price-vector, if and only if the two cuts ({s},V\{s}) and (V\{t},{t}) are min-cuts. Hence, an equilibrium price-vector can be found using the following scheme:
There is an algorithm that solves this problem in weakly polynomial time.
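Under the network construction described above, checking whether a given price-vector clears the market reduces to a single max-flow computation. The sketch below uses networkx; the market data are made up, and the helper only verifies candidate prices rather than searching for them:

```python
import networkx as nx

def is_equilibrium_price(U, budgets, prices, tol=1e-9):
    """Check Vazirani's condition: the cuts around the source and the sink
    are both minimum cuts of the money-flow network."""
    n, m = len(U), len(prices)
    G = nx.DiGraph()
    for j in range(m):
        G.add_edge('s', ('good', j), capacity=prices[j])
    for i in range(n):
        G.add_edge(('buyer', i), 't', capacity=budgets[i])
        bpb = [U[i][j] / prices[j] for j in range(m)]
        best = max(bpb)
        for j in range(m):
            if bpb[j] >= best - tol:                   # buyer i "likes" good j
                G.add_edge(('good', j), ('buyer', i))  # no capacity attribute = unbounded
    flow = nx.maximum_flow_value(G, 's', 't')
    return abs(flow - sum(prices)) < 1e-6 and abs(flow - sum(budgets)) < 1e-6

# Made-up 2x2 market: these prices clear it (each buyer spends all money on one good).
print(is_equilibrium_price([[2.0, 1.0], [1.0, 2.0]], [1.0, 1.0], [1.0, 1.0]))  # True
```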
Online computation.
Recently, Gao, Peysakhovich and Kroer presented an algorithm for online computation of market equilibrium.
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "m"
},
{
"math_id": 1,
"text": "\\mathbf{x} = x_1,\\dots,x_m"
},
{
"math_id": 2,
"text": "x_j"
},
{
"math_id": 3,
"text": "j"
},
{
"math_id": 4,
"text": "n"
},
{
"math_id": 5,
"text": "i"
},
{
"math_id": 6,
"text": "u_i"
},
{
"math_id": 7,
"text": "B_i"
},
{
"math_id": 8,
"text": " \\mathbf{e}^i"
},
{
"math_id": 9,
"text": "\\mathbf{p} = p_1,\\dots,p_m"
},
{
"math_id": 10,
"text": "\\mathbf{x}"
},
{
"math_id": 11,
"text": "\\mathbf{p} \\cdot \\mathbf{x}=\\sum_{j=1}^m p_j\\cdot x_j"
},
{
"math_id": 12,
"text": "\\mathbf{x}^i"
},
{
"math_id": 13,
"text": "\\mathbf{p} \\cdot \\mathbf{x}^i \\leq B_i"
},
{
"math_id": 14,
"text": "\\mathbf{p} \\cdot \\mathbf{x}^i \\leq \\mathbf{p} \\cdot \\mathbf{e}^i"
},
{
"math_id": 15,
"text": "\\mathbf{x}^i \\in \\text{Demand}_i(\\mathbf{p})"
},
{
"math_id": 16,
"text": " \\text{Demand}_i(\\mathbf{p}) := \\arg\\max_{\\mathbf{p} \\mathbf{x}\\leq B_i} u_i(\\mathbf{x})"
},
{
"math_id": 17,
"text": " u_i(\\mathbf{x}) = \\sum_{j=1}^m u_{i,j}(x_j)"
},
{
"math_id": 18,
"text": " u_{i,j}(x_j)"
},
{
"math_id": 19,
"text": "u_i(\\mathbf{x}) = \\sum_{j=1}^m u_{i,j}\\cdot x_j"
},
{
"math_id": 20,
"text": "u_{i,j}"
},
{
"math_id": 21,
"text": "O((n+m)^5\\log(u_{\\max}) + (n+m)^4\\log{B_{\\max}})"
},
{
"math_id": 22,
"text": "O((n+m)^8\\log(u_{\\max}) + (n+m)^7\\log{B_{\\max}})"
},
{
"math_id": 23,
"text": "O((n+m)^4\\log(u_{\\max}) + (n+m)^3 B_{\\max})"
},
{
"math_id": 24,
"text": "O((m+n)^4\\log(m+n))"
},
{
"math_id": 25,
"text": "bpb_{i,j} := \\frac{u_{i,j}}{p_j}"
},
{
"math_id": 26,
"text": "bpb_{i,total} := \\frac{\\sum_{j=1}^m u_{i,j}\\cdot x_{i,j}}{B_i}"
},
{
"math_id": 27,
"text": "\\forall j: bpb_{i,j}\\leq bpb_{i,total} "
},
{
"math_id": 28,
"text": "\\forall j: x_{i,j}>0 \\implies bpb_{i,j} = bpb_{i,total}"
},
{
"math_id": 29,
"text": "u_{i,j}>0"
},
{
"math_id": 30,
"text": "p_j>0"
},
{
"math_id": 31,
"text": "\\mathbb{R}^m_+"
},
{
"math_id": 32,
"text": "O(k^m)"
},
{
"math_id": 33,
"text": "O(k^{m+1}\\cdot d^{O(m)})"
},
{
"math_id": 34,
"text": "\\sum_{i=1}^n \\left( B_i\\cdot \\log{(u_i)} \\right)"
},
{
"math_id": 35,
"text": "x_{i,j}\\geq 0"
},
{
"math_id": 36,
"text": "\\sum_{i=1}^n x_{i,j} \\leq 1"
},
{
"math_id": 37,
"text": "p_1,\\dots,p_m"
},
{
"math_id": 38,
"text": "x_{i,j}"
},
{
"math_id": 39,
"text": "p_j"
},
{
"math_id": 40,
"text": "p_j\\geq 0"
},
{
"math_id": 41,
"text": "p_j>0 \\implies \\sum_{i=1}^n x_{i,j} = 1"
}
] |
https://en.wikipedia.org/wiki?curid=67004382
|
6701870
|
NOON state
|
Quantum-mechanical many-body entangled state
In quantum optics, a NOON state or N00N state is a quantum-mechanical many-body entangled state:
formula_0
which represents a superposition of "N" particles in mode "a" with zero particles in mode "b", and vice versa. Usually, the particles are photons, but in principle any bosonic field can support NOON states.
Applications.
NOON states are an important concept in quantum metrology and quantum sensing for their ability to make precision phase measurements when used in an optical interferometer. For example, consider the observable
formula_1
The expectation value of formula_2 for a system in a NOON state switches between +1 and −1 when formula_3 changes from 0 to formula_4. Moreover, the error in the phase measurement becomes
formula_5
This is the so-called Heisenberg limit, and gives a quadratic improvement over the standard quantum limit. NOON states are closely related to Schrödinger cat states and GHZ states, and are extremely fragile.
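A short calculation (a sketch using the state and observable defined above) makes the 1/"N" scaling explicit:

```latex
% Expectation values of A in the NOON state with relative phase theta:
\langle A\rangle = \cos(N\theta), \qquad \langle A^2\rangle = 1,
\qquad \Delta A = \sqrt{\langle A^2\rangle - \langle A\rangle^2} = |\sin(N\theta)| .
% Propagating the uncertainty gives the Heisenberg-limited phase error:
\Delta\theta = \frac{\Delta A}{\left| d\langle A\rangle / d\theta \right|}
             = \frac{|\sin(N\theta)|}{N\,|\sin(N\theta)|} = \frac{1}{N}.
```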
Towards experimental realization.
There have been several theoretical proposals for creating photonic NOON states. Pieter Kok, Hwang Lee, and Jonathan Dowling proposed the first general method based on post-selection via photodetection. The down-side of this method was its exponential scaling of the success probability of the protocol. Pryde and White subsequently introduced a simplified method using intensity-symmetric multiport beam splitters, single photon inputs, and either heralded or conditional measurement. Their method, for example, allows heralded production of the "N" = 4 NOON state without the need for postselection or zero photon detections, and has the same success probability of 3/64 as the more complicated circuit of Kok et al. Cable and Dowling proposed a method that has polynomial scaling in the success probability, which can therefore be called efficient.
Two-photon NOON states, where "N" = 2, can be created deterministically from two identical photons and a 50:50 beam splitter. This is called the Hong–Ou–Mandel effect in quantum optics. Three- and four-photon NOON states cannot be created deterministically from single-photon states, but they have been created probabilistically via post-selection using spontaneous parametric down-conversion. A different approach, involving the interference of non-classical light created by spontaneous parametric down-conversion and a classical laser beam on a 50:50 beam splitter, was used by I. Afek, O. Ambar, and Y. Silberberg to experimentally demonstrate the production of NOON states up to "N" = 5.
Super-resolution has previously been used as an indicator of NOON state production; in 2005 Resch et al. showed that it could equally well be produced by classical interferometry. They showed that only phase super-sensitivity is an unambiguous indicator of a NOON state; furthermore, they introduced criteria for determining whether it has been achieved, based on the observed visibility and efficiency. Phase super-sensitivity of NOON states with "N" = 2 has been demonstrated experimentally; super-resolution, but not super-sensitivity (because the efficiency was too low), has also been demonstrated for NOON states of up to "N" = 4 photons.
History and terminology.
NOON states were first introduced by Barry C. Sanders in the context of studying quantum decoherence in Schrödinger cat states. They were independently rediscovered in 2000 by Jonathan P. Dowling's group at JPL, who introduced them as the basis for the concept of quantum lithography. The term "NOON state" first appeared in print as a footnote in a paper published by Hwang Lee, Pieter Kok, and Jonathan Dowling on quantum metrology, where it was spelled N00N, with zeros instead of Os.
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "|\\psi_\\text{NOON} \\rangle = \\frac{|N \\rangle_a |0\\rangle_b + e^{iN \\theta} |{0}\\rangle_a |{N}\\rangle_b}{\\sqrt{2}}, \\, "
},
{
"math_id": 1,
"text": " A = |N,0\\rangle\\langle 0,N| + |0,N\\rangle\\langle N,0|. \\, "
},
{
"math_id": 2,
"text": "A"
},
{
"math_id": 3,
"text": "\\theta"
},
{
"math_id": 4,
"text": "\\pi/N"
},
{
"math_id": 5,
"text": " \\Delta \\theta = \\frac{\\Delta A}{|d\\langle A\\rangle / d\\theta|} = \\frac{1}{N}. "
}
] |
https://en.wikipedia.org/wiki?curid=6701870
|
6701941
|
Arylsulfatase
|
Class of enzymes
Arylsulfatase (EC 3.1.6.1, sulfatase, nitrocatechol sulfatase, phenolsulfatase, phenylsulfatase, "p"-nitrophenyl sulfatase, arylsulfohydrolase, 4-methylumbelliferyl sulfatase, estrogen sulfatase) is a type of sulfatase enzyme with systematic name "aryl-sulfate sulfohydrolase". This enzyme catalyses the following chemical reaction
an aryl sulfate + H2O formula_0 a phenol + sulfate
It catalyses an analogous reaction for sulfonated hexoses. Types include:
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "\\rightleftharpoons"
}
] |
https://en.wikipedia.org/wiki?curid=6701941
|
67020015
|
1 Chronicles 23
|
First Book of Chronicles, chapter 23
1 Chronicles 23 is the twenty-third chapter of the Books of Chronicles in the Hebrew Bible or the First Book of Chronicles in the Old Testament of the Christian Bible. The book is compiled from older sources by an unknown person or group, designated by modern scholars as "the Chronicler", and had the final shape established in late fifth or fourth century BCE. This chapter records the divisions and duties of the Levites. The whole chapter belongs to the section focusing on the kingship of David (1 Chronicles 9:35 to 29:30), which from chapter 22 to the end does not have parallel in 2 Samuel.
Text.
This chapter was originally written in the Hebrew language. It is divided into 32 verses.
Textual witnesses.
Some early manuscripts containing the text of this chapter in Hebrew are of the Masoretic Text tradition, which includes the Aleppo Codex (10th century), and Codex Leningradensis (1008).
Extant manuscripts of a Koine Greek translation known as the Septuagint, made in the last few centuries BCE, include Codex Vaticanus (B; formula_0B; 4th century), Codex Alexandrinus (A; formula_0A; 5th century) and Codex Marchalianus (Q; formula_0Q; 6th century).
David organizes the Levites (23:1–24).
This section details David's preparation for his succession as he reached a venerable stage of life; his priority was to instruct the leaders of Israel, the priests and the Levites, who would help Solomon reign and build the temple. The census of the Levites (verses 3–5) does not contradict 1 Chronicles 21:6, because it is not a general population census but concerns only the division of duties ascribed to this particular tribe. The Levites are recorded here not primarily according to their family trees, but significantly according to their functions: officers and judges, gatekeepers, and musicians. They are listed further in 1 Chronicles 23–26.
"So when David was old and full of days, he made Solomon his son king over Israel."
"And he gathered together all the princes of Israel, with the priests and the Levites."
Verse 2.
David prepared well for his death and the reign of his successor, Solomon, by convening his officials to achieve a smooth transition (without mentioning the events recorded in 2 Samuel 15–1 Kings 2).
The verse parallels 1 Chronicles 28:1, with the latter serving as a 'resumptive repetition', although there are apparent differences (cf. 1 Chronicles 13:5 and 15:3, which show that repetition need not always be 'resumptive').
"Now the Levites were numbered from the age of thirty years and above; and the number of individual males was thirty-eight thousand."
Verse 3.
The minimum age of Levites for holding office varies, perhaps according to the number of people available for the duties: 30 years old and above in Numbers 4:3, 23, 30, as here; 25 in Numbers 8:24; 20 in Ezra 3:8; 1 Chronicles 23:24–27; 2 Chronicles 31:17.
"And David divided them into courses among the sons of Levi, namely, Gershon, Kohath, and Merari."
Verse 6.
The 'tripartite segmentation' of the Levites is similar to that in Exodus 6:16–19; Numbers 3:17–39; 1 Chronicles 6:1, 16–47. The families of these three Levite clans are listed in verses 7–24, resulting in 24 courses (Japhet (1993: 43): Gershon 10, Kohath 9, Merari 5) or 22 (Rudolph (1955: 155): Gershon 9, Kohath 9, Merari 4). Each course or division was to perform duties in rotation until a round was completed and a new round was started.
Duties of the Levites (23:25–32).
This section covers the duties of the Levites, partly repeating those mentioned in 1 Chronicles 9. The peace granted by YHWH to his people forced changes to be made in the job descriptions (verses 25–26, 28–32), in contrast to Deuteronomy 12:8–12 or 1 Chronicles 22:9.
Notes.
<templatestyles src="Reflist/styles.css" />
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": " \\mathfrak{G}"
}
] |
https://en.wikipedia.org/wiki?curid=67020015
|
6702115
|
N4-(beta-N-acetylglucosaminyl)-L-asparaginase
|
N4-(beta-N-acetylglucosaminyl)-L-asparaginase (EC 3.5.1.26, "aspartylglucosylamine deaspartylase", "aspartylglucosylaminase", "aspartylglucosaminidase", "aspartylglycosylamine amidohydrolase", "N-aspartyl-beta-glucosaminidase", "glucosylamidase", "beta-aspartylglucosylamine amidohydrolase", "4-N-(beta-N-acetyl-D-glucosaminyl)-L-asparagine amidohydrolase") is an enzyme with systematic name "N4-(beta-N-acetyl-D-glucosaminyl)-L-asparagine amidohydrolase". This enzyme catalyses the following chemical reaction
N4-(beta-N-acetyl-D-glucosaminyl)-L-asparagine + H2O formula_0 N-acetyl-beta-D-glucosaminylamine + L-aspartate
This enzyme acts only on asparagine-oligosaccharides containing one amino acid.
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "\\rightleftharpoons"
}
] |
https://en.wikipedia.org/wiki?curid=6702115
|
67026852
|
1 Chronicles 24
|
First Book of Chronicles, chapter 24
1 Chronicles 24 is the twenty-fourth chapter of the Books of Chronicles in the Hebrew Bible or the First Book of Chronicles in the Old Testament of the Christian Bible. The book is compiled from older sources by an unknown person or group, designated by modern scholars as "the Chronicler", and had the final shape established in late fifth or fourth century BCE. This chapter records the organization and departments of priests (verses 1–19) and a list of non-priestly Levites (verses 20–31). The whole chapter belongs to the section focusing on the kingship of David (1 Chronicles 9:35 to 29:30), which from chapter 22 to the end does not have any parallel in 2 Samuel.
Text.
This chapter was originally written in the Hebrew language. It is divided into 31 verses.
Textual witnesses.
Some early manuscripts containing the text of this chapter in Hebrew are of the Masoretic Text tradition, which includes the Aleppo Codex (10th century), and Codex Leningradensis (1008).
Extant manuscripts of a Koine Greek translation known as the Septuagint, made in the last few centuries BCE, include Codex Vaticanus (B; formula_0B; 4th century), Codex Alexandrinus (A; formula_0A; 5th century) and Codex Marchalianus (Q; formula_0Q; 6th century).
David organizes the priests (24:1–19).
This section details the organization of the priests, the highest branch of the Levites, in a more advanced and systematic manner than anywhere else in the Hebrew Bible; this organization was adhered to rigidly until the Roman period. Partial lists of the priestly families are also found elsewhere in the Hebrew Bible.
"Now these are the divisions of the sons of Aaron. The sons of Aaron; Nadab, and Abihu, Eleazar, and Ithamar."
Verse 1.
Among the four sons of Aaron, Nadab and Abihu died without children (verse 2); the other two had to supply the "chief men of the house", of whom Eleazar's line provided sixteen, and Ithamar's eight (verse 4).
"And David distributed them, both Zadok of the sons of Eleazar, and Ahimelech of the sons of Ithamar, according to their offices in their service."
Verse 3.
Of the two priestly families, Zadok represented the family of Eleazar, and Ahimelech represented the family of Ithamar, to help David organize the priests. The Chronicler emphasizes the equal treatment of the two groups in the passage (cf. 24:31; 26:13), using a procedure of drawing lots (verse 5), also found in 1 Chronicles (24:31; 25:8; 26:13) and elsewhere (for example, Nehemiah 10:35), to indicate God's hand in the distribution of the personnel.
Remaining Levite assignments (24:20–31).
This section contains a list of Levites which overlaps with the list in the previous chapter. The Levites had a similar rotation schedule to the priests (verse 31), and used the same system of drawing lots as the priests, with almost the same witnesses, indicating that the Levites were considered as important as the priests.
"Today is the holy Sabbath, the holy Sabbath unto the Lord; this day, which is the course? [Appropriate name] is the course. May the Merciful One return the course to its place soon, in our days. Amen."
After which, they would recount the number of years that have passed since the destruction of Jerusalem, and conclude with the words:
"May the Merciful One build his house and sanctuary, and let them say "Amen"."
Document witnesses for priestly divisions.
1. In 1920, a marble stone inscription was found in Ashkelon showing a partial list of the priestly wards, attesting to the existence of such plaques, perhaps mounted on the walls of synagogues.
2. In 1962, three small fragments of one Hebrew stone inscription, dated to the 3rd/4th centuries, were found in Caesarea Maritima, bearing the partial names of places associated with the priestly courses (the rest of which have been reconstructed). This is the oldest inscription mentioning Nazareth as a location outside the Bible and pilgrim notes.
3. In 1970, the stone inscription DJE 23 was discovered on a partially buried column in a mosque in the Yemeni village of Bayt Ḥaḍir, showing ten names of the priestly wards and their respective towns and villages. The Yemeni inscription is the longest roster of names of this sort discovered to this day; several of the names on the stone column, discovered by Walter W. Müller, remain legible.
Notes.
<templatestyles src="Reflist/styles.css" />
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": " \\mathfrak{G}"
}
] |
https://en.wikipedia.org/wiki?curid=67026852
|
67026853
|
1 Chronicles 25
|
First Book of Chronicles, chapter 25
1 Chronicles 25 is the twenty-fifth chapter of the Books of Chronicles in the Hebrew Bible or the First Book of Chronicles in the Old Testament of the Christian Bible. The book is compiled from older sources by an unknown person or group, designated by modern scholars as "the Chronicler", and had the final shape established in late fifth or fourth century BCE. This chapter records the organization and departments of Levite temple musicians, from three main families (verses 1–19) and the drawing of lots to allocate individual musicians' duties (verses 20–31). The whole chapter belongs to the section focusing on the kingship of David (1 Chronicles 9:35 to 29:30), which from chapter 22 to the end does not have parallel in 2 Samuel.
Text.
This chapter was originally written in the Hebrew language. It is divided into 31 verses.
Textual witnesses.
Some early manuscripts containing the text of this chapter in Hebrew are of the Masoretic Text tradition, which includes the Aleppo Codex (10th century), and Codex Leningradensis (1008).
Extant manuscripts of a Koine Greek translation known as the Septuagint, made in the last few centuries BCE, include Codex Vaticanus (B; formula_0B; 4th century), Codex Alexandrinus (A; formula_0A; 5th century) and Codex Marchalianus (Q; formula_0Q; 6th century).
Three families of musicians (25:1–6).
This section details the organization of the temple musicians, who strictly belonged to the Levites (1 Chronicles 23:30–31; cf. 1 Chronicles 15:16–24; 16:4–6). There were three main musician families: Asaph, Jeduthun, and Heman, whose members would be organized into divisions. After the return from exile in Babylon, all musicians are occasionally regarded as descendants of Asaph (Ezra 2:41; 3:10; Nehemiah 7:44). Nehemiah 11:17 and 1 Chronicles 9:15–16 mention both Asaph and Jeduthun, whereas a 'third tradition' speaks of three musicians comprising Asaph, Heman, and Ethan (1 Chronicles 6:44; 15:17, 19). The similar spelling of the two names supports the identification of Jeduthun with Ethan. The order Asaph, Jeduthun, Heman probably reflects an older hierarchy, but the tone of this passage elevates the family of Heman as the largest ('according to the promise of God to exalt him', verse 5). The musician families are introduced with their duties, such as to prophesy (verses 1–3), and their instruments, so that their singing, playing, and the content of their psalms or music can be viewed, as in 1 Samuel 10:5 and 2 Kings 3:15, as emphasizing the 'close relationship between music and prophecy'. In 2 Chronicles 29:25 David gave an order, which was supported by two prophets (Gad and Nathan), to confirm the permanent office of these Levites as temple musicians. Allusions to song and music as a kind of prophecy (verses 1–3; cf. 2 Chronicles 24:19–22) may be related to the tradition of regarding David as a 'prophet who composed the psalms through divine inspiration'.
"Moreover David and the captains of the host separated to the service of the sons of Asaph, and of Heman, and of Jeduthun, who should prophesy with harps, with psalteries, and with cymbals: and the number of the workmen according to their service was:"
Verse 1.
Asaph, Heman, and Jeduthun belonged respectively to the Gershon, Kohath, and Merari families, which are the three branches of the Levites.
"All these were the sons of Heman the king’s seer in the words of God, to exalt his horn. For God gave Heman fourteen sons and three daughters."
Twenty-four divisions of musicians (25:7–31).
The allocation of 24 divisions of musicians resembles that of the priests, suggesting that 'sacrifice and music are closely intertwined' (cf. 23:29–30), but, unlike the priests, none of the names in the list can be proved to have existed in other texts. There are four divisions from the family of Asaph (numbers 1, 3, 5, 7), six from the family of Jeduthun (numbers 2, 4, 8, 10, 12, 14), and 14 from the family of Heman (numbers 6, 9, 11, 13, 15–24). From the result it can be deduced that the lots were not placed separately by family, but all lots were placed in one urn, so after the lots of Asaph and Jeduthun were drawn, only sons of Heman remained. Each of the 24 musical divisions has 12 members (24 x 12 = 288).
" And they cast lots for their duty, the small as well as the great, the teacher with the student."
Notes.
<templatestyles src="Reflist/styles.css" />
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": " \\mathfrak{G}"
}
] |
https://en.wikipedia.org/wiki?curid=67026853
|
67026855
|
1 Chronicles 26
|
First Book of Chronicles, chapter 26
1 Chronicles 26 is the twenty-sixth chapter of the Books of Chronicles in the Hebrew Bible or the First Book of Chronicles in the Old Testament of the Christian Bible. The book is compiled from older sources by an unknown person or group, designated by modern scholars as "the Chronicler", and had the final shape established in late fifth or fourth century BCE. This chapter describes particular duties of the Levites as gatekeepers (verses 1–19), the temple treasurers (verses 20–28), officers and judges (verses 29–32). The whole chapter belongs to the section focusing on the kingship of David (1 Chronicles 9:35 to 29:30), which from chapter 22 to the end does not have parallel in 2 Samuel.
Text.
This chapter was originally written in the Hebrew language. It is divided into 32 verses.
Textual witnesses.
Some early manuscripts containing the text of this chapter in Hebrew are of the Masoretic Text tradition, which includes the Aleppo Codex (10th century), and Codex Leningradensis (1008).
Extant manuscripts of a Koine Greek translation known as the Septuagint, made in the last few centuries BCE, include Codex Vaticanus (B; formula_0B; 4th century), Codex Alexandrinus (A; formula_0A; 5th century) and Codex Marchalianus (Q; formula_0Q; 6th century).
The gatekeepers (26:1–19).
This section describes the gatekeepers as a part of David's administrative organization; they are counted as Levites in the Chronicles (cf. Ezra 2:42, 70; Nehemiah 11:19). Verses 1–12 contain a list of the members, and their assignments by lots are detailed in verses 13–19, with verses 12–13 as a transition passage between the two parts. A group called "sanctuary guards" existed when David transported the ark earlier in his reign (1 Chronicles 15:18, 23–24; 16:38, 42; 23:5), and here the Levite gatekeepers were to perform guard duty, including opening the Temple gates in the morning. The gatekeepers were also to manage the temple vessels, including holy utensils, and materials, including flour, wine, spices and oil (9:17–32), as well as to perform 'administrative service on behalf of the king' (2 Chronicles 31:14; 34:13). The lottery (verse 13) determined which family would serve at which gate, so the number of family members did not affect the selection process. During the period of return from exile (Ezra 2:42-3) the gatekeepers who were not of levitical rank gradually achieved this status.
"Concerning the divisions of the gatekeepers: of the Korahites, Meshelemiah the son of Kore, of the sons of Asaph."
"4 Moreover the sons of Obed-Edom were Shemaiah the firstborn, Jehozabad the second, Joah the third, Sacar the fourth, Nethanel the fifth, 5 Ammiel the sixth, Issachar the seventh, Peulthai the eighth; for God blessed him."
"6 Also to Shemaiah his son were sons born who governed their fathers' houses, because they were men of great ability. 7 The sons of Shemaiah were Othni, Rephael, Obed, and Elzabad, whose brothers Elihu and Semachiah were able men."
"8 All these were of the sons of Obed-Edom, they and their sons and their brethren, able men with strength for the work: sixty-two of Obed-Edom."
Treasurers, regional officials, and judges (26:20–32).
Some listed here are also mentioned in 1 Chronicles 23:6-23. The Levites were given broad responsibilities, such as 'oversight of Israel west of Jordan' and east of Jordan ("the Reubenites, the Gadites, and the half-tribe of Manasseh"; verse 32) as 'officers and judges' (verses 29–32; cf. 23:3-5; 2 Chronicles 17:2; 19:5). The list of treasury officers (verses 20–28) is linked to verses 29–31, as the Izharites and the Hebronites (verse 23) are mentioned in both passages. The record distinguishes between 'the treasuries of the house of God' (verses 20, 22) under the responsibility of the Gershonites and 'the treasuries for the dedicated things' (verses 20, 26) under the responsibility of the Kohathites. Shebuel of Amram's family (of Kohathite origin; mentioned in verse 24, but also appearing in 23:16; 24:20) seems to oversee both treasuries. Unlike the treasuries of the house of God, those of the dedicated things are described in detail (verses 26–28), including 'spoils of war' provided by various important persons in a 'democratic' manner, which Chronicles probably takes from Numbers 31:48, 52, 54 as a literary source. The wars were fought by David and Saul (extensively recorded in the Books of Samuel, Kings and Chronicles), Samuel (probably referring to 1 Samuel 7:7-14), Abner and Joab (probably those in 2 Samuel 2–4). The administrative duties of the Levites (verses 29–32), in addition to their religious roles (cf. also 23:4 and 2 Chronicles 19:11), would become especially important during the Maccabean period. The order of these duties is based on David's plans and was partly carried out in post-exilic times, reflecting 'a time in which spiritual and secular elements were closely intertwined and the religious and political claim to Transjordanian territories had not been relinquished', which made it important for the Chronicler to include the entire region (cf. 2 Chronicles 19).
"31 Jeriah was chief of the Hebronites according to the genealogical records of his fathers. In the fortieth year of the reign of David, mighty men of valor were found in the records among the Hebronites at Jazer of Gilead, 32 and his brothers, able men, two thousand seven hundred heads of the fathers, to them King David entrusted all matters of God and of the king concerning the Reubenites, the Gadites, and the half-tribe of Manasseh."
Verses 31–32.
The distinction between matters related to the king and those related to God is noted only in the Chronicles (26:30, 32; 2 Chronicles 19:11) and Ezra 7:26.
Notes.
<templatestyles src="Reflist/styles.css" />
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": " \\mathfrak{G}"
}
] |
https://en.wikipedia.org/wiki?curid=67026855
|
67026856
|
1 Chronicles 27
|
First Book of Chronicles, chapter 27
1 Chronicles 27 is the twenty-seventh chapter of the Books of Chronicles in the Hebrew Bible or the First Book of Chronicles in the Old Testament of the Christian Bible. The book is compiled from older sources by an unknown person or group, designated by modern scholars as "the Chronicler", and had the final shape established in late fifth or fourth century BCE. This chapter comprises five parts: David's military divisions and their commanders (verses 1–15), the leaders of the tribes (verses 16–22), a comment on the census (verses 23–24), David's civil officers (verses 25–31), and David's advisers (verses 32–34). The whole chapter belongs to the section focusing on the kingship of David (1 Chronicles 9:35 to 29:30), which from chapter 22 to the end does not have parallel in 2 Samuel.
Text.
This chapter was originally written in the Hebrew language. It is divided into 34 verses.
Textual witnesses.
Some early manuscripts containing the text of this chapter in Hebrew are of the Masoretic Text tradition, which includes the Aleppo Codex (10th century), and Codex Leningradensis (1008).
Extant manuscripts of a Koine Greek translation known as the Septuagint, made in the last few centuries BCE, include Codex Vaticanus (B; formula_0B; 4th century), Codex Alexandrinus (A; formula_0A; 5th century) and Codex Marchalianus (Q; formula_0Q; 6th century).
David's military divisions and their commanders (27:1–15).
The organization of the military was as orderly as that of the priests and Levites. The military forces consisted of 12 divisions of 24,000 men, each subdivided into thousands and hundreds and headed by a divisional leader, reflecting David's standard administrative procedure (1 Chronicles 23:6-23; 24:1-19; 25:8-31; 26:1-12). Each division serves for one month a year, similar to Solomon's system of twelve royal officers, each in charge of supplying the royal court for one month (1 Kings 4:7). The divisions' commanders are all mentioned in the list of David's heroes (11:10-47; 2 Samuel 23:8-39), though they are not the first twelve names listed and, in contrast to chapter 11, the names all originate from the center of David's kingdom. The total army is enormous (288,000 men) and is only deployed as a militia in times of war. Some incongruities with chapter 11, as well as certain other details (such as two commanders of some departments), suggest that this passage is based on real circumstances.
"And the children of Israel, according to their number, the heads of fathers’ houses, the captains of thousands and hundreds and their officers, served the king in every matter of the military divisions. These divisions came in and went out month by month throughout all the months of the year, each division having twenty-four thousand."
Leaders of the tribes (27:16–22).
The list following the army leaders is of the (political) leaders of the tribes (cf. 1 Chronicles 5:6). These leaders are presumed to be involved in carrying out the census reported in verse 23. The twelve tribes are not listed according to a consistent system in the Hebrew Bible, nor using the same names (some tribal chiefs can only be found in the Chronicles). The list is most similar to Numbers 1 (which also involves a census), although not identical. The omission of Gad and Asher and the separation of the Aaronites from Levi are particularly notable in this list.
Comment on the census (27:23–24).
Mathys considers these verses 'an extremely artistic attempt at twisting the story of the census (1 Chronicles 21) to grant David forgiveness for his deed', as they (implicitly) exonerate David by stating that he followed the rules laid down for censuses (by counting only men older than 20 years) and by giving the justification 'for the LORD had promised to make Israel as numerous as the stars of heaven' (cf. ), as spoken by the Lord to Abraham ().
David's civil officers (27:25–31).
This section records detailed information on David's wealth: the geographical dispersal of his agricultural estates (verse 27), the storehouses in both urban and rural areas (verses 27–28), and his highest-ranking administrative officers overseeing the trades (verse 30; camels and donkeys are not related directly to agriculture, but to trade). The list is regarded as a reliable historical document that correctly reflects David's treasury; its historical authenticity is supported by several arguments: the administration is simpler than during Solomon's reign, and nothing contradicts the list's authenticity. The Bedouin (foreigners to the Israelites) were employed among David's administrators for their skill at keeping camels and smaller livestock. The extensive discussion of agriculture is typical of the Chronicles (as also observed in Uzziah's passion for agriculture in 2 Chronicles 26:10).
David's advisers (27:32–34).
This section lists David's closest officials, but it is not a parallel to the list of David's state officials in 1 Chronicles 18:15-17. With historical information given as an aside, it does not seem to be an official list.
"And Ahithophel was the king's counsellor: and Hushai the Archite was the king's companion:"
Verse 33.
A specific account related to Ahitophel and Hushai is recorded in , 23–37.
Notes.
<templatestyles src="Reflist/styles.css" />
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": " \\mathfrak{G}"
}
] |
https://en.wikipedia.org/wiki?curid=67026856
|
67026858
|
1 Chronicles 28
|
First Book of Chronicles, chapter 28
1 Chronicles 28 is the twenty-eighth chapter of the Books of Chronicles in the Hebrew Bible or the First Book of Chronicles in the Old Testament of the Christian Bible. The book is compiled from older sources by an unknown person or group, designated by modern scholars as "the Chronicler", and had the final shape established in late fifth or fourth century BCE. This chapter records David's final speech to all officials of Israel (verses 2–8) and to Solomon (verses 9–10, 20–21), specifically handing him the plans for the temple's construction (verses 11–19). The whole chapter belongs to the section focusing on the kingship of David (1 Chronicles 9:35 to 29:30), which from chapter 22 to the end does not have parallel in 2 Samuel.
Text.
This chapter was originally written in the Hebrew language. It is divided into 21 verses.
Textual witnesses.
Some early manuscripts containing the text of this chapter in Hebrew are of the Masoretic Text tradition, which includes the Aleppo Codex (10th century), and Codex Leningradensis (1008).
Extant manuscripts of a Koine Greek translation known as the Septuagint, made in the last few centuries BCE, include Codex Vaticanus (B; formula_0B; 4th century), Codex Alexandrinus (A; formula_0A; 5th century) and Codex Marchalianus (Q; formula_0Q; 6th century).
David's address to the leaders of Israel (28:1–8).
This section apparently continues from 1 Chronicles 23:1–2. After organizing the administration of his kingdom, David gathers a 'large national convocation' to prepare for the reign of Solomon, to enlist the support of the leaders for the new king, and to have them witness his final messages. Such a gathering or assembly is often recorded in the Chronicles (1 Chronicles 13:5; 15:3; 2 Chronicles 5:2-3; 11:1; 20:26). God's promise given through Nathan (1 Chronicles 17) is repeated with some individual variations, along with a comparison of the selection of David and Solomon for their reigns (verses 4–5) to the system of drawing lots, pointing to YHWH as the active force in creating an eternal kingdom (verses 7–8).
David delivered the plan of the temple to Solomon (28:9–21).
David addressed Solomon briefly in verses 9–10 with an 'adapted tone of a Deuteronomistic theologoumenon', calling his son to serve YHWH with a single (undivided) mind and a willing heart. In verses 11–19, David transferred to Solomon his plans for the temple's construction, its materials, and all matters related to it, based on God's plans given to Moses in Exodus 25–31; this also resembles the plans for the new temple's construction shown to Ezekiel (Ezekiel 40–44). Then, in verses 20–21, David reminded Solomon of God's presence together with the willing support of the priests, the Levites, and the entire population, which provide ideal conditions for executing the construction plan.
"And David said to Solomon his son, Be strong and of good courage, and do it: fear not, nor be dismayed: for the Lord God, even my God, will be with thee; he will not fail thee, nor forsake thee, until thou hast finished all the work for the service of the house of the Lord."
Verse 20.
The transitional message from David to Solomon recalls the one from Moses to Joshua, especially for the phrase "be strong and of good courage" (Deuteronomy 31:7; 31:23; Joshua 1:6–18).
Notes.
<templatestyles src="Reflist/styles.css" />
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": " \\mathfrak{G}"
}
] |
https://en.wikipedia.org/wiki?curid=67026858
|
67026859
|
1 Chronicles 29
|
First Book of Chronicles, chapter 29
1 Chronicles 29 is the twenty-ninth chapter of the Books of Chronicles in the Hebrew Bible or the final chapter in the First Book of Chronicles in the Old Testament of the Christian Bible. The book is compiled from older sources by an unknown person or group, designated by modern scholars as "the Chronicler", and had the final shape established in late fifth or fourth century BCE. This chapter consists of four parts: the voluntary gifts for the temple (verses 1–9), David's prayer and the people's response (verses 10–20); Solomon's accession to the throne (verses 21–25), and the concluding praise of David's reign (verses 26–30). The whole chapter belongs to the section focusing on the kingship of David (1 Chronicles 9:35 to 29:30), which from chapter 22 to the end does not have parallel in 2 Samuel.
Text.
This chapter was originally written in the Hebrew language. It is divided into 30 verses.
Textual witnesses.
Some early manuscripts containing the text of this chapter in Hebrew are of the Masoretic Text tradition, which includes the Aleppo Codex (10th century), and Codex Leningradensis (1008).
Extant manuscripts of a Koine Greek translation known as the Septuagint, made in the last few centuries BCE, include Codex Vaticanus (B; formula_0B; 4th century), Codex Alexandrinus (A; formula_0A; 5th century) and Codex Marchalianus (Q; formula_0Q; 6th century).
Offerings for the Temple (29:1–9).
This section records David's collection of materials for the temple construction, which encouraged other leaders of Israel to offer generous ('willing') donations, far more than David's, in parallel to Israel's gifts for the construction of the Tabernacle (; ). David contributed to the costs of the temple's construction both as a king (cf. 1 Kings) and as an ordinary believer, with freedom and joy.
David's farewell prayer and the people's response (29:10–20).
The section records David's prayer, beginning with a doxology, continuing with an interpretation of the voluntary donations and concluding with a wish for people not to forget the past and a wish for the future reign of King Solomon. The form of the prayer (cf. 2 Samuel 23:1-7; 1 Kings 2:1-10) follows the final addresses by great leaders in the past: Jacob (Genesis 49:1-28), Moses (Deuteronomy 32:1-47; 33:1-29), Joshua (Joshua 23:1-16; 24:1-28), and Samuel (1 Samuel 12:1-25).
"Yours, O Lord, is the greatness,"
"The power and the glory,"
"The victory and the majesty;"
"For all that is in heaven and in earth is Yours;"
"Yours is the kingdom, O Lord,"
"And You are exalted as head over all."
Solomon, king of Israel (29:21–25).
The accession of Solomon is reported as smooth and without incident, followed by a public endorsement (for the second time; cf. ) of Solomon's enthronement by all the people of Israel.
The close of David’s reign (29:26–30).
The summary of an individual king's reign is a standard practice in the books of Kings, with that of David differing from the usual pattern in 1 Kings 2:10-12, but closer to the other kings' concluding formulae in the Chronicles. The Chronicles cite three prophets (with their differing titles) who provide the records of David's reign. David was said to enjoy a productive and respected life, with security and longevity as the marks of divine blessings (; ; ; ; ).
Notes.
<templatestyles src="Reflist/styles.css" />
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": " \\mathfrak{G}"
}
] |
https://en.wikipedia.org/wiki?curid=67026859
|
670279
|
Cycle detection
|
Algorithmic problem
In computer science, cycle detection or cycle finding is the algorithmic problem of finding a cycle in a sequence of iterated function values.
For any function f that maps a finite set S to itself, and any initial value "x"0 in S, the sequence of iterated function values
formula_0
must eventually use the same value twice: there must be some pair of distinct indices i and j such that "xi" = "xj". Once this happens, the sequence must continue periodically, by repeating the same sequence of values from "xi" to "x""j" − 1. Cycle detection is the problem of finding i and j, given f and "x"0.
Several algorithms are known for finding cycles quickly and with little memory. Robert W. Floyd's tortoise and hare algorithm moves two pointers at different speeds through the sequence of values until they both point to equal values. Alternatively, Brent's algorithm is based on the idea of exponential search. Both Floyd's and Brent's algorithms use only a constant number of memory cells, and take a number of function evaluations that is proportional to the distance from the start of the sequence to the first repetition. Several other algorithms trade off larger amounts of memory for fewer function evaluations.
The applications of cycle detection include testing the quality of pseudorandom number generators and cryptographic hash functions, computational number theory algorithms, detection of infinite loops in computer programs and periodic configurations in cellular automata, automated shape analysis of linked list data structures, and detection of deadlocks for transactions management in DBMS.
Example.
The figure shows a function f that maps the set "S" = {0,1,2,3,4,5,6,7,8} to itself. If one starts from "x"0 = 2 and repeatedly applies f, one sees the sequence of values
2, 0, 6, 3, 1, 6, 3, 1, 6, 3, 1, ...
The cycle in this value sequence is 6, 3, 1.
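As an illustrative sketch (not part of the original figure), the transitions visible in this example can be written directly in Python; the full figure defines f on all of S, but only the values reachable from "x"0 = 2 matter for the sequence:
# Only the transitions appearing in the example sequence are listed here;
# the figure defines f on the whole set S = {0, 1, ..., 8}.
f_map = {2: 0, 0: 6, 6: 3, 3: 1, 1: 6}

x = 2
sequence = [x]
for _ in range(10):
    x = f_map[x]
    sequence.append(x)
print(sequence)  # [2, 0, 6, 3, 1, 6, 3, 1, 6, 3, 1]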
Definitions.
Let S be any finite set, f be any function from S to itself, and "x"0 be any element of S. For any "i" > 0, let "xi" = "f"("x""i" − 1). Let μ be the smallest index such that the value "x""μ" reappears infinitely often within the sequence of values "xi", and let λ (the loop length) be the smallest positive integer such that "x""μ" = "x""λ" + "μ". The cycle detection problem is the task of finding λ and μ.
One can view the same problem graph-theoretically, by constructing a functional graph (that is, a directed graph in which each vertex has a single outgoing edge) the vertices of which are the elements of S and the edges of which map an element to the corresponding function value, as shown in the figure. The set of vertices reachable from starting vertex "x"0 forms a subgraph with a shape resembling the Greek letter rho (ρ): a path of length μ from "x"0 to a cycle of λ vertices.
Computer representation.
Generally, f will not be specified as a table of values, the way it is shown in the figure above. Rather, a cycle detection algorithm may be given access either to the sequence of values "xi", or to a subroutine for calculating f. The task is to find λ and μ while examining as few values from the sequence or performing as few subroutine calls as possible. Typically, also, the space complexity of an algorithm for the cycle detection problem is of importance: we wish to solve the problem while using an amount of memory significantly smaller than it would take to store the entire sequence.
In some applications, and in particular in Pollard's rho algorithm for integer factorization, the algorithm has much more limited access to S and to f. In Pollard's rho algorithm, for instance, S is the set of integers modulo an unknown prime factor of the number to be factorized, so even the size of S is unknown to the algorithm.
To allow cycle detection algorithms to be used with such limited knowledge, they may be designed based on the following capabilities. Initially, the algorithm is assumed to have in its memory an object representing a pointer to the starting value "x"0. At any step, it may perform one of three actions: it may copy any pointer it has to another object in memory, it may apply f and replace any of its pointers by a pointer to the next object in the sequence, or it may apply a subroutine for determining whether two of its pointers represent equal values in the sequence. The equality test action may involve some nontrivial computation: for instance, in Pollard's rho algorithm, it is implemented by testing whether the difference between two stored values has a nontrivial greatest common divisor with the number to be factored. In this context, by analogy to the pointer machine model of computation, an algorithm that only uses pointer copying, advancement within the sequence, and equality tests may be called a pointer algorithm.
Algorithms.
If the input is given as a subroutine for calculating f, the cycle detection problem may be trivially solved using only "λ" + "μ" function applications, simply by computing the sequence of values "xi" and using a data structure such as a hash table to store these values and test whether each subsequent value has already been stored. However, the space complexity of this algorithm is proportional to "λ" + "μ", unnecessarily large. Additionally, to implement this method as a pointer algorithm would require applying the equality test to each pair of values, resulting in quadratic time overall. Thus, research in this area has concentrated on two goals: using less space than this naive algorithm, and finding pointer algorithms that use fewer equality tests.
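A minimal sketch of this naive method in Python (written for illustration; the function and variable names are not from any particular library): it stores every value seen in a dictionary keyed by value, and stops at the first repetition.
def naive_cycle_detection(f, x0):
    """Return (lam, mu) by storing every value seen, using O(lam + mu) space."""
    seen = {}        # maps a value to the index of its first occurrence
    x, i = x0, 0
    while x not in seen:
        seen[x] = i
        x = f(x)
        i += 1
    mu = seen[x]     # index of the first occurrence of the repeated value
    lam = i - mu     # gap between the two occurrences is the cycle length
    return lam, mu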
Floyd's tortoise and hare.
Floyd's cycle-finding algorithm is a pointer algorithm that uses only two pointers, which move through the sequence at different speeds. It is also called the "tortoise and the hare algorithm", alluding to Aesop's fable of The Tortoise and the Hare.
The algorithm is named after Robert W. Floyd, who was credited with its invention by Donald Knuth. However, the algorithm does not appear in Floyd's published work, and this may be a misattribution: Floyd describes algorithms for listing all simple cycles in a directed graph in a 1967 paper, but this paper does not describe the cycle-finding problem in functional graphs that is the subject of this article. In fact, Knuth's statement (in 1969), attributing it to Floyd, without citation, is the first known appearance in print, and it thus may be a folk theorem, not attributable to a single individual.
The key insight in the algorithm is as follows. If there is a cycle, then, for any integers "i" ≥ "μ" and "k" ≥ 0, "xi" = "x""i" + "kλ", where λ is the length of the loop to be found, μ is the index of the first element of the cycle, and k is a nonnegative integer representing the number of complete loops. Based on this, it can then be shown that "i" = "kλ" ≥ "μ" for some "k" if and only if "xi" = "x"2"i" (if "xi" = "x"2"i" in the cycle, then there exists some k such that 2"i" = "i" + "kλ", which implies that "i" = "kλ"; and if there are some i and k such that "i" = "kλ", then "2i" = "i" + "kλ" and "x"2"i" = "x""i" + "kλ"). Thus, the algorithm only needs to check for repeated values of this special form, one twice as far from the start of the sequence as the other, to find a period ν of a repetition that is a multiple of λ. Once ν is found, the algorithm retraces the sequence from its start to find the first repeated value "x""μ" in the sequence, using the fact that λ divides ν and therefore that "x""μ" = "x""μ" + "ν". Finally, once the value of μ is known, it is trivial to find the length λ of the shortest repeating cycle, by searching for the first position "μ" + "λ" for which "x""μ" + "λ" = "x""μ".
The algorithm thus maintains two pointers into the given sequence, one (the tortoise) at "xi", and the other (the hare) at "x"2"i". At each step of the algorithm, it increases i by one, moving the tortoise one step forward and the hare two steps forward in the sequence, and then compares the sequence values at these two pointers. The smallest value of "i" > 0 for which the tortoise and hare point to equal values is the desired value ν.
The following Python code shows how this idea may be implemented as an algorithm.
def floyd(f, x0) -> (int, int):
    """Floyd's cycle detection algorithm."""
    # Main phase of algorithm: finding a repetition x_i = x_2i.
    # The hare moves twice as quickly as the tortoise and
    # the distance between them increases by 1 at each step.
    # Eventually they will both be inside the cycle and then,
    # at some point, the distance between them will be
    # divisible by the period λ.
    tortoise = f(x0)  # f(x0) is the element/node next to x0.
    hare = f(f(x0))
    while tortoise != hare:
        tortoise = f(tortoise)
        hare = f(f(hare))

    # At this point the tortoise position, ν, which is also equal
    # to the distance between hare and tortoise, is divisible by
    # the period λ. So hare moving in cycle one step at a time,
    # and tortoise (reset to x0) moving towards the cycle, will
    # intersect at the beginning of the cycle. Because the
    # distance between them is constant at 2ν, a multiple of λ,
    # they will agree as soon as the tortoise reaches index μ.

    # Find the position μ of first repetition.
    mu = 0
    tortoise = x0
    while tortoise != hare:
        tortoise = f(tortoise)
        hare = f(hare)  # Hare and tortoise move at same speed
        mu += 1

    # Find the length of the shortest cycle starting from x_μ
    # The hare moves one step at a time while tortoise is still.
    # lam is incremented until λ is found.
    lam = 1
    hare = f(tortoise)
    while tortoise != hare:
        hare = f(hare)
        lam += 1

    return lam, mu
This code only accesses the sequence by storing and copying pointers, function evaluations, and equality tests; therefore, it qualifies as a pointer algorithm. The algorithm uses "O"("λ" + "μ") operations of these types, and "O"(1) storage space.
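For example, applying the function above to the sequence from the Example section (a usage sketch; the transition table lists only the values reachable from "x"0 = 2) yields λ = 3 and μ = 2:
f_map = {2: 0, 0: 6, 6: 3, 3: 1, 1: 6}
lam, mu = floyd(lambda x: f_map[x], 2)
print(lam, mu)  # 3 2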
Brent's algorithm.
Richard P. Brent described an alternative cycle detection algorithm that, like the tortoise and hare algorithm, requires only two pointers into the sequence. However, it is based on a different principle: searching for the smallest power of two 2^"i" that is larger than both λ and μ. For "i" = 0, 1, 2, ..., the algorithm compares "x"2^"i"−1 with each subsequent sequence value up to the next power of two, stopping when it finds a match. It has two advantages compared to the tortoise and hare algorithm: it finds the correct length λ of the cycle directly, rather than needing to search for it in a subsequent stage, and its steps involve only one evaluation of the function f rather than three.
The following Python code shows how this technique works in more detail.
def brent(f, x0) -> (int, int):
    """Brent's cycle detection algorithm."""
    # main phase: search successive powers of two
    power = lam = 1
    tortoise = x0
    hare = f(x0)  # f(x0) is the element/node next to x0.
    while tortoise != hare:
        if power == lam:  # time to start a new power of two?
            tortoise = hare
            power *= 2
            lam = 0
        hare = f(hare)
        lam += 1

    # Find the position of the first repetition of length λ
    tortoise = hare = x0
    for i in range(lam):
        # range(lam) produces a list with the values 0, 1, ... , lam-1
        hare = f(hare)
    # The distance between the hare and tortoise is now λ.

    # Next, the hare and tortoise move at same speed until they agree
    mu = 0
    while tortoise != hare:
        tortoise = f(tortoise)
        hare = f(hare)
        mu += 1

    return lam, mu
Like the tortoise and hare algorithm, this is a pointer algorithm that uses "O"("λ" + "μ") tests and function evaluations and "O"(1) storage space. It is not difficult to show that the number of function evaluations can never be higher than for Floyd's algorithm. Brent claims that, on average, his cycle finding algorithm runs around 36% more quickly than Floyd's and that it speeds up the Pollard rho algorithm by around 24%. He also performs an average case analysis for a randomized version of the algorithm in which the sequence of indices traced by the slower of the two pointers is not the powers of two themselves, but rather a randomized multiple of the powers of two. Although his main intended application was in integer factorization algorithms, Brent also discusses applications in testing pseudorandom number generators.
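Applied to the same example (again a usage sketch with the partial transition table used above), Brent's algorithm returns the same result:
f_map = {2: 0, 0: 6, 6: 3, 3: 1, 1: 6}
lam, mu = brent(lambda x: f_map[x], 2)
print(lam, mu)  # 3 2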
Gosper's algorithm.
R. W. Gosper's algorithm finds the period formula_1, and the lower and upper bound of the starting point, formula_2 and formula_3, of the first cycle. The difference between the lower and upper bound is of the same order as the period, i.e. formula_4.
Advantages.
The main feature of Gosper's algorithm is that it never backs up to reevaluate the generator function, and is economical in both space and time. It could be roughly described as a concurrent version of Brent's algorithm. While Brent's algorithm gradually increases the gap between the tortoise and hare, Gosper's algorithm uses several tortoises (several previous values are saved), which are roughly exponentially spaced. According to the note in HAKMEM item 132, this algorithm will detect repetition before the third occurrence of any value, i.e. the cycle will be iterated at most twice. This note also states that it is sufficient to store formula_5 previous values; however, the provided implementation stores formula_6 values. For example, assume the function values are 32-bit integers and the "second iteration" of the cycle ends after at most 2^32 function evaluations since the beginning (viz. formula_7). Then Gosper's algorithm will find the cycle after at most 2^32 function evaluations, while consuming the space of 33 values (each value being a 32-bit integer).
Complexity.
Upon the formula_8-th evaluation of the generator function, the algorithm compares the generated value with formula_9 previous values; observe that formula_8 goes up to at least formula_10 and at most formula_11. Therefore, the time complexity of this algorithm is formula_12. Since it stores formula_6 values, its space complexity is formula_6. This is under the usual assumption, present throughout this article, that the size of the function values is constant. Without this assumption, the space complexity is formula_13 since we need at least formula_10 distinct values and thus the size of each value is formula_14.
Time–space tradeoffs.
A number of authors have studied techniques for cycle detection that use more memory than Floyd's and Brent's methods, but detect cycles more quickly. In general these methods store several previously-computed sequence values, and test whether each new value equals one of the previously-computed values. In order to do so quickly, they typically use a hash table or similar data structure for storing the previously-computed values, and therefore are not pointer algorithms: in particular, they usually cannot be applied to Pollard's rho algorithm. Where these methods differ is in how they determine which values to store. Following Nivasch, we survey these techniques briefly.
Any cycle detection algorithm that stores at most M values from the input sequence must perform at least formula_16 function evaluations.
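One concrete technique of this kind is the stack-based algorithm described by Nivasch. The sketch below is an illustrative implementation written for this article rather than a reference version; it assumes the sequence values can be compared with <, and it returns only the cycle length λ, which it detects by the second occurrence of the smallest value on the cycle.
def nivasch(f, x0):
    """Stack algorithm sketch: return the cycle length lam.

    The stack holds (value, index) pairs with strictly increasing values.
    The minimum value of the cycle is never popped once pushed, so the
    algorithm stops by that value's second occurrence, i.e. after at
    most mu + 2*lam evaluations of f.
    """
    stack = []                 # (value, index) pairs, values increasing upward
    x, i = x0, 0
    while True:
        # Discard all stored values larger than the current one.
        while stack and stack[-1][0] > x:
            stack.pop()
        if stack and stack[-1][0] == x:
            # The matching entry is the previous occurrence of this value,
            # so the index difference is exactly the period.
            return i - stack[-1][1]
        stack.append((x, i))
        x = f(x)
        i += 1

f_map = {2: 0, 0: 6, 6: 3, 3: 1, 1: 6}
print(nivasch(lambda x: f_map[x], 2))  # 3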
Applications.
Cycle detection has been used in many applications.
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": " x_0,\\ x_1=f(x_0),\\ x_2=f(x_1),\\ \\dots,\\ x_i=f(x_{i-1}),\\ \\dots"
},
{
"math_id": 1,
"text": "\\lambda"
},
{
"math_id": 2,
"text": "\\mu_l"
},
{
"math_id": 3,
"text": "\\mu_u"
},
{
"math_id": 4,
"text": "\\mu_l + \\lambda \\sim \\mu_h"
},
{
"math_id": 5,
"text": "\\Theta(\\log \\lambda)"
},
{
"math_id": 6,
"text": "\\Theta(\\log (\\mu + \\lambda))"
},
{
"math_id": 7,
"text": "\\mu + 2\\lambda \\le 2^{32}"
},
{
"math_id": 8,
"text": "i"
},
{
"math_id": 9,
"text": "O(\\log i)"
},
{
"math_id": 10,
"text": "\\mu + \\lambda"
},
{
"math_id": 11,
"text": "\\mu + 2\\lambda"
},
{
"math_id": 12,
"text": "O((\\mu + \\lambda) \\cdot \\log (\\mu + \\lambda))"
},
{
"math_id": 13,
"text": "\\Omega(\\log^2 (\\mu + \\lambda))"
},
{
"math_id": 14,
"text": "\\Omega(\\log (\\mu + \\lambda))"
},
{
"math_id": 15,
"text": "(\\lambda+\\mu)(1+cM^{-1/2})"
},
{
"math_id": 16,
"text": "(\\lambda+\\mu)\\left(1+\\frac{1}{M-1}\\right)"
}
] |
https://en.wikipedia.org/wiki?curid=670279
|
6703320
|
Inner regular measure
|
In mathematics, an inner regular measure is one for which the measure of a set can be approximated from within by compact subsets.
Definition.
Let ("X", "T") be a Hausdorff topological space and let Σ be a σ-algebra on "X" that contains the topology "T" (so that every open set is a measurable set, and Σ is at least as fine as the Borel σ-algebra on "X"). Then a measure "μ" on the measurable space ("X", Σ) is called inner regular if, for every set "A" in Σ,
formula_0
This property is sometimes referred to in words as "approximation from within by compact sets."
Some authors use the term tight as a synonym for inner regular. This use of the term is closely related to tightness of a family of measures, since a finite measure "μ" is inner regular if and only if, for all "ε" > 0, there is some compact subset "K" of "X" such that "μ"("X" \ "K") < "ε". This is precisely the condition that the singleton collection of measures {"μ"} is tight.
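As a small worked illustration of the tightness condition (added here as a sketch, not part of the original text), any finite Borel measure "μ" on R with its Euclidean topology is tight: the sets R \ [−"n", "n"] decrease to the empty set, so continuity from above (valid because "μ" is finite) gives
\mu\bigl(\mathbb{R} \setminus [-n, n]\bigr) \longrightarrow \mu(\varnothing) = 0 \quad \text{as } n \to \infty,
so for every "ε" > 0 there is an "n" with "μ"(R \ [−"n", "n"]) < "ε", and "K" = [−"n", "n"] is compact by the Heine–Borel theorem.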
Examples.
When the real line R is given its usual Euclidean topology, both Lebesgue measure and the standard Gaussian measure are inner regular.
However, if the topology on R is changed, then these measures can fail to be inner regular. For example, if R is given the lower limit topology (which generates the same σ-algebra as the Euclidean topology), then both of the above measures fail to be inner regular, because compact sets in that topology are necessarily countable, and hence of measure zero.
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "\\mu (A) = \\sup \\{ \\mu (K) \\mid \\text{compact } K \\subseteq A \\}."
}
] |
https://en.wikipedia.org/wiki?curid=6703320
|
67036263
|
Electron-on-helium qubit
|
Quantum bit
An electron-on-helium qubit is a quantum bit for which the orthonormal basis states |0⟩ and |1⟩ are defined by quantized motional states or alternatively the spin states of an electron trapped above the surface of liquid helium. The electron-on-helium qubit was proposed as the basic element for building quantum computers with electrons on helium by Platzman and Dykman in 1999.
History of electrons on helium.
The electrostrictive binding of electrons to the surface of liquid helium was first demonstrated experimentally by Bruschi and co-workers in 1966. A theoretical treatment of the electron-helium interaction was developed by Cole and Cohen in 1969 and, independently, by Shikin in 1970. An electron close to the surface of liquid helium experiences an attractive force due to the formation of a weak (~0.01"e") image charge in the dielectric liquid. However, the electron is prevented from entering the liquid by a high (~1 eV) barrier formed at the surface due to the hard-core repulsion of the electron by the helium atoms. As a result, the electron remains trapped outside the liquid. The energy of the electron in this potential well is quantised in a hydrogen-like series with the modified Rydberg constant "R"He formula_0 10^−4 "R"H. The binding energies of the ground ("n" = 1) and first excited ("n" = 2) states are −7.6 K and −1.9 K respectively and, as the energy required for excitation is higher than the typical experimental temperature (formula_1 1 K), the electron remains in the ground state, trapped several nanometres above the liquid surface. The first spectroscopic evidence for these surface states was presented by Grimes and co-workers in 1976.
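As a quick consistency check (added here for illustration, assuming the standard hydrogen-like scaling of the bound-state energies with the quoted effective Rydberg constant), the stated values satisfy
E_n = -\frac{R_{\mathrm{He}}}{n^2}, \qquad E_1 \approx -7.6\ \mathrm{K}, \qquad E_2 = \frac{E_1}{4} \approx -1.9\ \mathrm{K}.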
The electron motion parallel to the helium surface is free and, as the surface is free of impurities, the electron can move across the helium with record-high mobility. The liquid surface can support electron densities up to an electrohydrodynamic limit of 2.4×10^9 cm^−2, much lower than those typically achieved in semiconductor two-dimensional electron gases. For such low densities the electron system is described by nondegenerate statistics and, because the Coulomb interaction between electrons is only weakly screened by the helium, the spatial position of an electron in the 2D layer is strongly correlated with that of its neighbours. At low temperatures (typically below 1 K) the Coulomb interaction energy overcomes the electron thermal energy and the electrons form a 2D triangular lattice, the classical Wigner solid. The surface density can be increased towards the degenerate Fermi regime on thin helium films covering solid substrates, or on other cryogenic substrates that exhibit a negative electron affinity such as solid hydrogen or neon, although measurements on these substrates are typically hindered by surface roughness.
Since the 1970s, electrons on helium have been used to study the properties of 2D electron liquids and solids, as well as the liquid helium (⁴He or ³He) substrate. Notable areas of research include collective electron excitations and edge magnetoplasmon effects, many-body transport phenomena and Kosterlitz-Thouless melting in 2D, polaronic effects at the helium interface, the observation of microwave-induced zero-resistance states and incompressible states in the nondegenerate electron gas, and the mapping of the texture of superfluid ³He via interactions between the electron solid and quasiparticle excitations in the superfluid. In recent years, micron-scale helium channels with sub-surface gate electrodes have been used to create devices in which single surface-state electrons can be manipulated, facilitating the integration of electrons on helium with semiconductor device architectures and superconducting circuits.
Proposed quantum computing schemes – Rydberg, spin and orbital states.
In the Platzman and Dykman proposal, the ground and first excited Rydberg energy levels of electrons, trapped above electrodes submerged under the helium surface, were proposed as the qubit basis states. The intrinsic low temperature of the system allowed the straightforward preparation of the qubit in the ground state. Qubit operations were performed via the excitation of the Rydberg transition with resonant microwave fields at frequencies ~120 GHz. Qubit interactions were facilitated by the long-range Coulomb interaction between electrons. Qubit read-out was achieved by the selective ionisation of excited electrons from the helium surface. In 2000, Lea and co-authors proposed that the qubit read-out could be achieved using a single electron transistor (SET) device positioned beneath the helium.
In 2006, Lyon proposed that the spin state of an electron on helium could also be used as a qubit. A CCD-like architecture was proposed for the control of the many-qubit system with dipole-dipole interaction allowing two-qubit gate operations for adjacent spins. A global magnetic field parallel to the helium surface provided the axis for spin excitation, with local magnetic fields applied by submerged conductors used to bring the spins into resonance with microwave fields for qubit excitation. Exchange interaction for adjacent qubits was proposed as a read-out scheme, as demonstrated in semiconductor double-quantum-dot devices.
In 2010 Schuster and co-workers proposed that for an electron in a lateral trapping potential the orbital states for motion parallel to the helium surface could be used as qubit basis states. The electron trap was integrated into a superconducting coplanar cavity device. It was shown that, as in many superconducting qubit systems, the resonant exchange of microwave photons between the trapped electron and the cavity could be described by the Jaynes-Cummings Hamiltonian. Distant qubits could be coupled via a cavity bus. It was also shown that local magnetic field gradients could allow coupling between the electron spin state and the lateral motion, facilitating the read-out of the spin state via microwave spectroscopy of the cavity.
Decoherence.
In any quantum computer the decoherence of the qubit wavefunction, due to energy relaxation or dephasing effects, must be limited to a suitably low rate. For electron-on-helium qubits, deformations of the helium surface due to surface or bulk excitations (ripplons or phonons) modify the image charge potential and distort the electron wavefunction. Therefore, for Rydberg and orbital states, the primary source of decoherence is expected to be the emission of ripplons or phonons in the helium substrate. However, the decay rate due to these processes is expected to be slow (~100 μs) compared with the rate at which qubit operations can be performed (~10 ns). For the spin state, the inherent purity of the qubit environment and the weak spin-orbit interaction for an electron moving above the helium surface result in predicted coherence times formula_2 1 s.
Current developments.
The first trapping and detection of single electrons on helium was demonstrated by Lea and co-workers in 2005, using a micron-scale helium-filled trap and a single electron transistor beneath the surface to count the electrons. This experiment also demonstrated the first coupling between an electron on helium and a superconducting quantum circuit. Subsequently, other experiments have demonstrated progress towards the coherent control of single electrons on helium. These include ultra-efficient electron clocking in microchannel CCD devices, controlled single electron transport measurements, and the trapping and manipulation of 1D electron arrays. In 2019, Koolstra and co-workers at the University of Chicago demonstrated the coupling of a single electron on helium to a superconducting microwave cavity, with a coupling strength "g"/2π ~ 5 MHz, much larger than the resonator linewidth of ~0.5 MHz. In 2020, researchers from Michigan State University and EeroQ presented new results and fabrication progress on an electron-on-helium chip design using the lateral motional state of the electron, at frequencies in the 5–10 GHz range, with a single-electron transistor readout device.
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "\\approx"
},
{
"math_id": 1,
"text": "\\lesssim"
},
{
"math_id": 2,
"text": ">"
}
] |
https://en.wikipedia.org/wiki?curid=67036263
|
6703729
|
Ramanujan's congruences
|
Some remarkable congruences for the partition function
In mathematics, Ramanujan's congruences are the congruences for the partition function "p"("n") discovered by Srinivasa Ramanujan:
formula_0
In plain words, the first congruence means that if a number is 4 more than a multiple of 5, i.e. it is in the sequence
4, 9, 14, 19, 24, 29, . . .
then the number of its partitions is a multiple of 5.
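A short computational check of the first congruence (an illustrative sketch written for this article, not part of Ramanujan's argument) computes "p"("n") with a standard dynamic program and verifies divisibility by 5 for the first few terms of the sequence:
def partition_numbers(n_max):
    """Return [p(0), p(1), ..., p(n_max)] using the standard coin-style DP."""
    p = [0] * (n_max + 1)
    p[0] = 1
    for part in range(1, n_max + 1):          # allow parts of size `part`
        for total in range(part, n_max + 1):
            p[total] += p[total - part]
    return p

p = partition_numbers(54)
for k in range(11):
    n = 5 * k + 4
    print(n, p[n], p[n] % 5)   # the remainder in the last column is always 0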
Later other congruences of this type were discovered, for numbers and for Tau-functions.
Background.
In his 1919 paper, Ramanujan proved the first two congruences using the following identities (using q-Pochhammer symbol notation):
formula_1
He then stated that "It appears there are no equally simple properties for any moduli involving primes other than these".
After Ramanujan died in 1920, G. H. Hardy extracted proofs of all three congruences from an unpublished manuscript of Ramanujan on "p"("n") (Ramanujan, 1921). The proof in this manuscript employs the Eisenstein series.
In 1944, Freeman Dyson defined the rank function for a partition and conjectured the existence of a "crank" function for partitions that would provide a combinatorial proof of Ramanujan's congruences modulo 11. Forty years later, George Andrews and Frank Garvan found such a function, and proved the celebrated result that the crank simultaneously "explains" the three Ramanujan congruences modulo 5, 7 and 11.
In the 1960s, A. O. L. Atkin of the University of Illinois at Chicago discovered additional congruences for small prime moduli. For example:
formula_2
Extending the results of A. Atkin, Ken Ono in 2000 proved that there are such Ramanujan congruences modulo every integer coprime to 6. For example, his results give
formula_3
Later Ken Ono conjectured that the elusive crank also satisfies exactly the same types of general congruences. This was proved by his Ph.D. student Karl Mahlburg in his 2005 paper "Partition Congruences and the Andrews–Garvan–Dyson Crank", linked below. This paper won the first Proceedings of the National Academy of Sciences Paper of the Year prize.
A conceptual explanation for Ramanujan's observation was finally discovered in January 2011 by considering the Hausdorff dimension of the following formula_4 function in the ℓ-adic topology:
formula_5
It is seen to have dimension 0 only in the cases where "ℓ" = 5, 7 or 11, and since the partition function can be written as a linear combination of these functions, this can be considered a formalization and proof of Ramanujan's observation.
In 2001, R.L. Weaver gave an effective algorithm for finding congruences of the partition function, and tabulated 76,065 congruences. This was extended in 2012 by F. Johansson to 22,474,608,014 congruences, one large example being
formula_6
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "\n\\begin{align}\np(5k+4) & \\equiv 0 \\pmod 5, \\\\\np(7k+5) & \\equiv 0 \\pmod 7, \\\\\np(11k+6) & \\equiv 0 \\pmod {11}.\n\\end{align}\n"
},
{
"math_id": 1,
"text": "\n\\begin{align}\n& \\sum_{k=0}^\\infty p(5k+4)q^k=5\\frac{(q^5)_\\infty^5}{(q)_\\infty^6}, \\\\[4pt]\n& \\sum_{k=0}^\\infty p(7k+5)q^k=7\\frac{(q^7)_\\infty^3}{(q)_\\infty^4}+49q\\frac{(q^7)_\\infty^7}{(q)_\\infty^8}.\n\\end{align}\n"
},
{
"math_id": 2,
"text": "p(11^3 \\cdot 13k + 237)\\equiv 0 \\pmod {13}."
},
{
"math_id": 3,
"text": "p(107^4\\cdot 31k + 30064597)\\equiv 0\\pmod{31}."
},
{
"math_id": 4,
"text": "P"
},
{
"math_id": 5,
"text": "P_\\ell(b;z) := \\sum_{n=0}^\\infty p\\left(\\frac{\\ell^bn+1}{24}\\right)q^{n/24}."
},
{
"math_id": 6,
"text": "p(999959^4\\cdot29k+ 28995221336976431135321047) \\equiv 0 \\pmod{29}."
}
] |
https://en.wikipedia.org/wiki?curid=6703729
|
67037549
|
Giovanni Camillo Glorioso
|
Italian mathematician and astronomer
Giovanni Camillo Glorioso (or Gloriosi) (1572 – 8 January 1643) was an Italian mathematician and astronomer. He was a friend of Marino Ghetaldi and Galileo Galilei's successor as professor of mathematics at Padua.
Life.
Giovanni Camillo Glorioso was born in the village of Montecorvino Rovella, near Salerno. He earned degrees in philosophy and theology from the University of Naples and studied mathematics with Vincenzo Filliucci and Giovanni Giacomo Staserio at the Jesuit college in Naples. He was a friend and correspondent of Galileo Galilei and replaced him as professor of mathematics at the University of Padua in 1613. He became famous for his observations of the comet of 1618, of Mars, and of Saturn. He was a close friend of the mathematician Antonio Santini (1577-1662) and was involved in a series of bitter arguments with the Aristotelian philosophers Scipione Chiaramonti and Fortunio Liceti and the Swiss mathematician Barthélemy Souvey, who succeeded him in the chair of mathematics at Padua in 1624.
Glorioso was particularly harsh in his attack on Scipione Chiaramonti's efforts to defend traditional Aristotelian cosmology. He criticised Chiaramonti's "De tribus novis stellis" and in 1636 Chiaramonti published a refutation, "Examen censurae Gloriosi", to which Glorioso replied the following year with "Castigatio examinis". To this Chiaramonti responded in turn with "Castigatio Ioannis Camilli Gloriosi aduersus Scipionem Claramontium Caesenatem" (1638). Glorioso's final contribution to this dispute was his "Responsio" (1641). As he died soon after, this allowed Chiaramonti the last word, which he took with a volume of more than 500 pages, summarising his Aristotelian positions on a wide range of topics, his "Opus Scipionis Claramontis Caesenatis de Universo" (1644).
In contrast with Galileo, Glorioso shared Brahe's conclusion that comets were heavenly bodies, a position in agreement with our modern understanding. In a letter written on 29 May 1610 to his friend Johann Schreck, Glorioso attributed the invention of the sector to Michiel Coignet and not to Galileo, although the instrument is now mainly attributed to Coignet's friend Fabrizio Mordente. In the same letter Glorioso also claimed that Galileo did not invent the telescope, but only modified a previous invention by a Belgian scholar.
Glorioso was one of the leading Italian algebraists of the time. In his most important work, "Exercitationum Mathematicarum Decades tres" (1627-1639), he refutes the quadrature of the circle by Giambattista della Porta, comments on Viète, and gives solutions to interesting questions concerning the theory of numbers. It is one of the first works to use the notation formula_0 for the equivalent of the modern formula_1.
Late in his life Glorioso returned to Naples, where he befriended Francesco Fontana. Glorioso encouraged Fontana to devote himself to astronomical research and gave him access to his own library. Glorioso died in Naples on 8 January 1643. After his death, his library was sold to the viceroy Ramiro Núñez de Guzmán.
Further reading.
<templatestyles src="Refbegin/styles.css" />
|
[
{
"math_id": 0,
"text": "Aqqc"
},
{
"math_id": 1,
"text": "A^{8}"
}
] |
https://en.wikipedia.org/wiki?curid=67037549
|
670376
|
Power-flow study
|
Numerical analysis of electric power flow
In power engineering, the power-flow study, or load-flow study, is a numerical analysis of the flow of electric power in an interconnected system. A power-flow study usually uses simplified notations such as a one-line diagram and per-unit system, and focuses on various aspects of AC power parameters, such as voltages, voltage angles, real power and reactive power. It analyzes the power systems in normal steady-state operation.
Power-flow or load-flow studies are important for planning future expansion of power systems as well as in determining the best operation of existing systems. The principal information obtained from the power-flow study is the magnitude and phase angle of the voltage at each bus, and the real and reactive power flowing in each line.
Commercial power systems are usually too complex to allow for hand solution of the power flow. Special-purpose network analyzers were built between 1929 and the early 1960s to provide laboratory-scale physical models of power systems. Large-scale digital computers replaced the analog methods with numerical solutions.
In addition to a power-flow study, computer programs perform related calculations such as short-circuit fault analysis, stability studies (transient and steady-state), unit commitment and economic dispatch. In particular, some programs use linear programming to find the "optimal power flow", the conditions which give the lowest cost per kilowatt hour delivered.
A load flow study is especially valuable for a system with multiple load centers, such as a refinery complex. The power-flow study is an analysis of the system’s capability to adequately supply the connected load. The total system losses, as well as individual line losses, also are tabulated. Transformer tap positions are selected to ensure the correct voltage at critical locations such as motor control centers. Performing a load-flow study on an existing system provides insight and recommendations as to the system operation and optimization of control settings to obtain maximum capacity while minimizing the operating costs. The results of such an analysis are in terms of active power, reactive power, voltage magnitude and phase angle. Furthermore, power-flow computations are crucial for optimal operations of groups of generating units.
In terms of its approach to uncertainties, load-flow studies can be divided into deterministic load flow and uncertainty-concerned load flow. A deterministic load-flow study does not take into account the uncertainties arising from both power generation and load behavior. To take the uncertainties into consideration, several approaches have been used, such as probabilistic, possibilistic, information gap decision theory, robust optimization, and interval analysis.
Model.
An alternating current power-flow model is a model used in electrical engineering to analyze power grids. It provides a nonlinear system of equations that describes the energy flow through each transmission line. The problem is nonlinear because the power flow into load impedances is a function of the square of the applied voltages. Due to this nonlinearity, in many cases the analysis of a large network via the AC power-flow model is not feasible, and a linear (but less accurate) DC power-flow model is used instead.
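A hedged sketch of the DC power-flow approximation (an illustration written for this section, with made-up line data; it assumes lossless lines, flat 1.0 per-unit voltage magnitudes, and small angle differences): the bus voltage angles follow from a single linear solve against the network susceptance matrix, with the slack bus angle fixed at zero.
import numpy as np

# Illustrative 3-bus network; bus 0 is the slack.
# Line data: (from_bus, to_bus, reactance in per unit).
lines = [(0, 1, 0.10), (1, 2, 0.20), (0, 2, 0.25)]
n_bus = 3
P = np.array([0.0, -0.6, -0.4])   # net injections in p.u. (loads are negative)

# Build the susceptance (weighted Laplacian) matrix B.
B = np.zeros((n_bus, n_bus))
for i, j, x in lines:
    b = 1.0 / x
    B[i, i] += b
    B[j, j] += b
    B[i, j] -= b
    B[j, i] -= b

# Fix the slack angle at zero and solve B' * theta = P for the remaining buses.
theta = np.zeros(n_bus)
theta[1:] = np.linalg.solve(B[1:, 1:], P[1:])

# Line flows in the DC approximation: P_ij = (theta_i - theta_j) / x_ij.
for i, j, x in lines:
    print(f"flow {i}->{j}: {(theta[i] - theta[j]) / x:+.3f} p.u.")
The slack bus absorbs any power imbalance, which is why its row and column are removed before the solve.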
Usually analysis of a three-phase power system is simplified by assuming balanced loading of all three phases. Sinusoidal steady-state operation is assumed, with no transient changes in power flow or voltage due to load or generation changes, meaning all current and voltage waveforms are sinusoidal with no DC offset and have the same constant frequency. The previous assumption is the same as assuming the power system is linear time-invariant (even though the system of equations is nonlinear), driven by sinusoidal sources of the same frequency, and operating in steady state, which allows the use of phasor analysis, another simplification. A further simplification is to use the per-unit system to represent all voltages, power flows, and impedances, scaling the actual target system values to some convenient base. A system one-line diagram is the basis for building a mathematical model of the generators, loads, buses, and transmission lines of the system, and their electrical impedances and ratings.
Power-flow problem formulation.
The goal of a power-flow study is to obtain complete voltage angles and magnitude information for each bus in a power system for specified load and generator real power and voltage conditions. Once this information is known, real and reactive power flow on each branch as well as generator reactive power output can be analytically determined. Due to the nonlinear nature of this problem, numerical methods are employed to obtain a solution that is within an acceptable tolerance.
The solution to the power-flow problem begins with identifying the known and unknown variables in the system. The known and unknown variables are dependent on the type of bus. A bus without any generators connected to it is called a Load Bus. With one exception, a bus with at least one generator connected to it is called a Generator Bus. The exception is one arbitrarily-selected bus that has a generator. This bus is referred to as the slack bus.
In the power-flow problem, it is assumed that the real power formula_0 and reactive power formula_1 at each Load Bus are known. For this reason, Load Buses are also known as PQ Buses. For Generator Buses, it is assumed that the real power generated formula_2 and the voltage magnitude formula_3 is known. For the Slack Bus, it is assumed that the voltage magnitude formula_3 and voltage phase formula_4 are known. Therefore, for each Load Bus, both the voltage magnitude and angle are unknown and must be solved for; for each Generator Bus, the voltage angle must be solved for; there are no variables that must be solved for the Slack Bus. In a system with formula_5 buses and formula_6 generators, there are then formula_7 unknowns.
In order to solve for the formula_7 unknowns, there must be formula_7 equations that do not introduce any new unknown variables. The possible equations to use are power balance equations, which can be written for real and reactive power for each bus.
The real power balance equation is:
formula_8
where formula_9 is the net active power injected at bus "i", formula_10 is the real part of the element in the bus admittance matrix YBUS corresponding to the formula_11 row and formula_12 column, formula_13 is the imaginary part of the element in the YBUS corresponding to the formula_11 row and formula_12 column and formula_14 is the difference in voltage angle between the formula_11 and formula_12 buses (formula_15). The reactive power balance equation is:
formula_16
where formula_17 is the net reactive power injected at bus "i".
Equations included are the real and reactive power balance equations for each Load Bus and the real power balance equation for each Generator Bus. Only the real power balance equation is written for a Generator Bus because the net reactive power injected is assumed to be unknown and therefore including the reactive power balance equation would result in an additional unknown variable. For similar reasons, there are no equations written for the Slack Bus.
In many transmission systems, the impedance of the power network lines is primarily inductive, i.e. the phase angles of the power lines impedance are usually relatively large and very close to 90 degrees. There is thus a strong coupling between real power and voltage angle, and between reactive power and voltage magnitude, while the coupling between real power and voltage magnitude, as well as reactive power and voltage angle, is weak. As a result, real power is usually transmitted from the bus with higher voltage angle to the bus with lower voltage angle, and reactive power is usually transmitted from the bus with higher voltage magnitude to the bus with lower voltage magnitude. However, this approximation does not hold when the phase angle of the power line impedance is relatively small.
Newton–Raphson solution method.
There are several different methods of solving the resulting nonlinear system of equations. The most popular is a variation of the Newton–Raphson method. The Newton-Raphson method is an iterative method which begins with initial guesses of all unknown variables (voltage magnitude and angles at Load Buses and voltage angles at Generator Buses). Next, a Taylor Series is written, with the higher order terms ignored, for each of the power balance equations included in the system of equations. The result is a linear system of equations that can be expressed as:
formula_18
where formula_19 and formula_20 are called the mismatch equations:
formula_21
formula_22
and formula_23 is a matrix of partial derivatives known as a Jacobian:
formula_24.
The linearized system of equations is solved to determine the next guess ("m" + 1) of voltage magnitude and angles based on:
formula_25
formula_26
The process continues until a stopping condition is met. A common stopping condition is to terminate if the norm of the mismatch equations is below a specified tolerance.
A rough outline of the solution of the power-flow problem is: make an initial guess of all unknown voltage magnitudes and angles; evaluate the mismatch equations and the Jacobian at the current guess; solve the linearized system for the corrections to the voltage angles and magnitudes; update the estimates; and repeat until the stopping condition is met.
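A minimal numerical sketch of this iteration is given below for a hypothetical two-bus system (one slack bus and one PQ load bus). The line admittance, load values, and tolerance are illustrative assumptions, and the Jacobian is formed by finite differences for brevity; practical solvers use the analytic partial derivatives of the mismatch equations.

```python
# Newton-Raphson power flow for a two-bus system (bus 0: slack, bus 1: PQ load).
# All quantities are in per-unit; the line impedance and load are illustrative.
import numpy as np

y_line = 1.0 / (0.01 + 0.1j)                  # series admittance of the single line
Y = np.array([[ y_line, -y_line],
              [-y_line,  y_line]])            # bus admittance matrix
G, B = Y.real, Y.imag

P_spec = np.array([0.0, -0.8])                # specified net injections (load draws 0.8 + j0.4)
Q_spec = np.array([0.0, -0.4])

V = np.array([1.0, 1.0])                      # flat start: |V| = 1, theta = 0
theta = np.array([0.0, 0.0])

def mismatches(V, theta):
    """Delta P and Delta Q at the PQ bus (calculated minus specified injection)."""
    i = 1
    P_calc = sum(V[i] * V[k] * (G[i, k] * np.cos(theta[i] - theta[k])
                                + B[i, k] * np.sin(theta[i] - theta[k])) for k in range(2))
    Q_calc = sum(V[i] * V[k] * (G[i, k] * np.sin(theta[i] - theta[k])
                                - B[i, k] * np.cos(theta[i] - theta[k])) for k in range(2))
    return np.array([P_calc - P_spec[i], Q_calc - Q_spec[i]])

x = np.array([0.0, 1.0])                      # unknowns: [theta_1, |V_1|]
for _ in range(20):
    theta[1], V[1] = x
    f = mismatches(V, theta)
    if np.linalg.norm(f) < 1e-8:              # stop when the mismatch norm is small
        break
    J = np.zeros((2, 2))                      # Jacobian by finite differences (for brevity)
    eps = 1e-7
    for j in range(2):
        xp = x.copy()
        xp[j] += eps
        thp, Vp = theta.copy(), V.copy()
        thp[1], Vp[1] = xp
        J[:, j] = (mismatches(Vp, thp) - f) / eps
    x = x - np.linalg.solve(J, f)             # Newton update of angle and magnitude

print("theta_1 = %.4f rad, |V_1| = %.4f pu" % (theta[1], V[1]))
```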
DC power-flow.
Direct current load flow gives estimates of line power flows on AC power systems. It considers only active power flows and neglects reactive power flows. This method is non-iterative and absolutely convergent, but less accurate than AC load-flow solutions. Direct current load flow is used wherever repetitive and fast load-flow estimates are required.
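The sketch below illustrates the idea on a hypothetical three-bus network: with resistances and reactive power neglected and all voltage magnitudes fixed at 1 per-unit, the bus angles follow from a single linear solve and the line flows from the angle differences. The reactances and injections are illustrative.

```python
# DC power flow: solve B * theta = P for the non-slack buses, then read off line flows.
import numpy as np

lines = [(0, 1, 0.10), (1, 2, 0.20), (0, 2, 0.25)]   # (from bus, to bus, reactance in pu)
n_bus = 3
B = np.zeros((n_bus, n_bus))
for i, j, x in lines:
    b = 1.0 / x                     # line susceptance (lossless approximation)
    B[i, i] += b; B[j, j] += b
    B[i, j] -= b; B[j, i] -= b

P = np.array([0.0, 0.9, -0.9])      # net injections in pu; bus 0 is the slack (angle 0)

theta = np.zeros(n_bus)
theta[1:] = np.linalg.solve(B[1:, 1:], P[1:])   # drop the slack row/column and solve

for i, j, x in lines:               # active power flow on each line
    print("flow %d->%d: %+.3f pu" % (i, j, (theta[i] - theta[j]) / x))
```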
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "P_D"
},
{
"math_id": 1,
"text": "Q_D"
},
{
"math_id": 2,
"text": "P_G"
},
{
"math_id": 3,
"text": "|V|"
},
{
"math_id": 4,
"text": "\\theta"
},
{
"math_id": 5,
"text": "N"
},
{
"math_id": 6,
"text": "R"
},
{
"math_id": 7,
"text": "2(N-1) - (R-1)"
},
{
"math_id": 8,
"text": "0 = -P_{i} + \\sum_{k=1}^N |V_i||V_k|(G_{ik}\\cos\\theta_{ik}+B_{ik}\\sin\\theta_{ik})"
},
{
"math_id": 9,
"text": "P_{i}"
},
{
"math_id": 10,
"text": "G_{ik}"
},
{
"math_id": 11,
"text": "i_{th}"
},
{
"math_id": 12,
"text": "k_{th}"
},
{
"math_id": 13,
"text": "B_{ik}"
},
{
"math_id": 14,
"text": "\\theta_{ik}"
},
{
"math_id": 15,
"text": "\\theta_{ik}=\\theta_i-\\theta_k"
},
{
"math_id": 16,
"text": "0 = -Q_{i} + \\sum_{k=1}^N |V_i||V_k|(G_{ik}\\sin\\theta_{ik}-B_{ik}\\cos\\theta_{ik})"
},
{
"math_id": 17,
"text": "Q_i"
},
{
"math_id": 18,
"text": "\\begin{bmatrix}\\Delta \\theta \\\\ \\Delta |V|\\end{bmatrix} = -J^{-1} \\begin{bmatrix}\\Delta P \\\\ \\Delta Q \\end{bmatrix} "
},
{
"math_id": 19,
"text": "\\Delta P"
},
{
"math_id": 20,
"text": "\\Delta Q"
},
{
"math_id": 21,
"text": "\\Delta P_i = -P_i + \\sum_{k=1}^N |V_i||V_k|(G_{ik}\\cos\\theta_{ik}+B_{ik}\\sin \\theta_{ik})"
},
{
"math_id": 22,
"text": "\\Delta Q_{i} = -Q_{i} + \\sum_{k=1}^N |V_i||V_k|(G_{ik}\\sin\\theta_{ik}-B_{ik}\\cos\\theta_{ik})"
},
{
"math_id": 23,
"text": "J"
},
{
"math_id": 24,
"text": "J=\\begin{bmatrix} \\dfrac{\\partial \\Delta P}{\\partial\\theta} & \\dfrac{\\partial \\Delta P}{\\partial |V|} \\\\ \\dfrac{\\partial \\Delta Q}{\\partial \\theta}& \\dfrac{\\partial \\Delta Q}{\\partial |V|}\\end{bmatrix}"
},
{
"math_id": 25,
"text": "\\theta_{m+1} = \\theta_m + \\Delta \\theta\\,"
},
{
"math_id": 26,
"text": "|V|_{m+1} = |V|_m + \\Delta |V|\\,"
}
] |
https://en.wikipedia.org/wiki?curid=670376
|
6703785
|
Brownian dynamics
|
Ideal molecular motion where no average acceleration takes place
In physics, Brownian dynamics is a mathematical approach for describing the dynamics of molecular systems in the diffusive regime. It is a simplified version of Langevin dynamics and corresponds to the limit where no average acceleration takes place. This approximation is also known as overdamped Langevin dynamics or as Langevin dynamics without inertia.
Definition.
In Brownian dynamics, the following equation of motion is used to describe the dynamics of a stochastic system with coordinates formula_0:
formula_1
where formula_2 is the velocity of the particle, formula_3 is the particle interaction potential, formula_4 is the gradient operator (so that formula_5 is the force acting on the particle), formula_6 is the Boltzmann constant, formula_7 is the temperature, formula_8 is a diffusion constant, and formula_9 is a Gaussian white-noise term with zero mean (formula_10) and delta-correlation in time (formula_11).
Derivation.
In Langevin dynamics, the equation of motion using the same notation as above is as follows:
formula_12
where formula_13 is the mass of the particle, formula_14 is its acceleration, and formula_15 is the friction coefficient (with units of formula_16), which may be written as formula_17 in terms of the collision frequency formula_18 (with units of formula_19); for a spherical particle of radius formula_24 in a fluid of viscosity η, Stokes' law gives formula_20.
The above equation may be rewritten as
formula_21
In Brownian dynamics, the inertial force term formula_22 is so much smaller than the other three that it is considered negligible. In this case, the equation is approximately
formula_23
For spherical particles of radius formula_24 in the limit of low Reynolds number, we can use the Stokes–Einstein relation. In this case, formula_25, and the equation reads:
formula_26
As the magnitude of the friction tensor formula_15 increases, the damping effect of the viscous force becomes dominant relative to the inertial force, and the system transitions from the inertial to the diffusive (Brownian) regime. For this reason, Brownian dynamics is also known as overdamped Langevin dynamics or Langevin dynamics without inertia.
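As an illustration, the overdamped equation can be integrated numerically with the Euler–Maruyama scheme. The sketch below does this for a single particle in a one-dimensional harmonic potential; the potential, time step, and parameter values are illustrative assumptions rather than part of the formulation above.

```python
# Euler-Maruyama integration of overdamped Langevin (Brownian) dynamics,
#   x(t+dt) = x(t) - (D / kT) * U'(x) * dt + sqrt(2 D dt) * N(0, 1),
# for a particle in the harmonic potential U(x) = 0.5 * k_spring * x**2.
import numpy as np

rng = np.random.default_rng(0)
kT, D, k_spring = 1.0, 1.0, 2.0          # illustrative parameter values
dt, n_steps = 1e-3, 200_000

def grad_U(x):
    return k_spring * x                  # dU/dx for the harmonic potential

x = 0.0
samples = np.empty(n_steps)
for step in range(n_steps):
    x += -(D / kT) * grad_U(x) * dt + np.sqrt(2.0 * D * dt) * rng.normal()
    samples[step] = x

# At equilibrium the positions follow the Boltzmann distribution, so the
# variance of x should approach kT / k_spring = 0.5.
print("sample variance:", samples.var())
```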
Inclusion of hydrodynamic interaction.
In 1978, Ermak and McCammon suggested an algorithm for efficiently computing Brownian dynamics with hydrodynamic interactions. Hydrodynamic interactions occur when the particles interact indirectly by generating and reacting to local velocities in the solvent. For a system of formula_27 three-dimensional particles diffusing subject to a force vector F(X), the derived Brownian dynamics scheme becomes:
formula_28
where formula_29 is a diffusion matrix specifying hydrodynamic interactions (the Oseen tensor, for example), whose off-diagonal entries describe the interaction between the target particle formula_30 and a surrounding particle formula_31, formula_32 is the force exerted on particle formula_31, and formula_9 is a Gaussian noise vector with zero mean and a standard deviation of formula_33 in each vector entry. The subscripts formula_30 and formula_31 indicate the IDs of the particles and formula_27 refers to the total number of particles. This equation is valid for dilute systems in which near-field effects are ignored.
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "X=X(t)"
},
{
"math_id": 1,
"text": "\\dot{X} = - \\frac{D}{k_\\text{B} T} \\nabla U(X) + \\sqrt{2 D} R(t)."
},
{
"math_id": 2,
"text": "\\dot{X}"
},
{
"math_id": 3,
"text": "U(X)"
},
{
"math_id": 4,
"text": "\\nabla"
},
{
"math_id": 5,
"text": "- \\nabla U(X)"
},
{
"math_id": 6,
"text": "k_\\text{B}"
},
{
"math_id": 7,
"text": "T"
},
{
"math_id": 8,
"text": "D"
},
{
"math_id": 9,
"text": "R(t)"
},
{
"math_id": 10,
"text": "\\left\\langle R(t) \\right\\rangle =0"
},
{
"math_id": 11,
"text": "\\left\\langle R(t)R(t') \\right\\rangle = \\delta(t-t')"
},
{
"math_id": 12,
"text": "M\\ddot{X} = - \\nabla U(X) - \\zeta \\dot{X} + \\sqrt{2 \\zeta k_\\text{B} T} R(t)"
},
{
"math_id": 13,
"text": "M"
},
{
"math_id": 14,
"text": "\\ddot{X}"
},
{
"math_id": 15,
"text": "\\zeta"
},
{
"math_id": 16,
"text": "\\text{mass} / \\text{time}"
},
{
"math_id": 17,
"text": "\\zeta=\\gamma M"
},
{
"math_id": 18,
"text": "\\gamma"
},
{
"math_id": 19,
"text": "\\text{time}^{-1}"
},
{
"math_id": 20,
"text": "\\zeta = 6 \\pi \\, \\eta \\, r"
},
{
"math_id": 21,
"text": "\\underbrace{M\\ddot{X}}_{\\text{inertial force}} + \\underbrace{\\nabla U(X)}_{\\text{potential force}} + \\underbrace{\\zeta \\dot{X}}_{\\text{viscous force}} - \\underbrace{\\sqrt{2 \\zeta k_\\text{B} T} R(t)}_{\\text{random force}} = 0\n"
},
{
"math_id": 22,
"text": "M\\ddot{X}(t)"
},
{
"math_id": 23,
"text": "0 = - \\nabla U(X) - \\zeta \\dot{X}+ \\sqrt{2 \\zeta k_\\text{B} T } R(t)"
},
{
"math_id": 24,
"text": "r"
},
{
"math_id": 25,
"text": "D = k_\\text{B} T/\\zeta"
},
{
"math_id": 26,
"text": "\\dot{X}(t) = - \\frac{D}{k_\\text{B} T} \\nabla U(X) + \\sqrt{2 D} R(t)."
},
{
"math_id": 27,
"text": "N"
},
{
"math_id": 28,
"text": "X_i(t + \\Delta t) = X_i(t) + \\sum_j^N \\frac{\\Delta t D_{ij} }{k_\\text{B} T} F[X_j(t)] + R_i(t)"
},
{
"math_id": 29,
"text": "D_{ij}"
},
{
"math_id": 30,
"text": "i"
},
{
"math_id": 31,
"text": "j"
},
{
"math_id": 32,
"text": "F"
},
{
"math_id": 33,
"text": "\\sqrt{ 2 D \\Delta t}"
}
] |
https://en.wikipedia.org/wiki?curid=6703785
|
67038444
|
Lissajous-toric knot
|
In knot theory, a Lissajous-toric knot is a knot defined by parametric equations of the form:
formula_0
where formula_1, formula_2, and formula_3 are integers, the phase shift formula_4 is a real number
and the parameter formula_5 varies between 0 and formula_6.
For formula_7 the knot is a torus knot.
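The parametrization can be sampled directly. The sketch below generates points on the curve for N = 3, q = 8, p = 7 (the knot T(3,8,7) discussed in the example below) with an arbitrarily chosen, non-singular phase, and checks that the curve closes.

```python
# Sample points on the Lissajous-toric knot from its parametrization
#   x = (2 + sin(q t)) cos(N t),  y = (2 + sin(q t)) sin(N t),  z = cos(p (t + phi)).
import numpy as np

N, q, p, phi = 3, 8, 7, 0.1          # parameters of T(3,8,7); the phase is illustrative
t = np.linspace(0.0, 2.0 * np.pi, 2000)
x = (2 + np.sin(q * t)) * np.cos(N * t)
y = (2 + np.sin(q * t)) * np.sin(N * t)
z = np.cos(p * (t + phi))

# The curve is closed: the first and last sample points coincide (up to rounding).
print(np.allclose([x[0], y[0], z[0]], [x[-1], y[-1], z[-1]]))
```

The resulting arrays can be passed to any 3-D plotting routine to visualise the knot.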
Braid and billiard knot definitions.
In braid form these knots can be defined in a square solid torus (i.e. the cube formula_8 with identified top and bottom) as
formula_9.
The projection of this Lissajous-toric knot onto the x-y-plane is a Lissajous curve.
Replacing the sine and cosine functions in the parametrization by a triangle wave transforms a Lissajous-toric
knot isotopically into a billiard curve inside the solid torus. Because of this property Lissajous-toric knots are also called billiard knots in a solid torus.
Lissajous-toric knots were first studied as billiard knots and they share many properties with billiard knots in a cylinder.
They also occur in the analysis of singularities of minimal surfaces with branch points and in the study of
the Three-body problem.
The knots in the subfamily with formula_10, with an integer formula_11, are known as 'Lemniscate knots'. Lemniscate knots have period formula_3 and are fibred. The knot shown on the right is of this type (with formula_12).
Properties.
Lissajous-toric knots are denoted by formula_13. To ensure that the knot is traversed only once in the parametrization
the conditions formula_14 are needed. In addition, singular values for the phase, leading to self-intersections, have to be excluded.
The isotopy class of Lissajous-toric knots surprisingly does not depend on the phase formula_4 (up to mirroring).
If the distinction between a knot and its mirror image is not important, the notation formula_15 can be used.
The properties of Lissajous-toric knots depend on whether formula_2 and formula_3 are coprime or formula_16. The main properties are:
formula_17 (up to mirroring).
If formula_2 and formula_3 are coprime, formula_15 is a symmetric union and therefore a ribbon knot.
If formula_16, the Lissajous-toric knot has period formula_18 and the factor knot is a ribbon knot.
If formula_2 and formula_3 have different parity, then formula_15 is strongly-plus-amphicheiral.
If formula_2 and formula_3 are both odd, then formula_15 has period 2 (for even formula_1) or is freely 2-periodic (for odd formula_1).
Example.
The knot T(3,8,7), shown in the graphics, is a symmetric union and a ribbon knot (in fact, it is the composite knot formula_19).
It is strongly-plus-amphicheiral: a rotation by formula_20 maps the knot to its mirror image, keeping its orientation.
An additional horizontal symmetry occurs as a combination of the vertical symmetry and the rotation ('double palindromicity' in Kin/Nakamura/Ogawa).
'Classification' of billiard rooms.
In the following table a systematic overview of the possibilities to build billiard rooms from the interval and the circle (interval with identified boundaries) is given:
In the case of Lissajous knots reflections at the boundaries occur in all of the three cube's dimensions.
In the second case reflections occur in two dimensions and we have a uniform movement in the third dimension.
The third case is nearly equal to the usual movement on a torus, with an additional triangle wave movement in the first dimension.
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "x(t)=(2+\\sin qt)\\cos Nt, \\qquad y(t)=(2+\\sin qt)\\sin Nt, \\qquad z(t)=\\cos p(t+\\phi),"
},
{
"math_id": 1,
"text": "N"
},
{
"math_id": 2,
"text": "p"
},
{
"math_id": 3,
"text": "q"
},
{
"math_id": 4,
"text": "\\phi"
},
{
"math_id": 5,
"text": "t"
},
{
"math_id": 6,
"text": "2\\pi"
},
{
"math_id": 7,
"text": "p=q"
},
{
"math_id": 8,
"text": "[-1,1]^3"
},
{
"math_id": 9,
"text": "x(t)=\\sin 2\\pi qt, \\qquad y(t)=\\cos 2\\pi p(t+\\phi), \\qquad z(t)=2(N t - \\lfloor N t\\rfloor )-1, \\qquad t \\in [0,1]"
},
{
"math_id": 10,
"text": "p = q \\cdot l"
},
{
"math_id": 11,
"text": "l \\ge 1"
},
{
"math_id": 12,
"text": "l=5"
},
{
"math_id": 13,
"text": "K(N,q,p,\\phi)"
},
{
"math_id": 14,
"text": "\\gcd(N,q)=\\gcd(N,p)=1"
},
{
"math_id": 15,
"text": "K(N,q,p)"
},
{
"math_id": 16,
"text": "d=\\gcd(p,q)>1"
},
{
"math_id": 17,
"text": "K(N,q,p)=K(N,p,q)"
},
{
"math_id": 18,
"text": "d"
},
{
"math_id": 19,
"text": "5_1 \\sharp -5_1"
},
{
"math_id": 20,
"text": "\\pi"
}
] |
https://en.wikipedia.org/wiki?curid=67038444
|
670398
|
Ancillary statistic
|
In statistics, ancillarity is a property of a statistic computed on a sample dataset in relation to a parametric model of the dataset. An ancillary statistic has the same distribution regardless of the value of the parameters and thus provides no information about them.
It is opposed to the concept of a complete statistic which contains no ancillary information. It is closely related to the concept of a sufficient statistic which contains all of the information that the dataset provides about the parameters.
An ancillary statistic is a specific case of a pivotal quantity that is computed only from the data and not from the parameters. Ancillary statistics can be used to construct prediction intervals. They are also used in connection with Basu's theorem to prove independence between statistics.
This concept was first introduced by Ronald Fisher in the 1920s, but its formal definition was only provided in 1964 by Debabrata Basu.
Examples.
Suppose "X"1, ..., "X""n" are independent and identically distributed, and are normally distributed with unknown expected value "μ" and known variance 1. Let
formula_0
be the sample mean.
The following statistical measures of dispersion of the sample
formula_1
are all "ancillary statistics", because their sampling distributions do not change as "μ" changes. Computationally, this is because in the formulas, the "μ" terms cancel – adding a constant number to a distribution (and all samples) changes its sample maximum and minimum by the same amount, so it does not change their difference, and likewise for others: these measures of dispersion do not depend on location.
Conversely, given i.i.d. normal variables with known mean 1 and unknown variance "σ"2, the sample mean formula_2 is "not" an ancillary statistic of the variance, as the sampling distribution of the sample mean is "N"(1, "σ"2/"n"), which does depend on "σ" 2 – this measure of location (specifically, its standard error) depends on dispersion.
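A small simulation illustrates the point: the sampling distribution of the sample range (maximum minus minimum) is unchanged when the unknown mean is shifted, while the distribution of the sample mean shifts with it. The sample size, number of replications, and mean values below are illustrative.

```python
# The sample range is ancillary for the mean of a normal sample with known variance:
# its sampling distribution is the same for mu = 0 and mu = 5, unlike the sample mean's.
import numpy as np

rng = np.random.default_rng(1)
n, reps = 10, 50_000

def summaries(mu):
    x = rng.normal(loc=mu, scale=1.0, size=(reps, n))
    return x.max(axis=1) - x.min(axis=1), x.mean(axis=1)

range_0, mean_0 = summaries(mu=0.0)
range_5, mean_5 = summaries(mu=5.0)

print("average sample range:", range_0.mean(), range_5.mean())   # essentially equal
print("average sample mean :", mean_0.mean(),  mean_5.mean())    # differs by 5
```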
In location-scale families.
In a location family of distributions, formula_3 is an ancillary statistic.
In a scale family of distributions, formula_4 is an ancillary statistic.
In a location-scale family of distributions, formula_5, where formula_6 is the sample variance, is an ancillary statistic.
In recovery of information.
It turns out that, if formula_7 is a non-sufficient statistic and formula_8 is ancillary, one can sometimes recover all the information about the unknown parameter contained in the entire data by reporting formula_7 while conditioning on the observed value of formula_8. This is known as "conditional inference".
For example, suppose that formula_9 follow the formula_10 distribution where formula_11 is unknown. Note that, even though formula_12 is not sufficient for formula_11 (since its Fisher information is 1, whereas the Fisher information of the complete statistic formula_2 is 2), by additionally reporting the ancillary statistic formula_13, one obtains a joint distribution with Fisher information 2.
Ancillary complement.
Given a statistic "T" that is not sufficient, an ancillary complement is a statistic "U" that is ancillary and such that ("T", "U") is sufficient. Intuitively, an ancillary complement "adds the missing information" (without duplicating any).
The statistic is particularly useful if one takes "T" to be a maximum likelihood estimator, which in general will not be sufficient; then one can ask for an ancillary complement. In this case, Fisher argues that one must condition on an ancillary complement to determine information content: one should consider the Fisher information content of "T" to not be the marginal of "T", but the conditional distribution of "T", given "U": how much information does "T" "add"? This is not possible in general, as no ancillary complement need exist, and if one exists, it need not be unique, nor does a maximum ancillary complement exist.
Example.
In baseball, suppose a scout observes a batter in "N" at-bats. Suppose (unrealistically) that the number "N" is chosen by some random process that is independent of the batter's ability – say a coin is tossed after each at-bat and the result determines whether the scout will stay to watch the batter's next at-bat. The eventual data are the number "N" of at-bats and the number "X" of hits: the data ("X", "N") are a sufficient statistic. The observed batting average "X"/"N" fails to convey all of the information available in the data because it fails to report the number "N" of at-bats (e.g., a batting average of 0.400, which is very high, based on only five at-bats does not inspire anywhere near as much confidence in the player's ability as a 0.400 average based on 100 at-bats). The number "N" of at-bats is an ancillary statistic because its distribution is determined by the coin tosses and so does not depend on the batter's ability.
This ancillary statistic is an ancillary complement to the observed batting average "X"/"N", i.e., the batting average "X"/"N" is not a sufficient statistic, in that it conveys less than all of the relevant information in the data, but conjoined with "N", it becomes sufficient.
Notes.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "\\overline{X}_n = \\frac{X_1+\\,\\cdots\\,+X_n}{n}"
},
{
"math_id": 1,
"text": "\\hat{\\sigma}^2:=\\,\\frac{\\sum \\left(X_i-\\overline{X}\\right)^2}{n}"
},
{
"math_id": 2,
"text": "\\overline{X}"
},
{
"math_id": 3,
"text": "(X_1 - X_n, X_2 - X_n, \\dots, X_{n-1} - X_n)"
},
{
"math_id": 4,
"text": "(\\frac{X_1}{X_n}, \\frac{X_2}{X_n}, \\dots, \\frac{X_{n-1}}{X_n})"
},
{
"math_id": 5,
"text": "(\\frac{X_1 - X_n}{S}, \\frac{X_2 - X_n}{S}, \\dots, \\frac{X_{n - 1} - X_n}{S})"
},
{
"math_id": 6,
"text": "S^2"
},
{
"math_id": 7,
"text": "T_1"
},
{
"math_id": 8,
"text": "T_2"
},
{
"math_id": 9,
"text": "X_1, X_2"
},
{
"math_id": 10,
"text": "N(\\theta, 1)"
},
{
"math_id": 11,
"text": "\\theta"
},
{
"math_id": 12,
"text": "X_1"
},
{
"math_id": 13,
"text": "X_1 - X_2"
}
] |
https://en.wikipedia.org/wiki?curid=670398
|
67043
|
Thermal insulation
|
Minimization of heat transfer
Thermal insulation is the reduction of heat transfer (i.e., the transfer of thermal energy between objects of differing temperature) between objects in thermal contact or in range of radiative influence. Thermal insulation can be achieved with specially engineered methods or processes, as well as with suitable object shapes and materials.
Heat flow is an inevitable consequence of contact between objects of different temperature. Thermal insulation provides a region of insulation in which thermal conduction is reduced, creating a thermal break or thermal barrier, or thermal radiation is reflected rather than absorbed by the lower-temperature body.
The insulating capability of a material is measured as the inverse of thermal conductivity (k). Low thermal conductivity is equivalent to high insulating capability (resistance value). In thermal engineering, other important properties of insulating materials are product density (ρ) and specific heat capacity (c).
Definition.
Thermal conductivity "k" is measured in watts per meter-kelvin (W·m−1·K−1 or W/mK). This is because heat transfer, measured as power, has been found to be (approximately) proportional to the temperature difference formula_0 and the surface area formula_1 of thermal contact, and inversely proportional to the thickness formula_2 of the insulating layer.
From this, it follows that the power of heat loss formula_3 is given by
formula_4
Thermal conductivity depends on the material and for fluids, its temperature and pressure. For comparison purposes, conductivity under standard conditions (20 °C at 1 atm) is commonly used. For some materials, thermal conductivity may also depend upon the direction of heat transfer.
Insulation is accomplished by encasing an object in a thick layer of material with low thermal conductivity. Decreasing the exposed surface area could also lower heat transfer, but this quantity is usually fixed by the geometry of the object to be insulated.
Multi-layer insulation is used where radiative loss dominates, or when the user is restricted in the volume and weight of the insulation (e.g. emergency blankets, radiant barriers).
Insulation of cylinders.
For insulated cylinders, a complication arises: there is a "critical radius" of insulation below which adding insulation increases, rather than decreases, heat transfer. The convective thermal resistance is inversely proportional to the surface area and therefore to the radius of the cylinder, while the thermal resistance of a cylindrical shell (the insulation layer) depends on the ratio between outside and inside radius, not on the radius itself. If the outside radius of a cylinder is increased by applying insulation, a conductive resistance equal to ln(Rout/Rin)/(2×π×k×L) is added. However, at the same time, the convective resistance is reduced. This implies that adding insulation below a certain critical radius actually increases the heat transfer. For insulated cylinders, the critical radius is given by the equation
formula_5
This equation shows that the critical radius depends only on the heat transfer coefficient and the thermal conductivity of the insulation. If the radius of the insulated cylinder is smaller than the critical radius for insulation, the addition of any amount of insulation will increase heat transfer.
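Both formulas can be evaluated directly; in the sketch below the conductivity, area, temperature difference, thickness, and convective coefficient are illustrative values chosen only to show the arithmetic.

```python
# Heat loss through a flat layer, P = k * A * dT / d, and the critical radius of
# insulation for a cylinder, r_critical = k / h.  All property values are illustrative.
k_ins = 0.04     # thermal conductivity of the insulation, W/(m*K)
A = 10.0         # exposed area, m^2
dT = 20.0        # temperature difference across the layer, K
d = 0.10         # layer thickness, m

P = k_ins * A * dT / d
print("heat loss through the layer: %.1f W" % P)          # 80.0 W

h = 10.0         # external convective heat-transfer coefficient, W/(m^2*K)
r_critical = k_ins / h
print("critical radius: %.1f mm" % (1000 * r_critical))    # 4.0 mm
```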
Applications.
Clothing and natural animal insulation in birds and mammals.
Gases possess poor thermal conduction properties compared to liquids and solids and thus make good insulation material if they can be trapped. In order to further augment the effectiveness of a gas (such as air), it may be disrupted into small cells, which cannot effectively transfer heat by natural convection. Convection involves a larger bulk flow of gas driven by buoyancy and temperature differences, and it does not work well in small cells, where there is little density difference to drive it and where the high surface-to-volume ratio of the cells retards gas flow by means of viscous drag.
In order to accomplish small gas cell formation in man-made thermal insulation, glass and polymer materials can be used to trap air in a foam-like structure. This principle is used industrially in building and piping insulation such as (glass wool), cellulose, rock wool, polystyrene foam (styrofoam), urethane foam, vermiculite, perlite, and cork. Trapping air is also the principle in all highly insulating clothing materials such as wool, down feathers and fleece.
The air-trapping property is also the insulation principle employed by homeothermic animals to stay warm, for example down feathers, and insulating hair such as natural sheep's wool. In both cases the primary insulating material is air, and the polymer used for trapping the air is natural keratin protein.
Buildings.
Maintaining acceptable temperatures in buildings (by heating and cooling) uses a large proportion of global energy consumption. Building insulation also commonly uses the principle of small trapped air cells as explained above, e.g. fiberglass (specifically glass wool), cellulose, rock wool, polystyrene foam, urethane foam, vermiculite, perlite, cork, etc. For a period of time, asbestos was also used; however, it caused health problems.
Window insulation film can be applied in weatherization applications to reduce incoming thermal radiation in summer and loss in winter.
When well insulated, a building is:
In industry, energy has to be expended to raise, lower, or maintain the temperature of objects or process fluids. If these are not insulated, this increases the energy requirements of a process, and therefore the cost and environmental impact.
Mechanical systems.
Space heating and cooling systems distribute heat throughout buildings by means of pipes or ductwork. Insulating these pipes using pipe insulation reduces energy losses into unoccupied rooms and prevents condensation from occurring on cold and chilled pipework.
Pipe insulation is also used on water supply pipework to help delay pipe freezing for an acceptable length of time.
Mechanical insulation is commonly installed in industrial and commercial facilities.
Passive radiative cooling surfaces.
Thermal insulation has been found to improve the thermal emittance of passive radiative cooling surfaces by increasing the surface's ability to lower temperatures below ambient under direct solar intensity. Different materials may be used for thermal insulation, including polyethylene aerogels that reduce solar absorption and parasitic heat gain which may improve the emitter's performance by over 20%. Other aerogels also exhibited strong thermal insulation performance for radiative cooling surfaces, including a silica-alumina nanofibrous aerogel.
Refrigeration.
A refrigerator consists of a heat pump and a thermally insulated compartment.
Spacecraft.
Launch and re-entry place severe mechanical stresses on spacecraft, so the strength of an insulator is critically important (as seen by the failure of insulating tiles on the Space Shuttle Columbia, which caused the shuttle airframe to overheat and break apart during reentry, killing the astronauts on board). Re-entry through the atmosphere generates very high temperatures due to compression of the air at high speeds. Insulators must meet demanding physical properties beyond their thermal transfer retardant properties. Examples of insulation used on spacecraft include reinforced carbon-carbon composite nose cone and silica fiber tiles of the Space Shuttle. See also Insulative paint.
Automotive.
Internal combustion engines produce a lot of heat during their combustion cycle. This can have a negative effect when it reaches various heat-sensitive components such as sensors, batteries, and starter motors. As a result, thermal insulation is necessary to prevent the heat from the exhaust from reaching these components.
High performance cars often use thermal insulation as a means to increase engine performance.
Factors influencing performance.
Insulation performance is influenced by many factors, the most prominent of which include:
It is important to note that the factors influencing performance may vary over time as material ages or environmental conditions change.
Calculating requirements.
Industry standards are often rules of thumb, developed over many years, that offset many conflicting goals: what people will pay for, manufacturing cost, local climate, traditional building practices, and varying standards of comfort. Both heat transfer and layer analysis may be performed in large industrial applications, but in household situations (appliances and building insulation), airtightness is the key in reducing heat transfer due to air leakage (forced or natural convection). Once airtightness is achieved, it has often been sufficient to choose the thickness of the insulating layer based on rules of thumb. Diminishing returns are achieved with each successive doubling of the insulating layer.
It can be shown that for some systems, there is a minimum insulation thickness required for an improvement to be realized.
References.
<templatestyles src="Reflist/styles.css" />
Further reading.
<templatestyles src="Refbegin/styles.css" />
|
[
{
"math_id": 0,
"text": " \\Delta T "
},
{
"math_id": 1,
"text": " A "
},
{
"math_id": 2,
"text": " d "
},
{
"math_id": 3,
"text": " P "
},
{
"math_id": 4,
"text": " P = \\frac{k A\\, \\Delta T }{d} "
},
{
"math_id": 5,
"text": "{r_{critical}} = {k \\over h}"
}
] |
https://en.wikipedia.org/wiki?curid=67043
|
670453
|
Helly family
|
Family of sets where every disjoint subfamily has k or fewer sets
In combinatorics, a Helly family of order k is a family of sets in which every minimal "subfamily with an empty intersection" has k or fewer sets in it. Equivalently, every finite subfamily such that every k-fold intersection is non-empty has non-empty total intersection. The k-Helly property is the property of being a Helly family of order k.
The number k is frequently omitted from these names in the case that "k" = 2. Thus, a set-family has the Helly property if, for every n sets formula_0 in the family, formula_1 implies formula_2.
These concepts are named after Eduard Helly (1884–1943); Helly's theorem on convex sets, which gave rise to this notion, states that convex sets in Euclidean space of dimension n are a Helly family of order "n" + 1.
Formal definition.
More formally, a Helly family of order "k" is a set system ("V", "E"), with "E" a collection of subsets of "V", such that, for every finite "G" ⊆ "E" with
formula_3
we can find "H" ⊆ "G" such that
formula_4
and
formula_5
In some cases, the same definition holds for every subcollection "G", regardless of finiteness. However, this is a more restrictive condition. For instance, the open intervals of the real line satisfy the Helly property for finite subcollections, but not for infinite subcollections: the intervals (0,1/"i") (for "i" = 1, 2, 3, ...) have pairwise nonempty intersections, but have an empty overall intersection.
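For finite families of closed intervals, the 2-Helly property can be checked directly: whenever every pair of intervals intersects, the largest left endpoint is at most the smallest right endpoint, so a common point exists. The brute-force sketch below illustrates this on randomly generated families; the family size and number of trials are arbitrary.

```python
# Closed real intervals form a Helly family of order 2: pairwise intersection of a
# finite family implies a common point.  Checked here by random search.
import itertools
import random

random.seed(0)

def intersects(a, b):
    return max(a[0], b[0]) <= min(a[1], b[1])

def has_common_point(intervals):
    return max(lo for lo, hi in intervals) <= min(hi for lo, hi in intervals)

for _ in range(10_000):
    family = [tuple(sorted(random.uniform(0, 10) for _ in range(2))) for _ in range(5)]
    pairwise = all(intersects(a, b) for a, b in itertools.combinations(family, 2))
    if pairwise:
        assert has_common_point(family)   # never fails: the 2-Helly property holds
print("no counterexample found")
```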
Helly dimension.
If a family of sets is a Helly family of order "k", that family is said to have Helly number "k". The Helly dimension of a metric space is one less than the Helly number of the family of metric balls in that space; Helly's theorem implies that the Helly dimension of a Euclidean space equals its dimension as a real vector space.
The Helly dimension of a subset S of a Euclidean space, such as a polyhedron, is one less than the Helly number of the family of translates of S. For instance, the Helly dimension of any hypercube is 1, even though such a shape may belong to a Euclidean space of much higher dimension.
Helly dimension has also been applied to other mathematical objects. For instance, the Helly dimension of a group (an algebraic structure formed by an invertible and associative binary operation) has been defined as one less than the Helly number of the family of left cosets of the group.
The Helly property.
If a family of nonempty sets has an empty intersection, its Helly number must be at least two, so the smallest "k" for which the "k"-Helly property is nontrivial is "k" = 2. The 2-Helly property is also known as the Helly property. A 2-Helly family is also known as a Helly family.
A convex metric space in which the closed balls have the 2-Helly property (that is, a space with Helly dimension 1, in the stronger variant of Helly dimension for infinite subcollections) is called injective or hyperconvex. The existence of the tight span allows any metric space to be embedded isometrically into a space with Helly dimension 1.
The Helly property in hypergraphs.
A hypergraph is equivalent to a set-family. In hypergraph terms, a hypergraph "H" = ("V", "E") has the Helly property if, for every "n" hyperedges formula_6 in "E", formula_7 implies formula_8. For every hypergraph H, the following are equivalent:
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "s_1,\\ldots,s_n"
},
{
"math_id": 1,
"text": "\\forall i,j\\in[n]: s_i \\cap s_j \\neq\\emptyset "
},
{
"math_id": 2,
"text": "s_1 \\cap \\cdots \\cap s_n \\neq\\emptyset "
},
{
"math_id": 3,
"text": "\\bigcap_{X\\in G} X=\\varnothing,"
},
{
"math_id": 4,
"text": "\\bigcap_{X\\in H} X=\\varnothing"
},
{
"math_id": 5,
"text": "\\left|H\\right|\\le k."
},
{
"math_id": 6,
"text": "e_1,\\ldots,e_n"
},
{
"math_id": 7,
"text": "\\forall i,j\\in[n]: e_i \\cap e_j \\neq\\emptyset "
},
{
"math_id": 8,
"text": "e_1 \\cap \\cdots \\cap e_n \\neq\\emptyset "
}
] |
https://en.wikipedia.org/wiki?curid=670453
|
67046
|
Thermal mass
|
Use of thermal energy storage in building design
In building design, thermal mass is a property of the matter of a building that requires a flow of heat in order for it to change temperature. In scientific writing the term "heat capacity" is preferred. It is sometimes known as the thermal flywheel effect. The thermal mass of heavy structural elements can be designed to work alongside a construction's lighter thermal resistance components to create energy efficient buildings.
For example, when outside temperatures are fluctuating throughout the day, a large thermal mass within the insulated portion of a house can serve to "flatten out" the daily temperature fluctuations, since the thermal mass will absorb thermal energy when the surroundings are higher in temperature than the mass, and give thermal energy back when the surroundings are cooler, without reaching thermal equilibrium. This is distinct from a material's insulative value, which reduces a building's thermal conductivity, allowing it to be heated or cooled relatively separately from the outside, or even just retain the occupants' thermal energy longer.
Scientifically, thermal mass is equivalent to thermal capacity or heat capacity, the ability of a body to store thermal energy. It is typically referred to by the symbol "C"th, and its SI unit is J/K or J/°C (which are equivalent). Thermal mass may also be used for bodies of water, machines or machine parts, living things, or any other structure or body in engineering or biology. In those contexts, the term "heat capacity" is typically used instead.
Background.
The equation relating thermal energy to thermal mass is:
formula_0
where "Q" is the thermal energy transferred, "C"th is the thermal mass of the body, and Δ"T" is the change in temperature.
For example, if 250 J of heat energy is added to a copper gear with a thermal mass of 38.46 J/°C, its temperature will rise by 6.50 °C.
If the body consists of a homogeneous material with sufficiently known physical properties, the thermal mass is simply the mass of material present times the specific heat capacity of that material. For bodies made of many materials, the sum of heat capacities for their pure components may be used in the calculation, or in some cases (as for a whole animal, for example) the number may simply be measured for the entire body in question, directly.
As an extensive property, heat capacity is characteristic of an object; its corresponding intensive property is specific heat capacity, expressed in terms of a measure of the amount of material such as mass or number of moles, which must be multiplied by similar units to give the heat capacity of the entire body of material. Thus the heat capacity can be equivalently calculated as the product of the mass "m" of the body and the specific heat capacity "c" for the material, or the product of the number of moles of molecules present "n" and the molar specific heat capacity formula_1. For discussion of "why" the thermal energy storage abilities of pure substances vary, see factors that affect specific heat capacity.
For a body of uniform composition, formula_2 can be approximated by
formula_3
where formula_4 is the mass of the body and formula_5 is the isobaric specific heat capacity of the material averaged over temperature range in question. For bodies composed of numerous different materials, the thermal masses for the different components can just be added together.
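As a quick check of the relation between heat, thermal mass, and temperature change, the sketch below reproduces the copper-gear figure quoted above; the gear mass is an assumed value chosen to be consistent with copper's specific heat capacity of roughly 0.385 J/(g·K).

```python
# Q = C_th * dT, with C_th = m * c_p for a homogeneous body.
c_p_copper = 0.385      # specific heat capacity of copper, J/(g*K), approximate
m = 100.0               # assumed gear mass in grams
C_th = m * c_p_copper   # 38.5 J/K, close to the 38.46 J/degC quoted in the text

Q = 250.0               # heat added, J
dT = Q / C_th
print("temperature rise: %.2f degC" % dT)   # about 6.5 degC
```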
Thermal mass in buildings.
Thermal mass is effective in improving building comfort in any place that experiences these types of daily temperature fluctuations—both in winter as well as in summer. When used well and combined with passive solar design, thermal mass can play an important role in major reductions to energy use in active heating and cooling systems.
The use of materials with thermal mass is most advantageous where there is a big difference in outdoor temperatures from day to night (or where nighttime temperatures are at least 10 degrees cooler than the thermostat set point). The terms "heavy-weight" and "light-weight" are often used to describe buildings with different thermal mass strategies, and affect the choice of numerical factors used in subsequent calculations to describe their thermal response to heating and cooling.
In building services engineering, the use of dynamic simulation computational modelling software has allowed for the accurate calculation of the environmental performance within buildings with different constructions and for different annual climate data sets. This allows the architect or engineer to explore in detail the relationship between heavy-weight and light-weight constructions, as well as insulation levels, in reducing energy consumption for mechanical heating or cooling systems, or even removing the need for such systems altogether.
Properties required for good thermal mass.
Ideal materials for thermal mass are those materials that have:
Any solid, liquid, or gas will have some thermal mass. A common misconception is that only concrete or earth soil has thermal mass; even air has thermal mass (although very little).
A table of volumetric heat capacity for building materials is available, but note that their definition of thermal mass is slightly different.
Use of thermal mass in different climates.
The correct use and application of thermal mass is dependent on the prevailing climate in a district.
Temperate and cold temperate climates.
Solar-exposed thermal mass.
Thermal mass is ideally placed within the building and situated where it still can be exposed to low-angle winter sunlight (via windows) but insulated from heat loss. In summer the same thermal mass should be obscured from higher-angle summer sunlight in order to prevent overheating of the structure.
The thermal mass is warmed passively by the sun or additionally by internal heating systems during the day. Thermal energy stored in the mass is then released back into the interior during the night. It is essential that it be used in conjunction with the standard principles of passive solar design.
Any form of thermal mass can be used. A concrete slab foundation either left exposed or covered with conductive materials, e.g. tiles, is one easy solution. Another novel method is to place the masonry facade of a timber-framed house on the inside ('reverse-brick veneer'). Thermal mass in this situation is best applied over a large area rather than in large volumes or thicknesses. 7.5–10 cm (3″–4″) is often adequate.
Since the most important source of thermal energy is the Sun, the ratio of glazing to thermal mass is an important factor to consider. Various formulas have been devised to determine this. As a general rule, additional solar-exposed thermal mass needs to be applied in a ratio from 6:1 to 8:1 for any area of sun-facing (north-facing in Southern Hemisphere or south-facing in Northern Hemisphere) glazing above 7% of the total floor area. For example, a 200 m2 house with 20 m2 of sun-facing glazing has 10% of glazing by total floor area; 6 m2 of that glazing will require additional thermal mass. Therefore, using the 6:1 to 8:1 ratio above, an additional 36–48 m2 of solar-exposed thermal mass is required. The exact requirements vary from climate to climate.
Thermal mass for limiting summertime overheating.
Thermal mass is ideally placed within a building where it is shielded from direct solar gain but exposed to the building occupants. It is therefore most commonly associated with solid concrete floor slabs in naturally ventilated or low-energy mechanically ventilated buildings where the concrete soffit is left exposed to the occupied space.
During the day heat is gained from the sun, the occupants of the building, and any electrical lighting and equipment, causing the air temperatures within the space to increase, but this heat is absorbed by the exposed concrete slab above, thus limiting the temperature rise within the space to be within acceptable levels for human thermal comfort. In addition the lower surface temperature of the concrete slab also absorbs radiant heat directly from the occupants, also benefiting their thermal comfort.
By the end of the day the slab has in turn warmed up, and now, as external temperatures decrease, the heat can be released and the slab cooled down, ready for the start of the next day. However this "regeneration" process is only effective if the building ventilation system is operated at night to carry away the heat from the slab. In naturally ventilated buildings it is normal to provide automated window openings to facilitate this process automatically.
Hot, arid climates (e.g. desert).
This is a classical use of thermal mass. Examples include adobe, rammed earth, or limestone block houses. Its function is highly dependent on marked diurnal temperature variations. The wall predominantly acts to retard heat transfer from the exterior to the interior during the day. The high volumetric heat capacity and thickness prevents thermal energy from reaching the inner surface. When temperatures fall at night, the walls re-radiate the thermal energy back into the night sky. In this application it is important for such walls to be massive to prevent heat transfer into the interior.
Hot humid climates (e.g. sub-tropical and tropical).
The use of thermal mass is the most challenging in this environment where night temperatures remain elevated. Its use is primarily as a temporary heat sink. However, it needs to be strategically located to prevent overheating. It should be placed in an area that is not directly exposed to solar gain and also allows adequate ventilation at night to carry away stored energy without increasing internal temperatures any further. If to be used at all it should be used in judicious amounts and again not in large thicknesses.
Seasonal energy storage.
If enough mass is used it can create a seasonal advantage. That is, it can heat in the winter and cool in the summer. This is sometimes called passive annual heat storage or PAHS. The PAHS system has been successfully used at 7000 ft. in Colorado and in a number of homes in Montana. The Earthships of New Mexico utilize passive heating and cooling as well as using recycled tires for foundation wall yielding a maximum PAHS/STES. It has also been used successfully in the UK at Hockerton Housing Project.
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "Q = C_\\mathrm{th} \\Delta T\\,"
},
{
"math_id": 1,
"text": "\\bar c"
},
{
"math_id": 2,
"text": "C_\\mathrm{th}"
},
{
"math_id": 3,
"text": "C_\\mathrm{th} = m c_\\mathrm{p}"
},
{
"math_id": 4,
"text": "m"
},
{
"math_id": 5,
"text": "c_\\mathrm{p}"
}
] |
https://en.wikipedia.org/wiki?curid=67046
|
6704603
|
Wasserstein metric
|
Distance function defined between probability distributions
In mathematics, the Wasserstein distance or Kantorovich–Rubinstein metric is a distance function defined between probability distributions on a given metric space formula_0. It is named after Leonid Vaseršteĭn.
Intuitively, if each distribution is viewed as a unit amount of earth (soil) piled on "formula_0", the metric is the minimum "cost" of turning one pile into the other, which is assumed to be the amount of earth that needs to be moved times the mean distance it has to be moved. This problem was first formalised by Gaspard Monge in 1781. Because of this analogy, the metric is known in computer science as the earth mover's distance.
The name "Wasserstein distance" was coined by R. L. Dobrushin in 1970, after learning of it in the work of Leonid Vaseršteĭn on Markov processes describing large systems of automata (Russian, 1969). However the metric was first defined by Leonid Kantorovich in "The Mathematical Method of Production Planning and Organization" (Russian original 1939) in the context of optimal transport planning of goods and materials. Some scholars thus encourage use of the terms "Kantorovich metric" and "Kantorovich distance". Most English-language publications use the German spelling "Wasserstein" (attributed to the name "Vaseršteĭn" (Russian: ) being of Yiddish origin).
Definition.
Let formula_1 be a metric space that is a Polish space. For formula_2, the Wasserstein formula_3-distance between two probability measures formula_4 and formula_5 on formula_0 with finite formula_3-moments is
formula_6
where formula_7 is the set of all couplings of formula_4 and formula_5; formula_8 is defined to be formula_9 and corresponds to a supremum norm. A coupling formula_10 is a joint probability measure on formula_11 whose marginals are formula_4 and formula_5 on the first and second factors, respectively. That is, for all measurable formula_12 a coupling fulfills
formula_13
formula_14
Intuition and connection to optimal transport.
One way to understand the above definition is to consider the optimal transport problem. That is, for a distribution of mass formula_15 on a space formula_16, we wish to transport the mass in such a way that it is transformed into the distribution formula_17 on the same space; transforming the 'pile of earth' formula_4 to the pile formula_5. This problem only makes sense if the pile to be created has the same mass as the pile to be moved; therefore without loss of generality assume that formula_4 and formula_5 are probability distributions containing a total mass of 1. Assume also that there is given some cost function
formula_18
that gives the cost of transporting a unit mass from the point formula_19 to the point formula_20.
A transport plan to move formula_4 into formula_5 can be described by a function formula_21 which gives the amount of mass to move from formula_19 to formula_20. You can imagine the task as the need to move a pile of earth of shape formula_4 to the hole in the ground of shape formula_5 such that at the end, both the pile of earth and the hole in the ground completely vanish. In order for this plan to be meaningful, it must satisfy the following properties:
That is, the total mass moved "out of" an infinitesimal region around formula_19 must be equal to formula_22 and the total mass moved "into" a region around formula_20 must be formula_23. This is equivalent to the requirement that formula_10 be a joint probability distribution with marginals formula_4 and formula_5. Thus, the infinitesimal mass transported from formula_19 to formula_20 is formula_24, and the cost of moving is formula_25, following the definition of the cost function. Therefore, the total cost of a transport plan formula_10 is
formula_26
The plan formula_10 is not unique; the optimal transport plan is the plan with the minimal cost out of all possible transport plans. As mentioned, the requirement for a plan to be valid is that it is a joint distribution with marginals formula_4 and formula_5; letting formula_27 denote the set of all such measures as in the first section, the cost of the optimal plan is
formula_28
If the cost of a move is simply the distance between the two points, then the optimal cost is identical to the definition of the formula_29 distance.
Examples.
Point masses.
Deterministic distributions.
Let formula_30 and formula_31 be two degenerate distributions (i.e. Dirac delta distributions) located at points formula_32 and formula_33 in formula_34. There is only one possible coupling of these two measures, namely the point mass formula_35 located at formula_36. Thus, using the usual absolute value function as the distance function on formula_34, for any formula_37, the formula_3-Wasserstein distance between formula_38 and formula_39 is
formula_40
By similar reasoning, if formula_30 and formula_31 are point masses located at points formula_32 and formula_33 in formula_41, and we use the usual Euclidean norm on formula_41 as the distance function, then
formula_42
Empirical distributions.
One dimension.
If formula_43 is an empirical measure with samples formula_44 and formula_45 is an empirical measure with samples formula_46, the distance is a simple function of the order statistics:
formula_47
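A direct implementation of this order-statistics formula only requires sorting the two samples. In the sketch below the samples are illustrative draws from two normal distributions whose means differ by 0.5, so the 1-Wasserstein distance should come out close to 0.5.

```python
# p-Wasserstein distance between two 1-D empirical measures with the same number
# of samples: sort both samples and average |X_(i) - Y_(i)|^p.
import numpy as np

def wasserstein_1d(x, y, p=1):
    x, y = np.sort(x), np.sort(y)          # order statistics
    return np.mean(np.abs(x - y) ** p) ** (1.0 / p)

rng = np.random.default_rng(2)
x = rng.normal(0.0, 1.0, size=1000)
y = rng.normal(0.5, 1.0, size=1000)
print(wasserstein_1d(x, y, p=1))           # close to 0.5, the shift between the means
```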
Higher dimensions.
If formula_43 and formula_45 are empirical distributions, each based on formula_48 observations, then
formula_49
where the infimum is over all permutations formula_50 of formula_48 elements. This is a linear assignment problem, and can be solved by the Hungarian algorithm in cubic time.
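The sketch below computes this empirical distance by building the cost matrix of pairwise distances and solving the assignment problem with an off-the-shelf solver (scipy's linear_sum_assignment); the two point clouds are illustrative.

```python
# W_p between two d-dimensional empirical measures with n points each, computed by
# solving the linear assignment problem over the matrix of pairwise costs.
import numpy as np
from scipy.optimize import linear_sum_assignment

def wasserstein_empirical(x, y, p=2):
    cost = np.linalg.norm(x[:, None, :] - y[None, :, :], axis=-1) ** p
    rows, cols = linear_sum_assignment(cost)      # optimal permutation sigma
    return cost[rows, cols].mean() ** (1.0 / p)

rng = np.random.default_rng(3)
x = rng.normal(size=(200, 2))
y = rng.normal(size=(200, 2)) + np.array([1.0, 0.0])   # same cloud shifted by (1, 0)
print(wasserstein_empirical(x, y, p=2))                # roughly 1, plus sampling noise
```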
Normal distributions.
Let formula_51 and formula_52 be two non-degenerate Gaussian measures (i.e. normal distributions) on formula_53, with respective expected values formula_54 and formula_55 and symmetric positive semi-definite covariance matrices formula_56 and formula_57. Then, with respect to the usual Euclidean norm on formula_41, the 2-Wasserstein distance between formula_38 and formula_58 is
formula_59
where formula_60 denotes the principal square root of formula_61. Note that the second term (involving the trace) is precisely the (unnormalised) Bures metric between formula_62 and formula_63.
This result generalises the earlier example of the Wasserstein distance between two point masses (at least in the case formula_64), since a point mass can be regarded as a normal distribution with covariance matrix equal to zero, in which case the trace term disappears and only the term involving the Euclidean distance between the means remains.
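The closed form can be evaluated directly from the means and covariance matrices. In the sketch below the Gaussian parameters are illustrative and the matrix square roots are taken numerically.

```python
# 2-Wasserstein distance between Gaussians:
#   W_2^2 = ||m1 - m2||^2 + tr(C1 + C2 - 2 (C2^(1/2) C1 C2^(1/2))^(1/2)).
import numpy as np
from scipy.linalg import sqrtm

def w2_gaussian(m1, C1, m2, C2):
    root_C2 = sqrtm(C2)
    cross = sqrtm(root_C2 @ C1 @ root_C2)         # principal square root
    bures_term = np.trace(C1 + C2 - 2.0 * cross)
    return np.sqrt(np.sum((m1 - m2) ** 2) + np.real(bures_term))

m1, C1 = np.zeros(2), np.eye(2)
m2, C2 = np.array([3.0, 0.0]), np.diag([2.0, 0.5])
print(w2_gaussian(m1, C1, m2, C2))
```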
One-dimensional distributions.
Let formula_65 be probability measures on formula_34, and denote their cumulative distribution functions by formula_66 and formula_67. Then the transport problem has an analytic solution: Optimal transport preserves the order of probability mass elements, so the mass at quantile formula_68 of formula_69 moves to quantile formula_68 of formula_39.
Thus, the formula_3-Wasserstein distance between formula_69 and formula_39 is
formula_70
where formula_71 and formula_72 are the quantile functions (inverse CDFs).
In the case of formula_73, a change of variables leads to the formula
formula_74
Applications.
The Wasserstein metric is a natural way to compare the probability distributions of two variables "X" and "Y", where one variable is derived from the other by small, non-uniform perturbations (random or deterministic).
In computer science, for example, the metric "W"1 is widely used to compare discrete distributions, "e.g." the color histograms of two digital images; see earth mover's distance for more details.
In their paper 'Wasserstein GAN', Arjovsky et al. use the Wasserstein-1 metric as a way to improve the original framework of generative adversarial networks (GAN), to alleviate the vanishing gradient and the mode collapse issues. The special case of normal distributions is used in a Frechet inception distance.
The Wasserstein metric has a formal link with Procrustes analysis, with application to chirality measures, and to shape analysis.
In computational biology, Wasserstein metric can be used to compare between persistence diagrams of cytometry datasets.
The Wasserstein metric also has been used in inverse problems in geophysics.
The Wasserstein metric is used in integrated information theory to compute the difference between concepts and conceptual structures.
The Wasserstein metric and related formulations have also been used to provide a unified theory for shape observable analysis in high energy and collider physics datasets.
Properties.
Metric structure.
It can be shown that "W""p" satisfies all the axioms of a metric on the Wasserstein space P"p"("M") consisting of all Borel probability measures on "M" having finite "p"th moment. Furthermore, convergence with respect to "W""p" is equivalent to the usual weak convergence of measures plus convergence of the "p"th moments.
Dual representation of "W"1.
The following dual representation of "W"1 is a special case of the duality theorem of Kantorovich and Rubinstein (1958): when "μ" and "ν" have bounded support,
formula_75
where Lip("f") denotes the minimal Lipschitz constant for "f". This form shows that "W"1 is an integral probability metric.
Compare this with the definition of the Radon metric:
formula_76
If the metric "d" of the metric space ("M","d") is bounded by some constant "C", then
formula_77
and so convergence in the Radon metric (identical to total variation convergence when "M" is a Polish space) implies convergence in the Wasserstein metric, but not vice versa.
Proof.
The following is an intuitive proof which skips over technical points; a fully rigorous proof can be found in the references.
Discrete case: When formula_0 is discrete, solving for the 1-Wasserstein distance is a problem in linear programming:
formula_78
where formula_79 is a general "cost function".
By carefully writing the above equations as matrix equations, we obtain its dual problem:
formula_80
and by the duality theorem of linear programming, since the primal problem is feasible and bounded, so is the dual problem, and the minimum in the first problem equals the maximum in the second problem. That is, the problem pair exhibits "strong duality".
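For a small discrete example, the primal problem can be handed to a generic linear-programming solver. In the sketch below the two distributions, their support points, and the distance cost are illustrative; because the cost is the distance between support points, the optimal value is the 1-Wasserstein distance between the two discrete measures.

```python
# Discrete optimal transport as a linear program: minimise sum_ij c[i, j] * gamma[i, j]
# subject to the row sums of gamma equalling mu and the column sums equalling nu.
import numpy as np
from scipy.optimize import linprog

mu = np.array([0.4, 0.6])                 # source weights at points x_src
nu = np.array([0.5, 0.3, 0.2])            # target weights at points x_tgt
x_src = np.array([0.0, 1.0])
x_tgt = np.array([0.0, 0.5, 2.0])
c = np.abs(x_src[:, None] - x_tgt[None, :])     # cost = distance on the real line

m, n = c.shape
A_eq, b_eq = [], []
for i in range(m):                        # row-sum constraints: marginal mu
    row = np.zeros((m, n)); row[i, :] = 1.0
    A_eq.append(row.ravel()); b_eq.append(mu[i])
for j in range(n):                        # column-sum constraints: marginal nu
    col = np.zeros((m, n)); col[:, j] = 1.0
    A_eq.append(col.ravel()); b_eq.append(nu[j])

res = linprog(c.ravel(), A_eq=np.array(A_eq), b_eq=np.array(b_eq),
              bounds=(0, None), method="highs")
print("W_1 =", res.fun)                   # 0.45 for these illustrative inputs
```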
For the general case, the dual problem is found by converting sums to integrals:
formula_81
and the "strong duality" still holds.
This is the Kantorovich duality theorem. Cédric Villani recounts the following interpretation from Luis Caffarelli:
Suppose you want to ship some coal from mines, distributed as formula_4, to factories, distributed as formula_5. The cost function of transport is formula_82. Now a shipper comes and offers to do the transport for you. You would pay him formula_83 per unit of coal for loading the coal at formula_19, and pay him formula_84 per unit of coal for unloading the coal at formula_20.
For you to accept the deal, the price schedule must satisfy formula_85. The Kantorovich duality states that the shipper can make a price schedule that makes you pay almost as much as you would spend shipping the coal yourself. This result can be pressed further to yield:<templatestyles src="Math_theorem/styles.css" />
Theorem (Kantorovich-Rubenstein duality) — When the probability space formula_86 is a metric space, then
for any fixed formula_87,
formula_88
where formula_89 is the Lipschitz norm.
<templatestyles src="Math_proof/styles.css" />Proof
It suffices to prove the case of formula_90.
Start with
formula_91
Then, for any choice of formula_92, one can push the term higher by setting formula_93, making it an infimal convolution of formula_94 with a cone. This implies formula_95 for any formula_96, that is, formula_97.
Thus,
formula_98
Next, for any choice of formula_99, formula_92 can be optimized by setting formula_100. Since formula_99, this implies formula_101.
The two infimal convolution steps are visually clear when the probability space is formula_102.
For notational convenience, let formula_103 denote the infimal convolution operation.
For the first step, where we used formula_104, plot the curve of formula_94; at each point, draw a cone of slope 1 and take the lower envelope of the cones as formula_105, as shown in the diagram. Then formula_105 cannot increase with slope larger than 1, so all its secants have slope formula_106.
For the second step, picture the infimal convolution formula_107: if all secants of formula_105 have slope at most 1, then the lower envelope of formula_108 consists of the cone apices themselves, and thus formula_109.
1D Example. When both formula_110 are distributions on formula_102, then integration by parts gives
formula_111
thus
formula_112
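The 1D formula above lends itself to a direct computation for empirical distributions: sort the pooled sample points and integrate the absolute difference of the two empirical CDFs. The following is a minimal NumPy sketch, cross-checked against scipy.stats.wasserstein_distance, which computes the same quantity for p = 1.
```python
import numpy as np
from scipy.stats import wasserstein_distance

def w1_1d(xs, ys):
    """W_1 between two empirical 1D distributions via the CDF formula above:
    integrate |F_1(x) - F_2(x)| over the real line."""
    grid = np.sort(np.concatenate([xs, ys]))
    widths = np.diff(grid)
    f1 = np.searchsorted(np.sort(xs), grid[:-1], side="right") / len(xs)
    f2 = np.searchsorted(np.sort(ys), grid[:-1], side="right") / len(ys)
    return float(np.sum(np.abs(f1 - f2) * widths))

rng = np.random.default_rng(0)
xs, ys = rng.normal(0.0, 1.0, 500), rng.normal(1.0, 1.0, 500)
print(w1_1d(xs, ys), wasserstein_distance(xs, ys))  # the two values agree
```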
Fluid mechanics interpretation of "W"2.
Benamou & Brenier found a dual representation of formula_113 by fluid mechanics, which allows efficient solution by convex optimization.
Given two probability densities formula_114 on formula_115,
formula_116, where formula_117 ranges over velocity fields driving the continuity equation with boundary conditions on the fluid density field:
formula_118
That is, the mass should be conserved, and the velocity field should transport the probability distribution formula_3 to formula_68 during the time interval formula_119.
Equivalence of "W"2 and a negative-order Sobolev norm.
Under suitable assumptions, the Wasserstein distance formula_113 of order two is Lipschitz equivalent to a negative-order homogeneous Sobolev norm. More precisely, if we take formula_0 to be a connected Riemannian manifold equipped with a positive measure formula_50, then we may define for formula_120 the seminorm
formula_121
and for a signed measure formula_4 on formula_0 the dual norm
formula_122
Then any two probability measures formula_4 and formula_5 on formula_0 satisfy the upper bound
formula_123
In the other direction, if formula_4 and formula_5 each have densities with respect to the standard volume measure on formula_0 that are both bounded above by some formula_124, and formula_0 has non-negative Ricci curvature, then
formula_125
Separability and completeness.
For any "p" ≥ 1, the metric space (P"p"("M"), "W""p") is separable, and is complete if ("M", "d") is separable and complete.
Wasserstein distance for "p" = ∞.
It is also possible to consider the Wasserstein metric for formula_126. In this case, the defining formula becomes:
formula_127
where formula_128 denotes the essential supremum of formula_129 with respect to measure formula_10. The metric space (P∞("M"), "W"∞) is complete if ("M", "d") is separable and complete. Here, P∞ is the space of all probability measures with bounded support.
See also.
<templatestyles src="Div col/styles.css"/>
References.
<templatestyles src="Reflist/styles.css" />
Further reading.
<templatestyles src="Refbegin/styles.css" />
|
[
{
"math_id": 0,
"text": "M"
},
{
"math_id": 1,
"text": "(M,d)"
},
{
"math_id": 2,
"text": "p \\in [1, +\\infty]"
},
{
"math_id": 3,
"text": "p"
},
{
"math_id": 4,
"text": "\\mu"
},
{
"math_id": 5,
"text": "\\nu"
},
{
"math_id": 6,
"text": "W_p(\\mu, \\nu) = \\inf_{\\gamma \\in \\Gamma(\\mu, \\nu)} \\left(\\mathbf{E}_{(x, y) \\sim \\gamma} d(x, y)^p \\right)^{1/p},"
},
{
"math_id": 7,
"text": "\\Gamma(\\mu, \\nu)"
},
{
"math_id": 8,
"text": "W_\\infty(\\mu, \\nu)"
},
{
"math_id": 9,
"text": "\\lim_{p\\rightarrow +\\infty} W_p(\\mu, \\nu)"
},
{
"math_id": 10,
"text": "\\gamma"
},
{
"math_id": 11,
"text": "M \\times M"
},
{
"math_id": 12,
"text": "A\\subset M"
},
{
"math_id": 13,
"text": "\\int_A\\int_M \\gamma(x, y) \\,\\mathrm{d}y \\,\\mathrm{d}x= \\mu(A),"
},
{
"math_id": 14,
"text": "\\int_A \\int_M \\gamma(x, y) \\,\\mathrm{d}x \\,\\mathrm{d}y= \\nu(A)."
},
{
"math_id": 15,
"text": "\\mu(x)"
},
{
"math_id": 16,
"text": "X"
},
{
"math_id": 17,
"text": "\\nu(x)"
},
{
"math_id": 18,
"text": "c(x,y) \\geq 0"
},
{
"math_id": 19,
"text": "x"
},
{
"math_id": 20,
"text": "y"
},
{
"math_id": 21,
"text": "\\gamma(x,y)"
},
{
"math_id": 22,
"text": "\\mu(x) \\mathrm{d}x"
},
{
"math_id": 23,
"text": "\\nu(y)\\mathrm{d}y"
},
{
"math_id": 24,
"text": "\\gamma(x,y) \\, \\mathrm{d} x \\, \\mathrm{d} y"
},
{
"math_id": 25,
"text": "c(x,y) \\gamma(x,y) \\, \\mathrm{d} x \\, \\mathrm{d} y"
},
{
"math_id": 26,
"text": "\n\\iint c(x,y) \\gamma(x,y) \\, \\mathrm{d} x \\, \\mathrm{d} y = \\int c(x,y) \\, \\mathrm{d} \\gamma(x,y).\n"
},
{
"math_id": 27,
"text": "\\Gamma"
},
{
"math_id": 28,
"text": "\nC = \\inf_{\\gamma \\in \\Gamma(\\mu, \\nu)} \\int c(x,y) \\, \\mathrm{d} \\gamma(x,y).\n"
},
{
"math_id": 29,
"text": "W_1"
},
{
"math_id": 30,
"text": "\\mu_{1} = \\delta_{a_{1}}"
},
{
"math_id": 31,
"text": "\\mu_{2} = \\delta_{a_{2}}"
},
{
"math_id": 32,
"text": "a_{1}"
},
{
"math_id": 33,
"text": "a_{2}"
},
{
"math_id": 34,
"text": "\\mathbb{R}"
},
{
"math_id": 35,
"text": "\\delta_{(a_{1}, a_{2})}"
},
{
"math_id": 36,
"text": "(a_{1}, a_{2}) \\in \\mathbb{R}^{2}"
},
{
"math_id": 37,
"text": "p \\geq 1"
},
{
"math_id": 38,
"text": "\\mu_{1}"
},
{
"math_id": 39,
"text": "\\mu_2"
},
{
"math_id": 40,
"text": "W_p (\\mu_1, \\mu_2) = | a_1 - a_2 | ."
},
{
"math_id": 41,
"text": "\\mathbb{R}^{n}"
},
{
"math_id": 42,
"text": "W_p(\\mu_1, \\mu_2) = \\| a_1 - a_2 \\|_2 ."
},
{
"math_id": 43,
"text": "P"
},
{
"math_id": 44,
"text": "X_1, \\ldots, X_n"
},
{
"math_id": 45,
"text": "Q"
},
{
"math_id": 46,
"text": "Y_1, \\ldots, Y_n"
},
{
"math_id": 47,
"text": "W_p(P, Q) = \\left( \\frac{1}{n}\\sum_{i=1}^n \\|X_{(i)} - Y_{(i)}\\|^p \\right)^{1/p}."
},
{
"math_id": 48,
"text": "n"
},
{
"math_id": 49,
"text": "W_p(P, Q) = \\inf_\\pi \\left( \\frac{1}{n} \\sum_{i=1}^n \\|X_i - Y_{\\pi(i)}\\|^p \\right)^{1/p},"
},
{
"math_id": 50,
"text": "\\pi"
},
{
"math_id": 51,
"text": "\\mu_1 = \\mathcal{N}(m_1, C_1)"
},
{
"math_id": 52,
"text": "\\mu_2 = \\mathcal{N}(m_2, C_2)"
},
{
"math_id": 53,
"text": "\\mathbb{R}^n"
},
{
"math_id": 54,
"text": "m_1"
},
{
"math_id": 55,
"text": "m_2 \\in \\mathbb{R}^n"
},
{
"math_id": 56,
"text": "C_{1}"
},
{
"math_id": 57,
"text": "C_2 \\in \\mathbb{R}^{n \\times n}"
},
{
"math_id": 58,
"text": "\\mu_{2}"
},
{
"math_id": 59,
"text": "W_{2} (\\mu_1, \\mu_2)^2 = \\| m_1 - m_2 \\|_2^2 + \\mathop{\\mathrm{trace}} \\bigl( C_1 + C_2 - 2 \\bigl( C_2^{1/2} C_1 C_2^{1/2} \\bigr)^{1/2} \\bigr) ."
},
{
"math_id": 60,
"text": "C^{1/2}"
},
{
"math_id": 61,
"text": "C"
},
{
"math_id": 62,
"text": "C_1"
},
{
"math_id": 63,
"text": "C_2"
},
{
"math_id": 64,
"text": "p = 2"
},
{
"math_id": 65,
"text": "\\mu_1, \\mu_2 \\in P_p(\\mathbb{R})"
},
{
"math_id": 66,
"text": "F_1(x)"
},
{
"math_id": 67,
"text": "F_2(x)"
},
{
"math_id": 68,
"text": "q"
},
{
"math_id": 69,
"text": "\\mu_1"
},
{
"math_id": 70,
"text": "W_p(\\mu_1, \\mu_2) = \\left(\\int_0^1 \\left| F_1^{-1}(q) - F_2^{-1}(q) \\right|^p \\, \\mathrm{d} q\\right)^{1/p},"
},
{
"math_id": 71,
"text": "F_1^{-1}"
},
{
"math_id": 72,
"text": "F_2^{-1}"
},
{
"math_id": 73,
"text": "p=1"
},
{
"math_id": 74,
"text": "W_1(\\mu_1, \\mu_2) = \\int_{\\mathbb{R}} \\left| F_1(x) - F_2(x) \\right| \\, \\mathrm{d} x. "
},
{
"math_id": 75,
"text": "W_1 (\\mu, \\nu) = \\sup \\left\\{ \\left. \\int_{M} f(x) \\, \\mathrm{d} (\\mu - \\nu) (x) \\, \\right| \\text{ continuous } f : M \\to \\mathbb{R}, \\operatorname{Lip} (f) \\leq 1 \\right\\},"
},
{
"math_id": 76,
"text": "\\rho (\\mu, \\nu) := \\sup \\left\\{ \\left. \\int_M f(x) \\, \\mathrm{d} (\\mu - \\nu) (x) \\, \\right| \\text{ continuous } f : M \\to [-1, 1] \\right\\}."
},
{
"math_id": 77,
"text": "2 W_1 (\\mu, \\nu) \\leq C \\rho (\\mu, \\nu),"
},
{
"math_id": 78,
"text": "\\begin{cases}\n\t \\min_\\gamma \\sum_{x, y} c(x, y) \\gamma(x, y) \\\\\n\t \\sum_y \\gamma(x, y) = \\mu(x) \\\\\n\t \\sum_x \\gamma(x, y) = \\nu(y) \\\\\n\t \\gamma \\geq 0\n\t \\end{cases}\n"
},
{
"math_id": 79,
"text": "c: M \\times M \\to [0, \\infty)"
},
{
"math_id": 80,
"text": "\\begin{cases}\n\t \\max_{f, g} \\sum_x \\mu(x)f(x) + \\sum_y \\nu(y)g(y)\\\\\n\t f(x) + g(y) \\leq c(x, y)\n\t \\end{cases}\n"
},
{
"math_id": 81,
"text": "\\begin{cases}\n\t \\sup_{f, g} \\mathbb E_{x\\sim \\mu}[f(x)] + \\mathbb E_{y\\sim \\nu}[g(y)]\\\\\n\t f(x) + g(y) \\leq c(x, y)\n\t \\end{cases}\n"
},
{
"math_id": 82,
"text": "c"
},
{
"math_id": 83,
"text": "f(x)"
},
{
"math_id": 84,
"text": "g(y)"
},
{
"math_id": 85,
"text": "f(x) + g(y) \\leq c(x, y)"
},
{
"math_id": 86,
"text": "\\Omega"
},
{
"math_id": 87,
"text": "K > 0"
},
{
"math_id": 88,
"text": "W_1(\\mu, \\nu) = \\frac 1 K\\sup_{\\|f\\|_L \\leq K} \\mathbb{E}_{x\\sim \\mu}[f(x)] -\\mathbb E_{y\\sim \\nu}[f(y)]"
},
{
"math_id": 89,
"text": "\\|\\cdot\\|_L"
},
{
"math_id": 90,
"text": "K=1"
},
{
"math_id": 91,
"text": "W_1(\\mu, \\nu) = \\sup_{f(x) + g(y) \\leq d(x, y)} \\mathbb{E}_{x\\sim \\mu}[f(x)] +\\mathbb E_{y\\sim \\nu}[g(y)]."
},
{
"math_id": 92,
"text": "g"
},
{
"math_id": 93,
"text": "f(x) = \\inf_y d(x, y) - g(y)"
},
{
"math_id": 94,
"text": "-g"
},
{
"math_id": 95,
"text": "f(x) - f(y) \\leq d(x, y)"
},
{
"math_id": 96,
"text": "x, y"
},
{
"math_id": 97,
"text": "\\|f\\|_L \\leq 1"
},
{
"math_id": 98,
"text": "\\begin{align}\n\t\t\t W_1(\\mu, \\nu) &= \\sup_{g}\\sup_{f(x) + g(y) \\leq d(x, y)} \\mathbb{E}_{x\\sim \\mu}[f(x)] +\\mathbb E_{y\\sim \\nu}[g(y)]\\\\\n\t\t\t &= \\sup_{g}\\sup_{\\|f\\|_L \\leq 1, f(x) + g(y) \\leq d(x, y)} \\mathbb{E}_{x\\sim \\mu}[f(x)] +\\mathbb E_{y\\sim \\nu}[g(y)]\\\\\n\t\t\t &= \\sup_{\\|f\\|_L \\leq 1}\\sup_{g, f(x) + g(y) \\leq d(x, y)} \\mathbb{E}_{x\\sim \\mu}[f(x)] +\\mathbb E_{y\\sim \\nu}[g(y)].\n\t\t\t \\end{align}\n"
},
{
"math_id": 99,
"text": "\\|f\\|_L\\leq 1"
},
{
"math_id": 100,
"text": "g(y) = \\inf_x d(x, y) - f(x)"
},
{
"math_id": 101,
"text": "g(y) = -f(y)"
},
{
"math_id": 102,
"text": "\\R"
},
{
"math_id": 103,
"text": "\\square"
},
{
"math_id": 104,
"text": "f = \\text{cone} \\mathbin{\\square} (-g)"
},
{
"math_id": 105,
"text": "f"
},
{
"math_id": 106,
"text": "\\bigg| \\frac{f(x) - f(y)}{x-y}\\bigg| \\leq 1"
},
{
"math_id": 107,
"text": "\\text{cone} \\mathbin{\\square} (-f)"
},
{
"math_id": 108,
"text": "\\text{cone} \\mathbin{\\square} (-f) "
},
{
"math_id": 109,
"text": "\\text{cone} \\mathbin{\\square} (-f)=-f"
},
{
"math_id": 110,
"text": "\\mu, \\nu"
},
{
"math_id": 111,
"text": "\\mathbb{E}_{x\\sim \\mu}[f(x)] - \\mathbb E_{y\\sim \\nu}[f(y)] = \\int f'(x) (F_\\nu(x) - F_\\mu(x)) \\, \\mathrm{d}x,"
},
{
"math_id": 112,
"text": "f(x) = K \\cdot \\operatorname{sign}(F_\\nu(x) - F_\\mu(x))."
},
{
"math_id": 113,
"text": "W_2"
},
{
"math_id": 114,
"text": "p, q"
},
{
"math_id": 115,
"text": "\\R^n"
},
{
"math_id": 116,
"text": "W_2(p, q)=\\min_{\\bold{v}} \\int_0^1 \\int_{\\R^n} \\|\\bold{v}(\\bold{x}, t)\\|^2 \\rho(\\bold{x}, t) \\, d \\bold{x} \\, dt"
},
{
"math_id": 117,
"text": "\\bold{v}"
},
{
"math_id": 118,
"text": "\n\\dot{\\rho}+\\nabla\\cdot(\\rho\\bold{v})=0 \\quad\n\\rho(\\cdot, 0)=p,\\; \\rho(\\cdot, 1)=q\n"
},
{
"math_id": 119,
"text": "[0, 1]"
},
{
"math_id": 120,
"text": "f \\colon M \\to \\mathbb{R}"
},
{
"math_id": 121,
"text": "\\| f \\|_{\\dot{H}^{1}(\\pi)}^{2} = \\int_{M} \\|\\nabla f(x)\\|^{2} \\, \\pi(\\mathrm{d} x)"
},
{
"math_id": 122,
"text": "\\| \\mu \\|_{\\dot{H}^{-1}(\\pi)} = \\sup \\bigg\\{ | \\langle f, \\mu \\rangle | \\,\\bigg|\\, \\| f \\|_{\\dot{H}^{1}(\\pi)} \\leq 1 \\bigg\\} ."
},
{
"math_id": 123,
"text": "W_{2} (\\mu, \\nu) \\leq 2\\, \\| \\mu - \\nu \\|_{\\dot{H}^{-1}(\\pi)} ."
},
{
"math_id": 124,
"text": "0 < C < \\infty"
},
{
"math_id": 125,
"text": "\\| \\mu - \\nu \\|_{\\dot{H}^{-1}(\\pi)} \\leq \\sqrt{C}\\, W_{2} (\\mu, \\nu) ."
},
{
"math_id": 126,
"text": "p = \\infty"
},
{
"math_id": 127,
"text": "\n W_{\\infty}(\\mu,\\nu) = \\lim_{p \\rightarrow +\\infty} W_p(\\mu,\\nu) = \\inf_{\\gamma \\in \\Gamma(\\mu, \\nu) } \\gamma\\operatorname{-essup} d(x,y),\n"
},
{
"math_id": 128,
"text": "\\gamma\\operatorname{-essup} d(x,y)"
},
{
"math_id": 129,
"text": "d(x,y)"
}
] |
https://en.wikipedia.org/wiki?curid=6704603
|
67046831
|
(α/Fe) versus (Fe/H) diagram
|
Graph used in astrophysics
The [α/Fe] versus [Fe/H] diagram is a type of graph commonly used in stellar and galactic astrophysics. It shows the logarithmic ratio of the number densities of diagnostic elements in stellar atmospheres, compared to the solar values. The x-axis represents the abundance of iron (Fe) relative to hydrogen (H), that is, [Fe/H]. The y-axis represents the combined abundance of one or several of the alpha process elements (O, Ne, Mg, Si, S, Ar, Ca, and Ti) relative to iron (Fe), denoted as [α/Fe].
These diagrams enable the assessment of nucleosynthesis channels and galactic evolution in samples of stars as a first-order approximation. They are among the most commonly used tools for Galactic population analysis of the Milky Way. The diagrams use abundance ratios normalised to the Sun (placing the Sun at (0,0) in the diagram). This normalisation allows for the easy identification of stars in the Galactic stellar high-alpha disk (historically known as the Galactic stellar thick disk), typically enhanced in [α/Fe], and stars in the Galactic stellar low-alpha disk (historically known as the Galactic stellar thin disk), with [α/Fe] values as low as the Sun. Furthermore, the diagrams facilitate the identification of stars that were likely born in times or environments significantly different from those of the stellar disk. This includes metal-poor stars (with [Fe/H] < -1), which likely belong to the stellar halo or to accreted features.
History.
George Wallerstein and Beatrice Tinsley were early users of the [α/Fe] vs. [Fe/H] diagrams. In 1962, George Wallerstein noted, based on the analysis of a sample of 34 Galactic field stars, that "the [α/Fe] distribution seems to consist of a normal distribution about zero, plus seven stars with [α/Fe] > 0.20. These may be called [α/Fe]-rich stars."
In 1979, Beatrice Tinsley connected the interpretation of these observations with theory throughout her work on "Stellar lifetimes and abundance ratios in chemical evolution". While discussing oxygen as one of the alpha-process elements, she wrote, 'As anticipated, the observed [O/Fe] excess in metal-poor stars can be explained qualitatively if much of the iron comes from SN I. [...] The essential ingredient in accounting for the [O/Fe] excess is that a significant fraction of oxygen must come from stars with shorter lives than those that make much of the iron.' In 1980, in "Evolution of the Stars and Gas in Galaxies," she said, 'Relative abundances of elements heavier than helium provide information on both nucleosynthesis and galactic evolution [...].'
These relative abundances and the diagrams depicting different relative abundances are now among the most commonly used diagnostic tools of Galactic Archaeology. Bensby et al. (2014) used them to explore the Milky Way disk in the solar neighbourhood. Hayden et al. (2015) used them for their work on the chemical cartography of our Milky Way disk. It has been suggested that the diagram be named for Tinsley and Wallerstein.
Notation.
The diagram depicts two astrophysical quantities of stars: their iron abundance relative to hydrogen, [Fe/H], a tracer of stellar metallicity, and the enrichment of alpha process elements relative to iron, [α/Fe].
The iron abundance is noted as the logarithm of the ratio of a star's iron abundance compared to that of the Sun:
formula_0,
where formula_1 and formula_2 are the number of iron and hydrogen atoms per unit of volume respectively.
It traces the contributions of galactic chemical evolution to the nucleosynthesis of iron. These contributions differ among the birth environments of stars, depending on their star formation histories and starburst strengths. The major synthesis channels of iron are Type Ia and Type II supernovae.
The ratio of alpha process elements to iron, also known as the alpha-enhancement, is written as the logarithm of the alpha process elements O, Ne, Mg, Si, S, Ar, Ca, and Ti to Fe compared to that of the Sun:
formula_3,
where formula_4 and formula_1 are the number of atoms of the alpha process element formula_5 and of iron per unit of volume, respectively.
In practice, not all of these elements can be measured in stellar spectra, and the alpha-enhancement is therefore commonly reported as a simple or error-weighted average of the individual alpha process element abundances.
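A minimal sketch of the normalisation described above, assuming the number densities of the star and of the Sun are supplied as dictionaries keyed by element symbol (the function names are illustrative, not taken from the literature):
```python
import math

ALPHA_ELEMENTS = ("O", "Ne", "Mg", "Si", "S", "Ar", "Ca", "Ti")

def bracket(n_x_star, n_y_star, n_x_sun, n_y_sun):
    """[X/Y] = log10(N_X/N_Y)_star - log10(N_X/N_Y)_sun."""
    return math.log10(n_x_star / n_y_star) - math.log10(n_x_sun / n_y_sun)

def fe_h(star, sun):
    return bracket(star["Fe"], star["H"], sun["Fe"], sun["H"])

def alpha_fe(star, sun):
    """Simple (unweighted) average of [X/Fe] over the alpha elements actually measured."""
    measured = [x for x in ALPHA_ELEMENTS if x in star and x in sun]
    return sum(bracket(star[x], star["Fe"], sun[x], sun["Fe"]) for x in measured) / len(measured)
```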
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "[\\text{Fe}/\\text{H}] = \\log_{10}{\\left(\\frac{N_{\\text{Fe}}}{N_{\\text{H}}}\\right)_\\text{star}} - \\log_{10}{\\left(\\frac{N_{\\text{Fe}}}{N_{\\text{H}}}\\right)_\\text{sun}}"
},
{
"math_id": 1,
"text": "N_\\text{Fe}"
},
{
"math_id": 2,
"text": "N_\\text{H}"
},
{
"math_id": 3,
"text": "[\\alpha/\\text{Fe}] = \\langle \\text{[X/Fe]}\\rangle = \\langle \\log_{10}{\\left(\\frac{N_\\text{X}}{N_{\\text{Fe}}}\\right)_\\text{star}} - \\log_{10}{\\left(\\frac{N_\\text{X}}{N_{\\text{Fe}}}\\right)_\\text{sun}} \\rangle \\text{, where X} \\in \\text{[O, Ne, Mg, Si, S, Ar, Ca, Ti]}"
},
{
"math_id": 4,
"text": "N_\\text{X}"
},
{
"math_id": 5,
"text": "\\text{X}"
}
] |
https://en.wikipedia.org/wiki?curid=67046831
|
670531
|
Perfect graph
|
Graph with tight clique-coloring relation
In graph theory, a perfect graph is a graph in which the chromatic number equals the size of the maximum clique, both in the graph itself and in every induced subgraph. In all graphs, the chromatic number is greater than or equal to the size of the maximum clique, but they can be far apart. A graph is perfect when these numbers are equal, and remain equal after the deletion of arbitrary subsets of vertices.
The perfect graphs include many important families of graphs and serve to unify results relating colorings and cliques in those families. For instance, in all perfect graphs, the graph coloring problem, maximum clique problem, and maximum independent set problem can all be solved in polynomial time, despite their greater complexity for non-perfect graphs. In addition, several important minimax theorems in combinatorics, including Dilworth's theorem and Mirsky's theorem on partially ordered sets, Kőnig's theorem on matchings, and the Erdős–Szekeres theorem on monotonic sequences, can be expressed in terms of the perfection of certain associated graphs.
The perfect graph theorem states that the complement graph of a perfect graph is also perfect. The strong perfect graph theorem characterizes the perfect graphs in terms of certain forbidden induced subgraphs, leading to a polynomial time algorithm for testing whether a graph is perfect.
Definitions and characterizations.
A clique in an undirected graph is a subset of its vertices that are all adjacent to each other, such as the subsets of vertices connected by heavy edges in the illustration. The clique number is the number of vertices in the largest clique: two in the illustrated seven-vertex cycle, and three in the other graph shown. A graph coloring assigns a color to each vertex so that each two adjacent vertices have different colors, also shown in the illustration. The chromatic number of a graph is the minimum number of colors in any coloring. The colorings shown are optimal, so the chromatic number is three for the 7-cycle and four for the other graph shown. The vertices of any clique must have different colors, so the chromatic number is always greater than or equal to the clique number. For some graphs, they are equal; for others, such as the ones shown, they are unequal. The perfect graphs are defined as the graphs for which these two numbers are equal, not just in the graph itself, but in every induced subgraph obtained by deleting some of its vertices.
The perfect graph theorem asserts that the complement graph of a perfect graph is itself perfect. The complement graph has an edge between two vertices if and only if the given graph does not. A clique, in the complement graph, corresponds to an independent set in the given graph. A coloring of the complement graph corresponds to a clique cover, a partition of the vertices of the given graph into cliques. The fact that the complement of a perfect graph formula_0 is also perfect implies that, in formula_0 itself, the independence number (the size of its maximum independent set) equals its clique cover number (the minimum number of cliques needed in a clique cover). More strongly, the same thing is true in every induced subgraph of the complement graph. This provides an alternative and equivalent definition of the perfect graphs: they are the graphs for which, in each induced subgraph, the independence number equals the clique cover number.
The strong perfect graph theorem gives a different way of defining perfect graphs, by their structure instead of by their properties.
It is based on the existence of cycle graphs and their complements within a given graph. A cycle of odd length, greater than three, is not perfect: its clique number is two, but its chromatic number is three. By the perfect graph theorem, the complement of an odd cycle of length greater than three is also not perfect. The complement of a length-5 cycle is another length-5 cycle, but for larger odd lengths the complement is not a cycle; it is called an "anticycle". The strong perfect graph theorem asserts that these are the only forbidden induced subgraphs for the perfect graphs: a graph is perfect if and only if its induced subgraphs include neither an odd cycle nor an odd anticycle of five or more vertices. In this context, induced cycles that are not triangles are called "holes", and their complements are called "antiholes", so the strong perfect graph theorem can be stated more succinctly: a graph is perfect if and only if it has neither an odd hole nor an odd antihole.
These results can be combined in another characterization of perfect graphs: they are the graphs for which the product of the clique number and independence number is greater than or equal to the number of vertices, and for which the same is true for all induced subgraphs. Because the statement of this characterization remains invariant under complementation of graphs, it implies the perfect graph theorem. One direction of this characterization follows easily from the original definition of perfect: the number of vertices in any graph equals the sum of the sizes of the color classes in an optimal coloring, and is less than or equal to the number of colors multiplied by the independence number. In a perfect graph, the number of colors equals the clique number, and can be replaced by the clique number in this inequality. The other direction can be proved directly, but it also follows from the strong perfect graph theorem: if a graph is not perfect, it contains an odd cycle or its complement, and in these subgraphs the product of the clique number and independence number is one less than the number of vertices.
History.
The theory of perfect graphs developed from a 1958 result of Tibor Gallai that in modern language can be interpreted as stating that the complement of a bipartite graph is perfect; this result can also be viewed as a simple equivalent of Kőnig's theorem, a much earlier result relating matchings and vertex covers in bipartite graphs. The first formulation of the concept of perfect graphs more generally was in a 1961 paper by Claude Berge, in German, and the first use of the phrase "perfect graph" appears to be in a 1963 paper of Berge. In these works he unified Gallai's result with several similar results by defining perfect graphs, and he conjectured both the perfect graph theorem and the strong perfect graph theorem. In formulating these concepts, Berge was motivated by the concept of the Shannon capacity of a graph, by the fact that for (co-)perfect graphs it equals the independence number, and by the search for minimal examples of graphs for which this is not the case. Until the strong perfect graph theorem was proven, the graphs described by it (that is, the graphs with no odd hole and no odd antihole) were called "Berge graphs".
The perfect graph theorem was proven by László Lovász in 1972, who in the same year proved the stronger inequality between the number of vertices and the product of the clique number and independence number, without benefit of the strong perfect graph theorem. In 1991, Alfred Lehman won the Fulkerson Prize, sponsored jointly by the Mathematical Optimization Society and American Mathematical Society, for his work on generalizations of the theory of perfect graphs to logical matrices. The conjectured strong perfect graph theorem became the focus of research in the theory of perfect graphs for many years, until its proof was announced in 2002 by Maria Chudnovsky, Neil Robertson, Paul Seymour, and Robin Thomas, and published by them in 2006. This work won its authors the 2009 Fulkerson Prize. The perfect graph theorem has a short proof, but the proof of the strong perfect graph theorem is long and technical, based on a deep structural decomposition of Berge graphs. Related decomposition techniques have also borne fruit in the study of other graph classes, and in particular for the claw-free graphs. The symmetric characterization of perfect graphs in terms of the product of clique number and independence number was originally suggested by Hajnal and proven by Lovász.
Families of graphs.
Many well-studied families of graphs are perfect, and in many cases the fact that these graphs are perfect corresponds to a minimax theorem for some kinds of combinatorial structure defined by these graphs. Examples of this phenomenon include the perfection of bipartite graphs and their line graphs, associated with Kőnig's theorem relating maximum matchings and vertex covers in bipartite graphs, and the perfection of comparability graphs, associated with Dilworth's theorem and Mirsky's theorem on chains and antichains in partially ordered sets. Other important classes of graphs, defined by having a structure related to the holes and antiholes of the strong perfect graph theorem, include the chordal graphs, Meyniel graphs, and their subclasses.
Bipartite graphs and line graphs.
In bipartite graphs (with at least one edge) the chromatic number and clique number both equal two. Their induced subgraphs remain bipartite, so bipartite graphs are perfect. Other important families of graphs are bipartite, and therefore also perfect, including for instance the trees and median graphs. By the perfect graph theorem, maximum independent sets in bipartite graphs have the same size as their minimum clique covers. The maximum independent set is complementary to a minimum vertex cover, a set of vertices that touches all edges. A minimum clique cover consists of a maximum matching (as many disjoint edges as possible) together with one-vertex cliques for all remaining vertices, and its size is the number of vertices minus the number of matching edges. Therefore, this equality can be expressed equivalently as an equality between the size of the maximum matching and the minimum vertex cover in bipartite graphs, the usual formulation of Kőnig's theorem.
A matching, in any graph formula_0, is the same thing as an independent set in the line graph formula_1, a graph that has a vertex for each edge in formula_0 and an edge between two vertices in formula_1 for each pair of edges in formula_0 that share an endpoint. Line graphs have two kinds of cliques: sets of edges in formula_0 with a common endpoint, and triangles in formula_0. In bipartite graphs, there are no triangles, so a clique cover in formula_1 corresponds to a vertex cover in formula_0. Therefore, in line graphs of bipartite graphs, the independence number and clique cover number are equal. Induced subgraphs of line graphs of bipartite graphs are line graphs of subgraphs, so the line graphs of bipartite graphs are perfect. Examples include the rook's graphs, the line graphs of complete bipartite graphs. Every line graph of a bipartite graph is an induced subgraph of a rook's graph.
Because line graphs of bipartite graphs are perfect, their clique number equals their chromatic number. The clique number of the line graph of a bipartite graph is the maximum degree of any vertex of the underlying bipartite graph. The chromatic number of the line graph of a bipartite graph is the chromatic index of the underlying bipartite graph, the minimum number of colors needed to color the edges so that touching edges have different colors. Each color class forms a matching, and the chromatic index is the minimum number of matchings needed to cover all edges. The equality of maximum degree and chromatic index, in bipartite graphs, is another theorem of Dénes Kőnig. In arbitrary simple graphs, they can differ by one; this is Vizing's theorem.
The underlying graph formula_0 of a perfect line graph formula_1 is a line perfect graph. These are the graphs whose biconnected components are bipartite graphs, the complete graph formula_2, and triangular books, sets of triangles sharing an edge. These components are perfect, and their combination preserves perfection, so every line perfect graph is perfect.
The bipartite graphs, their complements, and the line graphs of bipartite graphs and their complements form four basic classes of perfect graphs that play a key role in the proof of the strong perfect graph theorem. According to the structural decomposition of perfect graphs used as part of this proof, every perfect graph that is not already in one of these four classes can be decomposed by partitioning its vertices into subsets, in one of four ways, called a 2-join, the complement of a 2-join, a homogeneous pair, or a skew partition.
Comparability graphs.
A partially ordered set is defined by its set of elements, and a comparison relation formula_3 that is reflexive (for all elements formula_4, formula_5), antisymmetric (if formula_6 and formula_7, then formula_8), and transitive (if formula_6 and formula_9, then formula_10). Elements formula_4 and formula_11 are "comparable" if formula_6 or formula_7, and "incomparable" otherwise. For instance, set inclusion (formula_12) partially orders any family of sets. The comparability graph of a partially ordered set has the set elements as its vertices, with an edge connecting any two comparable elements. Its complement is called an "incomparability graph". Different partial orders may have the same comparability graph; for instance, reversing all comparisons changes the order but not the graph.
Finite comparability graphs (and their complementary incomparability graphs) are always perfect. A clique, in a comparability graph, comes from a subset of elements that are all pairwise comparable; such a subset is called a chain, and it is linearly ordered by the given partial order. An independent set comes from a subset of elements no two of which are comparable; such a subset is called an antichain. For instance, in the illustrated partial order and comparability graph, formula_13 is a chain in the order and a clique in the graph, while formula_14 is an antichain in the order and an independent set in the graph. Thus, a coloring of a comparability graph is a partition of its elements into antichains, and a clique cover is a partition of its elements into chains. Dilworth's theorem, in the theory of partial orders, states that for every finite partial order, the size of the largest antichain equals the minimum number of chains into which the elements can be partitioned. In the language of graphs, this can be stated as: every finite comparability graph is perfect. Similarly, Mirsky's theorem states that for every finite partial order, the size of the largest chain equals the minimum number of antichains into which the elements can be partitioned, or that every finite incomparability graph is perfect. These two theorems are equivalent via the perfect graph theorem, but Mirsky's theorem is easier to prove directly than Dilworth's theorem: if each element is labeled by the size of the largest chain in which it is maximal, then the subsets with equal labels form a partition into antichains, with the number of antichains equal to the size of the largest chain overall. Every bipartite graph is a comparability graph. Thus, Kőnig's theorem can be seen as a special case of Dilworth's theorem, connected through the theory of perfect graphs.
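The labeling argument for Mirsky's theorem is directly algorithmic. The following sketch (an illustration, with a hypothetical less_than predicate supplied by the caller) labels each element by the length of the longest chain in which it is the maximum and groups equal labels into antichains:
```python
from functools import lru_cache

def mirsky_partition(elements, less_than):
    """Partition a finite poset into antichains: label each element by the length of
    the longest chain in which it is the maximum; equal labels form antichains, and
    the number of antichains equals the length of the longest chain."""
    elements = list(elements)

    @lru_cache(maxsize=None)
    def label(x):
        below = [y for y in elements if less_than(y, x)]
        return 1 + max((label(y) for y in below), default=0)

    levels = {}
    for x in elements:
        levels.setdefault(label(x), []).append(x)
    return levels

# Example: divisibility on {1,...,12}; the longest chain (e.g. 1|2|4|8) has length 4,
# so the elements split into 4 antichains.
print(mirsky_partition(range(1, 13), lambda a, b: a != b and b % a == 0))
```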
A permutation graph is defined from a permutation on a totally ordered sequence of elements (conventionally, the integers from formula_15 to formula_16), which form the vertices of the graph. The edges of a permutation graph connect pairs of elements whose ordering is reversed by the given permutation. These are naturally incomparability graphs, for a partial order in which formula_6 whenever formula_4 occurs before formula_11 in both the given sequence and its permutation. The complement of a permutation graph is another permutation graph, for the reverse of the given permutation. Therefore, as well as being incomparability graphs, permutation graphs are comparability graphs. In fact, the permutation graphs are exactly the graphs that are both comparability and incomparability graphs. A clique, in a permutation graph, is a subsequence of elements that appear in increasing order in the given permutation, and an independent set is a subsequence of elements that appear in decreasing order. In any perfect graph, the product of the clique number and independence number is at least the number of vertices; the special case of this inequality for permutation graphs is the Erdős–Szekeres theorem.
The interval graphs are the incomparability graphs of interval orders, orderings defined by sets of intervals on the real line with formula_6 whenever interval formula_4 is completely to the left of interval formula_11. In the corresponding interval graph, there is an edge from formula_4 to formula_11 whenever the two intervals have a point in common. Coloring these graphs can be used to model problems of assigning resources to tasks (such as classrooms to classes) with intervals describing the scheduled time of each task. Both interval graphs and permutation graphs are generalized by the trapezoid graphs. Systems of intervals in which no two are nested produce a more restricted class of graphs, the indifference graphs, the incomparability graphs of semiorders. These have been used to model human preferences under the assumption that, when items have utilities that are very close to each other, they will be incomparable. Intervals where every pair is nested or disjoint produce trivially perfect graphs, the comparability graphs of ordered trees. In them, the independence number equals the number of maximal cliques.
Split graphs and random perfect graphs.
A split graph is a graph that can be partitioned into a clique and an independent set. It can be colored by assigning a separate color to each vertex of a maximal clique, and then coloring each remaining vertex the same as a non-adjacent clique vertex. Therefore, these graphs have equal clique numbers and chromatic numbers, and are perfect. A broader class of graphs, the "unipolar graphs", can be partitioned into a clique and a cluster graph, a disjoint union of cliques. These also include the bipartite graphs, for which the cluster graph is just a single clique. The unipolar graphs and their complements together form the class of "generalized split graphs". Almost all perfect graphs are generalized split graphs, in the sense that the fraction of perfect formula_16-vertex graphs that are generalized split graphs goes to one in the limit as formula_16 grows arbitrarily large.
Other limiting properties of almost all perfect graphs can be determined by studying the generalized split graphs. In this way, it has been shown that almost all perfect graphs contain a Hamiltonian cycle. If formula_17 is an arbitrary graph, the limiting probability that formula_17 occurs as an induced subgraph of a large random perfect graph is 0, 1/2, or 1, respectively as formula_17 is not a generalized split graph, is unipolar or co-unipolar but not both, or is both unipolar and co-unipolar.
Incremental constructions.
Several families of perfect graphs can be characterized by an incremental construction in which the graphs in the family are built up by adding one vertex at a time, according to certain rules, which guarantee that after each vertex is added the graph remains perfect.
If the vertices of a chordal graph are colored in the order of an incremental construction sequence using a greedy coloring algorithm, the result will be an optimal coloring. The reverse of the vertex ordering used in this construction is called an "elimination order". Similarly, if the vertices of a distance-hereditary graph are colored in the order of an incremental construction sequence, the resulting coloring will be optimal. If the vertices of a comparability graph are colored in the order of a linear extension of its underlying partial order, the resulting coloring will be optimal. This property is generalized in the family of perfectly orderable graphs, the graphs for which there exists an ordering that, when restricted to any induced subgraph, causes greedy coloring to be optimal. The cographs are exactly the graphs for which all vertex orderings have this property. Another subclass of perfectly orderable graphs are the complements of tolerance graphs, a generalization of interval graphs.
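The greedy coloring rule referred to above is simple to state in code: process the vertices in the given order and give each vertex the smallest color not already used by its earlier neighbors. A minimal sketch, assuming the graph is given as an adjacency dictionary:
```python
def greedy_coloring(order, neighbors):
    """Color vertices in the given order, giving each vertex the smallest color
    not already used by its previously colored neighbors.  On a chordal graph
    processed along an incremental construction sequence, the text notes that
    this produces an optimal coloring."""
    color = {}
    for v in order:
        used = {color[u] for u in neighbors[v] if u in color}
        c = 0
        while c in used:
            c += 1
        color[v] = c
    return color

# a small chordal example: a triangle with a pendant vertex
neighbors = {0: [1, 2], 1: [0, 2], 2: [0, 1, 3], 3: [2]}
print(greedy_coloring([0, 1, 2, 3], neighbors))  # uses 3 colors, matching the clique number
```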
Strong perfection.
The strongly perfect graphs are graphs in which, in every induced subgraph, there exists an independent set that intersects all maximal cliques. In the Meyniel graphs or "very strongly perfect graphs", every vertex belongs to such an independent set. The Meyniel graphs can also be characterized as the graphs in which every odd cycle of length five or more has at least two chords.
A parity graph is defined by the property that between every two vertices, all induced paths have equal parity: either they are all even in length, or they are all odd in length. These include the distance-hereditary graphs, in which all induced paths between two vertices have the same length, and bipartite graphs, for which all paths (not just induced paths) between any two vertices have equal parity. Parity graphs are Meyniel graphs, and therefore perfect: if a long odd cycle had only one chord, the two parts of the cycle between the endpoints of the chord would be induced paths of different parity. The prism over any parity graph (its Cartesian product with a single edge) is another parity graph, and the parity graphs are the only graphs whose prisms are perfect.
Matrices, polyhedra, and integer programming.
Perfect graphs are closely connected to the theory of linear programming and integer programming. Both linear programs and integer programs are expressed in canonical form as seeking a vector formula_4 that maximizes a linear objective function formula_18, subject to the linear constraints formula_19 and formula_20. Here, formula_21 is given as a matrix, and formula_22 and formula_23 are given as two vectors. Although linear programs and integer programs are specified in this same way, they differ in that, in a linear program, the solution vector formula_4 is allowed to have arbitrary real numbers as its coefficients, whereas in an integer program these unknown coefficients must be integers. This makes a very big difference in the computational complexity of these problems: linear programming can be solved in polynomial time, but integer programming is NP-hard.
When the same given values formula_21, formula_22, and formula_23 are used to define both a linear program and an integer program, they commonly have different optimal solutions. The linear program is called an integral linear program if an optimal solution to the integer program is also optimal for the linear program. (Otherwise, the ratio between the two solution values is called the integrality gap, and is important in analyzing approximation algorithms for the integer program.) Perfect graphs may be used to characterize the (0, 1) matrices formula_21 (that is, matrices where all coefficients are 0 or 1) with the following property: if formula_22 is the all-ones vector, then for all choices of formula_23 the resulting linear program is integral.
As Václav Chvátal proved, every matrix formula_21 with this property is (up to removal of irrelevant "dominated" rows) the maximal clique versus vertex incidence matrix of a perfect graph. This matrix has a column for each vertex of the graph, and a row for each maximal clique, with a coefficient that is one in the columns of vertices that belong to the clique and zero in the remaining columns. The integral linear programs encoded by this matrix seek the maximum-weight independent set of the given graph, with weights given by the vector formula_23.
For a matrix formula_21 defined in this way from a perfect graph, the vectors formula_4 satisfying the system of inequalities formula_19, formula_24 form an integral polytope. It is the convex hull of the indicator vectors of independent sets in the graph, with facets corresponding to the maximal cliques in the graph. The perfect graphs are the only graphs for which the two polytopes defined in this way from independent sets and from maximal cliques coincide.
Algorithms.
In all perfect graphs, the graph coloring problem, maximum clique problem, and maximum independent set problem can all be solved in polynomial time. The algorithm for the general case involves the Lovász number of these graphs. The Lovász number of any graph can be determined by labeling its vertices by high dimensional unit vectors, so that each two non-adjacent vertices have perpendicular labels, and so that all of the vectors lie in a cone with as small an opening angle as possible. Then, the Lovász number is formula_25, where formula_26 is the half-angle of this cone. Despite this complicated definition, an accurate numerical value of the Lovász number can be computed using semidefinite programming, and for any graph the Lovász number is sandwiched between the chromatic number and clique number. Because these two numbers equal each other in perfect graphs, they also equal the Lovász number. Thus, they can be computed by approximating the Lovász number accurately enough and rounding the result to the nearest integer.
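For illustration, the Lovász number can also be computed with an off-the-shelf semidefinite-programming package rather than the ellipsoid method discussed below. The following sketch uses CVXPY and one standard SDP formulation of the Lovász number (maximize the sum of the entries of a positive semidefinite matrix with unit trace whose entries vanish on the edges); by the sandwich property, rounding the Lovász number of the complement of a perfect graph gives its clique number.
```python
import itertools
import cvxpy as cp

def lovasz_theta(n, edges):
    """One standard SDP for the Lovasz number: maximize the sum of the entries of a
    symmetric PSD matrix B with trace 1 whose entries vanish on the edges of the graph."""
    B = cp.Variable((n, n), symmetric=True)
    constraints = [B >> 0, cp.trace(B) == 1]
    for i, j in edges:
        constraints += [B[i, j] == 0, B[j, i] == 0]
    problem = cp.Problem(cp.Maximize(cp.sum(B)), constraints)
    problem.solve()          # any installed SDP-capable solver (e.g. SCS) will do
    return problem.value

def clique_number_perfect(n, edges):
    """For a perfect graph, rounding the Lovasz number of the complement gives the clique number."""
    present = {frozenset(e) for e in edges}
    complement = [e for e in itertools.combinations(range(n), 2) if frozenset(e) not in present]
    return round(lovasz_theta(n, complement))

# the 4-cycle is perfect and has clique number 2
print(clique_number_perfect(4, [(0, 1), (1, 2), (2, 3), (3, 0)]))  # 2
```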
The solution method for semidefinite programs, used by this algorithm, is based on the ellipsoid method for linear programming. It leads to a polynomial time algorithm for computing the chromatic number and clique number in perfect graphs. However, solving these problems using the Lovász number and the ellipsoid method is complicated and has a high polynomial exponent. More efficient combinatorial algorithms are known for many special cases.
This method can also be generalized to find the maximum weight of a clique, in a weighted graph, instead of the clique number. A maximum or maximum weight clique itself, and an optimal coloring of the graph, can also be found by these methods, and a maximum independent set can be found by applying the same approach to the complement of the graph. For instance, a maximum clique can be found by the following algorithm: loop through the vertices of the graph and, for each vertex, test whether deleting it from the current graph leaves the clique number (computed as above) unchanged; if so, delete that vertex permanently, and otherwise keep it. The vertices remaining at the end of this process form a maximum clique; a sketch is given below.
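A sketch of this greedy deletion procedure, written against a hypothetical clique_number oracle (which, for perfect graphs, can be implemented in polynomial time by rounding the Lovász number of the complement, as sketched above):
```python
def maximum_clique_perfect(vertices, edges, clique_number):
    """Greedy deletion: discard any vertex whose removal leaves the clique number
    unchanged; the surviving vertices form a maximum clique.  `clique_number(vs, es)`
    is an assumed oracle returning the clique number of the graph (vs, es)."""
    keep = list(vertices)
    es = {frozenset(e) for e in edges}
    target = clique_number(keep, es)
    for v in list(keep):
        trial_vs = [u for u in keep if u != v]
        trial_es = {e for e in es if v not in e}
        if clique_number(trial_vs, trial_es) == target:
            keep, es = trial_vs, trial_es   # v is not needed for any remaining maximum clique
    return keep
```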
The algorithm for finding an optimal coloring is more complicated, and depends on the duality theory of linear programs, using this clique-finding algorithm as a separation oracle.
Beyond solving these problems, another important computational problem concerning perfect graphs is their recognition, the problem of testing whether a given graph is perfect. For many years the complexities of recognizing Berge graphs and of recognizing perfect graphs were considered separately (as they were not yet known to be equivalent) and both remained open. They were both known to be in co-NP; for Berge graphs, this follows from the definition, while for perfect graphs it follows from the characterization using the product of the clique number and independence number. After the strong perfect graph theorem was proved, Chudnovsky, Cornuéjols, Liu, Seymour, and Vušković discovered a polynomial time algorithm for testing the existence of odd holes or anti-holes. By the strong perfect graph theorem, this can be used to test whether a given graph is perfect, in polynomial time.
Related concepts.
Generalizing the perfect graphs, a graph class is said to be χ-bounded if the chromatic number of the graphs in the class can be bounded by a function of their clique number. The perfect graphs are exactly the graphs for which this function is the identity, both for the graph itself and for all its induced subgraphs.
The equality of the clique number and chromatic number in perfect graphs has motivated the definition of other graph classes, in which other graph invariants are set equal to each other. For instance, the domination perfect graphs are defined as graphs in which, in every induced subgraph, the smallest dominating set (a set of vertices adjacent to all remaining vertices) equals the size of the smallest independent set that is a dominating set. These include, for instance, the claw-free graphs.
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "G"
},
{
"math_id": 1,
"text": "L(G)"
},
{
"math_id": 2,
"text": "K_4"
},
{
"math_id": 3,
"text": "\\le"
},
{
"math_id": 4,
"text": "x"
},
{
"math_id": 5,
"text": "x\\le x"
},
{
"math_id": 6,
"text": "x\\le y"
},
{
"math_id": 7,
"text": "y\\le x"
},
{
"math_id": 8,
"text": "x=y"
},
{
"math_id": 9,
"text": "y\\le z"
},
{
"math_id": 10,
"text": "x\\le z"
},
{
"math_id": 11,
"text": "y"
},
{
"math_id": 12,
"text": "\\subseteq"
},
{
"math_id": 13,
"text": "\\{A,B,C\\}"
},
{
"math_id": 14,
"text": "\\{C,D\\}"
},
{
"math_id": 15,
"text": "1"
},
{
"math_id": 16,
"text": "n"
},
{
"math_id": 17,
"text": "H"
},
{
"math_id": 18,
"text": "c\\cdot x"
},
{
"math_id": 19,
"text": "x\\ge 0"
},
{
"math_id": 20,
"text": "Ax\\le b"
},
{
"math_id": 21,
"text": "A"
},
{
"math_id": 22,
"text": "b"
},
{
"math_id": 23,
"text": "c"
},
{
"math_id": 24,
"text": "Ax\\le 1"
},
{
"math_id": 25,
"text": "1/\\cos^2\\theta"
},
{
"math_id": 26,
"text": "\\theta"
},
{
"math_id": 27,
"text": "v"
}
] |
https://en.wikipedia.org/wiki?curid=670531
|
67054356
|
Judith Gersting
|
American mathematician, computer scientist, and textbook author
Judith Lee MacKenzie Gersting (born August 20, 1940) is an American mathematician, computer scientist, and textbook author. She is a professor emerita of computer science at Indiana University–Purdue University Indianapolis and at the University of Hawaiʻi at Hilo.
Education and career.
Gersting graduated from Stetson University in 1962, and completed a Ph.D. in mathematics in 1969 at Arizona State University. Her dissertation, "Some Results on formula_0-Regressive Isols", concerned recursive function theory and was supervised by Matt Hassett.
After holding a faculty position in the department of mathematical sciences at Indiana University–Purdue University Indianapolis (IUPUI) for ten years, and becoming a full professor there, she spent a year at the University of Central Florida before returning to IUPUI in 1981 as professor of mathematics and acting chair of the department of computer and information science. She came to the University of Hawaiʻi at Hilo in 1990, and chaired the computer science department there for many years. After retiring from the University of Hawaiʻi, she became a part-time faculty member at IUPUI.
Books.
Gersting's books include:
With Henry M. Walker, she was co-chair and co-editor of the annual symposium on computer science education of SIGCSE in 2002.
Recognition.
The University of Hawaii system awarded Gersting the Regents’ Excellence in Teaching Award in 2006.
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "t"
}
] |
https://en.wikipedia.org/wiki?curid=67054356
|
67056752
|
Fractional Pareto efficiency
|
In economics and computer science, Fractional Pareto efficiency or Fractional Pareto optimality (fPO) is a variant of Pareto efficiency used in the setting of fair allocation of discrete objects. An allocation of objects is called "discrete" if each item is wholly allocated to a single agent; it is called "fractional" if some objects are split among two or more agents. A discrete allocation is called Pareto-efficient (PO) if it is not Pareto-dominated by any discrete allocation; it is called fractionally Pareto-efficient (fPO) if it is not Pareto-dominated by any discrete "or fractional" allocation. So fPO is a stronger requirement than PO: every fPO allocation is PO, but not every PO allocation is fPO.
Formal definitions.
There is a set of "n" agents and a set of "m" objects. An "allocation" is determined by an "n"-by-"m" matrix z, where each element "z"["i","o"] is a real number between 0 and 1. It represents the fraction that agent "i" gets from object "o". For every object "o", the sum of all elements in column "o" equals 1, since the entire object is allocated.
An allocation is called "discrete" or "integral" if all its elements "z"["i","o"] are either 0 or 1; that is, each object is allocated entirely to a single agent.
An allocation y is called a Pareto improvement of an allocation z, if the utility of all agents in y is at least as large as in z, and the utility of some agents in y is strictly larger than in z. In this case, we also say that y Pareto-dominates z.
If an allocation z is not Pareto-dominated by any discrete allocation, then it is called discrete Pareto-efficient, or simply Pareto-efficient (usually abbreviated PO).
If z is not Pareto-dominated by any allocation at all - whether discrete or fractional - then it is called fractionally Pareto-efficient (usually abbreviated fPO).
Examples.
PO does not imply fPO.
Suppose there are two agents and two items. Alice values the items at 3, 2 and George values them at 4, 1. Let z be the allocation giving the first item to Alice and the second to George. The utility profile of z is (3,1). No discrete allocation Pareto-dominates z: giving both items to one agent leaves the other with utility 0, and swapping the two items lowers Alice's utility from 3 to 2. Hence z is PO. However, z is not fPO: if Alice transfers half of the first item to George in exchange for the entire second item, the resulting fractional allocation has utility profile (3.5, 2), which Pareto-dominates (3,1).
The price of fPO.
The following example shows the "price" of fPO. The integral allocation maximizing the product of utilities (also called the Nash welfare) is PE but not fPO. Moreover, the product of utilities in any fPO allocation is at most 1/3 of the maximum product. There are five goods {h1,h2,g1,g2,g3} and 3 agents with the following values (where "C" is a large constant and "d" is a small positive constant):
A max-product integral allocation is {h1},{h2},{g1,g2,g3}, with product formula_0. It is not fPO, since it is dominated by a fractional allocation: agent 3 can give g1 to agent 1 (losing 1-"d" utility) in return for a fraction of h1 that both agents value at 1-"d"/2. This trade strictly improves the welfare of both agents. Moreover, in "any" integral fPO allocation, there exists an agent A"i" who receives only (at most) the good "gi"; otherwise a similar trade could be done. Therefore, a max-product fPO allocation is {g1,h1},{g2,h2},{g3}, with product formula_1. When "C" is sufficiently large and "d" is sufficiently small, the product ratio approaches 1/3.
No fPO allocation is almost-equitable.
The following example (Sec. 6.6) shows that fPO is incompatible with a fairness notion known as EQx, equitability up to any good. There are three goods {g1,g2,g3} and two agents with the following values (where "e" is a small positive constant):
Only two discrete allocations are EQx:
The same instance shows that fPO is incompatible with a fairness notion known as EFx, envy-freeness up to any good (Rem. 5).
Characterization.
Maximizing a weighted sum of utilities.
An allocation is fPO if-and-only-if it maximizes a weighted sum of the agents' utilities. Formally, let w be a vector of size "n", assigning a weight "wi" to every agent "i". We say that an allocation z is w-maximal if one of the following (equivalent) properties holds: (a) z maximizes the weighted sum of utilities formula_5, where formula_6 is the linear utility of agent "i"; (b) every object "o" is consumed only by agents for whom the weighted value formula_2 is largest, that is, formula_3 implies formula_4 for every agent "j".
An allocation is fPO if-and-only-if it is w-maximal for some vector w of strictly-positive weights. This equivalence was proved for goods by Negishi and Varian. The proof was extended for bads by Branzei and Sandomirskiy. It was later extended to general valuations (mixtures of goods and bads) by Sandomirskiy and Segal-Halevi.Lem.2.3, App.A
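Because the utilities are linear, w-maximality can be verified object by object: an allocation is w-maximal exactly when each object is consumed only by agents whose weighted value for it is maximal. A minimal sketch of this check (the function name is illustrative; values and fractions are nested lists):
```python
def is_w_maximal(v, z, w, eps=1e-9):
    """Check the per-object condition for w-maximality: z maximizes the weighted
    welfare iff every object is consumed only by agents whose weighted value
    w[i]*v[i][o] for that object is maximal.
    v[i][o]: value of agent i for object o; z[i][o]: fraction of o given to i."""
    n, m = len(v), len(v[0])
    for o in range(m):
        best = max(w[i] * v[i][o] for i in range(n))
        for i in range(n):
            if z[i][o] > eps and w[i] * v[i][o] < best - eps:
                return False
    return True
```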
No improvements cycles in the consumption graph.
An allocation is fPO if-and-only-if its "directed consumption graph" does not contain cycles with product smaller than 1. The directed consumption graph of an allocation z is a bipartite graph in which the nodes on one side are agents, the nodes on the other side are objects, and the directed edges represent exchanges: an edge incoming into agent "i" represents objects that agent "i" would like to accept (goods he does not own, or bads he owns); an edge outgoing from agent "i" represents objects that agent "i" can pay with (goods he owns, or bads he does not own). The weight of edge "i" -> "o" is |"vi,o"| and the weight of edge "o" -> "i" is 1/|"vi,o"|.
An allocation is called "malicious" if some object "o" is consumed by some agent "i" with "vi,o" ≤ 0, even though there is some other agent "j" with "vj,o" > 0; or, some object "o" is consumed by some agent "i" with "vi,o" < 0, even though there is some other agent "j" with "vj,o" ≥ 0. Clearly, every malicious allocation can be Pareto-improved by moving the object "o" from agent "i" to agent "j". Therefore, non-maliciousness is a necessary condition for fPO.
An allocation is fPO if-and-only-if it is non-malicious, and its directed consumption graph has no directed cycle in which the product of weights is smaller than 1. This equivalence was proved for goods in the context of cake-cutting by Barbanel. It was extended for bads by Branzei and Sandomirskiy. It was later extended to general valuations (mixtures of goods and bads) by Sandomirskiy and Segal-Halevi.Lem.2.1, App.A
Relation to market equilibrium.
In a Fisher market, when all agents have linear utilities, any market equilibrium is fPO. This is the first welfare theorem.
Algorithms.
Deciding whether a given allocation is fPO.
The following algorithm can be used to decide whether a given allocation z is fPO: first check that z is non-malicious; then construct its directed consumption graph and check whether it contains a directed cycle in which the product of the edge weights is smaller than 1 (replacing each weight by its logarithm turns this into negative-cycle detection, which can be done with the Bellman–Ford algorithm). By the characterization above, z is fPO if and only if no such cycle exists.
The run-time of the algorithm is O(|"V"||"E"|). Here, |"V"|="m"+"n" and |"E"|≤"m n", where "m" is the number of objects and "n" the number of agents. Therefore, fPO can be decided in time O("m n" ("m"+"n")).Lem.2.2, App.A
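The cycle test at the heart of this procedure reduces, after taking logarithms of the edge weights, to negative-cycle detection, for which Bellman–Ford gives the O(|"V"||"E"|) bound quoted above. A minimal sketch, assuming the directed consumption graph has already been built with the weights described in the characterization section:
```python
import math

def has_improving_cycle(nodes, weighted_edges, eps=1e-12):
    """Given the directed consumption graph as edges (u, v, weight) with positive
    weights, decide whether some directed cycle has weight product smaller than 1.
    Taking logarithms turns this into negative-cycle detection via Bellman-Ford."""
    log_edges = [(u, v, math.log(w)) for u, v, w in weighted_edges]
    dist = {u: 0.0 for u in nodes}          # a virtual source at distance 0 from every node
    for _ in range(len(nodes) - 1):
        for u, v, w in log_edges:
            if dist[u] + w < dist[v] - eps:
                dist[v] = dist[u] + w
    # one extra relaxation pass: any further improvement certifies a negative cycle
    return any(dist[u] + w < dist[v] - eps for u, v, w in log_edges)
```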
An alternative algorithm is to find a vector w such that the given allocation is w-maximizing. This can be done by solving a linear program. The run-time is weakly-polynomial.
In contrast, deciding whether a given discrete allocation is PO is co-NP-complete. Therefore, if the divider claims that an allocation is fPO, the agents can efficiently verify this claim; but if the divider claims that an allocation is PO, it may be impossible to verify this claim efficiently.
Finding a dominating fPO allocation.
Finding an fPO allocation is easy. For example, it can be found using serial dictatorship: agent 1 takes all objects for which he has positive value; then agent 2 takes all remaining objects for which he has positive value; and so on.
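A minimal sketch of the serial-dictatorship procedure just described (objects that no agent values positively are simply left unallocated here, a case the text does not discuss):
```python
def serial_dictatorship(v):
    """v[i][o]: value of agent i for object o.  Each agent, in turn, takes every
    remaining object that has positive value to them; the text states that the
    result is fPO.  Objects valued non-positively by everyone stay unallocated."""
    n, m = len(v), len(v[0])
    z = [[0] * m for _ in range(n)]
    remaining = set(range(m))
    for i in range(n):
        taken = {o for o in remaining if v[i][o] > 0}
        for o in taken:
            z[i][o] = 1
        remaining -= taken
    return z
```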
A more interesting challenge is: given an initial allocation z (that may be fractional, and not be fPO), find an fPO allocation z* that is a Pareto-improvement of z. This challenge can be solved for "n" agents and "m" objects with mixed (positive and negative) valuations, in strongly-polynomial time, using O("n"2 "m"2 ("n"+"m")) operations. Moreover, in the computed allocation there are at most "n"-1 sharings.Lem.2.5, App.A
If the initial allocation z is the equal split, then the final allocation z* is proportional. Therefore, the above lemma implies an efficient algorithm for finding a fractional PROP+fPO allocation, with at most "n"-1 sharings. Similarly, if z is an unequal split, then z* is weighted-proportional (proportional for agents with different entitlements). This implies an efficient algorithm for finding a fractional WPROP+fPO allocation with at most "n"-1 sharings.
Combining the above lemma with more advanced algorithms can yield, in strongly-polynomial time, allocations that are fPO and envy-free, with at most "n"-1 sharings.Cor.2.6
Enumerating the fPO allocations.
There is an algorithm that enumerates all consumption graphs that correspond to fPO allocations.Prop.3.7 The run-time of the algorithm is formula_7, where "D" is the degree of "degeneracy" of the instance ("D"="m"-1 for identical valuations; "D"=0 for non-degenerate valuations, where for every two agents, the value-ratios of all "m" objects are different). In particular, when "n" is constant and "D"=0, the run-time of the algorithm is strongly-polynomial.
Finding fair and fPO allocations.
Several recent works have considered the existence and computation of a discrete allocation that is both fPO and satisfies a certain notion of fairness.
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "C^2\\cdot (3-2d)"
},
{
"math_id": 1,
"text": "(C+1)^2"
},
{
"math_id": 2,
"text": "w_i\\cdot v_{i,o}"
},
{
"math_id": 3,
"text": "z_{i,o}>0 "
},
{
"math_id": 4,
"text": "w_i v_{i,o} \\geq w_j v_{j,o} "
},
{
"math_id": 5,
"text": "\\sum_i w_i\\cdot u_i(\\mathbf{z})"
},
{
"math_id": 6,
"text": "u_i(\\mathbf{z}) := \\sum_{o} v_{i,o}\\cdot z_{i,o} = "
},
{
"math_id": 7,
"text": "O(3^{\\frac{(n-1)n}{2}\\cdot D} \\cdot m^{\\frac{(n-1)n}{2}+2})"
}
] |
https://en.wikipedia.org/wiki?curid=67056752
|
67059821
|
Shift graph
|
In graph theory, the shift graph "G""n","k" for formula_0 is the graph whose vertices correspond to the ordered formula_1-tuples formula_2 with formula_3 and where two vertices formula_4 are adjacent if and only if formula_5 or formula_6 for all formula_7. Shift graphs are triangle-free, and for fixed formula_1 their chromatic number tends to infinity with formula_8. It is natural to enhance the shift graph formula_9 with the orientation formula_10 if formula_11 for all formula_12. Let formula_13 be the resulting directed shift graph.
Note that formula_14 is the directed line graph of the transitive tournament corresponding to the identity permutation. Moreover, formula_15 is the directed line graph of formula_13 for all formula_16.
Representation of shift graphs.
The shift graph formula_18 can be represented using the complete graph formula_22 in the following way: Consider the numbers from formula_23 to formula_8 ordered on the line and draw line segments between every pair of numbers. Every line segment corresponds to the formula_24-tuple of its first and last number, which are exactly the vertices of formula_18. Two such segments are adjacent in formula_18 exactly when the end point of one line segment is the starting point of the other. Note that this is more restrictive than adjacency in the line graph of formula_22: for example, the segments formula_25 and formula_26 share their starting point and are therefore adjacent in the line graph, but they are not adjacent in formula_18, so formula_18 is a subgraph of the line graph of formula_22.
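A small sketch that builds the vertex and edge sets of formula_9 directly from the definition (plain Python, no graph library assumed):
from itertools import combinations

def shift_graph(n, k):
    """Vertices and (undirected) edges of the shift graph G_{n,k}."""
    vertices = list(combinations(range(1, n + 1), k))   # increasing k-tuples
    edges = []
    for a, b in combinations(vertices, 2):
        # a and b are adjacent iff one is a "shift" of the other;
        # orienting a -> b when a[1:] == b[:-1] gives the directed shift graph
        if a[1:] == b[:-1] or b[1:] == a[:-1]:
            edges.append((a, b))
    return vertices, edges

print(shift_graph(4, 2)[1])   # the four edges of G_{4,2}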
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": " n,k \\in \\mathbb{N},\\ n > 2k > 0 "
},
{
"math_id": 1,
"text": "k"
},
{
"math_id": 2,
"text": " a = (a_1, a_2, \\dotsc, a_k)"
},
{
"math_id": 3,
"text": "1 \\leq a_1 < a_2 < \\cdots < a_k \\leq n "
},
{
"math_id": 4,
"text": " a, b "
},
{
"math_id": 5,
"text": "a_i = b_{i+1}"
},
{
"math_id": 6,
"text": "a_{i+1} = b_i"
},
{
"math_id": 7,
"text": " 1 \\leq i \\leq k-1 "
},
{
"math_id": 8,
"text": "n"
},
{
"math_id": 9,
"text": "G_{n,k}"
},
{
"math_id": 10,
"text": "a \\to b"
},
{
"math_id": 11,
"text": "a_{i+1}=b_i"
},
{
"math_id": 12,
"text": "1\\leq i\\leq k-1"
},
{
"math_id": 13,
"text": "\\overrightarrow{G}_{n,k}"
},
{
"math_id": 14,
"text": "\\overrightarrow{G}_{n,2}"
},
{
"math_id": 15,
"text": "\\overrightarrow{G}_{n,k+1}"
},
{
"math_id": 16,
"text": "k \\geq 2"
},
{
"math_id": 17,
"text": "2k+1"
},
{
"math_id": 18,
"text": "G_{n,2}"
},
{
"math_id": 19,
"text": "\\chi(G_{n,k}) = (1 + o(1))\\log\\log\\cdots\\log n "
},
{
"math_id": 20,
"text": "{\\displaystyle k-1}"
},
{
"math_id": 21,
"text": "G_{n,3}"
},
{
"math_id": 22,
"text": "K_n"
},
{
"math_id": 23,
"text": "1"
},
{
"math_id": 24,
"text": "2"
},
{
"math_id": 25,
"text": "\\{1,2\\}"
},
{
"math_id": 26,
"text": "\\{1,3\\}"
}
] |
https://en.wikipedia.org/wiki?curid=67059821
|
6706053
|
Varifold
|
In mathematics, a varifold is, loosely speaking, a measure-theoretic generalization of the concept of a differentiable manifold, by replacing differentiability requirements with those provided by rectifiable sets, while maintaining the general algebraic structure usually seen in differential geometry. Varifolds generalize the idea of a rectifiable current, and are studied in geometric measure theory.
Historical note.
Varifolds were first introduced by Laurence Chisholm Young in , under the name "generalized surfaces". Frederick J. Almgren Jr. slightly modified the definition in his mimeographed notes and coined the name "varifold": he wanted to emphasize that these objects are substitutes for ordinary manifolds in problems of the calculus of variations. The modern approach to the theory was based on Almgren's notes and laid down by William K. Allard, in the paper .
Definition.
Given an open subset formula_0 of Euclidean space formula_1, an "m"-dimensional varifold on formula_0 is defined as a Radon measure on the set
formula_2
where formula_3 is the Grassmannian of all "m"-dimensional linear subspaces of an "n"-dimensional vector space. The Grassmannian is used to allow the construction of analogs to differential forms as duals to vector fields in the approximate tangent space of the set formula_0.
The particular case of a rectifiable varifold is the data of an "m"-rectifiable set "M" (which is measurable with respect to the "m"-dimensional Hausdorff measure), together with a density function defined on "M", that is, a positive function θ which is measurable and locally integrable with respect to the "m"-dimensional Hausdorff measure. It defines a Radon measure "V" on the Grassmannian bundle of formula_1
formula_4
where formula_5 is the set of points of "M" at which the pair formed by the point and its approximate tangent space lies in "A", and formula_6 denotes the "m"-dimensional Hausdorff measure.
Rectifiable varifolds are weaker objects than locally rectifiable currents: they do not have any orientation. Replacing "M" with more regular sets, one easily sees that differentiable submanifolds are particular cases of rectifiable varifolds.
Due to the lack of orientation, there is no boundary operator defined on the space of varifolds.
Notes.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "\\Omega"
},
{
"math_id": 1,
"text": "\\mathbb{R}^n"
},
{
"math_id": 2,
"text": "\\Omega \\times G(n,m)"
},
{
"math_id": 3,
"text": "G(n,m)"
},
{
"math_id": 4,
"text": "V(A) := \\int_{\\Gamma_{M,A}}\\!\\!\\!\\!\\!\\!\\!\\theta(x) \\mathrm{d} \\mathcal{H}^m(x)"
},
{
"math_id": 5,
"text": "\\Gamma_{M,A}=M \\cap \\{x : (x, \\mathrm{Tan}^m(x,M)) \\in A \\}"
},
{
"math_id": 6,
"text": " \\mathcal{H}^m(x)"
},
{
"math_id": 7,
"text": "m"
}
] |
https://en.wikipedia.org/wiki?curid=6706053
|
6706108
|
Schramm–Loewner evolution
|
In probability theory, the Schramm–Loewner evolution with parameter "κ", also known as stochastic Loewner evolution (SLE"κ"), is a family of random planar curves that have been proven to be the scaling limit of a variety of two-dimensional lattice models in statistical mechanics. Given a parameter "κ" and a domain in the complex plane "U", it gives a family of random curves in "U", with "κ" controlling how much the curve turns. There are two main variants of SLE, "chordal SLE" which gives a family of random curves from two fixed boundary points, and "radial SLE", which gives a family of random curves from a fixed boundary point to a fixed interior point. These curves are defined to satisfy conformal invariance and a domain Markov property.
It was discovered by Oded Schramm (2000) as a conjectured scaling limit of the planar uniform spanning tree (UST) and the planar loop-erased random walk (LERW) probabilistic processes, and developed by him together with Greg Lawler and Wendelin Werner in a series of joint papers.
Besides UST and LERW, the Schramm–Loewner evolution is conjectured or proven to describe the scaling limit of various stochastic processes in the plane, such as critical percolation, the critical Ising model, the double-dimer model, self-avoiding walks, and other critical statistical mechanics models that exhibit conformal invariance. The SLE curves are the scaling limits of interfaces and other non-self-intersecting random curves in these models. The main idea is that the conformal invariance and a certain Markov property inherent in such stochastic processes together make it possible to encode these planar curves into a one-dimensional Brownian motion running on the boundary of the domain (the driving function in Loewner's differential equation). This way, many important questions about the planar models can be translated into exercises in Itô calculus. Indeed, several mathematically non-rigorous predictions made by physicists using conformal field theory have been proven using this strategy.
The Loewner equation.
If formula_0 is a simply connected, open complex domain not equal to formula_1, and formula_2 is a simple curve in formula_0 starting on the boundary (a continuous function with formula_3 on the boundary of formula_0 and formula_4 a subset of formula_0), then for each formula_5, the complement formula_6
of formula_7 is simply connected and therefore conformally isomorphic to formula_0 by the Riemann mapping theorem. If formula_8 is a suitable normalized isomorphism from formula_0 to formula_9, then it satisfies a differential equation found by Loewner in his work on the Bieberbach conjecture.
Sometimes it is more convenient to use the inverse function formula_10 of formula_8, which is a conformal mapping from formula_9 to formula_0.
In Loewner's equation, formula_11, formula_5, and the boundary values at time formula_12 are formula_13 or
formula_14. The equation depends on a driving function formula_15 taking values in the boundary of formula_0. If formula_0
is the unit disk and the curve formula_2 is parameterized by "capacity", then Loewner's equation is
formula_16 or formula_17
When formula_0 is the upper half plane the Loewner equation differs from this by changes of variable and is
formula_18 or formula_19
The driving function formula_20 and the curve formula_2 are related by
formula_21
where formula_8 and formula_10 are extended by continuity.
Example.
Let formula_0 be the upper half plane and consider an SLE0, so the driving function formula_20 is a Brownian motion of diffusivity zero. The function formula_20 is thus identically zero almost surely and
formula_22
formula_23
formula_24
formula_9 is the upper half-plane with the line from 0 to formula_25 removed.
Schramm–Loewner evolution.
Schramm–Loewner evolution is the random curve "γ" given by the Loewner equation as in the previous section, for the driving function
formula_26
where "B"("t") is Brownian motion on the boundary of "D", scaled by some real "κ". In other words, Schramm–Loewner evolution is a probability measure on planar curves, given as the image of Wiener measure under this map.
In general the curve γ need not be simple, and the domain "Dt" is not the complement of "γ"([0,"t"]) in "D", but is instead the unbounded component of the complement.
There are two versions of SLE, using two families of curves, each depending on a non-negative real parameter "κ":
SLE depends on a choice of Brownian motion on the boundary of the domain, and there are several variations depending on what sort of Brownian motion is used: for example it might start at a fixed point, or start at a uniformly distributed point on the unit circle, or might have a built in drift, and so on. The parameter "κ" controls the rate of diffusion of the Brownian motion, and the behavior of SLE depends critically on its value.
The two domains most commonly used in Schramm–Loewner evolution are the upper half plane and the unit disk. Although the Loewner differential equation in these two cases look different, they are equivalent up to changes of variables as the unit disk and the upper half plane are conformally equivalent. However a conformal equivalence between them does not preserve the Brownian motion on their boundaries used to drive Schramm–Loewner evolution.
Special values of "κ".
When SLE corresponds to some conformal field theory, the parameter "κ" is related to the central charge "c"
of the conformal field theory by
formula_27
Each value of "c" < 1 corresponds to two values of "κ", one value "κ" between 0 and 4, and a "dual" value 16/"κ" greater than 4. (see )
showed that the Hausdorff dimension of the paths (with probability 1) is equal to min(2, 1 + "κ"/8).
Left passage probability formulas for SLE"κ".
The probability of chordal SLE"κ" "γ" being on the left of fixed point formula_28 was computed by
formula_29
where formula_30 is the Gamma function and formula_31 is the hypergeometric function. This was derived by using the martingale property of
formula_32
and Itô's lemma to obtain the following partial differential equation for formula_33
formula_34
For "κ" = 4, the RHS is formula_35, which was used in the construction of the harmonic explorer, and for "κ" = 6, we obtain Cardy's formula, which was used by Smirnov to prove conformal invariance in percolation.
Applications.
used SLE6 to prove the conjecture of that the boundary of planar Brownian motion has fractal dimension 4/3.
Critical percolation on the triangular lattice was proved to be related to SLE6 by Stanislav Smirnov. Combined with earlier work of Harry Kesten, this led to the determination of many of the critical exponents for percolation. This breakthrough, in turn, allowed further analysis of many aspects of this model.
Loop-erased random walk was shown to converge to SLE2 by Lawler, Schramm and Werner. This allowed derivation of many quantitative properties of loop-erased random walk (some of which were derived earlier by Richard Kenyon). The related random Peano curve outlining the uniform spanning tree was shown to converge to SLE8.
Rohde and Schramm showed that "κ" is related to the fractal dimension of a curve by the following relation
formula_36
Simulation.
Computer programs (in MATLAB) to simulate Schramm–Loewner evolution planar curves have been presented in a GitHub repository.
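Independently of those MATLAB programs, the discretisation behind such simulations is short enough to sketch. The piecewise-constant ("vertical slit") scheme composes the inverse incremental maps of the half-plane Loewner equation driven by formula_26; the code below is a minimal illustrative Python version, not taken from that repository:
import numpy as np

def chordal_sle_trace(kappa, t_max=1.0, n_steps=1500, seed=0):
    """Approximate points of a chordal SLE_kappa trace in the upper half-plane."""
    rng = np.random.default_rng(seed)
    dt = t_max / n_steps
    driving = np.cumsum(np.sqrt(kappa * dt) * rng.standard_normal(n_steps))
    trace = np.empty(n_steps, dtype=complex)
    for k in range(n_steps):
        z = driving[k] + 2j * np.sqrt(dt)          # tip of the newest slit
        for j in range(k - 1, -1, -1):             # apply the inverse incremental maps
            s = np.sqrt((z - driving[j]) ** 2 - 4 * dt)
            if s.imag < 0:                         # choose the root in the upper half-plane
                s = -s
            z = driving[j] + s
        trace[k] = z
    return trace                                   # O(n_steps^2) work overall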
References.
<templatestyles src="Reflist/styles.css" />
Further reading.
<templatestyles src="Refbegin/styles.css" />
|
[
{
"math_id": 0,
"text": "D"
},
{
"math_id": 1,
"text": "\\mathbb{C}"
},
{
"math_id": 2,
"text": "\\gamma"
},
{
"math_id": 3,
"text": "\\gamma(0)"
},
{
"math_id": 4,
"text": "\\gamma((0,\\infty))"
},
{
"math_id": 5,
"text": "t\\geq 0"
},
{
"math_id": 6,
"text": "D_t = D\\smallsetminus\\gamma([0,t])"
},
{
"math_id": 7,
"text": "\\gamma([0,t])"
},
{
"math_id": 8,
"text": "f_t"
},
{
"math_id": 9,
"text": "D_t"
},
{
"math_id": 10,
"text": "g_t"
},
{
"math_id": 11,
"text": "z\\in D"
},
{
"math_id": 12,
"text": "t=0"
},
{
"math_id": 13,
"text": "f_0(z)=z"
},
{
"math_id": 14,
"text": "g_0(z)=z"
},
{
"math_id": 15,
"text": "\\zeta(t)"
},
{
"math_id": 16,
"text": " \\frac{\\partial f_t(z)}{\\partial t} = -z f^\\prime_t(z)\\frac{\\zeta(t)+z}{\\zeta(t)-z}"
},
{
"math_id": 17,
"text": " \\dfrac{\\partial g_t(z)}{\\partial t} = g_t(z)\\dfrac{\\zeta(t)+g_t(z)}{\\zeta(t)-g_t(z)}."
},
{
"math_id": 18,
"text": "\\frac{\\partial f_t(z)}{\\partial t} = \\frac{ 2f_t^\\prime(z)}{\\zeta(t)-z}"
},
{
"math_id": 19,
"text": "\\dfrac{\\partial g_t(z)}{\\partial t} = \\dfrac{2}{g_t(z)-\\zeta(t)}."
},
{
"math_id": 20,
"text": "\\zeta"
},
{
"math_id": 21,
"text": " f_t(\\zeta(t)) = \\gamma(t) \\text{ or } \\zeta(t) = g_t(\\gamma(t)) "
},
{
"math_id": 22,
"text": "f_t(z) = \\sqrt{z^2-4t}"
},
{
"math_id": 23,
"text": "g_t(z) = \\sqrt{z^2+4t}"
},
{
"math_id": 24,
"text": "\\gamma(t) = 2i\\sqrt{t}"
},
{
"math_id": 25,
"text": "2i\\sqrt{t}"
},
{
"math_id": 26,
"text": "\\zeta(t)=\\sqrt{\\kappa}B(t)"
},
{
"math_id": 27,
"text": "c = \\frac{(8-3\\kappa)(\\kappa-6)}{2\\kappa}."
},
{
"math_id": 28,
"text": "x_{0}+iy_{0}=z_{0}\\in \\mathbb{H}"
},
{
"math_id": 29,
"text": "\\mathbb{P}[\\gamma \\text{ passes to the left } z_0]=\\frac{1}{2}+\\frac{\\Gamma(\\frac{4}{\\kappa})}{\\sqrt{\\pi} \\, \\Gamma(\\frac{8-\\kappa}{2\\kappa})}\\frac{x_0}{y_0} \\, _2F_1 \\left(\\frac{1}{2},\\frac{4}{\\kappa}, \\frac{3}{2}, - \\left(\\frac{x_0}{y_0}\\right)^2 \\right)"
},
{
"math_id": 30,
"text": "\\Gamma"
},
{
"math_id": 31,
"text": "_2F_{1}(a,b,c,d)"
},
{
"math_id": 32,
"text": "h(x,y):=\\mathbb{P}[\\gamma \\text{ passes to the left } x+iy]"
},
{
"math_id": 33,
"text": "w:=\\tfrac{x}{y}"
},
{
"math_id": 34,
"text": "\\frac{\\kappa}{2}\\partial_{ww}h(w)+\\frac{4w}{w^2+1}\\partial_w h=0."
},
{
"math_id": 35,
"text": "1-\\tfrac{1}{\\pi}\\arg(z_0)"
},
{
"math_id": 36,
"text": "d = 1 + \\frac{\\kappa}{8}."
}
] |
https://en.wikipedia.org/wiki?curid=6706108
|
67064736
|
Flow-based generative model
|
Statistical model used in machine learning
<templatestyles src="Machine learning/styles.css"/>
A flow-based generative model is a generative model used in machine learning that explicitly models a probability distribution by leveraging normalizing flow, which is a statistical method using the change-of-variable law of probabilities to transform a simple distribution into a complex one.
The direct modeling of likelihood provides many advantages. For example, the negative log-likelihood can be directly computed and minimized as the loss function. Additionally, novel samples can be generated by sampling from the initial distribution, and applying the flow transformation.
In contrast, many alternative generative modeling methods such as variational autoencoder (VAE) and generative adversarial network do not explicitly represent the likelihood function.
Method.
Let formula_0 be a (possibly multivariate) random variable with distribution formula_1.
For formula_2, let formula_3 be a sequence of random variables transformed from formula_0. The functions formula_4 should be invertible, i.e. the inverse function formula_5 exists. The final output formula_6 models the target distribution.
The log likelihood of formula_6 is (see derivation):
formula_7
To efficiently compute the log likelihood, the functions formula_4 should be 1. easy to invert, and 2. easy to compute the determinant of its Jacobian. In practice, the functions formula_4 are modeled using deep neural networks, and are trained to minimize the negative log-likelihood of data samples from the target distribution. These architectures are usually designed such that only the forward pass of the neural network is required in both the inverse and the Jacobian determinant calculations. Examples of such architectures include NICE, RealNVP, and Glow.
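The log-likelihood formula_7 is easy to exercise on a toy chain of invertible maps. The sketch below (NumPy; illustrative element-wise affine "flows" stand in for real neural networks) accumulates the sum of log-determinants term by term:
import numpy as np

rng = np.random.default_rng(0)
dim, n_layers = 3, 4
# each toy layer is z -> a * z + b (elementwise), with log|det J| = sum(log|a|)
layers = [(np.exp(rng.standard_normal(dim)), rng.standard_normal(dim))
          for _ in range(n_layers)]

def base_log_density(z):                      # standard normal p_0
    return -0.5 * np.sum(z ** 2 + np.log(2 * np.pi))

z = rng.standard_normal(dim)                  # z_0 ~ p_0
log_p = base_log_density(z)
for a, b in layers:                           # push z_0 through f_1, ..., f_K
    z = a * z + b
    log_p -= np.sum(np.log(np.abs(a)))        # subtract log|det df_i/dz_{i-1}|
print("log p_K(z_K) =", log_p)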
Derivation of log likelihood.
Consider formula_8 and formula_0. Note that formula_9.
By the change of variable formula, the distribution of formula_8 is:
formula_10
where formula_11 is the determinant of the Jacobian matrix of formula_12.
By the inverse function theorem:
formula_13
By the identity formula_14 (where formula_15 is an invertible matrix), we have:
formula_16
The log likelihood is thus:
formula_17
In general, the above applies to any formula_18 and formula_19. Since formula_20 is equal to formula_21 minus a non-recursive term, we can infer by induction that:
formula_7
Training method.
As is generally done when training a deep learning model, the goal with normalizing flows is to minimize the Kullback–Leibler divergence between the model's likelihood and the target distribution to be estimated. Denoting formula_22 the model's likelihood and formula_23 the target distribution to learn, the (forward) KL-divergence is:
formula_24
The second term on the right-hand side of the equation corresponds to the entropy of the target distribution and is independent of the parameter formula_25 we want the model to learn, which only leaves the expectation of the negative log-likelihood to minimize under the target distribution. This intractable term can be approximated with a Monte-Carlo method by importance sampling. Indeed, if we have a dataset formula_26 of samples each independently drawn from the target distribution formula_27, then this term can be estimated as:
formula_28
Therefore, the learning objective
formula_29
is replaced by
formula_30
In other words, minimizing the Kullback–Leibler divergence between the model's likelihood and the target distribution is equivalent to maximizing the model likelihood under observed samples of the target distribution.
A pseudocode for training normalizing flows is as follows:
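What follows is an illustrative sketch rather than a reproduction of any particular published pseudocode: a learnable element-wise affine flow in PyTorch, trained by minimising the Monte-Carlo negative log-likelihood objective above (data distribution, dimensions and hyperparameters are arbitrary choices):
import math
import torch

x_data = 3.0 + 2.0 * torch.randn(2048, 2)             # samples from the target p*
log_s = torch.zeros(2, requires_grad=True)             # flow: x = z * exp(log_s) + t
t = torch.zeros(2, requires_grad=True)
optimizer = torch.optim.Adam([log_s, t], lr=1e-2)

for step in range(2000):
    z = (x_data - t) * torch.exp(-log_s)               # inverse map x -> z
    log_p0 = -0.5 * (z ** 2 + math.log(2 * math.pi)).sum(dim=1)
    log_det_inv = -log_s.sum()                          # log|det d(f^-1)/dx|
    loss = -(log_p0 + log_det_inv).mean()               # negative log-likelihood
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()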
Variants.
Planar Flow.
The earliest example. Fix some activation function formula_35, and let formula_36 with the appropriate dimensions, then formula_37 The inverse formula_38 has no closed-form solution in general.
The Jacobian is formula_39.
For the map to be invertible everywhere, its Jacobian determinant must be nonzero everywhere. For example, formula_40 and formula_41 satisfy the requirement.
Nonlinear Independent Components Estimation (NICE).
Let formula_42 be even-dimensional, and split them in the middle. Then the normalizing flow functions are formula_43 where formula_44 is any neural network with weights formula_25.
formula_38 is just formula_45, and the Jacobian is just 1, that is, the flow is volume-preserving.
When formula_46, this is seen as a curvy shearing along the formula_47 direction.
Real Non-Volume Preserving (Real NVP).
The Real Non-Volume Preserving model generalizes the NICE model by: formula_48
Its inverse is formula_49, and its Jacobian is formula_50. The NICE model is recovered by setting formula_51.
Since the Real NVP map keeps the first and second halves of the vector formula_52 separate, it's usually required to add a permutation formula_53 after every Real NVP layer.
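A sketch of a single Real NVP affine coupling layer with its inverse and log-Jacobian (NumPy; the tiny tanh "networks" standing in for formula_44 and the scale network are purely illustrative):
import numpy as np

rng = np.random.default_rng(0)
d = 2                                                # half-dimension
Ws, Wm = rng.standard_normal((d, d)), rng.standard_normal((d, d))
s = lambda z1: np.tanh(z1 @ Ws)                      # stands in for s_theta
m = lambda z1: np.tanh(z1 @ Wm)                      # stands in for m_theta

def coupling_forward(z1, z2):
    x1, x2 = z1, np.exp(s(z1)) * z2 + m(z1)
    log_det = np.sum(s(z1))                          # log of prod_i exp(s(z1)_i)
    return x1, x2, log_det

def coupling_inverse(x1, x2):
    return x1, np.exp(-s(x1)) * (x2 - m(x1))

z1, z2 = rng.standard_normal(d), rng.standard_normal(d)
x1, x2, _ = coupling_forward(z1, z2)
print(np.allclose(coupling_inverse(x1, x2)[1], z2))  # True: the layer inverts exactly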
Generative Flow (Glow).
In the generative flow model, each layer has 3 parts: a channel-wise activation normalization layer formula_54, with Jacobian determinant formula_55; an invertible 1x1 convolution formula_56 with an invertible kernel matrix formula_58, with Jacobian determinant formula_57; and an affine coupling layer as in Real NVP.
The idea of using the invertible 1x1 convolution is to permute all layers in general, instead of merely permuting the first and second half, as in Real NVP.
Masked autoregressive flow (MAF).
An autoregressive model of a distribution on formula_59 is defined as the following stochastic process:
formula_60where formula_61 and formula_62 are fixed functions that define the autoregressive model.
By the reparametrization trick, the autoregressive model is generalized to a normalizing flow: formula_63 The autoregressive model is recovered by setting formula_64.
The forward mapping is slow (because it's sequential), but the backward mapping is fast (because it's parallel).
The Jacobian matrix is lower-triangular, so the Jacobian determinant is formula_65.
Reversing the two maps formula_66 and formula_38 of MAF results in Inverse Autoregressive Flow (IAF), which has fast forward mapping and slow backward mapping.
Continuous Normalizing Flow (CNF).
Instead of constructing flow by function composition, another approach is to formulate the flow as a continuous-time dynamic. Let formula_0 be the latent variable with distribution formula_67. Map this latent variable to data space with the following flow function:
formula_68
where formula_69 is an arbitrary function and can be modeled with e.g. neural networks.
The inverse function is then naturally:
formula_70
And the log-likelihood of formula_52 can be found as:
formula_71
Since the trace depends only on the diagonal of the Jacobian formula_72, this allows a "free-form" Jacobian. Here, "free-form" means that there is no restriction on the Jacobian's form. It is contrasted with previous discrete models of normalizing flow, where the Jacobian is carefully designed to be only upper- or lower-triangular, so that the Jacobian determinant can be evaluated efficiently.
The trace can be estimated by "Hutchinson's trick": Given any matrix formula_73, and any random formula_74 with formula_75, we have formula_76. (Proof: expand the expectation directly.) Usually, the random vector is sampled from formula_77 (normal distribution) or formula_78 (Rademacher distribution).
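A quick numerical illustration of Hutchinson's estimator with standard ±1 Rademacher probes, which satisfy formula_75 (illustrative NumPy sketch):
import numpy as np

def hutchinson_trace(matvec, dim, n_samples=20000, seed=0):
    """Estimate tr(W) using only matrix-vector products W @ u."""
    rng = np.random.default_rng(seed)
    total = 0.0
    for _ in range(n_samples):
        u = rng.integers(0, 2, size=dim) * 2.0 - 1.0   # Rademacher: entries +-1, E[u u^T] = I
        total += u @ matvec(u)                         # u^T W u
    return total / n_samples

W = np.arange(16.0).reshape(4, 4)
print(hutchinson_trace(lambda v: W @ v, 4), np.trace(W))   # estimate vs exact trace (30.0)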
When formula_69 is implemented as a neural network, neural ODE methods would be needed. Indeed, CNF was first proposed in the same paper that proposed neural ODE.
There are two main deficiencies of CNF. One is that a continuous flow must be a homeomorphism, thus preserving orientation and ambient isotopy (for example, it is impossible to flip a left hand into a right hand by continuously deforming space, and it is impossible to turn a sphere inside out, or to undo a knot). The other is that the learned flow formula_69 might be ill-behaved, due to degeneracy (that is, there are an infinite number of possible formula_69 that all solve the same problem).
By adding extra dimensions, the CNF gains enough freedom to reverse orientation and go beyond ambient isotopy (just like how one can pick up a polygon from a desk and flip it around in 3-space, or unknot a knot in 4-space), yielding the "augmented neural ODE".
Any homeomorphism of formula_59 can be approximated by a neural ODE operating on formula_79, proved by combining Whitney embedding theorem for manifolds and the universal approximation theorem for neural networks.
To regularize the flow formula_69, one can impose regularization losses. The paper proposed the following regularization loss based on optimal transport theory: formula_80 where formula_81 are hyperparameters. The first term punishes the model for oscillating the flow field over time, and the second term punishes it for oscillating the flow field over space. Both terms together guide the model into a flow that is smooth (not "bumpy") over space and time.
Downsides.
Despite normalizing flows' success in estimating high-dimensional densities, some downsides still exist in their designs. First of all, the latent space onto which input data is projected is not a lower-dimensional space; therefore, flow-based models do not allow for compression of data by default and require a lot of computation. However, it is still possible to perform image compression with them.
Flow-based models are also notorious for failing in estimating the likelihood of out-of-distribution samples (i.e.: samples that were not drawn from the same distribution as the training set). Some hypotheses were formulated to explain this phenomenon, among which the typical set hypothesis, estimation issues when training models, or fundamental issues due to the entropy of the data distributions.
One of the most interesting properties of normalizing flows is the invertibility of their learned bijective map. This property is given by constraints in the design of the models (cf.: RealNVP, Glow) which guarantee theoretical invertibility. The integrity of the inverse is important in order to ensure the applicability of the change-of-variable theorem, the computation of the Jacobian of the map as well as sampling with the model. However, in practice this invertibility is violated and the inverse map explodes because of numerical imprecision.
Applications.
Flow-based generative models have been applied on a variety of modeling tasks, including:
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "z_0"
},
{
"math_id": 1,
"text": "p_0(z_0)"
},
{
"math_id": 2,
"text": "i = 1, ..., K"
},
{
"math_id": 3,
"text": "z_i = f_i(z_{i-1})"
},
{
"math_id": 4,
"text": "f_1, ..., f_K"
},
{
"math_id": 5,
"text": "f^{-1}_i"
},
{
"math_id": 6,
"text": "z_K"
},
{
"math_id": 7,
"text": "\\log p_K(z_K) = \\log p_0(z_0) - \\sum_{i=1}^{K} \\log \\left|\\det \\frac{df_i(z_{i-1})}{dz_{i-1}}\\right|"
},
{
"math_id": 8,
"text": "z_1"
},
{
"math_id": 9,
"text": "z_0 = f^{-1}_1(z_1)"
},
{
"math_id": 10,
"text": "p_1(z_1) = p_0(z_0)\\left|\\det \\frac{df_1^{-1}(z_1)}{dz_1}\\right|"
},
{
"math_id": 11,
"text": "\\det \\frac{df_1^{-1}(z_1)}{dz_1}"
},
{
"math_id": 12,
"text": "f^{-1}_1"
},
{
"math_id": 13,
"text": "p_1(z_1) = p_0(z_0)\\left|\\det \\left(\\frac{df_1(z_0)}{dz_0}\\right)^{-1}\\right|"
},
{
"math_id": 14,
"text": "\\det(A^{-1}) = \\det(A)^{-1}"
},
{
"math_id": 15,
"text": "A"
},
{
"math_id": 16,
"text": "p_1(z_1) = p_0(z_0)\\left|\\det \\frac{df_1(z_0)}{dz_0}\\right|^{-1}"
},
{
"math_id": 17,
"text": "\\log p_1(z_1) = \\log p_0(z_0) - \\log \\left|\\det \\frac{df_1(z_0)}{dz_0}\\right|"
},
{
"math_id": 18,
"text": "z_i"
},
{
"math_id": 19,
"text": "z_{i-1}"
},
{
"math_id": 20,
"text": "\\log p_i(z_i)"
},
{
"math_id": 21,
"text": "\\log p_{i-1}(z_{i-1})"
},
{
"math_id": 22,
"text": "p_\\theta"
},
{
"math_id": 23,
"text": "p^*"
},
{
"math_id": 24,
"text": "D_{KL}[p^{*}(x)||p_{\\theta}(x)] = -\\mathbb{E}_{p^{*}(x)}[\\log(p_{\\theta}(x))] + \\mathbb{E}_{p^{*}(x)}[\\log(p^{*}(x))]"
},
{
"math_id": 25,
"text": "\\theta"
},
{
"math_id": 26,
"text": "\\{x_{i}\\}_{i=1:N}"
},
{
"math_id": 27,
"text": "p^*(x)"
},
{
"math_id": 28,
"text": "-\\hat{\\mathbb{E}}_{p^{*}(x)}[\\log(p_{\\theta}(x))] = -\\frac{1}{N} \\sum_{i=0}^{N} \\log(p_{\\theta}(x_{i})) "
},
{
"math_id": 29,
"text": "\\underset{\\theta}{\\operatorname{arg\\,min}}\\ D_{KL}[p^{*}(x)||p_{\\theta}(x)]"
},
{
"math_id": 30,
"text": "\\underset{\\theta}{\\operatorname{arg\\,max}}\\ \\sum_{i=0}^{N} \\log(p_{\\theta}(x_{i}))"
},
{
"math_id": 31,
"text": "x_{1:n}"
},
{
"math_id": 32,
"text": "f_\\theta(\\cdot), p_0 "
},
{
"math_id": 33,
"text": "\\max_\\theta \\sum_j \\ln p_\\theta(x_j)"
},
{
"math_id": 34,
"text": "\\hat\\theta"
},
{
"math_id": 35,
"text": "h"
},
{
"math_id": 36,
"text": "\\theta = (u, w, b)"
},
{
"math_id": 37,
"text": "x = f_\\theta(z) = z + u h(\\langle w, z \\rangle + b)"
},
{
"math_id": 38,
"text": "f_\\theta^{-1}"
},
{
"math_id": 39,
"text": "|\\det (I + h'(\\langle w, z \\rangle + b) uw^T)| = |1 + h'(\\langle w, z \\rangle + b) \\langle u, w\\rangle|"
},
{
"math_id": 40,
"text": "h = \\tanh"
},
{
"math_id": 41,
"text": "\\langle u, w \\rangle > -1"
},
{
"math_id": 42,
"text": "x, z\\in \\R^{2n}"
},
{
"math_id": 43,
"text": "x = \\begin{bmatrix}\n\t x_1 \\\\ x_2\n\t \\end{bmatrix}= f_\\theta(z) = \\begin{bmatrix}\n\t z_1 \\\\z_2\n\t \\end{bmatrix} + \\begin{bmatrix}\n\t 0 \\\\ m_\\theta(z_1)\n\t \\end{bmatrix}"
},
{
"math_id": 44,
"text": "m_\\theta"
},
{
"math_id": 45,
"text": "z_1 = x_1, z_2 = x_2 - m_\\theta(x_1)"
},
{
"math_id": 46,
"text": "n=1"
},
{
"math_id": 47,
"text": "x_2"
},
{
"math_id": 48,
"text": "x = \\begin{bmatrix}\n\t x_1 \\\\ x_2\n\t \\end{bmatrix}= f_\\theta(z) = \\begin{bmatrix}\n\t z_1 \\\\ e^{s_\\theta(z_1)} \\odot z_2\n\t \\end{bmatrix} + \\begin{bmatrix}\n\t 0 \\\\ m_\\theta(z_1)\n\t \\end{bmatrix}"
},
{
"math_id": 49,
"text": "z_1 = x_1, z_2 = e^{-s_\\theta (x_1)}\\odot (x_2 - m_\\theta (x_1))"
},
{
"math_id": 50,
"text": "\\prod^n_{i=1} e^{s_\\theta(z_{1, })}"
},
{
"math_id": 51,
"text": "s_\\theta = 0"
},
{
"math_id": 52,
"text": "x"
},
{
"math_id": 53,
"text": "(x_1, x_2) \\mapsto (x_2, x_1)"
},
{
"math_id": 54,
"text": "y_{cij} = s_c(x_{cij} + b_c)"
},
{
"math_id": 55,
"text": "\\prod_c s_c^{HW}"
},
{
"math_id": 56,
"text": "z_{cij} = \\sum_{c'} K_{cc'} y_{cij}"
},
{
"math_id": 57,
"text": "\\det(K)^{HW}"
},
{
"math_id": 58,
"text": "K"
},
{
"math_id": 59,
"text": "\\R^n"
},
{
"math_id": 60,
"text": "\\begin{align}\n\t\t x_1 \\sim& N(\\mu_1, \\sigma_1^2)\\\\\n\t\t x_2 \\sim& N(\\mu_2(x_1), \\sigma_2(x_1)^2)\\\\\n\t\t &\\cdots \\\\\n\t\t x_n \\sim& N(\\mu_n(x_{1:n-1}), \\sigma_n(x_{1:n-1})^2)\\\\\n\\end{align}"
},
{
"math_id": 61,
"text": "\\mu_i: \\R^{i-1} \\to \\R"
},
{
"math_id": 62,
"text": "\\sigma_i: \\R^{i-1} \\to (0, \\infty)"
},
{
"math_id": 63,
"text": "\\begin{align}\n\t\t x_1 =& \\mu_1 + \\sigma_1 z_1\\\\\n\t\t x_2 =& \\mu_2(x_1) + \\sigma_2(x_1) z_2\\\\\n\t\t &\\cdots \\\\\n\t\t x_n =& \\mu_n(x_{1:n-1}) + \\sigma_n(x_{1:n-1}) z_n\\\\\n\\end{align}"
},
{
"math_id": 64,
"text": "z \\sim N(0, I_{n})"
},
{
"math_id": 65,
"text": "\\sigma_1 \\sigma_2(x_1)\\cdots \\sigma_n(x_{1:n-1})"
},
{
"math_id": 66,
"text": "f_\\theta"
},
{
"math_id": 67,
"text": "p(z_0)"
},
{
"math_id": 68,
"text": "x = F(z_0) = z_T = z_0 + \\int_0^T f(z_t, t) dt"
},
{
"math_id": 69,
"text": "f"
},
{
"math_id": 70,
"text": "z_0 = F^{-1}(x) = z_T + \\int_T^0 f(z_t, t) dt = z_T - \\int_0^T f(z_t,t) dt "
},
{
"math_id": 71,
"text": "\\log(p(x)) = \\log(p(z_0)) - \\int_0^T \\text{Tr}\\left[\\frac{\\partial f}{\\partial z_t} \\right] dt"
},
{
"math_id": 72,
"text": "\\partial_{z_t} f"
},
{
"math_id": 73,
"text": "W\\in \\R^{n\\times n}"
},
{
"math_id": 74,
"text": "u\\in \\R^n"
},
{
"math_id": 75,
"text": "E[uu^T] = I"
},
{
"math_id": 76,
"text": "E[u^T W u] = tr(W)"
},
{
"math_id": 77,
"text": "N(0, I)"
},
{
"math_id": 78,
"text": "\\{\\pm n^{-1/2}\\}^n"
},
{
"math_id": 79,
"text": "\\R^{2n+1}"
},
{
"math_id": 80,
"text": "\\lambda_{K} \\int_{0}^{T}\\left\\|f(z_t, t)\\right\\|^{2} dt\n+\\lambda_{J} \\int_{0}^{T}\\left\\|\\nabla_z f(z_t, t)\\right\\|_F^{2} dt\n"
},
{
"math_id": 81,
"text": "\\lambda_K, \\lambda_J >0\n"
}
] |
https://en.wikipedia.org/wiki?curid=67064736
|
67065314
|
Almgren's isomorphism theorem
|
Almgren isomorphism theorem is a result in geometric measure theory and algebraic topology about the topology of the space of flat cycles in a Riemannian manifold.
The theorem plays a fundamental role in the Almgren–Pitts min-max theory, as it establishes the existence of topologically non-trivial families of cycles, which were used by Frederick J. Almgren Jr., Jon T. Pitts and others to prove the existence of (possibly singular) minimal submanifolds in every Riemannian manifold. In the special case of the space of null-homologous codimension 1 cycles with mod 2 coefficients on a closed Riemannian manifold, the Almgren isomorphism theorem implies that it is weakly homotopy equivalent to
the infinite real projective space.
Statement of the theorem.
Let M be a Riemannian manifold. The Almgren isomorphism theorem asserts that the m-th homotopy group of the space of flat k-dimensional cycles in M is isomorphic to the (m+k)-dimensional homology group of M. This result is a generalization of the Dold–Thom theorem, which can be thought of as the k=0 case of Almgren's theorem (1962a, PhD thesis; 1962b, Topology).
The isomorphism is defined as follows. Let G be an abelian group and formula_0 denote the space of flat cycles with coefficients in group G. To each family of cycles formula_1 we associate an (m+k)-cycle C as follows. Fix a fine triangulation T of formula_2. To each vertex v in the 0-skeleton of T we associate a cycle f(v). To each edge E in the 1-skeleton of T with ∂E=v-w we associate a (k+1)-chain with boundary f(v)-f(w) of minimal mass. We proceed this way by induction over the skeleton of T. The sum of all chains corresponding to m-dimensional faces of T will be the desired (m+k)-cycle C. Even though the choices of triangulation and minimal mass fillings are not unique, they all result in an (m+k)-cycle in the same homology class.
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "Z_k(M;G)"
},
{
"math_id": 1,
"text": "f: S^m \\rightarrow Z_k(M;G)"
},
{
"math_id": 2,
"text": "S^m"
}
] |
https://en.wikipedia.org/wiki?curid=67065314
|
6706815
|
Hypercube graph
|
Graphs formed by a hypercube's edges and vertices
In graph theory, the hypercube graph Qn is the graph formed from the vertices and edges of an n-dimensional hypercube. For instance, the cube graph "Q"3 is the graph formed by the 8 vertices and 12 edges of a three-dimensional cube.
Qn has 2"n" vertices, 2"n" – 1"n" edges, and is a regular graph with n edges touching each vertex.
The hypercube graph Qn may also be constructed by creating a vertex for each subset of an n-element set, with two vertices adjacent when their subsets differ in a single element, or by creating a vertex for each n-digit binary number, with two vertices adjacent when their binary representations differ in a single digit. It is the n-fold Cartesian product of the two-vertex complete graph, and may be decomposed into two copies of "Q""n" – 1 connected to each other by a perfect matching.
Hypercube graphs should not be confused with cubic graphs, which are graphs that have exactly three edges touching each vertex. The only hypercube graph Qn that is a cubic graph is the cubical graph "Q"3.
Construction.
The hypercube graph "Q""n" may be constructed from the family of subsets of a set with n elements, by making a vertex for each possible subset and joining two vertices by an edge whenever the corresponding subsets differ in a single element. Equivalently, it may be constructed using 2"n" vertices labeled with n-bit binary numbers and connecting two vertices by an edge whenever the Hamming distance of their labels is one. These two constructions are closely related: a binary number may be interpreted as a set (the set of positions where it has a 1 digit), and two such sets differ in a single element whenever the corresponding two binary numbers have Hamming distance one.
Alternatively, "Q""n" may be constructed from the disjoint union of two hypercubes "Q""n" − 1, by adding an edge from each vertex in one copy of "Q""n" − 1 to the corresponding vertex in the other copy, as shown in the figure. The joining edges form a perfect matching.
The above construction gives a recursive algorithm for constructing the adjacency matrix of a hypercube, "A""n". Copying is done via the Kronecker product, so that the two copies of "Q""n" − 1 have an adjacency matrix formula_0, where formula_1 is the identity matrix in formula_2 dimensions. Meanwhile, the joining edges have an adjacency matrix formula_3. The sum of these two terms gives a recursive function for the adjacency matrix of a hypercube: formula_4 Another construction of "Q""n" is the Cartesian product of n two-vertex complete graphs "K"2. More generally the Cartesian product of copies of a complete graph is called a Hamming graph; the hypercube graphs are examples of Hamming graphs.
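The recursion above translates directly into a few lines of NumPy (np.kron is the Kronecker product); this is an illustrative transcription that also checks the vertex degrees of "Q"3:
import numpy as np

def hypercube_adjacency(n):
    """Adjacency matrix of Q_n via A_n = I_2 (x) A_{n-1} + A_1 (x) I_{2^(n-1)}."""
    A = np.array([[0, 1], [1, 0]])                    # A_1 = adjacency matrix of K_2
    for k in range(2, n + 1):
        A = np.kron(np.eye(2), A) + np.kron(np.array([[0, 1], [1, 0]]), np.eye(2 ** (k - 1)))
    return A

A3 = hypercube_adjacency(3)
print(A3.shape, A3.sum(axis=0))    # (8, 8) and every vertex has degree 3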
Examples.
The graph "Q"0 consists of a single vertex, while "Q"1 is the complete graph on two vertices.
"Q"2 is a cycle of length 4.
The graph "Q"3 is the 1-skeleton of a cube and is a planar graph with eight vertices and twelve edges.
The graph "Q"4 is the Levi graph of the Möbius configuration. It is also the knight's graph for a toroidal formula_5 chessboard.
Properties.
Bipartiteness.
Every hypercube graph is bipartite: it can be colored with only two colors. The two colors of this coloring may be found from the subset construction of hypercube graphs, by giving one color to the subsets that have an even number of elements and the other color to the subsets with an odd number of elements.
Hamiltonicity.
Every hypercube "Q""n" with "n" > 1 has a Hamiltonian cycle, a cycle that visits each vertex exactly once. Additionally, a Hamiltonian path exists between two vertices u and v if and only if they have different colors in a 2-coloring of the graph. Both facts are easy to prove using the principle of induction on the dimension of the hypercube, and the construction of the hypercube graph by joining two smaller hypercubes with a matching.
Hamiltonicity of the hypercube is tightly related to the theory of Gray codes. More precisely there is a bijective correspondence between the set of n-bit cyclic Gray codes and the set of Hamiltonian cycles in the hypercube "Q"n. An analogous property holds for acyclic "n"-bit Gray codes and Hamiltonian paths.
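For instance, the binary reflected Gray code g(i) = i XOR (i >> 1) lists the vertices of "Q""n" so that cyclically consecutive vertices differ in a single bit, i.e. it traces a Hamiltonian cycle. A short illustrative check in Python:
def reflected_gray_cycle(n):
    """Vertices of Q_n (as n-bit integers) in binary-reflected Gray code order."""
    return [i ^ (i >> 1) for i in range(2 ** n)]

cycle = reflected_gray_cycle(4)
# consecutive vertices (cyclically) differ in exactly one bit, i.e. they are adjacent in Q_4
ok = all(bin(cycle[i] ^ cycle[(i + 1) % len(cycle)]).count("1") == 1
         for i in range(len(cycle)))
print(ok)   # True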
A lesser known fact is that every perfect matching in the hypercube extends to a Hamiltonian cycle. The question whether every matching extends to a Hamiltonian cycle remains an open problem.
Other properties.
The hypercube graph "Q""n" (for "n" > 1) :
The family "Q""n" for all "n" > 1 is a Lévy family of graphs.
Problems.
The problem of finding the longest path or cycle that is an induced subgraph of a given hypercube graph is known as the snake-in-the-box problem.
Szymanski's conjecture concerns the suitability of a hypercube as a network topology for communications. It states that, no matter how one chooses a permutation connecting each hypercube vertex to another vertex with which it should be connected, there is always a way to connect these pairs of vertices by paths that do not share any directed edge.
Notes.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "\\mathrm{1}_2\\otimes_K A_{n-1}"
},
{
"math_id": 1,
"text": "1_d"
},
{
"math_id": 2,
"text": "d"
},
{
"math_id": 3,
"text": "A_{1} \\otimes_K 1_{2^{n-1} }"
},
{
"math_id": 4,
"text": "A_{n} = \\begin{cases}\n1_2\\otimes_K A_{n-1}+A_1\\otimes_K 1_{2^{n-1}} & \\text{if } n>1\\\\\n\\begin{bmatrix}\n0 & 1\\\\\n1 & 0\n\\end{bmatrix}\n&\\text{if }n=1\n\\end{cases}"
},
{
"math_id": 5,
"text": "4\\times 4"
},
{
"math_id": 6,
"text": "2^{2^n-n-1}\\prod_{k=2}^n k^{{n\\choose k}}"
},
{
"math_id": 7,
"text": "\\sum_{i=0}^{n-1} \\binom{i}{\\lfloor i/2\\rfloor}"
},
{
"math_id": 8,
"text": "\\sqrt{n2^n}"
},
{
"math_id": 9,
"text": "\\binom{n}{k}"
}
] |
https://en.wikipedia.org/wiki?curid=6706815
|
67068386
|
Heikin-Ashi chart
|
Heikin-Ashi is a Japanese trading indicator and financial chart that means "average bar". Heikin-Ashi charts resemble candlestick charts, but have a smoother appearance as they track a range of price movements, rather than tracking every price movement as with candlesticks. Heikin-Ashi was created in the 1700s by Munehisa Homma, who also created the candlestick chart. These charts are used by traders and investors to help determine and predict price movements.
Description.
Like standard candlesticks, a Heikin-Ashi candle has a body and a wick; however, they do not serve the same purpose as on a candlestick chart. The last price of a Heikin-Ashi candle is calculated as the average price of the current bar or timeframe (e.g., a daily timeframe would have each bar represent the price movements of that specific day). The formula for the last price of the Heikin-Ashi bar or candle is: (open + high + low + close) formula_0 4. The open of a Heikin-Ashi candle starts at the midpoint of the previous candle; it is calculated as: (the open of the previous bar + the close of the previous bar) formula_0 2. The highest and lowest price points are represented by wicks, similarly to candlesticks.
To calculate the highest and lowest price of a period:
Heikin-Ashi High = Max value of (High-0, Open-0, and Close-0)
Heikin-Ashi Low = Min value of (Low-0, Open-0, and Close-0)
(where -0 indicates that values are being taken from the current bar or period).
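The rules above can be written out directly. The sketch below (plain Python, illustrative) follows a common convention of seeding the first Heikin-Ashi open with the average of the first raw bar's open and close, and of reading "previous bar" as the previous Heikin-Ashi candle for subsequent bars:
def heikin_ashi(bars):
    """bars: list of (open, high, low, close) tuples; returns Heikin-Ashi bars."""
    ha_bars = []
    for i, (o, h, l, c) in enumerate(bars):
        ha_close = (o + h + l + c) / 4.0
        if i == 0:
            ha_open = (o + c) / 2.0                     # common seeding convention
        else:
            prev_open, _, _, prev_close = ha_bars[-1]
            ha_open = (prev_open + prev_close) / 2.0    # midpoint of the previous HA candle
        ha_high = max(h, ha_open, ha_close)
        ha_low = min(l, ha_open, ha_close)
        ha_bars.append((ha_open, ha_high, ha_low, ha_close))
    return ha_bars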
The main purpose of a Heikin-Ashi chart is to show the general trend of the price (direction of price) and the strength of each trend; these are represented by the wicks: small lines that extend from the main body of the candle. A series of candles rising with no lower wick signifies a strong uptrend, and vice versa with candles falling with no upper wick. A doji signifies a possible change in the price trend.
Heikin-Ashi is normally paired with other indicators to indicate long (buy) and short (sell) positions.
Advantages of using Heikin Ashi charts.
Heikin Ashi charts have been shown to have a lower mean entropy than candlestick charts, and thus a lower level of uncertainty or disorder when displaying market data. This study was conducted over one year of historical prices from 10 different stocks. The results showed a mean entropy of 4.2675 for Heikin Ashi charts and a mean entropy of 5.001 for raw market data. These studies also indicate that Heikin Ashi charts display a much higher probability of success when predicting the next move in a market. The results from this test show a 72.3% chance of predicting the next day of the market, in contrast to using raw market data, which only gives a 49.1% chance of a successful prediction of the next day. The study conducts a hypothesis test at a significance level of 0.05; the result confirms that, with 95% confidence, Heikin Ashi charts can predict the next move of the market with up to 75% accuracy. This refers to predicting the nature (bullish or bearish) of the next candle that will form in the market; in conclusion, it is more reliable to establish a trend with Heikin Ashi charts than with raw market data and candlestick charts alone.
Detailed backtesting of the Heikin-Ashi trading methodology using 12 years of data on each security in the Dow Jones Industrial Average index confirmed the approach's efficacy. 66% of the equities tested outperformed the underlying index over the 12 years.
Limitations of Heikin Ashi charts.
Heikin Ashi charts use averaged data values, so the actual opening and closing prices of the bars in a given period are not shown; therefore, traders looking for exact prices, e.g. in some price-action-based systems, should not rely on the averaged prices shown on these charts.
As the nature of Heikin Ashi charts is to filter out market noise and reduce the frequency of false signals being shown, some important price gaps (areas where no trading has taken place and so the market has jumped in price) will also be missed from these charts. Candlestick charts will however show price gaps.
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "\\div"
}
] |
https://en.wikipedia.org/wiki?curid=67068386
|
67071329
|
Michael Belmore
|
Canadian (Anishnaabe-Ojibwa), born 1971; artist
Michael Belmore (born 1971) is a Canadian sculptor of Anishinaabe descent who works primarily in resistant stone, copper and other metals. His works are in public collections including the National Gallery of Canada, the McMichael Canadian Art Collection, Agnes Etherington Art Centre, National Museum of the American Indian – Smithsonian Museum, and the Art Gallery of Ontario, and he has held exhibitions in both nations.
Artistic career.
Born in 1971, Michael Belmore graduated from the Ontario College of Art and Design in 1994 and completed his Masters of Fine Arts at the University of Ottawa in 2019. He is a member of the Royal Canadian Academy of Arts and represented in public collections including the National Gallery of Canada, the McMichael Canadian Art Collection, Agnes Etherington Art Centre, National Museum of the American Indian – Smithsonian Museum, and the Art Gallery of Ontario.
Belmore has staged over ten solo exhibitions and has participated in more than fifteen group shows, including "Into the Woods: Two Icons Revisited" (2015 Art Gallery of Ontario), "Changing Hands: Art without Reservation" (2012 Museum of Art & Design), "Close Encounters: The Next 500 Years" (2011 Winnipeg), "HIDE: Skin as Material and Metaphor" (2010 National Museum of the American Indian), and "Terra Incognita" (2007 Macdonald Stewart Art Centre).
Working in resistant stone, copper and other metals, Belmore's process is intricate and time-consuming. Given his deliberate and thoughtful pace, his sculptures and installations are founded on a deep understanding of the qualities – physical and symbolic – of the materials. Curator Olexander Wlasenko has described his approach as “alchemic; vacillating between determination and serendipity. Human intervention into the landscape comes with and without consequence."
In 2023, Belmore was commissioned to create a high sculpture at the Gordie Howe International Bridge, with the work recognizing and celebrating First Nations.
Selected solo exhibitions.
2020 – "Michael Belmore", Art Gallery of Ontario, Toronto
2018 – "thunder sky turbulent water", Central Art Garage gallery, Ottawa
2017 – "mskwiformula_0bloodformula_0sang", Karsh-Masson Gallery, Ottawa
2016 – fenda, Nogueira da Silva Museum, Braga, Portugal
2015 – "Michael Belmore", Royal Melbourne Institute of Technologies Project Space Gallery, Melbourne, Australia
2013 – "Toil", Woodstock Art Gallery, Woodstock, ON
2009 – "Embankment", Station Gallery, Whitby, ON
2006 – "Downstream", Forest City Gallery, London, ON
2005 – "Stream", Rails End Gallery & Arts Centre, Haliburton, ON
2002 – "Vantage Point", Sacred Circle Gallery of American Indian Art, Seattle, Washington
2001 – "fly by wire", AKA Artist-Run Centre/Tribe, Saskatoon, SK
2000 – "Eating Crow", Sâkêwêwak Artists’ Collective, Regina, SK
1999 – "Ravens Wait", Indian Art Centre, Hull, QC
|
[
{
"math_id": 0,
"text": "\\bullet"
}
] |
https://en.wikipedia.org/wiki?curid=67071329
|
67072520
|
Anelasticity
|
Anelasticity is a property of materials that describes their behaviour when undergoing deformation. Its formal definition does not include the physical or atomistic mechanisms but still interprets the anelastic behaviour as a manifestation of internal relaxation processes. It is a behaviour differing (usually very slightly) from elastic behaviour.
Definition and elasticity.
Considering first an ideal elastic material, Hooke's law defines the relation between stress formula_0 and strain formula_1 as:
formula_2
formula_3
formula_4
The constant formula_5 is called the modulus of elasticity (or just modulus) while its reciprocal formula_6 is called the modulus of compliance (or just compliance).
There are three postulates that define the ideal elastic behaviour: the response to each level of applied stress (or strain) has a unique equilibrium value; this equilibrium response is achieved instantaneously; and the response is linear.
These conditions may be lifted in various combinations to describe different types of behaviour, summarized in the following table:
Anelasticity is therefore characterized by the existence of a time-dependent part of the response, in addition to the elastic one, in the material considered. It is also usually a very small fraction of the total response, and so, in this sense, the usual meaning of "anelasticity" as "without elasticity" is improper in a physical sense.
The formal definition of linearity is: "If a given stress history formula_7 produces the strain formula_8, and if a stress formula_9 gives rise to formula_10, then the stress formula_11 will give rise to the strain formula_12." The postulate of linearity is used because of its practical usefulness. The theory would become much more complicated otherwise, but in cases of materials under low stress this postulate can be considered true.
In general, the change of an external variable of a thermodynamic system causes a response from the system called thermal relaxation that leads it to a new equilibrium state. In the case of mechanical changes, the response is known as anelastic relaxation, and dielectric or magnetic relaxation, for example, can be described in the same formal way. The internal variables are coupled to stress and strain through kinetic processes such as diffusion, so that the external manifestation of the internal relaxation behaviour is the stress–strain relation, which in this case is time dependent.
Static response functions.
Experiments can be made where either the stress or strain is held constant for a certain time. These are called quasi-static, and in this case, anelastic materials exhibit creep, elastic aftereffect, and stress relaxation.
In these experiments a stress is applied and held constant while the strain is observed as a function of time. This response function, called the creep and defined by formula_13, characterizes the properties of the solid. The initial value of formula_14 is called the unrelaxed compliance, the equilibrium value is called the relaxed compliance formula_15, and their difference formula_16 is called the relaxation of the compliance.
After a creep experiment has been run for a while, when stress is released the elastic spring-back is in general followed by a time dependent decay of the strain. This effect is called the elastic aftereffect or “creep recovery”. The ideal elastic solid returns to zero strain immediately, without any after-effect, while in the case of anelasticity total recovery takes time, and that is the aftereffect. The linear viscoelastic solid only recovers partially, because the viscous contribution to strain cannot be recovered.
In a stress relaxation experiment the stress σ is observed as a function of time while keeping a constant strain formula_17 and defining a stress relaxation function formula_18 similarly to the creep function, with unrelaxed and relaxed modulus "M"U and "M"R.
At equilibrium, formula_19, and at a short timescale, when the material behaves as if ideally elastic, formula_20 also holds.
Dynamic response functions and loss angle.
To get information about the behaviour of a material over short periods of time dynamic experiments are needed. In this kind of experiment a periodic stress (or strain) is imposed on the system, and the phase lag of the strain (or stress) is determined.
The stress can be written as a complex number formula_21 where formula_22 is the amplitude and formula_23 the frequency of vibration. Then the strain is periodic with the same frequency formula_24 where formula_17 is the strain amplitude and formula_25 is the angle by which the strain lags, called loss angle. For ideal elasticity formula_26. For the anelastic case formula_25 is in general not zero, so the ratio formula_27 is complex. This quantity is called the complex compliance formula_28. Thus,
formula_29
where formula_30, the absolute value of formula_31, is called the absolute dynamic compliance, given by formula_32.
This way two real dynamic response functions are defined, formula_30 and formula_33. Two other real response functions can also be introduced by writing the previous equation in another notation:
formula_34
where the real part is called "storage compliance" and the imaginary part is called "loss compliance".
"J"1 and "J"2 being called "storage compliance" and "loss compliance" respectively is significant, because calculating the energy stored and the energy dissipated in a cycle of vibration gives following equations:
formula_35
where formula_36 is the energy dissipated in a full cycle per unit of volume while the maximum stored energy formula_37 per unit volume is given by:
formula_38
The ratio of the energy dissipated to the maximum stored energy is called the "specific damping capacity". This ratio can be written as a function of the loss angle by formula_39.
This shows that the loss angle formula_25 gives a measure of the fraction of energy lost per cycle due to anelastic behaviour, and so it is known as the internal friction of the material.
Resonant and wave propagation methods.
The dynamic response functions can only be measured in an experiment at frequencies below any resonance of the system used. While theoretically easy to do, in practice the angle formula_33 is difficult to measure when very small, for example in crystalline materials. Therefore, subresonant methods are not generally used. Instead, methods where the inertia of the system is considered are used. These can be divided into two categories:
Forced vibrations.
The response of a system in a forced-vibration experiment with a periodic force has a maximum of the displacement formula_40 at a certain frequency of the force. This is known as resonance, with formula_41 the resonant frequency. The resonance equation is simplified in the case of formula_42. In this case the dependence of formula_43 on frequency is plotted as a Lorentzian curve. If the two values formula_44 and formula_45 are the ones at which formula_43 falls to half its maximum value, then:
formula_46
The loss angle that measures the internal friction can be obtained directly from the plot, since it is the width of the resonance peak at half-maximum. With this and the resonant frequency it is then possible to obtain the primary response functions. By changing the inertia of the sample the resonant frequency changes, and so the response functions at different frequencies can be obtained.
Free vibrations.
The more common way of obtaining the anelastic response is measuring the damping of the free vibrations of a sample. Solving the equation of motion for this case introduces the constant formula_47, called the logarithmic decrement, whose value is formula_48. It represents the natural logarithm of the ratio of successive vibration amplitudes:
formula_49
It is a convenient and direct way of measuring the damping, as it is directly related to the internal friction.
Wave propagation.
Wave propagation methods utilize a wave traveling down the specimen in one direction at a time to avoid any interference effects. If the specimen is long enough and the damping high enough, this can be done by continuous wave propagation. More commonly, for crystalline materials with low damping, a pulse propagation method is used. This method employs a wave packet whose length is small compared to the specimen. The pulse is produced by a transducer at one end of the sample, and the velocity of the pulse is determined either by the time it takes to reach the end of the sample, or the time it takes to come back after a reflection at the end. The attenuation of the pulse is determined by the decrease in amplitude after successive reflections.
Boltzmann superposition principle.
Each response function constitutes a complete representation of the anelastic properties of the solid. Therefore, any one of the response functions can be used to completely describe the anelastic behaviour of the solid, and every other response function can be derived from the chosen one.
The Boltzmann superposition principle states that every stress applied at a different time deforms the material as if it were the only one. This can be written generally for a series of stresses formula_50 that are applied at successive times formula_51. In this situation, the total strain will be:
formula_52
or, in integral form, if the stress is varied continuously:
formula_53
The controlled variable can always be changed, expressing the stress as a function of time in a similar way:
formula_54
These integral expressions are a generalization of Hooke's law to the case of anelasticity, and they show that materials act almost as if they have a memory of their history of stress and strain. These two equations imply that there is a relation between J(t) and M(t). To obtain it, the method of Laplace transforms can be used, or they can be related implicitly by:
formula_55
The two functions are thus related in a complicated manner, and it is not easy to evaluate one of them knowing the other. However, it is still possible in principle to derive the stress relaxation function from the creep function, and vice versa, thanks to the Boltzmann principle.
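The superposition integral is straightforward to evaluate numerically once a creep function is assumed. The sketch below uses a simple exponential creep function purely as a placeholder (it is not taken from the text) and sums the creep responses of discrete stress increments to obtain the strain under a piecewise stress history.

```python
import numpy as np

def creep_J(t_lag, J_U=1.0, dJ=0.3, tau=1.0):
    """Placeholder creep function J(t) = J_U + dJ*(1 - exp(-t/tau)) for t >= 0, else 0."""
    return np.where(t_lag >= 0.0, J_U + dJ * (1.0 - np.exp(-t_lag / tau)), 0.0)

# Discretized stress history: ramp up, hold, then partial unload
t = np.linspace(0.0, 10.0, 2001)
sigma = np.piecewise(t, [t < 2.0, (t >= 2.0) & (t < 6.0), t >= 6.0],
                     [lambda x: 0.5 * x, 1.0, 0.4])

# Boltzmann superposition: eps(t) = sum of dsigma_i * J(t - t_i)
dsigma = np.diff(sigma, prepend=0.0)
eps = np.array([np.sum(dsigma * creep_J(ti - t)) for ti in t])

print(f"strain at t = 10: {eps[-1]:.3f}")
```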
Mechanical models.
It is possible to describe anelastic behaviour in terms of a set of material parameters. Since the definition of anelasticity includes linearity and a time-dependent stress–strain relation, the behaviour can be described by a differential equation with terms involving the stress, the strain, and their derivatives.
To better visualize anelastic behaviour, appropriate mechanical models can be used. The simplest one contains three elements (two springs and a dashpot), since that is the smallest number of parameters needed in a stress–strain equation describing a simple anelastic solid. This basic behaviour is of such importance that a material exhibiting it is called a standard anelastic solid.
Differential stress–strain equations.
Since the definition of anelasticity requires linearity, all differential stress–strain equations of anelasticity must be of first degree. These equations can contain many different constants describing the specific solid. The most general one can be written as:
formula_56
For the specific case of anelasticity, which requires the existence of an equilibrium relation, additional restrictions must be placed on this equation.
Each stress–strain equation can be accompanied by a mechanical model that helps visualize the behaviour of the material.
Mechanical models.
In the case where only the constants formula_57 and formula_58 are not zero, the body is ideally elastic and is modelled by the Hookean spring.
To add internal friction to a model, the Newtonian dashpot is used, represented by a piston moving in an ideally viscous liquid. Its velocity is proportional to the applied force, so the work done on it is entirely dissipated as heat.
These two mechanical elements can be combined in series or in parallel. In a series combination the stresses are equal while the strains are additive; in a parallel combination the strains are equal and the stresses additive. The two simplest models that combine more than one element are the following:
The Voigt model, described by the equation formula_59, allows no instantaneous deformation and is therefore not a realistic representation of a crystalline solid.
The generalized stress–strain equation for the Maxwell model is formula_60; since it displays steady viscous creep rather than recoverable creep, it too is unsuited to describing an anelastic material.
Standard anelastic solid.
Considering the Voigt model, what it lacks is the instantaneous elastic response characteristic of crystals. To obtain this missing feature, a spring is attached in series with the Voigt model; the parallel spring–dashpot combination is called a Voigt unit. A spring in series with a Voigt unit shows all the characteristics of an anelastic material despite its simplicity. Its differential stress–strain equation is therefore of interest, and can be calculated to be:
formula_61
The solid whose properties are defined by this equation is called the standard anelastic solid. The solution of this equation for the creep function is:
formula_62
where formula_63 is called the relaxation time at constant stress.
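A brief numerical illustration of this creep function (with arbitrary values of the unrelaxed compliance, the relaxed compliance and formula_63) shows the compliance rising from its unrelaxed toward its relaxed value:

```python
import numpy as np

J_U, J_R, tau_sigma = 1.0, 1.3, 2.0       # arbitrary illustrative values
t = np.array([0.0, 1.0, 2.0, 5.0, 20.0])

# Creep function of the standard anelastic solid
J_t = J_R - (J_R - J_U) * np.exp(-t / tau_sigma)

for ti, Ji in zip(t, J_t):
    print(f"t = {ti:5.1f}   J(t) = {Ji:.4f}")   # tends to J_R for t >> tau_sigma
```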
To describe the stress relaxation behaviour, one can also consider another three-parameter model, better suited to the stress relaxation experiment, consisting of a Maxwell unit placed in parallel with a spring. Its differential stress–strain equation has the same form as that of the model considered above, so the two models are equivalent. The Voigt-type model is more convenient in the analysis of creep, the Maxwell-type in that of stress relaxation.
Dynamic properties of the standard anelastic solid.
The dynamic response functions formula_64 and formula_65, are:
formula_66
formula_67
These are often called the Debye equations, since they were first derived by P. Debye for the case of dielectric relaxation phenomena. The width of the peak of formula_65 at half its maximum value is given by formula_68
The equation for the internal friction formula_69 may also be expressed as a Debye peak in the case where formula_70, as:
formula_71
The relaxation strength formula_72 can be obtained from the height of such a peak, while the relaxation time formula_73 from the frequency at which the peak occurs.
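The Debye expressions are easy to evaluate numerically. The sketch below (with arbitrary illustrative parameters) confirms that formula_65 peaks at formula_74 = 1 with a height of half the relaxation magnitude, which is how the relaxation strength and relaxation time would be read off a measured peak.

```python
import numpy as np

J_U, dJ, tau = 1.0, 0.2, 1.0e-3            # arbitrary illustrative values
omega = np.logspace(0, 6, 60001)            # angular frequency [rad/s]
wt = omega * tau

J1 = J_U + dJ / (1.0 + wt**2)               # storage compliance
J2 = dJ * wt / (1.0 + wt**2)                # loss compliance (Debye peak)

i_peak = np.argmax(J2)
print(f"J2 peaks at omega*tau = {wt[i_peak]:.3f} with height {J2[i_peak]:.4f} (= dJ/2)")
```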
Dynamic properties as functions of time.
The dynamic properties plotted as functions of formula_74 have so far been considered with formula_73 held constant while formula_23 is varied. However, taking a sample through a Debye peak by varying the frequency continuously is not possible with the more common resonance methods. It is, however, possible to trace out the peak by varying formula_73 while keeping formula_23 constant.
This is possible because in many cases the relaxation rate formula_75 obeys an Arrhenius equation:
formula_76
where formula_77 is the absolute temperature, formula_78 is a frequency factor, formula_79 is the activation energy, and formula_80 is the Boltzmann constant.
Therefore, where this equation applies, the quantity formula_73 may be varied over a wide range simply by changing the temperature. It then becomes possible to treat the dynamic response functions as functions of temperature.
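As a sketch of such a temperature scan, one can combine the Arrhenius law with the Debye peak and locate the temperature at which the internal-friction maximum appears for a fixed measurement frequency. The activation energy, attempt frequency and relaxation strength below are placeholders chosen only for illustration, not values quoted in the text.

```python
import numpy as np

k_B = 8.617e-5              # Boltzmann constant [eV/K]
Q = 0.8                     # activation energy [eV]      (placeholder)
nu0 = 1.0e14                # attempt frequency [1/s]     (placeholder)
Delta = 0.01                # relaxation strength         (placeholder)
omega = 2.0 * np.pi * 1.0   # fixed measurement frequency of about 1 Hz

T = np.linspace(200.0, 500.0, 3001)                       # temperature scan [K]
tau = np.exp(Q / (k_B * T)) / nu0                         # Arrhenius relaxation time
phi = Delta * (omega * tau) / (1.0 + (omega * tau)**2)    # Debye peak versus temperature

T_peak = T[np.argmax(phi)]
print(f"internal-friction peak near T = {T_peak:.0f} K (where omega * tau = 1)")
```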
Discrete spectra.
The next level of complexity in the description of an anelastic solid is a model containing "n" Voigt units in series with each other and with a spring. This corresponds to a differential stress–strain equation which contains all terms up to order "n" in both the stress and the strain. Similarly, a model containing "n" Maxwell units all in parallel with each other and with a spring is also equivalent to a differential stress–strain equation of the same form.
In order to have both elastic and anelastic behaviour, the differential stress–strain equation must be of the same order in the stress and strain and must start from terms of order zero.
A solid described by such an equation shows a “discrete spectrum” of relaxation processes, or simply a "discrete relaxation spectrum". Each "line" of the spectrum is characterized by a relaxation time formula_81 and a magnitude formula_82. The standard anelastic solid considered above is just the particular case of a one-line spectrum, also described as having a "single relaxation time".
Mechanical spectroscopy applications.
A technique that measures internal friction and modulus of elasticity is called Mechanical Spectroscopy. It is extremely sensitive and can give information not attainable with other experimental methodologies.
Despite being historically uncommon, it has great utility in solving practical problems of industrial production, where knowledge and control of the microscopic structure of materials are becoming more and more important. Some of these applications are the following.
Measurement of quantity of C, N, O and H in solution in metals.
Mechanical spectroscopy is the only technique that can determine the quantity of interstitial elements in solid solution, something other chemical methods of analysis cannot do.
In body-centred cubic structures, such as iron's, interstitial atoms occupy octahedral sites. In an undeformed lattice all octahedral positions are equivalent and have the same probability of being occupied. Applying a tensile stress along one direction parallel to a cube edge dilates that edge while compressing the orthogonal ones. The octahedral positions then stop being equivalent, and the larger ones are occupied in preference to the smaller ones, making interstitial atoms jump from one site to the other. Inverting the direction of the stress obviously has the opposite effect. Under an alternating stress, the interstitial atoms keep jumping from one site to the other in a reversible way, dissipating energy and producing a so-called Snoek peak. The more atoms take part in this process, the more intense the Snoek peak. Knowing the energy dissipated in a single event and the height of the Snoek peak makes it possible to determine the concentration of atoms involved in the process.
Structural stability in nanocrystalline materials.
Grain boundaries in nanocrystalline materials are significant enough to be responsible for some specific properties of these materials. Both their size and their structure are important in determining the mechanical effects they have. High-resolution microscopy shows that materials subjected to severe plastic deformation are characterized by significant distortions and dislocations at and near the grain boundaries.
Using mechanical spectroscopy techniques one can determine whether nanocrystalline metals subjected to thermal treatments change their mechanical behaviour by changing their grain-boundary structure. One example is nanocrystalline aluminium.
Determination of critical points in martensitic transformations.
Mechanical spectroscopy makes it possible to determine the critical points martensite start formula_83 and martensite finish formula_84 of martensitic transformations in steel and other metals and alloys. They can be identified by anomalies in the trend of the modulus. Taking AISI 304 steel as an example, an anomaly in the distribution of the elements in the alloy can cause a local increase in formula_83, especially in areas poorer in nickel, and although martensite formation can usually be induced only by plastic deformation, around 9% can nevertheless form during cooling.
Magnetoelastic effects in ferromagnetic materials.
Ferromagnetic materials have specific anelastic effects that influence internal friction and dynamic modulus.
A non-magnetized ferromagnetic material forms Weiss domains, each possessing a spontaneous, randomly directed magnetization. The boundary zones, called Bloch walls, are about one hundred atoms thick, and across them the orientation of one domain gradually changes into that of the adjacent one. Applying an external magnetic field makes the domains oriented along the field grow in size, until all Bloch walls are removed and the material is magnetized.
Crystalline defects tend to anchor the domain walls, opposing their movement. Materials can thus be divided into magnetically soft or hard according to how strongly the walls are anchored.
In these kinds of materials magnetic and elastic phenomena are correlated, as in magnetostriction, the property of changing size under a magnetic field, or, in the opposite case, a change of magnetic properties when a mechanical stress is applied. These effects depend on the Weiss domains and on their ability to re-orient.
When a magnetoelastic material is put under stress, the deformation is the sum of an elastic and a magnetoelastic contribution. The presence of the latter changes the internal friction by adding a further dissipation mechanism.
|
[
{
"math_id": 0,
"text": "\\sigma"
},
{
"math_id": 1,
"text": "\\epsilon"
},
{
"math_id": 2,
"text": "\\sigma = M \\epsilon"
},
{
"math_id": 3,
"text": "\\epsilon = J \\sigma"
},
{
"math_id": 4,
"text": "M = \\frac{1}{J}"
},
{
"math_id": 5,
"text": "M"
},
{
"math_id": 6,
"text": "J"
},
{
"math_id": 7,
"text": "\\sigma_1(t)"
},
{
"math_id": 8,
"text": "\\epsilon_1(t)"
},
{
"math_id": 9,
"text": "\\sigma_2(t)"
},
{
"math_id": 10,
"text": "\\epsilon_2(t)"
},
{
"math_id": 11,
"text": "\\sigma_1(t) + \\sigma_2(t)"
},
{
"math_id": 12,
"text": "\\epsilon_1(t) + \\epsilon_2(t)"
},
{
"math_id": 13,
"text": "J(t)\\equiv \\epsilon(t)/\\sigma_0"
},
{
"math_id": 14,
"text": "J(t)"
},
{
"math_id": 15,
"text": "J_R"
},
{
"math_id": 16,
"text": "\\delta J"
},
{
"math_id": 17,
"text": "\\epsilon_0"
},
{
"math_id": 18,
"text": "M(t)\\equiv \\sigma(t)/\\epsilon_0"
},
{
"math_id": 19,
"text": "M_\\text{R} = 1/J_\\text{R}"
},
{
"math_id": 20,
"text": "M_\\text{U} = 1/J_\\text{U}"
},
{
"math_id": 21,
"text": "\\sigma=\\sigma_0 a^{i\\omega t}"
},
{
"math_id": 22,
"text": "\\sigma_0"
},
{
"math_id": 23,
"text": "\\omega"
},
{
"math_id": 24,
"text": "\\epsilon=\\epsilon_0 a^{i(\\omega t-\\phi)}"
},
{
"math_id": 25,
"text": "\\varphi"
},
{
"math_id": 26,
"text": "\\varphi = 0"
},
{
"math_id": 27,
"text": "\\epsilon/\\sigma"
},
{
"math_id": 28,
"text": "J^\\star(\\omega)"
},
{
"math_id": 29,
"text": "J^*(\\omega)=\\frac{\\epsilon}{\\sigma}=|J|(\\omega)e^{-i\\phi(\\omega)}"
},
{
"math_id": 30,
"text": "|J|(\\omega)"
},
{
"math_id": 31,
"text": "J^\\star"
},
{
"math_id": 32,
"text": "\\epsilon_0/\\sigma_0"
},
{
"math_id": 33,
"text": "\\varphi(\\omega)"
},
{
"math_id": 34,
"text": "J^*(\\omega)=J_1(\\omega)-iJ_2(\\omega)"
},
{
"math_id": 35,
"text": "\\Delta W =\\oint \\sigma d \\epsilon = \\pi J_2 \\sigma_0^2"
},
{
"math_id": 36,
"text": "\\Delta W"
},
{
"math_id": 37,
"text": "W"
},
{
"math_id": 38,
"text": "W =\\int_{\\omega t =0}^{\\pi /2} \\sigma d \\epsilon = \\frac{1}{2} J_1 \\sigma_0^2"
},
{
"math_id": 39,
"text": "\\Delta W/W=2\\pi\\tan\\phi"
},
{
"math_id": 40,
"text": "x_0"
},
{
"math_id": 41,
"text": "\\omega_\\text{r}"
},
{
"math_id": 42,
"text": "\\phi\\ll1"
},
{
"math_id": 43,
"text": "x_0^2"
},
{
"math_id": 44,
"text": "\\omega_1"
},
{
"math_id": 45,
"text": "\\omega_2"
},
{
"math_id": 46,
"text": "\\frac{\\omega_2-\\omega_1}{\\omega_\\text{r}}=Q^{-1}=\\phi"
},
{
"math_id": 47,
"text": "\\delta"
},
{
"math_id": 48,
"text": "\\delta\\simeq\\pi\\phi"
},
{
"math_id": 49,
"text": "\\delta=\\ln\\left(\\frac{A_n}{A_{n+1}}\\right)"
},
{
"math_id": 50,
"text": "\\sigma_i(i=1,2,...,m)"
},
{
"math_id": 51,
"text": "t_1',t_2',...,t_m'"
},
{
"math_id": 52,
"text": "\\epsilon(t)=\\sum_{i=1}^m\\sigma_iJ(t-t_i')"
},
{
"math_id": 53,
"text": "\\epsilon(t)=\\int_{-\\infin}^{t} J(t-t')\\frac{d\\sigma(t')}{dt'}dt'"
},
{
"math_id": 54,
"text": "\\sigma(t)=\\int_{-\\infin}^{t} M(t-t')\\frac{d\\epsilon(t')}{dt'}dt'"
},
{
"math_id": 55,
"text": "1=M_UJ(t)+\\int_{0}^{t} J(t-t'){d\\sigma(t') \\over dt'}dt'"
},
{
"math_id": 56,
"text": "a_0\\sigma+a_1\\dot{\\sigma}+a_2\\ddot{\\sigma}+\\cdot\\cdot\\cdot=b_0\\epsilon+b_1\\dot{\\epsilon}+b_2\\ddot{\\epsilon}+\\cdot\\cdot\\cdot"
},
{
"math_id": 57,
"text": "a_0"
},
{
"math_id": 58,
"text": "b_0"
},
{
"math_id": 59,
"text": "J\\sigma=\\epsilon+\\tau\\dot\\epsilon"
},
{
"math_id": 60,
"text": "\\tau\\dot\\sigma+\\sigma=\\tau M\\dot\\epsilon"
},
{
"math_id": 61,
"text": "J_\\text{R}\\sigma+\\tau_\\sigma J_\\text{U}\\dot\\sigma=\\epsilon+\\tau_\\sigma \\dot\\epsilon"
},
{
"math_id": 62,
"text": "J(t)=\\frac{\\epsilon(t)}{\\sigma_0}=J_\\text{R}-(J_\\text{R}-J_\\text{U})e^{-\\frac{t}{\\tau_\\sigma}}=J_\\text{U}+\\delta (1-e^{-\\frac{t}{\\tau_\\sigma}}) ,"
},
{
"math_id": 63,
"text": "\\tau_\\sigma"
},
{
"math_id": 64,
"text": "J_1"
},
{
"math_id": 65,
"text": "J_2"
},
{
"math_id": 66,
"text": "J_1(\\omega)=J_\\text{U}+\\frac{\\delta J}{(1+\\omega^2\\tau_\\sigma^2)}"
},
{
"math_id": 67,
"text": "J_2(\\omega)=\\delta J\\frac{\\omega \\tau_\\sigma}{(1+\\omega^2\\tau_\\sigma^2)}"
},
{
"math_id": 68,
"text": "\\Delta(\\log_{10}\\omega\\tau)=1.144"
},
{
"math_id": 69,
"text": "\\phi"
},
{
"math_id": 70,
"text": "\\delta J\\ll J_\\text{U}"
},
{
"math_id": 71,
"text": "\\phi \\cong \\Delta \\frac{\\omega\\tau}{1+\\omega^2\\tau^2}"
},
{
"math_id": 72,
"text": "\\Delta"
},
{
"math_id": 73,
"text": "\\tau"
},
{
"math_id": 74,
"text": "\\omega\\tau"
},
{
"math_id": 75,
"text": "\\tau^{-1}"
},
{
"math_id": 76,
"text": "\\tau^{-1}=v_0e^{-Q/kT}"
},
{
"math_id": 77,
"text": "T"
},
{
"math_id": 78,
"text": "v_o"
},
{
"math_id": 79,
"text": "Q"
},
{
"math_id": 80,
"text": "k"
},
{
"math_id": 81,
"text": "\\tau_\\sigma^{(i)}"
},
{
"math_id": 82,
"text": "\\delta J_\\sigma^{(i)}"
},
{
"math_id": 83,
"text": "M_\\text{s}"
},
{
"math_id": 84,
"text": "M_\\text{f}"
}
] |
https://en.wikipedia.org/wiki?curid=67072520
|
67072588
|
Church of San Buenaventura
|
The Church of San Buenaventura in Casablanca, Morocco, was founded by Franciscan Catholics around 1890. Built on territory Sultan Hassan I of Morocco granted to King Alfonso XII, it was the seat of the Spanish church in Casablanca from the late 19th century.
It ceased to operate as a church in 1968, after which it hosted families in need. The Spanish Embassy ceded the property to the city of Casablanca.
Around 2016, the 1250 mformula_0 site was transformed into a cultural center serving the community of the medina.
|
[
{
"math_id": 0,
"text": "^2"
}
] |
https://en.wikipedia.org/wiki?curid=67072588
|
67088
|
Conservation of energy
|
Law of physics and chemistry
The law of conservation of energy states that the total energy of an isolated system remains constant; it is said to be "conserved" over time. In the case of a closed system the principle says that the total amount of energy within the system can only be changed through energy entering or leaving the system. Energy can neither be created nor destroyed; rather, it can only be transformed or transferred from one form to another. For instance, chemical energy is converted to kinetic energy when a stick of dynamite explodes. If one adds up all forms of energy that were released in the explosion, such as the kinetic energy and potential energy of the pieces, as well as heat and sound, one will get the exact decrease of chemical energy in the combustion of the dynamite.
Classically, conservation of energy was distinct from conservation of mass. However, special relativity shows that mass is related to energy and vice versa by formula_0, the equation representing mass–energy equivalence, and science now takes the view that mass-energy as a whole is conserved. Theoretically, this implies that any object with mass can itself be converted to pure energy, and vice versa. However, this is believed to be possible only under the most extreme of physical conditions, such as likely existed in the universe very shortly after the Big Bang or when black holes emit Hawking radiation.
Given the stationary-action principle, conservation of energy can be rigorously proven by Noether's theorem as a consequence of continuous time translation symmetry; that is, from the fact that the laws of physics do not change over time.
A consequence of the law of conservation of energy is that a perpetual motion machine of the first kind cannot exist; that is to say, no system without an external energy supply can deliver an unlimited amount of energy to its surroundings. Depending on the definition of energy, conservation of energy can arguably be violated by general relativity on the cosmological scale.
History.
Ancient philosophers as far back as Thales of Miletus c. 550 BCE had inklings of the conservation of some underlying substance of which everything is made. However, there is no particular reason to identify their theories with what we know today as "mass-energy" (for example, Thales thought it was water). Empedocles (490–430 BCE) wrote that in his universal system, composed of four roots (earth, air, water, fire), "nothing comes to be or perishes"; instead, these elements suffer continual rearrangement. Epicurus (c. 350 BCE) on the other hand believed everything in the universe to be composed of indivisible units of matter—the ancient precursor to 'atoms'—and he too had some idea of the necessity of conservation, stating that "the sum total of things was always such as it is now, and such it will ever remain."
In 1605, the Flemish scientist Simon Stevin was able to solve a number of problems in statics based on the principle that perpetual motion was impossible.
In 1639, Galileo published his analysis of several situations—including the celebrated "interrupted pendulum"—which can be described (in modern language) as conservatively converting potential energy to kinetic energy and back again. Essentially, he pointed out that the height a moving body rises is equal to the height from which it falls, and used this observation to infer the idea of inertia. The remarkable aspect of this observation is that the height to which a moving body ascends on a frictionless surface does not depend on the shape of the surface.
In 1669, Christiaan Huygens published his laws of collision. Among the quantities he listed as being invariant before and after the collision of bodies were both the sum of their linear momenta as well as the sum of their kinetic energies. However, the difference between elastic and inelastic collision was not understood at the time. This led to the dispute among later researchers as to which of these conserved quantities was the more fundamental. In his "Horologium Oscillatorium", he gave a much clearer statement regarding the height of ascent of a moving body, and connected this idea with the impossibility of perpetual motion. Huygens's study of the dynamics of pendulum motion was based on a single principle: that the center of gravity of a heavy object cannot lift itself.
Between 1676 and 1689, Gottfried Leibniz first attempted a mathematical formulation of the kind of energy that is associated with "motion" (kinetic energy). Using Huygens's work on collision, Leibniz noticed that in many mechanical systems (of several masses "mi", each with velocity "vi"),
formula_1
was conserved so long as the masses did not interact. He called this quantity the "vis viva" or "living force" of the system. The principle represents an accurate statement of the approximate conservation of kinetic energy in situations where there is no friction. Many physicists at that time, including Isaac Newton, held that the conservation of momentum, which holds even in systems with friction, as defined by the momentum:
formula_2
was the conserved "vis viva". It was later shown that both quantities are conserved simultaneously given the proper conditions, such as in an elastic collision.
In 1687, Isaac Newton published his "Principia", which set out his laws of motion. It was organized around the concept of force and momentum. However, the researchers were quick to recognize that the principles set out in the book, while fine for point masses, were not sufficient to tackle the motions of rigid and fluid bodies. Some other principles were also required.
By the 1690s, Leibniz was arguing that conservation of "vis viva" and conservation of momentum undermined the then-popular philosophical doctrine of interactionist dualism. (During the 19th century, when conservation of energy was better understood, Leibniz's basic argument would gain widespread acceptance. Some modern scholars continue to champion specifically conservation-based attacks on dualism, while others subsume the argument into a more general argument about causal closure.)
The law of conservation of vis viva was championed by the father and son duo, Johann and Daniel Bernoulli. The former enunciated the principle of virtual work as used in statics in its full generality in 1715, while the latter based his "Hydrodynamica", published in 1738, on this single vis viva conservation principle. Daniel's study of loss of vis viva of flowing water led him to formulate the Bernoulli's principle, which asserts the loss to be proportional to the change in hydrodynamic pressure. Daniel also formulated the notion of work and efficiency for hydraulic machines; and he gave a kinetic theory of gases, and linked the kinetic energy of gas molecules with the temperature of the gas.
This focus on the vis viva by the continental physicists eventually led to the discovery of stationarity principles governing mechanics, such as the D'Alembert's principle, Lagrangian, and Hamiltonian formulations of mechanics.
Émilie du Châtelet (1706–1749) proposed and tested the hypothesis of the conservation of total energy, as distinct from momentum. Inspired by the theories of Gottfried Leibniz, she repeated and publicized an experiment originally devised by Willem 's Gravesande in 1722 in which balls were dropped from different heights into a sheet of soft clay. Each ball's kinetic energy—as indicated by the quantity of material displaced—was shown to be proportional to the square of the velocity. The deformation of the clay was found to be directly proportional to the height from which the balls were dropped, equal to the initial potential energy. Some earlier workers, including Newton and Voltaire, had believed that "energy" was not distinct from momentum and therefore proportional to velocity. According to this understanding, the deformation of the clay should have been proportional to the square root of the height from which the balls were dropped. In classical physics, the correct formula is formula_3, where formula_4 is the kinetic energy of an object, formula_5 its mass and formula_6 its speed. On this basis, du Châtelet proposed that energy must always have the same dimensions in any form, which is necessary to be able to consider it in different forms (kinetic, potential, heat, ...).
Engineers such as John Smeaton, Peter Ewart, Carl Holtzmann, Gustave-Adolphe Hirn, and Marc Seguin recognized that conservation of momentum alone was not adequate for practical calculation and made use of Leibniz's principle. The principle was also championed by some chemists such as William Hyde Wollaston. Academics such as John Playfair were quick to point out that kinetic energy is clearly not conserved. This is obvious to a modern analysis based on the second law of thermodynamics, but in the 18th and 19th centuries, the fate of the lost energy was still unknown.
Gradually it came to be suspected that the heat inevitably generated by motion under friction was another form of "vis viva". In 1783, Antoine Lavoisier and Pierre-Simon Laplace reviewed the two competing theories of "vis viva" and caloric theory. Count Rumford's 1798 observations of heat generation during the boring of cannons added more weight to the view that mechanical motion could be converted into heat and (that it was important) that the conversion was quantitative and could be predicted (allowing for a universal conversion constant between kinetic energy and heat). "Vis viva" then started to be known as "energy", after the term was first used in that sense by Thomas Young in 1807.
The recalibration of "vis viva" to
formula_7
which can be understood as converting kinetic energy to work, was largely the result of Gaspard-Gustave Coriolis and Jean-Victor Poncelet over the period 1819–1839. The former called the quantity "quantité de travail" (quantity of work) and the latter, "travail mécanique" (mechanical work), and both championed its use in engineering calculations.
In the paper "Über die Natur der Wärme" (German "On the Nature of Heat/Warmth"), published in 1837, Karl Friedrich Mohr gave one of the earliest general statements of the doctrine of the conservation of energy: "besides the 54 known chemical elements there is in the physical world one agent only, and this is called "Kraft" [energy or work]. It may appear, according to circumstances, as motion, chemical affinity, cohesion, electricity, light and magnetism; and from any one of these forms it can be transformed into any of the others."
Mechanical equivalent of heat.
A key stage in the development of the modern conservation principle was the demonstration of the "mechanical equivalent of heat". The caloric theory maintained that heat could neither be created nor destroyed, whereas conservation of energy entails the contrary principle that heat and mechanical work are interchangeable.
In the middle of the eighteenth century, Mikhail Lomonosov, a Russian scientist, postulated his corpusculo-kinetic theory of heat, which rejected the idea of a caloric. Through the results of empirical studies, Lomonosov came to the conclusion that heat was not transferred through the particles of the caloric fluid.
In 1798, Count Rumford (Benjamin Thompson) performed measurements of the frictional heat generated in boring cannons and developed the idea that heat is a form of kinetic energy; his measurements refuted caloric theory, but were imprecise enough to leave room for doubt.
The mechanical equivalence principle was first stated in its modern form by the German surgeon Julius Robert von Mayer in 1842. Mayer reached his conclusion on a voyage to the Dutch East Indies, where he found that his patients' blood was a deeper red because they were consuming less oxygen, and therefore less energy, to maintain their body temperature in the hotter climate. He discovered that heat and mechanical work were both forms of energy, and in 1845, after improving his knowledge of physics, he published a monograph that stated a quantitative relationship between them.
Meanwhile, in 1843, James Prescott Joule independently discovered the mechanical equivalent in a series of experiments. In one of them, now called the "Joule apparatus", a descending weight attached to a string caused a paddle immersed in water to rotate. He showed that the gravitational potential energy lost by the weight in descending was equal to the internal energy gained by the water through friction with the paddle.
Over the period 1840–1843, similar work was carried out by engineer Ludwig A. Colding, although it was little known outside his native Denmark.
Both Joule's and Mayer's work suffered from resistance and neglect but it was Joule's that eventually drew the wider recognition.
In 1844, the Welsh scientist William Robert Grove postulated a relationship between mechanics, heat, light, electricity, and magnetism by treating them all as manifestations of a single "force" ("energy" in modern terms). In 1846, Grove published his theories in his book "The Correlation of Physical Forces". In 1847, drawing on the earlier work of Joule, Sadi Carnot, and Émile Clapeyron, Hermann von Helmholtz arrived at conclusions similar to Grove's and published his theories in his book "Über die Erhaltung der Kraft" ("On the Conservation of Force", 1847). The general modern acceptance of the principle stems from this publication.
In 1850, the Scottish mathematician William Rankine first used the phrase "the law of the conservation of energy" for the principle.
In 1877, Peter Guthrie Tait claimed that the principle originated with Sir Isaac Newton, based on a creative reading of propositions 40 and 41 of the "Philosophiae Naturalis Principia Mathematica". This is now regarded as an example of Whig history.
Mass–energy equivalence.
Matter is composed of atoms and what makes up atoms. Matter has "intrinsic" or "rest" mass. In the limited range of recognized experience of the nineteenth century, it was found that such rest mass is conserved. Einstein's 1905 theory of special relativity showed that rest mass corresponds to an equivalent amount of "rest energy". This means that "rest mass" can be converted to or from equivalent amounts of (non-material) forms of energy, for example, kinetic energy, potential energy, and electromagnetic radiant energy. When this happens, as recognized in twentieth-century experience, rest mass is not conserved, unlike the "total" mass or "total" energy. All forms of energy contribute to the total mass and total energy.
For example, an electron and a positron each have rest mass. They can perish together, converting their combined rest energy into photons which have electromagnetic radiant energy but no rest mass. If this occurs within an isolated system that does not release the photons or their energy into the external surroundings, then neither the total "mass" nor the total "energy" of the system will change. The produced electromagnetic radiant energy contributes just as much to the inertia (and to any weight) of the system as did the rest mass of the electron and positron before their demise. Likewise, non-material forms of energy can perish into matter, which has rest mass.
Thus, conservation of energy ("total", including material or "rest" energy) and conservation of mass ("total", not just "rest") are one (equivalent) law. In the 18th century, these had appeared as two seemingly-distinct laws.
Conservation of energy in beta decay.
The discovery in 1911 that electrons emitted in beta decay have a continuous rather than a discrete spectrum appeared to contradict conservation of energy, under the then-current assumption that beta decay is the simple emission of an electron from a nucleus. This problem was eventually resolved in 1933 by Enrico Fermi who proposed the correct description of beta-decay as the emission of both an electron and an antineutrino, which carries away the apparently missing energy.
First law of thermodynamics.
For a closed thermodynamic system, the first law of thermodynamics may be stated as:
formula_8, or equivalently, formula_9
where formula_10 is the quantity of energy added to the system by a heating process, formula_11 is the quantity of energy lost by the system due to work done by the system on its surroundings, and formula_12 is the change in the internal energy of the system.
The δ's before the heat and work terms are used to indicate that they describe an increment of energy which is to be interpreted somewhat differently than the formula_12 increment of internal energy (see Inexact differential). Work and heat refer to kinds of process which add or subtract energy to or from a system, while the internal energy formula_13 is a property of a particular state of the system when it is in unchanging thermodynamic equilibrium. Thus the term "heat energy" for formula_10 means "that amount of energy added as a result of heating" rather than referring to a particular form of energy. Likewise, the term "work energy" for formula_11 means "that amount of energy lost as a result of work". Thus one can state the amount of internal energy possessed by a thermodynamic system that one knows is presently in a given state, but one cannot tell, just from knowledge of the given present state, how much energy has in the past flowed into or out of the system as a result of its being heated or cooled, nor as a result of work being performed on or by the system.
Entropy is a function of the state of a system which tells of limitations of the possibility of conversion of heat into work.
For a simple compressible system, the work performed by the system may be written:
formula_14
where formula_15 is the pressure and formula_16 is a small change in the volume of the system, each of which are system variables. In the fictive case in which the process is idealized and infinitely slow, so as to be called "quasi-static", and regarded as reversible, the heat being transferred from a source with temperature infinitesimally above the system temperature, the heat energy may be written
formula_17
where formula_18 is the temperature and formula_19 is a small change in the entropy of the system. Temperature and entropy are variables of the state of a system.
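As a simple numerical check of these relations (with illustrative numbers rather than experimental data), consider the quasi-static isothermal expansion of an ideal gas, whose internal energy depends only on temperature, so that formula_12 vanishes and the heat absorbed equals the work done by the gas:

```python
import numpy as np

R, n_mol, T = 8.314, 1.0, 300.0          # gas constant [J/(mol K)], amount [mol], temperature [K]
V1, V2 = 0.010, 0.020                    # initial and final volume [m^3]

# Work done by the gas: W = integral of P dV with P = n R T / V (trapezoidal sum)
V = np.linspace(V1, V2, 100001)
P = n_mol * R * T / V
W = np.sum(0.5 * (P[1:] + P[:-1]) * np.diff(V))

# First law: dU = Q - W; for an isothermal ideal-gas process dU = 0, so Q = W
Q = W
print(f"W = {W:.1f} J, Q = {Q:.1f} J, dU = {Q - W:.1f} J")
print(f"analytic value n R T ln(V2/V1) = {n_mol * R * T * np.log(V2 / V1):.1f} J")
```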
If an open system (in which mass may be exchanged with the environment) has several walls such that the mass transfer is through rigid walls separate from the heat and work transfers, then the first law may be written as
formula_20
where formula_21 is the added mass of species formula_22 and formula_23 is the corresponding enthalpy per unit mass. Note that generally formula_24 in this case, as matter carries its own entropy. Instead, formula_25, where formula_26 is the entropy per unit mass of type formula_22, from which we recover the fundamental thermodynamic relation
formula_27
because the chemical potential formula_28 is the partial molar Gibbs free energy of species formula_22 and the Gibbs free energy formula_29.
Noether's theorem.
The conservation of energy is a common feature in many physical theories. From a mathematical point of view it is understood as a consequence of Noether's theorem, developed by Emmy Noether in 1915 and first published in 1918. In any physical theory that obeys the stationary-action principle, the theorem states that every continuous symmetry has an associated conserved quantity; if the theory's symmetry is time invariance, then the conserved quantity is called "energy". The energy conservation law is a consequence of the shift symmetry of time; energy conservation is implied by the empirical fact that the laws of physics do not change with time itself. Philosophically this can be stated as "nothing depends on time per se". In other words, if the physical system is invariant under the continuous symmetry of time translation, then its energy (which is the canonical conjugate quantity to time) is conserved. Conversely, systems that are not invariant under shifts in time (e.g. systems with time-dependent potential energy) do not exhibit conservation of energy – unless we consider them to exchange energy with another, external system so that the theory of the enlarged system becomes time-invariant again. Conservation of energy for finite systems is valid in physical theories such as special relativity and quantum theory (including QED) in the flat space-time.
Special relativity.
With the discovery of special relativity by Henri Poincaré and Albert Einstein, the energy was proposed to be a component of an energy-momentum 4-vector. Each of the four components (one of energy and three of momentum) of this vector is separately conserved across time, in any closed system, as seen from any given inertial reference frame. Also conserved is the vector length (Minkowski norm), which is the rest mass for single particles, and the invariant mass for systems of particles (where momenta and energy are separately summed before the length is calculated).
The relativistic energy of a single massive particle contains a term related to its rest mass in addition to its kinetic energy of motion. In the limit of zero kinetic energy (or equivalently in the rest frame) of a massive particle, or else in the center of momentum frame for objects or systems which retain kinetic energy, the total energy of a particle or object (including internal kinetic energy in systems) is proportional to the rest mass or invariant mass, as described by the equation formula_30.
Thus, the rule of "conservation of energy" over time in special relativity continues to hold, so long as the reference frame of the observer is unchanged. This applies to the total energy of systems, although different observers disagree as to the energy value. Also conserved, and invariant to all observers, is the invariant mass, which is the minimal system mass and energy that can be seen by any observer, and which is defined by the energy–momentum relation.
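A small numerical sketch (in units with c = 1 and with arbitrarily chosen masses and momenta) illustrates the point: boosting the total energy–momentum four-vector of a two-particle system changes the total energy, but leaves the invariant mass unchanged.

```python
import numpy as np

def boost_x(fourvec, beta):
    """Lorentz boost of a four-vector (E, px, py, pz) along x, with c = 1."""
    gamma = 1.0 / np.sqrt(1.0 - beta**2)
    E, px, py, pz = fourvec
    return np.array([gamma * (E - beta * px), gamma * (px - beta * E), py, pz])

def invariant_mass(P):
    return np.sqrt(P[0]**2 - np.sum(P[1:]**2))

# Two particles with arbitrarily chosen rest masses and three-momenta (c = 1)
m = np.array([0.938, 0.140])
p = np.array([[1.2, 0.3, 0.0], [-0.4, 0.1, 0.2]])
E = np.sqrt(m**2 + np.sum(p**2, axis=1))           # energy-momentum relation per particle
P_total = np.concatenate(([E.sum()], p.sum(axis=0)))

P_boosted = boost_x(P_total, beta=0.6)
print(f"original frame: E = {P_total[0]:.4f}, invariant mass = {invariant_mass(P_total):.4f}")
print(f"boosted frame:  E = {P_boosted[0]:.4f}, invariant mass = {invariant_mass(P_boosted):.4f}")
```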
General relativity.
General relativity introduces new phenomena. In an expanding universe, photons spontaneously redshift and tethers spontaneously gain tension; if vacuum energy is positive, the total vacuum energy of the universe appears to spontaneously increase as the volume of space increases. Some scholars claim that energy is no longer meaningfully conserved in any identifiable form.
John Baez's view is that energy–momentum conservation is not well-defined except in certain special cases. Energy-momentum is typically expressed with the aid of a stress–energy–momentum pseudotensor. However, since pseudotensors are not tensors, they do not transform cleanly between reference frames. If the metric under consideration is static (that is, does not change with time) or asymptotically flat (that is, at an infinite distance away spacetime looks empty), then energy conservation holds without major pitfalls. In practice, some metrics, notably the Friedmann–Lemaître–Robertson–Walker metric that appears to govern the universe, do not satisfy these constraints and energy conservation is not well defined. Besides being dependent on the coordinate system, pseudotensor energy is dependent on the type of pseudotensor in use; for example, the energy exterior to a Kerr–Newman black hole is twice as large when calculated from Møller's pseudotensor as it is when calculated using the Einstein pseudotensor.
For asymptotically flat universes, Einstein and others salvage conservation of energy by introducing a specific global gravitational potential energy that cancels out mass-energy changes triggered by spacetime expansion or contraction. This global energy has no well-defined density and cannot technically be applied to a non-asymptotically flat universe; however, for practical purposes this can be finessed, and so by this view, energy is conserved in our universe. Alan Guth stated that the universe might be "the ultimate free lunch", and theorized that, when accounting for gravitational potential energy, the net energy of the Universe is zero.
Quantum theory.
In quantum mechanics, the energy of a quantum system is described by a self-adjoint (or Hermitian) operator called the Hamiltonian, which acts on the Hilbert space (or a space of wave functions) of the system. If the Hamiltonian is a time-independent operator, the probability of obtaining any given result of an energy measurement does not change in time as the system evolves. Thus the expectation value of energy is also time independent. The local energy conservation in quantum field theory is ensured by the quantum Noether's theorem for the energy-momentum tensor operator. Thus energy is conserved by the normal unitary evolution of a quantum system.
However, when the non-unitary Born rule is applied, the system's energy is measured with an energy that can be below or above the expectation value, if the system was not in an energy eigenstate. (For macroscopic systems, this effect is usually too small to measure.) The disposition of this energy gap is not well-understood; most physicists believe that the energy is transferred to or from the macroscopic environment in the course of the measurement process, while others believe that the observable energy is only conserved "on average". No experiment has been confirmed as definitive evidence of violations of the conservation of energy principle in quantum mechanics, but that does not rule out that some newer experiments, as proposed, may find evidence of violations of the conservation of energy principle in quantum mechanics.
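The constancy of the energy expectation value under unitary evolution is easy to verify numerically for a small system. The sketch below uses an arbitrary two-level Hamiltonian (in units with ħ = 1) and an initial state that is not an energy eigenstate; the expectation value of the Hamiltonian stays constant as the state evolves.

```python
import numpy as np

# Arbitrary two-level Hamiltonian (Hermitian), in units with hbar = 1
H = np.array([[1.0, 0.3],
              [0.3, 2.0]])
psi0 = np.array([1.0, 0.0], dtype=complex)       # not an energy eigenstate

# Unitary time evolution U(t) = exp(-i H t), built from the eigendecomposition of H
evals, evecs = np.linalg.eigh(H)
def evolve(psi, t):
    return evecs @ (np.exp(-1j * evals * t) * (evecs.conj().T @ psi))

for t in (0.0, 1.0, 5.0, 20.0):
    psi = evolve(psi0, t)
    energy = np.real(psi.conj() @ H @ psi)       # expectation value <H>
    print(f"t = {t:5.1f}   <H> = {energy:.6f}")  # constant under unitary evolution
```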
Status.
In the context of perpetual motion machines such as the Orbo, Professor Eric Ash has argued at the BBC: "Denying [conservation of energy] would undermine not just little bits of science - the whole edifice would be no more. All of the technology on which we built the modern world would lie in ruins". It is because of conservation of energy that "we know - without having to examine details of a particular device - that Orbo cannot work."
Energy conservation has been a foundational physical principle for about two hundred years. From the point of view of modern general relativity, the lab environment can be well approximated by Minkowski spacetime, where energy is exactly conserved. The entire Earth can be well approximated by the Schwarzschild metric, where again energy is exactly conserved. Given all the experimental evidence, any new theory (such as quantum gravity), in order to be successful, will have to explain why energy has appeared to always be exactly conserved in terrestrial experiments. In some speculative theories, corrections to quantum mechanics are too small to be detected at anywhere near the current TeV level accessible through particle accelerators. Doubly special relativity models may argue for a breakdown in energy-momentum conservation for sufficiently energetic particles; such models are constrained by observations that cosmic rays appear to travel for billions of years without displaying anomalous non-conservation behavior. Some interpretations of quantum mechanics claim that observed energy tends to increase when the Born rule is applied due to localization of the wave function. If true, objects could be expected to spontaneously heat up; thus, such models are constrained by observations of large, cool astronomical objects as well as the observation of (often supercooled) laboratory experiments.
Milton A. Rothman wrote that the law of conservation of energy has been verified by nuclear physics experiments to an accuracy of one part in a thousand million million (10^15). He then defines its precision as "perfect for all practical purposes".
See also.
<templatestyles src="Div col/styles.css"/>
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "E = mc^2"
},
{
"math_id": 1,
"text": "\\sum_{i} m_i v_i^2"
},
{
"math_id": 2,
"text": "\\sum_{i} m_i v_i"
},
{
"math_id": 3,
"text": "E_k = \\frac12 mv^2"
},
{
"math_id": 4,
"text": "E_k"
},
{
"math_id": 5,
"text": "m"
},
{
"math_id": 6,
"text": "v"
},
{
"math_id": 7,
"text": "\\frac {1} {2}\\sum_{i} m_i v_i^2"
},
{
"math_id": 8,
"text": "\\delta Q = \\mathrm{d}U + \\delta W"
},
{
"math_id": 9,
"text": "\\mathrm{d}U = \\delta Q - \\delta W,"
},
{
"math_id": 10,
"text": "\\delta Q"
},
{
"math_id": 11,
"text": "\\delta W"
},
{
"math_id": 12,
"text": "\\mathrm{d}U"
},
{
"math_id": 13,
"text": "U"
},
{
"math_id": 14,
"text": "\\delta W = P\\,\\mathrm{d}V,"
},
{
"math_id": 15,
"text": "P"
},
{
"math_id": 16,
"text": "dV"
},
{
"math_id": 17,
"text": "\\delta Q = T\\,\\mathrm{d}S,"
},
{
"math_id": 18,
"text": "T"
},
{
"math_id": 19,
"text": "\\mathrm{d}S"
},
{
"math_id": 20,
"text": "\\mathrm{d}U = \\delta Q - \\delta W + \\sum_i h_i\\,dM_i,"
},
{
"math_id": 21,
"text": "dM_i"
},
{
"math_id": 22,
"text": "i"
},
{
"math_id": 23,
"text": "h_i"
},
{
"math_id": 24,
"text": "dS\\neq\\delta Q/T"
},
{
"math_id": 25,
"text": "dS=\\delta Q/T+\\textstyle{\\sum_{i}}s_i\\,dM_i"
},
{
"math_id": 26,
"text": "s_i"
},
{
"math_id": 27,
"text": "\\mathrm{d}U = T\\,dS - P\\,dV + \\sum_i\\mu_i\\,dN_i"
},
{
"math_id": 28,
"text": "\\mu_i"
},
{
"math_id": 29,
"text": "G\\equiv H-TS"
},
{
"math_id": 30,
"text": "E=mc^2"
}
] |
https://en.wikipedia.org/wiki?curid=67088
|
67090228
|
Threshold dose
|
Threshold dose is the minimum dose of a drug that triggers a minimal detectable biological effect in an animal. At extremely low doses, biological responses are absent for some drugs; increasing the dose above the threshold dose increases the percentage of biological responses. Several benchmarks have been established to describe the effects of a particular dose of a drug in a particular species, such as the NOEL (no-observed-effect-level), NOAEL (no-observed-adverse-effect-level) and LOAEL (lowest-observed-adverse-effect-level). They are established by reviewing the available studies and by animal studies. The application of threshold doses in risk assessment safeguards the participants in human clinical trials and supports the evaluation of the risks of chronic exposure to certain substances. However, the nature of animal studies also limits the applicability of experimental results to the human population and their significance in evaluating the potential risk of certain substances. In toxicology, other safety factors include the LD50, LC50 and EC50.
Dose levels.
Threshold dose is a dose of drug barely adequate to produce a biological effect in an animal. In dose-response assessment, the term ‘threshold dose’ is refined into several terminologies, such as NOEL, NOAEL, and LOAEL. They define the limits of doses resulting in biological responses or toxic effects. Common responses are alterations in structures, growth, development and average lifespan of the treated group of organisms. The changes are found by comparing the observations between the treated and control groups. Both groups are of the same species and have the same environment of exposure in the trial. The only difference is that the treated group receives the experimental substance while the control group does not.
For drugs administered by the oral or dermal route, the units of threshold dose are mg/kg body weight/day (dose of the drug in mg per kg of body weight per day) or ppm (parts per million), while the threshold dose of drugs delivered by inhalation has units of mg/L for 6 h/day (amount of drug in mg per litre of air, for 6 hours per day).
NOEL.
NOEL is no-observed-effect-level. It is the maximum dose of a substance that has no observable effect on the treated group in human clinical trials or animal experimental trials. In some literature, NOEL is the only dose level referred to by the term ‘threshold dose’.
NOAEL.
NOAEL is no-observed-adverse-effect-level. It is the maximum dose of a substance that has no observable adverse effect on the treated group in human clinical trials or animal experimental trials.
LOAEL.
LOAEL is lowest-observed-adverse-effect-level. It is the minimum dose of a substance that produces an observable adverse effect on the treated group in human clinical trials or animal experimental trials. There is a biologically or statistically significant increase in the prevalence of adverse effect in the treated group above this level.
Establishment of dose levels.
Factors affecting threshold dose.
The dose-response relationship depends on various factors, including the physicochemical properties of the drug, the route of administration or exposure, the duration of exposure, the population size, and the characteristics of the studied organism such as species, sex and age. The type of biological response considered is also significant, since each response corresponds to its own dose-response relationship. As it is not practical to establish dose-response relationships for all possible responses, studies usually narrow the scope to a few responses. All available studies examining the correlation between the target drug and its biological responses are reviewed, and the critical response selected for assessment is the one produced at the lowest dose. The precursor of a biological effect can also serve as the response for assessment: for instance, the risk factors of a disease may eventually precipitate the disease, so in a study of the relationship between a drug and the development of a particular cardiovascular disease, those risk factors can be taken as the responses to be measured.
Process to evaluate threshold dose.
A two-step process is adopted to evaluate the specific dose levels NOAEL and LOAEL. The first step is to review available studies or carry out animal studies to obtain data on the effects of different doses of the target drug. These allow dose-response relationships to be established over the range of doses reported in the data collected. Often the data collected are inadequate to produce a range wide enough to observe a dose at which biological responses are not induced in humans. A dose low enough to prevent the occurrence of the response in humans cannot then be evaluated directly, which leads to the second step, extrapolation of the dose-response relationship. The behaviour beyond the range covered by the available data is estimated, in an attempt to infer the region within which the critical dose levels such as the NOAEL and LOAEL fall. The doses that start to trigger adverse effects in humans can thus be evaluated.
For step one, the two common approaches for evaluating threshold doses are qualitative examination of available studies and animal studies.
Qualitative examination of available studies.
The effects of the target drug at different doses are obtained from available studies. The dose-response relationship will be identified and extrapolation is often required to make inferences about the dose levels below the range of data collected.
Animal Studies.
Animal studies are conducted when the data collected from qualitative examination of available studies are scarce, in order to expand the range of doses. Animal studies also allow manipulation of the study design, such as the age and sex of the treated animals. They are therefore less susceptible to the influence of confounders than observational studies, which contributes to a more rigorous dose-response assessment. As the assessed animals differ from humans in characteristics such as body size, extrapolation is needed to estimate the dose-response relationship in humans.
A common animal study is repeated-dose toxicity testing. The animals are divided into four groups, receiving a placebo, a low dose, a mid dose and a high dose of the drug respectively. Within the same group, the same dose is given daily for a specified period, such as 28 or 90 days. After this period, necropsy or collection of tissue samples allows identification of the dose levels that bring about certain effects, and therefore establishment of the NOAEL and LOAEL.
Significance.
The threshold doses such as NOAEL, LOAEL and NOEL are essential values in risk assessment. The maximum safe starting doses of different drugs can be obtained from them prior to human clinical trials. Another application is to assess the safe dose for chronic exposure. They are utilized to estimate the daily exposure which does not induce detrimental effects in humans in their lifetime, which is also known as the Reference Dose (RfD).
The variation between species and the extrapolation of dose-response relationships generated from animal studies to humans introduce uncertainties into the dose-response analysis. Humans also vary in their sensitivity towards a particular substance within the population. As a result, 10-fold uncertainty factors (UF) are applied to convert the NOAEL to the reference dose; UFinter and UFintra account for the inter- and intra-species variation respectively.
formula_0
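As a worked illustration of this conversion (the NOAEL value below is hypothetical, not taken from any particular study), the reference dose follows directly from the NOAEL and the two 10-fold uncertainty factors:

```python
def reference_dose(noael_mg_per_kg_day, uf_inter=10.0, uf_intra=10.0):
    """Reference dose RfD = NOAEL / (UF_inter * UF_intra)."""
    return noael_mg_per_kg_day / (uf_inter * uf_intra)

# Hypothetical NOAEL of 5 mg/kg body weight/day from an animal study
print(f"RfD = {reference_dose(5.0):.3f} mg/kg/day")   # -> 0.050 mg/kg/day
```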
Limitations.
Inapplicability.
For carcinogenic substances, the NOAEL and LOAEL theoretically do not exist, as there is no safe dose for carcinogens. A linear no-threshold model is commonly used to describe the probability of cancer development from radiation: there is no threshold value below which stochastic health effects cease to emerge. Only for non-cancer health outcomes is a safety margin assumed to exist below which no negative biological effect is expected.
Inconsistency.
Most dose-response models are obtained from animal experiments because of ethical concerns, so the results might not be consistent with those for the human population. Individual differences also arise among people in terms of age, weight, sex, health status, etc. Thus, in most circumstances, the threshold dose serves as a reference for evaluating the probable outcome of a certain dose of a substance for the general population, while great deviations might exist in special populations such as immunocompromised patients, pregnant women and young children.
Incomprehensiveness.
The threshold dose is only a measure of acute toxicity, since the drug or toxic substance investigated is administered at once; the consequences of long-term administration remain unknown. As the threshold dose corresponds to the minimal measurable response, its accuracy depends heavily on the instrumentation used, and further refinement may be needed. Furthermore, the threshold dose only reflects the dose required for a minimum detectable response; it should not be taken to mean that health effects are entirely absent at doses below the threshold dose.
Other safety factors.
LD50, LC50.
The median lethal dose (LD50) of a substance is the dose that leads to death in 50% of the tested population. It is a significant parameter in toxicology and indicates the acute toxicity of a particular substance. LD50 is usually expressed as the weight of chemical administered per unit body weight (mg/kg). For environmental toxins, where there is no direct administration of the toxic material, the related parameter LC50 is used instead: the concentration of the substance in air that kills half of the tested population during the experimental period.
EC50.
The median effective concentration (EC50) is the concentration of a drug required to reach 50% of the maximal biological effect the drug can exert. It reflects the potency of a drug and is expressed in molar units such as mol/L. The value of EC50 depends greatly on the affinity of the drug for its receptor, as well as on the efficacy of the drug, which conveys receptor occupancy and the ability of the drug to trigger a biological response. EC50 is incorporated in the Hill equation, a function that describes the relationship between agonist concentration and ligand binding; mathematically, EC50 is given by the inflection point of the curve.
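A short sketch of the Hill relationship (with a hypothetical EC50 and a Hill coefficient of 1, chosen only for illustration) shows the fractional effect rising around the EC50 and reaching exactly half of the maximal effect at C = EC50:

```python
import numpy as np

def hill_response(conc, ec50, e_max=1.0, n=1.0):
    """Fractional effect E = E_max * C^n / (EC50^n + C^n) (Hill equation)."""
    return e_max * conc**n / (ec50**n + conc**n)

# Hypothetical drug with EC50 = 1e-7 mol/L and Hill coefficient n = 1
for c in (1e-9, 1e-8, 1e-7, 1e-6, 1e-5):
    print(f"C = {c:.0e} mol/L -> effect = {hill_response(c, ec50=1e-7):.2f} of maximum")
```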
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "RfD = NOAEL \\div(UFinter\\times(UFintra))"
}
] |
https://en.wikipedia.org/wiki?curid=67090228
|
6709643
|
Random effects model
|
Statistical model
In statistics, a random effects model, also called a variance components model, is a statistical model where the model parameters are random variables. It is a kind of hierarchical linear model, which assumes that the data being analysed are drawn from a hierarchy of different populations whose differences relate to that hierarchy. A random effects model is a special case of a mixed model.
Contrast this to the biostatistics definitions, as biostatisticians use "fixed" and "random" effects to respectively refer to the population-average and subject-specific effects (and where the latter are generally assumed to be unknown, latent variables).
Qualitative description.
Random effects models assist in controlling for unobserved heterogeneity when that heterogeneity is constant over time and not correlated with the independent variables. Such a time-invariant component can be removed from longitudinal data through differencing, since taking a first difference removes any time-invariant components of the model.
Two common assumptions can be made about the individual specific effect: the random effects assumption and the fixed effects assumption. The random effects assumption is that the individual unobserved heterogeneity is uncorrelated with the independent variables. The fixed effect assumption is that the individual specific effect is correlated with the independent variables.
If the random effects assumption holds, the random effects estimator is more efficient than the fixed effects estimator.
Simple example.
Suppose formula_0 large elementary schools are chosen randomly from among thousands in a large country. Suppose also that formula_1 pupils of the same age are chosen randomly at each selected school. Their scores on a standard aptitude test are ascertained. Let formula_2 be the score of the formula_3-th pupil at the formula_4-th school.
A simple way to model this variable is
formula_5
where formula_6 is the average test score for the entire population.
In this model formula_7 is the school-specific random effect: it measures the difference between the average score at school formula_4 and the average score in the entire country. The term formula_8 is the individual-specific random effect, i.e., it's the deviation of the formula_3-th pupil's score from the average for the formula_4-th school.
The model can be augmented by including additional explanatory variables, which would capture differences in scores among different groups. For example:
formula_9
where formula_10 is a binary dummy variable and formula_11 records, say, the average education level of a child's parents. This is a mixed model, not a purely random effects model, as it introduces fixed-effects terms for Sex and Parents' Education.
Variance components.
The variance of formula_2 is the sum of the variances formula_12 and formula_13 of formula_7 and formula_8 respectively.
Let
formula_14
be the average, not of all scores at the formula_4-th school, but of those at the formula_4-th school that are included in the random sample. Let
formula_15
be the grand average.
Let
formula_16
formula_17
be respectively the sum of squares due to differences "within" groups and the sum of squares due to difference "between" groups. Then it can be shown that
formula_18
and
formula_19
These "expected mean squares" can be used as the basis for estimation of the "variance components" formula_13 and "formula_12.
The formula_13 parameter is also called the within-group (or residual) variance.
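A minimal sketch (with hypothetical population values, not from any real dataset) of simulating the school example above and estimating the variance components formula_12 and formula_13 from SSB and SSW:
<syntaxhighlight lang="python">
import numpy as np

rng = np.random.default_rng(0)
m, n = 50, 30                          # m schools, n sampled pupils per school
mu, tau, sigma = 500.0, 20.0, 40.0     # hypothetical mean and standard deviations

U = rng.normal(0.0, tau, size=m)           # school-specific effects U_i
W = rng.normal(0.0, sigma, size=(m, n))    # pupil-specific effects W_ij
Y = mu + U[:, None] + W                    # scores Y_ij

school_means = Y.mean(axis=1)              # sample averages per school
grand_mean = Y.mean()                      # grand average

SSW = ((Y - school_means[:, None]) ** 2).sum()
SSB = n * ((school_means - grand_mean) ** 2).sum()

sigma2_hat = SSW / (m * (n - 1))                   # E(SSW)/(m(n-1)) = sigma^2
tau2_hat = SSB / ((m - 1) * n) - sigma2_hat / n    # E(SSB)/((m-1)n) = sigma^2/n + tau^2

print(sigma2_hat, tau2_hat)    # close to sigma^2 = 1600 and tau^2 = 400
</syntaxhighlight>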
Marginal Likelihood.
For random effects models, inference is typically based on the marginal likelihood, obtained by integrating the random effects out of the joint likelihood.
Applications.
Random effects models used in practice include the Bühlmann model of insurance contracts and the Fay-Herriot model used for small area estimation.
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "m"
},
{
"math_id": 1,
"text": "n"
},
{
"math_id": 2,
"text": "Y_{ij}"
},
{
"math_id": 3,
"text": "j"
},
{
"math_id": 4,
"text": "i"
},
{
"math_id": 5,
"text": "\n Y_{ij} = \\mu + U_i + W_{ij},\\,\n "
},
{
"math_id": 6,
"text": "\\mu"
},
{
"math_id": 7,
"text": "U_i"
},
{
"math_id": 8,
"text": "W_{ij}"
},
{
"math_id": 9,
"text": "\n Y_{ij} = \\mu + \\beta_1 \\mathrm{Sex}_{ij} + \\beta_2 \\mathrm{ParentsEduc}_{ij} + U_i + W_{ij},\\,\n "
},
{
"math_id": 10,
"text": "\\mathrm{Sex}_{ij}"
},
{
"math_id": 11,
"text": "\\mathrm{ParentsEduc}_{ij}"
},
{
"math_id": 12,
"text": "\\tau^2"
},
{
"math_id": 13,
"text": "\\sigma^2"
},
{
"math_id": 14,
"text": "\\overline{Y}_{i\\bullet} = \\frac{1}{n}\\sum_{j=1}^n Y_{ij}"
},
{
"math_id": 15,
"text": "\\overline{Y}_{\\bullet\\bullet} = \\frac{1}{mn}\\sum_{i=1}^m\\sum_{j=1}^n Y_{ij}"
},
{
"math_id": 16,
"text": "SSW = \\sum_{i=1}^m\\sum_{j=1}^n (Y_{ij} - \\overline{Y}_{i\\bullet})^2 \\, "
},
{
"math_id": 17,
"text": "SSB = n\\sum_{i=1}^m (\\overline{Y}_{i\\bullet} - \\overline{Y}_{\\bullet\\bullet})^2 \\,"
},
{
"math_id": 18,
"text": " \\frac{1}{m(n - 1)}E(SSW) = \\sigma^2"
},
{
"math_id": 19,
"text": " \\frac{1}{(m - 1)n}E(SSB) = \\frac{\\sigma^2}{n} + \\tau^2."
}
] |
https://en.wikipedia.org/wiki?curid=6709643
|
6710180
|
Alkylglycerone phosphate synthase
|
Class of enzymes
Alkylglycerone phosphate synthase (EC 2.5.1.26, "alkyldihydroxyacetonephosphate synthase", "alkyldihydroxyacetone phosphate synthetase", "alkyl DHAP synthetase", "alkyl-DHAP", "dihydroxyacetone-phosphate acyltransferase", "DHAP-AT") is an enzyme associated with Type 3 Rhizomelic chondrodysplasia punctata. This enzyme catalyses the following chemical reaction
1-acyl-glycerone 3-phosphate + a long-chain alcohol formula_0 an alkyl-glycerone 3-phosphate + a long-chain acid anion
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "\\rightleftharpoons"
}
] |
https://en.wikipedia.org/wiki?curid=6710180
|
67103042
|
2 Chronicles 1
|
Second Book of Chronicles, chapter 1
2 Chronicles 1 is the first chapter of the Second Book of Chronicles in the Old Testament of the Christian Bible, or of the second part of the Books of Chronicles in the Hebrew Bible. The book is compiled from older sources by an unknown person or group, designated by modern scholars as "the Chronicler", and had its final shape established in the late fifth or fourth century BCE. This chapter belongs to the section focusing on the kingship of Solomon (2 Chronicles 1 to 9). The focus of this chapter is Solomon's ascension and wealth.
Text.
This chapter was originally written in the Hebrew language and is divided into 17 verses in Christian Bibles, but into 18 verses in the Hebrew Bible with the following verse numbering comparison:
This article generally follows the common numbering in Christian English Bible versions, with notes to the numbering in Hebrew Bible versions.
Textual witnesses.
Some early manuscripts containing the text of this chapter in Hebrew are of the Masoretic Text, which includes the Aleppo Codex (10th century) and Codex Leningradensis (1008).
There is also a translation into Koine Greek known as the Septuagint, made in the last few centuries BCE. Extant ancient manuscripts of the Septuagint version include Codex Vaticanus (B; formula_0B; 4th century), and Codex Alexandrinus (A; formula_0A; 5th century).
Solomon's sacrifice and prayer at Gibeon (1:1–13).
The section records how Solomon began his reign as David's successor over the unified monarchy, after David had consolidated domestic support for Solomon (1 Chronicles 25–29). In verses 3–5, the Chronicler attempts to unite all legitimate worship sites and objects, that is, the tabernacle built by Moses in the desert, which was placed in Gibeon (1 Chronicles 21:29), and the ark of the Covenant, placed in the tent by David in Jerusalem. The Chronicler deliberately turns 'a great people, so numerous they cannot be numbered or counted' into 'a people as numerous as the dust of the earth' (verse 9), referring to the promise made to Jacob (or "Israel"). The reference to a promise of an eternal dynasty made to David ('let your promise to my father David now be fulfilled') refers to verse 1, where Solomon is introduced as David's son and rightful successor by divine choice.
"So Solomon, and all the congregation with him, went to the high place that was at Gibeon; for there was the tabernacle of the congregation of God, which Moses the servant of the Lord had made in the wilderness."
Solomon's wealth (1:14–17).
The record of Solomon's wealth in this section is almost identical to other passages. Here it serves to illustrate the fulfillment of God's promise to Solomon in Gibeon.
"And they fetched up, and brought forth out of Egypt a chariot for six hundred shekels of silver, and a horse for an hundred and fifty: and so brought they out horses for all the kings of the Hittites, and for the kings of Syria, by their means."
Notes.
<templatestyles src="Reflist/styles.css" />
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": " \\mathfrak{G}"
}
] |
https://en.wikipedia.org/wiki?curid=67103042
|
67107896
|
Cosmic microwave background spectral distortions
|
Fluctuations in the energy spectrum of the microwave background
CMB spectral distortions are tiny departures of the average cosmic microwave background (CMB) frequency spectrum from the predictions given by a perfect black body. They can be produced by a number of standard and non-standard processes occurring at the early stages of cosmic history, and therefore allow us to probe the standard picture of cosmology. Importantly, the CMB frequency spectrum and its distortions should not be confused with the CMB anisotropy power spectrum, which relates to spatial fluctuations of the CMB temperature in different directions of the sky.
Overview.
The energy spectrum of the CMB is extremely close to that of a perfect blackbody with a temperature of formula_1. This is expected because in the early Universe matter and radiation are in thermal equilibrium. However, at redshifts formula_2, several mechanisms, both standard and non-standard, can modify the CMB spectrum and introduce departures from a blackbody spectrum. These departures are commonly referred to as CMB spectral distortions and mostly concern the average CMB spectrum across the full sky (i.e., the CMB monopole spectrum).
Spectral distortions are created by processes that drive matter and radiation out of equilibrium. One important scenario relates to spectral distortions from early energy injection, for instance, by decaying particles, primordial black hole evaporation or the dissipation of acoustic waves set up by inflation. In this process, the baryons heat up and transfer some of their excess energy to the ambient CMB photon bath via Compton scattering. Depending on the moment of injection, this causes a distortion, which can be characterized using so-called formula_3- and formula_4-type distortion spectra. The dimensionless formula_3 and formula_4-parameters are a measure for the total amount of energy that was injected into the CMB. CMB spectral distortions therefore provide a powerful probe of early-universe physics and even deliver crude estimates for the epoch at which the injection occurred.
The current best observational limits, set in the 1990s by the COBE-satellite/FIRAS-instrument (COBE/FIRAS), are formula_5 and formula_6 at 95% confidence level. Within formula_7CDM we expect formula_8 and formula_9, signals that are within reach of current-day technology. Richer distortion signals, going beyond the classical formula_3 and formula_4 distortions, can be created by photon injection processes, relativistic electron distributions and during the gradual transition between the formula_3 and formula_4-distortion eras. The cosmological recombination radiation (CRR) is a prime example within formula_7CDM that is created by photon injection from the recombining hydrogen and helium plasma around redshifts of formula_10.
History.
The first considerations of spectral distortions to the CMB go back to the early days of CMB cosmology starting with the seminal papers of Yakov B. Zeldovich and Rashid Sunyaev in 1969 and 1970. These works appeared just a few years after the first detection of the CMB by Arno Allan Penzias and Robert Woodrow Wilson and its interpretation as the echo of the Big Bang by Robert H. Dicke and his team in 1965. These findings constitute one of the most important pillars of Big Bang cosmology, which predicts the blackbody nature of the CMB. However, as shown by Zeldovich and Sunyaev, energy exchange with moving electrons can cause spectral distortions.
The pioneering analytical studies of Zeldovich and Sunyaev were later complemented by the numerical investigations of Illarionov and Sunyaev in the 1970s. These treated the thermalization problem including Compton scattering and the Bremsstrahlung process for a single release of energy. In 1982, the importance of double Compton emission as a source of photons at high redshifts was recognized by Danese and de Zotti. Modern considerations of CMB spectral distortions started with the works of Burigana, Danese and de Zotti and Hu, Silk and Scott in the early 1990s.
After COBE/FIRAS provided stringent limits on the CMB spectrum, essentially ruling out distortions at the level formula_11, interest in CMB spectral distortions decreased. In 2011, PIXIE was proposed to NASA as a mid-Ex satellite mission, providing the first strong motivation to revisit the theory of spectral distortions. Although no successor of COBE/FIRAS has been funded so far, this led to a renaissance of CMB spectral distortions, with numerous theoretical studies and the design of novel experimental concepts.
Thermalisation physics.
In the cosmological 'thermalization problem', three main eras are distinguished: the thermalization or temperature-era, the formula_3-era and the formula_4-era, each with slightly different physical conditions due to the change in the density and temperature of particles caused by the Hubble expansion.
Thermalization era.
In the very early stages of cosmic history (up until a few months after the Big Bang), photons and baryons are efficiently coupled by scattering processes and, therefore, are in full thermodynamic equilibrium. Energy that is injected into the medium is rapidly redistributed among the photons, mainly by Compton scattering, while the photon number density is adjusted by photon non-conserving processes, such as double Compton and thermal Bremsstrahlung. This allows the photon field to quickly relax back to a Planckian distribution, even if for a very short phase a spectral distortion appears. Observations today cannot tell the difference in this case, as there is no independent cosmological prediction for the CMB monopole temperature. This regime is frequently referred to as the thermalization or temperature era and ends at redshift formula_12.
"μ"-distortion era.
At redshifts between formula_13 and formula_14, efficient energy exchange through Compton scattering continues to establish kinetic equilibrium between matter and radiation, but photon number changing processes stop being efficient. Since the photon number density is conserved but the energy density is modified, photons gain an effective non-zero chemical potential, acquiring a Bose-Einstein distribution. This distinct type of distortion is called formula_3-distortion after the chemical potential known from standard thermodynamics. The value for the chemical potential can be estimated by combining the photon energy density and number density constraints from before and after the energy injection. This yields the well-known expression,
formula_15
where formula_16 determines the total energy that is injected into the CMB photon field. With respect to the equilibrium blackbody spectrum, the formula_3-distortion is characterized by a deficit of photons at low frequencies and an increment at high frequencies. The distortion changes sign at a frequency of formula_17, allowing us to distinguish it observationally from the formula_4-type distortion.
formula_3-distortion signals can be created by decaying particles, evaporating primordial black holes, primordial magnetic fields and other non-standard physics examples. Within formula_7CDM cosmology, the adiabatic cooling of matter and the dissipation of acoustic waves set up by inflation cause a formula_3-distortion with formula_18. This signal can be used as a powerful test of inflation, as it is sensitive to the amplitude of density fluctuations at scales corresponding to physical scales of formula_19 (i.e., dwarf galaxies). By combining COBE’s measurements of the large-scale CMB anisotropies with the formula_3-distortion constraint, the first limits on the small-scale power spectrum could be obtained well before direct measurements became possible.
"y"-distortion era.
At redshifts formula_20, also Compton scattering becomes inefficient. The plasma has a temperature of formula_21, such that CMB photons are boosted via non-relativistic Compton scattering, giving rise to a formula_4-distortion. Again, by considering the total energetics of the problem and using photon number conservation, one can obtain the estimate
formula_22
The name for the formula_4-distortion simply stems from the choice of dimensionless variables in the seminal paper of Zeldovich and Sunyaev, 1969. There, the energy injection caused by the hot electrons residing inside clusters of galaxies was considered and the associated effect is more commonly referred to as the thermal Sunyaev-Zeldovich (SZ) effect.
Like for the formula_3-distortion, in principle many non-standard physics examples can cause formula_4-type distortions. However, the largest contribution to the all-sky formula_4-distortion stems from the cumulative cluster SZ signal, which provides a way to constrain the amount of hot gas in the Universe. While at formula_0, the cosmic plasma on average has a low temperature, electrons inside galaxy clusters can reach temperatures of a few keV. In this case, the scattering electrons can have speeds of formula_23, such that relativistic corrections to the Compton process become relevant. These relativistic corrections carry information of electron temperatures which can be used as a measure for the cluster energetics.
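As a hypothetical illustration of these order-of-magnitude estimates, a fractional energy injection of formula_16 = 10⁻⁵ occurring during the formula_3-era would produce formula_3 ≈ 1.4 × 10⁻⁵, while the same injection during the formula_4-era would produce formula_4 ≈ 2.5 × 10⁻⁶.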
Beyond "μ" and "y" distortions.
The classical studies mainly considered energy release (i.e., heating) as a source of distortions. However, recent work has shown that richer signals can be created by direct photon injection and non-thermal electron populations, both processes that appear in connection with decaying or annihilating particles. Similarly, it was demonstrated that the transition between the formula_3 and formula_4-eras is more gradual and that the distortion shape is not simply given by a sum of formula_3- and formula_4. All these effects could allow us to differentiate observationally between a wide range of scenarios, as additional time-dependent information can be extracted.
Cosmological recombination radiation (CRR).
About 280,000 years after the Big Bang, electrons and protons became bound into electrically neutral atoms as the Universe expanded. In cosmology, this is known as recombination and precedes the decoupling of the CMB photons from matter, after which they free-stream throughout the Universe, around 380,000 years after the Big Bang. Within the energy levels of hydrogen and helium atoms, various interactions take place, both collisional and radiative. The line emission arising from these processes is injected into the CMB, showing as small distortions to the CMB blackbody commonly referred to as the cosmological recombination radiation (CRR). The specific spectral shape of this distortion is directly related to the redshift at which this emission takes place, freezing the distortion in time over the microwave frequency bands. Since the distortion signal arises from the hydrogen and two helium recombination eras, this gives us a unique probe of the pre-recombination Universe that allows us to peek behind the last scattering surface that we observe using the CMB anisotropies. It gives us a unique way to constrain the primordial amount of helium in the early Universe, before recombination, and measure the early expansion rate.
Experimental and observational challenges.
The expected Lambda-CDM (LCDM) distortion signals are small. The largest distortion, arising from the cumulative flux of all hot gas in the Universe, has an amplitude that is about one order of magnitude below the limits of COBE/FIRAS. While this is considered to be an ‘easy’ target, the cosmological recombination radiation (CRR), as the smallest expected signal, has an amplitude that is another factor of formula_24 smaller. All LCDM distortions are furthermore obscured by large Galactic and extragalactic foreground emissions (e.g., dust, synchrotron and free-free emission, cosmic infrared background), and for observations from the ground or balloons, atmospheric emission poses another hurdle to overcome.
A detection of the LCDM distortions therefore requires novel experimental approaches that provide unprecedented sensitivity, spectral coverage, control of systematics and the capabilities to accurately remove foregrounds. Building on the design of FIRAS and experience with ARCADE, this has led to several spectrometer concepts to observe from space (PIXIE, PRISM, PRISTINE, SuperPIXIE and Voyage2050), balloon (BISOU) and the ground (APSERa and Cosmo at Dome-C, TMS at Teide Observatory). These are all designed to reach important milestones towards a detection of CMB distortions. As an ultimate frontier, a full characterization and exploitation of the cosmological recombination signal could be achieved by using a coordinated international experimental campaign, potentially including an observatory on the moon.
In June 2021, the European Space Agency unveiled its plans for the future L-class missions as part of "Voyage 2050", with a chance for "high precision spectroscopy" in the new "early universe" part of its strategy, opening the door for spectral distortion telescopes in the future.
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "z<10^4"
},
{
"math_id": 1,
"text": "2.7255 K"
},
{
"math_id": 2,
"text": "z<2\\times10^6"
},
{
"math_id": 3,
"text": "\\mu"
},
{
"math_id": 4,
"text": "y"
},
{
"math_id": 5,
"text": "|\\mu|<9\\times 10^{-5}"
},
{
"math_id": 6,
"text": "|y|<1.5\\times 10^{-5}"
},
{
"math_id": 7,
"text": "\\Lambda"
},
{
"math_id": 8,
"text": "\\mu\\sim2\\times 10^{-8}"
},
{
"math_id": 9,
"text": "y\\sim{\\rm few}\\times10^{-6}"
},
{
"math_id": 10,
"text": "z\\sim10^3-10^4"
},
{
"math_id": 11,
"text": "\\tfrac{\\Delta I}{I} \\sim 10^{-5}-10^{-4}"
},
{
"math_id": 12,
"text": "z\\sim2\\times10^{6}"
},
{
"math_id": 13,
"text": "5\\times10^4"
},
{
"math_id": 14,
"text": "2\\times10^6"
},
{
"math_id": 15,
"text": "\\mu \\sim 1.4\\; \\frac{\\Delta \\rho}{\\rho},"
},
{
"math_id": 16,
"text": "\\tfrac{\\Delta \\rho}{\\rho}"
},
{
"math_id": 17,
"text": "\\nu\\sim130 {\\rm GHz}"
},
{
"math_id": 18,
"text": "\\mu\\sim2\\times10^{-8}"
},
{
"math_id": 19,
"text": "\\lambda\\sim0.6 \\,{\\rm kpc}"
},
{
"math_id": 20,
"text": "z\\lesssim5\\times10^4"
},
{
"math_id": 21,
"text": "T<10^5 {\\rm K}"
},
{
"math_id": 22,
"text": "y \\sim \\frac{1}{4}\\; \\frac{\\Delta \\rho}{\\rho}."
},
{
"math_id": 23,
"text": "v\\sim0.1 c"
},
{
"math_id": 24,
"text": "10^3"
}
] |
https://en.wikipedia.org/wiki?curid=67107896
|
671090
|
Cowan–Reines neutrino experiment
|
Experimental confirmation of neutrinos
The Cowan–Reines neutrino experiment was conducted by physicists Clyde Cowan and Frederick Reines in 1956. The experiment confirmed the existence of neutrinos. Neutrinos, subatomic particles with no electric charge and very small mass, had been conjectured to be an essential particle in beta decay processes in the 1930s. With neither mass nor charge, such particles appeared to be impossible to detect. The experiment exploited a huge flux of (then hypothetical) electron antineutrinos emanating from a nearby nuclear reactor and a detector consisting of large tanks of water. Neutrino interactions with the protons of the water were observed, verifying the existence and basic properties of this particle for the first time.
Background.
During the 1910s and 1920s, the observations of electrons from the nuclear beta decay showed that their energy had a continuous distribution. If the process involved only the atomic nucleus and the electron, the electron's energy would have a single, narrow peak, rather than a continuous energy spectrum. Only the resulting electron was observed, so its varying energy suggested that energy may not be conserved. This quandary and other factors led Wolfgang Pauli to attempt to resolve the issue by postulating the existence of the neutrino in 1930. If the fundamental principle of energy conservation was to be preserved, beta decay had to be a three-body, rather than a two-body, decay. Therefore, in addition to an electron, Pauli suggested that another particle was emitted from the atomic nucleus in beta decay. This particle, the neutrino, had very small mass and no electric charge; it was not observed, but it carried the missing energy.
Pauli's suggestion was developed into a proposed theory for beta decay by Enrico Fermi in 1933. The theory posits that the beta decay process consists of four fermions directly interacting with one another. By this interaction, the neutron decays directly to an electron, the conjectured
neutrino (later determined to be an antineutrino) and a proton. The theory, which proved to be remarkably successful, relied on the existence of the hypothetical neutrino. Fermi first submitted his "tentative" theory of beta decay to the journal "Nature", which rejected it "because it contained speculations too remote from reality to be of interest to the reader."
One problem with the neutrino conjecture and Fermi's theory was that the neutrino appeared to have such weak interactions with other matter that it would never be observed. In a 1934 paper, Rudolf Peierls and Hans Bethe calculated that neutrinos could easily pass through the Earth without interactions with any matter.
Potential for experiment.
By inverse beta decay, the predicted neutrino, more correctly an electron antineutrino (formula_0), should interact with a proton () to produce a neutron () and positron (formula_1),
formula_2
The chance of this reaction occurring was small. The probability for any given reaction to occur is proportional to its cross section. Cowan and Reines predicted a cross section for the reaction to be about . The usual unit for a cross section in nuclear physics is a barn, which is about 20 orders of magnitude larger.
Despite the low probability of the neutrino interaction, the signatures of the interaction are unique, making detection of the rare interactions possible. The positron, the antimatter counterpart of the electron, quickly interacts with any nearby electron, and they annihilate each other. The two resulting coincident gamma rays () are detectable. The neutron can be detected by its capture by an appropriate nucleus, releasing a third gamma ray. The coincidence of the positron annihilation and neutron capture events gives a unique signature of an antineutrino interaction.
A water molecule is composed of an oxygen and two hydrogen atoms, and most of the hydrogen atoms of water have a single proton for a nucleus. Those protons can serve as targets for antineutrinos, so that simple water can serve as a primary detecting material. The hydrogen atoms are so weakly bound in water that they can be viewed as free protons for the neutrino interaction. The interaction mechanism of neutrinos with heavier nuclei, those with several protons and neutrons, is more complicated, since the constituent protons are strongly bound within the nuclei.
Setup.
Given the small chance of interaction of a single neutrino with a proton, neutrinos could only be observed using a huge neutrino flux. Beginning in 1951, Cowan and Reines, both then scientists at Los Alamos, New Mexico, initially thought that neutrino bursts from the atomic weapons tests that were then occurring could provide the required flux. For a neutrino source, they proposed using an atomic bomb. Permission for this was obtained from the laboratory director, Norris Bradbury. The plan was to detonate a "20-kiloton nuclear bomb, comparable to that dropped on Hiroshima, Japan". The detector was proposed to be dropped at the moment of explosion into a hole 40 meters from the detonation site "to catch the flux at its maximum"; it was named "El Monstro". They eventually used a nuclear reactor as a source of neutrinos, as advised by Los Alamos physics division leader J.M.B. Kellogg. The reactor had a neutrino flux of neutrinos per second per square centimeter, far higher than any flux attainable from other radioactive sources. A detector consisting of two tanks of water was employed, offering a huge number of potential targets in the protons of the water.
At those rare instances when neutrinos interacted with protons in the water, neutrons and positrons were created. The two gamma rays created by positron annihilation were detected by sandwiching the water tanks between tanks filled with liquid scintillator. The scintillator material gives off flashes of light in response to the gamma rays, and these light flashes are detected by photomultiplier tubes.
The additional detection of the neutron from the neutrino interaction provided a second layer of certainty. Cowan and Reines detected the neutrons by dissolving cadmium chloride, CdCl2, in the tank. Cadmium is a highly effective neutron absorber and gives off a gamma ray when it absorbs a neutron.
n + 108Cd → 109mCd → 109Cd + γ
The arrangement was such that after a neutrino interaction event, the two gamma rays from the positron annihilation would be detected, followed by the gamma ray from the neutron absorption by cadmium several microseconds later.
The experiment that Cowan and Reines devised used two tanks with a total of about 200 liters of water with about 40 kg of dissolved CdCl2. The water tanks were sandwiched between three scintillator layers which contained 110 five-inch (127 mm) photomultiplier tubes.
Results.
In 1953, Cowan and Reines built a detector they dubbed "Herr Auge", "Mr. Eye" in German. They called the neutrino-searching experiment "Project Poltergeist", because of "the neutrino’s ghostly nature". A preliminary experiment was performed in 1953 at the Hanford Site in Washington state, but in late 1955 the experiment moved to the Savannah River Plant near Aiken, South Carolina. The Savannah River site had better shielding against cosmic rays. This shielded location was 11 m from the reactor and 12 m underground.
After months of data collection, the accumulated data showed about three neutrino interactions per hour in the detector. To be absolutely sure that they were seeing neutrino events from the detection scheme described above, Cowan and Reines shut down the reactor to show that there was a difference in the rate of detected events.
They had predicted a cross-section for the reaction to be about and their measured cross-section was . The results were published in the July 20, 1956 issue of Science.
Legacy.
Clyde Cowan died in 1974 at the age of 54. In 1995, Frederick Reines was honored with the Nobel Prize for his work on neutrino physics.
The basic strategy of employing massive detectors, often water based, for neutrino research was exploited by several subsequent experiments, including the Irvine–Michigan–Brookhaven detector, Kamiokande, the Sudbury Neutrino Observatory and the Homestake Experiment. The Homestake Experiment is a contemporary experiment which detected neutrinos from nuclear fusion in the solar core. Observatories such as these detected neutrino bursts from supernova SN 1987A in 1987, the birth
of neutrino astronomy. Through observations of solar neutrinos, the Sudbury Neutrino Observatory was able to demonstrate the process of neutrino oscillation. Neutrino oscillation shows that neutrinos are not massless, a profound development in particle physics.
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "\\bar{\\nu}_e"
},
{
"math_id": 1,
"text": "e^+"
},
{
"math_id": 2,
"text": "\\bar{\\nu}_e + p \\to n + e^+"
}
] |
https://en.wikipedia.org/wiki?curid=671090
|
6710903
|
Batting average on balls in play
|
Term in baseball sabermetrics
In baseball statistics, batting average on balls in play (abbreviated BABIP) is a measurement of how often batted balls result in hits, excluding home runs. It can be expressed as, "when you hit the ball and it’s not a home run, what’s your batting average?" The statistic is typically used to evaluate individual batters and individual pitchers.
Calculation.
BABIP is computed per the following equation, where H is hits, HR is home runs, AB is at bats, K is strikeouts, and SF is sacrifice flies.
formula_0
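An illustrative sketch (with made-up season statistics, not those of any real player) of applying the formula:
<syntaxhighlight lang="python">
# BABIP = (H - HR) / (AB - K - HR + SF), using the definitions above
def babip(h, hr, ab, k, sf):
    return (h - hr) / (ab - k - hr + sf)

# Hypothetical season: 180 hits, 30 home runs, 600 at bats, 120 strikeouts, 5 sacrifice flies
print(round(babip(h=180, hr=30, ab=600, k=120, sf=5), 3))  # 0.33
</syntaxhighlight>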
Effect.
As compared to batting average, which is simply hits divided by at bats, BABIP excludes home runs and strikeouts from consideration while treating sacrifice flies as hitless at bats.
In Major League Baseball (MLB), .300 is considered an average BABIP. Various factors can impact BABIP, such as a player's home ballpark; for batters, being speedy enough to reach base on infield hits; or, for pitchers, the quality of their team's defense.
Usage.
BABIP is commonly used as a red flag in sabermetric analysis, as a consistently high or low BABIP is hard to maintain—much more so for pitchers than hitters. Therefore, BABIP can be used to spot outlying seasons by pitchers. As with other statistical measures, those pitchers whose BABIPs are extremely high (bad) can often be expected to improve in the following season, and those pitchers whose BABIPs are extremely low (good) can often be expected to worsen in the following season.
While a pitcher's BABIP may vary from season to season, there are distinct differences between pitchers when looking at career BABIP figures.
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "BABIP = \\frac{H-HR}{AB-K-HR+SF}"
}
] |
https://en.wikipedia.org/wiki?curid=6710903
|
67112408
|
Empowerment (artificial intelligence)
|
Empowerment in the field of artificial intelligence formalises and quantifies (via information theory) the potential an agent perceives that it has to influence its environment. An agent which follows an empowerment maximising policy acts to maximise future options (typically up to some limited horizon). Empowerment can be used as a (pseudo) utility function that depends only on information gathered from the local environment to guide action, rather than seeking an externally imposed goal, and thus is a form of intrinsic motivation.
The empowerment formalism depends on a probabilistic model commonly used in artificial intelligence. An autonomous agent operates in the world by taking in sensory information and acting to change its state, or that of the environment, in a cycle of perceiving and acting known as the perception-action loop. Agent state and actions are modelled by random variables (formula_0) and time (formula_1). The choice of action depends on the current state, and the future state depends on the choice of action, thus the perception-action loop unrolled in time forms a causal bayesian network.
Definition.
Empowerment (formula_2) is defined as the channel capacity (formula_3) of the actuation channel of the agent, and is formalised as the maximal possible information flow between the actions of the agent and the effect of those actions some time later. Empowerment can be thought of as the future potential of the agent to affect its environment, as measured by its sensors.
formula_4
In a discrete time model, Empowerment can be computed for a given number of cycles into the future, which is referred to in the literature as 'n-step' empowerment.
formula_5
The unit of empowerment depends on the logarithm base. Base 2 is commonly used, in which case the unit is bits.
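A minimal sketch (assuming a toy, fully known discrete environment; not code from the cited literature) that computes 1-step empowerment as the capacity of the channel from actions to successor states, using the Blahut-Arimoto algorithm:
<syntaxhighlight lang="python">
import numpy as np

def one_step_empowerment(P, iters=200, eps=1e-300):
    """P[a, s2] = p(s2 | s, a) for a fixed current state s; each row sums to 1."""
    num_actions = P.shape[0]
    q = np.full(num_actions, 1.0 / num_actions)      # action distribution p(a), start uniform
    for _ in range(iters):
        p_next = q @ P                               # marginal p(s2)
        # Blahut-Arimoto update: q(a) proportional to q(a) * exp(KL(P(.|a) || p_next))
        kl = (P * np.log(P / (p_next + eps) + eps)).sum(axis=1)
        q = q * np.exp(kl)
        q /= q.sum()
    p_next = q @ P
    # mutual information I(A; S') under the capacity-achieving q, in bits
    return float((q[:, None] * P * np.log2(P / (p_next + eps) + eps)).sum())

# Two actions that deterministically reach two different states: empowerment = 1 bit
P = np.array([[1.0, 0.0],
              [0.0, 1.0]])
print(one_step_empowerment(P))   # ~1.0
</syntaxhighlight>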
Contextual Empowerment.
In general the choice of action (action distribution) that maximises empowerment varies from state to state. Knowing the empowerment of an agent in a specific state is useful, for example to construct an empowerment maximising policy. State-specific empowerment can be found using the more general formalism for 'contextual empowerment'. formula_3 is a random variable describing the context (e.g. state).
formula_6
Application.
Empowerment maximisation can be used as a pseudo-utility function to enable agents to exhibit intelligent behaviour without requiring the definition of external goals, for example balancing a pole in a cart-pole balancing scenario where no indication of the task is provided to the agent.
Empowerment has been applied in studies of collective behaviour and in continuous domains. As is the case with Bayesian methods in general, computation of empowerment becomes computationally expensive as the number of actions and time horizon extends, but approaches to improve efficiency have led to usage in real-time control. Empowerment has been used for intrinsically motivated reinforcement learning agents playing video games, and in the control of underwater vehicles.
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "S: s \\in \\mathcal{S}, A: a \\in \\mathcal{A}"
},
{
"math_id": 1,
"text": "t"
},
{
"math_id": 2,
"text": "\\mathfrak{E}"
},
{
"math_id": 3,
"text": "C"
},
{
"math_id": 4,
"text": "\n\n\\mathfrak{E} := C(A_t \\longrightarrow S_{t+1}) \\equiv \\max_{p(a_t)} I(A_t;S_{t+1})\n\n"
},
{
"math_id": 5,
"text": "\n\n\\mathfrak{E}(A^n_t \\longrightarrow S_{t+n}) = \\max_{p(a_t,...,a_{t+n-1})} I(A_t,...,A_{t+n-1};S_{t+n})\n\n"
},
{
"math_id": 6,
"text": "\n\n\\mathfrak{E}(A^n_t \\longrightarrow S_{t+n}{\\mid}C) = \\sum_{c{\\in}C} p(c) \\mathfrak{E}(A^n_t \\longrightarrow S_{t+n}{\\mid}C=c)\n\n"
}
] |
https://en.wikipedia.org/wiki?curid=67112408
|
67129260
|
Anson equation
|
In electrochemistry, the Anson equation defines the charge-time dependence for linear diffusion control in chronocoulometry.
The Anson equation is written as:
formula_0
where,
Q = charge in coulombs
n = number of electrons (to reduce/oxidize one molecule of analyte)
F = Faraday constant, 96485 C/mol
A = area of the (planar) electrode in cm2
C = concentration in mol/cm3;
D = diffusion coefficient in cm2/s
t = time in s.
This is related to the Cottrell equation via integration with respect to time (t), and similarly implies that the electrode is planar.
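An illustrative sketch (with made-up parameter values for a one-electron reduction at a planar electrode) of evaluating the equation numerically:
<syntaxhighlight lang="python">
# Q = n F A C sqrt(D t / pi), all quantities in the units listed above
import math

n = 1                    # electrons per molecule of analyte
F = 96485.0              # Faraday constant, C/mol
A = 0.05                 # electrode area, cm^2
C = 1e-6                 # concentration, mol/cm^3 (i.e. 1 mM)
D = 1e-5                 # diffusion coefficient, cm^2/s
t = 1.0                  # time, s

Q = n * F * A * C * math.sqrt(D * t / math.pi)   # charge in coulombs
print(Q)   # ~8.6e-6 C
</syntaxhighlight>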
|
[
{
"math_id": 0,
"text": "Q = nFACD^{1/2}\\pi^{-1/2}t^{1/2}"
}
] |
https://en.wikipedia.org/wiki?curid=67129260
|
6713437
|
Doubling time
|
Time required to double a quantity
The doubling time is the time it takes for a population to double in size/value. It is applied to population growth, inflation, resource extraction, consumption of goods, compound interest, the volume of malignant tumours, and many other things that tend to grow over time. When the relative growth rate (not the absolute growth rate) is constant, the quantity undergoes exponential growth and has a constant doubling time or period, which can be calculated directly from the growth rate.
This time can be calculated by dividing the natural logarithm of 2 by the exponent of growth, or approximated by dividing 70 by the percentage growth rate (more roughly but roundly, dividing 72; see the rule of 72 for details and derivations of this formula).
The doubling time is a characteristic unit (a natural unit of scale) for the exponential growth equation, and its converse for exponential decay is the half-life.
As an example, Canada's net population growth was 2.7 percent in the year 2022; dividing 72 by 2.7 gives an approximate doubling time of about 27 years. Thus, if that growth rate were to remain constant, Canada's population would double from its 2023 figure of about 39 million to about 78 million by 2050.
History.
The notion of doubling time dates to interest on loans in Babylonian mathematics. Clay tablets from circa 2000 BCE include the exercise "Given an interest rate of 1/60 per month (no compounding), compute the doubling time." This yields an annual interest rate of 12/60 = 20%, and hence a doubling time of 100% growth/20% growth per year = 5 years. Further, repaying double the initial amount of a loan, after a fixed time, was common commercial practice of the period: a common Assyrian loan of 1900 BCE consisted of loaning 2 minas of gold, getting back 4 in five years, and an Egyptian proverb of the time was "If wealth is placed where it bears interest, it comes back to you redoubled."
Examination.
Examining the doubling time can give a more intuitive sense of the long-term impact of growth than simply viewing the percentage growth rate.
For a constant growth rate of "r" % within time "t", the formula for the doubling time "T""d" is given by
formula_0
Some doubling times calculated with this formula are shown in this table.
Simple doubling time formula:
formula_1
where "N"("t") is the quantity at time "t", "N"0 is the initial quantity, and "T""d" is the doubling time.
For example, with an annual growth rate of 4.8% the doubling time is 14.78 years, and a doubling time of 10 years corresponds to a growth rate between 7% and 7.5% (actually about 7.18%).
When applied to the constant growth in consumption of a resource, the total amount consumed in one doubling period equals the total amount consumed in all previous periods. This enabled U.S. President Jimmy Carter to note in a speech in 1977 that in each of the previous two decades the world had used more oil than in all of previous history (The roughly exponential growth in world oil consumption between 1950 and 1970 had a doubling period of under a decade).
Given two measurements of a growing quantity, "q"1 at time "t"1 and "q"2 at time "t"2, and assuming a constant growth rate, the doubling time can be calculated as
formula_2
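A minimal sketch (with illustrative numbers) of both calculations:
<syntaxhighlight lang="python">
import math

# Doubling time from a constant percentage growth rate r (per period)
def doubling_time_from_rate(r_percent):
    return math.log(2) / math.log(1 + r_percent / 100)

# Doubling time from two measurements q1 at time t1 and q2 at time t2
def doubling_time_from_measurements(q1, t1, q2, t2):
    return (t2 - t1) * math.log(2) / math.log(q2 / q1)

print(doubling_time_from_rate(2.7))                    # ~26.0 (the rule-of-72 estimate is ~26.7)
print(doubling_time_from_measurements(50, 0, 100, 26)) # 26.0: the quantity doubled over 26 periods
</syntaxhighlight>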
Related concepts.
The equivalent concept to "doubling time" for a material undergoing a constant negative relative growth rate or exponential decay is the half-life.
The equivalent concept in base-"e" is "e"-folding.
Cell culture doubling time.
Cell doubling time can be calculated in the following way using the growth rate (the amount of doubling in one unit of time).
Growth rate:
formula_3
or
formula_4
where formula_5 is the number of cells at time formula_8, formula_6 is the initial number of cells, formula_7 is the growth rate, and formula_8 is the elapsed time.
Doubling time:
formula_9
The following is the known doubling time for the following cells:
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": " T_{d} = t \\frac{\\ln(2)}{\\ln(1+\\frac{r}{100})} \\approx t \\frac{70}{r}"
},
{
"math_id": 1,
"text": "N(t) = N_0 2^{t/T_d}"
},
{
"math_id": 2,
"text": " T_{d} = (t_{2} - t_{1}) \\cdot \\frac{\\ln(2)}{\\ln(\\frac{q_{2}}{q_{1}})}."
},
{
"math_id": 3,
"text": "N(t) = N_0 e^{rt}"
},
{
"math_id": 4,
"text": "r = \\frac{\\ln\\left(N(t)/N_0\\right)}{t}"
},
{
"math_id": 5,
"text": "N(t)"
},
{
"math_id": 6,
"text": "N_0"
},
{
"math_id": 7,
"text": "r"
},
{
"math_id": 8,
"text": "t"
},
{
"math_id": 9,
"text": "\\text{doubling time} = \\frac{\\ln(2)}{\\text{growth rate}}"
}
] |
https://en.wikipedia.org/wiki?curid=6713437
|
67137722
|
James Cogdell
|
James Wesley Cogdell (born 22 September 1953) is an American mathematician.
Education and career.
He graduated from Yale University in 1977 with a bachelor's degree and in 1981 with a Ph.D. His doctoral dissertation "Arithmetic Quotients of the Complex 2-Ball and Modular Forms of Nebentypus" was supervised by Ilya Piatetski-Shapiro. Cogdell was a postdoc at the University of Maryland and the University of California, Los Angeles. He was from 1982 to 1988 an assistant professor at Rutgers University. At Oklahoma State University he was from 1987 to 1988 assistant professor, from 1988 to 1994 an associate professor, and from 1994 to 2004 a full professor (from 1999 as Southwestern Bell Professor, from 2000 as Regents Professor, and from 2003 as Vaughan Foundation Professor). In 2004 he became a professor at Ohio State University.
In autumn 1983 and for the academic year 1999–2000 he was at the Institute for Advanced Study. He has held visiting positions at Hebrew University of Jerusalem, at the University of Iowa, at Fields Institute, and at the Erwin Schrödinger International Institute for Mathematical Physics (where he gave the 2009 Erwin Schrödinger Lecture).
Cogdell works on L-functions, automorphic forms (within the context of the Langlands program), and analytic number theory. In collaboration with Piatetski-Shapiro, he proved converse theorems for L-functions for the general linear groups formula_0. The goal is to characterize the L-functions that originate from automorphic forms. For formula_1 this was solved by Hervé Jacquet and Robert Langlands and for formula_2 by Jacquet, Piatetski-Shapiro and Joseph Shalika. The problem goes back to Erich Hecke's characterization of the Dirichlet series that come from modular forms.
In 2002 Cogdell was, with Piatetski-Shapiro, an Invited Speaker with talk "Converse theorems, functoriality and applications to number theory" at the International Congress of Mathematicians in Beijing. He was an editor, with Simon Gindikin and Peter Sarnak, for "Selected Works of Ilya Piatetski-Shapiro" (2000, AMS).
Cogdell was elected in 2012 a Fellow of the American Mathematical Society and in 2016 a Fellow of the American Association for the Advancement of Science.
|
[
{
"math_id": 0,
"text": "GL_n"
},
{
"math_id": 1,
"text": "GL_2"
},
{
"math_id": 2,
"text": "GL_3"
},
{
"math_id": 3,
"text": "L"
}
] |
https://en.wikipedia.org/wiki?curid=67137722
|
67150624
|
Online fair division
|
Fair division class using unique allocation methods
Online fair division is a class of fair division problems in which the resources, or the people to whom they should be allocated, or both, are not all available when the allocation decision is made. Some situations in which not all resources are available include:
Some situations in which not all participants are available include:
The online nature of the problem requires different techniques and fairness criteria than in the classic, offline fair division.
Online arrival of people.
The party cake-cutting problem.
Walsh studies an online variant of fair cake-cutting, in which agents arrive and depart during the division process, like in a party. Well-known fair division procedures like divide and choose and the Dubins-Spanier moving-knife procedure can be adapted to this setting. They guarantee online variants of proportionality and envy-freeness. The online version of divide-and-choose is more robust to collusion, and has better empirical performance.
The sequential fair allocation problem.
Sinclair, Jain, Bannerjee and Yu study allocation of divisible resources when individuals arrive randomly over time. They present an algorithm that attains the optimal fairness-efficiency threshold.
The secretive agent problem.
Several authors studied fair division problems in which one agent is "secretive", i.e., unavailable during the division process. When this agent arrives, he is allowed to choose any part of the resource, and the remaining "n"-1 parts should be divided among the remaining "n"-1 agents such that the division is fair. Note that divide and choose satisfies these requirements for "n"=2 agents, but extending this to 3 or more agents is non-trivial. The following extensions are known:
The cake redivision problem.
The cake redivision problem is a variant of fair cake-cutting in which the cake is already divided in an unfair way (e.g. among a subset of the agents), and it should be re-divided in a fair way (among all the agents) while letting the incumbent owners keep a substantial fraction of their present value. The model problem is land reform.
Online arrival of resources.
The food bank problem.
The food bank problem is an online variant of fair allocation of indivisible goods. Each time, a single item arrives; each agent declares his/her value for this item; and the mechanism should decide which of the agents should receive it. The model application is a central food bank, which receives food donations and has to allocate each donation to one of the charities who want it. The donations are consumed immediately, and it is not known what donations are going to come next, so the decision must be made based only on the previous donations.
Binary valuations.
Working with Foodbank Australia, Aleksandrov, Aziz, Gaspers and Walsh have initiated the study of the food bank problem when all agents have binary valuations {0,1}, that is, for each arriving item, every agent states whether he likes the item or not. The mechanism should decide which of the agents who like the item should receive it. They study two simple mechanisms for this setting:
Additive valuations.
In a more general case of the food bank problem, agents can have additive valuations, which are normalized to [0,1].
Due to the online nature of the problem, it may be impossible to attain some fairness and efficiency guarantees that are possible in the offline setting. In particular, Kahana and Hazon prove that no online algorithm always finds a PROP1 (proportional up to at most one good) allocation, even for two agents with additive valuations. Moreover, no online algorithm always finds any positive approximation of RRS (round-robin share).
Benade, Kazachkov, Procaccia and Psomas study another fairness criterion - envy-freeness. Define the "envy" of agent "i" at agent "j" as the amount by which "i" believes that "j"'s bundle is better, that is, formula_0. The "max-envy" of an allocation is the maximum of the envy among all ordered agent pairs. Suppose the values of all items are normalized to [0,1]. Then, in the offline setting, it is easy to attain an allocation in which the max-envy is at most 1, for example, by the round-robin item allocation (this condition is called EF1). However, in the online setting, the envy might grow with the number of items ("T"). Therefore, instead of EF1, they aim to attain "vanishing envy—"the expected value of the max-envy of the allocation of "T" items should be sublinear in "T" (assuming the value of every item is between 0 and 1). They show that:
Jiang, Kulkarni and Singla improve the bound for the case of "n"=2 agents, when the values are random (rather than adversarial). They reduce the problem to the problem of Online Stripe Discrepancy, which is a special case of discrepancy of permutations, with two permutations and online item arrival. They show that their algorithm for Online Stripe Discrepancy attains envy in formula_3, for some universal constant "c", with high probability (that depends on "c"). Their algorithm even bounds a stronger notion of envy, which they call "ordinal envy": it is the worst possible cardinal envy that is consistent with the item ranking.
Zeng and Psomas study the trade-off between efficiency and fairness under five adversary models, from weak to strong. Below, "vit" denotes the value of the item arriving at time "t" to agent "i".
For adversary 3 (hence also 2 and 1), they show an allocation strategy that guarantees, to each pair of agents, either EF1, or EF with high probability, and in addition, guarantees ex-post Pareto efficiency. They show that the "EF1 or EF w.h.p." guarantee cannot be improved even for adversary 1 (hence also for 2 and 3). For adversary 4 (hence also 5), they show that every algorithm attaining vanishing envy can be at most 1/"n" ex-ante Pareto-efficient.
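As a concrete sketch of the max-envy measure defined above (with made-up additive valuations, not taken from any of the cited papers):
<syntaxhighlight lang="python">
import numpy as np

# values[i][g] = agent i's (additive) value for item g, normalized to [0, 1]
values = np.array([[0.9, 0.7, 0.6],
                   [0.5, 0.8, 0.4]])
bundles = [[0], [1, 2]]   # items currently held by agent 0 and agent 1

def max_envy(values, bundles):
    total = lambda i, b: sum(values[i][g] for g in b)
    n = len(bundles)
    return max(max(0.0, total(i, bundles[j]) - total(i, bundles[i]))
               for i in range(n) for j in range(n) if i != j)

# Agent 0 values its own bundle at 0.9 but agent 1's bundle at 0.7 + 0.6 = 1.3
print(max_envy(values, bundles))   # ~0.4
</syntaxhighlight>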
The costly reallocation problem.
In some cases, items that were previously allocated may be reallocated, but reallocation is costly, so the number of adjustments should be as small as possible. An example is the allocation of expensive scientific equipment among different university departments. Each piece of equipment is allocated as soon as it arrives, but some previously allocated equipment may be reallocated in order to attain a fairer overall allocation.
He, Procaccia and Psomas show that, with two agents, algorithms that are informed about values of future items can attain EF1 without any adjustments, whereas uninformed algorithms require Θ("T") adjustments. With three or more agents, even informed algorithms must use Ω("T") adjustments, and there is an uninformed algorithm that attains EF1 with O("T"3/2) adjustments.
Uncertain supply.
In many fair division problems, such as production of energy from solar cells, the exact amount of available resource may not be known at the time the allocation is decided. Buermann, Gerding and Rastegari study fair division of a "homogeneous" divisible resource, such as electricity, where the available amount is given by a probability distribution, and the agents' valuations are not linear (for example, each agent has a cap on the amount of the resource he can use; above this cap, his utility does not increase by getting more of the resource). They compare two fairness criteria: ex-post envy-freeness and ex-ante envy-freeness. The latter criterion is weaker (since envy-freeness holds only in expectation), but it allows a higher social welfare. The price of ex-ante envy-freeness is still high: it is at least Ω("n"), where "n" is the number of agents. Moreover, maximizing ex-ante social welfare subject to ex-ante envy-freeness is strongly NP-hard, but there is an integer program to calculate the optimal ex-ante envy-free allocation for a special class of valuation functions - linear functions with a saturation cap.
Uncertain demand.
In many fair division problems, there are agents or groups of agents whose demand for resources is not known when the resources are allocated. For example, suppose there are two villages who are susceptible to power outages. Each village has a different probability distribution over storms:
The government has two generators, each of which can supply electricity to a single house. It has to decide how to allocate the generators between the villages. Two important considerations are "utilization" and "fairness":
Donahue and Kleinberg prove upper and lower bounds on the price of fairness—the maximum possible utilization, divided by the maximum utilization of a fair allocation. The bounds are weak in general, but stronger bounds are possible for some specific probability distributions that are commonly used to model demand.
Other applications with uncertain demands are allocation of orders in service supply chains, allocation of aircraft to routes, allocation of doctors to surgeries, and more.
Uncertain value.
Morgan studies a partnership dissolution setting, where the partnership assets have the same value for all partners, but this value is not known. Each partner has a noisy signal about the value, but the signals are different. He shows that Divide and choose is not fair - it favors the chooser. He presents another mechanism that can be considered fair in this setting.
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "\\max(0, V_i(X_j)-V_i(X_i))"
},
{
"math_id": 1,
"text": "O(\\sqrt{T/n})"
},
{
"math_id": 2,
"text": "O(\\sqrt{T/(m n)})"
},
{
"math_id": 3,
"text": "O(T^{c / \\log\\log T})"
}
] |
https://en.wikipedia.org/wiki?curid=67150624
|
67170495
|
Tsou plot
|
The Tsou plot is a graphical method of determining the number and nature of the functional groups on an enzyme that are necessary for its catalytic activity.
Theory of Tsou's method.
Tsou Chen-Lu analysed the relationship between the functional groups of enzymes that are necessary for their biological activity and the loss of activity that occurs when the enzyme is treated with an irreversible inhibitor. Suppose now that there are formula_0 groups on each monomeric enzyme molecule that react equally fast with the modifying agent, and formula_1 of these are essential for catalytic activity. After modification of an average of formula_2 groups on each molecule, the probability that any particular group has been modified is formula_3 and the probability that it remains unmodified is formula_4. For the enzyme molecule to retain activity, all of its formula_1 essential groups must remain unmodified, for which the probability is formula_5. The fraction formula_6 of activity remaining after modification of formula_2 groups per molecule must be
formula_7
and so
formula_8
This means that a plot of formula_9 against formula_2 should be a straight line. As the value of formula_1 is initially unknown one must draw plots with values 1, 2, 3, etc. to see which one gives a straight line.
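A minimal sketch (simulated data for a hypothetical enzyme with 5 equally reactive groups, 2 of them essential) showing how such plots identify the number of essential groups:
<syntaxhighlight lang="python">
import numpy as np
import matplotlib.pyplot as plt

n, n_essential = 5, 2                        # hypothetical numbers of groups
n_modified = np.linspace(0, 4, 9)            # average number of groups modified per molecule
f = (1 - n_modified / n) ** n_essential      # fraction of activity remaining

# Tsou plot: f**(1/i) against n_modified for trial values i of the essential-group count
for i in (1, 2, 3):
    plt.plot(n_modified, f ** (1.0 / i), marker="o", label=f"i = {i}")
plt.xlabel("groups modified per molecule")
plt.ylabel("fraction of activity remaining, raised to 1/i")
plt.legend()
plt.show()                                   # only the i = 2 curve is a straight line
</syntaxhighlight>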
There are various complications with this analysis, as discussed in the original paper, and, more recently in a textbook.
Experimental applications.
Despite the possible objections, the Tsou plot gave clear results when applied by Paterson and Knowles to inactivation of pepsin by trimethyloxonium tetrafluoroborate (Meerwein's reagent), a reagent that modifies carboxyl residues in proteins. They were able to deduce from this experiment that three non-essential groups are modified without loss of activity, followed by two essential groups — two because assuming formula_10 yielded a straight line in the plot, whereas values of 1 and 3 yielded curves, with a total of 13 residues modified, as illustrated in the figure.
Tsou's plot has also given good results with other systems, such as the type I dehydroquinase from "Salmonella typhi", for which modification of just one essential group by diethyl pyrocarbonate was sufficient to inactivate the enzyme.
Alternative approach to the same question.
A little before Tsou published his paper, William Ray and Daniel Koshland had described a different way of investigating the number and nature of groups on an enzyme essential for activity. Their method depends on kinetic measurements, and therefore cannot be used in cases where the modification is too fast for such measurements, such as the case of pepsin discussed above, but it complements Tsou's approach in useful ways.
Suppose that an enzyme has two groups formula_11 and formula_12 that are both essential for the catalytic activity, so if either is lost the catalytic activity is also lost. If formula_11 is converted to an inactive form in a first-order reaction with rate constant formula_13, and formula_12 is inactivated in a first-order reaction with a different rate constant formula_14, then the remaining activity formula_15 after time formula_16 obeys an equation of the following form:
formula_17
in which formula_18 is the value of formula_15 when formula_19, and formula_20 is the observed first-order rate constant for inactivation, the sum of the rate constants for the separate reactions. The equation can be extended in an obvious way to the case in which more than two groups are essential. Ray and Koshland also described the properties to be observed when not all of the modified groups are essential.
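The following short Python sketch illustrates this kinetic picture with assumed, purely illustrative rate constants: simulating the loss of activity for two essential groups and fitting a single exponential recovers the observed inactivation constant as the sum of the individual rate constants.

```python
import numpy as np

# Illustrative sketch of the Ray–Koshland kinetic picture for two essential groups
# (the rate constants and time points are assumed values, not measured data).
k1, k2 = 0.05, 0.12        # first-order inactivation rate constants (per minute), assumed
A0 = 1.0                   # initial activity
t = np.linspace(0.0, 30.0, 31)

# Remaining activity when both groups must survive: a product of two exponentials
A = A0 * np.exp(-k1 * t) * np.exp(-k2 * t)

# Fitting ln(A) against t with a straight line recovers k_inactivation = k1 + k2
slope, intercept = np.polyfit(t, np.log(A), 1)
print("fitted k_inactivation:", -slope)    # approximately 0.17
print("k1 + k2:", k1 + k2)
```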
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "n"
},
{
"math_id": 1,
"text": "n_\\mathrm{essential}"
},
{
"math_id": 2,
"text": "n_\\mathrm{modified}"
},
{
"math_id": 3,
"text": "n_\\mathrm{modified}/n"
},
{
"math_id": 4,
"text": "1 - n_\\mathrm{modified}/n"
},
{
"math_id": 5,
"text": "\\left(1 - n_\\mathrm{modified}/n\\right)^{n_\\mathrm{essential}}"
},
{
"math_id": 6,
"text": "f"
},
{
"math_id": 7,
"text": "f = \\left( 1 - \\dfrac{n_ \\mathrm{modified}}{n} \\right) ^ {n_\\mathrm{essential}}"
},
{
"math_id": 8,
"text": "f^{1/n_\\mathrm{essential}} = 1 - \\dfrac{n_ \\mathrm{modified}}{n}"
},
{
"math_id": 9,
"text": "f ^ {1/n_\\mathrm{essential}}"
},
{
"math_id": 10,
"text": "n_\\mathrm{essential} = 2"
},
{
"math_id": 11,
"text": "\\mathrm{G_1}"
},
{
"math_id": 12,
"text": "\\mathrm{G_2}"
},
{
"math_id": 13,
"text": "k_1"
},
{
"math_id": 14,
"text": "k_2"
},
{
"math_id": 15,
"text": "A"
},
{
"math_id": 16,
"text": "t"
},
{
"math_id": 17,
"text": "A = A_0 \\exp{(-k_1t)} \\exp{(-k_2t)} = A_0 \\exp{[-(k_1 + k_2)t]} = A_0 \\exp{(k_\\mathrm{inactivation}t)}"
},
{
"math_id": 18,
"text": "A_0"
},
{
"math_id": 19,
"text": "t = 0"
},
{
"math_id": 20,
"text": "k_\\mathrm{inactivation} = k_1 + k_2"
}
] |
https://en.wikipedia.org/wiki?curid=67170495
|
671711
|
Hilbert–Smith conjecture
|
In mathematics, the Hilbert–Smith conjecture is concerned with the transformation groups of manifolds, and in particular with the limitations on topological groups "G" that can act effectively (faithfully) on a (topological) manifold "M". Restricting to groups "G" which are locally compact and have a continuous, faithful group action on "M", the conjecture states that "G" must be a Lie group.
Because of known structural results on "G", it is enough to deal with the case where "G" is the additive group formula_0 of p-adic integers, for some prime number "p". An equivalent form of the conjecture is that formula_0 has no faithful group action on a topological manifold.
The conjecture is named after David Hilbert and the American topologist Paul A. Smith. It is considered by some to be a better formulation of Hilbert's fifth problem than the characterisation, in the category of topological groups, of the Lie groups often cited as a solution.
In 1997, Dušan Repovš and Evgenij Ščepin proved the Hilbert–Smith conjecture for groups acting by Lipschitz maps on a Riemannian manifold using covering, fractal, and cohomological dimension theory.
In 1999, Gaven Martin extended their dimension-theoretic argument to quasiconformal actions on a Riemannian manifold and gave applications concerning unique analytic continuation for Beltrami systems.
In 2013, John Pardon proved the three-dimensional case of the Hilbert–Smith conjecture.
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "\\Z_p"
}
] |
https://en.wikipedia.org/wiki?curid=671711
|
671768
|
Faddeev–Popov ghost
|
Type of unphysical field in quantum field theory which provides mathematical consistency
In physics, Faddeev–Popov ghosts (also called Faddeev–Popov gauge ghosts or Faddeev–Popov ghost fields) are extraneous fields which are introduced into gauge quantum field theories to maintain the consistency of the path integral formulation. They are named after Ludvig Faddeev and Victor Popov.
A more general meaning of the word "ghost" in theoretical physics is discussed in Ghost (physics).
Overcounting in Feynman path integrals.
The necessity for Faddeev–Popov ghosts follows from the requirement that quantum field theories yield unambiguous, non-singular solutions. This is not possible in the path integral formulation when a gauge symmetry is present, since there is no procedure for selecting among physically equivalent solutions related by a gauge transformation. The path integral overcounts field configurations corresponding to the same physical state; this overcounting introduces a factor into the measure that prevents various results from being obtained directly from the action.
Faddeev–Popov procedure.
It is possible, however, to modify the action, such that methods such as Feynman diagrams will be applicable by adding "ghost fields" which break the gauge symmetry. The ghost fields do not correspond to any real particles in external states: they appear as virtual particles in Feynman diagrams – or as the "absence" of some gauge configurations. However, they are a necessary computational tool to preserve unitarity.
The exact form or formulation of ghosts is dependent on the particular gauge chosen, although the same physical results must be obtained with all gauges since the gauge one chooses to carry out calculations is an arbitrary choice. The Feynman–'t Hooft gauge is usually the simplest gauge for this purpose, and is assumed for the rest of this article.
Consider for example non-Abelian gauge theory with
formula_0
The integral needs to be constrained by gauge-fixing, imposing the condition formula_1, so that it runs only over physically distinct configurations. Following Faddeev and Popov, this constraint can be applied by inserting
formula_2
into the integral. formula_3 denotes the gauge-fixed field.
Spin–statistics relation violated.
The Faddeev–Popov ghosts violate the spin–statistics relation, which is another reason why they are often regarded as "non-physical" particles.
For example, in Yang–Mills theories (such as quantum chromodynamics) the ghosts are complex scalar fields (spin 0), but they anti-commute (like fermions).
In general, anti-commuting ghosts are associated with bosonic symmetries, while commuting ghosts are associated with fermionic symmetries.
Gauge fields and associated ghost fields.
Every gauge field has an associated ghost, and where the gauge field acquires a mass via the Higgs mechanism, the associated ghost field acquires the same mass (in the Feynman–'t Hooft gauge only, not true for other gauges).
Appearance in Feynman diagrams.
In Feynman diagrams, the ghosts appear as closed loops wholly composed of 3-vertices, attached to the rest of the diagram via a gauge particle at each 3-vertex. Their contribution to the S-matrix is exactly cancelled (in the Feynman–'t Hooft gauge) by a contribution from a similar loop of gauge particles with only 3-vertex couplings or gauge attachments to the rest of the diagram. (A loop of gauge particles not wholly composed of 3-vertex couplings is not cancelled by ghosts.) The opposite sign of the contribution of the ghost and gauge loops is due to them having opposite fermionic/bosonic natures. (Closed fermion loops have an extra −1 associated with them; bosonic loops don't.)
Ghost field Lagrangian.
The Lagrangian for the ghost fields formula_4 in Yang–Mills theories (where formula_5 is an index in the adjoint representation of the gauge group) is given by
formula_6
The first term is a kinetic term like for regular complex scalar fields, and the second term describes the interaction with the gauge fields as well as the Higgs field. Note that in "abelian" gauge theories (such as quantum electrodynamics) the ghosts do not have any effect since the structure constants formula_7 vanish. Consequently, the ghost particles do not interact with abelian gauge fields.
Footnotes.
<templatestyles src="Reflist/styles.css" />
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": " \n\\int \\mathcal{D}[A] \\exp i \\int \\mathrm d^4 x \\left ( - \\frac{1}{4} F^a_{\\mu \\nu} F^{a \\mu \\nu } \\right ).\n"
},
{
"math_id": 1,
"text": " G(A) = 0 "
},
{
"math_id": 2,
"text": " \n1 = \\int \\mathcal{D}[\\alpha (x) ] \\delta (G(A^{\\alpha })) \\mathrm{det} \\frac{\\delta G(A^{\\alpha} )}{\\delta \\alpha }\n"
},
{
"math_id": 3,
"text": " A^{\\alpha } "
},
{
"math_id": 4,
"text": "c^a(x)\\,"
},
{
"math_id": 5,
"text": "a"
},
{
"math_id": 6,
"text": "\n\\mathcal{L}_{\\text{ghost}}=\\partial_{\\mu}\\bar{c}^{a}\\partial^{\\mu}c^{a}+gf^{abc}\\left(\\partial^{\\mu}\\bar{c}^{a}\\right)A_{\\mu}^{b}c^{c}\\;.\n"
},
{
"math_id": 7,
"text": "f^{abc} = 0"
}
] |
https://en.wikipedia.org/wiki?curid=671768
|
671772
|
Nilpotent operator
|
In operator theory, a bounded operator "T" on a Banach space is said to be nilpotent if "Tn" = 0 for some positive integer "n". It is said to be quasinilpotent or topologically nilpotent if its spectrum "σ"("T") = {0}.
Examples.
In the finite-dimensional case, i.e. when "T" is a square matrix (Nilpotent matrix) with complex entries, "σ"("T") = {0} if and only if
"T" is similar to a matrix whose only nonzero entries are on the superdiagonal(this fact is used to prove the existence of Jordan canonical form). In turn this is equivalent to "Tn" = 0 for some "n". Therefore, for matrices, quasinilpotency coincides with nilpotency.
This is no longer true in the infinite-dimensional case. Consider the Volterra operator, defined as follows. Take the unit square "X" = [0,1] × [0,1] ⊂ R2, with the Lebesgue measure "m". On "X", define the kernel function "K" by
formula_0
The Volterra operator is the corresponding integral operator "T" on the Hilbert space "L"2(0,1) given by
formula_1
The operator "T" is not nilpotent: take "f" to be the function that is 1 everywhere and direct calculation shows that
"Tn f" ≠ 0 (in the sense of "L"2) for all "n". However, "T" is quasinilpotent. First notice that "K" is in "L"2("X", "m"), therefore "T" is compact. By the spectral properties of compact operators, any nonzero "λ" in "σ"("T") is an eigenvalue. But it can be shown that "T" has no nonzero eigenvalues, therefore "T" is quasinilpotent.
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "K(x,y) =\n\\left\\{\n \\begin{matrix}\n 1, & \\mbox{if} \\; x \\geq y\\\\ \n 0, & \\mbox{otherwise}. \n \\end{matrix}\n\\right.\n"
},
{
"math_id": 1,
"text": "T f(x) = \\int_0 ^1 K(x,y) f(y) dy."
}
] |
https://en.wikipedia.org/wiki?curid=671772
|
671804
|
Lattice gauge theory
|
Theory of quantum gauge fields on a lattice
In physics, lattice gauge theory is the study of gauge theories on a spacetime that has been discretized into a lattice.
Gauge theories are important in particle physics, and include the prevailing theories of elementary particles: quantum electrodynamics, quantum chromodynamics (QCD) and particle physics' Standard Model. Non-perturbative gauge theory calculations in continuous spacetime formally involve evaluating an infinite-dimensional path integral, which is computationally intractable. By working on a discrete spacetime, the path integral becomes finite-dimensional, and can be evaluated by stochastic simulation techniques such as the Monte Carlo method. When the size of the lattice is taken infinitely large and its sites infinitesimally close to each other, the continuum gauge theory is recovered.
Basics.
In lattice gauge theory, the spacetime is Wick rotated into Euclidean space and discretized into a lattice with sites separated by distance formula_0 and connected by links. In the most commonly considered cases, such as lattice QCD, fermion fields are defined at lattice sites (which leads to fermion doubling), while the gauge fields are defined on the links. That is, an element "U" of the compact Lie group "G" (not the algebra) is assigned to each link. Hence, to simulate QCD with Lie group SU(3), a 3×3 special unitary matrix is defined on each link. The link is assigned an orientation, with the inverse element corresponding to the same link with the opposite orientation. Each node is given a value in formula_1 (a color 3-vector, the space on which the fundamental representation of SU(3) acts), a bispinor (Dirac 4-spinor), an "nf" vector, and a Grassmann variable.
Thus, the composition of links' SU(3) elements along a path (i.e. the ordered multiplication of their matrices) approximates a path-ordered exponential (geometric integral), from which Wilson loop values can be calculated for closed paths.
Yang–Mills action.
The Yang–Mills action is written on the lattice using Wilson loops (named after Kenneth G. Wilson), so that the limit formula_2 formally reproduces the original continuum action. Given a faithful irreducible representation ρ of "G", the lattice Yang–Mills action, known as the Wilson action, is the sum over all lattice sites of the (real component of the) trace over the "n" links "e"1, ..., "e"n in the Wilson loop,
formula_3
Here, χ is the character. If ρ is a real (or pseudoreal) representation, taking the real component is redundant, because even if the orientation of a Wilson loop is flipped, its contribution to the action remains unchanged.
There are many possible Wilson actions, depending on which Wilson loops are used in the action. The simplest Wilson action uses only the 1×1 Wilson loop, and differs from the continuum action by "lattice artifacts" proportional to the small lattice spacing formula_0. By using more complicated Wilson loops to construct "improved actions", lattice artifacts can be reduced to be proportional to formula_4, making computations more accurate.
Measurements and calculations.
Quantities such as particle masses are stochastically calculated using techniques such as the Monte Carlo method. Gauge field configurations are generated with probabilities proportional to formula_5, where formula_6 is the lattice action and formula_7 is related to the lattice spacing formula_0. The quantity of interest is calculated for each configuration, and averaged. Calculations are often repeated at different lattice spacings formula_0 so that the result can be extrapolated to the continuum, formula_2.
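As a minimal illustration of this procedure, the sketch below runs a Metropolis simulation of the simplest possible case, compact U(1) gauge theory on a small two-dimensional lattice, generating link configurations with probability proportional to formula_5 for the 1×1 Wilson action and measuring the average plaquette. The lattice size, coupling and number of sweeps are arbitrary illustrative choices; a production lattice-QCD code (SU(3) in four dimensions, with fermions) is far more involved.

```python
import numpy as np

# Minimal Metropolis sketch for 2D compact U(1) lattice gauge theory
# (illustrative parameters; not a production lattice code).
rng = np.random.default_rng(0)
L, beta, n_sweeps = 8, 2.0, 200
theta = np.zeros((L, L, 2))   # link angles theta[x, y, mu], mu = 0 (x-dir) or 1 (y-dir)

def plaquette_angle(th, x, y):
    # theta_P = theta_x(x,y) + theta_y(x+1,y) - theta_x(x,y+1) - theta_y(x,y)
    return (th[x, y, 0] + th[(x + 1) % L, y, 1]
            - th[x, (y + 1) % L, 0] - th[x, y, 1])

def local_action(th, x, y, mu):
    # Sum of (1 - cos theta_P) over the two plaquettes containing link (x, y, mu)
    if mu == 0:
        plaqs = [(x, y), (x, (y - 1) % L)]
    else:
        plaqs = [(x, y), ((x - 1) % L, y)]
    return sum(1.0 - np.cos(plaquette_angle(th, px, py)) for px, py in plaqs)

for _ in range(n_sweeps):
    for x in range(L):
        for y in range(L):
            for mu in range(2):
                old = theta[x, y, mu]
                s_old = local_action(theta, x, y, mu)
                theta[x, y, mu] = old + rng.uniform(-1.0, 1.0)
                s_new = local_action(theta, x, y, mu)
                # Metropolis accept/reject with probability min(1, exp(-beta * delta S))
                if rng.random() >= np.exp(-beta * (s_new - s_old)):
                    theta[x, y, mu] = old   # reject the proposed change

avg_plaq = np.mean([np.cos(plaquette_angle(theta, x, y))
                    for x in range(L) for y in range(L)])
print("average 1x1 plaquette <cos theta_P> at beta = 2.0:", avg_plaq)
```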
Such calculations are often extremely computationally intensive, and can require the use of the largest available supercomputers. To reduce the computational burden, the so-called quenched approximation can be used, in which the fermionic fields are treated as non-dynamic "frozen" variables. While this was common in early lattice QCD calculations, "dynamical" fermions are now standard. These simulations typically utilize algorithms based upon molecular dynamics or microcanonical ensemble algorithms.
The results of lattice QCD computations show e.g. that in a meson not only the particles (quarks and antiquarks), but also the "fluxtubes" of the gluon fields are important.
Quantum triviality.
Lattice gauge theory is also important for the study of quantum triviality by the real-space renormalization group. The most important information in the RG flow is its set of "fixed points".
The possible macroscopic states of the system, at a large scale, are given by this set of fixed points. If these fixed points correspond to a free field theory, the theory is said to be "trivial" or noninteracting. Numerous fixed points appear in the study of lattice Higgs theories, but the nature of the quantum field theories associated with these remains an open question.
Triviality has yet to be proven rigorously, but lattice computations have provided strong evidence for it. This fact is important, as quantum triviality can be used to bound or even predict parameters such as the mass of the Higgs boson. Lattice calculations have been useful in this context.
Other applications.
Solvable two-dimensional lattice gauge theories had already been introduced in 1971 as models with interesting statistical properties by the theorist Franz Wegner, who worked in the field of phase transitions.
When only 1×1 Wilson loops appear in the action, lattice gauge theory can be shown to be exactly dual to spin foam models.
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "a"
},
{
"math_id": 1,
"text": "\\mathbb{C}^3"
},
{
"math_id": 2,
"text": "a \\to 0"
},
{
"math_id": 3,
"text": "S=\\sum_F -\\Re\\{\\chi^{(\\rho)}(U(e_1)\\cdots U(e_n))\\}."
},
{
"math_id": 4,
"text": "a^2"
},
{
"math_id": 5,
"text": "e^{-\\beta S}"
},
{
"math_id": 6,
"text": "S"
},
{
"math_id": 7,
"text": "\\beta"
}
] |
https://en.wikipedia.org/wiki?curid=671804
|
671814
|
Warped product
|
Warped product formula_0 of two Riemannian (or pseudo-Riemannian) manifolds formula_1 and formula_2 with respect to a function formula_3 is the product space formula_4 with the metric tensor formula_5.
Warped geometries are useful in that separation of variables can be used when solving partial differential equations over them.
Examples.
Warped geometries acquire their full meaning when we substitute the variable "y" for "t" (time) and "x" for "s" (space). Then the "f"("y") factor of the spatial dimension becomes the effect of time which, in the words of Einstein, "curves space". How it curves space defines one or another solution to a space-time world. For that reason, different models of space-time use warped geometries.
Many basic solutions of the Einstein field equations are warped geometries, for example, the Schwarzschild solution and the Friedmann–Lemaître–Robertson–Walker models.
Also, warped geometries are the key building block of Randall–Sundrum models in string theory.
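A more elementary example is the Euclidean plane in polar coordinates, dr² + r²dθ², which is a warped product with base the half-line carrying the metric dr², fibre the circle carrying dθ², and warping function f(r) = r. The following SymPy sketch (written out only for concreteness) constructs the block metric and checks it against the flat metric pulled back through the polar-coordinate map.

```python
import sympy as sp

# A small symbolic sketch: the Euclidean plane in polar coordinates,
# dr^2 + r^2 dtheta^2, as a warped product with base metric g = dr^2,
# fibre metric h = dtheta^2 and warping function f(r) = r.
r, theta = sp.symbols('r theta', positive=True)

g = sp.Matrix([[1]])        # base metric in the coordinate r
h = sp.Matrix([[1]])        # fibre metric in the coordinate theta
f = r                       # warping function defined on the base

warped = sp.diag(*g, *(f**2 * h))     # the metric g plus f^2 times h, in coordinates (r, theta)
print(warped)                          # Matrix([[1, 0], [0, r**2]])

# Consistency check: pull back the flat metric dx^2 + dy^2 through
# x = r*cos(theta), y = r*sin(theta) and compare.
xc, yc = r * sp.cos(theta), r * sp.sin(theta)
J = sp.Matrix([[sp.diff(xc, r), sp.diff(xc, theta)],
               [sp.diff(yc, r), sp.diff(yc, theta)]])
pullback = (J.T * J).applyfunc(sp.simplify)
print(pullback == warped)              # True
```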
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "F\\times_f B"
},
{
"math_id": 1,
"text": "F=(F,h)"
},
{
"math_id": 2,
"text": "B=(B,g)"
},
{
"math_id": 3,
"text": "f\\colon B\\to\\R"
},
{
"math_id": 4,
"text": "F\\times B"
},
{
"math_id": 5,
"text": "g\\oplus (f^2\\cdot h)"
}
] |
https://en.wikipedia.org/wiki?curid=671814
|
671821
|
Randall–Sundrum model
|
Extra-dimensional model of the universe
In physics, Randall–Sundrum models (also called 5-dimensional warped geometry theory) are models that describe the world in terms of a warped-geometry higher-dimensional universe, or more concretely as a 5-dimensional anti-de Sitter space where the elementary particles (except the graviton) are localized on a (3 + 1)-dimensional brane or branes.
The two models were proposed in two articles in 1999 by Lisa Randall and Raman Sundrum because they were dissatisfied with the universal extra-dimensional models then in vogue. Such models require two fine tunings; one for the value of the bulk cosmological constant and the other for the brane tensions. Later, while studying RS models in the context of the anti-de Sitter / conformal field theory (AdS/CFT) correspondence, they showed how it can be dual to technicolor models.
The first of the two models, called RS1, has a finite size for the extra dimension with two branes, one at each end. The second, RS2, is similar to the first, but one brane has been placed infinitely far away, so that there is only one brane left in the model.
Overview.
The model is a braneworld theory developed while trying to solve the hierarchy problem of the Standard Model. It involves a finite five-dimensional bulk that is extremely warped and contains two branes: the Planckbrane (where gravity is a relatively strong force; also called "Gravitybrane") and the Tevbrane (our home with the Standard Model particles; also called "Weakbrane"). In this model, the two branes are separated in the not-necessarily large fifth dimension by approximately 16 units (the units based on the brane and bulk energies). The Planckbrane has positive brane energy, and the Tevbrane has negative brane energy. These energies are the cause of the extremely warped spacetime.
Graviton probability function.
In this warped spacetime, which is warped "only" along the fifth dimension, the graviton's probability function is extremely high at the Planckbrane, but it drops exponentially as it moves closer towards the Tevbrane. As a result, gravity would be much weaker on the Tevbrane than on the Planckbrane.
RS1 model.
The RS1 model attempts to address the hierarchy problem. The warping of the extra dimension is analogous to the warping of spacetime in the vicinity of a massive object, such as a black hole. This warping, or red-shifting, generates a large ratio of energy scales, so that the natural energy scale at one end of the extra dimension is much larger than at the other end:
formula_0
where "k" is some constant, and η has "−+++" metric signature. This space has boundaries at "y" = 1/"k" and "y" = 1/("Wk"), with formula_1, where "k" is around the Planck scale, "W" is the warp factor, and "Wk" is around a TeV. The boundary at "y" = 1/"k" is called the Planck brane, and the boundary at "y" = 1/("Wk") is called the TeV brane. The particles of the standard model reside on the TeV brane. The distance between both branes is only −ln("W")/"k", though.
In another coordinate system,
formula_2
so that
formula_3
and
formula_4
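This change of coordinates can be verified directly; the following SymPy sketch (an independent check, not part of the original papers) substitutes the definition of φ given in formula_2 into the first form of the metric and recovers the two coefficients of the second form.

```python
import sympy as sp

# Independent symbolic check (a sketch, not from the original papers) that the
# substitution phi = -pi*log(k*y)/log(W) takes the first form of the metric
# into the second form quoted above.
k, W, y, phi = sp.symbols('k W y phi', positive=True)

y_of_phi = sp.exp(-phi * sp.log(W) / sp.pi) / k      # inverse of the definition of phi
prefactor = 1 / (k**2 * y_of_phi**2)                 # the 1/(k^2 y^2) warp factor
dy_dphi = sp.diff(y_of_phi, phi)

# Coefficient of dphi^2 after the substitution:
print(sp.simplify(prefactor * dy_dphi**2))           # log(W)**2/(pi**2*k**2)

# Coefficient of eta_{mu nu} dx^mu dx^nu after the substitution:
print(sp.simplify(prefactor - sp.exp(2 * sp.log(W) * phi / sp.pi)))   # 0
```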
RS2 model.
The RS2 model uses the same geometry as RS1, but there is no TeV brane. The particles of the standard model are presumed to be on the Planck brane. This model was originally of interest because it represented an infinite 5-dimensional model, which, in many respects, behaved as a 4-dimensional model. This setup may also be of interest for studies of the AdS/CFT conjecture.
Prior models.
In 1998/99 Merab Gogberashvili published on arXiv a number of articles on a very similar theme. He showed that if the Universe is considered as a thin shell (a mathematical synonym for "brane") expanding in 5-dimensional space, then there is a possibility to obtain one scale for particle theory corresponding to the 5-dimensional cosmological constant and Universe thickness, and thus to solve the hierarchy problem. It was also shown that the four-dimensionality of the Universe is the result of a stability requirement, since the extra component of the Einstein field equations giving the localized solution for matter fields coincides with one of the conditions of stability.
Experimental results.
In August 2016, experimental results from the LHC excluded RS gravitons with masses below 3.85 TeV and 4.45 TeV for ˜k = 0.1 and 0.2 respectively; for ˜k = 0.01, graviton masses below 1.95 TeV were excluded, except for the region between 1.75 TeV and 1.85 TeV. These are currently the most stringent limits on RS graviton production.
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "\\mathrm{d}s^2 = \\frac{1}{k^2 y^2}(\\mathrm{d}y^2 + \\eta_{\\mu\\nu}\\,\\mathrm{d}x^\\mu\\,\\mathrm{d}x^\\nu),"
},
{
"math_id": 1,
"text": "0 \\le 1/k \\le 1/(Wk)"
},
{
"math_id": 2,
"text": "\\varphi\\ \\stackrel{\\mathrm{def}}{=}\\ -\\frac{\\pi \\ln(ky)}{\\ln(W)},"
},
{
"math_id": 3,
"text": "0 \\le \\varphi \\le \\pi,"
},
{
"math_id": 4,
"text": "\\mathrm{d}s^2 = \\left(\\frac{\\ln(W)}{\\pi k}\\right)^2\\, \\mathrm{d}\\varphi^2 + e^\\frac{2\\ln(W)\\varphi}{\\pi} \\eta_{\\mu\\nu}\\,\\mathrm{d}x^\\mu\\, \\mathrm{d}x^\\nu."
}
] |
https://en.wikipedia.org/wiki?curid=671821
|
67182867
|
Directional component analysis
|
Statistical method for analysing climate data
Directional component analysis (DCA) is a statistical method used in climate science for identifying representative patterns of variability in space-time data-sets such as historical climate observations, weather prediction ensembles or climate ensembles.
The first DCA pattern is a pattern of weather or climate variability that is both likely to occur (measured using likelihood) and has a large impact (for a specified linear impact function, and given certain mathematical conditions: see below).
The first DCA pattern contrasts with the first PCA pattern, which is likely to occur, but may not have a large impact, and with a pattern derived from the gradient of the impact function, which has a large impact, but may not be likely to occur.
DCA differs from other pattern identification methods used in climate research, such as EOFs, rotated EOFs and extended EOFs in that it takes into account an external vector, the gradient of the impact.
DCA provides a way to reduce large ensembles from weather forecasts or climate models to just two patterns.
The first pattern is the ensemble mean, and the second pattern is the DCA pattern, which represents variability around the ensemble mean in a way that takes impact into account.
DCA contrasts with other methods that have been proposed for the reduction of ensembles in that it takes impact into account in addition to the structure of the ensemble.
Overview.
Inputs.
DCA is calculated from two inputs:
Formula.
Consider a space-time data set formula_0, containing individual spatial pattern vectors formula_1, where the individual patterns are each considered as single samples from a multivariate normal distribution with mean zero and covariance matrix formula_2.
We define a linear impact function of a spatial pattern as formula_3, where formula_4 is a vector of spatial weights.
The first DCA pattern is given in terms of the covariance matrix formula_2 and the weights formula_4 by the proportional expression
formula_5.
The pattern can then be normalized to any length as required.
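A minimal numerical sketch of this calculation is given below, using synthetic data and equal weights (corresponding, for example, to a total-anomaly impact metric); the sample size, dimension and mixing matrix are arbitrary illustrative choices. It computes the first DCA pattern from formula_5, normalised to unit length, alongside the first PCA pattern (the leading eigenvector of formula_2) for comparison.

```python
import numpy as np

# Minimal sketch of the DCA calculation on synthetic data (all numbers are
# illustrative assumptions, not a real climate data-set).
rng = np.random.default_rng(1)
n_time, n_space = 500, 10

A = rng.standard_normal((n_space, n_space))        # arbitrary mixing matrix
X = rng.standard_normal((n_time, n_space)) @ A.T   # rows are spatial anomaly patterns x

C = np.cov(X, rowvar=False)   # spatial covariance matrix
r = np.ones(n_space)          # weights of the linear impact function r^T x (total anomaly)

# First DCA pattern: proportional to C r, here normalised to unit length
dca1 = C @ r
dca1 /= np.linalg.norm(dca1)

# First PCA pattern for comparison: leading eigenvector of C
eigvals, eigvecs = np.linalg.eigh(C)
pca1 = eigvecs[:, -1]

print("impact r^T x of unit-length DCA1:", r @ dca1)
print("impact r^T x of unit-length PCA1:", abs(r @ pca1))
print("cosine similarity of DCA1 and PCA1:", abs(dca1 @ pca1))
```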
Properties.
If the weather or climate data is elliptically distributed (e.g., is distributed as a multivariate normal distribution or a multivariate t-distribution) then the first DCA pattern (DCA1) is defined as the spatial pattern with the following mathematical properties:
Rainfall Example.
For instance, in a rainfall anomaly dataset, using an impact metric defined as the total rainfall anomaly, the first DCA pattern is the spatial pattern that has the highest probability density for a given total rainfall anomaly. If the given total rainfall anomaly is chosen to have a large value, then this pattern combines being extreme in terms of the metric (i.e., representing large amounts of total rainfall) with being likely in terms of the pattern, and so is well suited as a representative extreme pattern.
Comparison with PCA.
The main differences between Principal component analysis (PCA) and DCA are
As a result, for unit vector spatial patterns:
The degenerate cases occur when the PCA and DCA patterns are equal.
Also, given the first PCA pattern, the DCA pattern can be scaled so that:
Two Dimensional Example.
Figure 1 gives an example, which can be understood as follows:
From this diagram, the DCA pattern can be seen to possess the following properties:
In this case the total rainfall anomaly of the PCA pattern is quite small, because of anticorrelations between the rainfall anomalies at the two locations. As a result, the first PCA pattern is not a good representative example of a pattern with large total rainfall anomaly, while the first DCA pattern is.
In formula_6 dimensions the ellipse becomes an ellipsoid, the diagonal line becomes an formula_7 dimensional plane, and the PCA and DCA patterns are vectors in formula_6 dimensions.
Applications.
Application to Climate Variability.
DCA has been applied to the CRU data-set of historical rainfall variability in order to understand the most likely patterns of rainfall extremes in the US and China.
Application to Ensemble Weather Forecasts.
DCA has been applied to ECMWF medium-range weather forecast ensembles in order to identify the most likely patterns of extreme temperatures in the ensemble forecast.
Application to Ensemble Climate Model Projections.
DCA has been applied to ensemble climate model projections in order to identify the most likely patterns of extreme future rainfall.
Derivation of the First DCA Pattern.
Consider a space-time data-set formula_0, containing individual spatial pattern vectors formula_1, where the individual patterns are each considered as single samples from a multivariate normal distribution with mean zero and covariance matrix formula_2.
As a function of formula_1, the log probability density is proportional to formula_8.
We define a linear impact function of a spatial pattern as formula_3, where formula_4 is a vector of spatial weights.
We then seek to find the spatial pattern that maximises the probability density for a given value of the linear impact function. This is equivalent to finding the spatial pattern that maximises the "log" probability density for a given value of the linear impact function, which is slightly easier to solve.
This is a constrained maximisation problem, and can be solved using the method of Lagrange multipliers.
The Lagrangian function is given by
formula_9
Differentiating by formula_1 and setting to zero gives the solution
formula_5
Normalising so that formula_1 is a unit vector gives
formula_10
This is the first DCA pattern.
Subsequent patterns can be derived which are orthogonal to the first, to form an orthonormal set and a method for matrix factorisation.
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "X"
},
{
"math_id": 1,
"text": "x"
},
{
"math_id": 2,
"text": "C"
},
{
"math_id": 3,
"text": "r^tx"
},
{
"math_id": 4,
"text": "r"
},
{
"math_id": 5,
"text": "x \\propto Cr"
},
{
"math_id": 6,
"text": "n"
},
{
"math_id": 7,
"text": "n-1"
},
{
"math_id": 8,
"text": "-x^t C^{-1} x"
},
{
"math_id": 9,
"text": "L(x,\\lambda)=-x^t C^{-1}x-\\lambda(r^tx-1)"
},
{
"math_id": 10,
"text": "x = Cr / (r^tCCr)^{1/2}"
}
] |
https://en.wikipedia.org/wiki?curid=67182867
|
671875
|
Conformal symmetry
|
Extension to the Poincaré group
In mathematical physics, the conformal symmetry of spacetime is expressed by an extension of the Poincaré group, known as the conformal group; in layman's terms, it refers to the fact that stretching, compressing or otherwise distorting spacetime preserves the angles between lines or curves that exist within spacetime.
Conformal symmetry encompasses special conformal transformations and dilations. In three spatial plus one time dimensions, conformal symmetry has 15 degrees of freedom: ten for the Poincaré group, four for special conformal transformations, and one for a dilation.
Harry Bateman and Ebenezer Cunningham were the first to study the conformal symmetry of Maxwell's equations. They called a generic expression of conformal symmetry a spherical wave transformation. General relativity in two spacetime dimensions also enjoys conformal symmetry.
Generators.
The Lie algebra of the conformal group has the following representation:
formula_0
where formula_1 are the Lorentz generators, formula_2 generates translations, formula_3 generates scaling transformations (also known as dilatations or dilations) and formula_4 generates the special conformal transformations.
Commutation relations.
The commutation relations are as follows:
formula_5
while all other commutators vanish. Here formula_6 is the Minkowski metric tensor.
Additionally, formula_3 is a scalar and formula_4 is a covariant vector under the Lorentz transformations.
The special conformal transformations are given by
formula_7
where formula_8 is a parameter describing the transformation. This special conformal transformation can also be written as formula_9, where
formula_10
which shows that it consists of an inversion, followed by a translation, followed by a second inversion.
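This decomposition can be checked symbolically. The following SymPy sketch (an independent verification, not drawn from a reference) composes an inversion, a translation by −a, and a second inversion in four dimensions with the mostly-plus Minkowski metric, and confirms that the result agrees with the closed-form transformation given in formula_7; the component names are arbitrary.

```python
import sympy as sp

# Short symbolic check (a sketch) that the special conformal transformation is an
# inversion, followed by a translation by -a^mu, followed by a second inversion.
eta = sp.diag(-1, 1, 1, 1)                       # Minkowski metric, signature (-+++)
x = sp.Matrix(sp.symbols('y0:4'))                # spacetime point x^mu
a = sp.Matrix(sp.symbols('b0:4'))                # transformation parameter a^mu

def dot(u, v):                                   # Minkowski inner product
    return (u.T * eta * v)[0, 0]

def invert(u):                                   # inversion u^mu -> u^mu / u^2
    return u / dot(u, u)

# Inversion, then translation by -a, then a second inversion:
composed = invert(invert(x) - a)

# The closed-form special conformal transformation quoted in the text:
sct = (x - a * dot(x, x)) / (1 - 2 * dot(a, x) + dot(a, a) * dot(x, x))

print((composed - sct).applyfunc(sp.simplify).T)   # zero row vector
```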
In two-dimensional spacetime, the transformations of the conformal group are the conformal transformations. There are infinitely many of them.
In more than two dimensions, Euclidean conformal transformations map circles to circles and hyperspheres to hyperspheres, with a straight line considered a degenerate circle and a hyperplane a degenerate hypersphere.
In more than two Lorentzian dimensions, conformal transformations map null rays to null rays and light cones to light cones, with a null hyperplane being a degenerate light cone.
Applications.
Conformal field theory.
In relativistic quantum field theories, the possibility of symmetries is strictly restricted by Coleman–Mandula theorem under physically reasonable assumptions. The largest possible global symmetry group of a non-supersymmetric interacting field theory is a direct product of the conformal group with an internal group. Such theories are known as conformal field theories.
Second-order phase transitions.
One particular application is to critical phenomena in systems with local interactions. Fluctuations in such systems are conformally invariant at the critical point. This allows for a classification of the universality classes of phase transitions in terms of conformal field theories.
Conformal invariance is also present in two-dimensional turbulence at high Reynolds number.
High-energy physics.
Many theories studied in high-energy physics admit conformal symmetry because it is typically implied by local scale invariance. A famous example is d=4, N=4 supersymmetric Yang–Mills theory, due to its relevance for the AdS/CFT correspondence. Also, the worldsheet in string theory is described by a two-dimensional conformal field theory coupled to two-dimensional gravity.
Mathematical proofs of conformal invariance in lattice models.
Physicists have found that many lattice models become conformally invariant in the critical limit. However, mathematical proofs of these results have only appeared much later, and only in some cases.
In 2010, the mathematician Stanislav Smirnov was awarded the Fields medal "for the proof of conformal invariance of percolation and the planar Ising model in statistical physics".
In 2020, the mathematician Hugo Duminil-Copin and his collaborators proved that rotational invariance exists at the boundary between phases in many physical systems.
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "\\begin{align} & M_{\\mu\\nu} \\equiv i(x_\\mu\\partial_\\nu-x_\\nu\\partial_\\mu) \\,, \\\\\n&P_\\mu \\equiv-i\\partial_\\mu \\,, \\\\\n&D \\equiv-ix_\\mu\\partial^\\mu \\,, \\\\\n&K_\\mu \\equiv i(x^2\\partial_\\mu-2x_\\mu x_\\nu\\partial^\\nu) \\,, \\end{align}"
},
{
"math_id": 1,
"text": "M_{\\mu\\nu}"
},
{
"math_id": 2,
"text": "P_\\mu"
},
{
"math_id": 3,
"text": "D"
},
{
"math_id": 4,
"text": "K_\\mu"
},
{
"math_id": 5,
"text": "\\begin{align} &[D,K_\\mu]= -iK_\\mu \\,, \\\\\n&[D,P_\\mu]= iP_\\mu \\,, \\\\\n&[K_\\mu,P_\\nu]=2i (\\eta_{\\mu\\nu}D-M_{\\mu\\nu}) \\,, \\\\\n&[K_\\mu, M_{\\nu\\rho}] = i ( \\eta_{\\mu\\nu} K_{\\rho} - \\eta_{\\mu \\rho} K_\\nu ) \\,, \\\\\n&[P_\\rho,M_{\\mu\\nu}] = i(\\eta_{\\rho\\mu}P_\\nu - \\eta_{\\rho\\nu}P_\\mu) \\,, \\\\\n&[M_{\\mu\\nu},M_{\\rho\\sigma}] = i (\\eta_{\\nu\\rho}M_{\\mu\\sigma} + \\eta_{\\mu\\sigma}M_{\\nu\\rho} - \\eta_{\\mu\\rho}M_{\\nu\\sigma} - \\eta_{\\nu\\sigma}M_{\\mu\\rho})\\,, \\end{align}"
},
{
"math_id": 6,
"text": "\\eta_{\\mu\\nu}"
},
{
"math_id": 7,
"text": "\n x^\\mu \\to \\frac{x^\\mu-a^\\mu x^2}{1 - 2a\\cdot x + a^2 x^2}\n"
},
{
"math_id": 8,
"text": "a^{\\mu}"
},
{
"math_id": 9,
"text": " x^\\mu \\to x'^\\mu "
},
{
"math_id": 10,
"text": "\n\\frac{{x}'^\\mu}{{x'}^2}= \\frac{x^\\mu}{x^2} - a^\\mu,\n"
}
] |
https://en.wikipedia.org/wiki?curid=671875
|
671882
|
Infrared fixed point
|
Low energy fixed point
In physics, an infrared fixed point is a set of coupling constants, or other parameters, that evolve from arbitrary initial values at very high energies (short distance) to fixed, stable values, usually predictable, at low energies (large distance). This usually involves the use of the renormalization group, which specifically details the way parameters in a physical system (a quantum field theory) depend on the energy scale being probed.
Conversely, if the length-scale decreases and the physical parameters approach fixed values, then we have ultraviolet fixed points. The fixed points are generally independent of the initial values of the parameters over a large range of the initial values. This is known as universality.
Statistical physics.
In the statistical physics of second order phase transitions, the physical system approaches an infrared fixed point that is independent of the initial short-distance dynamics that defines the material. This determines the properties of the phase transition at the critical temperature, or critical point. Observables, such as critical exponents, usually depend only upon the dimension of space and are independent of the atomic or molecular constituents.
Top Quark.
In the Standard Model, quarks and leptons have "Yukawa couplings" to the Higgs boson which determine the masses of the particles. Most of the quarks' and leptons' Yukawa couplings are small compared to the top quark's Yukawa coupling. Yukawa couplings are not constants; their values change depending on the energy scale at which they are measured, which is known as the "running" of the couplings. The dynamics of Yukawa couplings are determined by the renormalization group equation:
formula_0
where formula_1 is the color gauge coupling (which is a function of formula_2 and is associated with asymptotic freedom) and formula_3 is the Yukawa coupling for the quark formula_4 This equation describes how the Yukawa coupling changes with energy scale formula_5
A more complete version of the same formula is more appropriate for the top quark:
formula_6
where g2 is the weak isospin gauge coupling and g1 is the weak hypercharge gauge coupling. For small or nearly constant values of g1 and g2, the qualitative behavior is the same.
The Yukawa couplings of the up, down, charm, strange and bottom quarks are small at the extremely high energy scale of grand unification, formula_7 Therefore, the formula_8 term can be neglected in the above equation for all but the top quark. Solving, we then find that formula_3 is increased slightly at the low energy scales at which the quark masses are generated by the Higgs, formula_9
On the other hand, solutions to this equation for large initial values typical for the top quark formula_10 cause the expression on the right side to quickly approach zero as we descend in energy scale, which stops formula_10 from changing and locks it to the QCD coupling formula_11 This is known as an (infrared) quasi-fixed point of the renormalization group equation for the Yukawa coupling. No matter what the initial starting value of the coupling is, if it is sufficiently large at high energies to begin with, it will reach this quasi-fixed point value, and the corresponding quark mass is predicted to be about formula_12
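A rough numerical sketch of this focusing behaviour is given below. It integrates the one-loop equation for formula_10 from a high scale down to the top-mass scale while holding formula_1 fixed at an assumed constant value, which turns the quasi-fixed point into an exact fixed point of the simplified equation; the scales, the value of formula_1 and the initial couplings are illustrative choices only, so the resulting numbers should not be read as a mass prediction.

```python
import numpy as np

# Rough illustration of the quasi-fixed point (a sketch, not the full Standard
# Model calculation): g3 is held constant here, whereas in reality it runs with
# the scale, so the numbers are only qualitative.
g3 = 1.16                                   # assumed (constant) QCD coupling
mu_high, mu_low = 1.0e15, 175.0             # GeV: near-GUT scale down to the top-mass scale
t_high, t_low = np.log(mu_high), np.log(mu_low)
n_steps = 5000
dt = (t_low - t_high) / n_steps             # negative: we run down in energy

def beta_y(y):
    # One-loop running of the top Yukawa coupling, electroweak terms neglected
    return y / (16 * np.pi**2) * (4.5 * y**2 - 8 * g3**2)

for y_initial in (1.0, 2.0, 3.0, 5.0):
    y = y_initial
    for _ in range(n_steps):                # simple Euler integration in t = ln(mu)
        y += dt * beta_y(y)
    print(f"y(mu_high) = {y_initial:.1f}  ->  y(mu_low) = {y:.3f}")

# Widely different high-scale values are focused towards a common low-scale
# value, which is the essence of the infrared quasi-fixed point.
```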
The renormalization group equation for large values of the top Yukawa coupling was first considered in 1981 by Pendleton & Ross, and the "infrared quasi-fixed point" was proposed by Hill.
The prevailing view at the time was that the top quark mass would lie in a range of 15 to 26 GeV. The quasi-infrared fixed point emerged in top quark condensation theories of electroweak symmetry breaking in which the Higgs boson is composite at extremely short distance scales, composed of a pair of top and anti-top quarks.
While the value of the quasi-fixed point is determined in the Standard Model to be about formula_13 if there is more than one Higgs doublet, the value will be reduced by an increase in the factor in the equation and by any Higgs mixing angle effects. Since the observed top quark mass of 174 GeV is slightly lower than the standard model prediction by about 20%, this suggests there may be more Higgs doublets beyond the single standard model Higgs boson. If there are many additional Higgs doublets in nature, the predicted value of the quasi-fixed point comes into agreement with experiment. Even if there are two Higgs doublets, the fixed point for the top mass is reduced to 170–200 GeV. Some theorists believed this was supporting evidence for the Supersymmetric Standard Model; however, no other signs of supersymmetry have emerged at the Large Hadron Collider.
Banks–Zaks fixed point.
Another example of an infrared fixed point is the Banks–Zaks fixed point in which the coupling constant of a Yang–Mills theory evolves to a fixed value. The beta-function vanishes, and the theory possesses a symmetry known as conformal symmetry.
Footnotes.
<templatestyles src="Reflist/styles.css" />
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "\\ \\mu\\ \\frac{\\partial}{\\partial\\mu}\\ y_q \\approx \\frac{ y_q }{\\ 16\\pi^2\\ }\\left(\\frac{\\ 9\\ }{2}y_q^2 - 8 g_3^2\\right)\\ ,"
},
{
"math_id": 1,
"text": "\\ g_3\\ "
},
{
"math_id": 2,
"text": "\\ \\mu\\ "
},
{
"math_id": 3,
"text": "\\ y_q\\ "
},
{
"math_id": 4,
"text": "\\ q \\in \\{ \\mathrm{u, b, t} \\}~."
},
{
"math_id": 5,
"text": "\\ \\mu ~."
},
{
"math_id": 6,
"text": "\\ \\mu\\ \\frac{\\ \\partial}{\\partial\\mu}\\ y_\\mathrm{t} \\approx \\frac{\\ y_\\text{t}\\ }{16\\ \\pi^2}\\left(\\frac{\\ 9\\ }{2}y_\\mathrm{t}^2 - 8 g_3^2- \\frac{\\ 9\\ }{4}g_2^2 - \\frac{\\ 17\\ }{20} g_1^2 \\right)\\ ,"
},
{
"math_id": 7,
"text": "\\ \\mu \\approx 10^{15} \\mathrm{ GeV } ~."
},
{
"math_id": 8,
"text": "\\ y^2_q\\ "
},
{
"math_id": 9,
"text": "\\ \\mu \\approx 125\\ \\mathrm{ GeV } ~."
},
{
"math_id": 10,
"text": "\\ y_\\mathrm{t}\\ "
},
{
"math_id": 11,
"text": "\\ g_3 ~."
},
{
"math_id": 12,
"text": "\\ m \\approx 220\\ \\mathrm{ GeV } ~."
},
{
"math_id": 13,
"text": "\\ m \\approx 220\\ \\mathrm{ GeV } ~,"
}
] |
https://en.wikipedia.org/wiki?curid=671882
|
Subsets and Splits
No community queries yet
The top public SQL queries from the community will appear here once available.