id | title | text | formulas | url |
---|---|---|---|---|
636806
|
Total fertility rate
|
Number of children a woman is expected to have barring select circumstances
The total fertility rate (TFR) of a population is the average number of children that are born to a woman over her lifetime, if she were to experience the exact current age-specific fertility rates (ASFRs) throughout her lifetime and were to live from birth until the end of her reproductive life.
As of 2023, the total fertility rate varied widely across the world, from 0.72 in South Korea, to 6.73 in Niger.
Fertility tends to be inversely correlated with levels of economic development. Historically, developed countries have had significantly lower fertility rates, generally correlated with greater wealth, education, urbanization, and other factors. Conversely, in the least developed countries, fertility rates tend to be higher: families desire children for their labor and as caregivers for their parents in old age, and fertility rates are also raised by the lack of access to contraceptives, generally lower levels of female education, and lower rates of female employment. Fertility does not correlate significantly with any particular religion.
The United Nations predicts that global fertility will continue to decline for the remainder of this century and reach a below-replacement level of 1.8 by 2100, and that world population will peak in the period 2084–2088.
Parameter characteristics.
The Total Fertility Rate (TFR) is not based on the actual fertility of a specific group of women, as that would require waiting until they have completed childbearing, nor is it based on counting the total number of children they actually bear over their lifetimes. Instead, the TFR is based on the age-specific fertility rates of women in their "child-bearing years," typically taken to be ages 15–44 or 15–49 in international statistical usage.
The TFR is a measure of the fertility of an imaginary woman who experiences the age-specific fertility rates for ages 15–49 that were recorded for a specific population in a given year. It represents the average number of children a woman would potentially have if she were to go through all her childbearing years in a single year, subject to the age-specific fertility rates for that year. In simpler terms, the TFR is the number of children a woman would have if she were to experience the prevailing fertility rates at all ages from a single given year and survived throughout her childbearing years.
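To make the computation concrete, here is a minimal Python sketch; the age-specific rates are invented, illustrative values, not data for any real population. Each 5-year age group contributes its rate five times, once for each year a hypothetical woman spends in it.

# Hypothetical 5-year age-group ASFRs (births per woman per year); illustrative values only.
asfr = {
    "15-19": 0.020, "20-24": 0.080, "25-29": 0.105,
    "30-34": 0.090, "35-39": 0.045, "40-44": 0.010, "45-49": 0.001,
}

# A synthetic woman spends 5 years in each group, so each rate is counted 5 times.
tfr = 5 * sum(asfr.values())
print(f"TFR = {tfr:.2f}")   # about 1.76 children per woman for these assumed rates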
Related parameters.
Net reproduction rate.
An alternative measure of fertility is the net reproduction rate (NRR), which calculates the number of daughters a female would have in her lifetime if she were subject to prevailing age-specific fertility and mortality rates in a given year. When the NRR is exactly 1, each generation of females is precisely replacing itself.
The NRR is not as commonly used as the TFR, but it is particularly relevant in cases where the number of male babies born is very high due to gender imbalance and sex selection. This is a significant consideration in world population dynamics, especially given the high level of gender imbalance in the heavily populated nations of China and India. The gross reproduction rate (GRR) is the same as the NRR, except that, like the TFR, it disregards life expectancy.
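As a hedged illustration of how the three measures relate, the sketch below extends the example rates used above with an assumed share of female births and assumed survival probabilities; every number is invented for illustration.

asfr = {"15-19": 0.020, "20-24": 0.080, "25-29": 0.105,
        "30-34": 0.090, "35-39": 0.045, "40-44": 0.010, "45-49": 0.001}
frac_female = 0.487   # assumed share of births that are girls (about 1/2.05)
surv = {"15-19": 0.985, "20-24": 0.982, "25-29": 0.979,   # assumed probability that a girl
        "30-34": 0.975, "35-39": 0.970, "40-44": 0.963,   # survives from birth to the given
        "45-49": 0.955}                                    # age group

tfr = 5 * sum(asfr.values())                                   # all children, mortality ignored
grr = frac_female * tfr                                        # daughters only, mortality ignored
nrr = 5 * sum(frac_female * asfr[a] * surv[a] for a in asfr)   # daughters, discounted by mortality
print(f"TFR={tfr:.2f}  GRR={grr:.2f}  NRR={nrr:.2f}")          # NRR below 1 here: sub-replacement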
Total period fertility rate.
The TFR, sometimes called the total period fertility rate (TPFR), is a better index of fertility than the crude birth rate (annual number of births per thousand population) because it is independent of the age structure of the population, but it is a poorer estimate of actual completed family size than the total cohort fertility rate, which is obtained by summing the age-specific fertility rates that actually applied to each cohort as they aged through time.
In particular, the TFR does not necessarily predict how many children young women now will eventually have, as their fertility rates in years to come may change from those of older women now. However, the TFR is a reasonable summary of current fertility levels. The TFR and the long-term population growth rate, "g", are closely related. For a population structure in a steady state, the growth rate equals formula_0, where formula_1 is the mean age for childbearing women.
Tempo effect.
The TPFR (total "period" fertility rate) is affected by a tempo effect—if age of childbearing increases, and life cycle fertility is unchanged, then while the age of childbearing is increasing, TPFR will be lower, because the births are occurring later, and then the age of childbearing stops increasing, the TPFR will increase, due to the deferred births occurring in the later period, even though the life cycle fertility has been unchanged. In other words, the TPFR is a misleading measure of life cycle fertility when childbearing age is changing, due to this statistical artifact. This is a significant factor in some countries, such as the Czech Republic and Spain in the 1990s. Some measures seek to adjust for this timing effect to gain a better measure of life-cycle fertility.
Replacement rates.
Replacement fertility is the total fertility rate at which women give birth to enough babies to sustain population levels, assuming that mortality rates remain constant and net migration is zero. If replacement level fertility is sustained over a sufficiently long period, each generation will exactly replace itself. In 2003, the replacement fertility rate was 2.1 births per female for most developed countries (2.1 in the UK, for example), but can be as high as 3.5 in undeveloped countries because of higher mortality rates, especially child mortality. The global average for the replacement total fertility rate, eventually leading to a stable global population, for 2010–2015, was 2.3 children per female.
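A rough, hedged back-of-envelope calculation shows where figures like 2.1 and 3.5 come from; the sex ratio at birth and the survival probabilities used below are assumed, illustrative values.

def replacement_tfr(boys_per_girl=1.05, p_girl_survives=0.975):
    """Births per woman needed so that, on average, one daughter survives to childbearing age."""
    births_per_girl_born = 1 + boys_per_girl    # total births accompanying each girl born
    return births_per_girl_born / p_girl_survives

print(round(replacement_tfr(), 2))                        # about 2.1 for a low-mortality country
print(round(replacement_tfr(p_girl_survives=0.60), 2))    # about 3.4 where female/child mortality is high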
Lowest-low fertility.
The term "lowest-low fertility" is defined as a TFR at or below 1.3. Lowest-low fertility is found almost exclusively within East Asian countries and Southern European countries. The East Asian American community in the United States also exhibits lowest-low fertility. At one point in the late 20th century and early 21st century this was also observed in Eastern and Southern Europe. Since then, the fertility rate has risen in most countries of Europe.
The lowest TFR ever recorded anywhere in the world is 0.41, for the Xiangyang district of Jiamusi city (Heilongjiang, China) in 2000. Outside China, the lowest TFR ever recorded was 0.80, for Eastern Germany in 1994. The low Eastern German value was influenced by a shift to higher maternal age at birth, with the consequence that neither older cohorts (e.g. women born until the late 1960s), who often already had children, nor younger cohorts, who were postponing childbirth, had many children during that time. The total cohort fertility rate of each age cohort of women in East Germany did not drop as significantly.
Population-lag effect.
A population that maintained a TFR of 3.8 over an extended period, without a correspondingly high death or emigration rate, would increase rapidly, with a doubling period of about 32 years. A population that maintained a TFR of 2.0 over a long time would decrease, unless it had sufficiently large immigration.
It may take several generations for a change in the total fertility rate to be reflected in birth rate, because the age distribution must reach equilibrium. For example, a population that has recently dropped below replacement-level fertility will continue to grow, because the recent high fertility produced large numbers of young couples, who would now be in their childbearing years.
This phenomenon carries forward for several generations and is called population momentum, "population inertia," or "population-lag effect". This time-lag effect is of great importance to the growth rates of human populations.
TFR (net) and the long-term population growth rate, g, are closely related. For a population structure in a steady state and with zero migration, formula_2, where formula_3 is the mean age for childbearing women, and thus formula_4. A cross-section of countries, compared using their most recent year-on-year growth rates, illustrates this empirical relation.
The parameter formula_5 should be an estimate of formula_3; here it equals formula_6 years, far off the mark because of population momentum. For example, for formula_7, g should be exactly zero, which is seen not to be the case.
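The relation can be checked numerically with the short sketch below, assuming the logarithm is the natural logarithm and taking a mean age of childbearing of 30 years (an assumed value); it also reproduces the roughly 32-year doubling time quoted above for a sustained TFR of 3.8.

import math

def growth_rate(tfr, mean_age_childbearing):
    """Steady-state growth rate g = ln(TFR/2) / X_m, the relation quoted above."""
    return math.log(tfr / 2) / mean_age_childbearing

g = growth_rate(3.8, 30)                                 # X_m = 30 years is an assumed value
print(f"g = {g:.4f} per year")                           # about 0.021
print(f"doubling time = {math.log(2) / g:.0f} years")    # about 32 years, matching the example above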
Influencing factors.
Fertility factors are determinants of the number of children that an individual is likely to have. Fertility factors are mostly positive or negative correlations without established causation.
Factors generally associated with increased fertility include the intention to have children, very high levels of gender inequality, inter-generational transmission of values, marriage and cohabitation, maternal and social support, rural residence, pro-family government programs, low IQ, and increased food production.
Factors generally associated with decreased fertility include rising income, value and attitude changes, education, female labor participation, population control, age, contraception, partner reluctance to have children, a low level of gender inequality, and infertility. The effect of all these factors can be summarized with a plot of total fertility rate against the Human Development Index (HDI) for a sample of countries. The chart shows that the two factors are inversely correlated: in general, the lower a country's HDI, the higher its fertility.
Another common way of summarizing the relationship between economic development and fertility is a plot of TFR against per capita GDP, a proxy for standard of living. This chart shows that per capita GDP is also inversely correlated with fertility.
The impact of human development on TFR can best be summarized by a quote from Karan Singh, a former minister of population in India. At a 1974 United Nations population conference in Bucharest, he said "Development is the best contraceptive."
Wealthy countries, those with high per capita GDP, usually have a lower fertility rate than poor countries, those with low per capita GDP. This may seem counter-intuitive. The inverse relationship between income and fertility has been termed a "demographic-economic paradox" because evolutionary biology suggests that greater means should enable the production of more offspring, not fewer.
Many of these factors may differ by region and social class. For instance, Scandinavian countries and France are among the least religious in the EU, but have the highest TFR, while the opposite is true about Portugal, Greece, Cyprus, Poland and Spain.
National efforts to increase or decrease fertility.
Governments have often set population targets, to either increase or decrease the total fertility rate, or to have certain ethnic or socioeconomic groups have a lower or higher fertility rate. Often such policies have been interventionist and abusive. The most notorious natalist policies of the 20th century include those in communist Romania and communist Albania, under Nicolae Ceaușescu and Enver Hoxha respectively.
The natalist policy in Romania between 1967 and 1989 was very aggressive, including outlawing abortion and contraception, routine pregnancy tests for women, taxes on childlessness, and legal discrimination against childless people. It resulted in large numbers of children put into Romanian orphanages by parents who could not cope with raising them, street children in the 1990s, when many orphanages were closed and the children ended up on the streets, overcrowding in homes and schools, and over 9,000 women who died due to illegal abortions.
Conversely, in China the government sought to lower the fertility rate and enacted the one-child policy (1979–2015), which included abuses such as forced abortions. In India, during the national emergency of 1975, a massive compulsory sterilization drive was carried out, but it is considered a failure and is criticized as an abuse of power.
Some governments have sought to regulate which groups of society could reproduce through eugenic policies, including forced sterilizations of population groups they considered undesirable. Such policies were carried out against ethnic minorities in Europe and North America in the first half of the 20th century, and more recently in Latin America against the Indigenous population in the 1990s; in Peru, former President Alberto Fujimori has been accused of genocide and crimes against humanity as a result of a sterilization program put in place by his administration targeting indigenous people (mainly the Quechua and Aymara people).
Within these historical contexts, the notion of reproductive rights has developed. Such rights are based on the concept that each person freely decides if, when, and how many children to have - not the state or religion. According to the Office of the United Nations High Commissioner for Human Rights, reproductive rights "rest on the recognition of the basic rights of all couples and individuals to decide freely and responsibly the number, spacing and timing of their children and to have the information and means to do so, and the right to attain the highest standard of sexual and reproductive health. It also includes the right to make decisions concerning reproduction free of discrimination, coercion and violence, as expressed in human rights documents".
History and future projections.
From around 10,000 BC to the beginning of the Industrial Revolution, fertility rates around the world were high by 21st-century standards. The onset of the Industrial Revolution around 1800 AD brought about what has come to be called the demographic transition, and the TFR began a long-term decline in almost every region of the world. This has continued in the 21st century.
Before 1800.
Because all nations before the Industrial Revolution found themselves in what is now labeled the "Malthusian Trap", improvements in standards of living could be achieved only by reductions in population growth, through either increases in mortality rates (via wars, plagues, famines, etc.) or reductions in birth rates. Child mortality could reach 50%, and the need to produce workers, male heirs, and old-age caregivers required a high fertility rate by 21st-century standards.
For example, fertility rates in Europe before 1800 ranged from 4.5 in Scandinavia to 6.2 in Belgium. In 1800, the TFR in the United States was 7.0. Fertility rates in Asia during this period were similar to those in Europe. In spite of these high fertility rates, global population growth was still very slow, about 0.04% per year, mostly due to high mortality rates and the equally slow growth in the production of food.
1800 to 1950.
After 1800, the Industrial Revolution began in some places, particularly Great Britain, continental Europe, and the United States, and they underwent the beginnings of what is now called the demographic transition. Stage two of this process fueled a steady reduction in mortality rates due to improvements in public sanitation, personal hygiene and the food supply, which reduced the number of famines.
These reductions in mortality rates, particularly reductions in child mortality that increased the fraction of children surviving, together with other major societal changes such as urbanization, led to stage three of the demographic transition: a reduction in fertility rates, because there was simply no longer a need to bear so many children.
The example from the US of the correlation between child mortality and the fertility rate is illustrative. In 1800, child mortality in the US was 33%, meaning that one third of all children born would die before their fifth birthday. The TFR in 1800 was 7.0, meaning that the average female would bear seven children during their lifetime. In 1900, child mortality in the US had declined to 23%, a reduction of almost one third, and the TFR had declined to 3.9, a reduction of 44%. By 1950, child mortality had declined dramatically to 4%, a reduction of 84%, and the TFR declined to 3.2. By 2018, child mortality had declined further to 0.6% and the TFR declined to 1.9, below replacement level.
1950 to the present and projections.
The table shows that after 1965, the demographic transition spread around the world, and the global TFR began a long decline that continues in the 21st century.
The chart shows that the decline in the TFR since the 1960s has occurred in every region of the world. The global TFR is projected to continue declining for the remainder of the century, and reach a below-replacement level of 1.8 by 2100.
In 2022, the global TFR was 2.3. Because the global fertility replacement rate for 2010–2015 was 2.3, humanity has achieved or is approaching a significant milestone where the fertility rate is equal to the replacement rate.
By region.
The United Nations Population Division divides the world into six geographical regions. The table below shows the estimated TFR for each region.
In 2013, the TFR of Europe, Latin America and the Caribbean, and Northern America were below the global replacement-level fertility rate of 2.1 children per female.
Africa.
Africa has a TFR of 4.4, the highest in the world. Angola, Benin, DR Congo, Mali, and Niger have the highest TFRs. In 2023, the most populous country in Africa, Nigeria, had an estimated TFR of 4.57. In 2023, the second most populous African country, Ethiopia, had an estimated TFR of 3.92.
The poverty of Africa and its high maternal and infant mortality have led to calls from the WHO for family planning and the encouragement of smaller families.
Asia.
Eastern Asia.
Hong Kong, Macau, Singapore, South Korea, and Taiwan have the lowest-low fertility, defined as TFR at or below 1.3, and are among the lowest in the world. In 2004, Macau had a TFR below 1.0. In 2018, North Korea had the highest TFR in East Asia, at 1.95.
China.
In 2022, China's TFR was 1.09. China implemented the one-child policy in January 1979 as a drastic population planning measure to control the ever-growing population at the time. In January 2016, the policy was replaced with the two-child policy. In July 2021, a three-child policy was introduced, as China's population is aging faster than almost any other country in modern history.
Japan.
In 2022, Japan had a TFR of 1.26. Japan's population is rapidly aging due to both a long life expectancy and a low birth rate. The total population is shrinking, losing 430,000 people in 2018 to reach a total of 126.4 million. Hong Kong and Singapore mitigate similar declines through immigrant workers, but in Japan a serious demographic imbalance has developed, partly due to limited immigration.
South Korea.
In South Korea, the low birthrate is one of its most urgent socio-economic challenges. Rising housing expenses, shrinking job opportunities for younger generations, and insufficient support for families with newborns from either the government or employers are among the major explanations for its declining TFR, which fell to 0.92 in 2019. Korea has yet to find viable solutions to make the birthrate rebound, even after trying dozens of programs over a decade, including subsidizing child-rearing expenses, giving priority for public rental housing to couples with multiple children, funding day care centers, and reserving seats on public transportation for pregnant women.
In the past 20 years, South Korea has recorded some of the lowest fertility and marriage levels in the world. As of 2022, South Korea is the country with the world's lowest total fertility rate, at 0.78. In 2022, the TFR of the capital Seoul was 0.57.
Southern Asia.
Bangladesh.
The fertility rate fell from 6.8 in 1970–1975 to 2.0 in 2020, an interval of about 47 years, or a little less than two generations.
India.
The Indian fertility rate has declined significantly over the early 21st century. The Indian TFR declined from 5.2 in 1971 to 2.2 in 2018. The TFR in India declined further to 2.0 in 2019–2020, marking the first time it fell below the replacement level. In 2023, the TFR in India declined to an all-time low of 1.9.
Iran.
In the Iranian calendar year spanning March 2019 to March 2020, Iran's total fertility rate fell to 1.8.
Western Asia.
In 2019, the TFR of Turkey reached 1.88.
Europe.
(Figure: total fertility rate in Europe by region/province/federal subject in 1960.)
The average total fertility rate in the European Union (EU-27) was calculated at 1.53 children per female in 2021. In 2021, France had the highest TFR among EU countries at 1.84, followed by Czechia (1.83), Romania (1.81), Ireland (1.78) and Denmark (1.72). In 2021, Malta had the lowest TFR among the EU countries, at 1.13. Other southern European countries also had very low TFR (Portugal 1.35, Cyprus 1.39, Greece 1.43, Spain 1.19, and Italy 1.25).
In 2021, the United Kingdom had a TFR of 1.53. In 2021 estimates for the non-EU European post-Soviet states group, Russia had a TFR of 1.60, Moldova 1.59, Ukraine 1.57, and Belarus 1.52.
Emigration of young adults from Eastern Europe to the West aggravates the demographic problems of those countries. Emigration is particularly high from countries such as Bulgaria, Moldova, Romania, and Ukraine.
Latin America and the Caribbean.
In 2023, the TFR of Brazil, the most populous country in the region, was estimated at 1.75. In 2021, the second most populous country, Mexico, had an estimated TFR of 1.73. The next most populous four countries in the region had estimated TFRs of between 1.9 and 2.2 in 2023, including Colombia (1.94), Argentina (2.17), Peru (2.18), and Venezuela (2.20). Belize had the highest estimated TFR in the region at 2.59 in 2023. In 2021, Puerto Rico had the lowest, at 1.25.
Northern America.
Canada.
In 2021, the TFR of Canada was 1.43.
United States.
The total fertility rate in the United States after World War II peaked at about 3.8 children per female in the late 1950s, dropped below replacement level in the early 1970s, and by 1999 was at 2 children. Currently, fertility is below replacement among the native-born population and above replacement among immigrant families, most of whom come to the US from countries with higher fertility. However, the fertility rate of immigrants to the US has been found to decrease sharply in the second generation, correlating with improved education and income. In 2021, the US TFR was 1.664, ranging from over 2 in some states to under 1.6 in others.
Oceania.
Australia.
After World War II, Australia's TFR was approximately 3.0. In 2017, Australia's TFR was 1.74, i.e. below replacement.
|
[
{
"math_id": 0,
"text": "\\log(\\mathrm{TFR}/2)/X_m"
},
{
"math_id": 1,
"text": "X_m"
},
{
"math_id": 2,
"text": "g=\\tfrac{\\log(\\text{TFR}/2)}{\\text{X}_{m}}"
},
{
"math_id": 3,
"text": "\\text{X}_m"
},
{
"math_id": 4,
"text": "P(t) = P(0)^{(gt)}"
},
{
"math_id": 5,
"text": " \\tfrac{1}{b} "
},
{
"math_id": 6,
"text": "\\tfrac{1}{0.02}=50"
},
{
"math_id": 7,
"text": "{\\log}(\\tfrac{\\text{TFR}}{2}) = 0"
}
] |
https://en.wikipedia.org/wiki?curid=636806
|
63684284
|
Nuclear operator
|
Linear operator related to topological vector spaces
In mathematics, nuclear operators are an important class of linear operators introduced by Alexander Grothendieck in his doctoral dissertation. Nuclear operators are intimately tied to the projective tensor product of two topological vector spaces (TVSs).
Preliminaries and notation.
Throughout let "X","Y", and "Z" be topological vector spaces (TVSs) and "L" : "X" → "Y" be a linear operator (no assumption of continuity is made unless otherwise stated).
In a Hilbert space, positive compact linear operators, say "L" : "H" → "H" have a simple spectral decomposition discovered at the beginning of the 20th century by Fredholm and F. Riesz:
There is a sequence of positive numbers, decreasing and either finite or else converging to 0, formula_30 and a sequence of nonzero finite dimensional subspaces formula_31 of "H" (i = 1, 2, formula_32) with the following properties: (1) the subspaces formula_31 are pairwise orthogonal; (2) for every "i" and every formula_33, formula_34; and (3) the orthogonal of the subspace spanned by formula_35 is equal to the kernel of "L".
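A hedged finite-dimensional analogue (assuming NumPy is available) may help fix ideas: for a positive symmetric matrix, the analogous decomposition into decreasing eigenvalues and pairwise orthogonal eigenspaces can be computed directly.

import numpy as np

A = np.array([[2.0, 1.0, 0.0],
              [1.0, 2.0, 0.0],
              [0.0, 0.0, 0.5]])        # positive definite; plays the role of L

eigvals, eigvecs = np.linalg.eigh(A)   # ascending eigenvalues, orthonormal eigenvectors (columns)
for r, v in sorted(zip(eigvals, eigvecs.T), key=lambda p: -p[0]):
    # On each eigenspace, A acts as multiplication by the eigenvalue r (property (2) above).
    print(f"r = {r:.2f}, residual = {np.linalg.norm(A @ v - r * v):.1e}")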
A canonical tensor product as a subspace of the dual of Bi("X", "Y").
Let "X" and "Y" be vector spaces (no topology is needed yet) and let Bi("X", "Y") be the space of all bilinear maps defined on formula_47 and going into the underlying scalar field.
For every formula_48, let formula_49 be the canonical linear form on Bi("X", "Y") defined by formula_50 for every "u" ∈ Bi("X", "Y").
This induces a canonical map formula_51 defined by formula_52, where formula_53 denotes the algebraic dual of Bi("X", "Y").
If we denote the span of the range of "𝜒" by "X" ⊗ "Y" then it can be shown that "X" ⊗ "Y" together with "𝜒" forms a tensor product of "X" and "Y" (where "x" ⊗ "y" := "𝜒"("x", "y")).
This gives us a canonical tensor product of "X" and "Y".
If "Z" is any other vector space then the mapping Li("X" ⊗ "Y"; "Z") → Bi("X", "Y"; "Z") given by "u" ↦ "u" ∘ "𝜒" is an isomorphism of vector spaces.
In particular, this allows us to identify the algebraic dual of "X" ⊗ "Y" with the space of bilinear forms on "X" × "Y".
Moreover, if "X" and "Y" are locally convex topological vector spaces (TVSs) and if "X" ⊗ "Y" is given the π-topology then for every locally convex TVS "Z", this map restricts to a vector space isomorphism formula_54 from the space of "continuous" linear mappings onto the space of "continuous" bilinear mappings.
In particular, the continuous dual of "X" ⊗ "Y" can be canonically identified with the space B("X", "Y") of continuous bilinear forms on "X" × "Y";
furthermore, under this identification the equicontinuous subsets of B("X", "Y") are the same as the equicontinuous subsets of formula_55.
Nuclear operators between Banach spaces.
There is a canonical vector space embedding formula_56 defined by sending formula_57 to the map
formula_58
Assuming that "X" and "Y" are Banach spaces, then the map formula_59 has norm formula_60 (to see that the norm is formula_61, note that formula_62 so that formula_63). Thus it has a continuous extension to a map formula_64, where it is known that this map is not necessarily injective. The range of this map is denoted by formula_65 and its elements are called nuclear operators. formula_65 is TVS-isomorphic to formula_66 and the norm on this quotient space, when transferred to elements of formula_65 via the induced map formula_67, is called the trace-norm and is denoted by formula_68. Explicitly, if formula_69 is a nuclear operator then formula_70.
Characterization.
Suppose that "X" and "Y" are Banach spaces and that formula_71 is a continuous linear operator.
Properties.
Let "X" and "Y" be Banach spaces and let formula_71 be a continuous linear operator.
Nuclear operators between Hilbert spaces.
Nuclear operators from a Hilbert space to itself are called trace class operators.
Let "X" and "Y" be Hilbert spaces and let "N" : "X" → "Y" be a continuous linear map. Suppose that formula_85 where "R" : "X" → "X" is the square-root of formula_86 and "U" : "X" → "Y" is such that formula_87 is a surjective isometry. Then "N" is a nuclear map if and only if "R" is a nuclear map;
hence, to study nuclear maps between Hilbert spaces it suffices to restrict one's attention to positive self-adjoint operators "R".
Characterizations.
Let "X" and "Y" be Hilbert spaces and let "N" : "X" → "Y" be a continuous linear map whose absolute value is "R" : "X" → "X".
The following are equivalent:
Nuclear operators between locally convex spaces.
Suppose that "U" is a convex balanced closed neighborhood of the origin in "X" and "B" is a convex balanced bounded Banach disk in "Y" with both "X" and "Y" locally convex spaces. Let formula_104 and let formula_105 be the canonical projection. One can define the auxiliary Banach space formula_106 with the canonical map formula_107 whose image, formula_108, is dense in formula_106 as well as the auxiliary space formula_109 normed by formula_110 and with a canonical map formula_111 being the (continuous) canonical injection.
Given any continuous linear map formula_112 one obtains through composition the continuous linear map formula_113; thus we have an injection formula_114 and we henceforth use this map to identify formula_115 as a subspace of formula_116.
Definition: Let "X" and "Y" be Hausdorff locally convex spaces. The union of all formula_117 as "U" ranges over all closed convex balanced neighborhoods of the origin in "X" and "B" ranges over all bounded Banach disks in "Y", is denoted by formula_65 and its elements are call nuclear mappings of "X" into "Y".
When "X" and "Y" are Banach spaces, then this new definition of "nuclear mapping" is consistent with the original one given for the special case where "X" and "Y" are Banach spaces.
Characterizations.
Let "X" and "Y" be Hausdorff locally convex spaces and let formula_71 be a continuous linear operator.
Properties.
The following is a type of "Hahn-Banach theorem" for extending nuclear maps:
Let "X" and "Y" be Hausdorff locally convex spaces and let formula_71 be a continuous linear operator.
|
[
{
"math_id": 0,
"text": "X \\otimes_{\\pi} Y"
},
{
"math_id": 1,
"text": "X \\widehat{\\otimes}_{\\pi} Y"
},
{
"math_id": 2,
"text": "L : X \\to \\operatorname{Im} L"
},
{
"math_id": 3,
"text": "\\operatorname{Im} L"
},
{
"math_id": 4,
"text": "X \\times Y \\to Z"
},
{
"math_id": 5,
"text": "L : X \\to Y"
},
{
"math_id": 6,
"text": "X \\to X / \\ker L \\; \\xrightarrow{L_0} \\; \\operatorname{Im} L \\to Y"
},
{
"math_id": 7,
"text": "L_0\\left( x + \\ker L \\right) := L (x)"
},
{
"math_id": 8,
"text": "X'"
},
{
"math_id": 9,
"text": "x'"
},
{
"math_id": 10,
"text": "X^{\\#}"
},
{
"math_id": 11,
"text": "\\langle L(x), x \\rangle \\geq 0"
},
{
"math_id": 12,
"text": "x \\in H"
},
{
"math_id": 13,
"text": "L = r \\circ r"
},
{
"math_id": 14,
"text": "L : H_1 \\to H_2"
},
{
"math_id": 15,
"text": "L^* \\circ L"
},
{
"math_id": 16,
"text": "U : H_1 \\to H_2"
},
{
"math_id": 17,
"text": "\\operatorname{Im} R"
},
{
"math_id": 18,
"text": "U(x) = L(x)"
},
{
"math_id": 19,
"text": "x = R \\left( x_1 \\right) \\in \\operatorname{Im} R"
},
{
"math_id": 20,
"text": "U"
},
{
"math_id": 21,
"text": "\\overline{\\operatorname{Im} R}"
},
{
"math_id": 22,
"text": "\\ker R"
},
{
"math_id": 23,
"text": "U(x) = 0"
},
{
"math_id": 24,
"text": "x \\in \\ker R"
},
{
"math_id": 25,
"text": "H_1"
},
{
"math_id": 26,
"text": "U\\big\\vert_{\\operatorname{Im} R} : \\operatorname{Im} R \\to \\operatorname{Im} L"
},
{
"math_id": 27,
"text": "L = U \\circ R"
},
{
"math_id": 28,
"text": "\\Lambda : X \\to Y"
},
{
"math_id": 29,
"text": "\\Lambda(U)"
},
{
"math_id": 30,
"text": "r_1 > r_2 > \\cdots > r_k > \\cdots"
},
{
"math_id": 31,
"text": "V_i"
},
{
"math_id": 32,
"text": "\\ldots"
},
{
"math_id": 33,
"text": "x \\in V_i"
},
{
"math_id": 34,
"text": "L(x) = r_i x"
},
{
"math_id": 35,
"text": "\\bigcup_{i} V_i"
},
{
"math_id": 36,
"text": "X_{\\sigma\\left(X, X'\\right)}"
},
{
"math_id": 37,
"text": "X_\\sigma "
},
{
"math_id": 38,
"text": "X_{\\sigma\\left(X', X\\right)}"
},
{
"math_id": 39,
"text": "X'_\\sigma"
},
{
"math_id": 40,
"text": "x_0 \\in X"
},
{
"math_id": 41,
"text": "X' \\to \\mathbb{R}"
},
{
"math_id": 42,
"text": "\\lambda \\mapsto \\lambda(x_0)"
},
{
"math_id": 43,
"text": "X_{b\\left(X, X'\\right)}"
},
{
"math_id": 44,
"text": "X_{b}"
},
{
"math_id": 45,
"text": "X_{b\\left(X', X\\right)}"
},
{
"math_id": 46,
"text": "X'_b"
},
{
"math_id": 47,
"text": "X \\times Y"
},
{
"math_id": 48,
"text": "(x, y) \\in X \\times Y"
},
{
"math_id": 49,
"text": "\\chi_{(x, y)}"
},
{
"math_id": 50,
"text": "\\chi_{(x, y)}(u) := u(x, y)"
},
{
"math_id": 51,
"text": "\\chi : X \\times Y \\to \\mathrm{Bi}(X, Y)^{\\#}"
},
{
"math_id": 52,
"text": "\\chi(x, y) := \\chi_{(x, y)}"
},
{
"math_id": 53,
"text": "\\mathrm{Bi}(X, Y)^{\\#}"
},
{
"math_id": 54,
"text": "L(X \\otimes_{\\pi} Y; Z) \\to B(X, Y; Z)"
},
{
"math_id": 55,
"text": "(X \\otimes_{\\pi} Y)'"
},
{
"math_id": 56,
"text": "I : X' \\otimes Y \\to L(X; Y)"
},
{
"math_id": 57,
"text": "z := \\sum_{i}^n x_i' \\otimes y_i"
},
{
"math_id": 58,
"text": "x \\mapsto \\sum_{i}^n x_i'(x) y_i ."
},
{
"math_id": 59,
"text": "I : X'_b \\otimes_{\\pi} Y \\to L_b(X; Y)"
},
{
"math_id": 60,
"text": "1"
},
{
"math_id": 61,
"text": "\\leq 1"
},
{
"math_id": 62,
"text": "\\| I(z) \\| = \\sup_{\\| x \\| \\leq 1} \\| I(z)(x) \\| = \\sup_{\\| x \\| \\leq 1} \\left\\| \\sum_{i=1}^{n} x_i'(x) y_i \\right\\| \\leq \\sup_{\\| x \\| \\leq 1} \\sum_{i=1}^{n} \\left\\| x_i' \\right\\| \\|x\\| \\left\\| y_i \\right\\| \\leq \\sum_{i=1}^{n} \\left\\| x_i' \\right\\| \\left\\| y_i \\right\\|"
},
{
"math_id": 63,
"text": "\\left\\| I(z) \\right\\| \\leq \\left\\| z \\right\\|_{\\pi}"
},
{
"math_id": 64,
"text": "\\hat{I} : X'_b \\widehat{\\otimes}_{\\pi} Y \\to L_b(X; Y)"
},
{
"math_id": 65,
"text": "L^1(X; Y)"
},
{
"math_id": 66,
"text": "\\left( X'_b \\widehat{\\otimes}_{\\pi} Y \\right) / \\ker \\hat{I}"
},
{
"math_id": 67,
"text": "\\hat{I} : \\left( X'_b \\widehat{\\otimes}_{\\pi} Y \\right) / \\ker \\hat{I} \\to L^1(X; Y)"
},
{
"math_id": 68,
"text": "\\| \\cdot \\|_{\\operatorname{Tr}}"
},
{
"math_id": 69,
"text": "T : X \\to Y"
},
{
"math_id": 70,
"text": "\\left\\| T \\right\\|_{\\operatorname{Tr}} := \\inf_{z \\in \\hat{I}^{-1}\\left( T \\right)} \\left\\| z \\right\\|_{\\pi} "
},
{
"math_id": 71,
"text": "N : X \\to Y"
},
{
"math_id": 72,
"text": "\\left( x_i' \\right)_{i=1}^{\\infty}"
},
{
"math_id": 73,
"text": "\\left( y_i \\right)_{i=1}^{\\infty}"
},
{
"math_id": 74,
"text": "Y"
},
{
"math_id": 75,
"text": "\\left( c_i \\right)_{i=1}^{\\infty}"
},
{
"math_id": 76,
"text": "\\sum_{i=1}^{\\infty} |c_i| < \\infty"
},
{
"math_id": 77,
"text": "N"
},
{
"math_id": 78,
"text": "N(x) = \\sum_{i=1}^{\\infty} c_i x'_i(x) y_i"
},
{
"math_id": 79,
"text": "x \\in X"
},
{
"math_id": 80,
"text": "\\| N \\|_{\\operatorname{Tr}}"
},
{
"math_id": 81,
"text": "\\sum_{i=1}^{\\infty} | c_i |"
},
{
"math_id": 82,
"text": "{}^{t}N : Y'_{b} \\to X'_{b}"
},
{
"math_id": 83,
"text": "\\left\\| {}^{t}N \\right\\|_{\\operatorname{Tr}} = \\left\\| N \\right\\|_{\\operatorname{Tr}}"
},
{
"math_id": 84,
"text": "\\left\\| {}^{t}N\\right \\|_{\\operatorname{Tr}} \\leq \\left\\| N \\right\\|_{\\operatorname{Tr}}"
},
{
"math_id": 85,
"text": "N = UR"
},
{
"math_id": 86,
"text": "N^* N"
},
{
"math_id": 87,
"text": "U\\big\\vert_{\\operatorname{Im} R} : \\operatorname{Im} R \\to \\operatorname{Im} N"
},
{
"math_id": 88,
"text": "\\operatorname{Tr} R"
},
{
"math_id": 89,
"text": "\\operatorname{Tr} R = \\| N \\|_{\\operatorname{Tr}}"
},
{
"math_id": 90,
"text": "\\lambda_1 > \\lambda_2 > \\cdots"
},
{
"math_id": 91,
"text": "V_1, V_2, \\ldots"
},
{
"math_id": 92,
"text": "\\operatorname{span}\\left( V_1 \\cup V_2 \\cup \\cdots \\right)"
},
{
"math_id": 93,
"text": "\\ker N"
},
{
"math_id": 94,
"text": "R(x) = \\lambda_k x"
},
{
"math_id": 95,
"text": "x \\in V_k"
},
{
"math_id": 96,
"text": "\\operatorname{Tr} R := \\sum_{k} \\lambda_k \\dim V_k"
},
{
"math_id": 97,
"text": "{}^{t}N : Y'_b \\to X'_{b}"
},
{
"math_id": 98,
"text": "\\| {}^t N \\|_{\\operatorname{Tr}} = \\| N \\|_{\\operatorname{Tr}}"
},
{
"math_id": 99,
"text": "(x_i)_{i=1}^\\infty "
},
{
"math_id": 100,
"text": "(y_i)_{i=1}^\\infty "
},
{
"math_id": 101,
"text": "\\left( \\lambda_i \\right)_{i=1}^\\infty "
},
{
"math_id": 102,
"text": "\\ell^1"
},
{
"math_id": 103,
"text": "N(x) = \\sum_i \\lambda_i \\langle x, x_i \\rangle y_i"
},
{
"math_id": 104,
"text": "p_U(x) = \\inf_{r > 0, x \\in r U} r"
},
{
"math_id": 105,
"text": "\\pi : X \\to X/p_U^{-1}(0)"
},
{
"math_id": 106,
"text": "\\hat{X}_U"
},
{
"math_id": 107,
"text": "\\hat{\\pi}_U : X \\to \\hat{X}_U"
},
{
"math_id": 108,
"text": "X/p_U^{-1}(0)"
},
{
"math_id": 109,
"text": "F_B = \\operatorname{span} B"
},
{
"math_id": 110,
"text": "p_B(y) = \\inf_{r > 0, y \\in r B} r"
},
{
"math_id": 111,
"text": "\\iota : F_B \\to F"
},
{
"math_id": 112,
"text": "T : \\hat{X}_U \\to Y_B"
},
{
"math_id": 113,
"text": "\\hat{\\pi}_U \\circ T \\circ \\iota : X \\to Y"
},
{
"math_id": 114,
"text": "L \\left( \\hat{X}_U; Y_B \\right) \\to L(X; Y)"
},
{
"math_id": 115,
"text": "L \\left( \\hat{X}_U; Y_B \\right)"
},
{
"math_id": 116,
"text": "L(X; Y)"
},
{
"math_id": 117,
"text": "L^1\\left( \\hat{X}_U; Y_B \\right)"
},
{
"math_id": 118,
"text": "M : W \\to X"
},
{
"math_id": 119,
"text": "P : Y \\to Z"
},
{
"math_id": 120,
"text": "N \\circ M : W \\to Y"
},
{
"math_id": 121,
"text": "P \\circ N : X \\to Z"
},
{
"math_id": 122,
"text": "P \\circ N \\circ M : W \\to Z"
},
{
"math_id": 123,
"text": "\\left\\| P \\circ N \\circ M\\right\\|_{\\operatorname{Tr}} \\leq \\left\\| P \\right\\| \\left\\| N \\right\\|_{\\operatorname{Tr}} \\| \\left\\| M \\right\\|"
},
{
"math_id": 124,
"text": "\\left\\| {}^{t}N \\right\\|_{\\operatorname{Tr}} \\leq \\left\\| N \\right\\|_{\\operatorname{Tr}}"
},
{
"math_id": 125,
"text": "\\hat{X}"
},
{
"math_id": 126,
"text": "\\hat{N} : \\hat{X} \\to Y"
},
{
"math_id": 127,
"text": "N(U) \\subseteq B"
},
{
"math_id": 128,
"text": "\\overline{N}_0 : \\hat{X}_U \\to Y_B"
},
{
"math_id": 129,
"text": "\\overline{N}_0"
},
{
"math_id": 130,
"text": "N_0 : X_U \\to Y_B"
},
{
"math_id": 131,
"text": "N = \\operatorname{In}_B \\circ N_0 \\circ \\pi_U"
},
{
"math_id": 132,
"text": "\\operatorname{In}_B : Y_B \\to Y"
},
{
"math_id": 133,
"text": "\\pi_U : X \\to X / p_U^{-1}(0)"
},
{
"math_id": 134,
"text": "B_1"
},
{
"math_id": 135,
"text": "B_2"
},
{
"math_id": 136,
"text": "f : X \\to B_1"
},
{
"math_id": 137,
"text": "n : B_1 \\to B_2"
},
{
"math_id": 138,
"text": "g : B_2 \\to Y"
},
{
"math_id": 139,
"text": "N = g \\circ n \\circ f"
},
{
"math_id": 140,
"text": "B \\subseteq Y"
},
{
"math_id": 141,
"text": "E : X \\to Z"
},
{
"math_id": 142,
"text": "\\tilde{N} : Z \\to Y"
},
{
"math_id": 143,
"text": "\\tilde{N} \\circ E = N"
},
{
"math_id": 144,
"text": "\\epsilon > 0"
},
{
"math_id": 145,
"text": "\\tilde{N}"
},
{
"math_id": 146,
"text": "\\| \\tilde{N} \\|_{\\operatorname{Tr}} \\leq \\| N \\|_{\\operatorname{Tr}} + \\epsilon"
},
{
"math_id": 147,
"text": "\\pi : Z \\to Z / \\operatorname{Im} E"
},
{
"math_id": 148,
"text": "Z / \\operatorname{Im} E"
},
{
"math_id": 149,
"text": "\\pi"
},
{
"math_id": 150,
"text": "\\operatorname{Im} E"
},
{
"math_id": 151,
"text": "N : Y \\to Z / \\operatorname{Im} E"
},
{
"math_id": 152,
"text": "\\tilde{N} : Y \\to Z"
},
{
"math_id": 153,
"text": "\\pi \\circ \\tilde{N} = N"
},
{
"math_id": 154,
"text": "\\left\\| \\tilde{N} \\right\\|_{\\operatorname{Tr}} \\leq \\left\\| N \\right\\|_{\\operatorname{Tr}} + \\epsilon"
},
{
"math_id": 155,
"text": "L( X; Y )"
},
{
"math_id": 156,
"text": "X' \\otimes Y"
}
] |
https://en.wikipedia.org/wiki?curid=63684284
|
6368430
|
Linear programming relaxation
|
In mathematics, the relaxation of a (mixed) integer linear program is the problem that arises by removing the integrality constraint of each variable.
For example, in a 0–1 integer program, all constraints are of the form
formula_0.
The relaxation of the original integer program instead uses a collection of linear constraints
formula_1
The resulting relaxation is a linear program, hence the name. This relaxation technique transforms an NP-hard optimization problem (integer programming) into a related problem that is solvable in polynomial time (linear programming); the solution to the relaxed linear program can be used to gain information about the solution to the original integer program.
Example.
Consider the set cover problem, the linear programming relaxation of which was first considered by Lovász in 1975. In this problem, one is given as input a family of sets "F" = {"S"0, "S"1, ...}; the task is to find a subfamily, with as few sets as possible, having the same union as "F".
To formulate this as a 0–1 integer program, form an indicator variable "xi" for each set "Si", that takes the value 1 when "Si" belongs to the chosen subfamily and 0 when it does not. Then a valid cover can be described by an assignment of values to the indicator variables satisfying the constraints
formula_2
(that is, only the specified indicator variable values are allowed) and, for each element "ej" of the union of "F",
formula_3
(that is, each element is covered). The minimum set cover corresponds to the assignment of indicator variables satisfying these constraints and minimizing the linear objective function
formula_4
The linear programming relaxation of the set cover problem describes a "fractional cover" in which the input sets are assigned weights such that the total weight of the sets containing each element is at least one and the total weight of all sets is minimized.
As a specific example of the set cover problem, consider the instance "F" = {{"a", "b"}, {"a", "c"}, {"b", "c"}}. There are three optimal set covers, each of which includes two of the three given sets. Thus, the optimal value of the objective function of the corresponding 0–1 integer program is 2, the number of sets in the optimal covers. However, there is a fractional solution in which each set is assigned the weight 1/2, and for which the total value of the objective function is 3/2. Thus, in this example, the linear programming relaxation has a value differing from that of the unrelaxed 0–1 integer program.
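The following minimal sketch, assuming SciPy is available, solves the linear programming relaxation of this instance numerically; variable i corresponds to set "Si".

from scipy.optimize import linprog

# Covering constraints: for each element e, sum over {i : e in S_i} of x_i >= 1,
# rewritten as -sum(...) <= -1 for linprog.
A_ub = [[-1, -1,  0],   # element a lies in S0 = {a, b} and S1 = {a, c}
        [-1,  0, -1],   # element b lies in S0 = {a, b} and S2 = {b, c}
        [ 0, -1, -1]]   # element c lies in S1 = {a, c} and S2 = {b, c}
b_ub = [-1, -1, -1]
c = [1, 1, 1]           # minimize the total weight of the chosen sets

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, 1)] * 3, method="highs")
print(res.x, res.fun)   # x = [0.5, 0.5, 0.5] with objective value 1.5, versus 2 for the integer optimum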
Solution quality of relaxed and original programs.
The linear programming relaxation of an integer program may be solved using any standard linear programming technique. If it happens that, in the optimal solution, all variables have integer values, then it will also be an optimal solution to the original integer program. However, this is generally not true, except for some special cases (e.g. problems with totally unimodular matrix specifications).
In all cases, though, the solution quality of the linear program is at least as good as that of the integer program, because any integer program solution would also be a valid linear program solution. That is, in a maximization problem, the relaxed program has a value greater than or equal to that of the original program, while in a minimization problem such as the set cover problem the relaxed program has a value smaller than or equal to that of the original program. Thus, the relaxation provides an optimistic bound on the integer program's solution.
In the example instance of the set cover problem described above, in which the relaxation has an optimal solution value of 3/2, we can deduce that the optimal solution value of the unrelaxed integer program is at least as large. Since the set cover problem has solution values that are integers (the numbers of sets chosen in the subfamily), the optimal solution quality must be at least as large as the next larger integer, 2. Thus, in this instance, despite having a different value from the unrelaxed problem, the linear programming relaxation gives us a tight lower bound on the solution quality of the original problem.
Approximation and integrality gap.
Linear programming relaxation is a standard technique for designing approximation algorithms for hard optimization problems. In this application, an important concept is the integrality gap, the maximum ratio between the solution quality of the integer program and of its relaxation. In an instance of a minimization problem, if the real minimum (the minimum of the integer problem) is formula_5, and the relaxed minimum (the minimum of the linear programming relaxation) is formula_6, then the integrality gap of that instance is formula_7. In a maximization problem the fraction is reversed. The integrality gap is always at least 1. In the example above, the instance "F" = {{"a", "b"}, {"a", "c"}, {"b", "c"}} shows an integrality gap of 4/3.
Typically, the integrality gap translates into the approximation ratio of an approximation algorithm. This is because an approximation algorithm relies on some rounding strategy that finds, for every relaxed solution of size formula_6, an integer solution of size at most formula_8 (where "RR" is the rounding ratio). If there is an instance with integrality gap "IG", then "every" rounding strategy will return, on that instance, a rounded solution of size at least formula_9. Therefore necessarily formula_10. The rounding ratio "RR" is only an upper bound on the approximation ratio, so in theory the actual approximation ratio may be lower than "IG", but this may be hard to prove. In practice, a large "IG" usually implies that the approximation ratio in the linear programming relaxation might be bad, and it may be better to look for other approximation schemes for that problem.
For the set cover problem, Lovász proved that the integrality gap for an instance with "n" elements is "Hn", the "n"th harmonic number. One can turn the linear programming relaxation for this problem into an approximate solution of the original unrelaxed set cover instance via the technique of randomized rounding. Given a fractional cover, in which each set "Si" has weight "wi", choose randomly the value of each 0–1 indicator variable "xi" to be 1 with probability "wi" × (ln "n" + 1), and 0 otherwise. Then any element "ej" has probability less than 1/("e"×"n") of remaining uncovered, so with constant probability all elements are covered. The cover generated by this technique has total size, with high probability, (1+o(1))(ln "n")"W", where "W" is the total weight of the fractional solution. Thus, this technique leads to a randomized approximation algorithm that finds a set cover within a logarithmic factor of the optimum. As Young showed in 1995, both the random part of this algorithm and the need to construct an explicit solution to the linear programming relaxation may be eliminated using the method of conditional probabilities, leading to a deterministic greedy algorithm for set cover, known already to Lovász, that repeatedly selects the set that covers the largest possible number of remaining uncovered elements. This greedy algorithm approximates the set cover to within the same "Hn" factor that Lovász proved as the integrality gap for set cover. There are strong complexity-theoretic reasons for believing that no polynomial time approximation algorithm can achieve a significantly better approximation ratio.
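The rounding step described above can be sketched as follows; this is a hedged illustration in which the fractional weights and the set system are assumed to come from a solved relaxation such as the instance above.

import math, random

def round_cover(weights, sets, universe):
    """Pick each set with probability min(1, w_i * (ln n + 1)), where n is the number of elements."""
    boost = math.log(len(universe)) + 1
    chosen = [i for i, w in enumerate(weights) if random.random() < min(1.0, w * boost)]
    covered = set().union(*(sets[i] for i in chosen)) if chosen else set()
    return chosen if covered == set(universe) else None   # rare failure: retry or patch greedily

sets = [{"a", "b"}, {"a", "c"}, {"b", "c"}]
print(round_cover([0.5, 0.5, 0.5], sets, {"a", "b", "c"}))   # each inclusion probability caps at 1 here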
Similar randomized rounding techniques, and derandomized approximation algorithms, may be used in conjunction with linear programming relaxation to develop approximation algorithms for many other problems, as described by Raghavan, Thompson, and Young.
Branch and bound for exact solutions.
As well as its uses in approximation, linear programming plays an important role in branch and bound algorithms for computing the true optimum solution to hard optimization problems.
If some variables in the optimal solution have fractional values, we may start a branch and bound type process, in which we recursively solve subproblems in which some of the fractional variables have their values fixed to either zero or one. In each step of an algorithm of this type, we consider a subproblem of the original 0–1 integer program in which some of the variables have values assigned to them, either 0 or 1, and the remaining variables are still free to take on either value. In subproblem "i", let "Vi" denote the set of remaining variables. The process begins by considering a subproblem in which no variable values have been assigned, and in which "V0" is the whole set of variables of the original problem. Then, for each subproblem "i", it performs the following steps.
Although it is difficult to prove theoretical bounds on the performance of algorithms of this type, they can be very effective in practice.
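A compact, hedged sketch of such an algorithm for a 0–1 minimization problem is shown below, assuming SciPy's LP solver is available; fixing a variable to 0 or 1 is expressed through its bounds.

from scipy.optimize import linprog

def branch_and_bound(c, A_ub, b_ub, n, fixed=None, best=(float("inf"), None)):
    fixed = fixed or {}
    bounds = [(fixed.get(i, 0), fixed.get(i, 1)) for i in range(n)]   # fixed variables get (v, v)
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds, method="highs")
    if not res.success or res.fun >= best[0]:      # infeasible, or the LP bound cannot improve on the incumbent
        return best
    frac = next((i for i, v in enumerate(res.x) if 1e-6 < v < 1 - 1e-6), None)
    if frac is None:                               # the relaxed optimum is already integral
        return (res.fun, [round(v) for v in res.x])
    for val in (0, 1):                             # branch on one fractional variable
        best = branch_and_bound(c, A_ub, b_ub, n, {**fixed, frac: val}, best)
    return best

# The set cover instance above: expected output (2.0, [0, 1, 1]), one of the three optimal covers.
print(branch_and_bound([1, 1, 1], [[-1, -1, 0], [-1, 0, -1], [0, -1, -1]], [-1, -1, -1], 3))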
Cutting plane method.
Two 0–1 integer programs that are equivalent, in that they have the same objective function and the same set of feasible solutions, may have quite different linear programming relaxations: a linear programming relaxation can be viewed geometrically, as a convex polytope that includes all feasible solutions and excludes all other 0–1 vectors, and infinitely many different polytopes have this property. Ideally, one would like to use as a relaxation the convex hull of the feasible solutions; linear programming on this polytope would automatically yield the correct solution to the original integer program. However, in general, this polytope will have exponentially many facets and be difficult to construct. Typical relaxations, such as the relaxation of the set cover problem discussed earlier, form a polytope that strictly contains the convex hull and has vertices other than the 0–1 vectors that solve the unrelaxed problem.
The cutting-plane method for solving 0–1 integer programs, first introduced for the traveling salesman problem by Dantzig, Fulkerson, and Johnson in 1954 and generalized to other integer programs by Gomory in 1958, takes advantage of this multiplicity of possible relaxations by finding a sequence of relaxations that more tightly constrain the solution space until eventually an integer solution is obtained. This method starts from any relaxation of the given program, and finds an optimal solution using a linear programming solver. If the solution assigns integer values to all variables, it is also the optimal solution to the unrelaxed problem. Otherwise, an additional linear constraint (a "cutting plane" or "cut") is found that separates the resulting fractional solution from the convex hull of the integer solutions, and the method repeats on this new more tightly constrained problem.
Problem-specific methods are needed to find the cuts used by this method. It is especially desirable to find cutting planes that form facets of the convex hull of the integer solutions, as these planes are the ones that most tightly constrain the solution space; there always exists a cutting plane of this type that separates any fractional solution from the integer solutions. Much research has been performed on methods for finding these facets for different types of combinatorial optimization problems, under the framework of polyhedral combinatorics.
The related branch and cut method combines the cutting plane and branch and bound methods. In any subproblem, it runs the cutting plane method until no more cutting planes can be found, and then branches on one of the remaining fractional variables.
|
[
{
"math_id": 0,
"text": "x_i\\in\\{0,1\\}"
},
{
"math_id": 1,
"text": "0 \\le x_i \\le 1."
},
{
"math_id": 2,
"text": "\\textstyle x_i\\in\\{0,1\\}"
},
{
"math_id": 3,
"text": "\\textstyle \\sum_{\\{i\\mid e_j\\in S_i\\}} x_i \\ge 1"
},
{
"math_id": 4,
"text": "\\textstyle \\min \\sum_i x_i."
},
{
"math_id": 5,
"text": "M_\\text{int}"
},
{
"math_id": 6,
"text": "M_\\text{frac}"
},
{
"math_id": 7,
"text": "IG = \\frac{M_\\text{int}}{M_\\text{frac}}"
},
{
"math_id": 8,
"text": "RR\\cdot M_\\text{frac}"
},
{
"math_id": 9,
"text": "M_\\text{int} = IG\\cdot M_\\text{frac}"
},
{
"math_id": 10,
"text": "RR \\geq IG"
}
] |
https://en.wikipedia.org/wiki?curid=6368430
|
63686596
|
Pipe plug
|
Rubber pneumatic tool for the temporary sealing of pipelines
A pipe plug is a tool or material for the temporary sealing of pipelines in sewerage and other liquid and gas transportation systems; typically for maintenance or non-pressurized line testing. A pipe plug is also known as an inflatable plug, mechanical pipe plug, pipe test plug, pipeline isolation plug, expandable plug, pipe bung, pipe stopper, pipe packer, pneumatic pipe plug or pipe balloon depending on the region where it is used.
History.
The origin is debated, but the earliest patents related to plugging pipes date back to the 1890s. The first patent for a pipe plug as we know it today was granted to Oscar F. Anderson and published in 1952, and the first patent for inflatable plugs was published in 1965.
Usage.
Pipe plugs are often confused with smaller plumbing accessories. However, as an industrial tool, pipe plugs are used in larger infrastructure pipelines. Pipe plugs provide a trenchless method for the maintenance of drains and sewers, and for the construction and testing of non-pressurized gravity pipelines.
Pipe plugs serve three main purposes: temporarily sealing or stopping the fluid flow in a pipeline, leak testing, and bypassing the flow. They are also used for blocking the ends of pipes to prevent the entry of dirt and other contaminants during construction, maintenance or repair of pipelines.
Leak tests of gravity pipelines using a pipe plug are performed in accordance with the requirements of the European Standard EN 1610 for both water and air tests.
Inflatable pipe plugs come in a wide variety of types, each for a different purpose:
Back pressure.
Back pressure is a major issue for users of pipe plugs on site. It refers to the force that a pipe plug must hold during the process. Pipe plugs are usually subject to a huge amount of back pressure in the pipeline, so the back pressure must be calculated accurately in order to prevent the plug from slipping inside the pipe. Slipping of the pipe plug may have hazardous results. Though mechanical and inflatable pipe plugs can rely upon their seals for restraint, a secondary mechanical restraint is typically required to prevent slippage, in the form of friction screw dogs that engage the pipe, strong back supports, anchors, or other user-added blocking methods.
Formula of back pressure calculation.
<templatestyles src="Block indent/styles.css"/>"Fback = Surface x back pressure"
formula_0
formula_1
formula_2
formula_3
<templatestyles src="Block indent/styles.css"/>formula_4: Back pressure force
Surface: formula_5
formula_6: Radius of the pipe
formula_7: Friction coefficient
formula_8: Friction force
formula_9: Back pressure
formula_10: Inflation pressure
formula_11: Contact length
formula_12: Weight of the plug
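A hedged numerical sketch of this force balance follows; every input value below is an assumed, purely illustrative figure in SI units.

import math

r, mu    = 0.30, 0.7      # pipe radius [m], plug-to-pipe friction coefficient (assumed)
P_i, L_c = 1.5e5, 0.50    # inflation pressure [Pa], contact length [m] (assumed)
W_w, P_b = 300.0, 2.0e5   # plug weight [N], back pressure to be held [Pa] (assumed)

F_b = math.pi * r**2 * P_b                          # force pushing the plug along the pipe
F_f = mu * P_i * 2 * math.pi * r * L_c + mu * W_w   # friction force restraining the plug
print(f"F_back = {F_b / 1e3:.1f} kN, F_friction = {F_f / 1e3:.1f} kN")
print("plug holds" if F_f >= F_b else "risk of slipping: add mechanical restraint")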
Accessories.
Pipe plugs are used with supplementary accessories such as air and water hoses, air and pressure control devices, gauges, adapters and chains depending on the type of the pipe plug and the process.
Auxiliary equipment such as compressors for inflating the pipe plugs, water tanks for filling the pipeline, and in some cases pumps must also be used.
Maintenance.
For a longer life cycle, pipe plugs should be cleaned with soap and water before and after each use. Chemical solvents, hydrocarbons, petroleum fluids or other aggressive substances shouldn't be used while cleaning, since they may damage or destroy the rubber of the pipe plug. After cleaning, pipe plugs should be flushed with clean water and left to dry at room temperature before using in the pipelines.
Storage conditions are determined by the ISO 2230 standard. Pipe plugs are to be stored in a dry space at 15–25 °C, away from direct sunlight and circulating air. Long-term contact with liquids, metals and other rubber materials should be avoided.
|
[
{
"math_id": 0,
"text": "F_b = \\pi.r^2 . P_b"
},
{
"math_id": 1,
"text": "F_f = (\\mu . P_i . 2\\pi.r . L_c) + (\\mu . W_w)"
},
{
"math_id": 2,
"text": "F_b = F_f"
},
{
"math_id": 3,
"text": "P_b = \\frac{\\mu. P_i . L_c}r + \\frac{W_w.\\mu}{2\\pi.r^2}"
},
{
"math_id": 4,
"text": "F_b"
},
{
"math_id": 5,
"text": "\\pi.r^2"
},
{
"math_id": 6,
"text": "r"
},
{
"math_id": 7,
"text": "\\mu"
},
{
"math_id": 8,
"text": "F_f"
},
{
"math_id": 9,
"text": "P_b"
},
{
"math_id": 10,
"text": "P_i"
},
{
"math_id": 11,
"text": "L_c"
},
{
"math_id": 12,
"text": "W_w"
}
] |
https://en.wikipedia.org/wiki?curid=63686596
|
63687901
|
Thermogravitational cycle
|
Type of thermodynamic fluid effect
A thermogravitational cycle is a reversible thermodynamic cycle using the gravitational works of weight and buoyancy to respectively compress and expand a working fluid.
Theoretical framework.
Consider a column filled with a transporting medium and a balloon filled with a working fluid. Due to the hydrostatic pressure of the transporting medium, the pressure inside the column increases along the "z" axis (see figure). Initially, the balloon is inflated by the working fluid at temperature "T"C and pressure "P"0 and is located at the top of the column. A thermogravitational cycle is decomposed into four ideal steps:
For a thermogravitational cycle to occur, the balloon has to be denser than the transporting medium during 1→2 step and less dense during 3→4 step. If these conditions are not naturally satisfied by the working fluid, a weight can be attached to the balloon to increase its effective mass density.
Applications and examples.
An experimental device working according to thermogravitational cycle principle was developed in a laboratory of the University of Bordeaux and patented in France. Such thermogravitational electric generator is based on inflation and deflation cycles of an elastic bag made of nitrile elastomer cut from a glove finger. The bag is filled with a volatile working fluid that has low chemical affinity for the elastomer such as perfluorohexane (C6F14). It is attached to a strong NdFeB spherical magnet that acts both as a weight and for transducing the mechanical energy into voltage. The glass cylinder is filled with water acting as transporting fluid. It is heated at the bottom by a hot circulating water-jacket, and cooled down at the top by a cold water bath. Due to its low boiling point temperature (56 °C), the perfluorohexane drop contained in the bag vaporizes and inflates the balloon. Once its density is lower than the water density, the balloon raises according to Archimedes’ principle. Cooled down at the column top, the balloon deflates partially until its gets effectively denser than water and starts to fall down. As seen from the videos, the cyclic motion has a period of several seconds. These oscillations can last for several hours and their duration is limited only by leaks of the working fluid through the rubbery membrane. Each time the magnet goes through the coil produces a variation in the magnetic flux. An electromotive force is created and detected through an oscilloscope. It has been estimated that the average power of this machine is 7 μW and its efficiency is 4.8 x 10−6. Although these values are very small, this experiment brings a proof of principle of renewable energy device for harvesting electricity from a weak waste heat source without need of other external energy supply, e.g. for a compressor in a regular heat engine. The experiment was successfully reproduced by undergraduate students in preparatory classes of the Lycée Hoche in Versailles.
Several other applications based on the thermogravitational cycles could be found in the literature. For example:
Cycle efficiency.
The efficiency "η" of a thermogravitational cycle depends on the thermodynamic processes the working fluid goes through during each step of the cycle. Below some examples:
formula_0
formula_1
formula_2
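As a minimal numerical illustration, the snippet below evaluates the first of the expressions above, which has the Carnot form η = 1 − TC/TH, for a pair of hypothetical reservoir temperatures chosen only to show the order of magnitude for a small temperature difference.

```python
# Hypothetical reservoir temperatures in kelvin (illustrative only).
T_C = 293.0   # cold side, about 20 degC
T_H = 333.0   # hot side, about 60 degC

eta = 1.0 - T_C / T_H        # Carnot-type efficiency, eta = 1 - T_C/T_H
print(f"eta = {eta:.3f}")    # about 0.12 for this temperature pair
```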
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "\\eta = 1 - {T_C \\over T_H}"
},
{
"math_id": 1,
"text": "\\eta = {(h_3 - h_4) - (h_2 - h_1) \\over h_3 - h_2}"
},
{
"math_id": 2,
"text": "\\eta = 1 - \\left ( \\frac{P_0}{P_h} \\right )^{\\gamma \\over \\gamma - 1}"
}
] |
https://en.wikipedia.org/wiki?curid=63687901
|
63688717
|
Ptak space
|
A locally convex topological vector space (TVS) formula_0 is "B"-complete or a Ptak space if every subspace formula_1 is closed in the weak-* topology on formula_2 (i.e. formula_3 or formula_4) whenever formula_5 is closed in formula_6 (when formula_6 is given the subspace topology from formula_3) for each equicontinuous subset formula_7.
"B"-completeness is related to formula_8-completeness, where a locally convex TVS formula_0 is formula_8-complete if every dense subspace formula_1 is closed in formula_3 whenever formula_5 is closed in formula_6 (when formula_6 is given the subspace topology from formula_3) for each equicontinuous subset formula_7.
Characterizations.
Throughout this section, formula_0 will be a locally convex topological vector space (TVS).
The following are equivalent:
* A linear map formula_10 is called nearly open if for each neighborhood formula_11 of the origin in formula_0, formula_12 is dense in some neighborhood of the origin in formula_13
The following are equivalent:
Properties.
Every Ptak space is complete. However, there exist complete Hausdorff locally convex spaces that are not Ptak spaces.
<templatestyles src="Math_theorem/styles.css" />
Homomorphism Theorem — Every continuous linear map from a Ptak space onto a barreled space is a topological homomorphism.
Let formula_14 be a nearly open linear map whose domain is dense in a formula_8-complete space formula_0 and whose range is a locally convex space formula_9. Suppose that the graph of formula_14 is closed in formula_15. If formula_14 is injective or if formula_0 is a Ptak space then formula_14 is an open map.
Examples and sufficient conditions.
There exist Br-complete spaces that are not B-complete.
Every Fréchet space is a Ptak space. The strong dual of a reflexive Fréchet space is a Ptak space.
Every closed vector subspace of a Ptak space (resp. a Br-complete space) is a Ptak space (resp. a formula_8-complete space), and every Hausdorff quotient of a Ptak space is a Ptak space.
If every Hausdorff quotient of a TVS formula_0 is a Br-complete space then formula_0 is a "B"-complete space.
If formula_0 is a locally convex space such that there exists a continuous nearly open surjection formula_16 from a Ptak space, then formula_0 is a Ptak space.
If a TVS formula_0 has a closed hyperplane that is B-complete (resp. Br-complete) then formula_0 is B-complete (resp. Br-complete).
Notes.
<templatestyles src="Reflist/styles.css" />
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "X"
},
{
"math_id": 1,
"text": "Q \\subseteq X^{\\prime}"
},
{
"math_id": 2,
"text": "X^{\\prime}"
},
{
"math_id": 3,
"text": "X^{\\prime}_{\\sigma}"
},
{
"math_id": 4,
"text": "\\sigma\\left(X^{\\prime}, X \\right)"
},
{
"math_id": 5,
"text": "Q \\cap A"
},
{
"math_id": 6,
"text": "A"
},
{
"math_id": 7,
"text": "A \\subseteq X^{\\prime}"
},
{
"math_id": 8,
"text": "B_r"
},
{
"math_id": 9,
"text": "Y"
},
{
"math_id": 10,
"text": "u : X \\to Y"
},
{
"math_id": 11,
"text": "U"
},
{
"math_id": 12,
"text": "u(U)"
},
{
"math_id": 13,
"text": "u(X)."
},
{
"math_id": 14,
"text": "u"
},
{
"math_id": 15,
"text": "X \\times Y"
},
{
"math_id": 16,
"text": "u : P \\to X"
}
] |
https://en.wikipedia.org/wiki?curid=63688717
|
63688957
|
Integral linear operator
|
Mathematical function
An integral bilinear form is a bilinear functional that belongs to the continuous dual space of formula_0, the injective tensor product of the locally convex topological vector spaces (TVSs) "X" and "Y". An integral linear operator is a continuous linear operator that arises in a canonical way from an integral bilinear form.
These maps play an important role in the theory of nuclear spaces and nuclear maps.
Definition - Integral forms as the dual of the injective tensor product.
Let "X" and "Y" be locally convex TVSs, let formula_1 denote the projective tensor product, formula_2 denote its completion, let formula_3 denote the injective tensor product, and formula_0 denote its completion.
Suppose that formula_4 denotes the TVS-embedding of formula_3 into its completion and let formula_5 be its transpose, which is a vector space-isomorphism. This identifies the continuous dual space of formula_3 as being identical to the continuous dual space of formula_0.
Let formula_6 denote the identity map and formula_7 denote its transpose, which is a continuous injection. Recall that formula_8 is canonically identified with formula_9, the space of continuous bilinear maps on formula_10. In this way, the continuous dual space of formula_3 can be canonically identified as a vector subspace of formula_9, denoted by formula_11. The elements of formula_11 are called integral (bilinear) forms on formula_10. The following theorem justifies the word integral.
<templatestyles src="Math_theorem/styles.css" />
Theorem —
The dual "J"("X", "Y") of formula_0 consists of exactly of the continuous bilinear forms u on formula_10 of the form
formula_12
where S and T are respectively some weakly closed and equicontinuous (hence weakly compact) subsets of the duals formula_13 and formula_14, and formula_15 is a (necessarily bounded) positive Radon measure on the (compact) set formula_16.
There is also a closely related formulation of the theorem above that can also be used to explain the terminology "integral" bilinear form: a continuous bilinear form formula_17 on the product formula_18 of locally convex spaces is integral if and only if there is a "compact" topological space formula_19 equipped with a (necessarily bounded) positive Radon measure formula_15 and continuous linear maps formula_20 and formula_21 from formula_22 and formula_23 to the Banach space formula_24 such that
formula_25,
i.e., the form formula_17 can be realised by integrating (essentially bounded) functions on a compact space.
Integral linear maps.
A continuous linear map formula_26 is called integral if its associated bilinear form is an integral bilinear form, where this form is defined by formula_27. It follows that an integral map formula_26 is of the form:
formula_28
for suitable weakly closed and equicontinuous subsets "S" and "T" of formula_29 and formula_30, respectively, and some positive Radon measure formula_15 of total mass ≤ 1.
The above integral is the weak integral, so the equality holds if and only if for every formula_31, formula_32.
Given a linear map formula_33, one can define a canonical bilinear form formula_34, called the associated bilinear form on formula_35, by formula_36.
A continuous map formula_33 is called integral if its associated bilinear form is an integral bilinear form. An integral map formula_37 is of the form, for every formula_38 and formula_39:
formula_40
for suitable weakly closed and equicontinuous subsets formula_41 and formula_42 of formula_29 and formula_43, respectively, and some positive Radon measure formula_15 of total mass formula_44.
Relation to Hilbert spaces.
The following result shows that integral maps "factor through" Hilbert spaces.
Proposition: Suppose that formula_45 is an integral map between locally convex TVS with "Y" Hausdorff and complete. There exists a Hilbert space "H" and two continuous linear mappings formula_46 and formula_47 such that formula_48.
Furthermore, every integral operator between two Hilbert spaces is nuclear. Thus a continuous linear operator between two Hilbert spaces is nuclear if and only if it is integral.
Sufficient conditions.
Every nuclear map is integral. An important partial converse is that every integral operator between two Hilbert spaces is nuclear.
Suppose that "A", "B", "C", and "D" are Hausdorff locally convex TVSs and that formula_49, formula_50, and formula_51 are all continuous linear operators. If formula_50 is an integral operator then so is the composition formula_52.
If formula_45 is a continuous linear operator between two normed space then formula_45 is integral if and only if formula_53 is integral.
Suppose that formula_45 is a continuous linear map between locally convex TVSs.
If formula_45 is integral then so is its transpose formula_54. Now suppose that the transpose formula_54 of the continuous linear map formula_45 is integral. Then formula_45 is integral if the canonical injections formula_55 (defined by formula_56 value at x) and formula_57 are TVS-embeddings (which happens if, for instance, formula_22 and formula_58 are barreled or metrizable).
Properties.
Suppose that "A", "B", "C", and "D" are Hausdorff locally convex TVSs with "B" and "D" complete. If formula_49, formula_50, and formula_51 are all integral linear maps then their composition formula_52 is nuclear.
Thus, in particular, if X is an infinite-dimensional Fréchet space then a continuous linear surjection formula_59 cannot be an integral operator.
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "X \\widehat{\\otimes}_{\\epsilon} Y"
},
{
"math_id": 1,
"text": "X \\otimes_{\\pi} Y"
},
{
"math_id": 2,
"text": "X \\widehat{\\otimes}_{\\pi} Y"
},
{
"math_id": 3,
"text": "X \\otimes_{\\epsilon} Y"
},
{
"math_id": 4,
"text": "\\operatorname{In} : X \\otimes_{\\epsilon} Y \\to X \\widehat{\\otimes}_{\\epsilon} Y"
},
{
"math_id": 5,
"text": "{}^{t}\\operatorname{In} : \\left( X \\widehat{\\otimes}_{\\epsilon} Y \\right)^{\\prime}_b \\to \\left( X \\otimes_{\\epsilon} Y \\right)^{\\prime}_b"
},
{
"math_id": 6,
"text": "\\operatorname{Id} : X \\otimes_{\\pi} Y \\to X \\otimes_{\\epsilon} Y"
},
{
"math_id": 7,
"text": "{}^{t}\\operatorname{Id} : \\left( X \\otimes_{\\epsilon} Y \\right)^{\\prime}_b \\to \\left( X \\otimes_{\\pi} Y \\right)^{\\prime}_b"
},
{
"math_id": 8,
"text": "\\left( X \\otimes_{\\pi} Y \\right)^{\\prime}"
},
{
"math_id": 9,
"text": "B(X, Y)"
},
{
"math_id": 10,
"text": "X \\times Y"
},
{
"math_id": 11,
"text": "J(X, Y)"
},
{
"math_id": 12,
"text": " u(x,y) = \\int_{S \\times T} \\langle x, x'\\rangle \\langle y, y' \\rangle\\; d \\mu\\!\\left( x', y' \\right),"
},
{
"math_id": 13,
"text": "X^{\\prime}"
},
{
"math_id": 14,
"text": "Y^{\\prime}"
},
{
"math_id": 15,
"text": "\\mu"
},
{
"math_id": 16,
"text": "S \\times T"
},
{
"math_id": 17,
"text": "u"
},
{
"math_id": 18,
"text": "X\\times Y"
},
{
"math_id": 19,
"text": "\\Omega"
},
{
"math_id": 20,
"text": "\\alpha"
},
{
"math_id": 21,
"text": "\\beta"
},
{
"math_id": 22,
"text": "X"
},
{
"math_id": 23,
"text": "Y"
},
{
"math_id": 24,
"text": "L^{\\infty}(\\Omega,\\mu)"
},
{
"math_id": 25,
"text": "u(x,y) = \\langle\\alpha(x),\\beta(y)\\rangle = \\int_{\\Omega}\\alpha(x)\\beta(y)\\;d\\mu"
},
{
"math_id": 26,
"text": "\\kappa : X \\to Y'"
},
{
"math_id": 27,
"text": "(x, y) \\in X \\times Y \\mapsto (\\kappa x)(y)"
},
{
"math_id": 28,
"text": "x \\in X \\mapsto \\kappa(x) = \\int_{S \\times T} \\left\\langle x', x \\right\\rangle y' \\mathrm{d} \\mu\\! \\left( x', y' \\right)"
},
{
"math_id": 29,
"text": "X'"
},
{
"math_id": 30,
"text": "Y'"
},
{
"math_id": 31,
"text": "y \\in Y"
},
{
"math_id": 32,
"text": "\\left\\langle \\kappa(x), y \\right\\rangle = \\int_{S \\times T} \\left\\langle x', x \\right\\rangle \\left\\langle y', y \\right\\rangle \\mathrm{d} \\mu\\! \\left( x', y' \\right)"
},
{
"math_id": 33,
"text": "\\Lambda : X \\to Y"
},
{
"math_id": 34,
"text": "B_{\\Lambda} \\in Bi\\left(X, Y' \\right)"
},
{
"math_id": 35,
"text": "X \\times Y'"
},
{
"math_id": 36,
"text": "B_{\\Lambda}\\left( x, y' \\right) := \\left( y' \\circ \\Lambda \\right) \\left( x \\right)"
},
{
"math_id": 37,
"text": "\\Lambda: X \\to Y"
},
{
"math_id": 38,
"text": "x \\in X"
},
{
"math_id": 39,
"text": "y' \\in Y'"
},
{
"math_id": 40,
"text": "\\left\\langle y', \\Lambda(x) \\right\\rangle = \\int_{A' \\times B''} \\left\\langle x', x \\right\\rangle \\left\\langle y'', y' \\right\\rangle \\mathrm{d} \\mu\\! \\left( x', y'' \\right)"
},
{
"math_id": 41,
"text": "A'"
},
{
"math_id": 42,
"text": "B''"
},
{
"math_id": 43,
"text": "Y''"
},
{
"math_id": 44,
"text": "\\leq 1"
},
{
"math_id": 45,
"text": "u : X \\to Y"
},
{
"math_id": 46,
"text": "\\alpha : X \\to H"
},
{
"math_id": 47,
"text": "\\beta : H \\to Y"
},
{
"math_id": 48,
"text": "u = \\beta \\circ \\alpha"
},
{
"math_id": 49,
"text": "\\alpha : A \\to B"
},
{
"math_id": 50,
"text": "\\beta : B \\to C"
},
{
"math_id": 51,
"text": "\\gamma: C \\to D"
},
{
"math_id": 52,
"text": "\\gamma \\circ \\beta \\circ \\alpha : A \\to D"
},
{
"math_id": 53,
"text": "{}^{t}u : Y' \\to X'"
},
{
"math_id": 54,
"text": "{}^{t}u : Y^{\\prime}_b \\to X^{\\prime}_b"
},
{
"math_id": 55,
"text": "\\operatorname{In}_X : X \\to X''"
},
{
"math_id": 56,
"text": "x \\mapsto "
},
{
"math_id": 57,
"text": "\\operatorname{In}_Y : Y \\to Y''"
},
{
"math_id": 58,
"text": "Y^{\\prime}_b"
},
{
"math_id": 59,
"text": "u : X \\to X"
}
] |
https://en.wikipedia.org/wiki?curid=63688957
|
63694205
|
Inductive tensor product
|
The finest locally convex topological vector space (TVS) topology on formula_0 the tensor product of two locally convex TVSs, making the canonical map formula_1 (defined by sending formula_2 to formula_3) separately continuous is called the inductive topology or the formula_4-topology. When formula_5 is endowed with this topology then it is denoted by formula_6 and called the inductive tensor product of formula_7 and formula_8
Preliminaries.
Throughout let formula_9 and formula_10 be locally convex topological vector spaces and formula_11 be a linear map.
There is a sequence of positive numbers, decreasing and either finite or else converging to 0, formula_54 and a sequence of nonzero finite dimensional subspaces formula_55 of formula_56 (formula_57) with the following properties: (1) the subspaces formula_55 are pairwise orthogonal; (2) for every formula_58 and every formula_59 formula_60; and (3) the orthogonal of the subspace spanned by formula_61 is equal to the kernel of formula_39
Universal property.
Suppose that formula_10 is a locally convex space and that formula_78 is the canonical map from the space of all bilinear mappings of the form formula_79 going into the space of all linear mappings of formula_80
Then when the domain of formula_78 is restricted to formula_81 (the space of separately continuous bilinear maps) then the range of this restriction is the space formula_82 of continuous linear operators formula_83
In particular, the continuous dual space of formula_6 is canonically isomorphic to the space formula_84 the space of separately continuous bilinear forms on formula_85
If formula_86 is a locally convex TVS topology on formula_5 (formula_5 with this topology will be denoted by formula_87), then formula_86 is equal to the inductive tensor product topology if and only if it has the following property:
For every locally convex TVS formula_88 if formula_78 is the canonical map from the space of all bilinear mappings of the form formula_79 going into the space of all linear mappings of formula_89 then when the domain of formula_78 is restricted to formula_81 (space of separately continuous bilinear maps) then the range of this restriction is the space formula_90 of continuous linear operators formula_91
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "X \\otimes Y,"
},
{
"math_id": 1,
"text": "\\cdot \\otimes \\cdot : X \\times Y \\to X \\otimes Y"
},
{
"math_id": 2,
"text": "(x, y) \\in X \\times Y"
},
{
"math_id": 3,
"text": "x \\otimes y"
},
{
"math_id": 4,
"text": "\\iota"
},
{
"math_id": 5,
"text": "X \\otimes Y"
},
{
"math_id": 6,
"text": "X \\otimes_{\\iota} Y"
},
{
"math_id": 7,
"text": "X"
},
{
"math_id": 8,
"text": "Y."
},
{
"math_id": 9,
"text": "X, Y,"
},
{
"math_id": 10,
"text": "Z"
},
{
"math_id": 11,
"text": "L : X \\to Y"
},
{
"math_id": 12,
"text": "L : X \\to \\operatorname{Im} L"
},
{
"math_id": 13,
"text": "\\operatorname{Im} L,"
},
{
"math_id": 14,
"text": "L,"
},
{
"math_id": 15,
"text": "S \\subseteq X"
},
{
"math_id": 16,
"text": "X \\to X / S"
},
{
"math_id": 17,
"text": "S \\to X"
},
{
"math_id": 18,
"text": "X \\to X / \\operatorname{ker} L \\overset{L_0}{\\rightarrow} \\operatorname{Im} L \\to Y"
},
{
"math_id": 19,
"text": "L_0(x + \\ker L) := L(x)"
},
{
"math_id": 20,
"text": "X \\to Z"
},
{
"math_id": 21,
"text": "X \\times Y \\to Z"
},
{
"math_id": 22,
"text": "L(X; Z)"
},
{
"math_id": 23,
"text": "B(X, Y; Z)"
},
{
"math_id": 24,
"text": "L(X)"
},
{
"math_id": 25,
"text": "B(X, Y)"
},
{
"math_id": 26,
"text": "X^{\\prime}"
},
{
"math_id": 27,
"text": "X,"
},
{
"math_id": 28,
"text": "X^{\\#}."
},
{
"math_id": 29,
"text": "x^{\\prime}"
},
{
"math_id": 30,
"text": "x"
},
{
"math_id": 31,
"text": "L : H \\to H"
},
{
"math_id": 32,
"text": "\\langle L(x), X \\rangle \\geq 0"
},
{
"math_id": 33,
"text": "x \\in H."
},
{
"math_id": 34,
"text": "r : H \\to H,"
},
{
"math_id": 35,
"text": "L = r \\circ r."
},
{
"math_id": 36,
"text": "L : H_1 \\to H_2"
},
{
"math_id": 37,
"text": "L^* \\circ L"
},
{
"math_id": 38,
"text": "R : H \\to H"
},
{
"math_id": 39,
"text": "L."
},
{
"math_id": 40,
"text": "U : H_1 \\to H_2"
},
{
"math_id": 41,
"text": "\\operatorname{Im} R"
},
{
"math_id": 42,
"text": "U(x) = L(x)"
},
{
"math_id": 43,
"text": "x = R \\left(x_1\\right) \\in \\operatorname{Im} R"
},
{
"math_id": 44,
"text": "U"
},
{
"math_id": 45,
"text": "\\overline{\\operatorname{Im} R},"
},
{
"math_id": 46,
"text": "\\operatorname{ker} R"
},
{
"math_id": 47,
"text": "U(x) = 0"
},
{
"math_id": 48,
"text": "x \\in \\operatorname{ker} R"
},
{
"math_id": 49,
"text": "H_1."
},
{
"math_id": 50,
"text": "U\\big\\vert_{\\operatorname{Im} R} : \\operatorname{Im} R \\to \\operatorname{Im} L"
},
{
"math_id": 51,
"text": "L = U \\circ R."
},
{
"math_id": 52,
"text": "\\Lambda : X \\to Y"
},
{
"math_id": 53,
"text": "\\Lambda(U)"
},
{
"math_id": 54,
"text": "r_1 > r_2 > \\cdots > r_k > \\cdots"
},
{
"math_id": 55,
"text": "V_i"
},
{
"math_id": 56,
"text": "H"
},
{
"math_id": 57,
"text": "i = 1, 2, \\ldots"
},
{
"math_id": 58,
"text": "i"
},
{
"math_id": 59,
"text": "x \\in V_i,"
},
{
"math_id": 60,
"text": "L(x) = r_i x"
},
{
"math_id": 61,
"text": "\\cup_i V_i"
},
{
"math_id": 62,
"text": "\\sigma\\left(X, X^{\\prime}\\right)"
},
{
"math_id": 63,
"text": "X_{\\sigma\\left(X, X^{\\prime}\\right)}"
},
{
"math_id": 64,
"text": "X_{\\sigma}"
},
{
"math_id": 65,
"text": "\\sigma\\left(X^{\\prime}, X\\right)"
},
{
"math_id": 66,
"text": "X_{\\sigma\\left(X^{\\prime}, X\\right)}"
},
{
"math_id": 67,
"text": "X^{\\prime}_{\\sigma}"
},
{
"math_id": 68,
"text": "x_0 \\in X"
},
{
"math_id": 69,
"text": "X^{\\prime} \\to \\R"
},
{
"math_id": 70,
"text": "\\lambda \\mapsto \\lambda \\left(x_0\\right)."
},
{
"math_id": 71,
"text": "b\\left(X, X^{\\prime}\\right)"
},
{
"math_id": 72,
"text": "X_{b\\left(X, X^{\\prime}\\right)}"
},
{
"math_id": 73,
"text": "X_b"
},
{
"math_id": 74,
"text": "b\\left(X^{\\prime}, X\\right)"
},
{
"math_id": 75,
"text": "X_{b\\left(X^{\\prime}, X\\right)}"
},
{
"math_id": 76,
"text": "X^{\\prime}_b"
},
{
"math_id": 77,
"text": "b\\left(X^{\\prime}, X\\right)."
},
{
"math_id": 78,
"text": "I"
},
{
"math_id": 79,
"text": "X \\times Y \\to Z,"
},
{
"math_id": 80,
"text": "X \\otimes Y \\to Z."
},
{
"math_id": 81,
"text": "\\mathcal{B}(X, Y; Z)"
},
{
"math_id": 82,
"text": "L\\left(X \\otimes_{\\iota} Y; Z\\right)"
},
{
"math_id": 83,
"text": "X \\otimes_{\\iota} Y \\to Z."
},
{
"math_id": 84,
"text": "\\mathcal{B}(X, Y),"
},
{
"math_id": 85,
"text": "X \\times Y."
},
{
"math_id": 86,
"text": "\\tau"
},
{
"math_id": 87,
"text": "X \\otimes_{\\tau} Y"
},
{
"math_id": 88,
"text": "Z,"
},
{
"math_id": 89,
"text": "X \\otimes Y \\to Z,"
},
{
"math_id": 90,
"text": "L\\left(X \\otimes_{\\tau} Y; Z\\right)"
},
{
"math_id": 91,
"text": "X \\otimes_{\\tau} Y \\to Z."
}
] |
https://en.wikipedia.org/wiki?curid=63694205
|
63694441
|
TCN Protocol
|
Proximity contact tracing protocol
The Temporary Contact Numbers Protocol, or TCN Protocol, is an open source, decentralized, anonymous exposure alert protocol developed by Covid Watch in response to the COVID-19 pandemic. The Covid Watch team, started as an independent research collaboration between Stanford University and the University of Waterloo was the first in the world to publish a white paper, develop, and open source fully anonymous Bluetooth exposure alert technology in collaboration with CoEpi after writing a blog post on the topic in early March.
Covid Watch's TCN Protocol received significant news coverage and was followed by similar decentralized protocols in early April 2020 like DP-3T, PACT, and Google/Apple Exposure Notification framework. Covid Watch then helped other groups like the TCN Coalition and MIT SafePaths implement the TCN Protocol within their open source projects to further the development of decentralized technology and foster global interoperability of contact tracing and exposure alerting apps, a key aspect of achieving widespread adoption. Covid Watch volunteers and nonprofit staff also built a fully open source mobile app for sending anonymous exposure alerts first using the TCN Protocol and later using the very similar Google/Apple Exposure Notification Framework (ENF).
The protocol, like BlueTrace and the Google / Apple contact tracing project, uses Bluetooth Low Energy to track and log encounters with other users. The major distinction between TCN and protocols like BlueTrace is that the central reporting server never has access to contact logs, nor is it responsible for processing them and informing clients of contact. Because contact logs are never transmitted to third parties, the protocol has major privacy benefits over approaches like the one used in BlueTrace. This approach, however, by its very nature does not allow for human-in-the-loop reporting, potentially leading to false positives if the reports are not verified by public health agencies.
The TCN protocol gained attention as one of the first widely released digital contact tracing protocols, alongside BlueTrace, the Exposure Notification framework, and the Pan-European Privacy-Preserving Proximity Tracing (PEPP-PT) project. It also stood out for its incorporation of blockchain technology and its influence on the Google/Apple project.
Overview.
The TCN protocol works on the basis of Temporary Contact Numbers (TCN), semi-random identifiers derived from a seed. When two clients encounter each other, a unique TCN is generated, exchanged, and then locally stored in a contact log. Once a user tests positive for infection, a report is sent to a central server. Each client on the network then collects the reports from the server and independently checks its local contact log for a TCN contained in the report. If a matching TCN is found, the user has come in close contact with an infected patient and is warned by the client. Since each device verifies contact logs locally, and contact logs are thus never transmitted to third parties, the central reporting server cannot by itself ascertain the identity or contact log of any client in the network. This is in contrast to competing protocols like BlueTrace, where the central reporting server receives and processes client contact logs.
Temporary contact numbers.
The entire protocol is based on the principle of "temporary contact numbers" (TCN), a unique and anonymous 128-bit identifier generated deterministically from a seed value on a client device. TCNs are used to identify people with whom a user has come in contact, and the seed is used to compactly report infection to a central reporting server. TCN reports are authenticated as genuine by a secret held only by the client.
Generation.
To generate a TCN, first a "report authorization key" (RAK) and "report verification key" (RVK) are created as the signing and verification keys of a signature scheme (RAK-RVK pair). In the reference implementation this pair is created using the Ed25519 signature scheme. Then, using the RAK, an initial "temporary contact key" (TCK) is generated using the algorithm formula_0, where formula_1 is the SHA-256-based hash function defined as formula_2. This TCK is not used to generate any TCNs, but is used to derive the next TCK; all subsequent TCKs are calculated using the algorithm formula_3. A 128-bit TCN is then generated from a given TCK using the algorithm formula_4, where formula_5 formats a supplied number as a little-endian unsigned 2-byte integer, and formula_6 is the SHA-256-based hash function defined as formula_7. TCNs are unique to each device encounter, and RAK-RVK pairs are cycled at regular intervals to allow a client to report only specific periods of contact.
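The derivation chain just described can be sketched in a few lines of Python. This is only a schematic outline, not the reference implementation: the RAK-RVK pair is replaced here by random stand-in byte strings, whereas the real protocol derives it as an Ed25519 signing/verification key pair.

```python
import hashlib
import os

def h_tck(s: bytes) -> bytes:
    # H_tck(s) = SHA256(b'H_TCK' || s)
    return hashlib.sha256(b"H_TCK" + s).digest()

def h_tcn(s: bytes) -> bytes:
    # H_tcn(s) = SHA256(b'H_TCN' || s) truncated to the first 128 bits
    return hashlib.sha256(b"H_TCN" + s).digest()[:16]

def le_u16(i: int) -> bytes:
    # little-endian unsigned 2-byte integer
    return i.to_bytes(2, "little")

# Stand-ins for the report authorization key (RAK) and report verification
# key (RVK); the real protocol uses an Ed25519 key pair here.
rak = os.urandom(32)
rvk = os.urandom(32)

tck = h_tck(rak)              # tck_0, never used to produce a TCN directly
tcns = []
for i in range(1, 11):        # derive tcn_1 .. tcn_10
    tck = h_tck(rvk + tck)                 # tck_i = H_tck(rvk || tck_{i-1})
    tcns.append(h_tcn(le_u16(i) + tck))    # tcn_i = H_tcn(le_u16(i) || tck_i)

print(tcns[0].hex())          # a 16-byte (128-bit) temporary contact number
```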
Reporting.
When a client wishes to submit a report for the TCN indices formula_8 to formula_9, it structures the report as formula_10. A signature is then calculated using the RAK, and it is transmitted to the server as formula_11.
Because any given TCK can only be used to derive TCNs of equal or higher index, by submitting formula_12 no encounters prior to formula_13 can be calculated. However, there is no upper limit to encounters calculated using the same RAK-RVK pair, which is why they are cycled often. To prevent clients calculating unused TCNs, formula_9 indicates the last TCN index generated with the given RVK. Additionally, since the RVK is used to calculate a TCK, and formula_12 is provided, no valid TCNs in the reporting period can be derived from an illegitimate report. The only correct TCN calculable from a mismatched RVK and formula_12 is formula_13, the TCN preceding the start of the reporting period.
Once a report is received, clients individually recalculate TCKs and TCNs for a given period using the original algorithms:
formula_14
This is used by client devices to check their local contact logs for potential encounters with the infected patient, but has the dual benefit of verifying reports since false reports will never produce matching TCNs.
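The client-side check can be sketched along the same lines: given the RVK, formula_12 and the index range carried in a report, a client recomputes the candidate TCNs and intersects them with its local contact log. The hash helpers are repeated from the previous sketch so the snippet stays self-contained; it is again only a schematic outline, not the reference implementation.

```python
import hashlib

def h_tck(s): return hashlib.sha256(b"H_TCK" + s).digest()
def h_tcn(s): return hashlib.sha256(b"H_TCN" + s).digest()[:16]
def le_u16(i): return i.to_bytes(2, "little")

def tcns_from_report(rvk, tck_prev, s, e):
    """Recompute tcn_s .. tcn_e from rvk and tck_{s-1} carried in a report."""
    out, tck = [], tck_prev
    for i in range(s, e + 1):
        tck = h_tck(rvk + tck)               # tck_i = H_tck(rvk || tck_{i-1})
        out.append(h_tcn(le_u16(i) + tck))   # tcn_i = H_tcn(le_u16(i) || tck_i)
    return out

def check_exposure(local_contact_log, report_tcns):
    # Any overlap means this device saw a TCN that was later reported.
    return any(tcn in local_contact_log for tcn in report_tcns)
```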
Memo.
In the report structure, the memo is a space for freeform messages that differ between TCN implementations. The section is between 2 and 257 bytes, and made up of a tag identifying the specific implementation, as well as a data and data length pair. It is formatted as formula_15. The data is standardized for different tags, and can be as follows:
Technical specification.
The protocol can be divided into two responsibilities: an encounter between two devices running TCN apps, and the notification of potential infection to users that came in contact with a patient. For the purposes of this specification, these areas are named the "encounter handshake", and "infection reporting". The "encounter handshake" runs on Bluetooth LE and defines how two devices acknowledge each other's presence. The "infection reporting" is built on HTTPS and defines how infection notices are distributed among clients.
Encounter handshake.
When two devices come within range of each other, they exchange a handshake containing TCNs. In order to achieve this the encounter handshake operates in two modes (both with two sub-modes), broadcast oriented and connection oriented. Broadcast oriented operates using the modes broadcaster and observer, while connection oriented operates using peripheral and central. The two modes are used to circumvent certain device limitations, particularly in regard to iOS restrictions in place before version 13.4. In both modes the protocol is identified with the 16 bit UUID .
In broadcast mode, a broadcaster advertises a 16-byte TCN using the service data field of the advertisement data. The observer reads the TCN from this field. In connection-oriented mode, the peripheral advertises using the UUID. The service exposes a read and writeable packet for sharing TCNs. After sharing a TCN, the central disconnects from the peripheral.
Infection reporting.
When a user tests positive for infection, they upload a signed report, allowing the past 14 days of encounters to be calculated, to a central server. On a regular basis, client devices download reports from the server and check their local contact logs using the verification algorithm. If there is a matching record, the app notifies the user of potential infection.
TCN Coalition.
On 5 April 2020, the global TCN Coalition was founded by Covid Watch and other groups that had coalesced around what was essentially the same approach and largely overlapping protocols, with the goal of reducing fragmentation and enabling global interoperability of tracing and alerting apps, a key aspect of achieving widespread adoption. The TCN Coalition also helped establish the Data Rights for Digital Contact Tracing and Alerting framework, which functions as a bill of rights for users of such apps.
Currently the protocol is used by TCN Coalition members CoEpi and Covid Watch, and was likely a source of inspiration for the similar Google / Apple contact tracing project.
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "\ntck_{0} = H\\_tck(rak)\n"
},
{
"math_id": 1,
"text": "\nH\\_tck()\n"
},
{
"math_id": 2,
"text": "\nH\\_tck(s) = SHA256(\\text{b'H}\\_\\text{TCK'}||s)\n"
},
{
"math_id": 3,
"text": "\ntck_{i} = H\\_tck(rvk || tck_{i-1})\n"
},
{
"math_id": 4,
"text": "\ntcn_{i>0} = H\\_tcn(le\\_u16(i) || tck_{i})\n"
},
{
"math_id": 5,
"text": "\nle\\_u16()\n"
},
{
"math_id": 6,
"text": "\nH\\_tcn()\n"
},
{
"math_id": 7,
"text": "\nH\\_tcn(s) = SHA256(\\text{b'H}\\_\\text{TCN'}||s)[0:128]\n"
},
{
"math_id": 8,
"text": "\ns > 0\n"
},
{
"math_id": 9,
"text": "\ne\n"
},
{
"math_id": 10,
"text": "\nreport = rvk || tck_{s - 1} || le\\_u16(s) || le\\_u16(e) || memo\n"
},
{
"math_id": 11,
"text": "\ns\\_report = report||sig\n"
},
{
"math_id": 12,
"text": "\ntck_{s-1}\n"
},
{
"math_id": 13,
"text": "\ntcn_{s-1}\n"
},
{
"math_id": 14,
"text": "\n\\begin{array}{lcr}\ntck_s = H\\_tck(rvk || tck_{s-1})\\\\\ntcn_{s} = H\\_tcn(le\\_u16(s) || tck_{s})\\\\\ntck_{s+1} = H\\_tck(rvk || tck_{s})\\\\\ntcn_{s+1} = H\\_tcn(le\\_u16(s+1) || tck_{s+1})\\\\\n...\\\\\ntck_{e} = H\\_tck(rvk || tck_{e-1})\\\\\ntcn_{e} = H\\_tcn(le\\_u16(e) || tck_{e})\n\\end{array}\n"
},
{
"math_id": 15,
"text": "\nmemo = tag || len(data) || data\n"
}
] |
https://en.wikipedia.org/wiki?curid=63694441
|
63695722
|
Law of squares
|
Theorem concerning transmission lines
The law of squares is a theorem concerning transmission lines. It states that the current injected into the line by a step in voltage reaches a maximum at a time proportional to the square of the distance down the line. The theorem is due to William Thomson, the future Lord Kelvin. The law had some importance in connection with submarine telegraph cables.
The law.
For a step increase in the voltage applied to a transmission line, the law of squares can be stated as follows,
formula_0
where,
formula_1 is the time at which the current on the line reaches a maximum
formula_2 is the resistance per metre of the line
formula_3 is the capacitance per metre of the line
formula_4 is the distance in metres from the input of the line.
The law of squares is not just limited to step functions. It also applies to an impulse response or a rectangular function which are more relevant to telegraphy. However, the multiplicative factor is different in these cases. For an impulse it is 1/6 rather than 1/2 and for rectangular pulses it is something in between depending on their length.
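A short numerical sketch of the step-response form of the law is given below in Python. The per-metre resistance and capacitance are purely illustrative placeholders, not measured constants of any real cable; the point is only that doubling the distance quadruples the delay.

```python
def t_max_step(R, C, x):
    """Time of peak current for a voltage step: t_max = (1/2) * R * C * x^2."""
    return 0.5 * R * C * x * x

# Illustrative (made-up) line constants:
R = 2e-3    # ohms per metre
C = 1e-10   # farads per metre

for x_km in (100, 1000, 2000):
    x = x_km * 1000.0                       # distance in metres
    print(f"{x_km:>5} km -> t_max = {t_max_step(R, C, x):.3f} s")

# Doubling the distance quadruples the delay, hence the name "law of squares".
```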
History.
The law of squares was proposed by William Thomson (later to become Lord Kelvin) in 1854 at Glasgow University. He had some input from George Gabriel Stokes. Thomson and Stokes were interested in investigating the feasibility of the proposed transatlantic telegraph cable.
Thomson built his result by analogy with the heat transfer theory of Joseph Fourier (the transmission of an electrical step down a line is analogous to suddenly applying a fixed temperature at one end of a metal bar). He found that the equation governing the instantaneous voltage on the line, formula_5, is given by,
formula_6
It is from this that he derived the law of squares. While Thomson's description of a transmission line is not exactly incorrect, and it is perfectly adequate for the low frequencies involved in a Victorian telegraph cable, it is not the complete picture. In particular, Thomson did not take into account the inductance (L) of the line, or the leakage conductivity (G) of the insulation material. The full description was given by Oliver Heaviside in what is now known as the telegrapher's equations. The law of squares can be derived from a special case of the telegrapher's equations – that is, with L and G set to zero.
Disbelief.
Thomson's result is quite counter-intuitive and led some to disbelieve it. The result that most telegraph engineers expected was that the delay in the peak would be directly proportional to line length. Telegraphy was in its infancy and many telegraph engineers were self-taught. They tended to mistrust academics and rely instead on practical experience. Even as late as 1887, the author of a letter to "The Electrician" wished to "...protest against the growing tendency to drag mathematics into everything."
One opponent of Thomson was of particular significance, Wildman Whitehouse, who challenged Thomson when he presented the theorem to the British Association in 1855. Both Thomson and Whitehouse were associated with the transatlantic telegraph cable project, Thomson as an unpaid director and scientific advisor, and Whitehouse as the Chief Electrician of the Atlantic Telegraph Company. Thomson's discovery threatened to derail the project, or at least, indicated that a much larger cable was required (a larger conductor will reduce formula_2 and a thicker insulator will reduce formula_3). Whitehouse had no advanced mathematical education (he was a doctor by training) and did not fully understand Thomson's work. He claimed he had experimental evidence that Thomson was wrong, but his measurements were poorly conceived and Thomson refuted his claims, showing that Whitehouse's results were consistent with the law of squares.
Whitehouse believed that a thinner cable could be made to work with a high voltage induction coil. The Atlantic Telegraph Company, in a hurry to push ahead with the project, went with Whitehouse's cheaper solution rather than Thomson's. After the cable was laid, it suffered badly from retardation, an effect that had first been noticed by Latimer Clark in 1853 on the Anglo-Dutch submarine cable of the Electric Telegraph Company. Retardation causes a delay and a lengthening of telegraph pulses, the latter as if one part of the pulse has been retarded more than the other. Retardation can cause adjacent telegraph pulses to overlap making them unreadable, an effect now called intersymbol interference. It forced telegraph operators to send more slowly to restore a space between pulses. The problem was so severe on the Atlantic cable that transmission speeds were measured in minutes per word rather than words per minute. In attempting to overcome this problem with ever higher voltage, Whitehouse permanently damaged the cable insulation and made it unusable. He was dismissed shortly afterwards.
Some commentators overinterpreted the law of squares and concluded that it implied that the "speed of electricity" depends on the length of the cable. Heaviside, with typical sarcasm, in a piece in "The Electrician" countered this:
<templatestyles src="Template:Blockquote/styles.css" />Is it possible to conceive that the current, when it first sets out to go, say, to Edinburgh, "knows" where it's going, how long a journey it has to make, and where it has to stop, so that it can adjust its speed accordingly? Of course not...
Explanation.
Both the law of squares and the differential retardation associated with it can be explained with reference to dispersion. This is the phenomenon whereby different frequency components of the telegraph pulse travel down the cable at different speeds depending on the cable materials and geometry. This kind of analysis, using the frequency domain with Fourier analysis rather than the time domain, was unknown to telegraph engineers of the period. They would likely deny that a regular chain of pulses contained more than one frequency. On a line dominated by resistance and capacitance, such as the low-frequency ones analysed by Thomson, the square of the velocity, formula_7, of a wave frequency component is proportional to its angular frequency, formula_8 such that,
formula_9
From this it can be seen that the higher frequency components travel faster, progressively stretching out the pulse. As the higher frequency components "run away" from the main pulse, the remaining low-frequency components, which contain most of the energy, are left progressively travelling slower as a group.
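To make the dispersion relation above concrete, the short sketch below evaluates the phase velocity for a few frequencies using the same illustrative (made-up) line constants as before; it only shows that, on an RC-dominated line, the velocity grows with the square root of the frequency.

```python
import math

R = 2e-3    # ohms per metre (illustrative)
C = 1e-10   # farads per metre (illustrative)

for f_hz in (1.0, 10.0, 100.0):
    omega = 2 * math.pi * f_hz
    u = math.sqrt(2 * omega / (C * R))   # from u^2 = 2*omega/(C*R)
    print(f"{f_hz:>6.0f} Hz -> u = {u:.3e} m/s")
```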
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "t_\\text {max} = {1 \\over 2} RCx^2"
},
{
"math_id": 1,
"text": "t_\\text {max}"
},
{
"math_id": 2,
"text": "R"
},
{
"math_id": 3,
"text": "C"
},
{
"math_id": 4,
"text": "x"
},
{
"math_id": 5,
"text": "v (x,t)"
},
{
"math_id": 6,
"text": "\\frac {\\partial ^2 v}{\\partial x^2} = RC \\frac {\\partial v}{\\partial t}."
},
{
"math_id": 7,
"text": "u"
},
{
"math_id": 8,
"text": "\\omega"
},
{
"math_id": 9,
"text": "u^2 = \\frac {2 \\omega}{CR}."
}
] |
https://en.wikipedia.org/wiki?curid=63695722
|
63697055
|
Surface differential reflectivity
|
Spectroscopic technique
Surface differential reflectivity (SDR) or differential reflectance spectroscopy (DRS) is a spectroscopic technique that measures and compares the reflectivity of a sample in two different physical conditions (modulation spectroscopy). The result is presented in terms of ΔR/R, which is defined as follows:
formula_0
where "R"1 and "R"2 represent the reflectivity due to a particular state or condition of the sample.
The differential reflectivity is used to enhance just the contributions to the reflected signal coming from the sample. In fact, the light penetration depth ("α"−1) inside a solid is related to the absorption coefficient ("α") of the material. The contribution of the sample surface (e.g., surface states, ultra-thin and thin deposited films, etc.) to the reflected signal is generally evaluated in the 10−2 range. The difference between two sample states (1 and 2) is intended to highlight small changes occurring on the sample surface. If "R"1 represents a clean freshly prepared surface (e.g., after a cleavage in vacuum) and "R"2 the same sample after exposure to hydrogen or oxygen contaminants, the ΔR/R spectrum can be related to features of the clean surface (e.g., surface states); if "R"1 is the reflectivity spectrum of a sample covered by an organic film (even if the substrate is only partially covered) and "R"2 represents the optical spectrum of the pristine substrate, the ΔR/R spectrum can be related to the optical properties of the deposited molecules; etc.
The experimental SDR signal defined above has been interpreted in terms of the surface (or film) thickness ("d") and its dielectric function (ε2 = ε’2 - iε”2). This model, which treats the surface as a well-defined phase above the bulk, is known as the “three-layer model” and states that:
formula_1
where ε1 = 1 is the vacuum dielectric constant and ε3 = ε’3 - iε”3 is the bulk dielectric function.
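A minimal numerical sketch of the three-layer expression, exactly as written above, is given below in Python. The film thickness, wavelength, and dielectric functions are hypothetical values chosen only to show that the resulting ΔR/R falls in the small range discussed earlier.

```python
import math

# Hypothetical inputs (illustrative only).
d = 1e-9              # film thickness, 1 nm
lam = 500e-9          # wavelength, 500 nm
eps1 = 1.0 + 0j       # vacuum dielectric constant
eps2 = 2.5 - 0.8j     # surface/film dielectric function, eps2' - i*eps2''
eps3 = 15.0 - 0.2j    # bulk dielectric function,         eps3' - i*eps3''

# Three-layer model as written above: dR/R = (8*pi*d/lam) * Im[(eps1-eps2)/(eps1-eps3)]
dR_over_R = 8 * math.pi * d / lam * ((eps1 - eps2) / (eps1 - eps3)).imag
print(f"dR/R = {dR_over_R:.2e}")   # on the order of 1e-3 for these values
```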
The SDR measurements are generally realized by exploiting an optical multichannel system coupled with a double optical path in the so-called Michelson-cross configuration.
In this configuration, the ΔR/R signal is obtained by a direct comparison between the reflectivity signal "R"1 arising from the sample (e.g., a silicon substrate covered by a small amount of molecules) placed inside the UHV chamber (first optical path) and the "R"2 signal acquired from a reference sample (dummy sample; e.g., a silicon wafer) placed along the second optical path. The difference between "R"1 and "R"2 is due to the deposited molecules, which can affect the reflectivity signal in the 10−3÷10−2 range of the overall reflected signal of the real sample. Consequently, high signal stability is required and the two optical paths must be as comparable as possible.
The SDR apparatus was first described and used by G. Chiarotti for the investigation of the surface-state contribution to the reflectivity properties of Ge(111). This work also represents the first direct evidence of the existence of surface states in semiconductors. An evolution of the SDR set-up using linearly polarized light was first described by P. Chiaradia and co-workers for testing the structure of the Si(111) 2 × 1 surface. Other equivalent SDR set-ups have been used to study the evolution of surface roughening, the reactivity of halogens with semiconductor surfaces, the adhesion of nanoparticles during their growth, the growth of heavy metals on semiconductors, and the characterization of nano-antennas, to mention just some of the work related to this surface optical technique.
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "\\frac{\\Delta R}{R}=\\frac{R_1 - R_2}{R_2}"
},
{
"math_id": 1,
"text": "\\frac{\\Delta R}{R}=8\\pi\\frac{d}{\\lambda}Im\\frac{\\epsilon_1 - \\epsilon_2}{\\epsilon_1 - \\epsilon_3}"
}
] |
https://en.wikipedia.org/wiki?curid=63697055
|
63698698
|
Ewa Damek
|
Polish mathematician
Ewa Damek (born 9 August 1958) is a Polish mathematician at the University of Wrocław whose research interests include harmonic analysis, branching processes, and Siegel domains.
Education and career.
Damek is a professor in the mathematical institute of the University of Wrocław, which she directed from 2002 to 2007.
She studied mathematics at the University of Wrocław beginning in 1977, and completed a doctorate under the supervision of Andrzej Hulanicki in 1987. After a stint at the University of Georgia in the US, she returned to Wrocław, where she became a full professor in 2000.
Contributions.
In 1992, with Fulvio Ricci, Damek published a family of counterexamples to a form of the Lichnerowicz conjecture according to which harmonic Riemannian manifolds must be locally symmetric. The asymmetric spaces they found as counterexamples are at least seven-dimensional; they are called Damek–Ricci spaces.
Damek is the coauthor, with D. Buraczewski and T. Mikosch, of the book "Stochastic Models with Power Law Tails: The Equation formula_0" (Springer, 2016).
Recognition.
In 2011 Damek was named a knight of the Order of Polonia Restituta.
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "X=AX+B"
}
] |
https://en.wikipedia.org/wiki?curid=63698698
|
63700751
|
Phase space measurement with forward modeling
|
Phase space measurement with forward modeling is one approach to address the scattering issue in biomedical imaging.
Scattering is one of the biggest problems in biomedical imaging, since scattered light is eventually defocused, resulting in diffused images. Instead of removing the scattered light, this approach uses the information carried by the scattered light to reconstruct the original light signals. It requires the phase space data of light in the imaging system and a forward model that describes scattering events in a turbid medium. The phase space of light can be obtained by using a digital micromirror device (DMD) or light field microscopy. Phase space measurement with forward modeling can be used in neuroscience to record neuronal activity in the brain.
Concepts.
The phase space of light describes the position and spatial frequency of light. As light propagates or scatters, its phase space changes as well. For example, because simple propagation changes the position of light while keeping its angle, it shears the phase space of light. Scattering, in contrast, diverges the angle of the light, so the phase space is broadened after scattering. Therefore, scattering and propagation of light can be modeled by the Wigner function, which can generally describe light in wave optics. With a forward model describing the propagation and scattering events in a scattering tissue, such as the brain, the light field at a surface produced by point sources inside the tissue can be estimated. To find the locations of the point sources of a target in a scattering medium, a light field of all the targets is first measured. Then a simulated intensity plane is constructed from phase space contributions at all possible source coordinates that may account for the measured phase space. By applying an optimization process using non-negative least squares with a sparsity constraint, a sparse set of coefficients corresponding to the locations of the targets of interest is obtained by discarding impossible options.
An example of using a forward model for scattering events in a turbid medium.
The Wigner quasiprobability distribution can be used for a forward model
formula_0 (1)
Eventually, scattering and propagation of light can be described as
formula_1 (2)
The weight sum of decomposed contribution is
formula_2 (3)
where formula_3 is a coefficient that represents the intensity of light from a point source at the location formula_4
To obtain a sparse vector set formula_3, solve the lasso problem
formula_5 (4)
where formula_6 is the actual measured phase space and formula_7 is an arbitrary coefficient that favors sparsity.
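The sparse recovery step of equation (4) can be sketched with a small non-negative proximal-gradient solver in NumPy. In this toy example the forward matrix is filled with random numbers as a stand-in for the stack of simulated phase-space patterns, so it only illustrates the optimization itself, not the physical forward model.

```python
import numpy as np

def nonneg_lasso(A, y, mu=0.1, step=None, n_iter=5000):
    """Minimize ||y - A c||^2 + mu * ||c||_1 subject to c >= 0 (cf. eq. 4)."""
    if step is None:
        # Safe step size: 1 / Lipschitz constant of the gradient of ||y - A c||^2
        step = 1.0 / (2.0 * np.linalg.norm(A, 2) ** 2)
    c = np.zeros(A.shape[1])
    for _ in range(n_iter):
        grad = 2.0 * A.T @ (A @ c - y)                # gradient of the quadratic term
        c = np.maximum(0.0, c - step * (grad + mu))   # prox of mu*||.||_1 on c >= 0
    return c

# Toy demonstration: 3 "sources" among 50 candidate locations.
rng = np.random.default_rng(0)
A = rng.random((200, 50))             # stand-in for simulated phase-space patterns
c_true = np.zeros(50)
c_true[[5, 17, 42]] = [1.0, 0.5, 2.0]
y = A @ c_true + 0.01 * rng.standard_normal(200)

c_hat = nonneg_lasso(A, y, mu=0.5)
print(np.flatnonzero(c_hat > 0.1))    # indices with significant recovered weight
```

A projected ISTA step is used here simply because it keeps the example dependency-free; any non-negative lasso solver would serve the same purpose.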
Application.
Phase space measurement with forward modeling can be used in neuroscience to record neuronal activity in the brain. Researchers have widely used two-photon scanning microscopy to visualize neurons and their activity by imaging the fluorescence emitted from calcium indicators expressed in neurons. However, two-photon excitation microscopy is slow, because it has to scan all the pixels in the target of interest one by one. One advantage of phase space measurement with forward modeling is its speed, which largely depends on the speed of the camera being used. A light field camera can capture an image with all the pixels in one frame at a time, which speeds up the frame rate of the system. This feature can facilitate voltage imaging in the brain to record action potentials.
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": " W(r,u) = \\iint\\limits_D <\\tilde{f}^*(u+u'/2)\\tilde{f}(u-u'/2)>e^{i2\\pi u'r}d^2u' "
},
{
"math_id": 1,
"text": " W(r,u) = \\frac{-Nr^2}{2 \\pi \\lambda^2 \\sigma^2 (Zd - Zs)^2} e^{\\frac{Nr^2}{2 \\lambda^2 \\sigma^2 (Zd - Zs)^2} (r - rs + \\lambda (Zd - \\frac{Zd - Zs}{Nr}) u)^2} "
},
{
"math_id": 2,
"text": " \\hat I (r,u) = \\sum_{rs,Zs} C(rs,Zs) W(r,u;rs,Zs) "
},
{
"math_id": 3,
"text": " C(rs,Zs) "
},
{
"math_id": 4,
"text": " (rs,Zs) "
},
{
"math_id": 5,
"text": " \\min_{c \\geqslant 0} \\sum_{r,u} \\left\\vert I(r,u) - \\hat I (r,u) \\right\\vert ^2 + \\mu \\sum_{rs,Zs} \\left\\vert c(rs,Zs) \\right\\vert "
},
{
"math_id": 6,
"text": " I(r,u) "
},
{
"math_id": 7,
"text": " \\mu "
}
] |
https://en.wikipedia.org/wiki?curid=63700751
|
63703274
|
DF-space
|
In the mathematical field of functional analysis, DF-spaces, also written ("DF")-spaces, are locally convex topological vector spaces having a property that is shared by locally convex metrizable topological vector spaces. They play a considerable part in the theory of topological tensor products.
DF-spaces were first defined by Alexander Grothendieck and studied in detail by him.
Grothendieck was led to introduce these spaces by the following property of strong duals of metrizable spaces: If formula_0 is a metrizable locally convex space and formula_1 is a sequence of convex 0-neighborhoods in formula_2 such that formula_3 absorbs every strongly bounded set, then formula_4 is a 0-neighborhood in formula_2 (where formula_2 is the continuous dual space of formula_0 endowed with the strong dual topology).
Definition.
A locally convex topological vector space (TVS) formula_0 is a DF-space, also written ("DF")-space, if
Sufficient conditions.
The strong dual space formula_8 of a Fréchet space formula_0 is a DF-space.
However,
Examples.
There exist complete DF-spaces that are not TVS-isomorphic with the strong dual of a metrizable locally convex space.
There exist DF-spaces having closed vector subspaces that are not DF-spaces.
Citations.
<templatestyles src="Reflist/styles.css" />
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "X"
},
{
"math_id": 1,
"text": "V_1, V_2, \\ldots"
},
{
"math_id": 2,
"text": "X^{\\prime}_b"
},
{
"math_id": 3,
"text": "V := \\cap_{i} V_i"
},
{
"math_id": 4,
"text": "V"
},
{
"math_id": 5,
"text": "X^{\\prime}"
},
{
"math_id": 6,
"text": "B_1, B_2, \\ldots"
},
{
"math_id": 7,
"text": "B_i"
},
{
"math_id": 8,
"text": "X_b^{\\prime}"
}
] |
https://en.wikipedia.org/wiki?curid=63703274
|
63711954
|
Dixmier–Ng theorem
|
In functional analysis, the Dixmier–Ng theorem is a characterization of when a normed space is in fact a dual Banach space. It was proven by Kung-fu Ng, who called it a variant of a theorem proven earlier by Jacques Dixmier.
Dixmier-Ng theorem. Let formula_0 be a normed space. The following are equivalent:
That 2. implies 1. is an application of the Banach–Alaoglu theorem, setting formula_1 to the Weak-* topology. That 1. implies 2. is an application of the Bipolar theorem.
Applications.
Let formula_4 be a pointed metric space with distinguished point denoted formula_5. The Dixmier-Ng Theorem is applied to show that the Lipschitz space formula_6 of all real-valued Lipschitz functions from formula_4 to formula_7 that vanish at formula_5 (endowed with the Lipschitz constant as norm) is a dual Banach space.
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "X"
},
{
"math_id": 1,
"text": "\\tau"
},
{
"math_id": 2,
"text": "\\mathbf{B}_X"
},
{
"math_id": 3,
"text": "Y"
},
{
"math_id": 4,
"text": "M"
},
{
"math_id": 5,
"text": "0_M"
},
{
"math_id": 6,
"text": "\\text{Lip}_0(M)"
},
{
"math_id": 7,
"text": "\\mathbb{R}"
}
] |
https://en.wikipedia.org/wiki?curid=63711954
|
637138
|
Transfer principle
|
Concept in model theory
In model theory, a transfer principle states that all statements of some language that are true for some structure are true for another structure. One of the first examples was the Lefschetz principle, which states that any sentence in the first-order language of fields that is true for the complex numbers is also true for any algebraically closed field of characteristic 0.
History.
An incipient form of a transfer principle was described by Leibniz under the name of "the Law of Continuity". Here infinitesimals are expected to have the "same" properties as appreciable numbers. The transfer principle can also be viewed as a rigorous formalization of the principle of permanence. Similar tendencies are found in Cauchy, who used infinitesimals to define both the continuity of functions (in Cours d'Analyse) and a form of the Dirac delta function.
In 1955, Jerzy Łoś proved the transfer principle for any hyperreal number system. Its most common use is in Abraham Robinson's nonstandard analysis of the hyperreal numbers, where the transfer principle states that any sentence expressible in a certain formal language that is true of real numbers is also true of hyperreal numbers.
Transfer principle for the hyperreals.
The transfer principle concerns the logical relation between the properties of the real numbers R, and the properties of a larger field denoted *R called the hyperreal numbers. The field *R includes, in particular, infinitesimal ("infinitely small") numbers, providing a rigorous mathematical realisation of a project initiated by Leibniz.
The idea is to express analysis over R in a suitable language of mathematical logic, and then point out that this language applies equally well to *R. This turns out to be possible because at the set-theoretic level, the propositions in such a language are interpreted to apply only to internal sets rather than to all sets. As Robinson put it, "the sentences of [the theory] are interpreted in *R in Henkin's sense."
The theorem to the effect that each proposition valid over R, is also valid over *R, is called the transfer principle.
There are several different versions of the transfer principle, depending on what model of nonstandard mathematics is being used.
In terms of model theory, the transfer principle states that a map from a standard model to a nonstandard model is an elementary embedding (an embedding preserving the truth values of all statements in a language), or sometimes a "bounded" elementary embedding (similar, but only for statements with bounded quantifiers).
The transfer principle appears to lead to contradictions if it is not handled correctly.
For example, since the hyperreal numbers form a non-Archimedean ordered field and the reals form an Archimedean ordered field, the property of being Archimedean ("every positive real is larger than formula_0 for some positive integer formula_1") seems at first sight not to satisfy the transfer principle. The statement "every positive hyperreal is larger than formula_0 for some positive integer formula_1" is false; however the correct interpretation is "every positive hyperreal is larger than formula_0 for some positive hyperinteger formula_1". In other words, the hyperreals appear to be Archimedean to an internal observer living in the nonstandard universe, but appear
to be non-Archimedean to an external observer outside the universe.
A freshman-level accessible formulation of the transfer principle is Keisler's book "".
Example.
Every real formula_2 satisfies the inequality
formula_3
where formula_4 is the integer part function. By a typical application of the transfer principle, every hyperreal formula_2 satisfies the inequality
formula_5
where formula_6 is the natural extension of the integer part function. If formula_2 is infinite, then the hyperinteger formula_7 is infinite, as well.
Generalizations of the concept of number.
Historically, the concept of number has been repeatedly generalized. The addition of 0 to the natural numbers formula_8 was a major intellectual accomplishment in its time. The addition of negative integers to form formula_9 already constituted a departure from the realm of immediate experience to the realm of mathematical models. The further extension, the rational numbers formula_10, is more familiar to a layperson than their completion formula_11, partly because the reals do not correspond to any physical reality (in the sense of measurement and computation) different from that represented by formula_10. Thus, the notion of an irrational number is meaningless to even the most powerful floating-point computer. The necessity for such an extension stems not from physical observation but rather from the internal requirements of mathematical coherence. The infinitesimals entered mathematical discourse at a time when such a notion was required by mathematical developments at the time, namely the emergence of what became known as the infinitesimal calculus. As already mentioned above, the mathematical justification for this latest extension was delayed by three centuries. Keisler wrote:
"In discussing the real line we remarked that we have no way of knowing what a line in physical space is really like. It might be like the hyperreal line, the real line, or neither. However, in applications of the calculus, it is helpful to imagine a line in physical space as a hyperreal line."
The self-consistent development of the hyperreals turned out to be possible if every true first-order logic statement that uses basic arithmetic (the natural numbers, plus, times, comparison) and quantifies only over the real numbers was assumed to be true in a reinterpreted form if we presume that it quantifies over hyperreal numbers. For example, we can state that for every real number there is another number greater than it:
formula_12
The same will then also hold for hyperreals:
formula_13
Another example is the statement that if you add 1 to a number you get a bigger number:
formula_14
which will also hold for hyperreals:
formula_15
The correct general statement that formulates these equivalences is called the transfer principle. Note that, in many formulas in analysis, quantification is over higher-order objects such as functions and sets, which makes the transfer principle somewhat more subtle than the above examples suggest.
Differences between R and *R.
The transfer principle does not, however, mean that R and *R have identical behavior. For instance, in *R there exists an element "ω" such that
formula_16
but there is no such number in R. This is possible because the nonexistence of this number cannot be expressed as a first order statement of the above type. A hyperreal number like "ω" is called infinitely large; the reciprocals of the infinitely large numbers are the infinitesimals.
The hyperreals *R form an ordered field containing the reals R as a subfield. Unlike the reals, the hyperreals do not form a standard metric space, but by virtue of their order they carry an order topology.
Constructions of the hyperreals.
The hyperreals can be developed either axiomatically or by more constructively oriented methods. The essence of the axiomatic approach is to assert (1) the existence of at least one infinitesimal number, and (2) the validity of the transfer principle. In the following subsection we give a detailed outline of a more constructive approach. This method allows one to construct the hyperreals if given a set-theoretic object called an ultrafilter, but the ultrafilter itself cannot be explicitly constructed. Vladimir Kanovei and Shelah give a construction of a definable, countably saturated elementary extension of the structure consisting of the reals and all finitary relations on it.
In its most general form, transfer is a bounded elementary embedding between structures.
Statement.
The ordered field *R of nonstandard real numbers properly includes the real field R. Like all ordered fields that properly include R, this field is non-Archimedean. It means that some members "x" ≠ 0 of *R are infinitesimal, i.e.,
formula_17
The only infinitesimal in "R" is 0. Some other members of *R, the reciprocals "y" of the nonzero infinitesimals, are infinite, i.e.,
formula_18
The underlying set of the field *R is the image of R under a mapping "A" ↦ *"A" from subsets "A" of R to subsets of *R. In every case
formula_19
with equality if and only if "A" is finite. Sets of the form *"A" for some formula_20 are called standard subsets of *R. The standard sets belong to a much larger class of subsets of *R called internal sets. Similarly each function
formula_21
extends to a function
formula_22
these are called standard functions, and belong to the much larger class of internal functions. Sets and functions that are not internal are external.
The importance of these concepts stems from their role in the following proposition and is illustrated by the examples that follow it.
The transfer principle applies to propositions that quantify only over the real numbers, that is, propositions built using the quantifiers
formula_23
For example, one such proposition is
formula_24
Such a proposition is true in R if and only if it is true in *R when the quantifier
formula_25
replaces
formula_26
and similarly for formula_27.
* The set
formula_28
must be
formula_29
including not only members of R between 0 and 1 inclusive, but also members of *R between 0 and 1 that differ from those by infinitesimals. To see this, observe that the sentence
formula_30
is true in R, and apply the transfer principle.
* The set *N must have no upper bound in *R (since the sentence expressing the non-existence of an upper bound of N in R is simple enough for the transfer principle to apply to it) and must contain "n" + 1 if it contains "n", but must not contain anything between "n" and "n" + 1. Members of
formula_31
are "infinite integers".)
Propositions may also quantify over sets, as in
formula_32
Such a proposition is true in R if and only if it is true in *R after the changes specified above and the replacement of the quantifiers with
formula_33
and
formula_34
Three examples.
The appropriate setting for the hyperreal transfer principle is the world of "internal" entities. Thus, the well-ordering property of the natural numbers by transfer yields the fact that every internal subset of formula_8 has a least element. In this section internal sets are discussed in more detail.
For example, the set
formula_35
of all infinite integers is external: were it internal, the transferred well-ordering principle would give it a least element, yet the predecessor of any infinite integer is again an infinite integer, a contradiction.
If "n" is an infinite integer, then the set {1, ..., "n"} (which is not standard) must be internal. To see this, first observe that the following is trivially true:
formula_36
Consequently
formula_37
As with internal sets, so with internal functions: replace
formula_38
with
formula_39
when applying the transfer principle, and similarly with formula_27 in place of formula_40.
For example: If "n" is an infinite integer, then the complement of the image of any internal one-to-one function "ƒ" from the infinite set {1, ..., "n"} into {1, ..., "n", "n" + 1, "n" + 2, "n" + 3} has exactly three members by the transfer principle. Because of the infiniteness of the domain, the complements of the images of one-to-one functions from the former set to the latter come in many sizes, but most of these functions are external.
This last example motivates an important definition: A *-finite (pronounced star-finite) subset of *R is one that can be placed in "internal" one-to-one correspondence with {1, ..., "n"} for some "n" ∈ *N.
Notes.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "1/n"
},
{
"math_id": 1,
"text": "n"
},
{
"math_id": 2,
"text": "x"
},
{
"math_id": 3,
"text": "x \\geq \\lfloor x \\rfloor,"
},
{
"math_id": 4,
"text": "\\lfloor \\,\\cdot\\, \\rfloor"
},
{
"math_id": 5,
"text": "x \\geq {}^{*}\\! \\lfloor x \\rfloor,"
},
{
"math_id": 6,
"text": "{}^{*}\\! \\lfloor \\,\\cdot\\, \\rfloor"
},
{
"math_id": 7,
"text": "{}^{*}\\! \\lfloor x \\rfloor"
},
{
"math_id": 8,
"text": "\\mathbb{N}"
},
{
"math_id": 9,
"text": "\\mathbb{Z}"
},
{
"math_id": 10,
"text": "\\mathbb{Q}"
},
{
"math_id": 11,
"text": "\\mathbb{R}"
},
{
"math_id": 12,
"text": " \\forall x \\in \\mathbb{R} \\quad \\exists y \\in\\mathbb{R}\\quad x < y. "
},
{
"math_id": 13,
"text": " \\forall x \\in {}^\\star\\mathbb{R} \\quad \\exists y \\in {}^\\star\\mathbb{R}\\quad x < y. "
},
{
"math_id": 14,
"text": " \\forall x \\in \\mathbb{R} \\quad x < x+1 "
},
{
"math_id": 15,
"text": " \\forall x \\in {}^\\star\\mathbb{R} \\quad x < x+1. "
},
{
"math_id": 16,
"text": " 1<\\omega, \\quad 1+1<\\omega, \\quad 1+1+1<\\omega, \\quad 1+1+1+1<\\omega, \\ldots "
},
{
"math_id": 17,
"text": " \\underbrace{\\left|x\\right|+\\cdots+\\left|x\\right|}_{n \\text{ terms}} < 1 \\text{ for every finite [[cardinal number]] } n."
},
{
"math_id": 18,
"text": "\\underbrace{1+\\cdots+1}_{n\\text{ terms}}<\\left|y\\right|\n\\text{ for every finite [[cardinal number]] } n."
},
{
"math_id": 19,
"text": " A \\subseteq {^*\\!A}, "
},
{
"math_id": 20,
"text": "\\scriptstyle A\\,\\subseteq\\,\\mathbb{R}"
},
{
"math_id": 21,
"text": "f:A\\rightarrow\\mathbb{R}"
},
{
"math_id": 22,
"text": " {^*\\! f} : {^*\\!A} \\rightarrow {^*\\mathbb{R}};"
},
{
"math_id": 23,
"text": "\\forall x\\in\\mathbb{R}\\text{ and }\\exists x\\in\\mathbb{R}."
},
{
"math_id": 24,
"text": " \\forall x\\in\\mathbb{R} \\ \\exists y\\in\\mathbb{R} \\ x+y=0."
},
{
"math_id": 25,
"text": " \\forall x \\in {^*\\!\\mathbb{R}}"
},
{
"math_id": 26,
"text": "\\forall x\\in\\mathbb{R},"
},
{
"math_id": 27,
"text": "\\exists"
},
{
"math_id": 28,
"text": " [0,1]^\\ast = \\{\\,x\\in\\mathbb{R}:0\\leq x\\leq 1\\,\\}^\\ast"
},
{
"math_id": 29,
"text": " \\{\\,x \\in {^*\\mathbb{R}} : 0 \\le x \\le 1 \\,\\},"
},
{
"math_id": 30,
"text": " \\forall x\\in\\mathbb{R} \\ (x\\in [0,1] \\text{ if and only if } 0\\leq x \\leq 1)"
},
{
"math_id": 31,
"text": " {^*\\mathbb{N}} \\setminus \\mathbb{N} "
},
{
"math_id": 32,
"text": " \\forall A\\subseteq\\mathbb{R}\\dots\\text{ or }\\exists A\\subseteq\\mathbb{R}\\dots\\ ."
},
{
"math_id": 33,
"text": " [\\forall \\text{ internal } A\\subseteq{^*\\mathbb{R}}\\dots] "
},
{
"math_id": 34,
"text": " [\\exists \\text{ internal } A\\subseteq{^*\\mathbb{R}}\\dots]\\ ."
},
{
"math_id": 35,
"text": " {^*\\mathbb{N}} \\setminus \\mathbb{N}"
},
{
"math_id": 36,
"text": " \\forall n\\in\\mathbb{N} \\ \\exists A\\subseteq\\mathbb{N} \\ \\forall x\\in\\mathbb{N} \\ [x\\in A \\text{ iff } x \\leq n]."
},
{
"math_id": 37,
"text": " \\forall n \\in {^*\\mathbb{N}} \\ \\exists \\text{ internal } A \\subseteq {^*\\mathbb{N}} \\ \\forall x \\in {^*\\mathbb{N}} \\ [x\\in A \\text{ iff } x\\leq n]."
},
{
"math_id": 38,
"text": " \\forall f : A \\rightarrow \\mathbb{R} \\dots "
},
{
"math_id": 39,
"text": " \\forall\\text{ internal } f: {^*\\!A}\\rightarrow {^*\\mathbb{R}} \\dots"
},
{
"math_id": 40,
"text": "\\forall"
}
] |
https://en.wikipedia.org/wiki?curid=637138
|
63724799
|
Schelling's model of segregation
|
Agent-based segregation model
Schelling's model of segregation is an agent-based model developed by economist Thomas Schelling. Schelling's model does not include outside factors that place pressure on agents to segregate, such as Jim Crow laws in the United States, but Schelling's work does demonstrate that even a "mild" in-group preference among individuals can lead to a highly segregated society via de facto segregation.
Model.
The original model is set on an formula_0 grid. Agents are split into two groups and occupy the spaces of the grid; only one agent can occupy a space at a time. Agents desire a fraction formula_1 of their neighborhood (in this case defined to be the eight adjacent cells) to be from the same group. Increasing formula_1 corresponds to increasing the agent's intolerance of outsiders.
Each round consists of agents checking their neighborhood to see if the fraction of neighbors formula_2 that matches their group (ignoring empty spaces) is greater than or equal to formula_3. If formula_4 then the agent will choose to relocate to a vacant spot where formula_5. This continues until every agent is satisfied. Not every agent is guaranteed to be satisfied, however, and in these cases it is of interest to study the patterns (if any) of the agent dynamics.
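One round of this process can be sketched in a few lines of code. The sketch below is only illustrative: the grid encoding (a list of lists with 0 for vacant cells and the values 1 and 2 for the two groups), the treatment of agents with no occupied neighbours, and the helper names are assumptions of the example rather than part of Schelling's specification.

import random

EMPTY = 0  # cell codes: 0 = vacant, 1 and 2 = the two groups

def neighbours(grid, i, j):
    """The (up to) eight adjacent cells, ignoring positions off the grid."""
    n = len(grid)
    return [grid[i + di][j + dj]
            for di in (-1, 0, 1) for dj in (-1, 0, 1)
            if (di, dj) != (0, 0) and 0 <= i + di < n and 0 <= j + dj < n]

def similarity(grid, i, j, group):
    """Fraction B of occupied neighbours belonging to `group` (1.0 if none are occupied)."""
    occupied = [c for c in neighbours(grid, i, j) if c != EMPTY]
    return 1.0 if not occupied else sum(c == group for c in occupied) / len(occupied)

def one_round(grid, B_a):
    """One sweep: every unsatisfied agent moves to a random vacant cell that satisfies it."""
    n = len(grid)
    moved = 0
    for i in range(n):
        for j in range(n):
            group = grid[i][j]
            if group == EMPTY or similarity(grid, i, j, group) >= B_a:
                continue  # vacant cell, or agent already satisfied
            vacancies = [(a, b) for a in range(n) for b in range(n) if grid[a][b] == EMPTY]
            random.shuffle(vacancies)
            for a, b in vacancies:
                if similarity(grid, a, b, group) >= B_a:
                    grid[a][b], grid[i][j] = group, EMPTY
                    moved += 1
                    break
    return moved  # repeat one_round until it returns 0 or an iteration cap is reached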
While studying the population dynamics of two groups of equal size, Schelling found a threshold formula_6 such that formula_7 leads to a random population configuration and formula_8 leads to a segregated population. The value of formula_6 was approximately formula_9. This points to how individuals with even a small amount of in-group preference can form segregated societies. There are different parameterizations and variants of the model, and a 'unified' approach has been presented that allows the simulations to explore the thresholds at which different segregation events occur.
Physical model analogies.
It has been observed that the fundamental dynamics of the agents resemble the mechanics used in the Ising model of ferromagnetism. The resemblance rests primarily on the way each occupied grid location calculates an aggregate measure based upon the similarities of the adjacent grid cells. If each agent produces a satisfaction value in formula_10 based upon its homophilic satisfaction threshold, then the summation of those values provides an indication of the segregation of the state that is analogous to the clustering of the aligned spins in a magnetic material. If each cell is a member of a group formula_11, then the local homogeneity can be found via
formula_12 where the one-dimensional position formula_13 can be translated into the grid coordinates (ni, nj). The state of whether the agent formula_14 moves to a randomly chosen empty grid cell position or 'remains' is then defined by:
formula_16
Each agent thus produces a binary value, so that for each grid configuration of agents of both groups a vector of 'remain' states (satisfied or not) can be produced. The overall satisfaction from the remain states of all the agents can then be computed: formula_17.
formula_15 then provides a measure of the amount of homogeneity (segregation) on the grid and, relative to its maximum possible value (the total number of agents), can be used as a 'density' of segregation over the simulation of movements. Following this approach, formula_15 can be interpreted as a macrostate whose density formula_18 can be estimated by sampling the grid space via the Monte Carlo method from random initialisations of the grid, producing a calculation of the entropy: formula_19 This allows a trace of the entropy to be computed over the iterations of the simulation, as is done with other physical systems.
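The aggregate measure just described can likewise be sketched in code. The following fragment reuses the grid encoding and the neighbours helper from the sketch above; expressing the satisfaction threshold as a neighbour count rather than a fraction is an assumption made here, since the text compares the local homogeneity directly with the agent's threshold.

def local_homogeneity(grid, i, j, group):
    """l(m_n): the number of the eight adjacent cells occupied by the same group."""
    return sum(c == group for c in neighbours(grid, i, j))

def remain_state(grid, i, j, threshold):
    """r(m_n): 1 if the agent at (i, j) is satisfied and would remain, else 0."""
    group = grid[i][j]
    if group == EMPTY:
        return 0
    return int(local_homogeneity(grid, i, j, group) >= threshold)

def total_remain(grid, threshold):
    """R: the summed remain states, a crude indicator of how segregated the grid is."""
    n = len(grid)
    return sum(remain_state(grid, i, j, threshold)
               for i in range(n) for j in range(n))

Sampling total_remain over many random initialisations of the grid gives an empirical estimate of how often each value of formula_15 occurs, which is the Monte Carlo estimate of formula_18 used in the entropy formula above.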
Broader model considerations.
The canonical Schelling model does not consider variables which may affect the agent's ability to relocate positions in the grid. One model extension investigates how the utility available to agents to move governs this action. It can explain some of the patterns seen where groups do not segregate, owing to the financial barrier that homogeneous zones produce as a result of high demand. The financial aspect has been investigated in further studies as well. Later work develops this concept of the importance of the monetary factor in the decision making, and uses it to extend the model with a dual dynamic where agents radiate their income store whenever a movement is made. This also provides a means to produce a more complete model where the trace of the entropy is non-decreasing, adding support to the view that social systems obey the second law of thermodynamics.
Schelling's model has also been studied from a game-theoretic perspective: In "Schelling games", agents strategically strive to maximize their utilities by relocating to a position with the highest fraction of neighboring agents from the same group.
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": " N \\times N "
},
{
"math_id": 1,
"text": " B_{\\textrm{a}} "
},
{
"math_id": 2,
"text": " B "
},
{
"math_id": 3,
"text": " B_{\\textrm{a}} "
},
{
"math_id": 4,
"text": " B < B_{\\textrm{a}} "
},
{
"math_id": 5,
"text": " B \\geq B_{\\textrm{a}} "
},
{
"math_id": 6,
"text": " B_{\\textrm{seg}} "
},
{
"math_id": 7,
"text": " B_{\\textrm{a}} < B_{\\textrm{seg}}"
},
{
"math_id": 8,
"text": " B_{\\textrm{a}} \\geq B_{\\textrm{seg}}"
},
{
"math_id": 9,
"text": "\\frac{1}{3}"
},
{
"math_id": 10,
"text": "[0,1]"
},
{
"math_id": 11,
"text": "m_n \\in {m_1,m_2,m_{empty}}"
},
{
"math_id": 12,
"text": "l(m_n) = \\sum_{i=-1}^{1}\\sum_{j=-1}^{1} \\left( \\delta_{m_{(ni,nj)},m_{(ni+i,nj+j)}} : i,j \\neq 0 \\right)"
},
{
"math_id": 13,
"text": "n"
},
{
"math_id": 14,
"text": "m_n"
},
{
"math_id": 15,
"text": "R"
},
{
"math_id": 16,
"text": "\nr\\left(m_n\\right) =\n\\begin{cases}\n\\left( l\\left(m_n\\right) \\geq B_a \\right), & \\text{if} : m_n \\notin { m_{empty} } \\\\\n 0, & \\text{if} : m_n \\in {m_{empty}}\n\\end{cases}\n"
},
{
"math_id": 17,
"text": "R = \\sum_{n=1}^{N}r(m_n)"
},
{
"math_id": 18,
"text": "\\Omega"
},
{
"math_id": 19,
"text": " S = k_B \\text{ln} \\Omega(R)."
}
] |
https://en.wikipedia.org/wiki?curid=63724799
|
63724991
|
Decentralized Privacy-Preserving Proximity Tracing
|
Proximity contact tracing protocol
Decentralized Privacy-Preserving Proximity Tracing (DP-3T, stylized as dp3t) is an open protocol developed in response to the COVID-19 pandemic to facilitate digital contact tracing of infected participants. The protocol, like competing protocol Pan-European Privacy-Preserving Proximity Tracing (PEPP-PT), uses Bluetooth Low Energy to track and log encounters with other users. The protocols differ in their reporting mechanism, with PEPP-PT requiring clients to upload contact logs to a central reporting server, whereas with DP-3T, the central reporting server never has access to contact logs nor is it responsible for processing and informing clients of contact. Because contact logs are never transmitted to third parties, it has major privacy benefits over the PEPP-PT approach; however, this comes at the cost of requiring more computing power on the client side to process infection reports.
The Apple/Google Exposure Notification project is based on similar principles as the DP-3T protocol, and supports a variant of it since May 2020. Huawei added a similar implementation of DP-3T to its Huawei Mobile Services APIs known as "Contact Shield" in June 2020.
The DP-3T SDK and calibration apps intend to support the Apple/Google API as soon as it is released to iOS and Android devices.
On the 21 April 2020, the Swiss Federal Office of Public Health announced that the Swiss national coronavirus contact tracing app will be based on DP-3T. On the 22 April 2020, the Austrian Red Cross, leading on the national digital contact tracing app, announced its migration to the approach of DP-3T. Estonia also confirmed that their app would be based on DP-3T. On April 28, 2020, it was announced that Finland was piloting a version of DP-3T called "Ketju". In Germany, a national app is being built upon DP-3T by SAP SE and Deutsche Telekom alongside CISPA, one of the organisations that authored the protocol. As of September 30, 2020, contact tracing apps using DP-3T are available in Austria, Belgium, Croatia, Germany, Ireland, Italy, the Netherlands, Portugal and Switzerland.
Overview.
The DP-3T protocol works off the basis of Ephemeral IDs (EphID), semi-random rotating strings that uniquely identify clients. When two clients encounter each other, they exchange EphIDs and store them locally in a contact log. Then, once a user tests positive for infection, a report is sent to a central server. Each client on the network then collects the reports from the server and independently checks their local contact logs for an EphID contained in the report. If a matching EphID is found, then the user has come in close contact with an infected patient, and is warned by the client. Since each device locally verifies contact logs, and thus contact logs are never transmitted to third parties, the central reporting server cannot by itself ascertain the identity or contact log of any client in the network. This is in contrast to competing protocols like PEPP-PT, where the central reporting server receives and processes client contact logs.
Ephemeral ID.
Similar to the TCN Protocol and its Temporary Contact Numbers, the DP-3T protocol makes use of 16 byte "Ephemeral IDs" (EphID) to uniquely identify devices in the proximity of a client. These EphIDs are logged locally on a receiving client's device and are never transmitted to third parties.
To generate an EphID, first a client generates a secret key that rotates daily (formula_0) by computing formula_1, where formula_2 is a cryptographic hash function such as SHA-256. formula_3 is calculated by a standard secret key algorithm such as Ed25519. The client will use formula_0 during day formula_4 to generate a list of EphIDs. At the beginning of the day, a client generates a local list of size formula_5 new EphIDs to broadcast throughout the day, where formula_6 is the lifetime of an EphID in minutes. To prevent malicious third parties from establishing patterns of movement by tracing static identifiers over a large area, EphIDs are rotated frequently. Given the secret day key formula_0, each device computes formula_7, where formula_8 is a global fixed string, formula_9 is a pseudo-random function like HMAC-SHA256, and formula_10 is a stream cipher producing formula_11 bytes. This stream is then split into 16-byte chunks and randomly sorted to obtain the EphIDs of the day.
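A minimal sketch of this key schedule and EphID derivation, using only the Python standard library, is given below. The epoch length, the broadcast-key constant, the use of a hash counter as a stand-in for the stream cipher, and the random initial key are assumptions of the example and are not values fixed by the protocol.

import hashlib, hmac, secrets, random

EPHID_LEN = 16                           # bytes per EphID
EPOCH_MINUTES = 15                       # assumed EphID lifetime l, in minutes
PER_DAY = (24 * 60) // EPOCH_MINUTES     # n = (24*60)/l EphIDs per day
BROADCAST_KEY = b"DP-3T broadcast key"   # stands in for the fixed global string BK

def next_day_key(sk_prev: bytes) -> bytes:
    """SK_t = H(SK_{t-1}), with H taken to be SHA-256."""
    return hashlib.sha256(sk_prev).digest()

def ephids_for_day(sk_t: bytes) -> list:
    """Derive the day's EphIDs: expand PRF(SK_t, BK) and cut it into 16-byte chunks."""
    prf_out = hmac.new(sk_t, BROADCAST_KEY, hashlib.sha256).digest()  # PRF = HMAC-SHA256
    # Stand-in PRG: expand prf_out to at least n*16 bytes with a hash counter
    # (the protocol itself specifies a stream cipher here).
    blocks = (PER_DAY * EPHID_LEN) // 32 + 1
    stream = b"".join(hashlib.sha256(prf_out + i.to_bytes(4, "big")).digest()
                      for i in range(blocks))
    chunks = [stream[k:k + EPHID_LEN] for k in range(0, PER_DAY * EPHID_LEN, EPHID_LEN)]
    random.shuffle(chunks)               # the broadcast order is randomised
    return chunks

sk0 = secrets.token_bytes(32)            # initial secret day key SK_0 (placeholder generation)
sk1 = next_day_key(sk0)
print(len(ephids_for_day(sk1)), "EphIDs of", EPHID_LEN, "bytes each")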
Technical specification.
The DP-3T protocol is made up of two separate responsibilities: tracking and logging close range encounters with other users (device handshake), and the reporting of those encounters such that other clients can determine if they have been in contact with an infected patient (infection reporting). Like most digital contact tracing protocols, the device handshake uses Bluetooth Low Energy to find and exchange details with local clients, and the infection reporting stage uses HTTPS to upload a report to a central reporting server. Additionally, like other decentralized reporting protocols, the central reporting server never has access to any client's contact logs; rather, the report is structured such that clients can individually derive contact from the report.
Device handshake.
In order to find and communicate with clients in proximity of a device, the protocol makes use of both the server and client modes of Bluetooth LE, switching between the two frequently. In server mode the device advertises its EphID to be read by clients, with clients scanning for servers. When a client and server meet, the client reads the EphID and subsequently writes its own EphID to the server. The two devices then store the encounter in their respective contact logs in addition to a coarse timestamp and signal strength. The signal strength is later used as part of the infection reporting process to estimate the distance between an infected patient and the user.
Infection reporting.
When reporting infection, there exists a central reporting server controlled by the local health authority. Before a user can submit a report, the health authority must first confirm infection and generate a code authorizing the client to upload the report. The health authority additionally instructs the patient on which day their report should begin (denoted as formula_4). The client then uploads the pair formula_0 and formula_4 to the central reporting server, which other clients in the network download at a later date. By using the same algorithm used to generate the original EphIDs, clients can reproduce every EphID used for the period past and including formula_4, which they then check against their local contact log to determine whether the user has been in close proximity to an infected patient.
In the entire protocol, the health authority never has access to contact logs, and only serves to test patients and authorize report submissions.
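Continuing the sketch above, the client-side check can be expressed in a few lines; the length of the report window and the set-based contact log are simplifications of the example, and a real implementation would also weigh the recorded timestamps and signal strengths.

def ephids_since(report_sk: bytes, days: int) -> set:
    """Reproduce every EphID the reporting user broadcast from day t onward."""
    out, sk = set(), report_sk
    for _ in range(days):
        out.update(ephids_for_day(sk))   # reuse the derivation from the sketch above
        sk = next_day_key(sk)            # advance SK_t to SK_{t+1}
    return out

def exposed(contact_log: set, report_sk: bytes, days: int) -> bool:
    """True if any locally observed EphID matches one derivable from the report."""
    return not contact_log.isdisjoint(ephids_since(report_sk, days))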
Epidemiological analysis.
When a user installs a DP-3T app, they are asked if they want to opt in to sharing data with epidemiologists. If the user consents, when they are confirmed to have been within close contact of an infected patient the respective contact log entry containing the encounter is scheduled to be sent to a central statistics server. In order to prevent malicious third parties from discovering potential infections by detecting these uploads, reports are sent at regular intervals, with indistinguishable dummy reports sent when there is no data to transmit.
Health authority cooperation.
To facilitate compatibility between DP-3T apps administered by separate health authorities, apps maintain a local list of the regions a user has visited. Regions are large areas directly corresponding to health authority jurisdiction; the exact location is not recorded. The app will later connect these regions to their respective foreign central reporting server, and fetch reports from these servers in addition to its normal home reporting server. Apps will also submit reports to these foreign reporting servers if the user tests positive for infection.
Attacks on DP-3T and criticism.
Cryptography and security scholar Serge Vaudenay, analyzing the security of DP-3T, argued that:
<templatestyles src="Template:Blockquote/styles.css" />some privacy protection measurements by DP3T may have the opposite affect ["sic"] of what they were intended to. Specifically, sick and reported people may be deanonymized, private encounters may be revealed, and people may be coerced to reveal the private data they collect.
Vaudenay's work presents several attacks against DP-3T and similar systems. In response, the DP-3T group claim that out of twelve risks Vaudenay presents, eight are also present in centralized systems, three do not work, and one, which involves physical access to the phone, works but can be mitigated.
In a subsequent work Vaudenay reviews attacks against both centralized and decentralized tracing systems and referring to identification attacks of diagnosed people concludes that:
<templatestyles src="Template:Blockquote/styles.css" />By comparing centralized and decentralized architectures, we observe that attacks against decentralized systems are undetectable, can be done at a wide scale, and that the proposed countermeasures are, at best, able to mitigate attacks in a limited number of scenarios. Contrarily, centralized systems offer many countermeasures, by accounting and auditing.
In the same work Vaudenay advocates that, since neither the centralized nor the decentralized approaches offer sufficient level of privacy protection, different solutions should be explored, in particular suggesting the ConTra Corona, Epione and Pronto-C2 systems as a "third way".
Tang surveys the major digital contact tracing systems and shows that DP-3T is subject to what he calls "targeted identification attacks".
Theoretical attacks on DP-3T have been simulated, showing that persistent tracking of users of the first version of the DP-3T system who have voluntarily uploaded their identifiers is easy for any third party who can install a large fleet of Bluetooth Low Energy devices. This attack leverages the linkability of a user during a day, and is therefore possible within a day on all users of some centralized systems such as the system proposed in the United Kingdom, but it does not work against 'unlinkable' versions of DP-3T in which infected users' identifiers are not transmitted using a compact representation such as a key or seed.
|
[
{
"math_id": 0,
"text": "SK_t"
},
{
"math_id": 1,
"text": "SK_t = H(SK_{t-1})"
},
{
"math_id": 2,
"text": "H()"
},
{
"math_id": 3,
"text": "SK_0"
},
{
"math_id": 4,
"text": "t"
},
{
"math_id": 5,
"text": "n=(24*60)/l"
},
{
"math_id": 6,
"text": "l"
},
{
"math_id": 7,
"text": "S\\_EphID(BK) = PRG(PRF(SK_t, BK))"
},
{
"math_id": 8,
"text": "BK"
},
{
"math_id": 9,
"text": "PRF()"
},
{
"math_id": 10,
"text": "PRG()"
},
{
"math_id": 11,
"text": "n * 16"
}
] |
https://en.wikipedia.org/wiki?curid=63724991
|
63731291
|
Restricted power series
|
Formal power series with coefficients tending to 0
In algebra, the ring of restricted power series is the subring of a formal power series ring that consists of power series whose coefficients approach zero as degree goes to infinity. Over a non-archimedean complete field, the ring is also called a Tate algebra. Quotient rings of the ring are used in the study of a formal algebraic space as well as rigid analysis, the latter over non-archimedean complete fields.
Over a discrete topological ring, the ring of restricted power series coincides with a polynomial ring; thus, in this sense, the notion of "restricted power series" is a generalization of a polynomial.
Definition.
Let "A" be a linearly topologized ring, separated and complete and formula_0 the fundamental system of open ideals. Then the ring of restricted power series is defined as the projective limit of the polynomial rings over formula_1:
formula_2.
In other words, it is the completion of the polynomial ring formula_3 with respect to the filtration formula_4. Sometimes this ring of restricted power series is also denoted by formula_5.
Clearly, the ring formula_6 can be identified with the subring of the formal power series ring formula_7 that consists of series formula_8 with coefficients formula_9; i.e., each formula_10 contains all but finitely many coefficients formula_11.
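For a concrete illustration (the base ring here is chosen only for the example), take "A" to be the ring of "p"-adic integers with its "p"-adic topology. Then
\sum_{k \ge 0} p^k x^k \in A \langle x \rangle, \qquad \sum_{k \ge 0} x^k \notin A \langle x \rangle,
since the coefficients of the first series tend to 0 while those of the second do not; and over a ring carrying the discrete topology the same condition forces all but finitely many coefficients to vanish, recovering the identification with the polynomial ring mentioned above.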
Also, the ring satisfies (and is in fact characterized by) the following universal property: for (1) each continuous ring homomorphism formula_12 to a linearly topologized ring formula_13 that is separated and complete, and (2) any elements formula_14 in formula_13, there exists a unique continuous ring homomorphism
formula_15
extending formula_12.
Tate algebra.
In rigid analysis, when the base ring "A" is the valuation ring of a complete non-archimedean field formula_16, the ring of restricted power series tensored with formula_17,
formula_18
is called a Tate algebra, named for John Tate. It is equivalently the subring of formal power series formula_19 which consists of series convergent on formula_20, where formula_21 is the valuation ring in the algebraic closure formula_22.
The maximal spectrum of formula_23 is then a rigid-analytic space that models an affine space in rigid geometry.
Define the Gauss norm of formula_24 in formula_23 by
formula_25
This makes formula_23 a Banach algebra over "k"; i.e., a normed algebra that is complete as a metric space. With this norm, any ideal formula_26 of formula_23 is closed and thus, if "I" is radical, the quotient formula_27 is also a (reduced) Banach algebra called an affinoid algebra.
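As a worked illustration of the Gauss norm (the element and the value of the absolute value are chosen only for the example), suppose the ground field contains an element \pi with |\pi| = 1/2. Then
f = \pi + \xi_1 + \pi^2 \xi_2^3 \in T_2, \qquad \|f\| = \max\{|\pi|, |1|, |\pi^2|\} = \max\{\tfrac{1}{2}, 1, \tfrac{1}{4}\} = 1,
so the norm of f is determined by its largest coefficient.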
Key results for formula_23 include the Weierstrass division theorem, the Weierstrass preparation theorem, and a Noether normalization theorem.
As a consequence of the division and preparation theorems and of Noether normalization, formula_23 is a Noetherian unique factorization domain of Krull dimension "n". An analog of Hilbert's Nullstellensatz is valid: the radical of an ideal is the intersection of all maximal ideals containing the ideal (we say the ring is Jacobson).
Results.
Analogues of results for polynomial rings, such as Hensel's lemma and division algorithms (or the theory of Gröbner bases), also hold for the ring of restricted power series. Throughout the section, let "A" denote a linearly topologized ring, separated and complete.
Notes.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "\\{ I_{\\lambda} \\}"
},
{
"math_id": 1,
"text": "A/I_{\\lambda}"
},
{
"math_id": 2,
"text": "A \\langle x_1, \\dots, x_n \\rangle = \\varprojlim_{\\lambda} A/I_{\\lambda}[x_1, \\dots, x_n]"
},
{
"math_id": 3,
"text": "A[x_1, \\dots, x_n]"
},
{
"math_id": 4,
"text": "\\{ I_{\\lambda}[x_1, \\dots, x_n] \\}"
},
{
"math_id": 5,
"text": "A \\{ x_1, \\dots, x_n \\}"
},
{
"math_id": 6,
"text": "A \\langle x_1, \\dots, x_n \\rangle"
},
{
"math_id": 7,
"text": "A[[x_1, \\dots, x_n]]"
},
{
"math_id": 8,
"text": "\\sum c_{\\alpha} x^{\\alpha}"
},
{
"math_id": 9,
"text": "c_{\\alpha} \\to 0"
},
{
"math_id": 10,
"text": "I_\\lambda"
},
{
"math_id": 11,
"text": "c_{\\alpha}"
},
{
"math_id": 12,
"text": "A \\to B"
},
{
"math_id": 13,
"text": "B"
},
{
"math_id": 14,
"text": "b_1, \\dots, b_n"
},
{
"math_id": 15,
"text": "A \\langle x_1, \\dots, x_n \\rangle \\to B, \\, x_i \\mapsto b_i"
},
{
"math_id": 16,
"text": "(K, | \\cdot |)"
},
{
"math_id": 17,
"text": "K"
},
{
"math_id": 18,
"text": "T_n = K \\langle \\xi_1, \\dots \\xi_n \\rangle = A \\langle \\xi_1, \\dots, \\xi_n \\rangle \\otimes_A K"
},
{
"math_id": 19,
"text": "k[[\\xi_1, \\dots, \\xi_n]]"
},
{
"math_id": 20,
"text": "\\mathfrak{o}_{\\overline{k}}^n"
},
{
"math_id": 21,
"text": "\\mathfrak{o}_{\\overline{k}} := \\{x \\in \\overline{k} : |x| \\leq 1\\}"
},
{
"math_id": 22,
"text": "\\overline{k}"
},
{
"math_id": 23,
"text": "T_n"
},
{
"math_id": 24,
"text": "f = \\sum a_{\\alpha} \\xi^{\\alpha} "
},
{
"math_id": 25,
"text": "\\|f\\| = \\max_{\\alpha} |a_\\alpha|."
},
{
"math_id": 26,
"text": "I"
},
{
"math_id": 27,
"text": "T_n/I"
},
{
"math_id": 28,
"text": "g \\in T_n"
},
{
"math_id": 29,
"text": "\\xi_n"
},
{
"math_id": 30,
"text": "g = \\sum_{\\nu = 0}^{\\infty} g_{\\nu} \\xi_n^{\\nu}"
},
{
"math_id": 31,
"text": "g_{\\nu} \\in T_{n-1}"
},
{
"math_id": 32,
"text": "g_s"
},
{
"math_id": 33,
"text": "| g_s | = \\|g\\| > |g_v |"
},
{
"math_id": 34,
"text": "\\nu > s"
},
{
"math_id": 35,
"text": "f \\in T_n"
},
{
"math_id": 36,
"text": "q \\in T_n"
},
{
"math_id": 37,
"text": "r \\in T_{n-1}[\\xi_n]"
},
{
"math_id": 38,
"text": "< s"
},
{
"math_id": 39,
"text": "f = qg + r."
},
{
"math_id": 40,
"text": "g"
},
{
"math_id": 41,
"text": "f \\in T_{n-1}[\\xi_n]"
},
{
"math_id": 42,
"text": "s"
},
{
"math_id": 43,
"text": "u \\in T_n"
},
{
"math_id": 44,
"text": "g = f u"
},
{
"math_id": 45,
"text": "\\mathfrak{a} \\subset T_n"
},
{
"math_id": 46,
"text": "T_d \\hookrightarrow T_n/\\mathfrak{a}"
},
{
"math_id": 47,
"text": "\\mathfrak m \\subset A"
},
{
"math_id": 48,
"text": "\\varphi : A \\to k := A/\\mathfrak{m}"
},
{
"math_id": 49,
"text": "F"
},
{
"math_id": 50,
"text": "A\\langle \\xi \\rangle"
},
{
"math_id": 51,
"text": "\\varphi(F) = gh"
},
{
"math_id": 52,
"text": "g \\in k[\\xi]"
},
{
"math_id": 53,
"text": "h \\in k\\langle \\xi \\rangle"
},
{
"math_id": 54,
"text": "g, h"
},
{
"math_id": 55,
"text": "k \\langle \\xi \\rangle"
},
{
"math_id": 56,
"text": "G"
},
{
"math_id": 57,
"text": "A[\\xi]"
},
{
"math_id": 58,
"text": "H"
},
{
"math_id": 59,
"text": "F = G H, \\, \\varphi(G) = g, \\varphi(H) = h"
}
] |
https://en.wikipedia.org/wiki?curid=63731291
|
63735167
|
Dual system
|
In mathematics, a dual system, dual pair or a duality over a field formula_0 is a triple formula_1 consisting of two vector spaces, formula_2 and formula_3, over formula_0 and a non-degenerate bilinear map formula_4.
In mathematics, duality is the study of dual systems and is important in functional analysis. Duality plays crucial roles in quantum mechanics because it has extensive applications to the theory of Hilbert spaces.
Definition, notation, and conventions.
Pairings.
A pairing or pair over a field formula_0 is a triple formula_5 which may also be denoted by formula_6 consisting of two vector spaces formula_2 and formula_3 over formula_0 and a bilinear map formula_4 called the bilinear map associated with the pairing, or more simply called the pairing's map or its bilinear form. The examples here only describe when formula_0 is either the real numbers formula_7 or the complex numbers formula_8, but the mathematical theory is general.
For every formula_9, define
formula_10
and for every formula_11 define
formula_12
Every formula_13 is a linear functional on formula_3 and every formula_14 is a linear functional on formula_2. Therefore both
formula_15
form vector spaces of linear functionals.
It is common practice to write formula_16 instead of formula_17, and in some cases the pairing may be denoted by formula_18 rather than formula_19. However, this article will reserve the use of formula_20 for the canonical evaluation map (defined below) so as to avoid confusion for readers not familiar with this subject.
Dual pairings.
A pairing formula_1 is called a dual system, a dual pair, or a duality over formula_0 if the bilinear form formula_21 is non-degenerate, which means that it satisfies the following two separation axioms: if formula_21 vanishes at a given vector of formula_2 against every vector of formula_3 then that vector of formula_2 is zero, and if formula_21 vanishes at a given vector of formula_3 against every vector of formula_2 then that vector of formula_3 is zero.
In this case formula_21 is non-degenerate, and one can say that formula_21 places formula_2 and formula_3 in duality (or, redundantly but explicitly, in separated duality), and formula_21 is called the duality pairing of the triple formula_1.
Total subsets.
A subset formula_31 of formula_3 is called total if for every formula_9, formula_32 implies formula_33
A total subset of formula_2 is defined analogously (see footnote). Thus formula_2 separates points of formula_3 if and only if formula_2 is a total subset of formula_2, and similarly for formula_3.
Orthogonality.
The vectors formula_34 and formula_35 are orthogonal, written formula_36, if formula_37. Two subsets formula_38 and formula_39 are orthogonal, written formula_40, if formula_41; that is, if formula_42 for all formula_43 and formula_44. Orthogonality of a subset to a single vector is defined analogously.
The orthogonal complement or annihilator of a subset formula_38 is
formula_45
Thus formula_46 is a total subset of formula_2 if and only if formula_47 equals formula_48.
Polar sets.
Given a triple formula_1 defining a pairing over formula_0, the absolute polar set or polar set of a subset formula_49 of formula_2 is the set:
formula_50
Symmetrically, the absolute polar set or polar set of a subset formula_51 of formula_3 is denoted by formula_52 and defined by
formula_53
To use bookkeeping that helps keep track of the anti-symmetry of the two sides of the duality, the absolute polar of a subset formula_51 of formula_3 may also be called the absolute prepolar or prepolar of formula_51 and then may be denoted by formula_54
The polar formula_52 is necessarily a convex set containing formula_55 where if formula_51 is balanced then so is formula_52 and if formula_51 is a vector subspace of formula_2 then so too is formula_52 a vector subspace of formula_56
If formula_49 is a vector subspace of formula_57 then formula_58 and this is also equal to the real polar of formula_59 If formula_60 then the bipolar of formula_49, denoted formula_61, is the polar of the orthogonal complement of formula_49, i.e., the set formula_62 Similarly, if formula_63 then the bipolar of formula_51 is formula_64
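A simple worked example (with a finite-dimensional pairing chosen only for illustration) is the pairing of the real coordinate space with itself by the dot product. In that case
\{x \in \mathbb{R}^n : \|x\|_2 \le 1\}^{\circ} = \{y \in \mathbb{R}^n : \|y\|_2 \le 1\}, \qquad W^{\circ} = W^{\perp} \text{ for any linear subspace } W \subseteq \mathbb{R}^n,
the first equality following from the Cauchy–Schwarz inequality and the second from the fact that a linear functional bounded on a subspace must vanish on it.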
Dual definitions and results.
Given a pairing formula_5 define a new pairing formula_65 where formula_66 for all formula_9 and formula_26.
There is a consistent theme in duality theory that any definition for a pairing formula_1 has a corresponding dual definition for the pairing formula_67
Convention and Definition: Given any definition for a pairing formula_5 one obtains a dual definition by applying it to the pairing formula_67 These conventions also apply to theorems.
For instance, if "formula_2 distinguishes points of formula_3" (resp, "formula_31 is a total subset of formula_3") is defined as above, then this convention immediately produces the dual definition of "formula_3 distinguishes points of formula_2" (resp, "formula_31 is a total subset of formula_2").
This following notation is almost ubiquitous and allows us to avoid assigning a symbol to formula_68
Convention and Notation: If a definition and its notation for a pairing formula_1 depends on the order of formula_2 and formula_3 (for example, the definition of the Mackey topology formula_69 on formula_2) then by switching the order of formula_2 and formula_70 then it is meant that definition applied to formula_65 (continuing the same example, the topology formula_71 would actually denote the topology formula_72).
For another example, once the weak topology on formula_2 is defined, denoted by formula_73, then this dual definition would automatically be applied to the pairing formula_65 so as to obtain the definition of the weak topology on formula_3, and this topology would be denoted by formula_74 rather than formula_75.
Identification of formula_76 with formula_77.
Although it is technically incorrect and an abuse of notation, this article will adhere to the nearly ubiquitous convention of treating a pairing formula_1 interchangeably with formula_65 and also of denoting formula_65 by formula_78
Examples.
Restriction of a pairing.
Suppose that formula_1 is a pairing, formula_79 is a vector subspace of formula_57 and formula_80 is a vector subspace of formula_3. Then the restriction of formula_1 to formula_81 is the pairing formula_82 If formula_1 is a duality, then it's possible for a restriction to fail to be a duality (e.g. if formula_83 and formula_84).
This article will use the common practice of denoting the restriction formula_85 by formula_86
Canonical duality on a vector space.
Suppose that formula_2 is a vector space and let formula_87 denote the algebraic dual space of formula_2 (that is, the space of all linear functionals on formula_2).
There is a canonical duality formula_88 where formula_89 which is called the evaluation map or the natural or canonical bilinear functional on formula_90
Note in particular that for any formula_91 formula_92 is just another way of denoting formula_93; i.e. formula_94
If formula_80 is a vector subspace of formula_87, then the restriction of formula_88 to formula_95 is called the canonical pairing where if this pairing is a duality then it is instead called the canonical duality. Clearly, formula_2 always distinguishes points of formula_80, so the canonical pairing is a dual system if and only if formula_80 separates points of formula_96
The following notation is now nearly ubiquitous in duality theory.
The evaluation map will be denoted by formula_97 (rather than by formula_98) and formula_99 will be written rather than formula_100
Assumption: As is common practice, if formula_2 is a vector space and formula_80 is a vector space of linear functionals on formula_57 then unless stated otherwise, it will be assumed that they are associated with the canonical pairing formula_101
If formula_80 is a vector subspace of formula_87 then formula_2 distinguishes points of formula_80 (or equivalently, formula_102 is a duality) if and only if formula_80 distinguishes points of formula_57 or equivalently if formula_80 is total (that is, formula_103 for all formula_104 implies formula_23).
Canonical duality on a topological vector space.
Suppose formula_2 is a topological vector space (TVS) with continuous dual space formula_105
Then the restriction of the canonical duality formula_88 to formula_2 × formula_106 defines a pairing formula_107 for which formula_2 separates points of formula_105
If formula_106 separates points of formula_2 (which is true if, for instance, formula_2 is a Hausdorff locally convex space) then this pairing forms a duality.
Assumption: As is commonly done, whenever formula_2 is a TVS, then unless indicated otherwise, it will be assumed without comment that it's associated with the canonical pairing formula_108
Polars and duals of TVSs.
The following result shows that the continuous linear functionals on a TVS are exactly those linear functionals that are bounded on a neighborhood of the origin.
<templatestyles src="Math_theorem/styles.css" />
Theorem — Let formula_2 be a TVS with algebraic dual
formula_87 and let formula_109 be a basis of neighborhoods of formula_2 at the origin.
Under the canonical duality formula_110 the continuous dual space of formula_2 is the union of all formula_111 as formula_80 ranges over formula_109 (where the polars are taken in
formula_87).
Inner product spaces and complex conjugate spaces.
A pre-Hilbert space formula_112 is a dual pairing if and only if formula_113 is a vector space over formula_7 or formula_113 has dimension formula_114 Here it is assumed that the sesquilinear form formula_20 is conjugate homogeneous in its second coordinate and homogeneous in its first coordinate.
Suppose that formula_112 is a complex pre-Hilbert space with scalar multiplication denoted as usual by juxtaposition or by a dot formula_115
Define the map
formula_116
where the right-hand side uses the scalar multiplication of formula_117
Let formula_118 denote the complex conjugate vector space of formula_119 where formula_118 denotes the additive group of formula_120 (so vector addition in formula_118 is identical to vector addition in formula_113) but with scalar multiplication in formula_118 being the map formula_121 (instead of the scalar multiplication that formula_113 is endowed with).
The map formula_122 defined by formula_123 is linear in both coordinates and so formula_124 forms a dual pairing.
Weak topology.
Suppose that formula_1 is a pairing of vector spaces over formula_125
If formula_39 then the weak topology on formula_2 induced by formula_31 (and formula_21) is the weakest TVS topology on formula_57 denoted by formula_126 or simply formula_127 making all maps formula_30 continuous as formula_35 ranges over formula_128 If formula_31 is not clear from context then it should be assumed to be all of formula_70 in which case it is called the weak topology on formula_2 (induced by formula_3).
The notation formula_129 formula_130 or (if no confusion could arise) simply formula_131 is used to denote formula_2 endowed with the weak topology formula_132
Importantly, the weak topology depends entirely on the function formula_133 the usual topology on formula_134 and formula_2's vector space structure but not on the algebraic structures of formula_56
Similarly, if formula_38 then the dual definition of the weak topology on formula_3 induced by formula_46 (and formula_21), which is denoted by formula_135 or simply formula_136 (see footnote for details).
Definition and Notation: If "formula_73" is attached to a topological definition (e.g. formula_73-converges, formula_73-bounded, formula_137 etc.) then it means that definition when the first space (i.e. formula_2) carries the formula_73 topology. Mention of formula_21 or even formula_2 and formula_3 may be omitted if no confusion arises. So, for instance, if a sequence formula_138 in formula_3 "formula_139-converges" or "weakly converges" then this means that it converges in formula_140 whereas if it were a sequence in formula_2, then this would mean that it converges in formula_141.
The topology formula_73 is locally convex since it is determined by the family of seminorms formula_142 defined by formula_143 as formula_35 ranges over formula_56
If formula_9 and formula_144 is a net in formula_57 then formula_144 formula_73-converges to formula_34 if formula_144 converges to formula_34 in formula_145
A net formula_144 formula_73-converges to formula_34 if and only if for all formula_11 formula_146 converges to formula_147
If formula_148 is a sequence of orthonormal vectors in Hilbert space, then formula_148 converges weakly to 0 but does not norm-converge to 0 (or any other vector).
If formula_1 is a pairing and formula_80 is a proper vector subspace of formula_3 such that formula_149 is a dual pair, then formula_150 is strictly coarser than formula_151
Bounded subsets.
A subset formula_31 of formula_2 is formula_73-bounded if and only if
formula_152 where formula_153
Hausdorffness.
If formula_1 is a pairing then the following are equivalent:
Weak representation theorem.
The following theorem is of fundamental importance to duality theory because it completely characterizes the continuous dual space of formula_145
<templatestyles src="Math_theorem/styles.css" />
Weak representation theorem —
Let formula_1 be a pairing over the field formula_125 Then the continuous dual space of formula_141 is formula_155 Furthermore,
Consequently, the continuous dual space of formula_141 is
formula_157
With respect to the canonical pairing, if formula_2 is a TVS whose continuous dual space formula_106 separates points on formula_2 (i.e. such that formula_158 is Hausdorff, which implies that formula_2 is also necessarily Hausdorff) then the continuous dual space of formula_159 is equal to the set of all "evaluation at a point formula_34" maps as formula_34 ranges over formula_2 (i.e. the map that sends formula_160 to formula_161).
This is commonly written as
formula_162
This very important fact is why results for polar topologies on continuous dual spaces, such as the strong dual topology formula_163 on formula_106 for example, can also often be applied to the original TVS formula_2; for instance, formula_2 being identified with formula_164 means that the topology formula_165 on formula_164 can instead be thought of as a topology on formula_96
Moreover, if formula_106 is endowed with a topology that is finer than formula_166 then the continuous dual space of formula_106 will necessarily contain formula_164 as a subset.
So for instance, when formula_106 is endowed with the strong dual topology (and so is denoted by formula_167) then
formula_168
which (among other things) allows for formula_2 to be endowed with the subspace topology induced on it by, say, the strong dual topology formula_169 (this topology is also called the strong bidual topology and it appears in the theory of reflexive spaces: the Hausdorff locally convex TVS formula_2 is said to be semi-reflexive if formula_170 and it will be called reflexive if in addition the strong bidual topology formula_169 on formula_2 is equal to formula_2's original/starting topology).
Orthogonals, quotients, and subspaces.
If formula_1 is a pairing then for any subset formula_31 of formula_2:
If formula_2 is a normed space then under the canonical duality, formula_171 is norm closed in formula_106 and formula_172 is norm closed in formula_96
Subspaces.
Suppose that formula_79 is a vector subspace of formula_2 and let formula_173 denote the restriction of formula_1 to formula_174
The weak topology formula_175 on formula_79 is identical to the subspace topology that formula_79 inherits from formula_145
Also, formula_176 is a paired space (where formula_177 means formula_178) where formula_179 is defined by
formula_180
The topology formula_181 is equal to the subspace topology that formula_79 inherits from formula_145
Furthermore, if formula_141 is a dual system then so is formula_182
Quotients.
Suppose that formula_79 is a vector subspace of formula_96
Then formula_183is a paired space where formula_184 is defined by
formula_185
The topology formula_186 is identical to the usual quotient topology induced by formula_141 on formula_187
Polars and the weak topology.
If formula_2 is a locally convex space and if formula_113 is a subset of the continuous dual space formula_188 then formula_113 is formula_166-bounded if and only if formula_189 for some barrel formula_51 in formula_96
The following results are important for defining polar topologies.
If formula_1 is a pairing and formula_190 then:
If formula_1 is a pairing and formula_192 is a locally convex topology on formula_2 that is consistent with duality, then a subset formula_51 of formula_2 is a barrel in formula_193 if and only if formula_51 is the polar of some formula_74-bounded subset of formula_56
Transposes.
Transposes of a linear map with respect to pairings.
Let formula_1 and formula_194 be pairings over formula_0 and let formula_195 be a linear map.
For all formula_196 let formula_197 be the map defined by formula_198
It is said that formula_199's transpose or adjoint is well-defined if the following conditions are satisfied: (1) formula_2 distinguishes points of formula_3, so that distinct elements of formula_3 induce distinct linear functionals on formula_2, and (2) for every formula_196 the linear functional formula_197 on formula_2 can be represented by an element of formula_3.
In this case, for any formula_203 there exists (by condition 2) a unique (by condition 1) formula_26 such that formula_204, where this element of formula_3 will be denoted by formula_205
This defines a linear map
formula_206
called the transpose or adjoint of formula_199 with respect to formula_1 and formula_194 (this should not be confused with the Hermitian adjoint).
It is easy to see that the two conditions mentioned above (i.e. for "the transpose is well-defined") are also necessary for formula_207 to be well-defined.
For every formula_196 the defining condition for formula_208 is
formula_209
that is,
formula_210 for all formula_211
By the conventions mentioned at the beginning of this article, this also defines the transpose of linear maps of the form formula_212
formula_214
formula_215
formula_216 etc. (see footnote).
Properties of the transpose.
Throughout, formula_1 and formula_194 will be pairings over formula_0 and formula_195 will be a linear map whose transpose formula_217 is well-defined.
These results hold when the real polar is used in place of the absolute polar.
If formula_2 and formula_3 are normed spaces under their canonical dualities and if formula_240 is a continuous linear map, then formula_241
Weak continuity.
A linear map formula_195 is weakly continuous (with respect to formula_1 and formula_194) if formula_242 is continuous.
The following result shows that the existence of the transpose map is intimately tied to the weak topology.
<templatestyles src="Math_theorem/styles.css" />
Proposition — Assume that formula_2 distinguishes points of formula_3 and formula_195 is a linear map.
Then the following are equivalent:
If formula_199 is weakly continuous then
Weak topology and the canonical duality.
Suppose that formula_2 is a vector space and that formula_87 is its algebraic dual.
Then every formula_246-bounded subset of formula_2 is contained in a finite dimensional vector subspace and every vector subspace of formula_2 is formula_246-closed.
Weak completeness.
If formula_141 is a complete topological vector space, then formula_2 is said to be formula_73-complete or (if no ambiguity can arise) weakly-complete.
There exist Banach spaces that are not weakly-complete (despite being complete in their norm topology).
If formula_2 is a vector space then under the canonical duality, formula_247 is complete.
Conversely, if formula_213 is a Hausdorff locally convex TVS with continuous dual space formula_248 then formula_249 is complete if and only if formula_250; that is, if and only if the map formula_251 defined by sending formula_203 to the evaluation map at formula_252 (i.e. formula_253) is a bijection.
In particular, with respect to the canonical duality, if formula_3 is a vector subspace of formula_87 such that formula_3 separates points of formula_57 then formula_254 is complete if and only if formula_255
Said differently, there does not exist a proper vector subspace formula_256 of formula_87 such that formula_257 is Hausdorff and formula_3 is complete in the weak-* topology (i.e. the topology of pointwise convergence).
Consequently, when the continuous dual space formula_106 of a Hausdorff locally convex TVS formula_2 is endowed with the weak-* topology, then formula_258 is complete if and only if formula_259 (that is, if and only if every linear functional on formula_2 is continuous).
Identification of "Y" with a subspace of the algebraic dual.
If formula_2 distinguishes points of formula_3 and if formula_213 denotes the range of the injection formula_154 then formula_213 is a vector subspace of the algebraic dual space of formula_2 and the pairing formula_1 becomes canonically identified with the canonical pairing formula_260 (where formula_261 is the natural evaluation map).
In particular, in this situation it will be assumed without loss of generality that formula_3 is a vector subspace of formula_2's algebraic dual and formula_21 is the evaluation map.
Convention: Often, whenever formula_154 is injective (especially when formula_1 forms a dual pair) then it is common practice to assume without loss of generality that formula_3 is a vector subspace of the algebraic dual space of formula_57 that formula_21 is the natural evaluation map, and also denote formula_3 by formula_105
In a completely analogous manner, if formula_3 distinguishes points of formula_2 then it is possible for formula_2 to be identified as a vector subspace of formula_3's algebraic dual space.
Algebraic adjoint.
In the special case where the dualities are the canonical dualities formula_262 and formula_263 the transpose of a linear map formula_195 is always well-defined.
This transpose is called the algebraic adjoint of formula_199 and it will be denoted by formula_264;
that is, formula_265
In this case, for all formula_266
formula_267 where the defining condition for formula_268 is:
formula_269
or equivalently, formula_270
If formula_271 for some integer formula_272 formula_273 is a basis for formula_2 with dual basis formula_274 formula_275 is a linear operator, and the matrix representation of formula_199 with respect to formula_276 is formula_277 then the transpose of formula_79 is the matrix representation with respect to formula_278 of formula_279
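The statement can be checked numerically on a small example. In the sketch below (the dimension, the matrix, and the vectors are arbitrary choices for the illustration), a finite-dimensional space and its algebraic dual are both identified with coordinate vectors, the pairing is the dot product, and the defining identity of the adjoint reduces to the matrix transpose.

import numpy as np

n = 3
rng = np.random.default_rng(0)
M = rng.integers(-3, 4, size=(n, n)).astype(float)  # matrix of F in the chosen basis
x = rng.standard_normal(n)                           # a vector of the space
y_dual = rng.standard_normal(n)                      # coordinates of a functional in the dual basis

# Defining identity of the algebraic adjoint: y'(F x) = (adjoint of F applied to y')(x),
# which in coordinates says that the adjoint is represented by the transposed matrix.
lhs = y_dual @ (M @ x)
rhs = (M.T @ y_dual) @ x
assert np.isclose(lhs, rhs)
print(lhs, rhs)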
Weak continuity and openness.
Suppose that formula_18 and formula_280 are canonical pairings (so formula_281 and formula_282) that are dual systems and let formula_195 be a linear map.
Then formula_195 is weakly continuous if and only if it satisfies any of the following equivalent conditions:
If formula_199 is weakly continuous then formula_286 will be continuous and furthermore, formula_287
A map formula_288 between topological spaces is relatively open if formula_289 is an open mapping, where formula_290 is the range of formula_291
Suppose that formula_292 and formula_280 are dual systems and formula_195 is a weakly continuous linear map.
Then the following are equivalent:
Furthermore,
Transpose of a map between TVSs.
The transpose of map between two TVSs is defined if and only if formula_199 is weakly continuous.
If formula_240 is a linear map between two Hausdorff locally convex topological vector spaces, then:
Metrizability and separability.
Let formula_2 be a locally convex space with continuous dual space formula_106 and let formula_300
Polar topologies and topologies compatible with pairing.
Starting with only the weak topology, the use of polar sets produces a range of locally convex topologies.
Such topologies are called polar topologies.
The weak topology is the weakest topology of this range.
Throughout, formula_1 will be a pairing over formula_0 and formula_308 will be a non-empty collection of formula_73-bounded subsets of formula_96
Polar topologies.
Given a collection formula_308 of subsets of formula_2, the polar topology on formula_3 determined by formula_308 (and formula_21) or the formula_308-topology on formula_3 is the unique topological vector space (TVS) topology on formula_3 for which
formula_309
forms a subbasis of neighborhoods at the origin.
When formula_3 is endowed with this formula_308-topology then it is denoted by "Y"formula_308.
Every polar topology is necessarily locally convex.
When formula_308 is a directed set with respect to subset inclusion (i.e. if for all formula_310 there exists some formula_311 such that formula_312) then this neighborhood subbasis at 0 actually forms a neighborhood basis at 0.
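As a concrete finite-dimensional illustration (a sketch only, with the dot-product pairing on R^2 standing in for formula_21), membership in one of these subbasic neighborhoods, a scaled polar of a finite set, can be tested directly:

```python
import numpy as np

# Illustrative sketch with X = Y = R^2 and the dot-product pairing b(x, y) = x . y
# (a stand-in for the abstract pairing): membership in the scaled polar
# r * G_polar = { y : sup_{x in G} |b(x, y)| <= r } of a finite set G.
def in_scaled_polar(y, G, r=1.0):
    G = np.asarray(G, dtype=float)
    return float(np.max(np.abs(G @ np.asarray(y, dtype=float)))) <= r

G = [(1.0, 0.0), (0.0, 2.0)]                  # a finite (hence sigma-bounded) subset of X
print(in_scaled_polar((0.5, 0.25), G))        # True:  the |b| values are 0.5 and 0.5
print(in_scaled_polar((0.0, 0.75), G))        # False: |b((0, 2), (0, 0.75))| = 1.5 > 1
```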
The following table lists some of the more important polar topologies.
Notation: If formula_313 denotes a polar topology on formula_3 then formula_3 endowed with this topology will be denoted by formula_314 formula_315 or simply formula_316 (e.g. for formula_74 we'd have formula_317 so that formula_318 formula_319 and formula_320 all denote formula_3 endowed with formula_74).
Definitions involving polar topologies.
Continuity
A linear map formula_195 is Mackey continuous (with respect to formula_1 and formula_194) if formula_321 is continuous.
A linear map formula_195 is strongly continuous (with respect to formula_1 and formula_194) if formula_322 is continuous.
Bounded subsets.
A subset of formula_2 is weakly bounded (resp. Mackey bounded, strongly bounded) if it is bounded in formula_141 (resp. bounded in formula_323 bounded in formula_324).
Topologies compatible with a pair.
If formula_1 is a pairing over formula_0 and formula_325 is a vector topology on formula_2 then formula_325 is called a topology of the pairing and it is said to be compatible (or consistent) with the pairing formula_1 if it is locally convex and if the continuous dual space of formula_326
If formula_2 distinguishes points of formula_3 then by identifying formula_3 as a vector subspace of formula_2's algebraic dual, the defining condition becomes: formula_327
Some authors (e.g. [Trèves 2006] and [Schaefer 1999]) require that a topology of a pair also be Hausdorff, which it would have to be if formula_3 distinguishes the points of formula_2 (which these authors assume).
The weak topology formula_73 is compatible with the pairing formula_1 (as was shown in the Weak representation theorem) and it is in fact the weakest such topology.
There is a strongest topology compatible with this pairing and that is the Mackey topology.
If formula_80 is a normed space that is not reflexive then the usual norm topology on its continuous dual space is not compatible with the duality formula_328
Mackey–Arens theorem.
The following is one of the most important theorems in duality theory.
<templatestyles src="Math_theorem/styles.css" />
Mackey–Arens theorem I —
Let formula_1 be a pairing such that formula_2 distinguishes the points of formula_3 and let formula_325 be a locally convex topology on formula_2 (not necessarily Hausdorff).
Then formula_325 is compatible with the pairing formula_1 if and only if formula_325 is a polar topology determined by some collection formula_308 of formula_74-compact disks that cover formula_56
It follows that the Mackey topology formula_329 which, recall, is the polar topology generated by all formula_74-compact disks in formula_70 is the strongest locally convex topology on formula_2 that is compatible with the pairing formula_330
A locally convex space whose given topology is identical to the Mackey topology is called a Mackey space.
The following consequence of the above Mackey–Arens theorem is also called the Mackey–Arens theorem.
<templatestyles src="Math_theorem/styles.css" />
Mackey–Arens theorem II — Let formula_1 be a pairing such that formula_2 distinguishes the points of formula_3 and let formula_325 be a locally convex topology on formula_96
Then formula_325 is compatible with the pairing if and only if formula_331
Mackey's theorem, barrels, and closed convex sets.
If formula_2 is a TVS (over formula_332 or formula_8) then a half-space is a set of the form formula_333 for some real formula_334 and some continuous real linear functional formula_156 on formula_96
<templatestyles src="Math_theorem/styles.css" />
Theorem —
If formula_2 is a locally convex space (over formula_332 or formula_8) and if formula_335 is a non-empty closed and convex subset of formula_57 then formula_335 is equal to the intersection of all closed half spaces containing it.
The above theorem implies that the closed and convex subsets of a locally convex space depend entirely on the continuous dual space. Consequently, the closed and convex subsets are the same in any topology compatible with duality; that is, if formula_325 and formula_336 are any locally convex topologies on formula_2 with the same continuous dual spaces, then a convex subset of formula_2 is closed in the formula_325 topology if and only if it is closed in the formula_336 topology.
This implies that the formula_325-closure of any convex subset of formula_2 is equal to its formula_336-closure and that for any formula_325-closed disk formula_49 in formula_57 formula_337
In particular, if formula_51 is a subset of formula_2 then formula_51 is a barrel in formula_338 if and only if it is a barrel in formula_339
The following theorem shows that barrels (i.e. closed absorbing disks) are exactly the polars of weakly bounded subsets.
<templatestyles src="Math_theorem/styles.css" />
Theorem — Let formula_1 be a pairing such that formula_2 distinguishes the points of formula_3 and let formula_325 be a topology of the pair.
Then a subset of formula_2 is a barrel in formula_2 if and only if it is equal to the polar of some formula_74-bounded subset of formula_56
If formula_2 is a topological vector space, then:
All of this leads to Mackey's theorem, which is one of the central theorems in the theory of dual systems.
In short, it states that the bounded subsets are the same for any two Hausdorff locally convex topologies that are compatible with the same duality.
<templatestyles src="Math_theorem/styles.css" />
Mackey's theorem — Suppose that formula_338 is a Hausdorff locally convex space with continuous dual space formula_106 and consider the canonical duality formula_108
If formula_336 is any topology on formula_2 that is compatible with the duality formula_342 on formula_2 then the bounded subsets of formula_338 are the same as the bounded subsets of formula_339
Space of finite sequences.
Let formula_2 denote the space of all sequences of scalars formula_343 such that formula_344 for all sufficiently large formula_345
Let formula_346 and define a bilinear map formula_347 by
formula_348
Then formula_349
Moreover, a subset formula_350 is formula_351-bounded (resp. formula_352-bounded) if and only if there exists a sequence formula_353 of positive real numbers such that formula_354 for all formula_355 and all indices formula_356 (resp. and formula_357).
It follows that there are weakly bounded (that is, formula_351-bounded) subsets of formula_2 that are not strongly bounded (that is, not formula_352-bounded).
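A standard example of this phenomenon is the set of all multiples n·e_n of the unit sequences; the sketch below (which truncates the space to its first 50 coordinates purely for illustration) checks both halves of the criterion above for this set:

```python
import numpy as np

# Sketch (the infinite-dimensional space is truncated to N coordinates for illustration):
# T = { n * e_n : n = 1, 2, ... } is weakly bounded, since for a fixed finitely supported
# y the values |b(n e_n, y)| = n |y_n| vanish for large n, but no finitely supported
# sequence m of positive reals can satisfy |t_i| <= m_i for every t in T and every index i.
N = 50
def e(n):
    v = np.zeros(N); v[n - 1] = 1.0
    return v

T = [n * e(n) for n in range(1, N + 1)]

y = 3.0 * e(2) + 0.5 * e(7)                    # a fixed element of Y = X (finite support)
print(max(abs(t @ y) for t in T))              # finite (= 6.0): boundedness against this y

m = 10.0 * (e(1) + e(2) + e(3))                # a candidate dominating sequence with finite support
print(all((np.abs(t) <= m).all() for t in T))  # False: the entry n of n*e_n eventually exceeds m_n
```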
Notes.
<templatestyles src="Reflist/styles.css" />
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "\\mathbb{K}"
},
{
"math_id": 1,
"text": "(X, Y, b)"
},
{
"math_id": 2,
"text": "X"
},
{
"math_id": 3,
"text": "Y"
},
{
"math_id": 4,
"text": "b : X \\times Y \\to \\mathbb{K}"
},
{
"math_id": 5,
"text": "(X, Y, b),"
},
{
"math_id": 6,
"text": "b(X, Y),"
},
{
"math_id": 7,
"text": "\\R"
},
{
"math_id": 8,
"text": "\\Complex"
},
{
"math_id": 9,
"text": "x \\in X"
},
{
"math_id": 10,
"text": "\\begin{alignat}{4}\nb(x, \\,\\cdot\\,) : \\,& Y && \\to &&\\, \\mathbb{K} \\\\\n & y && \\mapsto &&\\, b(x, y)\n\\end{alignat}"
},
{
"math_id": 11,
"text": "y \\in Y,"
},
{
"math_id": 12,
"text": "\\begin{alignat}{4}\nb(\\,\\cdot\\,, y) : \\,& X && \\to &&\\, \\mathbb{K} \\\\\n & x && \\mapsto &&\\, b(x, y).\n\\end{alignat}"
},
{
"math_id": 13,
"text": "b(x, \\,\\cdot\\,)"
},
{
"math_id": 14,
"text": "b(\\,\\cdot\\,, y)"
},
{
"math_id": 15,
"text": "b(X, \\,\\cdot\\,) := \\{ b(x, \\,\\cdot\\,) : x \\in X \\} \\qquad \\text{ and } \\qquad b(\\,\\cdot\\,, Y) := \\{ b(\\,\\cdot\\,, y) : y \\in Y \\},"
},
{
"math_id": 16,
"text": "\\langle x, y \\rangle"
},
{
"math_id": 17,
"text": "b(x, y)"
},
{
"math_id": 18,
"text": "\\left\\langle X, Y \\right\\rangle"
},
{
"math_id": 19,
"text": "(X, Y, \\langle \\cdot, \\cdot \\rangle)"
},
{
"math_id": 20,
"text": "\\langle \\cdot, \\cdot \\rangle"
},
{
"math_id": 21,
"text": "b"
},
{
"math_id": 22,
"text": "b(x, \\,\\cdot\\,) = 0"
},
{
"math_id": 23,
"text": "x = 0"
},
{
"math_id": 24,
"text": "b(x, \\,\\cdot\\,) : Y \\to \\mathbb{K}"
},
{
"math_id": 25,
"text": "0"
},
{
"math_id": 26,
"text": "y \\in Y"
},
{
"math_id": 27,
"text": "b(x, y) \\neq 0"
},
{
"math_id": 28,
"text": "b(\\,\\cdot\\,, y) = 0"
},
{
"math_id": 29,
"text": "y = 0"
},
{
"math_id": 30,
"text": "b(\\,\\cdot\\,, y) : X \\to \\mathbb{K}"
},
{
"math_id": 31,
"text": "S"
},
{
"math_id": 32,
"text": "b(x, s) = 0 \\quad \\text{ for all } s \\in S"
},
{
"math_id": 33,
"text": "x = 0."
},
{
"math_id": 34,
"text": "x"
},
{
"math_id": 35,
"text": "y"
},
{
"math_id": 36,
"text": "x \\perp y"
},
{
"math_id": 37,
"text": "b(x, y) = 0"
},
{
"math_id": 38,
"text": "R \\subseteq X"
},
{
"math_id": 39,
"text": "S \\subseteq Y"
},
{
"math_id": 40,
"text": "R \\perp S"
},
{
"math_id": 41,
"text": "b(R, S) = \\{ 0 \\}"
},
{
"math_id": 42,
"text": "b(r, s) = 0"
},
{
"math_id": 43,
"text": "r \\in R"
},
{
"math_id": 44,
"text": "s \\in S"
},
{
"math_id": 45,
"text": "R^{\\perp} := \\{ y \\in Y : R \\perp y \\} := \\{ y \\in Y : b(R, y) = \\{ 0 \\} \\}"
},
{
"math_id": 46,
"text": "R"
},
{
"math_id": 47,
"text": "R^\\perp"
},
{
"math_id": 48,
"text": "\\{0\\}"
},
{
"math_id": 49,
"text": "A"
},
{
"math_id": 50,
"text": "A^{\\circ} := \\left\\{ y \\in Y : \\sup_{x \\in A} |b(x, y)| \\leq 1 \\right\\}."
},
{
"math_id": 51,
"text": "B"
},
{
"math_id": 52,
"text": "B^{\\circ}"
},
{
"math_id": 53,
"text": "B^{\\circ} := \\left\\{ x \\in X : \\sup_{y \\in B} |b(x, y)| \\leq 1 \\right\\}."
},
{
"math_id": 54,
"text": "B^{\\circ}."
},
{
"math_id": 55,
"text": "0 \\in Y"
},
{
"math_id": 56,
"text": "Y."
},
{
"math_id": 57,
"text": "X,"
},
{
"math_id": 58,
"text": "A^{\\circ} = A^{\\perp}"
},
{
"math_id": 59,
"text": "A."
},
{
"math_id": 60,
"text": "A \\subseteq X"
},
{
"math_id": 61,
"text": "A^{\\circ\\circ}"
},
{
"math_id": 62,
"text": "{}^{\\circ}\\left(A^{\\perp}\\right)."
},
{
"math_id": 63,
"text": "B \\subseteq Y"
},
{
"math_id": 64,
"text": "B^{\\circ\\circ} := \\left({}^{\\circ}B\\right)^{\\circ}."
},
{
"math_id": 65,
"text": "(Y, X, d)"
},
{
"math_id": 66,
"text": "d(y, x) := b(x, y)"
},
{
"math_id": 67,
"text": "(Y, X, d)."
},
{
"math_id": 68,
"text": "d."
},
{
"math_id": 69,
"text": "\\tau(X, Y, b)"
},
{
"math_id": 70,
"text": "Y,"
},
{
"math_id": 71,
"text": "\\tau(Y, X, b)"
},
{
"math_id": 72,
"text": "\\tau(Y, X, d)"
},
{
"math_id": 73,
"text": "\\sigma(X, Y, b)"
},
{
"math_id": 74,
"text": "\\sigma(Y, X, b)"
},
{
"math_id": 75,
"text": "\\sigma(Y, X, d)"
},
{
"math_id": 76,
"text": "(X, Y)"
},
{
"math_id": 77,
"text": "(Y, X)"
},
{
"math_id": 78,
"text": "(Y, X, b)."
},
{
"math_id": 79,
"text": "M"
},
{
"math_id": 80,
"text": "N"
},
{
"math_id": 81,
"text": "M \\times N"
},
{
"math_id": 82,
"text": "\\left(M, N, b\\big\\vert_{M \\times N}\\right)."
},
{
"math_id": 83,
"text": "Y \\neq \\{ 0 \\}"
},
{
"math_id": 84,
"text": "N = \\{ 0 \\}"
},
{
"math_id": 85,
"text": "\\left(M, N, b\\big\\vert_{M \\times N}\\right)"
},
{
"math_id": 86,
"text": "(M, N, b)."
},
{
"math_id": 87,
"text": "X^{\\#}"
},
{
"math_id": 88,
"text": "\\left(X, X^{\\#}, c\\right)"
},
{
"math_id": 89,
"text": "c\\left(x, x^{\\prime}\\right) = \\left\\langle x, x^{\\prime} \\right\\rangle = x^{\\prime}(x),"
},
{
"math_id": 90,
"text": "X \\times X^{\\#}."
},
{
"math_id": 91,
"text": "x^{\\prime} \\in X^{\\#},"
},
{
"math_id": 92,
"text": "c\\left(\\,\\cdot\\,, x^{\\prime}\\right)"
},
{
"math_id": 93,
"text": "x^{\\prime}"
},
{
"math_id": 94,
"text": "c\\left(\\,\\cdot\\,, x^{\\prime}\\right) = x^{\\prime}(\\,\\cdot\\,) = x^{\\prime}."
},
{
"math_id": 95,
"text": "X \\times N"
},
{
"math_id": 96,
"text": "X."
},
{
"math_id": 97,
"text": "\\left\\langle x, x^{\\prime} \\right\\rangle = x^{\\prime}(x)"
},
{
"math_id": 98,
"text": "c"
},
{
"math_id": 99,
"text": "\\langle X, N \\rangle"
},
{
"math_id": 100,
"text": "(X, N, c)."
},
{
"math_id": 101,
"text": "\\langle X, N \\rangle."
},
{
"math_id": 102,
"text": "(X, N, c)"
},
{
"math_id": 103,
"text": "n(x) = 0"
},
{
"math_id": 104,
"text": "n \\in N"
},
{
"math_id": 105,
"text": "X^{\\prime}."
},
{
"math_id": 106,
"text": "X^{\\prime}"
},
{
"math_id": 107,
"text": "\\left(X, X^{\\prime}, c\\big\\vert_{X \\times X^{\\prime}}\\right)"
},
{
"math_id": 108,
"text": "\\left\\langle X, X^{\\prime} \\right\\rangle."
},
{
"math_id": 109,
"text": "\\mathcal{N}"
},
{
"math_id": 110,
"text": "\\left\\langle X, X^{\\#} \\right\\rangle,"
},
{
"math_id": 111,
"text": "N^{\\circ}"
},
{
"math_id": 112,
"text": "(H, \\langle \\cdot, \\cdot \\rangle)"
},
{
"math_id": 113,
"text": "H"
},
{
"math_id": 114,
"text": "0."
},
{
"math_id": 115,
"text": "\\cdot."
},
{
"math_id": 116,
"text": "\\,\\cdot\\, \\perp \\,\\cdot\\, : \\Complex \\times H \\to H \\quad \\text{ by } \\quad c \\perp x := \\overline{c} x,"
},
{
"math_id": 117,
"text": "H."
},
{
"math_id": 118,
"text": "\\overline{H}"
},
{
"math_id": 119,
"text": "H,"
},
{
"math_id": 120,
"text": "(H, +)"
},
{
"math_id": 121,
"text": "\\,\\cdot\\, \\perp \\,\\cdot\\,"
},
{
"math_id": 122,
"text": "b : H \\times \\overline{H} \\to \\Complex"
},
{
"math_id": 123,
"text": "b(x, y) := \\langle x, y \\rangle"
},
{
"math_id": 124,
"text": "\\left(H, \\overline{H}, \\langle \\cdot, \\cdot \\rangle\\right)"
},
{
"math_id": 125,
"text": "\\mathbb{K}."
},
{
"math_id": 126,
"text": "\\sigma(X, S, b)"
},
{
"math_id": 127,
"text": "\\sigma(X, S),"
},
{
"math_id": 128,
"text": "S."
},
{
"math_id": 129,
"text": "X_{\\sigma(X, S, b)},"
},
{
"math_id": 130,
"text": "X_{\\sigma(X, S)},"
},
{
"math_id": 131,
"text": "X_{\\sigma}"
},
{
"math_id": 132,
"text": "\\sigma(X, S, b)."
},
{
"math_id": 133,
"text": "b,"
},
{
"math_id": 134,
"text": "\\Complex,"
},
{
"math_id": 135,
"text": "\\sigma(Y, R, b)"
},
{
"math_id": 136,
"text": "\\sigma(Y, R)"
},
{
"math_id": 137,
"text": "\\operatorname{cl}_{\\sigma(X, Y, b)}(S),"
},
{
"math_id": 138,
"text": "\\left(a_i\\right)_{i=1}^{\\infty}"
},
{
"math_id": 139,
"text": "\\sigma"
},
{
"math_id": 140,
"text": "(Y, \\sigma(Y, X, b))"
},
{
"math_id": 141,
"text": "(X, \\sigma(X, Y, b))"
},
{
"math_id": 142,
"text": "p_y : X \\to \\R"
},
{
"math_id": 143,
"text": "p_y(x) := |b(x, y)|,"
},
{
"math_id": 144,
"text": "\\left(x_i\\right)_{i \\in I}"
},
{
"math_id": 145,
"text": "(X, \\sigma(X, Y, b))."
},
{
"math_id": 146,
"text": "b\\left(x_i, y\\right)"
},
{
"math_id": 147,
"text": "b(x, y)."
},
{
"math_id": 148,
"text": "\\left(x_i\\right)_{i=1}^{\\infty}"
},
{
"math_id": 149,
"text": "(X, N, b)"
},
{
"math_id": 150,
"text": "\\sigma(X, N, b)"
},
{
"math_id": 151,
"text": "\\sigma(X, Y, b)."
},
{
"math_id": 152,
"text": "\\sup_{} |b(S, y)| < \\infty \\quad \\text{ for all } y \\in Y,"
},
{
"math_id": 153,
"text": "|b(S, y)| := \\{ b(s, y) : s \\in S \\}."
},
{
"math_id": 154,
"text": "y \\mapsto b(\\,\\cdot\\,, y)"
},
{
"math_id": 155,
"text": "b(\\,\\cdot\\,, Y) := \\{b(\\,\\cdot\\,, y) : y \\in Y\\}."
},
{
"math_id": 156,
"text": "f"
},
{
"math_id": 157,
"text": "(X, \\sigma(X, Y, b))^{\\prime} = b(\\,\\cdot\\,, Y) := \\left\\{ b(\\,\\cdot\\,, y) : y \\in Y \\right\\}."
},
{
"math_id": 158,
"text": "\\left(X, \\sigma\\left(X, X^{\\prime}\\right)\\right)"
},
{
"math_id": 159,
"text": "\\left(X^{\\prime}, \\sigma\\left(X^{\\prime}, X\\right)\\right)"
},
{
"math_id": 160,
"text": "x^{\\prime} \\in X^{\\prime}"
},
{
"math_id": 161,
"text": "x^{\\prime}(x)"
},
{
"math_id": 162,
"text": "\\left(X^{\\prime}, \\sigma\\left(X^{\\prime}, X\\right)\\right)^{\\prime} = X \\qquad \\text{ or } \\qquad \\left(X^{\\prime}_{\\sigma}\\right)^{\\prime} = X."
},
{
"math_id": 163,
"text": "\\beta\\left(X^{\\prime}, X\\right)"
},
{
"math_id": 164,
"text": "\\left(X^{\\prime}_{\\sigma}\\right)^{\\prime}"
},
{
"math_id": 165,
"text": "\\beta\\left(\\left(X^{\\prime}_{\\sigma}\\right)^{\\prime}, X^{\\prime}_{\\sigma}\\right)"
},
{
"math_id": 166,
"text": "\\sigma\\left(X^{\\prime}, X\\right)"
},
{
"math_id": 167,
"text": "X^{\\prime}_{\\beta}"
},
{
"math_id": 168,
"text": "\\left(X^{\\prime}_{\\beta}\\right)^{\\prime} ~\\supseteq~ \\left(X^{\\prime}_{\\sigma}\\right)^{\\prime} ~=~ X"
},
{
"math_id": 169,
"text": "\\beta\\left(\\left(X^{\\prime}_{\\beta}\\right)^{\\prime}, X^{\\prime}_{\\beta}\\right)"
},
{
"math_id": 170,
"text": "\\left(X^{\\prime}_{\\beta}\\right)^{\\prime} = X"
},
{
"math_id": 171,
"text": "S^{\\perp}"
},
{
"math_id": 172,
"text": "S^{\\perp\\perp}"
},
{
"math_id": 173,
"text": "(M, Y, b)"
},
{
"math_id": 174,
"text": "M \\times Y."
},
{
"math_id": 175,
"text": "\\sigma(M, Y, b)"
},
{
"math_id": 176,
"text": "\\left(M, Y / M^{\\perp}, b\\big\\vert_M\\right)"
},
{
"math_id": 177,
"text": "Y / M^{\\perp}"
},
{
"math_id": 178,
"text": "Y / \\left(M^{\\perp}\\right)"
},
{
"math_id": 179,
"text": "b\\big\\vert_M : M \\times Y / M^{\\perp} \\to \\mathbb{K}"
},
{
"math_id": 180,
"text": "\\left(m, y + M^{\\perp}\\right) \\mapsto b(m, y)."
},
{
"math_id": 181,
"text": "\\sigma\\left(M, Y / M^{\\perp}, b\\big\\vert_M\\right)"
},
{
"math_id": 182,
"text": "\\left(M, Y / M^{\\perp}, b\\big\\vert_M\\right)."
},
{
"math_id": 183,
"text": "\\left(X / M, M^{\\perp}, b / M\\right)"
},
{
"math_id": 184,
"text": "b / M : X / M \\times M^{\\perp} \\to \\mathbb{K}"
},
{
"math_id": 185,
"text": "(x + M, y) \\mapsto b(x, y)."
},
{
"math_id": 186,
"text": "\\sigma\\left(X / M, M^{\\perp}\\right)"
},
{
"math_id": 187,
"text": "X / M."
},
{
"math_id": 188,
"text": "X^{\\prime},"
},
{
"math_id": 189,
"text": "H \\subseteq B^{\\circ}"
},
{
"math_id": 190,
"text": "A \\subseteq X,"
},
{
"math_id": 191,
"text": "A,"
},
{
"math_id": 192,
"text": "\\tau"
},
{
"math_id": 193,
"text": "(X, \\tau)"
},
{
"math_id": 194,
"text": "(W, Z, c)"
},
{
"math_id": 195,
"text": "F : X \\to W"
},
{
"math_id": 196,
"text": "z \\in Z,"
},
{
"math_id": 197,
"text": "c(F(\\,\\cdot\\,), z) : X \\to \\mathbb{K}"
},
{
"math_id": 198,
"text": "x \\mapsto c(F(x), z)."
},
{
"math_id": 199,
"text": "F"
},
{
"math_id": 200,
"text": "c(F(\\,\\cdot\\,), Z) \\subseteq b(\\,\\cdot\\,, Y),"
},
{
"math_id": 201,
"text": "c(F(\\,\\cdot\\,), Z) := \\{ c(F(\\,\\cdot\\,), z) : z \\in Z \\}"
},
{
"math_id": 202,
"text": "b(\\,\\cdot\\,, Y) := \\{ b(\\,\\cdot\\,, y) : y \\in Y \\}"
},
{
"math_id": 203,
"text": "z \\in Z"
},
{
"math_id": 204,
"text": "c(F(\\,\\cdot\\,), z) = b(\\,\\cdot\\,, y)"
},
{
"math_id": 205,
"text": "{}^t F(z)."
},
{
"math_id": 206,
"text": "{}^t F : Z \\to Y"
},
{
"math_id": 207,
"text": "{}^t F"
},
{
"math_id": 208,
"text": "{}^t F(z)"
},
{
"math_id": 209,
"text": "c(F(\\,\\cdot\\,), z) = b\\left(\\,\\cdot\\,, {}^t F(z)\\right),"
},
{
"math_id": 210,
"text": "c(F(x), z) = b\\left(x, {}^t F(z)\\right)"
},
{
"math_id": 211,
"text": "x \\in X."
},
{
"math_id": 212,
"text": "Z \\to Y,"
},
{
"math_id": 213,
"text": "Z"
},
{
"math_id": 214,
"text": "X \\to Z,"
},
{
"math_id": 215,
"text": "W \\to Y,"
},
{
"math_id": 216,
"text": "Y \\to W,"
},
{
"math_id": 217,
"text": "{}^t F : Z \\to Y"
},
{
"math_id": 218,
"text": "\\operatorname{ker} {}^t F = \\{ 0 \\}"
},
{
"math_id": 219,
"text": "\\left(W, \\sigma\\left(W, Z, c\\right)\\right)."
},
{
"math_id": 220,
"text": "{}^{tt} F = F."
},
{
"math_id": 221,
"text": "(U, V, a)"
},
{
"math_id": 222,
"text": "E : U \\to X"
},
{
"math_id": 223,
"text": "{}^t E : Y \\to V"
},
{
"math_id": 224,
"text": "F \\circ E : U \\to W,"
},
{
"math_id": 225,
"text": "{}^t (F \\circ E) : Z \\to V,"
},
{
"math_id": 226,
"text": "{}^t (F \\circ E) = {}^t E \\circ {}^t F."
},
{
"math_id": 227,
"text": "F^{-1} : W \\to X,"
},
{
"math_id": 228,
"text": "{}^t \\left(F^{-1}\\right) : Y \\to Z,"
},
{
"math_id": 229,
"text": "{}^t \\left(F^{-1}\\right) = \\left({}^t F\\right)^{-1}"
},
{
"math_id": 230,
"text": "S \\subseteq X"
},
{
"math_id": 231,
"text": "S^{\\circ}"
},
{
"math_id": 232,
"text": "[F(S)]^{\\circ} = \\left({}^t F\\right)^{-1}\\left(S^{\\circ}\\right)"
},
{
"math_id": 233,
"text": "F(S) \\subseteq T"
},
{
"math_id": 234,
"text": "T \\subseteq W,"
},
{
"math_id": 235,
"text": "{}^t F\\left(T^{\\circ}\\right) \\subseteq S^{\\circ}"
},
{
"math_id": 236,
"text": "T \\subseteq W"
},
{
"math_id": 237,
"text": "{}^t F\\left(T^{\\circ}\\right) \\subseteq S^{\\circ},"
},
{
"math_id": 238,
"text": "F(S) \\subseteq T^{\\circ\\circ}"
},
{
"math_id": 239,
"text": "\\operatorname{ker} {}^t F = [ F(X) ]^{\\perp}."
},
{
"math_id": 240,
"text": "F : X \\to Y"
},
{
"math_id": 241,
"text": "\\|F\\| = \\left\\|{}^t F\\right\\|."
},
{
"math_id": 242,
"text": "F : (X, \\sigma(X, Y, b)) \\to (W, \\sigma(W, Z, c))"
},
{
"math_id": 243,
"text": "c(F(\\,\\cdot\\,), Z) \\subseteq b(\\,\\cdot\\,, Y)"
},
{
"math_id": 244,
"text": "{}^t F : (Z, \\sigma(Z, W, c)) \\to (Y, \\sigma(Y, X, b))"
},
{
"math_id": 245,
"text": "W,"
},
{
"math_id": 246,
"text": "\\sigma\\left(X, X^{\\#}\\right)"
},
{
"math_id": 247,
"text": "\\left(X^{\\#}, \\sigma\\left(X^{\\#}, X\\right)\\right)"
},
{
"math_id": 248,
"text": "Z^{\\prime},"
},
{
"math_id": 249,
"text": "\\left(Z, \\sigma\\left(Z, Z^{\\prime}\\right)\\right)"
},
{
"math_id": 250,
"text": "Z = \\left(Z^{\\prime}\\right)^{\\#}"
},
{
"math_id": 251,
"text": "Z \\to \\left(Z^{\\prime}\\right)^{\\#}"
},
{
"math_id": 252,
"text": "z"
},
{
"math_id": 253,
"text": "z^{\\prime} \\mapsto z^{\\prime}(z)"
},
{
"math_id": 254,
"text": "(Y, \\sigma(Y, X))"
},
{
"math_id": 255,
"text": "Y = X^{\\#}."
},
{
"math_id": 256,
"text": "Y \\neq X^{\\#}"
},
{
"math_id": 257,
"text": "(X, \\sigma(X, Y))"
},
{
"math_id": 258,
"text": "X^{\\prime}_{\\sigma}"
},
{
"math_id": 259,
"text": "X^{\\prime} = X^{\\#}"
},
{
"math_id": 260,
"text": "\\langle X, Z \\rangle"
},
{
"math_id": 261,
"text": "\\left\\langle x, x^{\\prime} \\right\\rangle := x^{\\prime}(x)"
},
{
"math_id": 262,
"text": "\\left\\langle X, X^{\\#} \\right\\rangle"
},
{
"math_id": 263,
"text": "\\left\\langle W, W^{\\#} \\right\\rangle,"
},
{
"math_id": 264,
"text": "F^{\\#}"
},
{
"math_id": 265,
"text": "F^{\\#} = {}^t F : W^{\\#} \\to X^{\\#}."
},
{
"math_id": 266,
"text": "w^{\\prime} \\in W^{\\#},"
},
{
"math_id": 267,
"text": "F^{\\#}\\left(w^{\\prime}\\right) = w^{\\prime} \\circ F"
},
{
"math_id": 268,
"text": "F^{\\#}\\left(w^{\\prime}\\right)"
},
{
"math_id": 269,
"text": "\\left\\langle x, F^{\\#}\\left(w^{\\prime}\\right) \\right\\rangle = \\left\\langle F(x), w^{\\prime} \\right\\rangle \\quad \\text{ for all } x \\in X,"
},
{
"math_id": 270,
"text": "F^{\\#}\\left(w^{\\prime}\\right)(x) = w^{\\prime}(F(x)) \\quad \\text{ for all } x \\in X."
},
{
"math_id": 271,
"text": "X = Y = \\mathbb{K}^n"
},
{
"math_id": 272,
"text": "n,"
},
{
"math_id": 273,
"text": "\\mathcal{E} = \\left\\{ e_1, \\ldots, e_n\\right\\}"
},
{
"math_id": 274,
"text": "\\mathcal{E}^{\\prime} = \\left\\{ e_1^{\\prime}, \\ldots, e_n^{\\prime} \\right\\},"
},
{
"math_id": 275,
"text": "F : \\mathbb{K}^n \\to \\mathbb{K}^n"
},
{
"math_id": 276,
"text": "\\mathcal{E}"
},
{
"math_id": 277,
"text": "M := \\left(f_{i,j}\\right),"
},
{
"math_id": 278,
"text": "\\mathcal{E}^{\\prime}"
},
{
"math_id": 279,
"text": "F^{\\#}."
},
{
"math_id": 280,
"text": "\\langle W, Z \\rangle"
},
{
"math_id": 281,
"text": "Y \\subseteq X^{\\#}"
},
{
"math_id": 282,
"text": "Z \\subseteq W^{\\#}"
},
{
"math_id": 283,
"text": "F : (X, \\sigma(X, Y)) \\to (W, \\sigma(W, Z))"
},
{
"math_id": 284,
"text": "F^{\\#}(Z) \\subseteq Y"
},
{
"math_id": 285,
"text": "{}^t F : Z \\to Y,"
},
{
"math_id": 286,
"text": "{}^t F : (Z, \\sigma(Z, W)) \\to (Y, \\sigma(Y, X))"
},
{
"math_id": 287,
"text": "{}^{tt} F = F"
},
{
"math_id": 288,
"text": "g : A \\to B"
},
{
"math_id": 289,
"text": "g : A \\to \\operatorname{Im} g"
},
{
"math_id": 290,
"text": "\\operatorname{Im} g"
},
{
"math_id": 291,
"text": "g."
},
{
"math_id": 292,
"text": "\\langle X, Y \\rangle"
},
{
"math_id": 293,
"text": "\\sigma(Y, X)"
},
{
"math_id": 294,
"text": "\\operatorname{Im} {}^t F = (\\operatorname{ker} F)^{\\perp}"
},
{
"math_id": 295,
"text": "{}^t F : Y^{\\prime} \\to X^{\\prime}"
},
{
"math_id": 296,
"text": "Y^{\\prime}"
},
{
"math_id": 297,
"text": "F : \\left(X, \\sigma\\left(X, X^{\\prime}\\right)\\right) \\to \\left(Y, \\sigma\\left(Y, Y^{\\prime}\\right)\\right)"
},
{
"math_id": 298,
"text": "\\operatorname{Im} {}^t F = {}^t F\\left(Y^{\\prime}\\right)"
},
{
"math_id": 299,
"text": "Y^{\\prime}."
},
{
"math_id": 300,
"text": "K \\subseteq X^{\\prime}."
},
{
"math_id": 301,
"text": "K"
},
{
"math_id": 302,
"text": "D \\subseteq X^{\\prime}"
},
{
"math_id": 303,
"text": "\\operatorname{span} D"
},
{
"math_id": 304,
"text": "\\left(X^{\\prime}, \\sigma\\left(X^{\\prime}, D\\right)\\right)"
},
{
"math_id": 305,
"text": "\\left(X^{\\prime}, \\sigma\\left(X^{\\prime}, X\\right)\\right)."
},
{
"math_id": 306,
"text": "K,"
},
{
"math_id": 307,
"text": "\\left(X^{\\prime}, \\sigma\\left(X^{\\prime}, X\\right)\\right),"
},
{
"math_id": 308,
"text": "\\mathcal{G}"
},
{
"math_id": 309,
"text": "\\left\\{ r G^{\\circ} : G \\in \\mathcal{G}, r > 0 \\right\\}"
},
{
"math_id": 310,
"text": "G, H \\in \\mathcal{G}"
},
{
"math_id": 311,
"text": "K \\in \\mathcal{G}"
},
{
"math_id": 312,
"text": "G \\cup H \\subseteq K"
},
{
"math_id": 313,
"text": "\\Delta(X, Y, b)"
},
{
"math_id": 314,
"text": "Y_{\\Delta(Y, X, b)},"
},
{
"math_id": 315,
"text": "Y_{\\Delta(Y, X)}"
},
{
"math_id": 316,
"text": "Y_{\\Delta}"
},
{
"math_id": 317,
"text": "\\Delta = \\sigma"
},
{
"math_id": 318,
"text": "Y_{\\sigma(Y, X, b)},"
},
{
"math_id": 319,
"text": "Y_{\\sigma(Y, X)}"
},
{
"math_id": 320,
"text": "Y_{\\sigma}"
},
{
"math_id": 321,
"text": "F : (X, \\tau(X, Y, b)) \\to (W, \\tau(W, Z, c))"
},
{
"math_id": 322,
"text": "F : (X, \\beta(X, Y, b)) \\to (W, \\beta(W, Z, c))"
},
{
"math_id": 323,
"text": "(X, \\tau(X, Y, b)),"
},
{
"math_id": 324,
"text": "(X, \\beta(X, Y, b))"
},
{
"math_id": 325,
"text": "\\mathcal{T}"
},
{
"math_id": 326,
"text": "\\left(X, \\mathcal{T}\\right) = b(\\,\\cdot\\,, Y)."
},
{
"math_id": 327,
"text": "\\left(X, \\mathcal{T}\\right)^{\\prime} = Y."
},
{
"math_id": 328,
"text": "\\left(N^{\\prime}, N\\right)."
},
{
"math_id": 329,
"text": "\\tau(X, Y, b),"
},
{
"math_id": 330,
"text": "(X, Y, b)."
},
{
"math_id": 331,
"text": "\\sigma(X, Y, b) \\subseteq \\mathcal{T} \\subseteq \\tau(X, Y, b)."
},
{
"math_id": 332,
"text": "\\Reals"
},
{
"math_id": 333,
"text": "\\{ x \\in X : f(x) \\leq r \\}"
},
{
"math_id": 334,
"text": "r"
},
{
"math_id": 335,
"text": "C"
},
{
"math_id": 336,
"text": "\\mathcal{L}"
},
{
"math_id": 337,
"text": "A = A^{\\circ\\circ}."
},
{
"math_id": 338,
"text": "(X, \\mathcal{L})"
},
{
"math_id": 339,
"text": "(X, \\mathcal{L})."
},
{
"math_id": 340,
"text": "r > 0"
},
{
"math_id": 341,
"text": "r B"
},
{
"math_id": 342,
"text": "\\left\\langle X, X^{\\prime} \\right\\rangle"
},
{
"math_id": 343,
"text": "r_{\\bull} = \\left(r_i\\right)_{i=1}^{\\infty}"
},
{
"math_id": 344,
"text": "r_i = 0"
},
{
"math_id": 345,
"text": "i."
},
{
"math_id": 346,
"text": "Y = X"
},
{
"math_id": 347,
"text": "b : X \\times X \\to \\mathbb{K}"
},
{
"math_id": 348,
"text": "b\\left(r_{\\bull}, s_{\\bull}\\right) := \\sum_{i=1}^{\\infty} r_i s_i."
},
{
"math_id": 349,
"text": "\\sigma(X, X, b) = \\tau(X, X, b)."
},
{
"math_id": 350,
"text": "T \\subseteq X"
},
{
"math_id": 351,
"text": "\\sigma(X, X, b)"
},
{
"math_id": 352,
"text": "\\beta(X, X, b)"
},
{
"math_id": 353,
"text": "m_{\\bull} = \\left(m_i\\right)_{i=1}^{\\infty}"
},
{
"math_id": 354,
"text": "\\left|t_i\\right| \\leq m_i"
},
{
"math_id": 355,
"text": "t_{\\bull} = \\left(t_i\\right)_{i=1}^{\\infty} \\in T"
},
{
"math_id": 356,
"text": "i"
},
{
"math_id": 357,
"text": "m_{\\bull} \\in X"
}
] |
https://en.wikipedia.org/wiki?curid=63735167
|
6373591
|
Discontinuous deformation analysis
|
Discontinuous deformation analysis (DDA) is a type of discrete element method (DEM) originally proposed by Shi in 1988. DDA is somewhat similar to the finite element method for solving stress-displacement problems, but accounts for the interaction of independent particles (blocks) along discontinuities in fractured and jointed rock masses. DDA is typically formulated as a work-energy method, and can be derived using the principle of minimum potential energy or by using Hamilton's principle. Once the equations of motion are discretized, a step-wise linear time marching scheme in the Newmark family is used for the solution of the equations of motion. The relation between adjacent blocks is governed by equations of contact interpenetration and accounts for friction. DDA adopts a stepwise approach to solve for the large displacements that accompany discontinuous movements between blocks. The blocks are said to be "simply deformable". Since the method accounts for the inertial forces of the blocks' mass, it can be used to solve the full dynamic problem of block motion.
Vs DEM.
Although DDA and DEM are similar in the sense that they both simulate the behavior of interacting discrete bodies, they are quite different theoretically. While DDA is a displacement method, DEM is a force method. DDA uses displacements as variables in an implicit formulation, with opening-closing iterations within each time step to achieve equilibrium of the blocks under the constraints of contact, whereas DEM employs an explicit, time-marching scheme to solve the equations of motion directly (Cundall and Hart). The system of equations in DDA is derived from minimizing the total potential energy of the system being analyzed. This guarantees that equilibrium is satisfied at all times and that energy consumption is natural, since it is due to frictional forces. In DEM, unbalanced forces drive the solution process, and damping is used to dissipate energy. If a quasi-static solution is desired in which the intermediate steps are not of interest, the type of damping and the type of relaxation scheme can be selected in DEM to obtain the most efficient solution method (Cundall). The application of damping in DEM for quasi-static problems is somewhat analogous to setting the initial velocities of the blocks to zero in the static analysis of DDA. In dynamic problems, however, the amount and type of damping in DEM, which are very difficult to quantify experimentally, have to be selected very carefully so as not to damp out real vibrations. On the other hand, the energy consumption in DDA is due to the frictional resistance at contacts. By passing the velocities of the blocks at the end of a time step to the next time step, DDA gives a real dynamic solution with correct energy consumption. By using an energy approach, DDA does not require an artificial damping term to dissipate energy as in DEM, and can easily incorporate other mechanisms for energy loss.
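The contrast between the two solution strategies can be illustrated on a single degree of freedom; the sketch below is schematic only (it is not Shi's block formulation, and the mass, stiffness, force and damping values are arbitrary), but it shows why the implicit, energy-based step needs no artificial damping while the explicit step does:

```python
# Schematic 1-DOF sketch (not Shi's block formulation; parameter values are arbitrary)
# of the contrast described above for m*x'' + k*x = f: an implicit Newmark step of the
# kind used in DDA versus an explicit DEM-style step that needs a damping coefficient c.
m, k, f = 1.0, 100.0, 1.0            # mass, contact/spring stiffness, applied force

def newmark_step(x, v, a, dt, beta=0.25, gamma=0.5):
    """One implicit Newmark (average-acceleration) step; unconditionally stable."""
    x_pred = x + dt * v + dt**2 * (0.5 - beta) * a
    x_new = (x_pred + beta * dt**2 * f / m) / (1.0 + beta * dt**2 * k / m)
    a_new = (f - k * x_new) / m
    v_new = v + dt * ((1.0 - gamma) * a + gamma * a_new)
    return x_new, v_new, a_new

def dem_step(x, v, dt, c=2.0):
    """One explicit DEM-style step; c must be chosen so real vibrations are not lost."""
    a = (f - c * v - k * x) / m
    v_new = v + dt * a
    return x + dt * v_new, v_new

x, v, a = 0.0, 0.0, f / m
for _ in range(200):
    x, v, a = newmark_step(x, v, a, dt=0.01)
print(round(x, 4))                   # undamped oscillation about the static value f/k = 0.01

x2, v2 = 0.0, 0.0
for _ in range(2000):
    x2, v2 = dem_step(x2, v2, dt=0.001)
print(round(x2, 4))                  # relaxes toward f/k = 0.01 because of the damping c
```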
Strengths and limitations.
DDA has several strengths recommending it for use in slope stability problems in jointed rock masses, which are balanced by serious limitations that must be accounted for when DDA is used for larger-scale, faster-moving problems.
Limitations.
The stiffness formula_1 does not vary over more than 1 or 2 orders of magnitude, while the mass formula_2 is a function of the cube of the characteristic length.
Modification and improvement.
Various modifications to the original DDA formulation have been reported in the rock mechanics literature. In the original DDA formulation a first-order polynomial displacement function was assumed, so the stresses and strains within a block in the model were constant. This approximation precludes the application of the algorithm to problems with significant stress variations within a block. In cases where the displacement within the block is large and cannot be ignored, however, the blocks can be subdivided with a mesh. An example of this approach is the research by Chang et al. and Jing, who resolved this problem by adding finite element meshes in the two-dimensional domain of the blocks so that stress variations within the blocks can be allowed for.
Higher-order DDA methods for two-dimensional problems have been developed in both theory and computer codes by researchers such as Koo and Chern, Ma et al. and Hsiung. Additionally, the DDA contact model, which was originally based on the penalty method, was improved by adopting the Lagrange-type approach reported by Lin et al.
Since a blocky system is highly non-linear, owing to non-linearity both within blocks and between blocks, Chang et al. implemented a material non-linearity model in DDA using strain-hardening curves. Ma developed a non-linear contact model for the analysis of progressive slope failure, including strain softening, using the stress-strain curve.
Recent progress in the DDA algorithm, reported by Kim et al. and Jing et al., considers coupling with fluid flow in fractures. The hydro-mechanical coupling across rock fracture surfaces is also taken into account. The program computes water pressure and seepage throughout the rock mass of interest. In the original formulation, a rock bolt was modeled as a line spring connecting two adjacent blocks. Later, Te-Chin Ke suggested an improved bolt model, followed by a rudimentary formulation of the lateral constraint of rock bolting.
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "\\sqrt{k/M}"
},
{
"math_id": 1,
"text": "k"
},
{
"math_id": 2,
"text": "M"
}
] |
https://en.wikipedia.org/wiki?curid=6373591
|
6375618
|
Weil group
|
In mathematics, a Weil group, introduced by Weil (1951), is a modification of the absolute Galois group of a local or global field, used in class field theory. For such a field "F", its Weil group is generally denoted "WF". There also exist "finite level" modifications of the Galois groups: if "E"/"F" is a finite extension, then the relative Weil group of "E"/"F" is "W""E"/"F" = "W""F"/"W""E""c" (where the superscript "c" denotes the commutator subgroup).
For more details about Weil groups see the references below.
Class formation.
The Weil group of a class formation with fundamental classes "u""E"/"F" ∈ "H"2("E"/"F", "A""F") is a kind of modified Galois group, used in various formulations of class field theory, and in particular in the Langlands program.
If "E"/"F" is a normal layer, then the (relative) Weil group "W""E"/"F" of "E"/"F" is the extension
1 → "A""F" → "W""E"/"F" → Gal("E"/"F") → 1
corresponding (using the interpretation of elements in the second group cohomology as central extensions) to the fundamental class "u""E"/"F" in "H"2(Gal("E"/"F"), "A""F"). The Weil group of the whole formation is defined to be the inverse limit of the Weil groups of all the layers
"G"/"F", for "F" an open subgroup of "G".
The reciprocity map of the class formation ("G", "A") induces an isomorphism from "AG" to the abelianization of the Weil group.
Archimedean local field.
For archimedean local fields the Weil group is easy to describe: for C it is the group C× of non-zero complex numbers, and for R it is a non-split extension of the Galois group of order 2 by the group of non-zero complex numbers, and can be identified with the subgroup C× ∪ "j" C× of the non-zero quaternions.
Finite field.
For finite fields the Weil group is infinite cyclic. A distinguished generator is provided by the Frobenius automorphism. Certain conventions on terminology, such as arithmetic Frobenius, trace back to the fixing here of a generator (as the Frobenius or its inverse).
Local field.
For a local field of characteristic "p" > 0, the Weil group is the subgroup of the absolute Galois group of elements that act as a power of the Frobenius automorphism on the constant field (the union of all finite subfields).
For "p"-adic fields the Weil group is a dense subgroup of the absolute Galois group, and consists of all elements whose image in the Galois group of the residue field is an integral power of the Frobenius automorphism.
More specifically, in these cases, the Weil group does not have the subspace topology, but rather a finer topology. This topology is defined by giving the inertia subgroup its subspace topology and imposing that it be an open subgroup of the Weil group. (The resulting topology is "locally profinite".)
Function field.
For global fields of characteristic "p">0 (function fields), the Weil group is the subgroup of the absolute Galois group of elements that act as a power of the Frobenius automorphism on the constant field (the union of all finite subfields).
Number field.
For number fields there is no known "natural" construction of the Weil group without using cocycles to construct the extension. The map from the Weil group to the Galois group is surjective, and its kernel is the connected component of the identity of the Weil group, which is quite complicated.
Weil–Deligne group.
The Weil–Deligne group scheme (or simply Weil–Deligne group) "W"′"K" of a non-archimedean local field, "K", is an extension of the Weil group "WK" by a one-dimensional additive group scheme "G""a", introduced by Deligne. In this extension the Weil group acts on the
additive group by
formula_0
where "w" acts on the residue field of order "q" as "a"→"a"||"w"|| with ||"w"|| a power of "q".
The local Langlands correspondence for GL"n" over "K" (now proved) states that there is a natural bijection between isomorphism classes of irreducible admissible representations of GL"n"("K") and certain "n"-dimensional representations of the Weil–Deligne group of "K".
The Weil–Deligne group often shows up through its representations. In such cases, the Weil–Deligne group is sometimes taken to be "WK" × "SL"(2,C) or "WK" × "SU"(2,R), or is simply done away with and Weil–Deligne representations of "WK" are used instead.
In the archimedean case, the Weil–Deligne group is simply defined to be the Weil group.
Notes.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": " \\displaystyle wxw^{-1} = ||w||x"
}
] |
https://en.wikipedia.org/wiki?curid=6375618
|
63758760
|
String Quartet No. 1 (Gerhard)
|
String quartet composed by Robert Gerhard
The String Quartet No. 1 is a piece for two violins, viola and cello, composed by Robert Gerhard between 1951 and 1955 and premiered at Dartington in 1956. The work marks a turning point in Gerhard's style and compositional processes: on the one hand, he recovers older techniques such as the sonata form in the first movement, alongside not-so-old ones such as the 12-tone technique; on the other, he develops, combines and transforms these resources together with new systematic processes of his own creation, leading to a new and broad theoretical framework that would be essential to his music thereafter.
Background.
Robert Gerhard began writing this string quartet in 1951 in Cambridge, where he had lived in exile since 1939 as a result of Franco's dictatorship, within a significant historical and personal context. He had been one of the most important disciples (and one of the few still alive) of Arnold Schoenberg, who died in that same year, 1951. At the same time, the twelve-tone compositional technique of the interwar avant-garde defended by Schoenberg was being displaced by integral serialism, led by Pierre Boulez.
Shortly before, two of Gerhard's recent pieces had been poorly received by critics. The first, the opera The Duenna, premiered abroad, was rejected for "abusing popular melodies", which might not have happened if the premiere had been in Barcelona, where the public could have been more involved in the plot and its references to politics and society. The second, the "Sonata for viola and piano", was said to be "lacking originality". This led Gerhard to a period of crisis. At this point he was witnessing a second great upheaval in the world of musical composition, after the first step towards the twelve-tone system at the end of the First World War.
Afterwards, he began a third stage of his compositional career, characterized by an exploration of the concept of serialism, but in a very different direction from that of Boulez or Messiaen. From then on, he applied the concept of serialization to pitch and to temporality, but in a more lax way: he did not necessarily make use of the entire chromatic scale, and also made some block exchanges within a single series, leaving more room for expressiveness. He also developed serialism in terms of time, based on the concept of time-seven: Gerhard was inclined to orient himself towards proportions (rather than rhythm) and the distance between events (so that articulation, rhythm, duration, metric and form are included in the same spectrum).
However, despite the criticisms and his predilection for this new serialism, Gerhard was never opposed to using elements from Catalan folk music in aspects such as rhythm, orchestration or the shape of the tone-rows. From this moment on, he also took over from Schoenberg in the didactic field of 12-tone teaching, giving lectures and writing articles.
In this sense, he shares with Bartók the will to transform the tradition in order to maintain it. In fact, the aesthetic evoked by some of the pieces of this new creative period, such as the "Piano Concerto" (1951) or the Harpsichord Concerto (1956), denotes to some extent a clear influence of the Hungarian composer. In addition, comparing aspects such as form, rhythm, modes or coloration clearly shows similarities in the two composers' use of folk music from their respective regions.
Two years later, the premiere of the "Symphony No. 1" (1953) had considerable success at the Festival of the International Society for Contemporary Music in Baden-Baden. Although Gerhard did not follow the other new musical currents, he was well aware of the work of the new generations of British composers and, in general, of the international scene. Within this situation there are a couple of exceptions: on one hand, during the 1950s he wrote instrumental music for BBC series (using the pseudonym "Joan de Serrallonga"); on the other hand, he researched and experimented with electronic music (particularly musique concrète), obtaining resources and materials that would later give rise to his electro-acoustic compositions. In this task, Leopoldina Feichtegger (Poldi, Gerhard's wife) played an important role. It could also be said that they would hardly have achieved all this progress without the patronage of Alice Roughton, who offered jobs and housing to the couple: Alice's residence was an important place of assistance and gathering for the intellectuals of the time.
An important fact about the String Quartet No. 1 is that it was not actually the first quartet he composed: he had previously written three more, but Gerhard ultimately dismissed the idea of incorporating these others into his catalog. The first two date from 1917 (as a pupil of Felip Pedrell) and 1922 (self-taught) respectively. The third has a more complex history: he composed it in 1927, while he was Schoenberg's student, but it is debated whether it was really a work he presented to the professor at the Akademie der Künste, as it differs stylistically from the rest of his works. However, part of the material of this latter quartet appears in the "Concertino for Strings" (1929). In addition, two years before his return to Catalonia in 1929, he entered this quartet in a composition competition in Barcelona, to gain a boost in his career in terms of recognition and economic autonomy.
Analysis.
This work is written in four movements and has an approximate duration of 20 minutes (19 at the premiere and 23 in the recording by the Kreutzer Quartet, from which the durations below are taken):
The gap in time between the beginning and the end of the compositional process, even though the work is based on the same twelve-note sequence throughout, creates a stylistic separation between movements 1–2 and 3–4. The last two movements are among the first explorations of the "Combinatory Code" that relates heights and proportions through series: "[I] employ simultaneously with the 12-note series, a set of numerically expressed proportions. This series can be thought of as a code for the combinatoric calculations that deal with the height-structure relationship." (Robert Gerhard, "Developments in Twelve-Tone Technique", 1956)
In addition, incidentally, one can notice that the golden ratio is implicit in the durations of the last movements: this may vary depending on the recording, but the ratio between the fourth and third movements approaches it, as does the ratio of the total of the recording to the last two movements.
Analysis of the movements.
Moviment I ("Allegro assai").
Gerhard worked hard to reinvent the application of the sonata form by mixing it with the stylistic resources he uses in his compositions, and this movement is a clear example of this. However, outside tonality, it was necessary to define new musical processes that would allow a sonata to be structured within thematic and harmonic parameters. Because the thematic element is preserved within the twelve-tone ideology, it is not necessary to change the technique in this regard beyond the use of its own kind of melodic lines. As for harmony and the lack of tonality, he divides the pitch tone-row into two hexachords and uses this duality to achieve the harmonic contrast that was previously marked by the old modes and tonalities. Hexachords are a resource Schoenberg had used before, and Gerhard is known to have spent some time studying his teacher's compositions such as "Von Heute auf Morgen".
The division of the tone-row into these two complementary hexachords at the beginning of the quartet signals the importance that the hexachordal relationship plays in this work. In addition, each hexachord of the original series belongs to the set class SC 6–22 (012468), which is self-complementary. As a result, each hexachord, whether extracted from the beginning or the end of the row, is related to the other by transposition and retrograde inversion. Gerhard takes this a step further by choosing to relate hexachordal tone-rows that share five pitch classes in common. This means that for each hexachordal row there will be two other hexachords that share five pitch classes with it. Since Gerhard exploits these relationships exclusively in this quartet, hexachords that manifest this relationship will be referred to as "closely" related.
While each hexachord in its prime form has only one related companion that shares its six pitch classes, there are two hexachordal transformations for each of the related six-note segments. This ladder-shaped relationship can be seen in Figure 2, which features a "hexachordal matrix" in which each prime form of the series is reordered from lowest to highest pitch class within its two hexachordal limits. Lines drawn from one hexachord to another pair it with its nearest relatives.
Due to this relationship, Gerhard can move freely between multiple transformations of the series, keeping at least five notes in common between hexachords and resulting in clear and uninterrupted harmonic coherence.
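A short sketch of this "closely related" relation, using the prime form of set class 6–22 as the hexachord (Gerhard's actual row ordering is not reproduced here), lists the transformations that retain five of its six pitch classes:

```python
# Sketch of the "closely related" relation described above, using the prime form of
# set class 6-22, {0, 1, 2, 4, 6, 8}; Gerhard's actual row is not reproduced here.
H = frozenset({0, 1, 2, 4, 6, 8})

def Tn(S, n):  return frozenset((x + n) % 12 for x in S)    # transposition by n semitones
def TnI(S, n): return frozenset((n - x) % 12 for x in S)    # inversion, then transposition

transforms = {f"T{n}": Tn(H, n) for n in range(12)}
transforms.update({f"T{n}I": TnI(H, n) for n in range(12)})

close = sorted(name for name, S in transforms.items() if len(H & S) == 5)
print(close)   # exactly two transformations share five pitch classes with H
```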
Regarding the use of these hexachords in the themes, he develops the relationships between these two structures, using retrogrades and inversions to develop the areas that correspond to exposition, development, recapitulation, and primary and subordinate themes. Thus, for example, he can only present one hexachord (with its respective transpositions) in the exposition, and reserve certain combinations of transpositions for occurrences of very specific motifs. The following schema refers to the association of these traditional segmentations and how Gerhard implements series transpositions and hexachords (encoded with P and H respectively).
This new perspective of the harmonic structure is largely consistent with Babbitt's thinking that the fundamental idea of the 12-tone technique is a reformulation of the principle of tonality. In fact, it is inferred from Gerhard's writings that he was very much aware of Babbitt's research.
In addition, this first movement features "echoes of primitive Iberian rhythms". Specifically, some of the polyrhythms in the "ostinato" refer to the "Charradas" of Salamanca and, to some extent, to the Fandango.
Second movement ("Con vivacità").
In this movement he takes advantage of the hexachords to establish a link between the transformations of the tone-row and its inversions. In this way, on one hand he pairs each transposition of the original sequence, with its retrograde and, on the other, with another inversion and its respective retrogradation.
Third movement ("Grave").
As far as temporality is concerned, the tone-row itself allows the durations of the notes to be derived from it: this is achieved by a simple subtraction modulo 12, that is, subtracting and taking the results in the interval formula_0, so that if a result falls outside this interval, 12 is added or subtracted as necessary. This procedure greatly influences the structure of the work, as it determines blocks of 78 formula_1 eighth notes in which a single row acts. In addition, he divides the sequence into two hexachords so that their respective durations are 33 and 45. The 33:45 ratio has an essential implication throughout the movement, as he divides it into 4 sections structured as follows: there are 3 sections of 11 bars (33 = 3 × 11) and the last one is 12 bars formula_2.
Thus, in addition to the pitch rows, there is a rhythmic series in use which governs, in the words of Julian White, "the movement, duration, and temporal succession of the total of sound events".
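One reading of this subtraction, consistent with the Vlad quotation cited below, is to take each note's duration as its distance in semitones from a root note, reduced to formula_0; the following sketch uses a hypothetical row (not Gerhard's) to show that any such derivation yields twelve durations summing to 78:

```python
# Sketch of one reading of the duration mechanism described above: each note's duration
# in eighth notes is its distance in semitones from a root note, reduced to [1, 12].
# The row used here is hypothetical, not Gerhard's, and the root is simply taken as the
# row's first pitch class.
row  = [0, 1, 2, 4, 6, 8, 3, 5, 7, 9, 10, 11]       # any ordering of the 12 pitch classes
root = row[0]

def duration(pc):
    d = (pc - root) % 12
    return d if d != 0 else 12                       # keep the result in [1, 12]

durations = [duration(pc) for pc in row]
print(sum(durations))                                # 78 = 1 + 2 + ... + 12 for any 12-tone row
print(sum(durations[:6]), sum(durations[6:]))        # durations of the two hexachords
```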
Fourth movement ("Molto allegro").
This movement follows the same compositional principles as the previous ones but also introduces the technique of cyclic meter trains, which superimposes musical structures with a certain number of beats onto a metre with a different number, so that the downbeat of the metre and the beginning of the structure do not coincide again until after a certain number of iterations (given by the least common multiple of the pulse counts of the two elements).
In particular, the following diagram (Figure 6) shows how each structure (a 9/8 and 7/8, respectively) sounds twice, adjusting with the 2/8 bar every 9 and 7 iterations, respectively.
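The realignment arithmetic behind this device can be sketched directly (pulse counts only; the actual figures of the score are not reproduced):

```python
from math import gcd

# Sketch of the cyclic-meter-train mechanism described above: a repeated figure of p
# eighth-note pulses laid over bars of q pulses realigns with the barline only after
# lcm(p, q) pulses, i.e. after lcm(p, q)/p statements of the figure and lcm(p, q)/q bars.
def realignment(p, q):
    cycle = p * q // gcd(p, q)            # least common multiple of the two pulse counts
    return cycle // p, cycle // q         # (statements of the figure, bars elapsed)

print(realignment(9, 2))                  # 9/8 figure over a 2/8 metre: (2, 9)
print(realignment(7, 2))                  # 7/8 figure over a 2/8 metre: (2, 7)
```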
Criticism on serialism.
For the first time, Gerhard radically applied the theory of rhythmic series in this quartet: the third movement can be understood as a study of proportions, and for it to be effective it is necessary to ensure perceptibility to the listener. These proportions, in part, could be derived from the tone-row itself, and play with this ambivalence. Gerhard continually sought to keep up to date with innovations in all the art forms, science and technology, which greatly influenced the way he conceived and analyzed music. "In the final two movements of the Quartet, Gerhard tries to rationalize such correspondences and to establish in his music precise connections between the pitch- and time-dimensions, which derive from a preconceived constructive plan. Thus, to every note in the series measured in semitones from a 'root-note' in the hexachordal system, a number is made to correspond it such that it can equally refer to a scale of time or of metrical values. Such a plan of organization may appear extremely rigid. But Roberto Gerhard is no pedant. He always knows how to preserve his freedom of action in confronting any musical problem." (Roman Vlad) Instead of assigning a number sequence to all rhythms (which would determine the rhythmic pattern a priori), he preferred to work the series at more general levels (metric, rhythmic) and only afterwards specify at a smaller scale. This distinguished him from other styles of serialism, criticized for lacking freedom and intuition. The other distinctive feature is the fact that he works with different time proportions that serve as a guide.
Premiere and recordings.
Completed in 1955, the quartet (dedicated to the Parrenin Quartet) premiered in August of the following year at Dartington Summer School, with a total duration of 19 minutes.
Notable recordings include those by the Kreutzer Quartet and the Arditti Quartet.
Bibliography.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "[1, 12]"
},
{
"math_id": 1,
"text": "(78= 1 + 2 + \\ldots + 12)"
},
{
"math_id": 2,
"text": "(12=45-33)"
}
] |
https://en.wikipedia.org/wiki?curid=63758760
|
63770253
|
Dirichlet negative multinomial distribution
|
Probability multivariate distribution
In probability theory and statistics, the Dirichlet negative multinomial distribution is a multivariate distribution on the non-negative integers. It is a multivariate extension of the beta negative binomial distribution. It is also a generalization of the negative multinomial distribution (NM("k", "p")) allowing for heterogeneity or overdispersion in the probability vector. It is used in quantitative marketing research to flexibly model the number of household transactions across multiple brands.
If the parameters of the Dirichlet distribution are formula_1, and if
formula_2
where
formula_3
then the marginal distribution of "X" is a Dirichlet negative multinomial distribution:
formula_4
In the above, formula_5 is the negative multinomial distribution and formula_6 is the Dirichlet distribution.
Motivation.
Dirichlet negative multinomial as a compound distribution.
The Dirichlet distribution is a conjugate distribution to the negative multinomial distribution. This fact leads to an analytically tractable compound distribution.
For a random vector of category counts formula_7, distributed according to a negative multinomial distribution, the compound distribution is obtained by integrating out the distribution for p, which can be thought of as a random vector following a Dirichlet distribution:
formula_8
formula_9
which results in the following formula:
formula_10
where formula_11 and formula_12 are the formula_13 dimensional vectors created by appending the scalars formula_14 and formula_15 to the formula_16 dimensional vectors formula_17 and formula_18 respectively and formula_19 is the multivariate version of the beta function. We can write this equation explicitly as
formula_20
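The compounding can also be sketched generatively (an illustrative NumPy sketch; the helper name and the convention that category 0 is the stopping category are assumptions made here, matching the urn description given later):

```python
import numpy as np

# Illustrative sampling sketch of the compounding above: draw
# p ~ Dirichlet(alpha_0, alpha_1, ..., alpha_m), then run categorical trials,
# stopping once x0 outcomes of category 0 have occurred, and report the other counts.
rng = np.random.default_rng(0)

def sample_dnm(x0, alpha0, alpha, size=1):
    m = len(alpha)
    out = np.zeros((size, m), dtype=int)
    for s in range(size):
        p = rng.dirichlet(np.concatenate(([alpha0], alpha)))   # (p_0, p_1, ..., p_m)
        counts = np.zeros(m + 1, dtype=int)
        while counts[0] < x0:
            counts[rng.choice(m + 1, p=p)] += 1
        out[s] = counts[1:]
    return out

print(sample_dnm(x0=3, alpha0=5.0, alpha=[1.0, 2.0], size=5))
```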
Alternative formulations exist. One convenient representation is
formula_21
where formula_22 and formula_23.
This can also be written
formula_24
Properties.
Marginal distributions.
To obtain the marginal distribution over a subset of Dirichlet negative multinomial random variables, one only needs to drop the irrelevant formula_25's (the variables that one wants to marginalize out) from the formula_1 vector. The joint distribution of the remaining random variates is formula_26 where formula_27 is the vector with the removed formula_25's. The univariate marginals are said to be beta negative binomially distributed.
Conditional distributions.
If "m"-dimensional x is partitioned as follows
formula_28
and accordingly formula_1
formula_29
then the conditional distribution of formula_30 on formula_31 is formula_32 where
formula_33
and
formula_34.
That is,
formula_35
Conditional on the sum.
The conditional distribution of a Dirichlet negative multinomial distribution on formula_36 is a Dirichlet-multinomial distribution with parameters formula_37 and formula_1. That is
formula_38.
Notice that the expression does not depend on formula_14 or formula_15.
Aggregation.
If
formula_39
then, if the random variables with positive subscripts "i" and "j" are dropped from the vector and replaced by their sum,
formula_40
Correlation matrix.
For formula_0 the entries of the correlation matrix are
formula_41
formula_42
Heavy tailed.
The Dirichlet negative multinomial is a heavy-tailed distribution. It does not have a finite mean for formula_43 and it has an infinite covariance matrix for formula_44. Therefore the moment generating function does not exist.
Applications.
Dirichlet negative multinomial as a Pólya urn model.
In the case when the formula_45 parameters formula_46 and formula_1 are positive integers, the Dirichlet negative multinomial can also be motivated by an urn model - or more specifically a basic Pólya urn model. Consider an urn initially containing formula_47 balls of formula_13 various colors, including formula_15 red balls (the stopping color). The vector formula_1 gives the respective counts of the balls of the formula_16 other, non-red colors. At each step of the model, a ball is drawn at random from the urn and replaced, along with one additional ball of the same color. The process is repeated over and over, until formula_14 red colored balls are drawn. The random vector formula_48 of observed draws of the formula_16 other non-red colors is distributed according to a formula_49. Note, at the end of the experiment, the urn always contains the fixed number formula_50 of red balls while containing the random number formula_51 of the other formula_16 colors.
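The urn scheme can be checked by simulation. The following Python sketch (with small, purely illustrative integer parameters) draws from the urn and compares the empirical probability of observing no non-red draws at all with the value implied by the explicit probability mass function above, which at x = 0 reduces to B(x0, α•)/B(x0, α0):

```python
import numpy as np
from math import lgamma, exp

rng = np.random.default_rng(0)

def polya_urn_dnm(x0, alpha0, alpha, rng):
    """One draw from DNM(x0, alpha0, alpha) via the Polya urn described above;
    colour 0 is the red stopping colour (integer parameters assumed)."""
    counts = np.array([alpha0] + list(alpha), dtype=float)  # balls in the urn
    draws = np.zeros(len(alpha), dtype=int)                  # non-red draws
    red_drawn = 0
    while red_drawn < x0:
        colour = rng.choice(len(counts), p=counts / counts.sum())
        counts[colour] += 1          # replace, plus one more of the same colour
        if colour == 0:
            red_drawn += 1
        else:
            draws[colour - 1] += 1
    return draws

x0, alpha0, alpha = 3, 5, [1, 2]
samples = np.array([polya_urn_dnm(x0, alpha0, alpha, rng) for _ in range(20000)])

# P(X = 0): the explicit pmf at x = 0 reduces to B(x0, alpha_bullet)/B(x0, alpha0).
log_beta = lambda a, b: lgamma(a) + lgamma(b) - lgamma(a + b)
analytic = exp(log_beta(x0, alpha0 + sum(alpha)) - log_beta(x0, alpha0))
print(round((samples == 0).all(axis=1).mean(), 3), round(analytic, 3))
```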
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "\\alpha_0>2"
},
{
"math_id": 1,
"text": "\\boldsymbol{\\alpha}"
},
{
"math_id": 2,
"text": "\nX \\mid p \\sim \\operatorname{NM}(x_0,\\mathbf{p}),\n"
},
{
"math_id": 3,
"text": "\n \\mathbf{p} \\sim \\operatorname{Dir}(\\alpha_0,\\boldsymbol\\alpha),\n"
},
{
"math_id": 4,
"text": "\nX \\sim \\operatorname{DNM}(x_0,\\alpha_0,\\boldsymbol{\\alpha}).\n"
},
{
"math_id": 5,
"text": " \\operatorname{NM}(x_0, \\mathbf{p})"
},
{
"math_id": 6,
"text": " \\operatorname{Dir}(\\alpha_0,\\boldsymbol\\alpha) "
},
{
"math_id": 7,
"text": "\\mathbf{x}=(x_1,\\dots,x_m)"
},
{
"math_id": 8,
"text": "\\Pr(\\mathbf{x}\\mid x_0, \\alpha_0, \\boldsymbol{\\alpha})=\\int_{\\mathbf{p}}\\mathrm{NegMult}(\\mathbf{x}\\mid x_0, \\mathbf{p}) \\mathrm{Dir}(\\mathbf{p}\\mid\\alpha_0,\\boldsymbol{\\alpha})\\textrm{d}\\mathbf{p}"
},
{
"math_id": 9,
"text": "\\Pr(\\mathbf{x}\\mid x_0, \\alpha_0, \\boldsymbol{\\alpha})={\\frac{\\Gamma\\left(\\sum_{i=0}^m{x_i}\\right)}{\\Gamma(x_0)\\prod_{i=1}^m x_i!}} \\frac{1}{\\mathrm{B}(\\boldsymbol\\alpha_+)}\\int_{\\mathbf{p}} \\prod_{i=0}^m p_i^{x_i+\\alpha_i - 1}\\textrm{d}\\mathbf{p} "
},
{
"math_id": 10,
"text": "\\Pr(\\mathbf{x}\\mid x_0, \\alpha_0, \\boldsymbol{\\alpha})={\\frac{\\Gamma\\left(\\sum_{i=0}^m{x_i}\\right)}{\\Gamma(x_0)\\prod_{i=1}^m x_i!}} \\frac{{\\mathrm{B}}(\\mathbf{x_+}+\\boldsymbol\\alpha_+)}{\\mathrm{B}(\\boldsymbol\\alpha_+)} "
},
{
"math_id": 11,
"text": "\\mathbf{x_+}"
},
{
"math_id": 12,
"text": "\\boldsymbol\\alpha_+"
},
{
"math_id": 13,
"text": "m+1"
},
{
"math_id": 14,
"text": "x_0"
},
{
"math_id": 15,
"text": "\\alpha_0"
},
{
"math_id": 16,
"text": "m"
},
{
"math_id": 17,
"text": "\\mathbf{x}"
},
{
"math_id": 18,
"text": "\\boldsymbol\\alpha"
},
{
"math_id": 19,
"text": "\\mathrm{B}"
},
{
"math_id": 20,
"text": "\\Pr(\\mathbf{x}\\mid x_0, \\alpha_0, \\boldsymbol{\\alpha})=x_0\\frac{\\Gamma(\\sum_{i=0}^m x_i)\\Gamma(\\sum_{i=0}^m \\alpha_i)}{\\Gamma(\\sum_{i=0}^m (x_i+\\alpha_i))} \\prod_{i=0}^m \\frac{\\Gamma(x_i+\\alpha_i)}{\\Gamma(x_i+1)\\Gamma(\\alpha_i)}."
},
{
"math_id": 21,
"text": "\\Pr(\\mathbf{x}\\mid x_0, \\alpha_0, \\boldsymbol{\\alpha})= \\frac{\\Gamma(x_\\bullet)}{\\Gamma(x_0)\\prod_{i=1}^m \\Gamma(x_i+1)} \\times \\frac{\\Gamma(\\alpha_\\bullet)}{\\prod_{i=0}^m \\Gamma(\\alpha_i)} \\times \\frac{\\prod_{i=0}^m \\Gamma(x_i+\\alpha_i)}{\\Gamma(x_\\bullet+\\alpha_\\bullet)}"
},
{
"math_id": 22,
"text": " x_\\bullet= x_0+x_1+ \\cdots + x_m "
},
{
"math_id": 23,
"text": " \\alpha_{\\bullet}= \\alpha_0+\\alpha_1+ \\cdots + \\alpha_m "
},
{
"math_id": 24,
"text": "\\Pr(\\mathbf{x}\\mid x_0, \\alpha_0, \\boldsymbol{\\alpha})=\\frac{\\mathrm{B}(x_\\bullet,\\alpha_\\bullet)}{\\mathrm{B}(x_0,\\alpha_0)}\\prod_{i=1}^m \\frac{\\Gamma(x_i+\\alpha_i)}{x_i! \\Gamma(\\alpha_i)}.\n"
},
{
"math_id": 25,
"text": "\\alpha_i"
},
{
"math_id": 26,
"text": "\\mathrm{DNM}(x_0,\\alpha_0,\\boldsymbol{\\alpha_{(-)}})"
},
{
"math_id": 27,
"text": "\\boldsymbol{\\alpha_{(-)}}"
},
{
"math_id": 28,
"text": "\n\\mathbf{x}\n=\n\\begin{bmatrix}\n \\mathbf{x}^{(1)} \\\\\n \\mathbf{x}^{(2)}\n\\end{bmatrix}\n\n\\text{ with sizes }\\begin{bmatrix} q \\times 1 \\\\ (m-q) \\times 1 \\end{bmatrix}"
},
{
"math_id": 29,
"text": "\n\\boldsymbol\\alpha\n=\n\\begin{bmatrix}\n \\boldsymbol\\alpha^{(1)} \\\\\n \\boldsymbol\\alpha^{(2)}\n\\end{bmatrix}\n\\text{ with sizes }\\begin{bmatrix} q \\times 1 \\\\ (m-q) \\times 1 \\end{bmatrix}"
},
{
"math_id": 30,
"text": "\\mathbf{X}^{(1)}"
},
{
"math_id": 31,
"text": "\\mathbf{X}^{(2)}=\\mathbf{x}^{(2)}"
},
{
"math_id": 32,
"text": "\\mathrm{DNM}(x_0^{\\prime},\\alpha_0^{\\prime},\\boldsymbol\\alpha^{(1)}) "
},
{
"math_id": 33,
"text": "\nx_0^{\\prime} = x_0 + \\sum_{i=1}^{m-q} x_i^{(2)}\n"
},
{
"math_id": 34,
"text": "\n\\alpha_0^{\\prime} = \\alpha_0 + \\sum_{i=1}^{m-q} \\alpha_i^{(2)}\n"
},
{
"math_id": 35,
"text": "\\Pr(\\mathbf{x}^{(1)}\\mid \\mathbf{x}^{(2)}, x_0, \\alpha_0, \\boldsymbol{\\alpha})= \\frac{\\mathrm{B}(x_\\bullet,\\alpha_\\bullet)}{\\mathrm{B}(x_0^{\\prime} ,\\alpha_0^{\\prime}) }\\prod_{i=1}^q\\frac{\\Gamma(x_i^{(1)}+\\alpha_i^{(1)})}{(x_i^{(1)}!)\\Gamma(\\alpha_i^{(1)})} "
},
{
"math_id": 36,
"text": "\\sum_{i=1}^m x_i = n"
},
{
"math_id": 37,
"text": "n"
},
{
"math_id": 38,
"text": "\\Pr(\\mathbf{x} \\mid \\sum_{i=1}^m x_i =n, x_0, \\alpha_0, \\boldsymbol{\\alpha})= \\frac{n!\\Gamma\\left(\\sum_{i=1}^m \\alpha_i\\right)}\n{\\Gamma\\left(n+\\sum_{i=1}^m \\alpha_i\\right)}\\prod_{i=1}^m\\frac{\\Gamma(x_{i}+\\alpha_{i})}{x_{i}!\\Gamma(\\alpha_{i})} "
},
{
"math_id": 39,
"text": "X = (X_1, \\ldots, X_m)\\sim\\operatorname{DNM}(x_0, \\alpha_0, \\alpha_1,\\ldots,\\alpha_m)"
},
{
"math_id": 40,
"text": "X' = (X_1, \\ldots, X_i + X_j, \\ldots, X_m)\\sim\\operatorname{DNM} \\left(x_0, \\alpha_0, \\alpha_1,\\ldots,\\alpha_i+\\alpha_j,\\ldots,\\alpha_m \\right)."
},
{
"math_id": 41,
"text": "\\rho(X_i,X_i) = 1."
},
{
"math_id": 42,
"text": "\\rho(X_i,X_j) = \\frac{\\operatorname{cov}(X_i,X_j)}{\\sqrt{\\operatorname{var}(X_i)\\operatorname{var}(X_j)}} = \\sqrt{\\frac{\\alpha_i \\alpha_j}{(\\alpha_0+\\alpha_i-1)(\\alpha_0+\\alpha_j-1)}}."
},
{
"math_id": 43,
"text": "\\alpha_0 \\leq 1"
},
{
"math_id": 44,
"text": "\\alpha_0 \\leq 2"
},
{
"math_id": 45,
"text": "m+2"
},
{
"math_id": 46,
"text": "x_0, \\alpha_0"
},
{
"math_id": 47,
"text": "\\sum_{i=0}^m{\\alpha_i} "
},
{
"math_id": 48,
"text": "\\mathbf{X}"
},
{
"math_id": 49,
"text": "\\mathrm{DNM}(x_0, \\alpha_0, \\boldsymbol{\\alpha})"
},
{
"math_id": 50,
"text": "x_0+\\alpha_0"
},
{
"math_id": 51,
"text": "\\mathbf{X}+\\boldsymbol{\\alpha}"
}
] |
https://en.wikipedia.org/wiki?curid=63770253
|
63771247
|
PET for bone imaging
|
Medical imaging technique
Positron emission tomography for bone imaging, as an in vivo tracer technique, allows the measurement of the regional concentration of radioactivity proportional to the image pixel values averaged over a region of interest (ROI) in bones. Positron emission tomography is a functional imaging technique that uses [18F]NaF radiotracer to visualise and quantify regional bone metabolism and blood flow. [18F]NaF has been used for imaging bones for the last 60 years. This article focuses on the pharmacokinetics of [18F]NaF in bones, and various semi-quantitative and quantitative methods for quantifying regional bone metabolism using [18F]NaF PET images.
Use of [18F]NaF PET.
The measurement of regional bone metabolism is critical to understand the pathophysiology of metabolic bone diseases.
Pharmacokinetics of [18F]NaF.
The chemically stable anion of Fluorine-18-Fluoride is a bone-seeking radiotracer in skeletal imaging. [18F]NaF has an affinity to deposit at areas where the bone is newly mineralizing. Many studies have used [18F]NaF PET to measure bone metabolism at the hip, lumbar spine, and humerus. [18F]NaF is taken up in an exponential manner, representing the equilibration of tracer with the extracellular and cellular fluid spaces with a half-life of 0.4 hours, and with the kidneys with a half-life of 2.4 hours. The single passage extraction of [18F]NaF in bone is 100%. After an hour, only 10% of the injected activity remains in the blood.
18F- ions are considered to occupy extracellular fluid spaces because, firstly, they equilibrate with transcellular fluid spaces and secondly, they are not entirely extracellular ions. Fluoride undergoes equilibrium with hydrogen fluoride, which has a high permeability allowing fluoride to cross the plasma blood membrane. The fluoride circulation in red blood cells accounts for 30%. However, it is freely available to the bone surface for uptake because the equilibrium between erythrocytes and plasma is much faster than the capillary transit time. This is supported by studies reporting 100% single-passage extraction of whole-blood 18F- ion by bone and the rapid release of 18F- ions from erythrocytes with a rate constant of 0.3 per second.
[18F]NaF is also taken up by immature erythrocytes in the bone marrow, which plays a role in fluoride kinetics. The plasma protein binding of [18F]NaF is negligible. [18F]NaF renal clearance is affected by diet and pH level, due to its re-absorption in the nephron, which is mediated by hydrogen fluoride. However, large differences in urine flow rate are avoided in controlled experiments by keeping patients well hydrated.
The amount of tracer that accumulates or is exchanged with bone extracellular fluid, and is chemisorbed onto hydroxyapatite crystals to form fluorapatite, is determined by the exchangeable pool and the size of the metabolically active surfaces in bone, as shown in Equation-1:
formula_0 Equation-1
Fluoride ions from the crystalline matrix of bone are released when the bone is remodelled, thus providing a measure of the rate of bone metabolism.
Measuring SUV.
Definition.
The standardized uptake value (SUV) is defined as tissue concentration (KBq/ml) divided by activity injected normalized for body weight.
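A minimal numerical sketch of this definition follows (in Python, with invented numbers; the common convention of taking the tissue density as roughly 1 g/ml, so that the SUV is dimensionless, is an assumption here):

```python
def suv(tissue_conc_kbq_per_ml, injected_activity_mbq, body_weight_kg):
    """Standardized uptake value: tissue concentration divided by the
    injected activity normalised for body weight (density ~1 g/ml assumed)."""
    injected_kbq = injected_activity_mbq * 1000.0   # MBq -> kBq
    body_weight_g = body_weight_kg * 1000.0         # kg  -> g
    return tissue_conc_kbq_per_ml / (injected_kbq / body_weight_g)

# Illustrative example: 12 kBq/ml in a bone ROI after a 250 MBq injection
# in a 70 kg patient.
print(round(suv(12.0, 250.0, 70.0), 2))  # ~3.36
```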
Appropriateness.
The SUV measured from a large ROI smooths out the noise and is, therefore, more appropriate in [18F]NaF bone studies, as the radiotracer is fairly uniformly taken up throughout the bone. The measurement of SUV is easy, cheap, and quick to perform, making it attractive for clinical use. It has been used in diagnosing and assessing the efficacy of therapy. SUV can be measured at a single site, or over the whole skeleton using a series of static scans, each restricted by the small field-of-view of the PET scanner.
Known Issues.
The SUV has emerged as a clinically useful, albeit controversial, semi-quantitative tool in PET analysis. Standardizing imaging protocols and measuring the SUV at the same time post-injection of the radiotracer is necessary to obtain a correct SUV, because imaging before the uptake plateau introduces unpredictable errors of up to 50% in SUVs. Noise, image resolution, and reconstruction do affect the accuracy of SUVs, but correction with a phantom can minimize these differences when comparing SUVs for multi-centre clinical trials. SUV may lack sensitivity in measuring response to treatment as it is a simple measure of tracer uptake in bone, which is affected by the tracer uptake in other competing tissues and organs in addition to the target ROI.
Measuring Ki.
The quantification of dynamic PET studies to measure Ki requires the measurement of the skeletal time-activity curves (TAC) from the region of interest (ROI) and the arterial input function (AIF), which can be measured in various different ways. However, the most common is to correct the image-based blood time-activity curves using several venous blood samples taken at discrete time points while the patient is scanned. The calculation of rate constants or Ki requires three steps:
Spectral method.
The method was first described by Cunningham & Jones in 1993 for the analysis of dynamic PET data obtained in the brain. It assumes that the tissue impulse response function (IRF) can be described as a combination of many exponentials. Since a tissue TAC can be expressed as a convolution of the measured arterial input function with the IRF, Cbone(t) can be expressed as:
formula_1
where, formula_2 is a convolution operator, Cbone(t) is the bone tissue activity concentration of tracer (in units: MBq/ml) over a period of time t, Cplasma(t) is the plasma concentration of tracer (in units: MBq/ml) over a period of time t, IRF(t) is equal to the sum of exponentials, β values are fixed between 0.0001 sec−1 and 0.1 sec−1 in intervals of 0.0001, n is the number of α components that resulted from the analysis and β1, β2..., βn corresponds to the respective α1, α2..., αn components from the resulted spectrum. The values of α are then estimated from the analysis by fitting multi-exponential to the IRF. The intercept of the linear fit to the slow component of this exponential curve is considered the plasma clearance (Ki) to the bone mineral.
Deconvolution method.
The method was first described by Williams et al. in the clinical context. The method was used by numerous other studies. This is perhaps the simplest of all the mathematical methods for the calculation of "Ki" but the one most sensitive to noise present in the data. A tissue TAC is modelled as a convolution of measured arterial input function with IRF, the estimates for IRF are obtained iteratively to minimise the differences between the left- and right-hand side of the following Equation:
formula_3
where, formula_2 is a convolution operator, Cbone(t) is the bone tissue activity concentration of tracer (in units: MBq/ml) over a period of time t, Cplasma(t) is the plasma concentration of tracer (in units: MBq/ml) over a period of time t, and IRF(t) is the impulse response of the system (i.e., a tissue in this case). The "Ki" is obtained from the IRF in a similar fashion to that obtained for the spectral analysis, as shown in the figure.
Hawkins model.
The measurement of Ki from dynamic PET scans requires tracer kinetic modelling to obtain the model parameters describing the biological processes in bone, as described by Hawkins et al. Since this model has two tissue compartments, it is sometimes called a two-tissue compartmental model. Various different versions of this model exist; however, the most fundamental approach is considered here, with two tissue compartments and four tracer-exchange parameters. The whole kinetic modelling process using the Hawkins model can be summed up in a single image as seen on the right-hand side. The following differential equations are solved to obtain the rate constants:
formula_4
formula_5
The rate constant "K1" (in units: ml/min/ml) describes the unidirectional clearance of fluoride from plasma to the whole of the bone tissue, "k2" (in units: min−1) describes the reverse transport of fluoride from the ECF compartment to plasma, "k3" and "k4" (in units min−1) describe the forward and backward transportation of fluoride from the bone mineral compartment.
"Ki" represents the net plasma clearance to bone mineral only. "Ki" is a function of both "K1", reflecting bone blood flow, and the fraction of the tracer that undergoes specific binding to the bone mineral "k3" / ("k2" + "k3"). Therefore, formula_6
Hawkins et al. found that the inclusion of an additional parameter called fractional blood volume (BV), representing the vascular tissue spaces within the ROI, improved the data fitting problem, although this improvement was not statistically significant.
Patlak method.
The Patlak method is based on the assumption that the backflow of tracer from bone mineral to bone ECF is zero (i.e., k4 = 0). The calculation of Ki using the Patlak method is simpler than non-linear regression (NLR) fitting of the arterial input function and the tissue time-activity curve data to the Hawkins model. It is crucial to note that the Patlak method can only measure bone plasma clearance ("Ki"), and cannot measure the individual kinetic parameters K1, k2, k3, or k4.
The concentration of tracer in tissue region-of-interest can be represented as a sum of concentration in bone ECF and the bone mineral. It can be mathematically represented as
formula_7
where, within the tissue region-of-interest from the PET image, Cbone(T) is the bone tissue activity concentration of tracer (in units: MBq/ml) at any time T, Cplasma(T) is the plasma concentration of tracer (in units: MBq/ml) at time T, Vo is the fraction of the ROI occupied by the ECF compartment, and formula_8, the area under the plasma curve, is the net tracer delivery to the tissue region of interest (in units: MBq·sec/ml) over time T. The Patlak equation is a linear equation of the form formula_9
Therefore, a linear regression is fitted to the data plotted on the Y- and X-axes between 4 and 60 minutes to obtain the m and c values, where m is the slope of the regression line, representing Ki, and c is its Y-intercept, representing Vo.
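The following Python sketch illustrates the Patlak fit on synthetic data generated with a known Ki and Vo and an assumed plasma input function; the straight-line fit over 4 to 60 minutes recovers the two values:

```python
import numpy as np

# Synthetic data: an assumed plasma input function and a bone TAC generated
# from the Patlak model (k4 = 0) with known Ki and Vo.
true_Ki, true_Vo = 0.03, 0.04
t = np.linspace(0.5, 60.0, 120)                        # minutes
c_plasma = 50.0 * np.exp(-0.1 * t)                     # MBq/ml, assumed shape
auc = np.concatenate(([0.0], np.cumsum(np.diff(t) * 0.5 *
                                        (c_plasma[1:] + c_plasma[:-1]))))
c_bone = true_Ki * auc + true_Vo * c_plasma            # Patlak model

# Patlak variables and a straight-line fit over 4-60 minutes
x = auc / c_plasma
y = c_bone / c_plasma
mask = t >= 4.0
m, c = np.polyfit(x[mask], y[mask], 1)
print(round(m, 4), round(c, 4))                        # recovers Ki and Vo
```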
Siddique–Blake method.
The calculation of Ki using arterial input function, time-activity curve, and Hawkins model was limited to a small skeletal region covered by the narrow field-of-view of the PET scanner while acquiring a dynamic scan. However, Siddique et al. showed in 2012 that it is possible to measure Ki values in bones using static [18F]NaF PET scans. Blake et al. later showed in 2019 that the Ki obtained using the Siddique–Blake method has precision errors of less than 10%. The Siddique–Blake approach is based on the combination of the Patlak method, the semi-population based arterial input function, and the information that Vo does not significantly change post-treatment. This method uses the information that a linear regression line can be plotted using the data from a minimum of two time-points, to obtain m and c as explained in the Patlak method. However, if Vo is known or fixed, only one single static PET image is required to obtain the second time-point to measure m, representing the Ki value. This method should be applied with great caution to other clinical areas where these assumptions may not hold true.
SUV vs Ki.
The most fundamental difference between SUV and Ki values is that SUV is a simple measure of uptake, normalized to body weight and injected activity. The SUV does not take into consideration the tracer delivery to the local region of interest from which the measurements are obtained, and is therefore affected by the physiological processes consuming [18F]NaF elsewhere in the body. On the other hand, Ki measures the plasma clearance to bone mineral, taking into account the tracer uptake elsewhere in the body, which affects the delivery of tracer to the region of interest from which the measurements are obtained. The differences between the measurement of Ki and SUV in bone tissue using [18F]NaF are explained in more detail by Blake et al.
It is critical to note that most of the methods for calculating Ki require dynamic PET scanning over an hour, except for the Siddique–Blake method. Dynamic scanning is complicated and costly. However, the calculation of SUV requires only a single static PET scan performed approximately 45–60 minutes post-tracer injection at any region imaged within the skeleton.
Many researchers have shown a high correlation between SUV and "Ki" values at various skeletal sites. However, SUV and Ki methods can give contradictory results when measuring response to treatment. Since SUV has not been validated against histomorphometry, its usefulness in bone studies measuring response to treatment and disease progression is uncertain.
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "Ca_{10}(PO_4)_6(OH)_2 + 2F- => Ca_{10}(PO_4)_6F_2 + 2.OH- "
},
{
"math_id": 1,
"text": "C_{bone}(t) = \\sum_{k=1}^n \\alpha_i . \\bigl ( C_{plasma}(t) \\otimes exp(-\\beta_i . t) \\bigr)"
},
{
"math_id": 2,
"text": "\\otimes"
},
{
"math_id": 3,
"text": "C_{bone}(t) = C_{plasma}(t) \\otimes IRF(t)"
},
{
"math_id": 4,
"text": "{\\operatorname{d}\\! C_{e}(t) \\over\\operatorname{d}\\! t } = K_1* C_p(t) - (k_2+k_3)*C_e(t) + k_4*C_b(t)\n"
},
{
"math_id": 5,
"text": "{\\operatorname{d}\\! C_{b}(t) \\over\\operatorname{d}\\! t } = k_3*C_e(t) - k_4*C_b(t)\n\n"
},
{
"math_id": 6,
"text": "\nK_i = \\left ( \\frac{K_1 * k_3}{k_2 + k_3} \\right )\n\n"
},
{
"math_id": 7,
"text": "\\frac{C_{bone}(T)}{C_{plasma}(T)} = K_i * \\frac{\\int\\limits_{0}^{T} C_{plasma}(t) dt}{C_{plasma}(T)} + V_o"
},
{
"math_id": 8,
"text": "\\int\\limits_{0}^{T} C_{plasma}(t) dt"
},
{
"math_id": 9,
"text": "Y = m*X + c"
}
] |
https://en.wikipedia.org/wiki?curid=63771247
|
6378204
|
Gauss sum
|
Sum in algebraic number theory
In algebraic number theory, a Gauss sum or Gaussian sum is a particular kind of finite sum of roots of unity, typically
formula_0
where the sum is over elements r of some finite commutative ring R, "ψ" is a group homomorphism of the additive group "R"+ into the unit circle, and "χ" is a group homomorphism of the unit group "R"× into the unit circle, extended to non-unit r, where it takes the value 0. Gauss sums are the analogues for finite fields of the Gamma function.
Such sums are ubiquitous in number theory. They occur, for example, in the functional equations of Dirichlet L-functions, where for a Dirichlet character χ the equation relating "L"("s", "χ") and "L"(1 − "s", "χ̄") (where "χ̄" is the complex conjugate of χ) involves a factor
formula_1
History.
The case originally considered by Carl Friedrich Gauss was the quadratic Gauss sum, for R the field of residues modulo a prime number p, and χ the Legendre symbol. In this case Gauss proved that "G"("χ") = "p"1⁄2 or "ip"1⁄2 for p congruent to 1 or 3 modulo 4 respectively (the quadratic Gauss sum can also be evaluated by Fourier analysis as well as by contour integration).
An alternate form for this Gauss sum is
formula_2.
Quadratic Gauss sums are closely connected with the theory of theta functions.
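Gauss's evaluation, and its agreement with the alternate form above, can be checked numerically for small primes; the following Python sketch is one way to do so:

```python
import cmath

def legendre(r, p):
    """Legendre symbol (r/p) for an odd prime p."""
    s = pow(r, (p - 1) // 2, p)
    return -1 if s == p - 1 else s

def gauss_sum(p):
    """G(chi) for the Legendre symbol mod p."""
    return sum(legendre(r, p) * cmath.exp(2j * cmath.pi * r / p)
               for r in range(1, p))

def alternate_form(p):
    """Sum of exp(2*pi*i*r^2/p) over r = 0, ..., p-1."""
    return sum(cmath.exp(2j * cmath.pi * r * r / p) for r in range(p))

for p in (5, 7, 13, 23):
    g = gauss_sum(p)
    expected = p ** 0.5 if p % 4 == 1 else 1j * p ** 0.5   # Gauss's theorem
    print(p, abs(g - expected) < 1e-9, abs(g - alternate_form(p)) < 1e-9)
```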
The general theory of Gauss sums was developed in the early 19th century, with the use of Jacobi sums and their prime decomposition in cyclotomic fields. Gauss sums over a residue ring of integers mod "N" are linear combinations of closely related sums called Gaussian periods.
The absolute value of Gauss sums is usually found as an application of Plancherel's theorem on finite groups. In the case where R is a field of p elements and χ is nontrivial, the absolute value is "p"1⁄2. The determination of the exact value of general Gauss sums, following the result of Gauss on the quadratic case, is a long-standing issue. For some cases see Kummer sum.
Properties of Gauss sums of Dirichlet characters.
The Gauss sum of a Dirichlet character modulo N is
formula_3
If χ is also primitive, then
formula_4
in particular, it is nonzero. More generally, if "N"0 is the conductor of χ and "χ"0 is the primitive Dirichlet character modulo "N"0 that induces χ, then the Gauss sum of χ is related to that of "χ"0 by
formula_5
where μ is the Möbius function. Consequently, "G"("χ") is non-zero precisely when "N"/"N"0 is squarefree and relatively prime to "N"0.
Other relations between "G"("χ") and Gauss sums of other characters include
formula_6
where "χ̄" is the complex conjugate Dirichlet character, and if "χ"′ is a Dirichlet character modulo "N"′ such that N and "N"′ are relatively prime, then
formula_7
The relation among "G"("χχ"′), "G"("χ"), and "G"("χ"′) when χ and "χ"′ are of the "same" modulus (and "χχ"′ is primitive) is measured by the Jacobi sum "J"("χ", "χ"′). Specifically,
formula_8
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "G(\\chi) := G(\\chi, \\psi)= \\sum \\chi(r)\\cdot \\psi(r)"
},
{
"math_id": 1,
"text": "\\frac{ G(\\chi) }{ |G(\\chi)| }. "
},
{
"math_id": 2,
"text": "\\sum e^{2 \\pi i r^2/p}"
},
{
"math_id": 3,
"text": "G(\\chi)=\\sum_{a=1}^N\\chi(a)e^{2\\pi ia/N}."
},
{
"math_id": 4,
"text": "|G(\\chi)|=\\sqrt{N},"
},
{
"math_id": 5,
"text": "G(\\chi)=\\mu\\left(\\frac{N}{N_0}\\right)\\chi_0\\left(\\frac{N}{N_0}\\right)G\\left(\\chi_0\\right)"
},
{
"math_id": 6,
"text": "G(\\overline{\\chi})=\\chi(-1)\\overline{G(\\chi)},"
},
{
"math_id": 7,
"text": " G\\left(\\chi\\chi^\\prime\\right) = \\chi\\left(N^\\prime\\right) \\chi^\\prime(N) G(\\chi) G\\left(\\chi^\\prime\\right). "
},
{
"math_id": 8,
"text": "G\\left(\\chi\\chi^\\prime\\right)=\\frac{G(\\chi)G\\left(\\chi^\\prime\\right)}{J\\left(\\chi,\\chi^\\prime\\right)}."
}
] |
https://en.wikipedia.org/wiki?curid=6378204
|
63805510
|
Electron orbital imaging
|
Electron orbital imaging is an X-ray synchrotron technique used to produce images of electron (or hole) orbitals in real space. It utilizes the technique of X-ray Raman scattering (XRS), also known as non-resonant inelastic X-ray scattering (NIXS), in which X-rays scatter inelastically off the electrons of a single crystal. It is an element specific spectroscopic technique for studying the valence electrons of transition metals.
Background.
Pictures of electron wavefunctions are commonplace in most quantum mechanics textbooks. However, such images of orbital shapes are entirely mathematical constructs. As a purely experimental technique, electron orbital imaging has the ability to solve some problems in condensed matter physics without the use of complementary theoretical approaches. Theoretical approaches, while indispensable, invariably rely on several underlying assumptions, which vary depending on the approach used. The motivation for developing orbital imaging stemmed from the desire to omit the complex theoretical calculations needed to model experimental spectra, and instead simply “see” the relevant occupied and unoccupied electron orbitals.
Experimental setup.
The non-resonant inelastic x-ray scattering cross section is orders of magnitude smaller than that of photoelectric absorption. Therefore, high-brilliance synchrotron beamlines with efficient spectrometers that are able to span a large solid angle of detection are required. XRS spectrometers are usually based on spherically curved analyzer crystals that act as focusing monochromator after the sample. The energy resolution is on the order of 1 eV for photon energies on the order of 10 keV.
Briefly put, the technique measures the density of electron holes in the valence band in the direction of the momentum transfer vector q (Fig. 1), which is defined as the difference in momentum between the incoming qin and outgoing qout photons. The sample is rotated between subsequent measurements (by some angle θ) such that the momentum transfer vector traverses a plane in the crystal. Because holes are simply the inverse of the electron occupation, the occupied (electrons) and unoccupied (holes) orbitals in a given plane can be imaged. In practice, photons of ~10 keV are used in order to achieve a sufficiently large q (needed to access dipole-forbidden transitions, see below Theoretical Basis). The scattered photons are detected at a constant energy, while the incident photon energy is swept above that over a range corresponding to the binding energy of the relevant excitation. For example, if the energy of the photons detected is 10 keV, and the nickel 3"s" (binding energy of 111 eV) excitation is of interest, then the incident photons are swept in a range around 10.111 keV. In this manner the "energy transferred" to the sample is measured. The intensity of a core level electron excitation (such as 3"s"→3"d") is integrated for various directions of the momentum transfer vector q relative to the crystal being measured. An "s" orbital is the most convenient to utilize because it is spherical, and therefore the technique is sensitive only to the shape of the final wavefunction. As such, the integrated intensity of the resulting spectrum is proportional to the hole density in the direction of q.
Theoretical basis.
The technique is hinged on its ability to access dipole forbidden electronic transitions.
The double differential cross section for a NIXS measurement is given by:
formula_0
where (dσ/dΩ)Th is the Thomson scattering cross-section (representing the elastic scattering of electromagnetic waves off electrons) and S(q,ω) is the dynamic structure factor, which contains the physics of the material being measured, and is given by:
formula_1
where q = kf - ki is the momentum transfer and the delta function δ conserves energy: ω is the photon energy loss and "E"i & "E"f are the initial and final states of the system, respectively. If q is small then the Taylor expansion of the transition matrix eiq·r implies that only the first (dipole) term in the expansion is important. Orbital imaging relies of the fact that as the momentum transfer increases (~4 to 15 Å−1) further terms in the expansion of the transition matrix become relevant, which allows the experimenter to observe higher multipole transitions (quadrupole, octupole, etc.).
Applications.
Electron orbital imaging has applications in solid state physics wherein the primary goal is to understand the observed bulk properties of a given material—whether electronic or magnetic—from the atomic perspective of the constituent electrons. In many materials it is the case that there is a delicate balance of competing interactions that together stabilize a particular orbital state, which in turn determines the physical properties. Electron orbital imaging allows scientists to directly image the valence electron orbitals in real space. This has the advantage of bypassing theoretical modelling of experimental spectra (which is often an intractable problem), and observing the relevant orbitals directly.
The first application of the technique was published in 2019 and showed the "3d" orbitals (specifically the holes, which are the inverse of the electrons) of Nickel(II) oxide. The shape of the "eg" orbitals was imaged in real space through a cross-sectional cut of a single crystal of NiO.
It has also been applied to the Ising magnetic material Ca3Co2O6 (Fig. 2) in order to show specifically that it is the sixth electron on the high-spin trigonally coordinated cobalt site that gives rise to the observed bulk large orbital magnetic moment.
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "{\\displaystyle {d^{2}\\sigma \\over d\\Omega d\\omega}=\\left({d\\sigma \\over d\\Omega }\\right)_{\\rm {Th}}\\times S(\\mathbf{q},\\omega)}"
},
{
"math_id": 1,
"text": "S(\\mathbf{q},\\omega)= \\sum _{{f}}|{\\mathrm \\langle{f} |{e}}^{{-i\\mathbf{q}\\cdot \\mathbf{r}}}|i\\rangle|^2 \\delta(E_i+E_f+\\hbar \\omega)"
}
] |
https://en.wikipedia.org/wiki?curid=63805510
|
63809732
|
Autologistic actor attribute models
|
Autologistic actor attribute models (ALAAMs) are a family of statistical models used to model the occurrence of node attributes (individual-level outcomes) in network data. They are frequently used with social network data to model social influence, the process by which connections in a social network influence the outcomes experienced by nodes. In the basic model the dependent variable is strictly binary; however, ALAAMs may be applied to any type of network data that incorporates binary, ordinal or continuous node attributes as dependent variables.
Background.
Autologistic actor attribute models (ALAAMs) are a method for social network analysis. They were originally proposed as an alteration of Exponential Random Graph Models (ERGMs) to allow for the study of social influence. ERGMs are a family of statistical models for modeling social selection: how ties within a network form on the basis of node attributes and other ties in the network. ALAAMs adapt the structure of ERGM models, but rather than predicting tie formation based on fixed node attributes, they predict node attributes based on fixed ties. This allows for the modeling of social influence processes, for instance how friendship among adolescents (network ties) may influence whether they smoke (node attributes), how networks influence other health-related practices, and how attitudes or perceived attitudes may change.
ALAAMs are distinct from other models of social influence on networks, such as epidemic/SIR models, because ALAAMs are used for the analysis of cross-sectional data, observed at only a single point in time.
Nodal attributes can be binary, ordinal, or even continuous. Recently, a Melbourne-based research group has incorporated a multilevel approach for ALAAMs in their MPNet software, for directed and undirected networks as well as valued ties (dyadic attributes). The software does not accept missing variables: cases must be deleted if one of their nodal variables is missing. The software is also unable to study ties outside the network cluster, for example when pupils name not only friends in their class but also friends outside the class or school.
An alternative model for studying a nodal attribute as a dependent variable in cross-sectional data is the Multiple Membership model extension for network analysis (which can also be extended to longitudinal data). Unlike ALAAMs, it can be used with a continuous dependent variable, is able to handle missingness, can make use of multiple networks (multiplex data) and can take ties outside the cluster into account as well.
Definition.
ALAAMs, like ERGMs, are part of the Exponential family of probability models. ALAAMs are exponential models that describe, for a network, a "joint probability distribution" for whether or not each node in the network exhibits a certain node-level attribute.
formula_0
where formula_1 is the vector of model parameters (weights), formula_2 is the corresponding vector of statistics computed from the node attributes y and the network X, and formula_3 is a normalization constant ensuring that the probabilities of all possible combinations of node attributes sum to one.
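For a very small network the normalization constant can be computed by brute force. The following Python sketch uses a toy four-node network and two illustrative statistics (an activity count and a simple contagion count); these statistic choices and parameter values are assumptions for illustration only, not a prescribed ALAAM specification:

```python
import itertools
import numpy as np

# Toy 4-node undirected network (adjacency matrix X).
X = np.array([[0, 1, 1, 0],
              [1, 0, 1, 0],
              [1, 1, 0, 1],
              [0, 0, 1, 0]])

def stats(y, X):
    """Illustrative statistics s(y, X): attribute count and number of ties
    whose both endpoints have the attribute ("contagion")."""
    y = np.asarray(y)
    activity = y.sum()
    contagion = np.sum(np.outer(y, y) * X) / 2
    return np.array([activity, contagion])

theta = np.array([-1.0, 0.8])             # illustrative model parameters

# Brute-force normalising constant c(theta) over all 2^n attribute vectors.
all_y = list(itertools.product([0, 1], repeat=X.shape[0]))
weights = np.array([np.exp(theta @ stats(y, X)) for y in all_y])
c_theta = weights.sum()

probs = weights / c_theta                  # P(Y = y | theta, X) for every y
print(round(probs.sum(), 6))               # 1.0
print(all_y[int(np.argmax(probs))])        # most probable attribute vector
```

For realistic network sizes this brute-force normalization is infeasible, which is why the estimation methods described below rely on Markov chain Monte Carlo techniques.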
Estimation.
Estimation of model parameters, and evaluation of standard errors (for the purposes of hypothesis testing), is conducted using Markov chain Monte Carlo maximum likelihood estimation (MCMC-MLE), building on approaches such as the Metropolis–Hastings algorithm. Such approaches are required to estimate the model's parameters across an intractable sample space for moderately-sized networks. After model estimation, goodness-of-fit testing, through the sampling of random networks from the fitted model, should be performed to ensure that the model adequately fits the observed data.
ALAAM estimation, while not perfect, has been demonstrated to be relatively robust to partially missing data, due to random sampling or snowball sampling data collection techniques.
Currently, these algorithms for estimating ALAAMs are implemented in the PNet and MPNet software, published by Melnet, a research group at the University of Melbourne.
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "\nP(Y = y | \\theta , X) = \\frac{\\exp(\\theta^{T} s(y,X))}{c(\\theta)}\n"
},
{
"math_id": 1,
"text": "\\theta"
},
{
"math_id": 2,
"text": "s(y,X)"
},
{
"math_id": 3,
"text": "c(\\theta)"
}
] |
https://en.wikipedia.org/wiki?curid=63809732
|
6381782
|
Weyl scalar
|
In the Newman–Penrose (NP) formalism of general relativity, Weyl scalars refer to a set of five complex scalars formula_0 which encode the ten independent components of the Weyl tensor of a four-dimensional spacetime.
Definitions.
Given a complex null tetrad formula_1 and with the convention formula_2, the Weyl-NP scalars are defined by
formula_3
formula_4
formula_5
formula_6
formula_7
Note: If one adopts the convention formula_8, the definitions of formula_9 should take the opposite values; that is to say, formula_10 after the signature transition.
Alternative derivations.
According to the definitions above, one must first work out the Weyl tensor before calculating the Weyl-NP scalars via contractions with the relevant tetrad vectors. This method, however, does not fully reflect the spirit of the Newman–Penrose formalism. As an alternative, one could first compute the spin coefficients and then use the NP field equations to derive the five Weyl-NP scalars
formula_11
formula_12
formula_13
formula_14
formula_15
where formula_16 (used for formula_17) refers to the NP curvature scalar formula_18 which could be calculated directly from the spacetime metric formula_19.
Physical interpretation.
Szekeres (1965) gave an interpretation of the different Weyl scalars at large distances:
formula_17 is a "Coulomb" term, representing the gravitational monopole of the source;
formula_20 & formula_21 are ingoing and outgoing "longitudinal" radiation terms;
formula_22 & formula_23 are ingoing and outgoing "transverse" radiation terms.
For a general asymptotically flat spacetime containing radiation (Petrov Type I), formula_20 & formula_21 can be transformed to zero by an appropriate choice of null tetrad. Thus these can be viewed as gauge quantities.
A particularly important case is the Weyl scalar formula_23.
It can be shown to describe outgoing gravitational radiation (in an asymptotically flat spacetime) as
formula_24
Here, formula_25 and formula_26 are the "plus" and "cross" polarizations of gravitational radiation, and the double dots represent double time-differentiation.
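A minimal numerical sketch of this relation is given below (in Python, for an invented damped-sinusoid waveform that is purely illustrative and not a solution of the field equations):

```python
import numpy as np

# Evaluate psi_4 = -h"_plus + i*h"_cross numerically for an assumed waveform.
t = np.linspace(0.0, 10.0, 4001)
omega, tau = 2 * np.pi, 10.0
h_plus  = np.cos(omega * t) * np.exp(-t / tau)
h_cross = np.sin(omega * t) * np.exp(-t / tau)

second = lambda f: np.gradient(np.gradient(f, t), t)   # finite-difference d^2/dt^2
psi4 = -second(h_plus) + 1j * second(h_cross)

# The polarizations can be read back as h"_plus = -Re(psi4), h"_cross = Im(psi4).
# Cross-check the finite-difference h"_plus against its analytic form.
analytic = ((1/tau**2 - omega**2) * np.cos(omega * t)
            + (2 * omega / tau) * np.sin(omega * t)) * np.exp(-t / tau)
print(np.max(np.abs(-psi4.real - analytic)[10:-10]))   # small vs. the ~omega^2 scale
```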
There are, however, certain examples in which the interpretation listed above fails. These are exact vacuum solutions of the Einstein field equations with cylindrical symmetry. For instance, a static (infinitely long) cylinder can produce a gravitational field which has not only the expected "Coulomb"-like Weyl component formula_17, but also non-vanishing "transverse wave"-components formula_22 and formula_23. Furthermore, purely outgoing Einstein-Rosen waves have a non-zero "incoming transverse wave"-component formula_22.
|
[
{
"math_id": 0,
"text": "\\{\\Psi_0, \\Psi_1, \\Psi_2,\\Psi_3, \\Psi_4\\}"
},
{
"math_id": 1,
"text": "\\{l^a, n^a, m^a, \\bar{m}^a\\}"
},
{
"math_id": 2,
"text": "\\{(-,+,+,+); l^a n_a=-1\\,,m^a \\bar{m}_a=1\\}"
},
{
"math_id": 3,
"text": "\\Psi_0 := C_{\\alpha\\beta\\gamma\\delta} l^\\alpha m^\\beta l^\\gamma m^\\delta\\ , "
},
{
"math_id": 4,
"text": "\\Psi_1 := C_{\\alpha\\beta\\gamma\\delta} l^\\alpha n^\\beta l^\\gamma m^\\delta\\ , "
},
{
"math_id": 5,
"text": "\\Psi_2 := C_{\\alpha\\beta\\gamma\\delta} l^\\alpha m^\\beta \\bar{m}^\\gamma n^\\delta\\ , "
},
{
"math_id": 6,
"text": "\\Psi_3 := C_{\\alpha\\beta\\gamma\\delta} l^\\alpha n^\\beta \\bar{m}^\\gamma n^\\delta\\ , "
},
{
"math_id": 7,
"text": "\\Psi_4 := C_{\\alpha\\beta\\gamma\\delta} n^\\alpha \\bar{m}^\\beta n^\\gamma \\bar{m}^\\delta\\ . "
},
{
"math_id": 8,
"text": "\\{(+,-,-,-); l^a n_a=1\\,,m^a \\bar{m}_a=-1\\}"
},
{
"math_id": 9,
"text": "\\Psi_i"
},
{
"math_id": 10,
"text": "\\Psi_i\\mapsto-\\Psi_i"
},
{
"math_id": 11,
"text": "\\Psi_0=D\\sigma-\\delta\\kappa-(\\rho+\\bar{\\rho})\\sigma-(3\\varepsilon-\\bar{\\varepsilon})\\sigma+(\\tau-\\bar{\\pi}+\\bar{\\alpha}+3\\beta)\\kappa\\,,"
},
{
"math_id": 12,
"text": "\\Psi_1=D\\beta-\\delta\\varepsilon-(\\alpha+\\pi)\\sigma-(\\bar{\\rho}-\\bar{\\varepsilon})\\beta+(\\mu+\\gamma)\\kappa+(\\bar{\\alpha}-\\bar{\\pi})\\varepsilon\\,,"
},
{
"math_id": 13,
"text": "\\Psi_2=\\bar{\\delta}\\tau-\\Delta\\rho-(\\rho\\bar{\\mu}+\\sigma\\lambda)+(\\bar{\\beta}-\\alpha-\\bar{\\tau})\\tau+(\\gamma+\\bar{\\gamma})\\rho+\\nu\\kappa-2\\Lambda\\,,"
},
{
"math_id": 14,
"text": "\\Psi_3=\\bar{\\delta}\\gamma-\\Delta\\alpha+(\\rho+\\varepsilon)\\nu-(\\tau+\\beta)\\lambda+(\\bar{\\gamma}-\\bar{\\mu})\\alpha+(\\bar{\\beta}-\\bar{\\tau})\\gamma\\,."
},
{
"math_id": 15,
"text": "\\Psi_4=\\delta\\nu-\\Delta\\lambda-(\\mu+\\bar{\\mu})\\lambda-(3\\gamma-\\bar{\\gamma})\\lambda+(3\\alpha+\\bar{\\beta}+\\pi-\\bar{\\tau})\\nu\\,."
},
{
"math_id": 16,
"text": "\\Lambda"
},
{
"math_id": 17,
"text": "\\Psi_2"
},
{
"math_id": 18,
"text": "\\Lambda:=\\frac{R}{24}"
},
{
"math_id": 19,
"text": "g_{ab}"
},
{
"math_id": 20,
"text": "\\Psi_1"
},
{
"math_id": 21,
"text": "\\Psi_3"
},
{
"math_id": 22,
"text": "\\Psi_0"
},
{
"math_id": 23,
"text": "\\Psi_4"
},
{
"math_id": 24,
"text": "\\Psi_4 = \\frac{1}{2}\\left( \\ddot{h}_{\\hat{\\theta} \\hat{\\theta}} - \\ddot{h}_{\\hat{\\phi} \\hat{\\phi}} \\right) + i \\ddot{h}_{\\hat{\\theta}\\hat{\\phi}} = -\\ddot{h}_+ + i \\ddot{h}_\\times\\ ."
},
{
"math_id": 25,
"text": "h_+"
},
{
"math_id": 26,
"text": "h_\\times"
}
] |
https://en.wikipedia.org/wiki?curid=6381782
|
638186
|
Fixed-point theorems in infinite-dimensional spaces
|
Theorems generalizing the Brouwer fixed-point theorem
In mathematics, a number of fixed-point theorems in infinite-dimensional spaces generalise the Brouwer fixed-point theorem. They have applications, for example, to the proof of existence theorems for partial differential equations.
The first result in the field was the Schauder fixed-point theorem, proved in 1930 by Juliusz Schauder (a previous result in a different vein, the Banach fixed-point theorem for contraction mappings in complete metric spaces, was proved in 1922). Quite a number of further results followed. One way in which fixed-point theorems of this kind have had a larger influence on mathematics as a whole is through attempts to carry over methods of algebraic topology, first proved for finite simplicial complexes, to spaces of infinite dimension. For example, the research of Jean Leray, who founded sheaf theory, came out of efforts to extend Schauder's work.
Schauder fixed-point theorem: Let "C" be a nonempty closed convex subset of a Banach space "V". If "f" : "C" → "C" is continuous with a compact image, then "f" has a fixed point.
Tikhonov (Tychonoff) fixed-point theorem: Let "V" be a locally convex topological vector space. For any nonempty compact convex set "X" in "V", any continuous function "f" : "X" → "X" has a fixed point.
Browder fixed-point theorem: Let "K" be a nonempty closed bounded convex set in a uniformly convex Banach space. Then any non-expansive function "f" : "K" → "K" has a fixed point. (A function formula_0 is called non-expansive if formula_1 for each formula_2 and formula_3.)
Other results include the Markov–Kakutani fixed-point theorem (1936-1938) and the Ryll-Nardzewski fixed-point theorem (1967) for continuous affine self-mappings of compact convex sets, as well as the Earle–Hamilton fixed-point theorem (1968) for holomorphic self-mappings of open domains.
Kakutani fixed-point theorem: Every correspondence that maps a compact convex subset of a locally convex space into itself with a closed graph and convex nonempty images has a fixed point.
|
[
{
"math_id": 0,
"text": "f"
},
{
"math_id": 1,
"text": "\\|f(x)-f(y)\\|\\leq \\|x-y\\| "
},
{
"math_id": 2,
"text": "x"
},
{
"math_id": 3,
"text": "y"
}
] |
https://en.wikipedia.org/wiki?curid=638186
|
638200
|
Schauder fixed-point theorem
|
The Schauder fixed-point theorem is an extension of the Brouwer fixed-point theorem to topological vector spaces, which may be of infinite dimension. It asserts that if formula_0 is a nonempty convex closed subset of a Hausdorff topological vector space formula_1 and formula_2 is a continuous mapping of formula_0 into itself such that formula_3 is contained in a compact subset of formula_0, then formula_2 has a fixed point.
A consequence, called Schaefer's fixed-point theorem, is particularly useful for proving existence of solutions to nonlinear partial differential equations.
Schaefer's theorem is in fact a special case of the far reaching Leray–Schauder theorem which was proved earlier by Juliusz Schauder and Jean Leray.
The statement is as follows:
Let formula_2 be a continuous and compact mapping of a Banach space formula_4 into itself, such that the set
formula_5
is bounded. Then formula_2 has a fixed point. (A "compact mapping" in this context is one for which the image of every bounded set is relatively compact.)
History.
The theorem was conjectured and proven for special cases, such as Banach spaces, by Juliusz Schauder in 1930. His conjecture for the general case was published in the Scottish book. In 1934, Tychonoff proved the theorem for the case when "K" is a compact convex subset of a locally convex space. This version is known as the Schauder–Tychonoff fixed-point theorem. B. V. Singbal proved the theorem for the more general case where "K" may be non-compact; the proof can be found in the appendix of Bonsall's book (see references).
|
[
{
"math_id": 0,
"text": "K"
},
{
"math_id": 1,
"text": "V"
},
{
"math_id": 2,
"text": "f"
},
{
"math_id": 3,
"text": "f(K)"
},
{
"math_id": 4,
"text": "X"
},
{
"math_id": 5,
"text": "\n\\{ x \\in X : x = \\lambda f(x) \\mbox{ for some } 0 \\leq \\lambda \\leq 1 \\}\n"
}
] |
https://en.wikipedia.org/wiki?curid=638200
|
63822450
|
Swish function
|
Mathematical activation function in data analysis
The swish function is a mathematical function defined as follows:
formula_0
where β is either constant or a trainable parameter depending on the model. For β = 1, the function becomes equivalent to the Sigmoid Linear Unit or SiLU, first proposed alongside the GELU in 2016. The SiLU was later rediscovered in 2017 as the Sigmoid-weighted Linear Unit (SiL) function used in reinforcement learning. The SiLU/SiL was then rediscovered as the swish over a year after its initial discovery, originally proposed without the learnable parameter β, so that β implicitly equalled 1. The swish paper was then updated to propose the activation with the learnable parameter β, though researchers usually let β = 1 and do not use the learnable parameter β. For β = 0, the function turns into the scaled linear function f("x") = "x"/2. With β → ∞, the sigmoid component approaches a 0-1 function pointwise, so swish approaches the ReLU function pointwise. Thus, it can be viewed as a smoothing function which nonlinearly interpolates between a linear function and the ReLU function. The swish function is non-monotonic, and this property may have influenced the proposal of other activation functions with the same property, such as Mish.
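A minimal NumPy implementation, together with checks of the β = 0 and large-β limits described above (the test values are arbitrary), is:

```python
import numpy as np

def swish(x, beta=1.0):
    """swish(x) = x * sigmoid(beta * x); beta = 1 gives the SiLU."""
    return x / (1.0 + np.exp(-beta * x))

x = np.array([-2.0, -1.0, 0.0, 1.0, 2.0])
print(swish(x, beta=1.0))                                   # SiLU values
print(np.allclose(swish(x, beta=0.0), x / 2))               # beta = 0: scaled linear x/2
print(np.allclose(swish(x, beta=50.0), np.maximum(x, 0)))   # large beta: close to ReLU
```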
When considering positive values, Swish is a particular case of sigmoid shrinkage function defined in (see the doubly parameterized sigmoid shrinkage form given by Equation (3) of this reference).
Applications.
In 2017, after performing analysis on ImageNet data, researchers from Google indicated that using this function as an activation function in artificial neural networks improves the performance, compared to ReLU and sigmoid functions. It is believed that one reason for the improvement is that the swish function helps alleviate the vanishing gradient problem during backpropagation.
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "\\operatorname{swish}(x) = x \\operatorname{sigmoid}(\\beta x) = \\frac{x}{1+e^{-\\beta x}}."
}
] |
https://en.wikipedia.org/wiki?curid=63822450
|
638234
|
Impedance bridging
|
In audio engineering and sound recording, a high impedance bridging, voltage bridging, or simply bridging connection is one in which the load impedance is much larger than the source impedance. The load measures the source's voltage while minimally drawing current or affecting it.
Explanation.
When the output of a device (consisting of the voltage source "V"S and output impedance "Z"S in illustration) is connected to the input of another device (the load impedance "Z"L in the illustration), these two impedances form a voltage divider:
formula_0
One can maximize the signal level "V"L by using a voltage source whose output impedance "Z"S is as small as possible and by using a receiving device whose input impedance "Z"L is as large as possible. When formula_1 (typically by at least ten times), this is called a "bridging connection"; nearly the full source voltage then appears across the load, while only a small current is drawn from the source.
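A short numerical sketch of the voltage divider (with arbitrary illustrative impedances) shows why a bridging connection passes nearly the full source voltage:

```python
def load_voltage_fraction(z_source, z_load):
    """V_L / V_S for the source-load voltage divider described above."""
    return z_load / (z_source + z_load)

# A bridging connection (Z_L >> Z_S) passes nearly the full source voltage;
# a matched connection (Z_L = Z_S) passes only half of it.
print(load_voltage_fraction(z_source=100.0, z_load=10_000.0))  # ~0.99
print(load_voltage_fraction(z_source=100.0, z_load=100.0))     # 0.5
```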
Applications.
Limit attenuation of voltage signal.
Impedance bridging is typically used to avoid unnecessary voltage attenuation and current draw in line or mic level connections where the source device has an unchangeable output impedance "Z"S. Fortunately, the input impedance "Z"L of modern op-amp circuits (and many old vacuum tube circuits) is often naturally much higher than the output impedance of these signal sources, and such circuits are thus naturally suited for impedance bridging when receiving and amplifying these voltage signals. The inherently lower output impedance of modern circuit designs also facilitates impedance bridging.
For devices with very high output impedances, such as with a guitar pickup or a high-Z mic, a DI box can help with impedance bridging by converting the high output impedances to a lower impedance so as to not require the receiving device to have outrageously high input impedance (which would suffer drawbacks such as increased noise in long cable runs). The DI box is placed close to the source device, so any long cables can be attached to the output of the DI box (which usually also converts unbalanced signals to balanced signals to further increase noise immunity).
Increase electrical efficiency.
The efficiency "η" of delivering power to a purely resistive load impedance of "R"L from a voltage source with a purely resistive output impedance of "R"S is:
formula_2
This "efficiency" can be increased using impedance bridging, by decreasing "R"S and/or by increasing "R"L.
However, to instead "transfer" the maximum power from the source to the load, impedance matching should be used, according to the maximum power transfer theorem.
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "\nV_L = \\frac{Z_L}{Z_S + Z_L} V_S \\, .\n"
},
{
"math_id": 1,
"text": "\nZ_L \\gg Z_S\n"
},
{
"math_id": 2,
"text": "\\eta = \\frac{1}{1 + R_\\mathrm{S} / R_\\mathrm{L}} \\, ."
}
] |
https://en.wikipedia.org/wiki?curid=638234
|
63825293
|
Bayesian history matching
|
Bayesian history matching is a statistical method for calibrating complex computer models. The equations inside many scientific computer models contain parameters which have a true value, but that true value is often unknown; history matching is one technique for learning what these parameters could be.
The name originates from the oil industry, where it refers to any technique for making sure oil reservoir models match up with historical oil production records. Since then, history matching has been widely used in many areas of science and engineering, including galaxy formation, disease modelling, climate science, and traffic simulation.
The basis of history matching is to use observed data to rule out any parameter settings which are "implausible". Since computer models are often too slow to individually check every possible parameter setting, this is usually done with the help of an emulator. For a set of potential parameter settings formula_0, their implausibility formula_1 can be calculated as:
formula_2
where formula_3 is the expected output of the computer model for that parameter setting, formula_4 represents the uncertainties around the computer model output for that parameter setting, and y is the corresponding real-world observation. In other words, a parameter setting is scored based on how different the computer model output is from the real-world observations, relative to how much uncertainty there is.
For computer models that output only one value, an implausibility of 3 is considered a good threshold for rejecting parameter settings. For computer models that produce more than one output, other thresholds can be used.
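A minimal Python sketch of this implausibility screen is shown below; the "emulator" here is a stand-in quadratic mean with a constant variance, and the observed value is invented, since in practice both would come from an emulator fitted to model runs and from real data:

```python
import numpy as np

y_obs = 3.0                                   # observed value (illustrative)

def emulator(theta):
    """Stand-in for a fitted emulator: assumed mean and total variance."""
    mean = theta ** 2
    var = 0.25 + 0.0 * theta                  # emulator + observation uncertainty
    return mean, var

def implausibility(theta):
    mean, var = emulator(theta)
    return np.abs(mean - y_obs) / np.sqrt(var)

candidates = np.linspace(-3.0, 3.0, 601)       # potential parameter settings
not_ruled_out = candidates[implausibility(candidates) < 3.0]
print(not_ruled_out[not_ruled_out > 0][[0, -1]])  # surviving positive-theta interval
```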
A key component of history matching is the notion of iterative refocussing, where new computer model simulations can be chosen to better improve the emulator and the calibration, based on preliminary results.
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": " \\boldsymbol\\theta "
},
{
"math_id": 1,
"text": " I(\\boldsymbol\\theta) "
},
{
"math_id": 2,
"text": " I(\\boldsymbol\\theta) = \\frac{|E[f(\\boldsymbol\\theta)] - y|}{\\sqrt{Var[f(\\boldsymbol\\theta)]}} "
},
{
"math_id": 3,
"text": "E[f(\\boldsymbol\\theta)] "
},
{
"math_id": 4,
"text": " Var[f(\\boldsymbol\\theta)] "
}
] |
https://en.wikipedia.org/wiki?curid=63825293
|
638310
|
Damping factor
|
Ratio of impedance of a loudspeaker
In an audio system, the damping factor is defined as the ratio of the rated impedance of the loudspeaker (usually assumed to be ) to the source impedance of the power amplifier. It was originally proposed in 1941. Only the magnitude of the loudspeaker impedance is used, and the power amplifier output impedance is assumed to be totally resistive.
In typical solid state and tube amplifiers, the damping factor varies as a function of frequency. In solid state amplifiers, the damping factor usually has a maximum value at low frequencies, and it reduces progressively at higher frequencies. The figure to the right shows the damping factor of two amplifiers. One is a solid state amplifier (Luxman L-509u) and the other is a tube amplifier (Rogue Atlas). These results are fairly typical of these two types of amplifiers, and they serve to illustrate the fact that tube amplifiers usually have much lower damping factors than modern solid state amplifiers, which is an undesirable characteristic.
Calculation.
The source impedance (that is seen by the loudspeaker) includes the connecting cable impedance. The load impedance formula_0 and the source impedance formula_1 are shown in the circuit diagram.
The definition of damping factor formula_2 normally used to characterize audio amplifiers is:
formula_3
However, in this form formula_2 is not in fact proportional to the electrical circuit damping. The load is the source of energy being damped, and if formula_1 = 0, the damping resistance in series with the energy source cannot fall below formula_0 itself (unless formula_1 is made negative, which is usually impractical). This fact was admitted and an improved definition was proposed:
formula_4
But the former definition has nevertheless become standard.
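A short numerical sketch of the standard definition (with illustrative, assumed impedance values) shows how cable resistance added to the source impedance lowers the damping factor:

```python
def damping_factor(z_load, z_amp_out, z_cable=0.0):
    """Damping factor as defined above: rated loudspeaker impedance divided by
    the total source impedance seen by the loudspeaker (amplifier output
    impedance plus connecting-cable impedance)."""
    return z_load / (z_amp_out + z_cable)

# Illustrative values (assumed, not measured): an 8-ohm loudspeaker driven by
# an amplifier with a 0.05-ohm output impedance.
print(round(damping_factor(8.0, 0.05), 1))               # 160.0 with negligible cable
print(round(damping_factor(8.0, 0.05, z_cable=0.1), 1))  # ~53.3 once cable resistance is added
```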
Explanation.
Pierce undertook an analysis of the effects of amplifier damping factor on the decay time and frequency-dependent response variations of a closed-box, acoustic suspension loudspeaker system. The results indicated that any damping factor over 10 is going to result in inaudible differences between that and a damping factor equal to infinity. However, it was also determined that the frequency-dependent variation in the response of the loudspeaker due to the output resistance of the amplifier is much more significant than the effects on system damping. It is also important to not confuse these effects with damping effects, as they are caused by two quite different mechanisms. The calculations suggested that a damping factor in excess of 50 will not lead to audible improvements, all other things being equal.
For audio power amplifiers employing some global negative feedback, this source impedance formula_1 is generally smaller than , which from the point of view of the driver voice coil is a near short circuit.
The loudspeaker's nominal load impedance (input impedance) of formula_0 is usually around , although other impedance speakers are available, sometimes dropping as low as or . However, the impedance rating of a loudspeaker is simply a number that indicates the nominal minimum impedance of that loudspeaker over a representative portion of its operating frequency range. It needs to be kept in mind that most loudspeakers have an impedance that varies considerably with frequency. For a dynamic loudspeaker driver, a peak in the impedance is present at the free-air resonance frequency of the driver, which can be significantly greater in magnitude than the nominal rated impedance. In addition, the electrical characteristics of every voice coil will change with temperature (high power levels increase voice coil temperature, and thus resistance), the inductance of voice-coil windings leads to a rising impedance at high frequencies, and passive crossover networks (composed of relatively large inductors, capacitors, and resistors) introduce further impedance variations in multi-way loudspeaker systems. Referring to the equation for formula_2 that was given above, this frequency-dependent variation in loudspeaker load impedance results in the value of the damping factor of the amplifier varying with frequency when it is connected to a loudspeaker impedance load.
In loudspeaker systems, the value of the damping factor between a particular loudspeaker and a particular amplifier describes the ability of the amplifier to control undesirable movement of the speaker cone near the resonant frequency of the speaker system. It is usually used in the context of low-frequency driver behavior, and especially so in the case of electrodynamic drivers, which use a magnetic motor to generate the forces which move the diaphragm. A high damping factor in an amplifier is sometimes considered to result in the amplifier having greater control over the movement of the speaker cone, particularly in the bass region near the resonant frequency of the driver's mechanical resonance.
Speaker diaphragms have mass, and their compliant suspension components have stiffness. Together, these form a resonant system, and the mechanical cone resonance may be excited by electrical signals (for example, pulses) at audio frequencies. But a driver with a voice coil is also a current generator, since it has a coil attached to the cone and suspension, and that coil is immersed in a magnetic field. For every motion the coil makes, it will generate a current that will be seen by any electrically attached equipment, such as an amplifier. In fact, the output circuitry of the amplifier will be the main electrical load on the "voice coil current generator". If that load has low resistance, the current will be larger, and the voice coil will be more strongly forced to decelerate. A high damping factor (which requires low output impedance at the amplifier output) very rapidly damps unwanted cone movements induced by the mechanical resonance of the speaker, acting as the equivalent of a "brake" on the voice coil motion (just as a short circuit across the terminals of a rotary electrical generator will make it very hard to turn). It is generally (though not universally) thought that tighter control of voice coil motion is desirable, as it is believed to contribute to better-quality sound.
The damping circuit.
The voltage generated by the moving voice coil forces current through three resistances: the resistance of the voice coil itself, the resistance of the speaker cables, and the output impedance of the amplifier.
Effect of voice coil resistance.
This is the key factor in limiting the amount of damping that can be achieved electrically, because its value is larger (say between 4 and 8 Ω typically) than any other resistance in the output circuitry of an amplifier that does not use an output transformer (nearly every solid-state amplifier on the mass market).
A loudspeaker's flyback (back-EMF) current is dissipated not only through the amplifier output circuit, but also through the internal resistance of the loudspeaker itself. Therefore, different loudspeakers will yield different damping factors when coupled with the same amplifier.
Effect of cable resistance.
The damping factor is affected to some extent by the resistance of the speaker cables. The higher the resistance of the speaker cables, the lower the damping factor.
Amplifier output impedance.
Modern solid state amplifiers, which use relatively high levels of negative feedback to control distortion, have very low output impedances—one of the many consequences of using feedback—and small changes in an already low value change overall damping factor by only a small, and therefore negligible, amount.
Thus, high damping factor values do not, by themselves, say very much about the quality of a system; most modern amplifiers have them, but vary in quality nonetheless.
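The following sketch illustrates why: it sums the voice-coil resistance, cable resistance, and amplifier output impedance that together form the loop in which the voice coil's back-EMF is damped, showing that once the amplifier's damping factor is moderately high, further increases barely change the total loop resistance. The specific resistance values are assumptions chosen only for illustration.

```python
# Minimal sketch (hypothetical values): the total resistance in the loop that damps
# the voice coil's back-EMF is the sum of the voice-coil resistance, the cable
# resistance, and the amplifier output impedance. Because the voice-coil
# resistance dominates, raising the amplifier damping factor beyond a modest
# value changes the total loop resistance very little.

R_voice_coil = 6.0      # ohms, assumed for a nominally 8-ohm driver
R_cable = 0.1           # ohms, assumed speaker-cable resistance
Z_nominal_load = 8.0    # ohms, assumed nominal loudspeaker impedance

for amp_damping_factor in (1, 10, 50, 200, 1000):
    z_source = Z_nominal_load / amp_damping_factor   # DF = Z_L / Z_S  =>  Z_S = Z_L / DF
    loop_resistance = R_voice_coil + R_cable + z_source
    print(f"amplifier DF = {amp_damping_factor:>5}: "
          f"Z_S = {z_source:6.3f} ohm, total damping-loop resistance = {loop_resistance:6.3f} ohm")
```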
Vacuum-tube amplifiers typically have much lower feedback ratios, and in any case almost always have output transformers that limit how low the output impedance can be. Their lower damping factors are one of the reasons many audiophiles prefer tube amplifiers. Taken even further, some tube amplifiers are designed to have no negative feedback at all.
In practice.
Typical modern solid-state amplifiers with negative feedback tend to have high damping factors, usually above 50 and sometimes even greater than 150. High damping factors tend to reduce the extent to which a loudspeaker "rings" (undergoes unwanted short-term oscillation after an impulse of power is applied), but the extent to which damping factors higher than about 20 help in this respect is easily overstated; there will be significant effective internal resistance, as well as some resistance and reactance in crossover networks and speaker cables. Older amplifiers, plus modern triode and even solid-state amplifiers with low negative feedback, will tend to have damping factors closer to unity, or even less than 1.
Although extremely high values of damping factor in an amplifier will not necessarily make the loudspeaker–amplifier combination sound better, a high damping factor can serve to reduce the intensity of added frequency response variations that are undesirable. The figure on the right shows the effect of damping factor on the frequency response of an amplifier when that amplifier is connected to a simulated loudspeaker impedance load. This load is moderately demanding but not untypical of high-fidelity loudspeakers that are on the market, and it is based on the circuit proposed by Atkinson. The audio magazines "Stereophile" and "Australian Hi-Fi" have recognised the importance of amplifier damping factor, and have made the use of the simulated loudspeaker load a routine part of their amplifier measurements. At around , the real-life difference between an amplifier with a moderate (100) damping factor and one with a low (20) damping factor is about 0.37 dB. However, the amplifier with the low damping factor is acting more like a subtle graphic equaliser than is the amplifier with the moderate damping factor, where the peaks and dips in the amplifier's frequency response correspond closely to the peaks and dips in the loudspeaker impedance response.
It is clear from the various amplifier frequency response curves that low damping factor values result in significant changes in the frequency response of the amplifier in a number of frequency bands. This will result in broad levels of sound coloration that are highly likely to be audible. In addition, the frequency response changes will depend on the frequency-dependent impedance of whichever loudspeaker happens to be connected to the amplifier. Hence, in high-fidelity sound reproduction systems, amplifiers with moderate to high damping factors are the preferred option if accurate sound reproduction is desired when those amplifiers are connected to typical multi-way loudspeaker impedance loads.
Some amplifier designers, such as Nelson Pass, claim that loudspeakers can sound better with lower electrical damping, although this may be attributed to listener preference rather than technical merit. A lower damping factor helps to enhance the bass response of the loudspeaker by several decibels (where the impedance of the speaker would be at its maximum), which is useful if only a single speaker is used for the entire audio range. Therefore, some amplifiers, in particular vintage amplifiers from the 1950s, 1960s and 1970s, feature controls for varying the damping factor. While such bass "enhancement" may be pleasing to some listeners, it nonetheless represents a distortion of the input signal.
One example of a vintage amplifier with a damping control is the Accuphase E-202, which has a three-position switch described by the following excerpt from its owner's manual:
<templatestyles src="Template:Blockquote/styles.css" />Speaker Damping Control enhances characteristic tonal qualities of speakers. The damping factor of solid state amplifiers is generally very large and ideal for damping the speakers. However, some speakers require an amplifier with a low damping factor to reproduce rich, full-bodied sound. The E-202 has a Speaker Damping Control which permits choice of three damping factors and induces maximum potential performance from any speaker. Damping factor with an load becomes more than 50 when this control is set to NORMAL. Likewise, it is 5 at MEDIUM position, and 1 at SOFT position. It enables choosing the speaker sound that one prefers.
Damping is also a concern in guitar amplifiers (an application in which controlled distortion is desirable) and low damping can be better. Numerous guitar amplifiers have damping controls, and the trend to include this feature has been increasing since the 1990s. For instance the Marshall Valvestate 8008 rack-mounted stereo amplifier has a switch between "linear" and "Valvestate" mode:
"Linear/Vstate selector. Slide to select linear or Valvestate performance. The Valvestate mode gives extra warm harmonics plus the richness of tone, which is unique to the Valvestate power stage. Linear mode produces a highly defined hi-fi tone that gives a totally different character to the sound and suits certain modern "metal" styles, or PA applications."
This is actually a damping control based on negative current feedback, which is evident from the schematic, where the same switch is labeled as "Output Power Mode: Current/Voltage".
See also.
<templatestyles src="Div col/styles.css"/>
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "Z_\\mathrm{L}"
},
{
"math_id": 1,
"text": "Z_\\mathrm{S}"
},
{
"math_id": 2,
"text": "DF"
},
{
"math_id": 3,
"text": "\nDF = \\frac{Z_\\mathrm{L}}{Z_\\mathrm{S}}\n"
},
{
"math_id": 4,
"text": "\nDF = \\frac{Z_\\mathrm{L}+Z_\\mathrm{S}}{Z_\\mathrm{L}}\n"
}
] |
https://en.wikipedia.org/wiki?curid=638310
|
63833862
|
Judges 20
|
Book of Judges chapter
Judges 20 is the twentieth chapter of the Book of Judges in the Old Testament or the Hebrew Bible. According to Jewish tradition the book was attributed to the prophet Samuel, but modern scholars view it as part of the Deuteronomistic History, which spans the books of Deuteronomy to 2 Kings and is attributed to nationalistic and devotedly Yahwistic writers during the time of the reformer Judean king Josiah in the 7th century BCE. This chapter records the war between the tribe of Benjamin and the other eleven tribes of Israel, belonging to a section comprising Judges 17 to 21.
Text.
This chapter was originally written in the Hebrew language. It is divided into 48 verses.
Textual witnesses.
Some early manuscripts containing the text of this chapter in Hebrew are of the Masoretic Text tradition, which includes the Codex Cairensis (895), Aleppo Codex (10th century), and Codex Leningradensis (1008).
Extant ancient manuscripts of a translation into Koine Greek known as the Septuagint (originally made in the last few centuries BCE) include Codex Vaticanus (B; formula_0B; 4th century) and Codex Alexandrinus (A; formula_0A; 5th century).
Analysis.
Double Introduction and Double Conclusion.
Chapters 17 to 21 contain the "Double Conclusion" of the Book of Judges and form a type of inclusio together with their counterpart, the "Double Introduction", in chapters 1 to 3:6 as in the following structure of the whole book:
A. Foreign wars of subjugation with the "ḥērem" being applied (1:1–2:5)
B. Difficulties with foreign religious idols (2:6–3:6)
Main part: the "cycles" section (3:7–16:31)
B'. Difficulties with domestic religious idols (17:1–18:31)
A'. Domestic wars with the "ḥērem" being applied (19:1–21:25)
There are similar parallels between the double introduction and the double conclusion as the following:
The entire double conclusion is connected by the four-time repetition of a unique statement: twice in full at the beginning and the end of the double conclusion and twice in the center of the section as follows:
A. In those days there was no king…
Every man did what was right in his own eyes (17:6)
B. In those days there was no king… (18:1)
B'. In those days there was no king… (19:1)
A'. In those days there was no king…
Every man did what was right in his own eyes (21:25)
It also contains internal links:
Conclusion 1 (17:1–18:31): A Levite in Judah moving to the hill country of Ephraim and then on to Dan.
Conclusion 2 (19:1–21:25): A Levite in Ephraim looking for his concubine in Bethlehem in Judah.
Both sections end with a reference to Shiloh.
The Bethlehem Trilogy.
Three sections of the Hebrew Bible (Old Testament) — Judges 17–18, Judges 19–21, Ruth 1–4 — form a trilogy with a link to the city Bethlehem of Judah and characterized by the repetitive unique statement:
"In those days there was no king in Israel; everyone did what was right in his own eyes"
(Judges 17:6; 18:1; 19:1; 21:25)
as in the following chart:
Chapters 19 to 21.
The section comprising Judges 19:1-21:25 has a chiastic structure of five episodes as follows:
A. The Rape of the Concubine (19:1–30)
B. "ḥērem" ("holy war") of Benjamin (20:1–48)
C. Problem: The Oaths-Benjamin Threatened with Extinction (21:1–5)
B'. "ḥērem" ("holy war") of Jabesh Gilead (21:6–14)
A'. The Rape of the Daughters of Shiloh (21:15–25)
The rape of the daughters of Shiloh is the ironic counterpoint to the rape of the Levite's concubine, with the "daughter" motif linking the two stories, and the women becoming 'doorways leading into and out of war, sources of contention and reconciliation'.
Preparation for war (20:1–11).
This chapter records the detailed process of a civil war that pits pan-Israelite unity against tribal unity. It also wrestles with the execution of a 'ban' (Hebrew: "herem"; "holy war"): whether Israel should eliminate a whole tribe to root out evil in its own midst, as required in Deuteronomy 13:12–18. As stated in Deuteronomy 13:14, an investigation must first be undertaken before the Israel confederation can declare war against alleged miscreants (verses 3–7; cf. 'base fellows' in Deuteronomy 13:13). The tribe of Benjamin did not send any representative to the gathering, although they had heard about the event (verse 3). The Levite was called to testify about the crime committed against his concubine, but as the sole witness he heightened the evil deed of the Gibeahites while omitting his cowardly sacrifice of her. There was a unity of the tribes ("as one man" in verses 1, 8, 11) and a single-mindedness in rooting out the evil in their midst, so that vengeance was to be directed against the entire city of Gibeah because of the evildoers within it, just as the action against a breaker of the covenant would be extended to his family and townsmen (cf. Deuteronomy 13:15–16; Joshua 7:24–25).
"Then all the children of Israel went out, and the congregation was gathered together as one man, from Dan even to Beersheba, with the land of Gilead, unto the LORD in Mizpeh."
Benjaminite War (20:12–48).
The war between the tribe of Benjamin and the other tribes of Israel consists of three battles, reported with a similar structure in this chapter. The focus is on how the people of Israel would gradually humble themselves before YHWH (after two losses), so that the goals of Israel and YHWH would coincide (a huge victory against the Benjaminites).
The head count of the fighting men both from the Benjaminites and the other tribes of Israel in verses 15-17 can be compared to the last count in Numbers 26 as follows:
Assuming that the ratio between the number of men able to go to war and the total population remains relatively constant, the count indicates a decline of almost 30 percent in Israel's population since they entered the land of Canaan, so 'despite the victories under Joshua, Israel has not prospered since its arrival in Canaan' (cf. Deuteronomy 28:29).
The battle report structure, especially for the first battle in chapter 20, is similar to that in chapter 1 as follows:
The battle accounts appear to end, but because 600 Benjaminites escape, the finale of the battle is not technically a full imposition of the ban, which, in the Books of Deuteronomy and Joshua, is described as the killing of all human enemies.
26"Then all the children of Israel, and all the people, went up, and came unto the house of God, and wept, and sat there before the LORD, and fasted that day until even, and offered burnt offerings and peace offerings before the LORD."
27"And the children of Israel enquired of the LORD, (for the ark of the covenant of God was there in those days,"
28"And Phinehas, the son of Eleazar, the son of Aaron, stood before it in those days,) saying,"
"Shall I yet again go out to battle against the children of Benjamin my brother, or shall I cease?"
"And the Lord said,"
"Go up; for to morrow I will deliver them into thine hand."
Notes.
<templatestyles src="Reflist/styles.css" />
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": " \\mathfrak{G}"
}
] |
https://en.wikipedia.org/wiki?curid=63833862
|
63833866
|
2 Samuel 13
|
Second Book of Samuel chapter
2 Samuel 13 is the thirteenth chapter of the Second Book of Samuel in the Old Testament of the Christian Bible or the second part of the Books of Samuel in the Hebrew Bible. According to Jewish tradition the book was attributed to the prophet Samuel, with additions by the prophets Gad and Nathan, but modern scholars view it as a composition of a number of independent texts of various ages from c. 630–540 BCE. This chapter contains the account of David's reign in Jerusalem, within a section comprising 2 Samuel 9–20 (continued in 1 Kings 1–2) which deals with the power struggles among David's sons to succeed to David's throne, until 'the kingdom was established in the hand of Solomon' (1 Kings 2:46).
Text.
This chapter was originally written in the Hebrew language. It is divided into 39 verses.
Textual witnesses.
Some early manuscripts containing the text of this chapter in Hebrew are of the Masoretic Text tradition, which includes the Codex Cairensis (895), Aleppo Codex (10th century), and Codex Leningradensis (1008). Fragments containing parts of this chapter in Hebrew were found among the Dead Sea Scrolls including 4Q51 (4QSama; 100–50 BCE) with extant verses 1–6, 13–34, 36–39.
Extant ancient manuscripts of a translation into Koine Greek known as the Septuagint (originally made in the last few centuries BCE) include Codex Vaticanus (B; formula_0B; 4th century) and Codex Alexandrinus (A; formula_0A; 5th century).
Analysis.
This chapter can be divided into two sections:
The two sections parallel each other:
Both sections open with the same phrase construction "hyh + l + Absalom", reporting that Absalom "had" a sister (13:1) and that Absalom "had" sheep shearers (13:23). The victims in both sections unwittingly entered the domains of their attackers, made available to their assailants by King David, with the violence happening around food. The difference is the lengthy description of Tamar's care for her predator before the rape, in contrast to the very little attention given to Amnon before the murder, perhaps to show that Amnon was not an innocent victim.
David played a key role in both episodes, in the first by providing Amnon access to Tamar and in the second by allowing Amnon and Absalom to get together. Crucially, David failed to exact justice for Tamar, and this incited Absalom, Tamar's brother, to take the role of "judge" and punish Amnon by killing him; later he openly took that role (2 Samuel 15) to bolster support for his rebellion against David. These episodes involving Amnon, Tamar, and Absalom have a direct bearing on David's succession.
Amnon raped Tamar (13:1–22).
The crown prince of Israel, Amnon, son of David and Ahinoam, was deeply attracted to Tamar, the full sister of Absalom, both children of David and Maacah. Apparently virgins were kept under close guard, so Amnon did not have direct access to Tamar (verse 3); he therefore used trickery suggested by his cousin Jonadab (verses 3–5) to have Tamar come and take care of him (under the pretense of being sick), with David's permission. When left alone with Tamar, Amnon raped his sister, ignoring Tamar's plea for a proper marriage, because Amnon was driven not by love but by lust. Although marriage between blood siblings is recorded in the early part of the Hebrew Bible (cf. Genesis 20:12), it was later prohibited by the Torah (cf. Leviticus 18:9, 11; 20:17; Deuteronomy 27:22). Whether or not this was known to Amnon, after the rape he felt an intense loathing of Tamar, and despite Tamar's expectation that Amnon would marry her (verse 16; cf. Exodus 22:16; Deuteronomy 22:28), she was expelled from his sight with contempt (verses 15, 17–18). Tamar immediately went into mourning, tearing the long gown she was wearing as a virgin princess as a sign of grief rather than of lost virginity, as well as putting ashes on her head and placing a hand on her head (cf. Jeremiah 2:37). Verse 21 notes that David was very angry when he heard, but he did not take any action against Amnon; the Greek text of the Septuagint and 4QSama have a reading not found in the Masoretic Text: 'but he would not punish his son Amnon, because he loved him, for he was his firstborn' (NRSV; note in ESV). Absalom would have resented David's leniency, but he restrained himself (verse 22) for two years (verse 23) while planning his revenge.
This section has a structure that meticulously places the rape at the center:
A. The characters and their relationships (13:1–3)
B. Planning the rape (13:4–7)
B1. Jonadab's advice to Amnon (13:4–5)
B2. David provides Amnon access to Tamar (13:6–7)
C. Tamar's actions (13:8–9)
D. Tamar comes into the inner room (13:10)
E. The dialogue before the rape (13:11)
E1. Amnon orders Tamar (13:12–13)
E2. Tamar protests (13:11–14a)
E3. Amnon will not listen to Tamar (13:14a)
F. The rape (13:14b)
E'. The dialogue after the rape (13:15–16)
E1'. Amnon orders Tamar (13:15)
E2'. Tamar protests (13:16ab)
E3'. Amnon will not listen to Tamar (13:16c)
D'. Tamar is thrown out of the room (13:17–18)
C'. Tamar's actions (13:19)
B'. The aftermath of the rape (13:20–21)
B1'. Absalom's advice to Tamar (13:20)
B2'. David's reaction (13:21)
A.' New relationships among the characters (13:22)
The episode begins with a description of the relationships among the characters (A), which are permanently ruptured at the end (A'). David's actions (B/B') and Tamar's actions (C/C') bracket the central action, which is framed by the entrance/exit of Tamar from the room (D/D') and the verbal confrontations between Amnon and Tamar (E/E').
"But Amnon had a friend, whose name was Jonadab, the son of Shimeah, David's brother. And Jonadab was a very crafty man."
Absalom's revenge on Amnon (13:23–39).
Absalom's revenge on Amnon was timed to coincide with the sheep-shearing festivities at Baal-hazor near Ephraim, probably a few miles from Jerusalem, so it was perfectly reasonable for Absalom to invite the king and his servants to the celebrations. David was said to have had a slight suspicion of Absalom's personal invitation (verse 24), so he did not go, but he was persuaded by Absalom to allow Amnon to go (verses 25, 27). Apparently David did not realize the extent of Absalom's hatred until he was briefed by Jonadab (cf. verse 32). According to the Septuagint and 4QSama, 'Absalom made a feast like a king's feast' (NRSV). The murderers are identified only as Absalom's servants (verse 29), and it is clear that Absalom gave the order to kill and encouraged them. An initial report that all the king's sons had been killed had to be corrected by Jonadab, who asserted that only Amnon had died and informed David of the reason for Absalom's action (verse 32); the king's sons then indeed returned along the 'Horonaim road' (the Septuagint Greek version reads 'the road behind him'). During the period of court mourning for Amnon (verses 36–37), Absalom took refuge with Talmai, king of Geshur, his grandfather on his mother's side, and stayed there in exile for three years (verses 37–38). At the end of the three years the narrative records David's 'change of heart' (following the LXX and 4QSama), attributed to his affection for all his sons and perhaps also to the realization that Absalom was now second in line for the succession, thus preparing the way for Absalom's return, which is reported in chapter 14. Absalom's temporary exclusion from court was followed by a brief reconciliation with David, but Absalom soon mounted a rebellion (chapters 15–19) which ultimately caused his death, a chain of events attributed to the clash of personalities, shown in this chapter and chapter 14, between the vindictive (14:33) and determined (14:28–32) Absalom and the compliant (13:7), indecisive (14:1), and lenient (13:21) David.
The structure of this section centers on the scene of Jonadab informing David that Absalom murdered Amnon for the rape of Tamar:
A. Absalom and David (13:23–27)
B. Absalom acts (13:28–29a)
C. Flight of the king's sons and a first report to David (13:29b–31)
D. Jonadab's report: only Amnon among the king's sons was dead (13:32a)
E. Jonadab informs David of Absalom's motives: because Amnon raped Tamar (13:32b)
D'. Jonadab's report: only Amnon among the king's sons was dead (13:33)
C'. Flight of Absalom and a second report to David (13:34–36)
B'. Absalom acts (13:37–38)
A'. Absalom and David (13:39)
Tamar tried to prevent Amnon from raping her by warning that the action would lead to him being considered a "nabal", a Hebrew word for "scoundrel" (2 Samuel 13:13). This epithet connects the story of Amnon's murder to the death of Nabal, the first husband of Abigail (1 Samuel 25), as follows:
"But Absalom fled, and went to Talmai, the son of Ammihud, king of Geshur. And David mourned for his son every day."
Notes.
<templatestyles src="Reflist/styles.css" />
References.
<templatestyles src="Reflist/styles.css" />
Sources.
Commentaries on Samuel.
<templatestyles src="Refbegin/styles.css" />
General.
<templatestyles src="Refbegin/styles.css" />
|
[
{
"math_id": 0,
"text": " \\mathfrak{G}"
}
] |
https://en.wikipedia.org/wiki?curid=63833866
|
6383817
|
Ancestral reconstruction
|
Extrapolation method to detect common ancestors
Ancestral reconstruction (also known as Character Mapping or Character Optimization) is the extrapolation back in time from measured characteristics of individuals, populations, or species to their common ancestors. It is an important application of phylogenetics, the reconstruction and study of the evolutionary relationships among individuals, populations, or species and their ancestors. In the context of evolutionary biology, ancestral reconstruction can be used to recover different kinds of ancestral character states of organisms that lived millions of years ago. These states include the genetic sequence (ancestral sequence reconstruction), the amino acid sequence of a protein, the composition of a genome (e.g., gene order), a measurable characteristic of an organism (phenotype), and the geographic range of an ancestral population or species (ancestral range reconstruction). This is desirable because it allows us to examine parts of phylogenetic trees corresponding to the distant past, clarifying the evolutionary history of the species in the tree. Since modern genetic sequences are essentially a variation of ancient ones, access to ancient sequences may identify other variations and organisms which could have arisen from those sequences. In addition to genetic sequences, one might attempt to track the changing of one character trait to another, such as fins turning to legs.
Non-biological applications include the reconstruction of the vocabulary or phonemes of ancient languages, and cultural characteristics of ancient societies such as oral traditions or marriage practices.
Ancestral reconstruction relies on a sufficiently realistic statistical model of evolution to accurately recover ancestral states. These models use the genetic information already obtained through methods such as phylogenetics to determine the route that evolution has taken and when evolutionary events occurred. No matter how well the model approximates the actual evolutionary history, however, one's ability to accurately reconstruct an ancestor deteriorates with increasing evolutionary time between that ancestor and its observed descendants. Additionally, more realistic models of evolution are inevitably more complex and difficult to calculate. Progress in the field of ancestral reconstruction has relied heavily on the exponential growth of computing power and the concomitant development of efficient computational algorithms (e.g., a dynamic programming algorithm for the joint maximum likelihood reconstruction of ancestral sequences). Methods of ancestral reconstruction are often applied to a given phylogenetic tree that has already been inferred from the same data. While convenient, this approach has the disadvantage that its results are contingent on the accuracy of a single phylogenetic tree. In contrast, some researchers advocate a more computationally intensive Bayesian approach that accounts for uncertainty in tree reconstruction by evaluating ancestral reconstructions over many trees.
History.
The concept of ancestral reconstruction is often credited to Emile Zuckerkandl and Linus Pauling. Motivated by the development of techniques for determining the primary (amino acid) sequence of proteins by Frederick Sanger in 1955, Zuckerkandl and Pauling postulated that such sequences could be used to infer not only the phylogeny relating the observed protein sequences, but also the ancestral protein sequence at the earliest point (root) of this tree. However, the idea of reconstructing ancestors from measurable biological characteristics had already been developing in the field of cladistics, one of the precursors of modern phylogenetics. Cladistic methods, which appeared as early as 1901, infer the evolutionary relationships of species on the basis of the distribution of shared characteristics, of which some are inferred to be descended from common ancestors. Furthermore, Theodosius Dobzhansky and Alfred Sturtevant articulated the principles of ancestral reconstruction in a phylogenetic context in 1938, when inferring the evolutionary history of chromosomal inversions in "Drosophila pseudoobscura".
Thus, ancestral reconstruction has its roots in several disciplines. Today, computational methods for ancestral reconstruction continue to be extended and applied in a diversity of settings, so that ancestral states are being inferred not only for biological characteristics and the molecular sequences, but also for the structure or catalytic properties of ancient versus modern proteins, the geographic location of populations and species (phylogeography) and the higher-order structure of genomes.
Methods and algorithms.
Any attempt at ancestral reconstruction begins with a phylogeny. In general, a phylogeny is a tree-based hypothesis about the order in which populations (referred to as taxa) are related by descent from common ancestors. Observed taxa are represented by the "tips" or "terminal nodes" of the tree that are progressively connected by branches to their common ancestors, which are represented by the branching points of the tree that are usually referred to as the "ancestral" or "internal nodes". Eventually, all lineages converge to the most recent common ancestor of the entire sample of taxa. In the context of ancestral reconstruction, a phylogeny is often treated as though it were a known quantity (with Bayesian approaches being an important exception). Because there can be an enormous number of phylogenies that are nearly equally effective at explaining the data, reducing the subset of phylogenies supported by the data to a single representative, or point estimate, can be a convenient and sometimes necessary simplifying assumption.
Ancestral reconstruction can be thought of as the direct result of applying a hypothetical model of evolution to a given phylogeny. When the model contains one or more free parameters, the overall objective is to estimate these parameters on the basis of measured characteristics among the observed taxa (sequences) that descended from common ancestors. Parsimony is an important exception to this paradigm: though it has been shown that there are circumstances under which it is the maximum likelihood estimator, at its core, it is simply based on the heuristic that changes in character state are rare, without attempting to quantify that rarity.
There are three different classes of method for ancestral reconstruction. In chronological order of discovery, these are maximum parsimony, maximum likelihood, and Bayesian inference. Maximum parsimony considers all evolutionary events equally likely; maximum likelihood accounts for the differing likelihood of certain classes of event; and Bayesian inference relates the conditional probability of an event to the likelihood of the tree, as well as the amount of uncertainty that is associated with that tree. Maximum parsimony and maximum likelihood yield a single most probable outcome, whereas Bayesian inference accounts for uncertainties in the data and yields a sample of possible trees.
Maximum parsimony.
Parsimony, known colloquially as "Occam's razor", refers to the principle of selecting the simplest of competing hypotheses. In the context of ancestral reconstruction, parsimony endeavours to find the distribution of ancestral states within a given tree which minimizes the total number of character state changes that would be necessary to explain the states observed at the tips of the tree. This method of maximum parsimony is one of the earliest formalized algorithms for reconstructing ancestral states, as well as one of the simplest.
Maximum parsimony can be implemented by one of several algorithms. One of the earliest examples is Fitch's method, which assigns ancestral character states by parsimony via two traversals of a rooted binary tree. The first stage is a post-order traversal that proceeds from the tips toward the root of a tree by visiting descendant (child) nodes before their parents. Initially, we are determining the set of possible character states "Si" for the "i"-th ancestor based on the observed character states of its descendants. Each assignment is the set intersection of the character states of the ancestor's descendants; if the intersection is the empty set, then it is the set union. In the latter case, it is implied that a character state change has occurred between the ancestor and one of its two immediate descendants. Each such event counts towards the algorithm's cost function, which may be used to discriminate among alternative trees on the basis of maximum parsimony. Next, a pre-order traversal of the tree is performed, proceeding from the root towards the tips. Character states are then assigned to each descendant based on which character states it shares with its parent. Since the root has no parent node, one may be required to select a character state arbitrarily, specifically when more than one possible state has been reconstructed at the root.
For example, consider a phylogeny recovered for a genus of plants containing 6 species A - F, where each plant is pollinated by either a "bee", "hummingbird" or "wind". One obvious question is what the pollinators at deeper nodes were in the phylogeny of this genus of plants. Under maximum parsimony, an ancestral state reconstruction for this clade reveals that "hummingbird" is the most parsimonious ancestral state for the lower clade (plants D, E, F), that the ancestral states for the nodes in the top clade (plants A, B, C) are equivocal, and that "hummingbird" and "bee" pollinators are equally plausible for the pollination state at the root of the phylogeny. Suppose we have strong evidence from the fossil record that the root state is "hummingbird". Resolution of the root to "hummingbird" would yield the pattern of ancestral state reconstruction depicted by the symbols at the nodes with the state requiring the fewest changes circled.
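A minimal sketch of Fitch's two-pass method is given below. The tree encoding, the tie-breaking rule at the root, and the particular topology and pollinator states used in the toy example are assumptions made for illustration; the text above does not specify the actual topology of the six-species phylogeny.

```python
# A minimal sketch of Fitch's two-pass parsimony method on a rooted binary tree.
# The nested-tuple tree representation and the alphabetical tie-breaking rule are
# illustrative assumptions, not part of any particular software package.

def fitch_bottom_up(tree, tip_states, sets, cost=0):
    """Post-order pass: compute the candidate state set for every node."""
    if isinstance(tree, str):                      # a tip, identified by its name
        sets[tree] = {tip_states[tree]}
        return cost
    name, left, right = tree                       # an internal node
    cost = fitch_bottom_up(left, tip_states, sets, cost)
    cost = fitch_bottom_up(right, tip_states, sets, cost)
    left_name = left if isinstance(left, str) else left[0]
    right_name = right if isinstance(right, str) else right[0]
    common = sets[left_name] & sets[right_name]
    if common:                                     # non-empty intersection: no change implied here
        sets[name] = common
    else:                                          # empty intersection: take the union, count one change
        sets[name] = sets[left_name] | sets[right_name]
        cost += 1
    return cost

def fitch_top_down(tree, sets, assignment, parent_state=None):
    """Pre-order pass: pick a concrete state for every node."""
    name = tree if isinstance(tree, str) else tree[0]
    if parent_state is not None and parent_state in sets[name]:
        assignment[name] = parent_state            # keep the parent's state when possible
    else:
        assignment[name] = sorted(sets[name])[0]   # arbitrary (here: alphabetical) choice
    if not isinstance(tree, str):
        _, left, right = tree
        fitch_top_down(left, sets, assignment, assignment[name])
        fitch_top_down(right, sets, assignment, assignment[name])

# Toy usage with a hypothetical topology and pollinator states.
tree = ("root", ("n1", "A", "B"), ("n2", "C", ("n3", "D", "E")))
tips = {"A": "bee", "B": "hummingbird", "C": "bee", "D": "hummingbird", "E": "wind"}
sets, assignment = {}, {}
changes = fitch_bottom_up(tree, tips, sets)
fitch_top_down(tree, sets, assignment)
print("minimum number of changes:", changes)
print("reconstructed states:", assignment)
```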
Parsimony methods are intuitively appealing and highly efficient, such that they are still used in some cases to seed maximum likelihood optimization algorithms with an initial phylogeny. However, the underlying assumption that evolution attained a certain end result as fast as possible is inaccurate. Natural selection and evolution do not work towards a goal; they simply select for or against randomly occurring genetic changes. Parsimony methods impose six general assumptions: that the phylogenetic tree being used is correct, that all of the relevant data have been collected, that no mistakes were made in coding the data, that all branches of the phylogenetic tree are equally likely to change, that the rate of evolution is slow, and that the chance of losing or gaining a characteristic is the same. In reality, these assumptions are often violated, leading to several issues:
Maximum likelihood.
Maximum likelihood (ML) methods of ancestral state reconstruction treat the character states at internal nodes of the tree as parameters, and attempt to find the parameter values that maximize the probability of the data (the observed character states) given the hypothesis (a model of evolution and a phylogeny relating the observed sequences or taxa). In other words, this method assumes that the ancestral states are those which are statistically most likely, given the observed phenotypes. Some of the earliest ML approaches to ancestral reconstruction were developed in the context of genetic sequence evolution; similar models were also developed for the analogous case of discrete character evolution.
The use of a model of evolution accounts for the fact that not all events are equally likely to happen. For example, a transition, which is a point mutation from one purine to another, or from one pyrimidine to another, is much more likely to happen than a transversion, in which a purine is exchanged for a pyrimidine or vice versa. These differences are not captured by maximum parsimony. However, just because some events are more likely than others does not mean that they always happen. We know that throughout evolutionary history there have been times when there was a large gap between what was most likely to happen and what actually occurred. When this is the case, maximum parsimony may actually be more accurate, because it is more willing to make large, unlikely leaps than maximum likelihood is. Maximum likelihood has been shown to be quite reliable in reconstructing character states, but it does not do as well at providing accurate estimates of protein stability. Maximum likelihood always overestimates the stability of proteins, which makes sense, since it assumes that the proteins that were made and used were the most stable and optimal. The merits of maximum likelihood have been subject to debate, with some concluding that it represents a good medium between accuracy and speed. However, other studies have complained that maximum likelihood takes too much time and computational power to be useful in some scenarios.
These approaches employ the same probabilistic framework as used to infer the phylogenetic tree. In brief, the evolution of a genetic sequence is modelled by a time-reversible continuous time Markov process. In the simplest of these, all characters undergo independent state transitions (such as nucleotide substitutions) at a constant rate over time. This basic model is frequently extended to allow different rates on each branch of the tree. In reality, mutation rates may also vary over time (due, for example, to environmental changes); this can be modelled by allowing the rate parameters to evolve along the tree, at the expense of having an increased number of parameters. A model defines transition probabilities from states "i" to "j" along a branch of length "t" (in units of evolutionary time). The likelihood of a phylogeny is computed from a nested sum of transition probabilities that corresponds to the hierarchical structure of the proposed tree. At each node, the likelihood of its descendants is summed over all possible ancestral character states at that node:
formula_0
where we are computing the likelihood of the subtree rooted at node "x" with direct descendants "y" and "z", formula_1 denotes the character state of the "i"-th node, formula_2 is the branch length (evolutionary time) between nodes "i" and "j", and formula_3 is the set of all possible character states (for example, the nucleotides A, C, G, and T). Thus, the objective of ancestral reconstruction is to find the assignment to formula_4 for all "x" internal nodes that maximizes the likelihood of the observed data for a given tree.
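The following sketch shows how the partial likelihoods in this recursion can be computed by a post-order traversal (Felsenstein's pruning approach), here using a Jukes-Cantor substitution model and a made-up three-tip tree; the nested-tuple tree encoding and all numerical values are assumptions for illustration only.

```python
# Minimal sketch of the pruning computation for the likelihood recursion above,
# with a Jukes-Cantor substitution model (assumed) and a hypothetical tree.

from math import exp

STATES = "ACGT"

def jc_transition(i, j, t):
    """Jukes-Cantor probability of state i changing to state j over branch length t."""
    same = 0.25 + 0.75 * exp(-4.0 * t / 3.0)
    diff = 0.25 - 0.25 * exp(-4.0 * t / 3.0)
    return same if i == j else diff

def partial_likelihoods(tree, tip_states):
    """Return L_x(s), the likelihood of the data below this node, for every state s."""
    if isinstance(tree, str):                       # tip: likelihood 1 for the observed state
        return {s: 1.0 if s == tip_states[tree] else 0.0 for s in STATES}
    (left, t_left), (right, t_right) = tree         # internal node with two branches
    L_left = partial_likelihoods(left, tip_states)
    L_right = partial_likelihoods(right, tip_states)
    L = {}
    for s in STATES:                                # sum over descendant states, as in the formula
        down_left = sum(jc_transition(s, u, t_left) * L_left[u] for u in STATES)
        down_right = sum(jc_transition(s, u, t_right) * L_right[u] for u in STATES)
        L[s] = down_left * down_right
    return L

# Toy usage: tree ((A:0.1, B:0.2):0.05, C:0.3) with observed nucleotides at the tips.
tree = (((("A", 0.1), ("B", 0.2)), 0.05), ("C", 0.3))
tips = {"A": "A", "B": "A", "C": "G"}
root_L = partial_likelihoods(tree, tips)
site_likelihood = sum(0.25 * root_L[s] for s in STATES)   # uniform root prior (assumed)
print("partial likelihoods at root:", root_L)
print("site likelihood:", site_likelihood)
```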
Marginal and joint likelihood.
Rather than compute the overall likelihood for alternative trees, the problem for ancestral reconstruction is to find the combination of character states at each ancestral node with the highest marginal maximum likelihood. Generally speaking, there are two approaches to this problem. First, one can assign the most likely character state to each ancestor independently of the reconstruction of all other ancestral states. This approach is referred to as "marginal reconstruction". It is akin to summing over all combinations of ancestral states at all of the other nodes of the tree (including the root node), other than those for which data is available. Marginal reconstruction is finding the state at the current node that maximizes the likelihood integrating over all other states at all nodes, in proportion to their probability. Second, one may instead attempt to find the joint combination of ancestral character states throughout the tree which jointly maximizes the likelihood of the entire dataset. Thus, this approach is referred to as joint reconstruction. Not surprisingly, joint reconstruction is more computationally complex than marginal reconstruction. Nevertheless, efficient algorithms for joint reconstruction have been developed with a time complexity that is generally linear with the number of observed taxa or sequences.
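As a small illustration of marginal reconstruction at a single node, the sketch below normalizes the product of a prior and a conditional likelihood over the candidate states; the likelihood values are hypothetical placeholders standing in for numbers that would come from a pruning-style computation over the whole tree.

```python
# Minimal sketch of marginal reconstruction at one node: the posterior probability
# of each candidate ancestral state is proportional to its prior probability times
# the likelihood of the observed data conditional on that state. The likelihood
# values below are invented placeholders, not results from a real data set.

conditional_likelihood = {"A": 3.2e-4, "C": 1.1e-5, "G": 4.0e-6, "T": 9.5e-6}  # assumed
prior = {s: 0.25 for s in conditional_likelihood}                              # uniform prior

unnormalized = {s: prior[s] * conditional_likelihood[s] for s in conditional_likelihood}
norm = sum(unnormalized.values())
posterior = {s: p / norm for s, p in unnormalized.items()}

best_state = max(posterior, key=posterior.get)
print("marginal posterior:", posterior)
print("marginal reconstruction:", best_state)
```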
ML-based methods of ancestral reconstruction tend to provide greater accuracy than MP methods in the presence of variation in rates of evolution among characters (or across sites in a genome). However, these methods are not yet able to accommodate variation in rates of evolution over time, otherwise known as heterotachy. If the rate of evolution for a specific character accelerates on a branch of the phylogeny, then the amount of evolution that has occurred on that branch will be underestimated for a given length of the branch and assuming a constant rate of evolution for that character. In addition to that, it is difficult to distinguish heterotachy from variation among characters in rates of evolution.
Since ML (unlike maximum parsimony) requires the investigator to specify a model of evolution, its accuracy may be affected by the use of a grossly incorrect model (model misspecification). Furthermore, ML can only provide a single reconstruction of character states (what is often referred to as a "point estimate") — when the likelihood surface is highly non-convex, comprising multiple peaks (local optima), then a single point estimate cannot provide an adequate representation, and a Bayesian approach may be more suitable.
Bayesian inference.
Bayesian inference uses the likelihood of observed data to update the investigator's belief, or prior distribution, to yield the posterior distribution. In the context of ancestral reconstruction, the objective is to infer the posterior probabilities of ancestral character states at each internal node of a given tree. Moreover, one can integrate these probabilities over the posterior distributions over the parameters of the evolutionary model and the space of all possible trees. This can be expressed as an application of Bayes' theorem:
formula_5
where "S" represents the ancestral states, "D" corresponds to the observed data, and formula_6 represents both the evolutionary model and the phylogenetic tree. formula_7 is the likelihood of the observed data which can be computed by Felsenstein's pruning algorithm as given above. formula_8 is the prior probability of the ancestral states for a given model and tree. Finally, formula_9 is the probability of the data for a given model and tree, integrated over all possible ancestral states.
Bayesian inference is the method that many have argued is the most accurate. In general, Bayesian statistical methods allow investigators to combine pre-existing information with new hypotheses. In the case of evolution, it combines the likelihood of the data observed with the likelihood that the events happened in the order they did, while recognizing the potential for error and uncertainty. Overall, it is the most accurate method for reconstructing ancestral genetic sequences, as well as protein stability. Unlike the other two methods, Bayesian inference yields a distribution of possible trees, allowing for more accurate and easily interpretable estimates of the variance of possible outcomes.
We have given two formulations above to emphasize the two different applications of Bayes' theorem, which we discuss in the following section.
Empirical and hierarchical Bayes.
One of the first implementations of a Bayesian approach to ancestral sequence reconstruction was developed by Yang and colleagues, where the maximum likelihood estimates of the evolutionary model and tree, respectively, were used to define the prior distributions. Thus, their approach is an example of an empirical Bayes method to compute the posterior probabilities of ancestral character states; this method was first implemented in the software package PAML. In terms of the above formulation of Bayes' rule, the empirical Bayes method fixes formula_6 to the empirical estimates of the model and tree obtained from the data, effectively dropping formula_6 from the posterior, likelihood, and prior terms of the formula. Moreover, Yang and colleagues used the empirical distribution of site patterns (i.e., assignments of nucleotides to tips of the tree) in their alignment of observed nucleotide sequences in the denominator, in place of exhaustively computing formula_10 over all possible values of "S" given formula_6. Computationally, the empirical Bayes method is akin to the maximum likelihood reconstruction of ancestral states, except that, rather than searching for the ML assignment of states based on their respective probability distributions at each internal node, the probability distributions themselves are reported directly.
Empirical Bayes methods for ancestral reconstruction require the investigator to assume that the evolutionary model parameters and tree are known without error. When the size or complexity of the data makes this an unrealistic assumption, it may be more prudent to adopt the fully hierarchical Bayesian approach and infer the joint posterior distribution over the ancestral character states, model, and tree. Huelsenbeck and Bollback first proposed a hierarchical Bayes method to ancestral reconstruction by using Markov chain Monte Carlo (MCMC) methods to sample ancestral sequences from this joint posterior distribution. A similar approach was also used to reconstruct the evolution of symbiosis with algae in fungal species (lichenization). For example, the Metropolis-Hastings algorithm for MCMC explores the joint posterior distribution by accepting or rejecting parameter assignments on the basis of the ratio of posterior probabilities.
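The sketch below illustrates the Metropolis-Hastings accept/reject step in its simplest form, applied to a single hypothetical model parameter rather than to the full joint space of trees, model parameters, and ancestral states sampled by hierarchical Bayes implementations; the target density is a toy stand-in, not a real phylogenetic posterior.

```python
# Minimal, generic sketch of a Metropolis-Hastings random-walk sampler for one
# hypothetical parameter (a substitution rate). The log-posterior below is a toy
# stand-in chosen for illustration; it is not a real phylogenetic likelihood.

import math
import random

random.seed(1)

def log_posterior(rate):
    """Toy log posterior: a likelihood peaked near rate = 0.5 plus an Exponential(1) prior."""
    if rate <= 0.0:
        return float("-inf")
    log_likelihood = -((rate - 0.5) ** 2) / (2 * 0.1 ** 2)   # stand-in for a pruning-based likelihood
    log_prior = -rate                                        # Exponential(1) prior, up to a constant
    return log_likelihood + log_prior

current = 1.0
samples = []
for _ in range(5000):
    proposal = current + random.gauss(0.0, 0.1)              # symmetric random-walk proposal
    log_ratio = log_posterior(proposal) - log_posterior(current)
    if math.log(random.random()) < log_ratio:                # accept with probability min(1, ratio)
        current = proposal
    samples.append(current)

kept = samples[1000:]                                        # discard burn-in
print("posterior mean rate (toy example):", sum(kept) / len(kept))
```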
Put simply, the empirical Bayes approach calculates the probabilities of various ancestral states for a specific tree and model of evolution. By expressing the reconstruction of ancestral states as a set of probabilities, one can directly quantify the uncertainty for assigning any particular state to an ancestor. On the other hand, the hierarchical Bayes approach averages these probabilities over all possible trees and models of evolution, in proportion to how likely these trees and models are, given the data that has been observed.
Whether the hierarchical Bayes method confers a substantial advantage in practice remains controversial, however. Moreover, this fully Bayesian approach is limited to analyzing relatively small numbers of sequences or taxa because the space of all possible trees rapidly becomes too vast, making it computationally infeasible for chain samples to converge in a reasonable amount of time.
Calibration.
Ancestral reconstruction can be informed by the observed states in historical samples of known age, such as fossils or archival specimens. Since the accuracy of ancestral reconstruction generally decays with increasing time, the use of such specimens provides data that are closer to the ancestors being reconstructed and will most likely improve the analysis, especially when rates of character change vary through time. This concept has been validated by an experimental evolutionary study in which replicate populations of bacteriophage T7 were propagated to generate an artificial phylogeny. In revisiting these experimental data, Oakley and Cunningham found that maximum parsimony methods were unable to accurately reconstruct the known ancestral state of a continuous character (plaque size); these results were verified by computer simulation. This failure of ancestral reconstruction was attributed to a directional bias in the evolution of plaque size (from large to small plaque diameters) that required the inclusion of "fossilized" samples to address.
Studies of both mammalian carnivores and fishes have demonstrated that without incorporating fossil data, the reconstructed estimates of ancestral body sizes are unrealistically large. Moreover, Graham Slater and colleagues showed using caniform carnivorans that incorporating fossil data into prior distributions improved both the Bayesian inference of ancestral states and evolutionary model selection, relative to analyses using only contemporaneous data.
Models.
Many models have been developed to estimate ancestral states of discrete and continuous characters from extant descendants. Such models assume that the evolution of a trait through time may be modelled as a stochastic process. For discrete-valued traits (such as "pollinator type"), this process is typically taken to be a Markov chain; for continuous-valued traits (such as "brain size"), the process is frequently taken to be a Brownian motion or an Ornstein-Uhlenbeck process. Using this model as the basis for statistical inference, one can now use maximum likelihood methods or Bayesian inference to estimate the ancestral states.
Discrete-state models.
Suppose the trait in question may fall into one of formula_11 states, labelled formula_12. The typical means of modelling evolution of this trait is via a continuous-time Markov chain, which may be briefly described as follows. Each state has associated to it rates of transition to all of the other states. The trait is modelled as stepping between the formula_11 states; when it reaches a given state, it starts an exponential "clock" for each of the other states that it can step to. It then "races" the clocks against each other, and it takes a step towards the state whose clock is the first to ring. In such a model, the parameters are the transition rates formula_13, which can be estimated using, for example, maximum likelihood methods, where one maximizes over the set of all possible configurations of states of the ancestral nodes.
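A minimal sketch of this machinery is shown below: given an assumed rate matrix for three character states, the transition probabilities over a branch of length "t" are obtained by matrix exponentiation; the states and rates are invented for illustration.

```python
# Minimal sketch (hypothetical rates): transition probabilities of a k-state
# continuous-time Markov chain over a branch of length t are obtained by
# exponentiating the rate matrix, P(t) = expm(Q * t).

import numpy as np
from scipy.linalg import expm

states = ["bee", "hummingbird", "wind"]

# Off-diagonal entries are assumed transition rates q_ij; diagonals are set so
# that each row sums to zero, as required of a rate matrix.
Q = np.array([
    [0.0, 0.3, 0.1],
    [0.2, 0.0, 0.2],
    [0.1, 0.4, 0.0],
])
np.fill_diagonal(Q, -Q.sum(axis=1))

t = 2.0                         # branch length in units of evolutionary time
P = expm(Q * t)                 # P[i, j] = probability of state j after time t, starting from state i

for i, s in enumerate(states):
    print(s, "->", dict(zip(states, np.round(P[i], 3))))
```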
In order to recover the state of a given ancestral node in the phylogeny (call this node formula_14) by maximum likelihood, the procedure is: find the maximum likelihood estimate formula_15 of formula_16; then compute the likelihood of each possible state for formula_14 conditioning on formula_17; finally, choose the ancestral state which maximizes this. One may also use this substitution model as the basis for a Bayesian inference procedure, which would consider the posterior belief in the state of an ancestral node given some user-chosen prior.
Because such models may have as many as formula_18 parameters, overfitting may be an issue. Some common choices that reduce the parameter space are:
Example: Binary state speciation and extinction model.
The binary state speciation and extinction model (BiSSE) is a discrete-space model that does not directly follow the framework of those mentioned above. It allows estimation of ancestral binary character states jointly with diversification rates associated with different character states; it may also be straightforwardly extended to a more general multiple-discrete-state model. In its most basic form, this model involves six parameters: two speciation rates (one each for lineages in states 0 and 1); similarly, two extinction rates; and two rates of character change. This model allows for hypothesis testing on the rates of speciation/extinction/character change, at the cost of increasing the number of parameters.
Continuous-state models.
In the case where the trait instead takes non-discrete values, one must instead turn to a model where the trait evolves as some continuous process. Inference of ancestral states by maximum likelihood (or by Bayesian methods) would proceed as above, but with the likelihoods of transitions in state between adjacent nodes given by some other continuous probability distribution.
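For instance, under a Brownian-motion model the maximum likelihood estimate of the root state reduces to a generalized-least-squares mean of the tip values, weighted by the inverse of the phylogenetic variance-covariance matrix. The sketch below computes this for a made-up three-species tree and trait values; all numbers are assumptions for illustration.

```python
# Minimal sketch (hypothetical data): under Brownian motion, the ML estimate of the
# root state is the generalized-least-squares mean of the tip values, weighted by
# the inverse of the phylogenetic variance-covariance matrix C, where C[i, j] is
# the shared branch length from the root to the common ancestor of tips i and j.

import numpy as np

tip_values = np.array([10.0, 12.0, 30.0])   # hypothetical trait values for 3 species

# Hypothetical tree ((sp1:1, sp2:1):2, sp3:3); every tip is at depth 3 from the root.
C = np.array([
    [3.0, 2.0, 0.0],
    [2.0, 3.0, 0.0],
    [0.0, 0.0, 3.0],
])

ones = np.ones(len(tip_values))
C_inv = np.linalg.inv(C)
root_estimate = (ones @ C_inv @ tip_values) / (ones @ C_inv @ ones)
print("ML estimate of the root state under Brownian motion:", round(root_estimate, 3))
```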
Applications.
Character evolution.
Ancestral reconstruction is widely used to infer the ecological, phenotypic, or biogeographic traits associated with ancestral nodes in a phylogenetic tree. All methods of ancestral trait reconstruction have pitfalls, as they use mathematical models to predict how traits have changed, with large amounts of missing data. This missing data includes the states of extinct species, the relative rates of evolutionary change, knowledge of initial character states, and the accuracy of phylogenetic trees. In all cases where ancestral trait reconstruction is used, findings should be justified with an examination of the biological data that supports model-based conclusions (Griffith O.W. "et al.").
Ancestral reconstruction allows for the study of evolutionary pathways, adaptive selection, developmental gene expression, and functional divergence of the evolutionary past. For a review of biological and computational techniques of ancestral reconstruction see Chang "et al.". For criticism of ancestral reconstruction computation methods see Williams P.D. "et al.".
Behavior and life history evolution.
In horned lizards (genus "Phrynosoma"), viviparity (live birth) has evolved multiple times, based on ancestral reconstruction methods.
Diet reconstruction in Galapagos finches.
Both phylogenetic and character data are available for the radiation of finches inhabiting the Galapagos Islands. These data allow testing of hypotheses concerning the timing and ordering of character state changes through time via ancestral state reconstruction. During the dry season, the diets of the 13 species of Galapagos finches may be assorted into three broad diet categories: those that consume grain-like foods are considered "granivores", those that ingest arthropods are termed "insectivores", and those that consume vegetation are classified as "folivores". Dietary ancestral state reconstruction using maximum parsimony recovers two major shifts from an insectivorous state: one to granivory, and one to folivory. Maximum-likelihood ancestral state reconstruction recovers broadly similar results, with one significant difference: the common ancestor of the tree finch ("Camarhynchus") and ground finch ("Geospiza") clades is most likely granivorous rather than insectivorous (as judged by parsimony). In this case, the difference between the ancestral states returned by maximum parsimony and maximum likelihood likely occurs because ML estimates take account of the branch lengths of the phylogenetic tree.
Morphological and physiological character evolution.
Phrynosomatid lizards show remarkable morphological diversity, including in the relative muscle fiber type composition in their hindlimb muscles. Ancestor reconstruction based on squared-change parsimony (equivalent to maximum likelihood under Brownian motion character evolution) indicates that horned lizards, one of the three main subclades of the lineage, have undergone a major evolutionary increase in the proportion of fast-oxidative glycolytic fibers in their iliofibularis muscles.
Mammalian body mass.
In an analysis of the body mass of 1,679 placental mammal species comparing stable models of continuous character evolution to Brownian motion models, Elliot and Mooers showed that the evolutionary process describing mammalian body mass evolution is best characterized by a stable model of continuous character evolution, which accommodates rare changes of large magnitude. Under a stable model, ancestral mammals retained a low body mass through early diversification, with large increases in body mass coincident with the origin of several orders of large-bodied species (e.g. ungulates). By contrast, simulation under a Brownian motion model recovered a less realistic, order-of-magnitude larger body mass among ancestral mammals, requiring significant reductions in body size prior to the evolution of orders exhibiting small body size (e.g. Rodentia). Thus stable models recover a more realistic picture of mammalian body mass evolution by permitting large transformations to occur on a small subset of branches.
Correlated character evolution.
Phylogenetic comparative methods (inferences drawn through comparison of related taxa) are often used to identify biological characteristics that do not evolve independently, which can reveal an underlying dependence. For example, the evolution of the shape of a finch's beak may be associated with its foraging behaviour. However, it is not advisable to search for these associations by the direct comparison of measurements or genetic sequences, because such observations are not independent, owing to their descent from common ancestors. For discrete characters, this problem was first addressed in the framework of maximum parsimony by evaluating whether two characters tended to undergo changes on the same branches of the tree. Felsenstein identified this problem for continuous character evolution and proposed a solution similar to ancestral reconstruction, in which the phylogenetic structure of the data is accommodated statistically by directing the analysis through the computation of "independent contrasts" between nodes of the tree related by non-overlapping branches.
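A minimal sketch of the independent-contrasts computation for a single continuous trait is given below; the tree, branch lengths, and trait values are invented for illustration, and a real analysis would compute contrasts for two traits and regress one set on the other through the origin.

```python
# Minimal sketch of Felsenstein's phylogenetically independent contrasts for one
# continuous trait on a small, fully resolved tree. Tree shape, branch lengths,
# and trait values are hypothetical.

import math

def contrasts(tree, trait):
    """Post-order pass returning the root value and the list of standardized contrasts."""
    out = []

    def visit(node):
        if isinstance(node, str):                       # tip: (trait value, no branch correction)
            return trait[node], 0.0
        (left, v_left), (right, v_right) = node         # internal node with two branches
        x_l, extra_l = visit(left)
        x_r, extra_r = visit(right)
        v_l, v_r = v_left + extra_l, v_right + extra_r  # lengthen branches below estimated nodes
        out.append((x_l - x_r) / math.sqrt(v_l + v_r))  # standardized contrast
        node_value = (x_l / v_l + x_r / v_r) / (1.0 / v_l + 1.0 / v_r)
        return node_value, (v_l * v_r) / (v_l + v_r)    # weighted nodal value and branch correction

    root_value, _ = visit(tree)
    return root_value, out

# Hypothetical tree ((A:1, B:1):1, (C:2, D:2):0.5) with trait values at the tips.
tree = (((("A", 1.0), ("B", 1.0)), 1.0), ((("C", 2.0), ("D", 2.0)), 0.5))
trait = {"A": 4.1, "B": 5.3, "C": 9.8, "D": 8.7}
root_value, cs = contrasts(tree, trait)
print("standardized contrasts:", [round(c, 3) for c in cs])
print("root value implied by the algorithm:", round(root_value, 3))
```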
Molecular evolution.
On a molecular level, amino acid residues at different locations of a protein may evolve non-independently because they have a direct physicochemical interaction, or indirectly by their interactions with a common substrate or through long-range interactions in the protein structure. Conversely, the folded structure of a protein could potentially be inferred from the distribution of residue interactions. One of the earliest applications of ancestral reconstruction, to predict the three-dimensional structure of a protein through residue contacts, was published by Shindyalov and colleagues. Phylogenies relating 67 different protein families were generated by a distance-based clustering method (unweighted pair group method with arithmetic mean, UPGMA), and ancestral sequences were reconstructed by parsimony. The authors reported a weak but significant tendency for co-evolving pairs of residues to be co-located in the known three-dimensional structure of the proteins.
The reconstruction of ancient proteins and DNA sequences has only recently become a significant scientific endeavour. The developments of extensive genomic sequence databases in conjunction with advances in biotechnology and phylogenetic inference methods have made ancestral reconstruction cheap, fast, and scientifically practical. This concept has been applied to identify co-evolving residues in protein sequences using more advanced methods for the reconstruction of phylogenies and ancestral sequences. For example, ancestral reconstruction has been used to identify co-evolving residues in proteins encoded by RNA virus genomes, particularly in HIV.
Ancestral protein and DNA reconstruction allows for the recreation of protein and DNA evolution in the laboratory so that it can be studied directly. With respect to proteins, this allows for the investigation of the evolution of present-day molecular structure and function. Additionally, ancestral protein reconstruction can lead to the discoveries of new biochemical functions that have been lost in modern proteins. It also allows insights into the biology and ecology of extinct organisms. Although the majority of ancestral reconstructions have dealt with proteins, it has also been used to test evolutionary mechanisms at the level of bacterial genomes and primate gene sequences.
Vaccine design.
RNA viruses such as the human immunodeficiency virus (HIV) evolve at an extremely rapid rate, orders of magnitude faster than mammals or birds. For these organisms, ancestral reconstruction can be applied on a much shorter time scale; for example, in order to reconstruct the global or regional progenitor of an epidemic that has spanned decades rather than millions of years. A team around Brian Gaschen proposed that such reconstructed strains be used as targets for vaccine design efforts, as opposed to sequences isolated from patients in the present day. Because HIV is extremely diverse, a vaccine designed to work on one patient's viral population might not work for a different patient, because the evolutionary distance between these two viruses may be large. However, their most recent common ancestor is closer to each of the two viruses than they are to each other. Thus, a vaccine designed for a common ancestor could have a better chance of being effective for a larger proportion of circulating strains. Another team took this idea further by developing a center-of-tree reconstruction method to produce a sequence whose total evolutionary distance to contemporary strains is as small as possible. Strictly speaking, this method was not "ancestral" reconstruction, as the center-of-tree (COT) sequence does not necessarily represent a sequence that has ever existed in the evolutionary history of the virus. However, Rolland and colleagues did find that, in the case of HIV, the COT virus was functional when synthesized. Similar experiments with synthetic ancestral sequences obtained by maximum likelihood reconstruction have likewise shown that these ancestors are both functional and immunogenic, lending some credibility to these methods. Furthermore, ancestral reconstruction can potentially be used to infer the genetic sequence of the transmitted HIV variants that have gone on to establish the next infection, with the objective of identifying distinguishing characteristics of these variants (as a non-random selection of the transmitted population of viruses) that may be targeted for vaccine design.
Genome rearrangements.
Rather than inferring the ancestral DNA sequence, one may be interested in the larger-scale molecular structure and content of an ancestral genome. This problem is often approached in a combinatorial framework, by modelling genomes as permutations of genes or homologous regions. Various operations are allowed on these permutations, such as an inversion (a segment of the permutation is reversed in-place), deletion (a segment is removed), transposition (a segment is removed from one part of the permutation and spliced in somewhere else), or gain of genetic content through recombination, duplication or horizontal gene transfer. The "genome rearrangement problem", first posed by Watterson and colleagues, asks: given two genomes (permutations) and a set of allowable operations, what is the shortest sequence of operations that will transform one genome into the other? A generalization of this problem applicable to ancestral reconstruction is the "multiple genome rearrangement problem": given a set of genomes and a set of allowable operations, find (i) a binary tree with the given genomes as its leaves, and (ii) an assignment of genomes to the internal nodes of the tree, such that the total number of operations across the whole tree is minimized. This approach is similar to parsimony, except that the tree is inferred along with the ancestral sequences. Unfortunately, even the single genome rearrangement problem is NP-hard, although it has received much attention in mathematics and computer science (for a review, see Fertin and colleagues).
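A brute-force illustration of the single genome rearrangement problem, restricted to inversions, is sketched below: a breadth-first search over permutations finds the minimum number of in-place reversals needed to transform one gene order into another. This is feasible only for very small permutations, and the example gene orders are hypothetical; practical approaches rely on the combinatorial theory reviewed by Fertin and colleagues.

```python
from collections import deque
from itertools import combinations

# Minimum number of segment reversals (inversions) turning one small gene order
# into another, found by exhaustive breadth-first search. Illustrative only.

def inversion_distance(source, target):
    source, target = tuple(source), tuple(target)
    seen = {source}
    queue = deque([(source, 0)])
    while queue:
        perm, d = queue.popleft()
        if perm == target:
            return d
        for i, j in combinations(range(len(perm) + 1), 2):   # reverse the segment perm[i:j]
            nxt = perm[:i] + perm[i:j][::-1] + perm[j:]
            if nxt not in seen:
                seen.add(nxt)
                queue.append((nxt, d + 1))
    return None

print(inversion_distance((3, 1, 2, 4), (1, 2, 3, 4)))   # 2 reversals for this hypothetical pair
```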
The reconstruction of ancestral genomes is also called karyotype reconstruction. Chromosome painting is currently the main experimental technique. Recently, researchers have developed computational methods to reconstruct the ancestral karyotype by taking advantage of comparative genomics. Furthermore, comparative genomics and ancestral genome reconstruction has been applied to identify ancient horizontal gene transfer events at the last common ancestor of a lineage (e.g. "Candidatus" Accumulibacter phosphatis) to identify the evolutionary basis for trait acquisition.
Spatial applications.
Migration.
Ancestral reconstruction is not limited to biological traits. Spatial location is also a trait, and ancestral reconstruction methods can infer the locations of ancestors of the individuals under consideration. Such techniques were used by Lemey and colleagues to geographically trace the ancestors of 192 Avian influenza A-H5N1 strains sampled from twenty localities in Europe and Asia, and for 101 rabies virus sequences sampled across twelve African countries.
Treating locations as discrete states (countries, cities, etc.) allows for the application of the discrete-state models described above. However, unlike in a model where the state space for the trait is small, there may be many locations, and transitions between certain pairs of states may rarely or never occur; for example, migration between distant locales may never happen directly if air travel between the two places does not exist, so such migrations must pass through intermediate locales first. This means that there could be many parameters in the model which are zero or close to zero. To this end, Lemey and colleagues used a Bayesian procedure to not only estimate the parameters and ancestral states, but also to select which migration parameters are not zero; their work suggests that this procedure does lead to more efficient use of the data. They also explore the use of prior distributions that incorporate geographical structure or hypotheses about migration dynamics, finding that those they considered had little effect on the findings.
Using this analysis, the team around Lemey found that the most likely hub of diffusion of A-H5N1 is Guangdong, with Hong Kong also receiving posterior support. Further, their results support the hypothesis of long-standing presence of African rabies in West Africa.
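A minimal sketch of the discrete-state migration model described above is shown below: locations are states of a continuous-time Markov chain, and setting a rate to zero encodes the absence of direct migration between two locales. The locations, rates, and branch length are hypothetical, and the full Bayesian rate-selection procedure of Lemey and colleagues is not reproduced here.

```python
import numpy as np
from scipy.linalg import expm

# Discrete-location migration as a continuous-time Markov chain. Zero entries in
# the rate matrix Q encode "no direct migration"; transition probabilities along
# a branch of length t are given by the matrix exponential expm(Q * t).
# Locations, rates, and branch length are hypothetical.

locations = ["A", "B", "C"]
Q = np.array([
    [-0.3,  0.3,  0.0],   # direct A -> C migration forbidden (rate 0)
    [ 0.2, -0.5,  0.3],
    [ 0.0,  0.4, -0.4],   # direct C -> A migration forbidden
])
t = 2.0                   # branch length (time)
P = expm(Q * t)           # P[i, j]: probability a lineage in state i ends the branch in state j
print(np.round(P, 3))     # A -> C becomes possible over time, but only via B
```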
Species ranges.
Inferring historical biogeographic patterns often requires reconstructing ancestral ranges of species on phylogenetic trees. For instance, a well-resolved phylogeny of plant species in the genus "Cyrtandra" was used together with information of their geographic ranges to compare four methods of ancestral range reconstruction. The team compared Fitch parsimony (FP; parsimony), stochastic mapping (SM; maximum likelihood), dispersal-vicariance analysis (DIVA; parsimony), and dispersal-extinction-cladogenesis (DEC; maximum likelihood). Results indicated that both parsimony methods performed poorly, which was likely due to the fact that parsimony methods do not consider branch lengths. Both maximum-likelihood methods performed better; however, DEC analyses that additionally allow incorporation of geological priors gave more realistic inferences about range evolution in "Cyrtandra" relative to other methods.
Another maximum likelihood method recovers the phylogeographic history of a gene by reconstructing the ancestral locations of the sampled taxa. This method assumes a spatially explicit random walk model of migration to reconstruct ancestral locations given the geographic coordinates of the individuals represented by the tips of the phylogenetic tree. When applied to a phylogenetic tree of chorus frogs "Pseudacris feriarum", this method recovered recent northward expansion, higher per-generation dispersal distance in the recently colonized region, a non-central ancestral location, and directional migration.
The first consideration of the multiple genome rearrangement problem, long before its formalization in terms of permutations, was presented by Sturtevant and Dobzhansky in 1936. They examined genomes of several strains of fruit fly from different geographic locations, and observed that one configuration, which they called "standard", was the most common throughout all the studied areas. Remarkably, they also noticed that four different strains could be obtained from the standard sequence by a single inversion, and two others could be related by a second inversion. This allowed them to hypothesize a phylogeny for the sequences, and to infer that the standard sequence was probably also the ancestral one.
Linguistic evolution.
Reconstructions of the words and phonemes of ancient proto-languages such as Proto-Indo-European have been performed based on the observed analogues in present-day languages. Typically, these analyses are carried out manually using the "comparative method". First, words from different languages with a common etymology (cognates) are identified in the contemporary languages under study, analogous to the identification of orthologous biological sequences. Second, correspondences between individual sounds in the cognates are identified, a step similar to biological sequence alignment, although performed manually. Finally, likely ancestral sounds are hypothesised by manual inspection and various heuristics (such as the fact that most languages have both nasal and non-nasal vowels).
Software.
There are many software packages available which can perform ancestral state reconstruction. Generally, these software packages have been developed and maintained through the efforts of scientists in related fields and released under free software licenses. The following table is not meant to be a comprehensive itemization of all available packages, but provides a representative sample of the extensive variety of packages that implement methods of ancestral reconstruction with different strengths and features.
Package descriptions.
Molecular evolution.
The majority of these software packages are designed for analyzing genetic sequence data. For example, PAML is a collection of programs for the phylogenetic analysis of DNA and protein sequence alignments by maximum likelihood. Ancestral reconstruction can be performed using the "codeml" program. In addition, LAZARUS is a collection of Python scripts that wrap the ancestral reconstruction functions of PAML for batch processing and greater ease-of-use. Software packages such as MEGA, HyPhy, and Mesquite also perform phylogenetic analysis of sequence data, but are designed to be more modular and customizable. HyPhy implements a joint maximum likelihood method of ancestral sequence reconstruction that can be readily adapted to reconstructing a more generalized range of discrete ancestral character states such as geographic locations by specifying a customized model in its batch language. Mesquite provides ancestral state reconstruction methods for both discrete and continuous characters using both maximum parsimony and maximum likelihood methods. It also provides several visualization tools for interpreting the results of ancestral reconstruction. MEGA is a modular system, too, but places greater emphasis on ease-of-use than customization of analyses. As of version 5, MEGA allows the user to reconstruct ancestral states using maximum parsimony, maximum likelihood, and empirical Bayes methods.
The Bayesian analysis of genetic sequences may confer greater robustness to model misspecification. MrBayes allows inference of ancestral states at ancestral nodes using the full hierarchical Bayesian approach. The PREQUEL program distributed in the PHAST package performs comparative evolutionary genomics using ancestral sequence reconstruction. SIMMAP stochastically maps mutations on phylogenies. BayesTraits analyses discrete or continuous characters in a Bayesian framework to evaluate models of evolution, reconstruct ancestral states, and detect correlated evolution between pairs of traits.
Other character types.
Other software packages are more oriented towards the analysis of qualitative and quantitative traits (phenotypes). For example, the "ape" package in the statistical computing environment R also provides methods for ancestral state reconstruction for both discrete and continuous characters through the "ace" function, including maximum likelihood. Phyrex implements a maximum parsimony-based algorithm to reconstruct ancestral gene expression profiles, in addition to a maximum likelihood method for reconstructing ancestral genetic sequences (by wrapping around the baseml function in PAML).
Several software packages also reconstruct phylogeography. BEAST (Bayesian Evolutionary Analysis by Sampling Trees) and BEAST 2 provide tools for reconstructing ancestral geographic locations from observed sequences annotated with location data using Bayesian MCMC sampling methods. Diversitree is an R package providing methods for ancestral state reconstruction under Mk2 (a continuous-time Markov model of binary character evolution) and BiSSE (Binary State Speciation and Extinction) models. Lagrange performs analyses of geographic range evolution on phylogenetic trees. Phylomapper is a statistical framework for estimating historical patterns of gene flow and ancestral geographic locations. RASP infers ancestral states using statistical dispersal-vicariance analysis, Lagrange, Bayes-Lagrange, BayArea and BBM methods. VIP infers historical biogeography by examining disjunct geographic distributions.
Genome rearrangements provide valuable information in comparative genomics between species. ANGES compares extant related genomes through ancestral reconstruction of genetic markers. BADGER uses a Bayesian approach to examining the history of gene rearrangement. Count reconstructs the evolution of the size of gene families. EREM analyses the gain and loss of genetic features encoded by binary characters. PARANA performs parsimony based inference of ancestral biological networks that represent gene loss and duplication.
Web applications.
Finally, there are several web-server based applications that allow investigators to use maximum likelihood methods for ancestral reconstruction of different character types without having to install any software. For example, Ancestors is a web-server for ancestral genome reconstruction by the identification and arrangement of syntenic regions. FastML is a web-server for probabilistic reconstruction of ancestral sequences by maximum likelihood that uses a gap character model for reconstructing indel variation. MLGO is a web-server for maximum likelihood gene order analysis.
Future directions.
The development and application of computational algorithms for ancestral reconstruction continues to be an active area of research across disciplines. For example, the reconstruction of sequence insertions and deletions (indels) has lagged behind the more straightforward application of substitution models. Bouchard-Côté and Jordan recently described a new model (the Poisson Indel Process) which represents an important advance on the archetypal Thorne-Kishino-Felsenstein model of indel evolution. In addition, the field is being driven forward by rapid advances in the area of next-generation sequencing technology, where sequences are generated from millions of nucleic acid templates by extensive parallelization of sequencing reactions in a custom apparatus. These advances have made it possible to generate a "deep" snapshot of the genetic composition of a rapidly evolving population, such as RNA viruses or tumour cells, in a relatively short amount of time. At the same time, the massive amount of data and platform-specific sequencing error profiles have created new bioinformatic challenges for processing these data for ancestral sequence reconstruction.
|
[
{
"math_id": 0,
"text": "\nL_x = \\sum_{S_x\\in \\Omega} P(S_x) \\left(\\sum_{S_y\\in \\Omega} P(S_y | S_x, t_{xy}) L_y \\sum_{S_z\\in \\Omega} P(S_z | S_x, t_{xz}) L_z\\right)\n"
},
{
"math_id": 1,
"text": "S_i"
},
{
"math_id": 2,
"text": "t_{ij}"
},
{
"math_id": 3,
"text": "\\Omega"
},
{
"math_id": 4,
"text": "S_x"
},
{
"math_id": 5,
"text": "\n\\begin{align}\nP(S | D, \\theta) &= \\frac{ P(D|S,\\theta) P(S|\\theta) }{ P(D|\\theta) }\\\\\n & \\propto P(D|S,\\theta) P(S|\\theta) P(\\theta)\n\\end{align}\n"
},
{
"math_id": 6,
"text": "\\theta"
},
{
"math_id": 7,
"text": "P(D|S,\\theta)"
},
{
"math_id": 8,
"text": "P(S|\\theta)"
},
{
"math_id": 9,
"text": "P(D|\\theta)"
},
{
"math_id": 10,
"text": "P(D)"
},
{
"math_id": 11,
"text": "k"
},
{
"math_id": 12,
"text": "1,\\ldots, k"
},
{
"math_id": 13,
"text": "\\mathbf{q} = \\{q_{ij}: 1\\leq i, j\\leq k, i\\not= j\\}"
},
{
"math_id": 14,
"text": "\\alpha"
},
{
"math_id": 15,
"text": "\\hat{\\mathbf{q}}"
},
{
"math_id": 16,
"text": "\\mathbf{q}"
},
{
"math_id": 17,
"text": "\\mathbf{q} = \\hat{\\mathbf{q}}"
},
{
"math_id": 18,
"text": "k(k-1)"
},
{
"math_id": 19,
"text": "q"
},
{
"math_id": 20,
"text": "q_\\mbox{inc}"
},
{
"math_id": 21,
"text": "q_\\mbox{dec}"
},
{
"math_id": 22,
"text": "0"
},
{
"math_id": 23,
"text": "U"
},
{
"math_id": 24,
"text": "V"
},
{
"math_id": 25,
"text": "t"
},
{
"math_id": 26,
"text": "x"
},
{
"math_id": 27,
"text": "y"
},
{
"math_id": 28,
"text": "\\sigma^2 t"
},
{
"math_id": 29,
"text": "\\sigma^2"
}
] |
https://en.wikipedia.org/wiki?curid=6383817
|
63840940
|
Hidden linear function problem
|
The hidden linear function problem is a search problem that generalizes the Bernstein–Vazirani problem. In the Bernstein–Vazirani problem, the hidden function is implicitly specified in an oracle, while in the 2D hidden linear function problem (2D HLF), the hidden function is explicitly specified by a matrix and a binary vector. 2D HLF can be solved exactly by a constant-depth quantum circuit restricted to a 2-dimensional grid of qubits using bounded fan-in gates, but cannot be solved by any sub-exponential size, constant-depth classical circuit using unbounded fan-in AND, OR, and NOT gates.
While Bernstein–Vazirani's problem was designed to prove an oracle separation between complexity classes BQP and BPP, 2D HLF was designed to prove an explicit separation between the circuit classes formula_0 and formula_1 (formula_2).
2D HLF problem statement.
Given formula_3 (an upper-triangular binary matrix of size formula_4) and formula_5 (a binary vector of length formula_6),
define a function formula_7:
formula_8
and
formula_9
There exists a formula_10 such that
formula_11
Find formula_12.
2D HLF algorithm.
With three registers, the first holding formula_13, the second containing formula_14, and the third carrying an formula_6-qubit state, the circuit has controlled gates which implement
formula_15 from the first two registers to the third.
This problem can be solved by a quantum circuit, formula_16, where "H" is the Hadamard gate, "S" is the S gate, and CZ is the CZ gate. It is solved by this circuit because, with formula_17, we have formula_18 if and only if formula_12 is a solution.
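For small instances the circuit can be simulated by brute force, which makes the statement above concrete: the layer of S and CZ gates acts as a diagonal phase determined by the function q defined above, sandwiched between two Hadamard layers. The sketch below uses a small, hypothetical instance and lists the strings measured with non-zero probability, each of which is a valid formula_12.

```python
import numpy as np

# Brute-force state-vector simulation of the circuit above: a Hadamard layer on
# |0...0>, the diagonal phase i^{q(x)} implemented by the S and CZ gates, and a
# final Hadamard layer. The instance (A, b) below is small and hypothetical.

def hlf_output_probs(A, b):
    n = len(b)
    N = 2 ** n
    bits = lambda v: np.array([(v >> k) & 1 for k in range(n)])
    # phase i^{q(x)} with q(x) = (2 x^T A x + b^T x) mod 4
    phase = np.array([1j ** int((2 * bits(x) @ A @ bits(x) + b @ bits(x)) % 4) for x in range(N)])
    probs = np.empty(N)
    for z in range(N):
        amp = sum(phase[x] * (-1) ** bin(x & z).count("1") for x in range(N)) / N
        probs[z] = abs(amp) ** 2
    return probs

A = np.array([[0, 1, 0],
              [0, 0, 1],
              [0, 0, 0]])                 # upper-triangular binary matrix
b = np.array([1, 0, 1])
p = hlf_output_probs(A, b)
print([z for z in range(len(p)) if p[z] > 1e-12])   # every listed z satisfies the defining property above
```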
|
[
{
"math_id": 0,
"text": "QNC^{0}"
},
{
"math_id": 1,
"text": "NC^{0}"
},
{
"math_id": 2,
"text": "QNC^{0} \\nsubseteq NC^{0}"
},
{
"math_id": 3,
"text": "A \\in \\mathbb{F}_2^{n \\times n}"
},
{
"math_id": 4,
"text": "n \\times n"
},
{
"math_id": 5,
"text": "b \\in \\mathbb{F}_2^n"
},
{
"math_id": 6,
"text": "n"
},
{
"math_id": 7,
"text": "q : \\mathbb{F}_2^n \\to \\mathbb{Z}_4"
},
{
"math_id": 8,
"text": "q(x) = (2 x^T A x + b^T x) \\bmod 4 = \\left(2 \\sum_{i,j}A_{i,j} x_i x_j + \\sum_i b_i x_i \\right) \\bmod 4 , "
},
{
"math_id": 9,
"text": "\\mathcal{L}_q = \\Big\\{x \\in \\mathbb{F}_2^n : q(x \\oplus y) = (q(x) + q(y)) \\bmod 4 ~~ \\forall y \\in \\mathbb{F}_2^n \\Big\\}."
},
{
"math_id": 10,
"text": "z \\in \\mathbb{F}_2^n"
},
{
"math_id": 11,
"text": "q(x) = 2 z^T x ~~\\forall x \\in \\mathcal{L}_q."
},
{
"math_id": 12,
"text": "z"
},
{
"math_id": 13,
"text": "A"
},
{
"math_id": 14,
"text": "b"
},
{
"math_id": 15,
"text": "U_q = \\prod_{1 < i < j < n} CZ_{ij}^{A_{ij}} \\cdot \\bigotimes_{j=1}^n S_j^{b_j}"
},
{
"math_id": 16,
"text": "H^{\\otimes n} U_q H^{\\otimes n} \\mid 0^n \\rangle"
},
{
"math_id": 17,
"text": "p(z) = \\left| \\langle z | H^{\\otimes n} U_q H ^ {\\otimes n} | 0^n \\rangle \\right|^2"
},
{
"math_id": 18,
"text": "p(z)>0"
}
] |
https://en.wikipedia.org/wiki?curid=63840940
|
63847941
|
Spectroscopic optical coherence tomography
|
Optical imaging technique
Spectroscopic optical coherence tomography (SOCT) is an optical imaging and sensing technique, which provides localized spectroscopic information of a sample based on the principles of optical coherence tomography (OCT) and low coherence interferometry. The general principles behind SOCT arise from the large optical bandwidths involved in OCT, where information on the spectral content of backscattered light can be obtained by detection and processing of the interferometric OCT signal. SOCT signal can be used to quantify depth-resolved spectra to retrieve the concentration of tissue chromophores (e.g., hemoglobin and bilirubin), characterize tissue light scattering, and/or used as a functional contrast enhancement for conventional OCT imaging.
Theory.
The following discussion of techniques for quantitatively obtaining localized optical properties using SOCT is a summary of the concepts discussed in Bosschaart "et al."
Localized spectroscopic information.
The general form of the detected OCT interferogram is written as:
formula_0
Where, "formula_1" and formula_2 are the fields returning from sample and reference arm, respectively, with wavenumber formula_3 with formula_4 the wavelength. Further, formula_5 is the optical path length difference so that formula_6 is the assigned depth location in the tissue. Both the spatial domain and spectral domain descriptions of the collected OCT signal, can be related by Fourier transformation:
formula_7
where formula_8 is the Fourier transform. However, due to the wavelength dependence with depth for both scattering and absorption in tissue, direct Fourier transform cannot be applied to obtain localized spectroscopic information from the OCT signal. For this reason, a time-frequency analysis method must be applied.
Time-frequency analysis methods.
Time-frequency analysis allows for extraction of information of both time and frequency components of a signal. In most SOCT applications a continuous short-time Fourier transform (STFT) method is used,
formula_9
where formula_10 is a spatially confined windowing function, commonly a Gaussian centered around formula_6 with width formula_11, which extracts spatially localized frequency information by suppressing contributions from outside the window. As a result, there is an inherent trade-off between spatial and frequency resolution in the STFT method.
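As an illustration, the sketch below computes a Gaussian-windowed STFT of a synthetic one-dimensional interferogram whose spectral content changes with depth; all signal parameters are hypothetical. The window width is the parameter that sets the trade-off between spatial and spectral resolution mentioned above.

```python
import numpy as np

# Gaussian-windowed short-time Fourier transform of a synthetic "interferogram"
# whose oscillation frequency changes with depth. All parameters are hypothetical.

n = 2048
d = np.linspace(0.0, 1.0, n)                        # depth axis
signal = np.where(d < 0.5,
                  np.cos(2 * np.pi * 80 * d),        # one spectral band in the shallow half
                  np.cos(2 * np.pi * 160 * d))       # another band in the deep half

win_width = 0.03                                     # Gaussian window width (trade-off parameter)
centers = np.linspace(0.0, 1.0, 64)                  # window centre positions along depth
stft = np.empty((len(centers), n), dtype=complex)
for k, d0 in enumerate(centers):
    w = np.exp(-0.5 * ((d - d0) / win_width) ** 2)   # spatially confined window around d0
    stft[k] = np.fft.fft(signal * w)                 # local spectrum at depth d0

power = np.abs(stft) ** 2                            # depth-resolved power spectrum
print(power.shape)
```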
A wavelet transform (WT) approach may also be considered, using a series of functions localized in both real and Fourier space, generated from the complex window function w by translations and dilations
formula_12
where formula_13 is the scaling factor, which dilates or compresses the wavelet formula_10. In this case, the physical process can be considered as an array of band-pass filters with constant bandwidth relative to the center frequency, using short windows at high frequencies and long windows at low frequencies. Unlike the STFT, the WT method is not constrained to a fixed window and may adapt the window size to a desired frequency. For this method the trade-off is again between time and frequency resolution.
Bilinear transforms may also be applied, which under the right conditions incur a reduced resolution penalty. For SOCT purposes the Wigner distribution:
formula_14
can be used to extract structural knowledge of samples from time-localized information contained within the cross-terms. The Wigner distribution applies a Fourier transform to the autocorrelation of the OCT interferogram. The drawback of this method lies in its quadratic nature: overlapping signal components produce interference terms, and separating these from the true signal terms is challenging. For time-frequency analysis, the WD must suppress the interference terms, and as a result the joint time-frequency resolution is traded off against the level of suppression of the interference terms.
Quantitative determination of optical properties.
The time-frequency analysis methods described above result in a wavelength-resolved power spectrum formula_15 as a function of depth formula_6. Assuming the first Born approximation, formula_16 can be described using Beer's law:
formula_17
Here formula_18 is the OCT signal attenuation coefficient and the factor 2 accounts for the double-pass attenuation from depth formula_6. The parameters formula_19 and formula_20 determine the amplitude of formula_16 at d = 0. The system-dependent parameter formula_19 is defined in terms of formula_21, the source power spectrum incident on the sample, and T, the axial point spread function (PSF). The backscattering coefficient formula_20 is sample dependent and is discussed in further detail below.
The experimentally determined OCT attenuation coefficient can be further expressed as:
formula_22
with the total attenuation coefficient formula_23 being the sum of the scattering coefficient formula_24 and the absorption coefficient formula_25. The backscattering coefficient is both sample and source dependent and is defined as:
formula_26
where formula_27 is the scattering phase function, integrated over the numerical aperture formula_28.
The backscattering coefficient may be experimentally determined as long as formula_19 is fully characterized. Commonly, formula_19 is measured by a separate calibration with a sample having a known backscattering coefficient defined by Mie theory.
Separation of μs and μa.
Several approaches have been used to effectively isolate the individual contributions of absorption (formula_25) and scattering (formula_24) from the overall OCT signal attenuation (formula_18).
One method is least-squares fitting, in which the wavelength dependence of scattering is modeled with a power law. In this approach the absorption spectrum is regarded as the total absorption contribution over all known chromophores, and the model is fitted to the measured attenuation values by least squares.
formula_29
The first term on the right represents the scattering component, with scaling factor formula_30 and scatter power formula_31, and the second term models the total absorption over all chromophores formula_32, each with individual contribution formula_33. A limitation of this method is that the localization of the chromophores present, and their absorption properties, need to be known for the fit to be effective.
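A minimal numerical sketch of this fitting procedure is shown below, using synthetic placeholder spectra rather than real chromophore data; the fitted parameters correspond to the scaling factor, the scatter power, and the chromophore contributions in the model above.

```python
import numpy as np
from scipy.optimize import least_squares

# Least-squares separation of scattering and absorption: the measured attenuation
# spectrum is modeled as a*lambda^(-b) plus a weighted sum of known chromophore
# absorption spectra. All spectra and coefficients are synthetic placeholders.

wavelengths = np.linspace(0.5, 0.9, 50)                   # micrometres (illustrative)
mu_a_chromophores = np.vstack([
    np.exp(-((wavelengths - 0.58) / 0.02) ** 2),           # placeholder chromophore 1
    np.exp(-((wavelengths - 0.76) / 0.03) ** 2),           # placeholder chromophore 2
])

def model(params):
    a, b, c1, c2 = params
    return a * wavelengths ** (-b) + c1 * mu_a_chromophores[0] + c2 * mu_a_chromophores[1]

# synthetic "measured" attenuation built from known parameters plus noise
rng = np.random.default_rng(0)
mu_oct = model([1.2, 1.4, 0.6, 0.3]) + rng.normal(0, 0.01, wavelengths.size)

fit = least_squares(lambda p: model(p) - mu_oct, x0=[1.0, 1.0, 0.1, 0.1])
print(fit.x)   # recovered scaling factor, scatter power, and chromophore contributions
```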
Similarly, another common approach is calibration: if the absorption coefficient of a scattering sample can be obtained through a separate calibration measurement, then isolating the scattering coefficient is straightforward. One problem with this method is that it assumes tissue scattering is equal across the various tissue regions; if different structures have different absorption parameters, the measurements will be biased.
Finally, for certain applications, the real and imaginary parts of the complex refractive index may be used to isolate the individual contributions from absorption and scattering using Kramers–Kronig (KK) relations, because the imaginary part of the refractive index can be tied to the absorption spectrum by the Kramers–Kronig relations. Robles "et al." showed it was possible to obtain the necessary contribution of the real part of the refractive index from a nonlinear dispersion phase term in the OCT signal.
Accuracy.
The overall accuracy of SOCT to isolate the localized optical spectra is limited by several factors:
|
[
{
"math_id": 0,
"text": "i_d=|E_s|^2+|E_r|^2+2 \\{E_sE_r \\cos(k2d)\\} "
},
{
"math_id": 1,
"text": "E_s"
},
{
"math_id": 2,
"text": "E_r"
},
{
"math_id": 3,
"text": "k = 2 \\pi / \\lambda"
},
{
"math_id": 4,
"text": "\\lambda"
},
{
"math_id": 5,
"text": "2d"
},
{
"math_id": 6,
"text": "d"
},
{
"math_id": 7,
"text": "i_d(2d)=|\\mathcal{F}\\{i_d(k)\\}| "
},
{
"math_id": 8,
"text": "\\mathcal{F}"
},
{
"math_id": 9,
"text": "\\text{STFT}(k,d;w)=\\int_{-\\infty}^{\\infty}i_d(d')w(d-d';\\Delta d)e^{-ikd'}d(d') "
},
{
"math_id": 10,
"text": "w"
},
{
"math_id": 11,
"text": "\\Delta d"
},
{
"math_id": 12,
"text": "\\text{WT}(k,d)=\\int_{-\\infty}^{\\infty}(d')w\\bigg(\\frac{d-d'}{\\kappa}\\bigg)d(d') "
},
{
"math_id": 13,
"text": "\\kappa"
},
{
"math_id": 14,
"text": "\\text{WD}(k,d)=\\int_{-\\infty}^{\\infty}i_d(d+d')i_d*(d-d')e^{-ikd'}d(d') "
},
{
"math_id": 15,
"text": "S"
},
{
"math_id": 16,
"text": "S(d)"
},
{
"math_id": 17,
"text": "S(d)=\\xi \\cdot \\mu_{b,NA}e^{-2\\mu_{OCT}d} "
},
{
"math_id": 18,
"text": "\\mu_{OCT}"
},
{
"math_id": 19,
"text": "\\xi"
},
{
"math_id": 20,
"text": "\\mu_{b,NA}"
},
{
"math_id": 21,
"text": "S_0"
},
{
"math_id": 22,
"text": "\\mu_{OCT}=\\mu_{t}=\\mu_{s}+\\mu_{a} "
},
{
"math_id": 23,
"text": "\\mu_{t}"
},
{
"math_id": 24,
"text": "\\mu_{s}"
},
{
"math_id": 25,
"text": "\\mu_{a}"
},
{
"math_id": 26,
"text": "\\mu_{b,NA}=\\mu_{s}\\cdot 2 \\pi\\textstyle \\int_{\\pi-NA}^{\\pi} p(\\theta)\\sin\\theta d \\theta "
},
{
"math_id": 27,
"text": "p(y)"
},
{
"math_id": 28,
"text": "NA"
},
{
"math_id": 29,
"text": "\\mu_{OCT}=a \\cdot \\lambda^{-b}\\textstyle \\sum_{i} \\displaystyle (c_i \\mu_{a,i}) "
},
{
"math_id": 30,
"text": "a"
},
{
"math_id": 31,
"text": "b"
},
{
"math_id": 32,
"text": "i"
},
{
"math_id": 33,
"text": "c_i"
}
] |
https://en.wikipedia.org/wiki?curid=63847941
|
63848931
|
LB-space
|
In mathematics, an "LB"-space, also written ("LB")-space, is a topological vector space formula_0 that is a locally convex inductive limit of a countable inductive system formula_1 of Banach spaces.
This means that formula_0 is a direct limit of a direct system formula_2 in the category of locally convex topological vector spaces and each formula_3 is a Banach space.
If each of the bonding maps formula_4 is an embedding of TVSs then the "LB"-space is called a strict "LB"-space. This means that the topology induced on formula_3 by formula_5 is identical to the original topology on formula_6
Some authors (e.g. Schaefer) define the term ""LB"-space" to mean "strict "LB"-space."
Definition.
The topology on formula_0 can be described by specifying that an absolutely convex subset formula_7 is a neighborhood of formula_8 if and only if formula_9 is an absolutely convex neighborhood of formula_8 in formula_3 for every formula_10
Properties.
A strict "LB"-space is complete, barrelled, and bornological (and thus ultrabornological).
Examples.
If formula_11 is a locally compact topological space that is countable at infinity (that is, it is equal to a countable union of compact subspaces) then the space formula_12 of all continuous, complex-valued functions on formula_11 with compact support is a strict "LB"-space. For any compact subset formula_13 let formula_14 denote the Banach space of continuous complex-valued functions supported by formula_15, equipped with the uniform norm, and order the family of compact subsets of formula_11 by inclusion.
Let
formula_16
denote the space of finite sequences, where formula_17 denotes the space of all real sequences.
For every natural number formula_18 let formula_19 denote the usual Euclidean space endowed with the Euclidean topology and let formula_20 denote the canonical inclusion defined by formula_21 so that its image is
formula_22
and consequently,
formula_23
Endow the set formula_24 with the final topology formula_25 induced by the family formula_26 of all canonical inclusions.
With this topology, formula_24 becomes a complete Hausdorff locally convex sequential topological vector space that is not a Fréchet–Urysohn space.
The topology formula_25 is strictly finer than the subspace topology induced on formula_24 by formula_27 where formula_17 is endowed with its usual product topology.
Endow the image formula_28 with the final topology induced on it by the bijection formula_29 that is, it is endowed with the Euclidean topology transferred to it from formula_19 via formula_30
This topology on formula_28 is equal to the subspace topology induced on it by formula_31
A subset formula_32 is open (resp. closed) in formula_33 if and only if for every formula_18 the set formula_34 is an open (resp. closed) subset of formula_35
The topology formula_25 is coherent with family of subspaces formula_36
This makes formula_33 into an LB-space.
Consequently, if formula_37 and formula_38 is a sequence in formula_24 then formula_39 in formula_33 if and only if there exists some formula_40 such that both formula_41 and formula_38 are contained in formula_28 and formula_39 in formula_35
Often, for every formula_18 the canonical inclusion formula_42 is used to identify formula_19 with its image formula_28 in formula_43 explicitly, the elements formula_44 and formula_45 are identified together.
Under this identification, formula_46 becomes a direct limit of the direct system formula_47 where for every formula_48 the map formula_49 is the canonical inclusion defined by formula_50 where there are formula_51 trailing zeros.
Counter-examples.
There exists a bornological LB-space whose strong bidual is not bornological.
There exists an LB-space that is not quasi-complete.
|
[
{
"math_id": 0,
"text": "X"
},
{
"math_id": 1,
"text": "(X_n, i_{nm})"
},
{
"math_id": 2,
"text": "\\left( X_n, i_{nm} \\right)"
},
{
"math_id": 3,
"text": "X_n"
},
{
"math_id": 4,
"text": "i_{nm}"
},
{
"math_id": 5,
"text": "X_{n+1}"
},
{
"math_id": 6,
"text": "X_n."
},
{
"math_id": 7,
"text": "U"
},
{
"math_id": 8,
"text": "0"
},
{
"math_id": 9,
"text": "U \\cap X_n"
},
{
"math_id": 10,
"text": "n."
},
{
"math_id": 11,
"text": "D"
},
{
"math_id": 12,
"text": "C_c(D)"
},
{
"math_id": 13,
"text": "K \\subseteq D,"
},
{
"math_id": 14,
"text": "C_c(K)"
},
{
"math_id": 15,
"text": "K"
},
{
"math_id": 16,
"text": "\\begin{alignat}{4}\n\\R^{\\infty} \n~&:=~ \\left\\{ \\left(x_1, x_2, \\ldots \\right) \\in \\R^{\\N} ~:~ \\text{ all but finitely many } x_i \\text{ are equal to 0 } \\right\\},\n\\end{alignat}\n"
},
{
"math_id": 17,
"text": "\\R^{\\N}"
},
{
"math_id": 18,
"text": "n \\in \\N,"
},
{
"math_id": 19,
"text": "\\R^n"
},
{
"math_id": 20,
"text": "\\operatorname{In}_{\\R^n} : \\R^n \\to \\R^{\\infty}"
},
{
"math_id": 21,
"text": "\\operatorname{In}_{\\R^n}\\left(x_1, \\ldots, x_n\\right) := \\left(x_1, \\ldots, x_n, 0, 0, \\ldots \\right)"
},
{
"math_id": 22,
"text": "\\operatorname{Im} \\left( \\operatorname{In}_{\\R^n} \\right) \n= \\left\\{ \\left(x_1, \\ldots, x_n, 0, 0, \\ldots \\right) ~:~ x_1, \\ldots, x_n \\in \\R \\right\\} \n= \\R^n \\times \\left\\{ (0, 0, \\ldots) \\right\\}"
},
{
"math_id": 23,
"text": "\\R^{\\infty} = \\bigcup_{n \\in \\N} \\operatorname{Im} \\left( \\operatorname{In}_{\\R^n} \\right)."
},
{
"math_id": 24,
"text": "\\R^{\\infty}"
},
{
"math_id": 25,
"text": "\\tau^{\\infty}"
},
{
"math_id": 26,
"text": "\\mathcal{F} := \\left\\{ \\; \\operatorname{In}_{\\R^n} ~:~ n \\in \\N \\; \\right\\}"
},
{
"math_id": 27,
"text": "\\R^{\\N},"
},
{
"math_id": 28,
"text": "\\operatorname{Im} \\left( \\operatorname{In}_{\\R^n} \\right)"
},
{
"math_id": 29,
"text": "\\operatorname{In}_{\\R^n} : \\R^n \\to \\operatorname{Im} \\left( \\operatorname{In}_{\\R^n} \\right);"
},
{
"math_id": 30,
"text": "\\operatorname{In}_{\\R^n}."
},
{
"math_id": 31,
"text": "\\left(\\R^{\\infty}, \\tau^{\\infty}\\right)."
},
{
"math_id": 32,
"text": "S \\subseteq \\R^{\\infty}"
},
{
"math_id": 33,
"text": "\\left(\\R^{\\infty}, \\tau^{\\infty}\\right)"
},
{
"math_id": 34,
"text": "S \\cap \\operatorname{Im} \\left( \\operatorname{In}_{\\R^n} \\right)"
},
{
"math_id": 35,
"text": "\\operatorname{Im} \\left( \\operatorname{In}_{\\R^n} \\right)."
},
{
"math_id": 36,
"text": "\\mathbb{S} := \\left\\{ \\; \\operatorname{Im} \\left( \\operatorname{In}_{\\R^n} \\right) ~:~ n \\in \\N \\; \\right\\}."
},
{
"math_id": 37,
"text": "v \\in \\R^{\\infty}"
},
{
"math_id": 38,
"text": "v_{\\bull}"
},
{
"math_id": 39,
"text": "v_{\\bull} \\to v"
},
{
"math_id": 40,
"text": "n \\in \\N"
},
{
"math_id": 41,
"text": "v"
},
{
"math_id": 42,
"text": "\\operatorname{In}_{\\R^n}"
},
{
"math_id": 43,
"text": "\\R^{\\infty};"
},
{
"math_id": 44,
"text": "\\left( x_1, \\ldots, x_n \\right) \\in \\mathbb{R}^n"
},
{
"math_id": 45,
"text": "\\left( x_1, \\ldots, x_n, 0, 0, 0, \\ldots \\right)"
},
{
"math_id": 46,
"text": "\\left( \\left(\\R^{\\infty}, \\tau^{\\infty}\\right), \\left(\\operatorname{In}_{\\R^n}\\right)_{n \\in \\N}\\right)"
},
{
"math_id": 47,
"text": "\\left( \\left(\\R^n\\right)_{n \\in \\N}, \\left(\\operatorname{In}_{\\R^m}^{\\R^n}\\right)_{m \\leq n \\text{ in } \\N}, \\N \\right),"
},
{
"math_id": 48,
"text": "m \\leq n,"
},
{
"math_id": 49,
"text": "\\operatorname{In}_{\\R^m}^{\\R^n} : \\R^m \\to \\R^n"
},
{
"math_id": 50,
"text": "\\operatorname{In}_{\\R^m}^{\\R^n}\\left(x_1, \\ldots, x_m\\right) := \\left(x_1, \\ldots, x_m, 0, \\ldots, 0 \\right),"
},
{
"math_id": 51,
"text": "n - m"
}
] |
https://en.wikipedia.org/wiki?curid=63848931
|
63849351
|
Blumberg theorem
|
Any real function on R admits a continuous restriction on a dense subset of R
In mathematics, the Blumberg theorem states that for any real function formula_0 there is a dense subset formula_1 of formula_2 such that the restriction of formula_3 to formula_1 is continuous. It is named after its discoverer, the Russian-American mathematician Henry Blumberg.
Examples.
For instance, the restriction of the Dirichlet function (the indicator function of the rational numbers formula_4) to formula_4 is continuous, although the Dirichlet function is nowhere continuous in formula_5
Blumberg spaces.
More generally, a Blumberg space is a topological space formula_6 for which any function formula_7 admits a continuous restriction on a dense subset of formula_8 The Blumberg theorem therefore asserts that formula_9 (equipped with its usual topology) is a Blumberg space.
If formula_6 is a metric space then formula_6 is a Blumberg space if and only if it is a Baire space. The Blumberg problem is to determine whether a compact Hausdorff space must be Blumberg. A counterexample was given in 1974 by Ronnie Levy, conditional on Luzin's hypothesis, that formula_10 The problem was resolved in 1975 by William A. R. Weiss, who gave an unconditional counterexample. It was constructed by taking the disjoint union of two compact Hausdorff spaces, one of which could be proven to be non-Blumberg if the Continuum Hypothesis was true, the other if it was false.
Motivation and discussion.
The restriction of any continuous function to any subset of its domain (dense or otherwise) is always continuous, so the conclusion of the Blumberg theorem is only interesting for functions that are not continuous. Given a function that is not continuous, it is typically not surprising to discover that its restriction to some subset is once again not continuous, and so only those restrictions that are continuous are (potentially) interesting.
Such restrictions are not all interesting, however. For example, the restriction of any function (even one as interesting as the Dirichlet function) to any subset on which it is constant will be continuous, although this fact is as uninteresting as constant functions.
Similarly uninteresting, the restriction of any function (continuous or not) to a single point or to any finite subset of formula_2 (or more generally, to any discrete subspace of formula_11 such as the integers formula_12) will be continuous.
One case that is considerably more interesting is that of a non-continuous function formula_3 whose restriction to some dense subset formula_1 (of its domain) is continuous.
An important fact about continuous formula_2-valued functions defined on dense subsets is that a continuous extension to all of formula_11 if one exists, will be unique (there exist continuous functions defined on dense subsets of formula_11 such as formula_13 that cannot be continuously extended to all of formula_2).
Thomae's function, for example, is not continuous (in fact, it is discontinuous at every rational number) although its restriction to the dense subset formula_14 of irrational numbers is continuous.
Similarly, every additive function formula_15 that is not linear (that is, not of the form formula_16 for some constant formula_17) is a nowhere continuous function whose restriction to formula_4 is continuous (such functions are the non-trivial solutions to Cauchy's functional equation).
This raises the question: can such a dense subset always be found? The Blumberg theorem answers this question in the affirmative.
In other words, every function formula_18 − no matter how poorly behaved it may be − can be restricted to some dense subset on which it is continuous.
Said differently, the Blumberg theorem shows that there does not exist a function formula_18 that is so poorly behaved (with respect to continuity) that all of its restrictions to all possible dense subsets are discontinuous.
The theorem's conclusion becomes more interesting as the function becomes more pathological or poorly behaved. Imagine, for instance, defining a function formula_0 by picking each value formula_19 completely at random (so its graph would appear as infinitely many points scattered randomly about the plane formula_20); even for such a function, the Blumberg theorem guarantees that there is some dense subset on which its restriction is continuous.
|
[
{
"math_id": 0,
"text": "f : \\Reals \\to \\Reals"
},
{
"math_id": 1,
"text": "D"
},
{
"math_id": 2,
"text": "\\Reals"
},
{
"math_id": 3,
"text": "f"
},
{
"math_id": 4,
"text": "\\Q"
},
{
"math_id": 5,
"text": "\\Reals."
},
{
"math_id": 6,
"text": "X"
},
{
"math_id": 7,
"text": "f : X \\to \\Reals"
},
{
"math_id": 8,
"text": "X."
},
{
"math_id": 9,
"text": "\\mathbb{R}"
},
{
"math_id": 10,
"text": "2^{\\aleph_0}=2^{\\aleph_1}."
},
{
"math_id": 11,
"text": "\\Reals,"
},
{
"math_id": 12,
"text": "\\Z"
},
{
"math_id": 13,
"text": "f(x) = 1/x,"
},
{
"math_id": 14,
"text": "\\R\\setminus\\Q"
},
{
"math_id": 15,
"text": "\\Reals \\to \\Reals"
},
{
"math_id": 16,
"text": "x \\mapsto c x"
},
{
"math_id": 17,
"text": "c \\in \\Reals"
},
{
"math_id": 18,
"text": "\\R \\to \\R"
},
{
"math_id": 19,
"text": "f(x)"
},
{
"math_id": 20,
"text": "\\Reals^2"
}
] |
https://en.wikipedia.org/wiki?curid=63849351
|
63852
|
Chosen-plaintext attack
|
Attack model for cryptanalysis with presumed access to ciphertexts for chosen plaintexts
A chosen-plaintext attack (CPA) is an attack model for cryptanalysis which presumes that the attacker can obtain the ciphertexts for arbitrary plaintexts. The goal of the attack is to gain information that reduces the security of the encryption scheme.
Modern ciphers aim to provide semantic security, also known as "ciphertext indistinguishability under chosen-plaintext attack", and they are therefore, by design, generally immune to chosen-plaintext attacks if correctly implemented.
Introduction.
In a chosen-plaintext attack the adversary can (possibly adaptively) ask for the ciphertexts of arbitrary plaintext messages. This is formalized by allowing the adversary to interact with an encryption oracle, viewed as a black box. The attacker’s goal is to reveal all or a part of the secret encryption key.
It may seem infeasible in practice that an attacker could obtain ciphertexts for given plaintexts. However, modern cryptography is implemented in software or hardware and is used for a diverse range of applications; for many cases, a chosen-plaintext attack is often very feasible (see also In practice). Chosen-plaintext attacks become extremely important in the context of public key cryptography where the encryption key is public and so attackers can encrypt any plaintext they choose.
Different forms.
There are two forms of chosen-plaintext attacks:
General method of an attack.
A general batch chosen-plaintext attack is carried out as follows:
Consider the following extension of the above situation. After the last step,
A cipher has indistinguishable encryptions under a chosen-plaintext attack if, after running the above experiment with n=1, the adversary cannot guess correctly (b=b') with probability non-negligibly better than 1/2.
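A toy illustration of this experiment is sketched below, using a deliberately insecure deterministic scheme; because the adversary can re-query the oracle on its chosen plaintexts, it identifies the challenge ciphertext with certainty, so the scheme does not have indistinguishable encryptions. All details of the toy scheme are illustrative.

```python
import os

# Indistinguishability experiment against a toy deterministic "cipher" (XOR with
# a fixed key). The deterministic, stateless oracle lets the adversary win every
# time, so the scheme is not CPA-secure. Illustrative only.

key = os.urandom(16)
encrypt = lambda m: bytes(k ^ p for k, p in zip(key, m))   # deterministic encryption oracle

m0, m1 = b"attack at dawn!!", b"retreat at noon!"          # adversary's chosen message pair
b = os.urandom(1)[0] & 1                                   # challenger's secret bit
challenge = encrypt(m0 if b == 0 else m1)

# adversary's strategy: re-query the oracle on m0 and compare with the challenge
guess = 0 if encrypt(m0) == challenge else 1
print(guess == b)                                           # always True for this toy scheme
```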
Examples.
The following examples demonstrate how some ciphers that meet other security definitions may be broken with a chosen-plaintext attack.
Caesar cipher.
The following attack on the Caesar cipher allows full recovery of the secret key:
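In outline, the attacker submits a chosen plaintext to the encryption oracle and recovers the shift by comparing a ciphertext letter with the corresponding plaintext letter. A minimal sketch, assuming lowercase English text and a locally simulated oracle, follows.

```python
import string

# Chosen-plaintext key recovery against a Caesar cipher. The oracle stands in
# for the attacker's access to encryption under the unknown key; here it is
# simulated locally with a hidden shift. Illustrative only.

ALPHABET = string.ascii_lowercase

def caesar_encrypt(plaintext, shift):
    return "".join(ALPHABET[(ALPHABET.index(c) + shift) % 26] for c in plaintext)

secret_shift = 17                                   # unknown to the attacker
oracle = lambda m: caesar_encrypt(m, secret_shift)  # chosen-plaintext access

chosen = "a"                                        # attacker picks any plaintext
ciphertext = oracle(chosen)
recovered = (ALPHABET.index(ciphertext[0]) - ALPHABET.index(chosen[0])) % 26
print(recovered == secret_shift)                    # True: a single query recovers the key
```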
With more intricate or complex encryption methodologies the key-recovery procedure becomes more resource-intensive; however, the core concept remains the same.
One-time pads.
The following attack on a one-time pad allows full recovery of the secret key. Suppose the message length and key length are equal to n.
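In outline, the attacker requests the encryption of a chosen message of length n (for example the all-zero message); XORing the ciphertext with the chosen plaintext then reveals the key, compromising any other message protected by that same pad. A minimal sketch with a locally simulated oracle follows.

```python
import os

# Chosen-plaintext key recovery against a (mis-used) one-time pad: ciphertext
# XOR plaintext equals the key. The oracle is a local stand-in for the
# attacker's chosen-plaintext access. Illustrative only.

n = 16
secret_key = os.urandom(n)                               # unknown to the attacker
oracle = lambda m: bytes(k ^ p for k, p in zip(secret_key, m))

chosen = bytes(n)                                        # the all-zero message of length n
ciphertext = oracle(chosen)
recovered_key = bytes(c ^ p for c, p in zip(ciphertext, chosen))
print(recovered_key == secret_key)                       # True
```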
While the one-time pad is used as an example of an information-theoretically secure cryptosystem, this security only holds under security definitions weaker than CPA security. This is because under the formal definition of CPA security the encryption oracle has no state. This vulnerability may not be applicable to all practical implementations – the one-time pad can still be made secure if key reuse is avoided (hence the name "one-time" pad).
In practice.
In World War II US Navy cryptanalysts discovered that Japan was planning to attack a location referred to as "AF". They believed that "AF" might be Midway Island, because other locations in the Hawaiian Islands had codewords that began with "A". To prove their hypothesis that "AF" corresponded to "Midway Island" they asked the US forces at Midway to send a plaintext message about low supplies. The Japanese intercepted the message and immediately reported to their superiors that "AF" was low on water, confirming the Navy's hypothesis and allowing them to position their force to win the battle.
Also during World War II, Allied codebreakers at Bletchley Park would sometimes ask the Royal Air Force to lay mines at a position that didn't have any abbreviations or alternatives in the German naval system's grid reference. The hope was that the Germans, seeing the mines, would use an Enigma machine to encrypt a warning message about the mines and an "all clear" message after they were removed, giving the allies enough information about the message to break the German naval Enigma. This process of "planting" a known-plaintext was called "gardening". Allied codebreakers also helped craft messages sent by double agent Juan Pujol García, whose encrypted radio reports were received in Madrid, manually decrypted, and then re-encrypted with an Enigma machine for transmission to Berlin. This helped the codebreakers decrypt the code used on the second leg, having supplied the original text.
In modern day, chosen-plaintext attacks (CPAs) are often used to break symmetric ciphers. To be considered CPA-secure, the symmetric cipher must not be vulnerable to chosen-plaintext attacks. Thus, it is important for symmetric cipher implementors to understand how an attacker would attempt to break their cipher and make relevant improvements.
For some chosen-plaintext attacks, only a small part of the plaintext may need to be chosen by the attacker; such attacks are known as plaintext injection attacks.
Relation to other attacks.
A chosen-plaintext attack is more powerful than known-plaintext attack, because the attacker can directly target specific terms or patterns without having to wait for these to appear naturally, allowing faster gathering of data relevant to cryptanalysis. Therefore, any cipher that prevents chosen-plaintext attacks is also secure against known-plaintext and ciphertext-only attacks.
However, a chosen-plaintext attack is less powerful than a chosen-ciphertext attack, where the attacker can obtain the plaintexts of arbitrary ciphertexts. A CCA attacker can sometimes break a CPA-secure system. For example, the ElGamal cipher is secure against chosen-plaintext attacks, but vulnerable to chosen-ciphertext attacks because it is unconditionally malleable.
|
[
{
"math_id": 0,
"text": "b\\leftarrow\\{0,1\\}"
}
] |
https://en.wikipedia.org/wiki?curid=63852
|
63857858
|
Spectral theory of normal C*-algebras
|
In functional analysis, every C*-algebra is isomorphic to a subalgebra of the C*-algebra formula_0 of bounded linear operators on some Hilbert space formula_1 This article describes the spectral theory of closed normal subalgebras of formula_0. A subalgebra formula_2 of formula_0 is called normal if it is commutative and closed under the formula_3 operation: for all formula_4, we have formula_5 and formula_6.
Resolution of identity.
Throughout, formula_7 is a fixed Hilbert space.
A projection-valued measure on a measurable space formula_8 where formula_9 is a σ-algebra of subsets of formula_10 is a mapping formula_11 such that for all formula_12 formula_13 is a self-adjoint projection on formula_7 (that is, formula_13 is a bounded linear operator formula_14 that satisfies formula_15 and formula_16) such that
formula_17
(where formula_18 is the identity operator of formula_7) and for every formula_19 the function formula_20 defined by formula_21 is a complex measure on formula_9 (that is, a complex-valued countably additive function).
A resolution of identity on a measurable space formula_23 is a function formula_11 such that for every formula_24:
If formula_9 is the formula_25-algebra of all Borel sets on a Hausdorff locally compact (or compact) space, then the following additional requirement is added:
Conditions 2, 3, and 4 imply that formula_26 is a projection-valued measure.
Properties.
Throughout, let formula_26 be a resolution of identity.
For all formula_27 formula_28 is a positive measure on formula_9 with total variation formula_29 and that satisfies formula_30 for all formula_31
For every formula_24:
L∞(π) - space of essentially bounded functions.
Let formula_11 be a resolution of identity on formula_32
Essentially bounded functions.
Suppose formula_33 is a complex-valued formula_9-measurable function. There exists a unique largest open subset formula_34 of formula_35 (ordered under subset inclusion) such that formula_36
To see why, let formula_37 be a basis for formula_35's topology consisting of open disks and suppose that formula_38 is the subsequence (possibly finite) consisting of those sets such that formula_39; then formula_40 Note that, in particular, if formula_41 is an open subset of formula_35 such that formula_42 then formula_43 so that formula_44 (although there are other ways in which formula_45 may equal 0). Indeed, formula_46
The essential range of formula_47 is defined to be the complement of formula_48 It is the smallest closed subset of formula_35 that contains formula_49 for almost all formula_50 (that is, for all formula_50 except for those in some set formula_51 such that formula_52). The essential range is a closed subset of formula_35 so that if it is also a bounded subset of formula_35 then it is compact.
The function formula_47 is essentially bounded if its essential range is bounded, in which case define its essential supremum, denoted by formula_53 to be the supremum of all formula_54 as formula_55 ranges over the essential range of formula_56
Space of essentially bounded functions.
Let formula_57 be the vector space of all bounded complex-valued formula_9-measurable functions formula_58 which becomes a Banach algebra when normed by formula_59
The function formula_60 is a seminorm on formula_61 but not necessarily a norm.
The kernel of this seminorm, formula_62 is a vector subspace of formula_57 that is a closed two-sided ideal of the Banach algebra formula_63
Hence the quotient of formula_57 by formula_64 is also a Banach algebra, denoted by formula_65 where the norm of any element formula_66 is equal to formula_67 (since if formula_68 then formula_69) and this norm makes formula_70 into a Banach algebra.
The spectrum of formula_71 in formula_70 is the essential range of formula_56
This article will follow the usual practice of writing formula_47 rather than formula_71 to represent elements of formula_72
<templatestyles src="Math_theorem/styles.css" />
Theorem — Let formula_11 be a resolution of identity on formula_32 There exists a closed normal subalgebra formula_2 of formula_0 and an isometric *-isomorphism formula_73 satisfying the following properties:
Spectral theorem.
The maximal ideal space of a Banach algebra formula_2 is the set of all complex homomorphisms formula_74 which we denote by formula_75 For every formula_76 in formula_77 the Gelfand transform of formula_76 is the map formula_78 defined by formula_79 The space formula_80 is given the weakest topology making every formula_78 continuous. With this topology, formula_80 is a compact Hausdorff space and, for every formula_76 in formula_77 formula_81 belongs to formula_82 which is the space of continuous complex-valued functions on formula_75 The range of formula_81 is the spectrum formula_83 and the spectral radius is equal to formula_84 which is formula_85
<templatestyles src="Math_theorem/styles.css" />
Theorem — Suppose formula_2 is a closed normal subalgebra of formula_0 that contains the identity operator formula_18 and let formula_86 be the maximal ideal space of formula_87 Let formula_9 be the Borel subsets of formula_88 For every formula_76 in formula_77 let formula_78 denote the Gelfand transform of formula_76 so that formula_89 is an injective map formula_90 There exists a unique resolution of identity formula_91 that satisfies:
formula_92
the notation formula_93 is used to summarize this situation.
Let formula_94 be the inverse of the Gelfand transform formula_95 where formula_96 can be canonically identified as a subspace of formula_72 Let formula_97 be the closure (in the norm topology of formula_0) of the linear span of formula_98
Then the following are true:
The above result can be specialized to a single normal bounded operator.
|
[
{
"math_id": 0,
"text": "\\mathcal{B}(H)"
},
{
"math_id": 1,
"text": "H."
},
{
"math_id": 2,
"text": "A"
},
{
"math_id": 3,
"text": "\\ast"
},
{
"math_id": 4,
"text": "x,y\\in A"
},
{
"math_id": 5,
"text": "x^\\ast\\in A"
},
{
"math_id": 6,
"text": "xy = yx"
},
{
"math_id": 7,
"text": "H"
},
{
"math_id": 8,
"text": "(X, \\Omega),"
},
{
"math_id": 9,
"text": "\\Omega"
},
{
"math_id": 10,
"text": "X,"
},
{
"math_id": 11,
"text": "\\pi : \\Omega \\to \\mathcal{B}(H)"
},
{
"math_id": 12,
"text": "\\omega \\in \\Omega,"
},
{
"math_id": 13,
"text": "\\pi(\\omega)"
},
{
"math_id": 14,
"text": "\\pi(\\omega) : H \\to H"
},
{
"math_id": 15,
"text": "\\pi(\\omega) = \\pi(\\omega)^*"
},
{
"math_id": 16,
"text": "\\pi(\\omega) \\circ \\pi(\\omega) = \\pi(\\omega)"
},
{
"math_id": 17,
"text": "\\pi(X) = \\operatorname{Id}_H \\quad"
},
{
"math_id": 18,
"text": "\\operatorname{Id}_H"
},
{
"math_id": 19,
"text": "x, y \\in H,"
},
{
"math_id": 20,
"text": "\\Omega \\to \\Complex"
},
{
"math_id": 21,
"text": "\\omega \\mapsto \\langle \\pi(\\omega)x, y \\rangle"
},
{
"math_id": 22,
"text": "M"
},
{
"math_id": 23,
"text": "(X, \\Omega)"
},
{
"math_id": 24,
"text": "\\omega_1, \\omega_2 \\in \\Omega"
},
{
"math_id": 25,
"text": "\\sigma"
},
{
"math_id": 26,
"text": "\\pi"
},
{
"math_id": 27,
"text": "x \\in H,"
},
{
"math_id": 28,
"text": "\\pi_{x, x} : \\Omega \\to \\Complex"
},
{
"math_id": 29,
"text": "\\left\\|\\pi_{x, x}\\right\\| = \\pi_{x, x}(X) = \\|x\\|^2"
},
{
"math_id": 30,
"text": "\\pi_{x, x}(\\omega) = \\langle \\pi(\\omega) x, x \\rangle = \\|\\pi(\\omega) x\\|^2"
},
{
"math_id": 31,
"text": "\\omega \\in \\Omega."
},
{
"math_id": 32,
"text": "(X, \\Omega)."
},
{
"math_id": 33,
"text": "f : X \\to \\Complex"
},
{
"math_id": 34,
"text": "V_f"
},
{
"math_id": 35,
"text": "\\Complex"
},
{
"math_id": 36,
"text": "\\pi\\left(f^{-1}\\left(V_f\\right)\\right) = 0."
},
{
"math_id": 37,
"text": "D_1, D_2, \\ldots"
},
{
"math_id": 38,
"text": "D_{i_1}, D_{i_2}, \\ldots"
},
{
"math_id": 39,
"text": "\\pi\\left(f^{-1}\\left(D_{i_k}\\right)\\right) = 0"
},
{
"math_id": 40,
"text": "D_{i_1} \\cup D_{i_2} \\cup \\cdots = V_f."
},
{
"math_id": 41,
"text": "D"
},
{
"math_id": 42,
"text": "D \\cap \\operatorname{Im} f = \\varnothing"
},
{
"math_id": 43,
"text": "\\pi\\left(f^{-1}(D)\\right) = \\pi (\\varnothing) = 0"
},
{
"math_id": 44,
"text": "D \\subseteq V_f"
},
{
"math_id": 45,
"text": "\\pi\\left(f^{-1}(D)\\right)"
},
{
"math_id": 46,
"text": "\\Complex \\setminus \\operatorname{cl}(\\operatorname{Im} f) \\subseteq V_f."
},
{
"math_id": 47,
"text": "f"
},
{
"math_id": 48,
"text": "V_f."
},
{
"math_id": 49,
"text": "f(x)"
},
{
"math_id": 50,
"text": "x \\in X"
},
{
"math_id": 51,
"text": "\\omega \\in \\Omega"
},
{
"math_id": 52,
"text": "\\pi(\\omega) = 0"
},
{
"math_id": 53,
"text": "\\|f\\|^{\\infty},"
},
{
"math_id": 54,
"text": "|\\lambda|"
},
{
"math_id": 55,
"text": "\\lambda"
},
{
"math_id": 56,
"text": "f."
},
{
"math_id": 57,
"text": "\\mathcal{B}(X, \\Omega)"
},
{
"math_id": 58,
"text": "f : X \\to \\Complex,"
},
{
"math_id": 59,
"text": "\\|f\\|_{\\infty} := \\sup_{x \\in X}|f(x) |."
},
{
"math_id": 60,
"text": "\\|\\,\\cdot\\,\\|^{\\infty}"
},
{
"math_id": 61,
"text": "\\mathcal{B}(X, \\Omega),"
},
{
"math_id": 62,
"text": "N^{\\infty} := \\left\\{ f \\in \\mathcal{B}(X, \\Omega) : \\|f\\|^{\\infty} = 0 \\right\\},"
},
{
"math_id": 63,
"text": "\\left(\\mathcal{B}(X, \\Omega), \\| \\cdot \\|_{\\infty}\\right)."
},
{
"math_id": 64,
"text": "N^{\\infty}"
},
{
"math_id": 65,
"text": "L^{\\infty}(\\pi) := \\mathcal{B}(X, \\Omega) / N^{\\infty}"
},
{
"math_id": 66,
"text": "f + N^{\\infty} \\in L^{\\infty}(\\pi)"
},
{
"math_id": 67,
"text": "\\|f\\|^{\\infty}"
},
{
"math_id": 68,
"text": "f + N^{\\infty} = g + N^{\\infty}"
},
{
"math_id": 69,
"text": "\\|f\\|^{\\infty} = \\| g \\|^{\\infty}"
},
{
"math_id": 70,
"text": "L^{\\infty}(\\pi)"
},
{
"math_id": 71,
"text": "f + N^{\\infty}"
},
{
"math_id": 72,
"text": "L^{\\infty}(\\pi)."
},
{
"math_id": 73,
"text": "\\Psi : L^{\\infty}(\\pi) \\to A"
},
{
"math_id": 74,
"text": "A \\to \\Complex,"
},
{
"math_id": 75,
"text": "\\sigma_A."
},
{
"math_id": 76,
"text": "T"
},
{
"math_id": 77,
"text": "A,"
},
{
"math_id": 78,
"text": "G(T) : \\sigma_A \\to \\Complex"
},
{
"math_id": 79,
"text": "G(T)(h) := h(T)."
},
{
"math_id": 80,
"text": "\\sigma_A"
},
{
"math_id": 81,
"text": "G(T)"
},
{
"math_id": 82,
"text": "C \\left(\\sigma_A\\right),"
},
{
"math_id": 83,
"text": "\\sigma(T)"
},
{
"math_id": 84,
"text": "\\max \\left\\{ |G(T)(h)|: h \\in \\sigma_A \\right\\},"
},
{
"math_id": 85,
"text": "\\leq \\|T\\|."
},
{
"math_id": 86,
"text": "\\sigma = \\sigma_A"
},
{
"math_id": 87,
"text": "A."
},
{
"math_id": 88,
"text": "\\sigma."
},
{
"math_id": 89,
"text": "G"
},
{
"math_id": 90,
"text": "G : A \\to C\\left(\\sigma_A\\right)."
},
{
"math_id": 91,
"text": "\\pi : \\Omega \\to A"
},
{
"math_id": 92,
"text": "\\langle T x, y \\rangle = \\int_{\\sigma_A} G(T) \\operatorname{d} \\pi_{x, y} \\quad \\text{ for all } x, y \\in H \\text{ and all } T \\in A;"
},
{
"math_id": 93,
"text": "T = \\int_{\\sigma_A} G(T) \\operatorname{d} \\pi"
},
{
"math_id": 94,
"text": "I : \\operatorname{Im} G \\to A"
},
{
"math_id": 95,
"text": "G : A \\to C\\left(\\sigma_A\\right)"
},
{
"math_id": 96,
"text": "\\operatorname{Im} G"
},
{
"math_id": 97,
"text": "B"
},
{
"math_id": 98,
"text": "\\operatorname{Im} \\pi."
}
] |
https://en.wikipedia.org/wiki?curid=63857858
|
6385832
|
Local convex hull
|
Local convex hull (LoCoH) is a method for estimating size of the home range of an animal or a group of animals (e.g. a pack of wolves, a pride of lions, or herd of buffaloes), and for constructing a utilization distribution. The latter is a probability distribution that represents the probabilities of finding an animal within a given area of its home range at any point in time; or, more generally, at points in time for which the utilization distribution has been constructed. In particular, different utilization distributions can be constructed from data pertaining to particular periods of a diurnal or seasonal cycle.
Utilization distributions are constructed from data providing the location of an individual or several individuals in space at different points in time by associating a local distribution function with each point and then summing and normalizing these local distribution functions to obtain a distribution function that pertains to the data as a whole. If the local distribution function is a parametric distribution, such as a symmetric bivariate normal distribution, then the method is referred to as a kernel method, but more correctly should be designated as a parametric kernel method. On the other hand, if the local kernel element associated with each point is a local convex polygon constructed from the point and its "k"-1 nearest neighbors, then the method is nonparametric and referred to as a "k"-LoCoH or "fixed point" LoCoH method. This is in contrast to "r"-LoCoH (fixed radius) and "a"-LoCoH (adaptive radius) methods.
In the case of LoCoH utilization distribution constructions, the home range can be taken as the outer boundary of the distribution (i.e. the 100th percentile). In the case of utilization distributions constructed from unbounded kernel elements, such as bivariate normal distributions, the utilization distribution is itself unbounded. In this case the most often used convention is to regard the 95th percentile of the utilization distribution as the boundary of the home range.
To construct a "k"-LoCoH utilization distribution:
In this sense, LoCoH methods are a generalization of the home range estimator method based on constructing the minimum convex polygon (MCP) associated with the data. The LoCoH method has a number of advantages over parametric kernel methods. In particular:
LoCoH has a number of implementations including a now-defunct LoCoH Web Application.
LoCoH was formerly known as "k"-NNCH, for "k"-nearest neighbor convex hulls. It has recently been shown that the "a"-LoCoH is the best of the three LoCoH methods mentioned above (see Getz et al. in the references below).
T-LoCoH.
T-LoCoH (time local convex hull) is an enhanced version of LoCoH which incorporates time into the home range construction. Time is incorporated into the algorithm via an alternative measure of 'distance', called time scaled distance (TSD), which combines the spatial distance and temporal distance between any two points. This presumes that each point has a time stamp associated with it, as with GPS data. T-LoCoH uses TSD rather than Euclidean distance to identify each point's nearest neighbors, resulting in hulls that are localized in both space and time. Hulls are then sorted and progressively unioned into isopleths. Like LoCoH, UDs created by T-LoCoH generally do a good job modeling sharp edges in habitat such as water bodies; in addition T-LoCoH isopleths can delineate temporal partitions of space use. T-LoCoH also offers additional sorting options for hulls, allowing it to generate isopleths that differentiate internal space by both intensity of use (the conventional UD) and a variety of behavioral proxies, including directionality and time use metrics.
Time scaled distance.
The TSD for any two locations "i" and "j" separated in time by formula_0 is given by
formula_1
Conceptually, TSD transforms the period of time between two observations into spatial units by estimating how far the individual could have traveled during the time period if it had been moving at its maximum observed speed. This theoretical movement distance is then mapped onto a third axis of space, and distance calculated using standard Euclidean distance equations. The TSD equation also features a scaling parameter "s" which controls the degree to which the temporal difference scales to spatial units. When "s"=0, the temporal distance drops out and TSD is equivalent to Euclidean distance (thus T-LoCoH is backward compatible with LoCoH). As "s" increases, the temporal distance becomes more and more influential, eventually swamping out the distance in space. The TSD metric is not based on a mechanistic or diffusion model of movement, but merely serves to generate hulls that are local in space and/or time.
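A minimal sketch of the TSD computation (illustrative only; the function and parameter names are placeholders, not identifiers from the T-LoCoH software):

```python
import math

def time_scaled_distance(p1, p2, s, v_max):
    """Illustrative TSD between two fixes given as (x, y, t) triples.

    s      -- scaling parameter; s = 0 reduces TSD to ordinary Euclidean distance
    v_max  -- maximum observed speed, used to map elapsed time onto a spatial axis
    """
    dx = p2[0] - p1[0]
    dy = p2[1] - p1[1]
    dt = abs(p2[2] - p1[2])
    return math.sqrt(dx ** 2 + dy ** 2 + (s * v_max * dt) ** 2)

# With s = 0 only the spatial separation remains; increasing s makes the
# temporal separation more and more influential.
print(time_scaled_distance((0.0, 0.0, 0.0), (3.0, 4.0, 100.0), s=0.0, v_max=1.5))  # 5.0
print(time_scaled_distance((0.0, 0.0, 0.0), (3.0, 4.0, 100.0), s=0.1, v_max=1.5))  # about 15.8
```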
|
[
{
"math_id": 0,
"text": "\\Delta t_{ij}"
},
{
"math_id": 1,
"text": "\\Psi_{ij} = \\sqrt{\\Delta x_{ij}^2 + \\Delta y_{ij}^2 + (sv_{max}\\Delta t_{ij})^2}"
}
] |
https://en.wikipedia.org/wiki?curid=6385832
|
6386360
|
Heap (mathematics)
|
Algebraic structure with a ternary operation
In abstract algebra, a semiheap is an algebraic structure consisting of a non-empty set "H" with a ternary operation denoted formula_0 that satisfies a modified associativity property:
formula_1
A biunitary element "h" of a semiheap satisfies ["h","h","k"] = "k" = ["k","h","h"] for every "k" in "H".
A heap is a semiheap in which every element is biunitary. It can be thought of as a group with the identity element "forgotten".
The term "heap" is derived from груда, Russian for "heap", "pile", or "stack". Anton Sushkevich used the term in his "Theory of Generalized Groups" (1937) which influenced Viktor Wagner, promulgator of semiheaps, heaps, and generalized heaps. Груда contrasts with группа (group) which was taken into Russian by transliteration. Indeed, a heap has been called a groud in English text.)
Examples.
Two element heap.
Turn formula_2 into the cyclic group formula_3 by defining formula_4 as the identity element and formula_5. This produces the following heap:
formula_6
formula_7
Defining formula_8 as the identity element and formula_9 would have given the same heap.
Heap of integers.
If formula_10 are integers, we can set formula_11 to produce a heap. We can then choose any integer formula_12 to be the identity of a new group on the set of integers, with the operation formula_13
formula_14
and inverse
formula_15.
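A short sketch of this construction (illustrative code, not from any standard library):

```python
def heap(x, y, z):
    """Ternary heap operation on the integers: [x, y, z] = x - y + z."""
    return x - y + z

def make_group(k):
    """Recover a group on the integers by fixing the integer k as the identity."""
    op = lambda x, y: heap(x, k, y)   # x * y = x + y - k
    inv = lambda x: heap(k, x, k)     # inverse of x is 2k - x
    return op, inv

op, inv = make_group(k=7)
assert op(7, 12) == 12 and op(12, 7) == 12    # k acts as the identity
assert op(12, inv(12)) == 7                   # an element composed with its inverse gives k

# The para-associativity law [[a,b,c],d,e] = [a,[d,c,b],e] = [a,b,[c,d,e]]:
a, b, c, d, e = 2, 5, -3, 11, 4
assert heap(heap(a, b, c), d, e) == heap(a, heap(d, c, b), e) == heap(a, b, heap(c, d, e))
```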
Heap of a group.
The previous two examples may be generalized to any group "G" by defining the ternary relation as formula_16 using the multiplication and inverse of "G".
Heap of a groupoid with two objects.
The heap of a group may be generalized again to the case of a groupoid which has two objects "A" and "B" when viewed as a category. The elements of the heap may be identified with the morphisms from A to B, such that three morphisms "x", "y", "z" define a heap operation according to formula_17
This reduces to the heap of a group if a particular morphism between the two objects is chosen as the identity. This intuitively relates the description of isomorphisms between two objects as a heap and the description of isomorphisms between multiple objects as a groupoid.
Heterogeneous relations.
Let "A" and "B" be different sets and formula_18 the collection of heterogeneous relations between them. For formula_19 define the ternary operator
formula_20 where "q"T is the converse relation of "q". The result of this composition is also in formula_18 so a mathematical structure has been formed by the ternary operation. Viktor Wagner was motivated to form this heap by his study of transition maps in an atlas which are partial functions. Thus a heap is more than a tweak of a group: it is a general concept including a group as a trivial case.
Theorems.
Theorem: A semiheap with a biunitary element "e" may be considered an involuted semigroup with operation given by "ab" = ["a", "e", "b"] and involution by "a"–1 = ["e", "a", "e"].
When the above construction is applied to a heap, the result is in fact a group. Note that the identity "e" of the group can be chosen to be any element of the heap.
Theorem: Every semiheap may be embedded in an involuted semigroup.
As in the study of semigroups, the structure of semiheaps is described in terms of ideals with an "i-simple semiheap" being one with no proper ideals. Mustafaeva translated the Green's relations of semigroup theory to semiheaps and defined a ρ class to be those elements generating the same principal two-sided ideal. He then proved that no i-simple semiheap can have more than two ρ classes.
He also described regularity classes of a semiheap "S":
formula_21 where "n" and "m" have the same parity and the ternary operation of the semiheap applies at the left of a string from "S".
He proved that "S" can have at most 5 regularity classes. Mustafaev calls an ideal "B" "isolated" when formula_22 He then proved that if "S" = D(2,2), then every ideal is isolated, and conversely.
Studying the semiheap Z("A, B") of heterogeneous relations between sets "A" and "B", in 1974 K. A. Zareckii followed Mustafaev's lead to describe ideal equivalence, regularity classes, and ideal factors of a semiheap.
Generalizations and related concepts.
A semigroud is a generalised groud if the relation → defined by
formula_31
is reflexive (idempotence) and antisymmetric. In a generalised groud, → is an order relation.
Notes.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "[x,y,z] \\in H"
},
{
"math_id": 1,
"text": "\\forall a,b,c,d,e \\in H \\quad [[a,b,c],d,e] = [a,[d,c,b],e] = [a,b,[c,d,e]]."
},
{
"math_id": 2,
"text": "H=\\{a,b\\}"
},
{
"math_id": 3,
"text": "\\mathrm{C}_2"
},
{
"math_id": 4,
"text": "a"
},
{
"math_id": 5,
"text": "bb = a"
},
{
"math_id": 6,
"text": "[a,a,a]=a,\\, [a,a,b]=b,\\, [b,a,a]=b,\\, [b,a,b]=a,"
},
{
"math_id": 7,
"text": "[a,b,a]=b,\\, [a,b,b]=a,\\, [b,b,a]=a,\\, [b,b,b]=b."
},
{
"math_id": 8,
"text": "b"
},
{
"math_id": 9,
"text": "aa = b"
},
{
"math_id": 10,
"text": "x,y,z"
},
{
"math_id": 11,
"text": "[x,y,z] = x-y+z"
},
{
"math_id": 12,
"text": "k"
},
{
"math_id": 13,
"text": "*"
},
{
"math_id": 14,
"text": "x*y = x+y-k"
},
{
"math_id": 15,
"text": "x^{-1} = 2k-x"
},
{
"math_id": 16,
"text": "[x,y,z] = xy^{-1}z,"
},
{
"math_id": 17,
"text": "[x,y,z] = xy^{-1}z."
},
{
"math_id": 18,
"text": "\\mathcal{B}(A,B)"
},
{
"math_id": 19,
"text": "p, q, r \\in \\mathcal{B}(A,B)"
},
{
"math_id": 20,
"text": "[p, q, r] = p q^T r"
},
{
"math_id": 21,
"text": "D(m,n) = \\{a \\mid \\exists x \\in S : a = a^n x a^m \\}"
},
{
"math_id": 22,
"text": "a^n \\in B \\implies a \\in B ."
},
{
"math_id": 23,
"text": "[[a,b,c],d,e] = [a,b,[c,d,e]] ."
},
{
"math_id": 24,
"text": "f"
},
{
"math_id": 25,
"text": "X"
},
{
"math_id": 26,
"text": "f(x, x, y) = f(y, x, x) = y"
},
{
"math_id": 27,
"text": "[x,y,z] = x \\cdot y^\\mathrm{T} \\cdot z"
},
{
"math_id": 28,
"text": " [a,a,a] = a "
},
{
"math_id": 29,
"text": "[a,a,[b,b,x]] = [b,b,[a,a,x]] "
},
{
"math_id": 30,
"text": " [[x,a,a],b,b] = [[x,b,b],a,a] "
},
{
"math_id": 31,
"text": "a \\rightarrow b \\Leftrightarrow [a,b,a] = a "
}
] |
https://en.wikipedia.org/wiki?curid=6386360
|
63866
|
Palermo Technical Impact Hazard Scale
|
Logarithmic scale in astronomy
The Palermo Technical Impact Hazard Scale is a logarithmic scale used by astronomers to rate the potential hazard of impact of a near-Earth object (NEO). It combines two types of data—probability of impact and estimated kinetic yield—into a single "hazard" value. A rating of 0 means the hazard is equivalent to the background hazard (defined as the average risk posed by objects of the same size or larger over the years until the date of the potential impact). A rating of +2 would indicate the hazard is 100 times as great as a random background event. Scale values less than −2 reflect events for which there are no likely consequences, while Palermo Scale values between −2 and 0 indicate situations that merit careful monitoring. A similar but less complex scale is the Torino Scale, which is used for simpler descriptions in the non-scientific media.
As of May 2024, one asteroid has a cumulative Palermo Scale value above −2: 101955 Bennu (−1.41). Six have cumulative Palermo Scale values between −2 and −3: (29075) 1950 DA (−2.05), (−2.63), 1979 XB (−2.71), (−2.78), (−2.86), and (−2.98). Of those that have a cumulative Palermo Scale value between −3 and −4, one was discovered in 2024: 2024 BY15 (−3.30).
Scale.
The scale compares the likelihood of the detected potential impact with the average risk posed by objects of the same size or larger over the years until the date of the potential impact. This average risk from random impacts is known as the background risk. The Palermo Scale value, "P", is defined by the equation:
formula_0
where
*"pi" is the impact probability
*"T" is the time interval over which "pi" is considered
*"fB" is the background impact frequency
The background impact frequency is defined for this purpose as:
formula_1
where the energy threshold E is measured in megatons and yr denotes one year, the unit in which T is expressed.
For instance, this formula implies that the expected value of the time from now until the next impact greater than 1 megatonne is 33 years, and that when it occurs, there is a 50% chance that it will be above 2.4 megatonnes. This formula is only valid over a certain range of "E".
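A minimal sketch of the computation, using an arbitrary hypothetical object rather than any real asteroid:

```python
import math

def background_frequency(energy_megatons):
    """Annual background frequency of impacts at or above the given energy
    (the Palermo-scale convention f_B = 0.03 * E^(-4/5))."""
    return 0.03 * energy_megatons ** -0.8

def palermo_scale(impact_probability, years_until_impact, energy_megatons):
    """P = log10( p_i / (f_B * T) )."""
    f_b = background_frequency(energy_megatons)
    return math.log10(impact_probability / (f_b * years_until_impact))

# Hypothetical object: a 1-in-50,000 chance of a 10-megaton impact 30 years from now.
print(palermo_scale(2e-5, 30, 10))   # about -3.85, i.e. far below the background hazard
```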
However, another paper published in 2002 – the same year as the paper on which the Palermo scale is based – found a power law with different constants:
formula_2
This formula gives considerably lower rates for a given "E". For instance, it gives the rate for bolides of 10 megatonnes or more (like the Tunguska explosion) as 1 per thousand years, rather than 1 per 210 years as in the Palermo formula. However, the authors give a rather large uncertainty (once in 400 to 1800 years for 10 megatonnes), due in part to uncertainties in determining the energies of the atmospheric impacts that they used in their determination.
Positive rating.
In 2002 the near-Earth object 2002 NT7 reached a positive rating on the scale of 0.18, indicating a higher-than-background threat. The value was subsequently lowered after more measurements were taken. 2002 NT7 is no longer considered to pose any risk and was removed from the Sentry Risk Table on 1 August 2002.
In September 2002, the highest Palermo rating was that of asteroid (29075) 1950 DA, with a value of 0.17 for a possible collision in the year 2880. By March 2022, the rating had been reduced to −2.0.
For a brief period in late December 2004, with an observation arc of 190 days, asteroid 99942 Apophis (then known only by its provisional designation 2004 MN4) held the record for the highest Palermo scale value, with a value of 1.10 for a possible collision in the year 2029. The 1.10 value indicated that a collision with this object was considered to be almost 12.6 times as likely as a random background event: 1 in 37 instead of 1 in 472. With further observation through 2021 there is no risk from Apophis for the next 100+ years.
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "P \\equiv \\log_{10} \\frac {p_i} {f_B T}"
},
{
"math_id": 1,
"text": "f_B = 0.03\\, E^{-\\frac45} \\text{ yr}^{-1}\\;"
},
{
"math_id": 2,
"text": "f_B = 0.00737 E^{-0.9} \\;"
}
] |
https://en.wikipedia.org/wiki?curid=63866
|
63871781
|
System of differential equations
|
Group of differential equations
In mathematics, a system of differential equations is a finite set of differential equations. Such a system can be either linear or non-linear. Also, such a system can be either a system of ordinary differential equations or a system of partial differential equations.
Linear systems of differential equations.
A first-order linear system of ODEs is a system in which every equation is first order and depends on the unknown functions linearly. Here we consider systems with an equal number of unknown functions and equations. These may be written as
formula_0
where formula_1 is a positive integer, and formula_2 are arbitrary functions of the independent variable t. A first-order linear system of ODEs may be written in matrix form:
formula_3
or simply
formula_4.
Homogeneous systems of differential equations.
A linear system is said to be homogeneous if formula_5 for each formula_6 and for all values of formula_7, otherwise it is referred to as non-homogeneous. Homogeneous systems have the property that if formula_8 are linearly independent solutions to the system, then any linear combination of these, formula_9, is also a solution to the linear system where formula_10 are constant.
The case where the coefficients formula_11 are all constant has a general solution: formula_12, where formula_13 is an eigenvalue of the matrix formula_14 with corresponding eigenvectors formula_15 for formula_16. This general solution applies only when formula_14 has n distinct eigenvalues; cases with fewer distinct eigenvalues must be treated differently.
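A minimal numerical sketch of this constant-coefficient case (the matrix below is an arbitrary example with distinct eigenvalues, and the comparison with the matrix exponential is only a consistency check):

```python
import numpy as np
from scipy.linalg import expm

A = np.array([[0.0, 1.0],
              [-2.0, -3.0]])        # arbitrary constant coefficient matrix (eigenvalues -1, -2)
x0 = np.array([1.0, 0.0])           # initial condition x(0)

eigvals, eigvecs = np.linalg.eig(A)
C = np.linalg.solve(eigvecs, x0)    # constants C_i determined from x(0) = sum_i C_i v_i

def x(t):
    """General solution x(t) = sum_i C_i v_i exp(lambda_i t)."""
    return (eigvecs * np.exp(eigvals * t)) @ C

# The eigenvalue form of the solution agrees with the matrix exponential exp(At) x0.
assert np.allclose(x(1.0), expm(A) @ x0)
```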
Linear independence of solutions.
For an arbitrary system of ODEs, a set of solutions formula_17 are said to be linearly-independent if:
formula_18 is satisfied only for formula_19.
A second-order differential equation formula_20 may be converted into a system of first order linear differential equations by defining formula_21, which gives us the first-order system:
formula_22
Just as with any linear system of two equations, two solutions may be called linearly-independent if formula_23 implies formula_24 or, equivalently, if the determinant formula_25 is non-zero. This notion is extended to second-order systems, and any two solutions to a second-order ODE are called linearly-independent if they are linearly-independent in this sense.
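A brief sketch of the reduction and this determinant test, using the textbook equation x'' = -x as an arbitrary example:

```python
import numpy as np
from scipy.integrate import solve_ivp

def first_order_system(t, state):
    """Reduction of x'' = -x via y = x': (x, y)' = (y, -x)."""
    x, y = state
    return [y, -x]

t_eval = np.linspace(0.0, 10.0, 200)
sol1 = solve_ivp(first_order_system, (0, 10), [1.0, 0.0], t_eval=t_eval)   # approximately cos t
sol2 = solve_ivp(first_order_system, (0, 10), [0.0, 1.0], t_eval=t_eval)   # approximately sin t

# The determinant | x1 x2 ; x1' x2' | stays close to 1 (never zero),
# so the two numerical solutions are linearly independent.
determinant = sol1.y[0] * sol2.y[1] - sol2.y[0] * sol1.y[1]
print(determinant.min(), determinant.max())
```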
Overdetermination of systems of differential equations.
Like any system of equations, a system of linear differential equations is said to be overdetermined if there are more equations than the unknowns. For an overdetermined system to have a solution, it needs to satisfy the compatibility conditions. For example, consider the system:
formula_26
Then the necessary conditions for the system to have a solution are:
formula_27
See also: Cauchy problem and Ehrenpreis's fundamental principle.
Nonlinear system of differential equations.
Perhaps the most famous example of a nonlinear system of differential equations is the Navier–Stokes equations. Unlike the linear case, the existence of a solution of a nonlinear system is a difficult problem (cf. Navier–Stokes existence and smoothness.)
Other examples of nonlinear systems of differential equations include the Lotka–Volterra equations.
Differential system.
A differential system is a means of studying a system of partial differential equations using geometric ideas such as differential forms and vector fields.
For example, the compatibility conditions of an overdetermined system of differential equations can be succinctly stated in terms of differential forms (i.e., for a form to be exact, it needs to be closed). See integrability conditions for differential systems for more.
Notes.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "\\frac{dx_j}{dt} = a_{j1}(t) x_1 + \\ldots + a_{jn}(t)x_n + g_{j}(t), \\qquad j=1,\\ldots,n"
},
{
"math_id": 1,
"text": "n"
},
{
"math_id": 2,
"text": "a_{ji}(t),g_{j}(t)"
},
{
"math_id": 3,
"text": "\n\\frac{d}{dt} \\begin{bmatrix} x_1 \\\\ x_2 \\\\ \\vdots \\\\ x_n \\end{bmatrix} = \\begin{bmatrix} a_{11} & \\ldots & a_{1n} \\\\ a_{21} & \\ldots & a_{2 n} \\\\ \\vdots & \\ldots & \\vdots \\\\ a_{n1} & & a_{n n} \\end{bmatrix} \\begin{bmatrix} x_1 \\\\ x_2 \\\\ \\vdots \\\\ x_n \\end{bmatrix} + \\begin{bmatrix} g_1 \\\\ g_2 \\\\ \\vdots \\\\ g_n \\end{bmatrix} ,"
},
{
"math_id": 4,
"text": "\n\\mathbf{\\dot{x}}(t) = \\mathbf{A}(t)\\mathbf{x}(t) + \\mathbf{g}(t)"
},
{
"math_id": 5,
"text": "\ng_j(t)=0"
},
{
"math_id": 6,
"text": "\nj"
},
{
"math_id": 7,
"text": "\nt"
},
{
"math_id": 8,
"text": "\\mathbf{x_1},\\ldots ,\\mathbf{x_p}"
},
{
"math_id": 9,
"text": "C_1 \\mathbf{x _1}+ \\ldots + C_p \\mathbf{x _p}"
},
{
"math_id": 10,
"text": "C_1, \\ldots, C_p"
},
{
"math_id": 11,
"text": "a_{ji}(t)"
},
{
"math_id": 12,
"text": "\\mathbf{x} = C_1 \\mathbf{v_1}e^{\\lambda_1 t } + \\ldots + C_n \\mathbf{v_n}e^{\\lambda_n t }"
},
{
"math_id": 13,
"text": "\\lambda_i"
},
{
"math_id": 14,
"text": "\\mathbf{A}"
},
{
"math_id": 15,
"text": "\\mathbf{v}_i"
},
{
"math_id": 16,
"text": "1 \\leq i \\leq n"
},
{
"math_id": 17,
"text": "\\mathbf{x_1}(t), \\ldots ,\\mathbf{x_n}(t)"
},
{
"math_id": 18,
"text": "C_1\\mathbf{x_1}(t) + \\ldots + C_n \\mathbf{x_n} = 0 \\quad \\forall t"
},
{
"math_id": 19,
"text": "C_1 = \\ldots = C_n=0"
},
{
"math_id": 20,
"text": "\\ddot{x} = f(t,x,\\dot{x})"
},
{
"math_id": 21,
"text": "y=\\dot{x}"
},
{
"math_id": 22,
"text": "\\begin{cases} \\dot{x} & = & y \\\\ \\dot{y} & = & f(t,x,y) \\end{cases}"
},
{
"math_id": 23,
"text": "C_1 \\mathbf{x}_1 + C_2 \\mathbf{x}_2=\\mathbf{0 }"
},
{
"math_id": 24,
"text": "C_1 = C_2 = 0"
},
{
"math_id": 25,
"text": "\\begin{vmatrix} x_1 & x_ 2 \\\\ \\dot{x}_ 1 & \\dot{x}_ 2 \\end{vmatrix}"
},
{
"math_id": 26,
"text": "\\frac{\\partial u}{\\partial x_i} = f_i, 1 \\le i \\le m."
},
{
"math_id": 27,
"text": "\\frac{\\partial f_i}{\\partial x_k} - \\frac{\\partial f_k}{\\partial x_i} = 0, 1 \\le i, k \\le m."
}
] |
https://en.wikipedia.org/wiki?curid=63871781
|
6387477
|
No-broadcasting theorem
|
Theorem of quantum information processing
In physics, the no-broadcasting theorem is a result of quantum information theory. In the case of pure quantum states, it is a corollary of the no-cloning theorem. The no-cloning theorem for pure states says that it is impossible to create two copies of an unknown state given a single copy of the state. Since quantum states cannot be copied in general, they cannot be broadcast. Here, the word "broadcast" is used in the sense of conveying the state to two or more recipients. For multiple recipients to each receive the state, there must be, in some sense, a way of duplicating the state. The no-broadcast theorem generalizes the no-cloning theorem for mixed states.
The theorem also includes a converse: if two quantum states do commute, there is a method for broadcasting them: they must have a common basis of eigenstates diagonalizing them simultaneously, and the map that clones every state of this basis is a legitimate quantum operation, requiring only physical resources independent of the input state to implement—a completely positive map. A corollary is that there is a physical process capable of broadcasting every state in some set of quantum states if, and only if, every pair of states in the set commutes. This broadcasting map, which works in the commuting case, produces an overall state in which the two copies are perfectly correlated in their eigenbasis.
Remarkably, the theorem does not hold if more than one copy of the initial state is provided: for example, broadcasting six copies starting from four copies of the original state is allowed, even if the states are drawn from a non-commuting set. The purity of the state can even be increased in the process, a phenomenon known as superbroadcasting.
Generalized No-Broadcast Theorem.
The generalized quantum no-broadcasting theorem, originally proven by Barnum, Caves, Fuchs, Jozsa and Schumacher for mixed states of finite-dimensional quantum systems, says that given a pair of quantum states which do not commute, there is no method capable of taking a single copy of either state and succeeding, no matter which state was supplied and without incorporating knowledge of which state has been supplied, in producing a state such that one part of it is the same as the original state and the other part is also the same as the original state. That is, given an initial unknown state formula_0 drawn from the set formula_1 such that formula_2, there is no process (using physical means independent of those used to select the state) guaranteed to create a state formula_3 in a Hilbert space formula_4 whose partial traces are formula_5 and formula_6. Such a process was termed broadcasting in that paper.
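A minimal NumPy sketch of the commuting case described above (the input state is an arbitrary diagonal example): states that are diagonal in a common basis can be broadcast by a correlated-copy map, and both partial traces then reproduce the input, which is precisely the condition that cannot be met for non-commuting inputs:

```python
import numpy as np

def broadcast_commuting(rho):
    """Map rho = sum_i p_i |i><i|  to  sum_i p_i |i><i| (x) |i><i|.

    This is only a valid broadcasting operation because the allowed inputs
    are all diagonal in the same (computational) basis, i.e. they commute."""
    d = rho.shape[0]
    out = np.zeros((d * d, d * d), dtype=complex)
    for i in range(d):
        proj = np.zeros((d, d), dtype=complex)
        proj[i, i] = 1.0
        out += rho[i, i] * np.kron(proj, proj)
    return out

def partial_trace(rho_ab, d, keep):
    """Partial trace of a (d*d) x (d*d) bipartite state; keep=0 keeps A, keep=1 keeps B."""
    r = rho_ab.reshape(d, d, d, d)
    return np.trace(r, axis1=1, axis2=3) if keep == 0 else np.trace(r, axis1=0, axis2=2)

rho = np.diag([0.7, 0.3]).astype(complex)     # arbitrary state diagonal in the chosen basis
rho_ab = broadcast_commuting(rho)
assert np.allclose(partial_trace(rho_ab, 2, 0), rho)   # Tr_B(rho_AB) = rho
assert np.allclose(partial_trace(rho_ab, 2, 1), rho)   # Tr_A(rho_AB) = rho
```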
No-Local-Broadcasting Theorem.
The second theorem states that local broadcasting is only possible when the state is a classical probability distribution. This means that a state can only be broadcast locally if it does not have any quantum correlations. Luo reconciled this theorem with the generalized no-broadcast theorem by making the conjecture that when a state is a classical-quantum state, correlations (rather than the state itself) in a bipartite state can be locally broadcast. By mathematically proving that his conjecture and the two theorems all relate to and imply one another, Luo proved that all three statements are logically equivalent.
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "\\rho_i,"
},
{
"math_id": 1,
"text": " \\{\\rho_i\\}_{i \\in \\{1,2\\}}"
},
{
"math_id": 2,
"text": "[\\rho_1,\\rho_2] \\ne 0"
},
{
"math_id": 3,
"text": "\\rho_{AB}"
},
{
"math_id": 4,
"text": "H_A \\otimes H_B"
},
{
"math_id": 5,
"text": "\\operatorname{Tr}_A\\rho_{AB} = \\rho_i"
},
{
"math_id": 6,
"text": "\\operatorname{Tr}_B\\rho_{AB} = \\rho_i"
}
] |
https://en.wikipedia.org/wiki?curid=6387477
|
638834
|
Economic model
|
Simplified representation of economic reality
An economic model is a theoretical construct representing economic processes by a set of variables and a set of logical and/or quantitative relationships between them. The economic model is a simplified, often mathematical, framework designed to illustrate complex processes. Frequently, economic models posit structural parameters. A model may have various exogenous variables, and those variables may change to create various responses by economic variables. Methodological uses of models include investigation, theorizing, and fitting theories to the world.
Overview.
In general terms, economic models have two functions: first as a simplification of and abstraction from observed data, and second as a means of selection of data based on a paradigm of econometric study.
"Simplification" is particularly important for economics given the enormous complexity of economic processes. This complexity can be attributed to the diversity of factors that determine economic activity; these factors include: individual and cooperative decision processes, resource limitations, environmental and geographical constraints, institutional and legal requirements and purely random fluctuations. Economists therefore must make a reasoned choice of which variables and which relationships between these variables are relevant and which ways of analyzing and presenting this information are useful.
"Selection" is important because the nature of an economic model will often determine what facts will be looked at and how they will be compiled. For example, inflation is a general economic concept, but to measure inflation requires a model of behavior, so that an economist can differentiate between changes in relative prices and changes in price that are to be attributed to inflation.
In addition to their professional academic interest, uses of models include:
A model establishes an "argumentative framework" for applying logic and mathematics that can be independently discussed and tested and that can be applied in various instances. Policies and arguments that rely on economic models have a clear basis for soundness, namely the validity of the supporting model.
Economic models in current use do not pretend to be "theories of everything economic"; any such pretensions would immediately be thwarted by computational infeasibility and the incompleteness or lack of theories for various types of economic behavior. Therefore, conclusions drawn from models will be approximate representations of economic facts. However, properly constructed models can remove extraneous information and isolate useful approximations of key relationships. In this way more can be understood about the relationships in question than by trying to understand the entire economic process.
The details of model construction vary with type of model and its application, but a generic process can be identified. Generally, any modelling process has two steps: generating a model, then checking the model for accuracy (sometimes called diagnostics). The diagnostic step is important because a model is only useful to the extent that it accurately mirrors the relationships that it purports to describe. Creating and diagnosing a model is frequently an iterative process in which the model is modified (and hopefully improved) with each iteration of diagnosis and respecification. Once a satisfactory model is found, it should be double checked by applying it to a different data set.
Types of models.
According to whether all the model variables are deterministic, economic models can be classified as stochastic or non-stochastic models; according to whether all the variables are quantitative, economic models are classified as discrete or continuous choice models; according to the model's intended purpose/function, it can be classified as quantitative or qualitative; according to the model's ambit, it can be classified as a general equilibrium model, a partial equilibrium model, or even a non-equilibrium model; according to the economic agent's characteristics, models can be classified as rational agent models, representative agent models, etc.
At a more practical level, quantitative modelling is applied to many areas of economics and several methodologies have evolved more or less independently of each other. As a result, no overall model taxonomy is naturally available. We can nonetheless provide a few examples that illustrate some particularly relevant points of model construction.
algebraic sum of inflows = sinks − sources
This principle is certainly true for money and it is the basis for national income accounting. Accounting models are true by convention; that is, any experimental failure to confirm them would be attributed to fraud, arithmetic error or an extraneous injection (or destruction) of cash, which we would interpret as showing the experiment was conducted improperly.
formula_0
where formula_1 is the price that a product commands in the market if it is supplied at the rate formula_2, formula_3 is the revenue obtained from selling the product, formula_4 is the cost of bringing the product to market at the rate formula_2, and formula_5 is the tax that the firm must pay per unit of the product sold.
The profit maximization assumption states that a firm will produce at the output rate "x" if that rate maximizes the firm's profit. Using differential calculus we can obtain conditions on "x" under which this holds. The first order maximization condition for "x" is
formula_6
Regarding "x" as an implicitly defined function of "t" by this equation (see implicit function theorem), one concludes that the derivative of "x" with respect to "t" has the same sign as
formula_7
which is negative if the second order conditions for a local maximum are satisfied.
Thus the profit maximization model predicts something about the effect of taxation on output, namely that output decreases with increased taxation. If the predictions of the model fail, we conclude that the profit maximization hypothesis was false; this should lead to alternate theories of the firm, for example based on bounded rationality.
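A short symbolic sketch of this comparative-statics argument, using an arbitrarily chosen linear inverse demand and quadratic cost purely for illustration:

```python
import sympy as sp

x, t = sp.symbols('x t', positive=True)
p = 10 - x                 # illustrative inverse demand p(x)
C = x ** 2                 # illustrative cost function C(x)
profit = x * p - C - t * x

first_order = sp.Eq(sp.diff(profit, x), 0)     # first-order condition for a maximum
x_star = sp.solve(first_order, x)[0]           # optimal output as a function of the tax

print(x_star)                # 5/2 - t/4
print(sp.diff(x_star, t))    # -1/4 < 0: output falls as the per-unit tax rises
```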
Borrowing a notion apparently first used in economics by Paul Samuelson, this model of taxation and the predicted dependency of output on the tax rate illustrates an "operationally meaningful theorem"; that is, one requiring some economically meaningful assumption that is falsifiable under certain conditions.
Problems with economic models.
Most economic models rest on a number of assumptions that are not entirely realistic. For example, agents are often assumed to have perfect information, and markets are often assumed to clear without friction. Or, the model may omit issues that are important to the question being considered, such as externalities. Any analysis of the results of an economic model must therefore consider the extent to which these results may be compromised by inaccuracies in these assumptions, and a large literature has grown up discussing problems with economic models, or at least asserting that their results are unreliable.
History.
One of the major problems addressed by economic models has been understanding economic growth. An early attempt to provide a technique to approach this came from the French physiocratic school in the eighteenth century. Among these economists, François Quesnay was known particularly for his development and use of tables he called "Tableaux économiques". These tables have in fact been interpreted in more modern terminology as a Leontiev model, see the Phillips reference below.
All through the 18th century (that is, well before the founding of modern political economy, conventionally marked by Adam Smith's 1776 Wealth of Nations), simple probabilistic models were used to understand the economics of insurance. This was a natural extrapolation of the theory of gambling, and played an important role both in the development of probability theory itself and in the development of actuarial science. Many of the giants of 18th century mathematics contributed to this field. Around 1730, De Moivre addressed some of these problems in the 3rd edition of "The Doctrine of Chances". Even earlier (1709), Nicolas Bernoulli studied problems related to savings and interest in the Ars Conjectandi. In 1730, Daniel Bernoulli studied "moral probability" in his book Mensura Sortis, where he introduced what would today be called "logarithmic utility of money" and applied it to gambling and insurance problems, including a solution of the paradoxical Saint Petersburg problem. All of these developments were summarized by Laplace in his Analytical Theory of Probabilities (1812). Thus, by the time David Ricardo came along, he had a well-established mathematical basis to draw from.
Tests of macroeconomic predictions.
In the late 1980s, the Brookings Institution compared 12 leading macroeconomic models available at the time. They compared the models' predictions for how the economy would respond to specific economic shocks (allowing the models to control for all the variability in the real world; this was a test of model vs. model, not a test against the actual outcome). Although the models simplified the world and started from stable, known, common parameters, the various models gave significantly different answers. For instance, in calculating the impact of a monetary loosening on output, some models estimated a 3% change in GDP after one year, one gave almost no change, and the rest were spread in between.
Partly as a result of such experiments, modern central bankers no longer have as much confidence that it is possible to 'fine-tune' the economy as they had in the 1960s and early 1970s. Modern policy makers tend to use a less activist approach, explicitly because they lack confidence that their models will actually predict where the economy is going, or the effect of any shock upon it. The new, more humble, approach sees danger in dramatic policy changes based on model predictions, because of several practical and theoretical limitations in current macroeconomic models; in addition to the theoretical pitfalls listed above, some problems specific to aggregate modelling are:
Comparison with models in other sciences.
Complex systems specialist and mathematician David Orrell wrote on this issue in his book Apollo's Arrow and explained that the weather, human health and economics use similar methods of prediction (mathematical models). Their systems—the atmosphere, the human body and the economy—also have similar levels of complexity. He found that forecasts fail because the models suffer from two problems: (i) they cannot capture the full detail of the underlying system, so rely on approximate equations; (ii) they are sensitive to small changes in the exact form of these equations. This is because complex systems like the economy or the climate consist of a delicate balance of opposing forces, so a slight imbalance in their representation has big effects. Thus, predictions of things like economic recessions are still highly inaccurate, despite the use of enormous models running on fast computers.
Effects of deterministic chaos on economic models.
Economic and meteorological simulations may share a fundamental limit to their predictive powers: chaos. Although the modern mathematical work on chaotic systems began in the 1970s the danger of chaos had been identified and defined in "Econometrica" as early as 1958:
"Good theorising consists to a large extent in avoiding assumptions ... [with the property that] a small change in what is posited will seriously affect the conclusions."
(William Baumol, Econometrica, 26 "see": "Economics on the Edge of Chaos").
It is straightforward to design economic models susceptible to butterfly effects of initial-condition sensitivity.
However, the econometric research program to identify which variables are chaotic (if any) has largely concluded that aggregate macroeconomic variables probably do not behave chaotically. This would mean that refinements to the models could ultimately produce reliable long-term forecasts. However, the validity of this conclusion has generated two challenges:
More recently, chaos (or the butterfly effect) has been identified as less significant than previously thought to explain prediction errors. Rather, the predictive power of economics and meteorology would mostly be limited by the models themselves and the nature of their underlying systems (see Comparison with models in other sciences above).
Critique of hubris in planning.
A key strand of free market economic thinking is that the market's invisible hand guides an economy to prosperity more efficiently than central planning using an economic model. One reason, emphasized by Friedrich Hayek, is the claim that many of the true forces shaping the economy can never be captured in a single plan. This is an argument that cannot be made through a conventional (mathematical) economic model because it says that there are critical systemic-elements that will always be omitted from any top-down analysis of the economy.
Notes.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": " \\pi(x,t) = x p(x) - C(x) - t x \\quad"
},
{
"math_id": 1,
"text": "p(x)"
},
{
"math_id": 2,
"text": "x"
},
{
"math_id": 3,
"text": "xp(x)"
},
{
"math_id": 4,
"text": "C(x)"
},
{
"math_id": 5,
"text": "t"
},
{
"math_id": 6,
"text": " \\frac{\\partial \\pi(x,t)}{\\partial x} =\\frac{\\partial (x p(x) - C(x))}{\\partial x} -t= 0 "
},
{
"math_id": 7,
"text": " \\frac{\\partial^2 (x p(x) - C(x))}{\\partial^2 x}={\\partial^2\\pi(x,t)\\over \\partial x^2},"
}
] |
https://en.wikipedia.org/wiki?curid=638834
|
63904912
|
Ultrabornological space
|
In functional analysis, a topological vector space (TVS) formula_0 is called ultrabornological if every bounded linear operator from formula_0 into another TVS is necessarily continuous. A general version of the closed graph theorem holds for ultrabornological spaces.
Ultrabornological spaces were introduced by Alexander Grothendieck (Grothendieck [1955, p. 17] "espace du type (β)").
Definitions.
Let formula_0 be a topological vector space (TVS).
Preliminaries.
A disk is a convex and balanced set.
A disk in a TVS formula_0 is called bornivorous if it absorbs every bounded subset of formula_1
A linear map between two TVSs is called infrabounded if it maps Banach disks to bounded disks.
A disk formula_2 in a TVS formula_0 is called infrabornivorous if it satisfies any of the following equivalent conditions:
while if formula_0 locally convex then we may add to this list:
while if formula_0 locally convex and Hausdorff then we may add to this list:
Ultrabornological space.
A TVS formula_0 is ultrabornological if it satisfies any of the following equivalent conditions:
while if formula_0 is a locally convex space then we may add to this list:
while if formula_0 is a Hausdorff locally convex space then we may add to this list:
Properties.
Every locally convex ultrabornological space is a barrelled space, a quasi-ultrabarrelled space, and a bornological space, but there exist bornological spaces that are not ultrabornological.
Examples and sufficient conditions.
The finite product of locally convex ultrabornological spaces is ultrabornological. Inductive limits of ultrabornological spaces are ultrabornological.
Every Hausdorff sequentially complete bornological space is ultrabornological. Thus every complete Hausdorff bornological space is ultrabornological. In particular, every Fréchet space is ultrabornological.
The strong dual space of a complete Schwartz space is ultrabornological.
Every Hausdorff bornological space that is quasi-complete is ultrabornological.
There exist ultrabarrelled spaces that are not ultrabornological.
There exist ultrabornological spaces that are not ultrabarrelled.
References.
<templatestyles src="Reflist/styles.css" />
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "X"
},
{
"math_id": 1,
"text": "X."
},
{
"math_id": 2,
"text": "D"
}
] |
https://en.wikipedia.org/wiki?curid=63904912
|
6390507
|
Multivalued dependency
|
In database theory, a multivalued dependency is a full constraint between two sets of attributes in a relation.
In contrast to the functional dependency, the multivalued dependency requires that certain tuples be present in a relation. Therefore, a multivalued dependency is a special case of "tuple-generating dependency". The multivalued dependency plays a role in the 4NF database normalization.
A multivalued dependency is a special case of a join dependency, with only two sets of values involved, i.e. it is a binary join dependency.
A multivalued dependency exists when there are at least three attributes (such as X, Y and Z) in a relation and, for a value of X, there is a well-defined set of values of Y and a well-defined set of values of Z; however, the set of values of Y is independent of the set of values of Z, and vice versa.
Formal definition.
The formal definition is as follows:
Let formula_0 be a relation schema and let formula_1 and formula_2 be sets of attributes. The multivalued dependency formula_3 ("formula_4 multidetermines formula_5") holds on formula_0 if, for any legal relation formula_6 and all pairs of tuples formula_7 and formula_8 in formula_9 such that formula_10, there exist tuples formula_11 and formula_12 in formula_9 such that:
formula_13
Informally, if one denotes by formula_14 the tuple having values for formula_15 formula_16 formula_17 collectively equal to formula_18 formula_19 formula_20, then whenever the tuples formula_21 and formula_22 exist in formula_9, the tuples formula_23 and formula_24 should also exist in formula_9.
The multivalued dependency can be schematically depicted as shown below:
formula_25
Example.
Consider this example of a relation of university courses, the books recommended for the course, and the lecturers who will be teaching the course:
Because the lecturers attached to the course and the books attached to the course are independent of each other, this database design has a multivalued dependency; if we were to add a new book to the AHA course, we would have to add one record for each of the lecturers on that course, and vice versa.
Put formally, there are two multivalued dependencies in this relation: {course} formula_26 {book} and equivalently {course} formula_26 {lecturer}.
Databases with multivalued dependencies thus exhibit redundancy. In database normalization, fourth normal form requires that for every nontrivial multivalued dependency "X" formula_26 "Y", "X" is a superkey. A multivalued dependency "X" formula_26 "Y" is trivial if "Y" is a subset of "X", or if formula_27 is the whole set of attributes of the relation.
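A small sketch of how the defining condition can be checked mechanically (the relation below is hypothetical illustration data, and holds_mvd is not a standard library function):

```python
from itertools import product

def holds_mvd(relation, attrs_x, attrs_y):
    """Check X ->-> Y: for every pair of tuples agreeing on X, the tuple formed by
    taking Y-values from one and the remaining values from the other must also exist."""
    rows = [dict(r) for r in relation]
    rest = [a for a in rows[0] if a not in attrs_x and a not in attrs_y]
    as_key = lambda r: tuple(sorted(r.items()))
    table = {as_key(r) for r in rows}
    for t1, t2 in product(rows, repeat=2):
        if all(t1[a] == t2[a] for a in attrs_x):
            t3 = dict(t1)                           # X- and Y-values from t1 ...
            t3.update({a: t2[a] for a in rest})     # ... remaining values from t2
            if as_key(t3) not in table:
                return False
    return True

# Hypothetical course data: books and lecturers attached to a course independently.
rows = [
    {"course": "AHA", "book": "Silberschatz", "lecturer": "John"},
    {"course": "AHA", "book": "Silberschatz", "lecturer": "William"},
    {"course": "AHA", "book": "Nederpelt", "lecturer": "John"},
    {"course": "AHA", "book": "Nederpelt", "lecturer": "William"},
]
print(holds_mvd(rows, {"course"}, {"book"}))        # True: {course} ->-> {book} holds
print(holds_mvd(rows[:-1], {"course"}, {"book"}))   # False: one required combination is missing
```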
Properties.
The following also involve functional dependencies:
The above rules are sound and complete.
|
[
{
"math_id": 0,
"text": "R"
},
{
"math_id": 1,
"text": "\\alpha \\subseteq R"
},
{
"math_id": 2,
"text": "\\beta \\subseteq R"
},
{
"math_id": 3,
"text": "\\alpha \\twoheadrightarrow \\beta"
},
{
"math_id": 4,
"text": "\\alpha"
},
{
"math_id": 5,
"text": "\\beta"
},
{
"math_id": 6,
"text": "r(R)"
},
{
"math_id": 7,
"text": "t _1"
},
{
"math_id": 8,
"text": "t _2"
},
{
"math_id": 9,
"text": "r"
},
{
"math_id": 10,
"text": "t _1[\\alpha]=t _2[\\alpha]"
},
{
"math_id": 11,
"text": "t _3"
},
{
"math_id": 12,
"text": "t _4"
},
{
"math_id": 13,
"text": "\n\\begin{matrix}\nt_1[\\alpha] = t_2[\\alpha] = t_3[\\alpha] = t_4[\\alpha]\\\\\nt_1[\\beta] = t_3[\\beta]\\\\\nt_2[\\beta] = t_4[\\beta]\\\\\nt_1[R\\setminus(\\alpha\\cup\\beta)] = t_4[R\\setminus(\\alpha\\cup\\beta)]\\\\\nt_2[R\\setminus(\\alpha\\cup\\beta)] = t_3[R\\setminus(\\alpha\\cup\\beta)]\n\\end{matrix}\n"
},
{
"math_id": 14,
"text": "(x,y,z)"
},
{
"math_id": 15,
"text": "\\alpha,"
},
{
"math_id": 16,
"text": "\\beta,"
},
{
"math_id": 17,
"text": "R - \\alpha - \\beta"
},
{
"math_id": 18,
"text": "x,"
},
{
"math_id": 19,
"text": "y,"
},
{
"math_id": 20,
"text": "z"
},
{
"math_id": 21,
"text": "(a,b,c)"
},
{
"math_id": 22,
"text": "(a,d,e)"
},
{
"math_id": 23,
"text": "(a,b,e)"
},
{
"math_id": 24,
"text": "(a,d,c)"
},
{
"math_id": 25,
"text": "\n\\begin{matrix}\n\\text{tuple} & \\alpha & \\beta & R\\setminus(\\alpha\\cup\\beta) \\\\\nt_1 & a_1 .. a_n & b_1 .. b_m & d_1 .. d_k \\\\\nt_2 & a_1 .. a_n & c_1 .. c_m & e_1 .. e_k \\\\\nt_3 & a_1 .. a_n & b_1 .. b_m & e_1 .. e_k \\\\\nt_4 & a_1 .. a_n & c_1 .. c_m & d_1 .. d_k\n\\end{matrix}\n"
},
{
"math_id": 26,
"text": "\\twoheadrightarrow"
},
{
"math_id": 27,
"text": "X \\cup Y"
},
{
"math_id": 28,
"text": "\\alpha \\twoheadrightarrow R - \\beta"
},
{
"math_id": 29,
"text": "\\gamma \\subseteq \\delta"
},
{
"math_id": 30,
"text": "\\alpha \\delta \\twoheadrightarrow \\beta \\gamma"
},
{
"math_id": 31,
"text": "\\beta \\twoheadrightarrow \\gamma"
},
{
"math_id": 32,
"text": "\\alpha \\twoheadrightarrow \\gamma - \\beta"
},
{
"math_id": 33,
"text": "\\alpha \\rightarrow \\beta"
},
{
"math_id": 34,
"text": "\\beta \\rightarrow \\gamma"
},
{
"math_id": 35,
"text": "\\rightarrow "
},
{
"math_id": 36,
"text": "\\twoheadrightarrow "
},
{
"math_id": 37,
"text": "\\subseteq"
},
{
"math_id": 38,
"text": "\\twoheadrightarrow "
},
{
"math_id": 39,
"text": "\\exist "
},
{
"math_id": 40,
"text": "\\cap"
},
{
"math_id": 41,
"text": "\\empty "
},
{
"math_id": 42,
"text": "\\rightarrow"
},
{
"math_id": 43,
"text": "\\subseteq "
},
{
"math_id": 44,
"text": "R - \\beta"
},
{
"math_id": 45,
"text": "R = \\alpha \\cup \\beta"
},
{
"math_id": 46,
"text": "\\beta \\subseteq \\alpha"
}
] |
https://en.wikipedia.org/wiki?curid=6390507
|
63906911
|
Bottema's theorem
|
Theorem about the midpoint of a line connecting squares on two sides of a triangle
Bottema's theorem is a theorem in plane geometry by the Dutch mathematician Oene Bottema (Groningen, 1901–1992).
The theorem can be stated as follows: in any given triangle formula_2, construct squares on any two adjacent sides, for example formula_3 and formula_4. The midpoint of the line segment that connects the vertices of the squares opposite the common vertex, "formula_0", of the two sides of the triangle is independent of the location of formula_0.
The theorem is true when the squares are constructed in one of the following ways:
If formula_6 is the projection of formula_1 onto formula_7, then formula_8.
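A quick numerical check of the theorem, a sketch using complex numbers and one fixed orientation for the two squares:

```python
# Points are complex numbers; multiplying a vector by 1j rotates it by 90 degrees.
def bottema_midpoint(A, B, C):
    """Midpoint of the segment joining the square vertices opposite C
    (both squares erected with the same, consistently chosen orientation)."""
    A1 = A - 1j * (A - C)   # vertex opposite C of the square on side CA
    B1 = B + 1j * (B - C)   # vertex opposite C of the square on side CB
    return (A1 + B1) / 2

A, B = 0 + 0j, 4 + 0j
for C in (1 + 2j, -3 + 5j, 2 - 7j):
    print(bottema_midpoint(A, B, C))   # always (2+2j): the midpoint does not depend on C
```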
If the squares are replaced by regular polygons of the same type, then a generalized Bottema theorem is obtained:
In any given triangle formula_2 construct two regular polygons on two sides formula_3 and formula_4.
Take the points formula_9 and formula_10 on the circumcircles of the polygons, which are diametrically opposed of the common vertex formula_0. Then, the midpoint of the line segment formula_11 is independent of the location of formula_0.
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "C"
},
{
"math_id": 1,
"text": "M"
},
{
"math_id": 2,
"text": "ABC"
},
{
"math_id": 3,
"text": "AC"
},
{
"math_id": 4,
"text": "BC"
},
{
"math_id": 5,
"text": "A"
},
{
"math_id": 6,
"text": "S"
},
{
"math_id": 7,
"text": "AB"
},
{
"math_id": 8,
"text": "AS=BS=MS"
},
{
"math_id": 9,
"text": "D_1"
},
{
"math_id": 10,
"text": "D_2"
},
{
"math_id": 11,
"text": "D_1D_2"
}
] |
https://en.wikipedia.org/wiki?curid=63906911
|
63907968
|
Borel graph theorem
|
In functional analysis, the Borel graph theorem is a generalization of the closed graph theorem that was proven by L. Schwartz.
The Borel graph theorem shows that the closed graph theorem is valid for linear maps defined on and valued in most spaces encountered in analysis.
Statement.
A topological space is called a Polish space if it is a separable complete metrizable space; a Souslin space is the continuous image of a Polish space. The weak dual of a separable Fréchet space and the strong dual of a separable Fréchet–Montel space are Souslin spaces. Also, the space of distributions and all Lp-spaces over open subsets of Euclidean space as well as many other spaces that occur in analysis are Souslin spaces. The Borel graph theorem states:
Let formula_0 and formula_1 be Hausdorff locally convex spaces and let formula_2 be linear. If formula_0 is the inductive limit of an arbitrary family of Banach spaces, if formula_1 is a Souslin space, and if the graph of formula_3 is a Borel set in formula_4 then formula_3 is continuous.
Generalization.
An improvement upon this theorem, proved by A. Martineau, uses K-analytic spaces. A topological space formula_0 is called a formula_5 if it is the countable intersection of countable unions of compact sets. A Hausdorff topological space formula_1 is called K-analytic if it is the continuous image of a formula_5 space (that is, if there is a formula_5 space formula_0 and a continuous map of formula_0 onto formula_1). Every compact set is K-analytic so that there are non-separable K-analytic spaces.
Also, every Polish, Souslin, and reflexive Fréchet space is K-analytic as is the weak dual of a Fréchet space. The generalized theorem states:
Let formula_0 and formula_1 be locally convex Hausdorff spaces and let formula_2 be linear. If formula_0 is the inductive limit of an arbitrary family of Banach spaces, if formula_1 is a K-analytic space, and if the graph of formula_3 is closed in formula_4 then formula_3 is continuous.
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "X"
},
{
"math_id": 1,
"text": "Y"
},
{
"math_id": 2,
"text": "u : X \\to Y"
},
{
"math_id": 3,
"text": "u"
},
{
"math_id": 4,
"text": "X \\times Y,"
},
{
"math_id": 5,
"text": "K_{\\sigma \\delta}"
}
] |
https://en.wikipedia.org/wiki?curid=63907968
|
63909474
|
TEM-function
|
In petroleum engineering, TEM (true effective mobility), also called the TEM-function, is a criterion used to characterize the dynamic two-phase flow characteristics of rocks (or dynamic rock quality). TEM is a function of relative permeability, porosity, absolute permeability and fluid viscosity, and can be determined for each fluid phase separately. The TEM-function has been derived from Darcy's law for multiphase flow.
formula_0
in which formula_1 is the absolute permeability, formula_2 is the relative permeability, φ is the porosity, and μ is the fluid viscosity.
Rocks with better fluid dynamics (i.e., experiencing a lower pressure drop in conducting a fluid phase) have higher TEM versus saturation curves. Rocks with lower TEM versus saturation curves resemble low quality systems.
The TEM-function in analyzing relative permeability data is analogous to the Leverett J-function in analyzing capillary pressure data. Furthermore, the TEM-function in two-phase flow systems is an extension of the RQI (rock quality index) for single-phase systems.
Also, TEM-function can be used for averaging relative permeability curves (for each fluid phase separately, i.e., water, oil, gas, CO2).
formula_3
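As an illustration (not part of the source article), the two expressions above can be evaluated directly. The following Python sketch assumes consistent units (permeability in millidarcy, porosity as a fraction, viscosity in centipoise) and uses purely hypothetical sample values.

```python
def tem(k, kr, phi, mu):
    """True effective mobility of one fluid phase: TEM = k*kr/(phi*mu)."""
    return k * kr / (phi * mu)

def average_kr(samples):
    """TEM-based average relative permeability over rock samples.

    Each sample is a tuple (k, kr, phi, mu); the average follows
    sum(TEM_i) / sum((k/(phi*mu))_i) from the expression above.
    """
    num = sum(tem(k, kr, phi, mu) for (k, kr, phi, mu) in samples)
    den = sum(k / (phi * mu) for (k, kr, phi, mu) in samples)
    return num / den

# Hypothetical samples at one saturation: (k [mD], kr [-], phi [-], mu [cP])
samples = [(120.0, 0.35, 0.22, 1.0), (80.0, 0.28, 0.18, 1.0)]
print(average_kr(samples))
```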
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "\\mathit{TEM} = \\frac{k k_{\\mathit{r}}}{\\phi \\mu}"
},
{
"math_id": 1,
"text": "k"
},
{
"math_id": 2,
"text": "k_\\mathit{r}"
},
{
"math_id": 3,
"text": "\\text{Average kr} = \\frac{\\sum_{i=1}^n\\mathit{TEM}_i}{\\sum_{i=1}^n\\left(\\frac{k}{\\phi \\mu}\\right)_i} = \\frac{\\sum_{i=1}^n\\left(\\frac{k k_{\\mathit{r}}}{\\phi \\mu}\\right)_i}{\\sum_{i=1}^n\\left(\\frac{k}{\\phi \\mu}\\right)_i}"
}
] |
https://en.wikipedia.org/wiki?curid=63909474
|
63910051
|
Glycan-protein interactions
|
Class of biological intermolecular interactions
Glycan-Protein interactions represent a class of biomolecular interactions that occur between free or protein-bound glycans and their cognate binding partners. Intramolecular glycan-protein (protein-glycan) interactions occur between glycans and proteins that they are covalently attached to. Together with protein-protein interactions, they form a mechanistic basis for many essential cell processes, especially for cell-cell interactions and host-cell interactions. For instance, SARS-CoV-2, the causative agent of COVID-19, employs its extensively glycosylated spike (S) protein to bind to the ACE2 receptor, allowing it to enter host cells. The spike protein is a trimeric structure, with each subunit containing 22 N-glycosylation sites, making it an attractive target for vaccine research.
Glycosylation, i.e., the addition of glycans (a generic name for monosaccharides and oligosaccharides) to a protein, is one of the major post-translational modifications of proteins, contributing to the enormous biological complexity of life. Indeed, three different hexoses could theoretically produce from 1,056 to 27,648 unique trisaccharides, in contrast to only 6 peptides or oligonucleotides formed from 3 amino acids or 3 nucleotides respectively. In contrast to template-driven protein biosynthesis, the "language" of glycosylation is still unknown, making glycobiology a hot topic of current research given the prevalence of glycans in living organisms.
The study of glycan-protein interactions provides insight into the mechanisms of cell signaling and allows the development of better diagnostic tools for many diseases, including cancer. Indeed, there are no known types of cancer that do not involve erratic patterns of protein glycosylation.
Thermodynamics of Binding.
The binding of glycan-binding proteins (GBPs) to glycans could be modeled with simple equilibrium. Denoting glycans as formula_0 and proteins as formula_1:
formula_2
With an associated equilibrium constant of
formula_3
Which is rearranged to give dissociation constant formula_4 following biochemical conventions:
formula_5
Given that many GBPs exhibit multivalency, this model may be expanded to account for multiple equilibria:
formula_6
formula_7
formula_8
formula_9
Denoting cumulative equilibrium of binding with formula_10 ligands as
formula_11
With corresponding equilibrium constant:
formula_12
And writing material balance for protein (formula_13 denotes the total concentration of protein):
formula_14
Expressing the terms through an equilibrium constant, a final result is found:
formula_15
The concentration of free protein is, thus:
formula_16
If formula_17, i.e. there is only one carbohydrate receptor domain, the equation reduces to
formula_18
With increasing formula_10 the concentration of free protein decreases; hence, the apparent formula_19 decreases too.
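As a numerical illustration (not from the source article), the free-protein expression above can be evaluated directly; the total protein concentration, glycan concentration, and cumulative constants used below are hypothetical values chosen only to show the shape of the calculation.

```python
def free_protein(c_p, g, betas):
    """Free protein concentration [P] = c_P / (1 + sum_i beta_i * [G]^i).

    c_p   : total protein concentration
    g     : free glycan concentration
    betas : cumulative association constants beta_1 ... beta_n
    """
    return c_p / (1.0 + sum(b * g**i for i, b in enumerate(betas, start=1)))

# Monovalent case (n = 1): reduces to c_P / (1 + beta_1*[G])
print(free_protein(c_p=1e-6, g=1e-3, betas=[1e3]))
# Trivalent case with illustrative cumulative constants: [P] drops much faster
print(free_protein(c_p=1e-6, g=1e-3, betas=[1e3, 1e6, 1e9]))
```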
Binding with aromatic rings.
Chemical intuition suggests that glycan-binding sites may be enriched in polar amino acid residues that form non-covalent interactions, such as hydrogen bonds, with polar carbohydrates. Indeed, a statistical analysis of carbohydrate-binding pockets shows that aspartic acid and asparagine residues are present twice as often as would be predicted by chance. Surprisingly, there is an even stronger preference for aromatic amino acids: tryptophan shows a 9-fold increase in prevalence, tyrosine a 3-fold increase, and histidine a 2-fold increase. It has been shown that the underlying force is the formula_20 interaction between the aromatic formula_21 system and the formula_22 bonds of the carbohydrate, as shown in "Figure 1". A formula_20 interaction is identified if formula_23° and the formula_20 distance (the distance from formula_24 to formula_25) is less than 4.5 Å.
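The geometric criterion can be checked directly from atomic coordinates. The sketch below is illustrative only: it takes formula_25 to be the aromatic ring centroid and measures the angle between the C–H bond and the C-to-centroid direction, which are assumptions about the convention rather than definitions taken from the cited analysis.

```python
import numpy as np

def is_ch_pi_contact(c, h, ring_atoms, max_dist=4.5, max_theta_deg=40.0):
    """Flag a putative CH-pi contact from coordinates in angstroms.

    c, h       : coordinates of the carbohydrate C and its H atom
    ring_atoms : list of coordinates of the aromatic ring atoms
    """
    c, h = np.asarray(c, float), np.asarray(h, float)
    centroid = np.asarray(ring_atoms, float).mean(axis=0)      # taken as X
    dist = np.linalg.norm(centroid - c)                        # C-to-X distance
    ch = (h - c) / np.linalg.norm(h - c)                       # C-H unit vector
    cx = (centroid - c) / dist                                  # C-X unit vector
    theta = np.degrees(np.arccos(np.clip(np.dot(ch, cx), -1.0, 1.0)))
    return dist < max_dist and theta <= max_theta_deg
```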
Effects of stereochemistry.
This formula_20 interaction strongly depends on the stereochemistry of the carbohydrate molecule. For example, consider the top (formula_26) and bottom (formula_27) faces of formula_26-D-Glucose and formula_26-D-Galactose. It has been shown that a single change in the stereochemistry at C4 carbon shifts preference for aromatic residues from formula_26 side (2.7 fold preference for glucose) to the formula_27 side (14 fold preference for galactose).
Effects of electronics.
The comparison of electrostatic surface potentials (ESPs) of aromatic rings in tryptophan, tyrosine, phenylalanine, and histidine suggests that electronic effects also play a role in the binding to glycans (see "Figure 2"). After the electron densities are normalized for surface area, tryptophan still remains the most electron-rich acceptor of formula_20 interactions, suggesting a possible reason for its 9-fold prevalence in carbohydrate-binding pockets. Overall, the electrostatic potential maps follow the prevalence trend of <chem>Trp » Tyr > (Phe) > His</chem>.
Carbohydrate-binding partners.
There are many proteins capable of binding to glycans, including lectins, antibodies, microbial adhesins, viral agglutinins, etc.
Lectins.
"Lectin" is a generic name for proteins with carbohydrate-recognition domains (CRDs). Although the term has become almost synonymous with glycan-binding proteins, it does not include antibodies, which also belong to that class.
Lectins found in plant and fungal cells have been used extensively in research as a tool to detect, purify, and analyze glycans. However, useful lectins usually have sub-optimal specificities. For instance, "Ulex europaeus" agglutinin-1 (UEA-1), a plant-extracted lectin capable of binding to human blood type O antigen, can also bind to unrelated glycans such as 2'-fucosyllactose, GalNAcα1-4(Fucα1-2)Galβ1-4GlcNAc, and Lewis-Y antigen.
Antibodies.
Although antibodies exhibit nanomolar affinities toward protein antigens, the specificity against glycans is very limited. In fact, available antibodies may bind only <4% of the 7000 mammalian glycan antigens; moreover, most of those antibodies have low affinity and exhibit cross-reactivity.
Lambodies.
In contrast with jawed vertebrates, whose immunity is based on variable, diverse, and joining gene segments (VDJs) of immunoglobulins, jawless vertebrates such as lampreys and hagfish create receptor diversity by somatic DNA rearrangement of leucine-rich repeat (LRR) modules that are incorporated into "vlr" genes (variable leukocyte receptors). These LRRs form 3D structures resembling curved solenoids that selectively bind specific glycans.
A study from the University of Maryland has shown that lamprey antibodies (lambodies) could selectively bind to tumor-associated carbohydrate antigens (such as Tn and TFformula_27) at nanomolar affinities. The T-nouvelle antigen (Tn) and TFformula_27 are present in proteins in as many as 90% of different cancer cells after post-translational modification, whereas in healthy cells those antigens are much more complex. A selection of lambodies that could bind to aGPA, a human erythrocyte membrane glycoprotein that is covered with 16 TFformula_27 moieties, through magnetic-activated cell sorting (MACS) and fluorescence-activated cell sorting (FACS) yielded a leucine-rich lambody "VLRB.aGPA.23". This lambody selectively stained (over healthy samples) cells from 14 different types of adenocarcinomas: bladder, esophagus, ovary, tongue, cheek, cervix, liver, nose, nasopharynx, greater omentum, colon, breast, larynx, and lung. Moreover, patients whose tissues stained positive with "VLRB.aGPA.23" had a significantly lower survival rate.
A close look at the crystal structure of "VLRB.aGPA.23" reveals a tryptophan residue at position 187 right over the carbohydrate binding pocket.
Multivalency in structure.
Many glycan binding proteins (GBPs) are oligomeric and typically contain multiple sites for glycan binding (also called carbohydrate-recognition domains). The ability to form multivalent protein-ligand interactions significantly enhances the strength of binding: while formula_19 values for individual CRD-glycan interactions may be in the mM range, the overall affinity of a GBP towards glycans may reach nanomolar or even picomolar ranges. The overall strength of the interaction is described as an "avidity" formula_19 (in contrast with an "affinity" formula_19, which describes a single equilibrium). Sometimes the "avidity" is also called an "apparent" formula_19 to emphasize the non-equilibrium nature of the interaction.
Common oligomerization structures of lectins are shown below. For example, galectins are usually observed as dimers, while intelectins form trimers and pentraxins assemble into pentamers. Larger structures, like hexameric Reg proteins, may assemble into membrane penetrating pores. Collectins may form even more bizarre complexes: bouquets of trimers or even cruciform-like structures (e.g. in SP-D).
Current Research.
Given the importance of glycan-protein interactions, there is ongoing research dedicated to a) the creation of new tools to detect glycan-protein interactions and b) the use of those tools to decipher the so-called sugar code.
Glycan Arrays.
One of the most widely used tools for probing glycan-protein interactions is the glycan array. A glycan array is usually an NHS- or epoxy-activated glass slide on which various glycans are printed robotically. These commercially available arrays may contain up to 600 different glycans, the specificities of which have been extensively studied.
Glycan-protein interactions may be detected by testing proteins of interest (or libraries of those) that bear fluorescent tags. The structure of the glycan-binding protein may be deciphered by several analytical methods based on mass-spectrometry, including MALDI-MS, LC-MS, tandem MS-MS, and/or 2D NMR.
Bioinformatics driven research.
Computational methods have been applied to search for parameters (e.g. residue propensity, hydrophobicity, planarity) that could distinguish glycan-binding proteins from other surface patches. For example, a model trained on 19 non-homologous carbohydrate binding structures was able to predict carbohydrate-binding domains (CRDs) with an accuracy of 65% for non-enzymatic structures and 87% for enzymatic ones. Further studies have employed calculations of Van der Waals energies of protein-probe interactions and amino acid propensities to identify CRDs with 98% specificity at 73% sensitivity. More recent methods can predict CRDs even from protein sequences, by comparing the sequence with those for which structures are already known.
Sugar code.
In contrast with protein studies, where a primary protein structure is unambiguously defined by the sequence of nucleotides (the genetic code), glycobiology still cannot explain how a certain "message" is encoded using carbohydrates or how it is "read" and "translated" by other biological entities.
An interdisciplinary effort, combining chemistry, biology, and biochemistry, studies glycan-protein interactions to see how different sequences of carbohydrates initiate different cellular responses.
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "G"
},
{
"math_id": 1,
"text": "P"
},
{
"math_id": 2,
"text": " Protein (P) + Glycan (G) \\rightleftharpoons PG "
},
{
"math_id": 3,
"text": " K_a = \\frac{[PG]}{[P][G]} "
},
{
"math_id": 4,
"text": "K_d"
},
{
"math_id": 5,
"text": "K_d = \\frac{[P][G]}{[PG]} "
},
{
"math_id": 6,
"text": " P + G \\rightleftharpoons PG "
},
{
"math_id": 7,
"text": " PG + G \\rightleftharpoons PG_2 "
},
{
"math_id": 8,
"text": " \\dots "
},
{
"math_id": 9,
"text": " PG_{n-1} + G \\rightleftharpoons PG_n "
},
{
"math_id": 10,
"text": "i"
},
{
"math_id": 11,
"text": " P + iG \\rightleftharpoons PG_i "
},
{
"math_id": 12,
"text": " \\beta_i = \\frac{[PG_i]}{[P][G]^i} "
},
{
"math_id": 13,
"text": "c_P"
},
{
"math_id": 14,
"text": " c_P = [P] + [PG] + \\dots + [PG_n] "
},
{
"math_id": 15,
"text": " c_P = [P](1 + \\beta_1[G] + \\dots + \\beta_n [G]^n "
},
{
"math_id": 16,
"text": " [P] = \\frac{c_P}{1 + \\sum_{i=1}^{n}{\\beta_i[G]^i}} "
},
{
"math_id": 17,
"text": "n=1"
},
{
"math_id": 18,
"text": " [P] = \\frac{c_P}{1 + \\beta_1 [G]} "
},
{
"math_id": 19,
"text": "K_D"
},
{
"math_id": 20,
"text": "CH-\\pi"
},
{
"math_id": 21,
"text": "\\pi"
},
{
"math_id": 22,
"text": "C-H"
},
{
"math_id": 23,
"text": "\\theta \\leqslant 40"
},
{
"math_id": 24,
"text": "C"
},
{
"math_id": 25,
"text": "X"
},
{
"math_id": 26,
"text": "\\beta"
},
{
"math_id": 27,
"text": "\\alpha"
}
] |
https://en.wikipedia.org/wiki?curid=63910051
|
63915539
|
Fickett–Jacobs cycle
|
The Fickett–Jacobs cycle is a conceptual thermodynamic cycle that allows computation of an upper limit to the amount of mechanical work obtainable from a cycle using an unsteady detonation process (explosive). The Fickett–Jacobs (FJ) cycle is based on Chapman–Jouguet (CJ) theory, an approximation for the detonation wave's velocity during a detonation. This cycle is studied for rotating detonation engines (RDEs), which are considered to be more efficient than classical combustion engines based on the Brayton or Humphrey cycles.
The FJ cycle for detonation is an elaboration of the original ideas of Jacobs (1956). The first to propose applying thermodynamic cycles to detonation was Yakov Zeldovich in 1940. In his work, he concluded that the efficiency of the detonation cycle is slightly larger than that of previous constant-volume combustion cycles. Zeldovich's ideas were not known to Jacobs or Fickett.
Since 1940, serious attempts at detonation-based propulsion systems have been discussed; nevertheless, no practical approach has been found to date. Detonation is the process by which material is very rapidly burned and converted into energy (an extremely high combustion rate). The major difficulty involved in the process is the necessity to rapidly mix the fuel and air at high speeds and to sustain the detonation in a controllable manner.
Thermodynamic Cycle Model.
The FJ cycle is based on a closed piston-cylinder where the reactants and explosion products are constantly contained inside. The explosives, pistons, and cylinder define the closed thermodynamic system. In addition, the cylinder and the pistons are assumed to be rigid, massless, and adiabatic.
The ideal FJ cycle consists of five processes:
The entire cycle is shown in Figure 1.
The net work done by the system is equal to the sum of the work done during each step of the cycle. Since all processes in the cycles shown in Figure 2 are reversible, except for the detonation process, the work computed is an upper limit to the work that can be obtained during any cyclic process with a propagating detonation as the combustion step.
Mathematical interpretation of the cycle's total work.
In the following equations, all subscripts correspond to the different steps in the Fickett–Jacobs cycle as shown in Figure 2. In addition, a representation of the work done by the system and the external work applied on the system is shown in Figure 1.
Initially, the work done to the system to begin a cycling detonation is
formula_0
Where Pi is the initial pressure applied over the piston area A, and up is the piston velocity acting over the time formula_1. The time to reach the end of the cylinder is calculated using the length L of the cylinder and the propagation wave's velocity (approximated by the Chapman–Jouguet velocity), UCJ: formula_2. Using the fact that the mass of the explosive is formula_3, where ρ is the explosive's density, the equation above becomes
formula_4
The work done by the system (detonation) per unit mass of explosive is
formula_5
The work done by the adiabatic expansion of the reaction products is
formula_6
Where P is the pressure on the isentrope through state 1, and V2 is the specific volume on that isentrope at the initial pressure P0.
The work done through steps 2 to 0 (including 3) was considered by Fickett to be negligible; nevertheless, it is added in order to complete the thermodynamic cycle and remain consistent with the First Law of Thermodynamics. The additional work is
formula_7
The total work done by the system is then
formula_8
Where formula_9 is the enthalpy difference between steps 0 to 2 (passing through step 3).
Thermal Efficiency.
The thermal efficiency of the FJ cycle is the ratio of the net work done to the specific heat of combustion.
formula_10
Where qc is the specific heat of combustion, defined as the enthalpy difference between the reactants and the products at initial pressure and temperature: formula_11.
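A minimal sketch (with purely hypothetical input values, chosen only to show the structure of the calculation) assembling the work terms and thermal efficiency from the expressions above, per unit mass of explosive and in SI units:

```python
def fj_cycle(P_i, u_p, rho, U_cj, H0, H2, H3):
    """Return (net specific work, thermal efficiency) of the FJ cycle."""
    W_i = -P_i * u_p / (rho * U_cj)   # work done on the system to start the cycle
    W_tot = W_i + (H0 - H2)           # total work, W_i + W_01 + W_12 + W_20
    q_c = H0 - H3                     # specific heat of combustion
    return W_tot, W_tot / q_c

# Hypothetical values only, to show the call signature (J/kg for enthalpies):
work, eta = fj_cycle(P_i=1.0e5, u_p=1000.0, rho=1.2, U_cj=1800.0,
                     H0=0.0, H2=-4.0e6, H3=-5.0e6)
print(work, eta)
```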
The FJ cycle overall shows the amount of work available from a detonating system.
The thermal efficiency for the FJ cycle is shown to be dependent on its initial pressure. The thermal efficiency decreases when the initial pressure decreases due to the increase in dissociation at low pressures. Dissociation is an endothermic process, hence reducing the amount of energy released in a detonation or the maximum amount of work that can be obtained from the FJ cycle. Exothermic reactions are encouraged when the initial pressure of the system is increased, hence increasing the amount of work generated during the FJ cycle.
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": " W_i = -P_i A u_p(t-t_0) "
},
{
"math_id": 1,
"text": " t-t_0 "
},
{
"math_id": 2,
"text": " t-t_0 = \\frac{L}{U_{CJ}}"
},
{
"math_id": 3,
"text": " M = \\rho L A"
},
{
"math_id": 4,
"text": " W_i = - \\frac{P_i u_p}{\\rho U_{CJ}}"
},
{
"math_id": 5,
"text": " W_{01} = \\frac{1}{2}u_{p}^2 "
},
{
"math_id": 6,
"text": "W_{12} =\n\\int_{V_1}^{V_2}\nPdV"
},
{
"math_id": 7,
"text": " W_{20} = -P_0(V_0 - V_2) "
},
{
"math_id": 8,
"text": " W_{tot} = W_i + W_{01} + W_{12} + W_{20} = W_i + H_0 - H_2 "
},
{
"math_id": 9,
"text": " H_0 - H_2 "
},
{
"math_id": 10,
"text": " \\eta = \\frac{W_{tot}}{q_c} = \\frac{H_0 - H_2}{q_c} "
},
{
"math_id": 11,
"text": " q_c = H_0 - H_3 "
}
] |
https://en.wikipedia.org/wiki?curid=63915539
|
63918514
|
Return on brand
|
Indicator of the effectiveness of brand use by companies
The return on brand (ROB) is an indicator used to measure brand management performance. It is an indicator of the effectiveness of brand use in terms of generating net income, a special case of return on assets.
ROB is calculated as the ratio of net income to brand value:
formula_0
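A trivial numerical illustration of the ratio, using hypothetical figures only:

```python
# Hypothetical annual figures, in the same currency
net_income = 12_000_000
brand_value = 150_000_000

rob = net_income / brand_value
print(f"ROB = {rob:.3f}")  # 0.080, i.e. 8 cents of net income per unit of brand value
```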
Usage.
Return on brand can be used in multi-criteria models for assessing the effectiveness of branding, as well as intellectual capital (since the brand is a component of relational capital).
It is believed that if the brand value of a company increases, its net profit should also increase; otherwise the value of ROB will decrease, indicating a decline in the effectiveness of brand management in terms of generating net profit. Conversely, if the brand value falls and this does not lead to a decrease in the net profit of the enterprise, the ROB value increases, indicating a relative improvement in brand management efficiency. The change in brand value by itself allows the effectiveness of brand management to be judged only indirectly, since the company does not sell the brand directly: it is an intangible asset associated with the company and its products. If a company sells its brand as an intangible asset to another organization, it ceases branding activities with respect to it, since this function transfers to the new owner of the brand. Thus, ROB helps clarify how effectively a company converts the value of its associated brand into profit. For this reason, assessing the impact of brand value on a business is meaningful only in conjunction with an analysis of ROB.
Application examples.
Return on brand can be applied in several branding assessment models:
The approach of T. Munoz and S. Kumar, who propose building a branding assessment system based on three classes of metrics (perception, behavioral, and financial metrics) that make it possible to evaluate branding effectiveness.
A model for assessing the effectiveness of branding based on the concept of contact branding, which rests on the idea that by isolating and controlling the points of contact between the brand and the consumer, it is possible to evaluate the effectiveness of brand management.
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "\\mathrm{ROB} = \\frac{\\mbox{net income}}{\\mbox{brand value}}"
}
] |
https://en.wikipedia.org/wiki?curid=63918514
|
639245
|
Transverse Mercator projection
|
Adaptation of the standard Mercator projection
The transverse Mercator map projection (TM, TMP) is an adaptation of the standard Mercator projection. The transverse version is widely used in national and international mapping systems around the world, including the Universal Transverse Mercator. When paired with a suitable geodetic datum, the transverse Mercator delivers high accuracy in zones less than a few degrees in east-west extent.
Standard and transverse aspects.
The transverse Mercator projection is the transverse aspect of the standard (or "Normal") Mercator projection. They share the same underlying mathematical construction and consequently the transverse Mercator inherits many traits from the normal Mercator:
Since the central meridian of the transverse Mercator can be chosen at will, it may be used to construct highly accurate maps (of narrow width) anywhere on the globe. The secant, ellipsoidal form of the transverse Mercator is the most widely applied of all projections for accurate large-scale maps.
Spherical transverse Mercator.
In constructing a map on any projection, a sphere is normally chosen to model the Earth when the extent of the mapped region exceeds a few hundred kilometers in length in both dimensions. For maps of smaller regions, an ellipsoidal model must be chosen if greater accuracy is required; see next section. The spherical form of the transverse Mercator projection was one of the seven new projections presented, in 1772, by Johann Heinrich Lambert. (The text is also available in a modern English translation.) Lambert did not name his projections; the name "transverse Mercator" dates from the second half of the nineteenth century. The principal properties of the transverse projection are here presented in comparison with the properties of the normal projection.
Ellipsoidal transverse Mercator.
The ellipsoidal form of the transverse Mercator projection was developed by Carl Friedrich Gauss in 1822 and further analysed by Johann Heinrich Louis Krüger in 1912.
The projection is known by several names: the "(ellipsoidal) transverse Mercator" in the US; Gauss conformal or Gauss–Krüger in Europe; or Gauss–Krüger transverse Mercator more generally.
Other than just a synonym for the ellipsoidal transverse Mercator map projection, the term Gauss–Krüger may be used in other slightly different ways:
The projection is conformal with a constant scale on the central meridian. (There are other conformal generalisations of the transverse Mercator from the sphere to the ellipsoid but only Gauss-Krüger has a constant scale on the central meridian.) Throughout the twentieth century the Gauss–Krüger transverse Mercator was adopted, in one form or another, by many nations (and international bodies); in addition it provides the basis for the Universal Transverse Mercator series of projections. The Gauss–Krüger projection is now the most widely used projection in accurate large-scale mapping.
The projection, as developed by Gauss and Krüger, was expressed in terms of low order power series which were assumed to diverge in the east-west direction, exactly as in the spherical version. This was proved to be untrue by British cartographer E. H. Thompson, whose unpublished exact (closed form) version of the projection, reported by Laurence Patrick Lee in 1976, showed that the ellipsoidal projection is finite (below). This is the most striking difference between the spherical and ellipsoidal versions of the transverse Mercator projection: Gauss–Krüger gives a reasonable projection of the "whole" ellipsoid to the plane, although its principal application is to accurate large-scale mapping "close" to the central meridian.
Features.
In most applications the Gauss–Krüger coordinate system is applied to a narrow strip near the central meridians where the differences between the spherical and ellipsoidal versions are small, but nevertheless important in accurate mapping. Direct series for scale, convergence and distortion are functions of eccentricity and both latitude and longitude on the ellipsoid: inverse series are functions of eccentricity and both "x" and "y" on the projection. In the secant version the lines of true scale on the projection are no longer parallel to central meridian; they curve slightly. The convergence angle between projected meridians and the "x" constant grid lines is no longer zero (except on the equator) so that a grid bearing must be corrected to obtain an azimuth from true north. The difference is small, but not negligible, particularly at high latitudes.
Implementations of the Gauss–Krüger projection.
In his 1912 paper, Krüger presented two distinct solutions, distinguished here by the expansion parameter:
The Krüger–"λ" series were the first to be implemented, possibly because they were much easier to evaluate on the hand calculators of the mid twentieth century.
The Krüger–"n" series have been implemented (to fourth order in "n") by the following nations.
Higher order versions of the Krüger–"n" series have been implemented to seventh order by Engsager and Poder and to tenth order by Kawase. Apart from a series expansion for the transformation between latitude and conformal latitude, Karney has implemented the series to thirtieth order.
Exact Gauss–Krüger and accuracy of the truncated series.
An exact solution by E. H. Thompson is described by L. P. Lee. It is constructed in terms of elliptic functions (defined in chapters 19 and 22 of the NIST handbook) which can be calculated to arbitrary accuracy using algebraic computing systems such as Maxima. Such an implementation of the exact solution is described by Karney (2011).
The exact solution is a valuable tool in assessing the accuracy of the truncated "n" and λ series. For example, the original 1912 Krüger–"n" series compares very favourably with the exact values: they differ by less than 0.31 μm within 1000 km of the central meridian and by less than 1 mm out to 6000 km. On the other hand, the difference of the Redfearn series used by GEOTRANS and the exact solution is less than 1 mm out to a longitude difference of 3 degrees, corresponding to a distance of 334 km from the central meridian at the equator but a mere 35 km at the northern limit of a UTM zone. Thus the Krüger–"n" series are very much better than the Redfearn λ series.
The Redfearn series becomes much worse as the zone widens. Karney discusses Greenland as an instructive example. The long thin landmass is centred on 42W and, at its broadest point, is no more than 750 km from that meridian while the span in longitude reaches almost 50 degrees. Krüger–"n" is accurate to within 1 mm but the Redfearn version of the Krüger–"λ" series has a maximum error of 1 kilometre.
Karney's own 8th-order (in "n") series is accurate to 5 nm within 3900 km of the central meridian.
Formulae for the spherical transverse Mercator.
Spherical normal Mercator revisited.
The normal cylindrical projections are described in relation to a cylinder tangential at the equator with axis along the polar axis of the sphere. The cylindrical projections are constructed so that all points on a meridian are projected to points with formula_0 (where formula_1 is the Earth radius) and formula_2 is a prescribed function of formula_3. For a tangent Normal Mercator projection the (unique) formulae which guarantee conformality are:
formula_4
Conformality implies that the point scale, "k", is independent of direction: it is a function of latitude only:
formula_5
For the secant version of the projection there is a factor of "k"0 on the right hand side of all these equations: this ensures that the scale is equal to "k"0 on the equator.
Normal and transverse graticules.
The figure on the left shows how a transverse cylinder is related to the conventional graticule on the sphere. It is tangential to some arbitrarily chosen meridian and its axis is perpendicular to that of the sphere. The "x"- and "y"-axes defined on the figure are related to the equator and central meridian exactly as they are for the normal projection. In the figure on the right a rotated graticule is related to the transverse cylinder in the same way that the normal cylinder is related to the standard graticule. The 'equator', 'poles' (E and W) and 'meridians' of the rotated graticule are identified with the chosen central meridian, points on the equator 90 degrees east and west of the central meridian, and great circles through those points.
The position of an arbitrary point ("φ","λ") on the standard graticule can also be identified in terms of angles on the rotated graticule: "φ′" (angle M′CP) is an effective latitude and −"λ′" (angle M′CO) becomes an effective longitude. (The minus sign is necessary so that ("φ′","λ′") are related to the rotated graticule in the same way that ("φ","λ") are related to the standard graticule). The Cartesian ("x′","y′") axes are related to the rotated graticule in the same way that the axes ("x","y") axes are related to the standard graticule.
The tangent transverse Mercator projection defines the coordinates ("x′","y′") in terms of −"λ′" and "φ′" by the transformation formulae of the tangent Normal Mercator projection:
formula_6
This transformation projects the central meridian to a straight line of finite length and at the same time projects the great circles through E and W (which include the equator) to infinite straight lines perpendicular to the central meridian. The true parallels and meridians (other than equator and central meridian) have no simple relation to the rotated graticule and they project to complicated curves.
The relation between the graticules.
The angles of the two graticules are related by using spherical trigonometry on the spherical triangle NM′P defined by the true meridian through the origin, OM′N, the true meridian through an arbitrary point, MPN, and the great circle WM′PE. The results are:
formula_7
Direct transformation formulae.
The direct formulae giving the Cartesian coordinates ("x","y") follow immediately from the above. Setting "x" = "y′" and "y" = −"x′" (and restoring factors of "k"0 to accommodate secant versions)
formula_8
The above expressions are given in Lambert and also (without derivations) in Snyder, Maling and Osborne (with full details).
Inverse transformation formulae.
Inverting the above equations gives
formula_9
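The direct and inverse formulae above can be implemented directly. In the following sketch (illustrative only, not part of the article), "lam" is the longitude measured from the central meridian and "phi" the latitude, both in radians; the sphere radius and central scale factor are example defaults.

```python
import math

def tm_forward(lam, phi, a=6371000.0, k0=0.9996):
    """Direct spherical transverse Mercator: (lam, phi) -> (x, y) in metres."""
    b = math.sin(lam) * math.cos(phi)
    x = 0.5 * k0 * a * math.log((1 + b) / (1 - b))
    y = k0 * a * math.atan2(math.tan(phi), math.cos(lam))  # = arctan(sec(lam) tan(phi))
    return x, y

def tm_inverse(x, y, a=6371000.0, k0=0.9996):
    """Inverse spherical transverse Mercator: (x, y) -> (lam, phi) in radians."""
    xs, ys = x / (k0 * a), y / (k0 * a)
    lam = math.atan2(math.sinh(xs), math.cos(ys))
    phi = math.asin(math.sin(ys) / math.cosh(xs))
    return lam, phi

# Round trip for a point 3 degrees east of the central meridian at latitude 52 N:
x, y = tm_forward(math.radians(3), math.radians(52))
print(tm_inverse(x, y))  # recovers approximately (0.0524, 0.9076) rad, i.e. (3 deg, 52 deg)
```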
Point scale.
In terms of the coordinates with respect to the rotated graticule the point scale factor is given by "k" = sec "φ′": this may be expressed either in terms of the geographical coordinates or in terms of the projection coordinates:
formula_10
The second expression shows that the scale factor is simply a function of the distance from the central meridian of the projection. A typical value of the scale factor is "k"0 = 0.9996 so that "k" = 1 when "x" is approximately 180 km. When "x" is approximately 255 km and "k"0 = 0.9996, the scale is "k" = 1.0004: the scale factor is within 0.04% of unity over a strip about 510 km wide.
Convergence.
The convergence angle "γ" at a point on the projection is defined by the angle measured "from" the projected meridian, which defines true north, "to" a grid line of constant "x", defining grid north. Therefore, "γ" is positive in the quadrant north of the equator and east of the central meridian and also in the quadrant south of the equator and west of the central meridian. The convergence must be added to a grid bearing to obtain a bearing from true north. For the secant transverse Mercator the convergence may be expressed either in terms of the geographical coordinates or in terms of the projection coordinates:
formula_11
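The scale factor and convergence expressions above follow the same pattern; the sketch below (illustrative only) uses the same conventions and example defaults as the previous sketch, returning the convergence in radians.

```python
import math

def scale_geo(lam, phi, k0=0.9996):
    """Point scale from geographic coordinates (radians)."""
    return k0 / math.sqrt(1 - (math.sin(lam) * math.cos(phi)) ** 2)

def scale_xy(x, k0=0.9996, a=6371000.0):
    """Point scale from the projection easting x (metres)."""
    return k0 * math.cosh(x / (k0 * a))

def convergence_geo(lam, phi):
    """Convergence angle from geographic coordinates (radians)."""
    return math.atan(math.tan(lam) * math.sin(phi))

def convergence_xy(x, y, k0=0.9996, a=6371000.0):
    """Convergence angle from projection coordinates (metres)."""
    return math.atan(math.tanh(x / (k0 * a)) * math.tan(y / (k0 * a)))
```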
Formulae for the ellipsoidal transverse Mercator.
Details of actual implementations
Coordinates, grids, eastings and northings.
The projection coordinates resulting from the various developments of the ellipsoidal transverse Mercator are Cartesian coordinates such that the central meridian corresponds to the "x" axis and the equator corresponds to the "y" axis. Both "x" and "y" are defined for all values of "λ" and "ϕ". The projection does not define a grid: the grid is an independent construct which could be defined arbitrarily. In practice the national implementations, and UTM, do use grids aligned with the Cartesian axes of the projection, but they are of finite extent, with origins which need not coincide with the intersection of the central meridian with the equator.
The true grid origin is always taken on the central meridian so that grid coordinates will be negative west of the central meridian. To avoid such negative grid coordinates, standard practice defines a false origin to the west (and possibly north or south) of the grid origin: the coordinates relative to the false origin define eastings and northings which will always be positive. The false easting, "E"0, is the distance of the true grid origin east of the false origin. The false northing, "N"0, is the distance of the true grid origin north of the false origin. If the true origin of the grid is at latitude "φ"0 on the central meridian and the scale factor on the central meridian is "k"0 then these definitions give eastings and northings by:
formula_12
The terms "eastings" and "northings" do not mean strict east and north directions. Grid lines of the transverse projection, other than the "x" and "y" axes, do not run north-south or east-west as defined by parallels and meridians. This is evident from the global projections shown above. Near the central meridian the differences are small but measurable. The difference between the north-south grid lines and the true meridians is the angle of convergence.
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "x = a\\lambda"
},
{
"math_id": 1,
"text": "a"
},
{
"math_id": 2,
"text": "y"
},
{
"math_id": 3,
"text": "\\phi"
},
{
"math_id": 4,
"text": "x = a\\lambda\\,,\\qquad\ny = a\\ln \\left[\\tan \\left(\\frac{\\pi}{4} + \\frac{\\varphi}{2} \\right)\\right]\n = \\frac{a}{2}\\ln\\left[\\frac{1+\\sin\\varphi}{1-\\sin\\varphi}\\right].\n"
},
{
"math_id": 5,
"text": "k(\\varphi)=\\sec\\varphi.\\,"
},
{
"math_id": 6,
"text": "x' = -a\\lambda'\\,\\qquad\ny' = \\frac{a}{2}\n \\ln\\left[\\frac{1+\\sin\\varphi'}{1-\\sin\\varphi'}\\right].\n"
},
{
"math_id": 7,
"text": "\n\\begin{align}\n\\sin\\varphi'&=\\sin\\lambda\\cos\\varphi,\\\\\n\\tan\\lambda'&=\\sec\\lambda\\tan\\varphi.\n\\end{align}\n"
},
{
"math_id": 8,
"text": "\n\\begin{align}\nx(\\lambda,\\varphi)&= \\frac{1}{2}k_0a\n \\ln\\left[\n \\frac{1+\\sin\\lambda\\cos\\varphi}\n {1-\\sin\\lambda\\cos\\varphi}\\right],\\\\[5px]\ny(\\lambda,\\varphi)&= k_0 a\\arctan\\left[\\sec\\lambda\\tan\\varphi\\right],\n\\end{align}\n"
},
{
"math_id": 9,
"text": "\n\\begin{align}\n \\lambda(x,y)&\n= \\arctan\\left[ \\sinh\\frac{x}{k_0a}\n \\sec\\frac{y}{k_0a} \\right],\n\\\\[5px]\n \\varphi(x,y)&= \\arcsin\\left[ \\mbox{sech}\\;\\frac{x}{k_0a}\n \\sin\\frac{y}{k_0a} \\right].\n\\end{align}\n"
},
{
"math_id": 10,
"text": "\n\\begin{align}\n k(\\lambda,\\varphi)&=\\frac{k_0}{\\sqrt{1-\\sin^2\\lambda\\cos^2\\varphi}},\\\\[5px]\nk(x,y)&=k_0\\cosh\\left(\\frac{x}{k_0a}\\right).\n\\end{align}\n"
},
{
"math_id": 11,
"text": "\n\\begin{align}\n\\gamma(\\lambda,\\varphi)&=\\arctan(\\tan\\lambda\\sin\\varphi),\\\\[5px]\n\\gamma(x,y)&=\\arctan\\left(\\tanh\\frac{x}{k_0a}\\tan\\frac{y}{k_0a}\\right).\n\\end{align}\n"
},
{
"math_id": 12,
"text": "\n\\begin{align}\n E&=E_0+x(\\lambda,\\varphi),\\\\[5px]\n N&=N_0+y(\\lambda,\\varphi)-k_0 m(\\varphi_0).\n\\end{align}\n"
}
] |
https://en.wikipedia.org/wiki?curid=639245
|
63925564
|
Blackmer gain cell
|
The Blackmer gain cell is an audio frequency voltage-controlled amplifier (VCA) circuit with an exponential control law. It was invented and patented by David E. Blackmer between 1970 and 1973. The four-transistor core of the original Blackmer cell contains two complementary bipolar current mirrors that perform log-antilog operations on input voltages in a push-pull, alternating fashion. Earlier log-antilog modulators using the fundamental exponential characteristic of a p–n junction were unipolar; Blackmer's application of push-pull signal processing allowed modulation of bipolar voltages and bidirectional currents.
The Blackmer cell, which has been manufactured since 1973, is the first precision VCA circuit that was suitable for professional audio. As early as the 1970s, production Blackmer cells achieved control range with total harmonic distortion of no more than 0.01% and very high compliance with ideal exponential control law. The circuit was used in remote-controlled mixing consoles, signal compressors, microphone amplifiers, and dbx noise reduction systems. In the 21st century, the Blackmer cell, along with Douglas Frey's Operational Voltage Controlled Element (OVCE), remains one of two integrated VCA topologies that are still widely used in studio and stage equipment.
Development and applications.
In the 1960s, American recording studios adopted multitrack recording. Narrow tracks of multitrack recorders were noisier than the wide tracks of their predecessors; mixing down many narrow tracks further degraded the signal-to-noise ratio of master tapes. Mixing became a complex process requiring the precisely timed operation of controls and faders that were too numerous to operate manually. These problems of early multitrack studios created a demand for professional-grade noise reduction and console automation. At the core of both of these functions was the voltage-controlled amplifier (VCA).
The earliest solid-state VCA topology was an attenuator rather than an amplifier; it employed a junction field-effect transistor in voltage-controlled resistance mode. These attenuators, which were state of the art in the early 1970s, were successfully used in professional Dolby A and consumer Dolby B noise reduction systems but did not meet all of the demands of mixing engineers. In 1968, Barrie Gilbert invented the Gilbert cell, which was quickly adopted by radio and analog computer designers but lacked the precision required for studio equipment. Between 1970 and 1973, David E. Blackmer invented and patented the four-transistor multiplying log-antilog cell, targeting professional audio.
The Blackmer cell was more precise and had a greater dynamic range than prior VCA topologies, but it required well-matched complementary transistors of both polarity types that could not yet be implemented in a silicon integrated circuit (IC). Contemporary junction isolation technology offered poorly performing p-n-p transistors, so integrated circuit designers had to use n-p-n transistors alone. The Gilbert and Dolby circuits were easily integrated in silicon but the Blackmer cell had to be assembled from tediously selected, precision-matched, discrete transistors. To ensure isothermal operation, these metal-can transistors were firmly held together with a thermally conductive ceramic block and insulated from the environment with a steel can. The first hybrid integrated circuits of this type, the "black can" dbx202, were manufactured by Blackmer's company in 1973. Five years later, Blackmer released the improved dbx202C "gold can" hybrid IC; total harmonic distortion decreased from 0.03% to 0.01% and gain control range increased from to . In 1980, Blackmer released a version designed by Bob Adams, the dbx2001. Unlike earlier Blackmer cells that operated in lean class AB, the dbx2001 operated in class A. Distortion dropped to less than 0.001% but the noise and dynamic range of the dbx2001 were inferior to those of class AB circuits. This first generation of Blackmer VCAs had a very long service life; as of 2002, analogue consoles built around the original dbx202 "cans" were still being used in professional recording studios.
By 1980, complementary bipolar ICs became possible and Allison Research released the first monolithic Blackmer gain cell IC. The ECG-101, which was designed by Paul Buff, contained only the core of a modified Blackmer cell – a set of eight matched transistors – and was intended for pure class A operation. It had a unique sonic signature that had almost no undesirable, odd-order harmonics and was easier to stabilize than the original Blackmer cell. In 1981, dbx, Inc. released their own monolithic IC, the dbx2150/2151/2155, which was designed by Dave Welland, the future co-founder of Silicon Labs. The three numeric designations denoted three grades of the same chip: 2151 being the best, 2155 the worst; the middle-of-the-line 2150 was the most widely used version. The eight-pin single-in-line package (SIP8) assured good isolation between inputs and outputs, and became the industry standard that was used in the later dbx2100, THAT2150 and THAT2181 ICs. These circuits, like the original hybrid dbx ICs, were a small-volume niche product that was used exclusively in professional analogue audio. Typical applications include mixing consoles, compressors, noise gates, duckers, de-essers and state variable filters. The dbx noise reduction system, which used the Blackmer cell, had limited success in the semi-professional market and failed in consumer markets, losing to Dolby C. The only mass market where dbx achieved substantial use was the North American Multichannel Television Sound, which was introduced in 1984 and operated until the end of analogue television broadcasting in 2009.
In the 21st century, professional Blackmer ICs are manufactured by THAT Corporation – a direct descendant of Blackmer's dbx, Inc. – using dielectric isolation technology. As of April 2020, the company offered one dual-channel and two single-channel Blackmer ICs, and four "analog engine" ICs containing Blackmer cells that are controlled by Blackmer RMS detectors.
Operation.
Log-antilog principle.
The Blackmer cell is a direct descendant of a two-transistor log-antilog circuit, itself a derivative of the simple current mirror. Normally, the bases of two transistors of a mirror are tied together to ensure the collector current I2 of the output transistor T2 exactly mirrors the collector current I1 of the input transistor T1. Additional positive or negative bias voltage VY applied between the bases of T1 and T2 converts the mirror into a current amplifier or attenuator. Scale factor or current gain follows the exponential Shockley formula:
formula_0
where formula_1 is the thermal voltage, which is proportional to absolute temperature and equal to at .
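As a numerical illustration (not taken from the cited sources), the control law above can be evaluated from the thermal voltage computed at a given die temperature; this also previews the temperature dependence of the control slope discussed later in the article.

```python
import math

K_BOLTZMANN = 1.380649e-23   # J/K
Q_ELECTRON = 1.602177e-19    # C

def current_gain(v_y, temp_kelvin=300.0):
    """Scale factor I2/I1 of the log-antilog pair for bias voltage v_y (volts)."""
    phi_t = K_BOLTZMANN * temp_kelvin / Q_ELECTRON   # thermal voltage kT/q
    return math.exp(v_y / phi_t)

print(current_gain(0.0))            # 1.0: the bare current mirror
print(current_gain(0.060, 300.0))   # roughly 10x gain at room temperature
print(current_gain(0.060, 330.0))   # noticeably less gain when the die is hotter
```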
The control voltage VY is usually referenced to ground, either with one terminal grounded or with both terminals driven differentially with zero volts common-mode voltage. This requires lowering emitter potential below ground, usually with an operational amplifier A1 that also converts input voltage VX into input current I1 (so-called transdiode configuration). A second operational amplifier A2 converts output current I2 into output voltage VXY.
In mathematics, the logarithm function is defined only for positive arguments. A log-antilog circuit built with NPN transistors will accept only positive input voltages VX; one built with PNP transistors accepts only negative VX. This is unacceptable in audio applications, which have to handle alternating current (AC) signals. Adding direct current (DC) offset to audio signals, as was proposed by Embley in 1970, will work at a fixed gain setting but any changes in gain will modulate the output DC offset.
Four-transistor Blackmer core.
The Blackmer circuit consists of two complementary log-antilog VCAs. Its four-transistor core – the Blackmer cell proper – combines two complementary current mirrors that are wired back-to-back and operate in a push-pull fashion. The lower NPN-type mirror (T1, T2) sinks input current I1; the upper PNP-type mirror (T3, T4) sources input current I1 in the opposite direction. A VBE multiplier thermally coupled to the core maintains around 1.5 V (2 VBE) across its power supply terminals and regulates its idle current ( or less in production monolithic ICs). Signal voltage is applied to terminals VX and control voltage to terminals VY. Operational amplifiers A1 and A2 perform the same voltage-to-current and current-to-voltage converter functions as their counterparts in a unipolar log-antilog circuit, and maintain virtual ground potential at the core's input and output nodes. Values of feedback resistors are usually set at ( in early hybrid ICs); they must be equal to ensure unity gain at zero control voltage. Potentials of all of the core's nodes other than Vy are almost independent of input signals, a property common to all current-mode circuits, which process signal currents rather than voltages.
When the control voltage VY = 0, the core operates as a bidirectional current follower, replicating the input current I1 as the output current I2. In cores biased to pure class A, both mirrors contribute their shares of I2 simultaneously; in cores biased to class AB, this is only true for very small values of VX and I1. At higher VX one of the mirrors of a class AB core shuts down and all output current I2 is sunk or sourced by the other, active mirror. With positive (negative) VY, the current through the active mirror (or through both mirrors in class A) increases (decreases) exponentially, exactly as it does in a single-quadrant log-antilog circuit:
formula_2
formula_3 assuming equal values of R in A1 and A2
At , the slope of the exponential control law equals (or ) for either negative or positive values of VX. In practice, the slope is inconveniently steep and the core is usually decoupled from real-world control voltages with an active attenuator. This attenuator, or any other source of VY, must have very low noise and very low output impedance, which is only attainable in op-amp-based circuits. A single-ended VY drive is almost as good as a symmetric balanced drive; having two VY terminals allows control of the cell by two independent single-ended voltages.
The gain of the Blackmer cell has an inverse relationship with temperature; the hotter the IC, the lower the slope of exponential control law. For example, VY= at translates to a gain of 10 times or . As the die temperature rises to , gain at VY= decreases by to ; at maximum operating temperature of () it drops to . In practice this shortcoming is easily overcome by using a control scale that is proportional to absolute temperature (PTAT). In dbx noise reduction systems and THAT Corp's analog engine, this is ensured by the physics of the Blackmer RMS detector, which is PTAT by design. In old mixing consoles, the same effect was achieved using positive temperature coefficient (PTC) thermistors.
Eight-transistor core.
Mismatches of PNP and NPN transistors of a basic Blackmer cell are usually balanced with trimming. Alternatively, transistors can be balanced by design via inclusion of opposite-type, diode-wired transistors into each leg of the core. Each of the four legs of the modified core contains one NPN and one PNP type transistor; although they are still functionally asymmetrical, the degree of asymmetry is greatly reduced. The slope of exponential control law is exactly half of that of the four-transistor cell. This improvement was invented by recording engineer Paul Conrad Buff and manufactured since 1980 as the monolithic ECG-101 IC by Allison Research and the identical TA-101 by Valley People.
Eight-transistor core with log error correction.
Parasitic base and emitter resistances distort current-voltage characteristics of real-world transistors, introducing logarithming error and distorting the output signal. To improve precision beyond what was attainable through the use of oversized core transistors, Blackmer suggested using his eight-transistor core with interleaved local feedback loops. The circuit, which was first produced as the hybrid dbx202C in 1978 and as the monolithic 2150/2151/2155 ICs in 1981, minimizes log-error distortion when the value of each feedback resistor equals the sum of equivalent emitter resistances of the NPN and PNP transistors. A simple model predicts this approach neutralizes all sources of logarithming error but in reality, feedback cannot compensate for current crowding effects, which can only be reduced by increasing transistor sizes. Cores of monolithic Blackmer ICs are so large that effective feedback resistor values are less than one ohm.
Parallel wiring of cores.
Blackmer cores, being current-in, current-out devices, can easily be connected in parallel. Wiring identical cores in parallel increases input and output currents proportionally to the number of cores, however, noise current rises only as the square root of same number. Paralleling four cores, for example, increases signal current four times and increases noise current two times, improving signal-to-noise ratio by 6 dB. The first production circuit of this type, the hybrid dbx202x, contained eight parallel cores made up of discrete transistors; the hybrid THAT2002 contained four monolithic THAT2181 dies.
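A small arithmetic check of this scaling argument (illustrative only): paralleling N identical cores multiplies the signal current by N and the noise current by the square root of N, so the signal-to-noise ratio improves by 10·log10(N) dB, i.e. about 6 dB for N = 4.

```python
import math

def snr_improvement_db(n_cores):
    """SNR gain from paralleling n identical cores: 20*log10(n) - 20*log10(sqrt(n))."""
    return 20 * math.log10(n_cores) - 20 * math.log10(math.sqrt(n_cores))

print(snr_improvement_db(4))  # about 6.02 dB
```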
Performance.
The design of a Blackmer cell IC is a compromise favoring a specific combination of distortion, noise and dynamic range of gain settings. These properties are critical for professional audio applications; they are interrelated and cannot be perfected simultaneously. The choice of circuit simplicity (built-in, wafer-level trimming) or lowest distortion (external in-circuit trimming) is also fixed at the die level.
Distortion.
Distortion of a class AB Blackmer core has three main sources:
The first two sources are contained within the core and define distortion patterns at low frequencies. Both are suppressed by increasing transistor sizes, although effective neutralization of logarithming error is only possible in improved eight-transistor cores. Large transistors have smaller parasitic resistances and are less sensitive to inevitable random area mismatches. Temporal mismatches caused by thermal gradients are avoided by careful placement of core transistors and surrounding components on the IC. The residual mismatch of PNP and NPN mirrors is compensated for with trimming, usually by injecting a very small current into one of the core's two output transistors. This creates a small, asymmetric biasing voltage of a few millivolts or less, which should ideally be proportional to absolute temperature. In monolithic ICs, this is ensured by using a thermally-coupled PTAT source of bias current. Wafer-level trimming suffers from random shifts during subsequent die packaging; wafer-trimmed ICs have maximum rated THD from 0.01% (best grade) to 0.05% (worst grade) at 1V RMS input. Further reductions to 0.001% THD require in-circuit fine trimming, which is normally performed once using a precision THD analyzer and needs no further adjustments.
The output amplifier A2 operates at fixed closed-loop gain, drives a benign constant-impedance load and does not degrade distortion. The input amplifier A1 drives a nonlinear feedback loop wrapped around the core and must remain stable at any possible combination of VX and VY. To avoid crossover distortion, A1 must have very high bandwidth and a fast slew rate but at treble audio frequencies, its nonlinearity becomes the dominant factor of distortion as the open-loop gain of A1 decreases. This type of distortion is common to operational amplifiers with voltage output; in production ICs, it is effectively nulled by replacing the voltage-output amplifier with a current-output transconductance amplifier.
Noise.
Estimation and measurement of signal-to-noise ratio is difficult and ambiguous because of the complex, non-linear relationship between currents, voltages and noise. At zero or very small input signals, the core has a very low noise floor. At high input signals, this residual noise is swamped by far larger modulation noise containing products of shot noise, thermal noise from the core's transistors, and external noises that are injected into VY terminals. Higher input signals cause greater modulation: "noise follows the signal", in a nonlinear fashion.
At moderate gain or attenuation settings, noise of the core – assuming noise-free surrounding circuitry – is determined by collector current shot noise, which is proportional to the square root of emitter current. Thus lowest noise is attained in class AB cores with very small idle currents. Designs for lowest distortion require pure class A operation at the cost of higher noise. For example, in THAT Corp's ICs, increase of idle current from 20 μA (class AB) to 750 μA (class A) causes a rise in no-signal noise floor by 17 dB; in dbx, Inc. hybrid "cans" the difference was either 10 or 16 dB. In practice, there is no perfect compromise; the choice of low-noise class AB or low-distortion class A depends on application.
Noise of operational amplifiers A1 and A2 is only material at very low or very high gain settings. In class AB ICs by THAT Corporation, the noise of A2 becomes dominant at gains of or less, and the noise of A1 becomes dominant at gains of or more. At high output levels, the noise signature is dominated by noises injected via control terminals, even when proper care has been taken to suppress their sources.
Injection of noise and distortion via control terminals.
Blackmer cells are particularly sensitive to interference at their control terminals. Any signal arriving at the VY port, either a useful control voltage or unwanted noise, directly modulates the output signal at a rate of for a four-transistor cell or for an eight-transistor cell. of random noise or hum results in either 4% or 2% modulation, degrading the signal-to-noise ratio to absolutely unacceptable values. Contamination of VY with the input signal VX causes not noise, but unacceptably high harmonic distortion.
Circuits driving VY terminals must be designed as thoroughly as professional-grade audio paths are. In practice, VY terminals are usually interfaced to external control signals with low-noise operational amplifiers directly, ensuring the lowest possible output impedance; low-cost amplifiers like the NE5532 are an inferior but acceptable alternative to quieter but more expensive models. Amplifiers of this class are characterized by audio frequency noise density of a few nV/formula_4Hz which, although low, will swamp other noise sources at high signal levels.
Control range.
In class AB cores, off-state suppression of input signal, which marks the lowest end of control scale, reaches at 1 kHz but deteriorates at higher audio frequencies due to parasitic capacitances. Single-in-line IC packages, otherwise obsolete, perform well in this respect due to the relatively long distance between input and output pins. Care should be taken to prevent capacitive coupling from VX input to A1 non-inverting input. In class A cores, the control scale is inevitably narrower due to higher residual noise level.
Control voltage feedthrough.
In class AB cores, at low frequencies, feedthrough of control voltage VY into the output signal has two principal sources: mismatches in core transistors that are reduced by increasing transistor sizes, and feedthrough of input bias current. Any DC component of VX, and input offset voltage of amplifier A1 inject DC components into input current I1, which are replicated at the output and modulated by the core along with the AC input signal. These sources of feedthrough can be neutralized with capacitive coupling, leaving one undesirable DC component, input bias current of A1. This current can be reduced to a few nanoamperes with bias-canceling input stages. At high frequencies, VY is coupled to the output node directly via collector-base capacitances of the core transistors. Differential VY drive does not eliminate the problem because of the different capacitances of PNP and NPN transistors. The residual VY feedthrough can be nulled by feedforward injection of inverted VY into the output node via a small-value capacitor, restoring capacitive symmetry of the core.
Class A cores, in general, are more prone to control voltage feedthrough owing to thermal gradients in the core (in class AB the same gradients manifest themselves as distortion). Early class A ICs used as muting gates produced audible, low-frequency "thumps", but subsequent improvements in production ICs significantly reduced the undesirable feedthrough.
References.
<templatestyles src="Reflist/styles.css" />
Bibliography.
<templatestyles src="Refbegin/styles.css" />
|
[
{
"math_id": 0,
"text": "I_2 = I_1 e^{ \\frac { V_Y } {\\phi_t} },"
},
{
"math_id": 1,
"text": "\\phi_t"
},
{
"math_id": 2,
"text": "I_2 = I_1 e^{ \\frac { V_Y } {\\phi_t} } = { \\frac { V_X } { R} } e^{ \\frac { V_Y } {\\phi_t} },"
},
{
"math_id": 3,
"text": "V_{XY} = V_X e^{ \\frac { V_Y } {\\phi_t} },"
},
{
"math_id": 4,
"text": "\\sqrt{}"
}
] |
https://en.wikipedia.org/wiki?curid=63925564
|
63926753
|
QST (genetics)
|
In quantitative genetics, QST is a statistic intended to measure the degree of genetic differentiation among populations with regard to a quantitative trait. It was developed by Ken Spitze in 1993. Its name reflects that QST was intended to be analogous to the fixation index for a single genetic locus (FST). QST is often compared with FST of neutral loci to test if variation in a quantitative trait is a result of divergent selection or genetic drift, an analysis known as QST–FST comparisons.
Calculation of QST.
Equations.
QST represents the proportion of variance among subpopulations, and its calculation is analogous to that of FST, developed by Sewall Wright. However, instead of genetic differentiation at marker loci, QST is calculated from the variance of a quantitative trait within and among subpopulations, and for the total population. The variance of a quantitative trait among populations (σ2GB) is described as:
formula_0
And the variance of a quantitative trait within populations (σ2GW) is described as:
formula_1
Where σ2T is the total genetic variance in all populations. Therefore, QST can be calculated with the following equation:
formula_2
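As a minimal computational sketch of the ratio above, the following Python function evaluates QST directly from the among-population and within-population additive genetic variance components; the numeric values are made up purely for illustration.

```python
def q_st(var_among, var_within):
    """Q_ST from the among-population (sigma^2_GB) and within-population
    (sigma^2_GW) additive genetic variance components."""
    return var_among / (var_among + 2 * var_within)


# Illustrative (hypothetical) variance components from a common garden experiment:
print(q_st(var_among=0.8, var_within=0.6))  # 0.4
```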
Assumptions.
Calculation of QST is subject to several assumptions: populations must be in Hardy-Weinberg Equilibrium, observed variation is assumed to be due to additive genetic effects only, selection and linkage disequilibrium are not present, and the subpopulations exist within an island model.
QST-FST comparisons.
QST–FST analyses often involve culturing organisms in consistent environmental conditions, known as common garden experiments, and comparing the phenotypic variance to genetic variance. If QST is found to exceed FST, this is interpreted as evidence of divergent selection, because it indicates more differentiation in the trait than could be produced solely by genetic drift. If QST is less than FST, balancing selection is expected to be present. If the values of QST and FST are equivalent, the observed trait differentiation could be due to genetic drift.
Suitable comparison of QST and FST is subject to multiple ecological and evolutionary assumptions, and since the development of QST, multiple studies have examined the limitations and constraints of QST-FST analyses. Leinonen et al. note that FST must be calculated with neutral loci; however, over-filtering of putatively non-neutral loci can artificially reduce FST values. Cubry et al. found that QST is reduced in the presence of dominance, resulting in conservative estimates of divergent selection when QST is high, and inconclusive results of balancing selection when QST is low. Additionally, population structure can significantly impact QST-FST ratios. Stepping stone models, which can generate more evolutionary noise than island models, are more likely to experience type I errors. If a subset of populations acts as sources, such as during an invasion, weighting the genetic contributions of each population can increase detection of adaptation. To improve the precision of QST analyses, more populations (>20) should be included in the analysis.
QST applications in literature.
Multiple studies have incorporated QST to separate effects of natural selection and genetic drift, and QST is often observed to exceed FST, indicating local adaptation. In an ecological restoration study, Bower and Aitken used QST to evaluate suitable populations for seed transfer of whitebark pine. They found high QST values in many populations, suggesting local adaptation for cold-adapted characteristics. During an assessment of the invasive species, "Brachypodium sylvaticum", Marchini et al. found divergence between native and invasive populations during initial establishment in the invaded range, but minimal divergence during range expansion. In an examination of the common snapdragon ("Antirrhinum majus") along an elevation gradient, QST-FST analyses revealed different adaptation trends between two subspecies ("A. m. pseudomajus" and "A. m. striatum"). While both subspecies occur at all elevations, "A. m. striatum" had high QST values for traits associated with altitude adaptation: plant height, number of branches, and internode length. "A. m. pseudomajus" had lower QST than FST values for germination time.
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "\\sigma_{GB}^2 = (1-Q_{ST})\\sigma_T^2"
},
{
"math_id": 1,
"text": "\\sigma _{GW}^2 = 2Q_{ST}\\sigma_T^2"
},
{
"math_id": 2,
"text": "Q_{ST} = \\frac{\\sigma_{GB}^2}{\\sigma_{GB}^2 + 2\\sigma_{GW}^2}"
}
] |
https://en.wikipedia.org/wiki?curid=63926753
|
63929722
|
Deformed Hermitian Yang–Mills equation
|
In mathematics and theoretical physics, and especially gauge theory, the deformed Hermitian Yang–Mills (dHYM) equation is a differential equation describing the equations of motion for a D-brane in the B-model (commonly called a B-brane) of string theory. The equation was derived by Mariño-Minasian-Moore-Strominger in the case of Abelian gauge group (the unitary group formula_0), and by Leung–Yau–Zaslow using mirror symmetry from the corresponding equations of motion for D-branes in the A-model of string theory.
Definition.
In this section we present the dHYM equation as explained in the mathematical literature by Collins-Xie-Yau. The deformed Hermitian–Yang–Mills equation is a fully non-linear partial differential equation for a Hermitian metric on a line bundle over a compact Kähler manifold, or more generally for a real formula_1-form. Namely, suppose formula_2 is a Kähler manifold and formula_3 is a class. The case of a line bundle consists of setting formula_4 where formula_5 is the first Chern class of a holomorphic line bundle formula_6. Suppose that formula_7 and consider the topological constant
formula_8
Notice that formula_9 depends only on the class of formula_10 and formula_11. Suppose that formula_12. Then this is a complex number
formula_13
for some real formula_14 and angle formula_15 which is uniquely determined.
Fix a smooth representative differential form formula_11 in the class formula_16. For a smooth function formula_17 write formula_18, and notice that formula_19. The deformed Hermitian Yang–Mills equation for formula_2 with respect to formula_16 is
formula_20
The second condition should be seen as a positivity condition on solutions to the first equation. That is, one looks for solutions to the equation formula_21 such that formula_22. This is in analogy to the related problem of finding Kähler-Einstein metrics by looking for metrics formula_23 solving the Einstein equation, subject to the condition that formula_24 is a Kähler potential (which is a positivity condition on the form formula_23).
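For concreteness, the following display spells the two conditions out in the lowest non-trivial dimension; it is nothing more than a direct expansion of the definition above, with complex dimension 2 assumed purely for illustration and the notation as in the definition.

```latex
% For n = 2 one has
%   (\omega + i\alpha_\phi)^2
%     = (\omega^2 - \alpha_\phi \wedge \alpha_\phi) + 2i\,\omega \wedge \alpha_\phi,
% so the deformed Hermitian Yang--Mills equation becomes
\cos\theta \,\bigl(2\,\omega \wedge \alpha_\phi\bigr)
   = \sin\theta \,\bigl(\omega^2 - \alpha_\phi \wedge \alpha_\phi\bigr),
% subject to the positivity condition
\cos\theta \,\bigl(\omega^2 - \alpha_\phi \wedge \alpha_\phi\bigr)
   + \sin\theta \,\bigl(2\,\omega \wedge \alpha_\phi\bigr) > 0 .
```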
Discussion.
Relation to Hermitian Yang–Mills equation.
The dHYM equations can be transformed in several ways to illuminate several key properties of the equations. First, simple algebraic manipulation shows that the dHYM equation may be equivalently written
formula_25
In this form, it is possible to see the relation between the dHYM equation and the regular Hermitian Yang–Mills equation. In particular, the dHYM equation should look like the regular HYM equation in the so-called large volume limit. Precisely, one replaces the Kähler form formula_10 by formula_26 for a positive integer formula_27, and allows formula_28. Notice that the phase formula_29 for formula_30 depends on formula_27. In fact, formula_31, and we can expand
formula_32
Here we see that
formula_33
and we see the dHYM equation for formula_26 takes the form
formula_34
for some topological constant formula_35 determined by formula_36. Thus we see the leading order term in the dHYM equation is
formula_37
which is just the HYM equation (replacing formula_11 by formula_38 if necessary).
Local form.
The dHYM equation may also be written in local coordinates. Fix formula_39 and holomorphic coordinates formula_40 such that at the point formula_41, we have
formula_42
Here formula_43 for all formula_44 as we assumed formula_11 was a real form. Define the Lagrangian phase operator to be
formula_45
Then simple computation shows that the dHYM equation in these local coordinates takes the form
formula_46
where formula_47. In this form one sees that the dHYM equation is fully non-linear and elliptic.
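The pointwise content of the local form can be made concrete numerically. The sketch below, with hypothetical matrices standing in for the forms at a single point, computes the Lagrangian phase as the sum of arctangents of the eigenvalues of formula_11 relative to formula_10; it only illustrates the definition and is not part of any solution method.

```python
import numpy as np


def lagrangian_phase(alpha, omega):
    """Lagrangian phase Theta_omega(alpha) = sum_j arctan(lambda_j), where the
    lambda_j are the eigenvalues of alpha relative to omega (both supplied as
    Hermitian matrices representing the (1,1)-forms at a single point)."""
    eigenvalues = np.linalg.eigvals(np.linalg.solve(omega, alpha))
    return float(np.sum(np.arctan(eigenvalues.real)))


# Hypothetical pointwise data: omega the identity, alpha diagonal.
omega = np.eye(2)
alpha = np.diag([0.5, 2.0])
print(lagrangian_phase(alpha, omega))  # arctan(0.5) + arctan(2.0), approximately pi/2
```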
Solutions.
It is possible to use algebraic geometry to study the existence of solutions to the dHYM equation, as demonstrated by the work of Collins–Jacob–Yau and Collins–Yau. Suppose that formula_48 is any analytic subvariety of dimension formula_41. Define the central charge formula_49 by
formula_50
When the dimension of formula_51 is 2, Collins–Jacob–Yau show that if formula_52, then there exists a solution of the dHYM equation in the class formula_53 if and only if for every curve formula_54 we have
formula_55
In the specific example where formula_56, the blow-up of complex projective space, Jacob-Sheu show that formula_16 admits a solution to the dHYM equation if and only if formula_57 and for any formula_48, we similarly have
formula_58
It has been shown by Gao Chen that in the so-called supercritical phase, where formula_59, algebraic conditions analogous to those above imply the existence of a solution to the dHYM equation. This is achieved through comparisons between the dHYM equation and the so-called J-equation in Kähler geometry. The J-equation appears as the small volume limit of the dHYM equation, where formula_10 is replaced by formula_60 for a small real number formula_61 and one allows formula_62.
In general it is conjectured that the existence of solutions to the dHYM equation for a class formula_63 should be equivalent to the Bridgeland stability of the line bundle formula_64. This is motivated both by comparisons with similar theorems in the non-deformed case, such as the famous Kobayashi–Hitchin correspondence, which asserts that solutions to the HYM equations exist if and only if the underlying bundle is slope stable, and by physical reasoning coming from string theory, which predicts that physically realistic B-branes (those admitting solutions to the dHYM equation, for example) should correspond to Π-stability.
Relation to string theory.
Superstring theory predicts that spacetime is 10-dimensional, consisting of a Lorentzian manifold of dimension 4 (usually assumed to be Minkowski space, de Sitter space, or anti-de Sitter space) along with a Calabi–Yau manifold formula_51 of dimension 6 (which therefore has complex dimension 3). In this string theory, open strings must satisfy Dirichlet boundary conditions at their endpoints. These conditions require that the endpoints of the string lie on so-called D-branes (D for Dirichlet), and there is much mathematical interest in describing these branes.
In the B-model of topological string theory, homological mirror symmetry suggests D-branes should be viewed as elements of the derived category of coherent sheaves on the Calabi–Yau 3-fold formula_51. This characterisation is abstract, and the case of primary importance, at least for the purpose of phrasing the dHYM equation, is when a B-brane consists of a holomorphic submanifold formula_65 and a holomorphic vector bundle formula_66 over it (here formula_67 would be viewed as the support of the coherent sheaf formula_68 over formula_51), possibly with a compatible Chern connection on the bundle.
This Chern connection arises from a choice of Hermitian metric formula_69 on formula_68, with corresponding connection formula_70 and curvature form formula_38. On the ambient spacetime there is also a B-field or Kalb–Ramond field formula_71 (not to be confused with the B in B-model), which is the string-theoretic analogue of a classical background electromagnetic field (hence the use of formula_71, which commonly denotes the magnetic field strength). Mathematically the B-field is a gerbe or bundle gerbe over spacetime, which means formula_71 consists of a collection of two-forms formula_72 for an open cover formula_73 of spacetime, but these forms may not agree on overlaps, where they must satisfy cocycle conditions in analogy with the transition functions of line bundles (0-gerbes). This B-field has the property that when pulled back along the inclusion map formula_74 the gerbe is trivial, which means the B-field may be identified with a globally defined two-form on formula_67, written formula_75. The differential form formula_11 discussed above is, in this context, given by formula_76, and studying the dHYM equations in the special case where formula_77, or equivalently formula_63, should be seen as "turning the B-field off" or setting formula_78, which in string theory corresponds to a spacetime with no background higher electromagnetic field.
The dHYM equation describes the equations of motion for this D-brane formula_79 in a spacetime equipped with a B-field formula_71, and is derived from the corresponding equations of motion for A-branes through mirror symmetry. Mathematically the A-model describes D-branes as elements of the Fukaya category of formula_51, special Lagrangian submanifolds of formula_51 equipped with a flat unitary line bundle over them, and the equations of motion for these A-branes are understood. In the above section the dHYM equation has been phrased for the D6-brane formula_80.
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "\\operatorname{U}(1)"
},
{
"math_id": 1,
"text": "(1,1)"
},
{
"math_id": 2,
"text": "(X,\\omega)"
},
{
"math_id": 3,
"text": "[\\alpha] \\in H^{1,1}(X,\\mathbb{R})"
},
{
"math_id": 4,
"text": "[\\alpha]=c_1(L)"
},
{
"math_id": 5,
"text": "c_1(L)"
},
{
"math_id": 6,
"text": "L\\to X"
},
{
"math_id": 7,
"text": "\\dim X = n"
},
{
"math_id": 8,
"text": "\\hat z([\\omega], [\\alpha]) = \\int_X (\\omega + i \\alpha)^n."
},
{
"math_id": 9,
"text": "\\hat z"
},
{
"math_id": 10,
"text": "\\omega"
},
{
"math_id": 11,
"text": "\\alpha"
},
{
"math_id": 12,
"text": "\\hat z\\ne 0"
},
{
"math_id": 13,
"text": "\\hat z([\\omega], [\\alpha]) = r e^{i \\theta}"
},
{
"math_id": 14,
"text": "r>0"
},
{
"math_id": 15,
"text": "\\theta\\in [0,2\\pi)"
},
{
"math_id": 16,
"text": "[\\alpha]"
},
{
"math_id": 17,
"text": "\\phi: X \\to \\mathbb{R}"
},
{
"math_id": 18,
"text": "\\alpha_{\\phi} = \\alpha + i \\partial \\bar \\partial \\phi"
},
{
"math_id": 19,
"text": "[\\alpha_{\\phi}] = [\\alpha]"
},
{
"math_id": 20,
"text": "\\begin{cases}\\operatorname{Im}(e^{-i\\theta} (\\omega + i \\alpha_{\\phi})^n) = 0\\\\ \n\\operatorname{Re}(e^{-i\\theta} (\\omega + i \\alpha_{\\phi})^n) > 0.\\end{cases}"
},
{
"math_id": 21,
"text": "\\operatorname{Im}(e^{-i\\theta} (\\omega + i \\alpha_{\\phi})^n) = 0"
},
{
"math_id": 22,
"text": "\\operatorname{Re}(e^{-i\\theta} (\\omega + i \\alpha_{\\phi})^n) > 0"
},
{
"math_id": 23,
"text": "\\omega + i \\partial \\bar \\partial \\phi"
},
{
"math_id": 24,
"text": "\\phi"
},
{
"math_id": 25,
"text": "\\operatorname{Im}((\\omega+i\\alpha)^n)=\\tan \\theta \\operatorname{Re}((\\omega+i\\alpha)^n)."
},
{
"math_id": 26,
"text": "k\\omega"
},
{
"math_id": 27,
"text": "k"
},
{
"math_id": 28,
"text": "k\\to \\infty"
},
{
"math_id": 29,
"text": "\\theta_k"
},
{
"math_id": 30,
"text": "(X,k\\omega,[\\alpha])"
},
{
"math_id": 31,
"text": "\\tan \\theta_k = O(k^{-1})"
},
{
"math_id": 32,
"text": "(k\\omega + i \\alpha)^n = k^n \\omega^n + i n k^{n-1} \\omega^{n-1}\\wedge \\alpha + O(k^{n-2})."
},
{
"math_id": 33,
"text": "\\operatorname{Re}((k\\omega + i \\alpha)^n) = k^n \\omega^n + O(k^{n-2}),\\quad \\operatorname{Im}((k\\omega + i \\alpha)^n) = nk^{n-1} \\omega^{n-1}\\wedge \\alpha + O(k^{n-3}),"
},
{
"math_id": 34,
"text": "C k^{n-1} \\omega^n + O(k^{n-3}) = n k^{n-1} \\omega^{n-1} \\wedge \\alpha + O(k^{n-3})"
},
{
"math_id": 35,
"text": "C"
},
{
"math_id": 36,
"text": "\\tan \\theta"
},
{
"math_id": 37,
"text": "n\\omega^{n-1}\\wedge \\alpha = C \\omega^n"
},
{
"math_id": 38,
"text": "F(h)"
},
{
"math_id": 39,
"text": "p\\in X"
},
{
"math_id": 40,
"text": "(z^1,\\dots,z^n)"
},
{
"math_id": 41,
"text": "p"
},
{
"math_id": 42,
"text": "\\omega = \\sum_{j=1}^n i dz^j \\wedge d\\bar z^j,\\quad \\alpha = \\sum_{j=1}^n \\lambda_j i dz^j \\wedge d\\bar z^j."
},
{
"math_id": 43,
"text": "\\lambda_j \\in \\mathbb{R}"
},
{
"math_id": 44,
"text": "j"
},
{
"math_id": 45,
"text": "\\Theta_{\\omega}(\\alpha) = \\sum_{j=1}^n \\arctan(\\lambda_j)."
},
{
"math_id": 46,
"text": "\\Theta_{\\omega}(\\alpha) = \\phi"
},
{
"math_id": 47,
"text": "\\phi = \\theta\\mod 2\\pi"
},
{
"math_id": 48,
"text": "V\\subset X"
},
{
"math_id": 49,
"text": "Z_V([\\alpha])"
},
{
"math_id": 50,
"text": "Z_V([\\alpha]) = -\\int_V e^{-i\\omega + \\alpha}."
},
{
"math_id": 51,
"text": "X"
},
{
"math_id": 52,
"text": "\\operatorname{Im}(Z_X([\\alpha]))>0"
},
{
"math_id": 53,
"text": "[\\alpha]\\in H^{1,1}(X,\\mathbb{R})"
},
{
"math_id": 54,
"text": "C\\subset X"
},
{
"math_id": 55,
"text": "\\operatorname{Im}\\left(\\frac{Z_C([\\alpha])}{Z_X([\\alpha])}\\right)>0."
},
{
"math_id": 56,
"text": "X=\\operatorname{Bl}_p \\mathbb{CP}^n"
},
{
"math_id": 57,
"text": "Z_X([\\alpha])\\ne 0"
},
{
"math_id": 58,
"text": "\\operatorname{Im}\\left(\\frac{Z_V([\\alpha])}{Z_X([\\alpha])}\\right)>0."
},
{
"math_id": 59,
"text": "\\frac{(n-2)\\pi}{2} < \\theta < \\frac{n\\pi}{2}"
},
{
"math_id": 60,
"text": "\\varepsilon \\omega"
},
{
"math_id": 61,
"text": "\\varepsilon>0"
},
{
"math_id": 62,
"text": "\\epsilon\\to 0"
},
{
"math_id": 63,
"text": "[\\alpha] = c_1(L)"
},
{
"math_id": 64,
"text": "L"
},
{
"math_id": 65,
"text": "Y\\subset X"
},
{
"math_id": 66,
"text": "E\\to Y"
},
{
"math_id": 67,
"text": "Y"
},
{
"math_id": 68,
"text": "E"
},
{
"math_id": 69,
"text": "h"
},
{
"math_id": 70,
"text": "\\nabla"
},
{
"math_id": 71,
"text": "B"
},
{
"math_id": 72,
"text": "B_i \\in \\Omega^2(U_i)"
},
{
"math_id": 73,
"text": "U_i"
},
{
"math_id": 74,
"text": "\\iota: Y \\to X"
},
{
"math_id": 75,
"text": "\\beta"
},
{
"math_id": 76,
"text": "\\alpha = F(h) + \\beta"
},
{
"math_id": 77,
"text": "\\alpha = F(h)"
},
{
"math_id": 78,
"text": "\\beta = 0"
},
{
"math_id": 79,
"text": "(Y,E)"
},
{
"math_id": 80,
"text": "Y=X"
}
] |
https://en.wikipedia.org/wiki?curid=63929722
|
6393146
|
Deceleration parameter
|
The deceleration parameter formula_0 in cosmology is a dimensionless measure of the cosmic acceleration of the expansion of space in a Friedmann–Lemaître–Robertson–Walker universe. It is defined by:
formula_1
where formula_2 is the scale factor of the universe and the dots indicate derivatives by proper time. The expansion of the universe is said to be "accelerating" if formula_3 (recent measurements suggest it is), and in this case the deceleration parameter will be negative. The minus sign and the name "deceleration parameter" are historical; at the time of definition formula_4 was expected to be negative, so a minus sign was inserted in the definition to make formula_0 positive in that case. Since the evidence for the accelerating universe emerged in the 1998–2003 era, it is now believed that formula_4 is positive, and therefore the present-day value formula_5 is negative (though formula_6 was positive in the past, before dark energy became dominant). In general formula_6 varies with cosmic time, except in a few special cosmological models; the present-day value is denoted formula_5.
The Friedmann acceleration equation can be written as
formula_7
where the sum formula_8 extends over the different components, matter, radiation and dark energy, formula_9 is the equivalent mass density of each component, formula_10 is its pressure, and formula_11 is the equation of state for each component. The value of formula_12 is 0 for non-relativistic matter (baryons and dark matter), 1/3 for radiation, and −1 for a cosmological constant; for more general dark energy it may differ from −1, in which case it is denoted formula_13 or simply formula_14.
Defining the critical density as
formula_15
and the density parameters formula_16, substituting formula_17 in the acceleration equation gives
formula_18
where the density parameters are at the relevant cosmic epoch.
At the present day formula_19 is negligible, and if formula_20 (cosmological constant) this simplifies to
formula_21
where the density parameters are present-day values; with ΩΛ + Ωm ≈ 1, ΩΛ ≈ 0.7 and Ωm ≈ 0.3, this evaluates to formula_22 for the parameters estimated from the Planck spacecraft data. (Note that the CMB, as a high-redshift measurement, does not directly measure formula_5; rather, its value can be inferred by fitting cosmological models to the CMB data and then calculating formula_5 from the other measured parameters as above.)
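As a check on the arithmetic, the simplified expression above can be evaluated directly; the rounded density parameters below are taken from the text, and the result reproduces the quoted value.

```python
def q_now(omega_m, omega_lambda, omega_rad=0.0, w_de=-1.0):
    """Present-day deceleration parameter from the density parameters and the
    dark-energy equation of state: q = Omega_rad + Omega_m/2 + (1+3w)/2 * Omega_DE."""
    return omega_rad + 0.5 * omega_m + 0.5 * (1.0 + 3.0 * w_de) * omega_lambda


# With Omega_Lambda = 0.7 and Omega_m = 0.3 (and a cosmological constant, w = -1):
print(q_now(omega_m=0.3, omega_lambda=0.7))  # -0.55
```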
The time derivative of the Hubble parameter can be written in terms of the deceleration parameter:
formula_23
Except in the speculative case of phantom energy (which violates all the energy conditions), all postulated forms of mass-energy yield a deceleration parameter formula_24 Thus, any non-phantom universe should have a decreasing Hubble parameter, except in the case of the distant future of a Lambda-CDM model, where formula_0 will tend to −1 from above and the Hubble parameter will asymptote to a constant value of formula_25.
The above results imply that the universe would be decelerating for any cosmic fluid with equation of state formula_26 greater than formula_27 (any fluid satisfying the strong energy condition does so, as does any form of matter present in the Standard Model, but excluding inflation). However observations of distant type Ia supernovae indicate that formula_0 is negative; the expansion of the universe is accelerating. This is an indication that the gravitational attraction of matter, on the cosmological scale, is more than counteracted by the negative pressure of dark energy, in the form of either quintessence or a positive cosmological constant.
Before the first indications of an accelerating universe, in 1998, it was thought that the universe was dominated by matter with negligible pressure, formula_28 This implied that the deceleration parameter would be equal to formula_29, e.g. formula_30 for a universe with formula_31, or formula_32 for a low-density zero-Lambda model. The experimental effort to discriminate between these cases with supernovae actually revealed negative formula_33, evidence for cosmic acceleration, which has subsequently grown stronger.
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "q"
},
{
"math_id": 1,
"text": "q \\ \\stackrel{\\mathrm{def}}{=}\\ -\\frac{\\ddot{a} a }{\\dot{a}^2}"
},
{
"math_id": 2,
"text": "a"
},
{
"math_id": 3,
"text": "\\ddot{a} > 0"
},
{
"math_id": 4,
"text": "\\ddot{a}"
},
{
"math_id": 5,
"text": "q_0"
},
{
"math_id": 6,
"text": " q "
},
{
"math_id": 7,
"text": "\\frac{\\ddot{a}}{a} =-\\frac{4 \\pi G}{3} \\sum_i (\\rho_i +\\frac{3\\,p_i}{c^2})= -\\frac{4\\pi G}{3} \\sum_i \\rho_i (1 + 3 w_i), "
},
{
"math_id": 8,
"text": "i"
},
{
"math_id": 9,
"text": "\\rho_i"
},
{
"math_id": 10,
"text": "p_i"
},
{
"math_id": 11,
"text": "w_i = p_i/(\\rho_i c^2) "
},
{
"math_id": 12,
"text": "w_i"
},
{
"math_id": 13,
"text": "w_{DE}"
},
{
"math_id": 14,
"text": " w "
},
{
"math_id": 15,
"text": " \\rho_{c} = \\frac{3 H^2}{8 \\pi G} "
},
{
"math_id": 16,
"text": " \\Omega_i \\equiv \\rho_i / \\rho_c "
},
{
"math_id": 17,
"text": "\\rho_i = \\Omega_i\\,\\rho_c"
},
{
"math_id": 18,
"text": "q= \\frac{1}{2} \\sum \\Omega_i (1+3w_i) = \\Omega_\\text{rad}(z) +\\frac{1}{2}\\Omega_m(z) + \\frac{1+3w_\\text{DE} }{2} \\Omega_\\text{DE}(z) \\ . "
},
{
"math_id": 19,
"text": " \\Omega_\\text{rad} \\sim 10^{-4} "
},
{
"math_id": 20,
"text": " w_{DE} = -1 "
},
{
"math_id": 21,
"text": " q_0 = \\frac{1}{2} \\Omega_m - \\Omega_\\Lambda . "
},
{
"math_id": 22,
"text": " q_0 \\approx -0.55 "
},
{
"math_id": 23,
"text": "\\frac{\\dot{H}}{H^2}=-(1+q)."
},
{
"math_id": 24,
"text": "q \\geqslant -1."
},
{
"math_id": 25,
"text": " H_0 \\sqrt{\\Omega_\\Lambda} "
},
{
"math_id": 26,
"text": "w"
},
{
"math_id": 27,
"text": "-\\tfrac{1}{3}"
},
{
"math_id": 28,
"text": "w \\approx 0."
},
{
"math_id": 29,
"text": " \\Omega_m/2 "
},
{
"math_id": 30,
"text": " q_0 = 1/2 "
},
{
"math_id": 31,
"text": " \\Omega_m = 1 "
},
{
"math_id": 32,
"text": " q_0 \\sim 0.1 "
},
{
"math_id": 33,
"text": " q_0 \\sim -0.6 \\pm 0.2 "
}
] |
https://en.wikipedia.org/wiki?curid=6393146
|
63933490
|
Blackmer RMS detector
|
The Blackmer RMS detector is an electronic true RMS converter invented by David E. Blackmer in 1971. The Blackmer detector, coupled with the Blackmer gain cell, forms the core of the dbx noise reduction system and various professional audio signal processors developed by dbx, Inc.
Unlike earlier RMS detectors that time-averaged the algebraic square of the input signal, the Blackmer detector performs time-averaging on the logarithm of the input, being the first successful, commercialized instance of a log-domain filter. The circuit, created by trial and error, computes the root mean square of various waveforms with high precision, although the exact nature of its operation was not known to the inventor. The first mathematical analysis of log-domain filtering and a mathematical proof of Blackmer's invention were proposed by Robert Adams in 1979; a general log-domain filter synthesis theory was developed by Douglas Frey in 1993.
Operation.
Root mean square (RMS), defined as the square root of the mean square of input signal over time, is a useful metric of alternating currents. Unlike peak value or average value, RMS is directly related to energy, being equivalent to the direct current that would be required to get the same heating effect. In audio applications, RMS is the only metric directly related to perceived loudness, being insensitive to the phase of harmonics in complex waveforms. Magnetic recording and playback inevitably shifts phases of harmonics; a true RMS converter will not react to such phase shift. Simpler peak detectors or average detectors, on the contrary, respond to changes in phase with changing output values, although energy level and loudness remain unchanged. For this reason David Blackmer, designer of dbx noise reduction system, needed a cost-efficient precision RMS detector compatible with the Blackmer gain cell. The latter had an exponential control characteristic, so a suitable detector had to have logarithmic output.
Contemporary electronic RMS detectors had "normal", linear outputs, and were built exactly following the definition of RMS. The detector would compute the square of the input signal, time-average the square using a low-pass filter or an integrator, and then compute the square root of that average to produce a linear, not logarithmic, output. Analog computation of squares and square roots was performed using either expensive variable-transconductance analog multipliers (which remain expensive in the 21st century) or simpler and cheaper logarithmic converters employing the exponential current-voltage characteristic of a bipolar transistor. Thermal RMS conversion was too slow for audio purposes; electronic RMS detectors worked fine in measurement instruments, but their dynamic range was too narrow for professional audio, precisely because they operated on squares of the input signal, which take up twice its dynamic range.
Blackmer reasoned that the log-antilog detector could be simplified by moving the processing into the log domain, omitting physical squaring of the input signal and thus retaining its full dynamic range. Squaring and taking square roots in the log domain is very cheap, being simple scaling by a factor of 2 or 1/2. However, simple linear filters do not work in the log domain, producing incorrect, irrelevant output. Correct time-averaging required nonlinear filters of a then-unknown topology. Blackmer proposed a simple replacement of the resistor in an RC network with a silicon diode biased with a fixed idle current. Since the small-signal conductance of such a diode is proportional to the current through it, changing this current controls the settling time of the detector. The cutoff frequency of this first-order filter equals<br>
formula_0,
where formula_1 is the thermal voltage (hence the frequency shifts with temperature). The equation is valid for a range of idle currents over , allowing a wide tuning opportunity. The circuit has fast attack and slow decay, which are locked to each other and cannot be adjusted separately. The logarithmic output voltage is proportional to the mean of the square at a rate of around 3 mV/dB, and to the RMS at around 6 mV/dB.
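A minimal numerical sketch of the cutoff-frequency expression above is given below; the thermal voltage is taken as approximately 26 mV at room temperature, and the component values are hypothetical rather than taken from any dbx design.

```python
import math

PHI_T = 0.026  # thermal voltage in volts, approximately 26 mV at room temperature


def cutoff_hz(idle_current, capacitance):
    """Cutoff frequency of the log-domain first-order averaging filter,
    f_c = (1 / (2*pi)) * I_idle / (phi_t * C)."""
    return idle_current / (2 * math.pi * PHI_T * capacitance)


# Hypothetical component values chosen only for illustration:
print(f"{cutoff_hz(idle_current=1e-6, capacitance=10e-6):.2f} Hz")  # about 0.6 Hz
```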
When the crude test circuit was built, Blackmer and his associates did not expect it to work as a true RMS detector, but it did. According to Robert Adams, it "seemed to behave ideally", and rigorous tests with various waveforms confirmed ideal RMS performance. The circuit was absolutely insensitive to phase shifts in the input signal. It was immediately patented and employed in dbx, Inc. professional audio processors. No one in the company, including Blackmer, could explain why it worked at all until 1977, when Robert Adams began work on a proper mathematical proof of its RMS compliance. Adams tried to extend the log-domain concept to the Sallen–Key topology and failed. He published his thesis in 1979, and was later credited as the inventor of the log-domain filter concept, but the idea remained unknown to the general public until the pioneering 1993 work by Douglas Frey.
References.
<templatestyles src="Reflist/styles.css" />
Bibliography.
<templatestyles src="Refbegin/styles.css" />
|
[
{
"math_id": 0,
"text": "f_c = \\frac { 1 } { 2 \\pi } ( \\frac { I_{idle} } { \\phi_t C } )"
},
{
"math_id": 1,
"text": "\\phi_t"
}
] |
https://en.wikipedia.org/wiki?curid=63933490
|
63934091
|
Bridgeland stability condition
|
In mathematics, and especially algebraic geometry, a Bridgeland stability condition, defined by Tom Bridgeland, is an algebro-geometric stability condition defined on elements of a triangulated category. The case of original interest and particular importance is when this triangulated category is the derived category of coherent sheaves on a Calabi–Yau manifold, and this situation has fundamental links to string theory and the study of D-branes.
Such stability conditions were introduced in a rudimentary form by Michael Douglas, under the name formula_0-stability, and used to study BPS B-branes in string theory. This concept was made precise by Bridgeland, who phrased these stability conditions categorically and initiated their study mathematically.
Definition.
The definitions in this section are presented as in the original paper of Bridgeland, for arbitrary triangulated categories. Let formula_1 be a triangulated category.
Slicing of triangulated categories.
A slicing formula_2 of formula_1 is a collection of full additive subcategories formula_3 for each formula_4 such that: formula_5 for all formula_6, where formula_7 denotes the shift functor; formula_11 whenever formula_8, formula_9 and formula_10; and every non-zero object formula_12 admits a finite sequence of real numbers formula_13 and a corresponding filtration by distinguished triangles whose factors satisfy formula_14 for all formula_15.
The last property should be viewed as axiomatically imposing the existence of Harder–Narasimhan filtrations on elements of the category formula_1.
Stability conditions.
A Bridgeland stability condition on a triangulated category formula_1 is a pair formula_16 consisting of a slicing formula_2 and a group homomorphism formula_17, where formula_18 is the Grothendieck group of formula_1, called a central charge, satisfying the following compatibility condition: for every formula_19, one has formula_20 where formula_21.
It is convention to assume the category formula_1 is essentially small, so that the collection of all stability conditions on formula_1 forms a set formula_22. In good circumstances, for example when formula_23 is the derived category of coherent sheaves on a complex manifold formula_24, this set actually has the structure of a complex manifold itself.
Technical remarks about stability condition.
It is shown by Bridgeland that the data of a Bridgeland stability condition is equivalent to specifying a bounded t-structure formula_25 on the category formula_1 and a central charge formula_26 on the heart formula_27 of this t-structure which satisfies the Harder–Narasimhan property above.
An element formula_28 is semi-stable (resp. stable) with respect to the stability condition formula_16 if for every surjection formula_29 for formula_30, we have formula_31 where formula_32 and similarly for formula_33.
Examples.
From the Harder–Narasimhan filtration.
Recall that the Harder–Narasimhan filtration for a smooth projective curve formula_24 implies that for any coherent sheaf formula_34 there is a filtration formula_35 such that the factors formula_36 have slope formula_37. We can extend this filtration to a bounded complex of sheaves formula_38 by considering the filtration on the cohomology sheaves formula_39 and defining the slope of each factor by formula_40, giving a function formula_41 for the central charge.
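To illustrate the ordering by phase, the sketch below uses the central charge Z(E) = −deg(E) + i·rank(E) on a smooth projective curve, a commonly used example compatible with the slope ordering that is assumed here rather than stated above, together with hypothetical (degree, rank) data for the Harder–Narasimhan factors.

```python
import cmath


def phase(deg, rank):
    """Phase phi(E) = (1/pi) * arg Z(E) for the commonly used central charge
    Z(E) = -deg(E) + i*rank(E) on a smooth projective curve (an assumed example;
    other central charges are possible)."""
    return cmath.phase(complex(-deg, rank)) / cmath.pi


# Hypothetical Harder-Narasimhan factors, given as (degree, rank) pairs with
# strictly decreasing slope deg/rank:
factors = [(3, 1), (1, 2), (-2, 1)]
print([round(phase(d, r), 3) for d, r in factors])  # strictly decreasing phases
```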
Elliptic curves.
There is an analysis by Bridgeland for the case of elliptic curves. He finds that there is an equivalence formula_42 where formula_43 is the set of stability conditions and formula_44 is the group of autoequivalences of the derived category formula_45.
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "\\Pi"
},
{
"math_id": 1,
"text": "\\mathcal{D}"
},
{
"math_id": 2,
"text": "\\mathcal{P}"
},
{
"math_id": 3,
"text": "\\mathcal{P}(\\varphi)"
},
{
"math_id": 4,
"text": "\\varphi\\in \\mathbb{R}"
},
{
"math_id": 5,
"text": "\\mathcal{P}(\\varphi)[1] = \\mathcal{P}(\\varphi+1)"
},
{
"math_id": 6,
"text": "\\varphi"
},
{
"math_id": 7,
"text": "[1]"
},
{
"math_id": 8,
"text": "\\varphi_1 > \\varphi_2"
},
{
"math_id": 9,
"text": "A\\in \\mathcal{P}(\\varphi_1)"
},
{
"math_id": 10,
"text": "B\\in \\mathcal{P}(\\varphi_2)"
},
{
"math_id": 11,
"text": "\\operatorname{Hom}(A,B)=0"
},
{
"math_id": 12,
"text": "E\\in \\mathcal{D}"
},
{
"math_id": 13,
"text": "\\varphi_1>\\varphi_2>\\cdots>\\varphi_n"
},
{
"math_id": 14,
"text": "A_i\\in \\mathcal{P}(\\varphi_i)"
},
{
"math_id": 15,
"text": "i"
},
{
"math_id": 16,
"text": "(Z,\\mathcal{P})"
},
{
"math_id": 17,
"text": "Z: K(\\mathcal{D}) \\to \\mathbb{C}"
},
{
"math_id": 18,
"text": "K(\\mathcal{D})"
},
{
"math_id": 19,
"text": "0\\ne E\\in \\mathcal{P}(\\varphi)"
},
{
"math_id": 20,
"text": "Z(E) = m(E) \\exp(i\\pi \\varphi)"
},
{
"math_id": 21,
"text": "m(E) \\in \\mathbb{R}_{> 0}"
},
{
"math_id": 22,
"text": "\\operatorname{Stab}(\\mathcal{D})"
},
{
"math_id": 23,
"text": "\\mathcal{D} = \\mathcal{D}^b \\operatorname{Coh}(X)"
},
{
"math_id": 24,
"text": "X"
},
{
"math_id": 25,
"text": "\\mathcal{P}(>0)"
},
{
"math_id": 26,
"text": "Z: K(\\mathcal{A})\\to \\mathbb{C}"
},
{
"math_id": 27,
"text": "\\mathcal{A} = \\mathcal{P}((0,1])"
},
{
"math_id": 28,
"text": "E\\in\\mathcal{A}"
},
{
"math_id": 29,
"text": "E \\to F"
},
{
"math_id": 30,
"text": "F\\in \\mathcal{A}"
},
{
"math_id": 31,
"text": "\\varphi(E) \\le (\\text{resp.}<) \\, \\varphi(F)"
},
{
"math_id": 32,
"text": "Z(E) = m(E) \\exp(i\\pi \\varphi(E))"
},
{
"math_id": 33,
"text": "F"
},
{
"math_id": 34,
"text": "E"
},
{
"math_id": 35,
"text": "0 = E_0 \\subset E_1 \\subset \\cdots \\subset E_n = E"
},
{
"math_id": 36,
"text": "E_j/E_{j-1}"
},
{
"math_id": 37,
"text": "\\mu_i=\\text{deg}/\\text{rank}"
},
{
"math_id": 38,
"text": "E^\\bullet"
},
{
"math_id": 39,
"text": "E^i = H^i(E^\\bullet)[+i]"
},
{
"math_id": 40,
"text": "E^i_j = \\mu_i + j"
},
{
"math_id": 41,
"text": "\\phi : K(X) \\to \\mathbb{R}"
},
{
"math_id": 42,
"text": "\\text{Stab}(X)/\\text{Aut}(X) \\cong \\text{GL}^+(2,\\mathbb{R})/\\text{SL}(2,\\mathbb{Z})"
},
{
"math_id": 43,
"text": "\\text{Stab}(X)"
},
{
"math_id": 44,
"text": "\\text{Aut}(X)"
},
{
"math_id": 45,
"text": "D^b(X)"
}
] |
https://en.wikipedia.org/wiki?curid=63934091
|
6394087
|
Revelation principle
|
The revelation principle is a fundamental result in mechanism design, social choice theory, and game theory which shows it is always possible to design a strategy-resistant implementation of a social decision-making mechanism (such as an electoral system or market). It can be seen as a kind of mirror image to Gibbard's theorem. The revelation principle says that if a social choice function can be implemented with some non-honest mechanism—one where players have an incentive to lie—the same function can be implemented by an incentive-compatible (honesty-promoting) mechanism with the same equilibrium outcome (payoffs).
The revelation principle shows that, while Gibbard's theorem proves it is impossible to design a system that will always be fully invulnerable to strategy (if we do not know how players will behave), it "is" possible to design a system that encourages honesty given a solution concept (if the corresponding equilibrium is unique).
The idea behind the revelation principle is that, if we know which strategy the players in a game will use, we can simply ask all the players to submit their true payoffs or utility functions; then, we take those preferences and calculate each voter's optimal strategy before executing it for them. This procedure means that an honest report of preferences is now the best-possible strategy, because it guarantees the mechanism will play the optimal strategy for the player.
Examples.
Consider the following example. There is a certain item that Alice values as formula_0 and Bob values as formula_1. The government needs to decide who will receive that item and on what terms.
Proof.
Suppose we have an arbitrary mechanism Mech that implements Soc.
We construct a direct mechanism Mech' that is truthful and implements Soc.
Mech' simply simulates the equilibrium strategies of the players in Game(Mech): it asks the players to report their valuations, computes the actions they would take under their equilibrium strategies in Game(Mech) given those valuations, plays those actions on their behalf, and returns the resulting outcome.
Reporting the true valuations in Mech' is like playing the equilibrium strategies in Mech. Hence, reporting the true valuations is a Nash equilibrium in Mech', as desired. Moreover, the equilibrium payoffs are the same, as desired.
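A minimal sketch of this construction in code may make it concrete. The names and the toy bidding rule below are hypothetical; the only point is that the direct mechanism collects type reports and replays the assumed equilibrium strategies of the original mechanism, so a truthful report reproduces the original equilibrium outcome.

```python
def make_direct_mechanism(mechanism, equilibrium_strategies):
    """Given an indirect mechanism (a function from action profiles to outcomes) and
    the players' equilibrium strategies (functions from a player's true type to an
    action), return the direct mechanism that asks each player for a type report and
    plays the corresponding equilibrium action on that player's behalf."""
    def direct_mechanism(reported_types):
        actions = [s(t) for s, t in zip(equilibrium_strategies, reported_types)]
        return mechanism(actions)
    return direct_mechanism


# Toy illustration (all names hypothetical): an auction-like rule in which, by
# assumption, each player's equilibrium strategy is to bid half of their valuation.
def award_to_highest_bid(bids):
    winner = max(range(len(bids)), key=lambda i: bids[i])
    return {"winner": winner, "payment": bids[winner]}


strategies = [lambda v: v / 2, lambda v: v / 2]
direct = make_direct_mechanism(award_to_highest_bid, strategies)
print(direct([10, 6]))  # same outcome as the equilibrium play of the original game
```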
Finding solutions.
In mechanism design, the revelation principle is of great importance in finding solutions. The researcher need only look at the set of equilibria characterized by incentive compatibility. That is, if the mechanism designer wants to implement some outcome or property, they can restrict their search to mechanisms in which agents are willing to reveal their private information to the mechanism designer and that have that outcome or property. If no such direct and truthful mechanism exists, then by contraposition no mechanism can implement this outcome. By narrowing the search space in this way, the problem of finding a mechanism becomes much easier.
Variants.
The principle comes in various flavors corresponding to different kinds of incentive-compatibility: in the dominant-strategy version, any social choice function that can be implemented in dominant strategies can also be implemented by a truthful (strategyproof) direct mechanism, while in the Bayesian–Nash version, any social choice function that can be implemented in Bayesian–Nash equilibrium can also be implemented by a Bayesian incentive-compatible direct mechanism.
The revelation principle also works for correlated equilibria: for every arbitrary "coordinating device", also known as a correlating device, there exists another direct device for which the state space equals the action space of each player. The coordination is then done by directly informing each player of his action.
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "v_A"
},
{
"math_id": 1,
"text": "v_B"
},
{
"math_id": 2,
"text": "v_B>v_A"
},
{
"math_id": 3,
"text": "(v_A,v_B)"
}
] |
https://en.wikipedia.org/wiki?curid=6394087
|
6394160
|
Embedded atom model
|
In computational chemistry and computational physics, the embedded atom model, embedded-atom method or EAM, is an approximation describing the energy between atoms
and is a type of interatomic potential. The energy is a function of a sum of functions of the separation between an atom and its neighbors. In the original model, by Murray Daw and Mike Baskes, the latter functions represent the electron density. The EAM is related to the second moment approximation to tight binding theory, also known as the Finnis-Sinclair model. These models are particularly appropriate for metallic systems. Embedded-atom methods are widely used in molecular dynamics simulations.
Model simulation.
In a simulation, the potential energy of an atom, formula_0, is given by
formula_1,
where formula_2 is the distance between atoms formula_0 and formula_3, formula_4 is a pair-wise potential function, formula_5 is the contribution to the electron charge density from atom formula_3 of type formula_6 at the location of atom formula_0, and formula_7 is an embedding function that represents the energy required to place atom formula_0 of type formula_8 into the electron cloud.
Since the electron cloud density is a summation over many atoms, usually limited by a cutoff radius, the EAM potential is a multibody potential. For a single-element system of atoms, three scalar functions must be specified: the embedding function, a pair-wise interaction, and an electron cloud contribution function. For a binary alloy, the EAM potential requires seven functions: three pair-wise interactions (A-A, A-B, B-B), two embedding functions, and two electron cloud contribution functions. Generally these functions are provided in a tabulated format and interpolated by cubic splines.
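The bookkeeping implied by the energy expression above can be sketched in a few lines of Python. Everything below, including the functional forms, is hypothetical and serves only to show how the pair term, the host electron density, and the embedding function combine; a real implementation would use tabulated, spline-interpolated functions and neighbor lists.

```python
import numpy as np


def eam_energy(positions, pair_potential, density_fn, embedding_fn, cutoff):
    """Total EAM energy of a single-element system:
    E = sum_i F(sum_{j != i} rho(r_ij)) + (1/2) * sum_{i != j} phi(r_ij)."""
    n = len(positions)
    energy = 0.0
    for i in range(n):
        host_density = 0.0
        for j in range(n):
            if i == j:
                continue
            r = np.linalg.norm(positions[i] - positions[j])
            if r < cutoff:
                host_density += density_fn(r)       # electron density at atom i from atom j
                energy += 0.5 * pair_potential(r)   # half, since each pair is visited twice
        energy += embedding_fn(host_density)        # energy to embed atom i in that density
    return energy


# Illustrative, made-up functional forms and geometry:
positions = np.array([[0.0, 0.0, 0.0], [2.5, 0.0, 0.0], [0.0, 2.5, 0.0]])
phi = lambda r: 0.1 * (2.5 / r) ** 6   # hypothetical pair repulsion
rho = lambda r: np.exp(-r)             # hypothetical density contribution
F = lambda d: -np.sqrt(d)              # hypothetical embedding function
print(eam_energy(positions, phi, rho, F, cutoff=5.0))
```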
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "i"
},
{
"math_id": 1,
"text": "E_i = F_\\alpha\\left(\\sum_{j\\neq i} \\rho_\\beta (r_{ij}) \\right) + \\frac{1}{2} \\sum_{j\\neq i} \\phi_{\\alpha\\beta}(r_{ij})"
},
{
"math_id": 2,
"text": "r_{ij}"
},
{
"math_id": 3,
"text": "j"
},
{
"math_id": 4,
"text": "\\phi_{\\alpha\\beta}"
},
{
"math_id": 5,
"text": "\\rho_\\beta"
},
{
"math_id": 6,
"text": "\\beta"
},
{
"math_id": 7,
"text": "F"
},
{
"math_id": 8,
"text": "\\alpha"
}
] |
https://en.wikipedia.org/wiki?curid=6394160
|
6394169
|
Mental age
|
Concept relating to intelligence
Mental age is a concept related to intelligence. It looks at how a specific individual, at a specific age, performs intellectually, compared to average intellectual performance for that individual's actual chronological age (i.e. time elapsed since birth). The intellectual performance is based on performance in tests and live assessments by a psychologist. The score achieved by the individual is compared to the median average scores at various ages, and the mental age ("x", say) is derived such that the individual's score equates to the average score at age "x".
However, mental age depends on what kind of intelligence is measured. For instance, a child's intellectual age can be average for their actual age, while the same child's emotional intelligence is immature for their physical age. Psychologists often remark that girls are more emotionally mature than boys around the age of puberty. Likewise, an intellectually gifted six-year-old child can remain at the level of a three-year-old in terms of emotional maturity. Mental age can be considered a controversial concept.
History.
Early theories.
During much of the 19th century, theories of intelligence focused on measuring the size of human skulls. Anthropologists well known for their attempts to correlate cranial size and capacity with intellectual potential were Samuel Morton and Paul Broca.
The modern theories of intelligence began to emerge along with experimental psychology. This was when much of psychology was moving from a philosophical basis to one grounded in biology and medical science. In 1890, James Cattell published what some consider the first "mental test". Cattell was more focused on heredity than on environment. This spurred much of the debate about the nature of intelligence.
Mental age was first defined by the French psychologist Alfred Binet, who introduced the Binet-Simon Intelligence Test in 1905 with the assistance of Theodore Simon. Binet's experiments on French schoolchildren laid the framework for future experiments into the mind throughout the 20th century. He created an experiment designed as a test to be completed quickly, taken by children of various ages. In general, older children performed better on these tests than younger ones. However, children who exceeded the average of their age group were said to have a higher "mental age", and those who performed below that average were deemed to have a lower "mental age". Binet's theories suggested that while mental age was a useful indicator, it was by no means fixed permanently, and individual growth or decline could be attributed to changes in teaching methods and experiences.
Henry Herbert Goddard was the first psychologist to bring Binet's test to the United States. He was one of the many psychologists in the 1910s who believed intelligence was a fixed quantity. While Binet believed this was not true, the majority of those in the USA believed it was hereditary.
Modern theories.
The limitations of the Stanford-Binet caused David Wechsler to publish the Wechsler Adult Intelligence Scale (WAIS) in 1955. Both tests were later split into separate versions for children. The WAIS-IV is the current edition of the test for adults. The purpose of this test was to score the individual against others of the same age group, rather than scoring by chronological age and mental age. The fixed average is 100 and the normal range is between 85 and 115. This standard is currently used in the Stanford-Binet test as well.
Recent studies showed that mental age and biological age are connected.
Mental age and IQ.
Modern intelligence tests, such as the current Stanford-Binet test, no longer compute the IQ using the above "ratio IQ" formula. Instead, the results of several different standardized tests are combined to derive a score. This score reflects how far the person's performance deviates from the average performance of others who are the same age, arbitrarily defined as an average score of 100. An individual's "deviation IQ" is then estimated, using a more complicated formula or table, from their score's percentile at their chronological age. But at least as recently as 2007, older tests using ratio IQs were sometimes still used for a child whose percentile was too high for this to be precise, or whose abilities may exceed a deviation IQ test's ceiling.
A child's IQ can be roughly estimated using the formula: formula_0
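As a direct transcription of that historical ratio formula, a trivial sketch (names illustrative):

```python
def ratio_iq(mental_age, chronological_age):
    """Historical ratio IQ: 100 * mental age / chronological age."""
    return 100.0 * mental_age / chronological_age


print(ratio_iq(mental_age=12, chronological_age=10))  # 120.0
```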
Controversy.
Measures such as mental age and IQ have limitations. Binet did not believe these measures represented a single, permanent, and inborn level of intelligence. He stressed that intelligence overall is too broad to be represented by a single number. It is influenced by many factors such as the individual's background, and it changes over time.
Throughout much of the 20th century, many psychologists believed intelligence was fixed and hereditary while others believed other factors would affect intelligence.
After World War I, the concept of intelligence as fixed, hereditary, and unchangeable became the dominant theory within the experimental psychological community. By the mid-1930s, there was no longer agreement among researchers on whether or not intelligence was hereditary. There are still recurring debates about the influence of environment and heredity upon an individual's intelligence.
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "\\quad \\mathrm{IQ} = \\frac{\\mathrm{mental\\;age}}{\\mathrm{chronological\\;age}} \\cdot 100"
}
] |
https://en.wikipedia.org/wiki?curid=6394169
|
63944266
|
Differentiable vector–valued functions from Euclidean space
|
Differentiable function in functional analysis
In the mathematical discipline of functional analysis, a differentiable vector-valued function from Euclidean space is a differentiable function valued in a topological vector space (TVS) whose domain is a subset of some finite-dimensional Euclidean space.
It is possible to generalize the notion of derivative to functions whose domain and codomain are subsets of arbitrary topological vector spaces (TVSs) in multiple ways.
But when the domain of a TVS-valued function is a subset of a finite-dimensional Euclidean space, many of these notions become logically equivalent, resulting in a much more limited number of generalizations of the derivative; in addition, differentiability is better behaved compared to the general case.
This article presents the theory of formula_0-times continuously differentiable functions on an open subset formula_1 of Euclidean space formula_2 (formula_3), which is an important special case of differentiation between arbitrary TVSs.
This importance stems partially from the fact that every finite-dimensional vector subspace of a Hausdorff topological vector space is TVS isomorphic to Euclidean space formula_2 so that, for example, this special case can be applied to any function whose domain is an arbitrary Hausdorff TVS by restricting it to finite-dimensional vector subspaces.
All vector spaces will be assumed to be over the field formula_4 where formula_5 is either the real numbers formula_6 or the complex numbers formula_7
Continuously differentiable vector-valued functions.
A map formula_8, which may also be denoted by formula_9, between two topological spaces is said to be formula_10-times continuously differentiable or formula_11 if it is continuous. A topological embedding may also be called a formula_11-embedding.
Curves.
Differentiable curves are an important special case of differentiable vector-valued (i.e. TVS-valued) functions which, in particular, are used in the definition of the Gateaux derivative. They are fundamental to the analysis of maps between two arbitrary topological vector spaces formula_12 and so also to the analysis of TVS-valued maps from Euclidean spaces, which is the focus of this article.
A continuous map formula_13 from a subset formula_14 that is valued in a topological vector space formula_15 is said to be (once or formula_16-time) differentiable if for all formula_17 it is differentiable at formula_18 which by definition means the following limit in formula_15 exists:
formula_19
where in order for this limit to even be well-defined, formula_20 must be an accumulation point of formula_21
If formula_13 is differentiable then it is said to be continuously differentiable or formula_22 if its derivative, which is the induced map formula_23 is continuous.
Using induction on formula_24 the map formula_13 is formula_0-times continuously differentiable or formula_25 if its formula_26 derivative formula_27 is continuously differentiable, in which case the formula_28-derivative of formula_29 is the map formula_30
It is called smooth, formula_31 or infinitely differentiable if it is formula_0-times continuously differentiable for every integer formula_32
For formula_33 it is called formula_0-times differentiable if it is formula_34-times continuously differentiable and formula_27 is differentiable.
A continuous function formula_13 from a non-empty and non-degenerate interval formula_35 into a topological space formula_15 is called a curve or a formula_11 curve in formula_36
A path in formula_15 is a curve in formula_15 whose domain is compact, while an arc (or formula_11-arc) in formula_15 is a path in formula_15 that is also a topological embedding.
For any formula_37 a curve formula_13 valued in a topological vector space formula_15 is called a formula_25-embedding if it is a topological embedding and a formula_25 curve such that formula_38 for every formula_17 where it is called a formula_25-arc if it is also a path (or equivalently, also a formula_11-arc) in addition to being a formula_25-embedding.
Differentiability on Euclidean space.
The definition given above for curves are now extended from functions valued defined on subsets of formula_6 to functions defined on open subsets of finite-dimensional Euclidean spaces.
Throughout, let formula_1 be an open subset of formula_39 where formula_40 is an integer.
Suppose formula_41 and formula_42 is a function such that formula_43 with formula_20 an accumulation point of formula_44 Then formula_29 is differentiable at formula_20 if there exist formula_45 vectors formula_46 in formula_47 called the partial derivatives of formula_29 at formula_20, such that
formula_48
where formula_49
If formula_29 is differentiable at a point then it is continuous at that point.
If formula_29 is differentiable at every point in some subset formula_50 of its domain then formula_29 is said to be (once or formula_16-time) differentiable in formula_50, where if the subset formula_50 is not mentioned then this means that it is differentiable at every point in its domain.
If formula_29 is differentiable and if each of its partial derivatives is a continuous function then formula_29 is said to be (once or formula_16-time) continuously differentiable or formula_51
For formula_33 having defined what it means for a function formula_29 to be formula_25 (or formula_0 times continuously differentiable), say that formula_29 is formula_52 times continuously differentiable or that formula_29 is formula_53 if formula_29 is continuously differentiable and each of its partial derivatives is formula_54
Say that formula_29 is formula_55 smooth, formula_31 or infinitely differentiable if formula_29 is formula_25 for all formula_56
The support of a function formula_29 is the closure (taken in its domain formula_57) of the set formula_58
Spaces of "C""k" vector-valued functions.
In this section, the space of smooth test functions and its canonical LF-topology are generalized to functions valued in general complete Hausdorff locally convex topological vector spaces (TVSs). After this task is completed, it is revealed that the topological vector space formula_59 that was constructed could (up to TVS-isomorphism) have instead been defined simply as the completed injective tensor product formula_60 of the usual space of smooth test functions formula_61 with formula_62
Throughout, let formula_63 be a Hausdorff topological vector space (TVS), let formula_64 and let formula_1 be either:
Space of "C""k" functions.
For any formula_66 let formula_59 denote the vector space of all formula_25 formula_63-valued maps defined on formula_1 and let formula_67 denote the vector subspace of formula_59 consisting of all maps in formula_59 that have compact support.
Let formula_61 denote formula_68 and formula_69 denote formula_70
Give formula_67 the topology of uniform convergence of the functions together with their derivatives of order formula_71 on the compact subsets of formula_72
Suppose formula_73 is a sequence of relatively compact open subsets of formula_1 whose union is formula_1 and that satisfy formula_74 for all formula_75
Suppose that formula_76 is a basis of neighborhoods of the origin in formula_62 Then for any integer formula_77 the sets:
formula_78
form a basis of neighborhoods of the origin for formula_59 as formula_79 formula_80 and formula_81 vary in all possible ways.
If formula_1 is a countable union of compact subsets and formula_63 is a Fréchet space, then so is formula_82
Note that formula_83 is convex whenever formula_84 is convex.
If formula_63 is metrizable (resp. complete, locally convex, Hausdorff) then so is formula_85
If formula_86 is a basis of continuous seminorms for formula_63 then a basis of continuous seminorms on formula_59 is:
formula_87
as formula_79 formula_80 and formula_81 vary in all possible ways.
Space of "C""k" functions with support in a compact subset.
The definition of the topology of the space of test functions is now duplicated and generalized.
For any compact subset formula_88 let formula_91 denote the set of all formula_29 in formula_59 whose support lies in formula_89 (in particular, if formula_90 then the domain of formula_29 is formula_1 rather than formula_89) and give it the subspace topology induced by formula_85
If formula_89 is a compact space and formula_63 is a Banach space, then formula_91 becomes a Banach space normed by formula_92
Let formula_93 denote formula_94
For any two compact subsets formula_95 the inclusion
formula_96
is an embedding of TVSs, and the union of all formula_97, as formula_89 varies over the compact subsets of formula_98, is formula_99
Space of compactly supported "C""k" functions.
For any compact subset formula_88 let
formula_100
denote the inclusion map and endow formula_67 with the strongest topology making all formula_101 continuous, which is known as the final topology induced by these maps.
The spaces formula_102 and maps formula_103 form a direct system (directed by the compact subsets of formula_1) whose limit in the category of TVSs is formula_67 together with the injections formula_104
The spaces formula_105 and maps formula_106 also form a direct system (directed by the total order formula_107) whose limit in the category of TVSs is formula_67 together with the injections formula_108
Each inclusion map formula_101 is an embedding of TVSs.
A subset formula_50 of formula_67 is a neighborhood of the origin in formula_67 if and only if formula_109 is a neighborhood of the origin in formula_102 for every compact formula_110
This direct limit topology (i.e. the final topology) on formula_67 is known as the canonical LF topology.
If formula_63 is a Hausdorff locally convex space, formula_112 is a TVS, and formula_113 is a linear map, then formula_114 is continuous if and only if for all compact formula_88 the restriction of formula_114 to formula_102 is continuous. The statement remains true if "all compact formula_115" is replaced with "all formula_116".
Properties.
<templatestyles src="Math_theorem/styles.css" />
Theorem — Let formula_117 be a positive integer and let formula_118 be an open subset of formula_119
Given formula_120 for any formula_121 let formula_122 be defined by formula_123 and let formula_124 be defined by formula_125
Then
formula_126
is a surjective isomorphism of TVSs.
Furthermore, its restriction
formula_127
is an isomorphism of TVSs (where formula_128 has its canonical LF topology).
<templatestyles src="Math_theorem/styles.css" />
Theorem —
Let formula_63 be a Hausdorff locally convex topological vector space and for every continuous linear form formula_129 and every formula_130 let formula_131 be defined by formula_132
Then
formula_133
is a continuous linear map;
and furthermore, its restriction
formula_134
is also continuous (where formula_135 has the canonical LF topology).
Identification as a tensor product.
Suppose henceforth that formula_63 is Hausdorff.
Given a function formula_136 and a vector formula_137 let formula_138 denote the map formula_139 defined by formula_140
This defines a bilinear map formula_141 into the space of functions whose image is contained in a finite-dimensional vector subspace of formula_142
this bilinear map turns this subspace into a tensor product of formula_61 and formula_47 which we will denote by formula_143
Furthermore, if formula_144 denotes the vector subspace of formula_145 consisting of all functions with compact support, then formula_144 is a tensor product of formula_69 and formula_62
If formula_15 is locally compact then formula_146 is dense in formula_147 while if formula_15 is an open subset of formula_148 then formula_149 is dense in formula_150
<templatestyles src="Math_theorem/styles.css" />
Theorem —
If formula_63 is a complete Hausdorff locally convex space, then formula_59 is canonically isomorphic to the injective tensor product formula_151
Notes.
<templatestyles src="Reflist/styles.css" />
Citations.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "k"
},
{
"math_id": 1,
"text": "\\Omega"
},
{
"math_id": 2,
"text": "\\R^n"
},
{
"math_id": 3,
"text": "1 \\leq n < \\infty"
},
{
"math_id": 4,
"text": "\\mathbb{F},"
},
{
"math_id": 5,
"text": "\\mathbb{F}"
},
{
"math_id": 6,
"text": "\\R"
},
{
"math_id": 7,
"text": "\\Complex."
},
{
"math_id": 8,
"text": "f,"
},
{
"math_id": 9,
"text": "f^{(0)},"
},
{
"math_id": 10,
"text": "0"
},
{
"math_id": 11,
"text": "C^0"
},
{
"math_id": 12,
"text": "X \\to Y"
},
{
"math_id": 13,
"text": "f : I \\to X"
},
{
"math_id": 14,
"text": "I \\subseteq \\mathbb{R}"
},
{
"math_id": 15,
"text": "X"
},
{
"math_id": 16,
"text": "1"
},
{
"math_id": 17,
"text": "t \\in I,"
},
{
"math_id": 18,
"text": "t,"
},
{
"math_id": 19,
"text": "f^{\\prime}(t) := f^{(1)}(t) \n:= \\lim_{\\stackrel{r \\to t}{t \\neq r \\in I}} \\frac{f(r) - f(t)}{r - t} \n= \\lim_{\\stackrel{h \\to 0}{t \\neq t + h \\in I}} \\frac{f(t + h) - f(t)}{h}"
},
{
"math_id": 20,
"text": "t"
},
{
"math_id": 21,
"text": "I."
},
{
"math_id": 22,
"text": "C^1"
},
{
"math_id": 23,
"text": "f^{\\prime} = f^{(1)} : I \\to X,"
},
{
"math_id": 24,
"text": "1 < k \\in \\N,"
},
{
"math_id": 25,
"text": "C^k"
},
{
"math_id": 26,
"text": "k-1^{\\text{th}}"
},
{
"math_id": 27,
"text": "f^{(k-1)} : I \\to X"
},
{
"math_id": 28,
"text": "k^{\\text{th}}"
},
{
"math_id": 29,
"text": "f"
},
{
"math_id": 30,
"text": "f^{(k)} := \\left(f^{(k-1)}\\right)^{\\prime} : I \\to X."
},
{
"math_id": 31,
"text": "C^\\infty,"
},
{
"math_id": 32,
"text": "k \\in \\N."
},
{
"math_id": 33,
"text": "k \\in \\N,"
},
{
"math_id": 34,
"text": "k-1"
},
{
"math_id": 35,
"text": "I \\subseteq \\R"
},
{
"math_id": 36,
"text": "X."
},
{
"math_id": 37,
"text": "k \\in \\{ 1, 2, \\ldots, \\infty \\},"
},
{
"math_id": 38,
"text": "f^{\\prime}(t) \\neq 0"
},
{
"math_id": 39,
"text": "\\R^n,"
},
{
"math_id": 40,
"text": "n \\geq 1"
},
{
"math_id": 41,
"text": "t = \\left( t_1, \\ldots, t_n \\right) \\in \\Omega"
},
{
"math_id": 42,
"text": "f : \\operatorname{domain} f \\to Y"
},
{
"math_id": 43,
"text": "t \\in \\operatorname{domain} f"
},
{
"math_id": 44,
"text": "\\operatorname{domain} f."
},
{
"math_id": 45,
"text": "n"
},
{
"math_id": 46,
"text": "e_1, \\ldots, e_n"
},
{
"math_id": 47,
"text": "Y,"
},
{
"math_id": 48,
"text": "\\lim_{\\stackrel{p \\to t}{t \\neq p \\in \\operatorname{domain} f}} \\frac{f(p) - f(t) - \\sum_{i=1}^n \\left(p_i - t_i \\right) e_i}{\\|p - t\\|_2} = 0 \\text{ in } Y"
},
{
"math_id": 49,
"text": "p = \\left(p_1, \\ldots, p_n\\right)."
},
{
"math_id": 50,
"text": "S"
},
{
"math_id": 51,
"text": "C^1."
},
{
"math_id": 52,
"text": "k + 1"
},
{
"math_id": 53,
"text": "C^{k+1}"
},
{
"math_id": 54,
"text": "C^k."
},
{
"math_id": 55,
"text": "C^{\\infty},"
},
{
"math_id": 56,
"text": "k = 0, 1, \\ldots."
},
{
"math_id": 57,
"text": "\\operatorname{domain} f"
},
{
"math_id": 58,
"text": "\\{ x \\in \\operatorname{domain} f : f(x) \\neq 0 \\}."
},
{
"math_id": 59,
"text": "C^k(\\Omega;Y)"
},
{
"math_id": 60,
"text": "C^k(\\Omega) \\widehat{\\otimes}_{\\epsilon} Y"
},
{
"math_id": 61,
"text": "C^k(\\Omega)"
},
{
"math_id": 62,
"text": "Y."
},
{
"math_id": 63,
"text": "Y"
},
{
"math_id": 64,
"text": "k \\in \\{ 0, 1, \\ldots, \\infty \\},"
},
{
"math_id": 65,
"text": "0."
},
{
"math_id": 66,
"text": "k = 0, 1, \\ldots, \\infty,"
},
{
"math_id": 67,
"text": "C_c^k(\\Omega;Y)"
},
{
"math_id": 68,
"text": "C^k(\\Omega;\\mathbb{F})"
},
{
"math_id": 69,
"text": "C_c^k(\\Omega)"
},
{
"math_id": 70,
"text": "C_c^k(\\Omega; \\mathbb{F})."
},
{
"math_id": 71,
"text": "< k + 1"
},
{
"math_id": 72,
"text": "\\Omega."
},
{
"math_id": 73,
"text": "\\Omega_1 \\subseteq \\Omega_2 \\subseteq \\cdots"
},
{
"math_id": 74,
"text": "\\overline{\\Omega_i} \\subseteq \\Omega_{i+1}"
},
{
"math_id": 75,
"text": "i."
},
{
"math_id": 76,
"text": "\\left(V_\\alpha\\right)_{\\alpha \\in A}"
},
{
"math_id": 77,
"text": "\\ell < k + 1,"
},
{
"math_id": 78,
"text": "\\mathcal{U}_{i, \\ell, \\alpha} := \\left\\{ f \\in C^k(\\Omega;Y) : \\left(\\partial / \\partial p\\right)^q f (p) \\in U_\\alpha \\text{ for all } p \\in \\Omega_i \\text{ and all } q \\in \\mathbb{N}^n, | q | \\leq \\ell \\right\\}"
},
{
"math_id": 79,
"text": "i,"
},
{
"math_id": 80,
"text": "\\ell,"
},
{
"math_id": 81,
"text": "\\alpha \\in A"
},
{
"math_id": 82,
"text": "C^(\\Omega;Y)."
},
{
"math_id": 83,
"text": "\\mathcal{U}_{i, l, \\alpha}"
},
{
"math_id": 84,
"text": "U_{\\alpha}"
},
{
"math_id": 85,
"text": "C^k(\\Omega;Y)."
},
{
"math_id": 86,
"text": "(p_\\alpha)_{\\alpha \\in A}"
},
{
"math_id": 87,
"text": "\\mu_{i, l, \\alpha}(f) := \\sup_{y \\in \\Omega_i} \\left(\\sum_{| q | \\leq l} p_\\alpha\\left(\\left(\\partial / \\partial p\\right)^q f (p)\\right)\\right)"
},
{
"math_id": 88,
"text": "K \\subseteq \\Omega,"
},
{
"math_id": 89,
"text": "K"
},
{
"math_id": 90,
"text": "f \\in C^k(K;Y)"
},
{
"math_id": 91,
"text": "C^0(K;Y)"
},
{
"math_id": 92,
"text": "\\| f \\| := \\sup_{\\omega \\in \\Omega} \\| f(\\omega) \\|."
},
{
"math_id": 93,
"text": "C^k(K)"
},
{
"math_id": 94,
"text": "C^k(K;\\mathbb{F})."
},
{
"math_id": 95,
"text": "K \\subseteq L \\subseteq \\Omega,"
},
{
"math_id": 96,
"text": "\\operatorname{In}_{K}^{L} : C^k(K;Y) \\to C^k(L;Y)"
},
{
"math_id": 97,
"text": "C^k(K;Y),"
},
{
"math_id": 98,
"text": "\\Omega,"
},
{
"math_id": 99,
"text": "C_c^k(\\Omega;Y)."
},
{
"math_id": 100,
"text": "\\operatorname{In}_K : C^k(K;Y) \\to C_c^k(\\Omega;Y)"
},
{
"math_id": 101,
"text": "\\operatorname{In}_K"
},
{
"math_id": 102,
"text": "C^k(K;Y)"
},
{
"math_id": 103,
"text": "\\operatorname{In}_{K_1}^{K_2}"
},
{
"math_id": 104,
"text": "\\operatorname{In}_{K}."
},
{
"math_id": 105,
"text": "C^k\\left(\\overline{\\Omega_i}; Y\\right)"
},
{
"math_id": 106,
"text": "\\operatorname{In}_{\\overline{\\Omega_i}}^{\\overline{\\Omega_j}}"
},
{
"math_id": 107,
"text": "\\mathbb{N}"
},
{
"math_id": 108,
"text": "\\operatorname{In}_{\\overline{\\Omega_i}}."
},
{
"math_id": 109,
"text": "S \\cap C^k(K;Y)"
},
{
"math_id": 110,
"text": "K \\subseteq \\Omega."
},
{
"math_id": 111,
"text": "C_c^\\infty(\\Omega)"
},
{
"math_id": 112,
"text": "T"
},
{
"math_id": 113,
"text": "u : C_c^k(\\Omega;Y) \\to T"
},
{
"math_id": 114,
"text": "u"
},
{
"math_id": 115,
"text": "K \\subseteq \\Omega"
},
{
"math_id": 116,
"text": "K := \\overline{\\Omega}_i"
},
{
"math_id": 117,
"text": "m"
},
{
"math_id": 118,
"text": "\\Delta"
},
{
"math_id": 119,
"text": "\\R^m."
},
{
"math_id": 120,
"text": "\\phi \\in C^k(\\Omega \\times \\Delta),"
},
{
"math_id": 121,
"text": "y \\in \\Delta"
},
{
"math_id": 122,
"text": "\\phi_y : \\Omega \\to \\mathbb{F}"
},
{
"math_id": 123,
"text": "\\phi_y(x) = \\phi(x, y)"
},
{
"math_id": 124,
"text": "I_k(\\phi) : \\Delta \\to C^k(\\Omega)"
},
{
"math_id": 125,
"text": "I_k(\\phi)(y) := \\phi_y."
},
{
"math_id": 126,
"text": "I_\\infty : C^\\infty(\\Omega \\times \\Delta) \\to C^\\infty(\\Delta; C^\\infty(\\Omega))"
},
{
"math_id": 127,
"text": "I_{\\infty}\\big\\vert_{C_c^{\\infty}\\left(\\Omega \\times \\Delta\\right)} : C_c^\\infty(\\Omega \\times \\Delta) \\to C_c^\\infty\\left(\\Delta; C_c^\\infty(\\Omega)\\right)"
},
{
"math_id": 128,
"text": "C_c^\\infty\\left(\\Omega \\times \\Delta\\right)"
},
{
"math_id": 129,
"text": "y^{\\prime} \\in Y"
},
{
"math_id": 130,
"text": "f \\in C^\\infty(\\Omega;Y),"
},
{
"math_id": 131,
"text": "J_{y^{\\prime}}(f) : \\Omega \\to \\mathbb{F}"
},
{
"math_id": 132,
"text": "J_{y^{\\prime}}(f)(p) = y^{\\prime}(f(p))."
},
{
"math_id": 133,
"text": "J_{y^{\\prime}} : C^\\infty(\\Omega;Y) \\to C^\\infty(\\Omega)"
},
{
"math_id": 134,
"text": "J_{y^{\\prime}}\\big\\vert_{C_c^\\infty(\\Omega;Y)} : C_c^\\infty(\\Omega;Y) \\to C^\\infty(\\Omega)"
},
{
"math_id": 135,
"text": "C_c^\\infty(\\Omega;Y)"
},
{
"math_id": 136,
"text": "f \\in C^k(\\Omega)"
},
{
"math_id": 137,
"text": "y \\in Y,"
},
{
"math_id": 138,
"text": "f \\otimes y"
},
{
"math_id": 139,
"text": "f \\otimes y : \\Omega \\to Y"
},
{
"math_id": 140,
"text": "(f \\otimes y)(p) = f(p) y."
},
{
"math_id": 141,
"text": "\\otimes : C^k(\\Omega) \\times Y \\to C^k(\\Omega;Y)"
},
{
"math_id": 142,
"text": "Y;"
},
{
"math_id": 143,
"text": "C^k(\\Omega) \\otimes Y."
},
{
"math_id": 144,
"text": "C_c^k(\\Omega) \\otimes Y"
},
{
"math_id": 145,
"text": "C^k(\\Omega) \\otimes Y"
},
{
"math_id": 146,
"text": "C_c^{0}(\\Omega) \\otimes Y"
},
{
"math_id": 147,
"text": "C^0(\\Omega;X)"
},
{
"math_id": 148,
"text": "\\R^{n}"
},
{
"math_id": 149,
"text": "C_c^{\\infty}(\\Omega) \\otimes Y"
},
{
"math_id": 150,
"text": "C^k(\\Omega;X)."
},
{
"math_id": 151,
"text": "C^k(\\Omega) \\widehat{\\otimes}_{\\epsilon} Y."
}
] |
https://en.wikipedia.org/wiki?curid=63944266
|
63945914
|
Walter Sidney Abbott
|
American entomologist
Walter Sidney Abbott (May 21, 1879 – October 27, 1942) was an American entomologist who worked at the Bureau of Entomology in Virginia and is best known for Abbott's Formula to calculate insecticide efficiency with a correction involving natural deaths.
Abbott was born in Manchester, New Hampshire, and graduated from the University of New Hampshire in 1910, following which he worked at the New Jersey Agricultural Experiment Station. He joined the Bureau of Entomology in 1912 with the job of enforcing the Insecticide Act of 1910. He worked until his retirement in 1938. He was involved in founding the Insecticide Society of Washington in 1934 and the Manchester Institute of Arts and Sciences. His most significant contribution was the so-called Abbott's Formula that he published in 1925 to calculate the efficiency of insecticides, which subtracts the natural deaths of insects using results from a control plot.
Abbott's correction in its basic form is:
formula_0
where "P"c is the proportion (or numbers) of individuals alive in the control; "P"t is the proportion (or numbers) of individuals alive in the treatment (both of which start initially with the same numbers or densities of test organisms).
He married Lilla Robinson in 1911, and they had a daughter and a son who survived him.
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "\\text{mortality}\\%=\\left(1 - \\frac {P_t} {P_c}\\right)*100"
}
] |
https://en.wikipedia.org/wiki?curid=63945914
|
63948433
|
Quasi-complete space
|
A topological vector space in which every closed and bounded subset is complete
In functional analysis, a topological vector space (TVS) is said to be quasi-complete or boundedly complete if every closed and bounded subset is complete.
This concept is of considerable importance for non-metrizable TVSs.
Examples and sufficient conditions.
Every complete TVS is quasi-complete.
The product of any collection of quasi-complete spaces is again quasi-complete.
The projective limit of any collection of quasi-complete spaces is again quasi-complete.
Every semi-reflexive space is quasi-complete.
The quotient of a quasi-complete space by a closed vector subspace may "fail" to be quasi-complete.
Counter-examples.
There exists an LB-space that is not quasi-complete.
References.
<templatestyles src="Reflist/styles.css" />
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "L_b(X;Y)"
}
] |
https://en.wikipedia.org/wiki?curid=63948433
|
63950387
|
Infrabarrelled space
|
In functional analysis, a discipline within mathematics, a locally convex topological vector space (TVS) is said to be infrabarrelled (also spelled infrabarreled) if every bounded barrel is a neighborhood of the origin.
Similarly, quasibarrelled spaces are topological vector spaces (TVS) for which every bornivorous barrelled set in the space is a neighbourhood of the origin.
Quasibarrelled spaces are studied because they are a weakening of the defining condition of barrelled spaces, for which a form of the Banach–Steinhaus theorem holds.
Definition.
A subset formula_0 of a topological vector space (TVS) formula_1 is called bornivorous if it absorbs all bounded subsets of formula_1;
that is, if for each bounded subset formula_2 of formula_3 there exists some scalar formula_4 such that formula_5
A barrelled set or a barrel in a TVS is a set which is convex, balanced, absorbing and closed.
A quasibarrelled space is a TVS for which every bornivorous barrelled set in the space is a neighbourhood of the origin.
Characterizations.
If formula_1 is a Hausdorff locally convex space then the canonical injection from formula_1 into its bidual is a topological embedding if and only if formula_1 is infrabarrelled.
A Hausdorff topological vector space formula_1 is quasibarrelled if and only if every bounded closed linear operator from formula_1 into a complete metrizable TVS is continuous.
By definition, a linear operator formula_6 is called closed if its graph is a closed subset of formula_7
For a locally convex space formula_1 with continuous dual formula_8 the following are equivalent:
If formula_1 is a metrizable locally convex TVS then the following are equivalent:
Properties.
Every quasi-complete infrabarrelled space is barrelled.
A locally convex Hausdorff quasibarrelled space that is sequentially complete is barrelled.
A locally convex Hausdorff quasibarrelled space is a Mackey space, quasi-M-barrelled, and countably quasibarrelled.
A locally convex quasibarrelled space that is also a σ-barrelled space is necessarily a barrelled space.
A locally convex space is reflexive if and only if it is semireflexive and quasibarrelled.
Examples.
Every barrelled space is infrabarrelled.
A closed vector subspace of an infrabarrelled space is, however, not necessarily infrabarrelled.
Every product and locally convex direct sum of any family of infrabarrelled spaces is infrabarrelled.
Every separated quotient of an infrabarrelled space is infrabarrelled.
Every Hausdorff barrelled space and every Hausdorff bornological space is quasibarrelled.
Thus, every metrizable TVS is quasibarrelled.
Note that there exist quasibarrelled spaces that are neither barrelled nor bornological.
There exist Mackey spaces that are not quasibarrelled.
There exist distinguished spaces, DF-spaces, and formula_10-barrelled spaces that are not quasibarrelled.
The strong dual space formula_11 of a Fréchet space formula_1 is distinguished if and only if formula_1 is quasibarrelled.
Counter-examples.
There exists a DF-space that is not quasibarrelled.
There exists a quasibarrelled DF-space that is not bornological.
There exists a quasibarrelled space that is not a σ-barrelled space.
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "B"
},
{
"math_id": 1,
"text": "X"
},
{
"math_id": 2,
"text": "S"
},
{
"math_id": 3,
"text": "X,"
},
{
"math_id": 4,
"text": "r"
},
{
"math_id": 5,
"text": "S \\subseteq r B."
},
{
"math_id": 6,
"text": "F : X \\to Y"
},
{
"math_id": 7,
"text": "X \\times Y."
},
{
"math_id": 8,
"text": "X^{\\prime}"
},
{
"math_id": 9,
"text": "\\beta(X', X)"
},
{
"math_id": 10,
"text": "\\sigma"
},
{
"math_id": 11,
"text": "X_b^{\\prime}"
}
] |
https://en.wikipedia.org/wiki?curid=63950387
|
63951372
|
Strong dual space
|
Continuous dual space endowed with the topology of uniform convergence on bounded sets
In functional analysis and related areas of mathematics, the strong dual space of a topological vector space (TVS) formula_0 is the continuous dual space formula_1 of formula_0 equipped with the strong (dual) topology or the topology of uniform convergence on bounded subsets of formula_2 where this topology is denoted by formula_3 or formula_4 The coarsest polar topology is called weak topology.
The strong dual space plays such an important role in modern functional analysis, that the continuous dual space is usually assumed to have the strong dual topology unless indicated otherwise.
To emphasize that the continuous dual space, formula_5 has the strong dual topology, formula_6 or formula_7 may be written.
Strong dual topology.
Throughout, all vector spaces will be assumed to be over the field formula_8 of either the real numbers formula_9 or complex numbers formula_10
Definition from a dual system.
Let formula_11 be a dual pair of vector spaces over the field formula_8 of real numbers formula_9 or complex numbers formula_10
For any formula_12 and any formula_13 define
formula_14
Neither formula_0 nor formula_15 has a topology, so a subset formula_12 is said to be bounded by a subset formula_16 if formula_17 for all formula_18
So a subset formula_12 is called bounded if and only if
formula_19
This is equivalent to the usual notion of bounded subsets when formula_0 is given the weak topology induced by formula_20 which is a Hausdorff locally convex topology.
Let formula_21 denote the family of all subsets formula_12 bounded by elements of formula_15; that is, formula_21 is the set of all subsets formula_12 such that for every formula_13
formula_22
Then the strong topology formula_23 on formula_20 also denoted by formula_24 or simply formula_25 or formula_26 if the pairing formula_27 is understood, is defined as the locally convex topology on formula_15 generated by the seminorms of the form
formula_28
The definition of the strong dual topology now proceeds as in the case of a TVS.
Note that if formula_0 is a TVS whose continuous dual space separates points on formula_2 then formula_0 is part of a canonical dual system formula_29
where formula_30
In the special case when formula_0 is a locally convex space, the strong topology on the (continuous) dual space formula_1 (that is, on the space of all continuous linear functionals formula_31) is defined as the strong topology formula_32 and it coincides with the topology of uniform convergence on bounded sets in formula_2 i.e. with the topology on formula_1 generated by the seminorms of the form
formula_33
where formula_34 runs over the family of all bounded sets in formula_35
The space formula_1 with this topology is called strong dual space of the space formula_0 and is denoted by formula_36
Definition on a TVS.
Suppose that formula_0 is a topological vector space (TVS) over the field formula_37
Let formula_21 be any fundamental system of bounded sets of formula_0;
that is, formula_21 is a family of bounded subsets of formula_0 such that every bounded subset of formula_0 is a subset of some formula_38;
the set of all bounded subsets of formula_0 forms a fundamental system of bounded sets of formula_35
A basis of closed neighborhoods of the origin in formula_1 is given by the polars:
formula_39
as formula_34 ranges over formula_21.
This is a locally convex topology that is given by the set of seminorms on formula_1:
formula_40
as formula_34 ranges over formula_41
If formula_0 is normable then so is formula_42 and formula_42 will in fact be a Banach space.
If formula_0 is a normed space with norm formula_43 then formula_1 has a canonical norm (the operator norm) given by formula_44;
the topology that this norm induces on formula_1 is identical to the strong dual topology.
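For example (a standard special case, stated here only as an illustration): if formula_0 is the sequence space ℓ^1 with its usual norm, then every continuous linear functional is given by pairing with a bounded sequence (a_i), and its operator norm is
\left\| x^{\prime} \right\| = \sup_{\|x\|_1 \leq 1} \Big| \sum_{i} a_i x_i \Big| = \sup_{i} |a_i|,
so the strong dual space is ℓ^∞ carrying the supremum norm, and the strong dual topology is exactly this norm topology.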
Bidual.
The bidual or second dual of a TVS formula_2 often denoted by formula_45 is the strong dual of the strong dual of formula_0:
formula_46
where formula_6 denotes formula_1 endowed with the strong dual topology formula_47
Unless indicated otherwise, the vector space formula_48 is usually assumed to be endowed with the strong dual topology induced on it by formula_49 in which case it is called the strong bidual of formula_0; that is,
formula_50
where the vector space formula_48 is endowed with the strong dual topology formula_51
Properties.
Let formula_0 be a locally convex TVS.
If formula_0 is a barrelled space, then its topology coincides with the strong topology formula_57 on formula_0 and with the Mackey topology on formula_0 generated by the pairing formula_58
Examples.
If formula_0 is a normed vector space, then its (continuous) dual space formula_1 with the strong topology coincides with the Banach dual space formula_1; that is, with the space formula_1 with the topology induced by the operator norm. Conversely, the formula_58-topology on formula_0 is identical to the topology induced by the norm on formula_35
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "X"
},
{
"math_id": 1,
"text": "X^{\\prime}"
},
{
"math_id": 2,
"text": "X,"
},
{
"math_id": 3,
"text": "b\\left(X^{\\prime}, X\\right)"
},
{
"math_id": 4,
"text": "\\beta\\left(X^{\\prime}, X\\right)."
},
{
"math_id": 5,
"text": "X^{\\prime},"
},
{
"math_id": 6,
"text": "X^{\\prime}_b"
},
{
"math_id": 7,
"text": "X^{\\prime}_{\\beta}"
},
{
"math_id": 8,
"text": "\\mathbb{F}"
},
{
"math_id": 9,
"text": "\\R"
},
{
"math_id": 10,
"text": "\\C."
},
{
"math_id": 11,
"text": "(X, Y, \\langle \\cdot, \\cdot \\rangle)"
},
{
"math_id": 12,
"text": "B \\subseteq X"
},
{
"math_id": 13,
"text": "y \\in Y,"
},
{
"math_id": 14,
"text": "|y|_B = \\sup_{x \\in B}|\\langle x, y\\rangle|."
},
{
"math_id": 15,
"text": "Y"
},
{
"math_id": 16,
"text": "C \\subseteq Y"
},
{
"math_id": 17,
"text": "|y|_B < \\infty"
},
{
"math_id": 18,
"text": "y \\in C."
},
{
"math_id": 19,
"text": "\\sup_{x \\in B} |\\langle x, y \\rangle| < \\infty \\quad \\text{ for all } y \\in Y."
},
{
"math_id": 20,
"text": "Y,"
},
{
"math_id": 21,
"text": "\\mathcal{B}"
},
{
"math_id": 22,
"text": "|y|_B = \\sup_{x\\in B}|\\langle x, y\\rangle| < \\infty."
},
{
"math_id": 23,
"text": "\\beta(Y, X, \\langle \\cdot, \\cdot \\rangle)"
},
{
"math_id": 24,
"text": "b(Y, X, \\langle \\cdot, \\cdot \\rangle)"
},
{
"math_id": 25,
"text": "\\beta(Y, X)"
},
{
"math_id": 26,
"text": "b(Y, X)"
},
{
"math_id": 27,
"text": "\\langle \\cdot, \\cdot\\rangle"
},
{
"math_id": 28,
"text": "|y|_B = \\sup_{x \\in B} |\\langle x, y\\rangle|,\\qquad y \\in Y, \\qquad B \\in \\mathcal{B}."
},
{
"math_id": 29,
"text": "\\left(X, X^{\\prime}, \\langle \\cdot , \\cdot \\rangle\\right)"
},
{
"math_id": 30,
"text": "\\left\\langle x, x^{\\prime} \\right\\rangle := x^{\\prime}(x)."
},
{
"math_id": 31,
"text": "f : X \\to \\mathbb{F}"
},
{
"math_id": 32,
"text": "\\beta\\left(X^{\\prime}, X\\right),"
},
{
"math_id": 33,
"text": "|f|_B = \\sup_{x \\in B} |f(x)|, \\qquad \\text{ where } f \\in X^{\\prime},"
},
{
"math_id": 34,
"text": "B"
},
{
"math_id": 35,
"text": "X."
},
{
"math_id": 36,
"text": "X^{\\prime}_{\\beta}."
},
{
"math_id": 37,
"text": "\\mathbb{F}."
},
{
"math_id": 38,
"text": "B \\in \\mathcal{B}"
},
{
"math_id": 39,
"text": "B^{\\circ} := \\left\\{ x^{\\prime} \\in X^{\\prime} : \\sup_{x \\in B} \\left|x^{\\prime}(x)\\right| \\leq 1 \\right\\}"
},
{
"math_id": 40,
"text": "\\left|x^{\\prime}\\right|_{B} := \\sup_{x \\in B} \\left|x^{\\prime}(x)\\right|"
},
{
"math_id": 41,
"text": "\\mathcal{B}."
},
{
"math_id": 42,
"text": "X^{\\prime}_{b}"
},
{
"math_id": 43,
"text": "\\| \\cdot \\|"
},
{
"math_id": 44,
"text": "\\left\\| x^{\\prime} \\right\\| := \\sup_{\\| x \\| \\leq 1} \\left| x^{\\prime}(x) \\right|"
},
{
"math_id": 45,
"text": "X^{\\prime \\prime},"
},
{
"math_id": 46,
"text": "X^{\\prime \\prime} \\,:=\\, \\left(X^{\\prime}_b\\right)^{\\prime}"
},
{
"math_id": 47,
"text": "b\\left(X^{\\prime}, X\\right)."
},
{
"math_id": 48,
"text": "X^{\\prime \\prime}"
},
{
"math_id": 49,
"text": "X^{\\prime}_b,"
},
{
"math_id": 50,
"text": "X^{\\prime \\prime} \\,:=\\, \\left(X^{\\prime}_b\\right)^{\\prime}_b"
},
{
"math_id": 51,
"text": "b\\left(X^{\\prime\\prime}, X^{\\prime}_b\\right)."
},
{
"math_id": 52,
"text": "X^{\\prime}_b."
},
{
"math_id": 53,
"text": "b\\left(X, X^{\\prime}\\right)"
},
{
"math_id": 54,
"text": "\\left(X, b\\left(X, X^{\\prime}\\right)\\right)"
},
{
"math_id": 55,
"text": "\\mathcal{G}"
},
{
"math_id": 56,
"text": "X^{\\prime}_{b(X^{\\prime}, X)}"
},
{
"math_id": 57,
"text": "\\beta\\left(X, X^{\\prime}\\right)"
},
{
"math_id": 58,
"text": "\\left(X, X^{\\prime}\\right)."
}
] |
https://en.wikipedia.org/wiki?curid=63951372
|
639553
|
Eugène Charles Catalan
|
French-Belgian mathematician
Eugène Charles Catalan (; 30 May 1814 – 14 February 1894) was a French and Belgian mathematician who worked on continued fractions, descriptive geometry, number theory and combinatorics. His notable contributions included discovering a periodic minimal surface in the space formula_0; stating the famous Catalan's conjecture, which was eventually proved in 2002; and introducing the Catalan numbers to solve a combinatorial problem.
Biography.
Catalan was born in Bruges (now in Belgium, then under Dutch rule even though the Kingdom of the Netherlands had not yet been formally instituted), the only child of a French jeweller by the name of Joseph Catalan, in 1814. In 1825, he traveled to Paris and learned mathematics at École Polytechnique, where he met Joseph Liouville (1833). In December 1834 he was expelled along with most of the students in his year as part of a crackdown by the July Monarchy against republican tendencies among the students. He resumed his studies in January 1835, graduated that summer, and went on to teach at Châlons-sur-Marne. Catalan came back to the École Polytechnique, and, with the help of Liouville, obtained his degree in mathematics in 1841. He went on to Charlemagne College to teach descriptive geometry. He was politically active and strongly left-wing, which led him to participate in the 1848 Revolution; he had an animated career and also sat in France's Chamber of Deputies. Later, in 1849, Catalan was visited at his home by the French Police, searching for illicit teaching material; however, none was found.
The University of Liège appointed him chair of analysis in 1865. In 1879, still in Belgium, he became a journal editor, where he published Paul-Jean Busschop's theory as a footnote after having refused it in 1873, letting Busschop know that it was too empirical. In 1883, he worked for the Belgian Academy of Science in the field of number theory. He died in Liège, Belgium, where he had received a chair.
Work.
He worked on continued fractions, descriptive geometry, number theory and combinatorics. He gave his name to a unique surface (periodic minimal surface in the space formula_0) that he discovered in 1855. Before that, he had stated the famous Catalan's conjecture, which was published in 1844 and was eventually proved in 2002, by the Romanian mathematician Preda Mihăilescu. He introduced the Catalan numbers to solve a combinatorial problem.
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "\\mathbb{R}^3"
}
] |
https://en.wikipedia.org/wiki?curid=639553
|
6395589
|
Karatsuba algorithm
|
Algorithm for integer multiplication
The Karatsuba algorithm is a fast multiplication algorithm. It was discovered by Anatoly Karatsuba in 1960 and published in 1962. It is a divide-and-conquer algorithm that reduces the multiplication of two "n"-digit numbers to three multiplications of "n"/2-digit numbers and, by repeating this reduction, to at most formula_0 single-digit multiplications. It is therefore asymptotically faster than the traditional algorithm, which performs formula_1 single-digit products.
The Karatsuba algorithm was the first multiplication algorithm asymptotically faster than the quadratic "grade school" algorithm.
The Toom–Cook algorithm (1963) is a faster generalization of Karatsuba's method, and the Schönhage–Strassen algorithm (1971) is even faster, for sufficiently large "n".
History.
The standard procedure for multiplication of two "n"-digit numbers requires a number of elementary operations proportional to formula_2, or formula_3 in big-O notation. Andrey Kolmogorov conjectured that the traditional algorithm was "asymptotically optimal," meaning that any algorithm for that task would require formula_4 elementary operations.
In 1960, Kolmogorov organized a seminar on mathematical problems in cybernetics at the Moscow State University, where he stated the formula_4 conjecture and other problems in the complexity of computation. Within a week, Karatsuba, then a 23-year-old student, found an algorithm that multiplies two "n"-digit numbers in formula_5 elementary steps, thus disproving the conjecture. Kolmogorov was very excited about the discovery; he communicated it at the next meeting of the seminar, which was then terminated. Kolmogorov gave some lectures on the Karatsuba result at conferences all over the world (see, for example, "Proceedings of the International Congress of Mathematicians 1962", pp. 351–356, and also "6 Lectures delivered at the International Congress of Mathematicians in Stockholm, 1962") and published the method in 1962, in the Proceedings of the USSR Academy of Sciences. The article had been written by Kolmogorov and contained two results on multiplication, Karatsuba's algorithm and a separate result by Yuri Ofman; it listed "A. Karatsuba and Yu. Ofman" as the authors. Karatsuba only became aware of the paper when he received the reprints from the publisher.
Algorithm.
Basic step.
The basic principle of Karatsuba's algorithm is divide-and-conquer, using a formula that allows one to compute the product of two large numbers formula_6 and formula_7 using three multiplications of smaller numbers, each with about half as many digits as formula_6 or formula_7, plus some additions and digit shifts. This basic step is, in fact, a generalization of a similar complex multiplication algorithm, where the imaginary unit i is replaced by a power of the base.
Let formula_6 and formula_7 be represented as formula_8-digit strings in some base formula_9. For any positive integer formula_10 less than formula_8, one can write the two given numbers as
formula_11
formula_12
where formula_13 and formula_14 are less than formula_15. The product is then
formula_16
where
formula_17
formula_18
formula_19
These formulae require four multiplications and were known to Charles Babbage. Karatsuba observed that formula_20 can be computed in only three multiplications, at the cost of a few extra additions. With formula_21 and formula_22 as before and formula_23 one can observe that
formula_24
Thus only three multiplications are required for computing formula_25 and formula_26
Example.
To compute the product of 12345 and 6789, where "B" = 10, choose "m" = 3. We use "m" right shifts for decomposing the input operands using the resulting base ("B"^"m" = "1000"), as:
12345 = 12 · "1000" + 345
6789 = 6 · "1000" + 789
Only three multiplications, which operate on smaller integers, are used to compute three partial results:
"z"2 = 12 × 6 = 72
"z"0 = 345 × 789 = 272205
"z"1 = (12 + 345) × (6 + 789) − "z"2 − "z"0 = 357 × 795 − 72 − 272205 = 283815 − 72 − 272205 = 11538
We get the result by just adding these three partial results, shifted accordingly (and then taking carries into account by decomposing these three inputs in base "1000" as for the input operands):
result = "z"2 · ("B""m")"2" + "z"1 · ("B""m")"1" + "z"0 · ("B""m")"0", i.e.
result = 72 · "1000"2 + 11538 · "1000" + 272205 = 83810205.
Note that the intermediate third multiplication operates on an input domain which is less than two times larger than for the first two multiplications, its output domain is less than four times larger, and base-"1000" carries computed from the first two multiplications must be taken into account when computing these two subtractions.
Recursive application.
If "n" is four or more, the three multiplications in Karatsuba's basic step involve operands with fewer than "n" digits. Therefore, those products can be computed by recursive calls of the Karatsuba algorithm. The recursion can be applied until the numbers are so small that they can (or must) be computed directly.
In a computer with a full 32-bit by 32-bit multiplier, for example, one could choose "B" = 2^31 and store each digit as a separate 32-bit binary word. Then the sums "x"1 + "x"0 and "y"1 + "y"0 will not need an extra binary word for storing the carry-over digit (as in carry-save adder), and the Karatsuba recursion can be applied until the numbers to multiply are only one digit long.
Time complexity analysis.
Karatsuba's basic step works for any base "B" and any "m", but the recursive algorithm is most efficient when "m" is equal to "n"/2, rounded up. In particular, if "n" is 2^"k", for some integer "k", and the recursion stops only when "n" is 1, then the number of single-digit multiplications is 3^"k", which is "n"^"c" where "c" = log2 3.
Since one can extend any inputs with zero digits until their length is a power of two, it follows that the number of elementary multiplications, for any "n", is at most formula_27.
Since the additions, subtractions, and digit shifts (multiplications by powers of "B") in Karatsuba's basic step take time proportional to "n", their cost becomes negligible as "n" increases. More precisely, if "T"("n") denotes the total number of elementary operations that the algorithm performs when multiplying two "n"-digit numbers, then
formula_28
for some constants "c" and "d". For this recurrence relation, the master theorem for divide-and-conquer recurrences gives the asymptotic bound formula_29.
It follows that, for sufficiently large "n", Karatsuba's algorithm will perform fewer shifts and single-digit additions than longhand multiplication, even though its basic step uses more additions and shifts than the straightforward formula. For small values of "n", however, the extra shift and add operations may make it run slower than the longhand method.
Implementation.
Here is the pseudocode for this algorithm, using numbers represented in base ten. For the binary representation of integers, it suffices to replace everywhere 10 by 2.
The second argument of the split_at function specifies the number of digits to extract from the "right": for example, split_at("12345", 3) will extract the 3 final digits, giving: high="12", low="345".
function karatsuba(num1, num2)
if (num1 < 10 or num2 < 10)
return num1 × num2 /* fall back to traditional multiplication */
/* Calculates the size of the numbers. */
m = max(size_base10(num1), size_base10(num2))
m2 = floor(m / 2)
/* m2 = ceil (m / 2) will also work */
/* Split the digit sequences in the middle. */
high1, low1 = split_at(num1, m2)
high2, low2 = split_at(num2, m2)
/* 3 recursive calls made to numbers approximately half the size. */
z0 = karatsuba(low1, low2)
z1 = karatsuba(low1 + high1, low2 + high2)
z2 = karatsuba(high1, high2)
return (z2 × 10 ^ (m2 × 2)) + ((z1 - z2 - z0) × 10 ^ m2) + z0
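The following is a direct Python transcription of this pseudocode, given here as a minimal sketch: it handles non-negative integers only, uses divmod in place of the string-based split_at, and falls back to the built-in multiplication for single-digit operands.

def karatsuba(num1, num2):
    # Fall back to ordinary multiplication for single-digit operands.
    if num1 < 10 or num2 < 10:
        return num1 * num2
    # Size of the larger operand in base-10 digits, and the split point.
    m = max(len(str(num1)), len(str(num2)))
    m2 = m // 2
    # Split each number as x = high * 10**m2 + low.
    high1, low1 = divmod(num1, 10 ** m2)
    high2, low2 = divmod(num2, 10 ** m2)
    # Three recursive multiplications on numbers of roughly half the size.
    z0 = karatsuba(low1, low2)
    z1 = karatsuba(low1 + high1, low2 + high2)
    z2 = karatsuba(high1, high2)
    return z2 * 10 ** (2 * m2) + (z1 - z2 - z0) * 10 ** m2 + z0

print(karatsuba(12345, 6789))   # 83810205, as in the worked example above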
An issue that occurs when implementing this is that the above computation of formula_30 and formula_31 for formula_32 may result in overflow (it will produce a result in the range formula_33), which requires a multiplier having one extra bit. This can be avoided by noting that
formula_34
This computation of formula_35 and formula_36 will produce a result in the range of formula_37. This method may produce negative numbers, which require one extra bit to encode signedness, and would still require one extra bit for the multiplier. However, one way to avoid this is to record the sign and then use the absolute value of formula_35 and formula_36 to perform an unsigned multiplication, after which the result may be negated when both signs originally differed. Another advantage is that even though formula_38 may be negative, the final computation of formula_32 only involves additions.
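In the Python sketch above, this variant would replace the computation of z1 and the return statement with the lines below (a sketch only; Python integers do not overflow, so the point is merely to illustrate the sign handling just described):

    # Overflow-avoiding middle term (see the identity above):
    #   x1*y0 + x0*y1 = (x0 - x1)*(y1 - y0) + z2 + z0
    d1 = low1 - high1                  # x0 - x1, may be negative
    d2 = high2 - low2                  # y1 - y0, may be negative
    sign = -1 if (d1 < 0) != (d2 < 0) else 1
    middle = sign * karatsuba(abs(d1), abs(d2)) + z2 + z0
    return z2 * 10 ** (2 * m2) + middle * 10 ** m2 + z0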
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": " n^{\\log_23}\\approx n^{1.58}"
},
{
"math_id": 1,
"text": "n^2"
},
{
"math_id": 2,
"text": "n^2\\,\\!"
},
{
"math_id": 3,
"text": "O(n^2)\\,\\!"
},
{
"math_id": 4,
"text": "\\Omega(n^2)\\,\\!"
},
{
"math_id": 5,
"text": "O(n^{\\log_2 3})"
},
{
"math_id": 6,
"text": "x"
},
{
"math_id": 7,
"text": "y"
},
{
"math_id": 8,
"text": "n"
},
{
"math_id": 9,
"text": "B"
},
{
"math_id": 10,
"text": "m"
},
{
"math_id": 11,
"text": "x = x_1 B^m + x_0,"
},
{
"math_id": 12,
"text": "y = y_1 B^m + y_0,"
},
{
"math_id": 13,
"text": "x_0"
},
{
"math_id": 14,
"text": "y_0"
},
{
"math_id": 15,
"text": "B^m"
},
{
"math_id": 16,
"text": "\n\\begin{align}\nxy &= (x_1 B^m + x_0)(y_1 B^m + y_0) \\\\\n &= x_1 y_1 B^{2m} + (x_1 y_0 + x_0 y_1) B^m + x_0 y_0 \\\\\n &= z_2 B^{2m} + z_1 B^m + z_0, \\\\\n\\end{align}\n"
},
{
"math_id": 17,
"text": "z_2 = x_1 y_1,"
},
{
"math_id": 18,
"text": "z_1 = x_1 y_0 + x_0 y_1,"
},
{
"math_id": 19,
"text": "z_0 = x_0 y_0."
},
{
"math_id": 20,
"text": "xy"
},
{
"math_id": 21,
"text": "z_0"
},
{
"math_id": 22,
"text": "z_2"
},
{
"math_id": 23,
"text": "z_3=(x_1 + x_0) (y_1 + y_0),"
},
{
"math_id": 24,
"text": "\n\\begin{align}\nz_1 &= x_1 y_0 + x_0 y_1 \\\\\n &= (x_1 + x_0) (y_0 + y_1) - x_1 y_1 - x_0 y_0 \\\\\n &= z_3 - z_2 - z_0. \\\\\n\\end{align}\n"
},
{
"math_id": 25,
"text": "z_0, z_1"
},
{
"math_id": 26,
"text": "z_2."
},
{
"math_id": 27,
"text": "3^{ \\lceil\\log_2 n \\rceil} \\leq 3 n^{\\log_2 3}\\,\\!"
},
{
"math_id": 28,
"text": "T(n) = 3 T(\\lceil n/2\\rceil) + cn + d"
},
{
"math_id": 29,
"text": "T(n) = \\Theta(n^{\\log_2 3})\\,\\!"
},
{
"math_id": 30,
"text": "(x_1 + x_0)"
},
{
"math_id": 31,
"text": "(y_1 + y_0)"
},
{
"math_id": 32,
"text": "z_1"
},
{
"math_id": 33,
"text": "B^m \\leq \\text{result} < 2 B^m"
},
{
"math_id": 34,
"text": "z_1 = (x_0 - x_1)(y_1 - y_0) + z_2 + z_0."
},
{
"math_id": 35,
"text": "(x_0 - x_1)"
},
{
"math_id": 36,
"text": "(y_1 - y_0)"
},
{
"math_id": 37,
"text": "-B^m < \\text{result} < B^m"
},
{
"math_id": 38,
"text": "(x_0 - x_1)(y_1 - y_0)"
}
] |
https://en.wikipedia.org/wiki?curid=6395589
|
63961033
|
Transactional Asset Pricing Approach
|
In the valuation theory branch of economics, the Transactional Asset Pricing Approach (TAPA) is a general reconstruction of asset pricing theory developed in the 2000s by a collaboration of the Russian and Israeli economists Vladimir B. Michaletz and Andrey I. Artemenkov. It provides a basis for reconstructing discounted cash flow (DCF) analysis and the resulting income capitalization techniques, such as the Gordon growth formula (see dividend discount model), from a transactional perspective, relying in the process on a formulated dynamic principle of transactional equity-in-exchange.
General overview.
The TAPA approach originates with the framing of the dynamic inter-temporal "principle of transactional equity-in-exchange" for buyers and sellers in an asset transaction, the essence of which is that by the end of the analysis projection period formula_0 neither party should be a losing side to the transaction, meaning that the capital of the buyer and the seller bound up in the transaction should be mutually equal at the end of Period formula_0. In TAPA, this dynamic valuation predicate forms a new underlying basis for justifying DCF analyses, distinct both from the specific-individual-investor DCF premise developed by the American economist Irving Fisher in his 1930 book The Theory of Interest and from the perfect-competitive-market approach to justifying DCF developed by Merton Miller and Franco Modigliani in their seminal paper on dividend policy and growth.
Since the Transactional approach to asset valuation, whose genesis can be traced back to Book V of Nicomachean Ethics, implies a distinct accounting for economic interests of both parties to a transaction with an economic asset, the buyer and the seller, it proceeds from developing a dual rate asset pricing model, which is complemented by a deductive-style multi-period discount rate derivation theory, originating as a generalization of the single-period discount rate framework of Burr-Williams, where the single-period discount rate, "r", is conceptualized as being constituted of the current income formula_1 component and the capital value appreciation formula_2 component for a single asset or a portfolio aggregate:
formula_3.
The multi-period discount rate evaluation theory within TAPA, on the other hand, is a portfolio-level theory, in that it applies to an investment aggregate. A general formula for evaluating discount rates/rates of return at a portfolio-level in TAPA looks as follows
formula_4, where formula_5 is the portfolio-level current return component (yield) for Period 1 of the selected projection period (it is assumed that the portfolio is made up of formula_6 assets, each yielding net operating income formula_7 by the end of Analysis Period 1 over the capital value formula_9 of asset formula_8 as at the beginning of Period 1); formula_10 is the expected rate of change in the aggregate net operating incomes formula_7 of the assets making up the portfolio during period formula_11 (subsequent to Period 1); and, in a similar vein, formula_12 are the expected capital value appreciation rates (growth rates) at the market/portfolio level during periods formula_13, respectively. Unlike the capital asset pricing model (CAPM), which is built in the two-coordinate plane "standard deviation (risk) versus expected (mean) return" relevant for pricing liquid securities traded on deep financial markets, the TAPA discount rate model is built in the "income growth versus capital value growth" coordinate plane; it is therefore relevant for pricing all types of income-producing assets, such as property, not just liquid securities.
Unlike single-period CAPM, TAPA is an explicit multi-period framework for forecasting market (or specific portfolio) rates of return.
The TAPA theory lists conditions under which the developed dual-rate general asset pricing model reduces to the conventional single-rate discounted cash flow (DCF) analysis framework. Such a framework with time-variable discount rates is called the TAPA BPE (Basic Pricing Equation): formula_14 where formula_15 stands for the market value of the asset being valued as at the valuation date (formula_16) determined under the income approach; formula_17 is the time-variable discount rate determined under the TAPA multi-period discount rate forecasting framework above; formula_18 is the expected rate of change in the capital value formula_19 of the asset being valued (the subject asset) over the period formula_20; and formula_21 is the expected rate of change in the net operating income from the subject asset formula_22 over period formula_23 (formula_24). The first term in the TAPA BPE formula above stands for the discounted (present) value of the subject asset's benefits represented by the formula_25 series, while the second term represents the residual (reversionary) value of the subject asset at the end of the projection/holding period formula_0, proportioned, via the imputation of the formula_26 terms, to the asset's present value formula_15 sought at the beginning of the projection period. Thus, the TAPA BPE equation of value is a circular equation, usually solvable by numeric methods of evaluation. The distinction between the formula_27 and formula_28 variables is emphasized in TAPA: the former represent the properties of the subject asset being valued, the latter the properties of a benchmark (a market aggregate or a portfolio) against which the valuation of the subject asset is rendered. The discount rate formula_17, being based on the formula_29 and formula_30 variables specific to the benchmark portfolio, therefore represents the dynamic (expected performance) properties of the benchmark, not any of the properties of the subject asset being valued. Such a conceptualization of discount rates in the TAPA context emphasizes the explicitly comparative nature of any income-approach-based valuation, with TAPA making the linkage to the valuation benchmark employed explicit via its specification of discount rates in the multi-period analysis framework.
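The circularity noted above can be resolved numerically. The sketch below (in Python, with hypothetical inputs; the discount rates r(i) are taken as given, for example as produced by the TAPA rate formula above, and fixed-point iteration is only one possible scheme) solves the BPE for PV(0); the iteration converges whenever the discounted reversionary factor is below one.

from math import prod

def tapa_bpe_pv0(noi, r, u_o, v_o, iterations=200):
    # Solve the TAPA Basic Pricing Equation for PV(0) by fixed-point iteration.
    # noi -- expected net operating income for Period 1
    # r   -- discount rates r(1..n); u_o, v_o -- subject-asset income and value growth rates
    n = len(r)
    # First (income) term of the BPE: discounted value of the NOI series.
    income_term = noi * sum(
        prod(1 + u_o[i] for i in range(1, k)) / prod(1 + r[i] for i in range(k))
        for k in range(1, n + 1))
    # Factor multiplying PV(0) in the second (reversionary) term.
    reversion_factor = prod(1 + v_o[i] for i in range(n)) / prod(1 + r[i] for i in range(n))
    pv = noi / r[0]                  # crude starting guess (direct capitalization)
    for _ in range(iterations):      # iterate PV <- income_term + reversion_factor * PV
        pv = income_term + reversion_factor * pv
    return pv

# Hypothetical 5-year projection: flat 10% discount rate, 2% income and value growth.
print(round(tapa_bpe_pv0(100.0, [0.10] * 5, [0.02] * 5, [0.02] * 5), 2))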
The TAPA theory provides original derivations and conditions under which the BPE can be further reduced to most of the known income capitalization formats within valuation theory, such as the direct income capitalization (DIC) format, the Gordon growth model, and the Inwood and Hoskold capitalization formats. One novel income capitalization format obtainable from the TAPA BPE is known as the "quick capitalization model".
Dual-rate asset pricing model in TAPA.
The dual-rate asset pricing model developed under TAPA represents a substantial contribution of TAPA to generalizing the discounted cash flow analysis framework. The general pricing equation for this model is as follows:
formula_31 where, additionally, formula_32 and formula_33 stand for the expected rates of return/discount rates utilized by the buyer ("b") and the seller ("s") with reference to Period formula_23, respectively, for the purposes of subject asset transactional pricing (hence, the "dual-rate" model); formula_34 is the expected residual (reversionary) value of the subject asset at the end of its holding period formula_0; and formula_35 is an imputed measure of disproportionality between the economic interests of the buyer and the seller, with the left-hand side of the equation reflecting the economic interests of the seller and the right-hand side those of the buyer. If the residual value of the subject asset formula_36 can be assumed to be proportioned to the asset's initial value sought formula_15 via an imputation of the asset's own rates of change in capital value expected over the projection period, formula_37, as is assumed in the TAPA BPE context, then the presented dual-rate pricing equation becomes amenable to solutions for formula_38, e.g. by numerical methods.
The dynamic principle of equity-in-exchange mentioned above implies formula_39; this principle, along with the assumption that formula_40, i.e. that the buyer's and seller's rates of return converge to some representative market value for such rates, formula_17, called "the discount rate" in conventional single-rate DCF applications, is needed to reduce the dual-rate pricing model to the more conventional-looking DCF analyses and the TAPA BPE shown above. As mentioned, TAPA's multi-period discount rate evaluation framework summarized in the formula above makes it possible to determine such converged market rates at the valuation benchmark (portfolio) level.
Applications.
The TAPA valuation framework is applicable to pricing assets possessing less than perfect liquidity, with reference to a selected valuation benchmark for which the discount rates have to be developed, or forecast, by a valuer. In particular, the TAPA approach has found applications for pricing assets to the Equitable/Fair standard of value (valuation basis), which is defined in the International Valuation Standards published by the International Valuation Standards Council. Additionally, the flexible time-variable nature of discount rates in the TAPA BPE provides a novel framework for exploring the effects of market cycles on the prices of assets embedded in those markets.
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "n"
},
{
"math_id": 1,
"text": "R"
},
{
"math_id": 2,
"text": "v"
},
{
"math_id": 3,
"text": "r=R+v"
},
{
"math_id": 4,
"text": "\\text{TAPA formula for evaluating discount rates, for period i, in multi-period time-variant discount rate setting: }r(i)= \\frac{R\\prod_{j = 1}^i (1+u(j))}{\\prod_{j=1}^{i-1} (1+v(j))}+ v(i)"
},
{
"math_id": 5,
"text": " R=\\frac{\\quad \\sum_{s=1}^n NOI_s}{\\quad\\sum_{s=1}^n PV_s}"
},
{
"math_id": 6,
"text": "1..s "
},
{
"math_id": 7,
"text": " NOI_s"
},
{
"math_id": 8,
"text": " s "
},
{
"math_id": 9,
"text": "PV_s"
},
{
"math_id": 10,
"text": " u(j) "
},
{
"math_id": 11,
"text": " j "
},
{
"math_id": 12,
"text": "v(i), v(j)"
},
{
"math_id": 13,
"text": "i, j"
},
{
"math_id": 14,
"text": "\\text{TAPA's Basic Pricing Equation (BPE), with time-variable discount rates: } PV(0)= NOI \\cdot\\sum_{k=1}^n\\frac{\\quad\\prod_{i=2}^k (1+u_o(i))}{\\quad\\prod_{i=1}^k(1+r(i))} + PV(0) \\frac{\\quad\\prod_{i=1}^n (1+v_o(i))}{\\quad\\prod_{i=1}^n (1+r(i))}"
},
{
"math_id": 15,
"text": "PV(0)"
},
{
"math_id": 16,
"text": "t=0"
},
{
"math_id": 17,
"text": "r(i)"
},
{
"math_id": 18,
"text": "v_0(i)"
},
{
"math_id": 19,
"text": "PV(i-1)"
},
{
"math_id": 20,
"text": "i ,(i=1..n)"
},
{
"math_id": 21,
"text": " u_o(i)"
},
{
"math_id": 22,
"text": "NOI(i-1)"
},
{
"math_id": 23,
"text": "i"
},
{
"math_id": 24,
"text": " NOI =NOI(1)"
},
{
"math_id": 25,
"text": "NOI"
},
{
"math_id": 26,
"text": " v_O(i)"
},
{
"math_id": 27,
"text": " v_o(i), u_o(i)"
},
{
"math_id": 28,
"text": " v(i), u(i) "
},
{
"math_id": 29,
"text": " v(i) "
},
{
"math_id": 30,
"text": " u(i) "
},
{
"math_id": 31,
"text": "\\text{TAPA dual rate asset pricing model: } PV(0)\\cdot\\prod_{i=1}^n (1+r^s(i)) = z \\left [ \\sum_{k=1}^{n-1} {NOI_k \\cdot \\prod_{i=k+1}^n (1+r^b(i))} +NOI_n +S_{res} \\right] "
},
{
"math_id": 32,
"text": "r^b(i)"
},
{
"math_id": 33,
"text": "r^s(i)"
},
{
"math_id": 34,
"text": "S_{res}"
},
{
"math_id": 35,
"text": "z"
},
{
"math_id": 36,
"text": " S_{res}"
},
{
"math_id": 37,
"text": " v_o(i)"
},
{
"math_id": 38,
"text": " PV(0)"
},
{
"math_id": 39,
"text": "z=1"
},
{
"math_id": 40,
"text": "r^b(i)=r^s(i)=r(i)"
}
] |
https://en.wikipedia.org/wiki?curid=63961033
|
6396576
|
Scheil equation
|
Metallurgical equation for the redistribution of solutes during solidification of an alloy
In metallurgy, the Scheil-Gulliver equation (or Scheil equation) describes solute redistribution during solidification of an alloy.
Assumptions.
Four key assumptions in Scheil analysis enable determination of phases present in a cast part. These assumptions are:
1. No diffusion occurs in the solid phases once they are formed (formula_0).
2. Infinitely fast diffusion occurs in the liquid (formula_1).
3. Equilibrium exists at the solid-liquid interface, so that compositions from the phase diagram are valid there.
4. The solidus and liquidus are straight segments.
The fourth condition (straight solidus/liquidus segments) may be relaxed when numerical techniques are used, such as those used in CALPHAD software packages, though these calculations rely on calculated equilibrium phase diagrams. Calculated diagrams may include odd artifacts (i.e. retrograde solubility) that influence Scheil calculations.
Derivation.
The hatched areas in the figure represent the amount of solute in the solid and liquid. Considering that the total amount of solute in the system must be conserved, the areas are set equal as follows:
formula_2.
Since the partition coefficient (related to solute distribution) is
formula_3 (determined from the phase diagram)
and mass must be conserved
formula_4
the mass balance may be rewritten as
formula_5.
Using the boundary condition
formula_6 at formula_7
the following integration may be performed:
formula_8.
Integrating results in the Scheil-Gulliver equation for composition of the liquid during solidification:
formula_9
or for the composition of the solid:
formula_10.
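As a short numerical sketch (with hypothetical values for the nominal composition C0 and the partition coefficient k), the two expressions can be evaluated directly in Python to follow the solute enrichment of the remaining liquid as solidification proceeds:

def scheil_liquid(c0, k, fs):
    # Liquid composition C_L after a solid fraction fs has formed (f_L = 1 - f_S).
    return c0 * (1.0 - fs) ** (k - 1.0)

def scheil_solid(c0, k, fs):
    # Composition C_S of the solid forming at solid fraction fs.
    return k * c0 * (1.0 - fs) ** (k - 1.0)

c0, k = 5.0, 0.5   # hypothetical alloy: 5 wt.% solute, partition coefficient 0.5
for fs in (0.0, 0.5, 0.9, 0.99):
    print(fs, round(scheil_liquid(c0, k, fs), 2), round(scheil_solid(c0, k, fs), 2))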
Applications of the Scheil equation: Calphad Tools for the Metallurgy of Solidification.
Nowadays, several Calphad software packages are available, within a framework of computational thermodynamics, to simulate solidification in systems with more than two components; these have recently been described as Calphad tools for the metallurgy of solidification. In recent years, Calphad-based methodologies have reached maturity in several important fields of metallurgy, especially in solidification-related processes such as semi-solid casting, 3D printing, and welding, to name a few. While there are important studies devoted to the progress of Calphad methodology, there is still space for a systematization of the field, which proceeds from the ability of most Calphad-based software to simulate solidification curves and which includes both fundamental and applied studies on solidification, so that the approach can be appreciated by a wider community than today. The applied fields mentioned above could be widened by specific successful examples of simple modeling, with the aim of broadening the application of simple and effective tools related to Calphad and metallurgy. See also "Calphad Tools for the Metallurgy of Solidification" in an ongoing special issue of an open-access journal: https://www.mdpi.com/journal/metals/special_issues/Calphad_Solidification
Given a specific chemical composition, the calculation of the Scheil curve is possible using software for computational thermodynamics, which may be open or commercial, provided a thermodynamic database is available. A point in favour of some commercial packages is that installation is straightforward and they can be used on a Windows-based system, for instance with students or for self-training.
Open, chiefly binary, databases (extension *.tdb) can be found, after registering, at the Computational Phase Diagram Database (CPDDB) of the National Institute for Materials Science of Japan (NIMS), https://cpddb.nims.go.jp/index_en.html. They are available free of charge and the collection is rather complete; currently 507 binary systems are available in the thermodynamic database (tdb) format.
Some wider and more specific alloy-system databases, partly open and in tdb-compatible format, are available (with minor corrections for Pandat use) at Matcalc: https://www.matcalc.at/index.php/databases/open-databases.
Numerical expression and numerical derivative of the Scheil curve: application to grain size on solidification and semi-solid processing.
A key concept for applications is the (numerical) derivative of the solid fraction fs with respect to temperature. A numerical example using a copper-zinc alloy with 30% Zn by weight has been proposed, with the sign of the derivative reversed so that both the temperature and its derivative can be plotted in the same graph.
Kozlov and Schmid-Fetzer have numerically calculated the derivative of the Scheil curve in an open-access paper (https://iopscience.iop.org/article/10.1088/1757-899X/27/1/012001) and applied it to the growth restriction factor Q in Al-Si-Mg-Cu alloys.
formula_11
Application to grain size on solidification.
This Calphad-calculated value of the numerical derivative, Q, has some interesting applications in the field of metal solidification. In fact, Q reflects the phase diagram of the alloy system, and its reciprocal has been found to have a relationship with the grain size d on solidification, which in some cases is empirically linear:
formula_12
where a and b are constants, as illustrated with some examples from the literature for Mg and Al alloys. Before Calphad use, Q values were calculated from the conventional relationship:
Q=m*c0(k−1)
where m is the slope of the liquidus, c0 is the solute concentration, and k is the equilibrium distribution coefficient.
More recently, other possible correlations of Q with the grain size d have been found, for instance:
formula_13
where B is a constant independent of alloy composition.
Application to solidification cracking.
In recent publications, Sindo Kou has proposed an approach to evaluate the susceptibility to solidification cracking. It is based on a quantity, formula_14, which has the dimensions of a temperature and is proposed as an index of the cracking susceptibility. Again, one can exploit Scheil-based solidification curves to link this index to the slope of the (Scheil) solidification curve:
formula_15
which, by the chain rule (∂((fS)^(1/2))/∂(fS) = 1/(2 (fS)^(1/2))), equals
formula_16
Application to semi-solid processing.
Last but not least, E. J. Zoqui has summarized the criteria proposed by several researchers for semi-solid processing, which involve the stability of the solid fraction fS with temperature: to process semi-solid alloys, the sensitivity of the solid fraction to temperature variations should be minimal, because in one direction the material could evolve into a solid that is difficult to deform and in the other into a liquid that is difficult to shape without proper moulding. This criterion can again be expressed through the slope of the solidification curve: ∂(fS)/∂T should be less than a certain threshold, commonly accepted in the scientific and technical literature to be 0.03 1/K. Mathematically this is the inequality ∂(fS)/∂T < 0.03 (1/K), where K stands for kelvin, and it can be taken as a rough estimate for the two main semi-solid processing routes, rheocasting (0.3 < fs < 0.4) and thixoforming (0.6 < fs < 0.7). Going back to the (numerical) and functional approaches above, one may equivalently consider the reciprocal value, i.e. ∂T/∂(fS) > 33 K; a numerical sketch of this check is given below.
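A minimal numerical sketch of this criterion (again assuming NumPy, and reusing the purely illustrative Scheil parameters of the previous sketch rather than data for a real alloy): it evaluates ∂fS/∂T along the Scheil curve and reports whether the usual rheocasting and thixoforming windows satisfy the 0.03 1/K limit.
<syntaxhighlight lang="python">
import numpy as np

# Illustrative Scheil curve, as in the previous sketch (placeholder parameters)
T_m, m, c0, k = 933.0, -3.5, 5.0, 0.15
fS = np.linspace(0.0, 0.99, 2000)
T = T_m + m * c0 * (1.0 - fS) ** (k - 1.0)

# Sensitivity of the solid fraction to temperature along the curve
dfS_dT = np.gradient(fS, T)

def window_ok(f_lo, f_hi, limit=0.03):
    """Check whether |dfS/dT| stays below `limit` (1/K) for f_lo < fS < f_hi."""
    mask = (fS > f_lo) & (fS < f_hi)
    worst = np.abs(dfS_dT[mask]).max()
    return worst < limit, worst

for name, (lo, hi) in {"rheocasting": (0.3, 0.4), "thixoforming": (0.6, 0.7)}.items():
    ok, worst = window_ok(lo, hi)
    verdict = "within" if ok else "outside"
    print(f"{name}: max |dfS/dT| = {worst:.4f} 1/K -> {verdict} the 0.03 1/K criterion")
</syntaxhighlight>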
|
[
{
"math_id": 0,
"text": "\\ D_S = 0 "
},
{
"math_id": 1,
"text": "\\ D_L = \\infty"
},
{
"math_id": 2,
"text": "(C_L-C_S) \\ df_S = (f_L) \\ dC_L"
},
{
"math_id": 3,
"text": "k = \\frac{C_S}{C_L}"
},
{
"math_id": 4,
"text": "\\ f_S + f_L = 1"
},
{
"math_id": 5,
"text": "C_L(1-k) \\ df_S = (1-f_S) \\ dC_L"
},
{
"math_id": 6,
"text": "\\ C_L = C_o "
},
{
"math_id": 7,
"text": "\\ f_S = 0"
},
{
"math_id": 8,
"text": "\\displaystyle\\int^{f_S}_0 \\frac{df_S}{1-f_S} = \\frac{1}{1-k} \\displaystyle\\int^{C_L}_{C_o} \\frac{dC_L}{C_L}"
},
{
"math_id": 9,
"text": "\\ C_L = C_o(f_L)^{k - 1}"
},
{
"math_id": 10,
"text": "\\ C_S = kC_o(1-f_S)^{k - 1}"
},
{
"math_id": 11,
"text": "Q=\\lim_{fs \\to 0}\\frac{\\partial T}{\\partial f_S} "
},
{
"math_id": 12,
"text": "d=a+\\frac{b}{Q} "
},
{
"math_id": 13,
"text": "d=\\frac{B}{Q^(1/3)} "
},
{
"math_id": 14,
"text": "\\lim_{fs \\to 1}\\frac{\\partial T}{\\partial f_S^(1/2)} "
},
{
"math_id": 15,
"text": "\\frac{\\partial T}{\\partial fs^(1/2)} = "
},
{
"math_id": 16,
"text": "\\frac{\\partial T}{\\partial fs}*fs^(1/2)/2 "
}
] |
https://en.wikipedia.org/wiki?curid=6396576
|
63967
|
Double pendulum
|
Pendulum with another pendulum attached to its end
In physics and mathematics, in the area of dynamical systems, a double pendulum, also known as a chaotic pendulum, is a pendulum with another pendulum attached to its end, forming a simple physical system that exhibits rich dynamic behavior with a strong sensitivity to initial conditions. The motion of a double pendulum is governed by a set of coupled ordinary differential equations and is chaotic.
Analysis and interpretation.
Several variants of the double pendulum may be considered; the two limbs may be of equal or unequal lengths and masses, they may be simple pendulums or compound pendulums (also called complex pendulums) and the motion may be in three dimensions or restricted to the vertical plane. In the following analysis, the limbs are taken to be identical compound pendulums of length l and mass m, and the motion is restricted to two dimensions.
In a compound pendulum, the mass is distributed along its length. If the double pendulum mass is evenly distributed, then the center of mass of each limb is at its midpoint, and the limb has a moment of inertia of "I" = 1/12 "ml"2 about that point.
It is convenient to use the angles between each limb and the vertical as the generalized coordinates defining the configuration of the system. These angles are denoted "θ"1 and "θ"2. The position of the center of mass of each rod may be written in terms of these two coordinates. If the origin of the Cartesian coordinate system is taken to be at the point of suspension of the first pendulum, then the center of mass of this pendulum is at:
formula_0
and the center of mass of the second pendulum is at
formula_1
This is enough information to write out the Lagrangian.
Lagrangian.
The Lagrangian is
formula_2
The first term is the "linear" kinetic energy of the center of mass of the bodies and the second term is the "rotational" kinetic energy around the center of mass of each rod. The last term is the potential energy of the bodies in a uniform gravitational field. The dot-notation indicates the time derivative of the variable in question.
Since (see Chain Rule and List of trigonometric identities)
formula_3
formula_4
formula_5
and
formula_6
formula_7
formula_8
formula_9
substituting the coordinates above and rearranging the equation gives
formula_10
formula_11
The Euler-Lagrange equations then give the two following second-order, non-linear differential equations in formula_12:
formula_13
No closed form solutions for "θ"1 and "θ"2 as functions of time are known; therefore, solving the system can only be done numerically, using the Runge–Kutta method or similar techniques.
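A minimal numerical sketch of such an integration (assuming NumPy and SciPy are available) for the two identical compound rods: the angular accelerations are obtained at each step by solving the 2×2 linear system that follows from the Lagrangian above via the Euler–Lagrange equations; the coefficients below are one possible rearrangement and should be checked against one's own derivation.
<syntaxhighlight lang="python">
import numpy as np
from scipy.integrate import solve_ivp

g, l = 9.81, 1.0  # gravitational acceleration (m/s^2) and rod length (m)

def rhs(t, y):
    """State y = (theta1, theta2, omega1, omega2) for two identical uniform rods."""
    th1, th2, w1, w2 = y
    c, s = np.cos(th1 - th2), np.sin(th1 - th2)
    # From L = (1/6) m l^2 (4 th1'^2 + th2'^2 + 3 th1' th2' cos(th1 - th2))
    #        + (1/2) m g l (3 cos th1 + cos th2),
    # the Euler-Lagrange equations reduce to M @ (a1, a2) = b (the mass m cancels):
    M = np.array([[8.0, 3.0 * c],
                  [3.0 * c, 2.0]])
    b = np.array([-3.0 * s * w2**2 - 9.0 * (g / l) * np.sin(th1),
                   3.0 * s * w1**2 - 3.0 * (g / l) * np.sin(th2)])
    a1, a2 = np.linalg.solve(M, b)
    return [w1, w2, a1, a2]

# Release from rest at a generic initial configuration
y0 = [np.pi / 2, np.pi / 2, 0.0, 0.0]
sol = solve_ivp(rhs, (0.0, 10.0), y0, method="RK45",
                rtol=1e-9, atol=1e-9, dense_output=True)

# Sensitivity to initial conditions: a tiny perturbation of theta1 grows rapidly
y0_pert = [y0[0] + 1e-9, *y0[1:]]
sol_pert = solve_ivp(rhs, (0.0, 10.0), y0_pert, method="RK45",
                     rtol=1e-9, atol=1e-9, dense_output=True)
t = np.linspace(0.0, 10.0, 5)
print(np.abs(sol.sol(t)[0] - sol_pert.sol(t)[0]))  # divergence of theta1 over time
</syntaxhighlight>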
Chaotic motion.
The double pendulum undergoes chaotic motion, and clearly shows a sensitive dependence on initial conditions. The image to the right shows the amount of elapsed time before the pendulum flips over, as a function of initial position when released at rest. Here, the initial value of "θ"1 ranges along the x-direction from −3.14 to 3.14, and the initial value of "θ"2 ranges along the y-direction from −3.14 to 3.14. The colour of each pixel indicates whether either pendulum flips within formula_15, formula_16, formula_17, or formula_18.
Initial conditions that do not lead to a flip within formula_18 are plotted white.
The boundary of the central white region is defined in part by energy conservation with the following curve:
formula_19
Within the region defined by this curve, that is, if
formula_20
then it is energetically impossible for either pendulum to flip. Outside this region, the pendulum can flip, but it is a complex question to determine when it will flip. Similar behavior is observed for a double pendulum composed of two point masses rather than two rods with distributed mass.
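As a small illustration (assuming NumPy) of how this energy bound partitions the space of initial conditions released at rest, the following sketch marks the grid points where formula_20 holds, for which neither pendulum can ever flip.
<syntaxhighlight lang="python">
import numpy as np

# Grid of initial angles for a pendulum released at rest, as in the figure description
theta1, theta2 = np.meshgrid(np.linspace(-3.14, 3.14, 500),
                             np.linspace(-3.14, 3.14, 500))

# Energetically forbidden to flip where 3*cos(theta1) + cos(theta2) > 2
cannot_flip = 3.0 * np.cos(theta1) + np.cos(theta2) > 2.0

print(f"fraction of initial conditions that can never flip: {cannot_flip.mean():.3f}")
</syntaxhighlight>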
The lack of a natural excitation frequency has led to the use of double pendulum systems in seismic resistance designs in buildings, where the building itself is the primary inverted pendulum, and a secondary mass is connected to complete the double pendulum.
Notes.
<templatestyles src="Reflist/styles.css" />
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "\\begin{align}\nx_1 &= \\frac{l}{2} \\sin \\theta_1 \\\\\ny_1 &= -\\frac{l}{2} \\cos \\theta_1\n\\end{align}"
},
{
"math_id": 1,
"text": "\\begin{align}\nx_2 &= l \\left ( \\sin \\theta_1 + \\tfrac{1}{2} \\sin \\theta_2 \\right ) \\\\\ny_2 &= -l \\left ( \\cos \\theta_1 + \\tfrac{1}{2} \\cos \\theta_2 \\right )\n\\end{align}"
},
{
"math_id": 2,
"text": "\n\\begin{align}L & = \\text{kinetic energy} - \\text{potential energy} \\\\\n & = \\tfrac{1}{2} m \\left ( v_1^2 + v_2^2 \\right ) + \\tfrac{1}{2} I \\left ( {\\dot \\theta_1}^2 + {\\dot \\theta_2}^2 \\right ) - m g \\left ( y_1 + y_2 \\right ) \\\\\n & = \\tfrac{1}{2} m \\left ( {\\dot x_1}^2 + {\\dot y_1}^2 + {\\dot x_2}^2 + {\\dot y_2}^2 \\right ) + \\tfrac{1}{2} I \\left ( {\\dot \\theta_1}^2 + {\\dot \\theta_2}^2 \\right ) - m g \\left ( y_1 + y_2 \\right ) \\end{align}\n"
},
{
"math_id": 3,
"text": "\n\\dot x_1 = \\dot \\theta_1 \\left(\\tfrac{l}{2} \\cos \\theta_1 \\right) \n\\quad \\rightarrow \\quad\n\\dot x_1^2 = \\dot \\theta_1^2 \\left(\\tfrac{l^2}{4} \\cos^2 \\theta_1 \\right) \n"
},
{
"math_id": 4,
"text": "\n\\dot y_1 = \\dot \\theta_1 \\left(\\tfrac{l}{2} \\sin \\theta_1 \\right)\n\\quad \\rightarrow \\quad\n\\dot y_1^2 = \\dot \\theta_1^2 \\left(\\tfrac{l^2}{4} \\sin^2 \\theta_1 \\right) \n"
},
{
"math_id": 5,
"text": "\n\\dot x_1^2 + \\dot y_1^2 = \\dot \\theta_1^2 \\tfrac{l^2}{4} \\left(\\cos^2 \\theta_1 + \\sin^2 \\theta_1 \\right) = \\tfrac{l^2}{4} \\dot \\theta_1^2 ,\n"
},
{
"math_id": 6,
"text": "\n\\dot x_2 = l \\left(\\dot \\theta_1 \\cos \\theta_1 + \\tfrac{1}{2} \\dot \\theta_2 \\cos \\theta_2 \\right)\n\\quad \\rightarrow \\quad\n\\dot x_2^2 = l^2 \\left(\n\\dot \\theta_1^2 \\cos^2 \\theta_1 + \\dot \\theta_1 \\dot \\theta_2 \\cos \\theta_1 \\cos \\theta_2 + \\tfrac{1}{4} \\dot \\theta_2^2 \\cos^2 \\theta_2\n \\right) \n"
},
{
"math_id": 7,
"text": "\n\\dot y_2 = l \\left(\\dot \\theta_1 \\sin \\theta_1 + \\tfrac{1}{2} \\dot \\theta_2 \\sin \\theta_2 \\right)\n\\quad \\rightarrow \\quad\n\\dot y_2^2 = l^2 \\left(\n\\dot \\theta_1^2 \\sin^2 \\theta_1 + \\dot \\theta_1 \\dot \\theta_2 \\sin \\theta_1 \\sin \\theta_2 + \\tfrac{1}{4} \\dot \\theta_2^2 \\sin^2 \\theta_2\n \\right) \n"
},
{
"math_id": 8,
"text": "\n\\dot x_2^2 + \\dot y_2^2 = \nl^2 \\left(\n\\dot \\theta_1^2 \\cos^2 \\theta_1 + \\dot \\theta_1^2 \\sin^2 \\theta_1 + \\tfrac{1}{4} \\dot \\theta_2^2 \\cos^2 \\theta_2 \n+ \\tfrac{1}{4} \\dot \\theta_2^2 \\sin^2 \\theta_2\n+ \\dot \\theta_1 \\dot \\theta_2 \\cos \\theta_1 \\cos \\theta_2 + \\dot \\theta_1 \\dot \\theta_2 \\sin \\theta_1 \\sin \\theta_2\n \\right) \n"
},
{
"math_id": 9,
"text": "\n= l^2 \\left(\n\\dot \\theta_1^2 + \\tfrac{1}{4} \\dot \\theta_2^2 \n+ \\dot \\theta_1 \\dot \\theta_2 \\cos \\left(\\theta_1 - \\theta_2 \\right)\n \\right),\n"
},
{
"math_id": 10,
"text": "\nL = \\tfrac{m l^2}{2} \\left(\\tfrac{1}{4} \\dot \\theta_1^2 + \n\\dot \\theta_1^2 + \\tfrac{1}{4} \\dot \\theta_2^2 \n+ \\dot \\theta_1 \\dot \\theta_2 \\cos \\left(\\theta_1 - \\theta_2 \\right)\n\\right)\n+ \n\\tfrac{m l^2}{24} \\left( \\dot \\theta_1^2 + \\dot \\theta_2^2 \\right)\n- m g \\left(y_1 + y_2 \\right)\n"
},
{
"math_id": 11,
"text": "\n= \\tfrac{1}{6} m l^2 \\left ( {\\dot \\theta_2}^2 + 4 {\\dot \\theta_1}^2 + 3 {\\dot \\theta_1} {\\dot \\theta_2} \\cos (\\theta_1-\\theta_2) \\right ) + \\tfrac{1}{2} m g l \\left ( 3 \\cos \\theta_1 + \\cos \\theta_2 \\right ).\n"
},
{
"math_id": 12,
"text": "(\\theta_1,\\theta_2)"
},
{
"math_id": 13,
"text": "\\begin{array}{l}\n m_2 \\left(g \\sin \\left(\\theta _1\\right)+l_2 \\left(\\left(\\theta _2'\\right){}^2\n \\sin \\left(\\theta _1-\\theta _2\\right)+\\theta _2'' \\cos \\left(\\theta\n _1-\\theta _2\\right)\\right)+l_1 \\theta _1''\\right)+m_1 \\left(g \\sin\n \\left(\\theta _1\\right)+l_1 \\theta _1''\\right)=0 \\\\\n g \\sin \\left(\\theta _2\\right)+l_1 \\left(\\theta _1'' \\cos \\left(\\theta\n _1-\\theta _2\\right)-\\left(\\theta _1'\\right){}^2 \\sin \\left(\\theta _1-\\theta\n _2\\right)\\right)+l_2 \\theta _2''=0 \\\\\n\\end{array}"
},
{
"math_id": 14,
"text": "\\sqrt{\\frac{l}{g}}"
},
{
"math_id": 15,
"text": "10\\sqrt{\\frac{l}{g}}"
},
{
"math_id": 16,
"text": "100\\sqrt{\\frac{l}{g}}"
},
{
"math_id": 17,
"text": "1000\\sqrt{\\frac{l}{g}}"
},
{
"math_id": 18,
"text": "10000\\sqrt{\\frac{l}{g}}"
},
{
"math_id": 19,
"text": "3 \\cos \\theta_1 + \\cos \\theta_2 = 2. "
},
{
"math_id": 20,
"text": "3 \\cos \\theta_1 + \\cos \\theta_2 > 2, "
}
] |
https://en.wikipedia.org/wiki?curid=63967
|
639706
|
Swampland (physics)
|
Low energy theories not compatible with string theory
In physics, the term swampland refers to effective low-energy physical theories which are not compatible with quantum gravity. This is in contrast with the theories of the so-called "string theory landscape", which are known to be compatible with string theory, hypothesized to be a consistent quantum theory of gravity. In other words, the Swampland is the set of consistent-looking theories that admit no consistent ultraviolet completion once gravity is added.
Developments in string theory also suggest that the string theory landscape of false vacua is vast, so it is natural to ask whether the landscape is as vast as allowed by anomaly-free effective field theories. The Swampland program aims to delineate the theories of quantum gravity by identifying the universal principles shared among all theories compatible with gravitational UV completion. The program was initiated by Cumrun Vafa, who argued that string theory suggests that the Swampland is in fact much larger than the string theory landscape.
Quantum gravity differs from quantum field theory in several key ways, including locality and UV/IR decoupling. In quantum gravity, a local structure of observables is emergent rather than fundamental. A concrete example of the emergence of locality is AdS/CFT, where the local quantum field theory description in the bulk is only an approximation that emerges in certain limits of the theory. It is also believed that in quantum gravity different spacetime topologies can contribute to the gravitational path integral, which suggests that a given spacetime emerges because one saddle point dominates. Moreover, in quantum gravity, UV and IR are closely related. This connection is manifested in black hole thermodynamics, where a semiclassical IR theory calculates the black hole entropy, which captures the density of gravitational UV states known as black holes. In addition to general arguments based on black hole physics, developments in string theory also suggest that there are universal principles shared among all the theories in the string landscape.
The swampland conjectures are a set of conjectured criteria for theories in the quantum gravity landscape. The criteria are often motivated by black hole physics, universal patterns in string theory, and non-trivial self-consistencies among each other.
No global symmetry conjecture.
The no global symmetry conjecture states that any symmetry in quantum gravity is either broken or gauged. In other words, there are no accidental symmetries in quantum gravity. The original motivation for the conjecture goes back to black holes. Hawking radiation of a generic black hole is only sensitive to charges that can be measured outside of the black hole, which are charges under gauge symmetries. Therefore, it is believed that the process of black hole formation and evaporation violates any conservation law that is not protected by a gauge symmetry. The no global symmetry conjecture can also be derived from the AdS/CFT correspondence for theories in AdS space.
Generalization to higher-form symmetries.
The modern understanding of global and gauge symmetries allows for a natural generalization of the no-global symmetry conjectures to higher-form symmetries. A conventional symmetry (0-form symmetry) is a map that acts on point-like operators. For example, a free complex scalar field formula_0 has a formula_1 symmetry which acts on the operator formula_2 as formula_3, where formula_4 is a constant. One can use the symmetry to associate an operator formula_5 to any symmetry element formula_6 and codimension-1 hypersurface formula_7 such that formula_5 maps any charged local operator such as formula_2 to formula_8 if the point formula_9 is enclosed (or linked) by formula_7. By definition, the action of the operator formula_5 does not change by a continuous deformation of formula_7 as long as formula_7 does not hit a charged operator. Due to this feature, the operator formula_10 is called a topological operator. If the algebra governing the fusion of the symmetry operators has an element without an inverse, the corresponding symmetry is called a non-invertible symmetry.
The above definitions can be generalized to higher dimensional charged operators. A collection of codimension-formula_11 topological operators which act non-trivially on dimension-formula_12 operators and are closed under fusion is called a formula_12-form symmetry. Compactification of a higher dimensional theory with a formula_12-form symmetry on a formula_12-dimensional torus can map the higher form symmetry to a formula_13-form symmetry in the lower dimensional theory. Therefore, it is believed that higher-form global symmetries are also excluded from quantum gravity.
Note that gauge symmetry does not satisfy this definition since, in the process of gauging, any local charged operator is excluded from the physical spectrum.
Cobordism conjecture.
Global symmetries are closely connected to conservation laws. The no-global symmetry conjecture essentially states that any conservation law that is not protected by a gauge symmetry can be violated via a dynamical process. This intuition leads to the cobordism conjecture.
Consider a gravitational theory that can be put on two backgrounds with formula_14 non-compact dimensions and internal geometries formula_15 and formula_16. Cobordism conjecture states that there must be a dynamical process which connects the two backgrounds to each other. In other words, there must exist a domain wall in the lower-dimensional theory which separates the two backgrounds. This resembles the idea of cobordism in mathematics, which interpolates between two manifolds by connecting them using a higher dimensional manifold.
Completeness of spectrum hypothesis.
The completeness of spectrum hypothesis conjectures that in quantum gravity, the spectrum of charges under any gauge symmetry is completely realized. This conjecture is universally satisfied in string theory, but is also motivated by black hole physics. The entropy of charged black holes is non-zero. Since the exponential of entropy counts the number of states, the non-zero entropy of black holes suggests that for sufficiently high charges, any charge is realized by at least one black hole state.
Relation to no-global symmetry conjecture.
The completeness of spectrum hypothesis is closely related to the no global symmetry conjecture.
Example:
Consider a formula_1 gauge symmetry. In the absence of charged particles, the theory has a 1-form global symmetry formula_17. For any number formula_18 and any codimension 2 surface formula_7, the symmetry operator formula_19 multiplies a Wilson line that links with formula_7 by formula_20, where the charge associated with the Wilson line is formula_21 units of the fundamental charge.
In the presence of charged particles, Wilson lines can break up. Suppose there is a charged particle with charge formula_22; then the Wilson lines can change their charges by multiples of formula_22. Therefore, some of the symmetry operators formula_19 are no longer well-defined. However, if we take formula_22 to be the smallest charge, the values formula_23 give rise to well defined symmetry operators. Therefore, a formula_24 part of the global symmetry survives. To avoid any global symmetry, formula_22 must be 1, which means all charges appear in the spectrum.
The above argument can be generalized to discrete and higher-dimensional symmetries. The completeness of spectrum follows from the absence of generalized global symmetry which also includes non-invertible symmetries.
Weak gravity conjecture.
The weak gravity conjecture (WGC) is a conjecture regarding the strength gravity can have in a theory of quantum gravity relative to the gauge forces in that theory. It roughly states that gravity should be the weakest force in any consistent theory of quantum gravity.
Original conjecture.
The weak gravity conjecture postulates that every black hole must decay unless it is protected by supersymmetry. Suppose there is a formula_1 gauge symmetry; then there is an upper bound on the charge of a black hole of a given mass. The black holes that saturate that bound are extremal black holes, and extremal black holes have zero Hawking temperature. Whether a black hole with a charge and a mass that exactly satisfy the extremality condition exists depends on the quantum theory, but given the high entropy of large extremal black holes, there must exist many states with charges and masses arbitrarily close to the extremality condition. Suppose the black hole emits a particle with charge formula_25 and mass formula_26. For the remaining black hole to remain subextremal, we must have formula_27 in Planck units, where the extremality condition takes the form formula_28.
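A short check of the bookkeeping behind this statement (a sketch assuming the emitted charge has the same sign as the black hole's charge, working in Planck units and requiring amsmath):
<syntaxhighlight lang="latex">
% A (near-)extremal black hole of mass M and charge Q = M emits a particle
% of mass m and charge q aligned with Q:
\begin{gather*}
  M' = M - m, \qquad Q' = Q - q, \\
  |Q'| \le M' \;\Longleftrightarrow\; Q - q \le M - m \;\Longleftrightarrow\; m \le q .
\end{gather*}
% The remainder stays subextremal only if the emitted state satisfies q >= m,
% i.e. its gauge repulsion is at least as strong as its gravitational attraction.
</syntaxhighlight>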
Mild version.
Given that black holes are the natural extension of particles beyond a certain mass, it is natural to assume that there must also be black holes with a charge-to-mass ratio that is greater than that of very large black holes. In other words, the correction to the extremality condition formula_29 must be such that formula_30.
Higher dimensional generalization.
Weak gravity conjecture can be generalized to higher-form gauge symmetries. The generalization postulates that for any higher-form gauge symmetry, there exists a brane which has a charge-to-mass ratio that exceeds the charge-to-mass ratio of the extremal branes.
Distance conjecture.
String dualities have played a crucial role in developing the modern understanding of string theory by providing a non-perturbative window into UV physics. In string theory, when one takes the vacuum expectation values of the scalar fields of a theory to a certain limit, a dual description always emerges. An example of this is T-duality, where there are two dual descriptions to understand a string theory with an internal geometry of a circle. However, each perturbative description becomes valid in a different regime of the parameter space. The circle's radius manifests itself as a scalar field in the lower dimensional theory. If one takes the value of this scalar field to infinity, the resulting theory can be described by the original higher dimensional theory. The new description includes a tower of light states corresponding to the Kaluza-Klein (KK) particles. On the other hand, if we take the size of the circle to zero, the strings that wind around the circle will become light. T-duality is the statement that there exists an alternative description which captures these light winding states as KK particles. Note that in the absence of a string, there is no reason to believe any states should become light in the limit where the size of the circle goes to zero. Distance conjecture quantifies the above observation and states that it must happen at any infinite distance limit of the parameter space.
Original conjecture.
If one takes the vacuum expectation value of the scalar fields to infinity, there exists a tower of light and weakly coupled states whose mass in Planck units goes to zero. Moreover, the mass of the particles depends on the canonical distance travelled in the moduli space formula_31 as formula_32, where formula_33 and formula_34 are constants. Moreover, there is a universal dimension-dependent lower bound on formula_34.
The canonical distance between two points in the target space for scalar expectations values (moduli space) is measured using the canonical metric formula_35, which is defined by the kinetic term in action.
formula_36
Emergent string conjecture.
A stronger version of the original distance conjecture additionally postulates that the lightest tower of states at any infinite distance limit is either a KK tower or a string tower. In other words, the leading tower of states can either be understood via dimensional reduction of a higher dimensional theory (just like the example provided above) or as excitations of a weakly coupled string.
This conjecture is often further strengthened by imposing the string to be a fundamental string.
The sharpened distance conjecture.
The sharpened distance conjecture states that in formula_14 spacetime dimensions, formula_37.
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "\\phi(x)"
},
{
"math_id": 1,
"text": "U(1)"
},
{
"math_id": 2,
"text": "\\hat\\phi(x)"
},
{
"math_id": 3,
"text": "\\hat\\phi(x)\\rightarrow e^{i\\alpha}\\hat\\phi(x)"
},
{
"math_id": 4,
"text": "\\alpha"
},
{
"math_id": 5,
"text": "\\mathcal{O}_g(\\Sigma)"
},
{
"math_id": 6,
"text": "g"
},
{
"math_id": 7,
"text": "\\Sigma"
},
{
"math_id": 8,
"text": "g(\\hat\\phi(x))"
},
{
"math_id": 9,
"text": "x"
},
{
"math_id": 10,
"text": " \\mathcal{O}"
},
{
"math_id": 11,
"text": "(p+1)"
},
{
"math_id": 12,
"text": "p"
},
{
"math_id": 13,
"text": "0"
},
{
"math_id": 14,
"text": "d"
},
{
"math_id": 15,
"text": "M"
},
{
"math_id": 16,
"text": "N"
},
{
"math_id": 17,
"text": "(\\mathbb{R},+)"
},
{
"math_id": 18,
"text": "c\\in\\mathbb{R}"
},
{
"math_id": 19,
"text": "\\mathcal{O}_c(\\Sigma)"
},
{
"math_id": 20,
"text": "e^{icn}"
},
{
"math_id": 21,
"text": "n"
},
{
"math_id": 22,
"text": "k"
},
{
"math_id": 23,
"text": "c\\in\\{1/k,2/k,...,k/k\\}"
},
{
"math_id": 24,
"text": "\\mathbb{Z}_k"
},
{
"math_id": 25,
"text": "q"
},
{
"math_id": 26,
"text": "m"
},
{
"math_id": 27,
"text": "|q|<m"
},
{
"math_id": 28,
"text": "|Q|=M"
},
{
"math_id": 29,
"text": "|Q|=M+\\delta M"
},
{
"math_id": 30,
"text": "\\delta M>0"
},
{
"math_id": 31,
"text": "\\Delta\\phi"
},
{
"math_id": 32,
"text": "m\\sim M_0\\exp(-\\lambda\\Delta\\phi)"
},
{
"math_id": 33,
"text": "M_0"
},
{
"math_id": 34,
"text": "\\lambda"
},
{
"math_id": 35,
"text": "G"
},
{
"math_id": 36,
"text": "S=\\int d^dx \\sqrt{g}\\frac{1}{2}G_{ij}\\partial_\\mu\\phi^i\\partial^\\mu\\phi^j+..."
},
{
"math_id": 37,
"text": "\\lambda\\geq1/\\sqrt{d-2}"
}
] |
https://en.wikipedia.org/wiki?curid=639706
|
63983302
|
Matchbox Educable Noughts and Crosses Engine
|
Mechanical computer made of matchboxes
The Matchbox Educable Noughts and Crosses Engine (sometimes called the Machine Educable Noughts and Crosses Engine or MENACE) was a mechanical computer made from 304 matchboxes designed and built by artificial intelligence researcher Donald Michie in 1961. It was designed to play human opponents in games of noughts and crosses (tic-tac-toe) by returning a move for any given state of play and to refine its strategy through reinforcement learning.
Michie did not have a computer readily available, so he worked around this restriction by building it out of matchboxes. The matchboxes used by Michie each represented a single possible layout of a noughts and crosses grid. When the computer first played, it would randomly choose moves based on the current layout. As it played more games, through a reinforcement loop, it disqualified strategies that led to losing games, and supplemented strategies that led to winning games. Michie held a tournament against MENACE in 1961, wherein he experimented with different openings.
Following MENACE's maiden tournament against Michie, it demonstrated successful artificial intelligence in its strategy. Michie's essays on MENACE's weight initialisation and the BOXES algorithm used by MENACE became popular in the field of computer science research. Michie was honoured for his contribution to machine learning research, and was twice commissioned to program a MENACE simulation on an actual computer.
Origin.
Donald Michie (1923–2007) had been on the team decrypting the German Tunny Code during World War II. Fifteen years later, he wanted to further display his mathematical and computational prowess with an early machine learning system. Since computer equipment was not obtainable for such uses and Michie did not have a computer readily available, he decided to display and demonstrate artificial intelligence in a more esoteric format and constructed a functional mechanical computer out of matchboxes and beads.
MENACE was constructed as the result of a bet with a computer science colleague who postulated that such a machine was impossible. Michie undertook the task of collecting and defining each matchbox as a "fun project", later turned into a demonstration tool. Michie completed his essay on MENACE in 1963, "Experiments on the mechanization of game-learning", as well as his essay on the BOXES Algorithm, written with R. A. Chambers and had built up an AI research unit in Hope Park Square, Edinburgh, Scotland.
MENACE learned by playing increasing numbers of games of noughts and crosses. Each time it lost, the human player weakened the losing strategy by confiscating the beads that corresponded to each move played. It reinforced winning strategies by making those moves more likely, through the addition of extra beads. This was one of the earliest versions of the reinforcement loop: the procedure is repeated, dropping unsuccessful strategies until only the winning ones remain. The model starts out completely random and gradually learns.
Composition.
MENACE was made from 304 matchboxes glued together in an arrangement similar to a chest of drawers. Each box had a code number, which was keyed into a chart. This chart had drawings of tic-tac-toe game grids with various configurations of "X", "O", and empty squares, corresponding to all possible permutations a game could go through as it progressed. After removing duplicate arrangements (ones that were simply rotations or mirror images of other configurations), MENACE used 304 permutations in its chart and thus that many matchboxes.
Each individual matchbox tray contained a collection of coloured beads. Each colour represented a move on a square on the game grid, and so matchboxes with arrangements where positions on the grid were already taken would not have beads for that position. Additionally, at the front of the tray were two extra pieces of card in a "V" shape, the point of the "V" pointing at the front of the matchbox. Michie and his artificial intelligence team called MENACE's algorithm "Boxes", after the apparatus used for the machine. The first stage "Boxes" operated in five phases, each setting a definition and a precedent for the rules of the algorithm in relation to the game.
Operation.
MENACE played first, as O, since all matchboxes represented permutations only relevant to the "X" player. To retrieve MENACE's choice of move, the opponent or operator located the matchbox that matched the current game state, or a rotation or mirror image of it. For example, at the start of a game, this would be the matchbox for an empty grid. The tray would be removed and lightly shaken so as to move the beads around. Then, the bead that had rolled into the point of the "V" shape at the front of the tray was the move MENACE had chosen to make. Its colour was then used as the position to play on, and, after accounting for any rotations or flips needed based on the chosen matchbox configuration's relation to the current grid, the O would be placed on that square. Then the player performed their move, the new state was located, a new move selected, and so on, until the game was finished.
When the game had finished, the human player observed the game's outcome. As a game was played, each matchbox that was used for MENACE's turn had its tray returned to it ajar, and the bead used kept aside, so that MENACE's choice of moves and the game states they belonged to were recorded. Michie described his reinforcement system with "reward" and "punishment". Once the game was finished, if MENACE had won, it would then receive a "reward" for its victory. The removed beads showed the sequence of the winning moves. These were returned to their respective trays, easily identifiable since they were slightly open, as well as three bonus beads of the same colour. In this way, in future games MENACE would become more likely to repeat those winning moves, reinforcing winning strategies. If it lost, the removed beads were not returned, "punishing" MENACE, and meaning that in future it would be less likely, and eventually incapable if that colour of bead became absent, to repeat the moves that cause a loss. If the game was a draw, one additional bead was added to each box.
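The bookkeeping described above can be condensed into a short software sketch (a simplified Python re-implementation, not Michie's original apparatus: the symmetry reduction to 304 boxes is omitted, the opponent here plays randomly, and the starting bead counts are arbitrary). Each encountered board state gets a "matchbox" of beads, a move is drawn with probability proportional to its bead count, and the boxes used in a game gain three beads after a win, one after a draw, and lose the played bead after a defeat.
<syntaxhighlight lang="python">
import random

LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8),
         (0, 3, 6), (1, 4, 7), (2, 5, 8),
         (0, 4, 8), (2, 4, 6)]

boxes = {}  # one "matchbox" per board state: {move index: bead count}

def winner(board):
    """Return "O" or "X" for a completed row, "draw" for a full board, else None."""
    for a, b, c in LINES:
        if board[a] != " " and board[a] == board[b] == board[c]:
            return board[a]
    return "draw" if " " not in board else None

def menace_move(board):
    """Pick a move with probability proportional to the bead count for this state."""
    state = "".join(board)
    if state not in boxes:
        boxes[state] = {i: 2 for i, s in enumerate(board) if s == " "}  # initial beads
    beads = boxes[state]
    moves, weights = zip(*beads.items())
    if sum(weights) == 0:            # empty box: the real MENACE would resign
        return state, random.choice(moves)
    return state, random.choices(moves, weights=weights)[0]

def play_game():
    board, used, player = [" "] * 9, [], "O"   # MENACE plays O and moves first
    while winner(board) is None:
        if player == "O":
            state, move = menace_move(board)
            used.append((state, move))
        else:                                   # stand-in opponent: random play
            move = random.choice([i for i, s in enumerate(board) if s == " "])
        board[move] = player
        player = "X" if player == "O" else "O"
    return winner(board), used

def reinforce(result, used):
    """+3 beads per move on a win, +1 on a draw, the played bead is lost on a defeat."""
    delta = {"O": 3, "draw": 1, "X": -1}[result]
    for state, move in used:
        boxes[state][move] = max(0, boxes[state][move] + delta)

for _ in range(2000):                # train against the random opponent
    result, used = play_game()
    reinforce(result, used)
</syntaxhighlight>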
Results in practice.
Optimal strategy.
Noughts and crosses has a well-known optimal strategy. A player must place their symbol in a way that blocks the other player from achieving any rows while simultaneously making a row themself. However, if both players use this strategy, the game always ends in a draw. If the human player is familiar with the optimal strategy, and MENACE can quickly learn it, then the games will eventually only end in draws. The likelihood of the computer winning increases quickly when the computer plays against a random-playing opponent.
When playing against a player using optimal strategy, the odds of a draw grow to 100%. In Donald Michie's official tournament against MENACE in 1961 he used optimal strategy, and he and the computer began to draw consistently after twenty games. Michie's tournament had the following milestones: Michie began by consistently opening with "Variant 0", the middle square. At 15 games, MENACE abandoned all non-corner openings. At just over 20, Michie switched to consistently using "Variant 1", the bottom-right square. At 60, he returned to Variant 0. As he neared 80 games, he moved to "Variant 2", the top-middle. At 110, he switched to "Variant 3", the top right. At 135, he switched to "Variant 4", middle-right. At 190, he returned to Variant 1, and at 210, he returned to Variant 0.
(The trend in the changes of beads in the "2" boxes over the course of the tournament was shown in an accompanying chart.)
Correlation.
Depending on the strategy employed by the human player, MENACE produces a different trend on scatter graphs of wins. Using a random turn from the human player results in an almost-perfect positive trend. Playing the optimal strategy returns a slightly slower increase. The reinforcement does not create a perfect standard of wins; the algorithm will draw random uncertain conclusions each time. After the "j"-th round, the correlation of near-perfect play runs:
formula_0
Where "Vi" is the outcome (+1 is win, 0 is draw and -1 is loss) and "D" is the decay factor (average of past values of wins and losses). Below, "Mn" is the multiplier for the "n"-th round of the game.
Legacy.
Donald Michie's MENACE proved that a computer could learn from failure and success to become good at a task. It used what would become core principles within the field of machine learning before they had been properly theorised. For example, the combination of how MENACE starts with equal numbers of types of beads in each matchbox, and how these are then selected at random, creates a learning behaviour similar to weight initialisation in modern artificial neural networks. In 1968, Donald Michie and R.A Chambers made another BOXES-based algorithm called GLEE (Game Learning Expectimaxing Engine) which had to learn how to balance a pole on a cart.
After the resounding reception of MENACE, Michie was invited to the US Office of Naval Research, where he was commissioned to build a BOXES-running program for an IBM computer for use at Stanford University. Michie created a simulation program of MENACE on a Pegasus 2 computer with the aid of D. Martin. There have been multiple recreations of MENACE in more recent years, both in its original physical form and as a computer program. Its algorithm was a precursor to Christopher Watkins's Q-learning algorithm. Although not a functional computer in the usual sense, MENACE has been used in demonstrations as a teaching aid for various neural network classes, including a public demonstration by University College London researcher Matthew Scroggs. A copy of MENACE built by Scroggs was featured in the 2019 Royal Institution Christmas Lectures, and in a 2023 episode of QI XL.
MENACE in Popular Culture.
MENACE is referenced in Fred Saberhagen's 1963 short story "Without A Thought", and Thomas J Ryan's 1977 novel "The Adolescence of P-1". In her 2023 book "The Future", author Naomi Alderman includes a fictional lecture with a detailed overview of MENACE.
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "{1-D \\over D-D^{(j+2)}}\\sum_{i=0}^j D^{(ji+1)} V_i"
}
] |
https://en.wikipedia.org/wiki?curid=63983302
|
63987005
|
Topological homomorphism
|
In functional analysis, a topological homomorphism or simply homomorphism (if no confusion will arise) is the analog of homomorphisms for the category of topological vector spaces (TVSs).
This concept is of considerable importance in functional analysis and the famous open mapping theorem gives a sufficient condition for a continuous linear map between Fréchet spaces to be a topological homomorphism.
Definitions.
A topological homomorphism or simply homomorphism (if no confusion will arise) is a continuous linear map formula_0 between topological vector spaces (TVSs) such that the induced map formula_1 is an open mapping when formula_2 which is the image of formula_3 is given the subspace topology induced by formula_4
A TVS embedding or a topological monomorphism is an injective topological homomorphism. Equivalently, a TVS-embedding is a linear map that is also a topological embedding.
Characterizations.
Suppose that formula_0 is a linear map between TVSs and note that formula_5 can be decomposed into the composition of the following canonical linear maps:
formula_6
where formula_7 is the canonical quotient map and formula_8 is the inclusion map.
The following are equivalent:
If in addition the range of formula_5 is a finite-dimensional Hausdorff space then the following are equivalent:
Sufficient conditions.
<templatestyles src="Math_theorem/styles.css" />
Theorem —
Let formula_0 be a surjective continuous linear map from an LF-space formula_15 into a TVS formula_4
If formula_12 is also an LF-space or if formula_12 is a Fréchet space then formula_0 is a topological homomorphism.
<templatestyles src="Math_theorem/styles.css" />
Theorem — Suppose formula_16 is a continuous linear operator between two Hausdorff TVSs. If formula_17 is a dense vector subspace of formula_15 and if the restriction formula_18 of formula_16 to formula_17 is a topological homomorphism, then formula_16 is also a topological homomorphism.
So if formula_19 and formula_20 are Hausdorff completions of formula_15 and formula_21 respectively, and if formula_16 is a topological homomorphism, then formula_22's unique continuous linear extension formula_23 is a topological homomorphism. (However, it is possible for formula_16 to be surjective but for formula_23 to not be injective.)
Open mapping theorem.
The open mapping theorem, also known as Banach's homomorphism theorem, gives a sufficient condition for a continuous linear operator between complete metrizable TVSs to be a topological homomorphism.
<templatestyles src="Math_theorem/styles.css" />
Theorem —
Let formula_0 be a continuous linear map between two complete metrizable TVSs.
If formula_24 which is the range of formula_3 is a dense subset of formula_12 then either formula_25 is meager (that is, of the first category) in formula_12 or else formula_0 is a surjective topological homomorphism.
In particular, formula_0 is a topological homomorphism if and only if formula_25 is a closed subset of formula_4
<templatestyles src="Math_theorem/styles.css" />
Corollary —
Let formula_26 and formula_27 be TVS topologies on a vector space formula_15 such that each topology makes formula_15 into a complete metrizable TVS. If either formula_28 or formula_29 then formula_30
<templatestyles src="Math_theorem/styles.css" />
Corollary —
If formula_15 is a complete metrizable TVS, formula_17 and formula_31 are two closed vector subspaces of formula_10 and if formula_15 is the algebraic direct sum of formula_17 and formula_31 (i.e. the direct sum in the category of vector spaces), then formula_15 is the direct sum of formula_17 and formula_31 in the category of topological vector spaces.
Examples.
Every continuous linear functional on a TVS is a topological homomorphism.
Let formula_15 be a formula_32-dimensional TVS over the field formula_33 and let formula_34 be non-zero. Let formula_35 be defined by formula_36 If formula_33 has its usual Euclidean topology and if formula_15 is Hausdorff then formula_35 is a TVS-isomorphism.
References.
<templatestyles src="Reflist/styles.css" />
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "u : X \\to Y"
},
{
"math_id": 1,
"text": "u : X \\to \\operatorname{Im} u"
},
{
"math_id": 2,
"text": "\\operatorname{Im} u := u(X),"
},
{
"math_id": 3,
"text": "u,"
},
{
"math_id": 4,
"text": "Y."
},
{
"math_id": 5,
"text": "u"
},
{
"math_id": 6,
"text": "X ~\\overset{\\pi}{\\rightarrow}~ X / \\operatorname{ker} u ~\\overset{u_0}{\\rightarrow}~ \\operatorname{Im} u ~\\overset{\\operatorname{In}}{\\rightarrow}~ Y"
},
{
"math_id": 7,
"text": "\\pi : X \\to X / \\operatorname{ker} u"
},
{
"math_id": 8,
"text": "\\operatorname{In} : \\operatorname{Im} u \\to Y"
},
{
"math_id": 9,
"text": "\\mathcal{U}"
},
{
"math_id": 10,
"text": "X,"
},
{
"math_id": 11,
"text": "u\\left( \\mathcal{U} \\right)"
},
{
"math_id": 12,
"text": "Y"
},
{
"math_id": 13,
"text": "u_0 : X / \\operatorname{ker} u \\to \\operatorname{Im} u"
},
{
"math_id": 14,
"text": "u^{-1}(0)"
},
{
"math_id": 15,
"text": "X"
},
{
"math_id": 16,
"text": "f : X \\to Y"
},
{
"math_id": 17,
"text": "M"
},
{
"math_id": 18,
"text": "f\\big\\vert_M : M \\to Y"
},
{
"math_id": 19,
"text": "C"
},
{
"math_id": 20,
"text": "D"
},
{
"math_id": 21,
"text": "Y,"
},
{
"math_id": 22,
"text": "f"
},
{
"math_id": 23,
"text": "F : C \\to D"
},
{
"math_id": 24,
"text": "\\operatorname{Im} u,"
},
{
"math_id": 25,
"text": "\\operatorname{Im} u"
},
{
"math_id": 26,
"text": "\\sigma"
},
{
"math_id": 27,
"text": "\\tau"
},
{
"math_id": 28,
"text": "\\sigma \\subseteq \\tau"
},
{
"math_id": 29,
"text": "\\tau \\subseteq \\sigma"
},
{
"math_id": 30,
"text": "\\sigma = \\tau."
},
{
"math_id": 31,
"text": "N"
},
{
"math_id": 32,
"text": "1"
},
{
"math_id": 33,
"text": "\\mathbb{K}"
},
{
"math_id": 34,
"text": "x \\in X"
},
{
"math_id": 35,
"text": "L : \\mathbb{K} \\to X"
},
{
"math_id": 36,
"text": "L(s) := s x."
}
] |
https://en.wikipedia.org/wiki?curid=63987005
|
63988132
|
Ordered topological vector space
|
In mathematics, specifically in functional analysis and order theory, an ordered topological vector space, also called an ordered TVS, is a topological vector space (TVS) "X" that has a partial order ≤ making it into an ordered vector space whose positive cone formula_0 is a closed subset of "X".
Ordered TVSes have important applications in spectral theory.
Normal cone.
If "C" is a cone in a TVS "X" then "C" is normal if formula_1, where formula_2 is the neighborhood filter at the origin, formula_3, and formula_4 is the "C"-saturated hull of a subset "U" of "X".
If "C" is a cone in a TVS "X" (over the real or complex numbers), then the following are equivalent:
and if "X" is a vector space over the reals then also:
If the topology on "X" is locally convex then the closure of a normal cone is a normal cone.
Properties.
If "C" is a normal cone in "X" and "B" is a bounded subset of "X" then formula_15 is bounded; in particular, every interval formula_16 is bounded.
If "X" is Hausdorff then every normal cone in "X" is a proper cone.
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "C := \\left\\{ x \\in X : x \\geq 0\\right\\}"
},
{
"math_id": 1,
"text": "\\mathcal{U} = \\left[ \\mathcal{U} \\right]_{C}"
},
{
"math_id": 2,
"text": "\\mathcal{U}"
},
{
"math_id": 3,
"text": "\\left[ \\mathcal{U} \\right]_{C} = \\left\\{ \\left[ U \\right] : U \\in \\mathcal{U} \\right\\}"
},
{
"math_id": 4,
"text": "[U]_{C} := \\left(U + C\\right) \\cap \\left(U - C\\right)"
},
{
"math_id": 5,
"text": "\\mathcal{F}"
},
{
"math_id": 6,
"text": "\\lim \\mathcal{F} = 0"
},
{
"math_id": 7,
"text": "\\lim \\left[ \\mathcal{F} \\right]_{C} = 0"
},
{
"math_id": 8,
"text": "\\mathcal{B}"
},
{
"math_id": 9,
"text": "B \\in \\mathcal{B}"
},
{
"math_id": 10,
"text": "\\left[ B \\cap C \\right]_{C} \\subseteq B"
},
{
"math_id": 11,
"text": "\\mathcal{P}"
},
{
"math_id": 12,
"text": "p(x) \\leq p(x + y)"
},
{
"math_id": 13,
"text": "x, y \\in C"
},
{
"math_id": 14,
"text": "p \\in \\mathcal{P}"
},
{
"math_id": 15,
"text": "\\left[ B \\right]_{C}"
},
{
"math_id": 16,
"text": "[a, b]"
},
{
"math_id": 17,
"text": "X^{+}"
}
] |
https://en.wikipedia.org/wiki?curid=63988132
|
63989376
|
Archimedean ordered vector space
|
A binary relation on a vector space
In mathematics, specifically in order theory, a binary relation formula_0 on a vector space formula_1 over the real or complex numbers is called Archimedean if for all formula_2 whenever there exists some formula_3 such that formula_4 for all positive integers formula_5 then necessarily formula_6
An Archimedean (pre)ordered vector space is a (pre)ordered vector space whose order is Archimedean.
A preordered vector space formula_1 is called almost Archimedean if for all formula_2 whenever there exists a formula_3 such that formula_7 for all positive integers formula_5 then formula_8
Characterizations.
A preordered vector space formula_9 with an order unit formula_10 is Archimedean preordered if and only if formula_11 for all non-negative integers formula_12 implies formula_6
Properties.
Let formula_1 be an ordered vector space over the reals that is finite-dimensional. Then the order of formula_1 is Archimedean if and only if the positive cone of formula_1 is closed for the unique topology under which formula_1 is a Hausdorff TVS.
Order unit norm.
Suppose formula_9 is an ordered vector space over the reals with an order unit formula_10 whose order is Archimedean and let formula_13
Then the Minkowski functional formula_14 of formula_15 (defined by formula_16) is a norm called the order unit norm.
It satisfies formula_17 and the closed unit ball determined by formula_14 is equal to formula_18 (that is, formula_19
Examples.
The space formula_20 of bounded real-valued maps on a set formula_21 with the pointwise order is Archimedean ordered with an order unit formula_22 (that is, the function that is identically formula_23 on formula_21).
The order unit norm on formula_20 is identical to the usual sup norm: formula_24
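A one-line verification of this identity, written out from the definition of the Minkowski functional for the order unit formula_22 (a sketch in LaTeX):
<syntaxhighlight lang="latex">
% For u = 1 in l_\infty(S, \mathbb{R}), the set r[-u, u] consists of the maps
% bounded pointwise by r, so the Minkowski functional reduces to the sup norm:
\[
  p_U(f)
  = \inf\{ r > 0 : -r \le f(s) \le r \ \text{for all } s \in S \}
  = \sup_{s \in S} |f(s)| .
\]
</syntaxhighlight>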
Examples.
Every order complete vector lattice is Archimedean ordered.
A finite-dimensional vector lattice of dimension formula_12 is Archimedean ordered if and only if it is isomorphic to formula_25 with its canonical order.
However, a totally ordered vector space of dimension formula_26 cannot be Archimedean ordered.
There exist ordered vector spaces that are almost Archimedean but not Archimedean.
The Euclidean space formula_27 over the reals with the lexicographic order is not Archimedean ordered since formula_28 for every formula_29 but formula_30
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "\\,\\leq\\,"
},
{
"math_id": 1,
"text": "X"
},
{
"math_id": 2,
"text": "x \\in X,"
},
{
"math_id": 3,
"text": "y \\in X"
},
{
"math_id": 4,
"text": "n x \\leq y"
},
{
"math_id": 5,
"text": "n,"
},
{
"math_id": 6,
"text": "x \\leq 0."
},
{
"math_id": 7,
"text": "-n^{-1} y \\leq x \\leq n^{-1} y"
},
{
"math_id": 8,
"text": "x = 0."
},
{
"math_id": 9,
"text": "(X, \\leq)"
},
{
"math_id": 10,
"text": "u"
},
{
"math_id": 11,
"text": "n x \\leq u"
},
{
"math_id": 12,
"text": "n"
},
{
"math_id": 13,
"text": "U = [-u, u]."
},
{
"math_id": 14,
"text": "p_U"
},
{
"math_id": 15,
"text": "U"
},
{
"math_id": 16,
"text": "p_{U}(x) := \\inf\\left\\{ r > 0 : x \\in r [-u, u] \\right\\}"
},
{
"math_id": 17,
"text": "p_U(u) = 1"
},
{
"math_id": 18,
"text": "[-u, u]"
},
{
"math_id": 19,
"text": "[-u, u] = \\{ x\\in X : p_U(x) \\leq 1 \\}."
},
{
"math_id": 20,
"text": "l_{\\infin}(S, \\R)"
},
{
"math_id": 21,
"text": "S"
},
{
"math_id": 22,
"text": "u := 1"
},
{
"math_id": 23,
"text": "1"
},
{
"math_id": 24,
"text": "\\|f\\| := \\sup_{} |f(S)|."
},
{
"math_id": 25,
"text": "\\R^n"
},
{
"math_id": 26,
"text": "\\,> 1"
},
{
"math_id": 27,
"text": "\\R^2"
},
{
"math_id": 28,
"text": "r(0, 1) \\leq (1, 1)"
},
{
"math_id": 29,
"text": "r > 0"
},
{
"math_id": 30,
"text": "(0, 1) \\neq (0, 0)."
}
] |
https://en.wikipedia.org/wiki?curid=63989376
|
63989481
|
Order bound dual
|
Mathematical concept
In mathematics, specifically in order theory and functional analysis, the order bound dual of an ordered vector space formula_0 is the set of all linear functionals on formula_0 that map order intervals, which are sets of the form formula_1 to bounded sets.
The order bound dual of formula_0 is denoted by formula_2 This space plays an important role in the theory of ordered topological vector spaces.
Canonical ordering.
An element formula_11 of the order bound dual of formula_0 is called positive if formula_4 implies formula_5
The positive elements of the order bound dual form a cone that induces an ordering on formula_6 called the <templatestyles src="Template:Visible anchor/styles.css" />canonical ordering.
If formula_0 is an ordered vector space whose positive cone formula_7 is generating (meaning formula_8) then the order bound dual with the canonical ordering is an ordered vector space.
Properties.
The order bound dual of an ordered vector spaces contains its order dual.
If the positive cone of an ordered vector space formula_0 is generating and if for all positive elements formula_9 and y we have formula_10 then the order dual is equal to the order bound dual, which is an order complete vector lattice under its canonical ordering.
Suppose formula_0 is a vector lattice and formula_11 and formula_3 are order bounded linear forms on formula_12
Then for all formula_13
formula_14
formula_15
formula_16
formula_17
References.
<templatestyles src="Reflist/styles.css" />
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "X"
},
{
"math_id": 1,
"text": "[a, b] := \\{ x \\in X : a \\leq x \\text{ and } x \\leq b \\},"
},
{
"math_id": 2,
"text": "X^{\\operatorname{b}}."
},
{
"math_id": 3,
"text": "g"
},
{
"math_id": 4,
"text": "x \\geq 0"
},
{
"math_id": 5,
"text": "\\operatorname{Re}(f(x)) \\geq 0."
},
{
"math_id": 6,
"text": "X^{\\operatorname{b}}"
},
{
"math_id": 7,
"text": "C"
},
{
"math_id": 8,
"text": "X = C - C"
},
{
"math_id": 9,
"text": "x"
},
{
"math_id": 10,
"text": "[0, x] + [0, y] = [0, x + y],"
},
{
"math_id": 11,
"text": "f"
},
{
"math_id": 12,
"text": "X."
},
{
"math_id": 13,
"text": "x \\in X,"
},
{
"math_id": 14,
"text": "\\sup(f, g)(|x|) = \\sup \\{ f(y) + g(z) : y \\geq 0, z \\geq 0, \\text{ and } y + z = |x| \\}"
},
{
"math_id": 15,
"text": "\\inf(f, g)(|x|) = \\inf \\{ f(y) + g(z) : y \\geq 0, z \\geq 0, \\text{ and } y + z = |x| \\}"
},
{
"math_id": 16,
"text": "|f|(|x|) = \\sup \\{ f(y - z) : y \\geq 0, z \\geq 0, \\text{ and } y + z = |x| \\}"
},
{
"math_id": 17,
"text": "|f(x)| \\leq |f|(|x|)"
},
{
"math_id": 18,
"text": "f \\geq 0"
},
{
"math_id": 19,
"text": "g \\geq 0"
},
{
"math_id": 20,
"text": "r > 0,"
},
{
"math_id": 21,
"text": "x = a + b"
},
{
"math_id": 22,
"text": "a \\geq 0, b \\geq 0, \\text{ and } f(a) + g(b) \\leq r."
}
] |
https://en.wikipedia.org/wiki?curid=63989481
|
63989659
|
Solid set
|
In mathematics, specifically in order theory and functional analysis, a subset formula_0 of a vector lattice is said to be solid and is called an ideal if for all formula_1 and formula_2 if formula_3 then formula_4
An ordered vector space whose order is Archimedean is said to be "Archimedean ordered".
If formula_5 then the ideal generated by formula_0 is the smallest ideal in formula_6 containing formula_7
An ideal generated by a singleton set is called a principal ideal in formula_8
Examples.
The intersection of an arbitrary collection of ideals in formula_6 is again an ideal and furthermore, formula_6 is clearly an ideal of itself;
thus every subset of formula_6 is contained in a unique smallest ideal.
In a locally convex vector lattice formula_9 the polar of every solid neighborhood of the origin is a solid subset of the continuous dual space formula_10;
moreover, the family of all solid equicontinuous subsets of formula_10 is a fundamental family of equicontinuous sets, and the polars (in the bidual formula_11) of these sets form a neighborhood base of the origin for the natural topology on formula_11 (that is, the topology of uniform convergence on the equicontinuous subsets of formula_10).
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "S"
},
{
"math_id": 1,
"text": "s \\in S"
},
{
"math_id": 2,
"text": "x \\in X,"
},
{
"math_id": 3,
"text": "|x| \\leq |s|"
},
{
"math_id": 4,
"text": "x \\in S."
},
{
"math_id": 5,
"text": "S\\subseteq X"
},
{
"math_id": 6,
"text": "X"
},
{
"math_id": 7,
"text": "S."
},
{
"math_id": 8,
"text": "X."
},
{
"math_id": 9,
"text": "X,"
},
{
"math_id": 10,
"text": "X^{\\prime}"
},
{
"math_id": 11,
"text": "X^{\\prime\\prime}"
},
{
"math_id": 12,
"text": "N"
},
{
"math_id": 13,
"text": "X/N"
}
] |
https://en.wikipedia.org/wiki?curid=63989659
|
63989777
|
Order complete
|
Property of subsets of ordered vector spaces
In mathematics, specifically in order theory and functional analysis, a subset formula_0 of an ordered vector space is said to be order complete in formula_1 if for every non-empty subset formula_2 of formula_1 that is order bounded in formula_0 (meaning contained in an interval, which is a set of the form formula_3 for some formula_4), the supremum formula_5 and the infimum formula_6 both exist and are elements of formula_7
An ordered vector space is called order complete, Dedekind complete, a complete vector lattice, or a complete Riesz space, if it is order complete as a subset of itself, in which case it is necessarily a vector lattice.
An ordered vector space is said to be countably order complete if each countable subset that is bounded above has a supremum.
Being an order complete vector space is an important property that is used frequently in the theory of topological vector lattices.
Examples.
The order dual of a vector lattice is an order complete vector lattice under its canonical ordering.
If formula_1 is a locally convex topological vector lattice then the strong dual formula_8 is an order complete locally convex topological vector lattice under its canonical order.
Every reflexive locally convex topological vector lattice is order complete and a complete TVS.
Properties.
If formula_1 is an order complete vector lattice then for any subset formula_0 of formula_11 formula_1 is the ordered direct sum of the band generated by formula_0 and of the band formula_10 of all elements that are disjoint from formula_7 For any subset formula_0 of formula_11 the band generated by formula_0 is formula_12 If formula_13 and formula_14 are lattice disjoint then the band generated by formula_15 contains formula_13 and is lattice disjoint from the band generated by formula_16 which contains formula_14.
References.
<templatestyles src="Reflist/styles.css" />
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "A"
},
{
"math_id": 1,
"text": "X"
},
{
"math_id": 2,
"text": "S"
},
{
"math_id": 3,
"text": "[a, b] := \\{ x \\in X : a \\leq x \\text{ and } x \\leq b \\},"
},
{
"math_id": 4,
"text": "a, b \\in A"
},
{
"math_id": 5,
"text": "\\sup S"
},
{
"math_id": 6,
"text": "\\inf S"
},
{
"math_id": 7,
"text": "A."
},
{
"math_id": 8,
"text": "X^{\\prime}_b"
},
{
"math_id": 9,
"text": "S \\subseteq X,"
},
{
"math_id": 10,
"text": "A^{\\perp}"
},
{
"math_id": 11,
"text": "X,"
},
{
"math_id": 12,
"text": "A^{\\perp \\perp}."
},
{
"math_id": 13,
"text": "x"
},
{
"math_id": 14,
"text": "y"
},
{
"math_id": 15,
"text": "\\{x\\},"
},
{
"math_id": 16,
"text": "\\{y\\},"
},
{
"math_id": 17,
"text": "x."
}
] |
https://en.wikipedia.org/wiki?curid=63989777
|
63989892
|
Order topology (functional analysis)
|
Topology of an ordered vector space
In mathematics, specifically in order theory and functional analysis, the order topology of an ordered vector space formula_0 is the finest locally convex topological vector space (TVS) topology on formula_1 for which every order interval is bounded, where an order interval in formula_1 is a set of the form formula_2 where formula_3 and formula_4 belong to formula_5
The order topology is an important topology that is used frequently in the theory of ordered topological vector spaces because the topology stems directly from the algebraic and order theoretic properties of formula_6 rather than from some topology that formula_1 starts out having.
This allows for establishing intimate connections between this topology and the algebraic and order theoretic properties of formula_7
For many ordered topological vector spaces that occur in analysis, their topologies are identical to the order topology.
Definitions.
The family of all locally convex topologies on formula_1 for which every order interval is bounded is non-empty (since it contains the coarsest possible topology on formula_1) and the order topology is the upper bound of this family.
A subset of formula_1 is a neighborhood of the origin in the order topology if and only if it is convex and absorbs every order interval in formula_5
A neighborhood of the origin in the order topology is necessarily an absorbing set because formula_8 for all formula_9
For every formula_10 let formula_11 and endow formula_12 with its order topology (which makes it into a normable space).
The set of all formula_12's is directed under inclusion and if formula_13 then the natural inclusion of formula_12 into formula_14 is continuous.
If formula_1 is a regularly ordered vector space over the reals and if formula_15 is any subset of the positive cone formula_16 of formula_1 that is cofinal in formula_16 (e.g. formula_15 could be formula_16), then formula_1 with its order topology is the inductive limit of formula_17 (where the bonding maps are the natural inclusions).
The lattice structure can compensate in part for any lack of an order unit:
<templatestyles src="Math_theorem/styles.css" />
Theorem —
Let formula_1 be a vector lattice with a regular order and let formula_16 denote its positive cone. Then the order topology on formula_1 is the finest locally convex topology on formula_1 for which formula_16 is a normal cone; it is also the same as the Mackey topology induced on formula_1 with respect to the duality formula_18
In particular, if formula_19 is an ordered Fréchet lattice over the real numbers then formula_20 is the order topology on formula_1 if and only if the positive cone of formula_1 is a normal cone in formula_21
If formula_1 is a regularly ordered vector lattice then the order topology is the finest locally convex TVS topology on formula_1 making formula_1 into a locally convex vector lattice. If in addition formula_1 is order complete then formula_1 with the order topology is a barreled space and every band decomposition of formula_1 is a topological direct sum for this topology.
In particular, if the order of a vector lattice formula_1 is regular then the order topology is generated by the family of all lattice seminorms on formula_5
Properties.
Throughout, formula_0 will be an ordered vector space and formula_22 will denote the order topology on formula_5
Relation to subspaces, quotients, and products.
If formula_31 is a solid vector subspace of a vector lattice formula_32 then the order topology of formula_33 is the quotient of the order topology on formula_5
Examples.
The order topology of a finite product of ordered vector spaces (this product having its canonical order) is identical to the product topology of the topological product of the constituent ordered vector spaces (when each is given its order topology).
References.
<templatestyles src="Reflist/styles.css" />
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "(X, \\leq)"
},
{
"math_id": 1,
"text": "X"
},
{
"math_id": 2,
"text": "[a, b] := \\left\\{ z \\in X : a \\leq z \\text{ and } z \\leq b \\right\\}"
},
{
"math_id": 3,
"text": "a"
},
{
"math_id": 4,
"text": "b"
},
{
"math_id": 5,
"text": "X."
},
{
"math_id": 6,
"text": "(X, \\leq),"
},
{
"math_id": 7,
"text": "(X, \\leq)."
},
{
"math_id": 8,
"text": "[x, x] := \\{ x \\}"
},
{
"math_id": 9,
"text": "x \\in X."
},
{
"math_id": 10,
"text": "a \\geq 0,"
},
{
"math_id": 11,
"text": "X_a = \\bigcup_{n=1}^{\\infty} n [-a, a]"
},
{
"math_id": 12,
"text": "X_a"
},
{
"math_id": 13,
"text": "X_a \\subseteq X_b"
},
{
"math_id": 14,
"text": "X_b"
},
{
"math_id": 15,
"text": "H"
},
{
"math_id": 16,
"text": "C"
},
{
"math_id": 17,
"text": "\\left\\{ X_a : a \\geq 0 \\right\\}"
},
{
"math_id": 18,
"text": "\\left\\langle X, X^{+} \\right\\rangle."
},
{
"math_id": 19,
"text": "(X, \\tau)"
},
{
"math_id": 20,
"text": "\\tau"
},
{
"math_id": 21,
"text": "(X, \\tau)."
},
{
"math_id": 22,
"text": "\\tau_{\\leq}"
},
{
"math_id": 23,
"text": "\\left(X, \\tau_{\\leq}\\right)"
},
{
"math_id": 24,
"text": "\\ell^1"
},
{
"math_id": 25,
"text": "x \\geq 0"
},
{
"math_id": 26,
"text": "y \\geq 0"
},
{
"math_id": 27,
"text": "[0, x] + [0, y] = [0, x + y]"
},
{
"math_id": 28,
"text": "\\,\\leq\\,"
},
{
"math_id": 29,
"text": "X_b = X^+."
},
{
"math_id": 30,
"text": "Y"
},
{
"math_id": 31,
"text": "M"
},
{
"math_id": 32,
"text": "X,"
},
{
"math_id": 33,
"text": "X / M"
}
] |
https://en.wikipedia.org/wiki?curid=63989892
|
63989962
|
Topological vector lattice
|
In mathematics, specifically in functional analysis and order theory, a topological vector lattice is a Hausdorff topological vector space (TVS) formula_0 that has a partial order formula_1 making it into a vector lattice that possesses a neighborhood base at the origin consisting of solid sets.
Ordered vector lattices have important applications in spectral theory.
Definition.
If formula_0 is a vector lattice then by the vector lattice operations we mean the following maps: the three maps formula_2, formula_3, and formula_4 of formula_0 into itself, and the two maps formula_6 and formula_7 of formula_5 into formula_0.
If formula_0 is a TVS over the reals and a vector lattice, then formula_0 is locally solid if and only if (1) its positive cone is a normal cone, and (2) the vector lattice operations are continuous.
If formula_0 is a vector lattice and an ordered topological vector space that is a Fréchet space in which the positive cone is a normal cone, then the lattice operations are continuous.
If formula_0 is a topological vector space (TVS) and an ordered vector space then formula_0 is called locally solid if formula_0 possesses a neighborhood base at the origin consisting of solid sets.
A topological vector lattice is a Hausdorff TVS formula_0 that has a partial order formula_1 making it into vector lattice that is locally solid.
Properties.
Every topological vector lattice has a closed positive cone and is thus an ordered topological vector space.
Let formula_8 denote the set of all bounded subsets of a topological vector lattice with positive cone formula_9 and for any subset formula_10, let formula_11 be the formula_9-saturated hull of formula_10.
Then the topological vector lattice's positive cone formula_9 is a strict formula_8-cone, meaning that formula_12 is a fundamental subfamily of formula_8 (that is, every formula_13 is contained as a subset of some element of formula_12).
If a topological vector lattice formula_0 is order complete then every band is closed in formula_0.
Examples.
The Lp spaces (formula_14) are Banach lattices under their canonical orderings.
These spaces are order complete for formula_15.
References.
<templatestyles src="Reflist/styles.css" />
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "X"
},
{
"math_id": 1,
"text": "\\,\\leq\\,"
},
{
"math_id": 2,
"text": "x \\mapsto|x |"
},
{
"math_id": 3,
"text": "x \\mapsto x^+"
},
{
"math_id": 4,
"text": "x \\mapsto x^{-}"
},
{
"math_id": 5,
"text": "X \\times X"
},
{
"math_id": 6,
"text": "(x, y) \\mapsto \\sup_{} \\{ x, y \\}"
},
{
"math_id": 7,
"text": "(x, y) \\mapsto \\inf_{} \\{ x, y \\}"
},
{
"math_id": 8,
"text": "\\mathcal{B}"
},
{
"math_id": 9,
"text": "C"
},
{
"math_id": 10,
"text": "S"
},
{
"math_id": 11,
"text": "[S]_C := (S + C) \\cap (S - C)"
},
{
"math_id": 12,
"text": "\\left\\{ [B]_C : B \\in \\mathcal{B} \\right\\}"
},
{
"math_id": 13,
"text": "B \\in \\mathcal{B}"
},
{
"math_id": 14,
"text": "1 \\leq p \\leq \\infty"
},
{
"math_id": 15,
"text": "p < \\infty"
}
] |
https://en.wikipedia.org/wiki?curid=63989962
|
63990033
|
Weak order unit
|
In mathematics, specifically in order theory and functional analysis, an element formula_0 of a vector lattice formula_1 is called a weak order unit in formula_1 if formula_2 and also for all formula_3 formula_4
Citations.
<templatestyles src="Reflist/styles.css" />
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "x"
},
{
"math_id": 1,
"text": "X"
},
{
"math_id": 2,
"text": "x \\geq 0"
},
{
"math_id": 3,
"text": "y \\in X,"
},
{
"math_id": 4,
"text": "\\inf \\{ x, |y| \\} = 0 \\text{ implies } y = 0."
},
{
"math_id": 5,
"text": "X."
}
] |
https://en.wikipedia.org/wiki?curid=63990033
|
63990101
|
Quasi-interior point
|
In mathematics, specifically in order theory and functional analysis, an element formula_0 of an ordered topological vector space formula_1 is called a quasi-interior point of the positive cone formula_2 of formula_1 if formula_3 and if the order interval formula_4 is a total subset of formula_1; that is, if the linear span of formula_5 is a dense subset of formula_6
Properties.
If formula_1 is a separable metrizable locally convex ordered topological vector space whose positive cone formula_2 is a complete and total subset of formula_7 then the set of quasi-interior points of formula_2 is dense in formula_8
Examples.
If formula_9 then a point in formula_10 is quasi-interior to the positive cone formula_2 if and only if it is a weak order unit, which happens if and only if the element (recall that it is an equivalence class of functions) contains a function that is formula_11 almost everywhere (with respect to formula_12).
A point in formula_13 is quasi-interior to the positive cone formula_2 if and only if it is interior to formula_8
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "x"
},
{
"math_id": 1,
"text": "X"
},
{
"math_id": 2,
"text": "C"
},
{
"math_id": 3,
"text": "x \\geq 0"
},
{
"math_id": 4,
"text": "[0, x] := \\{ z \\in Z : 0 \\leq z \\text{ and } z \\leq x \\}"
},
{
"math_id": 5,
"text": "[0, x]"
},
{
"math_id": 6,
"text": "X."
},
{
"math_id": 7,
"text": "X,"
},
{
"math_id": 8,
"text": "C."
},
{
"math_id": 9,
"text": "1 \\leq p < \\infty"
},
{
"math_id": 10,
"text": "L^p(\\mu)"
},
{
"math_id": 11,
"text": ">\\, 0"
},
{
"math_id": 12,
"text": "\\mu"
},
{
"math_id": 13,
"text": "L^\\infty(\\mu)"
}
] |
https://en.wikipedia.org/wiki?curid=63990101
|
63990150
|
Abstract L-space
|
In mathematics, specifically in order theory and functional analysis, an abstract "L"-space, an AL-space, or an abstract Lebesgue space is a Banach lattice formula_0 whose norm is additive on the positive cone of "X".
In probability theory, it means the standard probability space.
Examples.
The strong dual of an AM-space with unit is an AL-space.
Properties.
The name abstract "L"-space comes from the fact that every AL-space is isomorphic (as a Banach lattice) with some subspace of formula_1
Every AL-space "X" is an order complete vector lattice of minimal type;
however, the order dual of "X", denoted by "X"+, is "not" of minimal type unless "X" is finite-dimensional.
Each order interval in an AL-space is weakly compact.
The strong dual of an AL-space is an AM-space with unit.
The continuous dual space formula_2 (which is equal to "X"+) of an AL-space "X" is a Banach lattice that can be identified with formula_3, where "K" is a compact extremally disconnected topological space;
furthermore, under the evaluation map, "X" is isomorphic with the band of all real Radon measures 𝜇 on "K" such that for every majorized and directed subset "S" of formula_4 we have formula_5
References.
<templatestyles src="Reflist/styles.css" />
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "(X, \\| \\cdot \\|)"
},
{
"math_id": 1,
"text": "L^1(\\mu)."
},
{
"math_id": 2,
"text": "X^{\\prime}"
},
{
"math_id": 3,
"text": "C_{\\R} ( K )"
},
{
"math_id": 4,
"text": "C_{\\R} ( K ),"
},
{
"math_id": 5,
"text": "\\lim_{f \\in S} \\mu ( f ) = \\mu ( \\sup S )."
}
] |
https://en.wikipedia.org/wiki?curid=63990150
|
63990225
|
Abstract m-space
|
Concept in order theory
In mathematics, specifically in order theory and functional analysis, an abstract "m"-space or an AM-space is a Banach lattice formula_0 whose norm satisfies formula_1 for all "x" and "y" in the positive cone of "X".
We say that an AM-space "X" is an AM-space with unit if in addition there exists some "u" ≥ 0 in "X" such that the interval [−"u", "u"] := { "z" ∈ "X" : −"u" ≤ "z" and "z" ≤ "u" } is equal to the unit ball of "X";
such an element "u" is unique and an order unit of "X".
Examples.
The strong dual of an AL-space is an AM-space with unit.
If "X" is an Archimedean ordered vector lattice, "u" is an order unit of "X", and "p""u" is the Minkowski functional of formula_2 then the complete of the semi-normed space ("X", "p""u") is an AM-space with unit "u".
Properties.
Every AM-space is isomorphic (as a Banach lattice) with some closed vector sublattice of some suitable formula_3.
The strong dual of an AM-space with unit is an AL-space.
If "X" ≠ { 0 } is an AM-space with unit then the set "K" of all extreme points of the positive face of the dual unit ball is a non-empty and weakly compact (i.e. formula_4-compact) subset of formula_5 and furthermore, the evaluation map formula_6 defined by formula_7 (where formula_8 is defined by formula_9) is an isomorphism.
References.
<templatestyles src="Reflist/styles.css" />
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "(X, \\| \\cdot \\|)"
},
{
"math_id": 1,
"text": "\\left\\| \\sup \\{ x, y \\} \\right\\| = \\sup \\left\\{ \\| x \\|, \\| y \\| \\right\\}"
},
{
"math_id": 2,
"text": "[u, -u] := \\{ x \\in X : -u \\leq x \\text{ and } x \\leq x \\},"
},
{
"math_id": 3,
"text": "C_{\\R}\\left( X \\right)"
},
{
"math_id": 4,
"text": "\\sigma\\left( X^{\\prime}, X \\right)"
},
{
"math_id": 5,
"text": "X^{\\prime}"
},
{
"math_id": 6,
"text": "I : X \\to C_{\\R} \\left( K \\right)"
},
{
"math_id": 7,
"text": "I(x) := I_x"
},
{
"math_id": 8,
"text": "I_x : K \\to \\R"
},
{
"math_id": 9,
"text": "I_x(t) = \\langle x, t \\rangle"
}
] |
https://en.wikipedia.org/wiki?curid=63990225
|
63990309
|
Regularly ordered
|
In mathematics, specifically in order theory and functional analysis, an ordered vector space formula_0 is said to be regularly ordered and its order is called regular if formula_0 is Archimedean ordered and the order dual of formula_0 distinguishes points in formula_0.
Being a regularly ordered vector space is an important property in the theory of topological vector lattices.
Examples.
Every ordered locally convex space is regularly ordered.
The canonical orderings of subspaces, products, and direct sums of regularly ordered vector spaces are again regularly ordered.
Properties.
If formula_0 is a regularly ordered vector lattice then the order topology on formula_0 is the finest topology on formula_0 making formula_0 into a locally convex topological vector lattice.
References.
<templatestyles src="Reflist/styles.css" />
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "X"
}
] |
https://en.wikipedia.org/wiki?curid=63990309
|
63990620
|
Locally convex vector lattice
|
In mathematics, specifically in order theory and functional analysis, a locally convex vector lattice (LCVL) is a topological vector lattice that is also a locally convex space.
LCVLs are important in the theory of topological vector lattices.
Lattice semi-norms.
The Minkowski functional of a convex, absorbing, and solid set is called a lattice semi-norm.
Equivalently, it is a semi-norm formula_0 such that formula_1 implies formula_2
The topology of a locally convex vector lattice is generated by the family of all continuous lattice semi-norms.
Properties.
Every locally convex vector lattice possesses a neighborhood base at the origin consisting of convex balanced solid absorbing sets.
The strong dual of a locally convex vector lattice formula_3 is an order complete locally convex vector lattice (under its canonical order) and it is a solid subspace of the order dual of formula_3;
moreover, if formula_3 is a barreled space then the continuous dual space of formula_3 is a band in the order dual of formula_3 and the strong dual of formula_3 is a complete locally convex TVS.
If a locally convex vector lattice is barreled then its strong dual space is complete (this is not necessarily true if the space is merely a locally convex barreled space but not a locally convex vector lattice).
If a locally convex vector lattice formula_3 is semi-reflexive then it is order complete and formula_4 (that is, formula_5) is a complete TVS;
moreover, if in addition every positive linear functional on formula_3 is continuous then formula_3 is of minimal type, the order topology formula_6 on formula_3 is equal to the Mackey topology formula_7 and formula_8 is reflexive.
Every reflexive locally convex vector lattice is order complete and a complete locally convex TVS whose strong dual is a barreled reflexive locally convex TVS that can be identified under the canonical evaluation map with the strong bidual (that is, the strong dual of the strong dual).
If a locally convex vector lattice formula_3 is an infrabarreled TVS then it can be identified under the evaluation map with a topological vector sublattice of its strong bidual, which is an order complete locally convex vector lattice under its canonical order.
If formula_3 is a separable metrizable locally convex ordered topological vector space whose positive cone formula_9 is a complete and total subset of formula_10 then the set of quasi-interior points of formula_9 is dense in formula_11
<templatestyles src="Math_theorem/styles.css" />
Theorem —
Suppose that formula_3 is an order complete locally convex vector lattice with topology formula_12 and endow the bidual formula_13 of formula_3 with its natural topology (that is, the topology of uniform convergence on equicontinuous subsets of formula_14) and canonical order (under which it becomes an order complete locally convex vector lattice). The following are equivalent:
<templatestyles src="Math_theorem/styles.css" />
Corollary —
Let formula_3 be an order complete vector lattice with a regular order. The following are equivalent:
Moreover, if formula_3 is of minimal type then the order topology on formula_3 is the finest locally convex topology on formula_3 for which every order convergent filter converges.
If formula_15 is a locally convex vector lattice that is bornological and sequentially complete, then there exists a family of compact spaces formula_16 and a family of formula_17-indexed vector lattice embeddings formula_18 such that formula_12 is the finest locally convex topology on formula_3 making each formula_19 continuous.
Examples.
Every Banach lattice, normed lattice, and Fréchet lattice is a locally convex vector lattice.
References.
<templatestyles src="Reflist/styles.css" />
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "p"
},
{
"math_id": 1,
"text": "|y| \\leq |x|"
},
{
"math_id": 2,
"text": "p(y) \\leq p(x)."
},
{
"math_id": 3,
"text": "X"
},
{
"math_id": 4,
"text": "X_b"
},
{
"math_id": 5,
"text": "\\left( X, b\\left(X, X^{\\prime}\\right) \\right)"
},
{
"math_id": 6,
"text": "\\tau_{\\operatorname{O}}"
},
{
"math_id": 7,
"text": "\\tau\\left(X, X^{\\prime}\\right),"
},
{
"math_id": 8,
"text": "\\left(X, \\tau_{\\operatorname{O}}\\right)"
},
{
"math_id": 9,
"text": "C"
},
{
"math_id": 10,
"text": "X,"
},
{
"math_id": 11,
"text": "C."
},
{
"math_id": 12,
"text": "\\tau"
},
{
"math_id": 13,
"text": "X^{\\prime\\prime}"
},
{
"math_id": 14,
"text": "X^{\\prime}"
},
{
"math_id": 15,
"text": "(X, \\tau)"
},
{
"math_id": 16,
"text": "\\left(X_{\\alpha}\\right)_{\\alpha \\in A}"
},
{
"math_id": 17,
"text": "A"
},
{
"math_id": 18,
"text": "f_{\\alpha} : C_{\\R}\\left(K_{\\alpha}\\right) \\to X"
},
{
"math_id": 19,
"text": "f_{\\alpha}"
}
] |
https://en.wikipedia.org/wiki?curid=63990620
|
63990850
|
Cone-saturated
|
In mathematics, specifically in order theory and functional analysis, if formula_0 is a cone at 0 in a vector space formula_1 such that formula_2 then a subset formula_3 is said to be formula_0-saturated if formula_4 where formula_5
Given a subset formula_6 the formula_0-saturated hull of formula_7 is the smallest formula_0-saturated subset of formula_1 that contains formula_8
If formula_9 is a collection of subsets of formula_1 then formula_10
If formula_11 is a collection of subsets of formula_1 and if formula_9 is a subset of formula_11 then formula_9 is a fundamental subfamily of formula_11 if every formula_12 is contained as a subset of some element of formula_13
If formula_14 is a family of subsets of a TVS formula_1 then a cone formula_0 in formula_1 is called a formula_14-cone if formula_15 is a fundamental subfamily of formula_14 and formula_0 is a strict formula_14-cone if formula_16 is a fundamental subfamily of formula_17
formula_0-saturated sets play an important role in the theory of ordered topological vector spaces and topological vector lattices.
Properties.
If formula_1 is an ordered vector space with positive cone formula_0 then formula_18
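As a small numerical illustration of this identity (an example of ours, not from the source, with hypothetical names), take the plane with the componentwise order, so that the positive cone is the closed positive quadrant, and compare the two descriptions of the saturated hull of a finite set on a grid of integer points:
import numpy as np
# X = R^2 with the componentwise order; the positive cone C is the closed
# positive quadrant, and S is a small finite set of points.
S = [np.array([0, 0]), np.array([2, 1])]
grid = [np.array([i, j]) for i in range(-1, 4) for j in range(-1, 4)]
def leq(x, y):
    # x <= y in the componentwise order, i.e. y - x lies in C
    return bool(np.all(x <= y))
def in_saturated_hull(p):
    # p belongs to (S + C) and to (S - C)
    return any(leq(s, p) for s in S) and any(leq(p, s) for s in S)
def in_union_of_order_intervals(p):
    # p belongs to some order interval [x, y] with x and y in S
    return any(leq(x, p) and leq(p, y) for x in S for y in S)
# The two descriptions agree at every grid point, as the identity asserts:
assert all(in_saturated_hull(p) == in_union_of_order_intervals(p) for p in grid)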
The map formula_19 is increasing; that is, if formula_20 then formula_21
If formula_7 is convex then so is formula_22 When formula_1 is considered as a vector space over formula_23 if formula_7 is balanced then so is formula_22
If formula_9 is a filter base (resp. a filter) in formula_1 then the same is true of formula_24
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "C"
},
{
"math_id": 1,
"text": "X"
},
{
"math_id": 2,
"text": "0 \\in C,"
},
{
"math_id": 3,
"text": "S \\subseteq X"
},
{
"math_id": 4,
"text": "S = [S]_C,"
},
{
"math_id": 5,
"text": "[S]_C := (S + C) \\cap (S - C)."
},
{
"math_id": 6,
"text": "S \\subseteq X,"
},
{
"math_id": 7,
"text": "S"
},
{
"math_id": 8,
"text": "S."
},
{
"math_id": 9,
"text": "\\mathcal{F}"
},
{
"math_id": 10,
"text": "\\left[ \\mathcal{F} \\right]_C := \\left\\{ [F]_C : F \\in \\mathcal{F} \\right\\}."
},
{
"math_id": 11,
"text": "\\mathcal{T}"
},
{
"math_id": 12,
"text": "T \\in \\mathcal{T}"
},
{
"math_id": 13,
"text": "\\mathcal{F}."
},
{
"math_id": 14,
"text": "\\mathcal{G}"
},
{
"math_id": 15,
"text": "\\left\\{ \\overline{[G]_C} : G \\in \\mathcal{G} \\right\\}"
},
{
"math_id": 16,
"text": "\\left\\{ [B]_C : B \\in \\mathcal{B} \\right\\}"
},
{
"math_id": 17,
"text": "\\mathcal{B}."
},
{
"math_id": 18,
"text": "[S]_C = \\bigcup \\left\\{ [x, y] : x, y \\in S \\right\\}."
},
{
"math_id": 19,
"text": "S \\mapsto [S]_C"
},
{
"math_id": 20,
"text": "R \\subseteq S"
},
{
"math_id": 21,
"text": "[R]_C \\subseteq [S]_C."
},
{
"math_id": 22,
"text": "[S]_C."
},
{
"math_id": 23,
"text": "\\R,"
},
{
"math_id": 24,
"text": "\\left[ \\mathcal{F} \\right]_C := \\left\\{ [ F ]_C : F \\in \\mathcal{F} \\right\\}."
}
] |
https://en.wikipedia.org/wiki?curid=63990850
|
63990912
|
Normal cone (functional analysis)
|
In mathematics, specifically in order theory and functional analysis, if formula_0 is a cone at the origin in a topological vector space formula_1 such that formula_2 and if formula_3 is the neighborhood filter at the origin, then formula_0 is called normal if formula_4 where formula_5 and where for any subset formula_6 formula_7 is the formula_0-saturated hull of formula_8
Normal cones play an important role in the theory of ordered topological vector spaces and topological vector lattices.
Characterizations.
If formula_0 is a cone in a TVS formula_1 then for any subset formula_9 let formula_10 be the formula_0-saturated hull of formula_9 and for any collection formula_11 of subsets of formula_1 let formula_12
If formula_0 is a cone in a TVS formula_1 then formula_0 is normal if formula_4 where formula_3 is the neighborhood filter at the origin.
If formula_13 is a collection of subsets of formula_1 and if formula_14 is a subset of formula_13 then formula_14 is a fundamental subfamily of formula_13 if every formula_15 is contained as a subset of some element of formula_16
If formula_17 is a family of subsets of a TVS formula_1 then a cone formula_0 in formula_1 is called a formula_17-cone if formula_18 is a fundamental subfamily of formula_17 and formula_0 is a strict formula_17-cone if formula_19 is a fundamental subfamily of formula_20
Let formula_21 denote the family of all bounded subsets of formula_22
If formula_0 is a cone in a TVS formula_1 (over the real or complex numbers), then the following are equivalent:
and if formula_1 is an ordered locally convex TVS over the reals whose positive cone is formula_27 then we may add to this list:
If formula_1 is a locally convex TVS, formula_0 is a cone in formula_1 with dual cone formula_28 and formula_17 is a saturated family of weakly bounded subsets of formula_29 then
If formula_1 is a Banach space, formula_0 is a closed cone in formula_23, and formula_24 is the family of all bounded subsets of formula_31 then the dual cone formula_25 is normal in formula_31 if and only if formula_0 is a strict formula_21-cone.
If formula_1 is a Banach space and formula_0 is a cone in formula_1 then the following are equivalent:
Ordered topological vector spaces.
Suppose formula_34 is an ordered topological vector space. That is, formula_34 is a topological vector space, and we define formula_35 whenever formula_36 lies in the cone formula_37. The following statements are equivalent:
Sufficient conditions.
If the topology on formula_1 is locally convex then the closure of a normal cone is a normal cone.
Suppose that formula_56 is a family of locally convex TVSs and that formula_57 is a cone in formula_58
If formula_59 is the locally convex direct sum then the cone formula_60 is a normal cone in formula_1 if and only if each formula_57 is normal in formula_58
If formula_1 is a locally convex space then the closure of a normal cone is a normal cone.
If formula_0 is a cone in a locally convex TVS formula_1 and if formula_25 is the dual cone of formula_27 then formula_62 if and only if formula_0 is weakly normal.
Every normal cone in a locally convex TVS is weakly normal.
In a normed space, a cone is normal if and only if it is weakly normal.
If formula_1 and formula_46 are ordered locally convex TVSs and if formula_17 is a family of bounded subsets of formula_23 then if the positive cone of formula_1 is a formula_17-cone in formula_1 and if the positive cone of formula_46 is a normal cone in formula_46 then the positive cone of formula_55 is a normal cone for the formula_17-topology on formula_63
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "C"
},
{
"math_id": 1,
"text": "X"
},
{
"math_id": 2,
"text": "0 \\in C"
},
{
"math_id": 3,
"text": "\\mathcal{U}"
},
{
"math_id": 4,
"text": "\\mathcal{U} = \\left[ \\mathcal{U} \\right]_C,"
},
{
"math_id": 5,
"text": "\\left[ \\mathcal{U} \\right]_C := \\left\\{ [ U ]_C : U \\in \\mathcal{U} \\right\\}"
},
{
"math_id": 6,
"text": "S \\subseteq X,"
},
{
"math_id": 7,
"text": "[S]_C := (S + C) \\cap (S - C)"
},
{
"math_id": 8,
"text": "S."
},
{
"math_id": 9,
"text": "S \\subseteq X"
},
{
"math_id": 10,
"text": "[S]_C := \\left(S + C\\right) \\cap \\left(S - C\\right)"
},
{
"math_id": 11,
"text": "\\mathcal{S}"
},
{
"math_id": 12,
"text": "\\left[ \\mathcal{S} \\right]_C := \\left\\{ \\left[ S \\right]_C : S \\in \\mathcal{S} \\right\\}."
},
{
"math_id": 13,
"text": "\\mathcal{T}"
},
{
"math_id": 14,
"text": "\\mathcal{F}"
},
{
"math_id": 15,
"text": "T \\in \\mathcal{T}"
},
{
"math_id": 16,
"text": "\\mathcal{F}."
},
{
"math_id": 17,
"text": "\\mathcal{G}"
},
{
"math_id": 18,
"text": "\\left\\{ \\overline{\\left[ G \\right]_C} : G \\in \\mathcal{G} \\right\\}"
},
{
"math_id": 19,
"text": "\\left\\{ \\left[ G \\right]_C : G \\in \\mathcal{G} \\right\\}"
},
{
"math_id": 20,
"text": "\\mathcal{G}."
},
{
"math_id": 21,
"text": "\\mathcal{B}"
},
{
"math_id": 22,
"text": "X."
},
{
"math_id": 23,
"text": "X,"
},
{
"math_id": 24,
"text": "\\mathcal{B}^{\\prime}"
},
{
"math_id": 25,
"text": "C^{\\prime}"
},
{
"math_id": 26,
"text": "X^{\\prime}."
},
{
"math_id": 27,
"text": "C,"
},
{
"math_id": 28,
"text": "C^{\\prime} \\subseteq X^{\\prime},"
},
{
"math_id": 29,
"text": "X^{\\prime},"
},
{
"math_id": 30,
"text": "\\left\\langle X, X^{\\prime}\\right\\rangle"
},
{
"math_id": 31,
"text": "X^{\\prime}_b"
},
{
"math_id": 32,
"text": "X = \\overline{C} - \\overline{C}"
},
{
"math_id": 33,
"text": "\\overline{C}"
},
{
"math_id": 34,
"text": "L"
},
{
"math_id": 35,
"text": "x \\geq y"
},
{
"math_id": 36,
"text": "x - y"
},
{
"math_id": 37,
"text": "L_+"
},
{
"math_id": 38,
"text": "c > 0"
},
{
"math_id": 39,
"text": "a \\leq x \\leq b"
},
{
"math_id": 40,
"text": "\\lVert x \\rVert \\leq c \\max\\{\\lVert a \\rVert, \\lVert b \\rVert\\}"
},
{
"math_id": 41,
"text": "[U] = (U + L_+) \\cap (U - L_+)"
},
{
"math_id": 42,
"text": "U"
},
{
"math_id": 43,
"text": "0 \\leq x \\leq y"
},
{
"math_id": 44,
"text": "\\lVert x \\rVert \\leq c \\lVert y \\rVert"
},
{
"math_id": 45,
"text": "X^{\\prime} = C^{\\prime} - C^{\\prime}."
},
{
"math_id": 46,
"text": "Y"
},
{
"math_id": 47,
"text": "D."
},
{
"math_id": 48,
"text": "Y = D - D"
},
{
"math_id": 49,
"text": "H - H"
},
{
"math_id": 50,
"text": "L_s(X; Y)"
},
{
"math_id": 51,
"text": "H"
},
{
"math_id": 52,
"text": "L(X; Y)"
},
{
"math_id": 53,
"text": "L_{s}(X; Y)"
},
{
"math_id": 54,
"text": "L_{\\mathcal{G}}(X; Y),"
},
{
"math_id": 55,
"text": "L_{\\mathcal{G}}(X; Y)"
},
{
"math_id": 56,
"text": "\\left\\{ X_{\\alpha} : \\alpha \\in A \\right\\}"
},
{
"math_id": 57,
"text": "C_\\alpha"
},
{
"math_id": 58,
"text": "X_{\\alpha}."
},
{
"math_id": 59,
"text": "X := \\bigoplus_{\\alpha} X_{\\alpha}"
},
{
"math_id": 60,
"text": "C := \\bigoplus_{\\alpha} C_\\alpha"
},
{
"math_id": 61,
"text": "X_{\\alpha}"
},
{
"math_id": 62,
"text": "X^{\\prime} = C^{\\prime} - C^{\\prime}"
},
{
"math_id": 63,
"text": "L(X; Y)."
}
] |
https://en.wikipedia.org/wiki?curid=63990912
|
63991442
|
Band (order theory)
|
In mathematics, specifically in order theory and functional analysis, a band in a vector lattice formula_0 is a subspace formula_1 of formula_0 that is solid and such that for all formula_2 such that formula_3 exists in formula_4 we have formula_5
The smallest band containing a subset formula_6 of formula_0 is called the band generated by formula_6 in formula_7
A band generated by a singleton set is called a principal band.
Examples.
For any subset formula_6 of a vector lattice formula_4 the set formula_8 of all elements of formula_0 disjoint from formula_6 is a band in formula_7
If formula_9 (formula_10) is the usual space of real-valued functions used to define Lp spaces formula_11 then formula_9 is countably order complete (that is, each countable subset that is bounded above has a supremum) but in general is not order complete.
If formula_12 is the vector subspace of all formula_13-null functions then formula_12 is a solid subset of formula_9 that is not a band.
Properties.
The intersection of an arbitrary family of bands in a vector lattice formula_0 is a band in formula_7
References.
<templatestyles src="Reflist/styles.css" />
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "X"
},
{
"math_id": 1,
"text": "M"
},
{
"math_id": 2,
"text": "S \\subseteq M"
},
{
"math_id": 3,
"text": "x = \\sup S"
},
{
"math_id": 4,
"text": "X,"
},
{
"math_id": 5,
"text": "x \\in M."
},
{
"math_id": 6,
"text": "S"
},
{
"math_id": 7,
"text": "X."
},
{
"math_id": 8,
"text": "S^{\\perp}"
},
{
"math_id": 9,
"text": "\\mathcal{L}^p(\\mu)"
},
{
"math_id": 10,
"text": "1 \\leq p \\leq \\infty"
},
{
"math_id": 11,
"text": "L^p,"
},
{
"math_id": 12,
"text": "N"
},
{
"math_id": 13,
"text": "\\mu"
}
] |
https://en.wikipedia.org/wiki?curid=63991442
|
64000435
|
Cerium(IV) fluoride
|
<templatestyles src="Chembox/styles.css"/>
Chemical compound
Cerium(IV) fluoride is an inorganic compound with a chemical formula CeF4. It is a strong oxidant that appears as a white crystalline material. Cerium(IV) fluoride has an anhydrous form and a monohydrate form.
Production and properties.
Cerium(IV) fluoride can be produced by fluorinating cerium(III) fluoride or cerium dioxide with fluorine gas at 500 °C:
formula_0
formula_1
Its hydrated form (CeF4·xH2O, x≤1) can be produced by reacting 40% hydrofluoric acid and cerium(IV) sulfate solution at 90°C.
Cerium(IV) fluoride can dissolve in DMSO, and react to form the coordination complex [CeF4(DMSO)2].
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "\\mathrm{ 2 \\ CeF_3 + F_2 \\longrightarrow 2 \\ CeF_4 }"
},
{
"math_id": 1,
"text": "\\mathrm{ CeO_2 + 2 \\ F_2 \\longrightarrow CeF_4 + O_2 }"
}
] |
https://en.wikipedia.org/wiki?curid=64000435
|
64005
|
Bresenham's line algorithm
|
Line-drawing algorithm
Bresenham's line algorithm is a line drawing algorithm that determines the points of an "n"-dimensional raster that should be selected in order to form a close approximation to a straight line between two points. It is commonly used to draw line primitives in a bitmap image (e.g. on a computer screen), as it uses only integer addition, subtraction, and bit shifting, all of which are very cheap operations in historically common computer architectures. It is an incremental error algorithm, and one of the earliest algorithms developed in the field of computer graphics. An extension to the original algorithm called the "midpoint circle algorithm" may be used for drawing circles.
While algorithms such as Wu's algorithm are also frequently used in modern computer graphics because they can support antialiasing, Bresenham's line algorithm is still important because of its speed and simplicity. The algorithm is used in hardware such as plotters and in the graphics chips of modern graphics cards. It can also be found in many software graphics libraries. Because the algorithm is very simple, it is often implemented in either the firmware or the graphics hardware of modern graphics cards.
The label "Bresenham" is used today for a family of algorithms extending or modifying Bresenham's original algorithm.
History.
Bresenham's line algorithm is named after Jack Elton Bresenham who developed it in 1962 at IBM. In 2001 Bresenham wrote:
I was working in the computation lab at IBM's San Jose development lab. A Calcomp plotter had been attached to an IBM 1401 via the 1407 typewriter console. [The algorithm] was in production use by summer 1962, possibly a month or so earlier. Programs in those days were freely exchanged among corporations so Calcomp (Jim Newland and Calvin Hefte) had copies. When I returned to Stanford in Fall 1962, I put a copy in the Stanford comp center library.
A description of the line drawing routine was accepted for presentation at the 1963 ACM national convention in Denver, Colorado. It was a year in which no proceedings were published, only the agenda of speakers and topics in an issue of Communications of the ACM. A person from the IBM Systems Journal asked me after I made my presentation if they could publish the paper. I happily agreed, and they printed it in 1965.
Method.
The following conventions will be utilized:
The endpoints of the line are the pixels at formula_0 and formula_1, where the first coordinate of the pair is the column and the second is the row.
The algorithm will be initially presented only for the octant in which the segment goes down and to the right (formula_2 and formula_3), and its horizontal projection formula_4 is longer than the vertical projection formula_5 (the line has a positive slope less than 1).
In this octant, for each column "x" between formula_6 and formula_7, there is exactly one row "y" (computed by the algorithm) containing a pixel of the line, while each row between formula_8 and formula_9 may contain multiple rasterized pixels.
Bresenham's algorithm chooses the integer "y" corresponding to the pixel center that is closest to the ideal (fractional) "y" for the same "x"; on successive columns "y" can remain the same or increase by 1.
The general equation of the line through the endpoints is given by:
formula_10.
Since we know the column, "x", the pixel's row, "y", is given by rounding this quantity to the nearest integer:
formula_11.
The slope formula_12 depends on the endpoint coordinates only and can be precomputed, and the ideal "y" for successive integer values of "x" can be computed starting from formula_8 and repeatedly adding the slope.
In practice, the algorithm does not keep track of the y coordinate, which increases by "m" = "∆y/∆x" each time the "x" increases by one; it keeps an error bound at each
stage, which represents the negative of the distance from (a) the point where the line exits the pixel to (b) the top edge of the pixel.
This value is first set to formula_13 (due to using the pixel's center coordinates), and is incremented by "m" each time the "x" coordinate is incremented by one. If the error becomes greater than "0.5", we know that the line has moved upwards
one pixel, and that we must increment our "y" coordinate and readjust the error to represent the distance from the top of the new pixel – which is done by subtracting one from error.
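As a concrete illustration, the following Python sketch (illustrative only; the names and the plot callback are not part of the original description) implements this error-accumulation idea for the first octant with integer endpoints. It follows the common presentation in which the error starts at zero and is reduced by one each time the line crosses into the next row, which differs slightly in bookkeeping from the description above.
def draw_line_float(x0, y0, x1, y1, plot):
    # Error-accumulation sketch for 0 <= slope <= 1 and x0 <= x1.
    dx = x1 - x0
    dy = y1 - y0
    m = dy / dx        # slope of the ideal line
    y = y0
    error = 0.0        # how far the ideal line has risen above the current row
    for x in range(x0, x1 + 1):
        plot(x, y)
        error += m     # ideal y increases by m for each unit step in x
        if error > 0.5:    # the line is now closer to the next row up
            y += 1
            error -= 1.0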
Derivation.
To derive Bresenham's algorithm, two steps must be taken. The first step is transforming the equation of a line from the typical slope-intercept form into something different; and then using this new equation to draw a line based on the idea of accumulation of error.
Line equation.
The slope-intercept form of a line is written as
formula_14
where formula_15 is the slope and formula_16 is the y-intercept. Because this is a function of only formula_17, it can't represent a vertical line. Therefore, it would be useful to make this equation written as a function of both formula_17 "and" formula_18, to be able to draw lines at any angle. The angle (or slope) of a line can be stated as "rise over run", or formula_19. Then, using algebraic manipulation,
formula_20
Letting this last equation be a function of formula_17 and formula_18, it can be written as
formula_21
where the constants are formula_22, formula_23, and formula_24.
For these constants formula_25, formula_26, and formula_27, the line is the set of points where formula_28. That is, for any formula_29 not on the line, formula_30. This form involves only integers if formula_17 and formula_18 are integers, since the constants formula_25, formula_26, and formula_27 are defined as integers.
As an example, consider the line formula_31. In this form it can be written as formula_32. The point (2,2) is on the line
formula_33
and the point (2,3) is not on the line
formula_34
and neither is the point (2,1)
formula_35
Notice that the points (2,1) and (2,3) are on opposite sides of the line and formula_36 evaluates to positive or negative. A line splits a plane into halves and the half-plane that has a negative formula_36 can be called the negative half-plane, and the other half can be called the positive half-plane. This observation is very important in the remainder of the derivation.
Algorithm.
Clearly, the starting point is on the line
formula_37
only because the line is defined to start and end on integer coordinates (though it is entirely reasonable to want to draw a line with non-integer end points).
Keeping in mind that the slope is at most formula_38, the question is whether the next point should be at formula_39 or formula_40. Intuitively, the point should be chosen based upon which is closer to the line at formula_41: if the former is closer, include the former point on the line; otherwise include the latter. To answer this, evaluate the line function at the midpoint between these two points:
formula_42
If the value of this is positive then the ideal line is below the midpoint and closer to the candidate point formula_43; i.e. the y coordinate should increase. Otherwise, the ideal line passes through or above the midpoint, and the y coordinate should stay the same; in which case the point formula_44 is chosen. The value of the line function at this midpoint is the sole determinant of which point should be chosen.
The adjacent image shows the blue point (2,2) chosen to be on the line with two candidate points in green (3,2) and (3,3). The black point (3, 2.5) is the midpoint between the two candidate points.
Algorithm for integer arithmetic.
Alternatively, the difference between points can be used instead of evaluating f(x,y) at midpoints. This alternative method allows for integer-only arithmetic, which is generally faster than using floating-point arithmetic. To derive this method, define the difference as follows:
formula_45
For the first decision, this formulation is equivalent to the midpoint method since formula_46 at the starting point. Simplifying this expression yields:
formula_47
Just as with the midpoint method, if formula_48 is positive, then choose formula_43, otherwise choose formula_44.
If formula_44 is chosen, the change in D will be:
formula_49
If formula_43 is chosen the change in D will be:
formula_50
If the new D is positive then formula_51 is chosen, otherwise formula_52. This decision can be generalized by accumulating the error on each subsequent point.
This completes the derivation of the algorithm. One performance issue is the 1/2 factor in the initial value of D. Since only the sign of the accumulated difference matters, everything can be multiplied by 2 with no consequence.
This results in an algorithm that uses only integer arithmetic.
plotLine(x0, y0, x1, y1)
    dx = x1 - x0
    dy = y1 - y0
    D = 2*dy - dx
    y = y0
    for x from x0 to x1
        plot(x, y)
        if D > 0
            y = y + 1
            D = D - 2*dx
        end if
        D = D + 2*dy
Running this algorithm for formula_53 from (0,1) to (6,4) yields the following differences with dx=6 and dy=3:
D=2*3-6=0
Loop from 0 to 6
* x=0: plot(0, 1), D≤0: D=0+6=6
* x=1: plot(1, 1), D>0: D=6-12=-6, y=1+1=2, D=-6+6=0
* x=2: plot(2, 2), D≤0: D=0+6=6
* x=3: plot(3, 2), D>0: D=6-12=-6, y=2+1=3, D=-6+6=0
* x=4: plot(4, 3), D≤0: D=0+6=6
* x=5: plot(5, 3), D>0: D=6-12=-6, y=3+1=4, D=-6+6=0
* x=6: plot(6, 4), D≤0: D=0+6=6
The result of this plot is shown to the right. The plotting can be viewed by plotting at the intersection of lines (blue circles) or filling in pixel boxes (yellow squares). Regardless, the plotting is the same.
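The pseudocode above translates directly into a runnable sketch. The following Python version (the function and variable names are ours, not from the source) collects the plotted pixels in a list and reproduces the worked run above.
def plot_line_octant0(x0, y0, x1, y1):
    # Integer-only Bresenham for 0 <= slope <= 1 and x0 <= x1,
    # following the pseudocode above; returns the plotted pixels.
    dx = x1 - x0
    dy = y1 - y0
    d = 2 * dy - dx
    y = y0
    points = []
    for x in range(x0, x1 + 1):
        points.append((x, y))
        if d > 0:
            y += 1
            d -= 2 * dx
        d += 2 * dy
    return points
# plot_line_octant0(0, 1, 6, 4)
# -> [(0, 1), (1, 1), (2, 2), (3, 2), (4, 3), (5, 3), (6, 4)]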
All cases.
However, as mentioned above this only works for octant zero, that is lines starting at the origin with a slope between 0 and 1 where x increases by exactly 1 per iteration and y increases by 0 or 1.
The algorithm can be extended to cover slopes between 0 and -1 by checking whether y needs to increase or decrease (i.e. dy < 0)
plotLineLow(x0, y0, x1, y1)
    dx = x1 - x0
    dy = y1 - y0
    yi = 1
    if dy < 0
        yi = -1
        dy = -dy
    end if
    D = (2 * dy) - dx
    y = y0
    for x from x0 to x1
        plot(x, y)
        if D > 0
            y = y + yi
            D = D + (2 * (dy - dx))
        else
            D = D + 2*dy
        end if
By switching the x and y axis an implementation for positive or negative steep slopes can be written as
plotLineHigh(x0, y0, x1, y1)
    dx = x1 - x0
    dy = y1 - y0
    xi = 1
    if dx < 0
        xi = -1
        dx = -dx
    end if
    D = (2 * dx) - dy
    x = x0
    for y from y0 to y1
        plot(x, y)
        if D > 0
            x = x + xi
            D = D + (2 * (dx - dy))
        else
            D = D + 2*dx
        end if
A complete solution would need to detect whether x1 > x0 or y1 > y0 and reverse the input coordinates before drawing, thus
plotLine(x0, y0, x1, y1)
    if abs(y1 - y0) < abs(x1 - x0)
        if x0 > x1
            plotLineLow(x1, y1, x0, y0)
        else
            plotLineLow(x0, y0, x1, y1)
        end if
    else
        if y0 > y1
            plotLineHigh(x1, y1, x0, y0)
        else
            plotLineHigh(x0, y0, x1, y1)
        end if
    end if
In low level implementations which access the video memory directly, it would be typical for the special cases of vertical and horizontal lines to be handled separately as they can be highly optimized.
Some versions use Bresenham's principles of integer incremental error to perform all octant line draws, balancing the positive and negative error between the x and y coordinates.
plotLine(x0, y0, x1, y1)
    dx = abs(x1 - x0)
    sx = x0 < x1 ? 1 : -1
    dy = -abs(y1 - y0)
    sy = y0 < y1 ? 1 : -1
    error = dx + dy
    while true
        plot(x0, y0)
        if x0 == x1 && y0 == y1 break
        e2 = 2 * error
        if e2 >= dy
            error = error + dy
            x0 = x0 + sx
        end if
        if e2 <= dx
            error = error + dx
            y0 = y0 + sy
        end if
    end while
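For reference, a direct Python transcription of this all-octant version (a sketch; it collects the pixels in a list rather than calling a plot routine, and the names are ours) might look like:
def plot_line(x0, y0, x1, y1):
    # All-octant integer Bresenham, balancing the error between x and y.
    dx = abs(x1 - x0)
    sx = 1 if x0 < x1 else -1
    dy = -abs(y1 - y0)
    sy = 1 if y0 < y1 else -1
    error = dx + dy
    points = []
    while True:
        points.append((x0, y0))
        if x0 == x1 and y0 == y1:
            break
        e2 = 2 * error
        if e2 >= dy:
            error += dy
            x0 += sx
        if e2 <= dx:
            error += dx
            y0 += sy
    return points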
Similar algorithms.
The Bresenham algorithm can be interpreted as a slightly modified digital differential analyzer (using 0.5 as the error threshold instead of 0, which is required for non-overlapping polygon rasterizing).
The principle of using an incremental error in place of division operations has other applications in graphics. It is possible to use this technique to calculate the U,V co-ordinates during raster scan of texture mapped polygons. The voxel heightmap software-rendering engines seen in some PC games also used this principle.
Bresenham also published a Run-Slice computational algorithm: while the above described Run-Length algorithm runs the loop on the major axis, the Run-Slice variation loops the other way. This method has been represented in a number of US patents:
The algorithm has been extended to:
Notes.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "(x_0,y_0)"
},
{
"math_id": 1,
"text": "(x_1,y_1)"
},
{
"math_id": 2,
"text": "x_0 \\leq x_1"
},
{
"math_id": 3,
"text": "y_0 \\leq y_1"
},
{
"math_id": 4,
"text": "x_1-x_0"
},
{
"math_id": 5,
"text": "y_1-y_0"
},
{
"math_id": 6,
"text": "x_0"
},
{
"math_id": 7,
"text": "x_1"
},
{
"math_id": 8,
"text": "y_0"
},
{
"math_id": 9,
"text": "y_1"
},
{
"math_id": 10,
"text": "\\frac{y - y_0}{y_1-y_0} = \\frac{x-x_0}{x_1-x_0}"
},
{
"math_id": 11,
"text": "y = \\frac{y_1-y_0}{x_1-x_0} (x-x_0) + y_0"
},
{
"math_id": 12,
"text": "(y_1-y_0)/(x_1-x_0)"
},
{
"math_id": 13,
"text": "y_0-0.5"
},
{
"math_id": 14,
"text": "y = f(x) = mx + b"
},
{
"math_id": 15,
"text": "m"
},
{
"math_id": 16,
"text": "b"
},
{
"math_id": 17,
"text": "x"
},
{
"math_id": 18,
"text": "y"
},
{
"math_id": 19,
"text": "\\Delta y/\\Delta x"
},
{
"math_id": 20,
"text": "\n\\begin{align}\ny & = mx + b \\\\\ny & = \\frac{\\Delta y}{\\Delta x} x + b \\\\\n(\\Delta x) y & = (\\Delta y) x + (\\Delta x) b \\\\\n0 & = (\\Delta y) x - (\\Delta x) y + (\\Delta x) b\n\\end{align}\n"
},
{
"math_id": 21,
"text": "f(x,y) := Ax + By + C = 0"
},
{
"math_id": 22,
"text": "A = \\Delta y = y_1 - y_0"
},
{
"math_id": 23,
"text": "B = - \\Delta x = - (x_1 - x_0)"
},
{
"math_id": 24,
"text": "C = (\\Delta x) b = (x_1 - x_0) b"
},
{
"math_id": 25,
"text": "A"
},
{
"math_id": 26,
"text": "B"
},
{
"math_id": 27,
"text": "C"
},
{
"math_id": 28,
"text": "f(x,y) = 0"
},
{
"math_id": 29,
"text": "(x,y)"
},
{
"math_id": 30,
"text": "f(x,y) \\ne 0"
},
{
"math_id": 31,
"text": "y=\\frac{1}{2}x + 1"
},
{
"math_id": 32,
"text": "f(x,y) = x - 2y + 2"
},
{
"math_id": 33,
"text": "f(2,2) = x - 2y + 2 = (2) - 2(2) + 2 = 2 - 4 + 2 = 0"
},
{
"math_id": 34,
"text": "f(2,3) = (2) - 2(3) + 2 = 2 - 6 + 2 = -2"
},
{
"math_id": 35,
"text": "f(2,1) = (2) - 2(1) + 2 = 2 - 2 + 2 = 2"
},
{
"math_id": 36,
"text": "f(x,y)"
},
{
"math_id": 37,
"text": "f(x_0, y_0) = 0"
},
{
"math_id": 38,
"text": "1"
},
{
"math_id": 39,
"text": "(x_0 + 1, y_0)"
},
{
"math_id": 40,
"text": "(x_0 + 1, y_0 + 1)"
},
{
"math_id": 41,
"text": "x_0 + 1"
},
{
"math_id": 42,
"text": "f(x_0 + 1, y_0 + \\tfrac 1 2)"
},
{
"math_id": 43,
"text": "(x_0+1,y_0+1)"
},
{
"math_id": 44,
"text": "(x_0+1,y_0)"
},
{
"math_id": 45,
"text": "\nD = f(x_0+1,y_0+\\tfrac 1 2) - f(x_0,y_0)\n"
},
{
"math_id": 46,
"text": "f(x_0,y_0)=0"
},
{
"math_id": 47,
"text": "\\begin{array}{rclcl}\nD & = & \\left[ A(x_0+1) + B \\left(y_0+\\frac{1}{2}\\right) + C \\right] & - & \\left[ A x_0 + B y_0 + C \\right] \\\\\n& = & \\left[ Ax_0 + B y_0+ C + A + \\frac {1}{2} B\\right] & - & \\left[ A x_0 + B y_0 + C \\right] \\\\\n& = & A + \\frac{1}{2} B = \\Delta y - \\frac{1}{2} \\Delta x\n\\end{array}"
},
{
"math_id": 48,
"text": "D"
},
{
"math_id": 49,
"text": "\\begin{array}{lclcl}\n\\Delta D &=& f(x_0+2,y_0+\\tfrac 1 2) - f(x_0+1,y_0+\\tfrac 1 2) &=& A &=& \\Delta y \\\\\n\\end{array}"
},
{
"math_id": 50,
"text": "\\begin{array}{lclcl}\n\\Delta D &=& f(x_0+2,y_0+\\tfrac 3 2) - f(x_0+1,y_0+\\tfrac 1 2) &=& A+B &=& \\Delta y - \\Delta x\n\\end{array}"
},
{
"math_id": 51,
"text": "(x_0+2,y_0+1)"
},
{
"math_id": 52,
"text": "(x_0+2,y_0)"
},
{
"math_id": 53,
"text": "f(x,y) = x-2y+2"
}
] |
https://en.wikipedia.org/wiki?curid=64005
|
64005900
|
Magnetic 2D materials
|
Class of atomically thin materials
Magnetic 2D materials or magnetic van der Waals materials are two-dimensional materials that display ordered magnetic properties such as antiferromagnetism or ferromagnetism. After the discovery of graphene in 2004, the family of 2D materials grew rapidly, with reports of many related materials covering a wide range of properties, but none of them magnetic. Since 2016, however, there have been numerous reports of 2D magnetic materials that can be exfoliated with ease, just like graphene.
The first few-layer van der Waals magnetism was reported in 2017 (Cr2Ge2Te6 and CrI3). One reason for this seemingly late discovery is that thermal fluctuations destroy magnetic order more easily in 2D magnets than in 3D bulk. It is also generally accepted in the community that low-dimensional materials have different magnetic properties than bulk. The academic interest in measuring the transition from 3D to 2D magnetism has been the driving force behind much of the recent work on van der Waals magnets. This much-anticipated transition has since been observed in both antiferromagnets and ferromagnets: FePS3, Cr2Ge2Te6, CrI3, NiPS3, MnPS3, and Fe3GeTe2.
Although the field has only been around since 2016, it has become one of the most active areas in condensed matter physics and materials science and engineering. Several review articles have been written highlighting its promise and future directions.
Overview.
Magnetic van der Waals materials are a new addition to the growing list of 2D materials. The special feature of these new materials is that they exhibit a magnetic ground state, either antiferromagnetic or ferromagnetic, when they are thinned down to a few sheets or even a single layer. Another, probably more important, feature of these materials is that they can easily be produced in few-layer or monolayer form using simple means such as scotch tape, which is rather uncommon among other magnetic materials like oxide magnets.
Interest in these materials is based on the possibility of producing two-dimensional magnetic materials with ease. The field started in 2016 with a conceptual paper and a first experimental demonstration, and was expanded further with the publication of similar observations in ferromagnetism the following year. Since then, several new materials have been discovered and several review papers have been published.
Theory.
Magnetic materials have their magnetic moments (spins) aligned over a macroscopic length scale. Alignment of the spins is typically driven by exchange interaction between neighboring spins. While at absolute zero (formula_0) the alignment can always exist, thermal fluctuations misalign magnetic moments at temperatures above the Curie temperature (formula_1), causing a phase transition to a non-magnetic state. Whether formula_1 is above absolute zero depends heavily on the dimensionality of the system.
For a 3D system, the Curie temperature is always above zero, while a one-dimensional system can only be in a ferromagnetic state at formula_0.
For 2D systems, the transition temperature depends on the spin dimensionality (formula_2). In a system with formula_3, the spins are constrained to a single axis and can point either into or out of the plane. A spin dimensionality of two means that the spins are free to point in any direction parallel to the plane. A system with a spin dimensionality of three means there are no constraints on the direction of the spin. A system with formula_3 is described by the 2D Ising model. Onsager's solution to the model demonstrates that formula_4, thus allowing magnetism at obtainable temperatures. On the contrary, an infinite system where formula_5, described by the isotropic Heisenberg model, does not display magnetism at any finite temperature. The long-range ordering of the spins in an infinite system is prevented by the Mermin-Wagner theorem, which states that the spontaneous symmetry breaking required for magnetism is not possible in isotropic two-dimensional magnetic systems. Spin waves in this case have a finite density of states, are gapless, and are therefore easy to excite, destroying magnetic order. Therefore, an external source of magnetocrystalline anisotropy, such as an external magnetic field, or a finite-sized system is required for materials with formula_5 to demonstrate magnetism.
The 2D Ising model describes the behavior of FePS3, CrI3, and Fe3GeTe2, while Cr2Ge2Te6 and MnPS3 behave like the isotropic Heisenberg model. The intrinsic anisotropy in CrI3 and Fe3GeTe2 is caused by strong spin–orbit coupling, allowing them to remain magnetic down to a monolayer, while Cr2Ge2Te6 only exhibits magnetism as a bilayer or thicker. The XY model describes the case where formula_6. In this system, there is no transition between the ordered and unordered states, but instead the system undergoes a so-called Kosterlitz–Thouless transition at a finite temperature formula_7, below which the system has quasi-long-range magnetic order. The theoretical predictions of the XY model were reported to be consistent with experimental observations of NiPS3. The Heisenberg model describes the case where formula_5. In this system, there is no transition between the ordered and unordered states because of the Mermin-Wagner theorem. An experimental realization of the Heisenberg model was reported using MnPS3.
The above systems can be described by a generalized Heisenberg spin Hamiltonian:
formula_8,
where formula_9 is the exchange coupling between spins formula_10 and formula_11, and formula_12 and formula_13 are the on-site and inter-site magnetic anisotropies, respectively. Setting formula_14 recovers the 2D Ising model or the XY model (positive sign for formula_3 and negative for formula_6), while formula_15 and formula_16 recovers the Heisenberg model (formula_5). Along with the idealized models described above, the spin Hamiltonian can be used for most experimental setups, and it can also model dipole-dipole interactions by renormalization of the parameter formula_12. However, sometimes including further neighbours or using different exchange couplings, such as antisymmetric exchange, is required.
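A minimal numerical sketch may help make the parameters of this Hamiltonian concrete. The code below is an illustration of ours, not from the source: it treats the spins as classical unit 3-vectors, takes a list of nearest-neighbour bonds with each unordered pair listed once, and assumes the factor of 1/2 in the Hamiltonian compensates double counting of ordered pairs, so it is omitted here.
import numpy as np
def classical_energy(spins, bonds, J, Lam, A):
    # spins: (N, 3) array of unit vectors; bonds: list of (i, j) pairs,
    # each nearest-neighbour pair listed once (the 1/2 in the Hamiltonian
    # is assumed to compensate double counting and is therefore dropped).
    E = 0.0
    for i, j in bonds:
        E -= J * np.dot(spins[i], spins[j]) + Lam * spins[i, 2] * spins[j, 2]
    E -= A * np.sum(spins[:, 2] ** 2)
    return E
# Example: two ferromagnetically aligned spins along z with J = 1, Lam = 0, A = 0
# spins = np.array([[0.0, 0.0, 1.0], [0.0, 0.0, 1.0]])
# classical_energy(spins, [(0, 1)], J=1.0, Lam=0.0, A=0.0)  # -> -1.0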
Measuring two-dimensional magnetism.
Magnetic properties of two-dimensional materials are usually measured using Raman spectroscopy, the magneto-optic Kerr effect, magnetic circular dichroism, or anomalous Hall effect techniques. The dimensionality of the system can be determined by measuring the scaling behaviour of magnetization (formula_17), susceptibility (formula_18) or correlation length (formula_19) as a function of temperature. The corresponding "critical exponents" are formula_20, formula_21 and formula_22, respectively. They can be retrieved by fitting
formula_23,
formula_24 or
formula_25
to the data. The critical exponents depend on the system and its dimensionality, as demonstrated in Table 1. Therefore, an abrupt change in any of the critical exponents indicates a transition between two models. Furthermore, the Curie temperature can be measured as a function of the number of layers (formula_26). For large formula_26, this relation is given by
formula_27,
where formula_28 is a material-dependent constant. For thin layers, the behavior changes to formula_29
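As an illustration of how the scaling form for the magnetization given above might be fit in practice (a sketch, not taken from the source; the data arrays and the initial guess are hypothetical), one can use a standard least-squares fit:
import numpy as np
from scipy.optimize import curve_fit
def magnetization_model(T, M0, Tc, beta):
    # M(T) = M0 * (1 - T/Tc)^beta below Tc and 0 above, following the scaling form above
    t = np.clip(1.0 - T / Tc, 0.0, None)
    return M0 * t ** beta
# T_data and M_data are hypothetical measured arrays; p0 is a rough initial guess
# popt, pcov = curve_fit(magnetization_model, T_data, M_data, p0=(1.0, 50.0, 0.25))
# M0_fit, Tc_fit, beta_fit = popt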
Applications.
Magnetic 2D materials can be used as part of van der Waals heterostructures: layered materials consisting of different 2D materials held together by van der Waals forces. One example of such a structure is a thin insulating/semiconducting layer between layers of 2D magnetic material, producing a magnetic tunnel junction. Such structures can show a significant spin valve effect, and thus have many potential applications in the field of spintronics. Another newly emerging direction came from the rather unexpected observation of magnetic excitons in NiPS3.
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "T = 0"
},
{
"math_id": 1,
"text": "T_C"
},
{
"math_id": 2,
"text": "n"
},
{
"math_id": 3,
"text": "n = 1"
},
{
"math_id": 4,
"text": "T_C > 0"
},
{
"math_id": 5,
"text": "n = 3"
},
{
"math_id": 6,
"text": "n = 2"
},
{
"math_id": 7,
"text": "T_{KT}"
},
{
"math_id": 8,
"text": "H = -\\frac{1}{2} \\sum_{<i,j>} (J \\mathbf{S}_i \\cdot \\mathbf{S}_j + \\Lambda S_j^z S_i^z) - \\sum_{i} A(S_i^z)^2"
},
{
"math_id": 9,
"text": "J"
},
{
"math_id": 10,
"text": "\\mathbf{S}_i"
},
{
"math_id": 11,
"text": "\\mathbf{S}_j"
},
{
"math_id": 12,
"text": "A"
},
{
"math_id": 13,
"text": "\\Lambda"
},
{
"math_id": 14,
"text": "A \\rightarrow \\pm\\infty"
},
{
"math_id": 15,
"text": "A \\approx 0"
},
{
"math_id": 16,
"text": "\\Lambda \\approx 0"
},
{
"math_id": 17,
"text": "M"
},
{
"math_id": 18,
"text": "\\chi"
},
{
"math_id": 19,
"text": "\\xi"
},
{
"math_id": 20,
"text": "\\beta"
},
{
"math_id": 21,
"text": "\\gamma"
},
{
"math_id": 22,
"text": "v"
},
{
"math_id": 23,
"text": "M(T)\\propto(1-T/T_{\\text{C}})^{\\beta}"
},
{
"math_id": 24,
"text": "\\chi(T)\\propto(1-T/T_{\\text{C}})^{-\\gamma}"
},
{
"math_id": 25,
"text": "\\xi(T)\\propto(1-T/T_{\\text{C}})^{-v}"
},
{
"math_id": 26,
"text": "N"
},
{
"math_id": 27,
"text": "T_{\\text{C}}(N)/T_{\\text{C}}^{\\text{3D}} = 1 - (C/N)^{\\frac{1}{v}}"
},
{
"math_id": 28,
"text": "C"
},
{
"math_id": 29,
"text": "T_{\\text{C}} \\propto N "
}
] |
https://en.wikipedia.org/wiki?curid=64005900
|
Subsets and Splits
No community queries yet
The top public SQL queries from the community will appear here once available.