id | title | text | formulas | url
---|---|---|---|---|
7031816 | Rokhlin's theorem | On the intersection form of a smooth, closed 4-manifold with a spin structure
In 4-dimensional topology, a branch of mathematics, Rokhlin's theorem states that if a smooth, orientable, closed 4-manifold "M" has a spin structure (or, equivalently, the second Stiefel–Whitney class formula_0 vanishes), then the signature of its intersection form, a quadratic form on the second cohomology group formula_1, is divisible by 16. The theorem is named for Vladimir Rokhlin, who proved it in 1952.
Examples.
The intersection form on "M",
formula_2
is unimodular on formula_3 by Poincaré duality, and the vanishing of formula_0 implies that the intersection form is even. By a theorem of Cahit Arf, any even unimodular lattice has signature divisible by 8, so Rokhlin's theorem forces one extra factor of 2 to divide the signature.
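As a concrete illustration (a standard example; the degree formula and the spin criterion are quoted here as well-known facts rather than derived in this article): a smooth complex surface of degree formula_5 in formula_4 has signature formula_6 and admits a spin structure exactly when formula_5 is even. For formula_7, the quartic K3 surface, the signature is therefore

```latex
% Signature of the degree-4 (K3) surface in CP^3,
% evaluating sigma = (4 - d^2) d / 3 at d = 4:
\sigma \;=\; \left.\frac{(4-d^{2})\,d}{3}\right|_{d=4}
       \;=\; \frac{(4-16)\cdot 4}{3}
       \;=\; -16,
```

which is divisible by 16 as Rokhlin's theorem requires, whereas the Arf argument above would only guarantee divisibility by 8.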
Proofs.
Rokhlin's theorem can be deduced from the fact that the third stable homotopy group of spheres formula_9 is cyclic of order 24; this is Rokhlin's original approach.
It can also be deduced from the Atiyah–Singer index theorem. See Â genus and Rochlin's theorem.
Robion Kirby (1989) gives a geometric proof.
The Rokhlin invariant.
Since Rokhlin's theorem states that the signature of a spin smooth manifold is divisible by 16, the definition of the Rokhlin invariant is deduced as follows:
For a 3-manifold formula_10 and a spin structure formula_11 on formula_10, the Rokhlin invariant formula_12 in formula_13 is defined to be the signature of any smooth compact spin 4-manifold with spin boundary formula_14.
If "N" is a spin 3-manifold then it bounds a spin 4-manifold "M". The signature of "M" is divisible by 8, and an easy application of Rokhlin's theorem shows that its value mod 16 depends only on "N" and not on the choice of "M". Homology 3-spheres have a unique spin structure so we can define the Rokhlin invariant of a homology 3-sphere to be the element formula_15 of formula_16, where "M" any spin 4-manifold bounding the homology sphere.
For example, the Poincaré homology sphere bounds a spin 4-manifold with intersection form formula_8, so its Rokhlin invariant is 1. This result has some elementary consequences: the Poincaré homology sphere does not admit a smooth embedding in formula_17, nor does it bound a Mazur manifold.
More generally, if "N" is a spin 3-manifold (for example, any formula_16 homology sphere), then the signature of any spin 4-manifold "M" with boundary "N" is well defined mod 16, and is called the Rokhlin invariant of "N". On a topological 3-manifold "N", the generalized Rokhlin invariant refers to the function whose domain is the spin structures on "N", and which evaluates to the Rokhlin invariant of the pair formula_14 where "s" is a spin structure on "N".
The Rokhlin invariant of "M" is equal to half the Casson invariant mod 2. The Casson invariant is viewed as the Z-valued lift of the Rokhlin invariant of integral homology 3-spheres.
Generalizations.
The Kervaire–Milnor theorem states that if formula_18 is a characteristic sphere in a smooth compact 4-manifold "M", then
formula_19.
A characteristic sphere is an embedded 2-sphere whose homology class represents the Stiefel–Whitney class formula_0. If formula_0 vanishes, we can take formula_18 to be any small sphere, which has self intersection number 0, so Rokhlin's theorem follows.
The Freedman–Kirby theorem states that if formula_18 is a characteristic surface in a smooth compact 4-manifold "M", then
formula_20.
where formula_21 is the Arf invariant of a certain quadratic form on formula_22. This Arf invariant is obviously 0 if formula_18 is a sphere, so the Kervaire–Milnor theorem is a special case.
A generalization of the Freedman–Kirby theorem to topological (rather than smooth) manifolds states that
formula_23,
where formula_24 is the Kirby–Siebenmann invariant of "M". The Kirby–Siebenmann invariant of "M" is 0 if "M" is smooth.
Armand Borel and Friedrich Hirzebruch proved the following theorem: If "X" is a smooth compact spin manifold of dimension divisible by 4 then the Â genus is an integer, and is even if the dimension of "X" is 4 mod 8. This can be deduced from the Atiyah–Singer index theorem: Michael Atiyah and Isadore Singer showed that the Â genus is the index of the Atiyah–Singer operator, which is always integral, and is even in dimensions 4 mod 8. For a 4-dimensional manifold, the Hirzebruch signature theorem shows that the signature is −8 times the Â genus, so in dimension 4 this implies Rokhlin's theorem.
Ochanine (1981) proved that if "X" is a compact oriented smooth spin manifold of dimension 4 mod 8, then its signature is divisible by 16. | [
{
"math_id": 0,
"text": "w_2(M)"
},
{
"math_id": 1,
"text": "H^2(M)"
},
{
"math_id": 2,
"text": "Q_M\\colon H^2(M,\\Z)\\times H^2(M,\\Z)\\rightarrow \\mathbb{Z}"
},
{
"math_id": 3,
"text": "\\Z"
},
{
"math_id": 4,
"text": "\\mathbb{CP}^3"
},
{
"math_id": 5,
"text": "d"
},
{
"math_id": 6,
"text": "(4-d^2)d/3"
},
{
"math_id": 7,
"text": "d=4"
},
{
"math_id": 8,
"text": "E_8"
},
{
"math_id": 9,
"text": "\\pi^S_3"
},
{
"math_id": 10,
"text": "N"
},
{
"math_id": 11,
"text": "s"
},
{
"math_id": 12,
"text": "\\mu(N,s)"
},
{
"math_id": 13,
"text": "\\Z/16\\mathbb{Z}"
},
{
"math_id": 14,
"text": "(N,s)"
},
{
"math_id": 15,
"text": "\\operatorname{sign}(M)/8"
},
{
"math_id": 16,
"text": "\\Z/2\\Z"
},
{
"math_id": 17,
"text": "S^4"
},
{
"math_id": 18,
"text": "\\Sigma"
},
{
"math_id": 19,
"text": "\\operatorname{signature}(M) = \\Sigma \\cdot \\Sigma \\bmod 16"
},
{
"math_id": 20,
"text": "\\operatorname{signature}(M) = \\Sigma \\cdot \\Sigma + 8\\operatorname{Arf}(M,\\Sigma) \\bmod 16"
},
{
"math_id": 21,
"text": "\\operatorname{Arf}(M,\\Sigma)"
},
{
"math_id": 22,
"text": "H_1(\\Sigma, \\Z/2\\Z)"
},
{
"math_id": 23,
"text": "\\operatorname{signature}(M) = \\Sigma \\cdot \\Sigma + 8\\operatorname{Arf}(M,\\Sigma) + 8\\operatorname{ks}(M) \\bmod 16"
},
{
"math_id": 24,
"text": "\\operatorname{ks}(M)"
}
]
| https://en.wikipedia.org/wiki?curid=7031816 |
70321243 | Addiction severity index | Clinical assessment tool
The Addiction Severity Index (ASI) is used to assess the severity of a patient's addiction and to analyse the need for treatment; it has been in use for more than two decades since its publication in 1992. It is used in a variety of settings such as clinics, mental health services in the US, the Indian Health Service and several European countries. One of its major applications is as a clinical assessment tool for clinicians to determine the severity of the addictions and the necessity for treatment by probing the patients' conditions on both health and social issues. Seven aspects are covered: medical health, employment/support status, drug use, alcohol use, illegal activity and legal status, family and social relationships, and psychiatric health.
The ASI offers a more complete assessment of patients' conditions than other tools, as the authors believed that the detrimental effects on health and social life are not merely the results of addiction and that these issues cannot simply be resolved by reducing substance use. Despite the lack of clarity on the causal relationship between socioeconomic determinants of health and addiction, it was found that patients often place more weight on their health and social problems than on the addiction itself, and in other cases these complex issues are the causes of relapses, showing the greater role of health and social problems in dealing with addiction. Hence, the ASI delves deeper into the socioeconomic determinants of health of patients to better evaluate specific plans targeting these specific areas.
History.
Before the development of the ASI, it was assumed that addiction could be characterized by measuring the nature, amount, and duration of substance use, and that it would directly lead to health and social problems or even criminal behaviors. Hence, the foundation of addiction therapy with the aim of reducing substance use was laid. However, it was noticed that addiction could not be understood by simply assessing the characteristics of the addiction itself. The ASI authors exemplified this with an anesthesiologist with a severe opioid addiction but better personal and social support, who would have a better outcome than a pregnant woman with a less severe cocaine addiction but worse criteria such as alcohol use, sexual behavior and educational level. This shows that the previous assumption was only partially correct and reflects a more complex relationship between substance addiction and health and social problems. While there is a possibility that addiction is the direct cause of the health and social issues, the causal relationship could also be reversed, or the two could be unrelated, as both may be caused by inherited personality or a combination of economic, social and genetic factors. Coupled with findings that variation in substance abuse treatment showed little effect on outcomes while the addition of health and/or social services improved outcomes, the development of the ASI, funded by the Veterans Administration (VA), began in 1977 with an emphasis on analyzing patients' health and social background.
In the beginning, around 250 questions were prepared for the target population of 524 male veterans with alcohol and drug addictions from the Coatesville and Philadelphia VA Medical Centers. Face-to-face interviews were conducted over a six-month period, in which researchers improved the survey by not only seeking answers to the questions but also asking participants whether they understood the meaning of the questions and whether others would interpret the questions in the same manner. These "asking, listening, re-asking, and re-thinking" procedures eventually narrowed the survey down to 164 items categorized into 7 aspects.
The third version of the ASI was established in 1980. This version adopted a ten-point severity rating assessed during patients' interviews with clinical staff. Clinical staff complained, however, that the rating was difficult to assign due to insufficient summary information, and thus the interviewer severity rating (ISR) was proposed. Several drawbacks of the ISR, such as its reliance on subjective data and its low flexibility, made it hard to apply in clinical practice. With this concern, quantitative composite scores (CSs), derived from clinical trial and error, were applied. Both the CSs and the ISR have shown test-retest reliability and were used in the ASI (for details, see the scoring section).
Owing to new findings in drug and alcohol abuse treatment in the 1990s, newer items regarding addiction-related disorders, drug use and route of drug administration, antisocial personality disorder and trauma were added on top of the pre-existing framework of the ASI-3, leading to the publication of the ASI-5 in 1992.
Structure.
Content.
The ASI-5 survey contains a total of 164 items. It opens with the general background of the patient (n = 28); the remaining areas are then covered in an order reflecting the participants' privacy preferences recorded during the development of the ASI: 1) medical health (n = 11), 2) employment/support status (n = 24), 3 & 4) drug/alcohol use (n = 35), 5) illegal activity/legal status (n = 32), 6) family/social relationships (n = 38) and 7) psychiatric health (n = 23).
The general situation in each area is inquired about first; patients are then asked, in certain questions, to rate their subjective feelings about that area. Next, the interviewer estimates a score on the interviewer severity rating based on both the objective and subjective information gathered. Lastly, a confidence rating is given by the interviewer. In the medical, alcohol, drug and psychiatric sections there are the "Final Three" questions (the number of questions is not always limited to three) placed before the estimation of the interviewer severity rating; these questions are logically related. For instance, in the "medical health" area, question 6 inquires about the frequency of experiencing medical problems in the last 30 days, question 7 inquires about the frequency of being troubled by these medical problems (referring to question 6) and question 8 inquires about the importance of treatment for these medical problems (also referring to question 6). It can be seen that if question 6 is answered 0, questions 7 and 8 should also be answered 0. On the other hand, if question 6 is a non-zero positive number, questions 7 and 8 should also be answered with non-zero positive numbers.
The confidence rating contains 2 questions, rated by the interviewers, confirming the full understanding of the interviewees. These items ensure that there are no misrepresentations by the patients and that they are able to understand all parts of the questions under certain sections.
Interview process.
Despite the feasibility of self-administration of the ASI, with similar consistency to a face-to-face interview, the survey is preferably conducted as an interactive interview in private, as this ensures interviewees can understand all the questions through repeating, paraphrasing and probing, and serves as a gesture of politeness and support to patients. The first interview, done at admission, is estimated to take 45–75 minutes, and the follow-up interview 25–30 minutes.
Application.
The original purpose of ASI was to serve as a standardized data collection instrument for clinical staff for the determination of the severity of the addiction of patients through objective and subjective information. The estimated severity rating would guide the clinicians to determine the urgency of treatments. It was also designed for research staff to test for the efficacy of interventions by comparing the before and after results of the ASI using CSs.
The popularity of the ASI grew as versions of the ASI-5 in other languages were found to be equally reliable and valid. Its use also spread beyond the fields of medicine and research, and its application to populations beyond the substance-dependent treatment population increased its versatility. Since 2000, the ASI has been used in sectors ranging from welfare to criminal justice to employment. It has also been used in conjunction with other indexes for extensive reviews of not only the effectiveness but also the cost-effectiveness of novel treatments. Lastly, the ASI is adopted in several pharmacovigilance studies by pharmaceutical companies to test products' abuse liability.
Scoring.
Definition.
The scoring system enables clinicians to determine the severity of a patient's addiction, which is defined as the need for treatment where there currently is none, or for an additional form or type of treatment where the patient is currently receiving some form of treatment, instead of a deviation from optimum function. For example, a patient with extremely poor uncorrected vision who has been fitted with glasses would have a pathological condition classified as severe, yet under the definition used in the ASI their severity would be rated as minimal, as they are well adjusted for daily activities. In addition, it is of paramount importance to understand that the ratings do not indicate the potential benefits from treatments but depict the extent to which some form of effective intervention is needed, regardless of its existence and availability.
Scoring system.
In clinical practice, two scores are derived for each section by reviewing the patient's situation over two time frames: their lifetime and the 30 days preceding the interview. The scores from each section, including the ISR and the patient severity item, are independent of each other. The ISR is determined by both objective information, which can be verified by tests, and the patient's own judgement of severity. Interviewers gather all the objective information and select a range of scores based on a 10-point system. The system would be listed below:
Once a specific range is selected, the exact score is determined based on the subjective information provided by the patient. This information relates to their subjective perceptions of their addiction over only the 30 days before the initiation of the interview, which they are asked to grade themselves on a 5-point system. The scale would be listed below:
Patients could leave the question blank if they are uncomfortable to answer.
Once the interviewers have selected the suitable range of severity rating, they would further derive the exact score based on the patients' subjective judgement. Should the patient choose higher scores in these specific questions, the higher point of the range would be selected. If lower scores are rated by the patient, a middle or lower score of the range would be pinpointed by the interviewers.
Despite the tested reliability and validity of the ISR and its ability to summarize patients' overall status at clinical admission, it has several drawbacks that make it less favourable for research purposes. Its subjective nature means that biases are easily introduced during research analysis. In addition, its reliability and validity hold only when all the data are available and the interviews are done face to face, which might not be the case in follow-ups.
Composite scores.
In response to the shortcomings of the ISR, composite scores (CSs) were derived specifically to evaluate changes over time and to compare different population groups in research. CSs can be calculated for each of the 7 aspects by combining, with equal weighting, items from specific questions inquiring about the past 30-day status. Because answers can vary widely in scale (e.g. a patient rating of 0–4 versus money earned), each item within a composite is divided by its maximum value and then by the total number of questions in that composite; the results are summed to give a score between 0 and 1.
An example has been depicted in the composite score manual. In the medical sections, three questions are included in the composite score calculation:
A. How many days have you experienced medical problems in the last 30? (Maximum value = 30)
B. How troubled or bothered have you been by your medical problems in the past 30 days? (Maximum value = 4 (Patient's rating))
C. How important to you now is treatment for these medical problems? (Maximum value = 4 (Patient's rating))
If the answers recorded for the three questions were 15 days for A, a rating of 3/4 for B and a rating of 4/4 for C, the score would be calculated using the equation below:
formula_0
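The arithmetic above can be written as a small helper. A minimal sketch (the function name and the restriction to these three medical items are illustrative; actual ASI composites combine more items per domain):

```python
def composite_score(items):
    """Compute an ASI-style composite score for one domain.

    `items` is a list of (answer, maximum_value) pairs. Each answer is
    divided by its maximum value and by the number of items, so the
    summed score always falls between 0 and 1.
    """
    n = len(items)
    return sum(answer / maximum / n for answer, maximum in items)

# Worked example from the medical section above:
# A = 15 of 30 days, B = rating 3 of 4, C = rating 4 of 4.
medical_items = [(15, 30), (3, 4), (4, 4)]
print(round(composite_score(medical_items), 3))  # 0.75
```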
Although the CSs are a measure of problem severity, with high scores indicating higher severity, their intrinsic value would have little meaning and they could not be compared between different aspects probed. CSs are advised to be only used for measuring changes in various time-points of treatments or relative outcomes between groups.
Confidence rating.
There are 2 possible scores that can be given to each of the 2 questions in the confidence rating (0 = No and 1 = Yes). Factors ranging from unjustified contradictions in the information to a lack of confidence in answering, and from patients' misrepresentation to poor understanding of the questions for reasons including, but not limited to, language barriers and illiteracy, all contribute to a poor confidence rating. Interviewers are encouraged to recognize and reconcile the aforementioned issues, but the interview is terminated and rescheduled should the problems not be resolved.
Ongoing improvement of ASI.
As clinicians gained more experience with the ASI-5 in real-life practice, it was pointed out that some questions in the questionnaire might overlap with information collected during admission. In a bid to avoid duplication and the waste of medical resources, a condensed form of the ASI, the ASI-Lite, was introduced in 1997. Modifications included the removal of the interviewer severity rating, the removal of questions pertaining to family/genetic heritability and emotional problems, and the inclusion of research-oriented questions. It consists of 111 items and requires 30–40 minutes to complete. Since most key elements are retained, the ASI-Lite and ASI-5 showed similar reliability and validity.
The ASI-5 and ASI-Lite continued to be used during clinical admission into the 21st century. Yet a useful instrument should be reviewed and re-evaluated over time to keep up with advances in technology and changes in social norms; this, coupled with the wider use of the ASI outside clinical and research purposes, brought fundamental changes in the development of a newer ASI, the ASI-6.
The principle of the revision was to add more content in each domain while shortening training and testing time, as well as retaining the essential elements of the ASI. The addition of new content, such as queries on the date of the most recent occurrence of more severe symptoms, days of hospitalization for mental health problems, and recent patient status ranging from homelessness to pregnancy and from tobacco use to gambling, aimed to provide wider coverage. In addition, a six-month time frame was added for cost-related questions on top of the lifetime and past 30-day time frames, in view of the increased popularity of cost-effectiveness analysis. In view of the additional content, "skip-outs" can be employed on screening questions to keep the interview within an hour. In terms of data analysis, confirmatory non-linear analysis was added to better meet the new applications of the ASI-6. All in all, the ASI-6 is supported for use clinically and in research, with acceptable scalability, reliability and concurrent validity. The ASI will continue to be improved with reference to updated knowledge in psychology and the ever-changing socioeconomic factors in society.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "(15/30)/3+ (3/4)/3+ (4/4)/3= 0.750"
}
]
| https://en.wikipedia.org/wiki?curid=70321243 |
70327644 | Judges 10 | Book of Judges, chapter 10
Judges 10 is the tenth chapter of the Book of Judges in the Old Testament or the Hebrew Bible. According to Jewish tradition the book was attributed to the prophet Samuel, but modern scholars view it as part of the Deuteronomistic History, which spans the books of Deuteronomy to 2 Kings, attributed to nationalistic and devotedly Yahwistic writers during the time of the reformer Judean king Josiah in the 7th century BCE. This chapter records the activities of the judges Tola and Jair, belonging to a section comprising Judges 6 to 9 and a bigger section of Judges 6:1 to 16:31.
Text.
This chapter was originally written in the Hebrew language. It is divided into 18 verses.
Textual witnesses.
Some early manuscripts containing the text of this chapter in Hebrew are of the Masoretic Text tradition, which includes the Codex Cairensis (895), Aleppo Codex (10th century), and Codex Leningradensis (1008).
Extant ancient manuscripts of a translation into Koine Greek known as the Septuagint (originally was made in the last few centuries BCE) include Codex Vaticanus (B; formula_0B; 4th century) and Codex Alexandrinus (A; formula_0A; 5th century).
Analysis.
A linguistic study by Chisholm reveals that the central part in the Book of Judges (Judges 3:7–16:31) can be divided into two panels based on the six refrains that state that the Israelites did evil in Yahweh's eyes:
Panel One
A 3:7
And the children of Israel did evil in the sight of the LORD
B 3:12
And the children of Israel did evil "again" in the sight of the LORD
B 4:1
And the children of Israel did evil "again" in the sight of the LORD
Panel Two
A 6:1
And the children of Israel did evil in the sight of the LORD
B 10:6
And the children of Israel did evil "again" in the sight of the LORD
B 13:1
And the children of Israel did evil "again" in the sight of the LORD
Furthermore from the linguistic evidence, the verbs used to describe the Lord's response to Israel's sin have chiastic patterns and can be grouped to fit the division above:
Panel One
3:8 "and he sold them," from the root "makar"
3:12 "and he strengthened," from the root "khazaq"
4:2 "and he sold them," from the root "makar"
Panel Two
6:1 "and he gave them," from the root "nathan"
10:7 "and he sold them," from the root "makar"
13:1 "and he gave them," from the root "nathan"
Verse 6 of chapter 10 starts a section of Jephthah's Narrative, which can be divided into 5 episodes, each with a distinct dialogue, as follows:
Tola (10:1–2).
The judge Tola (and also the next one, Jair) has only an abbreviated note of what was probably a larger tradition that might have been well known in the past (cf. Othniel in Judges 3:7–11 and Shamgar in 3:31).
"And after Abimelech there arose to defend Israel Tola the son of Puah, the son of Dodo, a man of Issachar; and he dwelt in Shamir in mount Ephraim."
"And he judged Israel twenty-three years. Then he died and was buried at Shamir."
Jair (10:3–5).
Jair of Gilead judged Israel after the crisis that Tola faced had passed, because the reference to his wealth (that he had "thirty sons that rode on thirty ass colts, and they had thirty cities…"; cf. Judges 12:9–14) indicates a time of peace and prosperity, with no preparation for the incoming invasion of the Ammonites that would follow at the end of the 22 years of his rule. The 30 cities in the land of Gilead seem to be related to those in Bashan (cf. 1 Kings 4:13). The notice of the cities suggests what an attractive offer Gilead could make to a new leader who would fight for them and why Jephthah would find it appealing. Jair was buried in Camon (Qamon), identified as modern Qamm north of Tayyiba in northern Galilee.
Israel oppressed again (10:6–18).
This section opens the Jephthah Narrative with a 'conventionalized pattern'—death of judge, backsliding, cry for help— and resumes with a review of Israel's major enemies. In the specific dialogue between the Israelites and YHWH (verses 10–16), Israel confessed her sins of idolatry, then YHWH described his previous saving actions against Israel's unfaithfulness (cf. Hosea 7:11–16), and Israel repented (cf. similar pattern of motifs in Ezra 9, Nehemiah 9, and 2 Chronicles 20; by contrition of people and leaders 2 Chronicles 20:12; 16:8; 12:6–7), so YHWH took pity upon Israel (cf. Exodus 2:23–25) and would send a rescuer.
"And the anger of the Lord was hot against Israel, and he sold them into the hands of the Philistines, and into the hands of the children of Ammon."
Notes.
<templatestyles src="Reflist/styles.css" />
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": " \\mathfrak{G}"
}
]
| https://en.wikipedia.org/wiki?curid=70327644 |
7032835 | Knudsen diffusion | Particle behavior in systems of length less than the mean free path
Knudsen diffusion, named after Martin Knudsen, is a means of diffusion that occurs when the scale length of a system is comparable to or smaller than the mean free path of the particles involved. An example of this is in a long pore with a narrow diameter (2–50 nm) because molecules frequently collide with the pore wall. As another example, consider the diffusion of gas molecules through very small capillary pores. If the pore diameter is smaller than the mean free path of the diffusing gas molecules, and the density of the gas is low, the gas molecules collide with the pore walls more frequently than with each other, leading to Knudsen diffusion.
In fluid mechanics, the Knudsen number is a good measure of the relative importance of Knudsen diffusion. A Knudsen number much greater than one indicates Knudsen diffusion is important. In practice, Knudsen diffusion applies only to gases because the mean free path for molecules in the liquid state is very small, typically near the diameter of the molecule itself.
Mathematical description.
The diffusivity for Knudsen diffusion is obtained from the self-diffusion coefficient derived from the kinetic theory of gases:
formula_0
For Knudsen diffusion, path length λ is replaced with pore diameter formula_1, as species "A" is now more likely to collide with the pore wall as opposed with another molecule. The Knudsen diffusivity for diffusing species "A", formula_2 is thus
formula_3
where formula_4 is the gas constant (8.3144 J/(mol·K) in SI units), the molar mass formula_5 is expressed in units of kg/mol, and "T" is the temperature in kelvins. The Knudsen diffusivity formula_2 thus depends on the pore diameter, the species' molar mass and the temperature. Expressed as a molecular flux, Knudsen diffusion follows the equation for Fick's first law of diffusion:
formula_6
Here, formula_7 is the molecular flux in mol/m²·s, formula_8 is the molar concentration in formula_9. The diffusive flux is driven by a concentration gradient, which in most cases is embodied as a pressure gradient ("i.e." formula_10 therefore formula_11 where formula_12 is the pressure difference between both sides of the pore and formula_13 is the length of the pore).
If we assume that formula_12 is much less than formula_14, the average absolute pressure in the system ("i.e." formula_15) then we can express the Knudsen flux as a volumetric flow rate as follows:
formula_16,
where formula_17 is the volumetric flow rate in formula_18. If the pore is relatively short, entrance effects can significantly reduce the net flux through the pore. In this case, the law of effusion can be used to calculate the excess resistance due to entrance effects rather easily by substituting an effective length formula_19 for formula_13. Generally, the Knudsen process is significant only at low pressure and small pore diameter. However, there may be instances where both Knudsen diffusion and molecular diffusion formula_20 are important. The effective diffusivity of species "A" in a binary mixture of "A" and "B", formula_21, is determined by
formula_22
where formula_23 and formula_24 is the flux of component "i".
For cases where α = 0 (formula_25, i.e. countercurrent diffusion) or where formula_26 is close to zero, the equation reduces to
formula_27
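The expressions above are easy to evaluate numerically. A minimal sketch (the gas, temperature, pore diameter and the value taken for formula_20 are illustrative assumptions, not values from the text):

```python
import math

R_GAS = 8.3144  # gas constant, J/(mol*K)

def knudsen_diffusivity(d, T, M):
    """Knudsen diffusivity D_KA = (d/3) * sqrt(8*R*T / (pi*M)).

    d: pore diameter (m), T: temperature (K), M: molar mass (kg/mol).
    """
    return (d / 3.0) * math.sqrt(8.0 * R_GAS * T / (math.pi * M))

def effective_diffusivity(D_AB, D_KA):
    """Effective diffusivity for the countercurrent case (alpha = 0):
    1/D_Ae = 1/D_AB + 1/D_KA."""
    return 1.0 / (1.0 / D_AB + 1.0 / D_KA)

# Example: nitrogen (M = 0.028 kg/mol) at 300 K in a 20 nm pore,
# with an assumed molecular diffusivity D_AB of 2e-5 m^2/s.
D_K = knudsen_diffusivity(20e-9, 300.0, 0.028)
print(f"D_KA = {D_K:.2e} m^2/s")                          # about 3.2e-6 m^2/s
print(f"D_Ae = {effective_diffusivity(2e-5, D_K):.2e} m^2/s")
```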
Knudsen self diffusion.
In the Knudsen diffusion regime, the molecules do not interact with one another, so that they move in straight lines between points on the pore channel surface. Self-diffusivity is a measure of the translational mobility of individual molecules. Under conditions of thermodynamic equilibrium, a molecule is tagged and its trajectory followed over a long time. If the motion is diffusive, and in a medium without long-range correlations, the squared displacement of the molecule from its original position will eventually grow linearly with time (Einstein’s equation). To reduce statistical errors in simulations, the self-diffusivity, formula_28, of a species is defined from ensemble averaging Einstein’s equation over a large enough number of molecules "N".
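A minimal sketch of this procedure (the trajectory array layout and the use of the three-dimensional form of Einstein's relation are assumptions of this example, not prescribed by the text):

```python
import numpy as np

def self_diffusivity(positions, dt):
    """Estimate the self-diffusivity D_S from Einstein's relation in 3D,
    <|r(t) - r(0)|^2> = 6 * D_S * t, by ensemble-averaging over molecules.

    positions: array of shape (n_frames, n_molecules, 3), unwrapped coordinates.
    dt: time between stored frames.
    """
    displacements = positions - positions[0]                 # r(t) - r(0)
    msd = (displacements ** 2).sum(axis=2).mean(axis=1)      # ensemble-averaged MSD
    times = dt * np.arange(len(msd))
    half = len(msd) // 2                                     # fit only the long-time, linear part
    slope = np.polyfit(times[half:], msd[half:], 1)[0]
    return slope / 6.0
```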
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "{D_{AA*}} = {{\\lambda u} \\over {3}} = {{\\lambda}\\over{3}} \\sqrt{{8R T}\\over {\\pi M_{A}}}"
},
{
"math_id": 1,
"text": "d"
},
{
"math_id": 2,
"text": "D_{KA}"
},
{
"math_id": 3,
"text": "{D_{KA}} = {d u\\over {3}} = {{d\\over{3}}} \\sqrt{{8 R T}\\over {\\pi M_{A}}},"
},
{
"math_id": 4,
"text": "R"
},
{
"math_id": 5,
"text": "M_{A}"
},
{
"math_id": 6,
"text": "J_K = \\nabla n D_{KA}"
},
{
"math_id": 7,
"text": "J_K"
},
{
"math_id": 8,
"text": "n"
},
{
"math_id": 9,
"text": "\\rm mol/m^3"
},
{
"math_id": 10,
"text": "n=P/RT"
},
{
"math_id": 11,
"text": "\\nabla n=\\frac{\\Delta P}{RTl}"
},
{
"math_id": 12,
"text": "\\Delta P"
},
{
"math_id": 13,
"text": "l"
},
{
"math_id": 14,
"text": "P_{\\rm ave}"
},
{
"math_id": 15,
"text": "\\Delta P \\ll P_{\\rm ave}"
},
{
"math_id": 16,
"text": "Q_K=\\frac{\\Delta Pd^3}{6lP_{\\rm ave}} \\sqrt{\\frac{2\\pi RT}{M_A}}"
},
{
"math_id": 17,
"text": "Q_K"
},
{
"math_id": 18,
"text": "\\rm m^3/s"
},
{
"math_id": 19,
"text": "l_{\\rm e}=l+\\tfrac{4}{3}d"
},
{
"math_id": 20,
"text": "D_{AB}"
},
{
"math_id": 21,
"text": "D_{Ae}"
},
{
"math_id": 22,
"text": "\\frac{1}{{{D}_{Ae}}}=\\frac{1-\\alpha {{y}_{a}}}{{{D}_{AB}}}+\\frac{1}{{{D}_{KA}}},"
},
{
"math_id": 23,
"text": "\\alpha = 1 + \\tfrac{{{N}_{B}}}{{{N}_{A}}}"
},
{
"math_id": 24,
"text": "{N}_{i}"
},
{
"math_id": 25,
"text": "N_{A} = -N_{B}"
},
{
"math_id": 26,
"text": "y_{A}"
},
{
"math_id": 27,
"text": "\\frac{1}{{{D}_{Ae}}}=\\frac{1}{{{D}_{AB}}}+\\frac{1}{{{D}_{KA}}}."
},
{
"math_id": 28,
"text": "D_{S}"
}
]
| https://en.wikipedia.org/wiki?curid=7032835 |
70330392 | Topographic Rossby waves | Waves in the ocean and atmosphere created by bottom irregularities
Topographic Rossby waves are geophysical waves that form due to bottom irregularities. For ocean dynamics, the bottom irregularities are on the ocean floor such as the mid-ocean ridge. For atmospheric dynamics, the other primary branch of geophysical fluid dynamics, the bottom irregularities are found on land, for example in the form of mountains. Topographic Rossby waves are one of two types of geophysical waves named after the meteorologist Carl-Gustaf Rossby. The other type of Rossby waves are called planetary Rossby waves and have a different physical origin. Planetary Rossby waves form due to the changing Coriolis parameter over the earth. Rossby waves are quasi-geostrophic, dispersive waves. This means that not only the Coriolis force and the pressure-gradient force influence the flow, as in geostrophic flow, but also inertia.
Physical derivation.
This section describes the mathematically simplest situation where topographic Rossby waves form: a uniform bottom slope.
Shallow water equations.
A coordinate system is defined with x in eastward direction, y in northward direction and z as the distance from the earth's surface. The coordinates are measured from a certain reference coordinate on the earth's surface with a reference latitude formula_0 and a mean reference layer thickness formula_1. The derivation begins with the shallow water equations:
formula_2
where
formula_3 is the velocity in the zonal (eastward) direction,
formula_4 is the velocity in the meridional (northward) direction,
formula_5 is the local and instantaneous thickness of the fluid layer,
formula_6 is the deviation of the layer thickness from its undisturbed value (the surface elevation),
"g" is the gravitational acceleration, and
f0 is the Coriolis parameter at the reference latitude formula_0.
In the equation above, friction (viscous drag and kinematic viscosity) is neglected. Furthermore, a constant Coriolis parameter is assumed ("f-plane approximation"). The first and the second equation of the shallow water equations are respectively called the zonal and meridional momentum equations, and the third equation is the continuity equation. The shallow water equations assume a homogeneous and barotropic fluid.
Linearization.
For simplicity, the system is limited by means of a weak and uniform bottom slope that is aligned with the y-axis, which in turn enables a better comparison to the results with planetary Rossby waves. The mean layer thickness formula_7 for an undisturbed fluid is then defined as
formula_8
where formula_9 is the slope of bottom, formula_10 the topographic parameter and formula_11 the horizontal length scale of the motion. The restriction on the topographic parameter guarantees that there is a weak bottom irregularity. The local and instantaneous fluid thickness formula_5 can be written as
formula_12
Utilizing this expression in the continuity equation of the shallow water equations yields
formula_13
The set of equations is made linear to obtain a set of equations that is easier to solve analytically. This is done by assuming a Rossby number Ro (= advection / Coriolis force), which is much smaller than the temporal Rossby number RoT (= inertia / Coriolis force). Furthermore, the length scale of formula_6 formula_14 is assumed to be much smaller than the thickness of the fluid formula_7. Finally, the condition on the topographic parameter is used and the following set of linear equations is obtained:
formula_15
Quasi-geostrophic approximation.
Next, the quasi-geostrophic approximation Ro, RoT formula_16 1 is made, such that
formula_17
where formula_18 and formula_19 are the geostrophic flow components and formula_20 and formula_21 are the ageostrophic flow components with formula_22 and formula_23. Substituting these expressions for formula_3 and formula_4 in the previously acquired set of equations, yields:
formula_24
Neglecting terms in which the small quantities (formula_25 and formula_9) are multiplied with each other, the expressions obtained are:
formula_26
Substituting the components of the ageostrophic velocity in the continuity equation the following result is obtained:
formula_27
in which R, the Rossby radius of deformation, is defined as
formula_28
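For orientation, evaluating formula_28 with typical mid-latitude open-ocean values (the numbers are illustrative assumptions, not taken from the text) gives a barotropic deformation radius of roughly 2000 km:

```latex
% Barotropic Rossby radius for g = 9.81 m s^{-2}, H_0 = 4000 m, f_0 = 10^{-4} s^{-1}:
R = \frac{\sqrt{g H_0}}{f_0}
  = \frac{\sqrt{9.81 \times 4000}\;\mathrm{m\,s^{-1}}}{10^{-4}\;\mathrm{s^{-1}}}
  \approx 2.0 \times 10^{6}\;\mathrm{m} \approx 2000\;\mathrm{km}.
```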
Dispersion relation.
Taking for formula_6 a plane monochromatic wave of the form
formula_29
with formula_30 the amplitude, formula_31 and formula_32 the wavenumber in x- and y- direction respectively, formula_33 the angular frequency of the wave, and formula_34 a phase factor, the following dispersion relation for topographic Rossby waves is obtained:
formula_35
If there is no bottom slope (formula_36), the expression above yields no waves, but a steady and geostrophic flow. This is the reason why these waves are called topographic Rossby waves.
The maximum frequency of the topographic Rossby waves is
formula_37
which is attained for formula_38 and formula_39. If the forcing creates waves with frequencies above this threshold, no Rossby waves are generated. This situation rarely happens, unless formula_9 is very small. In all other cases formula_40 exceeds formula_41 and the theory breaks down. The reason for this is that the assumed conditions: formula_42 and RoT formula_43 are no longer valid. The shallow water equations used as a starting point also allow for other types of waves such as Kelvin waves and inertia-gravity waves (Poincaré waves). However, these do not appear in the obtained results because of the quasi-geostrophic assumption which is used to obtain this result. In wave dynamics this is called filtering.
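A minimal numerical sketch of this dispersion relation (the parameter values are illustrative assumptions):

```python
import numpy as np

def omega(kx, ky, alpha0, g, f0, R):
    """Topographic Rossby wave frequency:
    omega = (alpha0*g/f0) * kx / (1 + R**2 * (kx**2 + ky**2))."""
    return (alpha0 * g / f0) * kx / (1.0 + R**2 * (kx**2 + ky**2))

# Illustrative mid-latitude values: f0 = 1e-4 s^-1, H0 = 4000 m, bottom slope 1e-3.
g, f0, H0, alpha0 = 9.81, 1.0e-4, 4000.0, 1.0e-3
R = np.sqrt(g * H0) / f0                        # Rossby radius of deformation

kx = np.linspace(1e-7, 5e-6, 500)               # zonal wavenumbers, 1/m
w = omega(kx, 0.0, alpha0, g, f0, R)

print(f"numerical maximum : {w.max():.2e} s^-1 at kx = {kx[w.argmax()]:.2e} 1/m")
print(f"analytic |w|_max  : {abs(alpha0) * g / (2 * abs(f0) * R):.2e} s^-1 at kx = 1/R = {1 / R:.2e} 1/m")
print(f"long-wave c_x     : {alpha0 * g / f0:.1f} m/s")  # maximum phase speed along isobaths
```

The numerical maximum reproduces the analytic expression formula_37 at formula_38 and formula_39.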
Phase speed.
The phase speed of the waves along the isobaths (lines of equal depth, here the x-direction) is
formula_44
which means that on the northern hemisphere the waves propagate with the shallow side at their right and on the southern hemisphere with the shallow side at their left. The equation of formula_45 shows that the phase speed varies with wavenumber so the waves are dispersive. The maximum of formula_45 is
formula_46
which is the speed of very long waves (formula_47). The phase speed in the y-direction is
formula_48
which means that formula_49 can have any sign. The phase speed is given by
formula_50
from which it can be seen that formula_51 as formula_52. This implies that the maximum of formula_53 is the maximum of formula_54.
Analogy between topographic and planetary Rossby waves.
Planetary and topographic Rossby waves are the same in the sense that, if the term formula_55 is exchanged for formula_56 in the expressions above, where formula_57 is the beta-parameter or Rossby parameter, the expression of planetary Rossby waves is obtained. The reason for this similarity is that for the nonlinear shallow water equations for a frictionless, homogeneous flow the potential vorticity q is conserved:
formula_58
with formula_59 being the relative vorticity, which is twice the rotation speed of fluid elements about the z-axis, and is mathematically defined as
formula_60
with formula_61 an anticlockwise rotation about the z-axis. On a beta-plane and for a linearly sloping bottom in the meridional direction, the potential vorticity becomes
formula_62.
In the derivations above it was assumed that
formula_63
so
formula_64
where a Taylor expansion was used on the denominator and the dots indicate higher order terms. Only keeping the largest terms and neglecting the rest, the following result is obtained:
formula_65
Consequently, the analogy that appears in potential vorticity is that formula_66 and formula_67 play the same role in the potential vorticity equation. Rewriting these terms slightly, this boils down to the terms formula_56 and formula_55 seen earlier, which demonstrates the similarity between planetary and topographic Rossby waves. The equation for potential vorticity shows that planetary and topographic Rossby waves exist because of a background gradient in potential vorticity.
The analogy between planetary and topographic Rossby waves is exploited in laboratory experiments that study geophysical flows to include the beta effect which is the change of the Coriolis parameter over the earth. The water vessels used in those experiments are far too small for the Coriolis parameter to vary significantly. The beta effect can be mimicked to a certain degree in these experiments by using a tank with a sloping bottom. The substitution of the beta effect by a sloping bottom is only valid for a gentle slope, slow fluid motions and in the absence of stratification.
Conceptual explanation.
As shown in the last section, Rossby waves are formed because potential vorticity must be conserved. When the surface has a slope, the thickness of the fluid layer formula_5 is not constant. The conservation of the potential vorticity forces the relative vorticity formula_59 or the Coriolis parameter formula_68 to change. Since the Coriolis parameter is constant at a given latitude, the relative vorticity must change. In the figure a fluid moves to a shallower environment, where formula_5 is smaller, causing the fluid to form a crest. When the height is smaller, the relative vorticity must also be smaller. In the figure, this becomes a negative relative vorticity (on the northern hemisphere a clockwise spin) shown with the rounded arrows. On the southern hemisphere this is an anticlockwise spin, because the Coriolis parameter is negative on the southern hemisphere. If a fluid moves to a deeper environment, the opposite is true. The fluid parcel on the original depth is sandwiched between two fluid parcels with one of them having a positive relative vorticity and the other one a negative relative vorticity. This causes a movement of the fluid parcel to the left in the figure. In general, the displacement causes a wave pattern that propagates with the shallower side to the right on the northern hemisphere and to the left on the southern hemisphere.
Measurements of topographic Rossby waves on earth.
From 1 January 1965 until 1 January 1968, the Buoy Project at the Woods Hole Oceanographic Institution dropped buoys on the western side of the Northern Atlantic to measure velocities. The data have several gaps because some of the buoys went missing. Still, the project managed to measure topographic Rossby waves at 500 meters depth. Several other research projects have confirmed that there are indeed topographic Rossby waves in the Northern Atlantic.
In 1988, barotropic planetary Rossby waves were found in the Northwest Pacific basin. Further research done in 2017 concluded that these Rossby waves are not planetary Rossby waves but topographic Rossby waves.
In 2021, research in the South China Sea confirmed that topographic Rossby waves exist.
In 2016, research in the East Mediterranean showed that topographic Rossby waves are generated south of Crete due to lateral shifts of a mesoscale circulation structure over the sloping bottom at 4000 m depth (https://doi.org/10.1016/j.dsr2.2019.07.008).
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\varphi_0"
},
{
"math_id": 1,
"text": "H_0"
},
{
"math_id": 2,
"text": "\\begin{align}\n{\\partial u\\over\\partial t}&+u{\\partial u\\over\\partial x}+v{\\partial u\\over\\partial y}-f_0v = -g{\\partial \\eta\\over\\partial x}\\\\[3pt]\n{\\partial v\\over\\partial t}&+u{\\partial v\\over\\partial x}+v{\\partial v\\over\\partial y}+f_0u = -g{\\partial \\eta\\over\\partial y}\\\\[3pt]\n{\\partial \\eta\\over\\partial t}&+{\\partial \\over\\partial x}hu + {\\partial \\over\\partial y}hv=0,\n\\end{align}"
},
{
"math_id": 3,
"text": "u"
},
{
"math_id": 4,
"text": "v"
},
{
"math_id": 5,
"text": "h"
},
{
"math_id": 6,
"text": "\\eta"
},
{
"math_id": 7,
"text": "H"
},
{
"math_id": 8,
"text": "H = H_0 + \\alpha_0y \\qquad \\text{with} \\qquad \\alpha ={\\left\\vert \\alpha_{0} \\right\\vert L \\over H_0}\\ll1,"
},
{
"math_id": 9,
"text": "\\alpha_0"
},
{
"math_id": 10,
"text": "\\alpha"
},
{
"math_id": 11,
"text": "L"
},
{
"math_id": 12,
"text": "h(x,y,t)=H_0+\\alpha_0 y+\\eta (x,y,t)."
},
{
"math_id": 13,
"text": "{\\partial \\eta\\over\\partial t}+\\left(u{\\partial \\eta \\over\\partial x}+ v{\\partial \\eta \\over\\partial y}\\right)+\\eta\\left({\\partial u \\over\\partial x}+{\\partial v \\over\\partial y}\\right)+ (H_0+\\alpha_0y)\\left({\\partial u \\over\\partial x}+{\\partial v \\over\\partial y}\\right)+\\alpha_0v=0."
},
{
"math_id": 14,
"text": "\\Delta H"
},
{
"math_id": 15,
"text": "\\begin{align}\n{\\partial u\\over\\partial t}&-f_0v = -g{\\partial \\eta\\over\\partial x}\\\\[3pt]\n{\\partial v\\over\\partial t}&+f_0u = -g{\\partial \\eta\\over\\partial y}\\\\[3pt]\n{\\partial \\eta\\over\\partial t}&+H_0\\left({\\partial u \\over\\partial x} + {\\partial v \\over\\partial y}\\right)+\\alpha_0v=0.\n\\end{align}"
},
{
"math_id": 16,
"text": "\\ll"
},
{
"math_id": 17,
"text": "\\begin{align}\nu = \\bar{u}+\\tilde{u} \\qquad &\\text{with} \\qquad \\bar{u}=-{g \\over f_0}{\\partial \\eta \\over \\partial y}\\\\[3pt]\nv = \\bar{v}+\\tilde{v} \\qquad &\\text{with} \\qquad \\bar{v}={g \\over f_0}{\\partial \\eta \\over \\partial x},\\\\[3pt]\n\\end{align}"
},
{
"math_id": 18,
"text": "\\bar{u}"
},
{
"math_id": 19,
"text": "\\bar{v}"
},
{
"math_id": 20,
"text": "\\tilde{u}"
},
{
"math_id": 21,
"text": "\\tilde{v}"
},
{
"math_id": 22,
"text": "\\tilde{u}\\ll\\bar{u}"
},
{
"math_id": 23,
"text": "\\tilde{v}\\ll\\bar{v}"
},
{
"math_id": 24,
"text": "\\begin{align}\n&-{g \\over f_0}{\\partial^2 \\eta \\over \\partial y \\partial t}+{\\partial \\tilde{u} \\over\\partial t}-f_0\\tilde{v} = 0\\\\[3pt]\n&{g \\over f_0}{\\partial^2 \\eta \\over \\partial x \\partial t}+{\\partial \\tilde{v} \\over\\partial t}+f_0\\tilde{u} = 0\\\\[3pt]\n&{\\partial \\eta\\over\\partial t}+H_0\\left({\\partial \\tilde{u} \\over\\partial x} + {\\partial \\tilde{v} \\over\\partial y}\\right)+\\alpha_0 {g \\over f_0}{\\partial \\eta \\over \\partial x}+ \\alpha_0\\tilde{v}=0.\n\\end{align}"
},
{
"math_id": 25,
"text": "\\tilde{u}, \\tilde{v}, {\\partial\\over\\partial t}"
},
{
"math_id": 26,
"text": "\\begin{align}\n&\\tilde{v} = -{g \\over f_0^2}{\\partial^2 \\eta \\over \\partial y \\partial t}\\\\[3pt]\n&\\tilde{u} = -{g \\over f_0^2}{\\partial^2 \\eta \\over \\partial x \\partial t}\\\\[3pt]\n&{\\partial \\eta\\over\\partial t}+H_0\\left({\\partial \\tilde{u} \\over\\partial x} + {\\partial \\tilde{v} \\over\\partial y}\\right)+\\alpha_0 {g \\over f_0}{\\partial \\eta \\over \\partial x}=0.\n\\end{align}"
},
{
"math_id": 27,
"text": "{\\partial \\eta \\over\\partial t}-R^2{\\partial\\over\\partial t}\\nabla^2 \\eta + \\alpha_0{g \\over f_0}{\\partial \\eta \\over \\partial x}=0,"
},
{
"math_id": 28,
"text": "R = {\\sqrt{gH_0}\\over f_0}."
},
{
"math_id": 29,
"text": "\\eta = A \\cos(k_xx+k_yy-\\omega t+\\phi),"
},
{
"math_id": 30,
"text": "A"
},
{
"math_id": 31,
"text": "k_x"
},
{
"math_id": 32,
"text": "k_y"
},
{
"math_id": 33,
"text": "\\omega"
},
{
"math_id": 34,
"text": "\\phi"
},
{
"math_id": 35,
"text": "\\omega = {\\alpha_0 g \\over f_0}{k_x \\over 1+R^2(k_x^2+k_y^2)}."
},
{
"math_id": 36,
"text": "\\alpha_{0}=0"
},
{
"math_id": 37,
"text": "\\left\\vert \\omega \\right\\vert _{max} ={ \\left\\vert \\alpha_0 \\right\\vert g \\over 2\\left\\vert f_0 \\right\\vert R },"
},
{
"math_id": 38,
"text": "k_x = R^{-1}"
},
{
"math_id": 39,
"text": "k_y = 0"
},
{
"math_id": 40,
"text": "\\left\\vert \\omega \\right\\vert_{max}"
},
{
"math_id": 41,
"text": "\\left\\vert f_0 \\right\\vert"
},
{
"math_id": 42,
"text": "\\alpha \\ll 1"
},
{
"math_id": 43,
"text": "\\ll 1"
},
{
"math_id": 44,
"text": "c_x = {\\omega \\over k_x}={\\alpha_0g\\over f_0}{1 \\over 1 + R^2(k_x^2+k_y^2)},"
},
{
"math_id": 45,
"text": "c_x"
},
{
"math_id": 46,
"text": "\\left\\vert c_x \\right\\vert_{max} = {\\alpha_0g \\over f_0},"
},
{
"math_id": 47,
"text": "k_x^2+k_y^2\\rightarrow 0"
},
{
"math_id": 48,
"text": "c_y = {\\omega \\over k_y}={k_x \\over k_y}c_x,"
},
{
"math_id": 49,
"text": "c_y"
},
{
"math_id": 50,
"text": "c = {\\omega \\over k} = {k_x \\over k}c_x,"
},
{
"math_id": 51,
"text": "\\left\\vert c \\right\\vert \\leq \\left\\vert c_x \\right\\vert"
},
{
"math_id": 52,
"text": "\\left\\vert k \\right\\vert = \\sqrt{k_x^2+k_y^2} \\geq \\left\\vert k_x \\right\\vert"
},
{
"math_id": 53,
"text": "\\left\\vert c_x \\right\\vert"
},
{
"math_id": 54,
"text": "\\left\\vert c \\right\\vert"
},
{
"math_id": 55,
"text": "{\\alpha_0 g/f_0}"
},
{
"math_id": 56,
"text": "-\\beta_0R^2"
},
{
"math_id": 57,
"text": "\\beta_0"
},
{
"math_id": 58,
"text": "{dq \\over dt}=0 \\qquad \\text{with} \\qquad q={\\zeta + f \\over h},"
},
{
"math_id": 59,
"text": "\\zeta"
},
{
"math_id": 60,
"text": "\\zeta = {\\partial v \\over \\partial x}-{\\partial u \\over \\partial y},"
},
{
"math_id": 61,
"text": "\\zeta > 0"
},
{
"math_id": 62,
"text": "q = {f_0+\\beta_0 y + \\zeta \\over H_0 + \\alpha_0 y + \\eta}."
},
{
"math_id": 63,
"text": "\\begin{align}\n\\beta_0L \\ll \\left\\vert f_0 \\right\\vert &\\text{ : small planetary number }\\beta \\\\[3pt]\n[\\zeta] \\ll \\left\\vert f_0 \\right\\vert &\\text{ : small Rossby number Ro}\\\\[3pt]\n\\alpha_0L \\ll H_0&\\text{ : small topographic parameter }\\alpha \\\\[3pt]\n\\Delta H \\ll H &\\text{ : linear dynamics},\n\\end{align}"
},
{
"math_id": 64,
"text": "q = { f_0 \\left(1+{\\beta_0y \\over f_0} + {\\zeta \\over f_0} \\right)\\over H_0 \\left(1+{\\alpha_0y \\over H_0}+ {\\eta \\over H_0}\\right)}={f_0\\over H_0} \\left(1+{\\beta_0y \\over f_0} + {\\zeta \\over f_0} \\right)\\left(1-{\\alpha_0y \\over H_0}- {\\eta \\over H_0} + \\ldots\\right),"
},
{
"math_id": 65,
"text": "q\\approx {f_0 \\over H_0}\\left(1+{\\beta_0y \\over f_0} + {\\zeta \\over f_0}-{\\alpha_0y \\over H_0} -{\\eta \\over H_0}\\right)."
},
{
"math_id": 66,
"text": "\\beta_0/f_0"
},
{
"math_id": 67,
"text": "-\\alpha_0/H_0"
},
{
"math_id": 68,
"text": "f"
}
]
| https://en.wikipedia.org/wiki?curid=70330392 |
70345105 | Internal wave breaking | Fluid dynamics process driving mixing in the oceans
Internal wave breaking is a process during which internal gravity waves attain a large amplitude compared to their length scale, become nonlinearly unstable and finally break. This process is accompanied by turbulent dissipation and mixing. As internal gravity waves carry energy and momentum from the environment of their inception, breaking and subsequent turbulent mixing affects the fluid characteristics in locations of breaking. Consequently, internal wave breaking influences even the large scale flows and composition in both the ocean and the atmosphere. In the atmosphere, momentum deposition by internal wave breaking plays a key role in atmospheric phenomena such as the Quasi-Biennial Oscillation and the Brewer-Dobson Circulation. In the deep ocean, mixing induced by internal wave breaking is an important driver of the meridional overturning circulation. On smaller scales, breaking-induced mixing is important for sediment transport and for nutrient supply to the photic zone. Most breaking of oceanic internal waves occurs in continental shelves, well below the ocean surface, which makes it a difficult phenomenon to observe.
The contribution of breaking internal waves to many atmospheric and ocean processes makes it important to parametrize their effects in weather and climate models.
Breaking mechanisms.
Similar to what happens to surface gravity waves near a coastline, when internal waves enter shallow waters and encounter steep topography, they steepen and grow in amplitude in a nonlinear process known as shoaling. As the wave travels over topography with increasing height, bed friction leads to internal waves becoming asymmetrical with an increasing steepness. These nonlinear internal waves on a shallow slope are generally referred to as internal bores. Wave height and energy increase until a critical steepness is reached, whereafter the wave breaks by convective, Kelvin-Helmholtz or parametric subharmonic instability. Due to the relatively small density differences (and thus small restoring forces) over the ocean depth, ocean internal waves may reach amplitudes up to around 100 m. Analogous to surface wave breaking in the region known as the surf zone, internal breaking waves dissipate energy in what is known as the "internal surf zone."
Internal tide breaking.
Internal tidal waves are internal waves at tidal frequency in the ocean, which are generated by the interaction of the tide with the ocean topography. Alongside internal inertial waves, they constitute the majority of the ocean internal wavefield. The internal tides consist of so-called "low modes" and "high modes" with varying vertical wavelengths. As these waves propagate, the high modes tend to dissipate their energy quickly, leading to the low modes to dominate further away from the location of their generation. Low mode internal waves, with wavelengths exceeding 100 km, generated by either tides or winds acting on the sea surface, can travel thousands of kilometers from their regions of generation, where they will eventually encounter sloping topography and break. When this happens, isopycnals become steeper and steeper, where the wavefront is followed by a sharp temperature drop. This then leads to an unstable density profile that eventually overturns and breaks. The magnitude of the topographic slope and the slope of the internal wave beam dictate where internal waves break.
The slope of an internal wave beam (formula_0) can be expressed as the ratio between its horizontal (formula_1) and vertical (formula_2) wavenumbers:
formula_3
where formula_4 is the buoyancy frequency (or Brunt-Väisälä frequency), formula_5 is the Coriolis frequency and formula_6 is the wave frequency in the dispersion relation that governs the propagation of internal waves in a continuously stratified and rotating medium:
formula_7.
In the case that the slope of a downgoing incident internal wave beam is larger than the topographic slope ("supercritical" slope), waves will be reflected downward. In the case that the slope of a downgoing incident internal wave beam is smaller than the topographic slope ("subcritical" slope), however, waves will be reflected upward with reduced wavelength and lower group velocity. Because the energy flux is conserved during reflection, energy density and therefore wave amplitude in the reflected wave must increase with respect to the incident wave. This increase in amplitude and wave steepness results in the waves being subject to breaking. These effects are increased the closer the slope of the internal wave beam is to the magnitude of the topographic slope. When the slope of the beam of the incoming internal wave is equal to the topographic slope, the slope of the topography is referred to as the "critical" slope. Critical slopes and near-critical slopes are important locations for both wave breaking and wave generation via tide-topography interactions.
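A minimal sketch of this criticality criterion (the tidal, Coriolis and buoyancy frequencies used here are illustrative assumptions):

```python
import math

def beam_slope(omega, N, f):
    """Internal-wave beam slope s = sqrt((omega^2 - f^2) / (N^2 - omega^2))."""
    return math.sqrt((omega**2 - f**2) / (N**2 - omega**2))

def criticality(topographic_slope, omega, N, f):
    """Ratio of topographic slope to wave-beam slope; values near 1 indicate a
    near-critical slope, where reflection concentrates wave energy."""
    return topographic_slope / beam_slope(omega, N, f)

# Illustrative values: semidiurnal (M2) tidal frequency, mid-latitude Coriolis
# frequency and a typical deep-ocean buoyancy frequency.
omega_M2 = 1.405e-4   # rad/s
f_mid = 1.0e-4        # rad/s
N_deep = 2.0e-3       # rad/s

print(f"wave-beam slope: {beam_slope(omega_M2, N_deep, f_mid):.3f}")
for topo in (0.01, 0.05, 0.20):
    ratio = criticality(topo, omega_M2, N_deep, f_mid)
    print(f"topographic slope {topo:.2f} -> criticality ratio {ratio:.2f}")
```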
Internal solitary wave breaking.
Owing to the generally long distances traveled by internal tidal waves, they may steepen and form trains of internal solitary waves, or internal solitons. These internal solitons have much shorter wavelengths, on the order of hundreds of meters, making them much steeper than internal tides. The ratio of the topographic slope to the wave steepness can be characterized by the internal Iribarren number:
formula_8
where formula_9 is the topographical slope, formula_10 the internal wave amplitude and formula_11 the wavelength of the internal wave. The internal Iribarren number can be used to classify internal bores into two categories: "canonical bores" and "non-canonical bores". For a gentle slope, as is typical for the continental shelf and nearshore areas, the internal Iribarren number is low (formula_12) such that canonical bores occur. In this case, an incoming internal solitary wave can convert to a packet of solitary waves or boluses as it travels up the slope in a process referred to as fission. This is also called a fission breaker. Canonical bores are generally accompanied by an intense drop in temperature as the wavefront passes by, followed by a gradual increase over time.
In rarer cases, non-canonical (formula_13) bores may occur. In these cases, for an increasing internal Iribarren number (that is, steeper waves or steeper topographic slope), wave breaking can be classified successively as surging, collapsing and plunging breakers (see Breaking wave). Contrary to canonical bores, temperature gradually decreases as the wavefront passes by, followed by a sharp increase in temperature. Due to the steeper topographic slopes associated with non-canonical bores, a larger part of the wave energy is reflected back, meaning there is less turbulent energy that leads to mixing.
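A minimal sketch of the internal Iribarren number defined above (the wave parameters and the classification threshold are illustrative assumptions; reported threshold values differ between studies):

```python
import math

def internal_iribarren(topographic_slope, amplitude, wavelength):
    """Internal Iribarren number: topographic slope divided by the square root
    of the wave steepness, amplitude / wavelength."""
    return topographic_slope / math.sqrt(amplitude / wavelength)

# Illustrative case: a 20 m amplitude internal solitary wave with a 300 m
# wavelength shoaling on a 1% slope.
xi = internal_iribarren(0.01, 20.0, 300.0)
print(f"internal Iribarren number: {xi:.2f}")

# Hypothetical threshold separating low-xi canonical (fission-type) bores from
# high-xi non-canonical (surging/collapsing/plunging) bores.
XI_THRESHOLD = 0.3
print("canonical bore expected" if xi < XI_THRESHOLD else "non-canonical bore expected")
```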
Mixing.
Breaking internal waves are regarded as playing an important role in the mixing of the ocean, based on lab experiments and remote sensing. The effect of internal waves on mixing is also studied extensively in direct numerical simulations. Even though research indicates that internal wave breaking is important for local turbulence, there remains uncertainty in global estimates.
Breaking internal tidal waves can result in turbulent water columns several hundred meters high, and the turbulent kinetic energy may reach levels up to 10,000 times higher than in the open ocean.
Quantifying mixing efficiency.
The intensity of the turbulence caused by breaking internal waves depends mainly on the ratio between the topographical steepness and the wave steepness, known as the internal Iribarren number. A smaller internal Iribarren number correlates with more intense turbulence due to internal wave breaking: a small internal Iribarren number predicts that a large part of the wave energy will be transferred to mixing and turbulence, while a large internal Iribarren number predicts that the wave energy will be reflected back offshore.
Studies express the mixing efficiency as the ratio between the total amount of mixing and the total irreversible energy loss. In other words, the mixing efficiency can generally be defined as the following ratio:
formula_14,
where formula_15 is the mixing efficiency, formula_16 the change in background potential energy due to mixing and formula_17 the total energy expended. Because formula_16 and formula_17 are not directly observable, studies use different definitions to determine the mixing efficiency.
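As a toy illustration of this definition (not of any particular study's method), the ratio can be evaluated directly once estimates of the two energy terms are available; the numbers below are arbitrary assumptions.

```python
def mixing_efficiency(delta_V, E_total):
    """Mixing efficiency gamma = delta_V / E_total, where delta_V is the change in
    background potential energy due to mixing and E_total the total energy expended."""
    return delta_V / E_total

# Arbitrary illustrative numbers (assumptions), e.g. from a lab-scale experiment, in joules
print(mixing_efficiency(delta_V=0.3, E_total=2.0))  # -> 0.15, i.e. 15 %
```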
It is notoriously hard to estimate the mixing efficiency in the ocean, due to practical limitations in measuring ocean dynamics. Besides ocean measurements, the mixing efficiency can also be obtained from lab experiments and numerical simulations, but these have their own limitations, and each of the three approaches uses a slightly different definition of mixing efficiency. In theory the three approaches should give the same estimates for the mixing efficiency, but discrepancies remain between them; as a result, estimates vary and comparisons are difficult due to the differing definitions.
Studies that quantify the mixing properties of breaking internal solitary waves report a range of mixing efficiency estimates, with values between 5% and 25% for laboratory experiments and between 13% and 21% for numerical simulations, depending on the internal Iribarren number.
Mass and sediment transport.
Breaking and shoaling of internal waves have been shown to cause the transport of mass and energy in the form of sediment and heat, but also of nutrients, plankton and other forms of marine life.
Sediment transport.
Wave breaking causes mass and sediment transport that is important for ocean biology and for the shaping of the continental shelves through erosion. The erosion caused by internal wave breaking can result in sediment being suspended and transported offshore. This offshore sediment transport may give rise to the emergence of nepheloid layers, which are in turn important for ocean biology. Direct numerical simulations show that breaking internal waves are also responsible for onshore sediment transport, after which sediment can be deposited or transported elsewhere.
Although many studies show that internal wave breaking leads to sediment transport, their traces in the geologic record remain uncertain. Their sedimentary structures may coexist in turbidites on continental slopes and canyons.
Transport of nutrients.
The mixing and transport of nutrients in the ocean is affected largely by internal wave breaking. The arrival of internal tidal bores has been shown to cause a 10- to 40-fold increase in nutrients on Conch Reef. Here it has been shown that the appearance of internal bores provides a predictable and periodic source of transport that can be important for a diversity of marine life. Large-amplitude internal tidal waves can cause sediments to be resuspended for as long as 5 hours per tidal wave, and internal bores have been shown to play a vital role in the onshore transport of planktonic larvae.
Internal wave breaking may also cause ecological hazards, such as red tides and low dissolved oxygen levels.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "r"
},
{
"math_id": 1,
"text": "k"
},
{
"math_id": 2,
"text": "m"
},
{
"math_id": 3,
"text": "r = \\Bigg| \\frac{k}{m} \\Bigg| = \\sqrt{\\frac{\\omega^2-f^2}{N^2-\\omega^2}}"
},
{
"math_id": 4,
"text": "N"
},
{
"math_id": 5,
"text": "f"
},
{
"math_id": 6,
"text": "\\omega"
},
{
"math_id": 7,
"text": "\\omega^2 = \\frac{N^2k^2 + f^2m^2}{k^2+m^2}"
},
{
"math_id": 8,
"text": "\\xi = \\frac{s}{\\sqrt{a/\\lambda}}"
},
{
"math_id": 9,
"text": "s"
},
{
"math_id": 10,
"text": "a"
},
{
"math_id": 11,
"text": "\\lambda"
},
{
"math_id": 12,
"text": "\\xi << 1"
},
{
"math_id": 13,
"text": "\\xi = O (1)"
},
{
"math_id": 14,
"text": "\\gamma = \\frac{\\Delta V}{E}"
},
{
"math_id": 15,
"text": "\\gamma"
},
{
"math_id": 16,
"text": "\\Delta V"
},
{
"math_id": 17,
"text": "E"
}
]
| https://en.wikipedia.org/wiki?curid=70345105 |
70346758 | Sea surface skin temperature | Quantity in oceanography
The sea surface skin temperature (SSTskin), or ocean skin temperature, is the temperature of the sea surface as determined through its infrared spectrum (3.7–12 μm) and represents the temperature of the sublayer of water at a depth of 10–20 μm. High-resolution data of skin temperature gained by satellites in passive infrared measurements is a crucial constituent in determining the sea surface temperature (SST).
Since the skin layer is in radiative equilibrium with the atmosphere and the sun, its temperature follows a daily cycle. Even small changes in the skin temperature can lead to large changes in atmospheric circulation. This makes skin temperature a widely used quantity in weather forecasting and climate science.
Remote Sensing.
Large-scale sea surface skin temperature measurements started with the use of satellites in remote sensing. The underlying principle of this kind of measurement is to determine the surface temperature via its black body spectrum. Different measurement devices are installed where each device measures a different wavelength. Every wavelength corresponds to different sublayers in the upper 500 μm of the ocean water column. Since this layer shows a strong temperature gradient, the observed temperature depends on the wavelength used. Therefore, the measurements are often indicated with their wavelength band instead of their depths.
History.
The first satellite measurements of the sea surface were conducted as early as 1964 by Nimbus-I. Further satellites were deployed in 1966 and the early 1970s. Early measurements suffered from contamination by atmospheric disturbances. The first satellite to carry a sensor operating on multiple infrared bands was launched late in 1978, which enabled atmospheric correction. This class of sensors is called Advanced very-high-resolution radiometers (AVHRR) and provides information that is also relevant for the tracking of clouds. The current, third generation of these sensors features six channels at wavelength ranges important for cloud observation, cloud/snow differentiation, surface temperature observation and atmospheric correction. The modern satellite array is able to give global coverage with a resolution of 10 km every ~6 h.
Conversion to SST.
Sea surface skin temperature measurements are complemented by SSTsubskin measurements in the microwave regime to estimate the sea surface temperature. These measurements have the advantage of being independent of cloud cover and are subject to less variation. The conversion to SST is done via elaborate retrieval algorithms. These algorithms take additional information like the current wind, cloud cover, precipitation and water vapor content into account and model the heat transfer between the layers. The determined SST is validated by in-situ measurements from ships, buoys and profilers. On average, the skin temperature is estimated to be systematically cooler by 0.15 ± 0.1 K compared to the temperature at 5 m depth.
Vertical temperature profile of the sea surface.
The vertical temperature profile of the surface layer of the ocean is determined by different heat transport processes. At the very interface, the ocean is in thermal equilibrium with the atmosphere, and the exchange there is dominated by conductive and diffusive heat transfer. Also, evaporation takes place at the interface and thus cools the skin layer. Below the skin layer lies the subskin layer, which is defined as the layer where molecular and viscous heat transfer dominates. At larger depths, in the much thicker foundation layer, turbulent heat transport through eddies contributes most to the vertical heat transfer.
During the day, there is additional heating by the sun. The solar radiation entering the ocean heats the surface following the Beer-Lambert law. Approximately five percent of the incoming radiation is absorbed in the upper 1 mm of the ocean. Since the heating from above leads to a stable stratification, other processes dominate the heat transport, depending on the considered scale.
Regarding the skin layer with thickness formula_0, the turbulent diffusion term formula_1 is negligible. For the stationary case without external heating, the vertical temperature profile obeys the following energy budget:
formula_2
Here, formula_3 and formula_4 denote the density and heat capacity of water, formula_5 the molecular thermal conductivity and formula_6 the vertical partial derivative of the temperature. The vertical heat flux formula_7 consists of latent heat release, sensible heat fluxes and the net longwave thermal radiation. The formula_7 observed in the skin layer is positive, which corresponds to a temperature increasing with depth (note that the z-axis points downward into the ocean); this leads to a cool skin layer. A common empirical description of the vertical temperature profile within the skin layer of depth formula_0 is:
formula_8
Here, formula_9 and formula_10 denote the temperature of the surface and of the lower boundary. When the diurnal heating is included, an additional heating term has to be added, which depends on the absorbed shortwave radiation. Integrating over formula_11, we can express the temperature at depth formula_0 as:
formula_12
where formula_13 is the net shortwave solar radiation at the ocean interface and formula_14 is the fraction of it absorbed above depth formula_0. The diurnal heating reduces the cool skin effect. The maximum temperature can be found in the subskin layer, where the external heating per unit depth is lower than in the skin layer, but where the surface cooling has a smaller effect. With further increasing depth, the temperature declines, as the proportional heating is smaller and the layer is mixed via turbulent processes.
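A short numerical sketch of the empirical skin-layer profile given above follows; the skin-layer thickness and the two boundary temperatures are assumed, illustrative values rather than observations.

```python
import numpy as np

# Assumed illustrative values (not taken from the text)
delta = 0.5e-3   # skin-layer thickness [m]
T_s = 19.85      # temperature at the surface [deg C]
T_b = 20.00      # temperature at the lower boundary of the skin layer [deg C]

# Empirical profile T(z) = T_b + (T_s - T_b) * exp(-z / delta), z positive downward
z = np.linspace(0.0, 5 * delta, 100)
T = T_b + (T_s - T_b) * np.exp(-z / delta)

# The cool-skin signal decays over a few skin-layer thicknesses
print(f"T at the surface: {T[0]:.3f} C, T at 5*delta: {T[-1]:.3f} C")
```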
Variation of skin temperature.
Daily cycle.
The ocean skin temperature is defined as the temperature of the water at 20 μm depth. This means that the SSTskin is very dependent on the heat flux from the ocean to the atmosphere. This results in diurnal warming of the sea surface: high temperatures occur during the day and low temperatures during the night (especially under clear skies and low wind speed conditions).
Because the SSTskin can be measured by satellites and is the temperature almost at the interface of the ocean and the atmosphere, it is a very useful measure for estimating the heat flux from the ocean. The increased heat flux due to diurnal warming can reach as high as 50-60 W/m2 and has a temporal mean of 10 W/m2. These amounts of heat flux cannot be neglected in atmospheric processes.
Wind and interaction with the atmosphere.
The sea surface temperature is also highly dependent on wind and waves. Both processes cause mixing and therefore cooling or heating of the SSTskin. For example, when rough seas occur during the day, colder water from lower layers is mixed into the ocean skin. When gravity waves are present at the sea surface, there is a modulation of the ocean skin temperature. In this modulation, the wind plays an important role: the magnitude of the modulation depends on the wind speed, and the phase is determined by the direction of the wind relative to the waves. When the wind and wave directions are similar, maximum temperatures occur on the forward side of the wave, and when the wind blows from the opposite side compared to the waves, maximum temperatures are found at the rear face of the wave.
Interaction with marine lifeforms.
On a global scale, skin temperature is an indicator of plankton concentrations. In areas where a relatively cold SSTskin is measured, abundance of phytoplankton is high. This effect is caused by the rise of cold, nutrient-rich water from the sea bottom in these regions. This increase in nutrients causes phytoplankton to thrive. On the other hand, relatively high SSTskin is an indication of higher zooplankton concentrations. These plankton depend on organic matter to thrive and higher temperatures increase production.
On more local scales, surface accumulations of cyanobacteria can cause local increases in SSTskin by up to 1.5 degrees Celsius. Cyanobacteria are bacteria that photosynthesize and therefore chlorophyll is present in these bacteria. This increased chlorophyll concentration causes more absorption of incoming radiation. This increased absorption causes the temperature of the sea surface to rise. This increased temperature is most likely only apparent in the first meter and definitely only in the first five meters, after which no increased temperatures are measured.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\delta"
},
{
"math_id": 1,
"text": "K_w"
},
{
"math_id": 2,
"text": "\\rho_wc_wk_w\\frac{\\partial T}{\\partial z}=Q=LH+SH+LW,"
},
{
"math_id": 3,
"text": "\\rho_w"
},
{
"math_id": 4,
"text": "c_w"
},
{
"math_id": 5,
"text": "k_w"
},
{
"math_id": 6,
"text": "\\tfrac{\\partial T}{\\partial z}."
},
{
"math_id": 7,
"text": "Q"
},
{
"math_id": 8,
"text": "\\overline{T}(z)=T_b+(T_s-T_b)*e^{-z/\\delta}"
},
{
"math_id": 9,
"text": "T_s"
},
{
"math_id": 10,
"text": "T_b"
},
{
"math_id": 11,
"text": "z"
},
{
"math_id": 12,
"text": "T_{z=\\delta} =T_s -\\frac{\\delta}{\\rho_wc_wk_w} (Q+R_sf_s)"
},
{
"math_id": 13,
"text": "R_s"
},
{
"math_id": 14,
"text": "f_s"
}
]
| https://en.wikipedia.org/wiki?curid=70346758 |
70347238 | Judges 11 | Book of Judges, chapter 11
Judges 11 is the eleventh chapter of the Book of Judges in the Old Testament or the Hebrew Bible. According to Jewish tradition the book was attributed to the prophet Samuel, but modern scholars view it as part of the Deuteronomistic History, which spans the books of Deuteronomy to 2 Kings and is attributed to nationalistic and devotedly Yahwistic writers during the time of the reformer Judean king Josiah in the 7th century BCE. This chapter records the activities of the judge Jephthah, belonging to a section comprising Judges 6:1 to 16:31.
Text.
This chapter was originally written in the Hebrew language. It is divided into 40 verses.
Textual witnesses.
Some early manuscripts containing the text of this chapter in Hebrew are of the Masoretic Text tradition, which includes the Codex Cairensis (895), Aleppo Codex (10th century), and Codex Leningradensis (1008).
Extant ancient manuscripts of a translation into Koine Greek known as the Septuagint (originally made in the last few centuries BCE) include Codex Vaticanus (B; formula_0B; 4th century) and Codex Alexandrinus (A; formula_0A; 5th century).
Analysis.
A linguistic study by Chisholm reveals that the central part in the Book of Judges (Judges 3:7–16:31) can be divided into two panels based on the six refrains that state that the Israelites did evil in Yahweh's eyes:
Panel One
A 3:7
And the children of Israel did evil in the sight of the LORD (KJV)
B 3:12
And the children of Israel did evil "again" in the sight of the LORD
B 4:1
And the children of Israel did evil "again" in the sight of the LORD
Panel Two
A 6:1
And the children of Israel did evil in the sight of the LORD
B 10:6
And the children of Israel did evil "again" in the sight of the LORD
B 13:1
And the children of Israel did evil "again" in the sight of the LORD
Furthermore from the linguistic evidence, the verbs used to describe the Lord's response to Israel's sin have chiastic patterns and can be grouped to fit the division above:
Panel One
3:8 , "and he sold them," from the root , "makar"
3:12 , "and he strengthened," from the root , "khazaq"
4:2 , "and he sold them," from the root , "makar"
Panel Two
6:1 , "and he gave them," from the root , "nathan"
10:7 , "and he sold them," from the root , "makar"
13:1 , "and he gave them," from the root , "nathan"
This chapter contains the Jephthah's Narrative, which can be divided into 5 episodes, each with a distinct dialogue, as follows:
Jephthah and the elders of Gilead (11:1–11).
The Jephthah Narrative follows the pattern of a traditional story about the success of a once marginalized hero who rises to power in a 'non-dynastic' society with 'fluid patterns of leadership'. The hero, Jephthah, was a son of a prostitute, denied rights of inheritance by his father's legitimate children, who then became a 'social bandit' chief and gained the military prowess to lead and save his nation. Faced with an imminent Ammonite threat, the leaders of Gilead tried to woo back Jephthah, whom they had marginalized, by offering him the position of "commander", but when he balked they had to increase the offer to the position of "head" ("chieftain"). The agreement between Jephthah and the elders was sealed in a covenant with YHWH as witness (verse 10).
There is a parallel structure of the dialogue between YHWH and the Israelites in Judges 10:10–16 and the dialogue between Jephthah and the elders of Gilead in Judges 11:4-11.
"Now Jephthah the Gileadite was a mighty man of valour, and he was the son of an harlot: and Gilead begat Jephthah."
Jephthah's diplomacy with the Ammonite king (11:12–28).
The concept of 'just war' was the main subject of the exchange between Jephthah and the king of the Ammonites, arguing about land rights using 'juridical language' (cf. the formula in 2 Chronicles 35:21; 2 Kings 3:13; 1 Kings 17:18). Jephthah demands to know what justifies the Ammonites' invasion of Israel, and the Ammonite king responds by providing a version of the events recorded in Numbers 21:21–31 (cf. Deuteronomy 2:26–35), but paints Israel as the unjust aggressor. In a lengthy response, Jephthah gave a pro-Israelite version of the taking of the disputed territory using three arguments:
Unsurprisingly the Ammonite king rejected Jephthah's arguments, because in an 'enfeebled state' (Judges 10:8–9) Israel should not have power to negotiate, but Jephthah had been willing to give diplomacy a chance before the war and showed himself as the leader of Israel.
Jephthah's vow (11:29–40).
This section contains the fourth part of the Jephthah Narrative recording Jephthah's victory over the Ammonites, which is overshadowed by his ill-considered vow, and a special dialogue between Jephthah and his daughter in verses 34–38. In other ancient Near-Eastern cultures, warriors often promised the deity something of value in return for assistance in war, a particular belief in the efficacy of sacrifice in the ideology of the "ban" (Hebrew: "herem"), which leads to the consecration of valuable commodities after victory (cf. Numbers 21:2–3; the terminology at Deuteronomy 13:16). However, in this case, Jephthah's vow is considered rash and manipulative:
The narrative frames the vow (verses 30–31) within the records of battles and victory over the Ammonites in verses 29 and 32 to show that Jephthah's vow is totally unnecessary, as his last words to the Ammonite king should be sufficient, "Let the Lord, the Judge, decide the dispute this day between the Israelites and the Ammonites" (verse 27), that YHWH would deliver the Ammonites to Jephthah's hands just as YHWH delivered Sihon to the Israelites (verse 21). Despite the understandable reluctance of Jephthah and his daughter (verses 37–38), both decided to carry out the vow (verse 39). The obedience of Jephthah's daughter is remembered and noted in a corresponding structure in verses 37–40 as follows:
Notes.
<templatestyles src="Reflist/styles.css" />
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": " \\mathfrak{G}"
}
]
| https://en.wikipedia.org/wiki?curid=70347238 |
70347239 | Judges 12 | Book of Judges, chapter 12
Judges 12 is the twelfth chapter of the Book of Judges in the Old Testament or the Hebrew Bible. According to Jewish tradition the book was attributed to the prophet Samuel, but modern scholars view it as part of the Deuteronomistic History, which spans the books of Deuteronomy to 2 Kings and is attributed to nationalistic and devotedly Yahwistic writers during the time of the reformist Judean king Josiah in the 7th century BCE. This chapter records the activities of the Biblical judges Jephthah, Ibzan, Elon, and Abdon, belonging to a section comprising Judges 6:1 to 16:31.
Text.
This chapter was originally written in Biblical Hebrew. It is divided into 15 verses.
Textual witnesses.
Some early manuscripts containing the text of this chapter in Hebrew are of the Masoretic Text tradition, which includes the Codex Cairensis (895), Aleppo Codex (10th century), and Codex Leningradensis (1008).
Extant ancient manuscripts of a translation into Koine Greek known as the Septuagint (originally made in the last few centuries BCE) include Codex Vaticanus (B; formula_0B; 4th century) and Codex Alexandrinus (A; formula_0A; 5th century).
Analysis.
A linguistic study by Robert B. Chisholm reveals that the central part in the Book of Judges (Judges 3:7–16:31) can be divided into two panels based on the six refrains that state that the Israelites did evil in Yahweh's eyes:
Panel One
A 3:7
And the children of Israel did evil in the sight of the LORD (KJV)
B 3:12
And the children of Israel did evil "again" in the sight of the LORD
B 4:1
And the children of Israel did evil "again" in the sight of the LORD
Panel Two
A 6:1
And the children of Israel did evil in the sight of the LORD
B 10:6
And the children of Israel did evil "again" in the sight of the LORD
B 13:1
And the children of Israel did evil "again" in the sight of the LORD
Furthermore from the linguistic evidence, the verbs used to describe the Lord's response to Israel's sin have chiastic patterns and can be grouped to fit the division above:
Panel One
3:8 , "and he sold them," from the root , "makar"
3:12 , "and he strengthened," from the root , "khazaq"
4:2 , "and he sold them," from the root , "makar"
Panel Two
6:1 , "and he gave them," from the root , "nathan"
10:7 , "and he sold them," from the root , "makar"
13:1 , "and he gave them," from the root , "nathan"
This chapter contains the Jephthah's Narrative, which can be divided into 5 episodes, each with a distinct dialogue, as follows:
Jephthah and the Ephraimites (12:1–7).
This section contains the fifth (and final) episode in the Jephthah Narrative. As with Gideon in Judges 8:1–3, the Ephraimites complained that they had not been asked to join in the battle (so they could also enjoy the spoils), but this time it ended in a civil war, in which the Gileadites, unified by Jephthah, had the upper hand. The Gileadites used the pronunciation of the Hebrew word "Shibboleth" to distinguish the Ephraimites, so they could kill them.
Ibzan (12:8–10).
Ibzan succeeded Jephthah as judge for seven years. He had thirty sons and thirty daughters, and when he died, he was buried in his native town, Bethlehem. Since the name is not followed by "Ephratah" or by "Judah", it could be the Bethlehem of Galilee in the territory of Zebulun (Joshua 19:15).
Elon (12:11–12).
Elon, the tenth judge, succeeded Ibzan. Very few details and no historical exploits are recorded of him, other than that he was from the tribe of Zebulun and judged Israel for ten years. When he died, he was buried in Aijalon in the territory of Zebulun.
Abdon (12:13–15).
Abdon, the son of Hillel of Pirathon of the tribe of Ephraim, succeeded Elon. He had forty sons and thirty grandsons and judged Israel for eight years, restoring order in the central area of Israel in the aftermath of the civil war involving Jephthah and the Gileadites.
Notes.
<templatestyles src="Reflist/styles.css" />
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": " \\mathfrak{G}"
}
]
| https://en.wikipedia.org/wiki?curid=70347239 |
70347242 | Judges 13 | Book of Judges, chapter 13
Judges 13 is the thirteenth chapter of the Book of Judges in the Old Testament or the Hebrew Bible. According to Jewish tradition the book was attributed to the prophet Samuel, but modern scholars view it as part of the Deuteronomistic History, which spans the books of Deuteronomy to 2 Kings and is attributed to nationalistic and devotedly Yahwistic writers during the time of the reformer Judean king Josiah in the 7th century BCE. This chapter records the activities of the judge Samson, belonging to a section comprising Judges 13 to 16.
Text.
This chapter was originally written in the Hebrew language. It is divided into 25 verses.
Textual witnesses.
Some early manuscripts containing the text of this chapter in Hebrew are of the Masoretic Text tradition, which includes the Codex Cairensis (895), Aleppo Codex (10th century), and Codex Leningradensis (1008).
Extant ancient manuscripts of a translation into Koine Greek known as the Septuagint (originally made in the last few centuries BCE) include Codex Vaticanus (B; formula_0B; 4th century) and Codex Alexandrinus (A; formula_0A; 5th century).
Analysis.
A linguistic study by Chisholm reveals that the central part in the Book of Judges (Judges 3:7–16:31) can be divided into two panels based on the six refrains that state that the Israelites did evil in Yahweh's eyes:
Panel One
A 3:7
And the children of Israel did evil in the sight of the LORD (KJV)
B 3:12
And the children of Israel did evil "again" in the sight of the LORD
B 4:1
And the children of Israel did evil "again" in the sight of the LORD
Panel Two
A 6:1
And the children of Israel did evil in the sight of the LORD
B 10:6
And the children of Israel did evil "again" in the sight of the LORD
B 13:1
And the children of Israel did evil "again" in the sight of the LORD
Furthermore from the linguistic evidence, the verbs used to describe the Lord's response to Israel's sin have chiastic patterns and can be grouped to fit the division above:
Panel One
3:8 , "and he sold them," from the root , "makar"
3:12 , "and he strengthened," from the root , "khazaq"
4:2 , "and he sold them," from the root , "makar"
Panel Two
, "and he gave them," from the root , "nathan"
10:7 , "and he sold them," from the root , "makar"
13:1 , "and he gave them," from the root , "nathan"
Chapters 13–16 contains the "Samson Narrative" or "Samson Cycle", a highly structured poetic composition with an 'almost architectonic tightness' from a literary point-of-view. The entire section consists of 3 "cantos" and 10 subcantos and 30 canticles, as follows:
The distribution of the 10 subcantos into 3 cantos is a regular 2 + 4 + 4, with the number of canticles per subcanto as follows:
The number of strophes per canticle in each canto is quite uniform with numerical patterns in Canto II showing a 'concentric symmetry':
The structure regularity within the whole section classifies this composition as a 'narrative poetry' or 'poetic narrative'.
"And the children of Israel did evil again in the sight of the Lord; and the Lord delivered them into the hand of the Philistines forty years."
Israel oppressed by the Philistines (13:1).
The oppression of the Israelites by the Philistines, briefly mentioned in Judges 10:7, is stated here again with the standing formula: "And the children of Israel did evil again in the sight of the Lord" (cf. Judges 10:6; Judges 4:1; Judges 3:12).
Birth of Samson (13:2–25).
The birth narrative of Samson follows the pattern of heroes' births in the Israelite tradition, starting with a barren mother (cf. Sarah, Rebekah, Rachel, and Hannah) receiving an annunciation (verse 3), a special theophany usually with women as the 'primary recipients' (cf. Mary in Luke 1), accompanied by specific instructions for the mother and son (verses 4-6) causing an expression of fear or awe (verse 22; cf. Rebekah in Genesis 25:22–23; Hagar in Genesis 16:11–12; Sarah and Abraham in Genesis 18). Samson's nazirite identity (verses 4–6; 7; 14) is in accordance with the description in the Priestly text of Numbers 6:1–21, but among the nazirite characteristics, the specific motif of hair is especially central to the Samson Narrative. Samson's mother was unnamed, although she was the one receiving the important message about the birth and especially about the hair (verse 5), and she appears to be calmer (and more ready to believe the message) than her named husband (Manoah), who was fearful and unsure (cf. verses 8, 12, 16, and 21 with 6–7, 10, 23). As a confirmation of her importance in the narrative, she is the one who names the boy, Samson ("man of the sun"; in Hebrew: "simson", whereas "semes" means "sun"), following the tradition of naming the child in the Hebrew Bible (cf. Hannah in 1 Samuel 1:20; Eve in Genesis 4:1, and the matriarchs, Leah and Rachel).
The narrative in verses 3–24 has a structure that almost parallels with Judges 16 in terms of text arrangement:
The woman is barren (13:2)
1) an inclusion
1. A. messenger appears (Hebrew: "wyr") to the woman (13:3–5)
2. B. the woman tells her husband (13:6–8)
3. C. he prays that the messenger come again (13:9)
4. A'. the messenger comes again to the woman (13:9)
5. B'. she tells the man has appeared (Hebrew: "nr'h") (13:10)
2) fourfold asking and answer discourse (13:11–18)
1. First question and answer (13:11)
2. Second question and answer (13:12–14)
3. A request and reply (13:15–16)
4. Fourth question and answer (13:17–18)
3) an inclusion
1. Manoah takes (Hebrew: "wyqh") a kid and cereal offering (13:19)
2. messenger does not appear again (13:20–21)
3. Manoah knew he was messenger of YHWH (13:21–22)
4. YHWH would not have taken (Hebrew: "lqh") the burnt offering (13:23)
The woman bears a son (13:24)
"And the Spirit of the Lord began to move upon him at Mahaneh Dan between Zorah and Eshtaol."
Notes.
<templatestyles src="Reflist/styles.css" />
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": " \\mathfrak{G}"
}
]
| https://en.wikipedia.org/wiki?curid=70347242 |
70347258 | Eddy pumping | Ocean dynamics
Eddy pumping is a component of mesoscale eddy-induced vertical motion in the ocean. It is a physical mechanism through which vertical motion is created from variations in an eddy's rotational strength. Cyclonic (Anticyclonic) eddies lead primarily to upwelling (downwelling). It is a key mechanism driving biological and biogeochemical processes in the ocean such as algal blooms and the carbon cycle.
The mechanism.
Eddies have a re-stratifying effect, which means they tend to organise the water in layers of different density. These layers are separated by surfaces called isopycnals. The re-stratification of the mixed layer is strongest in regions with large horizontal density gradients, known also as “fronts”, where the geostrophic shear and potential energy provide an energy source from which baroclinic and symmetric instabilities can grow. Below the mixed layer, a region of rapid density change (or pycnocline) separates the upper and lower water, hindering vertical transport.
Eddy pumping is a component of mesoscale eddy-induced vertical motion. Such vertical motion is caused by the deformation of the pycnocline. It can be conceptualised by assuming that ocean water has a density surface with a mean depth averaged over time and space. This surface separates the upper ocean, corresponding to the euphotic zone, from the lower, deep ocean. When an eddy transits through, this density surface is deformed. Depending on the phase of the eddy's lifespan, this will create vertical perturbations in different directions. Eddy lifespans are divided into formation, evolution and destruction. Eddy-pumping perturbations are of three types, associated with cyclonic eddies, anticyclonic eddies and mode-water eddies.
Eddy-centric approach.
Mode-water eddies have a complex density structure. Due to their shape, they cannot be distinguished from regular anticyclones in an eddy-centric (focused on the core of the eddy) analysis based on sea level height. Nonetheless, eddy-pumping-induced vertical motion in the euphotic zone of mode-water eddies is comparable to that in cyclones. For this reason, only the cyclonic and anticyclonic mechanisms of eddy-pumping perturbations are explained.
Conceptual explanation based on sea-surface level.
An intuitive description of this mechanism is what is referred to as an eddy-centric analysis based on sea-surface level. In the Northern hemisphere, anticlockwise rotation in cyclonic eddies creates a divergence of horizontal surface currents due to the Coriolis effect, leading to a depressed water surface. To compensate for the inhomogeneity of surface elevation, isopycnal surfaces are uplifted toward the euphotic zone and incorporation of deep-ocean, nutrient-rich waters can occur.
Physical explanation.
Conceptually, eddy pumping associates the vertical motion in the interior of eddies with temporal changes in eddy relative vorticity. The vertical motion created by the change in vorticity is understood from the characteristics of the water contained in the core of the eddy. Cyclonic eddies rotate anticlockwise (clockwise) in the Northern (Southern) hemisphere and have a cold core. Anticyclonic eddies rotate clockwise (anticlockwise) in the Northern (Southern) hemisphere and have a warm core. The temperature and salinity difference between the eddy core and the surrounding waters is the key element driving vertical motion. While propagating in the horizontal direction, cyclones and anticyclones "bend" the pycnocline upwards and downwards, respectively, as a result of this temperature and salinity discrepancy. The extent of the vertical perturbation of the density surface inside the eddy (compared to the mean ocean density surface) is determined by the changes in rotational strength (relative vorticity) of the eddy.
Ignoring horizontal advection in the density conservation equation, the density changes due to changes in vorticity can be directly related to vertical transport. This assumption is coherent with the idea of vertical motion occurring at the eddy centre, in correspondence to variations of a perfectly circular flow.
formula_0
Through such mechanism eddy pumping generates upwelling of cold, nutrient rich deep waters in cyclonic eddies and downwelling of warm, nutrient poor, surface water in anticyclonic eddies.
Dependency on the phase of lifespan.
Eddies weaken over time due to kinetic energy dissipation. As eddies form and intensify, the mechanisms mentioned above strengthen and, as an increase in relative vorticity generates perturbations of the isopycnal surfaces, the pycnocline deforms. On the other hand, when eddies have aged and carry low kinetic energy, their vorticity diminishes, leading to eddy destruction. This process opposes eddy formation and intensification, as the pycnocline returns to its original position prior to the eddy-induced deformation. This means that the pycnocline will rise in anticyclones and deepen in cyclones, leading to upwelling and downwelling, respectively.
Eddy pumping characteristics.
The direction of vertical motion in cyclonic and anticyclonic eddies is independent of the hemisphere. Observed vertical velocities due to eddy pumping are on the order of one meter per day. However, there are regional differences: in regions where kinetic energy is higher, such as in the western boundary currents, eddies are found to generate stronger vertical currents than eddies in the open ocean.
Limitations.
When describing vertical motion in eddies, it is important to note that eddy pumping is only one component of a complex mechanism. Another important factor to take into account, especially when considering ocean-wind interaction, is the role played by eddy-induced Ekman pumping. Some other limitations of the explanation above are due to the idealised, quasi-circular linear dynamical response to perturbations, which neglects the vertical displacement that a particle can experience moving along a sloping neutral surface. Vertical motion in eddies is a fairly recent research topic that still presents limitations in theory, both due to its complexity and a lack of sufficient observations. Nonetheless, the description presented above is a simplification that helps explain, at least partially, the important role that eddies play in biological productivity, as well as their biogeochemical role in the carbon cycle.
Biological impact.
Recent findings suggest that mesoscale eddies are likely to play a key role in nutrient transport in the open ocean, and thus in the spatial distribution of chlorophyll concentration. Knowledge of the impact of eddy activity is however still limited, as the eddies' contribution has been argued not to be sufficient to maintain the observed primary production through nitrogen supply in parts of the subtropical gyre. Although the mechanisms through which eddies shape ecosystems are not yet fully understood, eddies transport nutrients through a combination of horizontal and vertical processes. Stirring and trapping relate to horizontal nutrient transport, whereas eddy pumping, eddy-induced Ekman pumping, and eddy impacts on mixed-layer depth modulate the vertical nutrient supply. Here, the role played by eddy pumping is discussed.
Cyclonic eddy pumping drives new primary production by lifting nutrient-rich waters into the euphotic zone. Complete utilisation of the upwelled nutrients is guaranteed by two main factors. Firstly, biological uptake takes place in timescales that are much shorter than the average lifetime of eddies. Secondly, because the nutrient enhancement takes place in the eddy's interior, isolated from the surrounding waters, biomass can accumulate until upwelled nutrients are fully consumed.
Main examples.
Evidence of the biological impacts of the eddy pumping mechanism is present in various publications based on observations and modelling of multiple locations worldwide. Eddy-centric chlorophyll anomalies have been observed in the Gulf Stream region and off the west coast of British Columbia (Haida eddies), as well as eddy-induced enhanced biological production in the Weddell-Scotia Confluence in the Southern Ocean, in the northern Gulf of Alaska, in the South China Sea, in the Bay of Bengal, in the Arabian Sea and in the north-western Alboran Sea, to name a few. Estimates of eddy pumping in the Sargasso Sea resulted in a nitrogen flux between 0.24 and 0.5 formula_1. These quantities have been deemed sufficient to sustain a rate of new primary production consistent with estimates for this region.
On a wider ecological scale, eddy-driven variations in productivity influence the trade-off between phytoplankton larval survival and the abundance of predators. These concepts partially explain mesoscale variations in the distribution of larval bluefin tuna, sailfish, marlin, swordfish, and other species. Distributions of adult fishes have also been associated with the presence of cyclonic eddies. Particularly, higher abundances of bluefin tuna and cetaceans in the Gulf of Mexico and blue marlin in the proximity of Hawaii are linked to cyclonic eddy activities. Such spatial patterns extend to seabirds spotted in the vicinities of eddies, including great frigate birds in the Mozambique Channel and albatross, terns, and shearwaters in the South Indian Ocean.
North Atlantic Algal Bloom.
The North Sea is an ideal basin for the formation of algal blooms or spring blooms due to the combination of abundant nutrients and intense Arctic winds that favour the mixing of waters. Blooms are important indicators of the health of a marine ecosystem.
Springtime phytoplankton blooms have been thought to be initiated by seasonal light increase and near-surface stratification. Recent observations from the sub-polar North Atlantic experiment and biophysical models suggest that the bloom may be instead resulting from an eddy-induced stratification, taking place 20 to 30 days earlier than it would occur by seasonal changes. These findings revolutionise the entire understanding of spring blooms. Moreover, eddy pumping and eddy-induced Ekman pumping have been shown to dominate late-bloom and post-bloom biological fields.
Biogeochemistry.
Phytoplankton absorb formula_2 through photosynthesis. When such organisms die and sink to the seafloor, the carbon they absorbed gets stored in the deep ocean through what is known as the biological pump. Recent research has been investigating the role of eddy pumping and, more generally, of vertical motion in mesoscale eddies in the carbon cycle. Evidence has shown that eddy-pumping-induced upwelling and downwelling may play a significant role in shaping the way that carbon is stored in the ocean. Although research in this field has only developed recently, first results show that eddies contribute less than 5% of the total annual export of phytoplankton to the ocean interior.
Plastic pollution.
Eddies play an important role in the sea surface distribution of microplastics in the ocean. Due to their convergent nature, anticyclonic eddies trap and transport microplastics at the sea surface, along with nutrients, chlorophyll and zooplankton. In the North Atlantic subtropical gyre, the first direct observation of sea surface concentrations of microplastics between a cyclonic and an anticyclonic mesoscale eddy has shown an increased accumulation in the latter. Accumulation of microplastics has environmental impacts through its interaction with the biota. Initially buoyant plastic particles (between 0.01 and 1 mm) are submerged below the climatological mixed layer depth mainly due to biofouling. In regions with very low productivity, particles remain within the upper part of the mixed layer and can only sink below it if a spring bloom occurs.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "0={\\partial \\rho\\over\\partial t}+\\nabla\\cdot (\\rho\\textbf{u}) \\longrightarrow {\\partial\\rho\\over\\partial t}=-{\\partial\\over\\partial\\!z}\\rho w"
},
{
"math_id": 1,
"text": "\\frac{mol}{m^2yr}\n"
},
{
"math_id": 2,
"text": "CO_2"
}
]
| https://en.wikipedia.org/wiki?curid=70347258 |
70348415 | Standard linear array | Standard linear arrays
In the context of phased arrays, a standard linear array (SLA) is a uniform linear array (ULA) of interconnected transducer elements, e.g. microphones or antennas, where the individual elements are arranged in a straight line spaced at one half of the smallest wavelength of the intended signal to be received and/or transmitted. Therefore, an SLA is a subset of the ULA category. The reason for this spacing is that it prevents grating lobes in the visible region of the array.
Intuitively one can think of a ULA as spatial sampling of a signal in the same sense as time sampling of a signal. Grating lobes are identical to aliasing that occurs in time series analysis for an under-sampled signal. Per Shannon's sampling theorem, the sampling rate must be at least twice the highest frequency of the desired signal in order to preclude spectral aliasing. Because the beam pattern (or array factor) of a linear array is the Fourier transform of the element pattern, the sampling theorem directly applies, but in the spatial instead of spectral domain. The discrete-time Fourier transform (DTFT) of a sampled signal is always periodic, producing "copies" of the spectrum at intervals of the sampling frequency. In the spatial domain, these copies are the grating lobes. The analog of radian frequency in the time domain is wavenumber, formula_0 radians per meter, in the spatial domain. Therefore, the spatial sampling rate, in samples per meter, must be formula_1. The sampling interval, which is the inverse of the sampling rate, in meters per sample, must be formula_2.
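The half-wavelength criterion can be illustrated numerically. The sketch below (array size, steering angle and the angular cut-off used to exclude the main lobe are arbitrary assumptions) evaluates the array factor of a uniform linear array and shows that λ/2 spacing keeps grating lobes out of the visible region even when the beam is steered, whereas a wider spacing lets a full-strength grating lobe appear.

```python
import numpy as np

def array_factor(n_elem, d_over_lambda, theta_deg, steer_deg=0.0):
    """Normalized array factor magnitude of an n_elem uniform linear array with
    element spacing d (in wavelengths), steered to steer_deg from broadside."""
    theta = np.radians(np.asarray(theta_deg, dtype=float))
    psi = 2.0 * np.pi * d_over_lambda * (np.sin(theta) - np.sin(np.radians(steer_deg)))
    n = np.arange(n_elem)
    return np.abs(np.exp(1j * np.outer(psi, n)).sum(axis=1)) / n_elem

theta = np.linspace(-90.0, 90.0, 3601)   # visible region, degrees from broadside
steer = 60.0                             # assumed steering angle
for d in (0.5, 0.7):                     # half-wavelength versus wider spacing
    af = array_factor(16, d, theta, steer_deg=steer)
    away = np.abs(theta - steer) > 15.0  # exclude the steered main lobe
    print(f"d = {d} lambda: largest lobe away from the main beam = {af[away].max():.2f}")
```

With λ/2 spacing the strongest feature away from the steered main beam is an ordinary sidelobe (about 0.2 of the main-lobe amplitude for a uniform taper), while the 0.7λ spacing produces a second lobe at full main-lobe strength, i.e. a grating lobe.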
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "k = \\frac{2\\pi}{\\lambda}"
},
{
"math_id": 1,
"text": "\\geq 2 \\frac{samples}{cycle} \\times \\frac{k \\frac{radians}{meter}}{2\\pi \\frac{radians}{cycle}}"
},
{
"math_id": 2,
"text": "\\leq \\frac{\\lambda}{2}"
}
]
| https://en.wikipedia.org/wiki?curid=70348415 |
7035148 | Half-logistic distribution | In probability theory and statistics, the half-logistic distribution is a continuous probability distribution—the distribution of the absolute value of a random variable following the logistic distribution. That is, for
formula_0
where "Y" is a logistic random variable, "X" is a half-logistic random variable.
Specification.
Cumulative distribution function.
The cumulative distribution function (cdf) of the half-logistic distribution is intimately related to the cdf of the logistic distribution. Formally, if "F"("k") is the cdf for the logistic distribution, then "G"("k") = 2"F"("k") − 1 is the cdf of a half-logistic distribution. Specifically,
formula_1
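A small numerical sketch of this relation follows; the function names are chosen here for illustration (they are not a standard library API), and the snippet simply checks G(k) = 2F(k) - 1 against the empirical distribution of |Y| for logistic samples.

```python
import numpy as np

rng = np.random.default_rng(0)

def logistic_cdf(k):
    return 1.0 / (1.0 + np.exp(-k))

def half_logistic_cdf(k):
    # G(k) = 2*F(k) - 1 = (1 - exp(-k)) / (1 + exp(-k)), valid for k >= 0
    return (1.0 - np.exp(-k)) / (1.0 + np.exp(-k))

# X = |Y| with Y a standard logistic random variable
x = np.abs(rng.logistic(loc=0.0, scale=1.0, size=200_000))

for k in (0.5, 1.0, 2.0):
    print(k, half_logistic_cdf(k), 2.0 * logistic_cdf(k) - 1.0, np.mean(x <= k))
```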
Probability density function.
Similarly, the probability density function (pdf) of the half-logistic distribution is "g"("k") = 2"f"("k") if "f"("k") is the pdf of the logistic distribution. Explicitly,
formula_2 | [
{
"math_id": 0,
"text": "X = |Y| \\!"
},
{
"math_id": 1,
"text": "G(k) = \\frac{1-e^{-k}}{1+e^{-k}} \\text{ for } k\\geq 0. \\!"
},
{
"math_id": 2,
"text": "g(k) = \\frac{2 e^{-k}}{(1+e^{-k})^2} \\text{ for } k\\geq 0. \\!"
}
]
| https://en.wikipedia.org/wiki?curid=7035148 |
70352727 | Recharge oscillator | Theory to explain the periodical variation of the sea surface temperature and thermocline depth
The recharge oscillator model for El Niño–Southern Oscillation (ENSO) is a theory described for the first time in 1997 by Jin, which explains the periodical variation of the sea surface temperature (SST) and thermocline depth that occurs in the central equatorial Pacific Ocean. The physical mechanisms at the basis of this oscillation are periodical recharges and discharges of the zonal mean equatorial heat content, due to ocean-atmosphere interaction. Other theories have been proposed to model ENSO, such as the delayed oscillator, the western Pacific oscillator and the advective-reflective oscillator. A unified and consistent model was proposed by Wang in 2001, in which the recharge oscillator model is included as a particular case.
Historical Development.
The first attempts to model ENSO were made by Bjerknes in 1969, who understood that ENSO is the result of an ocean-atmosphere interaction (Bjerknes feedback). In 1975 an important step in the comprehension of ENSO was made by Wyrtki, who improved the Bjerknes model by realising that the warm water build-up in the western Pacific is due to a strengthening of the trade winds, and that an El Niño event is triggered by the warm water flowing eastward in the form of Kelvin waves. Although the Bjerknes-Wyrtki model explained the causes that trigger El Niño events, it was not able to deal with the cyclic nature of the whole ENSO. The recurring nature of ENSO was introduced by Cane and Zebiak in 1985, who understood that as a result of an El Niño event the thermocline depth at the equator is shallower than normal. This condition causes the switch to the cold phase, also referred to as the La Niña phase. The model proposed by Cane and Zebiak was the first to take into account the coupled ocean-atmosphere interaction and the ocean-memory system. These two assumptions are the foundation of the model described by Jin in 1997, the recharge oscillator.
A qualitative explanation of the model.
The physical processes behind the recharge oscillator model can be divided into 4 different phases:
Recharge oscillator models.
Idealised recharge oscillator model.
The idealised non-dimensional theory that has been proposed by Jin in 1997 to explain ENSO consists of the mathematical structure described below. It relies on modelling the western and eastern part of the Pacific Ocean as two pools. The assumptions and equations behind the model are explained below.
The thermocline depth anomaly in the eastern part of the basin formula_1 is directly and instantaneously related to the anomaly in the western part formula_2 and to the wind stress anomaly formula_0, according to the relation
formula_3.
The thermocline depth anomaly changes over the western equatorial Pacific are mathematically described by the equation
formula_4
where formula_5 represents the ocean adjustment, characterised by a damping process rate formula_6 due to the mixing and the equatorial energy loss to the boundary layer currents, which occur at the eastern and western sides of the basin. The second term formula_7 represents the Sverdrup transport across the basin, or equivalently the heat transport into or out of the basin; the Sverdrup transport depends on the wind stress curl formula_8. Since formula_9 is directly proportional to the zonally integrated wind stress and its curl, it is possible to approximate formula_10, where formula_11 is a constant. The minus sign in the equation above is due to the fact that a westerly wind stress reduces the thermocline depth in the western basin, while a strengthened trade wind (whose direction is always east to west) increases it.
The previous equations provide a simplified description of the basinwide equatorial oceanic adjustment under the anomalous wind stress forcing.
The SST anomaly formula_12 evolution in time is described by the relation
formula_13
where formula_14 represents the SST relaxation due to damping processes at the rate formula_15, formula_16 takes account of the climatological upwelling, and formula_17 represents the advective feedback. formula_18 is the wind stress averaged over the region where the SST anomaly occurs, and formula_19 and formula_20 are respectively the thermocline and the Ekman pumping feedback coefficients.
As explained in the previous section, the atmospheric response to an SST anomaly is an increased wind stress formula_0, whose orientation depends on the anomaly's sign. The magnitude of the wind stress anomaly is influenced by the zonal area over which the SST is averaged, and is larger if the whole basin is taken into account rather than just its eastern part. This observation allows the relation between formula_21, formula_18 and formula_12 to be approximated:
formula_22
with formula_23 coupling coefficients.
From the previous equations it is possible to derive a linear coupled system, which describes the time evolution of the western thermocline depth anomaly and of the SST anomaly in the eastern side of the basin:
formula_24
formula_25
where formula_26 is initially defined as the sum of the already introduced constants formula_27 and formula_28, and describes the Bjerknes positive feedback hypothesis. As already stated, formula_29 represents the thermocline feedback, formula_30 the Ekman upwelling, and formula_31 the rate of the damping process. Because of the weak local wind stress averaged over the eastern basin, the Ekman upwelling feedback parameter formula_30 is negligible compared to the other two terms, ultimately leading to formula_32.
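A minimal sketch of how this linear system behaves is given below; the non-dimensional parameter values and the forward-Euler time step are arbitrary choices made for illustration, not the calibrated values of Jin's model.

```python
import numpy as np

# Assumed, illustrative non-dimensional parameters (not Jin's calibrated values)
r = 0.1            # damping rate of the western thermocline anomaly
alpha_b = 0.3      # combined wind-stress coupling, alpha * b
gamma = 0.75       # thermocline feedback coefficient
R = 0.0            # Bjerknes feedback parameter, chosen close to neutral

dt, n_steps = 0.01, 4000
T_E, h_W = 1.0, 0.0                 # initial eastern SST and western thermocline anomalies
history = np.empty((n_steps, 2))

for i in range(n_steps):
    dh = -r * h_W - alpha_b * T_E   # discharge/recharge of the western heat content
    dT = R * T_E + gamma * h_W      # Bjerknes + thermocline feedbacks
    h_W += dt * dh
    T_E += dt * dT
    history[i] = T_E, h_W

print(history[::500])               # a slowly decaying oscillation in (T_E, h_W)
```

With these values the trajectory spirals slowly inward in the (T_E, h_W) plane, with the western thermocline anomaly leading the SST anomaly by roughly a quarter of a cycle, which is the recharge-discharge phasing described qualitatively above.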
Improved physical approach.
The model presented above is still highly idealised. There is a similar approach which investigates the same climatological anomalies in the Walker circulation along with ocean surface-layer thickness anomalies, but with a more physical perspective.
The key processes of this recharge-oscillator model still involve the ocean (dynamics, volume conservation and heat budget) and the atmosphere, thus leading to another coupled model that comprehensively describes the mechanism of the different phases of ENSO. The assumptions are slightly different from those of the model described above.
The contribution from the ocean is updated to be that of a reduced-gravity surface layer with an average thickness formula_33. Following a similar approach to the one described above, given the wind stress anomaly formula_0 and one thermocline depth anomaly (for instance formula_2), it is possible to obtain the other depth anomaly (formula_1) through the relation:
formula_34
where formula_35 is the reduced gravity formula_36. By imposing the volume conservation it is possible to see that the only transport that can exist is a meridional transport. This is accomplished through the Sverdrup transport induced by the wind stress anomaly.
The vertically integrated north-south velocity is then:
formula_37.
Nevertheless, in this model the wind stress anomaly in the meridional direction is taken to be zero, leading to a final integrated transport:
formula_38.
The wind anomaly is limited in space, and decreases away from the equator; its limits are set by the equatorial Rossby radius of deformation formula_39, which at the equator assumes a value slightly higher than 200 km. Therefore, the wind anomaly is considered to be maximum at the equator and to reach formula_40 at the limits formula_41 and beyond them. By virtue of this consideration the meridionally integrated transport and the corresponding flow divergence formula_42 can be calculated as:
formula_43
where the formula_44 sign in the total flow refers to the northern boundary (formula_45) and the southern boundary (formula_46).
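To give a feel for the magnitudes these expressions imply, the following sketch evaluates the east-west thermocline tilt and the meridional divergence for an assumed westerly wind-stress anomaly; every numerical value here is an illustrative assumption, not a value from the text.

```python
# Assumed illustrative values
tau = 0.02        # zonal wind-stress anomaly [N/m^2]
rho0 = 1025.0     # reference density [kg/m^3]
H = 150.0         # mean surface-layer thickness [m]
g_prime = 0.03    # reduced gravity g * (delta rho / rho0) [m/s^2]
L = 1.5e7         # zonal extent of the basin [m]
beta0 = 2.3e-11   # meridional gradient of the Coriolis parameter [1/(m s)]
R_eq = 2.4e5      # equatorial Rossby radius of deformation [m]

# East-west thermocline tilt from g' (h_E - h_W) / L = tau / (rho0 * H)
delta_h = tau * L / (rho0 * H * g_prime)

# Meridional (Sverdrup) divergence of the layer, D ~ tau / (rho0 * beta0 * R_eq^2)
D = tau / (rho0 * beta0 * R_eq**2)

print(f"thermocline tilt h_E - h_W ~ {delta_h:.0f} m")
print(f"divergence D ~ {D:.2e} m/s")
```

A positive (westerly) wind-stress anomaly therefore deepens the thermocline in the east relative to the west and, through the divergence term, drains heat content from the western pool, consistent with the discharge phase described above.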
On the basis of what has just been described, the ocean dynamics contribution is complete, and it is therefore possible to estimate the total change in the western thermocline depth as:
formula_47
where the second contribution on the right-hand side is due to the damping of the ocean adjustment associated with boundary layer and lateral processes.
Further considerations can be made about the ocean heat budget. The heat budget is considered to be open only when a temperature anomaly appears on the eastern side of the ocean. In order for the temperature in the eastern Pacific to change, both a zonal velocity anomaly due to wind stress anomaly and vertical advection are needed.
In the first case, the horizontal velocity of heat transporting flow can be considered to be proportional to the wind stress anomaly as: formula_48. This relation is valid as long as the basin taken into consideration is assumed to be purely wind-driven and not influenced by Earth rotation. Positive formula_49 values correspond to positive wind stress anomaly values.
Thereby, the temperature anomaly along the eastern Pacific (advection of the background climatological temperature field) can be seen as:
formula_50
where formula_51 is the difference in temperature between the eastern and the western parts of the basin.
As far as convection is concerned, the vertical convection in the East can be normally estimated as
formula_52
where the bar above the temperature refers to the climatological situation. In this case, formula_53 refers to the upwelling, and its relation with the wind stress can be parameterised as formula_54, where the minus sign ensures that a positive wind stress corresponds to a decrease in the upwelling in the eastern pool. In fact, a positive wind stress anomaly generates a corresponding negative upwelling anomaly formula_55. The resulting reduction of deep cold water generates a temperature anomaly and thus a positive heat flux calculated as:
formula_56
where formula_57 is the vertical temperature difference.
Since the surface water temperature is higher than the deep water temperature (formula_58), this contribution is positive for a positive wind stress anomaly.
Nonetheless, the downward shifting of the water temperature profile (the thermocline deepens by formula_59) implies that the temperature at a depth formula_60 should be considered to be the climatological value found at formula_61. It is important to highlight that the surface temperature, at this point, includes the increase due to the anomaly.
Therefore, the following result is obtained:
formula_62.
Grouping the three contributions from advection, the variation over time of the temperature anomaly due to advection becomes:
formula_63.
In the final contribution, a damping component (first one on the right-hand side) is added in a similar way to what has been done above for formula_64.
It is possible to further assume that the relation between the wind stress anomaly and the temperature anomaly is given by: formula_65 .
Finally, the coupled model is complete and described as follows:
formula_66
formula_67.
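The qualitative behaviour of such a coupled system can be illustrated numerically. The sketch below is not taken from the source; it integrates a generic linear two-variable recharge-oscillator system of the same form as the model above, with purely illustrative (hypothetical) parameter values.

```python
# Minimal sketch (not from the source): the coupled model above is a linear
# two-variable "recharge oscillator" of the form
#   dh_W/dt = -a*T_E - r*h_W,   dT_E/dt = R*T_E + g*h_W.
# The parameter values below are purely illustrative.
import numpy as np

r, a = 0.02, 0.13      # damping of h_W and efficiency of the wind-driven discharge (1/month)
R, g = -0.01, 0.13     # net feedback on T_E and thermocline feedback (1/month)

A = np.array([[-r, -a],
              [g, R]])

# Complex eigenvalues of the linear system imply a (damped) oscillation
lam = np.linalg.eigvals(A)
print(f"oscillation period ~ {2 * np.pi / abs(lam[0].imag):.0f} months")

# Forward-Euler integration of the anomalies [h_W, T_E]
dt, n_steps = 0.1, 6000          # 0.1-month step, 600 months in total
state = np.array([0.0, 1.0])     # initial western thermocline and eastern SST anomalies
trajectory = np.empty((n_steps, 2))
for i in range(n_steps):
    state = state + dt * (A @ state)
    trajectory[i] = state
```

With weak net damping the two anomalies oscillate roughly in quadrature, which produces the elliptical phase-space behaviour discussed in the next section.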
Comparison with real measurements.
Despite the improvements, the previous model is still a simplification of the real mechanism, which is much more complex in its behaviour. The animation clearly shows an elliptical behaviour over time in the relation between temperature and depth anomalies that is not observed in historical observations. The model described above assumes a symmetrical behaviour for the two different phases (El Niño and La Niña), which is not what is observed in reality. For instance, as shown in the work of McPhaden et al. (2000): "air–sea fluxes, which are a negative feedback on SST anomaly growth in the equatorial cold tongue are more effective at heating the ocean during cold phases of ENSO than they are at cooling the ocean during warm phases of ENSO. Alternately, the ability of upwelling and vertical mixing to cool the surface may saturate at some threshold beyond which further thermocline shoaling does not lead to further SST cooling".
{
"math_id": 0,
"text": "\\tau"
},
{
"math_id": 1,
"text": "h_{E}"
},
{
"math_id": 2,
"text": "h_{W}"
},
{
"math_id": 3,
"text": "h_{E}=h_{W}+\\tau"
},
{
"math_id": 4,
"text": "\\frac{dh_{W}}{dt}=-rh_{W}-F_{\\tau} "
},
{
"math_id": 5,
"text": "-rh_{W}"
},
{
"math_id": 6,
"text": "r"
},
{
"math_id": 7,
"text": "-F_{\\tau}"
},
{
"math_id": 8,
"text": "\\nabla\\times\\tau"
},
{
"math_id": 9,
"text": "F_{\\tau}"
},
{
"math_id": 10,
"text": "F_{\\tau}=\\alpha \\tau"
},
{
"math_id": 11,
"text": "\\alpha"
},
{
"math_id": 12,
"text": "T_{E}"
},
{
"math_id": 13,
"text": "\\frac{dT_{E}}{dt} = -cT_{E}+\\gamma h_{E} + \\delta_{s} \\tau_{E}"
},
{
"math_id": 14,
"text": "-cT_{E}"
},
{
"math_id": 15,
"text": "c"
},
{
"math_id": 16,
"text": "+ \\gamma h_{E}"
},
{
"math_id": 17,
"text": "+\\delta_{s} \\tau_{E}"
},
{
"math_id": 18,
"text": "\\tau_{E}"
},
{
"math_id": 19,
"text": "\\gamma"
},
{
"math_id": 20,
"text": "\\delta_{s}"
},
{
"math_id": 21,
"text": "\\tau"
},
{
"math_id": 22,
"text": "\\tau = b T_{E}, \\tau_{E} = b'T_{E} "
},
{
"math_id": 23,
"text": "b,b'"
},
{
"math_id": 24,
"text": "\\frac{dh_{W}}{dt}=-rh_{W}-\\alpha b T_{E} "
},
{
"math_id": 25,
"text": "\\frac{dT_{E}}{dt} = RT_{E}+\\gamma h_{W}"
},
{
"math_id": 26,
"text": "R"
},
{
"math_id": 27,
"text": "\\gamma b, \\delta_{s}b'"
},
{
"math_id": 28,
"text": "-c"
},
{
"math_id": 29,
"text": "\\gamma b"
},
{
"math_id": 30,
"text": "\\delta_{s}b'"
},
{
"math_id": 31,
"text": "-c"
},
{
"math_id": 32,
"text": "R = \\gamma b -c"
},
{
"math_id": 33,
"text": "H"
},
{
"math_id": 34,
"text": "g'\\frac{h_E-h_W}{L}=\\frac{\\tau}{\\rho_0H}"
},
{
"math_id": 35,
"text": "g'"
},
{
"math_id": 36,
"text": "g'= g \\frac{\\Delta \\rho}{\\rho_0}"
},
{
"math_id": 37,
"text": "V_{TOT}=\\frac{1}{\\rho_0\\beta_0} \n(\\frac{\\partial\\tau^y}{\\partial x} - \\frac{\\partial\\tau^x}{\\partial y})"
},
{
"math_id": 38,
"text": "V_{TOT}=-\\frac{1}{\\rho_0\\beta_0} \n\\frac{\\partial\\tau^x}{\\partial y}"
},
{
"math_id": 39,
"text": "R_{EQ}"
},
{
"math_id": 40,
"text": "\\tau = 0"
},
{
"math_id": 41,
"text": "\\pm R_{EQ}"
},
{
"math_id": 42,
"text": "D"
},
{
"math_id": 43,
"text": "V_{TOT}=\\pm\\frac{1}{\\rho_0\\beta_0} \n\\frac{\\tau}{ R_{EQ}} ; \\ \\ \\ \\ \\ D\\sim \\frac{\\tau}{\\rho_0\\beta_0 R_{EQ}^2} "
},
{
"math_id": 44,
"text": "\\pm"
},
{
"math_id": 45,
"text": "+"
},
{
"math_id": 46,
"text": "-"
},
{
"math_id": 47,
"text": "\\frac{dh_W}{dt}= -D - rh_W"
},
{
"math_id": 48,
"text": "U=\\gamma\\tau"
},
{
"math_id": 49,
"text": "U"
},
{
"math_id": 50,
"text": "-u\\frac{\\partial T}{\\partial x} \\sim U \\frac{\\Delta_hT}{L}"
},
{
"math_id": 51,
"text": "\\Delta_hT"
},
{
"math_id": 52,
"text": "-u\\frac{\\partial \\bar{T}}{\\partial z}\\sim - \\bar{\\omega} \\frac{\\bar{T}_{SURF}-\\bar{T}_{-H}}{H}"
},
{
"math_id": 53,
"text": "\\omega"
},
{
"math_id": 54,
"text": "\\bar\\omega = - \\alpha \\bar\\tau"
},
{
"math_id": 55,
"text": "\\tilde\\omega"
},
{
"math_id": 56,
"text": " - \\tilde{\\omega} \\frac{T_{SURF}-T_{-H}}{H} \\sim \\alpha \\tau \\frac{\\Delta_vT}{H}"
},
{
"math_id": 57,
"text": "\\Delta_vT"
},
{
"math_id": 58,
"text": "\\Delta_vT > 0"
},
{
"math_id": 59,
"text": "h_E"
},
{
"math_id": 60,
"text": "-H"
},
{
"math_id": 61,
"text": "-H + h_E"
},
{
"math_id": 62,
"text": "-\\bar{\\omega}(\\frac{T_E}{z}-\\frac{\\partial \\bar{T}}{\\partial z} \\frac{h_E}{H}) \n\\sim - \\frac{\\bar{\\omega}}{H} T_E + \\frac{\\bar{\\omega}\\Delta_vT}{H^2} h_E"
},
{
"math_id": 63,
"text": "\\frac{dT_E}{dt}=-(r'+\\frac{\\bar\\omega}{H}) + \\frac{\\bar\\omega \\Delta_vT}{H^2} h_E +(\\gamma \\frac{\\Delta_hT}{L}+ \\alpha \\frac{\\Delta_vT}{H})\\tau"
},
{
"math_id": 64,
"text": "\\frac{dh_W}{dt}"
},
{
"math_id": 65,
"text": "\\tau = \\mu T_E"
},
{
"math_id": 66,
"text": "\\frac{dh_W}{dt}=-\\frac{\\mu T_E}{\\rho_0\\beta_0 R_{EQ}^2} - rh_W"
},
{
"math_id": 67,
"text": "\\frac{dT_E}{dt}=[\\mu(\\gamma\\frac{\\Delta_hT}{L}+ \\alpha \\frac{\\Delta_vT}{H}\n+\\frac{\\bar\\omega L \\Delta_vT}{H^3 g' \\rho_0}- r') - \\frac{\\bar\\omega}{H}] T_E\n+\\frac{\\bar\\omega\\Delta_vT}{H^2}h_W\n\n"
}
]
| https://en.wikipedia.org/wiki?curid=70352727 |
70353559 | Grating lobes | For discrete aperture antennas (such as phased arrays) in which the element spacing is greater than a half wavelength, a spatial aliasing effect allows plane waves incident on the array from visible angles other than the desired direction to be coherently added, causing grating lobes. Grating lobes are undesirable and identical to the main lobe. The perceived difference seen in the grating lobes is due to the radiation pattern of non-isotropic antenna elements, which affects the main and grating lobes differently. For isotropic antenna elements, the main and grating lobes are identical.
Definition.
In antenna or transducer arrays, a grating lobe is defined as "a lobe other than the main lobe, produced by an array antenna when the inter-element spacing is sufficiently large to permit the in-phase addition of radiated fields in more than one direction."
Derivation.
To illustrate the concept of grating lobes, we will use a simple uniform linear array. The beam pattern (or array factor) of any array can be defined as the dot product of the steering vector and the array manifold vector. For a uniform linear array, the manifold vector is formula_0, where formula_1 is the phase difference between adjacent elements created by an impinging plane wave from an arbitrary direction, formula_2 is the element number, and formula_3 is the total number of elements. The formula_4 term merely centers the point of reference for phase at the physical center of the array. From simple geometry, formula_1 can be shown to be formula_5, where formula_6 is defined as the plane-wave incidence angle, with formula_7 corresponding to a plane wave incident orthogonal to the array (from boresight).
For a uniformly weighted (un-tapered) uniform linear array, the steering vector is of similar form to the manifold vector, but is "steered" to a target phase, formula_8, that may differ from the actual phase, formula_1 of the impinging signal. The resulting normalized array factor is a function of the phase difference, formula_9.
formula_10
The array factor is therefore periodic and maximized whenever the numerator and denominator both equal zero, by L'Hôpital's rule. Thus, a maximum of unity is obtained for all integers formula_2, where formula_11. Returning to our definition of formula_6, we wish to be able to steer the array electronically over the entire visible region, which extends from formula_12 to formula_13, without incurring a grating lobe. This requires that the grating lobes be separated by at least formula_14. From the definition of formula_1, we see that maxima will occur whenever formula_15. The first grating lobe will occur at formula_16. For a beam steered to formula_17, we require the grating lobe to be no closer than formula_12. Thus formula_18.
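The condition derived above can be checked numerically. The following sketch is illustrative only (it is not part of the cited derivation): it evaluates the normalized array factor for a broadside-steered, uniformly weighted array. For half-wavelength spacing only the main lobe reaches unity, while for one-wavelength spacing additional unity-level maxima, the grating lobes, appear at the ends of the visible region.

```python
# Minimal sketch: evaluate the normalized array factor
#   AF = sin(N*psi_d/2) / (N*sin(psi_d/2)),  psi = (2*pi/lambda)*d*cos(theta),
# for a broadside-steered uniform linear array at two element spacings.
# Unity-level maxima away from broadside (90 degrees) are grating lobes.
import numpy as np

N = 16                                   # number of elements
theta = np.linspace(0.0, np.pi, 2001)    # visible region, 0..180 degrees
theta_T = np.pi / 2                      # steering direction: broadside

def array_factor(d_over_lambda):
    psi_d = 2 * np.pi * d_over_lambda * (np.cos(theta) - np.cos(theta_T))
    num = np.sin(N * psi_d / 2)
    den = N * np.sin(psi_d / 2)
    with np.errstate(invalid="ignore", divide="ignore"):
        af = np.abs(num / den)
    af[np.abs(den) < 1e-12] = 1.0        # 0/0 limit: AF -> 1 at psi_d = 2*pi*n
    return af

for d in (0.5, 1.0):
    af = array_factor(d)
    is_peak = (af > 0.99) & (af >= np.roll(af, 1)) & (af >= np.roll(af, -1))
    angles = np.round(np.degrees(theta[is_peak]), 1)
    print(f"d = {d} wavelengths: unity-level maxima at {angles} degrees")
```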
Relationship to sampling theorem.
Alternatively, one can think of a uniform linear array (ULA) as spatial sampling of a signal in the same sense as time sampling of a signal. Grating lobes are identical to the aliasing that occurs in time-series analysis for an under-sampled signal. Per Shannon's sampling theorem, the sampling rate must be at least twice the highest frequency of the desired signal in order to preclude spectral aliasing. Because the beam pattern (or array factor) of a linear array is the Fourier transform of the aperture (element excitation) distribution, the sampling theorem directly applies, but in the spatial instead of the spectral domain. The discrete-time Fourier transform (DTFT) of a sampled signal is always periodic, producing "copies" of the spectrum at intervals of the sampling frequency. In the spatial domain, these copies are the grating lobes. The analog of radian frequency in the time domain is wavenumber, formula_19 radians per meter, in the spatial domain. Therefore the spatial sampling rate, in samples per meter, must be formula_20. The sampling interval, which is the inverse of the sampling rate, in meters per sample, must be formula_21.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\vec{v}(\\psi)= e^{j \\left( n-\\frac{N-1}{2} \\right) \\psi} "
},
{
"math_id": 1,
"text": "\\psi"
},
{
"math_id": 2,
"text": "n"
},
{
"math_id": 3,
"text": "N"
},
{
"math_id": 4,
"text": "\\frac{N-1}{2}"
},
{
"math_id": 5,
"text": "\\psi=\\frac{2\\pi}{\\lambda}d \\times cos\\theta"
},
{
"math_id": 6,
"text": "\\theta"
},
{
"math_id": 7,
"text": "\\theta=90^\\circ"
},
{
"math_id": 8,
"text": "\\psi_T"
},
{
"math_id": 9,
"text": "\\psi_\\Delta = \\psi-\\psi_T"
},
{
"math_id": 10,
"text": "AF = \\frac{1}{N} \\vec{v}^H(\\psi_T)\\vec{v}(\\psi)= \\frac{1}{N} \\sum_{n=0}^{N-1}e^{-j \\left( n-\\frac{N-1}{2} \\right) \\psi_T}e^{j \\left( n-\\frac{N-1}{2} \\right) \\psi}=\\frac{1}{N} e^{-j \\frac{N-1}{2} \\psi_\\Delta} \\sum_{n=0}^{N-1}e^{jn \\psi_\\Delta}= \\frac{sin \\left( N \\frac{\\psi_\\Delta}{2} \\right)}{N sin \\frac{\\psi_\\Delta}{2}}, -\\infty < \\psi_\\Delta < \\infty "
},
{
"math_id": 11,
"text": "\\psi_\\Delta=2 \\pi n"
},
{
"math_id": 12,
"text": "\\theta=0^\\circ"
},
{
"math_id": 13,
"text": "\\theta=180^\\circ"
},
{
"math_id": 14,
"text": "180^\\circ"
},
{
"math_id": 15,
"text": "2\\pi n=\\frac{2\\pi}{\\lambda}d \\times \\left( cos\\theta - cos\\theta_T \\right)"
},
{
"math_id": 16,
"text": "|n|=1"
},
{
"math_id": 17,
"text": "\\theta_T=180^\\circ"
},
{
"math_id": 18,
"text": "d=\\frac{2\\pi \\lambda}{2\\pi \\left( 1 + 1 \\right)} = \\frac{\\lambda}{2}"
},
{
"math_id": 19,
"text": "k = \\frac{2\\pi}{\\lambda}"
},
{
"math_id": 20,
"text": "\\geq 2 \\frac{samples}{cycle} \\times \\frac{k \\frac{radians}{meter}}{2\\pi \\frac{radians}{cycle}}"
},
{
"math_id": 21,
"text": "\\leq \\frac{\\lambda}{2}"
}
]
| https://en.wikipedia.org/wiki?curid=70353559 |
70368424 | Illumination efficiency | Antenna [aperture] illumination efficiency is a measure of the extent to which an antenna or array is uniformly excited or illuminated. It is typical for an antenna [aperture] or array to be intentionally under-illuminated or under-excited in order to mitigate sidelobes and reduce antenna temperature. It is not to be confused with radiation efficiency or antenna efficiency.
Definition.
Antenna [aperture] illumination efficiency is defined as "The ratio, usually expressed in percent, of the maximum directivity of an antenna [aperture] to its standard directivity." It is synonymous with normalized directivity. Standard [reference] directivity is defined as "The maximum directivity from a planar aperture of area A, or from a line source of length L, when excited with a uniform-amplitude, equiphase distribution." Key to understanding these definitions is that "maximum" directivity refers to the direction of maximum radiation intensity, i.e., the main lobe. Therefore, illumination efficiency is not a function of angle with respect to the antenna [aperture], but rather is a constant of the aperture for all aspect angles.
Standard directivity.
The distinction between maximum directivity and standard directivity is subtle. However, one can infer that, if an antenna [aperture] were excited [illuminated] uniformly with no phase difference (equiphase) over the entire aperture, then the illumination efficiency would be equal to unity. It is very typical for an antenna [aperture] to be intentionally under-excited [illuminated] with a "taper" in order to reduce radiation pattern sidelobes and antenna temperature. In such a design, the maximum directivity is reduced because the full aperture is not being used to the full extent possible, and the illumination efficiency will be less than unity. IEEE's choice of words is somewhat confusing, because "maximum" directivity is always less than or equal to "standard" directivity. The word maximum, in this case, is used to mean the maximum radiation intensity of the overall directivity pattern, which is otherwise defined for all aspect angles.
Relationship to antenna efficiency.
There are critical differences in how various authors and IEEE define antenna efficiency and effective area of an antenna. IEEE defines the "antenna efficiency of an aperture-type antenna" as, "For an antenna with a specified planar aperture, the ratio of the maximum effective area of the antenna to the aperture area."
formula_0
and under effective area of an antenna, IEEE states, "The effective area of an antenna in a given direction is equal to the square of the operating wavelength times its gain in that direction divided by 4π." Gain is also defined to be less than directivity by the radiation efficiency, formula_1
formula_2
However, other reputable authors define the effective area in terms of the directivity:
formula_3
Either way, the standard directivity cannot exceed:
formula_4
since formula_5.
Per the IEEE definitions:
formula_6
where formula_7 is the illumination efficiency.
However, per the definition of other authors:
formula_8
There is thus an apparent inconsistency. If the IEEE definitions are used, then formula_9 and therefore formula_10; if the other authors' definition is used, then formula_11.
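As a numerical illustration (not taken from the IEEE definitions discussed above), a commonly quoted expression for the illumination (taper) efficiency of an array of N equally spaced elements with excitation weights w_n is |Σ w_n|² / (N Σ |w_n|²), which is unity only for a uniform, equiphase excitation. The sketch below evaluates it for a uniform and a raised-cosine amplitude taper; the taper choice and element count are arbitrary.

```python
# Illustrative sketch: taper (illumination) efficiency of an N-element array,
#   eta_i = |sum(w_n)|^2 / (N * sum(|w_n|^2)),
# which equals 1 only for a uniform-amplitude, equiphase excitation.
import numpy as np

def illumination_efficiency(weights):
    w = np.asarray(weights, dtype=complex)
    return np.abs(w.sum()) ** 2 / (w.size * np.sum(np.abs(w) ** 2))

N = 32
n = np.arange(N)
uniform = np.ones(N)
# Raised-cosine (Hann-like) amplitude taper, commonly used to lower sidelobes
tapered = 0.5 - 0.5 * np.cos(2 * np.pi * (n + 0.5) / N)

print("uniform taper: eta_i =", round(illumination_efficiency(uniform), 3))   # 1.0
print("cosine taper:  eta_i =", round(illumination_efficiency(tapered), 3))   # ~0.67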
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\eta_a=\\frac{A_{e,max}}{A}"
},
{
"math_id": 1,
"text": "\\eta"
},
{
"math_id": 2,
"text": " A_e = G\\frac{\\lambda^2}{4\\pi}=\\eta D \\frac{\\lambda^2}{4\\pi}"
},
{
"math_id": 3,
"text": " A_e = D\\frac{\\lambda^2}{4\\pi}"
},
{
"math_id": 4,
"text": " D_{std} \\leq A \\frac{4\\pi}{\\lambda^2}"
},
{
"math_id": 5,
"text": "\\eta_a \\leq 1"
},
{
"math_id": 6,
"text": "D_{max}=\\eta_i D_{std} \\leq \\frac{\\eta_i}{\\eta_a} A_{e,max} \\frac{4\\pi}{\\lambda^2}=\\frac{\\eta_i}{\\eta_a}\\eta D_{max}"
},
{
"math_id": 7,
"text": "\\eta_i"
},
{
"math_id": 8,
"text": "D_{max}=\\eta_i D_{std} \\leq \\frac{\\eta_i}{\\eta_a} A_{e,max} \\frac{4\\pi}{\\lambda^2}=\\frac{\\eta_i}{\\eta_a} D_{max}"
},
{
"math_id": 9,
"text": "\\frac{\\eta_i}{\\eta_a}\\eta=1"
},
{
"math_id": 10,
"text": "\\eta = \\frac{\\eta_a}{\\eta_i}"
},
{
"math_id": 11,
"text": "\\eta_a = \\eta_i"
}
]
| https://en.wikipedia.org/wiki?curid=70368424 |
70368444 | OceanParcels | Set of python packages for Langragian ocean analysis
OceanParcels (“Probably A Really Computationally Efficient Lagrangian Simulator”) is a set of Python classes and methods used to track particles such as water masses, plankton and plastics. It uses the output of ocean general circulation models (OGCMs). OceanParcels' main goal is to process the increasingly large amounts of data generated by OGCMs. The flow dynamics are simulated using Lagrangian modelling (the observer moves with the particle), while the geophysical fluid dynamics are simulated with Eulerian modelling (the observer remains stationary) or provided through experimental data. OceanParcels relies on two principles, namely the ability to read external data sets in different formats and customizable kernels to define particle dynamics.
Concept.
OceanParcels takes a flow field given through Eulerian methods or experimental data (e.g. provided through land-based measurements, satellite imagery or Doppler currents) as input. This data can be given on various grid structures. To obtain data at an arbitrary point, interpolation is used. Lagrangian modelling uses the data given by the flow field and interpolation to simulate the dynamics of an object.
Grid structure.
In the horizontal plane, i.e. in the x-y-plane, OceanParcels allows for rectilinear and curvilinear data grids (see Fig. 1 a and b respectively). In the vertical plane, i.e. the x-z-plane, z-levels and s-levels are supported (see Fig. 1 c and d respectively). All four combinations of these grids are supported in three-dimensional space.
OceanParcels also supports so-called staggered A, B and C grids, which are common in ocean modelling. These take into account that the different variables can be evaluated at different grid points. The relevant variables in ocean analysis are the meridional velocity (v), the zonal velocity (u) and the tracers (q). In an A grid all of these are evaluated at the same grid points. In a B grid, u and v are evaluated at the grid point in the centre between the grid points where q is evaluated, that is, there are two overlying grids. In a C grid all variables are evaluated on their own grid, that is, there are three overlying grids. See Figure 2 for a visualization.
Interpolation methods.
To obtain field data at a point that is not on a grid node, interpolation methods are used. OceanParcels supports several different such methods:
Lagrangian modelling.
The Lagrangian trajectory of a particle can be calculated by numerically integrating the velocity along its path to obtain its position formula_0 as a function of time, given by formula_1. Here formula_2 is the starting time and formula_3 is the time at which the position of the particle is evaluated. formula_4 is the velocity at position x and time formula_5. formula_6 captures the movement of the particle caused by its specific characteristics (e.g. an iceberg moves differently in the ocean than a non-frozen water mass). Unless otherwise specified, OceanParcels uses a fourth-order Runge-Kutta scheme for the integration of this equation. As alternatives, OceanParcels supports Euler-forward integration or adaptive Runge-Kutta-Fehlberg integration.
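The default fourth-order Runge-Kutta update can be written out explicitly. The sketch below is only an illustration of that scheme for a prescribed, analytical 2-D velocity field; it does not use the actual OceanParcels classes, and the behaviour term formula_6 is omitted.

```python
# Minimal sketch of the default trajectory integration (not the OceanParcels API):
# advance one particle with a classical fourth-order Runge-Kutta step through a
# prescribed 2-D velocity field; the behaviour term is omitted.
import numpy as np

def velocity(x, t):
    """Steady solid-body-rotation test flow (purely illustrative)."""
    return np.array([-x[1], x[0]])

def rk4_step(x, t, dt, v):
    k1 = v(x, t)
    k2 = v(x + 0.5 * dt * k1, t + 0.5 * dt)
    k3 = v(x + 0.5 * dt * k2, t + 0.5 * dt)
    k4 = v(x + dt * k3, t + dt)
    return x + (dt / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)

x, t, dt = np.array([1.0, 0.0]), 0.0, 0.01
for _ in range(int(round(2 * np.pi / dt))):     # one rotation period of the test flow
    x = rk4_step(x, t, dt, velocity)
    t += dt
print("position after one rotation:", np.round(x, 4))   # close to the starting point [1, 0]
```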
OceanParcels provides the option to model individual behavior of a particle. Individual behavior is behavior that is defined by characteristics of a particle, for example icebergs melt and fish swims, leading to different dynamics. OceanParcels implements this using short pieces of code that are executed whenever a ParticleSet (class defining the particles used) is executed. Those pieces of code are called kernels. Some commonly used kernels are predefined in OceanParcels, such as kernels modeling Brownian motion. However, the user may also define problem specific kernels. This allows for modeling various objects, examples of which are given in the section "Applications". For implementing these customized kernels the user can choose between using Python with a possibility to automatically translate to C or using a C-library.
Applications.
Lagrangian analysis can be used to simulate the pathways of virtual particles, which can represent water masses, temperature tracers, salinity tracers, nutrient tracers, kelp, coral larvae, plastics, fish, icebergs, plankton, etc. The pathways of the different particles can be modelled using specific kernels, for example that fish can swim and that icebergs can melt.
Water masses.
The movement of water masses can be calculated numerically using velocity fields, which are simulated by a primitive equation model (fine-resolution Antarctic model). Using this model, it can be estimated how many times a water mass has traveled around Antarctica and how much time it has spent in the Southern Ocean. It turned out that on average a water mass travels around Antarctica six times before it reaches the surface for the first time.
Temperature tracers.
Surface temperatures can be analyzed using proxies, which can reconstruct environmental changes over the last 600 years. These reconstructions are based on alkenones, membrane lipids and stable isotope ratios of Globigerinoides ruber. It is argued that ocean currents transport these proxies far away from their origin, creating a bias in the temperature reconstructions.
Salinity tracers.
The proxies used for salinity are the ratios between certain elements and calcium, and the stable oxygen isotope ratios of foraminifera. These isotope ratios can be calculated by the following formula: formula_7, where y is the common isotope and x the rarer isotope. Both ratios have a positive correlation in the Mediterranean Sea, which has a strong west-to-east salinity gradient. Salinity tracers are important for reconstructing the ocean circulation, because together with temperature, salinity determines the density, which in turn shapes large-scale circulation patterns, including the meridional overturning circulation.
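As a small numerical illustration of the formula above (the ratio values below are hypothetical), the delta value is commonly reported in per mil by multiplying by 1000:

```python
# Delta notation for a stable isotope ratio: delta = R_sample/R_reference - 1,
# usually reported in per mil (multiplied by 1000).  The ratios are hypothetical.
def delta_permil(r_sample, r_reference):
    return (r_sample / r_reference - 1.0) * 1000.0

print(delta_permil(r_sample=0.0020052, r_reference=0.0020050))  # ~0.1 per mil
```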
Nutrient tracers.
The influence of the Equatorial Undercurrent (EUC), the New Guinea Coastal Undercurrent (NGCU) and the New Ireland Coastal Undercurrent was examined using models to elucidate the high iron concentrations in the Pacific Ocean. During El Niño, the latter two currents strengthen, which enhances the iron concentrations in the Pacific.
Plastic.
Lagrangian particle models have shown that about three quarters of buoyant marine plastics released from land end up in coastal waters, with the highest concentration in Southeast Asia. However, these simulations overestimate concentrations with respect to field measurements.
Fish.
The transport of fish larvae by ocean currents is an important dispersal mechanism, because the timing has a large influence on the location where the larvae settle. Pomatomus saltatrix is a fish species that is present globally, and its dispersal has been examined using particle-tracking simulations. The East Australian Current in particular is important for maximum dispersal of the larvae.
Plankton.
Plankton are important archives for reconstructing past conditions of the ocean surface. As the plankton sinks to the sediments, it can be advected by turbulent ocean currents. Ocean simulations can be used to define ocean-bottom provinces based on the surface origin locations of the particles.
Icebergs.
The influence of icebergs on sea ice can be modeled and compared to observations. When icebergs melt, a large fresh water flux takes place. Around Antarctica, a negative fresh water flux takes place due to sea ice freezing.
Development.
The latest version of OceanParcels can be accessed via GitHub. It is constantly being improved, updated and expanded. The code design ensures that this can be done without affecting the user interface. Below is a list of some possible improvements:
Alternatives to OceanParcels.
The following list contains some code packages with (partially) similar functions as OceanParcels.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "X"
},
{
"math_id": 1,
"text": "X(t_2)=X(t_1)+\\int_{t_1}^{t_2} v(x,\\tau)d\\tau + \\Delta X_b(t_1)"
},
{
"math_id": 2,
"text": "t_1"
},
{
"math_id": 3,
"text": "t_2"
},
{
"math_id": 4,
"text": "v(x,\\tau)"
},
{
"math_id": 5,
"text": "\\tau"
},
{
"math_id": 6,
"text": "\\Delta X_b(t)"
},
{
"math_id": 7,
"text": " \\delta^{X} = \\frac{^{x/y}R_{sample}}{^{x/y}R_{reference}}-1"
}
]
| https://en.wikipedia.org/wiki?curid=70368444 |
70370785 | Flickering spectroscopy | Flickering analysis of cellular or membranous structures is a widespread technique for measuring the bending modulus and other properties from the power spectrum of thermal fluctuations.
First demonstrated theoretically by Brochard and Lennon in 1975, flickering spectroscopy has become a widespread technique due to its simplicity and lack of specialised equipment beyond a brightfield microscope. It is used in structures such as red blood cells, giant unilamellar vesicles and other cell-like structures.
Theoretical overview.
Considering a quasi-spherical shell subject to thermal undulations according to Langevin dynamics, one can express the time-averaged mean square amplitudes of the fluctuation modes as
formula_0
where formula_1 and formula_2 index the fluctuation mode corresponding to the spherical harmonics formula_3, formula_4 is the reduced membrane tension, formula_5 is the spontaneous curvature and formula_6 is the bending modulus, as defined by the Helfrich Hamiltonian.
Experimental procedure and analysis.
The equatorial plane of a cell-like structure can be imaged using phase contrast microscopy to obtain a video showing the fluctuations of the membrane.
On the video, the membrane contours can be extracted using image-analysis algorithms and used to determine the power spectrum of the fluctuation modes from their real-space amplitudes. This spectrum can then be used, following the steps above, to obtain relevant parameters such as the bending modulus, which is useful for a number of applications in membrane-structure research.
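In practice the bending modulus and reduced tension are obtained by fitting the mode-amplitude expression above to the measured spectrum. The following sketch is only an illustration of such a fit, using synthetic data generated from the same expression with hypothetical parameter values; it is not a published analysis pipeline.

```python
# Illustrative sketch: fit the bending modulus (kappa, in units of kB*T) and the
# reduced tension (sigma_bar) to mean-square mode amplitudes
#   <|u_n|^2> = kB*T / ( kappa*(n-1)*(n+2)*(n*(n+1) + sigma_bar) ).
# The "measured" spectrum below is synthetic, with hypothetical parameters.
import numpy as np
from scipy.optimize import curve_fit

def mode_amplitude(n, kappa_over_kBT, sigma_bar):
    return 1.0 / (kappa_over_kBT * (n - 1) * (n + 2) * (n * (n + 1) + sigma_bar))

n = np.arange(2, 21, dtype=float)            # fluctuation modes n = 2..20
true_kappa, true_sigma = 20.0, 5.0           # hypothetical kappa/(kB*T) and sigma_bar
spectrum = mode_amplitude(n, true_kappa, true_sigma)
rng = np.random.default_rng(0)
spectrum *= 1.0 + 0.03 * rng.standard_normal(n.size)   # mock measurement noise

(kappa_fit, sigma_fit), _ = curve_fit(mode_amplitude, n, spectrum, p0=[10.0, 1.0])
print(f"fitted kappa ~ {kappa_fit:.1f} kB*T, sigma_bar ~ {sigma_fit:.1f}")
```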
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\biggl\\langle | u_n^m |^2 \\biggr\\rangle = \\frac{k_B T}{\\kappa (n-1) (n+2) [n(n+1) + \\overline{\\sigma}]}"
},
{
"math_id": 1,
"text": "n"
},
{
"math_id": 2,
"text": "m"
},
{
"math_id": 3,
"text": "Y^m_n(\\theta,\\phi)\n"
},
{
"math_id": 4,
"text": "\\overline{\\sigma} = \\sigma(R^2/\\kappa)-2H_0 R + 2H_0^2 R^2"
},
{
"math_id": 5,
"text": "H_0"
},
{
"math_id": 6,
"text": "\\kappa"
}
]
| https://en.wikipedia.org/wiki?curid=70370785 |
70383921 | Oceanic freshwater flux | Freshwater fluxes into and out of the world's oceans
Oceanic freshwater fluxes are defined as the transport of non-saline water between the oceans and the other components of the Earth system (the land, the atmosphere and the cryosphere). These fluxes have an impact on local ocean properties (sea surface salinity, temperature and elevation), as well as on large-scale circulation patterns (such as the thermohaline circulation).
Introduction.
Freshwater fluxes in general describe how freshwater is transported between and stored in the Earth's systems: the oceans, the land, the atmosphere and the cryosphere. While the total amount of water on Earth has remained virtually constant over human timescales, the relative distribution of that total mass between the four reservoirs has been influenced by past climate states, such as glacial cycles. Since the oceans account for 71% of the Earth's surface area, and 86% of evaporation (E) and 78% of precipitation (P) occur over the ocean, oceanic freshwater fluxes represent a large part of the world's freshwater fluxes.
There are five major freshwater fluxes into and out of the ocean, namely:
1. precipitation,
2. evaporation,
3. continental (river) discharge,
4. ice freezing and melting, and
5. groundwater discharge,
whereby 1., 3. and 5. are all inputs, adding freshwater to the ocean, while 2. is an output, i.e. a negative freshwater flux, and 4. can be either a freshwater loss (freezing) or a gain (melting).
The quantity and the spatial distribution of those fluxes determine the ocean salinity (the salt concentration of the ocean water). A positive freshwater flux leads to mixing of water with low to zero salinity with the salty ocean water, resulting in a decrease of the water salinity. This is, for example, the case in regions where precipitation is greater than evaporation. On the contrary, if evaporation exceeds precipitation, the ocean salinity increases, since only water (<chem>H2O</chem>) evaporates, but not the ions (e.g. <chem>Na+</chem>, <chem>Cl-</chem>) which make up salt.
Estimates of the annual mean freshwater fluxes into the ocean are formula_0 for precipitation (88% of total freshwater input), formula_1 for riverine discharge from land (9%), formula_2 for ice discharge from land (<1%), and formula_3 and formula_4 for saline and fresh groundwater discharge respectively (<1%). The annual mean freshwater flux out of the ocean via evaporation is estimated to be formula_5.
The salinity, along with temperature and pressure, determines the density of the water. Higher salinity and cooler water result in a higher water density (see also spiciness of ocean water). Since differences in water density drive large-scale ocean circulation, freshwater fluxes are most important for ocean circulation patterns like the Thermohaline Circulation (THC).
Freshwater fluxes into the ocean.
Evaporation and precipitation.
There are large spatial and temporal variations in precipitation and evaporation patterns. The dominant cause of precipitation is adiabatic cooling of rising moist air, whose water vapor becomes supersaturated above a certain altitude and condenses out. Areas of large precipitation are therefore areas of convection, which is most prominent in the Intertropical Convergence Zone (ITCZ), a band of latitudes around the equator.
Evaporation describes the process in which surface water changes its phase from liquid to gaseous. This process requires a large amount of energy, due to the strong hydrogen bonds between the water molecules. This results in a global evaporation pattern in which high evaporation rates are observed mostly in warm tropical and subtropical regions, where the surface is heated by solar radiation, which provides the necessary energy. At higher latitudes the evaporation rate decreases. Additionally, the evaporation rate is influenced by the relative humidity of the air overlying the water surface. As the air approaches saturation with water vapour, the evaporation rate decreases, i.e. a lower air–sea humidity gradient reduces evaporation.
The actual freshwater flux that the ocean experiences in a certain timeframe is the net amount of precipitation and evaporation in this time interval. This means, if evaporation minus precipitation (E-P) is positive, the ocean experiences a net loss of freshwater, while the opposite is true for a negative value for E-P. On a global scale, the subtropical gyres and western boundary currents of the Atlantic, Pacific and Indian Oceans are regions where evaporation exceeds precipitation. In contrast, the ITCZ as well as high latitudes (> 40° N/S) are regions of net precipitation, although the ITCZ exceeds the high latitudes in terms of quantity of rainfall. The equatorial region of net precipitation is centered north of the equator in the Atlantic and Pacific Oceans but is broader and extends further south in the Indian Ocean. An additional center of strong net precipitation is located over the western Pacific-Indonesian region.
Both Atlantic subtropical gyres are net evaporative, as well as the Pacific subtropical gyres, although they show an east–west transition with increased evaporation near the eastern boundaries. This spatial pattern can be attributed to the fact that the overlying air becomes saturated in humidity, subsequently leading to decreasing evaporation rates as the air is driven westward by the trade winds.
An estimation of the annual mean freshwater flux into the ocean is formula_6 for precipitation, while annual mean freshwater flux out of the ocean via evaporation is estimated to be formula_7.
When considering all the ocean basins, the only ocean basin which experiences net precipitation averaged over the year is the North Pacific. The other ocean basins, namely the South Pacific, the North and South Atlantic and the Indian Ocean, are areas of net evaporation, albeit with varying strength. The net evaporation over the South Pacific Ocean is distinctly smaller than over the other ocean basins, although the South Pacific Ocean covers an area as large as the whole Atlantic Ocean and one third larger than the Indian Ocean.
It is very likely that the energy increase (heat flux) observed in the upper 700 m of the global oceans can be attributed to anthropogenic climate change and increased radiative forcing due to greenhouse gas emissions. Although observed trends in evaporation minus precipitation suggest that the Atlantic Ocean will become saltier, while the Indian Ocean will become fresher in the coming decades, it is easier to project global patterns of air-sea flux based on changes in heat content and salinity while regional trends are rarely robust.
Seasonal cycle.
The amount and even the sign of the net total freshwater flux E-P from an ocean basin can change throughout the year.
The net evaporation over much of the subtropics is most pronounced during the winter season due to the increased strength of the easterly trades in winter. This applies to both hemispheres. The wind impacts evaporation in two ways. Firstly directly, whereby a greater wind speed carries water vapour away from the evaporating surface faster, leading to a faster re-establishment of the air–sea humidity gradient, which was reduced by the preceding evaporation and is necessary for high evaporation rates. Secondly indirectly, since enhanced surface wind strengthens the wind-driven subtropical gyre. Since the subtropical gyres drive a poleward heat transport via the western boundary currents, the sea surface temperatures warm along the paths of the currents and cause more evaporation by providing more energy and enlarging the air–sea humidity gradients. In the extratropics the net precipitation is not explainable by a simple seasonal cycle. In the Atlantic and Pacific Oceans the net mid-latitude precipitation reaches its peak during June–August synchronously in the northern and the southern hemisphere, i.e. in different seasons.
In the North and South Atlantic Oceans and in the North Pacific Ocean, evaporation exceeds precipitation in winter and spring. During summer and autumn the sign of E-P changes for all ocean basins except the South Atlantic Ocean, which is always net evaporative. When considering the Atlantic as a whole, the constant net loss of freshwater in the South Atlantic Ocean determines the sign of the total freshwater flux and cancels out the net precipitation of the North Atlantic in summer. This means the Atlantic in total is net evaporative during the whole year due to the prominent influence of the South Atlantic Ocean. The opposite can be stated about the Pacific Ocean as a whole, which shows an excess of precipitation over evaporation in every season. This pattern of evaporation minus precipitation is consistent with the observed higher salinity of the Atlantic compared to the Pacific Ocean. In the Indian Ocean net evaporation dominates most of the year, except during December–February.
Changes due to climate change.
Past.
The report from Working Group 1 in the IPCC 2021 AR6 concluded that patterns of evaporation minus precipitation (E-P) over the ocean have enhanced the present mean pattern of wetting and drying: in general, saline surface waters have become saltier (especially in the Atlantic Ocean) while relatively fresh surface waters have become fresher (especially in the Indian Ocean). However, AR6 assessed only low confidence in globally averaged trends in E-P over the 20th century due to observational uncertainty, with a spatial pattern dominated by evaporation increases over the ocean. Even coarse-resolution models show that mean SST and variability in SST are sensitive to changes in flux forcing.
Future.
Based on the assessment of Coupled Model Intercomparison Project 6 (CMIP6) models, AR6 concluded that it is very likely that, in the long term, global mean ocean precipitation will increase with increasing Global Surface Air Temperature. Annual mean and global mean precipitation will very likely increase by 1–3% per °C warming. Hereby, the precipitation patterns will also change and exhibit substantial regional and seasonal differences. Following the general trend ‘wet-gets-wetter-dry-gets-drier’, precipitation will very likely increase over high latitudes and the tropical ocean and likely increase in large parts of the monsoon regions, but likely decrease over the subtropics, including the Mediterranean, southern Africa and southwest Australia, in response to greenhouse-gas induced warming. Although these are the expected general trends there can be distinct deviations from those pattern changes on a local scale. One possible impact of the corresponding trend in ocean salinity is an altering of the Thermohaline Circulation, which is explained below.
Continental discharge.
Another source of freshwater discharge into the ocean is runoff from continents, through river estuaries. The average yearly freshwater discharge from continents is estimated around formula_1.
Compared to other ocean basins, the discharge is relatively high into the western tropical Atlantic, led by the Amazon and the Orinoco river estuaries. This causes some local effects as well as adjustments to the large-scale thermohaline circulation, as discussed in the section "Influence on thermohaline circulation (THC)" below.
Seasonal cycles.
Most rivers exhibit some sort of seasonal cycle in their discharge, often (but not always) related to seasonal variation in the precipitation. The figure on the right shows the seasonal cycle of the runoff from the 10 largest rivers (Amazon, Mississippi, Congo, Yenisey, Paraná, Orinoco, Lena, Changjiang, Mekong, Brahmaputra/Gange), compared with the local precipitation cycle and two different P-E estimates.
In several rivers, the runoff peak follows the precipitation peak, with different delays reflecting the time needed for the surface runoff to travel to the river mouth. For shorter rivers such as Changjiang, Mekong and Brahmaputra/Gange, the lag between the precipitation and the runoff peaks is about a month or less, while in the Amazon and the Orinoco rivers, the lag is of 2 or more months.
Other large rivers at higher latitudes, such as the Yenisey, Lena and Mississippi, seem to experience a runoff cycle decoupled from the precipitation cycle. The sharp June peak of the Lena and Yenisey cycles is likely due to snowmelt, as is the less prominent peak between March and May in the Mississippi river.
The Paraná and the Congo rivers do not experience significant seasonal runoff cycles, despite the precipitation cycles; this is probably due to human intervention through river damming.
Multi-annual cycles and climate change impacts.
River runoff is also affected by other meteorological cycles that span over several years.
In particular, a significant correlation with El Niño-Southern Oscillation (ENSO) phase and strength has been observed for several major rivers, as well as a correlation with Interdecadal Pacific Oscillation (IPO).
These irregular cycles, and other possible factors of internal variation which are yet to be researched fully, make it difficult to identify the changes in river runoff that can be ascribed to human induced climate change.
However, climate simulations under a moderate emission scenario (RCP4.5) show significant changes in river runoff by the end of the century, with decreased runoff in Central America, Mexico, the Mediterranean Basin, Southern Africa and much of South America, and increased runoff in the rest of Eurasia and North America.
These changes are consistent with the expected precipitation changes, but a component of earlier snowmelt and permafrost thawing will also have to be considered.
A 2018 study recorded the variation in river runoff into each oceanic basin from 1986 to 2016, showing an increased discharge into the Arctic Ocean and a decreased discharge into the Indian Ocean over the last decade.
Local impacts of river runoff.
The input of freshwater from river runoff may seem negligible compared to that of precipitation, but several studies have shown that its impact cannot be neglected.
The largest annual changes in surface salinity have been observed in the western tropical Atlantic, peaking between spring and summer, when the precipitation peak in the ITCZ coincides with the peak in the Amazon discharge. The influx of low-salinity water has also been shown to follow the seasonal variation of the currents on the Brazilian coast (northwestward in spring, eastward in summer).
As a direct consequence of the freshwater discharge, rivers have an impact on the local Sea Surface Temperature (SST). This effect is theoretically present at all river mouths, but it has only been possible to measure it for very large rivers.
The freshwater placed on top of the saline water stabilizes the stratification, restricting the vertical mixing of colder water from greater depths and hence increasing the local SST. Simulations have shown a large SST anomaly close to the mouth of the Congo river between July and April, up to +1 °C; a similar anomaly has also been simulated near the mouth of the Amazon river between May and October.
River discharge also has an impact on the local sea level, through two different processes. These are roughly represented by the first two terms of the following equation:
formula_8
where formula_9 is the sea-surface deviation from the mean, formula_10 is the variation of the bottom pressure, formula_11 is the deviation of the seawater density from the mean value formula_12, and formula_13 is the atmospheric pressure at sea level.
The first term represents the simple increase of the ocean mass: the importance of this contribution can be established through a "hosing experiment", which entails simulating the same water input as the river but with the same salinity as the ocean water. While it has been shown that the sea level increase caused by this contribution is carried away by barotropic waves on a timescale of days, it can still have an impact when the basin is semi-enclosed (as in the Arctic) or the water input is particularly large. Durand et al. simulated a "hosing experiment" with seasonally variable input in the Bay of Bengal, which showed sea level oscillations of the order of 0.1 m.
Further, since the freshwater from the river runoff has a lower density than the seawater, the term formula_14 is negative across the first water layer, which results in a positive contribution to the sea level from the second term. This phenomenon is called the halosteric effect. Its contribution generally lasts longer than the ocean-mass contribution, while still being of the same order of magnitude (0.1–0.2 m).
The third term of the equation represents the dependency on the atmospheric pressure, which is unaffected by river runoff.
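As a numerical illustration of the halosteric (second) term, the sketch below integrates a hypothetical density-anomaly profile for a thin, river-freshened surface layer; all numbers are purely illustrative.

```python
# Illustrative sketch: halosteric sea-level contribution
#   eta_steric = -(1/rho0) * integral of rho'(z) dz,
# for a hypothetical 10 m surface layer freshened by river runoff.
import numpy as np

rho0 = 1025.0                          # reference seawater density, kg/m^3
depth = np.linspace(0.0, 200.0, 401)   # depth grid in m (positive downward), dz = 0.5 m
dz = depth[1] - depth[0]

rho_anomaly = np.zeros_like(depth)
rho_anomaly[depth <= 10.0] = -2.0      # fresher surface layer: density lowered by 2 kg/m^3

eta_steric = -np.sum(rho_anomaly) * dz / rho0
print(f"halosteric sea-level contribution: {eta_steric:.3f} m")   # about +0.02 m
```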
Other oceanic freshwater fluxes.
Groundwater.
The total flux of groundwater to the ocean can be divided into three different fluxes: fresh submarine groundwater discharge, near-shore terrestrial groundwater discharge and recirculated sea water. The contribution of fresh groundwater accounts for less than 1% of the total freshwater input into the ocean and is therefore negligible on a global scale. However, due to a high variability of groundwater discharge there can be an important contribution to coastal ecosystems on a local scale.
Ice freezing and melting.
Two categories of ice have to be considered in context of oceanic freshwater fluxes: sea ice and (recently) grounded ice like ice shelfs and icebergs.
Sea ice is considered part of the oceanic water budget; therefore, its melting or freezing does not constitute an input or output of water in general. However, on a regional scale and on intra-annual timescales, it can be an important determinant of ocean salinity, by adding freshwater during the melting process or by rejecting salt during the freezing process. For example, over the Arctic Ocean evaporation and precipitation rates are quite low, about 5–10 cm/yr and 20–30 cm/yr in liquid water equivalent, respectively. The freshwater cycle in the Arctic Ocean is therefore significantly determined by the freezing and melting of sea ice, for which characteristic rates are about 100 and 50 cm/yr, respectively. If the ice drifts during the long intervals between the phase changes (frozen and liquid), the result is a net local distillation where the sea ice was formed and a net local freshening where the sea ice melts. This freezing and melting of sea ice, with the accompanying salinity changes, supplies local buoyancy forcing that influences ocean circulation.
The calving of previously grounded ice into the ocean as icebergs, as well as the melting of ice shelves by warm ocean water, constitutes a net freshwater influx, not only locally but for the ocean as a whole. Although the total input from the cryosphere is small compared to the total input from precipitation and riverine discharge (less than 1% on a global scale), on a local scale it can be an important contributor of freshwater and can influence ocean circulation.
Influence on thermohaline circulation (THC).
The Thermohaline Circulation is part of the global ocean circulation. Although this phenomenon is not fully understood yet, it is known that its driving processes are thermohaline forcing and turbulent mixing. Thermohaline forcing refers to density-gradient driven motions, whereby density is determined by the temperature ('thermo') and salt concentration ('haline') of the water. Heat and freshwater fluxes at the ocean's surface therefore play a key role in forming ocean currents. Those currents exert a major effect on regional and global climate.
The Atlantic Meridional Overturning Circulation (AMOC) is the Atlantic branch of the THC. Northward-moving surface water releases heat and water to the atmosphere and therefore becomes colder, more saline and consequently denser. This leads to the formation of cold deep water in the North Atlantic. This cold deep water flows back to the south at a depth of 2–3 km until it joins the Antarctic Circumpolar Current.
The described differences in net precipitation-evaporation patterns between the Atlantic and the Pacific, with the Atlantic being net evaporative and the Pacific experiencing net precipitation, lead to a distinct salinity contrast, with the Atlantic being more saline than the Pacific. This freshwater-flux-driven salinity contrast is the main reason that the Atlantic supports a meridional overturning circulation and the Pacific does not. The lower surface salinity of the North Pacific, due to high precipitation rates, inhibits deep convection in the Pacific. AR6 concluded from model simulations of the Coupled Model Intercomparison Project 6 (CMIP6) that the AMOC will very likely weaken in the 21st century, but there is low confidence in the models' projected timing and magnitude of AMOC decline. The projected AMOC weakening can be explained by the CMIP6 projection of an increase in high-latitude temperature and precipitation, along with freshwater input from increased melting of the Greenland Ice Sheet, which cause high-latitude North Atlantic surface waters to become less dense and more stable, preventing overturning and weakening the AMOC.
Impacts of river runoff on the large-scale thermohaline circulation.
While evaporation and precipitation processes are the main cause of the salinity anomalies that drive the THC, large rivers seem to have a non-negligible impact as well. In particular, a 2017 study simulated the shutdown of the Amazon runoff and measured its impact on the AMOC. It was found that the Amazon shutdown could cause a strengthening of the AMOC, increased upwelling and lower SST at the equator and in the southern tropics. The cooler SST over the equator could consequently cause a reduction of the rainfall in the ITCZ and a weakening of the meridional atmospheric cells and of the westerly winds in the extratropics. North America and the Arctic would then experience warmer winters (with anomalies up to 1.3 °C), while northern Eurasia would have cooler and drier conditions. In the southern hemisphere, the Amazonia region could also experience drier conditions, possibly causing a positive feedback. The paper concluded by advising caution in the building of dams on the Amazon river (more than a hundred new dams are being considered for construction in the next few decades).
In what could be seen as a small-scale case study, the damming of the Nile river in 1964 (Aswan High Dam) has been shown to have had an impact on the THC of the Mediterranean Sea. A steady increase in the salinity of the surface and intermediate waters has been recorded in the western Mediterranean over the last 40 years. This is connected to increased activity at the deep-water formation sites in the southern Adriatic. The damming of the Nile has been found to be responsible for about 40% of this salinity increase (and hence of the increase in deep-water formation).
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "424\\pm10 \\%\\,\\mathrm{km^3/yr}"
},
{
"math_id": 1,
"text": "46\\pm10 \\%\\,\\mathrm{km^3/yr}"
},
{
"math_id": 2,
"text": "3\\pm40 \\%\\,\\mathrm{km^3/yr}"
},
{
"math_id": 3,
"text": "0.25\\pm90 \\%\\,\\mathrm{km^3/yr}"
},
{
"math_id": 4,
"text": "4\\pm70 \\%\\,\\mathrm{km^3/yr}"
},
{
"math_id": 5,
"text": "470\\pm10 \\%\\,\\mathrm{km^3/yr}"
},
{
"math_id": 6,
"text": "424.10^3 \\pm10 \\%\\,\\mathrm{km^3/yr}"
},
{
"math_id": 7,
"text": "470.10^3 \\pm10 \\%\\,\\mathrm{km^3/yr}"
},
{
"math_id": 8,
"text": "\\eta=P'_b/(g\\rho_0)-1/\\rho_0 \\cdot \\int\\limits_{0}^{-H} \\rho' dz+ P_{atm}/(g\\rho_0)"
},
{
"math_id": 9,
"text": "\\eta"
},
{
"math_id": 10,
"text": "P'_b"
},
{
"math_id": 11,
"text": "\\rho'"
},
{
"math_id": 12,
"text": "\\rho_0"
},
{
"math_id": 13,
"text": "P_{atm}"
},
{
"math_id": 14,
"text": "\\rho'"
}
]
| https://en.wikipedia.org/wiki?curid=70383921 |
70388147 | Judges 14 | Book of Judges, chapter 14
Judges 14 is the fourteenth chapter of the Book of Judges in the Old Testament or the Hebrew Bible. According to Jewish tradition the book was attributed to the prophet Samuel, but modern scholars view it as part of the Deuteronomistic History, which spans the books of Deuteronomy to 2 Kings, attributed to nationalistic and devotedly Yahwistic writers during the time of the reformer Judean king Josiah in the 7th century BCE. This chapter records the activities of the judge Samson, belonging to a section comprising Judges 13 to 16 within Judges 6:1 to 16:31.
Text.
This chapter was originally written in the Hebrew language. It is divided into 20 verses.
Textual witnesses.
Some early manuscripts containing the text of this chapter in Hebrew are of the Masoretic Text tradition, which includes the Codex Cairensis (895), Aleppo Codex (10th century), and Codex Leningradensis (1008).
Extant ancient manuscripts of a translation into Koine Greek known as the Septuagint (originally was made in the last few centuries BCE) include Codex Vaticanus (B; formula_0B; 4th century) and Codex Alexandrinus (A; formula_0A; 5th century).
Analysis.
Two panels.
A linguistic study by Chisholm reveals that the central part in the Book of Judges (Judges 3:7–16:31) can be divided into two panels based on the six refrains that state that the Israelites did evil in Yahweh's eyes:
Panel One
A 3:7
And the children of Israel did evil in the sight of the LORD (KJV)
B 3:12
And the children of Israel did evil "again" in the sight of the LORD
B 4:1
And the children of Israel did evil "again" in the sight of the LORD
Panel Two
A 6:1
And the children of Israel did evil in the sight of the LORD
B 10:6
And the children of Israel did evil "again" in the sight of the LORD
B 13:1
And the children of Israel did evil "again" in the sight of the LORD
Furthermore, from the linguistic evidence, the verbs used to describe the Lord's response to Israel's sin have chiastic patterns and can be grouped to fit the division above:
Panel One
3:8 , "and he sold them," from the root , "makar"
3:12 , "and he strengthened," from the root , "khazaq"
4:2 , "and he sold them," from the root , "makar"
Panel Two
6:1 , "and he gave them," from the root , "nathan"
10:7 , "and he sold them," from the root , "makar"
13:1 , "and he gave them," from the root , "nathan"
The Samson Narrative.
Chapters 13–16 contain the "Samson Narrative" or "Samson Cycle", a highly structured poetic composition with an 'almost architectonic tightness' from a literary point of view. The entire section consists of 3 "cantos", 10 subcantos and 30 canticles, as follows:
The distribution of the 10 subcantos into 3 cantos is a regular 2 + 4 + 4, with the number of canticles per subcanto as follows:
The number of strophes per canticle in each canto is quite uniform with numerical patterns in Canto II showing a 'concentric symmetry':
The structure regularity within the whole section classifies this composition as a 'narrative poetry' or 'poetic narrative'.
Besides the thematic symmetry, parts of the narrative show an observable structure, with chapter 13 balancing chapter 16 (each consisting of three sub-sections with a fourfold question-and-answer discourse at the center), whereas chapters 14 and 15 show a parallelism in form and content.
Structure of chapter 14.
Chapter 14 has the following structure:
A. Samson went down to Timnah (14:1-4)
1. speech between Samson and parents/father
2. parental objection
3. Samson's rejection of the possibility of another woman.
B. Samson went down to Timnah (14:5-6)
1. action involving an animal (lion).
C. And he went down and spoke to the woman (14:7-9)
1. action involving honey, a gracious act
D. His father went down to the woman (14:10-20)
1. speech between Samson, Philistines and the Timnite;
2. Philistines threaten third party to beat Samson
3. Spirit of YHWH and Samson's victory.
Samson wants to marry a Philistine woman (14:1–4).
The power struggles between Samson and the Philistines stem from the incident recorded in verses 1–4 of this chapter, which starts with Samson "going down" to Timnah and "seeing" an attractive Philistine woman. Themes of Israelite status and the otherness of the Philistines ('us' versus 'them') are displayed in a tale of trickery and counter-trickery as God uses Samson to challenge the Philistines, who 'rule over Israel at this time' (14:4). These themes are shown in the parents' disapproving words to Samson concerning his chosen match (14:3; cf. Genesis 34:14–15) and in the ethnic terms in which Samson describes the woman.
"And Samson went down to Timnath, and saw a woman in Timnath of the daughters of the Philistines."
Samson's wedding and riddle (14:5–20).
The killing of the lion with bare hands (verse 5) was kept secret (cf. verse 9) and led to the hidden answer to the riddle that follows (verse 14). This episode gives a portrayal of Samson with superhuman power, which is followed by further superhuman feats against the Philistines (cf. 15:1,4; 16:1,3; 16:4,9,12,14). The honey in the lion's carcass acts as a source of nourishment for a warrior (verse 8; cf. honey and Jonathan in 1 Samuel 14:27–29). The seven-day wedding feast between Samson and the Timnite woman becomes an occasion for trickery, as a possible union between opposing groups turns to resentment and destruction (ultimately God's plan for the Philistines, oppressors of Israel): Samson is clearly an outsider surrounded by Philistines, and neither side plays fair. From this chapter forward, the patterns of knowledge, deception, sexuality, and power intertwine in the Samson Narrative. Samson paid the loss of his riddling bet by killing thirty Philistines from another Philistine city, Ascalon, and gave their clothes to his riddle opponents in Timnath, but he immediately went back to his own people and did not consummate his marriage, so his father-in-law gave Samson's bride to another man, which becomes the set-up for the fissure between Samson and the Philistines.
The center section of the riddle (verses 14–17) has a concentric symmetry highlighted by the words "tell" and "riddle" as follows:
A. Report
1. They could not "tell" the "riddle" for "three" days
2. On the seventh day, they approached the wife
B. The Philistines' speech
"Entice your husband to "tell" the "riddle"
X. Speech of Samson's bride
"You hate me, you do not love me,
You posed the "riddle" to my people
to me you did not "tell"
B'. Samson's speech
"I did not "tell" my father and mother
Shall I "tell" you?
A.' Report
2'. She wept for "seven" days
1'. On the seventh day, he "told" her
and she "told" the "riddle" to her people
The riddle itself was given with a high artistry of word play (verse 14), exploiting the three possible meanings of the root "'ry" (to "eat", "lion", or "honey"), so that the correct answer to the riddle would be "'ari mē 'ari" ("honey from lion"). However, the Philistines avoided giving that answer, which would have betrayed their source of knowledge, and instead gave a counter-riddle as an answer: "What is sweeter than honey? What is stronger than a lion?", to which the answer would be "love".
Archeology.
A circular stone seal, approximately in diameter, was found by the archaeologists from Tel Aviv University (announced in August 2012) on the floor of a house at Beth Shemesh and appears to depict a long-haired man slaying a lion, which may or may not depict the biblical Samson. The 12th-century-BCE seal was discovered in the geographical proximity to the area where Samson lived, and the time period of the seal indicates that a story was being told at the time of a hero who fought a lion, and that the story eventually found its way into the biblical text and onto the seal.
Notes.
<templatestyles src="Reflist/styles.css" />
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": " \\mathfrak{G}"
}
]
| https://en.wikipedia.org/wiki?curid=70388147 |
70388151 | Judges 15 | Book of Judges, chapter 15
Judges 15 is the fifteenth chapter of the Book of Judges in the Old Testament or the Hebrew Bible. According to Jewish tradition the book was attributed to the prophet Samuel, but modern scholars view it as part of the Deuteronomistic History, which spans the books of Deuteronomy to 2 Kings and is attributed to nationalistic and devotedly Yahwistic writers during the time of the reformer Judean king Josiah in the 7th century BCE. This chapter records the activities of the judge Samson, belonging to a section comprising Judges 13 to 16 and Judges 6:1 to 16:31.
Text.
This chapter was originally written in the Hebrew language. It is divided into 20 verses.
Textual witnesses.
Some early manuscripts containing the text of this chapter in Hebrew are of the Masoretic Text tradition, which includes the Codex Cairensis (895), Aleppo Codex (10th century), and Codex Leningradensis (1008).
Extant ancient manuscripts of a translation into Koine Greek known as the Septuagint (originally made in the last few centuries BCE) include Codex Vaticanus (B; formula_0B; 4th century) and Codex Alexandrinus (A; formula_0A; 5th century).
Analysis.
A linguistic study by Chisholm reveals that the central part in the Book of Judges (Judges 3:7–16:31) can be divided into two panels based on the six refrains that state that the Israelites did evil in Yahweh's eyes:
Panel One
A 3:7
And the children of Israel did evil in the sight of the LORD (KJV)
B 3:12
And the children of Israel did evil "again" in the sight of the LORD
B 4:1
And the children of Israel did evil "again" in the sight of the LORD
Panel Two
A 6:1
And the children of Israel did evil in the sight of the LORD
B 10:6
And the children of Israel did evil "again" in the sight of the LORD
B 13:1
And the children of Israel did evil "again" in the sight of the LORD
Furthermore, from the linguistic evidence, the verbs used to describe the Lord's response to Israel's sin have chiastic patterns and can be grouped to fit the division above:
Panel One
3:8 , "and he sold them," from the root , "makar"
3:12 , "and he strengthened," from the root , "khazaq"
4:2 , "and he sold them," from the root , "makar"
Panel Two
6:1 , "and he gave them," from the root , "nathan"
10:7 , "and he sold them," from the root , "makar"
13:1 , "and he gave them," from the root , "nathan"
Chapters 13–16 contain the "Samson Narrative" or "Samson Cycle", a highly structured poetic composition with an 'almost architectonic tightness' from a literary point-of-view. The entire section consists of 3 "cantos", 10 subcantos, and 30 canticles, as follows:
The distribution of the 10 subcantos into 3 cantos is a regular 2 + 4 + 4, with the number of canticles per subcanto as follows:
The number of strophes per canticle in each canto is quite uniform with numerical patterns in Canto II showing a 'concentric symmetry':
The structural regularity within the whole section classifies this composition as a 'narrative poetry' or 'poetic narrative'.
Besides the thematic symmetry, parts of the narrative show an observable structure in which chapter 13 balances chapter 16 (each consisting of three sub-sections with a fourfold asking-and-answer discourse at the center), whereas chapters 14 and 15 show a parallelism in form and content.
Chapter 15:1–19 has the following structure:
A. After a while ... Samson visited (15:1–3)
1. speech between Samson and father-in-law.
2. parental objection
3. Samson's rejection of the possibility of another woman.
B. Samson went (15:4–6a)
1. action involving animals (foxes).
C. The Philistines came up (15:6b–8)
1. action involving retaliation, a vicious act
D. The Philistines came up (15:9–19)
1. speech between Judahites, Philistines and Samson;
2. Philistines threaten third party to beat Samson
3. Spirit of YHWH and Samson's victory.
Samson's revenge (15:1–8).
Samson's desire for his woman coincides with the harvest season, a time of fertility (cf. Ruth 2), and he brought a peace offering as if all were forgiven, displaying 'his obliviousness to social convention'. The woman's father offered Samson another deal, the younger sister (cf. Saul to David, 1 Samuel 17:25; 18:17–22), but this was declined and was followed by Samson's superheroic vengeance: attaching torches to the tails of 300 foxes to set fire to the standing grain, vineyards, and olive groves of the Philistines. The Philistines retaliated by setting the whole family of Samson's wife-to-be on fire. In a further outburst Samson killed many Philistines in vengeance, then withdrew to a cave in Etam (verse 8).
"And he struck them hip and thigh with a great blow, and he went down and stayed in the cleft of the rock of Etam."
Samson defeats the Philistines (15:9–20).
The Israelites are elsewhere portrayed as tending to collaborate with the enemy rather than to revolt (cf. Exodus 2:14; 5:21); thus the men of Judah would rather hand over Samson to the Philistines than risk an attack (cf. 2 Samuel 20:14–22). As many as 3,000 men of Judah came to Samson to bind him as instructed by the Philistines, starting with an accusation of wrongdoing ('What is this that you have done to us?') to convince Samson to allow himself to be given over peacefully to the enemy. Samson went with the Philistines as far as Lehi before he had an outburst of 'power fuelled by the divine frenzy', breaking the ropes that bound him, described with the imagery of 'fire' (verse 14), then using the jawbone of a donkey (another animal motif) to kill a thousand Philistine men.
The basic elements of this fight recall a similar pattern in Samson's fight with the lion in the vineyards of Timnah, as follows:
Thereafter Samson used a proverb to declare his victory in a 'war-taunt' with word play on the root "h-m-r" (which could mean 'donkey' or 'pile up', in parallel to the many slain Philistines as 'heaps and heaps'). The record of the amazing victory over the Philistines concludes with Samson's plea to God to quench his thirst, asking with characteristic hyperbole whether 'God intends to reward the hero of Israel with death by thirst' (verse 18). God responded by splitting open a spring from a rocky hollow (cf. Elijah and Moses) so that Samson could drink and be revived. Verse 20 marks the end of the first part of the Samson epic, to be followed by the story of Samson's fall in the next chapter.
"And he judged Israel in the days of the Philistines twenty years."
Notes.
<templatestyles src="Reflist/styles.css" />
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": " \\mathfrak{G}"
}
]
| https://en.wikipedia.org/wiki?curid=70388151 |
70388152 | Judges 16 | Chapter of the Bible
Judges 16 is the sixteenth chapter of the Book of Judges in the Old Testament or the Hebrew Bible. According to Jewish tradition the book was attributed to the prophet Samuel, but modern scholars view it as part of the Deuteronomistic History, which spans the books of Deuteronomy to 2 Kings and is attributed to nationalistic and devotedly Yahwistic writers during the time of the reformer Judean king Josiah in the 7th century BCE. This chapter records the activities of the judge Samson, belonging to a section comprising Judges 13 to 16 and Judges 6:1 to 16:31.
Text.
This chapter was originally written in the Hebrew language. It is divided into 31 verses.
Textual witnesses.
Some early manuscripts containing the text of this chapter in Hebrew are of the Masoretic Text tradition, which includes the Codex Cairensis (895), Aleppo Codex (10th century), and Codex Leningradensis (1008).
Extant ancient manuscripts of a translation into Koine Greek known as the Septuagint (originally made in the last few centuries BCE) include Codex Vaticanus (B; formula_0B; 4th century) and Codex Alexandrinus (A; formula_0A; 5th century).
Analysis.
The Two Panels.
A linguistic study by Chisholm reveals that the central part in the Book of Judges (Judges 3:7–16:31) can be divided into two panels based on the six refrains that state that the Israelites did evil in Yahweh's eyes:
Panel One
A 3:7
And the children of Israel did evil in the sight of the LORD (KJV)
B 3:12
And the children of Israel did evil "again" in the sight of the LORD
B 4:1
And the children of Israel did evil "again" in the sight of the LORD
Panel Two
A 6:1
And the children of Israel did evil in the sight of the LORD
B 10:6
And the children of Israel did evil "again" in the sight of the LORD
B 13:1
And the children of Israel did evil "again" in the sight of the LORD
Furthermore, from the linguistic evidence, the verbs used to describe the Lord's response to Israel's sin have chiastic patterns and can be grouped to fit the division above:
Panel One
3:8 , "and he sold them," from the root , "makar"
3:12 , "and he strengthened," from the root , "khazaq"
4:2 , "and he sold them," from the root , "makar"
Panel Two
6:1 , "and he gave them," from the root , "nathan"
10:7 , "and he sold them," from the root , "makar"
13:1 , "and he gave them," from the root , "nathan"
The Samson Narrative.
Chapters 13–16 contain the "Samson Narrative" or "Samson Cycle", a highly structured poetic composition with an 'almost architectonic tightness' from a literary point-of-view. The entire section consists of 3 "cantos", 10 subcantos, and 30 canticles, as follows:
The distribution of the 10 subcantos into 3 cantos is a regular 2 + 4 + 4, with the number of canticles per subcanto as follows:
The number of strophes per canticle in each canto is quite uniform with numerical patterns in Canto II showing a 'concentric symmetry':
The structural regularity within the whole section classifies this composition as a 'narrative poetry' or 'poetic narrative'.
Besides the thematic symmetry, parts of the narrative show an observable structure in which chapter 13 balances chapter 16 (each consisting of three sub-sections with a fourfold asking-and-answer discourse at the center), whereas chapters 14 and 15 show a parallelism in form and content.
Structure of Chapter 16.
The narrative in chapter 16 has a structure that almost parallels Judges 13 in terms of text arrangement:
1) encounter with the harlot of Gaza (16:1–3)
2) fourfold asking and answer discourse (16:4–22)
1. First question and answer (16:4–9)
2. Second question and answer (16:10–12)
3. Third question and answer (16:13–14)
4. Upbraiding and reply (16:15–22)
3) an inclusion
1. Lords of Philistines and people are present (16:23–24)
2. They "call" Samson; a lad supports (Hebrew: "hmhzyq") him (16:25–26)
3. Great numbers are present (16:27)
4. Samson "call" on YHWH; YHWH strengthens (Hebrew: "whzqny") him (16:28)
5. Lords of Philistines and people are killed (16:30)
Samson's encounter with the harlot of Gaza (16:1–3).
This brief section foreshadows the longer narrative involving Delilah and follows the earlier patterns. Samson was again attracted to a Philistine woman, a prostitute (or "harlot"), in Gaza, and the encounter ended in his superheroic departure, lifting off the gates of the city at night (verse 3), whereas the enemy had planned to capture him the next morning (verse 2). This episode provides a clue to Samson's false sense of invincibility, which would soon turn into his downfall, especially as the appeal of Philistine women was seen as Samson's tragic flaw, emphasizing the 'danger of foreign (and loose) women' (Deuteronomy 7:3–4; Proverbs 5:3–6; 7:10–23). Samson's escape from Gaza turned out to be temporary because he would later be brought there again in bronze fetters (verse 21) and have his final confrontation with the Philistines.
"And Samson lay low till midnight; then he arose at midnight, took hold of the doors of the gate of the city and the two gateposts, pulled them up, bar and all, put them on his shoulders, and carried them to the top of the hill that faces Hebron."
Samson and Delilah (16:4–22).
The story of Samson's downfall follows the familiar pattern in the cycle:
Samson was finally caught by his enemies when he was with the third foreign woman, Delilah (Hebrew: , "də·lî·lāh"), whose name could mean 'loose hair' or 'flirtatiousness', but is also a word play on the term for "night" (Hebrew: "lay-lāh"), whereas Samson's name derives from the term for "sun" (Hebrew: "šemeš"). Significantly, Delilah is the only named woman in the Samson Narrative (cf. Samson's mother as "Manoah's wife", Samson's Timnite wife, the harlot in Gaza). The Philistine lords ('tyrants') offered Delilah a reward in silver if she was able to discover and divulge to them the secret of Samson's strength, following the folktale motif that some heroes' strength resides in an amulet or special item. The source of Samson's power in this narrative is related to his status as a 'nazir', declared even before his birth, so here the traditional folk motif intertwines with the particular theological topic of Samson's relationship to YHWH. Thus, Samson's mistake was his false belief that his strength was not contingent upon the symbol of his consecration to YHWH; when shorn of his hair, Samson did not realize that YHWH had left him and that he had become vulnerable like normal men, so he could be caught and bound by his enemies.
Powerless in fetters and with his eyes gouged out, Samson was placed in the prison in Gaza and made to grind at a mill, usually the work of women, so the mighty hero had been feminized, as Sisera was to Jael (Judges 4, 5).
Death of Samson (16:23–31).
Samson's rehabilitation and his final victory took place during a Philistine festival to honor their god, Dagon, when the Philistines had Samson brought out for humiliation. Feigning weakness, Samson asked the lad who led him to be allowed to lean on the pillars of the great house that was filled with 3,000 Philistines. With a final prayer to God, Samson pushed the pillars, thereby breaking down the roof of the house and killing himself and his enemies. The narrative ends with an admiration for Samson's final deed (verse 30) and a note of his honorable burial (verse 31).
"And his brothers and all his father's household came down and took him, and brought him up and buried him between Zorah and Eshtaol in the tomb of his father Manoah. He had judged Israel twenty years."
Notes.
<templatestyles src="Reflist/styles.css" />
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": " \\mathfrak{G}"
}
]
| https://en.wikipedia.org/wiki?curid=70388152 |
7039 | Control theory | Branch of engineering and mathematics
Control theory is a field of control engineering and applied mathematics that deals with the control of dynamical systems in engineered processes and machines. The objective is to develop a model or algorithm governing the application of system inputs to drive the system to a desired state, while minimizing any "delay", "overshoot", or "steady-state error" and ensuring a level of control stability, often with the aim of achieving a degree of optimality.
To do this, a controller with the requisite corrective behavior is required. This controller monitors the controlled process variable (PV), and compares it with the reference or set point (SP). The difference between actual and desired value of the process variable, called the "error" signal, or SP-PV error, is applied as feedback to generate a control action to bring the controlled process variable to the same value as the set point. Other aspects which are also studied are controllability and observability. Control theory is used in control system engineering to design automation systems that have revolutionized manufacturing, aircraft, communications and other industries, and created new fields such as robotics.
Extensive use is usually made of a diagrammatic style known as the block diagram. In it the transfer function, also known as the system function or network function, is a mathematical model of the relation between the input and output based on the differential equations describing the system.
Control theory dates from the 19th century, when the theoretical basis for the operation of governors was first described by James Clerk Maxwell. Control theory was further advanced by Edward Routh in 1874, Charles Sturm and, in 1895, Adolf Hurwitz, who all contributed to the establishment of control stability criteria; and from 1922 onwards, the development of PID control theory by Nicolas Minorsky.
Although a major application of mathematical control theory is in control systems engineering, which deals with the design of process control systems for industry, other applications range far beyond this. As the general theory of feedback systems, control theory is useful wherever feedback occurs - thus control theory also has applications in life sciences, computer engineering, sociology and operations research.
History.
Although control systems of various types date back to antiquity, a more formal analysis of the field began with a dynamics analysis of the centrifugal governor, conducted by the physicist James Clerk Maxwell in 1868, entitled "On Governors". A centrifugal governor was already used to regulate the velocity of windmills. Maxwell described and analyzed the phenomenon of self-oscillation, in which lags in the system may lead to overcompensation and unstable behavior. This generated a flurry of interest in the topic, during which Maxwell's classmate, Edward John Routh, abstracted Maxwell's results for the general class of linear systems. Independently, Adolf Hurwitz analyzed system stability using differential equations in 1877, resulting in what is now known as the Routh–Hurwitz theorem.
A notable application of dynamic control was in the area of crewed flight. The Wright brothers made their first successful test flights on December 17, 1903, and were distinguished by their ability to control their flights for substantial periods (more so than the ability to produce lift from an airfoil, which was known). Continuous, reliable control of the airplane was necessary for flights lasting longer than a few seconds.
By World War II, control theory was becoming an important area of research. Irmgard Flügge-Lotz developed the theory of discontinuous automatic control systems, and applied the bang-bang principle to the development of automatic flight control equipment for aircraft. Other areas of application for discontinuous controls included fire-control systems, guidance systems and electronics.
Sometimes, mechanical methods are used to improve the stability of systems. For example, ship stabilizers are fins mounted beneath the waterline and emerging laterally. In contemporary vessels, they may be gyroscopically controlled active fins, which have the capacity to change their angle of attack to counteract roll caused by wind or waves acting on the ship.
The Space Race also depended on accurate spacecraft control, and control theory has also seen an increasing use in fields such as economics and artificial intelligence. Here, one might say that the goal is to find an internal model that obeys the good regulator theorem. So, for example, in economics, the more accurately a (stock or commodities) trading model represents the actions of the market, the more easily it can control that market (and extract "useful work" (profits) from it). In AI, an example might be a chatbot modelling the discourse state of humans: the more accurately it can model the human state (e.g. on a telephone voice-support hotline), the better it can manipulate the human (e.g. into performing the corrective actions to resolve the problem that caused the phone call to the help-line). These last two examples take the narrow historical interpretation of control theory as a set of differential equations modeling and regulating kinetic motion, and broaden it into a vast generalization of a regulator interacting with a plant.
Linear and nonlinear control theory.
The field of control theory can be divided into two branches:
Analysis techniques - frequency domain and time domain.
Mathematical techniques for analyzing and designing control systems fall into two different categories:
In contrast to the frequency domain analysis of the classical control theory, modern control theory utilizes the time-domain state space representation, a mathematical model of a physical system as a set of input, output and state variables related by first-order differential equations. To abstract from the number of inputs, outputs, and states, the variables are expressed as vectors and the differential and algebraic equations are written in matrix form (the latter only being possible when the dynamical system is linear). The state space representation (also known as the "time-domain approach") provides a convenient and compact way to model and analyze systems with multiple inputs and outputs. With inputs and outputs, we would otherwise have to write down Laplace transforms to encode all the information about a system. Unlike the frequency domain approach, the use of the state-space representation is not limited to systems with linear components and zero initial conditions. "State space" refers to the space whose axes are the state variables. The state of the system can be represented as a point within that space.
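To make the state-space idea concrete, the following is a minimal simulation sketch in Python: it steps a small discrete-time system x[k+1] = Ax[k] + Bu[k], y[k] = Cx[k] under a unit step input. The matrices A, B, C and the horizon of 50 steps are arbitrary illustrative choices, not taken from any particular plant.

```python
import numpy as np

# Simulate x[k+1] = A x[k] + B u[k], y[k] = C x[k] for an illustrative
# two-state, single-input, single-output discrete-time system.
A = np.array([[0.9, 0.1],
              [0.0, 0.8]])   # state matrix (arbitrary, stable: eigenvalues 0.9 and 0.8)
B = np.array([[0.0],
              [1.0]])        # input matrix
C = np.array([[1.0, 0.0]])   # output matrix

x = np.zeros((2, 1))         # initial state
outputs = []
for k in range(50):
    u = 1.0                  # unit step input
    y = C @ x                # output equation
    outputs.append(y.item())
    x = A @ x + B * u        # first-order difference equation in matrix form

print(outputs[-1])           # settles towards a steady-state value of about 5
```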
System interfacing - SISO & MIMO.
Control systems can be divided into different categories depending on the number of inputs and outputs.
Classical SISO system design.
The scope of classical control theory is limited to single-input and single-output (SISO) system design, except when analyzing for disturbance rejection using a second input. The system analysis is carried out in the time domain using differential equations, in the complex-s domain with the Laplace transform, or in the frequency domain by transforming from the complex-s domain. Many systems may be assumed to have a second order and single variable system response in the time domain. A controller designed using classical theory often requires on-site tuning due to incorrect design approximations. Yet, due to the easier physical implementation of classical controller designs as compared to systems designed using modern control theory, these controllers are preferred in most industrial applications. The most common controllers designed using classical control theory are PID controllers. A less common implementation may include either or both a Lead or Lag filter. The ultimate end goal is to meet requirements typically provided in the time-domain called the step response, or at times in the frequency domain called the open-loop response. The step response characteristics applied in a specification are typically percent overshoot, settling time, etc. The open-loop response characteristics applied in a specification are typically Gain and Phase margin and bandwidth. These characteristics may be evaluated through simulation including a dynamic model of the system under control coupled with the compensation model.
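As a rough illustration of the classical PID idea, the sketch below runs a discrete PID loop against a toy first-order plant. The gains, sample time, and plant model are invented for the example and do not represent a tuned industrial design.

```python
# Illustrative discrete PID loop: the gains and the toy first-order "plant"
# below are arbitrary choices for demonstration only.
Kp, Ki, Kd = 2.0, 0.5, 0.1    # proportional, integral, derivative gains
dt = 0.01                     # sample time in seconds
setpoint = 1.0

pv = 0.0                      # process variable (plant output)
integral = 0.0
prev_error = 0.0

for step in range(1000):
    error = setpoint - pv                 # SP - PV error signal
    integral += error * dt
    derivative = (error - prev_error) / dt
    u = Kp * error + Ki * integral + Kd * derivative   # control action
    prev_error = error

    # Toy first-order plant: the output moves toward the control input.
    pv += dt * (-pv + u)

print(round(pv, 3))   # approaches the setpoint with little steady-state error
```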
Modern MIMO system design.
Modern control theory is carried out in the state space, and can deal with multiple-input and multiple-output (MIMO) systems. This overcomes the limitations of classical control theory in more sophisticated design problems, such as fighter aircraft control, with the limitation that no frequency domain analysis is possible. In modern design, a system is represented to the greatest advantage as a set of decoupled first order differential equations defined using state variables. Nonlinear, multivariable, adaptive and robust control theories come under this division. Matrix methods are significantly limited for MIMO systems where linear independence cannot be assured in the relationship between inputs and outputs. Being fairly new, modern control theory has many areas yet to be explored. Scholars like Rudolf E. Kálmán and Aleksandr Lyapunov are well known among the people who have shaped modern control theory.
Topics in control theory.
Stability.
The "stability" of a general dynamical system with no input can be described with Lyapunov stability criteria.
For simplicity, the following descriptions focus on continuous-time and discrete-time linear systems.
Mathematically, this means that for a causal linear system to be stable all of the poles of its transfer function must have negative-real values, i.e. the real part of each pole must be less than zero. Practically speaking, stability requires that the transfer function complex poles reside in the open left half of the complex plane for continuous time (when the Laplace transform is used to obtain the transfer function), or inside the unit circle for discrete time (when the Z-transform is used).
The difference between the two cases is simply due to the traditional method of plotting continuous time versus discrete time transfer functions. The continuous Laplace transform is in Cartesian coordinates where the formula_0 axis is the real axis and the discrete Z-transform is in circular coordinates where the formula_1 axis is the real axis.
When the appropriate conditions above are satisfied a system is said to be asymptotically stable; the variables of an asymptotically stable control system always decrease from their initial value and do not show permanent oscillations. Permanent oscillations occur when a pole has a real part exactly equal to zero (in the continuous time case) or a modulus equal to one (in the discrete time case). If a simply stable system response neither decays nor grows over time, and has no oscillations, it is marginally stable; in this case the system transfer function has non-repeated poles at the complex plane origin (i.e. their real and imaginary components are zero in the continuous time case). Oscillations are present when poles with real part equal to zero have an imaginary part not equal to zero.
If a system in question has an impulse response of
formula_2
then the Z-transform (see this example), is given by
formula_3
which has a pole in formula_4 (zero imaginary part). This system is BIBO (asymptotically) stable since the pole is "inside" the unit circle.
However, if the impulse response was
formula_5
then the Z-transform is
formula_6
which has a pole at formula_7 and is not BIBO stable since the pole has a modulus strictly greater than one.
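The unit-circle criterion behind these two examples can be checked directly; the short sketch below simply compares each pole's modulus with one, mirroring the discussion above.

```python
# Check the two example poles against the unit-circle criterion for
# discrete-time BIBO stability (|pole| < 1).
poles = {"x[n] = 0.5^n u[n]": 0.5,
         "x[n] = 1.5^n u[n]": 1.5}

for impulse_response, pole in poles.items():
    stable = abs(pole) < 1          # inside the unit circle?
    print(f"{impulse_response}: pole at z = {pole}, "
          f"{'BIBO stable' if stable else 'not BIBO stable'}")
```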
Numerous tools exist for the analysis of the poles of a system. These include graphical systems like the root locus, Bode plots or the Nyquist plots.
Mechanical changes can make equipment (and control systems) more stable. Sailors add ballast to improve the stability of ships. Cruise ships use antiroll fins that extend transversely from the side of the ship for perhaps 30 feet (10 m) and are continuously rotated about their axes to develop forces that oppose the roll.
Controllability and observability.
Controllability and observability are main issues in the analysis of a system before deciding the best control strategy to be applied, or whether it is even possible to control or stabilize the system. Controllability is related to the possibility of forcing the system into a particular state by using an appropriate control signal. If a state is not controllable, then no signal will ever be able to control the state. If a state is not controllable, but its dynamics are stable, then the state is termed "stabilizable". Observability instead is related to the possibility of "observing", through output measurements, the state of a system. If a state is not observable, the controller will never be able to determine the behavior of an unobservable state and hence cannot use it to stabilize the system. However, similar to the stabilizability condition above, if a state cannot be observed it might still be detectable.
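A minimal sketch of the standard Kalman rank tests for controllability and observability is shown below; the two-state matrices are arbitrary examples, and only NumPy is assumed.

```python
import numpy as np

# Kalman rank test on an illustrative two-state system: the pair (A, B) is
# controllable iff [B, AB] has full rank, and (A, C) is observable iff
# [C; CA] has full rank.
A = np.array([[1.0, 1.0],
              [0.0, 2.0]])
B = np.array([[0.0],
              [1.0]])
C = np.array([[1.0, 0.0]])

ctrb = np.hstack([B, A @ B])          # controllability matrix
obsv = np.vstack([C, C @ A])          # observability matrix

print("controllable:", np.linalg.matrix_rank(ctrb) == A.shape[0])
print("observable:",   np.linalg.matrix_rank(obsv) == A.shape[0])
```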
From a geometrical point of view, looking at the states of each variable of the system to be controlled, every "bad" state of these variables must be controllable and observable to ensure a good behavior in the closed-loop system. That is, if one of the eigenvalues of the system is not both controllable and observable, this part of the dynamics will remain untouched in the closed-loop system. If such an eigenvalue is not stable, the dynamics of this eigenvalue will be present in the closed-loop system which therefore will be unstable. Unobservable poles are not present in the transfer function realization of a state-space representation, which is why sometimes the latter is preferred in dynamical systems analysis.
Solutions to problems of an uncontrollable or unobservable system include adding actuators and sensors.
Control specification.
Several different control strategies have been devised in the past years. These vary from extremely general ones (PID controller), to others devoted to very particular classes of systems (especially robotics or aircraft cruise control).
A control problem can have several specifications. Stability, of course, is always present. The controller must ensure that the closed-loop system is stable, regardless of the open-loop stability. A poor choice of controller can even worsen the stability of the open-loop system, which must normally be avoided. Sometimes it would be desired to obtain particular dynamics in the closed loop: i.e. that the poles have formula_8, where formula_9 is a fixed value strictly greater than zero, instead of simply asking that formula_10.
Another typical specification is the rejection of a step disturbance; including an integrator in the open-loop chain (i.e. directly before the system under control) easily achieves this. Other classes of disturbances need different types of sub-systems to be included.
Other "classical" control theory specifications regard the time-response of the closed-loop system. These include the rise time (the time needed by the control system to reach the desired value after a perturbation), peak overshoot (the highest value reached by the response before reaching the desired value) and others (settling time, quarter-decay). Frequency domain specifications are usually related to robustness (see after).
Modern performance assessments use some variation of integrated tracking error (IAE, ISA, CQI).
Model identification and robustness.
A control system must always have some robustness property. A robust controller is such that its properties do not change much if applied to a system slightly different from the mathematical one used for its synthesis. This requirement is important, as no real physical system truly behaves like the series of differential equations used to represent it mathematically. Typically a simpler mathematical model is chosen in order to simplify calculations, otherwise, the true system dynamics can be so complicated that a complete model is impossible.
The process of determining the equations that govern the model's dynamics is called system identification. This can be done off-line: for example, executing a series of measures from which to calculate an approximated mathematical model, typically its transfer function or matrix. Such identification from the output, however, cannot take account of unobservable dynamics. Sometimes the model is built directly starting from known physical equations, for example, in the case of a mass-spring-damper system we know that formula_11. Even assuming that a "complete" model is used in designing the controller, all the parameters included in these equations (called "nominal parameters") are never known with absolute precision; the control system will have to behave correctly even when connected to a physical system with true parameter values away from nominal.
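As a rough illustration of off-line identification, the sketch below simulates noisy measurements from an assumed mass-spring-damper system and recovers the stiffness and damping coefficients by least squares. The "true" parameter values, noise level, step size, and sample count are all invented for the example.

```python
import numpy as np

# Off-line identification sketch for m*x'' = -K*x - B*x': generate noisy data
# from assumed "true" parameters, then estimate K/m and B/m by least squares.
rng = np.random.default_rng(0)
m, K, B_damp = 1.0, 4.0, 0.5        # assumed true parameters (illustrative)
dt, n = 0.01, 2000

x, v = 1.0, 0.0
xs, vs, accs = [], [], []
for _ in range(n):
    a = (-K * x - B_damp * v) / m
    xs.append(x)
    vs.append(v)
    accs.append(a + 0.01 * rng.standard_normal())   # noisy acceleration measurement
    v += a * dt
    x += v * dt

# Regress measured acceleration on [x, v] to estimate -K/m and -B/m.
Phi = np.column_stack([xs, vs])
theta, *_ = np.linalg.lstsq(Phi, np.array(accs), rcond=None)
print("estimated K/m:", -theta[0], " estimated B/m:", -theta[1])
```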
Some advanced control techniques include an "on-line" identification process (see later). The parameters of the model are calculated ("identified") while the controller itself is running. In this way, if a drastic variation of the parameters ensues, for example, if the robot's arm releases a weight, the controller will adjust itself consequently in order to ensure the correct performance.
Analysis of the robustness of a SISO (single input single output) control system can be performed in the frequency domain, considering the system's transfer function and using Nyquist and Bode diagrams. Topics include gain and phase margin and amplitude margin. For MIMO (multi-input multi output) and, in general, more complicated control systems, one must consider the theoretical results devised for each control technique (see next section). I.e., if particular robustness qualities are needed, the engineer must shift their attention to a control technique by including these qualities in its properties.
A particular robustness issue is the requirement for a control system to perform properly in the presence of input and state constraints. In the physical world every signal is limited. It could happen that a controller will send control signals that cannot be followed by the physical system, for example, trying to rotate a valve at excessive speed. This can produce undesired behavior of the closed-loop system, or even damage or break actuators or other subsystems. Specific control techniques are available to solve the problem: model predictive control (see later), and anti-wind up systems. The latter consists of an additional control block that ensures that the control signal never exceeds a given threshold.
System classifications.
Linear systems control.
For MIMO systems, pole placement can be performed mathematically using a state space representation of the open-loop system and calculating a feedback matrix assigning poles in the desired positions. In complicated systems this can require computer-assisted calculation capabilities, and cannot always ensure robustness. Furthermore, all system states are not in general measured and so observers must be included and incorporated in pole placement design.
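A minimal pole-placement sketch for a single-input system is given below; the open-loop matrices, the desired closed-loop poles, and the resulting gain values are illustrative choices worked out by matching characteristic polynomials rather than the output of any particular design routine.

```python
import numpy as np

# Pole placement sketch for x' = A x + B u with full state feedback u = -K x.
# The open-loop system (controller canonical form) and the desired closed-loop
# poles are illustrative choices.
A = np.array([[0.0, 1.0],
              [-2.0, -3.0]])
B = np.array([[0.0],
              [1.0]])

# Desired closed-loop poles at -2 and -5, i.e. characteristic polynomial
# s^2 + 7s + 10.  Matching coefficients of A - B K (with K = [k1, k2])
# gives k1 = 10 - 2 = 8 and k2 = 7 - 3 = 4.
K = np.array([[8.0, 4.0]])

closed_loop = A - B @ K
print(np.linalg.eigvals(closed_loop))   # approximately [-2., -5.]
```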
Nonlinear systems control.
Processes in industries like robotics and the aerospace industry typically have strong nonlinear dynamics. In control theory it is sometimes possible to linearize such classes of systems and apply linear techniques, but in many cases it can be necessary to devise from scratch theories permitting control of nonlinear systems. These, e.g., feedback linearization, backstepping, sliding mode control, trajectory linearization control normally take advantage of results based on Lyapunov's theory. Differential geometry has been widely used as a tool for generalizing well-known linear control concepts to the nonlinear case, as well as showing the subtleties that make it a more challenging problem. Control theory has also been used to decipher the neural mechanism that directs cognitive states.
Decentralized systems control.
When the system is controlled by multiple controllers, the problem is one of decentralized control. Decentralization is helpful in many ways, for instance, it helps control systems to operate over a larger geographical area. The agents in decentralized control systems can interact using communication channels and coordinate their actions.
Deterministic and stochastic systems control.
A stochastic control problem is one in which the evolution of the state variables is subjected to random shocks from outside the system. A deterministic control problem is not subject to external random shocks.
Main control strategies.
Every control system must guarantee first the stability of the closed-loop behavior. For linear systems, this can be obtained by directly placing the poles. Nonlinear control systems use specific theories (normally based on Aleksandr Lyapunov's Theory) to ensure stability without regard to the inner dynamics of the system. The possibility to fulfill different specifications varies from the model considered and the control strategy chosen.
People in systems and control.
Many active and historical figures made significant contributions to control theory, including
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "x"
},
{
"math_id": 1,
"text": "\\rho"
},
{
"math_id": 2,
"text": "\\ x[n] = 0.5^n u[n]"
},
{
"math_id": 3,
"text": "\\ X(z) = \\frac{1}{1 - 0.5z^{-1}}"
},
{
"math_id": 4,
"text": "z = 0.5"
},
{
"math_id": 5,
"text": "\\ x[n] = 1.5^n u[n]"
},
{
"math_id": 6,
"text": "\\ X(z) = \\frac{1}{1 - 1.5z^{-1}}"
},
{
"math_id": 7,
"text": "z = 1.5"
},
{
"math_id": 8,
"text": "Re[\\lambda] < -\\overline{\\lambda}"
},
{
"math_id": 9,
"text": "\\overline{\\lambda}"
},
{
"math_id": 10,
"text": "Re[\\lambda]<0"
},
{
"math_id": 11,
"text": " m \\ddot{{x}}(t) = - K x(t) - \\Beta \\dot{x}(t)"
}
]
| https://en.wikipedia.org/wiki?curid=7039 |
70396474 | Square root of 7 | Positive real number which when multiplied by itself gives 7
The square root of 7 is the positive real number that, when multiplied by itself, gives the prime number 7. It is more precisely called the principal square root of 7, to distinguish it from the negative number with the same property. This number appears in various geometric and number-theoretic contexts. It can be denoted in surd form as:
formula_1
and in exponent form as:
formula_2
It is an irrational algebraic number. The first sixty significant digits of its decimal expansion are:
which can be rounded up to 2.646 to within about 99.99% accuracy (about 1 part in 10000); that is, it differs from the correct value by about . The approximation (≈ 2.645833...) is better: despite having a denominator of only 48, it differs from the correct value by less than , or less than one part in 33,000.
More than a million decimal digits of the square root of seven have been published.
Rational approximations.
The extraction of decimal-fraction approximations to square roots by various methods has used the square root of 7 as an example or exercise in textbooks, for hundreds of years. Different numbers of digits after the decimal point are shown: 5 in 1773 and 1852, 3 in 1835, 6 in 1808, and 7 in 1797.
An extraction by Newton's method (approximately) was illustrated in 1922, concluding that it is 2.646 "to the nearest thousandth".
For a family of good rational approximations, the square root of 7 can be expressed as the continued fraction
formula_3 (sequence in the OEIS)
The successive partial evaluations of the continued fraction, which are called its "convergents", approach formula_0:
formula_4
Their numerators are 2, 3, 5, 8, 37, 45, 82, 127, 590, 717, 1307, 2024, 9403, 11427, 20830, 32257…(sequence in the OEIS) , and their denominators are 1, 1, 2, 3, 14, 17, 31, 48, 223, 271, 494, 765, 3554, 4319, 7873, 12192,…(sequence in the OEIS).
Each convergent is a best rational approximation of formula_0; in other words, it is closer to formula_0 than any rational with a smaller denominator. Approximate decimal equivalents improve linearly (number of digits proportional to convergent number) at a rate of less than one digit per step:
formula_5
Every fourth convergent, starting with , expressed as , satisfies the Pell's equation
formula_6
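The convergents and the Pell-equation property can be reproduced with a short script; the sketch below uses plain Python and the standard continued-fraction recurrence, with the number of convergents printed (eight) chosen arbitrarily.

```python
from itertools import islice

def sqrt7_cf_terms():
    """Partial quotients of the continued fraction [2; 1, 1, 1, 4, 1, 1, 1, 4, ...]."""
    yield 2
    while True:
        yield from (1, 1, 1, 4)

def convergents(terms):
    """Yield (numerator, denominator) via h_n = a_n*h_{n-1} + h_{n-2} (same for k)."""
    h_prev, h = 0, 1     # h_{-2} = 0, h_{-1} = 1
    k_prev, k = 1, 0     # k_{-2} = 1, k_{-1} = 0
    for a in terms:
        h_prev, h = h, a * h + h_prev
        k_prev, k = k, a * k + k_prev
        yield h, k

convs = list(islice(convergents(sqrt7_cf_terms()), 8))
print(convs)   # (2,1), (3,1), (5,2), (8,3), (37,14), (45,17), (82,31), (127,48)

# Every fourth convergent, starting with 8/3, satisfies x^2 - 7*y^2 = 1.
for x, y in convs[3::4]:
    assert x * x - 7 * y * y == 1
```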
When formula_0 is approximated with the Babylonian method, starting with "x"1 = 3 and using "x""n"+1 = ("x""n" + 7/"x""n")/2, the "n"th approximant "x""n" is equal to the 2"n"th convergent of the continued fraction:
formula_7
All but the first of these satisfy the Pell's equation above.
The Babylonian method is equivalent to Newton's method for root finding applied to the polynomial formula_8. The Newton's method update, formula_9 is equal to formula_10 when formula_11. The method therefore converges quadratically (number of accurate decimal digits proportional to the square of the number of Newton or Babylonian steps).
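A brief sketch of the Babylonian iteration in exact rational arithmetic is shown below; it reproduces the iterates quoted above, and the choice of five iterations is arbitrary.

```python
from fractions import Fraction

# Babylonian (Newton) iteration x_{n+1} = (x_n + 7/x_n) / 2 starting from
# x_1 = 3, kept in exact rational arithmetic so each iterate matches a
# continued-fraction convergent exactly.
x = Fraction(3)
for n in range(1, 6):
    print(f"x_{n} = {x} ({float(x):.15f})")
    x = (x + Fraction(7) / x) / 2
```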
Geometry.
In plane geometry, the square root of 7 can be constructed via a sequence of dynamic rectangles, that is, as the largest diagonal of those rectangles illustrated here.
The minimal enclosing rectangle of an equilateral triangle of edge length 2 has a diagonal of the square root of 7.
Due to the Pythagorean theorem and Legendre's three-square theorem, formula_0 is the smallest square root of a natural number that cannot be the distance between any two points of a cubic integer lattice (or equivalently, the length of the space diagonal of a rectangular cuboid with integer side lengths). formula_12 is the next smallest such number.
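This claim can be checked by brute force: the short sketch below lists which small integers are sums of three squares, confirming that 7 is the first that is not and 15 the next; the search bound of 20 is an arbitrary choice.

```python
# Brute-force check: which n can be written as a^2 + b^2 + c^2 with
# non-negative integers a, b, c?  Squared distances between points of the
# cubic integer lattice are exactly such n.
def is_sum_of_three_squares(n):
    limit = int(n ** 0.5) + 1
    return any(a * a + b * b + c * c == n
               for a in range(limit)
               for b in range(limit)
               for c in range(limit))

not_representable = [n for n in range(1, 20) if not is_sum_of_three_squares(n)]
print(not_representable)   # [7, 15]; so sqrt(7) is the smallest such square root, then sqrt(15)
```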
Outside of mathematics.
On the reverse of the current US one-dollar bill, the "large inner box" has a length-to-width ratio of the square root of 7, and a diagonal of 6.0 inches, to within measurement accuracy.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\sqrt{7}"
},
{
"math_id": 1,
"text": "\\sqrt{7}\\, , "
},
{
"math_id": 2,
"text": "7^\\frac{1}{2}."
},
{
"math_id": 3,
"text": " [2; 1, 1, 1, 4, 1, 1, 1, 4,\\ldots] = 2 + \\cfrac 1 {1 + \\cfrac 1 {1 + \\cfrac 1 {1 + \\cfrac 1 {4 + \\cfrac 1 {1 + \\dots}}}}}."
},
{
"math_id": 4,
"text": "\\frac{2}{1}, \\frac{3}{1}, \\frac{5}{2}, \\frac{8}{3}, \\frac{37}{14}, \\frac{45}{17}, \\frac{82}{31}, \\frac{127}{48}, \\frac{590}{223}, \\frac{717}{271}, \\dots"
},
{
"math_id": 5,
"text": "\\frac{2}{1} = 2.0,\\quad \\frac{3}{1} = 3.0,\\quad \\frac{5}{2} = 2.5,\\quad \\frac{8}{3} = 2.66\\dots,\\quad \\frac{37}{14} = 2.6429...,\\quad \\frac{45}{17} = 2.64705...,\\quad \\frac{82}{31} = 2.64516...,\\quad \\frac{127}{48} = 2.645833..., \\quad \\ldots"
},
{
"math_id": 6,
"text": "x^2 - 7y^2 = 1 ."
},
{
"math_id": 7,
"text": " x_1 = 3, \\quad x_2 = \\frac{8}{3} = 2.66..., \\quad x_3 = \\frac{127}{48} =2.6458..., \\quad x_4 = \\frac{32257}{12192} = 2.645751312..., \\quad x_5 = \\frac{2081028097}{786554688} = 2.645751311064591..., \\quad \\dots"
},
{
"math_id": 8,
"text": "x^2-7"
},
{
"math_id": 9,
"text": "x_{n+1} = x_n - f(x_n)/f'(x_n),"
},
{
"math_id": 10,
"text": "(x_n + 7/x_n)/2"
},
{
"math_id": 11,
"text": "f(x) = x^2 - 7"
},
{
"math_id": 12,
"text": "\\sqrt{15}"
}
]
| https://en.wikipedia.org/wiki?curid=70396474 |
70400 | Bounded rationality | Making of satisfactory, not optimal, decisions
Bounded rationality is the idea that rationality is limited when individuals make decisions, and under these limitations, rational individuals will select a decision that is satisfactory rather than optimal.
Limitations include the difficulty of the problem requiring a decision, the cognitive capability of the mind, and the time available to make the decision. Decision-makers, in this view, act as satisficers, seeking a satisfactory solution, with everything that they have at the moment rather than an optimal solution. Therefore, humans do not undertake a full cost-benefit analysis to determine the optimal decision, but rather, choose an option that fulfills their adequacy criteria.
Some models of human behavior in the social sciences assume that humans can be reasonably approximated or described as rational entities, as in rational choice theory or Downs' political agency model. The concept of bounded rationality complements the idea of rationality as optimization, which views decision-making as a fully rational process of finding an optimal choice given the information available. Therefore, bounded rationality can be said to address the discrepancy between the assumed perfect rationality of human behaviour (which is utilised by other economics theories), and the reality of human cognition. In short, bounded rationality revises notions of perfect rationality to account for the fact that perfectly rational decisions are often not feasible in practice because of the intractability of natural decision problems and the finite computational resources available for making them. The concept of bounded rationality continues to influence (and be debated in) different disciplines, including political science, economics, psychology, law and cognitive science.
Background and motivation.
The term "bounded rationality" was coined by Herbert A. Simon, who proposed it as an alternative basis for the mathematical and neoclassical economic modelling of decision-making, as used in economics, political science, and related disciplines. Many economic models assume that agents are on average rational, and can in large quantities be approximated to act according to their preferences in order to maximise utility. With bounded rationality, Simon's goal was "to replace the global rationality of economic man with a kind of rational behavior that is compatible with the access to information and the computational capacities that are actually possessed by organisms, including man, in the kinds of environments in which such organisms exist." Soon after the term bounded rationality appeared, studies in the topic area began examining the issue in depth. A study completed by Allais in 1953 began to generate ideas of the irrationality of decision making, as he found that, given preferences, individuals will not always choose the most rational decision, and therefore the concept of rationality was not always reliable for economic predictions.
In "Models of Man", Simon argues that most people are only partly rational, and are irrational in the remaining part of their actions. In another work, he states "boundedly rational agents experience limits in formulating and solving complex problems and in processing (receiving, storing, retrieving, transmitting) information". Simon used the analogy of a pair of scissors, where one blade represents "cognitive limitations" of actual humans and the other the "structures of the environment", illustrating how minds compensate for limited resources by exploiting known structural regularity in the environment.
Simon describes a number of dimensions along which classical models of rationality can be made somewhat more realistic, while remaining within the vein of fairly rigorous formalization. These include:
Simon suggests that economic agents use heuristics to make decisions rather than a strict rigid rule of optimization. They do this because of the complexity of the situation. An example of behaviour inhibited by heuristics can be seen when comparing the cognitive strategies utilised in simple situations (e.g. tic-tac-toe) with those utilised in difficult situations (e.g. chess). Both games, as defined by game theory economics, are finite games with perfect information, and therefore equivalent. However, within chess, mental capacities and abilities are a binding constraint, so optimal choices are not a possibility. Thus, in order to test the mental limits of agents, complex problems, such as those within chess, should be studied to test how individuals work around their cognitive limits, and what behaviours or heuristics are used to form solutions.
Anchoring and adjustment are types of heuristics that give some explanation to bounded rationality and why decision makers do not make rational decisions. A study undertaken by Zenko et al. showed that the amount of physical activity completed by decision makers was able to be influenced by anchoring and adjustment, as most decision makers would typically be considered irrational and would be unlikely to do the amount of physical activity instructed; it was shown that these decision makers used anchoring and adjustment to decide how much exercise they would complete.
Other heuristics that are closely related to the concept of bounded rationality include the availability heuristic and representativeness heuristic. The availability heuristic refers to how people tend to overestimate the likelihood of events that are easily brought to mind, such as vivid or recent experiences. This can lead to biased judgments based on incomplete or unrepresentative information. The representativeness heuristic states that people often judge the probability of an event based on how closely it resembles a typical or representative case, ignoring other relevant factors like base rates or sample size. These mental shortcuts and systematic errors in thinking demonstrate how people's decision-making abilities are limited and often deviate from perfect rationality.
Example.
An example of bounded rationality in individuals would be a customer who made a suboptimal decision to order some food at the restaurant because they felt rushed by the waiter who was waiting beside the table. Another example is a trader who would make a moderate and risky decision to trade their stock due to time pressure and imperfect information of the market at that time.
In an organisational context, a CEO cannot make fully rational decisions in an ad-hoc situation because their cognition is overwhelmed by the amount of information in that tense situation. The CEO also needs time to process all the information given to them, but due to the limited time and the fast decision making needed, they will disregard some of the information in determining the decision.
Bounded rationality can have significant effects on political decision-making, voter behavior, and policy outcomes. A prominent example of this is heuristic-based voting. According to the theory of bounded rationality, individuals have limited time, information, and cognitive resources to make decisions. In the context of voting, this means that most voters cannot realistically gather and process all available information about candidates, issues, and policies. Even if such information were available, the time and effort required to analyze it would be prohibitively high for many voters. As a result, voters often resort to heuristics, which allow voters to make decisions based on cues like party affiliation, candidate appearance, or single-issue positions, rather than engaging in a comprehensive evaluation of all relevant factors. For example, a voter who relies on the heuristic of party affiliation may vote for a candidate whose policies do not actually align with their interests, simply because the candidate belongs to their preferred party.
Model extensions.
As decision-makers have to make decisions about how and when to decide, Ariel Rubinstein proposed to model bounded rationality by explicitly specifying decision-making procedures, since decision-makers with the same information are not necessarily able to analyse the situation equally and thus reach the same rational decision. Rubinstein argues that consistency in reaching a final decision from the same level of information must factor in the decision-making procedure itself. This puts the study of decision procedures on the research agenda.
Gerd Gigerenzer stated that decision theorists, to some extent, have not adhered to Simon's original ideas. Rather, they have considered how decisions may be crippled by limitations to rationality, or have modeled how people might cope with their inability to optimize. Gigerenzer proposes and shows that simple heuristics often lead to better decisions than theoretically optimal procedures. Moreover, Gigerenzer claimed, agents react relative to their environment and use their cognitive processes to adapt accordingly.
Huw Dixon later argued that it may not be necessary to analyze in detail the process of reasoning underlying bounded rationality. If we believe that agents will choose an action that gets them close to the optimum, then we can use the notion of "epsilon-optimization", which means we choose our actions so that the payoff is within epsilon of the optimum. If we define the optimum (best possible) payoff as formula_0, then the set of epsilon-optimizing options S(ε) can be defined as all those options s such that:
formula_1
The notion of strict rationality is then a special case ("ε"=0). The advantage of this approach is that it avoids having to specify in detail the process of reasoning, but rather simply assumes that whatever the process is, it is good enough to get near to the optimum.
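A minimal sketch of epsilon-optimization over a finite set of options is given below; the option names, payoffs, and the value of epsilon are invented purely for illustration.

```python
# Epsilon-optimization: any option whose payoff is within epsilon of the
# optimal payoff counts as acceptable.  Strict rationality is epsilon = 0.
payoffs = {"option_a": 10.0, "option_b": 9.7, "option_c": 6.0}   # made-up payoffs
epsilon = 0.5

u_star = max(payoffs.values())                                    # optimal payoff
S_eps = {s for s, u in payoffs.items() if u >= u_star - epsilon}  # the set S(epsilon)

print(sorted(S_eps))   # ['option_a', 'option_b']
```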
From a computational point of view, decision procedures can be encoded in algorithms and heuristics. Edward Tsang argues that the effective rationality of an agent is determined by its computational intelligence. Everything else being equal, an agent that has better algorithms and heuristics could make more rational (closer to optimal) decisions than one that has poorer heuristics and algorithms.
Tshilidzi Marwala and Evan Hurwitz in their study on bounded rationality observed that advances in technology (e.g. computer processing power because of Moore's law, artificial intelligence, and big data analytics) expand the bounds that define the feasible rationality space. Because of this expansion of the bounds of rationality, machine automated decision making makes markets more efficient.
The model of bounded rationality also extends to bounded self-interest, in which humans are sometimes willing to forsake their own self-interests for the benefits of others due to incomplete information that the individuals have at the time being. This is something that had not been considered in earlier economic models.
The theory of rational inattention, an extension of bounded rationality, studied by Christopher Sims, found that decisions may be chosen with incomplete information as opposed to affording the cost to receive complete information. This shows that decision makers choose to endure bounded rationality.
On the other hand, another extension came from the notion of bounded rationality and was explained by Ulrich Hoffrage and Torsten Reimer in their studies of a "fast and frugal heuristic approach". The studies explained that complete information sometimes is not needed as there are easier and simpler ways to reach the same optimal outcome. However, this approach which is usually known as the gaze heuristic was explained to be the theory for non-complex decision making only.
Bounded Rationality and Nudging.
Nudging is a concept in behavioral economics that is closely related to the idea of bounded rationality. Nudging involves designing choice architectures that guide people towards making better decisions without limiting their freedom of choice. The concept was popularized by Richard Thaler and Cass Sunstein in their 2008 book "Nudge".
The connection between nudging and bounded rationality lies in the fact that nudges are designed to help people overcome the cognitive limitations and biases that arise from their bounded rationality.
One way nudges are used is with the aim of simplifying complex decisions by presenting information in a clear and easily understandable format, reducing the cognitive burden on individuals. Nudges can also be designed to counteract common heuristics and biases, such as the default bias (people's tendency to stick with the default option). For example, with adequate other policies in place, making posthumous organ donation the default option with an opt-out provision has been shown to increase actual donation rates. Moreover, in cases where the information needed to make an informed decision is incomplete, nudges can provide the relevant information. For instance, displaying the calorie content of menu items can help people make healthier food choices. Nudges can also guide people towards satisfactory options when they are unable or unwilling to invest the time and effort to find the optimal choice. For example, providing a limited set of well-designed investment options in a retirement plan can help people make better financial decisions.
As nudging has become more popular in the last decade, governments around the world and nongovernmental organizations like the United Nations have established behavioral insights teams or incorporated nudging into their policy-making processes.
Bounded rationality attempts to address assumption points discussed within neoclassical economics theory during the 1950s. This theory assumes that the complex problem, the way in which the problem is presented, all alternative choices, and a utility function, are all provided to decision-makers in advance, where this may not be realistic. This was widely used and accepted for a number of decades, however economists realised some disadvantages exist in utilising this theory. This theory did not consider how problems are initially discovered by decision-makers, which could have an impact on the overall decision. Additionally, personal values, the way in which alternatives are discovered and created, and the environment surrounding the decision-making process are also not considered when using this theory. Alternatively, bounded rationality focuses on the cognitive ability of the decision-maker and the factors which may inhibit optimal decision-making. Additionally, placing a focus on organisations rather than focusing on markets as neoclassical economics theory does, bounded rationality is also the basis for many other economics theories (e.g. organisational theory) as it emphasises that the "...performance and success of an organisation is governed primarily by the psychological limitations of its members..." as stated by John D.W. Morecroft (1981).
Principles of Boundedness.
In addition to bounded rationality, bounded willpower and bounded selfishness are two other key concepts in behavioral economics that challenge the traditional neoclassical economic assumption of perfectly rational, self-interested, and self-disciplined individuals.
Bounded willpower refers to the idea that people often have difficulty following through on their long-term plans and intentions due to limited self-control and the tendency to prioritize short-term desires. This can lead to problems like procrastination, impulsive spending, and unhealthy lifestyle choices. The concept of bounded willpower is closely related to the idea of hyperbolic discounting, which describes how people tend to value immediate rewards more highly than future ones, leading to inconsistent preferences over time.
While traditional economic models assume that people are primarily motivated by self-interest, bounded selfishness suggests that people also have social preferences and care about factors such as fairness, reciprocity, and the well-being of others. This concept helps explain phenomena like charitable giving, cooperation in social dilemmas, and the existence of social norms. However, people's concern for others is often bounded in the sense that it is limited in scope and can be influenced by factors such as in-group favoritism and emotional distance.
Together, these three concepts form the core of behavioral economics and have been used to develop more realistic models of human decision-making and behavior. By recognizing the limitations and biases that people face in their daily lives, behavioral economists aim to design policies, institutions, and choice architectures that can help people make better decisions and achieve their long-term goals.
In psychology.
The collaborative work of Daniel Kahneman and Amos Tversky expands upon Herbert A. Simon's ideas in an attempt to create a map of bounded rationality. The research explored how the choices of idealized rational agents compare with the choices individuals actually make, given their beliefs and their satisficing behaviour. Kahneman notes that the research contributes mainly to psychology, because psychological findings are too imprecise to fit formal economic models; nevertheless, the theories are useful to economic theory as a way to expand simple and precise models and cover diverse psychological phenomena. Three major topics covered by the work of Kahneman and Tversky are heuristics of judgement, risky choice, and the framing effect, a culmination of research that fits under what Simon defined as the psychology of bounded rationality. In contrast to the work of Simon, Kahneman and Tversky focused on the effects bounded rationality has on simple tasks, thereby placing more emphasis on errors in cognitive mechanisms irrespective of the situation. Kahneman's research found that emotions and the psychology of economic decisions play a larger role in economics than originally thought: factors such as fear and personal likes and dislikes proved to be significant in economic decision making.
Bounded rationality has also proved useful in negotiation: research by Dehai et al. found that when labourers and companies used bounded-rationality techniques to negotiate a higher wage for workers, both parties were able to reach an equitable solution.
Influence on social network structure.
Recent research has shown that bounded rationality of individuals may influence the topology of the social networks that evolve among them. In particular, Kasthurirathna and Piraveenan have shown that in socio-ecological systems, the drive towards improved rationality on average might be an evolutionary reason for the emergence of scale-free properties. They did this by simulating a number of strategic games on an initially random network with distributed bounded rationality, then re-wiring the network so that the network on average converged towards Nash equilibria, despite the bounded rationality of nodes. They observed that this re-wiring process results in scale-free networks. Since scale-free networks are ubiquitous in social systems, the link between bounded rationality distributions and social structure is an important one in explaining social phenomena.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": " U^* "
},
{
"math_id": 1,
"text": " U(s) \\geq U^* - \\epsilon."
}
]
| https://en.wikipedia.org/wiki?curid=70400 |
70401 | Satisficing | Cognitive heuristic of searching for an acceptable decision
Satisficing is a decision-making strategy or cognitive heuristic that entails searching through the available alternatives until an acceptability threshold is met. The term "satisficing", a portmanteau of "satisfy" and "suffice", was introduced by Herbert A. Simon in 1956, although the concept was first posited in his 1947 book "Administrative Behavior". Simon used satisficing to explain the behavior of decision makers under circumstances in which an optimal solution cannot be determined. He maintained that many natural problems are characterized by computational intractability or a lack of information, both of which preclude the use of mathematical optimization procedures. He observed in his Nobel Prize in Economics speech that "decision makers can satisfice either by finding optimum solutions for a simplified world, or by finding satisfactory solutions for a more realistic world. Neither approach, in general, dominates the other, and both have continued to co-exist in the world of management science".
Simon formulated the concept within a novel approach to rationality, which posits that rational choice theory is an unrealistic description of human decision processes and calls for psychological realism. He referred to this approach as bounded rationality. Some consequentialist theories in moral philosophy use the concept of satisficing in the same sense, though most call for optimization instead.
In decision-making research.
In decision making, satisficing refers to the use of aspiration levels when choosing from different paths of action. By this account, decision-makers select the first option that meets a given need or select the option that seems to address most needs rather than the "optimal" solution.
Example: A task is to sew a patch onto a pair of blue pants. The best needle to do the threading is a 4-cm-long needle with a 3-millimeter eye. This needle is hidden in a haystack along with 1,000 other needles varying in size from 1 cm to 6 cm. Satisficing claims that the first needle that can sew on the patch is the one that should be used. Spending time searching for that one specific needle in the haystack is a waste of energy and resources.
A crucial determinant of a satisficing decision strategy concerns the construction of the aspiration level. In many circumstances, the individual may be uncertain about the aspiration level.
Example: An individual who only seeks a satisfactory retirement income may not know what level of wealth is required—given uncertainty about future prices—to ensure a satisfactory income. In this case, the individual can only evaluate outcomes on the basis of their probability of being satisfactory. If the individual chooses that outcome which has the maximum chance of being satisfactory, then this individual's behavior is theoretically indistinguishable from that of an optimizing individual under certain conditions.
Another key issue concerns an evaluation of satisficing strategies. Although often regarded as an inferior decision strategy, specific satisficing strategies for inference have been shown to be ecologically rational, that is in particular decision environments, they can outperform alternative decision strategies.
Satisficing also occurs in consensus building when the group looks towards a solution everyone can agree on even if it may not be the best.
Example: A group spends hours projecting the next fiscal year's budget. After hours of debating they eventually reach a consensus, only to have one person speak up and ask if the projections are correct. When the group becomes upset at the question, it is not because this person is wrong to ask, but rather because the group has already come up with a solution that works. The projection may not be what will actually come, but the majority agrees on one number and thus the projection is good enough to close the book on the budget.
Optimization.
One popular method for rationalizing satisficing is optimization when "all" costs, including the cost of the optimization calculations themselves and the cost of getting information for use in those calculations, are considered. As a result, the eventual choice is usually sub-optimal in regard to the main goal of the optimization, i.e., different from the optimum in the case that the costs of choosing are not taken into account.
As a form of optimization.
Alternatively, satisficing can be considered to be just constraint satisfaction, the process of finding a solution satisfying a set of constraints, without concern for finding an optimum. Any such satisficing problem can be formulated as an (equivalent) optimization problem using the indicator function of the satisficing requirements as an objective function. More formally, if X denotes the set of all options and S ⊆ X denotes the set of "satisficing" options, then selecting a satisficing solution (an element of S) is equivalent to the following optimization problem
formula_0
where Is denotes the Indicator function of S, that is
formula_1
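As a rough illustration, the indicator-function formulation can be written out directly; the following Python sketch uses a made-up option set X and satisficing set S, neither of which comes from the text above:

```python
# Satisficing recast as maximization of the indicator function I_S.
# The option set X and satisficing set S are illustrative assumptions.

X = ["a", "b", "c", "d"]   # all options
S = {"b", "d"}             # options meeting the acceptability threshold

def indicator(s):
    """I_S(s): 1 if s is satisficing, 0 otherwise."""
    return 1 if s in S else 0

# Maximizing I_S over X yields some element of S (Python's max returns the
# first option attaining the maximum value of the key).
best = max(X, key=indicator)
assert indicator(best) == 1   # any maximizer is a satisficing option
print(best)                   # -> "b"
```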
A solution s ∈ X to this optimization problem is optimal if, and only if, it is a satisficing option (an element of S). Thus, from a decision theory point of view, the distinction between "optimizing" and "satisficing" is essentially a stylistic issue (that can nevertheless be very important in certain applications) rather than a substantive issue. What is important to determine is what should be optimized and what should be satisficed. The following quote from Jan Odhnoff's 1965 paper is appropriate:
<templatestyles src="Template:Blockquote/styles.css" />In my opinion there is room for both 'optimizing' and 'satisficing' models in business economics. Unfortunately, the difference between 'optimizing' and 'satisficing' is often referred to as a difference in the quality of a certain choice. It is a triviality that an optimal result in an optimization can be an unsatisfactory result in a satisficing model. The best things would therefore be to avoid a general use of these two words.
Applied to the utility framework.
In economics, satisficing is a behavior which attempts to achieve at least some minimum level of a particular variable, but which does not necessarily maximize its value. The most common application of the concept in economics is in the behavioral theory of the firm, which, unlike traditional accounts, postulates that producers treat profit not as a goal to be maximized, but as a constraint. Under these theories, a critical level of profit must be achieved by firms; thereafter, priority is attached to the attainment of other goals.
More formally, as before, let X denote the set of all options s, and let U(s) be the payoff function giving the payoff enjoyed by the agent for each option. Define the optimum payoff U* as the solution to
formula_2
with the optimum actions being the set O of options such that "U"("s"*) = "U"* (i.e. it is the set of all options that yield the maximum payoff). Assume that the set O has at least one element.
The idea of the aspiration level was introduced by Herbert A. Simon and developed in economics by Richard Cyert and James March in their 1963 book "A Behavioral Theory of the Firm". The aspiration level is the payoff that the agent aspires to: if the agent achieves at least this level it is satisfied, and if it does not achieve it, the agent is not satisfied. Let us define the aspiration level "A" and assume that "A" ≤ "U"*. Clearly, whilst it is possible that someone can aspire to something that is better than the optimum, it is in a sense irrational to do so. So, we require the aspiration level to be at or below the optimum payoff.
We can then define the set of satisficing options S as all those options that yield at least A: s ∈ S if and only if A ≤ U(s). Clearly since A ≤ U*, it follows that O ⊆ S. That is, the set of optimum actions is a subset of the set of satisficing options. So, when an agent satisfices, then she will choose from a larger set of actions than the agent who optimizes. One way of looking at this is that the satisficing agent is not putting in the effort to get to the precise optimum or is unable to exclude actions that are below the optimum but still above aspiration.
An equivalent way of looking at satisficing is epsilon-optimization (that means you choose your actions so that the payoff is within epsilon of the optimum). If we define the "gap" between the optimum and the aspiration as ε, where ε = "U"* − "A", then the set of satisficing options S(ε) can be defined as all those options s such that U(s) ≥ U* − ε.
Other applications in economics.
Apart from the behavioral theory of the firm, applications of the idea of satisficing behavior in economics include the Akerlof and Yellen model of menu cost, popular in New Keynesian macroeconomics. Also, in economics and game theory there is the notion of an Epsilon-equilibrium, which is a generalization of the standard Nash equilibrium in which each player is within ε of his or her optimal payoff (the standard Nash equilibrium being the special case where ε = 0).
Endogenous aspiration levels.
The aspiration level may be determined by past experience (some function of an agent's or firm's previous payoffs) or by organizational or market institutions. For example, if we think of managerial firms, the managers will be expected to earn normal profits by their shareholders. Other institutions may have specific targets imposed externally (for example state-funded universities in the UK have targets for student recruitment).
An economic example is the Dixon model of an economy consisting of many firms operating in different industries, where each industry is a duopoly. The endogenous aspiration level is the average profit in the economy. This represents the power of the financial markets: in the long-run firms need to earn normal profits or they die (as Armen Alchian once said, "This is the criterion by which the economic system selects survivors: those who realize positive profits are the survivors; those who suffer losses disappear"). We can then think what happens over time. If firms are earning profits at or above their aspiration level, then they just stay doing what they are doing (unlike the optimizing firm which would always strive to earn the highest profits possible). However, if the firms are earning below aspiration, then they try something else, until they get into a situation where they attain their aspiration level. It can be shown that in this economy, satisficing leads to collusion amongst firms: competition between firms leads to lower profits for one or both of the firms in a duopoly. This means that competition is unstable: one or both of the firms will fail to achieve their aspirations and hence try something else. The only situation which is stable is one where all firms achieve their aspirations, which can only happen when all firms earn average profits. In general, this will only happen if all firms earn the joint-profit maximizing or collusive profit.
In personality and happiness research.
Some research has suggested that satisficing/maximizing and other decision-making strategies, like personality traits, have a strong genetic component and endure over time. This genetic influence on decision-making behaviors has been found through classical twin studies, in which decision-making tendencies are self-reported by each member of a twinned pair and then compared between monozygotic and dizygotic twins. This implies that people can be categorized into "maximizers" and "satisficers", with some people landing in between.
The distinction between satisficing and maximizing not only differs in the decision-making process, but also in the post-decision evaluation. Maximizers tend to use a more exhaustive approach to their decision-making process: they seek and evaluate more options than satisficers do to achieve greater satisfaction. However, whereas satisficers tend to be relatively pleased with their decisions, maximizers tend to be less happy with their decision outcomes. This is thought to be due to limited cognitive resources people have when their options are vast, forcing maximizers to not make an optimal choice. Because maximization is unrealistic and usually impossible in everyday life, maximizers often feel regretful in their post-choice evaluation.
In survey methodology.
As an example of satisficing, in the field of social cognition, Jon Krosnick proposed a theory of statistical survey satisficing which says that optimal question answering by a survey respondent involves a great deal of cognitive work and that some people would use satisficing to reduce that burden.
Some people may shortcut their cognitive processes in two ways:
Likelihood to satisfice is linked to respondent ability, respondent motivation and task difficulty.
Regarding survey answers, satisficing manifests in:
See also.
<templatestyles src="Div col/styles.css"/>
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\max_{s\\in X} I_{S}(s)"
},
{
"math_id": 1,
"text": "I_{S}(s):=\\begin{cases} \\begin{array}{ccc} 1 &,& s\\in S\\\\\n0 &,& s\\notin S\n\\end{array}\n\\end{cases} \\ , \\ s\\in X"
},
{
"math_id": 2,
"text": "\\max_{s\\in X} U(s)"
}
]
| https://en.wikipedia.org/wiki?curid=70401 |
70405123 | Judges 17 | Book of Judges chapter
Judges 17 is the seventeenth chapter of the Book of Judges in the Old Testament or the Hebrew Bible. According to Jewish tradition the book was attributed to the prophet Samuel, but modern scholars view it as part of the Deuteronomistic History, which spans the books of Deuteronomy to 2 Kings and is attributed to nationalistic and devotedly Yahwistic writers during the time of the reformer Judean king Josiah in the 7th century BCE. This chapter records the activities of Micah of Ephraim, belonging to a section comprising Judges 17 to 21.
Text.
This chapter was originally written in the Hebrew language. It is divided into 13 verses.
Textual witnesses.
Some early manuscripts containing the text of this chapter in Hebrew are of the Masoretic Text tradition, which includes the Codex Cairensis (895), Aleppo Codex (10th century), and Codex Leningradensis (1008).
Extant ancient manuscripts of a translation into Koine Greek known as the Septuagint (originally made in the last few centuries BCE) include Codex Vaticanus (B; formula_0B; 4th century) and Codex Alexandrinus (A; formula_0A; 5th century).
Analysis.
Double Introduction and Double Conclusion.
Chapters 17 to 21 contain the "Double Conclusion" of the Book of Judges and form a type of inclusio together with their counterpart, the "Double Introduction", in chapters 1 to 3:6 as in the following structure of the whole book:
A. Foreign wars of subjugation with the "ḥērem" being applied (1:1–2:5)
B. Difficulties with foreign religious idols (2:6–3:6)
Main part: the "cycles" section (3:7–16:31)
B'. Difficulties with domestic religious idols (17:1–18:31)
A'. Domestic wars with the "ḥērem" being applied (19:1–21:25)
There are similar parallels between the double introduction and the double conclusion as the following:
The entire double conclusion is connected by the four-time repetition of a unique statement: twice in full at the beginning and the end of the double conclusion and twice in the center of the section as follows:
A. In those days there was no king…
Every man did what was right in his own eyes (17:6)
B. In those days there was no king… (18:1)
B'. In those days there was no king… (19:1)
A'. In those days there was no king…
Every man did what was right in his own eyes (21:25)
It also contains internal links:
Conclusion 1 (17:1–18:31): A Levite in Judah moving to the hill country of Ephraim and then on to Dan.
Conclusion 2 (19:1–21:25): A Levite in Ephraim looking for his concubine in Bethlehem in Judah.
Both sections end with a reference to Shiloh.
The Bethlehem Trilogy.
Three sections of the Hebrew Bible (Old Testament) — Judges 17–18, Judges 19–21, Ruth 1–4 — form a trilogy with a link to the city Bethlehem of Judah and characterized by the repetitive unique statement:
"In those days there was no king in Israel; everyone did what was right in his own eyes"
(Judges 17:6; 18:1; 19:1; 21:25)
as in the following chart:
The founding myth of Dan.
Chapters 17–18 record a Danite founding myth that gives insight into Israelite early religious lives, and the ideology of war as background to the establishment of Dan as a city. Reading the entire section in the light of Deuteronomy 12:1–13:1, there are several thematic elements and concerns in common, although Judges 17:1–18:31 usually portrays them antithetically.
Micah's idols (17:1–6).
The section starts with a confession of a guilty son named Micah, who had stolen his mother's money, but now returned it to her. The mother was not angry, but instead praised God for her son's remorse and asked him to dedicate the money to YHWH by making "a carved statue" (Hebrew: "pesel") and "a cast metal icon" (Hebrew: "massekah"), which were used as symbols of a deity's indwelling presence (cf. Micah's words to the Danites in Judges 18:24). Micah completed his private shrine with a 'divinatory ephod' (cf. Gideon's in Judges 8:27) and teraphim (cf. Genesis 31:30, 34–35), then installed one of his own sons to serve as priest.
"And he said to his mother, "The eleven hundred shekels of silver that were taken from you, and on which you put a curse, even saying it in my ears—here is the silver with me; I took it.""
"And his mother said, "May you be blessed by the Lord, my son!""
Micah and the Levite (17:7–13).
This section shows the venerable status of Levites in Israel (cf. 1 Chronicles 6:26), so the presence of a levitical priest would lend a special recognition to a shrine, 'granting its owner prestige and divine blessing'.
7"And there was a young man out of Bethlehemjudah of the family of Judah, who was a Levite, and he sojourned there."
8" And the man departed out of the city from Bethlehemjudah to sojourn where he could find a place: and he came to mount Ephraim to the house of Micah, as he journeyed."
Notes.
<templatestyles src="Reflist/styles.css" />
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": " \\mathfrak{G}"
}
]
| https://en.wikipedia.org/wiki?curid=70405123 |
70406705 | Judges 18 | Book of Judges chapter
Judges 18 is the eighteenth chapter of the Book of Judges in the Old Testament or the Hebrew Bible. According to Jewish tradition the book was attributed to the prophet Samuel, but modern scholars view it as part of the Deuteronomistic History, which spans the books of Deuteronomy to 2 Kings and is attributed to nationalistic and devotedly Yahwistic writers during the time of the reformer Judean king Josiah in the 7th century BCE. This chapter records the activities of the tribe of Dan, and belongs to a section comprising Judges 17 to 21.
Text.
This chapter was originally written in the Hebrew language. It is divided into 31 verses.
Textual witnesses.
Some early manuscripts containing the text of this chapter in Hebrew are of the Masoretic Text tradition, which includes the Codex Cairensis (895), Aleppo Codex (10th century), and Codex Leningradensis (1008).
Extant ancient manuscripts of a translation into Koine Greek known as the Septuagint (originally made in the last few centuries BCE) include Codex Vaticanus (B; formula_0B; 4th century) and Codex Alexandrinus (A; formula_0A; 5th century).
Analysis.
Double Introduction and Double Conclusion.
Chapters 17 to 21 contain the "Double Conclusion" of the Book of Judges and form a type of inclusio together with their counterpart, the "Double Introduction", in chapters 1 to 3:6 as in the following structure of the whole book:
A. Foreign wars of subjugation with the "ḥērem" being applied (1:1–2:5)
B. Difficulties with foreign religious idols (2:6–3:6)
Main part: the "cycles" section (3:7–16:31)
B'. Difficulties with domestic religious idols (17:1–18:31)
A'. Domestic wars with the "ḥērem" being applied (19:1–21:25)
There are similar parallels between the double introduction and the double conclusion as the following:
The entire double conclusion is connected by the four-time repetition of a unique statement: twice in full at the beginning and the end of the double conclusion and twice in the center of the section as follows:
A. In those days there was no king…
Every man did what was right in his own eyes (17:6)
B. In those days there was no king… (18:1)
B'. In those days there was no king… (19:1)
A'. In those days there was no king…
Every man did what was right in his own eyes (21:25)
It also contains internal links:
Conclusion 1 (17:1–18:31): A Levite in Judah moving to the hill country of Ephraim and then on to Dan.
Conclusion 2 (19:1–21:25): A Levite in Ephraim looking for his concubine in Bethlehem in Judah.
Both sections end with a reference to Shiloh.
The Bethlehem Trilogy.
Three sections of the Hebrew Bible (Old Testament) — Judges 17–18, Judges 19–21, Ruth 1–4 — form a trilogy with a link to the city Bethlehem of Judah and characterized by the repetitive unique statement:
"In those days there was no king in Israel; everyone did what was right in his own eyes"
(Judges 17:6; 18:1; 19:1; 21:25)
as in the following chart:
The founding story of Dan.
Chapters 17–18 record a Danite founding narrative that gives insight into Israelite early religious lives, and the ideology of war as background to the establishment of Dan as a city. Reading the entire section in the light of Deuteronomy 12:1–13:1, there are several thematic elements and concerns in common, although Judges 17:1–18:31 usually portrays them antithetically.
The Danite spies (18:1–13).
This chapter starts with the report of a Danite clan in search of a new homeland, sending out a reconnaissance mission (verse 2; cf. Numbers 13; Joshua 2; Judges 6:10–14). While receiving hospitality in Micah's household, the Danite spies met the Levite at Micah's shrine and could have recognized the priest's southern accent or dialect (verse 3). A request for an oracle or a sign before battle is a typical feature of traditional Israelite war accounts (verses 5–6; cf. Judges 4:5, 8 on Deborah and Judges 6:13 on Gideon). The Danite spies identified the town of Laish in the far north, militarily vulnerable, as a target to conquer.
"In those days there was no king in Israel: and in those days the tribe of the Danites sought them an inheritance to dwell in; for unto that day all their inheritance had not fallen unto them among the tribes of Israel."
"Then they went up and encamped in Kirjath Jearim in Judah. (Therefore they call that place Mahaneh Dan to this day. There it is, west of Kirjath Jearim.)"
The Danites take Micah's idols and the Levites with them (18:14–26).
This passage has an 'aura of banditry' that is also found in the accounts of David's early career, such as his encounters with the priest at Nob (1 Samuel 21:1–9) and with Nabal (1 Samuel 25:2–38), as the armed Danites would take what they needed or desired against any resistance and even manage to make their intentions seem inevitable and logical (cf. verses 19, 23–25). When Micah confronted the Danites to protest the taking of his idols along with the Levite and his family, the Danites responded self-righteously (and 'wonderfully disingenuously') with "What's it to you?" or "What troubles you that you call up [a force against us]?" (verse 22), essentially shifting the guilt onto the robbed person should a bloodbath occur. Like Laban (Jacob's father-in-law; cf. Genesis 31), Micah, who was 'not above cheating his own mother', knew he had been bested and returned home empty-handed (verse 26).
The Danites settle in Laish (18:27–31).
The conquest of Laish by the Danites is reported using the language of biblical "ban" in Deuteronomy and Joshua ("putting to the sword and burning") but here the intention is quite different (cf. Judges 18:7–10). The use of the word "pesel" ("idol" or "graven image") in verses 30–31 as in Judges 17:3, 4; 18:14, indicates the disapproval of the idolatry of the Danites (and Micah), as there is clear comparison to the 'God's house' which was then in the sanctuary at Shiloh.
"And the children of Dan set up the graven image: and Jonathan, the son of Gershom, the son of Manasseh, he and his sons were priests to the tribe of Dan until the day of the captivity of the land."
Notes.
<templatestyles src="Reflist/styles.css" />
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": " \\mathfrak{G}"
}
]
| https://en.wikipedia.org/wiki?curid=70406705 |
70406714 | Judges 19 | Book of Judges chapter
Judges 19 is the nineteenth chapter of the Book of Judges in the Old Testament or the Hebrew Bible. According to Jewish tradition, the book was attributed to the prophet Samuel; modern scholars view it as part of the Deuteronomistic History, which spans the books of Deuteronomy to 2 Kings and is attributed to nationalistic and devotedly Yahwistic writers during the time of the reformer Judean king Josiah in the 7th century BCE. This chapter records the activities of a Levite from Ephraim and his concubine, belonging to a section comprising Judges 17 to 21.
Text.
This chapter was originally written in the Hebrew language. It is divided into 30 verses.
Textual witnesses.
Some early manuscripts containing the text of this chapter in Hebrew are of the Masoretic Text tradition, which includes the Codex Cairensis (895), Aleppo Codex (10th century), and Codex Leningradensis (1008). Fragments containing parts of this chapter in Hebrew were found among the Dead Sea Scrolls including 4Q50 (4QJudgb; 30 BCE–68 CE) with extant verses 5–7.
Extant ancient manuscripts of a translation into Koine Greek known as the Septuagint (originally made in the last few centuries BCE) include Codex Vaticanus (B; formula_0B; 4th century) and Codex Alexandrinus (A; formula_0A; 5th century).
Analysis.
Double Introduction and Double Conclusion.
Chapters 17 to 21 contain the "Double Conclusion" of the Book of Judges and form a type of inclusio together with their counterpart, the "Double Introduction", in chapters 1 to 3:6 as in the following structure of the whole book:
A. Foreign wars of subjugation with the "ḥērem" being applied (1:1–2:5)
B. Difficulties with foreign religious idols (2:6–3:6)
Main part: the "cycles" section (3:7–16:31)
B'. Difficulties with domestic religious idols (17:1–18:31)
A'. Domestic wars with the "ḥērem" being applied (19:1–21:25)
There are similar parallels between the double introduction and the double conclusion as the following:
The entire double conclusion is connected by the four-time repetition of a unique statement: twice in full at the beginning and the end of the double conclusion and twice in the center of the section as follows:
A. In those days there was no king...
Every man did what was right in his own eyes (17:6)
B. In those days there was no king... (18:1)
B'. In those days there was no king... (19:1)
A'. In those days there was no king...
Every man did what was right in his own eyes (21:25)
It also contains internal links:
Conclusion 1 (17:1–18:31): A Levite in Judah moving to the hill country of Ephraim and then on to Dan.
Conclusion 2 (19:1–21:25): A Levite in Ephraim looking for his concubine in Bethlehem in Judah.
Both sections end with a reference to Shiloh.
The Bethlehem Trilogy.
Three sections of the Hebrew Bible (Old Testament) — Judges 17–18, Judges 19–21, Ruth 1–4 — form a trilogy with a link to the city Bethlehem of Judah and characterized by the repetitive unique statement:
<templatestyles src="Template:Blockquote/styles.css" />In those days there was no king in Israel; everyone did what was right in his own eyes
as in the following chart:
Chapters 19 to 21.
The section comprising Judges 19:1–21:25 has a chiastic structure of five episodes as follows:
A. The Rape of the Concubine (19:1–30)
B. "ḥērem" ("holy war") of Benjamin (20:1–48)
C. Problem: The Oaths-Benjamin Threatened with Extinction (21:1–5)
B'. "ḥērem" ("holy war") of Jabesh Gilead (21:6–14)
A'. The Rape of the Daughters of Shiloh (21:15–25)
The rape of the daughters of Shiloh is the ironic counterpoint to the rape of the Levite's concubine, with the "daughter" motif linking the two stories, and the women becoming 'doorways leading into and out of war, sources of contention and reconciliation'.
The Levite's concubine (19:1–10).
The setting of the story, as in chapters 17–18, involves the travels of Levites, who often relied on 'local support and hospitality, having no patrilineal holdings of their own'. A particular Levite who resided in the mountains of Ephraim married a "concubine" (a lower status than "wife") from Bethlehem in Judah. The woman left him after 'she played the harlot towards him' (according to the Hebrew text; that is, 'being disloyal but not necessarily adulterous'), or, as the NRSV renders it following other manuscript traditions, 'she became angry with him', going back to her father's house in Bethlehem. After four months the Levite traveled, accompanied by one servant and two donkeys, to visit her and hope to win her back (verse 3; cf. Genesis 34:3). The woman brought him into her house, and his father-in-law received the Levite with full hospitality, even persuading the guest to stay for four nights, a generosity emphasized by repetition in verses 4, 6, 8 and 5, 7, 8, 9. Finally, on the fifth morning the Levite left with his concubine and his servant, heading back north to the mountains of Ephraim (verse 10).
Verse 1.
<templatestyles src="Template:Blockquote/styles.css" />And it came to pass in those days, when there was no king in Israel, that there was a certain Levite staying in the remote mountains of Ephraim. He took for himself a concubine from Bethlehem in Judah.
Gibeah's crime (19:11–30).
As the first day of travel reached sunset, the Levite refused his servant's advice to stop in Jebus (later: Jerusalem), a non-Israelite town, instead suggesting they stay at a town 'of the people Israel', Gibeah of Benjamin, ironically where the outrage would take place (verses 11–15). In Gibeah, the group did not receive the expected hospitality; they were ignored for a long time in the open square, until an elderly man finally invited them to his house (verses 16–21).
Recalling a similar story about Lot in Sodom (Genesis 19), some "sons of Belial" ('base fellows', 'miscreants') surrounded the house and demanded that the strangers be sent out to them so they might 'know them', a biblical euphemism for sexual intercourse (verses 22–26). As Lot had done (Genesis 19:8), the host attempted to appease the wild men outside by offering them his "virgin daughters" and the Levite's concubine, but the people outside declined the offer. The Levite then acted, throwing his concubine outside to the vicious mob and locking the door (verses 25–28; cf. Judges 20:6; in contrast to a divine intervention in Genesis 19:11).
At the break of day, the victimized woman was let go, collapsing at the doorway. Later in the morning, the Levite opened the doors, ready to travel, found the woman lying in front of the door, and gave her crass and brusque orders to get up (verse 28). When the woman did not respond, the Levite simply placed her unconscious body on a donkey and continued the journey. Upon returning home, the Levite cut the woman's body into twelve parts and sent each part to the twelve tribes of Israel, while asking for justice (verses 29–30; cf. 1 Samuel 11:5–8).
Verse 30.
<templatestyles src="Block indent/styles.css"/>"And so it was that all who saw it said, "No such deed has been done or seen from the day that the children of Israel came up from the land of Egypt until this day. Consider it, confer, and speak up!""
Notes.
<templatestyles src="Reflist/styles.css" />
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": " \\mathfrak{G}"
}
]
| https://en.wikipedia.org/wiki?curid=70406714 |
704160 | Tree (set theory) | In set theory, a tree is a partially ordered set ("T", <) such that for each "t" ∈ "T", the set {"s" ∈ "T" : "s" < "t"} is well-ordered by the relation <. Frequently trees are assumed to have only one root (i.e. minimal element), as the typical questions investigated in this field are easily reduced to questions about single-rooted trees.
Definition.
A tree is a partially ordered set (poset) ("T", <) such that for each "t" ∈ "T", the set {"s" ∈ "T" : "s" < "t"} is well-ordered by the relation <. In particular, each well-ordered set ("T", <) is a tree. For each "t" ∈ "T", the order type of {"s" ∈ "T" : "s" < "t"} is called the "height" of "t", denoted ht("t", "T"). The "height" of "T" itself is the least ordinal greater than the height of each element of "T". A "root" of a tree "T" is an element of height 0. Frequently trees are assumed to have only one root. Trees in set theory are often defined to grow downward, making the root the greatest node.
Trees with a single root may be viewed as rooted trees in the sense of graph theory in one of two ways: either as a tree (graph theory) or as a trivially perfect graph. In the first case, the graph is the undirected Hasse diagram of the partially ordered set, and in the second case, the graph is simply the underlying (undirected) graph of the partially ordered set. However, if "T" is a tree of height > ω, then the Hasse diagram definition does not work. For example, the partially ordered set formula_0 does not have a Hasse diagram, as there is no predecessor to ω. Hence a height of at most ω is required in this case.
A "branch" of a tree is a maximal chain in the tree (that is, any two elements of the branch are comparable, and any element of the tree "not" in the branch is incomparable with at least one element of the branch). The "length" of a branch is the ordinal that is order isomorphic to the branch. For each ordinal α, the "α-th level" of "T" is the set of all elements of "T" of height α. A tree is a κ-tree, for an ordinal number κ, if and only if it has height κ and every level has cardinality less than the cardinality of κ. The "width" of a tree is the supremum of the cardinalities of its levels.
Any single-rooted tree of height formula_1 forms a meet-semilattice, where meet (common ancestor) is given by maximal element of intersection of ancestors, which exists as the set of ancestors is non-empty and finite well-ordered, hence has a maximal element. Without a single root, the intersection of parents can be empty (two elements need not have common ancestors), for example formula_2 where the elements are not comparable; while if there are an infinite number of ancestors there need not be a maximal element – for example, formula_3 where formula_4 are not comparable.
A "subtree" of a tree formula_5 is a tree formula_6 where formula_7 and formula_8 is downward closed under formula_9, i.e., if formula_10 and formula_11 then formula_12.
Set-theoretic properties.
There are some fairly simply stated yet hard problems in infinite tree theory. Examples of this are the Kurepa conjecture and the Suslin conjecture. Both of these problems are known to be independent of Zermelo–Fraenkel set theory. By Kőnig's lemma, every ω-tree has an infinite branch. On the other hand, it is a theorem of ZFC that there are uncountable trees with no uncountable branches and no uncountable levels; such trees are known as Aronszajn trees. Given a cardinal number κ, a "κ-Suslin tree" is a tree of height κ which has no chains or antichains of size κ. In particular, if κ is singular then there exists a κ-Aronszajn tree and a κ-Suslin tree. In fact, for any infinite cardinal κ, every κ-Suslin tree is a κ-Aronszajn tree (the converse does not hold).
The Suslin conjecture was originally stated as a question about certain total orderings but it is equivalent to the statement: Every tree of height ω1 has an antichain of cardinality ω1 or a branch of length ω1.
If ("T",<) is a tree, then the reflexive closure ≤ of < is a prefix order on "T".
The converse does not hold: for example, the usual order ≤ on the set Z of integers is a total and hence a prefix order, but (Z,<) is not a set-theoretic tree since e.g. the set {"n" ∈Z: "n" < 0} has no least element.
References.
<templatestyles src="Refbegin/styles.css" /> | [
{
"math_id": 0,
"text": "\\omega + 1 = \\left\\{0, 1, 2, \\dots, \\omega\\right\\}"
},
{
"math_id": 1,
"text": "\\leq \\omega"
},
{
"math_id": 2,
"text": "\\left\\{a, b\\right\\}"
},
{
"math_id": 3,
"text": "\\left\\{0, 1, 2, \\dots, \\omega_0, \\omega_0'\\right\\}"
},
{
"math_id": 4,
"text": "\\omega_0, \\omega_0'"
},
{
"math_id": 5,
"text": "(T,<)"
},
{
"math_id": 6,
"text": "(T',<)"
},
{
"math_id": 7,
"text": "T' \\subseteq T"
},
{
"math_id": 8,
"text": "T'"
},
{
"math_id": 9,
"text": " < "
},
{
"math_id": 10,
"text": "s, t \\in T"
},
{
"math_id": 11,
"text": " s < t"
},
{
"math_id": 12,
"text": "t \\in T' \\implies s \\in T'"
},
{
"math_id": 13,
"text": "\\kappa"
},
{
"math_id": 14,
"text": "X"
},
{
"math_id": 15,
"text": "T"
},
{
"math_id": 16,
"text": "f:\\alpha \\mapsto X"
},
{
"math_id": 17,
"text": "\\alpha < \\kappa"
},
{
"math_id": 18,
"text": "f < g"
},
{
"math_id": 19,
"text": "f"
},
{
"math_id": 20,
"text": "g"
},
{
"math_id": 21,
"text": "\\kappa = \\omega \\cdot 2"
},
{
"math_id": 22,
"text": "X = \\{0,1\\}"
},
{
"math_id": 23,
"text": "m,n"
},
{
"math_id": 24,
"text": "m < n"
},
{
"math_id": 25,
"text": "n"
},
{
"math_id": 26,
"text": "m"
},
{
"math_id": 27,
"text": "\\omega"
},
{
"math_id": 28,
"text": "r"
},
{
"math_id": 29,
"text": "m \\neq n"
}
]
| https://en.wikipedia.org/wiki?curid=704160 |
70416226 | Praseodymium(III) acetate | Compound of praseodymium
<templatestyles src="Chembox/styles.css"/>
Chemical compound
Praseodymium(III) acetate is an inorganic salt composed of a praseodymium trication and three acetate anions. This compound commonly forms the dihydrate, Pr(O2C2H3)3·2H2O.
Preparation.
Praseodymium(III) acetate can be formed by the reaction of acetic acid and praseodymium(III) oxide:
formula_0
Praseodymium(III) carbonate and praseodymium(III) hydroxide can also be used:
formula_1↑
formula_2
Structure.
According to X-ray crystallography, anhydrous praseodymium acetate is a coordination polymer. Each Pr(III) center is nine-coordinate, with two bidentate acetate ligands and the remaining sites occupied by oxygens provided by bridging acetate ligands. The lanthanum and holmium compounds are isostructural.
Decomposition.
When the dihydrate is heated, it first loses water to give the anhydrous salt, which then decomposes to praseodymium(III) oxyacetate (PrO(O2C2H3)), then to praseodymium(III) oxycarbonate, and finally to praseodymium(III) oxide.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\mathsf{6CH_3COOH + Pr_2O_3 \\ \\xrightarrow{T}\\ 2Pr(CH_3COO)_3 + 3H_2O}"
},
{
"math_id": 1,
"text": "\\mathsf{6CH_3COOH + Pr_2(CO_3)_3 \\ \\xrightarrow{T}\\ 2Pr(CH_3COO)_3 + 3H_2O + 3CO_2}"
},
{
"math_id": 2,
"text": "\\mathsf{6CH_3COOH + 2Pr(OH)_3 \\ \\xrightarrow{T}\\ 2Pr(CH_3COO)_3 + 6H_2O}"
}
]
| https://en.wikipedia.org/wiki?curid=70416226 |
70416725 | George Osborn (mathematician) | English mathematician
George Osborn (1864–1932) was an English mathematician, known for Osborn’s rule that deals with hyperbolic trigonometric identities.
Life.
Osborn was born in 1864 in Manchester, England, and entered Emmanuel College, Cambridge in 1884, where in 1887 he graduated as 17th Wrangler, taking a first in mathematics. He then moved to The Leys School, Cambridge in 1888, becoming assistant headmaster and senior science master in 1891, and continued to work at the school until his retirement in 1926. Alongside his work in mathematics, Osborn devoted time to studying the New Testament, owing to the influence of his grandfather, the Reverend George Osborn, president of the Methodist Conference in 1863 and 1881. Osborn also enjoyed reading Spanish literature and was an avid chess player up until his death on October 14, 1932.
Work.
From 1902 to 1925, Osborn wrote numerous articles for The Mathematical Gazette covering a range of topics from sums of cubes to series expansions, his most notable paper being "Mnemonic for hyperbolic formulae", published in July 1902. In this publication Osborn outlined a rule, which he found useful for teaching, for converting between trigonometric and hyperbolic trigonometric identities. Alongside this he published various books with his colleague Charles Henry French, the head of mathematics at The Leys School, Cambridge. The titles of their joint works include "Elementary Algebra", "First Year's Algebra" and "The Graphical Representation of Algebraic Functions".
Osborn's Rule.
Osborn's rule, outlined in his 1902 Mathematical Gazette publication "Mnemonic for hyperbolic formulae", aids in the conversion between trigonometric and hyperbolic trigonometric identities. To convert a trigonometric identity to the equivalent hyperbolic identity, the rule states: first write out all compound-angle cosine and sine terms in their expanded constituent parts, then exchange every cosine and sine term for a cosh and sinh term. However, for every product (or implied product) of two sine terms, replace it with the negative of the product of two sinh terms. This is because formula_0 is equivalent to formula_1, so when two such terms are multiplied together the sign is switched compared with the regular trigonometric identity. Consequently, for terms containing a product of a multiple of four sinh terms, the sign does not change compared with the regular trigonometric identities.
Trigonometric Identity.
formula_2
formula_3
formula_4
formula_5
formula_6
Hyperbolic Trigonometric Identity.
formula_7
formula_8
formula_9
formula_10
formula_11
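The conversions can be spot-checked numerically. The following Python sketch (not part of Osborn's paper) verifies a few of the hyperbolic identities listed above at an arbitrary sample point:

```python
import math

# Numerically spot-check some hyperbolic identities produced by Osborn's rule.
x, y = 0.7, 1.3

checks = [
    math.isclose(math.cosh(x)**2 - math.sinh(x)**2, 1.0),
    math.isclose(1 - math.tanh(x)**2, 1 / math.cosh(x)**2),          # sech^2(x)
    math.isclose(math.cosh(x + y),
                 math.cosh(x)*math.cosh(y) + math.sinh(x)*math.sinh(y)),
    math.isclose(math.cosh(2*x), 1 + 2*math.sinh(x)**2),
]
print(all(checks))  # -> True
```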
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "-i\\sin(ix)"
},
{
"math_id": 1,
"text": "\\sinh(x)"
},
{
"math_id": 2,
"text": "\\cos^2(x)+\\sin^2(x)=1"
},
{
"math_id": 3,
"text": "1+\\tan^2(x)=\\sec^2(x)"
},
{
"math_id": 4,
"text": "\\cot^2(x)+1=\\csc^2(x)"
},
{
"math_id": 5,
"text": "\\cos(x+y)=\\cos(x)\\cos(y)-\\sin(x)\\sin(y)"
},
{
"math_id": 6,
"text": "\\cos(2x)=1-2\\sin^2(x)"
},
{
"math_id": 7,
"text": "\\cosh^2(x)-\\sinh^2(x)=1"
},
{
"math_id": 8,
"text": "1-\\tanh^2(x)=\\operatorname{sech^2}(x)"
},
{
"math_id": 9,
"text": "-\\coth^2(x)+1=-\\operatorname{csch^2}(x)"
},
{
"math_id": 10,
"text": "\\cosh(x+y)=\\cosh(x)\\cosh(y)+\\sinh(x)\\sinh(y)"
},
{
"math_id": 11,
"text": "\\cosh(2x)=1+2\\sinh^2(x)"
}
]
| https://en.wikipedia.org/wiki?curid=70416725 |
70421 | Elo rating system | Method for calculating relative skill levels of players
The Elo rating system is a method for calculating the relative skill levels of players in zero-sum games such as chess or esports. It is named after its creator Arpad Elo, a Hungarian-American physics professor.
The Elo system was invented as an improved chess-rating system over the previously used Harkness system, but is also used as a rating system in association football (soccer), American football, baseball, basketball, pool, various board games and esports, and, more recently, large language models.
The difference in the ratings between two players serves as a predictor of the outcome of a match. Two players with equal ratings who play against each other are expected to score an equal number of wins. A player whose rating is 100 points greater than their opponent's is expected to score 64%; if the difference is 200 points, then the expected score for the stronger player is 76%.
A player's Elo rating is a number which may change depending on the outcome of rated games played. After every game, the winning player takes points from the losing one. The difference between the ratings of the winner and loser determines the total number of points gained or lost after a game. If the higher-rated player wins, then only a few rating points will be taken from the lower-rated player. However, if the lower-rated player scores an upset win, many rating points will be transferred. The lower-rated player will also gain a few points from the higher rated player in the event of a draw. This means that this rating system is self-correcting. Players whose ratings are too low or too high should, in the long run, do better or worse correspondingly than the rating system predicts and thus gain or lose rating points until the ratings reflect their true playing strength.
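These mechanics are commonly implemented with a logistic expected-score curve on a 400-point scale, which reproduces the 64% and 76% figures quoted above. A minimal Python sketch follows; the K-factor of 32 is an illustrative choice, since organizations use different values:

```python
# Standard logistic Elo expected score and post-game update.
# The K-factor of 32 is an illustrative choice; organizations use different values.

def expected_score(r_a, r_b):
    """Expected score of player A against player B (400-point logistic scale)."""
    return 1 / (1 + 10 ** ((r_b - r_a) / 400))

def update(r_a, r_b, score_a, k=32):
    """A's new rating after scoring score_a against B (1 = win, 0.5 = draw, 0 = loss)."""
    return r_a + k * (score_a - expected_score(r_a, r_b))

print(round(expected_score(1700, 1500), 2))  # 0.76 -- the 200-point figure quoted above
print(round(update(1500, 1700, 1), 1))       # 1524.3 -- upset win, large gain
print(round(update(1700, 1500, 1), 1))       # 1707.7 -- expected win, small gain
```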
Elo ratings are comparative only, and are valid only within the rating pool in which they were calculated, rather than being an absolute measure of a player's strength.
While Elo-like systems are widely used in two-player settings, variations have also been applied to multiplayer competitions.
History.
Arpad Elo was a chess master and an active participant in the United States Chess Federation (USCF) from its founding in 1939. The USCF used a numerical ratings system devised by Kenneth Harkness to enable members to track their individual progress in terms other than tournament wins and losses. The Harkness system was reasonably fair, but in some circumstances gave rise to ratings many observers considered inaccurate.
On behalf of the USCF, Elo devised a new system with a more sound statistical basis. At about the same time, György Karoly and Roger Cook independently developed a system based on the same principles for the New South Wales Chess Association.
Elo's system replaced earlier systems of competitive rewards with a system based on statistical estimation. Rating systems for many sports award points in accordance with subjective evaluations of the 'greatness' of certain achievements. For example, winning an important golf tournament might be worth an arbitrarily chosen five times as many points as winning a lesser tournament.
A statistical endeavor, by contrast, uses a model that relates the game results to underlying variables representing the ability of each player.
Elo's central assumption was that the chess performance of each player in each game is a normally distributed random variable. Although a player might perform significantly better or worse from one game to the next, Elo assumed that the mean value of the performances of any given player changes only slowly over time. Elo thought of a player's true skill as the mean of that player's performance random variable.
A further assumption is necessary because chess performance in the above sense is still not measurable. One cannot look at a sequence of moves and derive a number to represent that player's skill. Performance can only be inferred from wins, draws, and losses. Therefore, a player who wins a game is assumed to have performed at a higher level than the opponent for that game. Conversely, a losing player is assumed to have performed at a lower level. If the game ends in a draw, the two players are assumed to have performed at nearly the same level.
Elo did not specify exactly how close two performances ought to be to result in a draw as opposed to a win or loss. Actually, there is a probability of a draw that is dependent on the performance differential, so this latter is more of a confidence interval than any deterministic frontier. And while he thought it was likely that players might have different standard deviations to their performances, he made a simplifying assumption to the contrary.
To simplify computation even further, Elo proposed a straightforward method of estimating the variables in his model (i.e., the true skill of each player). One could calculate relatively easily from tables how many games players would be expected to win based on comparisons of their ratings to those of their opponents. The ratings of a player who won more games than expected would be adjusted upward, while those of a player who won fewer than expected would be adjusted downward. Moreover, that adjustment was to be in linear proportion to the number of wins by which the player had exceeded or fallen short of their expected number.
From a modern perspective, Elo's simplifying assumptions are not necessary because computing power is inexpensive and widely available. Several people, most notably Mark Glickman, have proposed using more sophisticated statistical machinery to estimate the same variables. On the other hand, the computational simplicity of the Elo system has proven to be one of its greatest assets. With the aid of a pocket calculator, an informed chess competitor can calculate to within one point what their next officially published rating will be, which helps promote a perception that the ratings are fair.
Implementing Elo's scheme.
The USCF implemented Elo's suggestions in 1960, and the system quickly gained recognition as being both fairer and more accurate than the Harkness rating system. Elo's system was adopted by the World Chess Federation (FIDE) in 1970. Elo described his work in detail in "The Rating of Chessplayers, Past and Present", first published in 1978.
Subsequent statistical tests have suggested that chess performance is almost certainly not distributed as a normal distribution, as weaker players have greater winning chances than Elo's model predicts. In paired comparison data, there is often very little practical difference in whether it is assumed that the differences in players' strengths are normally or logistically distributed. Mathematically, however, the logistic function is more convenient to work with than the normal distribution.
FIDE continues to use the rating difference table as proposed by Elo (table 8.1b).
The development of the Percentage Expectancy Table (table 2.11) is described in more detail by Elo as follows:
The normal probabilities may be taken directly from the standard tables of the areas under the normal curve when the difference in rating is expressed as a z score. Since the standard deviation σ of individual performances is defined as 200 points, the standard deviation σ' of the differences in performances becomes σ√2 or 282.84. The z value of a difference then is "D" / 282.84. This will then divide the area under the curve into two parts, the larger giving P for the higher rated player and the smaller giving P for the lower rated player.
For example, let "D" = 160. Then "z" = 160 / 282.84 = .566. The table gives .7143 and .2857 as the areas of the two portions under the curve. These probabilities are rounded to two figures in table 2.11.
The table is actually built with standard deviation 200(10/7) as an approximation for 200√2.
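A short Python sketch reproducing the worked example under the normal model (σ' = 200√2 ≈ 282.84):

```python
import math

# Reproduce the worked example above: P for a 160-point difference under the
# normal model with sigma' = 200 * sqrt(2) ~= 282.84.

def expectancy_normal(d, sigma=200 * math.sqrt(2)):
    """P for the higher-rated player: standard normal CDF evaluated at D / sigma'."""
    z = d / sigma
    return 0.5 * (1 + math.erf(z / math.sqrt(2)))

p = expectancy_normal(160)
print(round(p, 4), round(1 - p, 4))  # 0.7142 0.2858 (the table, using z rounded to .566, gives .7143/.2857)
```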
The normal and logistic distributions are, in a way, arbitrary points in a spectrum of distributions which would work well. In practice, both of these distributions work very well for a number of different games.
Different ratings systems.
The phrase "Elo rating" is often used to mean a player's chess rating as calculated by FIDE. However, this usage may be confusing or misleading because Elo's general ideas have been adopted by many organizations, including the USCF (before FIDE), many other national chess federations, the short-lived Professional Chess Association (PCA), and online chess servers including the Internet Chess Club (ICC), Free Internet Chess Server (FICS), Lichess, Chess.com, and Yahoo! Games. Each organization has a unique implementation, and none of them follows Elo's original suggestions precisely.
Instead one may refer to the organization granting the rating. For example: "As of April 2018, Tatev Abrahamyan had a FIDE rating of 2366 and a USCF rating of 2473." The Elo ratings of these various organizations are not always directly comparable, since Elo ratings measure the results within a closed pool of players rather than absolute skill.
FIDE ratings.
For top players, the most important rating is their FIDE rating. FIDE has issued the following lists:
The following analysis of the July 2015 FIDE rating list gives a rough impression of what a given FIDE rating means in terms of world ranking:
The highest ever FIDE rating was 2882, which Magnus Carlsen had on the May 2014 list. A list of the highest-rated players ever is at Comparison of top chess players throughout history.
Performance rating.
Performance rating or special rating is a hypothetical rating that would result from the games of a single event only. Some chess organizations use the "algorithm of 400" to calculate performance rating. According to this algorithm, performance rating for an event is calculated in the following way:
Example: 2 wins (opponents w & x), 2 losses (opponents y & z)
formula_2
This can be expressed by the following formula:
formula_3
Example: If you beat a player with an Elo rating of 1000,
formula_4
If you beat two players with Elo ratings of 1000,
formula_5
If you draw,
formula_6
This is a simplification, but it offers an easy way to get an estimate of PR (performance rating).
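A short Python sketch of the "algorithm of 400" as described above; the opponent ratings in the example are illustrative placeholders:

```python
# "Algorithm of 400": performance rating = (sum of opponents' ratings
# + 400 * (wins - losses)) / number of games played.

def performance_rating(opponent_ratings, wins, losses):
    games = len(opponent_ratings)
    return (sum(opponent_ratings) + 400 * (wins - losses)) / games

# 2 wins and 2 losses against four opponents (ratings are illustrative placeholders).
print(performance_rating([1800, 1700, 1600, 1500], wins=2, losses=2))  # -> 1650.0
```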
FIDE, however, calculates performance rating by means of the formula formula_7 where the "rating difference" formula_1 is based on a player's tournament percentage score formula_0, which is used as the key in a lookup table; formula_0 is simply the number of points scored divided by the number of games played. Note that, in the case of a perfect score or a zero score, formula_1 is 800.
Live ratings.
FIDE updates its ratings list at the beginning of each month. In contrast, the unofficial "Live ratings" calculate the change in players' ratings after every game. These Live ratings are based on the previously published FIDE ratings, so a player's Live rating is intended to correspond to what the FIDE rating would be if FIDE were to issue a new list that day.
Although Live ratings are unofficial, interest arose in Live ratings in August/September 2008 when five different players took the "Live" No. 1 ranking.
The unofficial live ratings of players over 2700 were published and maintained by Hans Arild Runde at the Live Rating website until August 2011. Another website, 2700chess.com, has been maintained since May 2011 by Artiom Tsepotan, which covers the top 100 players as well as the top 50 female players.
Rating changes can be calculated manually by using the FIDE ratings change calculator. All top players have a K-factor of 10, which means that the maximum ratings change from a single game is a little less than 10 points.
United States Chess Federation ratings.
The United States Chess Federation (USCF) uses its own classification of players:
The K-factor used by the USCF.
The "K-factor", in the USCF rating system, can be estimated by dividing 800 by the effective number of games a player's rating is based on ("N""e") plus the number of games the player completed in a tournament (m).
formula_8
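As a sketch of the estimate just described (the function name is hypothetical):

```python
def uscf_k_factor(n_effective, tournament_games):
    """Estimate K = 800 / (N_e + m) as described above."""
    return 800 / (n_effective + tournament_games)

# For example, a player whose rating is based on 20 effective games
# and who completes 5 games in the current tournament:
print(uscf_k_factor(20, 5))  # 32.0
```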
Rating floors.
The USCF maintains an absolute rating floor of 100 for all ratings. Thus, no member can have a rating below 100, no matter their performance at USCF-sanctioned events. However, players can have higher individual absolute rating floors, calculated using the following formula:
formula_9
where formula_10 is the number of rated games won, formula_11 is the number of rated games drawn, and formula_12 is the number of events in which the player completed three or more rated games.
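A short Python illustration of the absolute rating floor formula above (the sample inputs are invented):

```python
def absolute_rating_floor(n_wins, n_draws, n_events):
    """AF = min(100 + 4*N_W + 2*N_D + N_R, 150)."""
    return min(100 + 4 * n_wins + 2 * n_draws + n_events, 150)

print(absolute_rating_floor(5, 3, 2))   # 100 + 20 + 6 + 2 = 128
print(absolute_rating_floor(20, 4, 6))  # capped at 150
```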
Higher rating floors exist for experienced players who have achieved significant ratings. Such higher rating floors exist, starting at ratings of 1200 in 100-point increments up to 2100 (1200, 1300, 1400, ..., 2100). A rating floor is calculated by taking the player's peak established rating, subtracting 200 points, and then rounding down to the nearest rating floor. For example, a player who has reached a peak rating of 1464 would have a rating floor of 1464 − 200 = 1264, which would be rounded down to 1200. Under this scheme, only Class C players and above are capable of having a higher rating floor than their absolute player rating. All other players would have a floor of at most 150.
There are two ways to achieve higher rating floors other than under the standard scheme presented above. If a player has achieved the rating of Original Life Master, their rating floor is set at 2200. The achievement of this title is unique in that no other recognized USCF title will result in a new floor. For players with ratings below 2000, winning a cash prize of $2,000 or more raises that player's rating floor to the closest 100-point level that would have disqualified the player for participation in the tournament. For example, if a player won $4,000 in a 1750-and-under tournament, they would now have a rating floor of 1800.
Theory.
Pairwise comparisons form the basis of the Elo rating methodology. Elo made references to the papers of Good, David, Trawinski and David, and Buhlman and Huber.
Mathematical details.
Performance is not measured absolutely; it is inferred from wins, losses, and draws against other players. Players' ratings depend on the ratings of their opponents and the results scored against them. The difference in rating between two players determines an estimate for the expected score between them. Both the average and the spread of ratings can be arbitrarily chosen. The USCF initially aimed for an average club player to have a rating of 1500 and Elo suggested scaling ratings so that a difference of 200 rating points in chess would mean that the stronger player has an "expected score" of approximately 0.75.
A player's "expected score" is their probability of winning plus half their probability of drawing. Thus, an expected score of 0.75 could represent a 75% chance of winning, 25% chance of losing, and 0% chance of drawing. On the other extreme it could represent a 50% chance of winning, 0% chance of losing, and 50% chance of drawing. The probability of drawing, as opposed to having a decisive result, is not specified in the Elo system. Instead, a draw is considered half a win and half a loss. In practice, since the true strength of each player is unknown, the expected scores are calculated using the player's current ratings as follows.
If player A has a rating of formula_13 and player B a rating of formula_14, the exact formula (using the logistic curve with base 10) for the expected score of player A is
formula_15
Similarly, the expected score for player B is
formula_16
This could also be expressed by
formula_17
and
formula_18
where formula_19 and formula_20 Note that in the latter case, the same denominator applies to both expressions, and it is plain that formula_21 This means that by studying only the numerators, we find out that the expected score for player A is formula_22 times the expected score for player B. It then follows that for each 400 rating points of advantage over the opponent, the expected score is magnified ten times in comparison to the opponent's expected score.
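The expected-score formulas can be written directly in Python; this sketch (the function name is illustrative) also shows the 400-point/ten-times relationship just noted:

```python
def expected_score(r_a, r_b):
    """Expected score of player A against player B (logistic curve with base 10)."""
    return 1 / (1 + 10 ** ((r_b - r_a) / 400))

print(expected_score(1500, 1500))  # 0.5 for equal ratings
e_a = expected_score(1800, 1400)   # a 400-point advantage
e_b = expected_score(1400, 1800)
print(round(e_a, 3), round(e_b, 3), round(e_a / e_b, 1))  # 0.909 0.091 10.0
```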
When a player's actual tournament scores exceed their expected scores, the Elo system takes this as evidence that player's rating is too low, and needs to be adjusted upward. Similarly, when a player's actual tournament scores fall short of their expected scores, that player's rating is adjusted downward. Elo's original suggestion, which is still widely used, was a simple linear adjustment proportional to the amount by which a player over-performed or under-performed their expected score. The maximum possible adjustment per game, called the K-factor, was set at formula_23 for masters and formula_24 for weaker players.
Suppose player A (again with rating formula_25) was expected to score formula_26 points but actually scored formula_27 points. The formula for updating that player's rating is
formula_28
This update can be performed after each game or each tournament, or after any suitable rating period.
An example may help to clarify:
<templatestyles src="Template:Blockquote/styles.css" />Suppose player A has a rating of 1613 and plays in a five-round tournament. They lose to a player rated 1609, draw with a player rated 1477, defeat a player rated 1388, defeat a player rated 1586, and lose to a player rated 1720. The player's actual score is (0 + 0.5 + 1 + 1 + 0)
2.5. The expected score, calculated according to the formula above, was (0.51 + 0.69 + 0.79 + 0.54 + 0.35) = 2.88.
Therefore, the player's new rating is [1613 + 32·(2.5 − 2.88)]
1601, assuming that a K-factor of 32 is used. Equivalently, each game the player can be said to have put an ante of K times their expected score for the game into a pot, the opposing player does likewise, and the winner collects the full pot of value K; in the event of a draw, the players split the pot and receive formula_29 points each.
Note that while two wins, two losses, and one draw may seem like a par score, it is worse than expected for player A because their opponents were lower rated on average. Therefore, player A is slightly penalized. If player A had scored two wins, one loss, and two draws, for a total score of three points, that would have been slightly better than expected, and the player's new rating would have been [1613 + 32·(3 − 2.88)]
1617.
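The five-round example above can be reproduced with a short script (a sketch, not an official implementation); per-game expectations are rounded to two decimals, as in the text:

```python
def expected_score(r_a, r_b):
    return 1 / (1 + 10 ** ((r_b - r_a) / 400))

player, k = 1613, 32
opponents = [1609, 1477, 1388, 1586, 1720]
results   = [0, 0.5, 1, 1, 0]                 # loss, draw, win, win, loss

per_game = [round(expected_score(player, r), 2) for r in opponents]
expected = sum(per_game)                      # 0.51 + 0.69 + 0.79 + 0.54 + 0.35 = 2.88
actual   = sum(results)                       # 2.5
print(per_game, round(expected, 2), round(player + k * (actual - expected)))  # ... 2.88 1601
```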
This updating procedure is at the core of the ratings used by FIDE, USCF, Yahoo! Games, the Internet Chess Club (ICC) and the Free Internet Chess Server (FICS). However, each organization has taken a different approach to dealing with the uncertainty inherent in the ratings, particularly the ratings of newcomers, and to dealing with the problem of ratings inflation/deflation. New players are assigned provisional ratings, which are adjusted more drastically than established ratings.
The principles used in these rating systems can be used for rating other competitions—for instance, international football matches.
Elo ratings have also been applied to games without the possibility of draws, and to games in which the result can also have a quantity (small/big margin) in addition to the quality (win/loss). See Go rating with Elo for more.
Suggested modification.
In 2011, after analyzing 1.5 million FIDE-rated games, Jeff Sonas demonstrated that two players whose ratings differ by X according to the Elo formula actually perform as though the true difference were around "X"(5/6). Equivalently, one can leave the rating difference alone and divide by 480 instead of 400 in the expected-score formula. Since the Elo formula overestimates the stronger player's win probability, stronger players lose points against weaker players despite playing at their true strength, while weaker players gain points against stronger players. When the modification is applied, observed win rates deviate by less than 0.1% from the prediction, while traditional Elo can be 4% off the predicted rate.
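The effect of this suggested modification can be seen numerically; in the sketch below the expected score is computed with the usual divisor of 400, with 480, and with the equivalent rescaling of the rating difference by 5/6:

```python
def expected_score(diff, scale=400):
    return 1 / (1 + 10 ** (-diff / scale))

d = 200
print(round(expected_score(d), 3))              # ≈ 0.760 with the traditional divisor of 400
print(round(expected_score(d, scale=480), 3))   # ≈ 0.723 with Sonas's divisor of 480
print(round(expected_score(d * 5 / 6), 3))      # ≈ 0.723, the equivalent X*(5/6) rescaling
```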
Most accurate distribution model.
The first mathematical concern addressed by the USCF was the use of the normal distribution. They found that this did not accurately represent the actual results achieved, particularly by the lower rated players. Instead they switched to a logistic distribution model, which the USCF found provided a better fit for the actual results achieved. FIDE also uses an approximation to the logistic distribution.
Most accurate K-factor.
The second major concern is the correct "K-factor" used. The chess statistician Jeff Sonas believes that the original formula_30 value (for players rated above 2400) used in Elo's work is inaccurate. If the K-factor coefficient is set too large, there will be too much sensitivity to just a few recent events, in terms of a large number of points exchanged in each game. If the K-value is too low, the sensitivity will be minimal, and the system will not respond quickly enough to changes in a player's actual level of performance.
Elo's original K-factor estimation was made without the benefit of huge databases and statistical evidence. Sonas indicates that a K-factor of 24 (for players rated above 2400) may be both more accurate as a predictive tool of future performance and be more sensitive to performance.
Certain Internet chess sites seem to avoid a three-level K-factor staggering based on rating range. For example, the ICC seems to adopt a global "K" = 32 except when playing against provisionally rated players.
The USCF (which makes use of a logistic distribution as opposed to a normal distribution) formerly staggered the K-factor according to three main rating ranges:
Currently, the USCF uses a formula that calculates the K-factor based on factors including the number of games played and the player's rating. The K-factor is also reduced for high rated players if the event has shorter time controls.
FIDE uses the following ranges:
FIDE used the following ranges before July 2014:
The gradation of the K-factor reduces rating change at the top end of the rating range, reducing the possibility for rapid rise or fall of rating for those with a rating high enough to reach a low K-factor.
In theory, this might apply equally to online chess players and over-the-board players, since it is more difficult for all players to raise their rating after their rating has become high and their K-factor consequently reduced. However, when playing online, 2800+ players can more easily raise their rating by simply selecting opponents with high ratings – on the ICC playing site, a grandmaster may play a string of different opponents who are all rated over 2700. In over-the-board events, it would only be in very high level all-play-all events that a player would be able to engage that number of 2700+ opponents. In a normal, open, Swiss-paired chess tournament, frequently there would be many opponents rated less than 2500, reducing the ratings gains possible from a single contest for a high-rated player.
Formal derivation for win/loss games.
The above expressions can now be formally derived by exploiting the link between the Elo rating and the stochastic gradient update in logistic regression.
If we assume that the game results are binary, that is, only a win or a loss can be observed, the problem can be addressed via logistic regression, where the game results are dependent variables, the players' ratings are independent variables, and the model relating both is probabilistic: the probability of the player formula_31 winning the game is modeled as
formula_32
where
formula_33
denotes the difference of the players' ratings and formula_34 is a scaling factor; then, by the law of total probability,
formula_35
The log loss is then calculated as
formula_36
and, using stochastic gradient descent, the log loss is minimized as follows:
formula_37,
formula_38.
where formula_39 is the adaptation step.
Since formula_40, formula_41, and formula_42, the adaptation is then written as follows
formula_43
which may be compactly written as
formula_44
where formula_45 is the new adaptation step which absorbs formula_46 and formula_47, formula_48 if formula_49 wins and formula_50 if formula_51 wins, and the expected score is given by formula_52.
Analogously, the update for the rating formula_53 is
formula_54.
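The derivation can be checked numerically: one stochastic-gradient step on the log loss, with K = η ln(10)/s, coincides with the familiar Elo update. The following Python sketch (variable names are choices made here) demonstrates this:

```python
import math

S = 400.0  # scaling factor s

def sigma(r):
    return 1.0 / (1.0 + 10.0 ** (-r / S))

def sgd_step(r_a, r_b, a_wins, eta):
    """One stochastic-gradient step on the log loss for a win/loss game."""
    # negative gradient of the log loss w.r.t. r_a, up to the factor log(10)/s
    grad = sigma(-(r_a - r_b)) if a_wins else -sigma(r_a - r_b)
    step = eta * math.log(10) / S * grad
    return r_a + step, r_b - step

def elo_step(r_a, r_b, s_a, k=32.0):
    e_a = sigma(r_a - r_b)
    return r_a + k * (s_a - e_a), r_b + k * ((1 - s_a) - (1 - e_a))

eta = 32.0 * S / math.log(10)           # chosen so that K = eta*ln(10)/s = 32
print(sgd_step(1613, 1609, True, eta))  # matches the Elo update below
print(elo_step(1613, 1609, 1.0))
```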
Formal derivation for win/draw/loss games.
Since the very beginning, the Elo rating has also been used in chess, where we observe wins, losses, or draws; to deal with the latter, a fractional score value, formula_55, is introduced. We note, however, that the scores formula_56 and formula_57 are merely indicators of the events that the player formula_31 wins or loses the game. It is, therefore, not immediately clear what the meaning of the fractional score is. Moreover, since we do not specify explicitly the model relating the rating values formula_58 and formula_59 to the probability of the game outcome, we cannot say what the probability of the win, the loss, or the draw is.
To address these difficulties, and to derive the Elo rating in the ternary games, we will define the explicit probabilistic model of the outcomes. Next, we will minimize the log loss via stochastic gradient.
Since the loss, the draw, and the win are ordinal variables, we should adopt a model which takes their ordinal nature into account, and we use the so-called adjacent categories model, which may be traced to Davidson's work
formula_60
formula_61
formula_62
where
formula_63
and formula_64 is a parameter. The introduction of a free parameter should not be surprising: we have three possible outcomes, and thus an additional degree of freedom should appear in the model. In particular, with formula_65 we recover the model underlying logistic regression
formula_66
where formula_67.
Using the ordinal model defined above, the log loss is now calculated as
formula_68
which may be compactly written as
formula_69
where formula_56 iff formula_31 wins, formula_70 iff formula_71 wins, and formula_72 iff formula_31 draws.
As before, we need the derivative of formula_73 which is given by
formula_74,
where
formula_75
Thus, the derivative of the log loss with respect to the rating formula_58 is given by
formula_76
where we used the relationships formula_77 and formula_78.
Then, the stochastic gradient descent applied to minimize the log loss yields the following update for the rating formula_58
formula_79
where formula_80 and formula_81. Of course, formula_82 if formula_83 wins, formula_84 if formula_83 draws, and formula_85 if formula_83 loses. In recognition of its origin in the model proposed by Davidson, this update is called the Elo-Davidson rating.
The update for formula_59 is derived in the same manner as
formula_86,
where formula_87.
We note that
formula_88
and thus the rating update may be written as
formula_89,
where formula_90; we obtain practically the same equation as in the Elo rating, except that the expected score is given by formula_91 instead of formula_92.
Of course, as noted above, for formula_65 we have formula_93 and thus the Elo-Davidson rating is exactly the same as the Elo rating. However, this does not help us understand the case when draws are observed (we cannot use formula_94, which would mean that the probability of a draw is null). On the other hand, if we use formula_95, we have
formula_96
which means that, using formula_95, the Elo-Davidson rating is exactly the same as the Elo rating.
Practical issues.
Game activity versus protecting one's rating.
In some cases the rating system can discourage game activity for players who wish to protect their rating. In order to discourage players from sitting on a high rating, a 2012 proposal by British Grandmaster John Nunn for choosing qualifiers to the chess world championship included an activity bonus, to be combined with the rating.
Beyond the chess world, concerns over players avoiding competitive play to protect their ratings caused Wizards of the Coast to abandon the Elo system for "Magic: The Gathering" tournaments in favour of a system of their own devising called "Planeswalker Points".
Selective pairing.
A more subtle issue is related to pairing. When players can choose their own opponents, they can choose opponents with minimal risk of losing, and maximum reward for winning. Particular examples of players rated 2800+ choosing opponents with minimal risk and maximum possibility of rating gain include: choosing opponents that they know they can beat with a certain strategy; choosing opponents that they think are overrated; or avoiding playing strong players who are rated several hundred points below them, but may hold chess titles such as IM or GM. In the category of choosing overrated opponents, new entrants to the rating system who have played fewer than 50 games are in theory a convenient target as they may be overrated in their provisional rating. The ICC compensates for this issue by assigning a lower K-factor to the established player if they do win against a new rating entrant. The K-factor is actually a function of the number of rated games played by the new entrant.
Therefore, Elo ratings online still provide a useful mechanism for providing a rating based on the opponent's rating. Its overall credibility, however, needs to be seen in the context of at least the above two major issues described—engine abuse, and selective pairing of opponents.
The ICC has also recently introduced "auto-pairing" ratings which are based on random pairings, but with each win in a row ensuring a statistically much harder opponent who has also won x games in a row. With potentially hundreds of players involved, this creates some of the challenges of a major large Swiss event which is being fiercely contested, with round winners meeting round winners. This approach to pairing certainly maximizes the rating risk of the higher-rated participants, who may face very stiff opposition from players below 3000, for example. This is a separate rating in itself, and is under "1-minute" and "5-minute" rating categories. Maximum ratings achieved over 2500 are exceptionally rare.
Ratings inflation and deflation.
The term "inflation", applied to ratings, is meant to suggest that the level of playing strength demonstrated by the rated player is decreasing over time; conversely, "deflation" suggests that the level is advancing. For example, if there is inflation, a modern rating of 2500 means less than a historical rating of 2500, while the reverse is true if there is deflation. Using ratings to compare players between different eras is made more difficult when inflation or deflation are present. (See also Comparison of top chess players throughout history.)
Analyzing FIDE rating lists over time, Jeff Sonas suggests that inflation may have taken place since about 1985. Sonas looks at the highest-rated players, rather than all rated players, and acknowledges that the changes in the distribution of ratings could have been caused by an increase of the standard of play at the highest levels, but looks for other causes as well.
The number of people with ratings over 2700 has increased. Around 1979 there was only one active player (Anatoly Karpov) with a rating this high. In 1992 Viswanathan Anand was only the 8th player in chess history to reach the 2700 mark. This increased to 15 players by 1994. 33 players had a 2700+ rating in 2009 and 44 as of September 2012. Only 14 players have ever broken a rating of 2800.
One possible cause for this inflation was the rating floor, which for a long time was at 2200, and if a player dropped below this they were struck from the rating list. As a consequence, players at a skill level just below the floor would only be on the rating list if they were overrated, and this would cause them to feed points into the rating pool. In July 2000 the average rating of the top 100 was 2644. By July 2012 it had increased to 2703.
Using a strong chess engine to evaluate moves played in games between rated players, Regan and Haworth analyze sets of games from FIDE-rated tournaments, and draw the conclusion that there had been little or no inflation from 1976 to 2009.
In a pure Elo system, each game ends in an equal transaction of rating points. If the winner gains N rating points, the loser will drop by N rating points. This prevents points from entering or leaving the system when games are played and rated. However, players tend to enter the system as novices with a low rating and retire from the system as experienced players with a high rating. Therefore, in the long run a system with strictly equal transactions tends to result in rating deflation.
In 1995, the USCF acknowledged that several young scholastic players were improving faster than the rating system was able to track. As a result, established players with stable ratings started to lose rating points to the young and underrated players. Several of the older established players were frustrated over what they considered an unfair rating decline, and some even quit chess over it.
Combating deflation.
Because of the significant difference in timing of when inflation and deflation occur, and in order to combat deflation, most implementations of Elo ratings have a mechanism for injecting points into the system in order to maintain relative ratings over time. FIDE has two inflationary mechanisms. First, performances below a "ratings floor" are not tracked, so a player with true skill below the floor can only be unrated or overrated, never correctly rated. Second, established and higher-rated players have a lower K-factor. New players have a "K" = 40, which drops to "K" = 20 after 30 played games, and to "K" = 10 when the player reaches 2400.
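A simplified sketch of the FIDE staggering just described; edge cases in the actual regulations (for example, whether K stays at 10 after a player later drops below 2400, or the separate rapid and blitz rules) are not modeled here:

```python
def fide_k_factor(games_played, rating):
    """K-factor staggering as described above (a sketch of the FIDE rule)."""
    if rating >= 2400:
        return 10
    if games_played >= 30:
        return 20
    return 40

print(fide_k_factor(10, 1800), fide_k_factor(50, 2100), fide_k_factor(100, 2450))  # 40 20 10
```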
The current system in the United States includes a bonus point scheme which feeds rating points into the system in order to track improving players, and different K-values for different players. Some methods, used in Norway for example, differentiate between juniors and seniors, and use a larger K-factor for the young players, even boosting the rating progress by 100% for when they score well above their predicted performance.
Rating floors in the United States work by guaranteeing that a player will never drop below a certain limit. This also combats deflation, but the chairman of the USCF Ratings Committee has been critical of this method because it does not feed the extra points to the improving players. A possible motive for these rating floors is to combat sandbagging, i.e., deliberate lowering of ratings to be eligible for lower rating class sections and prizes.
Ratings of computers.
Human–computer chess matches between 1997 (Deep Blue versus Garry Kasparov) and 2006 demonstrated that chess computers are capable of defeating even the strongest human players. However, chess engine ratings are difficult to quantify, due to variable factors such as the time control and the hardware the program runs on, and also the fact that chess is not a fair game. The existence and magnitude of the first-move advantage in chess becomes very important at the computer level. Beyond some skill threshold, an engine with White should be able to force a draw on demand from the starting position even against perfect play, simply because White begins with too big an advantage to lose compared to the small magnitude of the errors it is likely to make. Consequently, such an engine is more or less guaranteed to score at least 25% even against perfect play. Differences in skill beyond a certain point could only be picked up if one does not begin from the usual starting position, but instead chooses a starting position that is only barely not lost for one side. Because of these factors, ratings depend on pairings and the openings selected. Published engine rating lists such as CCRL are based on engine-only games on standard hardware configurations and are not directly comparable to FIDE ratings.
For some ratings estimates, see Chess engine § Ratings.
Use outside of chess.
Athletic sports.
The Elo rating system is used in the chess portion of chess boxing. In order to be eligible for professional chess boxing, one must have an Elo rating of at least 1600, as well as competing in 50 or more matches of amateur boxing or martial arts.
American college football used the Elo method as a portion of its Bowl Championship Series rating systems from 1998 to 2013 after which the BCS was replaced by the College Football Playoff. Jeff Sagarin of "USA Today" publishes team rankings for most American sports, which includes Elo system ratings for college football. The use of rating systems was effectively scrapped with the creation of the College Football Playoff in 2014.
In other sports, individuals maintain rankings based on the Elo algorithm. These are usually unofficial, not endorsed by the sport's governing body. The World Football Elo Ratings is an example of the method applied to men's football. In 2006, Elo ratings were adapted for Major League Baseball teams by Nate Silver, then of Baseball Prospectus. Based on this adaptation, both also made Elo-based Monte Carlo simulations of the odds of whether teams will make the playoffs. In 2014, Beyond the Box Score, an SB Nation site, introduced an Elo ranking system for international baseball.
In tennis, the Elo-based Universal Tennis Rating (UTR) rates players on a global scale, regardless of age, gender, or nationality. It is the official rating system of major organizations such as the Intercollegiate Tennis Association and World TeamTennis and is frequently used in segments on the Tennis Channel. The algorithm analyzes more than 8 million match results from over 800,000 tennis players worldwide. On May 8, 2018, Rafael Nadal—having won 46 consecutive sets in clay court matches—had a near-perfect clay UTR of 16.42.
In pool, an Elo-based system called Fargo Rate is used to rank players in organized amateur and professional competitions.
One of the few Elo-based rankings endorsed by a sport's governing body is the FIFA Women's World Rankings, based on a simplified version of the Elo algorithm, which FIFA uses as its official ranking system for national teams in women's football.
From the first ranking list after the 2018 FIFA World Cup, FIFA has used Elo for their FIFA World Rankings.
In 2015, Nate Silver, editor-in-chief of the statistical commentary website FiveThirtyEight, and Reuben Fischer-Baum produced Elo ratings for every National Basketball Association team and season through the 2014 season. In 2014 FiveThirtyEight created Elo-based ratings and win-projections for the American professional National Football League.
The English Korfball Association rated teams based on Elo ratings, to determine handicaps for their cup competition for the 2011/12 season.
An Elo-based ranking of National Hockey League players has been developed. The hockey-Elo metric evaluates a player's overall two-way play: scoring AND defense in both even strength and power-play/penalty-kill situations.
Rugbyleagueratings.com uses the Elo rating system to rank international and club rugby league teams.
Hemaratings.com was started in 2017 and uses a Glicko-2 algorithm to rank individual Historical European martial arts fencers worldwide in different categories such as Longsword, Rapier, historical Sabre and Sword & Buckler.
Video games and online games.
Many video games use modified Elo systems in competitive gameplay. The MOBA game League of Legends used an Elo rating system prior to the second season of competitive play. The Esports game "Overwatch", the basis of the unique Overwatch League professional sports organization, uses a derivative of the Elo system to rank competitive players with various adjustments made between competitive seasons. "World of Warcraft" also previously used the Glicko-2 system to team up and compare Arena players, but now uses a system similar to Microsoft's TrueSkill. The game "Puzzle Pirates" uses the Elo rating system to determine the standings in the various puzzles. This system is also used in FIFA Mobile for the Division Rivals modes. Another recent game to start using the Elo rating system is "AirMech", using Elo ratings for 1v1, 2v2, and 3v3 random/team matchmaking. "RuneScape 3" used the Elo system in the rerelease of the bounty hunter minigame in 2016. "Mechwarrior Online" instituted an Elo system for its new "Comp Queue" mode, effective with the Jun 20, 2017 patch. ' and ' are using the Elo system for their Leaderboard and matchmaking, with new players starting at Elo 1000. Competitive Classic Tetris (Tetris played on the Nintendo Entertainment System) derives its ratings using a combination of players' personal best scores and a highly modified Elo system.
Few video games use the original Elo rating system. According to Lichess, an online chess server, the Elo system is outdated, with Glicko-2 now being used by many chess organizations. "PlayerUnknown’s Battlegrounds" is one of the few video games that utilizes the very first Elo system. In "Guild Wars", Elo ratings are used to record guild rating gained and lost through guild-versus-guild battles. In 1998, an online gaming ladder called "Clanbase" was launched, which used the Elo scoring system to rank teams. The initial K-value was 30, but was changed to 5 in January 2007, then changed to 15 in July 2009. The site later went offline in 2013. A similar alternative site was launched in 2016 under the name "Scrimbase", which also used the Elo scoring system for ranking teams. Since 2005, "Golden Tee Live" has rated players based on the Elo system. New players start at 2100, with top players rating over 3000.
Despite many video games using different systems for matchmaking, it is common for players of ranked video games to refer to all matchmaking ratings as "Elo".
Other usage.
The Elo rating system has been used in soft biometrics, which concerns the identification of individuals using human descriptions. Comparative descriptions were utilized alongside the Elo rating system to provide robust and discriminative 'relative measurements', permitting accurate identification.
The Elo rating system has also been used in biology for assessing male dominance hierarchies, and in automation and computer vision for fabric inspection.
Moreover, online judge sites also use the Elo rating system or its derivatives. For example, Topcoder uses a modified version based on the normal distribution, while Codeforces uses another version based on the logistic distribution.
The Elo rating system has also been noted in dating apps, such as in the matchmaking app Tinder, which uses a variant of the Elo rating system.
The YouTuber Marques Brownlee and his team used the Elo rating system when they let people vote between digital photos taken with different smartphone models launched in 2022.
The Elo rating system has also been used in U.S. revealed preference college rankings, such as those by the digital credential firm Parchment.
The Elo rating system has also been adopted to evaluate AI models. In 2021, Anthropic utilized the Elo system for ranking AI models in their research. The LMSYS leaderboard briefly employed the Elo rating system to rank AI models before transitioning to the Bradley–Terry model.
References in the media.
The Elo rating system was featured prominently in "The Social Network" during the algorithm scene where Mark Zuckerberg released Facemash. In the scene Eduardo Saverin writes mathematical formulas for the Elo rating system on Zuckerberg's dormitory room window. Behind the scenes, the movie claims, the Elo system is employed to rank girls by their attractiveness. The equations driving the algorithm are shown briefly, written on the window; however, they are slightly incorrect.
Notes.
<templatestyles src="Reflist/styles.css" />
References.
Notes.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "p"
},
{
"math_id": 1,
"text": "d_p"
},
{
"math_id": 2,
"text": "\n\\begin{align}\n& \\frac{w+400+x+400+y-400+z-400}{4} \\\\[6pt]\n& \\frac{w+x+y+z+400(2)-400(2)}{4}\n\\end{align}\n"
},
{
"math_id": 3,
"text": " \\text{performance rating} = \\frac{\\text{total of opponents' ratings } + 400 \\times (\\text{wins} - \\text{losses})}{\\text{games}}"
},
{
"math_id": 4,
"text": "\n\\text{performance rating} = \\frac{1000 + 400 \\times (1)}{1} = 1400"
},
{
"math_id": 5,
"text": " \\text{performance rating} = \\frac{2000 + 400 \\times (2)}{2} = 1400"
},
{
"math_id": 6,
"text": " \\text{performance rating} = \\frac{1000 + 400 \\times (0)}{1} = 1000"
},
{
"math_id": 7,
"text": "\\text{performance rating} = \\text{average of opponents' ratings} + d_p,"
},
{
"math_id": 8,
"text": "K = \\frac{800}{N_e + m} \\, "
},
{
"math_id": 9,
"text": "AF = \\operatorname{min}\\{100+4N_W+2N_D+N_R , 150\\}"
},
{
"math_id": 10,
"text": "N_W"
},
{
"math_id": 11,
"text": "N_D"
},
{
"math_id": 12,
"text": "N_R"
},
{
"math_id": 13,
"text": "\\, R_\\mathsf{A} \\,"
},
{
"math_id": 14,
"text": "\\, R_\\mathsf{B} \\,"
},
{
"math_id": 15,
"text": " E_\\mathsf{A} = \\frac 1 {1 + 10^{(R_\\mathsf{B} - R_\\mathsf{A})/400}} ~."
},
{
"math_id": 16,
"text": " E_\\mathsf{B} = \\frac 1 {1 + 10^{(R_\\mathsf{A} - R_\\mathsf{B})/400}} ~."
},
{
"math_id": 17,
"text": " E_\\mathsf{A} = \\frac{ Q_\\mathsf{A} }{ Q_\\mathsf{A} + Q_\\mathsf{B} } "
},
{
"math_id": 18,
"text": " E_\\mathsf{B} = \\frac{ Q_\\mathsf{B} }{Q_\\mathsf{A} + Q_\\mathsf{B} } ~,"
},
{
"math_id": 19,
"text": "\\; Q_\\mathsf{A} = 10^{R_\\mathsf{A}/400} \\;,"
},
{
"math_id": 20,
"text": "\\; Q_\\mathsf{B} = 10^{R_\\mathsf{B}/400} ~."
},
{
"math_id": 21,
"text": "\\; E_\\mathsf{A} + E_\\mathsf{B} = 1 ~."
},
{
"math_id": 22,
"text": "\\; Q_\\mathsf{A}/Q_\\mathsf{B} \\;"
},
{
"math_id": 23,
"text": "\\; K = 16 \\;"
},
{
"math_id": 24,
"text": "\\; K = 32 \\;"
},
{
"math_id": 25,
"text": "R_\\mathsf{A}"
},
{
"math_id": 26,
"text": "\\, E_\\mathsf{A} \\,"
},
{
"math_id": 27,
"text": "\\, S_\\mathsf{A} \\,"
},
{
"math_id": 28,
"text": "R_\\mathsf{A}' = R_\\mathsf{A} + K \\cdot (S_\\mathsf{A} - E_\\mathsf{A}) ~."
},
{
"math_id": 29,
"text": "\\; \\tfrac{1}{2}K \\;"
},
{
"math_id": 30,
"text": "\\; K = 10 \\;"
},
{
"math_id": 31,
"text": "\\mathsf{A}"
},
{
"math_id": 32,
"text": " \n\\Pr\\{\\mathsf{A}~\\textrm{wins}\\} = \\sigma(r_{\\mathsf{A,B}}), \\quad \\sigma(r)=\\frac 1 {1 + 10^{-r/s}},\n"
},
{
"math_id": 33,
"text": " \nr_{\\mathsf{A,B}} = (R_\\mathsf{A} - R_\\mathsf{B})\n"
},
{
"math_id": 34,
"text": "s=400"
},
{
"math_id": 35,
"text": " \n\\Pr\\{\\mathsf{B}~\\textrm{wins}\\} = 1-\\sigma(r_{\\mathsf{A,B}})=\\sigma(-r_{\\mathsf{A,B}}).\n"
},
{
"math_id": 36,
"text": " \\ell = \n\\begin{cases}\n-\\log \\sigma(r_\\mathsf{A,B}) & \\textrm{if}~ \\mathsf{A}~\\textrm{wins},\\\\\n-\\log \\sigma(-r_\\mathsf{A,B}) & \\textrm{if}~ \\mathsf{B}~\\textrm{wins},\n\\end{cases}"
},
{
"math_id": 37,
"text": " R_{\\mathsf{A}}\\leftarrow R_{\\mathsf{A}} - \\eta \\frac{\\textrm{d}\\ell}{\\textrm{d} R_{\\mathsf{A}}}"
},
{
"math_id": 38,
"text": " R_{\\mathsf{B}}\\leftarrow R_{\\mathsf{B}} - \\eta \\frac{\\textrm{d}\\ell}{\\textrm{d} R_{\\mathsf{B}}}"
},
{
"math_id": 39,
"text": "\\eta"
},
{
"math_id": 40,
"text": " \\frac{\\textrm{d}}{\\textrm{d} r}\\log\\sigma(r)=\\frac{\\log 10}{s}\\sigma(-r)"
},
{
"math_id": 41,
"text": " \\frac{\\textrm{d} r_{\\mathsf{A,B}}}{\\textrm{d} R_{\\mathsf{A}}}=1"
},
{
"math_id": 42,
"text": " \\frac{\\textrm{d} r_{\\mathsf{A,B}}}{\\textrm{d} R_{\\mathsf{B}}}=-1"
},
{
"math_id": 43,
"text": " R_{\\mathsf{A}}\\leftarrow \n\\begin{cases}\nR_{\\mathsf{A}} + K \\sigma(-r_{\\mathsf{A,B}}) & \\textrm{if}~\\mathsf{A}~\\textrm{wins}\\\\\nR_{\\mathsf{A}} - K \\sigma(r_{\\mathsf{A,B}}) & \\textrm{if}~\\mathsf{B}~\\textrm{wins},\n\\end{cases}"
},
{
"math_id": 44,
"text": " R_{\\mathsf{A}}\\leftarrow \nR_{\\mathsf{A}} + K (S_{\\mathsf{A}}-E_{\\mathsf{A}})"
},
{
"math_id": 45,
"text": " K=\\eta\\log10/s"
},
{
"math_id": 46,
"text": " \\eta"
},
{
"math_id": 47,
"text": " s"
},
{
"math_id": 48,
"text": " S_{\\mathsf{A}}=1"
},
{
"math_id": 49,
"text": " \\mathsf{A}"
},
{
"math_id": 50,
"text": " S_{\\mathsf{A}}=0"
},
{
"math_id": 51,
"text": " \\mathsf{B}"
},
{
"math_id": 52,
"text": " E_{\\mathsf{A}}=\\sigma(r_{\\mathsf{A,B}})"
},
{
"math_id": 53,
"text": " R_{\\mathsf{B}}"
},
{
"math_id": 54,
"text": " R_{\\mathsf{B}}\\leftarrow \nR_{\\mathsf{B}} + K (S_{\\mathsf{B}}-E_{\\mathsf{B}})"
},
{
"math_id": 55,
"text": "S_{\\mathsf{A}}=0.5"
},
{
"math_id": 56,
"text": "S_{\\mathsf{A}}=1"
},
{
"math_id": 57,
"text": "S_{\\mathsf{A}}=0"
},
{
"math_id": 58,
"text": "R_{\\mathsf{A}}"
},
{
"math_id": 59,
"text": "R_{\\mathsf{B}}"
},
{
"math_id": 60,
"text": " \n\\Pr\\{\\mathsf{A}~\\textrm{wins}\\} = \\sigma(r_{\\mathsf{A,B}}; \\kappa),\n"
},
{
"math_id": 61,
"text": " \n\\Pr\\{\\mathsf{B}~\\textrm{wins}\\} = \\sigma(-r_{\\mathsf{A,B}}; \\kappa),\n"
},
{
"math_id": 62,
"text": " \n\\Pr\\{\\mathsf{A}~\\textrm{draws}\\} = \\kappa\\sqrt{\\sigma(r_{\\mathsf{A,B}}; \\kappa)\\sigma(-r_{\\mathsf{A,B}}; \\kappa)},\n"
},
{
"math_id": 63,
"text": " \n\\sigma(r; \\kappa) = \\frac{10^{r/s}}{10^{-r/s}+\\kappa + 10^{r/s}}\n"
},
{
"math_id": 64,
"text": "\\kappa\\ge 0"
},
{
"math_id": 65,
"text": "\\kappa=0"
},
{
"math_id": 66,
"text": " \n\\Pr\\{\\mathsf{A}~\\textrm{wins}\\} = \\sigma(r_{\\mathsf{A,B}};0)=\\frac{10^{r_{\\mathsf{A,B}}/s}}{10^{-r_{\\mathsf{A,B}}/s}+ 10^{r_{\\mathsf{A,B}}/s}}=\\frac{1}{1+ 10^{-r_{\\mathsf{A,B}}/s'}},\n"
},
{
"math_id": 67,
"text": "s' = s/2"
},
{
"math_id": 68,
"text": "\n \\ell = \n\\begin{cases}\n-\\log \\sigma(r_{\\mathsf{A,B}};\\kappa) & \\textrm{if}~ \\mathsf{A}~\\textrm{wins},\\\\\n-\\log \\sigma(-r_{\\mathsf{A,B}};\\kappa) & \\textrm{if}~ \\mathsf{B}~\\textrm{wins},\\\\\n-\\log \\kappa -\\frac{1}{2}\\log\\sigma(r_{\\mathsf{A,B}};\\kappa) - \\frac{1}{2}\\log\\sigma(-r_{\\mathsf{A,B}};\\kappa) & \\textrm{if}~ \\mathsf{A}~\\textrm{draw},\n\\end{cases}\n"
},
{
"math_id": 69,
"text": "\n \\ell = \n-(S_{\\mathsf{A}} +\\frac{1}{2}D)\\log \\sigma(r_{\\mathsf{A,B}};\\kappa)\n-(S_{\\mathsf{B}} +\\frac{1}{2}D) \\log \\sigma(-r_{\\mathsf{A,B}};\\kappa)\n-D\\log \\kappa \n"
},
{
"math_id": 70,
"text": "S_{\\mathsf{B}}=1"
},
{
"math_id": 71,
"text": "\\mathsf{B}"
},
{
"math_id": 72,
"text": "D=1"
},
{
"math_id": 73,
"text": "\\log\\sigma(r;\\kappa)"
},
{
"math_id": 74,
"text": " \n\\frac{\\textrm{d}}{\\textrm{d} r}\\log\\sigma(r; \\kappa) \n=\\frac{2\\log 10}{s} [1-g(r;\\kappa)] \n"
},
{
"math_id": 75,
"text": " \ng(r;\\kappa)=\n\\frac{10^{r/s}+\\kappa/2 }\n{10^{-r/s}+\\kappa + 10^{r/s}}.\n"
},
{
"math_id": 76,
"text": " \n\\begin{align}\n\\frac{\\textrm{d}}{\\textrm{d} R_{\\mathsf{A}}}\\ell\n&=\n-\\frac{2\\log 10}{s}\n\\left( \n(S_{\\mathsf{A}} +0.5D)[1-g(r_{\\mathsf{A,B}};\\kappa)]\n-(S_{\\mathsf{B}} +0.5D)g(r_{\\mathsf{A,B}};\\kappa)\n\\right)\\\\\n&=\n-\\frac{2\\log 10}{s}\n\\left(S_{\\mathsf{A}} + 0.5D-g(r_{\\mathsf{A,B}};\\kappa)\\right),\n\\end{align}\n"
},
{
"math_id": 77,
"text": "S_{\\mathsf{A}} + S_{\\mathsf{B}} + D=1"
},
{
"math_id": 78,
"text": " \ng(-r;\\kappa)=1-g(r;\\kappa)\n"
},
{
"math_id": 79,
"text": " \nR_{\\mathsf{A}}\\leftarrow R_{\\mathsf{A}} + K (\\hat{S}_{\\mathsf{A}}- g(r_{\\mathsf{A,B}};\\kappa)) \n"
},
{
"math_id": 80,
"text": "K=2\\eta\\log10/s"
},
{
"math_id": 81,
"text": " \n\\hat{S}_{\\mathsf{A}}= S_{\\mathsf{A}} + 0.5D \n"
},
{
"math_id": 82,
"text": " \n\\hat{S}_{\\mathsf{A}}= 1 \n"
},
{
"math_id": 83,
"text": " \n\\textsf{A} \n"
},
{
"math_id": 84,
"text": " \n\\hat{S}_{\\mathsf{A}}= 0.5 \n"
},
{
"math_id": 85,
"text": " \n\\hat{S}_{\\mathsf{A}}= 0 \n"
},
{
"math_id": 86,
"text": " \nR_{\\mathsf{B}}\\leftarrow R_{\\mathsf{B}} + K (\\hat{S}_{\\mathsf{B}}- g(r_{\\mathsf{B,A}};\\kappa)) \n"
},
{
"math_id": 87,
"text": " \nr_{\\mathsf{B,A}}=R_{\\mathsf{B}}-R_{\\mathsf{A}}=-r_{\\mathsf{A,B}} \n"
},
{
"math_id": 88,
"text": " \n\\begin{align}\nE[\\hat{S}_{\\mathsf{A}}]\n&=\\Pr\\{\\mathsf{A}~\\text{wins}\\}+0.5\\Pr\\{\\mathsf{A}~\\text{draws}\\}\\\\\n&=\\sigma(r_{\\mathsf{A,B}};\\kappa)+0.5\\kappa\\sqrt{\\sigma(r_{\\mathsf{A,B}};\\kappa)\\sigma(-r_{\\mathsf{A,B}};\\kappa)}\\\\\n&=g(r_{\\mathsf{A,B}};\\kappa)\n\\end{align} \n"
},
{
"math_id": 89,
"text": " \nR_{\\mathsf{A}}\\leftarrow R_{\\mathsf{A}} + K (\\hat{S}_{\\mathsf{A}}- E_{\\mathsf{A}}) \n"
},
{
"math_id": 90,
"text": " \nE_{\\mathsf{A}}=E[\\hat{S}_\\mathsf{A}] \n"
},
{
"math_id": 91,
"text": " \nE_{\\mathsf{A}}=g(r_{\\mathsf{A,B}};\\kappa) \n"
},
{
"math_id": 92,
"text": " \nE_{\\mathsf{A}}=\\sigma(r_{\\mathsf{A,B}}) \n"
},
{
"math_id": 93,
"text": " \ng(r;0) = \\sigma(r)\n"
},
{
"math_id": 94,
"text": " \n\\kappa=0\n"
},
{
"math_id": 95,
"text": " \n\\kappa=2\n"
},
{
"math_id": 96,
"text": " \ng(r;2)=\n\\frac{10^{r/s}+1 }\n{10^{-r/s}+2 + 10^{r/s}}=\\frac{1}\n{1+10^{-r/s}}=\\sigma(r)\n"
}
]
| https://en.wikipedia.org/wiki?curid=70421 |
7042478 | Main effect | In the design of experiments and analysis of variance, a main effect is the effect of an independent variable on a dependent variable averaged across the levels of any other independent variables. The term is frequently used in the context of factorial designs and regression models to distinguish main effects from interaction effects.
Relative to a factorial design, under an analysis of variance, a main effect test tests hypotheses such as H0, the null hypothesis. Running a hypothesis test for a main effect tests whether there is evidence of an effect of the different treatments. However, a main effect test is nonspecific and does not allow for a localization of specific mean pairwise comparisons (simple effects). A main effect test merely looks at whether, overall, there is something about a particular factor that is making a difference. In other words, it is a test examining differences amongst the levels of a single factor (averaging over the other factor or factors). Main effects are essentially the overall effect of a factor.
Definition.
The effect of a factor averaged over all levels of the other factors is termed the main effect (also known as the marginal effect). The contrast of a factor between its levels, taken over all levels of the other factors, is the main effect. Equivalently, the difference between the marginal means of the levels of a factor is the main effect of that factor on the response variable. Main effects are the primary independent variables or factors tested in the experiment. A main effect is the specific effect of a factor or independent variable regardless of the other parameters in the experiment. In the design of experiments, it is referred to as a factor, but in regression analysis it is referred to as an independent variable.
Estimating Main Effects.
In a factorial design with two levels each of factors A and B, the main effects of the two factors, A and B, can be calculated. The main effect of A is given by
formula_0
The main effect of B is given by
formula_1
where n is the total number of replicates. We use factor level 1 to denote the low level and level 2 to denote the high level. The letter "a" represents the factor combination of level 2 of A and level 1 of B, and "b" represents the factor combination of level 1 of A and level 2 of B. "ab" represents both factors at level 2. Finally, 1 represents the combination in which both factors are set to level 1.
Hypothesis Testing for a Two-Way Factorial Design.
Consider a two-way factorial design in which factor A has 3 levels and factor B has 2 levels with only 1 replicate. There are 6 treatments with 5 degrees of freedom. In this example, we have two null hypotheses. The first, for factor A, is: formula_2 and the second, for factor B, is: formula_3. The main effect for factor A can be computed with 2 degrees of freedom. This variation is summarized by the sum of squares denoted by the term SSA. Likewise the variation from factor B can be computed as SSB with 1 degree of freedom. The expected value for the mean of the responses in column i is formula_4 while the expected value for the mean of the responses in row j is formula_5, where i corresponds to the level of factor A and j corresponds to the level of factor B. formula_6 and formula_7 are main effects. SSA and SSB are main-effects sums of squares. The two remaining degrees of freedom can be used to describe the variation that comes from the interaction between the two factors and can be denoted as SSAB. A table can show the layout of this particular design with the main effects (where formula_8 is the observation of the ith level of factor B and the jth level of factor A):
Example.
Take a formula_9 factorial design (two levels of each of two factors) testing the taste ranking of fried chicken at two fast food restaurants. Let taste testers rank the chicken from 1 to 10 (best tasting), for factor X: "spiciness" and factor Y: "crispiness." Level X1 is for "not spicy" chicken and X2 is for "spicy" chicken. Level Y1 is for "not crispy" and level Y2 is for "crispy" chicken. Suppose that five people (5 replicates) tasted all four kinds of chicken and gave a ranking of 1–10 for each. The hypotheses of interest would be, for factor X: formula_10 and for factor Y: formula_11. The table of hypothetical results is given here:
The "Main Effect" of X (spiciness) when we are at Y1 (not crunchy) is given as:
formula_12 where n is the number of replicates. Likewise, the "Main Effect" of X at Y2 (crispy) is given as:
formula_13. Taking the simple average of these two gives the overall main effect of factor X, which results in the formula above, written here as:
formula_14 = formula_15
Likewise, for Y, the overall main effect will be:
formula_16= formula_17
For the Chicken tasting experiment, we would have the resulting main effects:
formula_18
formula_19
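Since the results table is not reproduced above, the cell totals implied by the worked computation (21, 23, 25 and 41 over n = 5 replicates) can be used to check the arithmetic; the following Python sketch is illustrative only:

```python
# Cell totals (sums over n = 5 replicates) implied by the worked computation above:
# [X1 Y1] = 21, [X1 Y2] = 23, [X2 Y1] = 25, [X2 Y2] = 41
n = 5
x1y1, x1y2, x2y1, x2y2 = 21, 23, 25, 41

main_effect_x = (x2y2 + x2y1 - x1y2 - x1y1) / (2 * n)  # spiciness
main_effect_y = (x2y2 + x1y2 - x2y1 - x1y1) / (2 * n)  # crispiness
print(main_effect_x, main_effect_y)  # 2.2 and 1.8, matching the results above
```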
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "A={1 \\over 2n}[ab+a-b-1]"
},
{
"math_id": 1,
"text": "B={1 \\over 2n}[ab+b-a-1]"
},
{
"math_id": 2,
"text": "H_0 : \\alpha_1=\\alpha_2=\\alpha_3=0"
},
{
"math_id": 3,
"text": "H_0 : \\beta_1=\\beta_2=0"
},
{
"math_id": 4,
"text": "\\mu + \\beta_j"
},
{
"math_id": 5,
"text": "\\mu + \\alpha_i"
},
{
"math_id": 6,
"text": "\\alpha_i"
},
{
"math_id": 7,
"text": "\\beta_j"
},
{
"math_id": 8,
"text": "x_{ij}"
},
{
"math_id": 9,
"text": "2^2"
},
{
"math_id": 10,
"text": "H_0 : X_1=X_2=0"
},
{
"math_id": 11,
"text": "H_0 : Y_1=Y_2=0"
},
{
"math_id": 12,
"text": "\\frac{[X_2 Y_1]-[X_1 Y_1]}{n} "
},
{
"math_id": 13,
"text": "\\frac{[X_2 Y_2]-[X_1 Y_2]}{n} "
},
{
"math_id": 14,
"text": "A=X={1 \\over 2n}[ab+a-b-1]"
},
{
"math_id": 15,
"text": "\\frac{[X_2 Y_2]+[X_2 Y_1] - [X_1 Y_2]-[X_1 Y_1]}{2n} "
},
{
"math_id": 16,
"text": "B=Y={1 \\over 2n}[ab+b-a-1]"
},
{
"math_id": 17,
"text": "\\frac{[X_2 Y_2]+[X_1 Y_2]-[X_2 Y_1] - [X_1 Y_1]}{2n} "
},
{
"math_id": 18,
"text": "X: \\frac{[25]-[21]+[41] - [23]}{2*5}= 2.2 "
},
{
"math_id": 19,
"text": "Y: \\frac{[41]-[25]+[23] - [21]}{2*5} = 1.8 "
}
]
| https://en.wikipedia.org/wiki?curid=7042478 |
704359 | Contour integration | Method of evaluating certain integrals along paths in the complex plane
In the mathematical field of complex analysis, contour integration is a method of evaluating certain integrals along paths in the complex plane.
Contour integration is closely related to the calculus of residues, a method of complex analysis.
One use for contour integrals is the evaluation of integrals along the real line that are not readily found by using only real variable methods.
Contour integration methods include:
One method can be used, or a combination of these methods, or various limiting processes, for the purpose of finding these integrals or sums.
Curves in the complex plane.
In complex analysis a contour is a type of curve in the complex plane. In contour integration, contours provide a precise definition of the curves on which an integral may be suitably defined. A curve in the complex plane is defined as a continuous function from a closed interval of the real line to the complex plane: formula_0.
This definition of a curve coincides with the intuitive notion of a curve, but includes a parametrization by a continuous function from a closed interval. This more precise definition allows us to consider what properties a curve must have for it to be useful for integration. In the following subsections we narrow down the set of curves that we can integrate to include only those that can be built up out of a finite number of continuous curves that can be given a direction. Moreover, we will restrict the "pieces" from crossing over themselves, and we require that each piece have a finite (non-vanishing) continuous derivative. These requirements correspond to requiring that we consider only curves that can be traced, such as by a pen, in a sequence of even, steady strokes, which stop only to start a new piece of the curve, all without picking up the pen.
Directed smooth curves.
Contours are often defined in terms of directed smooth curves. These provide a precise definition of a "piece" of a smooth curve, of which a contour is made.
A smooth curve is a curve formula_0 with a non-vanishing, continuous derivative such that each point is traversed only once (z is one-to-one), with the possible exception of a curve such that the endpoints match (formula_1). In the case where the endpoints match the curve is called closed, and the function is required to be one-to-one everywhere else and the derivative must be continuous at the identified point (formula_2). A smooth curve that is not closed is often referred to as a smooth arc.
The parametrization of a curve provides a natural ordering of points on the curve: formula_3 comes before formula_4 if formula_5. This leads to the notion of a directed smooth curve. It is most useful to consider curves independent of the specific parametrization. This can be done by considering equivalence classes of smooth curves with the same direction. A directed smooth curve can then be defined as an ordered set of points in the complex plane that is the image of some smooth curve in their natural order (according to the parametrization). Note that not all orderings of the points are the natural ordering of a smooth curve. In fact, a given smooth curve has only two such orderings. Also, a single closed curve can have any point as its endpoint, while a smooth arc has only two choices for its endpoints.
Contours.
Contours are the class of curves on which we define contour integration. A contour is a directed curve which is made up of a finite sequence of directed smooth curves whose endpoints are matched to give a single direction. This requires that the sequence of curves formula_6 be such that the terminal point of formula_7 coincides with the initial point of formula_8 for all formula_9 such that formula_10 . This includes all directed smooth curves. Also, a single point in the complex plane is considered a contour. The symbol formula_11 is often used to denote the piecing of curves together to form a new curve. Thus we could write a contour formula_12 that is made up of formula_13 curves as
formula_14
Contour integrals.
The contour integral of a complex function formula_15 is a generalization of the integral for real-valued functions. For continuous functions in the complex plane, the contour integral can be defined in analogy to the line integral by first defining the integral along a directed smooth curve in terms of an integral over a real valued parameter. A more general definition can be given in terms of partitions of the contour in analogy with the partition of an interval and the Riemann integral. In both cases the integral over a contour is defined as the sum of the integrals over the directed smooth curves that make up the contour.
For continuous functions.
To define the contour integral in this way one must first consider the integral, over a real variable, of a complex-valued function. Let formula_16 be a complex-valued function of a real variable, formula_17. The real and imaginary parts of formula_18 are often denoted as formula_19 and formula_20, respectively, so that
formula_21
Then the integral of the complex-valued function formula_18 over the interval formula_22 is given by
formula_23
Now, to define the contour integral, let formula_15 be a continuous function on the directed smooth curve formula_24. Let formula_0 be any parametrization of formula_24 that is consistent with its order (direction). Then the integral along formula_24 is denoted
formula_25
and is given by
formula_26
This definition is well defined. That is, the result is independent of the parametrization chosen. In the case where the real integral on the right side does not exist the integral along formula_24 is said not to exist.
As a generalization of the Riemann integral.
The generalization of the Riemann integral to functions of a complex variable is done in complete analogy to its definition for functions from the real numbers. The partition of a directed smooth curve formula_24 is defined as a finite, ordered set of points on formula_24. The integral over the curve is the limit of finite sums of function values, taken at the points on the partition, in the limit that the maximum distance between any two successive points on the partition (in the two-dimensional complex plane), also known as the mesh, goes to zero.
Direct methods.
Direct methods involve the calculation of the integral through methods similar to those in calculating line integrals in multivariate calculus. This means that we use the following method:
Example.
A fundamental result in complex analysis is that the contour integral of 1/"z" is 2π"i", where the path of the contour is taken to be the unit circle traversed counterclockwise (or any positively oriented Jordan curve about 0). In the case of the unit circle there is a direct method to evaluate the integral
formula_27
In evaluating this integral, use the unit circle |"z"| = 1 as a contour, parametrized by "z"("t") = "eit", with "t" ∈ [0, 2π]; then "dz"/"dt" = "ieit" and
formula_28
which is the value of the integral. This result only applies to the case in which "z" is raised to the power of −1. If the power is any other integer, the result is zero.
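This direct evaluation can also be checked numerically by discretizing the parametrization; the following Python sketch (the helper name is a choice made here) approximates the contour integral over the unit circle:

```python
import numpy as np

def unit_circle_integral(f, n=100_000):
    """Approximate the contour integral of f over z(t) = exp(it), t in [0, 2*pi)."""
    t = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)
    z = np.exp(1j * t)
    dz_dt = 1j * z
    return np.sum(f(z) * dz_dt) * (2.0 * np.pi / n)  # Riemann sum over the parameter t

print(unit_circle_integral(lambda z: 1 / z))   # ≈ 6.2832j = 2*pi*i
print(unit_circle_integral(lambda z: z ** 2))  # ≈ 0 for any other integer power
```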
Applications of integral theorems.
Applications of integral theorems are also often used to evaluate the contour integral along a contour, which means that the real-valued integral is calculated simultaneously along with calculating the contour integral.
Integral theorems such as the Cauchy integral formula or residue theorem are generally used in the following method:
Example 1.
Consider the integral
formula_29
To evaluate this integral, we look at the complex-valued function
formula_30
which has singularities at "i" and −"i". We choose a contour that will enclose the real-valued integral; here a closed semicircle with its diameter on the real line (going from, say, −"a" to "a") will be convenient. Call this contour "C".
There are two ways of proceeding, using the Cauchy integral formula or by the method of residues:
Using the Cauchy integral formula.
Note that:
formula_31
thus
formula_32
Furthermore, observe that
formula_33
Since the only singularity in the contour is the one at i, then we can write
formula_34
which puts the function in the form for direct application of the formula. Then, by using Cauchy's integral formula,
formula_35
We take the first derivative, in the above steps, because the pole is a second-order pole. That is, ("z" − "i") is taken to the second power, so we employ the first derivative of "f"("z"). If it were ("z" − "i") taken to the third power, we would use the second derivative and divide by 2!, etc. The case of ("z" − "i") to the first power corresponds to a zero order derivative—just "f"("z") itself.
We need to show that the integral over the arc of the semicircle tends to zero as "a" → ∞, using the estimation lemma
formula_36
where "M" is an upper bound on |"f"("z")| along the arc and "L" the length of the arc. Now,
formula_37
So
formula_38
Using the method of residues.
Consider the Laurent series of "f"("z") about i, the only singularity we need to consider. We then have
formula_39
It is clear by inspection that the residue is −"i"/4, so, by the residue theorem, we have
formula_40
Thus we get the same result as before.
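Either way, the value agrees with a direct numerical evaluation of the real integral; the following sketch assumes SciPy is available.

```python
import numpy as np
from scipy.integrate import quad

# Direct numerical evaluation of the real integral from Example 1.
value, err = quad(lambda x: 1.0 / (x**2 + 1.0)**2, -np.inf, np.inf)
print(value, np.pi / 2)   # both are approximately 1.5707963
```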
Contour note.
As an aside, a question can arise whether the semicircle should instead be taken to include the "other" singularity, enclosing −"i". To have the integral along the real axis moving in the correct direction, that contour must travel clockwise, i.e., in a negative direction, reversing the sign of the integral overall.
This does not affect the use of the method of residues by series.
Example 2 – Cauchy distribution.
The integral
formula_41
(which arises in probability theory as a scalar multiple of the characteristic function of the Cauchy distribution) resists the techniques of elementary calculus. We will evaluate it by expressing it as a limit of contour integrals along the contour C that goes along the real line from −"a" to a and then counterclockwise along a semicircle centered at 0 from a to −"a". Take a to be greater than 1, so that the imaginary unit i is enclosed within the curve. The contour integral is
formula_42
Since "e""itz" is an entire function (having no singularities at any point in the complex plane), this function has singularities only where the denominator "z"2 + 1 is zero. Since "z"2 + 1 = ("z" + "i")("z" − "i"), that happens only where "z" = "i" or "z" = −"i". Only one of those points is in the region bounded by this contour. The residue of "f"("z") at "z" = "i" is
formula_43
According to the residue theorem, then, we have
formula_44
The contour C may be split into a "straight" part and a curved arc, so that
formula_45
and thus
formula_46
According to Jordan's lemma, if "t" > 0 then
formula_47
Therefore, if "t" > 0 then
formula_48
A similar argument with an arc that winds around −"i" rather than i shows that if "t" < 0 then
formula_49
and finally we have this:
formula_50
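The closed form can be compared with a direct numerical evaluation of the real integral. The following sketch (assuming SciPy is available) replaces the infinite limits with a large finite cutoff, so the agreement is only approximate.

```python
import numpy as np
from scipy.integrate import quad

def fourier_cauchy(t, cutoff=400.0):
    # The sine part cancels by symmetry, so only the cosine part contributes.
    val, _ = quad(lambda x: np.cos(t * x) / (x**2 + 1.0), -cutoff, cutoff, limit=1000)
    return val

for t in (-2.0, -0.5, 0.5, 2.0):
    print(t, fourier_cauchy(t), np.pi * np.exp(-abs(t)))   # last two columns agree closely
```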
Example 3 – trigonometric integrals.
Certain substitutions can be made to integrals involving trigonometric functions, so the integral is transformed into a rational function of a complex variable and then the above methods can be used in order to evaluate the integral.
As an example, consider
formula_51
We seek to make a substitution of "z" = "eit". Now, recall
formula_52
and
formula_53
Taking C to be the unit circle, we substitute to get:
formula_54
The singularities to be considered are at formula_55 Let "C"1 be a small circle about formula_56 and "C"2 be a small circle about formula_57 Then we arrive at the following:
formula_58
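The value π can be confirmed by integrating the original trigonometric integrand numerically; the sketch below assumes SciPy is available.

```python
import numpy as np
from scipy.integrate import quad

value, _ = quad(lambda t: 1.0 / (1.0 + 3.0 * np.cos(t)**2), -np.pi, np.pi)
print(value, np.pi)   # both are approximately 3.14159
```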
Example 3a – trigonometric integrals, the general procedure.
The above method may be applied to all integrals of the type
formula_59
where P and Q are polynomials, i.e. a rational function in trigonometric terms is being integrated. Note that the bounds of integration may as well be π and −π, as in the previous example, or any other pair of endpoints 2π apart.
The trick is to use the substitution "z" = "eit" where "dz" = "ieit dt" and hence
formula_60
This substitution maps the interval [0, 2π] to the unit circle. Furthermore,
formula_61
and
formula_62
so that a rational function "f"("z") in z results from the substitution, and the integral becomes
formula_63
which is in turn computed by summing the residues of "f"("z") inside the unit circle.
The image at right illustrates this for
formula_64
which we now compute. The first step is to recognize that
formula_65
The substitution yields
formula_66
The poles of this function are at ±(1 + √2) and ±(√2 − 1). Of these, 1 + √2 and −(1 + √2) are outside the unit circle (shown in red, not to scale), whereas √2 − 1 and −(√2 − 1) are inside the unit circle (shown in blue). The corresponding residues are both equal to −"i"√2/16, so that the value of the integral is
formula_67
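A quick numerical check of the original real integral (assuming SciPy is available):

```python
import numpy as np
from scipy.integrate import quad

value, _ = quad(lambda t: 1.0 / (1.0 + np.sin(t)**2), 0.0, np.pi / 2)
print(value, np.pi * np.sqrt(2) / 4)   # both are approximately 1.11072
```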
Example 4 – branch cuts.
Consider the real integral
formula_68
We can begin by formulating the complex integral
formula_69
We can use the Cauchy integral formula or residue theorem again to obtain the relevant residues. However, the important thing to note is that "z"1/2 = "e"(Log "z")/2, so "z"1/2 has a branch cut. This affects our choice of the contour "C". Normally the logarithm branch cut is defined as the negative real axis; however, this makes the calculation of the integral slightly more complicated, so we define it to be the positive real axis.
Then, we use the so-called "keyhole contour", which consists of a small circle about the origin of radius ε say, extending to a line segment parallel and close to the positive real axis but not touching it, to an almost full circle, returning to a line segment parallel, close, and below the positive real axis in the negative sense, returning to the small circle in the middle.
Note that "z" = −2 and "z" = −4 are inside the big circle. These are the two remaining poles, derivable by factoring the denominator of the integrand. The branch point at "z" = 0 was avoided by detouring around the origin.
Let γ be the small circle of radius ε, Γ the larger, with radius R, then
formula_70
It can be shown that the integrals over Γ and γ both tend to zero as "ε" → 0 and "R" → ∞ by an estimation argument like the one above, which leaves two terms. Now, since "z"1/2 = "e"(Log "z")/2, on the segment of the contour below the branch cut the argument of "z" has gained 2π relative to the segment above it, so the factor "e"(1/2)(2π"i") = "e"π"i" = −1 appears in the integrand. So
formula_71
Therefore:
formula_72
By using the residue theorem or the Cauchy integral formula (first employing the partial fractions method to derive a sum of two simple contour integrals) one obtains
formula_73
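A numerical check of the original real integral (assuming SciPy is available):

```python
import numpy as np
from scipy.integrate import quad

value, _ = quad(lambda x: np.sqrt(x) / (x**2 + 6.0 * x + 8.0), 0.0, np.inf)
print(value, np.pi * (1.0 - 1.0 / np.sqrt(2)))   # both are approximately 0.92015
```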
Example 5 – the square of the logarithm.
This section treats a type of integral of which
formula_74
is an example.
To calculate this integral, one uses the function
formula_75
and the branch of the logarithm corresponding to −π < arg "z" ≤ π.
We will calculate the integral of "f"("z") along the keyhole contour shown at right. As it turns out, this integral is a multiple of the initial integral that we wish to calculate, and by the Cauchy residue theorem we have
formula_76
Let R be the radius of the large circle, and r the radius of the small one. We will denote the upper line by M, and the lower line by N. As before we take the limit when "R" → ∞ and "r" → 0. The contributions from the two circles vanish. For example, one has the following upper bound with the ML lemma:
formula_77
In order to compute the contributions of M and N we set "z" = −"x" + "iε" on M and "z" = −"x" − "iε" on N, with 0 < "x" < ∞:
formula_78
which gives
formula_79
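A numerical check of this value (assuming SciPy is available):

```python
import numpy as np
from scipy.integrate import quad

value, _ = quad(lambda x: np.log(x) / (1.0 + x**2)**2, 0.0, np.inf)
print(value, -np.pi / 4)   # both are approximately -0.785398
```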
Example 6 – logarithms and the residue at infinity.
We seek to evaluate
formula_80
This requires a close study of
formula_81
We will construct "f"("z") so that it has a branch cut on [0, 3], shown in red in the diagram. To do this, we choose two branches of the logarithm, setting
formula_82
and
formula_83
The cut of "z"3⁄4 is therefore (−∞, 0] and the cut of (3 − "z")1/4 is (−∞, 3]. It is easy to see that the cut of the product of the two, i.e. "f"("z"), is [0, 3], because "f"("z") is actually continuous across (−∞, 0). This is because when "z" = −"r" < 0 and we approach the cut from above, "f"("z") has the value
formula_84
When we approach from below, "f"("z") has the value
formula_85
But
formula_86
so that we have continuity across the cut. This is illustrated in the diagram, where the two black oriented circles are labelled with the corresponding value of the argument of the logarithm used in "z"3⁄4 and (3 − "z")1/4.
We will use the contour shown in green in the diagram. To do this we must compute the value of "f"("z") along the line segments just above and just below the cut.
Let "z" = "r" (in the limit, i.e. as the two green circles shrink to radius zero), where 0 ≤ "r" ≤ 3. Along the upper segment, we find that "f"("z") has the value
formula_87
and along the lower segment,
formula_88
It follows that the integral of "f"("z")/(5 − "z") along the upper segment is −"iI" in the limit, and along the lower segment, "I".
If we can show that the integrals along the two green circles vanish in the limit, then we also have the value of I, by the Cauchy residue theorem. Let the radius of the green circles be ρ, where "ρ" < 0.001 and "ρ" → 0, and apply the ML inequality. For the circle "C"L on the left, we find
formula_89
Similarly, for the circle "C"R on the right, we have
formula_90
Now using the Cauchy residue theorem, we have
formula_91
where the minus sign is due to the clockwise direction around the residues. Using the branch of the logarithm from before, clearly
formula_92
The pole is shown in blue in the diagram. The value simplifies to
formula_93
We use the following formula for the residue at infinity:
formula_94
Substituting, we find
formula_95
and
formula_96
where we have used the fact that −1 = "e"π"i" for the second branch of the logarithm. Next we apply the binomial expansion, obtaining
formula_97
The conclusion is that
formula_98
Finally, it follows that the value of I is
formula_99
which yields
formula_100
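The closed form can be compared against direct numerical quadrature of the original real integral; the sketch below assumes SciPy is available.

```python
import numpy as np
from scipy.integrate import quad

integrand = lambda x: x**0.75 * (3.0 - x)**0.25 / (5.0 - x)
value, _ = quad(integrand, 0.0, 3.0)
closed_form = np.pi / (2.0 * np.sqrt(2.0)) * (17.0 - 40.0**0.75)
print(value, closed_form)   # the two values agree (approximately 1.22)
```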
Evaluation with residue theorem.
Using the residue theorem, we can evaluate closed contour integrals. The following are examples of evaluating contour integrals with the residue theorem.
Using the residue theorem, let us evaluate this contour integral.
formula_101
Recall that the residue theorem states
formula_102
where formula_103 is the residue of formula_104, and the formula_105 are the singularities of formula_104 lying inside the contour formula_106 (with none of them lying directly on formula_106).
formula_104 has only one pole, formula_107. From that, we determine the residue of formula_104 to be formula_108
formula_109
Thus, using the residue theorem, we can determine:
formula_110
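The result can be confirmed by evaluating the contour integral numerically along the unit circle; the sketch below assumes NumPy is available.

```python
import numpy as np

# Midpoint-rule evaluation of the contour integral of exp(z)/z**3 around |z| = 1.
n = 200_000
t = np.linspace(0.0, 2.0 * np.pi, n + 1)
tm = 0.5 * (t[:-1] + t[1:])
z, dz = np.exp(1j * tm), 1j * np.exp(1j * tm)
value = np.sum(np.exp(z) / z**3 * dz) * (2.0 * np.pi / n)
print(value, np.pi * 1j)   # both are approximately 3.14159j
```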
Multivariable contour integrals.
To solve multivariable contour integrals (i.e. surface integrals, complex volume integrals, and higher order integrals), we must use the divergence theorem. For now, let formula_111 be interchangeable with formula_112; both denote the divergence of the vector field formula_113. This theorem states:
formula_114
In addition, we need to evaluate formula_115, where formula_116 is an alternative notation for formula_117. The divergence in any number of dimensions can be written as
formula_118
Example 1.
Let the vector field be formula_119, bounded by the following
formula_120
By the divergence theorem, the corresponding double contour (surface) integral of formula_113 over the boundary of this region equals the triple integral of formula_121 over the region itself. We now evaluate formula_121 and set up the corresponding triple integral:
formula_122
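As a numerical cross-check of the final value (a sketch assuming SciPy is available), the triple integral of the divergence over the box can be evaluated directly:

```python
import numpy as np
from scipy.integrate import tplquad

# div(F) = 2(cos 2x + cos 2y + cos 2z), integrated over 0<=x<=1, 0<=y<=3, -1<=z<=4.
div_f = lambda z, y, x: 2.0 * (np.cos(2 * x) + np.cos(2 * y) + np.cos(2 * z))
value, _ = tplquad(div_f, 0.0, 1.0, 0.0, 3.0, -1.0, 4.0)
closed_form = 18 * np.sin(2) + 3 * np.sin(8) + 5 * np.sin(6)
print(value, closed_form)   # both are approximately 17.94
```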
Example 2.
Let the vector field be formula_123, and note that there are 4 parameters in this case. Let this vector field be bounded by the following:
formula_124
To evaluate this, we use the divergence theorem as stated before, and evaluate formula_121. Let formula_125. Then:
formula_126
Thus, we can evaluate a contour integral with formula_127. We can use the same method to evaluate contour integrals for any vector field with formula_128 as well.
Integral representation.
An integral representation of a function is an expression of the function involving a contour integral. Various integral representations are known for many special functions. Integral representations can be important for theoretical reasons, e.g. giving analytic continuation or functional equations, or sometimes for numerical evaluations.
For example, the original definition of the Riemann zeta function "ζ"("s") via a Dirichlet series,
formula_129
is valid only for Re("s") > 1. But
formula_130
where the integration is done over the Hankel contour H, is valid for all complex s not equal to 1.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "z:[a,b]\\to\\C"
},
{
"math_id": 1,
"text": "z(a)=z(b)"
},
{
"math_id": 2,
"text": "z'(a)=z'(b)"
},
{
"math_id": 3,
"text": "z(x)"
},
{
"math_id": 4,
"text": "z(y)"
},
{
"math_id": 5,
"text": "x<y"
},
{
"math_id": 6,
"text": "\\gamma_1,\\dots,\\gamma_n"
},
{
"math_id": 7,
"text": "\\gamma_i"
},
{
"math_id": 8,
"text": "\\gamma_{i+1}"
},
{
"math_id": 9,
"text": "i"
},
{
"math_id": 10,
"text": "1\\leq i<n"
},
{
"math_id": 11,
"text": "+"
},
{
"math_id": 12,
"text": "\\Gamma"
},
{
"math_id": 13,
"text": "n"
},
{
"math_id": 14,
"text": " \\Gamma = \\gamma_1 + \\gamma_2 + \\cdots + \\gamma_n."
},
{
"math_id": 15,
"text": "f:\\C\\to\\C"
},
{
"math_id": 16,
"text": "f:\\R\\to\\C"
},
{
"math_id": 17,
"text": "t"
},
{
"math_id": 18,
"text": "f"
},
{
"math_id": 19,
"text": "u(t)"
},
{
"math_id": 20,
"text": "v(t)"
},
{
"math_id": 21,
"text": "f(t) = u(t) + iv(t)."
},
{
"math_id": 22,
"text": "[a,b]"
},
{
"math_id": 23,
"text": "\\begin{align}\n\\int_a^b f(t) \\, dt &= \\int_a^b \\big( u(t) + i v(t) \\big) \\, dt \\\\\n&= \\int_a^b u(t) \\, dt + i \\int_a^b v(t) \\, dt. \n\\end{align}"
},
{
"math_id": 24,
"text": "\\gamma"
},
{
"math_id": 25,
"text": "\\int_\\gamma f(z)\\, dz\\, "
},
{
"math_id": 26,
"text": "\\int_\\gamma f(z) \\, dz := \\int_a^b f\\big(z(t)\\big) z'(t) \\, dt."
},
{
"math_id": 27,
"text": "\\oint_C \\frac{1}{z}\\,dz."
},
{
"math_id": 28,
"text": "\\oint_C \\frac{1}{z}\\,dz = \\int_0^{2\\pi} \\frac{1}{e^{it}} ie^{it}\\,dt = i\\int_0^{2\\pi} 1 \\, dt = i \\, t\\Big|_0^{2\\pi} = \\left(2\\pi-0\\right)i = 2\\pi i"
},
{
"math_id": 29,
"text": "\\int_{-\\infty}^\\infty \\frac{1}{\\left(x^2+1\\right)^2}\\,dx,"
},
{
"math_id": 30,
"text": "f(z)=\\frac{1}{\\left(z^2+1\\right)^2}"
},
{
"math_id": 31,
"text": "\\oint_C f(z)\\,dz = \\int_{-a}^a f(z)\\,dz + \\int_\\text{Arc} f(z)\\,dz "
},
{
"math_id": 32,
"text": "\\int_{-a}^a f(z)\\,dz = \\oint_C f(z)\\,dz - \\int_\\text{Arc} f(z)\\,dz "
},
{
"math_id": 33,
"text": "f(z)=\\frac{1}{\\left(z^2+1\\right)^2}=\\frac{1}{(z+i)^2(z-i)^2}."
},
{
"math_id": 34,
"text": "f(z)=\\frac{\\frac{1}{(z+i)^2}}{(z-i)^2},"
},
{
"math_id": 35,
"text": "\\oint_C f(z)\\,dz = \\oint_C \\frac{\\frac{1}{(z+i)^2}}{(z-i)^2}\\,dz = 2\\pi i \\, \\left.\\frac{d}{dz} \\frac{1}{(z+i)^2}\\right|_{z=i} =2 \\pi i \\left[\\frac{-2}{(z+i)^3}\\right]_{z = i} =\\frac{\\pi}{2}"
},
{
"math_id": 36,
"text": "\\left|\\int_\\text{Arc} f(z)\\,dz\\right| \\le ML"
},
{
"math_id": 37,
"text": "\\left|\\int_\\text{Arc} f(z)\\,dz\\right|\\le \\frac{a\\pi}{\\left(a^2-1\\right)^2} \\to 0 \\text{ as } a \\to \\infty."
},
{
"math_id": 38,
"text": "\\int_{-\\infty}^\\infty \\frac{1}{\\left(x^2+1\\right)^2}\\,dx = \\int_{-\\infty}^\\infty f(z)\\,dz = \\lim_{a \\to +\\infty} \\int_{-a}^a f(z)\\,dz = \\frac{\\pi}2.\\quad\\square"
},
{
"math_id": 39,
"text": "f(z) = \\frac{-1}{4(z-i)^2} + \\frac{-i}{4(z-i)} + \\frac{3}{16} + \\frac{i}{8}(z-i) + \\frac{-5}{64}(z-i)^2 + \\cdots"
},
{
"math_id": 40,
"text": "\\oint_C f(z)\\,dz = \\oint_C \\frac{1}{\\left(z^2+1\\right)^2}\\,dz = 2 \\pi i \\,\\operatorname{Res}_{z=i} f(z) = 2 \\pi i \\left(-\\frac{i}{4}\\right)=\\frac{\\pi}2 \\quad\\square"
},
{
"math_id": 41,
"text": "\\int_{-\\infty}^\\infty \\frac{e^{itx}}{x^2+1}\\,dx"
},
{
"math_id": 42,
"text": "\\int_C \\frac{e^{itz} }{ z^2+1}\\,dz."
},
{
"math_id": 43,
"text": "\\lim_{z\\to i}(z-i)f(z) = \\lim_{z\\to i}(z-i)\\frac{e^{itz} }{ z^2+1} = \\lim_{z\\to i}(z-i)\\frac{e^{itz} }{ (z-i)(z+i)} = \\lim_{z\\to i}\\frac{e^{itz} }{ z+i} = \\frac{e^{-t}}{2i}."
},
{
"math_id": 44,
"text": "\\int_C f(z)\\,dz=2\\pi i \\operatorname{Res}_{z=i}f(z)=2\\pi i\\frac{e^{-t} }{ 2i}=\\pi e^{-t}."
},
{
"math_id": 45,
"text": "\\int_\\text{straight}+\\int_\\text{arc}=\\pi e^{-t},"
},
{
"math_id": 46,
"text": "\\int_{-a}^a =\\pi e^{-t}-\\int_\\text{arc}."
},
{
"math_id": 47,
"text": "\\int_\\text{arc}\\frac{e^{itz} }{ z^2+1}\\,dz \\rightarrow 0 \\mbox{ as } a\\rightarrow\\infty."
},
{
"math_id": 48,
"text": "\\int_{-\\infty}^\\infty \\frac{e^{itx} }{ x^2+1}\\,dx=\\pi e^{-t}."
},
{
"math_id": 49,
"text": "\\int_{-\\infty}^\\infty \\frac{e^{itx} }{ x^2+1}\\,dx=\\pi e^t,"
},
{
"math_id": 50,
"text": "\\int_{-\\infty}^\\infty \\frac{e^{itx} }{ x^2+1} \\,dx=\\pi e^{-|t|}."
},
{
"math_id": 51,
"text": "\\int_{-\\pi}^\\pi \\frac{1 }{ 1 + 3 (\\cos t)^2} \\,dt."
},
{
"math_id": 52,
"text": "\\cos t = \\frac12 \\left(e^{it}+e^{-it}\\right) = \\frac12 \\left(z+\\frac{1}{z}\\right)"
},
{
"math_id": 53,
"text": "\\frac{dz}{dt} = iz,\\ dt = \\frac{dz}{iz}."
},
{
"math_id": 54,
"text": "\\begin{align}\n\\oint_C \\frac{1}{ 1 + 3 \\left(\\frac12 \\left(z+\\frac{1}{z}\\right)\\right)^2} \\,\\frac{dz}{iz} &= \\oint_C \\frac{1 }{ 1 + \\frac34 \\left(z+\\frac{1}{z}\\right)^2}\\frac{1}{iz} \\,dz \\\\\n&= \\oint_C \\frac{-i}{ z+\\frac34 z\\left(z+\\frac{1}{z}\\right)^2}\\,dz \\\\\n&= -i \\oint_C \\frac{dz}{ z+\\frac34 z\\left(z^2+2+\\frac{1}{z^2}\\right)} \\\\\n&= -i \\oint_C \\frac{dz}{ z+\\frac34 \\left(z^3+2z+\\frac{1}{z}\\right)} \\\\\n&= -i \\oint_C \\frac{dz}{ \\frac34 z^3+\\frac52 z+\\frac{3}{4z}} \\\\\n&= -i \\oint_C \\frac{4}{ 3z^3+10z+\\frac{3}{z}}\\,dz \\\\\n&= -4i \\oint_C \\frac{dz}{ 3z^3+10z+\\frac{3}{z}} \\\\\n&= -4i \\oint_C \\frac{z}{ 3z^4+10z^2+3 } \\,dz \\\\\n&= -4i \\oint_C \\frac{z}{ 3\\left(z+\\sqrt{3}i\\right)\\left(z-\\sqrt{3}i\\right)\\left(z+\\frac{i}{\\sqrt 3}\\right)\\left(z-\\frac{i}{\\sqrt 3}\\right)}\\,dz \\\\\n&= -\\frac{4i}{3} \\oint_C \\frac{z}{\\left(z+\\sqrt{3}i\\right)\\left(z-\\sqrt{3}i\\right)\\left(z+\\frac{i}{\\sqrt 3}\\right)\\left(z-\\frac{i}{\\sqrt 3}\\right)}\\,dz.\n\\end{align}"
},
{
"math_id": 55,
"text": "\\tfrac{\\pm i}{\\sqrt{3}}."
},
{
"math_id": 56,
"text": "\\tfrac{i}{\\sqrt{3}},"
},
{
"math_id": 57,
"text": "\\tfrac{-i}{\\sqrt{3}}."
},
{
"math_id": 58,
"text": "\\begin{align}\n& -\\frac{4i}{3} \\left [\\oint_{C_1} \\frac{\\frac{z}{\\left(z+\\sqrt{3}i\\right)\\left(z-\\sqrt{3}i\\right)\\left(z+\\frac{i}{\\sqrt 3} \\right)}}{z-\\frac{i}{\\sqrt 3}}\\,dz +\\oint_{C_2} \\frac{\\frac{z}{\\left(z+\\sqrt{3}i\\right)\\left(z-\\sqrt{3}i\\right)\\left(z-\\frac{i}{\\sqrt 3}\\right)}}{z+\\frac{i}{\\sqrt 3}} \\, dz \\right ] \\\\\n= {} & -\\frac{4i}{3} \\left[ 2\\pi i \\left[\\frac{z}{\\left(z+\\sqrt{3}i\\right)\\left(z-\\sqrt{3}i\\right)\\left(z+\\frac{i}{\\sqrt 3}\\right)}\\right]_{z=\\frac{i}{\\sqrt 3}} + 2\\pi i \\left[\\frac{z}{\\left(z+\\sqrt{3}i\\right)\\left(z-\\sqrt{3}i\\right)\\left(z-\\frac{i}{\\sqrt 3}\\right)} \\right]_{z=-\\frac{i}{\\sqrt 3}}\\right] \\\\\n= {} & \\frac{8\\pi}{3} \\left[\\frac{\\frac{i}{\\sqrt 3}}{\\left(\\frac{i}{\\sqrt 3}+\\sqrt{3}i\\right)\\left(\\frac{i}{\\sqrt 3}-\\sqrt{3}i\\right)\\left(\\frac{i}{\\sqrt 3}+\\frac{i}{\\sqrt 3}\\right)} + \\frac{-\\frac{i}{\\sqrt 3}}{\\left(-\\frac{i}{\\sqrt 3}+\\sqrt{3}i\\right)\\left(-\\frac{i}{\\sqrt 3}-\\sqrt{3}i\\right)\\left(-\\frac{i}{\\sqrt 3}-\\frac{i}{\\sqrt 3}\\right)} \\right] \\\\\n= {} & \\frac{8\\pi}{3} \\left[\\frac{\\frac{i}{\\sqrt 3}}{\\left(\\frac{4}{\\sqrt 3}i\\right)\\left(-\\frac{2}{i\\sqrt{3}}\\right)\\left(\\frac{2}{\\sqrt{3}i}\\right)}+\\frac{-\\frac{i}{\\sqrt 3}}{\\left(\\frac{2}{\\sqrt 3}i\\right)\\left(-\\frac{4}{\\sqrt 3}i\\right)\\left(-\\frac{2}{\\sqrt 3}i\\right)}\\right] \\\\\n= {} & \\frac{8\\pi}{3}\\left[\\frac{\\frac{i}{\\sqrt 3}}{i\\left(\\frac{4}{\\sqrt 3}\\right)\\left(\\frac{2}{\\sqrt 3}\\right)\\left(\\frac{2}{\\sqrt 3}\\right)}+\\frac{-\\frac{i}{\\sqrt 3}}{-i\\left(\\frac{2}{\\sqrt 3}\\right)\\left(\\frac{4}{\\sqrt 3}\\right)\\left(\\frac{2}{\\sqrt 3}\\right)}\\right] \\\\\n= {} & \\frac{8\\pi}{3}\\left[\\frac{\\frac{1}{\\sqrt 3}}{\\left(\\frac{4}{\\sqrt 3}\\right)\\left(\\frac{2}{\\sqrt 3}\\right)\\left(\\frac{2}{\\sqrt 3}\\right)}+\\frac{\\frac{1}{\\sqrt 3}}{\\left(\\frac{2}{\\sqrt 3}\\right)\\left(\\frac{4}{\\sqrt 3}\\right)\\left(\\frac{2}{\\sqrt 3}\\right)}\\right] \\\\\n= {} & \\frac{8\\pi}{3}\\left[\\frac{\\frac{1}{\\sqrt 3}}{\\frac{16}{3\\sqrt{3}}}+\\frac{\\frac{1}{\\sqrt 3}}{\\frac{16}{3\\sqrt{3}}} \\right] \\\\\n= {} & \\frac{8\\pi}{3}\\left[\\frac{3}{16} + \\frac{3}{16} \\right] \\\\\n= {} & \\pi.\n\\end{align}"
},
{
"math_id": 59,
"text": " \\int_0^{2\\pi} \\frac{P\\big(\\sin(t),\\sin(2t),\\ldots,\\cos(t),\\cos(2t),\\ldots\\big)}{Q\\big(\\sin(t),\\sin(2t),\\ldots,\\cos(t),\\cos(2t),\\ldots\\big)}\\, dt"
},
{
"math_id": 60,
"text": " \\frac{1}{iz} \\,dz = dt."
},
{
"math_id": 61,
"text": " \\sin(k t) = \\frac{e^{i k t} - e^{- i k t}}{2 i} = \\frac{z^k - z^{-k}}{2i}"
},
{
"math_id": 62,
"text": " \\cos(k t) = \\frac{e^{i k t} + e^{- i k t}}{2} = \\frac{z^k + z^{-k}}{2}"
},
{
"math_id": 63,
"text": " \\oint_{|z|=1} f(z) \\frac{1}{iz}\\, dz "
},
{
"math_id": 64,
"text": " I = \\int_0^\\frac{\\pi}{2} \\frac{1}{1 + (\\sin t)^2}\\, dt,"
},
{
"math_id": 65,
"text": " I = \\frac14 \\int_0^{2\\pi} \\frac{1}{1 + (\\sin t)^2} \\,dt."
},
{
"math_id": 66,
"text": " \\frac{1}{4} \\oint_{|z|=1} \\frac{4 i z}{z^4 - 6z^2 + 1}\\, dz = \\oint_{|z|=1} \\frac{i z}{z^4 - 6z^2 + 1}\\, dz."
},
{
"math_id": 67,
"text": " I = 2 \\pi i \\; 2 \\left( - \\frac{\\sqrt{2}}{16} i \\right) = \\pi \\frac{\\sqrt{2}}{4}."
},
{
"math_id": 68,
"text": "\\int_0^\\infty \\frac{\\sqrt x}{x^2+6x+8}\\,dx."
},
{
"math_id": 69,
"text": "\\int_C \\frac{\\sqrt z}{z^2+6z+8}\\,dz=I."
},
{
"math_id": 70,
"text": "\\int_C = \\int_\\varepsilon^R + \\int_\\Gamma + \\int_R^\\varepsilon + \\int_\\gamma."
},
{
"math_id": 71,
"text": "\\begin{align}\n\\int_R^\\varepsilon \\frac{\\sqrt z}{z^2+6z+8}\\,dz&=\\int_R^\\varepsilon \\frac{e^{\\frac12 \\operatorname{Log} z}}{z^2+6z+8}\\,dz \\\\[6pt]\n&=\\int_R^\\varepsilon \\frac{e^{\\frac12(\\log|z|+i \\arg{z})}}{z^2+6z+8}\\,dz \\\\[6pt]\n& = \\int_R^\\varepsilon \\frac{ e^{\\frac12\\log|z|}e^{\\frac12(2\\pi i)}}{z^2+6z+8}\\,dz\\\\[6pt]\n&=\\int_R^\\varepsilon \\frac{ e^{\\frac12\\log|z|}e^{\\pi i}}{z^2+6z+8}\\,dz \\\\[6pt]\n& = \\int_R^\\varepsilon \\frac{-\\sqrt z}{z^2+6z+8}\\,dz\\\\[6pt]\n&=\\int_\\varepsilon^R \\frac{\\sqrt z}{z^2+6z+8}\\,dz.\n\\end{align}"
},
{
"math_id": 72,
"text": "\\int_C \\frac{\\sqrt z}{z^2+6z+8}\\,dz=2\\int_0^\\infty \\frac{\\sqrt x}{x^2+6x+8}\\,dx."
},
{
"math_id": 73,
"text": "\\pi i \\left(\\frac{i}{\\sqrt 2}-i\\right)=\\int_0^\\infty \\frac{\\sqrt x}{x^2+6x+8}\\,dx = \\pi\\left(1-\\frac{1}{\\sqrt 2}\\right).\\quad\\square"
},
{
"math_id": 74,
"text": "\\int_0^\\infty \\frac{\\log x}{\\left(1+x^2\\right)^2} \\, dx"
},
{
"math_id": 75,
"text": "f(z) = \\left (\\frac{\\log z}{1+z^2} \\right )^2"
},
{
"math_id": 76,
"text": "\\begin{align}\n\\left( \\int_R + \\int_M + \\int_N + \\int_r \\right) f(z) \\, dz\n= &\\ 2 \\pi i \\big( \\operatorname{Res}_{z=i} f(z) + \\operatorname{Res}_{z=-i} f(z) \\big) \\\\\n= &\\ 2 \\pi i \\left( - \\frac{\\pi}{4} + \\frac{1}{16} i \\pi^2 - \\frac{\\pi}{4} - \\frac{1}{16} i \\pi^2 \\right) \\\\\n= &\\ - i \\pi^2. \\end{align}"
},
{
"math_id": 77,
"text": "\\left| \\int_R f(z) \\, dz \\right| \\le 2 \\pi R \\frac{(\\log R)^2 + \\pi^2}{\\left(R^2-1\\right)^2} \\to 0."
},
{
"math_id": 78,
"text": "\\begin{align} -i \\pi^2 &= \\left( \\int_R + \\int_M + \\int_N + \\int_r \\right) f(z) \\, dz \\\\[6pt]\n&= \\left( \\int_M + \\int_N \\right) f(z)\\, dz && \\int_R, \\int_r \\mbox{ vanish} \\\\[6pt]\n&=-\\int_\\infty^0 \\left (\\frac{\\log(-x + i\\varepsilon)}{1+(-x + i\\varepsilon)^2} \\right )^2\\, dx - \\int_0^\\infty \\left (\\frac{\\log(-x - i\\varepsilon)}{1+(-x - i\\varepsilon)^2}\\right)^2 \\, dx \\\\[6pt]\n&= \\int_0^\\infty \\left (\\frac{\\log(-x + i\\varepsilon)}{1+(-x + i\\varepsilon)^2} \\right )^2 \\, dx - \\int_0^\\infty \\left (\\frac{\\log(-x - i\\varepsilon)}{1+(-x - i\\varepsilon)^2} \\right )^2 \\, dx \\\\[6pt]\n&= \\int_0^\\infty \\left (\\frac{\\log x + i\\pi}{1+x^2} \\right )^2 \\, dx - \\int_0^\\infty \\left (\\frac{\\log x - i\\pi}{1+x^2} \\right )^2 \\, dx && \\varepsilon \\to 0 \\\\\n&= \\int_0^\\infty \\frac{(\\log x + i\\pi)^2 - (\\log x - i\\pi)^2}{\\left(1+x^2\\right)^2} \\, dx \\\\[6pt]\n&= \\int_0^\\infty \\frac{4 \\pi i \\log x}{\\left(1+x^2\\right)^2} \\, dx \\\\[6pt]\n&= 4 \\pi i \\int_0^\\infty \\frac{\\log x}{\\left(1+x^2\\right)^2} \\, dx \\end{align}"
},
{
"math_id": 79,
"text": "\\int_0^\\infty \\frac{\\log x}{\\left(1+x^2\\right)^2} \\, dx = - \\frac{\\pi}{4}."
},
{
"math_id": 80,
"text": "I = \\int_0^3 \\frac{x^\\frac34 (3-x)^\\frac14}{5-x}\\,dx."
},
{
"math_id": 81,
"text": "f(z) = z^\\frac34 (3-z)^\\frac14."
},
{
"math_id": 82,
"text": "z^\\frac34 = \\exp \\left (\\frac34 \\log z \\right ) \\quad \\mbox{where } -\\pi \\le \\arg z < \\pi "
},
{
"math_id": 83,
"text": "(3-z)^\\frac14 = \\exp \\left (\\frac14 \\log(3-z) \\right ) \\quad \\mbox{where } 0 \\le \\arg(3-z) < 2\\pi. "
},
{
"math_id": 84,
"text": " r^\\frac34 e^{\\frac34 \\pi i} (3+r)^\\frac14 e^{\\frac24 \\pi i} = r^\\frac34 (3+r)^\\frac14 e^{\\frac54 \\pi i}."
},
{
"math_id": 85,
"text": " r^\\frac34 e^{-\\frac34 \\pi i} (3+r)^\\frac14 e^{\\frac04 \\pi i} = r^\\frac34 (3+r)^\\frac14 e^{-\\frac34 \\pi i}."
},
{
"math_id": 86,
"text": "e^{-\\frac34 \\pi i} = e^{\\frac54 \\pi i},"
},
{
"math_id": 87,
"text": "r^\\frac34 e^{\\frac04 \\pi i} (3-r)^\\frac14 e^{\\frac24 \\pi i} = i r^\\frac34 (3-r)^\\frac14"
},
{
"math_id": 88,
"text": "r^\\frac34 e^{\\frac04 \\pi i} (3-r)^\\frac14 e^{\\frac04 \\pi i} = r^\\frac34 (3-r)^\\frac14."
},
{
"math_id": 89,
"text": "\\left| \\int_{C_\\mathrm{L}} \\frac{f(z)}{5-z} dz \\right| \\le 2 \\pi \\rho \\frac{\\rho^\\frac34 3.001^\\frac14}{4.999} \\in \\mathcal{O} \\left( \\rho^\\frac74 \\right) \\to 0."
},
{
"math_id": 90,
"text": "\\left| \\int_{C_\\mathrm{R}} \\frac{f(z)}{5-z} dz \\right| \\le 2 \\pi \\rho \\frac{3.001^\\frac34 \\rho^\\frac14}{1.999} \\in \\mathcal{O} \\left( \\rho^\\frac54 \\right) \\to 0."
},
{
"math_id": 91,
"text": "(-i + 1) I = -2\\pi i \\left( \\operatorname{Res}_{z=5} \\frac{f(z)}{5-z} + \\operatorname{Res}_{z=\\infty} \\frac{f(z)}{5-z} \\right)."
},
{
"math_id": 92,
"text": "\\operatorname{Res}_{z=5} \\frac{f(z)}{5-z} = - 5^\\frac34 e^{\\frac14 \\log(-2)}."
},
{
"math_id": 93,
"text": "-5^\\frac34 e^{\\frac14(\\log 2 + \\pi i)} = -e^{\\frac14 \\pi i} 5^\\frac34 2^\\frac14."
},
{
"math_id": 94,
"text": "\\operatorname{Res}_{z=\\infty} h(z) = \\operatorname{Res}_{z=0} \\left(- \\frac{1}{z^2} h\\left(\\frac{1}{z}\\right)\\right)."
},
{
"math_id": 95,
"text": "\\frac{1}{5-\\frac{1}{z}} = -z \\left(1 + 5z + 5^2 z^2 + 5^3 z^3 + \\cdots\\right)"
},
{
"math_id": 96,
"text": "\\left(\\frac{1}{z^3}\\left (3-\\frac{1}{z} \\right )\\right)^\\frac14 = \\frac{1}{z} (3z-1)^\\frac14 = \\frac{1}{z}e^{\\frac14 \\pi i} (1-3z)^\\frac14, "
},
{
"math_id": 97,
"text": "\\frac{1}{z} e^{\\frac14 \\pi i} \\left( 1 - {1/4 \\choose 1} 3z + {1/4 \\choose 2} 3^2 z^2 - {1/4 \\choose 3} 3^3 z^3 + \\cdots \\right). "
},
{
"math_id": 98,
"text": "\\operatorname{Res}_{z=\\infty} \\frac{f(z)}{5-z} = e^{\\frac14 \\pi i} \\left (5 - \\frac34 \\right ) = e^{\\frac14 \\pi i}\\frac{17}{4}."
},
{
"math_id": 99,
"text": " I = 2 \\pi i \\frac{e^{\\frac14 \\pi i}}{-1+i} \\left(\\frac{17}{4} - 5^\\frac34 2^\\frac14 \\right) = 2 \\pi 2^{-\\frac12} \\left(\\frac{17}{4} - 5^\\frac34 2^\\frac14 \\right)"
},
{
"math_id": 100,
"text": "I = \\frac{\\pi}{2\\sqrt 2} \\left(17 - 5^\\frac34 2^\\frac94 \\right) = \\frac{\\pi}{2\\sqrt 2} \\left(17 - 40^\\frac34 \\right)."
},
{
"math_id": 101,
"text": "\\oint_C \\frac{e^z}{z^3}\\,dz"
},
{
"math_id": 102,
"text": "\\oint_{C} f(z)=2\\pi i\\cdot \\sum\\operatorname{Res}(f,a_k),"
},
{
"math_id": 103,
"text": "\\operatorname{Res}"
},
{
"math_id": 104,
"text": "f(z)"
},
{
"math_id": 105,
"text": "a_k"
},
{
"math_id": 106,
"text": "C"
},
{
"math_id": 107,
"text": "0"
},
{
"math_id": 108,
"text": "\\tfrac{1}{2}"
},
{
"math_id": 109,
"text": "\\begin{align} \n\\oint_C f(z) &=\\oint_C \\frac{e^z}{z^3}\\\\ \n&=2\\pi i \\cdot \\operatorname{Res}_{z=0} f(z)\\\\\n&=2\\pi i\\operatorname{Res}_{z=0} \\frac{e^z}{z^3}\\\\\n&=2\\pi i \\cdot \\frac{1}{2}\\\\\n&=\\pi i\n\\end{align}"
},
{
"math_id": 110,
"text": "\\oint_C \\frac{e^z}{z^3} dz = \\pi i."
},
{
"math_id": 111,
"text": "\\nabla"
},
{
"math_id": 112,
"text": "\\operatorname{Div}"
},
{
"math_id": 113,
"text": "\\mathbf{F}"
},
{
"math_id": 114,
"text": "\\underbrace{\\int \\cdots \\int_U}_n \\operatorname{div}(\\mathbf{F}) \\, dV = \\underbrace{ \\oint \\cdots \\oint_{\\partial U} }_{n-1} \\mathbf{F} \\cdot \\mathbf{n} \\, dS"
},
{
"math_id": 115,
"text": "\\nabla\\cdot \\mathbf{F}"
},
{
"math_id": 116,
"text": "\\nabla \\cdot \\mathbf{F}"
},
{
"math_id": 117,
"text": "\\operatorname{div} (\\mathbf{F})"
},
{
"math_id": 118,
"text": "\\begin{align}\n\\operatorname{div}(\\mathbf{F}) &=\\nabla\\cdot\\mathbf{F}\\\\\n&= \\left(\\frac{\\partial}{\\partial u}, \\frac{\\partial}{\\partial x}, \\frac{\\partial}{\\partial y}, \\frac{\\partial}{\\partial z}, \\dots \\right) \\cdot (F_u,F_x,F_y,F_z,\\dots)\\\\\n&=\\left(\\frac{\\partial F_u}{\\partial u} + \\frac{\\partial F_x}{\\partial x} + \\frac{\\partial F_y}{\\partial y} + \\frac{\\partial F_z}{\\partial z} + \\cdots \\right)\n\\end{align}"
},
{
"math_id": 119,
"text": "\\mathbf{F}=\\sin(2x)\\mathbf{e}_x+\\sin(2y)\\mathbf{e}_y+\\sin(2z)\\mathbf{e}_z"
},
{
"math_id": 120,
"text": "{0\\leq x\\leq 1} \\quad {0\\leq y\\leq 3} \\quad {-1\\leq z\\leq 4}"
},
{
"math_id": 121,
"text": "\\nabla\\cdot\\mathbf{F}"
},
{
"math_id": 122,
"text": "\\begin{align}\n&=\\iiint_V \\left(\\frac{\\partial F_x}{\\partial x} + \\frac{\\partial F_y}{\\partial y} + \\frac{\\partial F_z}{\\partial z}\\right) dV\\\\[6pt]\n&=\\iiint_V \\left(\\frac{\\partial \\sin(2x)}{\\partial x} + \\frac{\\partial \\sin(2y)}{\\partial y} + \\frac{\\partial \\sin(2z)}{\\partial z}\\right) dV\\\\[6pt]\n&=\\iiint_V 2 \\left(\\cos(2x) + \\cos(2y) + \\cos(2z)\\right) dV \\\\[6pt]\n&=\\int_{0}^{1}\\int_{0}^{3}\\int_{-1}^{4} 2(\\cos(2x)+\\cos(2y)+\\cos(2z))\\,dx\\,dy\\,dz \\\\[6pt]\n&=\\int_{0}^{1}\\int_{0}^{3}(10\\cos(2y)+\\sin(8)+\\sin(2)+10\\cos(z))\\,dy\\,dz\\\\[6pt]\n&=\\int_{0}^{1}(30\\cos(2z)+3\\sin(2)+3\\sin(8)+5\\sin(6))\\,dz\\\\[6pt]\n&=18\\sin(2)+3\\sin(8)+5\\sin(6)\n\\end{align}"
},
{
"math_id": 123,
"text": "\\mathbf{F}=u^4\\mathbf{e}_u+x^5\\mathbf{e}_x+y^6\\mathbf{e}_y+z^{-3}\\mathbf{e}_z"
},
{
"math_id": 124,
"text": "{0\\leq x\\leq 1} \\quad {-10\\leq y\\leq 2\\pi} \\quad {4\\leq z\\leq 5} \\quad {-1\\leq u\\leq 3}"
},
{
"math_id": 125,
"text": "dV = dx \\, dy \\, dz \\, du"
},
{
"math_id": 126,
"text": "\\begin{align}\n&=\\iiiint_V \\left(\\frac{\\partial F_u}{\\partial u} + \\frac{\\partial F_x}{\\partial x} + \\frac{\\partial F_y}{\\partial y} + \\frac{\\partial F_z}{\\partial z}\\right)\\,dV\\\\[6pt]\n&=\\iiiint_V \\left(\\frac{\\partial u^4}{\\partial u} + \\frac{\\partial x^5}{\\partial x} + \\frac{\\partial y^6}{\\partial y} + \\frac{\\partial z^{-3}}{\\partial z}\\right)\\,dV\\\\[6pt]\n&=\\iiiint_V {{\\frac{4 u^3 z^4 + 5 x^4 z^4 + 5 y^4 z^4 - 3}{z^4}}}\\,dV \\\\[6pt]\n&= \\iiiint_V {{\\frac{4 u^3 z^4 + 5 x^4 z^4 + 5 y^4 z^4 - 3}{z^4}}}\\,dV \\\\[6pt]\n&=\\int_{0}^{1}\\int_{-10}^{2\\pi}\\int_{4}^{5}\\int_{-1}^{3} \\frac{4 u^3 z^4 + 5 x^4 z^4 + 5 y^4 z^4 - 3}{z^4}\\,dV\\\\[6pt]\n&=\\int_{0}^{1}\\int_{-10}^{2\\pi}\\int_{4}^{5}\\left(\\frac{4(3u^4z^3+3y^6+91z^3+3)}{3z^3}\\right)\\,dy\\,dz\\,du\\\\[6pt]\n&=\\int_{0}^{1}\\int_{-10}^{2\\pi}\\left(4u^4+\\frac{743440}{21}+\\frac{4}{z^3}\\right)\\,dz\\,du\\\\[6pt]\n&=\\int_{0}^{1} \\left(-\\frac{1}{2\\pi^2}+\\frac{1486880\\pi}{21}+8\\pi u^4+40 u^4+\\frac{371720021}{1050}\\right)\\,du\\\\[6pt]\n&=\\frac{371728421}{1050}+\\frac{14869136\\pi^3-105}{210\\pi^2}\\\\[6pt]\n&\\approx{576468.77}\n\\end{align}"
},
{
"math_id": 127,
"text": "n=4"
},
{
"math_id": 128,
"text": "n>4"
},
{
"math_id": 129,
"text": "\\zeta(s) = \\sum_{n=1}^\\infty\\frac{1}{n^s},"
},
{
"math_id": 130,
"text": "\\zeta(s) = - \\frac{\\Gamma(1 - s)}{2 \\pi i} \\int_H\\frac{(-t)^{s-1}}{e^t - 1} dt ,"
}
]
| https://en.wikipedia.org/wiki?curid=704359 |
7043631 | Generalized inverse | Algebraic element satisfying some of the criteria of an inverse
In mathematics, and in particular, algebra, a generalized inverse (or, g-inverse) of an element "x" is an element "y" that has some properties of an inverse element but not necessarily all of them. The purpose of constructing a generalized inverse of a matrix is to obtain a matrix that can serve as an inverse in some sense for a wider class of matrices than invertible matrices. Generalized inverses can be defined in any mathematical structure that involves associative multiplication, that is, in a semigroup. This article describes generalized inverses of a matrix formula_0.
A matrix formula_1 is a generalized inverse of a matrix formula_2 if formula_3 A generalized inverse exists for an arbitrary matrix, and when a matrix has a regular inverse, this inverse is its unique generalized inverse.
Motivation.
Consider the linear system
formula_4
where formula_0 is an formula_5 matrix and formula_6 the column space of formula_0. If formula_7 and formula_0 is nonsingular then formula_8 will be the solution of the system. Note that, if formula_0 is nonsingular, then
formula_9
Now suppose formula_0 is rectangular (formula_10), or square and singular. Then we need a right candidate formula_11 of order formula_12 such that for all formula_6
formula_13
That is, formula_14 is a solution of the linear system formula_4.
Equivalently, we need a matrix formula_11 of order formula_15 such that
formula_16
Hence we can define the generalized inverse as follows: Given an formula_12 matrix formula_0, an formula_5 matrix formula_11 is said to be a generalized inverse of formula_0 if formula_16 The matrix formula_17 has been termed a regular inverse of formula_0 by some authors.
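For a concrete illustration (a sketch assuming NumPy is available), the Moore–Penrose pseudoinverse discussed below is one particular generalized inverse, and the defining identity can be checked numerically:

```python
import numpy as np

A = np.array([[1.0, 2.0, 3.0],
              [4.0, 5.0, 6.0]])   # a rectangular matrix with no regular inverse
G = np.linalg.pinv(A)             # the Moore-Penrose pseudoinverse, one particular g-inverse

print(np.allclose(A @ G @ A, A))  # True: A G A = A
```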
Types.
Important types of generalized inverse include the one-sided inverse (right inverse or left inverse), the Drazin inverse, and the Moore–Penrose inverse. For a matrix formula_0 of order formula_5: if formula_18, then there is an formula_12 right inverse formula_19 such that formula_20, where formula_21 is the formula_22 identity matrix; if formula_23, then there is an formula_12 left inverse formula_24 such that formula_25, where formula_26 is the formula_27 identity matrix.
Some generalized inverses are defined and classified based on the Penrose conditions:
(1) formula_28
(2) formula_29
(3) formula_30
(4) formula_31
where formula_32 denotes conjugate transpose. If formula_33 satisfies the first condition, then it is a generalized inverse of formula_0. If it satisfies the first two conditions, then it is a reflexive generalized inverse of formula_0. If it satisfies all four conditions, then it is the pseudoinverse of formula_0, which is denoted by formula_34 and also known as the Moore–Penrose inverse, after the pioneering works by E. H. Moore and Roger Penrose. It is convenient to define an formula_35-inverse of formula_0 as an inverse that satisfies the subset formula_36 of the Penrose conditions listed above. Relations, such as formula_37, can be established between these different classes of formula_35-inverses.
When formula_0 is non-singular, any generalized inverse satisfies formula_38 and is therefore unique. For a singular formula_0, some generalized inverses, such as the Drazin inverse and the Moore–Penrose inverse, are unique, while others are not necessarily uniquely defined.
Examples.
Reflexive generalized inverse.
Let
formula_39
Since formula_40, formula_41 is singular and has no regular inverse. However, formula_41 and formula_42 satisfy Penrose conditions (1) and (2), but not (3) or (4). Hence, formula_42 is a reflexive generalized inverse of formula_41.
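These claims can be checked numerically; the sketch below assumes NumPy is available and uses the matrices given above.

```python
import numpy as np

A = np.array([[1.0, 2.0, 3.0],
              [4.0, 5.0, 6.0],
              [7.0, 8.0, 9.0]])
G = np.array([[-5/3,  2/3, 0.0],
              [ 4/3, -1/3, 0.0],
              [ 0.0,  0.0, 0.0]])

print(np.allclose(A @ G @ A, A))        # True:  Penrose condition (1)
print(np.allclose(G @ A @ G, G))        # True:  Penrose condition (2)
print(np.allclose((A @ G).T, A @ G))    # False: condition (3) fails, so G is not the pseudoinverse
```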
One-sided inverse.
Let
formula_43
Since formula_41 is not square, formula_41 has no regular inverse. However, formula_44 is a right inverse of formula_41. The matrix formula_41 has no left inverse.
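A numerical check (assuming NumPy is available) confirms that this matrix is a right inverse and that it coincides with the standard construction A⊤(AA⊤)−1:

```python
import numpy as np

A = np.array([[1.0, 2.0, 3.0],
              [4.0, 5.0, 6.0]])
A_R = A.T @ np.linalg.inv(A @ A.T)     # standard right-inverse construction

print(np.allclose(A @ A_R, np.eye(2)))                    # True: A_R is a right inverse of A
print(np.allclose(A_R, np.array([[-17,  8],
                                 [ -2,  2],
                                 [ 13, -4]]) / 18))       # True: matches the matrix given above
```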
Inverse of other semigroups (or rings).
The element "b" is a generalized inverse of an element "a" if and only if formula_45, in any semigroup (or ring, since the multiplication function in any ring is a semigroup).
The generalized inverses of the element 3 in the ring formula_46 are 3, 7, and 11, since in the ring formula_46:
formula_47
formula_48
formula_49
The generalized inverses of the element 4 in the ring formula_46 are 1, 4, 7, and 10, since in the ring formula_46:
formula_50
formula_51
formula_52
formula_53
If an element "a" in a semigroup (or ring) has an inverse, the inverse must be the only generalized inverse of this element, like the elements 1, 5, 7, and 11 in the ring formula_46.
In the ring formula_46, any element is a generalized inverse of 0; however, 2 has no generalized inverse, since there is no "b" in formula_46 such that formula_54.
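These statements are easy to confirm with a short Python exhaustive search:

```python
# Brute-force search for generalized inverses b of a in the ring Z/12Z, i.e. a*b*a = a (mod 12).
def g_inverses(a, n=12):
    return [b for b in range(n) if (a * b * a) % n == a % n]

print(g_inverses(3))   # [3, 7, 11]
print(g_inverses(4))   # [1, 4, 7, 10]
print(g_inverses(2))   # [] -- 2 has no generalized inverse in Z/12Z
```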
Construction.
The following characterizations are easy to verify: a right inverse is given by formula_55 (when formula_18 holds), and a left inverse by formula_56 (when formula_23 holds). If formula_57 is a rank factorization, then formula_58 is a g-inverse of formula_0, where formula_59 is a right inverse of formula_60 and formula_61 is a left inverse of formula_62. If formula_63 for any non-singular matrices formula_64 and formula_65, then formula_66 is a generalized inverse of formula_0 for arbitrary formula_67 and formula_68. Let formula_0 be of rank formula_69; without loss of generality, write formula_70 where formula_71 is the non-singular submatrix of formula_0. Then formula_72 is a generalized inverse of formula_0 if and only if formula_73.
Uses.
Any generalized inverse can be used to determine whether a system of linear equations has any solutions, and if so to give all of them. If any solutions exist for the "n" × "m" linear system
formula_74,
with vector formula_75 of unknowns and vector formula_76 of constants, all solutions are given by
formula_77,
parametric on the arbitrary vector formula_78, where formula_33 is any generalized inverse of formula_0. Solutions exist if and only if formula_79 is a solution, that is, if and only if formula_80. If "A" has full column rank, the bracketed expression in this equation is the zero matrix and so the solution is unique.
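As an illustration (a sketch assuming NumPy is available, with the pseudoinverse standing in for an arbitrary generalized inverse), the solution formula can be checked on a consistent rank-deficient system:

```python
import numpy as np

rng = np.random.default_rng(0)
A = np.array([[1.0, 2.0, 3.0],
              [2.0, 4.0, 6.0]])     # rank 1, so solutions (when they exist) are not unique
b = np.array([6.0, 12.0])           # consistent right-hand side
G = np.linalg.pinv(A)               # any generalized inverse works; pinv is a convenient choice

print(np.allclose(A @ (G @ b), b))  # True, so the system is solvable
for _ in range(3):                  # several distinct solutions from different choices of w
    w = rng.standard_normal(3)
    x = G @ b + (np.eye(3) - G @ A) @ w
    print(np.allclose(A @ x, b))    # True each time
```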
Generalized inverses of matrices.
The generalized inverses of matrices can be characterized as follows. Let formula_2, and
formula_81
be its singular-value decomposition. Then for any generalized inverse formula_82, there exist matrices formula_83, formula_84, and formula_85 such that
formula_86
Conversely, any choice of formula_83, formula_84, and formula_85 yields a matrix of this form that is a generalized inverse of formula_0. The formula_87-inverses are exactly those for which formula_88, the formula_89-inverses are exactly those for which formula_90, and the formula_91-inverses are exactly those for which formula_92. In particular, the pseudoinverse is given by formula_93:
formula_94
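This block description can be verified numerically. The following sketch (assuming NumPy is available) builds a generalized inverse from the SVD of a rank-deficient matrix with randomly chosen blocks and checks the defining identity, recovering the pseudoinverse when the blocks are zero.

```python
import numpy as np

rng = np.random.default_rng(1)
A = np.array([[1.0, 2.0],
              [2.0, 4.0],
              [3.0, 6.0]])                     # 3 x 2, rank 1
U, s, Vt = np.linalg.svd(A)                    # full SVD: U is 3x3, Vt is 2x2
r = int(np.sum(s > 1e-12))                     # numerical rank, here r = 1

# Build A^g = V [[Sigma_1^{-1}, X], [Y, Z]] U^T for arbitrary blocks X, Y, Z.
X = rng.standard_normal((r, A.shape[0] - r))
Y = rng.standard_normal((A.shape[1] - r, r))
Z = rng.standard_normal((A.shape[1] - r, A.shape[0] - r))
middle = np.block([[np.diag(1.0 / s[:r]), X],
                   [Y,                    Z]])
G = Vt.T @ middle @ U.T

print(np.allclose(A @ G @ A, A))               # True for any X, Y, Z
print(np.allclose(np.linalg.pinv(A),           # X = Y = Z = 0 recovers the pseudoinverse
                  Vt.T @ np.block([[np.diag(1.0 / s[:r]), np.zeros_like(X)],
                                   [np.zeros_like(Y),     np.zeros_like(Z)]]) @ U.T))
```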
Transformation consistency properties.
In practical applications it is necessary to identify the class of matrix transformations that must be preserved by a generalized inverse. For example, the Moore–Penrose inverse, formula_95 satisfies the following definition of consistency with respect to transformations involving unitary matrices "U" and "V":
formula_96.
The Drazin inverse, formula_97 satisfies the following definition of consistency with respect to similarity transformations involving a nonsingular matrix "S":
formula_98.
The unit-consistent (UC) inverse, formula_99 satisfies the following definition of consistency with respect to transformations involving nonsingular diagonal matrices "D" and "E":
formula_100.
The fact that the Moore–Penrose inverse provides consistency with respect to rotations (which are orthonormal transformations) explains its widespread use in physics and other applications in which Euclidean distances must be preserved. The UC inverse, by contrast, is applicable when system behavior is expected to be invariant with respect to the choice of units on different state variables, e.g., miles versus kilometers.
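This unitary-consistency property is easy to check numerically (a sketch assuming NumPy is available, using real orthogonal matrices as a special case of unitary ones):

```python
import numpy as np

rng = np.random.default_rng(2)
A = rng.standard_normal((3, 5))
# Random orthogonal matrices obtained from QR factorizations.
U, _ = np.linalg.qr(rng.standard_normal((3, 3)))
V, _ = np.linalg.qr(rng.standard_normal((5, 5)))

lhs = np.linalg.pinv(U @ A @ V)
rhs = V.conj().T @ np.linalg.pinv(A) @ U.conj().T
print(np.allclose(lhs, rhs))   # True: the Moore-Penrose inverse is consistent with unitary transformations
```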
Citations.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "A"
},
{
"math_id": 1,
"text": "A^\\mathrm{g} \\in \\mathbb{R}^{n \\times m}"
},
{
"math_id": 2,
"text": "A \\in \\mathbb{R}^{m \\times n}"
},
{
"math_id": 3,
"text": " AA^\\mathrm{g}A = A."
},
{
"math_id": 4,
"text": "Ax = y"
},
{
"math_id": 5,
"text": "n \\times m"
},
{
"math_id": 6,
"text": "y \\in \\mathcal R(A),"
},
{
"math_id": 7,
"text": "n = m"
},
{
"math_id": 8,
"text": "x = A^{-1}y"
},
{
"math_id": 9,
"text": "AA^{-1}A = A."
},
{
"math_id": 10,
"text": "n \\neq m"
},
{
"math_id": 11,
"text": "G"
},
{
"math_id": 12,
"text": "m \\times n"
},
{
"math_id": 13,
"text": "AGy = y."
},
{
"math_id": 14,
"text": "x=Gy"
},
{
"math_id": 15,
"text": "m\\times n"
},
{
"math_id": 16,
"text": "AGA = A."
},
{
"math_id": 17,
"text": "A^{-1}"
},
{
"math_id": 18,
"text": " \\textrm{rank} (A) = n"
},
{
"math_id": 19,
"text": "A_{\\mathrm{R}}^{-1}"
},
{
"math_id": 20,
"text": " A A_{\\mathrm{R}}^{-1} = I_n "
},
{
"math_id": 21,
"text": "I_n"
},
{
"math_id": 22,
"text": "n \\times n"
},
{
"math_id": 23,
"text": " \\textrm{rank} (A) = m"
},
{
"math_id": 24,
"text": "A_{\\mathrm{L}}^{-1}"
},
{
"math_id": 25,
"text": "A_{\\mathrm{L}}^{-1} A = I_m "
},
{
"math_id": 26,
"text": "I_m"
},
{
"math_id": 27,
"text": "m \\times m"
},
{
"math_id": 28,
"text": " A A^\\mathrm{g} A = A "
},
{
"math_id": 29,
"text": " A^\\mathrm{g} A A^\\mathrm{g}= A^\\mathrm{g} "
},
{
"math_id": 30,
"text": " (A A^\\mathrm{g})^* = A A^\\mathrm{g} "
},
{
"math_id": 31,
"text": " (A^\\mathrm{g} A)^* = A^\\mathrm{g} A, "
},
{
"math_id": 32,
"text": "{}^*"
},
{
"math_id": 33,
"text": "A^\\mathrm{g}"
},
{
"math_id": 34,
"text": "A^+"
},
{
"math_id": 35,
"text": "I"
},
{
"math_id": 36,
"text": "I \\subset \\{1, 2, 3, 4\\}"
},
{
"math_id": 37,
"text": "A^{(1, 4)} A A^{(1, 3)} = A^+"
},
{
"math_id": 38,
"text": "A^\\mathrm{g} = A^{-1}"
},
{
"math_id": 39,
"text": "A = \\begin{bmatrix}\n 1 & 2 & 3 \\\\\n 4 & 5 & 6 \\\\\n 7 & 8 & 9\n \\end{bmatrix}, \\quad \n G = \\begin{bmatrix}\n -\\frac{5}{3} & \\frac{2}{3} & 0 \\\\[4pt]\n \\frac{4}{3} & -\\frac{1}{3} & 0 \\\\[4pt]\n 0 & 0 & 0\n \\end{bmatrix}.\n"
},
{
"math_id": 40,
"text": "\\det(A) = 0"
},
{
"math_id": 41,
"text": " A "
},
{
"math_id": 42,
"text": " G "
},
{
"math_id": 43,
"text": "A = \\begin{bmatrix}\n 1 & 2 & 3 \\\\\n 4 & 5 & 6\n \\end{bmatrix}, \\quad \n A_\\mathrm{R}^{-1} = \\begin{bmatrix}\n -\\frac{17}{18} & \\frac{8}{18} \\\\[4pt]\n -\\frac{2}{18} & \\frac{2}{18} \\\\[4pt]\n \\frac{13}{18} & -\\frac{4}{18}\n \\end{bmatrix}.\n"
},
{
"math_id": 44,
"text": " A_\\mathrm{R}^{-1} "
},
{
"math_id": 45,
"text": "a \\cdot b \\cdot a = a"
},
{
"math_id": 46,
"text": "\\mathbb{Z}/12\\mathbb{Z}"
},
{
"math_id": 47,
"text": "3 \\cdot 3 \\cdot 3 = 3"
},
{
"math_id": 48,
"text": "3 \\cdot 7 \\cdot 3 = 3"
},
{
"math_id": 49,
"text": "3 \\cdot 11 \\cdot 3 = 3"
},
{
"math_id": 50,
"text": "4 \\cdot 1 \\cdot 4 = 4"
},
{
"math_id": 51,
"text": "4 \\cdot 4 \\cdot 4 = 4"
},
{
"math_id": 52,
"text": "4 \\cdot 7 \\cdot 4 = 4"
},
{
"math_id": 53,
"text": "4 \\cdot 10 \\cdot 4 = 4"
},
{
"math_id": 54,
"text": "2 \\cdot b \\cdot 2 = 2"
},
{
"math_id": 55,
"text": "A_\\mathrm{R}^{-1} = A^{\\intercal} \\left( A A^{\\intercal} \\right)^{-1}"
},
{
"math_id": 56,
"text": "A_\\mathrm{L}^{-1} = \\left(A^{\\intercal} A \\right)^{-1} A^{\\intercal}"
},
{
"math_id": 57,
"text": "A = BC"
},
{
"math_id": 58,
"text": "G = C_\\mathrm{R}^{-1} B_\\mathrm{L}^{-1}"
},
{
"math_id": 59,
"text": "C_\\mathrm{R}^{-1}"
},
{
"math_id": 60,
"text": "C"
},
{
"math_id": 61,
"text": "B_\\mathrm{L}^{-1}"
},
{
"math_id": 62,
"text": "B"
},
{
"math_id": 63,
"text": "A = P \\begin{bmatrix}I_r & 0 \\\\ 0 & 0 \\end{bmatrix} Q"
},
{
"math_id": 64,
"text": "P"
},
{
"math_id": 65,
"text": "Q"
},
{
"math_id": 66,
"text": "G = Q^{-1} \\begin{bmatrix}I_r & U \\\\ W & V \\end{bmatrix} P^{-1}"
},
{
"math_id": 67,
"text": "U, V"
},
{
"math_id": 68,
"text": "W"
},
{
"math_id": 69,
"text": "r"
},
{
"math_id": 70,
"text": "A = \\begin{bmatrix}B & C\\\\ D & E\\end{bmatrix},"
},
{
"math_id": 71,
"text": "B_{r \\times r}"
},
{
"math_id": 72,
"text": "G = \\begin{bmatrix} B^{-1} & 0\\\\ 0 & 0 \\end{bmatrix}"
},
{
"math_id": 73,
"text": "E=DB^{-1}C"
},
{
"math_id": 74,
"text": "Ax = b"
},
{
"math_id": 75,
"text": "x"
},
{
"math_id": 76,
"text": "b"
},
{
"math_id": 77,
"text": "x = A^\\mathrm{g}b + \\left[I - A^\\mathrm{g}A\\right]w"
},
{
"math_id": 78,
"text": "w"
},
{
"math_id": 79,
"text": "A^\\mathrm{g}b"
},
{
"math_id": 80,
"text": "AA^\\mathrm{g}b = b"
},
{
"math_id": 81,
"text": "A = U \\begin{bmatrix} \\Sigma_1 & 0 \\\\ 0 & 0 \\end{bmatrix} V^\\operatorname{T}"
},
{
"math_id": 82,
"text": "A^g"
},
{
"math_id": 83,
"text": "X"
},
{
"math_id": 84,
"text": "Y"
},
{
"math_id": 85,
"text": "Z"
},
{
"math_id": 86,
"text": "A^g = V \\begin{bmatrix} \\Sigma_1^{-1} & X \\\\ Y & Z \\end{bmatrix} U^\\operatorname{T}."
},
{
"math_id": 87,
"text": "\\{1,2\\}"
},
{
"math_id": 88,
"text": "Z = Y \\Sigma_1 X"
},
{
"math_id": 89,
"text": "\\{1,3\\}"
},
{
"math_id": 90,
"text": "X = 0"
},
{
"math_id": 91,
"text": "\\{1,4\\}"
},
{
"math_id": 92,
"text": "Y = 0"
},
{
"math_id": 93,
"text": "X = Y = Z = 0"
},
{
"math_id": 94,
"text": "A^+ = V \\begin{bmatrix} \\Sigma_1^{-1} & 0 \\\\ 0 & 0 \\end{bmatrix} U^\\operatorname{T}."
},
{
"math_id": 95,
"text": "A^+,"
},
{
"math_id": 96,
"text": "(UAV)^+ = V^* A^+ U^*"
},
{
"math_id": 97,
"text": " A^\\mathrm{D}"
},
{
"math_id": 98,
"text": "\\left(SAS^{-1}\\right)^\\mathrm{D} = S A^\\mathrm{D} S^{-1}"
},
{
"math_id": 99,
"text": "A^\\mathrm{U},"
},
{
"math_id": 100,
"text": "(DAE)^\\mathrm{U} = E^{-1} A^\\mathrm{U} D^{-1}"
}
]
| https://en.wikipedia.org/wiki?curid=7043631 |
7044429 | Absolute irreducibility | In mathematics, a multivariate polynomial defined over the rational numbers is absolutely irreducible if it is irreducible over the complex field. For example, formula_0 is absolutely irreducible, but while formula_1 is irreducible over the integers and the reals, it is reducible over the complex numbers as formula_2 and thus not absolutely irreducible.
More generally, a polynomial defined over a field "K" is absolutely irreducible if it is irreducible over every algebraic extension of "K", and an affine algebraic set defined by equations with coefficients in a field "K" is absolutely irreducible if it is not the union of two algebraic sets defined by equations in an algebraically closed extension of "K". In other words, an absolutely irreducible algebraic set is a synonym of an algebraic variety, which emphasizes that the coefficients of the defining equations may not belong to an algebraically closed field.
Absolutely irreducible is also applied, with the same meaning, to linear representations of algebraic groups.
In all cases, being absolutely irreducible is the same as being irreducible over the algebraic closure of the ground field.
Examples.
The real algebraic variety defined by the equation
formula_3
is absolutely irreducible. It is the ordinary circle over the reals and remains an irreducible conic section over the field of complex numbers. Absolute irreducibility more generally holds over any field not of characteristic two. In characteristic two, the equation is equivalent to ("x" + "y" −1)2 = 0. Hence it defines the double line "x" + "y" =1, which is a non-reduced scheme.
By contrast, the real algebraic variety defined by the equation
formula_4
is not absolutely irreducible. Indeed, the left hand side can be factored as
formula_5 where formula_6 is a square root of −1.
Therefore, this algebraic variety consists of two lines intersecting at the origin and is not absolutely irreducible. This holds either already over the ground field, if −1 is a square, or over the quadratic extension obtained by adjoining "i".
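For illustration, the factorizations above can be reproduced with a computer algebra system; the sketch below assumes SymPy is available and that its `extension` option behaves as documented for factoring over the field obtained by adjoining "i".

```python
import sympy as sp

x, y = sp.symbols('x y')

# x**2 + y**2 - 1 remains irreducible even after adjoining i = sqrt(-1) ...
print(sp.factor(x**2 + y**2 - 1, extension=sp.I))   # printed unfactored
# ... whereas x**2 + y**2 splits into two linear factors over Q(i):
print(sp.factor(x**2 + y**2, extension=sp.I))       # (x - I*y)*(x + I*y)
```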
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "x^2+y^2-1"
},
{
"math_id": 1,
"text": "x^2+y^2"
},
{
"math_id": 2,
"text": "x^2+y^2 = (x+iy)(x-iy),"
},
{
"math_id": 3,
"text": " x^2 + y^2 = 1 "
},
{
"math_id": 4,
"text": " x^2 + y^2 = 0 "
},
{
"math_id": 5,
"text": " x^2 + y^2 = (x+yi)(x-yi), "
},
{
"math_id": 6,
"text": "i"
}
]
| https://en.wikipedia.org/wiki?curid=7044429 |
70445407 | Log-t distribution | In probability theory, a log-t distribution or log-Student t distribution is a probability distribution of a random variable whose logarithm is distributed in accordance with a Student's t-distribution. If "X" is a random variable with a Student's t-distribution, then "Y" = exp("X") has a log-t distribution; likewise, if "Y" has a log-t distribution, then "X" = log("Y") has a Student's t-distribution.
Characterization.
The log-t distribution has the probability density function:
formula_2,
where formula_0 is the location parameter of the underlying (non-standardized) Student's t-distribution, formula_3 is the scale parameter of the underlying (non-standardized) Student's t-distribution, and formula_1 is the number of degrees of freedom of the underlying Student's t-distribution. If formula_4 and formula_5 then the underlying distribution is the standardized Student's t-distribution.
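As an illustration (a sketch assuming NumPy and SciPy are available), the displayed density can be transcribed directly and checked against the change of variables "Y" = exp("X") applied to SciPy's Student "t" density.

```python
import numpy as np
from scipy.special import gammaln
from scipy.stats import t

def log_t_pdf(x, nu, mu=0.0, sigma=1.0):
    """Direct transcription of the displayed log-t density."""
    z = (np.log(x) - mu) / sigma
    log_c = gammaln((nu + 1) / 2) - gammaln(nu / 2) - 0.5 * np.log(np.pi * nu)
    return np.exp(log_c) / (sigma * x) * (1.0 + z**2 / nu) ** (-(nu + 1) / 2)

def log_t_pdf_via_t(x, nu, mu=0.0, sigma=1.0):
    # Same density via the change of variables y = exp(x) applied to the Student t density.
    return t.pdf((np.log(x) - mu) / sigma, df=nu) / (sigma * x)

grid = np.linspace(0.1, 20.0, 200)
print(np.allclose(log_t_pdf(grid, 3, 0.5, 1.2),
                  log_t_pdf_via_t(grid, 3, 0.5, 1.2)))   # True
```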
If formula_6 then the distribution is a log-Cauchy distribution. As formula_1 approaches infinity, the distribution approaches a log-normal distribution. Unlike the log-normal distribution, however, which has finite moments of all orders, the log-t distribution with any finite number of degrees of freedom has no finite mean, variance, or higher moments; they are infinite or do not exist.
The log-t distribution is a special case of the generalized beta distribution of the second kind. The log-t distribution is an example of a compound probability distribution between the lognormal distribution and inverse gamma distribution whereby the variance parameter of the lognormal distribution is a random variable distributed according to an inverse gamma distribution.
Applications.
The log-t distribution has applications in finance. For example, the distribution of stock market returns often shows fatter tails than a normal distribution, and thus tends to fit a Student's t-distribution better than a normal distribution. While the Black-Scholes model based on the log-normal distribution is often used to price stock options, option pricing formulas based on the log-t distribution can be a preferable alternative if the returns have fat tails. The fact that the log-t distribution has infinite mean is a problem when using it to value options, but there are techniques to overcome that limitation, such as by truncating the probability density function at some arbitrary large value.
The log-t distribution also has applications in hydrology and in analyzing data on cancer remission.
Multivariate log-t distribution.
Analogous to the log-normal distribution, multivariate forms of the log-t distribution exist. In this case, the location parameter is replaced by a vector μ, and the scale parameter is replaced by a matrix Σ.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\hat{\\mu}"
},
{
"math_id": 1,
"text": "\\nu"
},
{
"math_id": 2,
"text": "p(x\\mid \\nu,\\hat{\\mu},\\hat{\\sigma}) = \\frac{\\Gamma(\\frac{\\nu + 1}{2})}{x\\Gamma(\\frac{\\nu}{2})\\sqrt{\\pi\\nu}\\hat\\sigma\\,} \\left(1+\\frac{1}{\\nu}\\left( \\frac{ \\ln x-\\hat{\\mu} } {\\hat{\\sigma} } \\right)^2\\right)^{-\\frac{\\nu+1}{2}} "
},
{
"math_id": 3,
"text": "\\hat{\\sigma}"
},
{
"math_id": 4,
"text": "\\hat{\\mu}=0"
},
{
"math_id": 5,
"text": "\\hat{\\sigma}=1"
},
{
"math_id": 6,
"text": "\\nu=1"
}
]
| https://en.wikipedia.org/wiki?curid=70445407 |
7044662 | Fiducial inference | One of a number of different types of statistical inference
Fiducial inference is one of a number of different types of statistical inference. These are rules, intended for general application, by which conclusions can be drawn from samples of data. In modern statistical practice, attempts to work with fiducial inference have fallen out of fashion in favour of frequentist inference, Bayesian inference and decision theory. However, fiducial inference is important in the history of statistics since its development led to the parallel development of concepts and tools in theoretical statistics that are widely used. Some current research in statistical methodology is either explicitly linked to fiducial inference or is closely connected to it.
Background.
The general approach of fiducial inference was proposed by Ronald Fisher. Here "fiducial" comes from the Latin for faith. Fiducial inference can be interpreted as an attempt to perform inverse probability without calling on prior probability distributions. Fiducial inference quickly attracted controversy and was never widely accepted. Indeed, counter-examples to the claims of Fisher for fiducial inference were soon published. These counter-examples cast doubt on the coherence of "fiducial inference" as a system of statistical inference or inductive logic. Other studies showed that, where the steps of fiducial inference are said to lead to "fiducial probabilities" (or "fiducial distributions"), these probabilities lack the property of additivity, and so cannot constitute a probability measure.
The concept of fiducial inference can be outlined by comparing its treatment of the problem of interval estimation in relation to other modes of statistical inference.
Fisher designed the fiducial method to meet perceived problems with the Bayesian approach, at a time when the frequentist approach had yet to be fully developed. Such problems related to the need to assign a prior distribution to the unknown values. The aim was to have a procedure, like the Bayesian method, whose results could still be given an inverse probability interpretation based on the actual data observed. The method proceeds by attempting to derive a "fiducial distribution", which is a measure of the degree of faith that can be put on any given value of the unknown parameter and is faithful to the data in the sense that the method uses all available information.
Unfortunately Fisher did not give a general definition of the fiducial method and he denied that the method could always be applied. His only examples were for a single parameter; different generalisations have been given when there are several parameters. A relatively complete presentation of the fiducial approach to inference is given by Quenouille (1958), while Williams (1959) describes the application of fiducial analysis to the calibration problem (also known as "inverse regression") in regression analysis. Further discussion of fiducial inference is given by Kendall & Stuart (1973).
The fiducial distribution.
Fisher required the existence of a sufficient statistic for the fiducial method to apply. Suppose there is a single sufficient statistic for a single parameter. That is, suppose that the conditional distribution of the data given the statistic does not depend on the value of the parameter. For example, suppose that "n" independent observations are uniformly distributed on the interval formula_0. The maximum, "X", of the "n" observations is a sufficient statistic for "ω". If only "X" is recorded and the values of the remaining observations are forgotten, these remaining observations are equally likely to have had any values in the interval formula_1. This statement does not depend on the value of "ω". Then "X" contains all the available information about "ω" and the other observations could have given no further information.
The cumulative distribution function of "X" is
formula_2
Probability statements about "X"/"ω" may be made. For example, given "α", a value of "a" can be chosen with 0 < "a" < 1 such that
formula_3
Thus
formula_4
Then Fisher might say that this statement may be inverted into the form
formula_5
In this latter statement, "ω" is now regarded as variable and "X" is fixed, whereas previously it was the other way round. This distribution of "ω" is the "fiducial distribution" which may be used to form fiducial intervals that represent degrees of belief.
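As an illustration of this inversion (a sketch assuming NumPy is available), the probability statement can be checked by simulation for the uniform example above:

```python
import numpy as np

# Monte Carlo check of P(omega < X / a) = alpha for the uniform(0, omega) example,
# where X is the sample maximum and a = (1 - alpha)**(1/n).
rng = np.random.default_rng(0)
omega, n, alpha = 2.5, 10, 0.95
a = (1.0 - alpha) ** (1.0 / n)

X = rng.uniform(0.0, omega, size=(200_000, n)).max(axis=1)
coverage = np.mean(omega < X / a)
print(coverage, alpha)   # the empirical frequency is close to 0.95
```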
The calculation is identical to the pivotal method for finding a confidence interval, but the interpretation is different. In fact older books use the terms "confidence interval" and "fiducial interval" interchangeably. Notice that the fiducial distribution is uniquely defined when a single sufficient statistic exists.
The pivotal method is based on a random variable that is a function of both the observations and the parameters but whose distribution does not depend on the parameter. Such random variables are called pivotal quantities. By using these, probability statements about the observations and parameters may be made in which the probabilities do not depend on the parameters and these may be inverted by solving for the parameters in much the same way as in the example above. However, this is only equivalent to the fiducial method if the pivotal quantity is uniquely defined based on a sufficient statistic.
A fiducial interval could be taken to be just a different name for a confidence interval, given the fiducial interpretation. But the definition might not then be unique. Fisher would have denied that this interpretation is correct: for him, the fiducial distribution had to be defined uniquely and it had to use all the information in the sample.
Status of the approach.
Fisher admitted that "fiducial inference" had problems. Fisher wrote to George A. Barnard that he was "not clear in the head" about one problem on fiducial inference, and, also writing to Barnard, Fisher complained that his theory seemed to have only "an asymptotic approach to intelligibility". Later Fisher confessed that "I don't understand yet what fiducial probability does. We shall have to live with it a long time before we know what it's doing for us. But it should not be ignored just because we don't yet have a clear interpretation".
Dennis Lindley showed that fiducial probability lacked additivity, and so was not a probability measure. Cox points out that the same argument applies to the so-called "confidence distribution" associated with confidence intervals, so the conclusion to be drawn from this is moot. Fisher sketched "proofs" of results using fiducial probability. Where the conclusions of Fisher's fiducial arguments are not false, many of them have also been shown to follow from Bayesian inference.
In 1978, J. G. Pederson wrote that "the fiducial argument has had very limited success and is now essentially dead". Davison wrote "A few subsequent attempts have been made to resurrect fiducialism, but it now seems largely of historical importance, particularly in view of its restricted range of applicability when set alongside models of current interest."
Fiducial inference is still being studied and its principles may be valuable for some scientific applications.
References.
<templatestyles src="Reflist/styles.css" />
Bibliography.
<templatestyles src="Refbegin/styles.css" /> | [
{
"math_id": 0,
"text": "[0,\\omega]"
},
{
"math_id": 1,
"text": "[0,X]"
},
{
"math_id": 2,
"text": "F(x) = P(X \\leq x) = P\\left(\\text{all observations} \\leq x\\right) = \\left(\\frac x\\omega\\right)^n ."
},
{
"math_id": 3,
"text": "P\\left(X > \\omega a \\right) = 1-a^n = \\alpha."
},
{
"math_id": 4,
"text": "a = (1-\\alpha)^{1/n} ."
},
{
"math_id": 5,
"text": "P\\left(\\omega < \\frac Xa \\right) = \\alpha ."
}
]
| https://en.wikipedia.org/wiki?curid=7044662 |
7045490 | Dual abelian variety | In mathematics, a dual abelian variety can be defined from an abelian variety "A", defined over a field "k". A 1-dimensional abelian variety is an elliptic curve, and every elliptic curve is isomorphic to its dual, but this fails for higher-dimensional abelian varieties, so the concept of dual becomes more interesting in higher dimensions.
Definition.
Let "A" be an abelian variety over a field "k". We define formula_0 to be the subgroup consisting of line bundles "L" such that formula_1, where formula_2 are the multiplication and projection maps formula_3 respectively. An element of formula_4 is called a degree 0 line bundle on "A".
To "A" one then associates a dual abelian variety "A"v (over the same field), which is the solution to the following moduli problem. A family of degree 0 line bundles parametrized by a "k"-variety "T" is defined to be a line bundle "L" on
"A"×"T" such that
Then there is a variety "A"v and a line bundle formula_6, called the Poincaré bundle, which is a family of degree 0 line bundles parametrized by "A"v in the sense of the above definition. Moreover, this family is universal, that is, to any family "L" parametrized by "T" is associated a unique morphism "f": "T" → "A"v so that "L" is isomorphic to the pullback of "P" along the morphism 1A×"f": "A"×"T" → "A"×"A"v. Applying this to the case when "T" is a point, we see that the points of "A"v correspond to line bundles of degree 0 on "A", so there is a natural group operation on "A"v given by tensor product of line bundles, which makes it into an abelian variety.
In the language of representable functors one can state the above result as follows. The contravariant functor, which associates to each "k"-variety "T" the set of families of degree 0 line bundles parametrised by "T" and to each "k"-morphism "f": "T" → "T"' the mapping induced by the pullback with "f", is representable. The universal element representing this functor is the pair ("A"v, "P").
This association is a duality in the sense that there is a natural isomorphism between the double dual "A"vv and "A" (defined via the Poincaré bundle) and that it is contravariant functorial, i.e. it associates to all morphisms "f": "A" → "B" dual morphisms "f"v: "B"v → "A"v in a compatible way. The "n"-torsion of an abelian variety and the "n"-torsion of its dual are dual to each other when "n" is coprime to the characteristic of the base. In general - for all "n" - the "n"-torsion group schemes of dual abelian varieties are Cartier duals of each other. This generalizes the Weil pairing for elliptic curves.
History.
The theory was first put into a good form when "K" was the field of complex numbers. In that case there is a general form of duality between the Albanese variety of a complete variety "V", and its Picard variety; this was realised, for definitions in terms of complex tori, as soon as André Weil had given a general definition of Albanese variety. For an abelian variety "A", the Albanese variety is "A" itself, so the dual should be "Pic"0("A"), the connected component of the identity element of what in contemporary terminology is the Picard scheme.
For the case of the Jacobian variety "J" of a compact Riemann surface "C", the choice of a principal polarization of "J" gives rise to an identification of "J" with its own Picard variety. This in a sense is just a consequence of Abel's theorem. For general abelian varieties, still over the complex numbers, "A" is in the same isogeny class as its dual. An explicit isogeny can be constructed by use of an invertible sheaf "L" on "A" (i.e. in this case a holomorphic line bundle), when the subgroup
"K"("L")
of translations on "L" that take "L" into an isomorphic copy is itself finite. In that case, the quotient
"A"/"K"("L")
is isomorphic to the dual abelian variety "Â".
This construction of "Â" extends to any field "K" of characteristic zero. In terms of this definition, the Poincaré bundle, a universal line bundle can be defined on
"A" × "Â".
The construction when "K" has characteristic "p" uses scheme theory. The definition of "K"("L") has to be in terms of a group scheme that is a scheme-theoretic stabilizer, and the quotient taken is now a quotient by a subgroup scheme.
The Dual Isogeny.
Let formula_7 be an isogeny of abelian varieties. (That is, formula_8 is finite-to-one and surjective.) We will construct an isogeny formula_9 using the functorial description of formula_10, which says that the data of a map formula_9 is the same as giving a family of degree zero line bundles on formula_11, parametrized by formula_12.
To this end, consider the isogeny formula_13 and formula_14 where formula_15 is the Poincaré line bundle for formula_16. This is then the required family of degree zero line bundles on formula_11.
By the aforementioned functorial description, there is then a morphism formula_9 so that formula_17. One can show using this description that this map is an isogeny of the same degree as formula_8, and that formula_18.
Hence, we obtain a contravariant endofunctor on the category of abelian varieties which squares to the identity. This kind of functor is often called a "dualizing functor".
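As a hedged illustration of what this duality does in the simplest case (a standard fact about elliptic curves, stated here for orientation rather than taken from the construction above): identifying an elliptic curve with its dual via the canonical principal polarization, the dual morphism becomes the classical dual isogeny, and composing the two gives multiplication by the degree.

```latex
% f : E \to E' an isogeny of elliptic curves of degree n,
% with E \cong \hat{E} and E' \cong \hat{E}' via the principal polarizations.
\hat{f} \circ f = [n]_{E}, \qquad f \circ \hat{f} = [n]_{E'}
% Taking degrees, \deg\hat{f}\cdot\deg f = \deg [n] = n^2, so \deg\hat{f} = \deg f = n,
% consistent with the dual isogeny having the same degree as f and with \hat{\hat{f}} = f.
```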
Mukai's Theorem.
A celebrated theorem of Mukai states that there is an isomorphism of derived categories formula_19, where formula_20 denotes the bounded derived category of coherent sheaves on "X". Historically, this was the first use of the Fourier-Mukai transform and shows that the bounded derived category cannot necessarily distinguish non-isomorphic varieties.
Recall that if "X" and "Y" are varieties, and formula_21 is a complex of coherent sheaves, we define the Fourier-Mukai transform formula_22 to be the composition formula_23, where "p" and "q" are the projections onto "X" and "Y" respectively.
Note that formula_24 is flat and hence formula_25 is exact on the level of coherent sheaves, and in applications formula_26 is often a line bundle so one may usually leave the left derived functors underived in the above expression. Note also that one can analogously define a Fourier-Mukai transform formula_27 using the same kernel, by just interchanging the projection maps in the formula.
The statement of Mukai's theorem is then as follows.
Theorem: Let "A" be an abelian variety of dimension "g" and formula_28 the Poincaré line bundle on formula_29. Then, formula_30, where formula_31 is the inversion map, and formula_32 is the shift functor. In particular, formula_33 is an isomorphism.
Notes.
<templatestyles src="Reflist/styles.css" />
References.
"This article incorporates material from Dual isogeny on PlanetMath, which is licensed under the ." | [
{
"math_id": 0,
"text": "\\operatorname{Pic}^0 (A) \\subset \\operatorname{Pic} (A)"
},
{
"math_id": 1,
"text": "m^*L \\cong p^*L \\otimes q^*L"
},
{
"math_id": 2,
"text": "m, p, q"
},
{
"math_id": 3,
"text": "A \n\\times_k A \\to A"
},
{
"math_id": 4,
"text": "\\operatorname{Pic}^0(A) "
},
{
"math_id": 5,
"text": "t \\in T"
},
{
"math_id": 6,
"text": "P \\to A \\times A^\\vee"
},
{
"math_id": 7,
"text": "f: A \\to B"
},
{
"math_id": 8,
"text": "f"
},
{
"math_id": 9,
"text": "\\hat{f}: \\hat{B} \\to \\hat{A}"
},
{
"math_id": 10,
"text": "\\hat{A}"
},
{
"math_id": 11,
"text": "A"
},
{
"math_id": 12,
"text": "\\hat{B}"
},
{
"math_id": 13,
"text": "f \\times 1_{\\hat{B}}: A \\times \\hat{B} \\to B \\times \\hat{B}"
},
{
"math_id": 14,
"text": "(f \\times 1_{\\hat{B}})^* P_{B}"
},
{
"math_id": 15,
"text": "P_B"
},
{
"math_id": 16,
"text": "B"
},
{
"math_id": 17,
"text": "(\\hat{f} \\times 1_A)^*P_A \\cong (f \\times 1_{\\hat{B}})^* P_{B}"
},
{
"math_id": 18,
"text": "\\hat{\\hat{f}} = f"
},
{
"math_id": 19,
"text": "D^b(A) \\cong D^b(\\hat{A}) "
},
{
"math_id": 20,
"text": "D^b(X)"
},
{
"math_id": 21,
"text": "\\mathcal{K} \\in D^b(X \\times Y)"
},
{
"math_id": 22,
"text": "\\Phi^{X \\to Y}_{\\mathcal{K}}: D^b(X) \\to D^b(Y)"
},
{
"math_id": 23,
"text": "\\Phi^{X \\to Y}_{\\mathcal{K}}(\\cdot) = Rq_*(\\mathcal{K} \\otimes_L Lp^*(\\cdot))"
},
{
"math_id": 24,
"text": "p"
},
{
"math_id": 25,
"text": "p^*"
},
{
"math_id": 26,
"text": "\\mathcal{K}"
},
{
"math_id": 27,
"text": "\\Phi_{\\mathcal{K}}^{Y \\to X}"
},
{
"math_id": 28,
"text": "P_A"
},
{
"math_id": 29,
"text": "A \\times \\hat{A}"
},
{
"math_id": 30,
"text": "\\Phi_{P_A}^{\\hat{A} \\to A} \\circ \\Phi_{P_A}^{A\\to \\hat{A}} \\cong \\iota^*[-g]"
},
{
"math_id": 31,
"text": "\\iota: A \\to A"
},
{
"math_id": 32,
"text": "[-g]"
},
{
"math_id": 33,
"text": "\\Phi_{P_A}^{A\\to \\hat{A}}"
}
]
| https://en.wikipedia.org/wiki?curid=7045490 |
70459271 | Joshua 1 | Book of Joshua, chapter 1
Joshua 1 is the first chapter of the Book of Joshua in the Hebrew Bible or in the Old Testament of the Christian Bible. According to Jewish tradition the book was attributed to Joshua, with additions by the high priests Eleazar and Phinehas, but modern scholars view it as part of the Deuteronomistic History, which spans the books of Deuteronomy to 2 Kings, attributed to nationalistic and devotedly Yahwistic writers during the time of the reformer Judean king Josiah in the 7th century BCE. This chapter focuses on the commission of Joshua as the leader of Israel after the death of Moses, a part of a section comprising Joshua 1:1–5:12 about the entry to the land of Canaan.
Text.
This chapter was originally written in the Hebrew language. It is divided into 18 verses.
Textual witnesses.
Some early manuscripts containing the text of this chapter in Hebrew are of the Masoretic Text tradition, which includes the Codex Cairensis (895), Aleppo Codex (10th century), and Codex Leningradensis (1008). Fragments containing parts of this chapter in Hebrew were found among the Dead Sea Scrolls including XJoshua (XJosh, X1; 50 BCE) with extant verses 9–12.
Extant ancient manuscripts of a translation into Koine Greek known as the Septuagint (originally made in the last few centuries BCE) include Codex Vaticanus (B; formula_0B; 4th century) and Codex Alexandrinus (A; formula_0A; 5th century). Fragments of the Septuagint Greek text containing this chapter are found in manuscripts such as Washington Manuscript I (5th century CE), and a reduced version of the Septuagint text is found in the illustrated Joshua Roll.
Old Testament references.
Verse 8 is the first reference to Jewish meditation in the Book of Joshua.
Analysis.
The narrative of Israelites entering the land of Canaan comprises verses 1:1 to 5:12 of the Book of Joshua and has the following outline:
A. Preparations for Entering the Land (1:1–18)
1. Directives to Joshua (1:1–9)
2. Directives to the Leaders (1:10–11)
3. Discussions with the Eastern Tribes (1:12–18)
B. Rahab and the Spies in Jericho (2:1–24)
1. Directives to the Spies (2:1a)
2. Deceiving the King of Jericho (2:1b–7)
3. The Oath with Rahab (2:8–21)
4. The Report to Joshua (2:22–24)
C. Crossing the Jordan (3:1–4:24)
1. Initial Preparations for Crossing (3:1–6)
2. Directives for Crossing (3:7–13)
3. A Miraculous Crossing: Part 1 (3:14–17)
4. Twelve-Stone Memorial: Part 1 (4:1–10a)
5. A Miraculous Crossing: Part 2 (4:10b–18)
6. Twelve-Stone Memorial: Part 2 (4:19–24)
D. Circumcision and Passover (5:1–12)
1. Canaanite Fear (5:1)
2. Circumcision (5:2–9)
3. Passover (5:10–12)
Commissioning of Joshua (1:1–9).
This section forms a transition from the narratives of the wilderness wanderings of Israel into the settlement of the land of Canaan, which YHWH has promised to give to his people (verses 3-4; cf Genesis 15:17-21; Exodus 3:17; Deuteronomy 1:7-8), as an overture to the book of Joshua. Moses had led the Israelites since the Exodus from Egypt throughout the time in the wilderness, but he was not to enter the promised land; rather, Joshua would do that, so the commissioning of Joshua in succession to Moses is the focus of this narrative, with a reference to Moses' death linking it to the closing words of the Book of Deuteronomy (the last book of the Torah). The relationship between Moses and Joshua is well documented in Exodus 17:8–16; Numbers 27:12–23, and in the Book of Deuteronomy (1:37–38; 3:21–28; 31:1–23; 34:9). The first speech in this chapter (verses 2–9) contains God's command to Joshua to cross the Jordan River, so the people of Israel could possess their land (verse 6), and a transfer of the privileges and role of Moses to Joshua. The elements in this transfer are
These recall the law of the 'king' (Deuteronomy 17:14–20), which refers to all who would lead in Israel. Joshua's special position is that YHWH's promise of presence is peculiarly his (verse 9), while Joshua places himself under the authority of the law of God given to Moses (verse 7).
"Now it came about after the death of Moses the servant of the LORD, that the LORD spoke to Joshua the son of Nun, Moses’ servant, saying,"
"Moses My servant is dead; now therefore arise, cross this Jordan, you and all this people, to the land which I am giving to them, to the sons of Israel."
Joshua assumes command (1:10–18).
In verses 10–11 Joshua gave his first command to the 'officers of the people' (presupposed in Exodus 5:10-19; commissioned in Numbers 1:16, Deuteronomy 1:15) to prepare each tribe for the coming military campaign (Deuteronomy 11:31). Verses 12–15 record Joshua's speech to the 'Transjordanian tribes' — the tribes of Reuben, Gad and the half-tribe of Manasseh, who had received their territories east of the Jordan River (cf. Numbers 32; Deuteronomy 3:12-21) — that they should send their men to fight alongside the other tribes to conquer the land west of the Jordan and only return after the conquest is considered complete. The topic is addressed again in Joshua 22, thus bracketing the main parts of the book. The reply of these tribes in verses 16–18 echoes God's assurance in verses 1–9 and brings the chapter to a conclusion.
"until the LORD gives rest to your brothers as he has to you, and they also take possession of the land that the LORD your God is giving them. Then you shall return to the land of your possession and shall possess it, the land that Moses the servant of the LORD gave you beyond the Jordan toward the sunrise."
Notes.
<templatestyles src="Reflist/styles.css" />
References.
<templatestyles src="Reflist/styles.css" />
Sources.
<templatestyles src="Refbegin/styles.css" /> | [
{
"math_id": 0,
"text": " \\mathfrak{G}"
}
]
| https://en.wikipedia.org/wiki?curid=70459271 |
70459276 | Joshua 22 | Book of Joshua, chapter 22
Joshua 22 is the twenty-second chapter of the Book of Joshua in the Hebrew Bible or in the Old Testament of the Christian Bible. According to Jewish tradition the book was attributed to Joshua, with additions by the high priests Eleazar and Phinehas, but modern scholars view it as part of the Deuteronomistic History, which spans the books of Deuteronomy to 2 Kings, attributed to nationalistic and devotedly Yahwistic writers during the time of the reformer Judean king Josiah in the 7th century BCE. This chapter records the mediation of the dispute over the establishment of an altar on the east bank of the Jordan River, a part of a section comprising Joshua 22:1–24:33 about the Israelites preparing for life in the land of Canaan.
Text.
This chapter was originally written in the Hebrew language. It is divided into 34 verses.
Textual witnesses.
Some early manuscripts containing the text of this chapter in Hebrew are of the Masoretic Text tradition, which includes the Codex Cairensis (895), Aleppo Codex (10th century), and Codex Leningradensis (1008).
Extant ancient manuscripts of a translation into Koine Greek known as the Septuagint (originally made in the last few centuries BCE) include Codex Vaticanus (B; formula_0B; 4th century) and Codex Alexandrinus (A; formula_0A; 5th century).
Analysis.
The narrative of the Israelites preparing for life in the land comprises verses 22:1 to 24:33 of the Book of Joshua and has the following outline:
A. The Jordan Altar (22:1–34)
1. Joshua's Charge to the East Jordan Tribes (22:1–8)
2. The Construction of the Altar and a Possible Civil War (22:9–12)
3. Meeting Between Phinehas and the East Jordan Tribes (22:13–29)
a. Phinehas Challenges the East Jordan Tribes (22:13–20)
b. The East Jordan Tribes Explain (22:21–29)
4. Phinehas Returns to the West (22:30–32)
5. The Altar Named (22:33–34)
B. Joshua's Farewell (23:1–16)
C. Covenant and Conclusion (24:1–33)
The altar by the Jordan (22:1–12).
Still at Shiloh, Joshua addressed the Transjordanian tribes, who at the outset of the conquest had been obliged to participate with the other tribes in the war for the land, although they had settled in their lands before their fellow-Israelites had crossed the Jordan (Joshua 1:12–18; cf. Deuteronomy 3:18–20 for the reference to Moses' command in verse 2). After the completion of the conquest and land distribution, they were now permitted to return home, with a strong exhortation (verses 2–5; cf. Deuteronomy 10:12–13) to be faithful to God and with Joshua's 'blessing' of them (verse 6). However, the unity of the people was soon called into question when those two and a half tribes, on their return, erected an altar by the Jordan, on the Israelite side of the border between the two lands (verses 10–11), and this was interpreted by the Cisjordan Israelites as an act of war, because it apparently challenged the claims of the unified sanctuary of Shiloh (verse 12).
The Altar of Witness (22:13–34).
The case against the two and a half tribes is outlined in terms of holiness requirements (verses 13–20), so the priest Phinehas (son of Eleazar), rather than Joshua, was sent to talk to those tribes. The alleged sin of building the altar, which might make the land across the Jordan ritually 'unclean' and therefore unfit for worship (verse 19), is compared with two other sins in the religious realm: the sin at Peor (verse 17; cf. Numbers 25) and the sin of Achan (verse 20; cf. Joshua 7).
It is the duty of all Israel, as a religious assembly or congregation, to pursue the errant tribes
(verses 12, 16).
The Transjordan tribes responded by recognizing the unique claims of both YHWH and his altar (22:21–29) using the phrase 'The LORD, God of gods' ("'el 'elohim YHWH") to emphasize a strong affirmation of YHWH's supremacy and the argument that this altar was not itself for sacrifice, but rather, as a copy of the true altar, to symbolize their participation in the worship even when they were on the other side of the Jordan (verse 27a). Thus the altar is named 'witness' (verses 28, 34), for the unity of Israel as well as the preservation of the true faith for future generations (verses 24–28; cf. Deuteronomy 6:2, 7).
Notes.
<templatestyles src="Reflist/styles.css" />
References.
<templatestyles src="Reflist/styles.css" />
Sources.
<templatestyles src="Refbegin/styles.css" /> | [
{
"math_id": 0,
"text": " \\mathfrak{G}"
}
]
| https://en.wikipedia.org/wiki?curid=70459276 |
70460606 | Bird (mathematical artwork) | Mathematical artwork
Bird, also known as A Bird in Flight refers to bird-like mathematical artworks that are introduced by mathematical equations. A group of these figures are created by combing through tens of thousands of computer-generated images. They are usually defined by trigonometric functions. An example of "A Bird in Flight" is made up of 500 segments defined in a Cartesian plane where for each formula_0 the endpoints of the formula_1-th line segment are:
formula_2
and
formula_3.
The 500 line segments defined above together form a shape in the Cartesian plane that resembles a bird with open wings. Looking at the line segments on the wings of the bird causes an optical illusion and may trick the viewer into thinking that the segments are curved lines. Therefore, the shape can also be considered as an optical artwork. Another version of "A Bird in Flight" was defined as the union of all of the circles with center formula_4 and radius formula_5, where formula_6, and
formula_7
formula_8
formula_9
The set of the 20,001 circles defined above forms a subset of the plane that resembles a flying bird. Although this version's equations are considerably more complicated than those of the 500-segment version, it bears a closer resemblance to a real flying bird.
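A short plotting sketch of the 500-segment version described above (the figure size, axis limits and styling are arbitrary choices):

```python
import numpy as np
import matplotlib.pyplot as plt
from matplotlib.collections import LineCollection

k = np.arange(1, 501)
# first endpoint of the k-th segment
x1 = 1.5 * np.sin(2 * np.pi * k / 500 + np.pi / 3) ** 7
y1 = 0.25 * np.cos(6 * np.pi * k / 500) ** 2
# second endpoint of the k-th segment
x2 = 0.2 * np.sin(6 * np.pi * k / 500 + np.pi / 5)
y2 = -(2.0 / 3.0) * np.sin(2 * np.pi * k / 500 - np.pi / 3) ** 2

segments = np.stack([np.stack([x1, y1], axis=1),
                     np.stack([x2, y2], axis=1)], axis=1)

fig, ax = plt.subplots(figsize=(6, 4))
ax.add_collection(LineCollection(segments, linewidths=0.5, colors="black"))
ax.set_xlim(-1.6, 1.6)
ax.set_ylim(-0.8, 0.4)
ax.set_aspect("equal")
plt.show()
```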
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "k=1, 2, 3, \\ldots , 500"
},
{
"math_id": 1,
"text": "k"
},
{
"math_id": 2,
"text": "\n\\left(\\frac{3}{2}\\sin^7\\left(\\frac{2\\pi k}{500}+\\frac{\\pi}{3}\\right),\\,\\frac{1}{4}\\cos^{2}\\left(\\frac{6\\pi k}{500}\\right)\\right)\n"
},
{
"math_id": 3,
"text": "\n\\left(\\frac{1}{5}\\sin\\left(\\frac{6\\pi k}{500}+\\frac{\\pi}{5}\\right),\\,\\frac{-2}{3}\\sin^2\\left(\\frac{2\\pi k}{500}-\\frac{\\pi}{3}\\right)\\right)\n"
},
{
"math_id": 4,
"text": "\\left(A(k), B(k)\\right)"
},
{
"math_id": 5,
"text": "R(k)"
},
{
"math_id": 6,
"text": "k=-10000, -9999, \\ldots , 9999, 10000"
},
{
"math_id": 7,
"text": "A(k)=\\frac{3k}{20000}+\\sin\\left(\\frac{\\pi }{2}\\left(\\frac{k}{10000}\\right)^7\\right)\\cos^6\\left(\\frac{41\\pi k}{10000}\\right)+\\frac{1}{4}\\cos^{16}\\left(\\frac{41\\pi k}{10000}\\right)\\cos^{12}\\left(\\frac{\\pi k}{20000}\\right)\\sin\\left(\\frac{6\\pi k}{10000}\\right), "
},
{
"math_id": 8,
"text": "\n\\begin{align} B(k)= & -\\cos\\left(\\frac{\\pi}{2}\\left(\\frac{k}{10000}\\right)^7\\right)\\left(1+\\frac{3}{2}\\cos^6\\left(\\frac{\\pi k}{20000}\\right)\\cos^6\\left(\\frac{3\\pi k}{20000}\\right)\\right)\\cos^6\\left(\\frac{41\\pi k}{10000}\\right) \\\\ & +\\frac{1}{2}\\cos^{10}\\left(\\frac{3\\pi k}{100000}\\right)\\cos^{10}\\left(\\frac{9\\pi k}{100000}\\right)\\cos^{10}\\left(\\frac{18\\pi k}{100000}\\right), \\\\ \\end{align}\n"
},
{
"math_id": 9,
"text": "R(k)=\\frac{1}{50}+\\frac{1}{10}\\sin^2\\left(\\frac{41\\pi k}{10000}\\right)\\sin^2\\left(\\frac{9\\pi k}{100000}\\right)+\\frac{1}{20}\\cos^2\\left(\\frac{41\\pi k}{10000}\\right)\\cos^{10}\\left(\\frac{\\pi k}{20000}\\right). "
}
]
| https://en.wikipedia.org/wiki?curid=70460606 |
70474208 | Phase space crystal | A state of a physical system in phase space
Phase space crystal is the state of a physical system that displays discrete symmetry in phase space instead of real space. For a single-particle system, the phase space crystal state refers to the eigenstate of the Hamiltonian for a closed quantum system or the eigenoperator of the Liouvillian for an open quantum system. For a many-body system, phase space crystal is the solid-like crystalline state in phase space. The general framework of phase space crystals is to extend the study of solid state physics and condensed matter physics into the phase space of dynamical systems. While real space has Euclidean geometry, phase space is endowed with classical symplectic geometry or quantum noncommutative geometry.
Phase space lattices.
In his celebrated book "Mathematical Foundations of Quantum Mechanics", John von Neumann constructed a "phase space lattice" from two commuting elementary displacement operators along the position and momentum directions respectively, which is nowadays also called the "von Neumann lattice". If the phase space is replaced by a frequency–time plane, the von Neumann lattice is called the "Gabor lattice" and is widely used for signal processing.
The phase space lattice differs fundamentally from the real space lattice because the two coordinates of phase space are noncommutative in quantum mechanics. As a result, a coherent state moving along a closed path in phase space acquires an additional phase factor, which is similar to the Aharonov–Bohm effect of a charged particle moving in a magnetic field. There is a deep connection between phase space and the magnetic field. In fact, the canonical equations of motion can also be rewritten in the Lorentz-force form, reflecting the symplectic geometry of classical phase space.
In the phase space of dynamical systems, the stable points together with their neighbouring regions form the so-called "Poincaré–Birkhoff islands" in the chaotic sea, which may form a chain or regular two-dimensional lattice structures in phase space. For example, the effective Hamiltonian of the "kicked harmonic oscillator" (KHO) can possess square-lattice, triangular-lattice and even quasi-crystal structures in phase space, depending on the ratio of the kicking frequency to the oscillator frequency. In fact, any arbitrary phase space lattice can be engineered by selecting an appropriate kicking sequence for the KHO.
Phase space crystals (PSC).
The concept of phase space crystal was proposed by Guo et al. and originally refers to the eigenstate of the effective Hamiltonian of a periodically driven (Floquet) dynamical system. Depending on whether interaction effects are included, phase space crystals can be classified into "single-particle PSC" and "many-body PSC".
Single-particle phase space crystals.
Depending on the symmetry in phase space, a phase space crystal can be a one-dimensional (1D) state with formula_0-fold rotational symmetry in phase space or a two-dimensional (2D) lattice state extended over the whole phase space. The concept of a phase space crystal for a closed system has been extended to open quantum systems, where it is named a "dissipative phase space crystal".
Zn PSC.
Phase space is fundamentally different from real space, as the two coordinates of phase space do not commute, i.e., formula_1, where formula_2 is the dimensionless Planck constant. The ladder operator is defined as formula_3 such that formula_4. The Hamiltonian of a physical system formula_5 can also be written as a function of the ladder operators, formula_6. Defining the rotational operator in phase space by formula_7, where formula_8 with formula_0 a positive integer, the system has formula_0-fold rotational symmetry, or formula_9 symmetry, if the Hamiltonian commutes with the rotational operator formula_10, i.e.,
formula_11
In this case, one can apply Bloch's theorem to the formula_0-fold symmetric Hamiltonian and calculate the band structure. The discrete rotationally symmetric structure of the Hamiltonian is called a "formula_9 phase space lattice" and the corresponding eigenstates are called "formula_9 phase space crystals".
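A minimal numerical sketch of this symmetry (the Hamiltonian below, a detuning term plus formula_0-photon terms, is only an illustrative example of a formula_9-symmetric Hamiltonian, and all parameter values are arbitrary):

```python
import numpy as np
from scipy.linalg import expm

N, n = 60, 3                 # Fock-space truncation and rotational order
tau = 2 * np.pi / n

a = np.diag(np.sqrt(np.arange(1, N)), k=1)    # lowering operator: a|m> = sqrt(m)|m-1>
ad = a.conj().T
num = ad @ a

delta, g = 0.3, 0.1          # arbitrary detuning and n-photon coupling strengths
H = delta * num + g * (np.linalg.matrix_power(a, n) + np.linalg.matrix_power(ad, n))

T = expm(-1j * tau * num)    # discrete rotation operator in phase space
commutator = H @ T - T @ H
print(np.max(np.abs(commutator)))   # zero up to round-off: H has Z_n symmetry
```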
Lattice PSC.
The discrete rotational symmetry can be extended to a discrete translational symmetry over the whole phase space. For this purpose, the displacement operator in phase space is defined by formula_12, which has the property formula_13, where formula_14 is a complex number corresponding to the displacement vector in phase space. The system has discrete translational symmetry if the Hamiltonian commutes with the translational operator, formula_15, i.e.,
formula_16
If there exist two elementary displacements formula_17 and formula_18 that satisfy the above condition simultaneously, the phase space Hamiltonian possesses 2D lattice symmetry in phase space. However, the two displacement operators do not commute in general, formula_19. In the non-commutative phase space, the concept of a "point" is meaningless. Instead, a coherent state formula_20 is defined as the eigenstate of the lowering operator via formula_21. The displacement operator displaces the coherent state and introduces an additional phase, i.e., formula_22. A coherent state that is moved along a closed path, e.g., a triangle with three edges given by formula_23 in phase space, acquires a geometric phase factor
formula_24
where formula_25 is the enclosed area. This geometric phase is analogous to the Aharonov–Bohm phase of a charged particle in a magnetic field. If the magnetic unit cell and the lattice unit cell are commensurable, namely, if there exist two integers formula_26 and formula_27 such that formula_28, one can calculate the band structure defined in a 2D Brillouin zone. For example, the spectrum of the square phase space lattice Hamiltonian formula_29 displays Hofstadter's butterfly band structure, which describes the hopping of charged particles between tight-binding lattice sites in a magnetic field. In this case, the eigenstates are called "2D lattice phase space crystals".
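A small truncated-Fock-space check of the triangular-loop phase stated above, using the displacement operator defined earlier (the truncation size and the values of formula_14, formula_2 and the coherent-state amplitude are arbitrary):

```python
import numpy as np
from math import factorial
from scipy.linalg import expm

N, lam = 40, 1.0                        # Fock-space truncation, dimensionless Planck constant
a = np.diag(np.sqrt(np.arange(1, N)), k=1)
ad = a.conj().T

def D(xi):
    # displacement operator D(xi) = exp[(xi a^dag - xi* a) / sqrt(2 lam)]
    return expm((xi * ad - np.conj(xi) * a) / np.sqrt(2 * lam))

def coherent(alpha):
    m = np.arange(N)
    return np.exp(-abs(alpha) ** 2 / 2) * alpha ** m / np.sqrt([float(factorial(int(i))) for i in m])

xi1, xi2, alpha = 0.4, 0.3j, 0.5
loop = D(-xi1 - xi2) @ D(xi2) @ D(xi1)  # closed triangular path in phase space
psi = coherent(alpha)
S = 0.5 * np.imag(xi2 * np.conj(xi1))   # enclosed area of the triangle
print(np.angle(psi.conj() @ loop @ psi), S / lam)   # the two phases agree
```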
Dissipative PSC.
The concept of phase space crystals for closed quantum systems has been extended to open quantum systems. In circuit QED systems, a microwave resonator combined with Josephson junctions and a voltage bias under formula_0-photon resonance can be described by a rotating wave approximation (RWA) Hamiltonian formula_30 with the formula_9 phase space symmetry described above. When single-photon loss is dominant, the dissipative dynamics of the resonator is described by the following master equation (Lindblad equation)
formula_31
where formula_32 is the loss rate and superoperator formula_33 is called the "Liouvillian". One can calculate the eigenspectrum and corresponding eigenoperators of the Liouvillian of the system formula_34.
Notice that not only the Hamiltonian but also the Liouvillian is invariant under the formula_0-fold rotational operation, i.e., formula_35 with formula_36 and formula_8. This symmetry plays a crucial role in extending the concept of phase space crystals to an open quantum system. As a result, the Liouvillian eigenoperators formula_37 have a Bloch mode structure in phase space, which is called a "dissipative phase space crystal".
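A self-contained numerical check of this invariance (the formula_9-symmetric Hamiltonian and all parameter values below are illustrative choices, not taken from a specific circuit-QED model): applying the Lindblad generator to a rotated density matrix gives the same result as rotating the generator's output, i.e. formula_35 holds.

```python
import numpy as np
from scipy.linalg import expm

N, n, gamma = 40, 3, 0.2
tau = 2 * np.pi / n
a = np.diag(np.sqrt(np.arange(1, N)), k=1)
ad = a.conj().T
num = ad @ a
H = 0.3 * num + 0.1 * (np.linalg.matrix_power(a, n) + np.linalg.matrix_power(ad, n))

def liouvillian(rho):
    # right-hand side of the Lindblad master equation with single-photon loss (hbar = 1)
    return (-1j * (H @ rho - rho @ H)
            + gamma / 2 * (2 * a @ rho @ ad - num @ rho - rho @ num))

T = expm(-1j * tau * num)
def rotate(op):
    # rotation superoperator acting on an operator: T^dag O T
    return T.conj().T @ op @ T

rng = np.random.default_rng(1)
M = rng.normal(size=(N, N)) + 1j * rng.normal(size=(N, N))
rho = M @ M.conj().T
rho /= np.trace(rho)                    # a random density matrix

print(np.max(np.abs(liouvillian(rotate(rho)) - rotate(liouvillian(rho)))))  # zero up to round-off
```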
Many-body phase space crystals.
The concept of phase space crystal can be extended to systems of interacting particles where it refers to the many-body state having a solid-like crystalline structure in phase space. In this case, the interaction of particles plays an important role. In real space, the many-body Hamiltonian subjected to a perturbative periodic drive (with period formula_38) is given by
formula_39
Usually, the interaction potential formula_40 is a function of the distance between the two particles in real space. By transforming to the frame rotating at the driving frequency and adopting the rotating wave approximation (RWA), one can obtain the effective Hamiltonian.
formula_41
Here, formula_42 are the stroboscopic position and momentum of the formula_43-th particle, namely, they take the values of formula_44 at integer multiples of the driving period, formula_45. For a crystal structure to form in phase space, the effective interaction needs to be invariant under discrete rotational or translational operations in phase space.
Phase space interactions.
In classical dynamics, to the leading order, the effective interaction potential in phase space is the time-averaged real space interaction in one driving period
formula_46
Here, formula_47 represents the trajectory of the formula_43-th particle in the absence of the driving field. For the model power-law interaction potential formula_48, with integer and half-integer values formula_49, the direct integral given by the above time-average formula is divergent, i.e., formula_50 A renormalisation procedure was introduced to remove the divergence, and the correct phase space interaction is a function of the "phase space distance" formula_51 in the formula_52 plane. For the Coulomb potential formula_53, the result formula_54 still keeps the form of Coulomb's law, up to a logarithmically renormalised "charge" formula_55, where formula_56 is Euler's number. For formula_57, the renormalised phase space interaction potential is
formula_58
where formula_59 is the collision factor. For the special case of formula_60, there is no effective interaction in phase space, since formula_61 is a constant with respect to the phase space distance. In general, for formula_62, the phase space interaction formula_63 grows with the phase space distance formula_64. For the hard-sphere interaction (formula_65), the phase space interaction formula_66 behaves like the confinement interaction between quarks in quantum chromodynamics (QCD). The above phase space interaction is indeed invariant under the discrete rotational or translational operations in phase space. Combined with the phase space lattice potential from the driving, there exists a stable regime where the particles arrange themselves periodically in phase space, giving rise to "many-body phase space crystals".
In quantum mechanics, the point particle is replaced by a quantum wave packet and the divergence problem is naturally avoided. To the lowest order of the Magnus expansion for a Floquet system, the quantum phase space interaction of two particles is the real space interaction time-averaged over the periodic two-body quantum state formula_67, as follows.
formula_68
In the coherent state representation, the quantum phase space interaction approaches the classical phase space interaction in the long-distance limit. For formula_69 bosonic ultracold atoms with repulsive contact interaction bouncing on an oscillating mirror, it is possible to form a Mott insulator-like state in the formula_9 phase space lattice. In this case, there is a well-defined number of particles in each potential site, which can be viewed as an example of a "1D many-body phase space crystal".
If the two indistinguishable particles have spins, the total phase space interaction can be written in a sum of direct interaction and exchange interaction. This means that the exchange effect during the collision of two particles can induce an effective spin-spin interaction.
Phase space crystal vibrations.
Solid crystals are defined by a periodic arrangement of atoms in real space; atoms subject to a time-periodic drive can also form crystals in phase space. The interactions between these atoms give rise to collective vibrational modes, similar to phonons in solid crystals. The honeycomb phase space crystal is particularly interesting because the vibrational band structure has two sub-lattice bands that can exhibit nontrivial topological physics. The vibrations of any two atoms are coupled via a pairing interaction with intrinsically complex couplings. Their complex phases have a simple geometrical interpretation and cannot be eliminated by a gauge transformation, leading to a vibrational band structure with non-trivial Chern numbers and chiral edge states in phase space. In contrast to all topological transport scenarios in real space, the chiral transport of phase space phonons can arise without breaking physical time-reversal symmetry.
Relation to time crystals.
Time crystals and phase space crystals are closely related but different concepts. Both study subharmonic modes that emerge in periodically driven systems. Time crystals focus on the spontaneous breaking of discrete time translational symmetry (DTTS) and the protection mechanism of subharmonic modes in quantum many-body systems. In contrast, the study of phase space crystals focuses on the discrete symmetries in phase space. The basic modes constituting a phase space crystal are not necessarily many-body states, and need not break DTTS, as in the case of single-particle phase space crystals. For many-body systems, phase space crystals concern the interplay of the potential subharmonic modes that are arranged periodically in phase space. There is a growing trend to study the interplay of multiple time crystals, which has been coined "condensed matter physics in time crystals".
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "n"
},
{
"math_id": 1,
"text": "[\\hat{x},\\hat{p}]=i\\lambda "
},
{
"math_id": 2,
"text": "\\lambda"
},
{
"math_id": 3,
"text": " \\hat{a}=(\\hat{x}+i\\hat{p})/\\sqrt{2\\lambda} "
},
{
"math_id": 4,
"text": "[\\hat{a},\\hat{a}^\\dagger]=1"
},
{
"math_id": 5,
"text": "\\hat{H}=H(\\hat{x},\\hat{p})"
},
{
"math_id": 6,
"text": "\\hat{H}=H(\\hat{a},\\hat{a}^\\dagger)"
},
{
"math_id": 7,
"text": "\\hat{T}_\\tau=e^{-i\\tau \\hat{a}^\\dagger \\hat{a}}"
},
{
"math_id": 8,
"text": "\\tau={2\\pi}/{n}"
},
{
"math_id": 9,
"text": "Z_n"
},
{
"math_id": 10,
"text": "[\\hat{H},\\hat{T}_\\tau]=0"
},
{
"math_id": 11,
"text": "\\hat{H}=\\hat{T}^\\dagger_\\tau\\hat{H}\\hat{T}_\\tau \\rightarrow H(\\hat{a},\\hat{a}^\\dagger)=H(\\hat{T}^\\dagger_\\tau\\hat{a}\\hat{T}_\\tau,\\hat{T}^\\dagger_\\tau\\hat{H}\\hat{a}^\\dagger_\\tau)=H(\\hat{a}e^{-i\\tau},\\hat{a}^\\dagger e^{i\\tau})."
},
{
"math_id": 12,
"text": "\\hat{D}(\\xi)=\\exp[(\\xi\\hat{a}^\\dagger-\\xi^*\\hat{a})/\\sqrt{2\\lambda}]"
},
{
"math_id": 13,
"text": "\\hat{D}^\\dagger(\\xi)\\hat{a}\\hat{D}(\\xi)=\\hat{a}+\\xi"
},
{
"math_id": 14,
"text": "\\xi"
},
{
"math_id": 15,
"text": "[\\hat{H},\\hat{D}^\\dagger(\\xi)]=0"
},
{
"math_id": 16,
"text": " \\hat{H}=\\hat{D}^\\dagger(\\xi)\\hat{H}\\hat{D}(\\xi) \\rightarrow H(\\hat{a},\\hat{a}^\\dagger)=H(\\hat{D}^\\dagger(\\xi)\\hat{a}\\hat{D}(\\xi),\\hat{D}^\\dagger\\hat{a}^\\dagger\\hat{D}(\\xi))=H(\\hat{a}+\\xi,\\hat{a}^\\dagger+\\xi^*)."
},
{
"math_id": 17,
"text": "\\hat{D}(\\xi_1)"
},
{
"math_id": 18,
"text": "\\hat{D}(\\xi_2)"
},
{
"math_id": 19,
"text": "[\\hat{D}(\\xi_1),\\hat{D}(\\xi_2)]\\neq 0"
},
{
"math_id": 20,
"text": "|\\alpha\\rangle"
},
{
"math_id": 21,
"text": "\\hat{a}|\\alpha\\rangle=\\alpha|\\alpha\\rangle"
},
{
"math_id": 22,
"text": "\\hat{D}(\\xi)|\\alpha\\rangle=e^{i\\mathrm{Im}(\\xi\\alpha^*)}|\\alpha+\\xi\\rangle"
},
{
"math_id": 23,
"text": "(\\xi_1,\\xi_2,-\\xi_1-\\xi_2)"
},
{
"math_id": 24,
"text": "\\hat{D}[-\\xi_1-\\xi_2]\\hat{D}(\\xi_2)\\hat{D}(\\xi_1)|\\alpha\\rangle=e^{i\\frac{S}{\\lambda}}|\\alpha\\rangle,"
},
{
"math_id": 25,
"text": "S=\\frac{1}{2}\\mathrm{Im}(\\xi_2\\xi^*_1)"
},
{
"math_id": 26,
"text": "r"
},
{
"math_id": 27,
"text": "s"
},
{
"math_id": 28,
"text": "[\\hat{D}^r(\\xi_1),\\hat{D}^s(\\xi_2)]=0"
},
{
"math_id": 29,
"text": "\\hat{H}=\\cos\\hat{x}+\\cos\\hat{p}"
},
{
"math_id": 30,
"text": "\\hat{H}_{RWA}"
},
{
"math_id": 31,
"text": " \\frac{d\\rho}{dt}=-\\frac{i}{\\hbar}[\\hat{H}_{RWA},\\rho]+\\frac{\\gamma}{2}(2\\hat{a}\\rho\\hat{a}^{\\dagger}-\\hat{a}^{\\dagger}\\hat{a}\\rho-\\rho\\hat{a}^{\\dagger}\\hat{a})=\\mathcal{L}(\\rho),"
},
{
"math_id": 32,
"text": "\\gamma"
},
{
"math_id": 33,
"text": "\\mathcal{L}"
},
{
"math_id": 34,
"text": "\\mathcal{L}\\hat{\\rho}_m=\\lambda_m\\hat{\\rho}_m"
},
{
"math_id": 35,
"text": "[\\mathcal{L},\\mathcal{T}_\\tau]=0"
},
{
"math_id": 36,
"text": "\\mathcal{T}_\\tau\\hat{O}=\\hat{T}^\\dagger_\\tau\\hat{O}\\hat{T}_\\tau"
},
{
"math_id": 37,
"text": "\\hat{\\rho}_m"
},
{
"math_id": 38,
"text": "T"
},
{
"math_id": 39,
"text": "\\mathcal{H}=\\sum_iH(x_i,p_i,t)+\\sum_{i<j}V(x_i-x_j)."
},
{
"math_id": 40,
"text": "V(x_i-x_j)"
},
{
"math_id": 41,
"text": "\\mathcal{H}_{RWA}=\\sum_iH_{RWA}(X_i,P_i,t)+\\sum_{i<j}U(X_i,P_i;X_j,P_j)."
},
{
"math_id": 42,
"text": "X_i, P_i"
},
{
"math_id": 43,
"text": "i"
},
{
"math_id": 44,
"text": "x_i(t), p_i(t)"
},
{
"math_id": 45,
"text": "t=nT"
},
{
"math_id": 46,
"text": "U_{ij}=\\frac{1}{T}\\int^T_0V[x_i(t)-x_j(t)]."
},
{
"math_id": 47,
"text": "x_i(t)"
},
{
"math_id": 48,
"text": "V(x_i-x_j)=\\epsilon^{2n}/|x_i-x_j|^{2n}"
},
{
"math_id": 49,
"text": "n\\geq 1/2"
},
{
"math_id": 50,
"text": "U_{ij}=\\infty."
},
{
"math_id": 51,
"text": " R_{ij}"
},
{
"math_id": 52,
"text": "(X_i,P_i)"
},
{
"math_id": 53,
"text": "n=1/2"
},
{
"math_id": 54,
"text": "U(R_{ij})=2\\pi^{-1}\\tilde{\\epsilon}/R_{ij}"
},
{
"math_id": 55,
"text": "\\tilde{\\epsilon}=\\epsilon\\ln (\\epsilon^{-1}e^2 R^3_{ij}/2)"
},
{
"math_id": 56,
"text": "e=2.71828\\cdots"
},
{
"math_id": 57,
"text": "n=1,3/2,2,5/2,\\cdots"
},
{
"math_id": 58,
"text": "U_{ij}=U(R_{ij})=\\frac{2\\epsilon\\gamma^{2n-1}4^{\\frac{1}{2n}-1}}{\\pi(2n-1)}R^{1-\\frac{1}{n}}_{ij}, "
},
{
"math_id": 59,
"text": "\\gamma=(4n-1)^{\\frac{1}{2n-1}}"
},
{
"math_id": 60,
"text": "n=1"
},
{
"math_id": 61,
"text": "U(R_{ij})=\\sqrt{3}\\epsilon\\pi^{-1}"
},
{
"math_id": 62,
"text": "n>1"
},
{
"math_id": 63,
"text": "{U}(R_{ij})"
},
{
"math_id": 64,
"text": "R_{ij}"
},
{
"math_id": 65,
"text": "n\\rightarrow\\infty"
},
{
"math_id": 66,
"text": "U(R_{ij})=\\epsilon\\pi^{-1}R_{ij}"
},
{
"math_id": 67,
"text": "\\Phi(x_i,x_j,t)"
},
{
"math_id": 68,
"text": "U_{ij}=\\frac{1}{T}\\int^T_0\\langle \\Phi(x_i,x_j,t) |V(x_i-x_j)|\\Phi(x_i,x_j,t)\\rangle."
},
{
"math_id": 69,
"text": "N"
}
]
| https://en.wikipedia.org/wiki?curid=70474208 |
70477295 | Capacity credit | Capacity credit (CC, also capacity value or de-rating factor) is the fraction of the installed capacity of a power plant which can be relied upon at a given time (typically during system stress), frequently expressed as a percentage of the nameplate capacity. A conventional (dispatchable) power plant can typically provide the electricity at full power as long as it has a sufficient amount of fuel and is operational, therefore the capacity credit of such a plant is close to 100%; it is exactly 100% for some definitions of the capacity credit (see below). The output of a variable renewable energy (VRE) plant depends on the state of an uncontrolled natural resource (usually the sun or wind), therefore a mechanically and electrically sound VRE plant might not be able to generate at the rated capacity (neither at the nameplate, nor at the capacity factor level) when needed, so its CC is much lower than 100%. The capacity credit is useful for a rough estimate of the firm power a system with weather-dependent generation can reliably provide. For example, with a low, but realistic (cf. Ensslin et al.) wind power capacity credit of 5%, 20 gigawatts (GW) worth of wind power needs to be added to the system in order to permanently retire a 1 GW fossil fuel plant while keeping the electrical grid reliability at the same level.
Definitions.
There are a few similar definitions of the capacity credit:
Values.
The capacity credit can be much lower than the capacity factor (CF): in a not very probable scenario, if the riskiest time for the power system is after sunset, the capacity credit for solar power without coupled energy storage is zero regardless of its CF (under this scenario all existing conventional power plants would have to be retained after the solar installation is added). More generally, the CC is low when the times of the day (or seasons) for the peak load do not correlate well with times of high energy production. Ensslin et al. report wind CC values ranging from 40% down to 5%, with values dropping off with increased wind power penetration.
For very low penetrations (a few percent), when the chance of the system actually being forced to rely on the VRE at peak times is negligible, the CC of a VRE plant is close to its capacity factor. For high penetrations, because the weather tends to affect all plants of a similar type at the same time and in the same way, and because the chance of system stress during low-wind conditions increases, the capacity credit of a VRE plant decreases. Greater geographical diversity of the VRE installations improves the capacity credit value, assuming a grid that can carry all necessary load. Increasing the penetration of one VRE resource can also increase the CC of another one: for example, in California, the increase in solar capacity, with a low incremental CC expected to be 8% in 2023 and dropping to 6% by 2026, helps shift the peak demand on other sources later into the evening, when the wind is stronger, so the CC of wind power is expected to increase from 14% to 22% within the same period. A 2020 study of ELCC by California utilities recommends even more pessimistic values for photovoltaics: by 2030 the ELCC of solar will become "nearly zero". The California Public Utilities Commission orders of 2021 and 2023 intend to add, by 2035, additional renewable generation capacity with an NQC of 15.5 GW and a nameplate capacity of 85 GW, implying a planned NQC-to-nameplate ratio of 15.5/85 = 18% for the renewables (a combination of solar and wind) together with geothermal, batteries, long-term storage, and demand response.
In some areas peak demand is driven by air conditioning and occurs on summer afternoons and evenings, while the wind is strongest at night, with offshore wind strongest in the winter. This results in a relatively low CC for such potential wind power locations: for example in Texas a predicted average for onshore wind is 13% and for offshore wind is 7%.
In Great Britain, the solar contribution to system adequacy is small and is primarily due to scenarios in which the use of solar allows the battery storage to be kept fully charged until later in the evening. The National Grid ESO in 2019 suggested planning for the following EFC-based de-rating:
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "NQC = QC"
}
]
| https://en.wikipedia.org/wiki?curid=70477295 |
70478462 | Hybrid rocket fuel regression | Hybrid rocket fuel regression refers to the process by which the fuel grain of a hybrid-propellant rocket is converted from a solid to a gas that is combusted. It encompasses the regression rate, the distance that the fuel surface recedes over a given time, as well as the burn area, the surface area that is being eroded at a given moment.
Because the quantity of fuel being burned is important for the effectiveness of combustion in the engine, the regression rate plays a fundamental role in the design and firing of a hybrid engine. Unfortunately, hybrid fuel grains tend to have extremely slow regression, requiring very long combustion chambers or complex port designs that result in excess mass. Regression rate has also proven quite difficult to predict, with advanced models still providing significant error when applied at various scales and with differing fuels. Recent research has centered around the development of more accurate models coupled with research into techniques for increasing regression rate.
Regression rate.
In contrast to solid rocket motors, hybrids exhibit significant dependence on the size of the port and low dependence on chamber pressure under normal conditions. Because they are dominated by thermodynamic forces, models typically emerge via a heat transfer calculation. Marxman provided the first attempt at an a priori model of hybrid regression, basing the rate on a heat transfer equilibrium calculation and assuming unity for the Prandtl and Lewis numbers. He eventually developed the equation below, using formula_0 for the instantaneous local mass flux, formula_1 for the distance along the port, formula_2 for the density of the fuel, formula_3 for the viscosity of the main-stream gas flow, formula_4 for the velocity ratio between gas in the main stream and gas at the flame, and formula_5 for the ratio of the enthalpy difference from flame to fuel surface (formula_6) to the effective heat of vaporization (formula_7) of the fuel.
formula_8
Though the model showed large errors when used to predict the regression rate for an annular port, the strong dependence on flux was a key finding. Unfortunately, many components of the equation are extremely difficult to determine, so most engineers focused on developing models based on testing, fitting the regression rate to a power function by effectively combining most of the terms into one coefficient that is assumed constant throughout the burn. The model was typically simplified into a basic equation by considering the average regression over time for a test, fitting the coefficients formula_9, formula_10 and formula_11 based on regression testing.
formula_12
where G is the mass flux of propellant and x is the distance along the fuel grain. Though Marxman's initial math indicates that formula_13 and formula_14, data typically ranges from 0.5 to 0.8 for formula_10 and usually shows less dependence than predicted on formula_1. By averaging the regression out over the length of the fuel grain, the commonly used space-time averaged regression equation is created (also typically using formula_15, the flux of oxidizer, for the flux term instead of formula_0, the flux of oxidizer and fuel).
formula_16
Many alternative equations for regression rate have been derived, usually constructed by reconsidering the assumptions made by Marxman but using the same diffusion-limited calculation approach. A model published by Karabeyoglu, for example, provides a more accurate approach by considering variation in the Prandtl number, accounting for entrance effects in the Reynolds number, and moving the flame sheet location to the stoichiometric location.
Similar concepts can be seen in an extension by Whitmore, where the Prandtl number is approximated as 0.8 and the skin friction coefficient is recalculated to consider blowing and the flow development along the grain length.
Both improved formulas appear to show a better relationship with tested data.
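A minimal sketch of how the space-time averaged power law is typically used in practice (all coefficients, dimensions and flow rates below are illustrative placeholders, not measured values): integrating the regression law for a single circular port gives the port radius and fuel flow over the burn.

```python
import numpy as np

# illustrative regression coefficients for r_dot = a * G_o**n  (r_dot in m/s, G_o in kg/(m^2 s))
a_reg, n_reg = 2.0e-5, 0.65
rho_fuel = 930.0      # kg/m^3, roughly HTPB-like
L_grain = 3.0         # m, grain length
mdot_ox = 1.0         # kg/s, oxidizer flow held constant
R = 0.04              # m, initial port radius

dt = 0.01             # s, explicit Euler time step
for _ in range(int(20.0 / dt)):
    G_o = mdot_ox / (np.pi * R**2)                           # oxidizer mass flux through the port
    r_dot = a_reg * G_o**n_reg                               # space-time averaged regression law
    mdot_fuel = rho_fuel * r_dot * 2 * np.pi * R * L_grain   # density x regression rate x burn area
    R += r_dot * dt                                          # the port opens up as the fuel regresses

print(f"final port radius: {R * 1000:.1f} mm, final O/F ratio: {mdot_ox / mdot_fuel:.2f}")
```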
Regression enhancements.
Liquifying fuels.
The simplest technique for increasing the regression rate is to use a different fuel. Solids with lower molecular masses tend to have lower viscosities, a quality which generally correlates with a decrease in the required energy for gasification. Taken to the extreme, a new phenomenon actually emerges, where a melt layer at the surface of the fuel allows droplets to be entrained as oxidizer flows past. At the flux levels commonly seen in hybrid rocketry, this entrainment actually accounts for the largest portion of regression (dominating vaporization).
The concept was originally discovered during a brief research period in which AFRL and Orbital Technologies Corporation (ORBITEC) tested several cryogenic fuels in an effort to increase specific impulse. Using solidified pentane, they found regression rates vastly increased over traditional hybrid fuels. Several tests with paraffin also foreshadowed modern liquifying rocket technology, with the Peregrine rocket among others leading the way for further development.
The alternative regression mechanism does introduce some other issues, mainly a reduction in combustion efficiency. Because of the large particle size, the entrained droplets may not be fully consumed before flowing out of the nozzle and leaving the engine. Indeed, paraffin has a tendency to slough off large fragments, greatly reducing combustion efficiency and potentially contributing to combustion instability.
Complex geometry.
Although it is much harder to predict, complex grain geometries offer another technique for increasing regression rate and burn area in order to greatly increase fuel flow.
Using non-circular port cross sections increases the area exposed to the oxidizer to be gasified, especially at the start of the burn. However, as the fuel continues to regress it will begin to round out the shape because regression generally occurs normal to the fuel’s surface, and corners tend to regress faster. Generally, this will cause the O/F ratio to shift away from stoichiometric.
Some of the first attempts at complex geometries were wagon wheel designs developed by the United Technology Center. Though they massively increase fuel flow, wagon wheels require that a significant portion of fuel is left behind, or the structure could break apart.
More recently, helical designs have been used to create a centripetal component of flow, reducing blowing and providing greater friction between the oxidizer and fuel in order to increase convection. Analysis at the University of Utah concluded that regression rates generally increased by at least a factor of two, up to even a factor of four. In general, helical regression rate is modeled by several multiplicative adjustments to the skin friction coefficient and to the blowing coefficient.
Burn area.
The burn area refers to the surface exposed to the heat of the combustion chamber, and it is just as pivotal as the regression rate itself, since the volume flow rate of fuel is usually given by the regression rate multiplied by the burn area. Depending on the complexity of the grain geometry, it can also be quite difficult to calculate. In its simplest form, a tube-shaped fuel grain has a lateral burn area of formula_17, plus the area on both ends. However, a star-shaped fuel grain could require the use of CAD or other geometric software to determine the surface area, particularly as the surface regresses along the normals, often creating highly irregular geometry.
In fact, the process is even slightly more complicated because corners protruding into the combustion chamber will regress more quickly than their circular counterparts, since they are exposed to heat on both sides. To model the problem, Bath developed a technique of iteratively blurring pixels and removing those that fall below a certain threshold of brightness. Using the image processing to generate a table of surface area outputs for a given volume, it can easily be implemented into a model for regression of the fuel grain over time.
Unfortunately, most models still require an empirical factor that depends on variations in fuel and oxidizer flow paths for different port geometries. In the case of the image blurring model, predictions of regression are also dependent on the settings used in the image processing program.
Models of burn area based on 2D cross sections lose another component of accuracy because they assume regression in the radial direction. For a helical grain, for example, the burn area predicted by Bath's model would be incorrect.
Regression testing.
Because of the lack of accurate prediction methods, each system should generally be tested in full configuration to accurately determine the regression rate before flight. Typically, data points for several identical grains tested under different flux conditions are fitted to the space-time averaged power function. Initially, methods for fitting the power function were often left ambiguous in publications due to variation in the possible calculations for average mass flux, making it difficult to compare findings. A now commonly-referenced study by Karabeyoglu indicates that the easiest measurement, the port diameter average, also provides the most accurate results.
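A minimal sketch of such a fit (the flux and regression-rate numbers below are made-up illustrative data, not measurements from any published test series):

```python
import numpy as np

# hypothetical space-time averaged test data: oxidizer flux in kg/(m^2 s),
# measured average regression rate in mm/s
G_o   = np.array([ 80.0, 120.0, 160.0, 220.0, 300.0])
r_dot = np.array([ 0.55,  0.74,  0.90,  1.12,  1.38])

# fit r_dot = a * G_o**n by linear least squares in log-log space
slope, intercept = np.polyfit(np.log(G_o), np.log(r_dot), 1)
n = slope
a = np.exp(intercept)
print(f"a = {a:.3g}, n = {n:.2f}")   # n typically comes out between 0.5 and 0.8
```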
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "G"
},
{
"math_id": 1,
"text": "x"
},
{
"math_id": 2,
"text": "\\rho_f\n"
},
{
"math_id": 3,
"text": "\\mu"
},
{
"math_id": 4,
"text": "u_e / u_c"
},
{
"math_id": 5,
"text": "\\Delta h / {h_v}"
},
{
"math_id": 6,
"text": "\\Delta h"
},
{
"math_id": 7,
"text": "h_v"
},
{
"math_id": 8,
"text": "\\dot{r} = \\frac{0.036 G}{\\rho_f} {( \\frac{Gx}{\\mu} )}^{-0.2} {(\\frac{u_e}{u_c} \\frac{\\Delta h}{h_v})}^{0.23}"
},
{
"math_id": 9,
"text": "a"
},
{
"math_id": 10,
"text": "n"
},
{
"math_id": 11,
"text": "m"
},
{
"math_id": 12,
"text": "\\dot{r} = a{G}^n x^m"
},
{
"math_id": 13,
"text": "n = 0.8"
},
{
"math_id": 14,
"text": "m=-0.2"
},
{
"math_id": 15,
"text": "G_o"
},
{
"math_id": 16,
"text": "\\dot{r} = a{G_o}^n"
},
{
"math_id": 17,
"text": "\\pi * D * l"
}
]
| https://en.wikipedia.org/wiki?curid=70478462 |
70478590 | Natarajan dimension | In the theory of Probably Approximately Correct Machine Learning, the Natarajan dimension characterizes the complexity of learning a set of functions, generalizing from the Vapnik-Chervonenkis dimension for boolean functions to multi-class functions. Originally introduced as the "Generalized Dimension" by Natarajan, it was subsequently renamed the "Natarajan Dimension" by Haussler and Long.
Definition.
Let formula_0 be a set of functions from a set formula_1 to a set formula_2. formula_0 shatters a set formula_3
if there exist two functions formula_4 such that
for all formula_5, and
for every formula_6, there exists a function formula_7 such that
for all formula_8 and for all formula_9.
The Natarajan dimension of H is the maximal cardinality of a set shattered by formula_0.
It is easy to see that if formula_10, the Natarajan dimension collapses to the Vapnik-Chervonenkis dimension.
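A brute-force sketch of this definition for finite classes (the dictionary-based representation of hypotheses and the toy class at the bottom are illustrative choices, not a standard library interface):

```python
from itertools import product, chain, combinations

def natarajan_shatters(H, C):
    # does the finite class H (each hypothesis a dict x -> label) N-shatter the set C?
    C = list(C)
    for f0, f1 in product(H, repeat=2):
        if any(f0[x] == f1[x] for x in C):
            continue  # f0 and f1 must disagree on every point of C
        subsets = chain.from_iterable(combinations(C, r) for r in range(len(C) + 1))
        if all(any(all(h[x] == (f0[x] if x in B else f1[x]) for x in C) for h in H)
               for B in map(set, subsets)):
            return True
    return False

def natarajan_dimension(H, X):
    # largest cardinality of a subset of X that H N-shatters (0 if none)
    best = 0
    for r in range(1, len(X) + 1):
        if any(natarajan_shatters(H, C) for C in combinations(X, r)):
            best = r
    return best

# toy example: X = {0, 1}, Y = {0, 1, 2}, H = all functions from X to Y
X = [0, 1]
H = [{0: a, 1: b} for a in range(3) for b in range(3)]
print(natarajan_dimension(H, X))  # prints 2
```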
Shalev-Shwartz and Ben-David present comprehensive material on multi-class learning and the Natarajan dimension, including uniform convergence and learnability. | [
{
"math_id": 0,
"text": "H"
},
{
"math_id": 1,
"text": "X"
},
{
"math_id": 2,
"text": "Y"
},
{
"math_id": 3,
"text": "C \\subset X"
},
{
"math_id": 4,
"text": "f_0, f_1 \\in H"
},
{
"math_id": 5,
"text": " x \\in C, f_0(x) \\neq f_1(x)"
},
{
"math_id": 6,
"text": "B\\subset C "
},
{
"math_id": 7,
"text": "h \\in H "
},
{
"math_id": 8,
"text": "x \\in B, h(x) = f_0(x)"
},
{
"math_id": 9,
"text": "x \\in C - B, h(x) = f_1(x)"
},
{
"math_id": 10,
"text": "|Y| = 2"
}
]
| https://en.wikipedia.org/wiki?curid=70478590 |
70480889 | Z-HIT | Validation of Impedance Spectra
Z-HIT, also denoted as ZHIT, Z-HIT relationship, is a bidirectional mathematical transformation, connecting the two parts of a complex function, i.e. its modulus and its phase. Z-HIT relations are somewhat similar to the Kramers–Kronig relations, where the real part can be computed from the imaginary part (or vice versa). In contrast to the Kramers–Kronig relations, in the Z-HIT the impedance modulus is computed from the course of the phase angle (or vice versa). The main practical advantage of Z-HIT relationships over Kramers–Kronig relationships is that the Z-HIT integration limits do not require any extrapolation: instead, an integration over the experimentally available frequency range provides accurate data.
More specifically, the angular frequency (ω) boundaries for computing one component of the complex function from the other one using the Kramers–Kronig relations are ω=0 and ω=∞; these boundaries require extrapolation procedures of the measured impedance spectra. For the Z-HIT, however, the course of the impedance modulus can be computed from the course of the phase shift "within the measured frequency range", without the need for extrapolation. This avoids complications which may arise from the fact that impedance spectra can only be measured in a limited frequency range. Therefore, the Z-HIT algorithm allows for verification of the stationarity of the measured test object as well as calculating the impedance values using the phase data. The latter property becomes important when drift effects are present in the impedance spectra, which have to be detected or even removed when analysing and/or interpreting the spectra.
Z-HIT relations find use in Dielectric spectroscopy and in Electrochemical Impedance Spectroscopy.
Motivation.
An important application of Z-HIT is the examination of experimental impedance spectra for artifacts. The examination of EIS series measurements is often difficult due to the tendency of examined objects to undergo changes during the measurement. This may occur in many standard EIS applications such as the evaluation of fuel cells or batteries during discharge. Further examples include the investigation of light-sensitive systems under illumination (e.g. Photoelectrochemistry) or the analysis of water uptake of lacquers on metal surfaces (e.g. corrosion-protection).
A descriptive example of an unsteady system is a Lithium-ion battery. Under cyclization or discharging, the amount of charge in the battery changes over time. The change in charge is coupled with a chemical redox reaction, translating into a change in the concentrations of the involved substances. This violates the principles of stationarity and causality, which are prerequisites for proper EIS measurements. In theory, this would exclude drift-affected samples from valid evaluation. Using the Z-HIT algorithm, these and similar artifacts can be recognized, and spectra consistent with causality can even be reconstructed, which satisfy the Kramers–Kronig relations and are thereby valid for analysis.
Mathematical Formulation.
Z-HIT is a special case of the Hilbert transform and, through restriction by the Kramers–Kronig relations, it can be derived for one-port systems. The frequency-dependent relationship between impedance and phase angle can be observed in the Bode plot of an impedance spectrum. Equation (1) is obtained as a general solution for the correlation between impedance modulus and phase shift.
formula_0
Equation (1) indicates that the logarithm of the impedance (formula_1) at a specific frequency formula_2 can be calculated up to a constant value of (formula_3), if the phase shift formula_4 is integrated up to the frequency point of interest formula_2, while the starting value formula_5 of the integral can be freely chosen. As an additional contribution to the calculation of formula_1, the odd-numbered derivatives of the phase shift at the point formula_2 have to be added, weighted with the factors formula_6.
The factors formula_6 can be calculated according to equation (2), where formula_7 represents the Riemann ζ-function.
formula_8
The practically applied Z-HIT approximation is obtained from equation (1) by restricting the series to the first derivative of the phase shift and neglecting higher derivatives (equation (3)), where "C" represents a constant.
formula_9
The free choice of the integration boundaries in the Z-HIT algorithm is a fundamental difference from the Kramers–Kronig relations; in Z-HIT the integration boundaries are
formula_10 and formula_11.
The greatest advantage of the Z-HIT results from the fact that both integration boundaries can be chosen within the measured spectrum, so that no extrapolation to the frequencies 0 and formula_12 is required, as with the Kramers–Kronig relations.
Practical implementation.
The practical implementation of the Z-HIT approximation is shown schematically in Figure 1. A continuous curve (spline) for each of the two independent measured quantities (impedance and phase) is created by smoothing (part 1 in Figure (1)) from the measured data points. With the help of the spline for the phase shift, values for the impedance are now calculated. First, the integral of the phase shift is calculated up to the corresponding frequency formula_13, where (if suited) the highest measured frequency is selected as the starting point formula_5 for the integration - c.f. part 2 in Figure (1). From the spline of the phase shift, its slope can be calculated at formula_13 (part 3 in Figure (1)). Thereby, a reconstructed curve of the impedance is obtained which is (in the ideal case) only shifted parallel to the measured curve. There are several possibilities to determine the constant "C" in the Z-HIT equation (part 4 in Figure (1)), one of which is a parallel shift of the reconstructed impedance onto the measured impedance in a frequency range not affected by artifacts (see notes). This shift is performed by a linear regression procedure. By comparing the resulting reconstructed impedance curve to the measured data (or the splines of the impedance), artifacts can easily be detected. These are usually located in the high frequency range (caused by induction or mutual induction, especially when low impedance systems are investigated) or in the low frequency range (caused by the change of the system during the measurement (= drift)).
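The numerical core of this procedure (equation (3)) can be sketched as follows, assuming NumPy and discrete measured data; the spline smoothing, the sign convention of the phase and the choice of the artifact-free fitting band would have to be adapted to the actual data set, and all names are illustrative.

```python
import numpy as np

GAMMA_1 = -np.pi / 6.0   # gamma_1 = -(2/pi) * (1/2) * zeta(2) = -pi/6

def zhit_reconstruct(freq, phase_rad, ln_z_meas, fit_mask):
    """Reconstruct ln|Z| from the phase shift via the Z-HIT approximation (equation (3)).

    freq      : measured frequencies in Hz (any order)
    phase_rad : phase shift in radians at those frequencies
    ln_z_meas : measured ln|Z|, used only to determine the constant C
    fit_mask  : boolean mask selecting an artifact-free frequency band
    """
    freq = np.asarray(freq, float)
    order = np.argsort(freq)
    w = np.log(freq[order])                     # ln(omega), up to an additive constant
    phi = np.asarray(phase_rad, float)[order]
    lnz = np.asarray(ln_z_meas, float)[order]
    mask = np.asarray(fit_mask, bool)[order]

    # Cumulative trapezoidal integral of phi over ln(omega), taken from the highest
    # measured frequency (omega_S) down to each omega_0.
    cum = np.concatenate(([0.0], np.cumsum(0.5 * (phi[1:] + phi[:-1]) * np.diff(w))))
    integral = cum - cum[-1]

    slope = np.gradient(phi, w)                 # d(phi) / d ln(omega)
    lnz_rec = (2.0 / np.pi) * integral + GAMMA_1 * slope

    shift = np.mean(lnz[mask] - lnz_rec[mask])  # constant C from a parallel shift
    return freq[order], lnz_rec + shift
```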
Notes (time requirements during measurement).
The measurement time required for a single impedance measurement point strongly depends on the frequency of interest. While frequencies above about 1 Hz can be measured within seconds, the measurement time increases significantly in the lower frequency range.
Although the exact duration for measuring a complete impedance spectrum depends on the measuring device as well as on internal settings, the following measurement times can be considered as rules of thumb when measuring the frequency measurement points sequentially, with the upper frequency assumed as 100 kHz or 1 MHz:
Measurements down to or below 0.01 Hz are typically associated with measurement times in the range of several hours. Therefore, a spectrum can be roughly divided into three sub-ranges with regard to the occurrence of artifacts: in the high-frequency domain (approx. > 100 to 1000 Hz), induction or mutual induction can dominate. In the low frequency region (< 1 Hz), drift can occur due to noticeable change in the system. The range between about 1 Hz and 1000 Hz is usually not affected by high- or low-frequency artifacts. However, the mains frequency (50/60 Hz) may come into play as distorting artifact in this region.
Notes (application procedure).
In addition to the reconstruction of the impedance from the phase shift, the reverse approach is also possible. However, the procedure presented here possesses several advantages.
Applications.
Figure 3 shows an impedance spectrum of a measurement series of a painted steel sample during water uptake (upper part in Figure 3). The symbols in the diagram represent the interpolation points (nodes) of the measurement, while the solid lines represent the theoretical values simulated according to an appropriate model. The interpolation points for the impedance were obtained by the Z-HIT reconstruction of the phase shift.
The bottom part of Figure 3 depicts the normalized error (Z_ZHIT − Z_smooth)/Z_ZHIT · 100 of the impedance. For the error calculation, two different procedures are used to determine the "extrapolated impedance values":
The simulation according to the appropriate model is performed using the two different impedance curves. The corresponding residuals are calculated and depicted in the bottom part of the diagram in Figure (3).
Note: Error patterns as shown in the magenta bottom diagram in Figure (3) may be the motivation to extend an existing model by additional elements to minimize the fitting error. However, this is not possible in every case. The drift in the impedance spectrum mainly influences the low-frequency part by means of a changing system during the measurement. The spectrum in Figure 3 is caused by water penetrating into the pores of the lacquer, which reduces the impedance (resistance) of the coating. Therefore, the system behaves as if at each low-frequency measurement point the resistance of the coating was replaced by a further, smaller resistance due to the water uptake. However, there is no impedance element that exhibits such behavior. Therefore, any extension of the model would only result in a "smearing" of the error over a wider frequency range without reducing the error itself. Only the removal of the drift by reconstructing the impedance using Z-HIT leads to a significantly better compatibility between measurement and model.
Figure 4 shows a Bode plot of an impedance series measurement, performed on a fuel cell where the hydrogen of the fuel gas was deliberately poisoned by the addition of carbon monoxide. Due to the poisoning, active centers of the platinum catalyst are blocked, which severely impairs the performance of the fuel cell. The blocking of the catalyst depends on the potential, resulting in alternating sorption and desorption of the carbon monoxide on the catalyst surface within the cell. This "cyclical" change of the active catalyst surface translates to pseudo-inductive behavior, which can be observed in the impedance spectrum of Figure 4 at low frequencies (< 3 Hz). The impedance curve was reconstructed by Z-HIT and is represented by the purple line, while the originally measured values are represented by the blue circles. The deviation in the low frequency part of the measurement can be clearly observed. Evaluation of the spectra shows significantly better agreement between model and measurement if the reconstructed Z-HIT impedances are used instead of the original data.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": " (1)\\text{ } \\ln \\left[Z \\left(\\omega_o \\right) \\right] - \\ln \\left[Z \\left(0 \\right) \\right] \\text{ }= \\text{ }\\frac{2}{\\pi}\\cdot \\int\\limits^{\\omega_O}_{\\omega_S}\\varphi \\left(\\omega \\right)dln \\left(\\omega \\right) \\text{ }+ \\text{ }\\gamma_k \\cdot \\sum^{\\infty}_{k=1}\\frac{d^{k}\\varphi \\left(\\omega_0 \\right)}{d \\ln {\\left(\\omega \\right)}^{k}} \\text { } \\text { } \\text { } \\text { } with \\text { } k \\text { }= \\text { }1, 3, 5, 7, \\ldots (k\\text { } = \\text {odd})"
},
{
"math_id": 1,
"text": " \\ln \\left[Z \\left(\\omega_o \\right) \\right] "
},
{
"math_id": 2,
"text": " \\omega_O "
},
{
"math_id": 3,
"text": " \\ln \\left[Z \\left(0 \\right) \\right] "
},
{
"math_id": 4,
"text": " \\varphi \\left(\\omega \\right) "
},
{
"math_id": 5,
"text": " \\omega_S "
},
{
"math_id": 6,
"text": " \\gamma_k "
},
{
"math_id": 7,
"text": " \\zeta \\left(k + 1 \\right) "
},
{
"math_id": 8,
"text": " (2)\\text{ } \\gamma_k = {\\left(- 1 \\right)}^k \\cdot \\frac{2}{\\pi}\\cdot \\frac{1}{2^k}\\cdot \\zeta \\left(k + 1 \\right) \\text { } \\text { } \\text { } \\text { } with \\text { } k \\text { }= \\text { } 1, 3, 5, 7, \\ldots (k\\text { } = \\text {odd numbered}) "
},
{
"math_id": 9,
"text": "(3)\\text{ } \\ln \\left[Z \\left(\\omega_o \\right) \\right] \\text{ }= \\text{ }\\frac{2}{\\pi}\\text{ }\\int\\limits^{\\omega_O}_{\\omega_S}\\varphi \\left(\\omega \\right)dln \\left(\\omega \\right) \\text{ }\\text{ }+ \\text{ }\\text{ }\\gamma_{1} \\text{ }\\frac{d\\varphi \\left(\\omega_O \\right)}{d \\ln \\left(\\omega \\right)}\\text{ }+ \\text{ }C"
},
{
"math_id": 10,
"text": " \\omega = {\\omega_S}\\text{ } "
},
{
"math_id": 11,
"text": " \\omega = {\\omega_0}\\text{ } "
},
{
"math_id": 12,
"text": " \\infty "
},
{
"math_id": 13,
"text": " \\omega_0 "
}
]
| https://en.wikipedia.org/wiki?curid=70480889 |
7048539 | Pinning force | Pinning force is a force acting on a pinned object from a pinning center. In solid state physics, this most often refers to the vortex pinning, the pinning of the magnetic vortices (magnetic flux quanta, Abrikosov vortices) by different kinds of the defects in a type II superconductor. Important quantities are the "individual" maximal pinning force, which defines the depinning of a single vortex, and an "average" pinning force, which defines the depinning of the correlated vortex structures and can be associated with the critical current density (the maximal density of non-dissipative current). The interaction of the correlated vortex lattice with system of pinning centers forms the magnetic phase diagram of the vortex matter in superconductors. This phase diagram is especially rich for high temperature superconductors (HTSC) where the thermo-activation processes are essential.
The pinning mechanism is based on the fact that the amount of grain boundary area is reduced when a particle is located on a grain boundary. It is also assumed that particles are spherical and the particle-matrix interface is incoherent. When a moving grain boundary meets a particle at an angle formula_0, the particle exerts a pinning force formula_1 on the grain boundary that is equal to formula_2; with formula_3 the particle radius and formula_4 the energy per unit of grain boundary area. | [
{
"math_id": 0,
"text": "\\beta"
},
{
"math_id": 1,
"text": "F"
},
{
"math_id": 2,
"text": "F = 2\\pi \\sigma \\cos \\beta \\sin \\beta"
},
{
"math_id": 3,
"text": "r"
},
{
"math_id": 4,
"text": "\\sigma"
}
]
| https://en.wikipedia.org/wiki?curid=7048539 |
70486756 | Yarn realisation | Operational parameter of spinning
In textile spinning, yarn realisation (YR), or yarn recovery, is an operational parameter of yarn manufacturing. It is the percentage conversion of raw material to finished yarn; the remainder consists of waste fibers of lower value, and the weight of the produced yarn is compared with a given weight of raw material. The quantity of waste removed during the various phases of yarn spinning, such as blow-room, carding, and combing, is often used to determine yarn realisation. Yarn realisation ranges between 85% and 90% in carded cotton yarns and between 67% and 75% in combed cotton yarns.
Significance.
Yarn realisation is one of the important factors that affect the quality of the yarn, profitability, and lead time of a spinning mill. Better realisations make spinning mills more competitive, and greater realisations mean better economics for a spinning business. Even minor changes in yarn realisation, say 1%, translate into a huge impact on spinning production economics. Thus, controlling yarn realisation is as critical to a mill as controlling cotton and mixing costs.
Formula.
formula_0
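For illustration, a minimal calculation of this percentage (the figures are invented):

```python
def yarn_realisation(yarn_produced_kg, cotton_consumed_kg):
    """Percentage conversion of raw cotton into finished yarn."""
    return yarn_produced_kg / cotton_consumed_kg * 100

print(yarn_realisation(870, 1000))  # 87.0, within the typical carded-yarn range
```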
Components.
The following components play a significant role in yarn realisation:
Raw material.
In the spinning industry, the cost of raw material is directly influenced by: procurement, methods of mixing, yarn realisation (waste standards), and re-use of waste. After picking, the cotton lint in compressed bales is transferred to the yarn spinning mills.
Cotton lint.
Cotton lint refers to the fibrous coat that covers the cotton seeds. Cotton lint is ginned cotton. The lint that is delivered to the spinning mill contains a variety of extraneous materials, including seed pieces, dust, and motes, which are collectively referred to as trash. Yarn realisation (YR) is largely influenced by the trash content of cotton, the intended yarn quality, and the type of machinery used.
Trash percentage.
Trash is non-lint material that is present with cotton lint. It is made up of leaf fragments, bark bits, grass, plastic pieces, sand, and dust. The level of contamination is determined by cultivation, harvesting, and ginning conditions.
Short fibers.
Cotton is a natural plant fiber, and depending upon many conditions, such as geography, seed quality, and cultivation, the length of the fiber varies from lot to lot, as does its quality. Increased efficiency in yarn manufacturing and yarn quality are dependent on certain fiber characteristics. For example, short fiber content (SFC) by number and by weight influences the productivity and the quality of the yarn. Cotton lint with more than 25% SFC(n) is a problem in spinning. Extraction of short fibers, also known as "noil", improves the yarn quality and consequently affects yarn realisation. Extra noil extraction is required for superior and fine quality yarn, which affects the yarn realisation.
Moisture.
Cotton is a hygroscopic fiber, which means it takes in moisture from the environment and also dries quickly if it is kept in a dry place. A small amount of moisture loss in the lint may also contribute to yarn realisation.
Humidification.
To maintain a specified level of moisture in cotton, the relative humidity must be maintained at 65 percent during mixing, winding, and packing. Moisture helps in reducing fluff generation and in decreasing invisible losses.
Spinning.
Spinning is a process in textile manufacturing in which staple fibers are converted into yarn. Optimizing spinning processes and waste management benefits the yarn realisation and the economics of a spinning mill.
Preparatory.
Mixing of cotton.
Cotton fibers vary in terms of staple length and other physical qualities; this is an inherent characteristic. Bale mixing, or bale management, is the process of testing, sorting, and then mixing fibers from different bales (including bales received from different stations) according to their fiber qualities in order to produce a certain quality of yarn at the lowest possible cost.
Waste management.
Waste in textiles is classified into two types: production waste, which serves as raw material for subsequent steps in spinning production (cleaning waste and short fibers left out in carding or combing; it is reusable waste), and post-production waste, which is not related to spinning but occurs at the yarn-to-fabric stages (manufacturing and processing). Waste management in spinning contributes to better yarn realisation.
The two types of waste that are relevant to yarn realisation are hard waste, which is not reused, and reusable waste, also called soft waste, which includes sliver bits, lap bits, roving ends, roller waste, and pneumafil.
Recycled yarn.
Yarn produced from the waste fibers can reduce the losses and contribute to the mill's profitability. There are many areas where waste fibers can be used, such as blending. | [
{
"math_id": 0,
"text": " \\mathrm{Yarn\\ realisation} = \\frac{\\mathrm{Yarn\\ produced}}{\\mathrm{Consumption\\ of\\ cotton}}\\times 100 "
}
]
| https://en.wikipedia.org/wiki?curid=70486756 |
70493982 | Radial flux motor | A radial flux motor generates flux perpendicular to the axis of rotation. By contrast, an axial flux motor generates flux parallel to the axis.
Design.
The features of a radial flux motor are placed on the sides. The copper windings are wrapped around slots.
A traditional radial flux BLDC motor places a rotor made of permanent magnets inside the stator. The stator contains support known as a yoke, which is outfitted with "teeth", individually wrapped with electromagnetic coils. The teeth function as alternating magnetic poles. The rotor’s magnetic poles interact with the alternating magnetic flux of the teeth, producing torque.
The use of grain-oriented steel in radial flux motors is challenging due to the curving geometry of the magnetic flux path.
Permanent magnet motors.
Radial flux motors typically use less permanent magnet material, at the cost of lower torque density.
Torque, speed, power.
Torque, speed, and power are related by:
formula_0
where P is the mechanical power in watts, T is the torque in newton-metres, and ω is the angular speed in radians per second.
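A minimal sketch of this relation in code (the numbers are arbitrary):

```python
import math

def mechanical_power(torque_nm, speed_rpm):
    """P = T * omega, with omega converted from rpm to rad/s."""
    omega = speed_rpm * 2 * math.pi / 60
    return torque_nm * omega

print(mechanical_power(10, 3000))  # about 3141.6 W
```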
Thermal management.
While permanent magnet radial flux motors offer considerably higher power than induction motors, they produce more heat, which must therefore be removed. This occurs either via conduction or air/water cooling, depending on application requirements.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "P = T * \\omega"
}
]
| https://en.wikipedia.org/wiki?curid=70493982 |
70501237 | GCD matrix | In mathematics, a greatest common divisor matrix (sometimes abbreviated as GCD matrix) is a matrix that may also be referred to as Smith's matrix. The study was initiated by H.J.S. Smith (1875). A new inspiration was begun from the paper of Bourque & Ligh (1992). This led to intensive investigations on singularity and divisibility of GCD type matrices. A brief review of papers on GCD type matrices before that time is presented in .
Definition.
Let formula_0 be a list of positive integers. Then the formula_1 matrix formula_2 having the greatest common divisor formula_3 as its formula_4 entry is referred to as the GCD matrix on formula_5. The LCM matrix formula_6 is defined analogously.
The study of GCD type matrices originates from Smith (1875), who evaluated the determinant of certain GCD and LCM matrices.
Smith showed, among other things, that the determinant of the formula_1 matrix formula_7 is formula_8, where formula_9 is Euler's totient function.
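Smith's evaluation is easy to check numerically for small formula_1; the following sketch (assuming SymPy is available) compares the exact determinant of formula_7 with the product of totients formula_8.

```python
from math import gcd, prod
from sympy import Matrix, totient

n = 8
A = Matrix(n, n, lambda i, j: gcd(i + 1, j + 1))   # entries gcd(i, j), 1-based
det_A = A.det()                                    # exact integer determinant
phi_product = prod(int(totient(k)) for k in range(1, n + 1))
print(det_A, phi_product, det_A == phi_product)    # both equal phi(1)*...*phi(n)
```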
Bourque–Ligh conjecture.
Bourque & Ligh (1992) conjectured that the LCM matrix on a GCD-closed set formula_10 is nonsingular. This conjecture was later shown to be false; a lattice-theoretic approach to the problem has also been provided.
One of the counterexamples presented is formula_11 and another is
formula_12 A counterexample consisting of odd numbers is formula_13. Its Hasse diagram is presented on the right below.
The cube-type structures of these sets with respect to the divisibility relation have also been explained in the literature.
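The failure of the conjecture can be checked directly; the sketch below (assuming SymPy and Python 3.9 or later for math.lcm) computes the exact determinant of the LCM matrix on the first counterexample set, which comes out as zero in accordance with the singularity stated above.

```python
from math import lcm
from sympy import Matrix

S = [1, 2, 3, 4, 5, 6, 10, 45, 180]        # counterexample set quoted in the text
L = Matrix(len(S), len(S), lambda i, j: lcm(S[i], S[j]))
print(L.det())                              # 0: the LCM matrix on S is singular
```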
Divisibility.
Let formula_0 be a factor closed set.
Then the GCD matrix formula_2 divides the LCM matrix formula_6 in the ring of formula_1 matrices over the integers, that is, there is an integral matrix formula_14 such that formula_15.
Since the matrices formula_2 and formula_6 are symmetric, we have formula_16. Thus, divisibility from the right coincides with that from the left, and we may simply use the term divisibility.
There is in the literature a large number of generalizations and analogues of this basic divisibility result.
Matrix norms.
Some results on matrix norms of GCD type matrices are presented in the literature.
Two basic results concern the asymptotic behaviour of the formula_17 norm of
the GCD and LCM matrix on formula_18.
Given formula_19, the formula_17 norm of an formula_1 matrix formula_20 is defined as
formula_21
Let formula_18. If formula_22, then
formula_23
where
formula_24
and formula_25 for formula_26 and formula_27.
Further, if formula_28, then
formula_29
where
formula_30
Factorizations.
Let formula_31 be an arithmetical function, and let formula_0 be a set of distinct positive integers. Then the matrix formula_32 is referred to as the GCD matrix on formula_5 associated with formula_31.
The LCM matrix formula_33 on formula_5 associated with formula_31 is defined analogously.
One may also use the notations formula_34 and formula_35.
Let formula_5 be a GCD-closed set.
Then
formula_36
where formula_37 is the formula_1 matrix defined by
formula_38
and formula_39 is the formula_1 diagonal matrix, whose diagonal elements are
formula_40
Here formula_41 is the Dirichlet convolution and formula_42 is the Möbius function.
Further, if formula_31 is a multiplicative function and always nonzero,
then
formula_43
where formula_44 and formula_45 are the formula_1 diagonal matrices, whose diagonal elements are
formula_46
and
formula_47
If formula_5 is factor-closed, then formula_48 and
formula_49.
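As a concrete check of these formulas, the following sketch (NumPy assumed) builds formula_37 and formula_39 for the factor-closed set S = {1, ..., 8} with formula_31 the identity function, for which formula_48 becomes Euler's totient, and verifies that formula_36 reproduces the GCD matrix.

```python
import numpy as np
from math import gcd

def totient(n):
    return sum(1 for k in range(1, n + 1) if gcd(k, n) == 1)

S = list(range(1, 9))                      # factor-closed: every divisor of an element is in S
E = np.array([[1 if x % y == 0 else 0 for y in S] for x in S])    # e_ij = 1 iff x_j | x_i
Delta = np.diag([totient(x) for x in S])   # delta_i = (f * mu)(x_i) = phi(x_i) for f = id
G = np.array([[gcd(x, y) for y in S] for x in S])                 # GCD matrix (S)_f with f = id

print(np.array_equal(E @ Delta @ E.T, G))  # True
```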
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "S=(x_1, x_2,\\ldots, x_n)"
},
{
"math_id": 1,
"text": "n\\times n"
},
{
"math_id": 2,
"text": "(S)"
},
{
"math_id": 3,
"text": "\\gcd(x_i, x_j)"
},
{
"math_id": 4,
"text": "ij"
},
{
"math_id": 5,
"text": "S"
},
{
"math_id": 6,
"text": "[S]"
},
{
"math_id": 7,
"text": "(\\gcd(i,j))"
},
{
"math_id": 8,
"text": "\\phi(1)\\phi(2)\\cdots\\phi(n)"
},
{
"math_id": 9,
"text": "\\phi"
},
{
"math_id": 10,
"text": " S "
},
{
"math_id": 11,
"text": "S = \\{1,2,3,4,5,6, 10,45,180\\}"
},
{
"math_id": 12,
"text": "S=\\{1,2,3,5,36,230,825,227700\\}."
},
{
"math_id": 13,
"text": "S = \\{1, 3, 5, 7, 195, 291, 1407, 4025, 1020180525 \\}"
},
{
"math_id": 14,
"text": "B"
},
{
"math_id": 15,
"text": "[S]=B(S)"
},
{
"math_id": 16,
"text": "[S]=(S) B^T"
},
{
"math_id": 17,
"text": "\\ell_p"
},
{
"math_id": 18,
"text": "S=\\{1, 2,\\dots, n\\}"
},
{
"math_id": 19,
"text": "p\\in\\N^+"
},
{
"math_id": 20,
"text": "A"
},
{
"math_id": 21,
"text": "\n\\Vert A\\Vert_p\n=\\left(\\sum_{i=1}^n \\sum_{j=1}^n |a_{ij}|^p \\right)^{1/p}.\n"
},
{
"math_id": 22,
"text": "p\\ge 2"
},
{
"math_id": 23,
"text": "\n\\Vert (S)\\Vert_p=C_p^{1/p} n^{1+(1/p)}+O((n^{(1/p)-p}E_p(n)),\n"
},
{
"math_id": 24,
"text": "\nC_p:=\\frac{2\\zeta(p)-\\zeta(p+1)}{(p+1)\\zeta(p+1)}\n"
},
{
"math_id": 25,
"text": "E_p(x)=x^p"
},
{
"math_id": 26,
"text": "p>2"
},
{
"math_id": 27,
"text": "E_2(x)=x^2\\log x"
},
{
"math_id": 28,
"text": "p\\ge 1"
},
{
"math_id": 29,
"text": "\n\\Vert [S]\\Vert_p=D_p^{1/p} n^{2+(2/p)}+O((n^{(2/p)+1}(\\log n)^{2/3}(\\log\\log n)^{4/3}), \n"
},
{
"math_id": 30,
"text": " \nD_p:=\\frac{\\zeta(p+2)}{(p+1)^2\\zeta(p)}.\n"
},
{
"math_id": 31,
"text": "f"
},
{
"math_id": 32,
"text": "(S)_f=(f(\\gcd(x_i, x_j))"
},
{
"math_id": 33,
"text": "[S]_f"
},
{
"math_id": 34,
"text": "(S)_f=f(S)"
},
{
"math_id": 35,
"text": "[S]_f=f[S]"
},
{
"math_id": 36,
"text": "\n(S)_f=E\\Delta E^T,\n"
},
{
"math_id": 37,
"text": "E"
},
{
"math_id": 38,
"text": "\ne_{ij}=\n\\begin{cases}\n1 & \\mbox{if } x_j\\,\\mid\\, x_i,\\\\\n0 & \\mbox{otherwise}\n\\end{cases}\n"
},
{
"math_id": 39,
"text": "\\Delta"
},
{
"math_id": 40,
"text": "\n\\delta_i=\\sum_{d\\mid x_i\\atop {d\\nmid x_t\\atop x_t<x_i}}\n(f\\star\\mu)(d).\n"
},
{
"math_id": 41,
"text": "\\star"
},
{
"math_id": 42,
"text": "\\mu"
},
{
"math_id": 43,
"text": "\n[S]_f=\\Lambda E\\Delta^\\prime E^T\\Lambda,\n"
},
{
"math_id": 44,
"text": "\\Lambda"
},
{
"math_id": 45,
"text": "\\Delta'"
},
{
"math_id": 46,
"text": "\\lambda_i=f(x_i)"
},
{
"math_id": 47,
"text": "\n\\delta_i^\\prime=\\sum_{d\\vert x_i\\atop {d\\nmid x_t\\atop x_t<x_i}}\n(\\frac{1}{f}\\star\\mu)(d).\n"
},
{
"math_id": 48,
"text": "\\delta_i=(f\\star\\mu)(x_i)"
},
{
"math_id": 49,
"text": "\\delta_i^\\prime=(\\frac{1}{f}\\star\\mu)(x_i)"
}
]
| https://en.wikipedia.org/wiki?curid=70501237 |
70509023 | Balance of angular momentum | The balance of angular momentum or Euler's second law in classical mechanics is a law of physics, stating that to alter the angular momentum of a body a torque must be applied to it.
An example of use is the playground merry-go-round in the picture. To put it in rotation it must be pushed. Technically one summons a torque that feeds angular momentum to the merry-go-round. The torque of frictional forces in the bearing and drag, however, make a resistive torque that will gradually lessen the angular momentum and eventually stop rotation.
The mathematical formulation states that the rate of change formula_0 of angular momentum formula_1 about a point formula_2, is equal to the sum of the external torques formula_3 acting on that body about that point:
formula_4
The point formula_2 is a fixed point in an inertial system or the center of mass of the body. In the special case when the external torques vanish, this shows that the angular momentum is conserved. The d'Alembert force counteracting the change of angular momentum shows up as a gyroscopic effect.
From the balance of angular momentum follows the equality of corresponding shear stresses or the symmetry of the Cauchy stress tensor. The same follows from the Boltzmann Axiom, according to which internal forces in a continuum are torque-free. Thus the balance of angular momentum, the symmetry of the Cauchy stress tensor, and the Boltzmann Axiom in continuum mechanics are related terms.
Especially in the theory of the top the balance of angular momentum plays a crucial part. In continuum mechanics it serves to exactly determine the skew-symmetric part of the stress tensor.
The balance of angular momentum is, besides the Newtonian laws, a fundamental and independent principle and was introduced as such first by Swiss mathematician and physicist Leonhard Euler in 1775.
History.
Swiss mathematician Jakob I Bernoulli applied the balance of angular momentum in 1703 – without explicitly formulating it – to find the center of oscillation of a pendulum, which he had already done in a first, somewhat incorrect manner in 1686. The balance of angular momentum thus preceded Newton's laws, which were first published in 1687.
In 1744, Euler was the first to use the principles of momentum and of angular momentum to state the equations of motion of a system. In 1750, in his treatise "Discovery of a new principle of mechanics", he published Euler's equations of rigid body dynamics, which today are derived from the balance of angular momentum, but which Euler could deduce for the rigid body from Newton's second law. After studies of plane elastic continua, for which the balance of torques is indispensable, Euler raised the balance of angular momentum to an independent principle for calculating the motion of bodies in 1775.
In 1822, French mathematician Augustin-Louis Cauchy introduced the stress tensor whose symmetry in combination with the balance of linear momentum made sure the fulfillment of the balance of angular momentum in the general case of the deformable body. The interpretation formula_5 of the balance of angular momentum was first noted by M. P. Saint-Guilhem in 1851.
Kinetics of rotation.
Kinetics deals with states that are not in mechanical equilibrium. According to Newton's second law, an external force leads to a change in velocity (acceleration) of a body. Analogously, an external torque means a change in angular velocity, resulting in an angular acceleration. The inertia relating to rotation depends not only on the mass of a body but also on its spatial distribution, which for a rigid body is expressed by the moment of inertia. For a rotation around a fixed axis, the torque is proportional to the angular acceleration, with the moment of inertia as the proportionality factor. Note that the moment of inertia depends not only on the position of the axis of rotation (see Steiner's theorem) but also on its direction. If the above law is formulated more generally for an arbitrary axis of rotation, the inertia tensor must be used.
In the two-dimensional special case, a torque only accelerates or slows down a rotation. In the general three-dimensional case, however, it can also alter the direction of the axis of rotation (precession).
Boltzmann Axiom.
In 1905, Austrian physicist Ludwig Boltzmann pointed out that with reduction of a body into infinitesimally smaller volume elements, the inner reactions have to meet all static conditions for mechanical equilibrium. Cauchy's stress theorem handles the equilibrium in terms of force. For the analogous statement in terms of torque, German mathematician Georg Hamel coined the name "Boltzmann Axiom".
This axiom is equivalent to the symmetry of the Cauchy stress tensor. For the resultants of the stresses do not exert a torque on the volume element, the resultant force must lead through the center of the volume element. The line of action of the inertia forces and the normal stress resultants σxx·dy and σyy·dx lead through the center of the volume element. In order that the shear stress resultants τxy·dy and τyx·dx lead through the center of the volume element
formula_6
must hold. This is actually the statement of the equality of corresponding shear stresses in the xy plane.
Cosserat Continuum.
In addition to the torque-free classical continuum with a symmetric stress tensor, Cosserat continua (polar continua) that are not torque-free have also been defined. One application of such a continuum is the theory of shells. Cosserat continua are capable of transporting not only a momentum flux but also an angular momentum flux. Therefore, there may also be sources of momentum and angular momentum inside the body. Here the Boltzmann axiom does not apply and the stress tensor may be skew-symmetric.
If these fluxes are treated as usual in continuum mechanics, field equations arise in which the skew-symmetric part of the stress tensor has no energetic significance. The balance of angular momentum becomes independent of the balance of energy and is used to determine the skew-symmetric part of the stress tensor. American mathematician Clifford Truesdell saw in this the "true basic sense of Euler's second law".
Area rule.
The area rule is a corollary of the angular momentum law in the form: "The resulting moment is equal to the product of twice the mass and the time derivative of the areal velocity."
It refers to the position vector formula_7 of a point mass with mass "m". With the velocity formula_8 and the momentum formula_9, its angular momentum is
formula_10.
In the infinitesimal time d"t" the position vector sweeps over a triangle whose area is formula_11 (see the image, areal velocity and cross product "×"). Thus:
formula_12.
With Euler's second law this becomes:
formula_13.
The special case of plane, moment-free central force motion is treated by Kepler's second law, also known as the "area rule".
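A simple numerical illustration of the area rule (a sketch with arbitrary units and invented initial conditions): for a plane motion under a moment-free central force, twice the areal velocity, here the quantity |r × v|, remains constant along the trajectory.

```python
# Numerical illustration of the area rule for a moment-free central force
# (inverse-square attraction toward the origin); arbitrary units.
def acceleration(x, y, mu=1.0):
    r3 = (x * x + y * y) ** 1.5
    return -mu * x / r3, -mu * y / r3

x, y, vx, vy = 1.0, 0.0, 0.0, 0.8       # initial state
dt, steps = 1e-4, 200_000
for i in range(steps):
    ax, ay = acceleration(x, y)
    # velocity Verlet integration step
    x += vx * dt + 0.5 * ax * dt * dt
    y += vy * dt + 0.5 * ay * dt * dt
    ax2, ay2 = acceleration(x, y)
    vx += 0.5 * (ax + ax2) * dt
    vy += 0.5 * (ay + ay2) * dt
    if i % 50_000 == 0:
        # twice the areal velocity, 2*dA/dt = |r x v|; it stays (numerically) constant
        print(i, x * vy - y * vx)
```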
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\dot{\\vec L}_c"
},
{
"math_id": 1,
"text": "\\vec L_c"
},
{
"math_id": 2,
"text": "\\vec c"
},
{
"math_id": 3,
"text": "\\vec M_c"
},
{
"math_id": 4,
"text": "\\vec M_c = \\dot{\\vec L}_c"
},
{
"math_id": 5,
"text": "\\vec M=\\dot{\\vec L}"
},
{
"math_id": 6,
"text": "\\frac{\\tau_{xy}\\mathrm{d}y}{\\tau_{yx}\\mathrm{d}x}=\\frac{\\mathrm{d}y}{\\mathrm{d}x}\n\\quad\\rightarrow\\quad\\tau_{yx}=\\tau_{xy}"
},
{
"math_id": 7,
"text": "\\vec r"
},
{
"math_id": 8,
"text": "\\dot{\\vec r}"
},
{
"math_id": 9,
"text": "\\vec p=m\\dot{\\vec r}"
},
{
"math_id": 10,
"text": "\\vec L=\\vec r\\times\\vec p=m\\vec r\\times\\dot{\\vec r}\n=m\\vec r\\times\\mathrm{d}\\vec r/\\mathrm{d}t"
},
{
"math_id": 11,
"text": "\\mathrm{d}\\vec A=\\tfrac12\\vec r\\times\\mathrm{d}\\vec r"
},
{
"math_id": 12,
"text": "\\vec L=m\\vec r\\times\\mathrm{d}\\vec r/\\mathrm{d}t\n=2m\\,\\mathrm{d}\\vec A/\\mathrm{d}t=2m\\dot{\\vec A}"
},
{
"math_id": 13,
"text": "\\vec M=\\dot{\\vec L}=2m\\ddot{\\vec A}"
}
]
| https://en.wikipedia.org/wiki?curid=70509023 |
705128 | Dupin cyclide | Geometric inversion of a torus, cylinder or double cone
In mathematics, a Dupin cyclide or cyclide of Dupin is any geometric inversion of a standard torus, cylinder or double cone. In particular, the latter are themselves examples of Dupin cyclides. They were discovered c. 1802 by (and named after) Charles Dupin, while he was still a student at the École polytechnique following Gaspard Monge's lectures. The key property of a Dupin cyclide is that it is a channel surface (envelope of a one-parameter family of spheres) in two different ways. This property means that Dupin cyclides are natural objects in Lie sphere geometry.
Dupin cyclides are often simply known as "cyclides", but the latter term is also used to refer to a more general class of quartic surfaces which are important in the theory of separation of variables for the Laplace equation in three dimensions.
Dupin cyclides were investigated not only by Dupin, but also by A. Cayley, J.C. Maxwell and Mabel M. Young.
Dupin cyclides are used in computer-aided design because cyclide patches have rational representations and are suitable for blending canal surfaces (cylinder, cones, tori, and others).
Definitions and properties.
There are several equivalent definitions of Dupin cyclides. In formula_0, they can be defined as the images under any inversion of tori, cylinders and double cones. This shows that the class of Dupin cyclides is invariant under Möbius (or conformal) transformations.
In complex space formula_1 these three latter varieties can be mapped to one another by inversion, so Dupin cyclides can be defined as inversions of the torus (or the cylinder, or the double cone).
Since a standard torus is the orbit of a point under a two dimensional abelian subgroup of the Möbius group, it follows that the cyclides also are, and this provides a second way to define them.
A third property which characterizes Dupin cyclides is that their curvature lines are all circles (possibly through the point at infinity). Equivalently, the curvature spheres, which are the spheres tangent to the surface with radii equal to the reciprocals of the principal curvatures at the point of tangency, are constant along the corresponding curvature lines: they are the tangent spheres containing the corresponding curvature lines as great circles. Equivalently again, both sheets of the focal surface degenerate to conics. It follows that any Dupin cyclide is a channel surface (i.e., the envelope of a one-parameter family of spheres) in two different ways, and this gives another characterization.
The definition in terms of spheres shows that the class of Dupin cyclides is invariant under the larger group of all Lie sphere transformations; any two Dupin cyclides are Lie-equivalent. They form (in some sense) the simplest class of Lie-invariant surfaces after the spheres, and are therefore particularly significant in Lie sphere geometry.
The definition also means that a Dupin cyclide is the envelope of the one-parameter family of spheres tangent to three given mutually tangent spheres. It follows that it is tangent to infinitely many Soddy's hexlet configurations of spheres.
(CS): A Dupin cyclide can be represented in two ways as the envelope of a one-parameter pencil of spheres, i.e. it is a canal surface with two directrices. The pair of directrices consists of focal conics: either an ellipse and a hyperbola, or two parabolas. In the first case the cyclide is called "elliptic", in the second case "parabolic". In both cases the conics are contained in two mutually orthogonal planes. In extreme cases (if the ellipse is a circle) the hyperbola degenerates to a line and the cyclide is a torus of revolution.
Parametric and implicit representation.
A further special property of a cyclide is:
(CL): Any curvature line of a Dupin cyclide is a "circle".
Elliptic cyclides.
An elliptic cyclide can be represented parametrically by the following formulas (see section Cyclide as channel surface):
formula_5
formula_6
formula_7
formula_8
Of the design parameters formula_2, formula_32 and formula_33 are the semi-major and semi-minor axes and formula_9 is the linear eccentricity of the ellipse:
formula_10
The hyperbola formula_11
is the focal conic to the ellipse. That means: The foci/vertices of the ellipse are the vertices/foci of the hyperbola. The two conics form the two degenerated focal surfaces of the cyclide.
formula_3 can be considered as the average radius of the generating spheres.
For formula_12 or formula_13, respectively, one gets the curvature lines (circles) of the surface.
The corresponding implicit representation is:
formula_14
In case of formula_15 one gets formula_16, i. e. the ellipse is a circle and the hyperbola degenerates to a line. The corresponding cyclides are tori of revolution.
More intuitive design parameters are the intersections of the cyclide with the x-axis. See section Cyclide through 4 points on the x-axis.
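The parametric and the implicit representation can be checked against each other numerically; the following sketch (NumPy assumed, with arbitrarily chosen design parameters) samples points from the parametric formulas and evaluates the implicit equation at these points.

```python
import numpy as np

a, b, d = 1.0, 0.98, 0.30               # design parameters with a > b, arbitrary d
c = np.sqrt(a * a - b * b)              # linear eccentricity of the directrix ellipse

u, v = np.meshgrid(np.linspace(0, 2 * np.pi, 200), np.linspace(0, 2 * np.pi, 200))
den = a - c * np.cos(u) * np.cos(v)
x = (d * (c - a * np.cos(u) * np.cos(v)) + b * b * np.cos(u)) / den
y = b * np.sin(u) * (a - d * np.cos(v)) / den
z = b * np.sin(v) * (c * np.cos(u) - d) / den

# Residual of the implicit equation; it vanishes (up to rounding) at all sample points.
res = (x**2 + y**2 + z**2 + b**2 - d**2) ** 2 - 4 * (a * x - c * d) ** 2 - 4 * b**2 * y**2
print(np.max(np.abs(res)))
```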
Parabolic cyclides.
A parabolic cyclide can be represented by the following parametric representation (see section Cyclide as channel surface):
formula_17
formula_18
formula_19
formula_20
The number formula_21 determines the shape of both the parabolas, which are focal conics:
formula_22 and formula_23
formula_24 determines the relation of the diameters of the two holes (see diagram). formula_25 means: both diameters are equal. In the diagram, formula_26.
A corresponding implicit representation is
formula_27
"Remark": By displaying the circles there appear gaps which are caused by the necessary restriction of the parameters formula_28.
Cyclide as channel surface.
There are two ways to generate an elliptic Dupin cyclide as a channel surface. The first one uses an ellipse as directrix, the second one a hyperbola:
Ellipse as directrix.
In the x-y-plane the directrix is the ellipse with equation
formula_29 and formula_30.
It has the parametric representation
formula_31
formula_32 is the semi major and formula_33 the semi minor axis.
formula_9 is the linear eccentricity of the ellipse. Hence: formula_34.
The radii of the generating spheres are
formula_35
formula_3 is a design parameter. It can be seen as the average of the radii of the spheres. In case of formula_16 the ellipse is a circle and the cyclide a torus of revolution with formula_3 the radius of the generating circle (generatrix).
In the diagram: formula_36.
Maxwell property.
The following simple relation between the actual sphere center (ellipse point) and the corresponding sphere radius is due to Maxwell:
The foci of the ellipse formula_37 are formula_38. If one chooses formula_39 and calculates the distance formula_40, one gets formula_41. Together with the radius of the actual sphere (see above) one gets formula_42.
<br>
Choosing the other focus yields: formula_43
Hence:
In the x-y-plane the envelopes of the circles of the spheres are two circles with the foci of the ellipse as centers and the radii formula_44 (see diagram).
Cyclide through 4 points on the x-axis.
The Maxwell property makes it possible to determine a ring cyclide by prescribing its intersections with the x-axis:
"Given:" Four points formula_46 on the x-axis (see diagram).
"Wanted:" Center formula_47, semiaxes formula_48, linear eccentricity formula_9 and foci of the directrix ellipse and the parameter formula_3 of the corresponding ring cyclide.
From the Maxwell-property one derives
formula_49
Solving for formula_50 yields
formula_51 formula_52
The foci (on the x-axis) are
formula_53 and hence
formula_54
The center of the focal conics (ellipse and hyperbola) has the x-coordinate
formula_55
If one wants to display the cyclide with the help of the parametric representation above, one has to take into account the shift formula_47 of the center!
(H) Swapping formula_56 generates a horn cyclide.<br>
(S) Swapping formula_57 generates a spindle cyclide.<br>
(H1) For formula_58 one gets a 1-horn cyclide.<br>
(R) For formula_59 one gets a ring cyclide touching itself at the origin.
Parallel surfaces.
By increasing or decreasing parameter formula_3, such that the type does not change, one gets parallel surfaces (similar to parallel curves) of the same type (see diagram).
Hyperbola as directrix.
The second way to generate the ring cyclide as channel surface uses the focal hyperbola as directrix. It has the equation
formula_60
In this case the spheres touch the cyclide from outside at the second family of circles (curvature lines). To each arm of the hyperbola belongs a subfamily of circles. The spheres of one family enclose the cyclide (in diagram: purple). Spheres of the other family are touched from outside by the cyclide (blue).
Parametric representation of the hyperbola:
formula_61
The radii of the corresponding spheres are
formula_62
In case of a torus (formula_16) the hyperbola degenerates into the axis of the torus.
Maxwell-property (hyperbola case).
The foci of the hyperbola formula_63 are formula_64. The distance of hyperbola point formula_65 to the focus formula_66 is formula_67 and together with the sphere radius formula_68 one gets formula_69. Analogously one gets formula_70 . For a point on the second arm of the hyperbola one derives the equations: formula_71
Hence:
In the x-z-plane the circles of the spheres with centers formula_72 and radii formula_73 have the two circles (in diagram grey) with centers formula_74 and radii formula_4 as envelopes.
Derivation of the parametric representation.
Elliptic cyclide.
The ellipse and hyperbola (focal conics) are the degenerated focal surfaces of the elliptic cyclide. For any pair formula_75 of points of the ellipse and hyperbola the following is true (because of the definition of a focal surface):
1) The line formula_76 is a normal of the cyclide and
2) the corresponding point formula_77 of the cyclide divides the chord formula_78 in the ratio formula_79 (see diagram).
From the parametric representation of the focal conics and the radii of the spheres
Ellipse: formula_80
Hyperbola: formula_81
one gets the corresponding point formula_77 of the cyclide (see diagram):
formula_82
Calculation in detail leads to the parametric representation of the elliptic cyclide given above.
If one uses the parametric representation given in the article on channel surfaces, then, in general, only one family of parametric curves consists of circles.
Parabolic cyclide.
The derivation of the parametric representation for the parabolic case runs analogously:
With the parametric representations of the focal parabolas (degenerated focal surfaces) and the radii of the spheres:
formula_83
formula_84
one gets
formula_85
which provides the parametric representation above of a parabolic cyclide.
Dupin cyclides and geometric inversions.
An advantage for investigations of cyclides is the property:
(I): Any Dupin cyclide is the image either of a right circular cylinder or a right circular double cone or a torus of revolution by an inversion (reflection at a sphere).
The inversion at the sphere with equation formula_86 can be described analytically by:
formula_87
The most important properties of an inversion at a sphere are:
One can map arbitrary surfaces by an inversion. The formulas above give in any case parametric or implicit representations of the image surface, if the surfaces are given parametrically or implicitly. In case of a parametric surface one gets:
formula_88
But: Only in case of right circular cylinders and cones and tori of revolution one gets Dupin cyclides and vice versa.
Example cylinder.
a) Because lines which do not contain the origin are mapped by an inversion at a sphere (in the picture: magenta) onto circles containing the origin, the image of the cylinder is a ring cyclide with mutually touching circles at the origin. The images of the line segments shown in the picture appear as circle segments. The spheres which touch the cylinder on the inner side are mapped onto a first pencil of spheres which generate the cyclide as a canal surface. The images of the tangent planes of the cylinder become the second pencil of spheres touching the cyclide. The latter pass through the origin.
b) The second example inverses a cylinder that contains the origin. Lines passing the origin are mapped onto themselves. Hence the surface is unbounded and a parabolic cyclide.
Example cone.
The lines generating the cone are mapped onto circles, which intersect at the origin and at the image of the cone's vertex. The image of the cone is a double horn cyclide. The picture shows the images of the line segments (of the cone), which are actually circle segments.
Example torus.
Both the pencils of circles on the torus (shown in the picture) are mapped on the corresponding pencils of circles on the cyclide. In case of a self-intersecting torus one would get a spindle cyclide.
Because Dupin ring-cyclides can be seen as images of tori via suitable inversions and an inversion maps a circle onto a circle or line, the images of the Villarceau circles form further two families of circles on a cyclide (see diagram).
The formula of the inversion of a parametric surface (see above) provides a parametric representation of a cyclide (as inversion of a torus) with circles as parametric curves. But the points of a parametric net are not well distributed. So it is better to calculate the design parameters formula_2 and to use the parametric representation above:
"Given:" A torus, which is shifted out of the standard position along the x-axis.
Let formula_89 be the intersections of the torus with the x-axis (see diagram), all of them nonzero; otherwise the inversion of the torus would not be a ring cyclide.<br>
"Wanted:" semi-axes formula_48 and linear eccentricity formula_9 of the ellipse (directrix) and parameter formula_3 of the ring-cyclide, which is the image of the torus under the inversion at the unitsphere.
The inversion maps formula_90 onto formula_91, which are the x-coordinates of 4 points of the ring-cyclide (see diagram). From section Cyclide through 4 points on the x-axis one gets
formula_92 formula_93 and
formula_94
The center of the focal conics has the x-coordinate
formula_95
Separation of variables.
Dupin cyclides are a special case of a more general notion of a cyclide, which is a natural extension of the notion of a quadric surface. Whereas a quadric can be described as the zero-set of second order polynomial in Cartesian coordinates ("x"1,"x"2,"x"3), a cyclide is given by the zero-set of a second order polynomial in ("x"1,"x"2,"x"3,"r"2), where
"r"2="x"12+"x"22+"x"32. Thus it is a quartic surface in Cartesian coordinates, with an equation of the form:
formula_96
where "Q" is a 3x3 matrix, "P" and "R" are a 3-dimensional vectors, and "A" and "B" are constants.
Families of cyclides give rise to various cyclidic coordinate geometries.
In Maxime Bôcher's 1891 dissertation, "Ueber die Reihenentwickelungen der Potentialtheorie", it was shown that the Laplace equation in three variables can be solved using separation of variables in 17 conformally distinct quadric and cyclidic coordinate geometries. Many other cyclidic geometries can be obtained by studying R-separation of variables for the Laplace equation.
Notes.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\R^3"
},
{
"math_id": 1,
"text": "\\Complex^3"
},
{
"math_id": 2,
"text": "a,b,c,d"
},
{
"math_id": 3,
"text": "d"
},
{
"math_id": 4,
"text": "d\\mp c"
},
{
"math_id": 5,
"text": " x=\\frac{d(c-a\\cos u\\cos v)+b^2\\cos u}{a-c\\cos u \\cos v} \\ ,"
},
{
"math_id": 6,
"text": " y=\\frac{b\\sin u (a-d\\cos v)}{a-c\\cos u \\cos v} \\ ,"
},
{
"math_id": 7,
"text": " z=\\frac{b\\sin v (c \\cos u-d)}{a-c\\cos u \\cos v} \\ ,"
},
{
"math_id": 8,
"text": "0\\le u,v <2\\pi \\ ."
},
{
"math_id": 9,
"text": "c"
},
{
"math_id": 10,
"text": "\\frac{x^2}{a^2}+\\frac{y^2}{b^2}=1, z=0\\ ."
},
{
"math_id": 11,
"text": "\\frac{x^2}{c^2}-\\frac{z^2}{b^2}=1, y=0"
},
{
"math_id": 12,
"text": "u=const"
},
{
"math_id": 13,
"text": "v=const "
},
{
"math_id": 14,
"text": "(x^2+y^2+z^2+b^2-d^2)^2-4(ax-cd)^2-4b^2y^2=0 \\ ."
},
{
"math_id": 15,
"text": "a=b"
},
{
"math_id": 16,
"text": "c=0"
},
{
"math_id": 17,
"text": " x=\\frac{p}{2}\\, \\frac{2v^2+k(1-u^2-v^2)}{1+u^2+v^2} \\ , "
},
{
"math_id": 18,
"text": " y=pu\\, \\frac{v^2+k}{1+u^2+v^2} \\ , "
},
{
"math_id": 19,
"text": " z=pv\\, \\frac{1+u^2-k}{1+u^2+v^2} \\ , "
},
{
"math_id": 20,
"text": " -\\infty<u,v<\\infty \\ . "
},
{
"math_id": 21,
"text": "p"
},
{
"math_id": 22,
"text": "y^2=p^2-2px, \\ z=0\\ "
},
{
"math_id": 23,
"text": "\\ z^2=2px, \\ y=0\\ ."
},
{
"math_id": 24,
"text": "k"
},
{
"math_id": 25,
"text": "k=0.5"
},
{
"math_id": 26,
"text": "k=0.7"
},
{
"math_id": 27,
"text": "\\left(x + \\left(\\frac{k}{2} - 1\\right)p \\right)\\left(x^2+y^2+z^2 - \\frac{k^2p^2}{4}\\right) + pz^2 = 0 \\ ."
},
{
"math_id": 28,
"text": "u,v"
},
{
"math_id": 29,
"text": "\\frac{x^2}{a^2}+\\frac{y^2}{b^2}=1\\ ,\\ z=0\\quad "
},
{
"math_id": 30,
"text": "\\ a>b"
},
{
"math_id": 31,
"text": "\\ x=a\\cos\\varphi\\; ,\\ y=b\\sin \\varphi \\;, \\ z=0\\ ."
},
{
"math_id": 32,
"text": "a"
},
{
"math_id": 33,
"text": "b"
},
{
"math_id": 34,
"text": "c^2=a^2-b^2"
},
{
"math_id": 35,
"text": "r(\\varphi)=d-c\\cos\\varphi \\ ."
},
{
"math_id": 36,
"text": "\\; a=1,\\; b=0.99,\\ d=0.25\\; "
},
{
"math_id": 37,
"text": "\\ E=(a\\cos\\varphi,b\\sin\\varphi,0)\\ "
},
{
"math_id": 38,
"text": "\\ F_i=(\\pm c,0,0)\\ "
},
{
"math_id": 39,
"text": "\\ F_1=(c,0,0)\\ "
},
{
"math_id": 40,
"text": "\\ |EF_1|\\ "
},
{
"math_id": 41,
"text": "\\ |EF_1|=a-c\\cos\\varphi\\ "
},
{
"math_id": 42,
"text": "\\ |EF_1|-r=a-d\\ "
},
{
"math_id": 43,
"text": "\\ |EF_2|+r=a+d\\ . "
},
{
"math_id": 44,
"text": "a\\pm d"
},
{
"math_id": 45,
"text": "x_1,x_2,x_3,x_4"
},
{
"math_id": 46,
"text": "x_1> x_2>x_3>x_4"
},
{
"math_id": 47,
"text": "m_0"
},
{
"math_id": 48,
"text": "a,b"
},
{
"math_id": 49,
"text": "2(a+d)=x_1-x_4 \\; ,\\quad \\ 2(a-d)=x_2-x_3 \\ ."
},
{
"math_id": 50,
"text": "a,c"
},
{
"math_id": 51,
"text": " a=\\frac{1}{4}\\left(x_1+x_2-x_3-x_4\\right),\\quad"
},
{
"math_id": 52,
"text": " d=\\frac{1}{4}\\left(x_1-x_2+x_3-x_4\\right)\\ ."
},
{
"math_id": 53,
"text": "f_1=\\frac{1}{2}(x_2+x_3),\\quad f_2=\\frac{1}{2}(x_1+x_4)"
},
{
"math_id": 54,
"text": "c=\\frac{1}{4}\\left(-x_1+x_2+x_3-x_4\\right)\\; ,\\quad \\ b=\\sqrt{a^2-c^2} \\ "
},
{
"math_id": 55,
"text": "m_0=\\frac{1}{4}\\left(x_1+x_2+x_3+x_4\\right)\\ ."
},
{
"math_id": 56,
"text": "x_1,x_2"
},
{
"math_id": 57,
"text": "x_2,x_3"
},
{
"math_id": 58,
"text": "x_1=x_2=2"
},
{
"math_id": 59,
"text": "x_2=x_3=0"
},
{
"math_id": 60,
"text": "\\frac{x^2}{c^2}-\\frac{z^2}{b^2}=1\\ ,\\ y=0\\ ."
},
{
"math_id": 61,
"text": "(\\pm c\\cosh\\psi,0,b\\sinh\\psi) \\ . "
},
{
"math_id": 62,
"text": "R(\\psi)=a\\cosh\\psi\\mp d\\ ."
},
{
"math_id": 63,
"text": "\\;H_\\pm(\\psi)=(\\pm c\\cosh\\psi, 0, b\\sinh\\psi)\\;"
},
{
"math_id": 64,
"text": "\\;F_i=(\\pm a,0,0)\\;"
},
{
"math_id": 65,
"text": "\\;H_+(\\psi)=(c\\cosh\\psi,0,b\\sinh\\psi)\\;"
},
{
"math_id": 66,
"text": "F_1"
},
{
"math_id": 67,
"text": "\\; |H_+F_1|=a\\cosh\\psi-c\\; "
},
{
"math_id": 68,
"text": "R_+(\\psi)=a\\cosh\\psi-d"
},
{
"math_id": 69,
"text": "\\;|H_+F_1|-R_+=d-c\\;"
},
{
"math_id": 70,
"text": "\\;|H_+F_2|-R_+=d+c\\;"
},
{
"math_id": 71,
"text": "\\;R_--|H_-F_1|=d-c\\; ,\\ R_--|H_-F_2|=d+c\\; ."
},
{
"math_id": 72,
"text": "H_\\pm(\\psi)"
},
{
"math_id": 73,
"text": "R_\\pm(\\psi)"
},
{
"math_id": 74,
"text": "F_{1/2}=(\\pm a,0,0)"
},
{
"math_id": 75,
"text": "E, H"
},
{
"math_id": 76,
"text": "\\overline{EH}"
},
{
"math_id": 77,
"text": "P"
},
{
"math_id": 78,
"text": "EH"
},
{
"math_id": 79,
"text": "r:R"
},
{
"math_id": 80,
"text": "\\quad\\ \\ \\ E(u)=(a\\cos u,b\\sin u,0),\\quad r(u)=d-c\\cos u \\ \\ ,"
},
{
"math_id": 81,
"text": "\\quad H(v)=(\\frac{c}{\\cos v}, 0, b\\tan v), \\quad R(u)=\\frac{a}{\\cos v}-d\\ ."
},
{
"math_id": 82,
"text": " \\ P(u,v)=\\frac{R(v)}{r(u)+R(v)}\\;E(u) +\\frac{r(u)}{r(u)+R(v)}\\;H(v)\\ ."
},
{
"math_id": 83,
"text": "P_1(u)= \\left(\\frac p 2 (1-u^2),pu,0\\right), \\quad r_1=\\frac p 2 (1-k +u^2)\\; "
},
{
"math_id": 84,
"text": "P_2(v)= \\left(\\frac p 2 v^2,0,pv\\right), \\qquad \\qquad r_2=\\frac p 2 (k+v^2) \\ "
},
{
"math_id": 85,
"text": " \\ P(u,v)=\\frac{r_2(v)}{r_1(u)+r_2(v)}\\;P_1(u) +\\frac{r_1(u)}{r_1(u)+r_2(v)}\\;P_2(v) "
},
{
"math_id": 86,
"text": "x^2+y^2+z^2=R^2"
},
{
"math_id": 87,
"text": " (x,y,z) \\rightarrow \\frac{R^2\\cdot(x,y,z)}{x^2+y^2+z^2} \\ ."
},
{
"math_id": 88,
"text": "(x(u,v),y(u,v),z(u,v)) \\rightarrow \\frac{R^2\\cdot(x(u,v),y(u,v),z(u,v))}{x(u,v)^2+y(u,v)^2+z(u,v)^2} \\ ."
},
{
"math_id": 89,
"text": "x_1, x_2,x_3,x_4"
},
{
"math_id": 90,
"text": "x_i"
},
{
"math_id": 91,
"text": "\\tfrac{1}{x_i}"
},
{
"math_id": 92,
"text": " a=\\frac{1}{4}\\left(\\frac{1}{x_1}+\\frac{1}{x_2}-\\frac{1}{x_3}-\\frac{1}{x_4}\\right),\\quad"
},
{
"math_id": 93,
"text": " d=\\frac{1}{4}\\left(-\\frac{1}{x_1}+\\frac{1}{x_2}-\\frac{1}{x_3}+\\frac{1}{x_4}\\right)\\ "
},
{
"math_id": 94,
"text": "c=\\frac{1}{4}\\left(\\frac{1}{x_1}-\\frac{1}{x_2}-\\frac{1}{x_3}+\\frac{1}{x_4}\\right)\\; ,\\quad \\ b=\\sqrt{a^2-c^2} \\ "
},
{
"math_id": 95,
"text": "m_0=\\frac{1}{4}\\left(\\frac{1}{x_1}+\\frac{1}{x_2}+\\frac{1}{x_3}+\\frac{1}{x_4}\\right)\\ ."
},
{
"math_id": 96,
"text": "\nA r^4 + \\sum_{i=1}^3 P_i x_i r^2 + \\sum_{i,j=1}^3 Q_{ij} x_i x_j + \\sum_{i=1}^3 R_i x_i + B = 0\n"
}
]
| https://en.wikipedia.org/wiki?curid=705128 |
70515538 | EBU colour bars | Television test card
The EBU colour bars are a television test card used to check if a video signal has been altered by recording or transmission, and what adjustments must be made to bring it back to specification. It is also used for setting a television monitor or receiver to reproduce chrominance and luminance information correctly. The EBU bars are most commonly shown arranged side-by-side in a vertical manner (as in the images in this article), though some broadcasters – such as TVP in Poland, and Gabon Télévision in Gabon – were known to have aired a horizontal version of the EBU bars.
It is similar to the SMPTE color bars, although that pattern is typically associated with the NTSC analogue colour TV system. Many test cards, such as Philips PM5544 or Telefunken FuBK, feature elements equivalent to the EBU colour bars.
75% Colour Bars.
The 75% Colour Bars or EBU/IBA 100/0/75/0 Colour Bars pattern is very similar to the SMPTE colour bars pattern, although it only features seven colour bars, and the white bar is at 100% intensity.
There is a variant where the white bar is also at 75% intensity (EBU 75/0/75/0). This pattern is generated by certain types of test equipment – including the Philips PM5519.
The signal values of these bars for the PAL analogue system are:
100% Colour Bars.
An alternate form of colour bars is the 100% Colour Bars or EBU 100/0/100/0 Colour Bars pattern (specified in ITU-R Rec. BT.1729), also known as the RGB pattern or full field bars, which consists of eight vertical bars of 100% intensity, and does not include the castellation or luminance patterns. Like the SMPTE colour bars pattern, the colour order is white, yellow, cyan, green, magenta, red, and blue – but with an additional column of saturated black. This pattern is used to check peak colour levels, and colour saturation, as well as colour alignment. The 100% pattern is not as common as the SMPTE bars, or the above-mentioned EBU 75% pattern, but many pieces of test equipment can be selected to generate either one. Many professional cameras can be set to generate a 100% pattern for calibration of broadcast or recording equipment, especially in a multi-camera installation where all camera signals must match.
Standard Definition.
EBU colour bar values for standard-definition television systems following BT.601, as specified in ITU-R Rec. BT.1729:
Calculation of formula_0 (luminance) and formula_1 (colour difference) signals from formula_2, formula_3 and formula_4 components according to BT.601 is as follows:
formula_5
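As an illustration, here is a minimal Python sketch of this BT.601 conversion applied to nominal 75%-bar R'G'B' values; the bar components below (0.75 for saturated colours, 1.0 for white) are assumptions for illustration, not the article's signal tables.

```python
# Minimal sketch: BT.601 Y'CbCr values for EBU 75% colour bars (signals on a 0..1 scale).
# The R'G'B' bar values are nominal assumptions for the 100/0/75/0 pattern.

def bt601_ycbcr(r, g, b):
    """Convert non-linear R'G'B' (0..1) to Y', Cb', Cr' using the BT.601 coefficients."""
    y = 0.299 * r + 0.587 * g + 0.114 * b
    cb = 0.564 * (b - y)
    cr = 0.713 * (r - y)
    return y, cb, cr

bars = {                       # 100% white, then 75% saturated colours
    "white":   (1.00, 1.00, 1.00),
    "yellow":  (0.75, 0.75, 0.00),
    "cyan":    (0.00, 0.75, 0.75),
    "green":   (0.00, 0.75, 0.00),
    "magenta": (0.75, 0.00, 0.75),
    "red":     (0.75, 0.00, 0.00),
    "blue":    (0.00, 0.00, 0.75),
}

for name, rgb in bars.items():
    y, cb, cr = bt601_ycbcr(*rgb)
    print(f"{name:8s} Y'={y:+.3f} Cb'={cb:+.3f} Cr'={cr:+.3f}")
```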
High Definition.
EBU colour bar values for high definition TV systems following BT.709, as specified in ITU-R Rec. BT.1729:
Calculation of formula_0 (luminance) and formula_1 (colour difference) signals from formula_2, formula_3 and formula_4 components according to BT.709 is as follows:
formula_6
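A corresponding sketch with the BT.709 coefficients, under the same assumptions about the bar values as above:

```python
# Minimal sketch of the BT.709 variant of the same conversion (values on a 0..1 scale).

def bt709_ycbcr(r, g, b):
    y = 0.2126 * r + 0.7152 * g + 0.0722 * b
    cb = (b - y) / 1.8556
    cr = (r - y) / 1.5748
    return y, cb, cr

print(bt709_ycbcr(0.75, 0.75, 0.0))   # e.g. a nominal 75% yellow bar
```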
HDR UHDTV.
In 2020 the EBU published a newer colour bar pattern named Colour Bars for Use in the Production of Hybrid Log Gamma (HDR) UHDTV, designed for HDR broadcasts, taking into account the extended colour gamut of these systems. It includes 100% and 75% ITU-R BT.2100 HLG colour bars, and colour bars which can be converted to ITU-R BT.709 75% bars when "scene-light" and "display-light" mathematical transforms defined in ITU-R BT.2408 are used.
This pattern allows testing of UHDTV to HDTV conversion, measuring luminance response, saturation and hue shifts, and checking near‑black performance. It can also be used to check for correct hardware settings, transmission chain errors, and proper colour space transforms from ITU‑R BT.2100 HLG to ITU‑R BT.709. Versions of the pattern are freely available as a still image or video file.
The pattern is similar to the ITU-R recommendation BT.2111 that also covers the PQ transfer function. Another similar pattern named Colour Bar Test Pattern for Hybrid Log-Gamma (HLG) High Dynamic Range Television (HDR-TV) System was developed by ARIB in 2018 (ARIB STD-B72), based on the SMPTE color bars commonly used in Japan and the United States.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "Y"
},
{
"math_id": 1,
"text": "C_B C_R"
},
{
"math_id": 2,
"text": "R'"
},
{
"math_id": 3,
"text": "G' "
},
{
"math_id": 4,
"text": "B'"
},
{
"math_id": 5,
"text": "\\begin{align}\n Y' &= 0.299 \\cdot R' + 0.587 \\cdot G' + 0.114 \\cdot B'\\\\\n C_B ' &= 0.564 \\cdot (B' - Y')\\\\\n C_R ' &= 0.713 \\cdot (R' - Y')\\\\\n\\end{align}"
},
{
"math_id": 6,
"text": "\\begin{align}\n Y' &= 0.2126 \\cdot R' + 0.7152 \\cdot G' + 0.0722 \\cdot B'\\\\\n C_B ' &= (B' - Y') / 1.8556\\\\\n C_R ' &= (R' - Y') / 1.5748\\\\\n\\end{align}"
}
]
| https://en.wikipedia.org/wiki?curid=70515538 |
705158 | Ruled surface | Surface containing a line through every point
In geometry, a surface S in 3-dimensional Euclidean space is ruled (also called a scroll) if through every point of S, there is a straight line that lies on S. Examples include the plane, the lateral surface of a cylinder or cone, a conical surface with elliptical directrix, the right conoid, the helicoid, and the tangent developable of a smooth curve in space.
A ruled surface can be described as the set of points swept by a moving straight line. For example, a cone is formed by keeping one point of a line fixed whilst moving another point along a circle. A surface is doubly ruled if through every one of its points there are two distinct lines that lie on the surface. The hyperbolic paraboloid and the hyperboloid of one sheet are doubly ruled surfaces. The plane is the only surface which contains at least three distinct lines through each of its points .
The properties of being ruled or doubly ruled are preserved by projective maps, and therefore are concepts of projective geometry. In algebraic geometry, ruled surfaces are sometimes considered to be surfaces in affine or projective space over a field, but they are also sometimes considered as abstract algebraic surfaces without an embedding into affine or projective space, in which case "straight line" is understood to mean an affine or projective line.
Definition and parametric representation.
A surface in 3-dimensional Euclidean space is called a "ruled surface" if it is the union of a differentiable one-parameter family of lines.
Formally, a ruled surface is a surface in formula_0 that can be described by a parametric representation of the form
formula_1
for formula_2 varying over an interval and formula_3 ranging over the reals. It is required that formula_4, and both formula_5 and formula_6 should be differentiable.
Any straight line formula_7 with fixed parameter formula_8 is called a "generator". The vectors formula_9 describe the directions of the generators. The curve formula_10 is called the "directrix" of the representation. The directrix may collapse to a point (in case of a cone, see example below).
The ruled surface above may alternatively be described by
formula_11
with the second directrix formula_12. To recover the first description when starting from two non-intersecting curves formula_13 as directrices, set formula_14
The geometric shape of the directrices and generators is of course essential to the shape of the ruled surface they produce. However, their specific parametric representations also influence the shape of the ruled surface.
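A minimal Python sketch of sampling such a parametric representation formula_1; the directrix and direction field below give the helicoid discussed in the examples that follow, and the pitch value k is an assumption.

```python
# Minimal sketch: sampling a ruled surface x(u,v) = c(u) + v*r(u) on a parameter grid.
import numpy as np

def ruled_surface(c, r, u_vals, v_vals):
    """Sample x(u,v) = c(u) + v*r(u); c and r map a scalar u to a 3-vector."""
    pts = np.array([[c(u) + v * r(u) for v in v_vals] for u in u_vals])
    return pts   # shape (len(u_vals), len(v_vals), 3)

k = 1.0                                                # helicoid pitch (assumed value)
c = lambda u: np.array([0.0, 0.0, k * u])              # directrix: the z-axis
r = lambda u: np.array([np.cos(u), np.sin(u), 0.0])    # generator directions

grid = ruled_surface(c, r, np.linspace(0, 2 * np.pi, 60), np.linspace(-1, 1, 20))
print(grid.shape)   # (60, 20, 3)
```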
Examples.
Right circular cylinder.
A right circular cylinder is given by the equation
formula_15
It can be parameterized as
formula_16
formula_17
formula_18
with
formula_19
formula_20
formula_21
Right circular cone.
A right circular cone is given by the equation
formula_22
It can be parameterized as
formula_23
formula_24
with
formula_25
formula_26
formula_27
In this case one could have used the apex as the directrix, i.e.
formula_28
and
formula_29
as the line directions.
For any cone one can choose the apex as the directrix. This shows that "the directrix of a ruled surface may degenerate to a point".
Helicoid.
A helicoid can be parameterized as
formula_30
formula_31
formula_32
The directrix
formula_33
is the z-axis, the line directions are
formula_34,
and the second directrix
formula_35
is a helix.
The helicoid is a special case of the ruled generalized helicoids.
Cylinder, cone and hyperboloids.
The parametric representation
formula_36
has two horizontal circles as directrices. The additional parameter formula_37 allows one to vary the parametric representations of the circles. For
formula_38 one gets the cylinder formula_39, for
formula_40 one gets the cone formula_41, and for
formula_42 one gets a hyperboloid of one sheet with equation formula_43 and the semi-axes formula_44.
A hyperboloid of one sheet is a doubly ruled surface.
Hyperbolic paraboloid.
If the two directrices in (CD) are the lines
formula_45
one gets
formula_46,
which is the hyperbolic paraboloid that interpolates the 4 points formula_47 bilinearly.
The surface is doubly ruled, because any point lies on two lines of the surface.
For the example shown in the diagram:
formula_48
The hyperbolic paraboloid has the equation formula_49.
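A minimal Python sketch of this bilinear interpolation with the four example points, checking numerically that the resulting points satisfy formula_49:

```python
# Minimal sketch: bilinear interpolation surface through the four example points,
# verifying z = x*y at a few parameter values.
import numpy as np

a1, a2 = np.array([0., 0., 0.]), np.array([1., 0., 0.])
b1, b2 = np.array([0., 1., 0.]), np.array([1., 1., 1.])

def x(u, v):
    return (1 - v) * ((1 - u) * a1 + u * a2) + v * ((1 - u) * b1 + u * b2)

for u, v in [(0.25, 0.5), (0.7, 0.3), (1.0, 1.0)]:
    px, py, pz = x(u, v)
    print(f"u={u}, v={v}: z - x*y = {pz - px * py:.2e}")   # ~0 in every case
```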
Möbius strip.
The ruled surface
formula_50
with
formula_51 (circle as directrix),
formula_52
contains a Möbius strip.
The diagram shows the Möbius strip for formula_53.
A simple calculation shows formula_54 (see next section). Hence the given realization of a Möbius strip is "not developable". But there exist developable Möbius strips.
Developable surfaces.
For the determination of the normal vector at a point one needs the partial derivatives of the representation formula_50:
formula_55,
formula_56.
Hence the normal vector is
formula_57
Since formula_58 (A mixed product with two equal vectors is always 0), formula_59 is a tangent vector at any point formula_60. The tangent planes along this line are all the same, if formula_61 is a multiple of formula_62. This is possible only if the three vectors formula_63 lie in a plane, i.e. if they are linearly dependent. The linear dependency of three vectors can be checked using the determinant of these vectors:
The tangent planes along the line formula_64 are equal, if
formula_65.
A smooth surface with zero Gaussian curvature is called "developable into a plane", or just "developable". The determinant condition can be used to prove the following statement:
A ruled surface formula_50 is developable if and only if
formula_66
at every point.
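A minimal numerical sketch of this determinant test, with the derivatives approximated by finite differences; the cone and Möbius-strip rulings are the ones from the examples above.

```python
# Minimal sketch: numerical check of the developability condition det(c'(u), r'(u), r(u)) = 0.
import numpy as np

def developable_det(c, r, u, h=1e-6):
    dc = (c(u + h) - c(u - h)) / (2 * h)      # finite-difference derivative of the directrix
    dr = (r(u + h) - r(u - h)) / (2 * h)      # finite-difference derivative of the directions
    return np.linalg.det(np.column_stack([dc, dr, r(u)]))

# Cone from the examples: c(u) = r(u) = (cos u, sin u, 1)  -> developable
cone_c = lambda u: np.array([np.cos(u), np.sin(u), 1.0])
cone_r = cone_c
# Möbius-strip ruling from the examples -> not developable
mob_c = lambda u: np.array([np.cos(2 * u), np.sin(2 * u), 0.0])
mob_r = lambda u: np.array([np.cos(u) * np.cos(2 * u), np.cos(u) * np.sin(2 * u), np.sin(u)])

print(developable_det(cone_c, cone_r, 0.7))   # ~0 (developable)
print(developable_det(mob_c, mob_r, 0.0))     # nonzero (not developable)
```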
The generators of any ruled surface coalesce with one family of its asymptotic lines. For developable surfaces they also form one family of its lines of curvature. It can be shown that any developable surface is a cone, a cylinder, or a surface formed by all tangents of a space curve.
The determinant condition for developable surfaces is used to determine numerically developable connections between space curves (directrices). The diagram shows a developable connection between two ellipses contained in different planes (one horizontal, the other vertical) and its development.
An impression of the usage of developable surfaces in "Computer Aided Design" (CAD) is given in "Interactive design of developable surfaces".
A "historical" survey on developable surfaces can be found in "Developable Surfaces: Their History and Application".
Ruled surfaces in algebraic geometry.
In algebraic geometry, ruled surfaces were originally defined as projective surfaces in projective space containing a straight line through any given point. This immediately implies that there is a projective line on the surface through any given point, and this condition is now often used as the definition of a ruled surface: ruled surfaces are defined to be abstract projective surfaces satisfying the condition that there is a projective line through any point. This is equivalent to saying that they are birational to the product of a curve and a projective line. Sometimes a ruled surface is defined to be one satisfying the stronger condition that it has a fibration over a curve with fibers that are projective lines. This excludes the projective plane, which has a projective line through every point but cannot be written as such a fibration.
Ruled surfaces appear in the Enriques classification of projective complex surfaces, because every algebraic surface of Kodaira dimension formula_67 is a ruled surface (or a projective plane, if one uses the restrictive definition of ruled surface).
Every minimal projective ruled surface other than the projective plane is the projective bundle of a 2-dimensional vector bundle over some curve. The ruled surfaces with base curve of genus 0 are the Hirzebruch surfaces.
Ruled surfaces in architecture.
Doubly ruled surfaces are the inspiration for curved hyperboloid structures that can be built with a latticework of straight elements, namely:
The RM-81 Agena rocket engine employed straight cooling channels that were laid out in a ruled surface to form the throat of the nozzle section.
References.
Notes.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\mathbb R^3"
},
{
"math_id": 1,
"text": "\\quad \\mathbf x(u,v) = \\mathbf c(u) + v \\mathbf r(u)"
},
{
"math_id": 2,
"text": "u"
},
{
"math_id": 3,
"text": "v"
},
{
"math_id": 4,
"text": "\\mathbf r(u) \\neq (0,0,0)"
},
{
"math_id": 5,
"text": "\\mathbf c"
},
{
"math_id": 6,
"text": "\\mathbf r"
},
{
"math_id": 7,
"text": "v \\mapsto \\mathbf x(u_0,v)"
},
{
"math_id": 8,
"text": "u=u_0"
},
{
"math_id": 9,
"text": "\\mathbf r(u)"
},
{
"math_id": 10,
"text": "u\\mapsto \\mathbf c(u)"
},
{
"math_id": 11,
"text": "\\quad \\mathbf x(u,v) = (1-v) \\mathbf c(u) + v \\mathbf d(u)"
},
{
"math_id": 12,
"text": "\\mathbf d(u)= \\mathbf c(u) + \\mathbf r(u)"
},
{
"math_id": 13,
"text": "\\mathbf c(u), \\mathbf d(u)"
},
{
"math_id": 14,
"text": "\\mathbf r(u)= \\mathbf d(u) - \\mathbf c(u)."
},
{
"math_id": 15,
"text": "x^2+y^2=a^2."
},
{
"math_id": 16,
"text": "\\mathbf x(u,v)=(a\\cos u,a\\sin u,v)"
},
{
"math_id": 17,
"text": "= (a\\cos u,a\\sin u,0) + v (0,0,1)"
},
{
"math_id": 18,
"text": "= (1-v) (a\\cos u,a\\sin u,0) + v (a\\cos u,a\\sin u,1)."
},
{
"math_id": 19,
"text": "\\mathbf c(u) = (a\\cos u,a\\sin u,0),"
},
{
"math_id": 20,
"text": "\\mathbf r(u) = (0,0,1),"
},
{
"math_id": 21,
"text": "\\mathbf d(u) = (a\\cos u,a\\sin u,1)."
},
{
"math_id": 22,
"text": "x^2+y^2=z^2."
},
{
"math_id": 23,
"text": "\\mathbf x(u,v) = (\\cos u,\\sin u,1) + v (\\cos u,\\sin u,1)"
},
{
"math_id": 24,
"text": "= (1-v) (\\cos u,\\sin u,1) + v (2\\cos u,2\\sin u,2)."
},
{
"math_id": 25,
"text": "\\mathbf c(u) = (\\cos u,\\sin u,1),"
},
{
"math_id": 26,
"text": "\\mathbf r(u) = (\\cos u,\\sin u,1),"
},
{
"math_id": 27,
"text": "\\mathbf d(u) = (2\\cos u,2\\sin u,2)."
},
{
"math_id": 28,
"text": "\\mathbf c(u) = (0,0,0)"
},
{
"math_id": 29,
"text": "\\mathbf r(u) = (\\cos u,\\sin u,1)"
},
{
"math_id": 30,
"text": "\\mathbf x(u,v) = (v\\cos u,v\\sin u, ku)"
},
{
"math_id": 31,
"text": "= (0,0,ku) + v (\\cos u, \\sin u, 0)"
},
{
"math_id": 32,
"text": "= (1-v) (0,0,ku) + v (\\cos u,\\sin u, ku)."
},
{
"math_id": 33,
"text": "\\mathbf c(u) = (0,0,ku)"
},
{
"math_id": 34,
"text": "\\mathbf r(u) = (\\cos u, \\sin u, 0)"
},
{
"math_id": 35,
"text": "\\mathbf d(u) = (\\cos u,\\sin u, ku)"
},
{
"math_id": 36,
"text": "\\mathbf x(u,v) = (1-v) (\\cos (u-\\varphi), \\sin(u-\\varphi),-1) + v (\\cos(u+\\varphi), \\sin(u+\\varphi), 1)"
},
{
"math_id": 37,
"text": "\\varphi"
},
{
"math_id": 38,
"text": "\\varphi=0"
},
{
"math_id": 39,
"text": "x^2+y^2=1"
},
{
"math_id": 40,
"text": "\\varphi=\\pi/2"
},
{
"math_id": 41,
"text": "x^2+y^2=z^2"
},
{
"math_id": 42,
"text": "0<\\varphi<\\pi/2"
},
{
"math_id": 43,
"text": "\\frac{x^2+y^2}{a^2}-\\frac{z^2}{c^2}=1"
},
{
"math_id": 44,
"text": "a=\\cos\\varphi, c=\\cot\\varphi"
},
{
"math_id": 45,
"text": "\\mathbf c(u) =(1-u)\\mathbf a_1 + u\\mathbf a_2, \\quad \\mathbf d(u)=(1-u)\\mathbf b_1 + u\\mathbf b_2"
},
{
"math_id": 46,
"text": "\\mathbf x(u,v)=(1-v)\\big((1-u)\\mathbf a_1 + u\\mathbf a_2\\big) + v\\big((1-u)\\mathbf b_1 + u\\mathbf b_2\\big)"
},
{
"math_id": 47,
"text": "\\mathbf a_1, \\mathbf a_2, \\mathbf b_1, \\mathbf b_2"
},
{
"math_id": 48,
"text": "\\mathbf a_1=(0,0,0),\\; \\mathbf a_2=(1,0,0),\\; \\mathbf b_1=(0,1,0),\\; \\mathbf b_2=(1,1,1)."
},
{
"math_id": 49,
"text": "z=xy"
},
{
"math_id": 50,
"text": "\\mathbf x(u,v) = \\mathbf c(u) + v \\mathbf r(u)"
},
{
"math_id": 51,
"text": "\\mathbf c(u) = (\\cos2u,\\sin2u,0)"
},
{
"math_id": 52,
"text": "\\mathbf r(u) = ( \\cos u \\cos 2 u , \\cos u \\sin 2 u, \\sin u ) \\quad 0\\le u< \\pi,"
},
{
"math_id": 53,
"text": "-0.3\\le v \\le 0.3"
},
{
"math_id": 54,
"text": "\\det(\\mathbf \\dot c(0), \\mathbf \\dot r(0), \\mathbf r(0)) \\ne 0"
},
{
"math_id": 55,
"text": "\\mathbf x_u = \\mathbf \\dot c(u)+ v \\mathbf \\dot r(u)"
},
{
"math_id": 56,
"text": "\\mathbf x_v= \\mathbf r(u)"
},
{
"math_id": 57,
"text": "\\mathbf n = \\mathbf x_u \\times \\mathbf x_v = \\mathbf \\dot c\\times \\mathbf r + v( \\mathbf \\dot r \\times \\mathbf r)."
},
{
"math_id": 58,
"text": "\\mathbf n \\cdot \\mathbf r = 0"
},
{
"math_id": 59,
"text": "\\mathbf r (u_0)"
},
{
"math_id": 60,
"text": "\\mathbf x(u_0,v)"
},
{
"math_id": 61,
"text": "\\mathbf \\dot r \\times \\mathbf r"
},
{
"math_id": 62,
"text": "\\mathbf \\dot c\\times \\mathbf r"
},
{
"math_id": 63,
"text": "\\mathbf \\dot c, \\mathbf \\dot r, \\mathbf r"
},
{
"math_id": 64,
"text": "\\mathbf x(u_0,v) = \\mathbf c(u_0) + v \\mathbf r(u_0)"
},
{
"math_id": 65,
"text": "\\det(\\mathbf \\dot c(u_0), \\mathbf \\dot r(u_0), \\mathbf r(u_0)) = 0"
},
{
"math_id": 66,
"text": "\\det(\\mathbf \\dot c, \\mathbf \\dot r, \\mathbf r) = 0"
},
{
"math_id": 67,
"text": "-\\infty"
}
]
| https://en.wikipedia.org/wiki?curid=705158 |
7052521 | Tonelli–Hobson test | In mathematics, the Tonelli–Hobson test gives sufficient criteria for a function "ƒ" on R2 to be an integrable function. It is often used to establish that Fubini's theorem may be applied to "ƒ". It is named for Leonida Tonelli and E. W. Hobson.
More precisely, the Tonelli–Hobson test states that if "ƒ" is a real-valued measurable function on R2, and either of the two iterated integrals
formula_0
or
formula_1
is finite, then "ƒ" is Lebesgue-integrable on R2.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\int_\\mathbb{R}\\left(\\int_\\mathbb{R}|f(x,y)|\\,dx\\right)\\, dy"
},
{
"math_id": 1,
"text": "\\int_\\mathbb{R}\\left(\\int_\\mathbb{R}|f(x,y)|\\,dy\\right)\\, dx"
}
]
| https://en.wikipedia.org/wiki?curid=7052521 |
70526 | Fault tree analysis | Failure analysis system used in safety engineering and reliability engineering
Fault tree analysis (FTA) is a type of failure analysis in which an undesired state of a system is examined. This analysis method is mainly used in safety engineering and reliability engineering to understand how systems can fail, to identify the best ways to reduce risk and to determine (or get a feeling for) event rates of a safety accident or a particular system level (functional) failure. FTA is used in the aerospace, nuclear power, chemical and process, pharmaceutical, petrochemical and other high-hazard industries; but is also used in fields as diverse as risk factor identification relating to social service system failure. FTA is also used in software engineering for debugging purposes and is closely related to cause-elimination technique used to detect bugs.
In aerospace, the more general term "system failure condition" is used for the "undesired state" / top event of the fault tree. These conditions are classified by the severity of their effects. The most severe conditions require the most extensive fault tree analysis. These system failure conditions and their classification are often previously determined in the functional hazard analysis.
Usage.
Fault tree analysis can be used to:
History.
Fault tree analysis (FTA) was originally developed in 1962 at Bell Laboratories by H.A. Watson, under a U.S. Air Force Ballistics Systems Division contract to evaluate the Minuteman I Intercontinental Ballistic Missile (ICBM) Launch Control System. The use of fault trees has since gained widespread support and is often used as a failure analysis tool by reliability experts. Following the first published use of FTA in the 1962 Minuteman I Launch Control Safety Study, Boeing and AVCO expanded use of FTA to the entire Minuteman II system in 1963–1964. FTA received extensive coverage at a 1965 System Safety Symposium in Seattle sponsored by Boeing and the University of Washington. Boeing began using FTA for civil aircraft design around 1966.
Subsequently, within the U.S. military, application of FTA for use with fuses was explored by Picatinny Arsenal in the 1960s and 1970s. In 1976 the U.S. Army Materiel Command incorporated FTA into an Engineering Design Handbook on Design for Reliability. The Reliability Analysis Center at Rome Laboratory and its successor organizations now with the Defense Technical Information Center (Reliability Information Analysis Center, and now Defense Systems Information Analysis Center) has published documents on FTA and reliability block diagrams since the 1960s. MIL-HDBK-338B provides a more recent reference.
In 1970, the U.S. Federal Aviation Administration (FAA) published a change to 14 CFR 25.1309 airworthiness regulations for transport category aircraft in the Federal Register at 35 FR 5665 (1970-04-08). This change adopted failure probability criteria for aircraft systems and equipment and led to widespread use of FTA in civil aviation. In 1998, the FAA published Order 8040.4, establishing risk management policy including hazard analysis in a range of critical activities beyond aircraft certification, including air traffic control and modernization of the U.S. National Airspace System. This led to the publication of the FAA System Safety Handbook, which describes the use of FTA in various types of formal hazard analysis.
Early in the Apollo program the question was asked about the probability of successfully sending astronauts to the moon and returning them safely to Earth. A risk, or reliability, calculation of some sort was performed and the result was a mission success probability that was unacceptably low. This result discouraged NASA from further quantitative risk or reliability analysis until after the "Challenger" accident in 1986. Instead, NASA decided to rely on the use of failure modes and effects analysis (FMEA) and other qualitative methods for system safety assessments. After the "Challenger" accident, the importance of probabilistic risk assessment (PRA) and FTA in systems risk and reliability analysis was realized and its use at NASA has begun to grow and now FTA is considered as one of the most important system reliability and safety analysis techniques.
Within the nuclear power industry, the U.S. Nuclear Regulatory Commission began using PRA methods including FTA in 1975, and significantly expanded PRA research following the 1979 incident at Three Mile Island. This eventually led to the 1981 publication of the NRC Fault Tree Handbook NUREG–0492, and mandatory use of PRA under the NRC's regulatory authority.
Following process industry disasters such as the 1984 Bhopal disaster and 1988 Piper Alpha explosion, in 1992 the United States Department of Labor Occupational Safety and Health Administration (OSHA) published in the Federal Register at 57 FR 6356 (1992-02-24) its Process Safety Management (PSM) standard in 29 CFR 1910.119. OSHA PSM recognizes FTA as an acceptable method for process hazard analysis (PHA).
Today FTA is widely used in system safety and reliability engineering, and in all major fields of engineering.
Methodology.
FTA methodology is described in several industry and government standards, including NRC NUREG–0492 for the nuclear power industry, an aerospace-oriented revision to NUREG–0492 for use by NASA, SAE ARP4761 for civil aerospace, MIL–HDBK–338 for military systems, and IEC 61025, which is intended for cross-industry use and has been adopted as European Norm EN 61025.
Any sufficiently complex system is subject to failure as a result of one or more subsystems failing. The likelihood of failure, however, can often be reduced through improved system design. Fault tree analysis maps the relationship between faults, subsystems, and redundant safety design elements by creating a logic diagram of the overall system.
The undesired outcome is taken as the root ('top event') of a tree of logic. For instance, the undesired outcome of a metal stamping press operation being considered might be a human appendage being stamped. Working backward from this top event it might be determined that there are two ways this could happen: during normal operation or during maintenance operation. This condition is a logical OR. Considering the branch of the hazard occurring during normal operation, perhaps it is determined that there are two ways this could happen: the press cycles and harms the operator, or the press cycles and harms another person. This is another logical OR. A design improvement can be made by requiring the operator to press two separate buttons to cycle the machine—this is a safety feature in the form of a logical AND. The button may have an intrinsic failure rate—this becomes a fault stimulus that can be analyzed.
When fault trees are labeled with actual numbers for failure probabilities, computer programs can calculate failure probabilities from fault trees. When a specific event is found to have more than one effect event, i.e. it has impact on several subsystems, it is called a common cause or common mode. Graphically speaking, it means this event will appear at several locations in the tree. Common causes introduce dependency relations between events. The probability computations of a tree which contains some common causes are much more complicated than regular trees where all events are considered as independent. Not all software tools available on the market provide such capability.
The tree is usually written out using conventional logic gate symbols. A cut set is a combination of events, typically component failures, causing the top event. If no event can be removed from a cut set without failing to cause the top event, then it is called a minimal cut set.
Some industries use both fault trees and event trees (see Probabilistic Risk Assessment). An event tree starts from an undesired initiator (loss of critical supply, component failure etc.) and follows possible further system events through to a series of final consequences. As each new event is considered, a new node on the tree is added with a split of probabilities of taking either branch. The probabilities of a range of 'top events' arising from the initial event can then be seen.
Classic programs include the Electric Power Research Institute's (EPRI) CAFTA software, which is used by many of the US nuclear power plants and by a majority of US and international aerospace manufacturers, and the Idaho National Laboratory's SAPHIRE, which is used by the U.S. Government to evaluate the safety and reliability of nuclear reactors, the Space Shuttle, and the International Space Station. Outside the US, the software RiskSpectrum is a popular tool for fault tree and event tree analysis, and is licensed for use at more than 60% of the world's nuclear power plants for probabilistic safety assessment. Professional-grade free software is also widely available; SCRAM is an open-source tool that implements the Open-PSA Model Exchange Format open standard for probabilistic safety assessment applications.
Graphic symbols.
The basic symbols used in FTA are grouped as events, gates, and transfer symbols. Minor variations may be used in FTA software.
Event symbols.
Event symbols are used for "primary events" and "intermediate events". Primary events are not further developed on the fault tree. Intermediate events are found at the output of a gate. The event symbols are shown below:
The primary event symbols are typically used as follows:
An intermediate event gate can be used immediately above a primary event to provide more room to type the event description.
FTA is a top-to-bottom approach.
Gate symbols.
Gate symbols describe the relationship between input and output events. The symbols are derived from Boolean logic symbols:
The gates work as follows:
Transfer symbols.
Transfer symbols are used to connect the inputs and outputs of related fault trees, such as the fault tree of a subsystem to its system. NASA prepared a complete document about FTA through practical incidents.
Basic mathematical foundation.
Events in a fault tree are associated with statistical probabilities or Poisson-Exponentially distributed constant rates. For example, component failures may typically occur at some constant failure rate λ (a constant hazard function). In this simplest case, failure probability depends on the rate λ and the exposure time t:
formula_0
where:
formula_1 if formula_2
A fault tree is often normalized to a given time interval, such as a flight hour or an average mission time. Event probabilities depend on the relationship of the event hazard function to this interval.
Unlike conventional logic gate diagrams in which inputs and outputs hold the binary values of TRUE (1) or FALSE (0), the gates in a fault tree output probabilities related to the set operations of Boolean logic. The probability of a gate's output event depends on the input event probabilities.
An AND gate represents a combination of independent events. That is, the probability of any input event to an AND gate is unaffected by any other input event to the same gate. In set theoretic terms, this is equivalent to the intersection of the input event sets, and the probability of the AND gate output is given by:
P (A and B) = P (A ∩ B) = P(A) P(B)
An OR gate, on the other hand, corresponds to set union:
P (A or B) = P (A ∪ B) = P(A) + P(B) - P (A ∩ B)
Since failure probabilities on fault trees tend to be small (less than .01), P (A ∩ B) usually becomes a very small error term, and the output of an OR gate may be conservatively approximated by using an assumption that the inputs are mutually exclusive events:
P (A or B) ≈ P(A) + P(B), P (A ∩ B) ≈ 0
An exclusive OR gate with two inputs represents the probability that one or the other input, but not both, occurs:
P (A xor B) = P(A) + P(B) - 2P (A ∩ B)
Again, since P (A ∩ B) usually becomes a very small error term, the exclusive OR gate has limited value in a fault tree.
Quite often, Poisson-Exponentially distributed rates are used to quantify a fault tree instead of probabilities. Rates are often modeled as constant in time while probability is a function of time. Poisson-Exponential events are modelled as infinitely short so no two events can overlap. An OR gate is the superposition (addition of rates) of the two input failure frequencies or failure rates which are modeled as Poisson point processes. The output of an AND gate is calculated using the unavailability (Q1) of one event thinning the Poisson point process of the other event (λ2). The unavailability (Q2) of the other event then thins the Poisson point process of the first event (λ1). The two resulting Poisson point processes are superimposed according to the following equations.
The output of an AND gate is the combination of independent input events 1 and 2 to the AND gate:
Failure Frequency = λ1Q2 + λ2Q1 where Q = 1 − exp(−λt) ≈ λt if λt < 0.001
Failure Frequency ≈ λ1λ2t2 + λ2λ1t1 if λ1t1 < 0.001 and λ2t2 < 0.001
In a fault tree, unavailability (Q) may be defined as the unavailability of safe operation and may not refer to the unavailability of the system operation depending on how the fault tree was structured. The input terms to the fault tree must be carefully defined.
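A minimal Python sketch of this kind of quantification for a small, invented tree; the event rates, exposure time, and gate structure are assumptions for illustration only, loosely modelled on the stamping-press example above.

```python
# Minimal sketch: quantifying a small fault tree with constant-rate basic events.
import math

def p_event(lam, t):
    """Failure probability of a basic event with constant rate lam over exposure time t."""
    return 1.0 - math.exp(-lam * t)

def p_and(*probs):            # independent inputs: intersection of events
    out = 1.0
    for p in probs:
        out *= p
    return out

def p_or(*probs):             # independent inputs: union via the complement rule
    out = 1.0
    for p in probs:
        out *= (1.0 - p)
    return 1.0 - out

t = 1.0                                   # one-hour exposure (assumed)
p_button_a = p_event(1e-4, t)             # assumed button failure rates
p_button_b = p_event(1e-4, t)
p_controller = p_event(5e-6, t)           # assumed controller fault rate

# Hypothetical top event: unintended cycle = controller fault OR both buttons failing.
top = p_or(p_controller, p_and(p_button_a, p_button_b))
print(f"Top event probability over {t} h: {top:.3e}")
```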
Analysis.
Many different approaches can be used to model a FTA, but the most common and popular way can be summarized in a few steps. A single fault tree is used to analyze one and only one undesired event, which may be subsequently fed into another fault tree as a basic event. Though the nature of the undesired event may vary dramatically, a FTA follows the same procedure for any undesired event; be it a delay of 0.25 ms for the generation of electrical power, an undetected cargo bay fire, or the random, unintended launch of an ICBM.
FTA analysis involves five steps:
Comparison with other analytical methods.
FTA is a deductive, top-down method aimed at analyzing the effects of initiating faults and events on a complex system. This contrasts with failure mode and effects analysis (FMEA), which is an inductive, bottom-up analysis method aimed at analyzing the effects of single component or function failures on equipment or subsystems. FTA is very good at showing how resistant a system is to single or multiple initiating faults. It is not good at finding all possible initiating faults. FMEA is good at exhaustively cataloging initiating faults, and identifying their local effects. It is not good at examining multiple failures or their effects at a system level. FTA considers external events, FMEA does not. In civil aerospace the usual practice is to perform both FTA and FMEA, with a failure mode effects summary (FMES) as the interface between FMEA and FTA.
Alternatives to FTA include dependence diagram (DD), also known as reliability block diagram (RBD) and Markov analysis. A dependence diagram is equivalent to a success tree analysis (STA), the logical inverse of an FTA, and depicts the system using paths instead of gates. DD and STA produce probability of success (i.e., avoiding a top event) rather than probability of a top event.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": " P = 1 - e^{- \\lambda t} "
},
{
"math_id": 1,
"text": " P \\approx \\lambda t "
},
{
"math_id": 2,
"text": " \\lambda t < 0.001 "
}
]
| https://en.wikipedia.org/wiki?curid=70526 |
70526597 | Cornell potential | Simple potential between quarks
In particle physics, the Cornell potential is an effective method to account for the confinement of quarks in quantum chromodynamics (QCD). It was developed by Estia J. Eichten, Kurt Gottfried, Toichiro Kinoshita, John Kogut, Kenneth Lane and Tung-Mow Yan at Cornell University in the 1970s to explain the masses of quarkonium states and account for the relation between the mass and angular momentum of the hadron (the so-called Regge trajectories). The potential has the form:
formula_0
where formula_1 is the effective radius of the quarkonium state, formula_2 is the QCD running coupling, formula_3 is the QCD string tension and formula_4 GeV is a constant. Initially, formula_2 and formula_3 were merely empirical parameters but with the development of QCD can now be calculated using perturbative QCD and lattice QCD, respectively.
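A minimal Python sketch evaluating the potential for illustrative parameter values; the numbers below are assumptions in the typical ranges quoted later in the article (not fitted values), and r is converted from fm to natural units with ħc ≈ 0.1973 GeV·fm.

```python
# Minimal sketch: evaluating the Cornell potential V(r) = -4/3 * alpha_s/r + sigma*r + const.
HBARC = 0.1973   # GeV * fm

def cornell_potential(r_fm, alpha_s=0.3, sigma=0.18, const=-0.3):
    """V(r) in GeV for r given in fm; alpha_s dimensionless, sigma in GeV^2, const in GeV."""
    r = r_fm / HBARC                                       # r in GeV^-1
    return -4.0 / 3.0 * alpha_s / r + sigma * r + const    # result in GeV

for r_fm in (0.1, 0.3, 0.5, 1.0):
    print(f"r = {r_fm:.1f} fm  ->  V = {cornell_potential(r_fm):+.3f} GeV")
```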
Short distance potential.
The potential consists of two parts. The first one, formula_5, dominates at short distances, typically for formula_6 fm. It arises from the one-gluon exchange between the quark and its anti-quark, and is known as the Coulombic part of the potential, since it has the same form as the well-known Coulombic potential formula_7 induced by the electromagnetic force (where formula_8 is the electromagnetic coupling constant).
The factor formula_9 in QCD comes from the fact that quarks have different types of charges ("colors") and is associated with any gluon emission from a quark. Specifically, this factor is called the "color factor" or "Casimir factor" and is formula_10, where formula_11 is the number of color charges.
The value for formula_2 depends on the radius of the studied hadron. Its value ranges from 0.19 to 0.4.
For precise determination of the short distance potential, the running of formula_2 must be accounted for, resulting in a distance-dependent formula_12. Specifically, formula_2 must be calculated in the so-called "potential renormalization scheme" (also denoted V-scheme) and, since quantum field theory calculations are usually done in momentum space, Fourier transformed to position space.
Long distance potential.
The second term of the potential, formula_13, is the linear confinement term and folds in the non-perturbative QCD effects that result in color confinement. formula_3 is interpreted as the tension of the QCD string that forms when the gluonic field lines collapse into a flux tube. Its value is formula_14 GeVformula_15.
formula_3 controls the intercepts and slopes of the linear Regge trajectories.
Domains of application.
The Cornell potential applies best for the case of static quarks (or very heavy quarks with non-relativistic motion), although relativistic improvements to the potential using speed-dependent terms are available. Likewise, the potential has been extended to include spin-dependent terms.
Calculation of the quark-quark potential.
A test of validity for approaches that seek to explain color confinement is that they must produce, in the limit that quark motions are non-relativistic, a potential that agrees with the Cornell potential.
A significant achievement of lattice QCD is to be able to compute from first principles the static quark-antiquark potential, with results confirming the empirical Cornell potential.
Other approaches to the confinement problem also result in the Cornell potential, including the dual superconductor model, the Abelian Higgs model, and the center vortex models.
More recently, calculations based on the AdS/CFT correspondence have reproduced the Cornell potential using the AdS/QCD correspondence
or light front holography.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "V(r) = -\\frac{4}{3}\\frac{\\alpha_s}{\\;r\\;} + \\sigma\\,r + \\text{constant}"
},
{
"math_id": 1,
"text": "r"
},
{
"math_id": 2,
"text": "\\alpha_s"
},
{
"math_id": 3,
"text": "\\sigma"
},
{
"math_id": 4,
"text": "\\text{constant} \\simeq -0.3"
},
{
"math_id": 5,
"text": "-\\frac{4}{3}\\frac{\\alpha_s}{\\;r\\;}"
},
{
"math_id": 6,
"text": "r <0.1"
},
{
"math_id": 7,
"text": "\\;\\frac{\\alpha}{\\;r\\;}\\;"
},
{
"math_id": 8,
"text": "\\alpha"
},
{
"math_id": 9,
"text": "\\frac{4}{3}"
},
{
"math_id": 10,
"text": " C_F \\equiv \\frac{N_c^2-1}{2N_c}= \\frac{4}{3}"
},
{
"math_id": 11,
"text": "N_c = 3"
},
{
"math_id": 12,
"text": "\\alpha_s(r)"
},
{
"math_id": 13,
"text": "\\sigma\\,r"
},
{
"math_id": 14,
"text": " \\sigma \\sim 0.18"
},
{
"math_id": 15,
"text": "^2"
}
]
| https://en.wikipedia.org/wiki?curid=70526597 |
70533517 | Joshua 3 | Book of Joshua, chapter 3
Joshua 3 is the third chapter of the Book of Joshua in the Hebrew Bible or in the Old Testament of the Christian Bible. According to Jewish tradition, the book was attributed to Joshua, with additions by the high priests Eleazar and Phinehas, but modern scholars view it as part of the Deuteronomistic History, which spans the books of Deuteronomy to 2 Kings, attributed to nationalistic and devotedly Yahwistic writers during the time of the reformer Judean king Josiah in 7th century BCE. This chapter focuses on the Israelites crossing the Jordan River westward into the land of Canaan under the leadership of Joshua, a part of a section comprising Joshua 1:1–5:12 about the entry to the land of Canaan.
Text.
This chapter was originally written in the Hebrew language. It is divided into 17 verses.
Textual witnesses.
Some early manuscripts containing the text of this chapter in Hebrew are of the Masoretic Text tradition, which includes the Codex Cairensis (895), Aleppo Codex (10th century), and Codex Leningradensis (1008). Fragments containing parts of this chapter in Hebrew were found among the Dead Sea Scrolls including 4Q48 (4QJoshb; 100–50 BCE) with extant verses 15–17.
Extant ancient manuscripts of a translation into Koine Greek known as the Septuagint (originally made in the last few centuries BCE) include Codex Vaticanus (B; formula_0B; 4th century) and Codex Alexandrinus (A; formula_0A; 5th century). Fragments of the Septuagint Greek text containing this chapter are found in manuscripts such as Washington Manuscript I (5th century CE), and a reduced version of the Septuagint text is found in the illustrated Joshua Roll.
Analysis.
The narrative of Israelites entering the land of Canaan comprises verses 1:1 to 5:12 of the Book of Joshua and has the following outline:
A. Preparations for Entering the Land (1:1–18)
1. Directives to Joshua (1:1–9)
2. Directives to the Leaders (1:10–11)
3. Discussions with the Eastern Tribes (1:12–18)
B. Rahab and the Spies in Jericho (2:1–24)
1. Directives to the Spies (2:1a)
2. Deceiving the King of Jericho (2:1b–7)
3. The Oath with Rahab (2:8–21)
4. The Report to Joshua (2:22–24)
C. Crossing the Jordan (3:1–4:24)
1. Initial Preparations for Crossing (3:1–6)
2. Directives for Crossing (3:7–13)
3. A Miraculous Crossing: Part 1 (3:14–17)
4. Twelve-Stone Memorial: Part 1 (4:1–10a)
5. A Miraculous Crossing: Part 2 (4:10b–18)
6. Twelve-Stone Memorial: Part 2 (4:19–24)
D. Circumcision and Passover (5:1–12)
1. Canaanite Fear (5:1)
2. Circumcision (5:2–9)
3. Passover (5:10–12)
Preparations and directives of the crossing (3:1–13).
The crossing of the Jordan narrative (3:1–5:1) consists of several units that backtrack and overlap, with a number of elements recounted more than once (e.g. the selection of men to carry the stones, 3:12; 4:2; the setting up of the stones, 4:8–9, 20). It includes a miraculous parting of the waters (Joshua 3:16) which recalls the crossing of the Reed Sea (Exodus 14:21–22; cf. Psalm 114:3, 5; Micah 6:4–5), to be followed by the first Passover kept in the new land (Joshua 5:10-12) corresponding to the first ever Passover in Egypt (Exodus 12–13). The centrality of the Ark of the Covenant in the whole narrative emphasizes the guidance of YHWH on the way into the land, and the preparation for the Holy War ahead (verse 10; Numbers 10:33–36), although the differences in the terminology of the ark throughout this chapter may indicate diverse origins:
The crossing narrative is connected to that of the spies (chapter 2) by the mention of Shittim (3:1), as well as bringing Joshua, together with 'all the Israelites', to the verge of Jordan for the crossing (cf. verse 17), where the officers play their part to observe the due timing of three days (verses 2–3; cf. 1:10–11). The crossing respects the requirements of holiness, the ark being attended by the properly authorized personnel (verses 3, 6; cf. Numbers 3:5–10, 31), and the people keeping due distance, recalling the encounter with YHWH at Sinai (cf. Exodus 19:10–12). The preparations also include a reaffirmation of Joshua's leadership, and of YHWH's special promise to accompany him (3:7; cf. 1:5) throughout his conquest (verses 10–11; cf. Exodus 3:17). The phrase 'the LORD, the Lord of all the earth' (verse 13; cf. Micah 4:13; Psalm 97:5) states a claim to absolute universal dominion, as also found in other ancient Near-Eastern documents for local deities; for example, Baal in Ugaritic literature is written as "zbl b'I arș" ('the prince, lord of the earth').
"And Joshua rose early in the morning; and they removed from Shittim, and came to Jordan, he and all the children of Israel, and lodged there before they passed over."
Crossing the Jordan (3:14–17).
After all the preparations, an initial report of the crossing was given, with a note that it was miraculous: the river, being in its spring flood (verse 15), suddenly had its flow of water cut off, leaving dry ground to walk on (verse 16). This passage anticipates a fuller account in the following chapter.
"14 So when the people set out from their tents to pass over the Jordan with the priests bearing the ark of the covenant before the people, 15 and as soon as those bearing the ark had come as far as the Jordan, and the feet of the priests bearing the ark were dipped in the brink of the water (now the Jordan overflows all its banks throughout the time of harvest), 16 the waters coming down from above stood and rose up in a heap very far away, at Adam, the city that is beside Zarethan, and those flowing down toward the Sea of the Arabah, the Salt Sea, were completely cut off. And the people passed over opposite Jericho."
Notes.
<templatestyles src="Reflist/styles.css" />
References.
<templatestyles src="Reflist/styles.css" />
Sources.
<templatestyles src="Refbegin/styles.css" /> | [
{
"math_id": 0,
"text": " \\mathfrak{G}"
}
]
| https://en.wikipedia.org/wiki?curid=70533517 |
70540286 | Joshua 4 | Book of Joshua, chapter 4
Joshua 4 is the fourth chapter of the Book of Joshua in the Hebrew Bible or in the Old Testament of the Christian Bible. According to Jewish tradition the book was attributed to Joshua, with additions by the high priests Eleazar and Phinehas, but modern scholars view it as part of the Deuteronomistic History, which spans the books of Deuteronomy to 2 Kings, attributed to nationalistic and devotedly Yahwistic writers during the time of the reformer Judean king Josiah in 7th century BCE. This chapter focuses on the Israelites crossing the Jordan River westward into the land of Canaan under the leadership of Joshua, a part of a section comprising Joshua 1:1–5:12 about the entry to the land of Canaan.
Text.
This chapter was originally written in the Hebrew language. It is divided into 24 verses.
Textual witnesses.
Some early manuscripts containing the text of this chapter in Hebrew are of the Masoretic Text tradition, which includes the Codex Cairensis (895), Aleppo Codex (10th century), and Codex Leningradensis (1008). Fragments containing parts of this chapter in Hebrew were found among the Dead Sea Scrolls including 4Q48 (4QJoshb; 100–50 BCE) with extant verses 1, 3.
Extant ancient manuscripts of a translation into Koine Greek known as the Septuagint (originally made in the last few centuries BCE) include Codex Vaticanus (B; formula_0B; 4th century) and Codex Alexandrinus (A; formula_0A; 5th century). Fragments of the Septuagint Greek text containing this chapter are found in manuscripts such as Washington Manuscript I (5th century CE), and a reduced version of the Septuagint text is found in the illustrated Joshua Roll.
Analysis.
The narrative of Israelites entering the land of Canaan comprises verses 1:1 to 5:12 of the Book of Joshua and has the following outline:
A. Preparations for Entering the Land (1:1–18)
1. Directives to Joshua (1:1–9)
2. Directives to the Leaders (1:10–11)
3. Discussions with the Eastern Tribes (1:12–18)
B. Rahab and the Spies in Jericho (2:1–24)
1. Directives to the Spies (2:1a)
2. Deceiving the King of Jericho (2:1b–7)
3. The Oath with Rahab (2:8–21)
4. The Report to Joshua (2:22–24)
C. Crossing the Jordan (3:1–4:24)
1. Initial Preparations for Crossing (3:1–6)
2. Directives for Crossing (3:7–13)
3. A Miraculous Crossing: Part 1 (3:14–17)
4. Twelve-Stone Memorial: Part 1 (4:1–10a)
5. A Miraculous Crossing: Part 2 (4:10b–18)
6. Twelve-Stone Memorial: Part 2 (4:19–24)
D. Circumcision and Passover (5:1–12)
1. Canaanite Fear (5:1)
2. Circumcision (5:2–9)
3. Passover (5:10–12)
Twelve Stones from the Jordan (4:1–18).
The crossing of the Jordan narrative (3:1–5:1) consists of several units that backtrack and overlap, with a number of elements recounted more than once (e.g. the selection of men to carry the stones, 3:12; 4:2; the setting up of the stones, 4:8–9, 20). In contrast to chapter 3, this chapter places more emphasis on the setting of twelve stones during and after the crossing. Just as the directive to the priests in Joshua 3:6 is only resolved in Joshua 3:16, the directive to the twelve men, one from each tribe of Israel, in Joshua 3:12 is clarified in Joshua 4:2 that they are to carry stones from the midst of the Jordan to the place of destination for the camp. The account also institutes an act of worship for all future generations (verses 6–7, 21–22), similar to the narrative of the first Passover (cf. Exodus 12:24–27). Joshua's importance (verses 10–14) echoes an earlier passage (3:7–8). The priests remained in the middle of the dry river bed with the ark until the complete crossing of the people and the ceremonies with the stones (4:10), before finally ascending to the west bank; when they did, the river resumed its normal flow (4:15–18).
"And it came to pass, when all the people had completely crossed over the Jordan, that the Lord spoke to Joshua, saying:"
Verse 1.
Hebrew text (read from right to left):
In the middle of the verse in Hebrew text, after the phrase "all the people had completely crossed over the Jordan", there is a "petuhah" (open paragraph sign) from an old pre-Masoretic mark, which the Masorites have retained, marking the end of the previous paragraph and the beginning of a new "parashah" or "paragraph". The next phrase (in literal Hebrew translation: "and spoke YHWH to Joshua, saying"), together with verses 2, 3 and 4 ("and Joshua called the twelve men"), form a 'parenthesis' (as also pointed out by, among others, Kimchi and Calvin), joined together here by consecutive "waw" (a form of historical Hebrew composition), that is supposed to be arranged in logical order with their proper subordination to one another to be rendered as "Then Joshua called the twelve men — as Jehovah had commanded him, saying, 'Take you twelve men out of the people,' etc. — and said to them," etc.
Camp at Gilgal (4:19–24).
The date of the Jordan crossing is significant, the tenth day of the 'first month' in relation to the Passover celebration, when the paschal lamb was set apart for the feast (Exodus 12:2–3), thus linking the Exodus (the crossing of the Reed Sea) and the entry into the land of promise. The stones taken from the river are set up in the Israelite camp at Gilgal (verse 20), for the purpose of demonstrating God's power so 'that all the peoples of the earth might know it', thus pointing towards the future triumphs of YHWH, which greatly terrified the inhabitants of the land (Joshua 5:1).
"And the people came up out of Jordan on the tenth day of the first month, and encamped in Gilgal, in the east border of Jericho."
Notes.
<templatestyles src="Reflist/styles.css" />
References.
<templatestyles src="Reflist/styles.css" />
Sources.
<templatestyles src="Refbegin/styles.css" /> | [
{
"math_id": 0,
"text": " \\mathfrak{G}"
}
]
| https://en.wikipedia.org/wiki?curid=70540286 |
70541543 | Word n-gram language model | Purely statistical model of language
A word "n"-gram language model is a purely statistical model of language. It has been superseded by recurrent neural network–based models, which have been superseded by large language models. It is based on an assumption that the probability of the next word in a sequence depends only on a fixed size window of previous words. If only one previous word was considered, it was called a bigram model; if two words, a trigram model; if "n" − 1 words, an "n"-gram model. Special tokens were introduced to denote the start and end of a sentence formula_0 and formula_1.
To prevent a zero probability being assigned to unseen words, each word's probability is estimated to be slightly lower than its relative frequency in the corpus would suggest, with the freed probability mass reserved for unseen "n"-grams. To calculate it, various methods were used, from simple "add-one" smoothing (assign a count of 1 to unseen "n"-grams, as an uninformative prior) to more sophisticated models, such as Good–Turing discounting or back-off models.
Unigram model.
A special case, where "n" = 1, is called a unigram model. Probability of each word in a sequence is independent from probabilities of other word in the sequence. Each word's probability in the sequence is equal to the word's probability in an entire document.
formula_2
The model consists of units, each treated as one-state finite automata. Words with their probabilities in a document can be illustrated as follows.
The total mass of word probabilities distributed across the document's vocabulary is 1.
formula_3
The probability generated for a specific query is calculated as
formula_4
Unigram models of different documents have different probabilities of words in it. The probability distributions from different documents are used to generate hit probabilities for each query. Documents can be ranked for a query according to the probabilities. Example of unigram models of two documents:
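A minimal Python sketch of this unigram scoring and ranking, using two invented toy documents:

```python
# Minimal sketch: unigram query scoring over two toy "documents" (invented word counts).
from collections import Counter

def unigram_probs(tokens):
    counts = Counter(tokens)
    total = sum(counts.values())
    return {w: c / total for w, c in counts.items()}

doc1 = "the cat sat on the mat the cat".split()
doc2 = "the dog chased the cat in the park".split()
models = {"doc1": unigram_probs(doc1), "doc2": unigram_probs(doc2)}

def query_prob(model, query):
    p = 1.0
    for w in query.split():
        p *= model.get(w, 0.0)      # unseen words give probability 0 without smoothing
    return p

for name, model in models.items():
    print(name, query_prob(model, "the cat"))   # documents can be ranked by this value
```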
Bigram model.
In a bigram word ("n" = 2) language model, the probability of the sentence "I saw the red house" is approximated as
formula_5
Trigram model.
In a trigram ("n" = 3) language model, the approximation is
formula_6
Note that the context of the first "n" – 1 "n"-grams is filled with start-of-sentence markers, typically denoted formula_0.
Additionally, without an end-of-sentence marker, the probability of an ungrammatical sequence "*I saw the" would always be higher than that of the longer sentence "I saw the red house."
Approximation method.
The approximation method calculates the probability formula_7 of observing the sentence formula_8
formula_9
It is assumed that the probability of observing the "i"th word "wi" (in the context window consisting of the preceding "i" − 1 words) can be approximated by the probability of observing it in the shortened context window consisting of the preceding "n" − 1 words ("n"th-order Markov property). To clarify, for "n" = 3 and "i" = 2 we have formula_10.
The conditional probability can be calculated from "n"-gram model frequency counts:
formula_11
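A minimal Python sketch of this maximum-likelihood estimation for bigrams ("n" = 2) over an invented toy corpus, with the start and end markers written as "<s>" and "</s>" tokens:

```python
# Minimal sketch: maximum-likelihood bigram probabilities from n-gram frequency counts.
from collections import Counter

corpus = [["i", "saw", "the", "red", "house"],
          ["i", "saw", "the", "dog"]]

unigrams, bigrams = Counter(), Counter()
for sent in corpus:
    toks = ["<s>"] + sent + ["</s>"]
    unigrams.update(toks[:-1])                  # history (context) counts
    bigrams.update(zip(toks[:-1], toks[1:]))    # bigram counts

def p_cond(word, prev):
    """P(word | prev) = count(prev, word) / count(prev)."""
    return bigrams[(prev, word)] / unigrams[prev] if unigrams[prev] else 0.0

print(p_cond("saw", "i"))     # 1.0 in this toy corpus
print(p_cond("red", "the"))   # 0.5
```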
Out-of-vocabulary words.
An issue when using "n"-gram language models are out-of-vocabulary (OOV) words. They are encountered in computational linguistics and natural language processing when the input includes words which were not present in a system's dictionary or database during its preparation. By default, when a language model is estimated, the entire observed vocabulary is used. In some cases, it may be necessary to estimate the language model with a specific fixed vocabulary. In such a scenario, the "n"-grams in the corpus that contain an out-of-vocabulary word are ignored. The "n"-gram probabilities are smoothed over all the words in the vocabulary even if they were not observed.
Nonetheless, it is essential in some cases to explicitly model the probability of out-of-vocabulary words by introducing a special token (e.g. "<unk>") into the vocabulary. Out-of-vocabulary words in the corpus are effectively replaced with this special <unk> token before "n"-grams counts are cumulated. With this option, it is possible to estimate the transition probabilities of "n"-grams involving out-of-vocabulary words.
"n"-grams for approximate matching.
"n"-grams were also used for approximate matching. If we convert strings (with only letters in the English alphabet) into character 3-grams, we get a formula_12-dimensional space (the first dimension measures the number of occurrences of "aaa", the second "aab", and so forth for all possible combinations of three letters). Using this representation, we lose information about the string. However, we know empirically that if two strings of real text have a similar vector representation (as measured by cosine distance) then they are likely to be similar. Other metrics have also been applied to vectors of "n"-grams with varying, sometimes better, results. For example, z-scores have been used to compare documents by examining how many standard deviations each "n"-gram differs from its mean occurrence in a large collection, or text corpus, of documents (which form the "background" vector). In the event of small counts, the g-score (also known as g-test) gave better results.
It is also possible to take a more principled approach to the statistics of "n"-grams, modeling similarity as the likelihood that two strings came from the same source directly in terms of a problem in Bayesian inference.
"n"-gram-based searching was also used for plagiarism detection.
Bias-versus-variance trade-off.
To choose a value for "n" in an "n"-gram model, it is necessary to find the right trade-off between the stability of the estimate against its appropriateness. This means that trigram (i.e. triplets of words) is a common choice with large training corpora (millions of words), whereas a bigram is often used with smaller ones.
Smoothing techniques.
There are problems of balance weight between "infrequent grams" (for example, if a proper name appeared in the training data) and "frequent grams". Also, items not seen in the training data will be given a probability of 0.0 without smoothing. For unseen but plausible data from a sample, one can introduce pseudocounts. Pseudocounts are generally motivated on Bayesian grounds.
In practice it was necessary to "smooth" the probability distributions by also assigning non-zero probabilities to unseen words or "n"-grams. The reason is that models derived directly from the "n"-gram frequency counts have severe problems when confronted with any "n"-grams that have not explicitly been seen before – the zero-frequency problem. Various smoothing methods were used, from simple "add-one" (Laplace) smoothing (assign a count of 1 to unseen "n"-grams; see Rule of succession) to more sophisticated models, such as Good–Turing discounting or back-off models. Some of these methods are equivalent to assigning a prior distribution to the probabilities of the "n"-grams and using Bayesian inference to compute the resulting posterior "n"-gram probabilities. However, the more sophisticated smoothing models were typically not derived in this fashion, but instead through independent considerations.
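For instance, add-one (Laplace) smoothing of bigram probabilities could be sketched as follows (a toy illustration, not a production implementation; the vocabulary size is added to the denominator so that unseen bigrams receive a small non-zero probability):
<syntaxhighlight lang="python">
from collections import Counter

def laplace_bigram_prob(tokens, prev, word):
    """Add-one smoothed estimate of P(word | prev): (count + 1) / (context count + V)."""
    vocab = set(tokens)
    bigrams = Counter(zip(tokens, tokens[1:]))
    unigrams = Counter(tokens)
    return (bigrams[(prev, word)] + 1) / (unigrams[prev] + len(vocab))

tokens = "the rain in Spain falls mainly on the plain".split()
print(laplace_bigram_prob(tokens, "the", "rain"))   # seen bigram:   (1 + 1) / (2 + 8) = 0.2
print(laplace_bigram_prob(tokens, "the", "Spain"))  # unseen bigram: (0 + 1) / (2 + 8) = 0.1
</syntaxhighlight>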
Skip-gram language model.
The skip-gram language model is an attempt to overcome the data sparsity problem that the preceding model (i.e. the word "n"-gram language model) faced. Words represented in an embedding vector were no longer necessarily consecutive, but could leave gaps that are "skipped" over.
Formally, a k-skip-n-gram is a length-n subsequence where the components occur at distance at most k from each other.
For example, in the input text:
"the rain in Spain falls mainly on the plain"
the set of 1-skip-2-grams includes all the bigrams (2-grams), and in addition the subsequences
"the in", "rain Spain", "in falls", "Spain mainly", "falls on", "mainly the", and "on plain".
In the skip-gram model, semantic relations between words are represented by linear combinations, capturing a form of compositionality. For example, in some such models, if "v" is the function that maps a word "w" to its "n"-dimensional vector representation, then
formula_13
where ≈ is made precise by stipulating that its right-hand side must be the nearest neighbor of the value of the left-hand side.
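The analogy can be illustrated with a toy nearest-neighbor search; the three-dimensional vectors below are invented for the example, since real skip-gram embeddings are learned from data and typically have hundreds of dimensions:
<syntaxhighlight lang="python">
from math import sqrt

vectors = {  # hypothetical 3-dimensional embeddings, for illustration only
    "king":   [0.9, 0.8, 0.1],
    "queen":  [0.9, 0.1, 0.8],
    "male":   [0.1, 0.9, 0.0],
    "female": [0.1, 0.0, 0.9],
    "apple":  [0.0, 0.5, 0.5],
}

def cosine(a, b):
    """Cosine similarity between two dense vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (sqrt(sum(x * x for x in a)) * sqrt(sum(y * y for y in b)))

# v(king) - v(male) + v(female)
target = [k - m + f for k, m, f in zip(vectors["king"], vectors["male"], vectors["female"])]

candidates = (w for w in vectors if w not in {"king", "male", "female"})
print(max(candidates, key=lambda w: cosine(vectors[w], target)))  # -> 'queen'
</syntaxhighlight>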
Syntactic "n"-grams.
Syntactic "n"-grams are "n"-grams defined by paths in syntactic dependency or constituent trees rather than the linear structure of the text. For example, the sentence "economic news has little effect on financial markets" can be transformed to syntactic "n"-grams following the tree structure of its dependency relations: news-economic, effect-little, effect-on-markets-financial.
Syntactic "n"-grams are intended to reflect syntactic structure more faithfully than linear "n"-grams, and have many of the same applications, especially as features in a vector space model. Syntactic "n"-grams for certain tasks gives better results than the use of standard "n"-grams, for example, for authorship attribution.
Another type of syntactic "n"-grams are part-of-speech "n"-grams, defined as fixed-length contiguous overlapping subsequences that are extracted from part-of-speech sequences of text. Part-of-speech "n"-grams have several applications, most commonly in information retrieval.
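Part-of-speech "n"-grams can be sketched in the same spirit, assuming the text has already been tagged; the (word, tag) pairs below are illustrative:
<syntaxhighlight lang="python">
def pos_ngrams(tagged, n):
    """Contiguous n-grams over the tag sequence of (word, tag) pairs."""
    tags = [tag for _, tag in tagged]
    return [tuple(tags[i:i + n]) for i in range(len(tags) - n + 1)]

tagged = [("economic", "ADJ"), ("news", "NOUN"), ("has", "VERB"),
          ("little", "ADJ"), ("effect", "NOUN")]
print(pos_ngrams(tagged, 3))
# [('ADJ', 'NOUN', 'VERB'), ('NOUN', 'VERB', 'ADJ'), ('VERB', 'ADJ', 'NOUN')]
</syntaxhighlight>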
Other applications.
"n"-grams find use in several areas of computer science, computational linguistics, and applied mathematics.
They have been used to:
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\langle s\\rangle"
},
{
"math_id": 1,
"text": "\\langle /s\\rangle"
},
{
"math_id": 2,
"text": "P_\\text{uni}(t_1t_2t_3)=P(t_1)P(t_2)P(t_3)."
},
{
"math_id": 3,
"text": "\\sum_{\\text{word in doc}} P(\\text{word}) = 1"
},
{
"math_id": 4,
"text": "P(\\text{query}) = \\prod_{\\text{word in query}} P(\\text{word})"
},
{
"math_id": 5,
"text": "P(\\text{I, saw, the, red, house}) \\approx P(\\text{I}\\mid\\langle s\\rangle) P(\\text{saw}\\mid \\text{I}) P(\\text{the}\\mid\\text{saw}) P(\\text{red}\\mid\\text{the}) P(\\text{house}\\mid\\text{red}) P(\\langle /s\\rangle\\mid \\text{house})"
},
{
"math_id": 6,
"text": "P(\\text{I, saw, the, red, house}) \\approx P(\\text{I}\\mid \\langle s\\rangle,\\langle s\\rangle) P(\\text{saw}\\mid\\langle s\\rangle,I) P(\\text{the}\\mid\\text{I, saw}) P(\\text{red}\\mid\\text{saw, the}) P(\\text{house}\\mid\\text{the, red}) P(\\langle /s\\rangle\\mid\\text{red, house})"
},
{
"math_id": 7,
"text": "P(w_1,\\ldots,w_m)"
},
{
"math_id": 8,
"text": "w_1,\\ldots,w_m"
},
{
"math_id": 9,
"text": "P(w_1,\\ldots,w_m) = \\prod^m_{i=1} P(w_i\\mid w_1,\\ldots,w_{i-1})\\approx \\prod^m_{i=2} P(w_i\\mid w_{i-(n-1)},\\ldots,w_{i-1})"
},
{
"math_id": 10,
"text": "P(w_i\\mid w_{i-(n-1)},\\ldots,w_{i-1})=P(w_2\\mid w_1)"
},
{
"math_id": 11,
"text": "P(w_i\\mid w_{i-(n-1)},\\ldots,w_{i-1}) = \\frac{\\mathrm{count}(w_{i-(n-1)},\\ldots,w_{i-1},w_i)}{\\mathrm{count}(w_{i-(n-1)},\\ldots,w_{i-1})}"
},
{
"math_id": 12,
"text": "26^3"
},
{
"math_id": 13,
"text": "v(\\mathrm{king}) - v(\\mathrm{male}) + v(\\mathrm{female}) \\approx v(\\mathrm{queen})"
}
]
| https://en.wikipedia.org/wiki?curid=70541543 |
70548926 | Light in painting | Light in painting fulfills several objectives, both plastic and aesthetic: on the one hand, it is a fundamental factor in the technical representation of the work, since its presence determines the vision of the projected image, as it affects certain values such as color, texture and volume; on the other hand, light has a great aesthetic value, since its combination with shadow and with certain lighting and color effects can determine the composition of the work and the image that the artist wants to project. Also, light can have a symbolic component, especially in religion, where this element has often been associated with divinity.
The incidence of light on the human eye produces visual impressions, so its presence is indispensable for the capture of art. At the same time, light is intrinsically found in painting, since it is indispensable for the composition of the image: the play of light and shadow is the basis of drawing and, in its interaction with color, is the primordial aspect of painting, with a direct influence on factors such as modeling and relief.
The technical representation of light has evolved throughout the history of painting, and various techniques have been created over time to capture it, such as shading, chiaroscuro, sfumato, or tenebrism. On the other hand, light has been a particularly determining factor in various periods and styles, such as Renaissance, Baroque, Impressionism, or Fauvism. The greater emphasis given to the expression of light in painting is called "luminism", a term generally applied to various styles such as Baroque tenebrism and impressionism, as well as to various movements of the late 19th century and early 20th century such as American, Belgian, and Valencian luminism.
<templatestyles src="Template:Blockquote/styles.css" />Light is the fundamental building block of observational art, as well as the key to controlling composition and storytelling. It is one of the most important aspects of visual art.
Optics.
Light (ultimately from Proto-Indo-European "*lewktom", with the meaning "brightness") is an electromagnetic radiation with a wavelength between 380 nm and 750 nm, the part of the visible spectrum that is perceived by the human eye, located between infrared and ultraviolet radiation. It consists of massless elementary particles called photons, which move at a speed of 299 792 458 m/s in a vacuum, while in matter their speed depends on its refractive index formula_0. The branch of physics that studies the behavior and characteristics of light is optics.
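As a small worked example of the dependence on the refractive index mentioned above (the value used for water is approximate):
<syntaxhighlight lang="python">
# Speed of light in a medium: v = c / n (here n ≈ 1.33 for water, an approximate value).
c = 299_792_458        # m/s, speed of light in a vacuum
n_water = 1.33
print(c / n_water)     # ≈ 2.25e8 m/s
</syntaxhighlight>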
Light is the physical agent that makes objects visible to the human eye. Its origin can be in celestial bodies such as the Sun, the Moon, or the stars, natural phenomena such as lightning, or in materials in combustion, ignition, or incandescence. Throughout history, human beings have devised different procedures to obtain light in spaces lacking it, such as torches, candles, candlesticks, lamps or, more recently, electric lighting. Light is both the agent that enables vision and a visible phenomenon in itself, since light is also an object perceptible by the human eye. Light enables the perception of color, which reaches the retina through light rays that are transmitted by the retina to the optic nerve, which in turn transmits them to the brain by means of nerve impulses. The perception of light is a psychological process and each person perceives the same physical object and the same luminosity in a different way.
Physical objects have different levels of luminance (or reflectance), that is, they absorb or reflect to a greater or lesser extent the light that strikes them, which affects the color, from white (maximum reflection) to black (maximum absorption). Both black and white are not considered colors of the conventional chromatic circle, but gradations of brightness and darkness, whose transitions make up the shadows. When white light hits a surface of a certain color, photons of that color are reflected; if these photons subsequently hit another surface they will illuminate it with the same color, an effect known as radiance (generally perceptible only with intense light). If that object is in turn the same color, it will reinforce its level of colored luminosity, i.e. its saturation.
White light from the sun consists of a continuous spectrum of colors which, when divided, forms the colors of the rainbow: violet, indigo blue, blue, green, yellow, orange, and red. In its interaction with the Earth's atmosphere, sunlight tends to scatter the shorter wavelengths, i.e. the blue photons, which is why the sky is perceived as blue. On the other hand, at sunset, when the atmosphere is denser, the light is less scattered, so that the longer wavelengths, red, are perceived.
Color is a specific wavelength of white light. The colors of the chromatic spectrum have different shades or tones, which are usually represented in the chromatic circle, where the primary colors and their derivatives are located. There are three primary colors: lemon yellow, magenta red, and cyan blue. If they are mixed, the three secondary colors are obtained: orange red, bluish violet, and green. If a primary and a secondary are mixed, the tertiary colors are obtained: greenish blue, orange yellow, etc. On the other hand, complementary colors are two colors that are on opposite sides of the chromatic circle (green and magenta, yellow and violet, blue and orange) and adjacent colors are those that are close within the circle (yellow and green, red and orange). If a color is mixed with an adjacent color, it is shaded, and if it is mixed with a complementary color, it is neutralized (darkened). Three factors are involved in the definition of color: hue, the position within the chromatic circle; saturation, the purity of the color, which is involved in its brightness – the maximum saturation is that of a color that has no mixture with black or its complementary; and value, the level of luminosity of a color, increasing when mixed with white and decreasing when mixed with black or a complementary.
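The notions of hue, saturation and value can be roughly illustrated with Python's standard colorsys module; note that this is an additive RGB/HSV approximation, whereas the pigment mixing described in the text is subtractive, so the correspondence is only indicative:
<syntaxhighlight lang="python">
import colorsys

def hsv(label, r, g, b):
    """Print the hue, saturation and value of an RGB color (all components in [0, 1])."""
    h, s, v = colorsys.rgb_to_hsv(r, g, b)
    print(f"{label:20s} hue={h:.2f} saturation={s:.2f} value={v:.2f}")

hsv("magenta", 0.6, 0.0, 0.6)          # a saturated mid-value magenta
hsv("magenta + white", 0.8, 0.2, 0.8)  # value rises, saturation falls, hue unchanged
hsv("magenta + black", 0.3, 0.0, 0.3)  # value falls, saturation and hue unchanged
</syntaxhighlight>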
The main source of light is the Sun and its perception can vary according to the time of day: the most common is mid-morning or mid-afternoon light, generally blue, clear and diaphanous, although it depends on atmospheric dispersion and cloudiness and other climatic factors; midday light is whiter and more intense, with high contrast and darker shadows; dusk light is more yellowish, soft and warm; sunset light is orange or red, low contrast, with intense bluish shadows; evening light is a darker red, dimmer light, with weaker shadows and contrast (the moment known as alpenglow, which occurs in the eastern sky on clear days, gives pinkish tones); the light of cloudy skies depends on the time of day and the degree of cloudiness, is a dim and diffuse light with soft shadows, low contrast and high saturation (in natural environments there can be a mixture of light and shadow known as "mottled light"); finally, night light can be lunar or some atmospheric refraction of sunlight, is diffuse and dim (in contemporary times there is also light pollution from cities). We must also point out the natural light that filters indoors, a diffuse light of lower intensity, with a variable contrast depending on whether it has a single origin or several (for example, several windows), as well as a coloring that is also variable, depending on the time of day, the weather or the surface on which it is reflected. An outstanding interior light is the so-called "north light", which is the light that enters through a north-facing window, which does not come directly from the sun (which, in the Northern Hemisphere, is always to the south) and is therefore a soft and diffuse, constant and homogeneous light, much appreciated by artists in times when there was no adequate artificial lighting.
As for artificial light, the main ones are: fire and candles, red or orange; electric, yellow or orange – generally tungsten or wolfram – it can be direct (focal) or diffused by lamp shades; fluorescent, greenish; and photographic, white (flash light). Logically, in many environments there can be mixed light, a combination of natural and artificial light.
The visible reality is made up of a play of light and shadow: the shadow is formed when an opaque body obstructs the path of the light. In general, there is a ratio between light and shadow whose gradation depends on various factors, from lighting to the presence and placement of various objects that can generate shadows; however, there are conditions in which one of the two factors can reach the extreme, as in the case of snow or fog or, conversely, at night. We speak of high key lighting when white or light tones predominate, or low key lighting if black or dark tones predominate.
Shadows can be of shape (also called "self shadows") or of projection ("cast shadows"): the former are the shaded areas of a physical object, that is, the part of that object on which light does not fall; the latter are the shadows cast by these objects on some surface, usually the ground. Self shadows define the volume and texture of an object; cast shadows help define space. The darkest part of the shadow is the "umbra" and the lighter, outer part is the "penumbra". The shape and appearance of the shadow depends on the size and distance of the light source: the most pronounced shadows are from small or distant sources, while a large or close source will give more diffuse shadows. In the first case, the shadow will have sharp edges and the darker area (umbra) will occupy most of it; in the second, the edge will be more diffuse and the penumbra will predominate. A shadow can receive illumination from a secondary source, known as "fill light". The color of a shadow is between blue and black, and also depends on several factors, such as light contrast, transparency and translucency. The projection of shadows is different if they come from natural or artificial light: with natural light the beams are parallel and the shadow adapts both to the terrain and to the various obstacles that may intervene; with artificial light the beams are divergent, with less defined limits, and if there are several light sources, combined shadows may be produced.
The reflection of light produces four derived phenomena: glints, which are reflections of the light source, be it the Sun, artificial lights or incidental sources such as doors and windows; glares, which are reflections produced by illuminated bodies as a reflective screen, especially white surfaces; color reflections, produced by the proximity between various objects, especially if they are luminous; and image reflections, produced by polished surfaces, such as mirrors or water. Another phenomenon produced by light is transparency, which occurs in bodies that are not opaque, with a greater or lesser degree depending on the opacity of the object, from total transparency to varying degrees of translucency. Transparency generates filtered light, a type of luminosity that can also be produced through curtains, blinds, awnings, various fabrics, pergolas and arbors, or through the foliage of trees.
Pictorial representation of light.
<templatestyles src="Template:Blockquote/styles.css" />The attraction that light exerts on the artist goes beyond its practical function as an element that defines volumes and spaces. Light is also an element that carries in itself a very special magic and attraction.
In artistic terminology, "light" is the point or center of light diffusion in the composition of a painting, or the luminous part of a painting in relation to the shadows. This term is also used to describe the way a painting is illuminated: zenithal or plumb light (vertical rays), high light (oblique rays), straight light (horizontal rays), workshop or studio light (artificial light), etc. The term "accidental light" is also used to refer to light not produced by the Sun, which can be either moonlight or artificial light from candles, torches, etc. The light can come from different directions, which according to its incidence can be differentiated between: "lateral", when it comes from the side, it is a light that highlights more the texture of the objects; "frontal", when it comes from the front, it eliminates the shadows and the sensation of volume; "zenithal", a vertical light of higher origin than the object, it produces a certain deformation of the figure; "contrapicado", vertical light of lower origin, it deforms the figure in an exaggerated way; and "backlight", when the origin is behind the object, thus darkening and diluting its silhouette.
In relation to the distribution of light in the painting, it can be: "homogeneous", when it is distributed equally; "dual", in which the figures stand out against a dark background; or "insertive", when light and shadows are interrelated. According to its origin, light can be intrinsic ("own or autonomous light"), when the light is homogeneous, without luminous effects, directional lights or contrasts of lights and shadows; or extrinsic ("illuminating light"), when it presents contrasts, directional lights and other objective sources of light. The first occurred mainly in Romanesque and Gothic art, and the second especially in the Renaissance and Baroque. In turn, the illuminating light can occur in different ways: "focal light", when it directly presents a light-emitting object ("tangible light") or comes from an external source that illuminates the painting ("intangible light"); "diffuse light", which blurs the contours, as in Leonardo's "sfumato"; "real light", which aims to realistically capture sunlight, an almost utopian attempt in which artists such as Claude of Lorraine, J. M. W. Turner or the impressionist artists were especially employed; and "unreal light", which has no natural or scientific basis and is closer to a symbolic light, as in the illumination of religious figures. As for the artist's intention, light can be "compositional", when it helps the composition of the painting, as in all the previous cases; or "conceptual light", when it serves to enhance the message, for example by illuminating a certain part of the painting and leaving the rest in semi-darkness, as Caravaggio used to do.
In terms of its origin, light can be "natural ambient light", in which no shadows of figures or objects appear, or "projected light", which generates shadows and serves to model the figures. It is also important to differentiate between source and focus of light: the source of light in a painting is the element that radiates the light, be it the sun, a candle or any other; the focus of light is the part of the painting that has the most luminosity and radiates it around the painting. On the other hand, in relation to the shadow, the interrelation between light and shadow is called "chiaroscuro"; if the dark area is larger than the illuminated one, it is called "tenebrism".
Light in painting plays a decisive role in the composition and structuring of the painting. Unlike in architecture and sculpture, where light is real, the light of the surrounding space, in painting light is represented, so it responds to the will of the artist both in its physical and aesthetic aspect. The painter determines the illumination of the painting, that is to say, the origin and incidence of the light, which marks the composition and expression of the image. In turn, the shadow provides solidity and volume, while it can generate dramatic effects of various kinds.
In the pictorial representation of light it is essential to distinguish its nature (natural, artificial) and to establish its origin, intensity and chromatic quality. Natural light depends on various factors, such as the season of the year, the time of day (auroral, diurnal, twilight or nocturnal light – from the Moon or stars) or the weather. Artificial light, on the other hand, differs according to its origin: a candle, a torch, a fluorescent, a lamp, neon lights, etc. As for the origin, it can be focused or act in a diffuse way, without a determined origin. The chromatism of the image depends on the light, since depending on its incidence an object can have different tonalities, as well as the reflections, ambiances and shadows projected. In an illuminated image the color is considered saturated at the correct level of illumination, while the color in shadow will always have a darker tonal value and will be the one that determines the relief and volume.
Light is linked to space, so in painting it is intimately linked to perspective, the way of representing a three-dimensional space in a two-dimensional support such as painting. Thus, in linear perspective, light fulfills the function of highlighting objects, of generating volume, through modeling, in the form of luminous gradations; while in aerial perspective, the effects of light are sought as they are perceived by the spectator in the environment, as another element present in the physical reality represented. The light source can be present in the painting or not, it can have a direct or indirect origin, internal or external to the painting. The light defines the space through the modeling of volumes, which is achieved with the contrast between light and shadow: the relationship between the values of light and shadow defines the volumetric characteristics of the form, with a scale of values that can range from a soft fade to a hard contrast. Spatial limits can be objective, when they are produced by people, objects, architectures, natural elements and other factors of corporeality; or subjective, when they come from sensations such as atmosphere, depth, a hollow, an abyss, etc. In human perception, light creates closeness and darkness creates remoteness, so that a light-darkness gradient gives a sensation of depth.
Aspects such as contrast, relief, texture, volume, gradients or the tactile quality of the image depend on light. The play of light and shadow helps to define the location and orientation of objects in space. For their correct representation, their shape, density and extension, as well as their differences in intensity, must be taken into account. It should also be taken into account that, apart from its physical qualities, light can generate dramatic effects and give the painting a certain emotional atmosphere.
Contrast is a fundamental factor in painting; it is the language with which the image is shaped. There are two types of contrast: the "luminous", which can be by chiaroscuro (light and shadow) or by surface (a point of light that shines brighter than the rest); and the "chromatic", which can be tonal (contrast between two tones) or by saturation (a bright color with a neutral one). The two types of contrast are not mutually exclusive; in fact, they coincide in the same image most of the time. Contrast can have different levels of intensity and its regulation is the artist's main tool to achieve the appropriate expression for his work. The tonal expression that the artist wants to give to his work depends on the contrast between light and shadow, which can range from softness to hardness and gives a lesser or greater degree of dramatization. Backlighting, for example, is one of the resources that provide greater drama, since it produces elongated shadows and darker tones.
The correspondence between light and shadow and color is achieved through tonal evaluation: the lightest tones are found in the most illuminated areas of the painting and the darkest in those that receive less illumination. Once the artist establishes the tonal values, he chooses the most appropriate color ranges for their representation. Colors can be lightened or darkened until the desired effect is achieved: to lighten a color, lighter related colors – such as groups of warm or cool colors – are added to it, as well as amounts of white until the right tone is found; to darken, related dark colors and some blue or shadow are added. In general, the shade is made by mixing a color with a darker shade, plus blue and a complementary of the proper color (such as yellow and dark blue, red and primary blue or magenta and green).
The light and chromatic harmony of a painting depends on color, i.e. the relationship between the parts of a painting to create cohesion. There are several ways to harmonize: it can be done through "monochrome and tone dominant melodic ranges", with a single color as a base to which the value and tone is changed; if the value is changed with white or black it is a monochrome, while if the tone is changed it is a simple melodic range: for example, taking red as the dominant tone can be shaded with various shades of red (vermilion, cadmium, carmine) or orange, pink, violet, maroon, salmon, warm gray, etc. Another method is the "harmonic trios", which consists of combining three colors equidistant from each other on the chromatic circle; there can also be four, in which case we speak of "quaternions". Another way is the combination of "warm and cool thermal ranges": warm colors are for example red, orange, purple and yellowish green, as well as black; cool colors are blue, green and violet, as well as white (this perception of color with respect to its temperature is subjective and comes from Goethe's Theory of Colors). It is also possible to harmonize between "complementary colors", which is the one that produces the greatest chromatic contrast. Finally, "broken ranges" consist of neutralization by mixing primary colors and their complementary colors, which produces intense luminous effects, since the chromatic vibration is more subtle and the saturated colors stand out more.
Techniques.
The quality and appearance of the luminous representation is in many cases linked to the technique used. The expression and the different light effects of a work depend to a great extent on the different techniques and materials used. In drawing, whether in pencil or charcoal, the effects of light are achieved through the black-white duality, where white is generally the color of the paper (there are colored pencils, but they produce little contrast, so they are not very suitable for chiaroscuro and light effects). Pencil is usually worked with line and hatching, or by means of blurred spots. Charcoal allows the use of gouache and chalk or white chalk to add touches of light, as well as sanguine or sepia. Another monochrome technique is Indian ink, which generates very violent chiaroscuro, without intermediate values, making it a very expressive medium.
Oil painting consists of dissolving the colors in an oily binder (linseed, walnut, almond or hazelnut oil; animal oils), adding turpentine to make it dry better. Oil painting is the medium that best renders light effects and chromatic tones. It is a technique that produces vivid colors and intense effects of brightness and brilliance, and allows a free and fresh stroke, as well as a great richness of textures. On the other hand, thanks to its long permanence in a fluid state, it allows for subsequent corrections.
For its application, brushes, spatulas or scrapers can be used, allowing multiple textures, from thin layers and glazes to thick fillings, which produce a denser light.
Pastel painting is made with a pigment pencil of various mineral colors, with binders (kaolin, gypsum, gum arabic, fig latex, fish glue, candi sugar, etc.), kneaded with wax and Marseilles soap and cut into sticks. The color should be spread with a smudger, a cylinder of leather or paper used to smudge the color strokes. Pastel combines the qualities of drawing and painting, and brings freshness and spontaneity.
Watercolor is a technique made with transparent pigments diluted in water, with binders such as gum arabic or honey, using the white of the paper itself. Known since ancient Egypt, it has been a technique used throughout the ages, although with more intensity during the 18th and 19th centuries. As it is a wet technique, it provides great transparency, which highlights the luminous effect of the white color. Generally, the light tones are applied first, leaving spaces on the paper for the pure white; then the dark tones are applied.
In acrylic paint, a plastic binder is added to the colorant, which produces a fast drying and is more resistant to corrosive agents. The speed of drying allows the addition of multiple layers to correct defects and produces flat colors and glazes. Acrylic can be worked by gradient, blurred or contrasted, by flat spots or by filling the color, as in the oil technique.
Genres.
Depending on the pictorial genre, light has different considerations, since its incidence is different in interiors than in exteriors, on objects than on people. In interiors, light generally tends to create intimate environments, usually a type of indirect light filtered through doors or windows, or filtered by curtains or other elements. In these spaces, private scenes are usually developed, which are reinforced by contrasts of light and shadow, intense or soft, natural or artificial, with areas in semi-darkness and atmospheres influenced by gravitating dust and other effects caused by these spaces. A separate genre of interior painting is "naturaleza muerta" or "still life", which usually shows a series of objects or food arranged as in a sideboard. In these works the artist can manipulate the light at will, generally with dramatic effects such as side lights, frontal lights, zenithal lights, backlighting, etc. The main difficulty consists in the correct evaluation of the tones and textures of the objects, as well as their brightness and transparency depending on the material.
In exteriors, the main genre is landscape, perhaps the most relevant in relation to light in that its presence is fundamental, since any exterior is enveloped in a luminous atmosphere determined by the time of day and the weather and environmental conditions. There are three main types: the land landscape, the seascape, and the skyscape. The main challenge for the artist in these works is to capture the precise tone of the natural light according to the time of day, the season of the year, the viewing conditions – which can be affected by phenomena such as cloud cover, rain or fog – and an infinite number of variables that can occur in a medium as volatile as the landscape. On numerous occasions artists have gone out to paint in nature to capture their impressions first hand, a working method known by the French term "en plein air" ("in the open air", equivalent to "outdoors"). There is also the variant of the urban landscape, frequent especially since the 20th century, in which a factor to take into account is the artificial illumination of the cities and the presence of neon lights and other types of effects; in general, in these images the planes and contrasts are more differentiated, with hard shadows and artificial and grayish colors.
Light is also fundamental for the representation of the human figure in painting, since it affects the volume and generates different limits according to the play of light and shadow, which delimits the anatomical profile. Light allows us to nuance the surface of the body, and provides a sensation of smoothness and softness to the skin. The focus of the light is important, since its direction influences the general contour of the figure and the illumination of its surroundings: for example, frontal light makes the shadows disappear, attenuating the volume and the sensation of depth, while emphasizing the color of the skin. On the other hand, a partially lateral illumination causes shadows and gives relief to the volumes, and if it is from the side, the shadow covers the opposite side of the figure, which appears with an enhanced volume. On the other hand, in backlighting the body is shown with a characteristic halo around its contour, while the volume acquires a weightless sensation. With overhead lighting, the projection of shadows blurs the relief and gives a somewhat ghostly appearance, just as it does when illuminated from below – although the latter is rare. A determining factor is that of the shadows, which generate a series of contours apart from the anatomical ones that provide drama to the image. Together with the luminous reflections, the gradation of shadows generates a series of effects of great richness in the figure, which the artist can exploit in different ways to achieve different results of greater or lesser effect. It should also be taken into account that direct light or shadow on the skin modifies the color, varying the tonality from the characteristic pale pink to gray or white. The light can also be filtered by objects that get in its path (such as curtains, fabrics, vases or various objects), which generates different effects and colors on the skin.
In relation to the human being, the portrait genre is characteristic, in which light plays a decisive role in the modeling of the face. Its elaboration is based on the same premises as those of the human body, with the addition of a greater demand in the faithful representation of the physiognomic features and even the need to capture the psychology of the character. The drawing is essential to model the features according to the model and, from there, light and color are again the vehicle of translation of the visual image to its representation on the canvas.
In the 20th century, abstraction emerged as a new pictorial language, in which painting is reduced to non-figurative images that no longer describe reality, but rather concepts or sensations of the artist himself, who plays with form, color, light, matter, space and other elements in a totally subjective way and not subject to conventionalisms. Despite the absence of concrete images of the surrounding reality, light is still present on numerous occasions, generally contributing luminosity to the colors or creating chiaroscuro effects by contrasting tonal values.
Chronological factor.
Another aspect in which light is a determining factor is in time, in the representation of chronological time in painting. Until the Renaissance, artists did not represent a specific time in painting and, in general, the only difference in light was between exterior and interior lights. In many occasions it is difficult to identify the specific time of day in a work, since neither the direction of the light nor its quality nor the dimension of the shadows are decisive elements to recognize a certain time of day. Night was rarely represented until practically Mannerism and, in the cases in which a nocturnal atmosphere was used, it was because the narrative required it or because of some symbolic aspect: in Giotto's "The Annunciation to the Shepherds" or in Ambrogio Lorenzetti's "Annunciation", the nocturnal atmosphere contributes to accentuate the halo of mystery surrounding the birth of Christ; in Uccello's "Saint George and the Dragon", night represents evil, the world in which the dragon lives. On the other hand, even in narrative themes that take place at night, such as the Last Supper or the supper at Emmaus, this factor is sometimes deliberately avoided, as in Andrea del Sarto's "Last Supper", set in daylight.
Generally, the chronological setting of a scene has been linked to its narrative correlate, albeit in an approximate manner and with certain licenses on the part of the artist. It was not until the industrial civilization of the 19th century, thanks to the advances in artificial lighting, that a complete and exact use of the whole range of hours of the day was achieved. But just as in the contemporary age time has had a more realistic component, in the past it was more of a narrative factor, accompanying the action represented: dawn was a time of travel or hunting; noon, of action or its subsequent rest; dusk, of return or reflection; night was sleep, fear or adventure, or fun and passion; birth was morning, death was night.
The temporal dimension began to gain relevance in the 17th century, when artists such as Claude Lorrain and Salvator Rosa began to detach landscape painting from a narrative context and to produce works in which the protagonist was nature, with the only variations being the time of day or the season of the year. This new conception developed with 18th-century Vedutism and 19th-century Romantic landscape, and culminated with Impressionism.
The first light of the day is that of dawn, sunrise or aurora (sometimes the aurora, which would be the first brightness of the sky, is differentiated from dawn, which would correspond to sunrise). Until the 17th century, dawn appeared only in small pieces of landscape, usually behind a door or a window, but was never used to illuminate the foreground. The light of dawn generally has a spherical effect, so until the appearance of Leonardo's aerial perspective it was not widely used. In his "Dictionary of the Fine Arts of Design" (1797), Francesco Milizia states that:
<templatestyles src="Template:Blockquote/styles.css" />The dawn sweetly colors the extremity of the bodies, begins to dissipate the darkness of the night and the air still full of vapors leaves the objects wavering... But the sun has not yet appeared, therefore the shadows cannot be very sensitive. All the bodies must participate in the freshness of the air and remain in a kind of half-ink. [...] The background of the sky wants to be dark blue... so that the celestial vault stands out better and the origin of light appears: there the sky will be colored of a reddish-red incarnation from a certain height with alternating golden and silver bands, which will diminish in vivacity as they move away from the place from where the light comes out.For Milizia, the light of dawn was the most suitable for the representation of landscapes.
Noon and the hours immediately before and after have always been a stable frame for an objective representation of reality, although it is difficult to pinpoint the exact moment in most paintings depending on the different light intensities. On the other hand, the exact noon was discouraged by its extreme refulgence, to the point that Leonardo advised that:
<templatestyles src="Template:Blockquote/styles.css" />If you do it at noon, keep the window covered in such a way that the sun, illuminating it all day, does not change the situation.
Milizia also points out that:
<templatestyles src="Template:Blockquote/styles.css" />Can the painter imitate the brightness of midday that dazzles the eye? No; then let him not do so. If ever an event should be treated at noon, let the sun be hidden among clouds, trees, mountains and buildings, and let that star be pointed out by means of some rays that escape those obstacles. Let it be considered then that the bodies do not give shadows, or little, and that the colors, by the excessive vivacity of the light, appear less vivid than in the hours when the light is more attenuated.
Most art treatises advised the afternoon light, which was the most used especially from the Renaissance to the 18th century. Vasari advised to place the sun to the east because "the figure that is made has a great relief and great goodness and perfection is achieved".
In the early days of modern painting, the sunset used to be circumscribed to a celestial vault characterized by its reddish color, without an exact correspondence with the illumination of figures and objects. It was again with Leonardo that a more naturalistic study of twilight began, pointing out in his notes that:
<templatestyles src="Template:Blockquote/styles.css" />The reddening of the clouds, together with the reddening of the sun, makes everything that takes light from them redden; and the part of the bodies which is not seen that reddening remains of the color of the air, and whoever sees such bodies seems to him that they are of two colors; and from this you cannot escape since, showing the cause of such shadows and lights, you must make the shadows and lights participants of the said causes, otherwise your work is vain and false.For Milizia this moment is risky, since "the more splendid these accidents are (the flaming twilight is always an excess), the more they must be observed to represent them well".
Finally, the night has always been a singularity within painting, to the point of constituting a genre of its own: the nocturne. In these scenes the light comes from the Moon, the stars or from some type of artificial illumination (bonfires, torches, candles or, more recently, gas or electric light). The justification for a night scene has generally been given from iconographic themes occurring in this time period. In the 14th century painting began to move away from the symbolic and conceptual content of medieval art in search of a figurative content based on a more objective spatio-temporal axis. Renaissance artists were refractory to the nocturnal setting, since their experimentation in the field of linear perspective required an objective and stable frame in which full light was indispensable. Thus, Lorenzo Ghiberti stated that "it is not possible to be seen in darkness" and Leonardo wrote that "darkness means complete deprivation of light". Leonardo advised a night scene only with the illumination of a fire, as a mere artifice to make a night scene diurnal. However, Leonardo's "sfumato" opened a first door to a naturalistic representation of the night, thanks to the chromatic decrease in the distance in which the bluish white of Leonardo's luminous air can become a bluish black for the night: just as the first creates an effect of remoteness, the second provokes closeness, the dilution of the background in the gloom. This tendency will have its climax in baroque tenebrism, in which darkness is used to add drama to the scene and to emphasize certain parts of the painting, often with a symbolic aspect. On the other hand, in the 17th century the representation of the night acquired a more scientific character, especially thanks to the invention of the telescope by Galileo and a more detailed observation of the night sky. Finally, advances in artificial lighting in the 19th century boosted the conquest of nighttime, which became a time for leisure and entertainment, a circumstance that was especially captured by the Impressionists.
<templatestyles src="Template:Blockquote/styles.css" />All that of being a painter consists in distinguishing the light of each day of the week, more than in distinguishing colors. Who does not distinguish red from blue and yellow? But there are very few who distinguish the light of Sunday from that of Friday or Wednesday.
Symbology.
Light has had on numerous occasions throughout the history of painting an aesthetic component, which identifies light with beauty, as well as a symbolic meaning, especially related to religion, but also with knowledge, good, happiness and life, or in general the spiritual and immaterial. Sometimes the light of the Sun has been equated with inspiration and imagination, and that of the Moon with rational thought. In contrast, shadows and darkness represent evil, death, ignorance, immorality, misfortune or secrecy. Thus, many religions and philosophies throughout history have been based on the dichotomy between light and darkness, such as Ahura Mazda and Ahriman, yin and yang, angels and demons, spirit and matter, and so on. In general, light has been associated with the immaterial and spiritual, probably because of its ethereal and weightless aspect, and that association has often been extended to other concepts related to light, such as color, shadow, radiance, evanescence, etc.
The identification of light with a transcendent meaning comes from antiquity and probably existed in the minds of many artists and religious people before the idea was written down. In many ancient religions the deity was identified with light, such as the Semitic Baal, the Egyptian Ra or the Iranian Ahura Mazda. Primitive peoples already had a transcendental concept of light – the so-called "metaphor of light" – generally linked to immortality, which related the afterlife to starlight. Many cultures sketched a place of infinite light where the souls rested, a concept also picked up by Aristotle and various Fathers of the Church such as Saint Basil and Saint Augustine. On the other hand, many religious rites were based on "illumination" to purify the soul, from ancient Babylon to the Pythagoreans.
In Greek mythology Apollo was the god of the Sun and has often been depicted in art within a disk of light. On the other hand, Apollo was also the god of beauty and the arts, a clear symbolism between light and these two concepts. Also related to light is the goddess of dawn, Eos (Aurora in Roman mythology). In Ancient Greece, light was synonymous with life and was also related to beauty. Sometimes the fluctuation of light was related to emotional changes, as well as to intellectual capacity. On the other hand, the shadow had a negative component, it was related to the dark and hidden, to evil forces, such as the spectral shadows of Tartarus. The Greeks also related the sun to "intelligent light" ("φῶς νοετόν"), a driving principle of the movement of the universe, and Plato drew a parallel between light and knowledge.
The ancient Romans distinguished between "lux" (luminous source) and "lumen" (rays of light emanating from that source), terms they used according to the context: thus, for example, "lux gloriae" or "lux intelligibilis", or "lumen naturale" or "lumen gratiae".
In Christianity, God is also often associated with light, a tradition that goes back to the philosopher Pseudo-Dionysius Areopagite ("On the Celestial Hierarchy, On the Divine Names"), who adapted a similar one from Neoplatonism. For this 5th century author, "Light derives from Good and is the image of Goodness". Later, in the 9th century, John Scotus Erigena defined God as "the father of lights". Already the Bible begins with the phrase "let there be light" (Ge 1:3) and points out that "God saw that the light was good" (Ge 1:4). This "good" had in Hebrew a more ethical sense, but in its translation into Greek the term "καλός" ("kalós", "beautiful") was used, in the sense of "kalokagathía", which identified goodness and beauty; although later in the "Latin Vulgate" a more literal translation was made ("bonum" instead of "pulchrum"), it remained fixed in the Christian mentality the idea of the intrinsic beauty of the world as the work of the Creator. On the other hand, the Holy Scriptures identify light with God, and Jesus goes so far as to affirm: "I am the light of the world, he who follows me will not walk in darkness, for he will have the light of life" (John 8:12). This identification of light with divinity led to the incorporation in Christian churches of a lamp known as "eternal light", as well as the custom of lighting candles to remember the dead and various other rites.
Light is also present in other areas of the Christian religion: the Conception of Jesus in Mary is realized in the form of a ray of light, as seen in numerous representations of the Annunciation; likewise, it represents the Incarnation, as expressed by Pseudo-Saint Bernard: "as the splendor of the sun passes through glass without breaking it and penetrates its solidity in its impalpable subtlety, without opening it when it enters and without breaking it when it leaves, so the Word God penetrates Mary's womb and comes forth from her womb intact." This symbolism of light passing through glass is the same concept that was applied to Gothic stained glass, where light symbolizes divine omnipresence. Another symbolism related to light is that which identifies Jesus with the Sun and Mary as the Dawn that precedes him. In addition to all this, in Christianity light can also signify truth, virtue and salvation. In patristics, light is a symbol of eternity and the heavenly world: according to Saint Bernard, souls separated from the body will be "plunged into an immense ocean of eternal light and luminous eternity". On the other hand, in ancient Christianity, baptism was initially called "illumination".
In Orthodox Christianity, light is, more than a symbol, a "real aspect of divinity," according to Vladimir Lossky. A reality that can be apprehended by the human being, as expressed by Saint Simeon the New Theologian:
<templatestyles src="Template:Blockquote/styles.css" />[God] never appears as any image or figure, but shows himself in his simplicity, formed by light without form, incomprehensible, ineffable.
Because of the opposition of light and darkness, this element has also been used on occasions as a repeller of demons, so that light has often been represented in various acts and ceremonies such as circumcision, baptisms, weddings or funerals, in the form of candles or fires.
In Christian iconography, light is also present in the halos of the saints, which used to be made – especially in medieval art – with a golden nimbus, a circle of light placed around the heads of saints, angels and members of the Holy Family. In Fra Angelico's "The Annunciation", in addition to the halo, the artist placed rays of light radiating from the figure of the archangel Gabriel, to emphasize his divinity, the same resource he uses with the dove symbolizing the Holy Spirit. On other occasions, it is God himself who is represented in the form of rays of sunlight, as in "The Baptism of Christ" (1445) by Piero della Francesca. The rays can also signify God's wrath, as in "The Tempest" (1505) by Giorgione. On other occasions light represents eternity or divinity: in the vanitas genre, beams of light used to focus on objects whose transience was to be emphasized as a symbol of the ephemerality of life, as in "Vanities" (1645) by Harmen Steenwijck, where a powerful beam of light illuminates the skull in the center of the painting.
Between the 14th and 15th centuries Italian painters used supernatural-looking lights in night scenes to depict miracles: for example, in the "Annunciation to the Shepherds" by Taddeo Gaddi (Santa Croce, Florence) or in the "Stigmatization of Saint Francis" by Gentile da Fabriano (1420, private collection). In the 16th century, supernatural lights with brilliant effects were also used to point out miraculous events, as in Matthias Grünewald's "Risen Christ" (1512-1516, Isenheim altar, Museum Unterlinden, Colmar) or in Titian's "Annunciation" (1564, San Salvatore, Venice). In the following century, Rembrandt and Caravaggio identified light in their works with divine grace and as an agent of action against evil. The Baroque was the period in which light became more symbolic: in medieval art the luminosity of the backgrounds, of the halos of the saints and other objects – generally made with gold leaf – was an attribute that did not correspond to real luminosity, while in the Renaissance it responded more to a desire for experimentation and aesthetic delight; Rembrandt was the first to combine both concepts, the divine light is a real, sensory light, but with a strong symbolic charge, an instrument of revelation.
Between the 17th and 18th centuries, mystical theories of light were abandoned as philosophical rationalism gained ground. From transcendental or divine light, a new symbolism of light evolved that identified it with concepts such as knowledge, goodness or rebirth, and opposed it to ignorance, evil and death. Descartes spoke of an "inner light" capable of capturing the "eternal truths", a concept also taken up by Leibniz, who distinguished between "lumière naturelle" (natural light) and "lumière révélée" (revealed light).
In the 19th century light was related by the German Romantics (Friedrich Schlegel, Friedrich Schelling, Georg Wilhelm Friedrich Hegel) to nature, in a pantheistic sense of communion with nature. For Schelling, light was a medium in which the "universal soul" (Weltseele) moved. For Hegel, light was the "ideality of matter", the foundation of the material world.
Between the 19th and 20th centuries, a more scientific view of light prevailed. Science had been trying to unravel the nature of light since the early Modern Age, with two main theories: the corpuscular theory, defended by Descartes and Newton; and the wave theory, defended by Christiaan Huygens, Thomas Young and Augustin-Jean Fresnel. Later, James Clerk Maxwell presented an electromagnetic theory of light. Finally, Albert Einstein brought together the corpuscular and wave theories.
Light can also have a symbolic character in landscape painting: in general, dawn and the passage from night to day represent the divine plan – or cosmic system – that transcends the simple will of the human being; dawn also symbolizes the renewal and redemption of Christ. On other occasions, the sun and the moon have been associated with various vital forces: thus, the sun and the day are associated with the masculine, the vital force and energy; and the moon and the night with the feminine, rest, sleep and spirituality, sometimes even death.
In other religions light also has a transcendent meaning: in Buddhism it represents truth and the overcoming of matter in the ascent to nirvana. In Hinduism it is synonymous with wisdom and the spiritual understanding of participation with divinity ("atman"); it is also the manifestation of Krishna, the "Lord of Light". In Islam it is the sacred name "Nûr". According to the Koran (24:35), "Allah is the light of the heavens and the earth. Light upon light! Allah guides to his light whomever he wills". In the Zohar of the Jewish Kabbalah the primordial light Or (or Awr) appears, and it is said that the universe is divided between the empires of light and darkness; also in Jewish synagogues there is usually a lamp of "eternal light" or ner tamid. Finally, in Freemasonry, the search for light is considered the ascent to the various Masonic degrees; some of the Masonic symbols, such as the square, the compasses and the holy book, are called "great lights"; also the principal Masonic officials are called "lights". On the other hand, initiation into Freemasonry is called "receiving the light".
<templatestyles src="Template:Blockquote/styles.css" />Light is the most joyful of things: it is the symbol of all that is good and wholesome. In all religions it signifies eternal salvation.
History.
The use of light is intrinsic to painting, so it has been present directly or indirectly since prehistoric times, when cave paintings sought light and relief effects by taking advantage of the roughness of the walls where these scenes were represented. However, serious attempts at greater experimentation in the technical representation of light did not take place until classical Greco-Roman art: Francisco Pacheco, in "El arte de la pintura" (1649), points out that "adumbration was invented by Saurias the Samian, covering or staining the shadow of a horse, looked at in the sunlight". On the other hand, Apollodorus of Athens is credited with the invention of chiaroscuro, a procedure of contrast between light and shadow to produce effects of luminous reality in a two-dimensional representation such as painting. The effects of light and shadow were also developed by Greek scenographers in a technique called "skiagraphia", based on the contrast between black and white, to the point that its practitioners were called "shadow painters".
The first scientific studies on light also emerged in Greece: Aristotle stated in relation to colors that they are "mixtures of different forces of sunlight and the light of fire, air and water", as well as that "darkness is due to the deprivation of light". One of the most famous Greek painters was Apelles, one of the pioneers in the representation of light in painting. Pliny said of Apelles that he was the only one who "painted what cannot be painted, thunder, lightning and thunderbolts". Another outstanding painter was Nicias of Athens, of whom Pliny praised the "care he took with light and shade to achieve the appearance of relief".
With the emergence of landscape painting, a new method was developed to represent distance through gradations of light and shadow, contrasting more strongly the plane closest to the viewer and progressively blurring with distance. These early landscape painters created the modeling through shades of light and shadow, without mixing the colors in the palette. Claudius Ptolemy explained in his "Optics" how painters created the illusion of depth through distances that seemed "veiled by air". In general, the strongest contrasts were made in the areas closest to the observer and progressively reduced towards the background. This technique was picked up by early Christian and Byzantine art, as seen in the apsidal mosaic of Sant'Apollinare in Classe, and even reached as far as India, as seen in the Buddhist murals of Ajantā.
In the 6th century the philosopher John Philoponus, in his commentary on Aristotle's Meteorology, outlined a theory on the subjective effect of light and shadow in painting, known today as "Philoponus' rule":
If we apply black and white on the same surface and then look at them from a distance, the white will always appear much closer and the black much farther away. So when painters want something to look hollow, like a well, a cistern, a ditch or a cave, they paint it black or brown. But when they want something to appear prominent, such as a girl's breasts, an outstretched hand or a horse's legs, they apply black over the adjoining areas so that they appear to recede and the parts in between appear to come forward.
This effect was already known empirically by ancient painters. Cicero was of the opinion that painters saw more than normal people "in umbris et eminentia" ("in shadows and eminences"), that is, depth and protrusion. And Pseudo-Longinus – in his work "On the Sublime" – said that "although the colors of shadow and light are on the same plane, side by side, the light jumps immediately into view and seems not only to stand out but actually to be closer."
Hellenistic art was fond of light effects, especially in landscape painting, as seen in the "stuccoes" of "La Farnesina". Chiaroscuro was widely used in Roman painting, as in the illusory architectures of the frescoes of Pompeii, although it disappeared during the Middle Ages. Vitruvius recommended northern light as the most suitable for painting, being more constant owing to its low mutability in tone. Later, in early Christian art, the taste for contrasts between light and shadow became evident – as can be seen in Christian sepulchral paintings and in the mosaics of Santa Pudenziana and Santa Maria Maggiore – in such a way that this style has sometimes been called "ancient impressionism".
Byzantine art inherited the use of illusionistic touches of light from Pompeian art, but whereas in the original their main function was naturalistic, here they became a rhetorical formula far removed from the representation of reality. In Byzantine art, as well as in Romanesque art, which it powerfully influenced, the luminosity and splendor of shines and reflections, especially of gold and precious stones, were more valued, with a component that was more aesthetic than pictorial, since these gleams were synonymous with beauty, of a type of beauty more spiritual than material. These gleams were identified with the divine light, as Abbot Suger did to justify his expenditure on jewels and precious materials.
Both Greek and Roman art laid the foundations of the style known as classicism, whose main premises are truthfulness, proportion and harmony. Classicist painting is fundamentally based on drawing as a preliminary design tool, on which the pigment is applied taking into account a correct proportion of chromaticism and shading. These precepts laid the foundations of a way of understanding art that has lasted throughout history, with a series of cyclical ups and downs that have been followed to a greater or lesser extent: some of the periods in which the classical canons have been returned to were the Renaissance, Baroque classicism, neoclassicism and academicism.
Medieval art.
The art historian Wolfgang Schöne divided the history of painting in terms of light into two periods: "proper light" ("eigenlicht"), which would correspond to medieval art; and "illuminating light" ("beleuchtungslicht"), which would develop in modern and contemporary art ("Über das Licht in der Malerei", Berlin, 1979).
In the Middle Ages, light had a strong symbolic component in art, since it was considered a reflection of divinity. Within medieval scholastic philosophy, a current called the aesthetics of light emerged, which identified light with divine beauty, and greatly influenced medieval art, especially Gothic art: the new Gothic cathedrals were brighter, with large windows that flooded the interior space, which was indefinite, without limits, as a concretion of an absolute, infinite beauty. The introduction of new architectural elements such as the pointed arch and the ribbed vault, together with the use of buttresses and flying buttresses to support the weight of the building, allowed the opening of windows covered with stained glass that filled the interior with light, which gained in transparency and luminosity. These stained-glass windows allowed the light that entered through them to be nuanced, creating fantastic plays of light and color, fluctuating at different times of the day, which were reflected in a harmonious way in the interior of the buildings.
Light was associated with divinity, but also with beauty and perfection: according to Saint Bonaventure ("De Intelligentii"), the perfection of a body depends on its luminosity ("perfectio omnium eorum quae sunt in ordine universo, est lux"). William of Auxerre ("Summa Aurea") also related beauty and light, so that a body is more or less beautiful according to its degree of radiance. This new aesthetic ran parallel, at many points, to scientific advances in subjects such as optics and the physics of light, especially thanks to the studies of Roger Bacon. The works of Alhacen were also known at this time, and would later be taken up by Witelo in "De perspectiva" (ca. 1270–1278) and Adam Pulchrae Mulieris in "Liber intelligentiis" (ca. 1230).
The new prominence given to light in medieval times had a powerful influence on all artistic genres, to the point that Daniel Boorstin points out that "it was the power of light that produced the most modern artistic forms, because light, the almost instantaneous messenger of sensation, is the swiftest and most transitory element". In addition to architecture, light had a special influence on the miniature, with manuscripts illuminated with bright and brilliant colors, generally thanks to the use of pure colors (white, red, blue, green, gold and silver), which gave the image a great luminosity, without shading or chiaroscuro. The combination of these elementary colors generates light through their overall concordance, thanks to the juxtaposition of the inks, without having to resort to shading effects to outline the contours. The light radiates from the objects, which are luminous without the need for the play of volumes that would be characteristic of modern painting. In particular, the use of gold in medieval miniatures generated areas of great light intensity, often contrasted with cold and light tones, to provide greater chromaticism.
However, in painting, light did not have the prominence it had in architecture: medieval "proper light" was alien to reality and without contact with the spectator, since it neither came from outside – lacking a light source – nor went outward, since it did not radiate light. Chiaroscuro was not used, since shadow was forbidden as it was considered a refuge for evil. Light was considered of divine origin and conqueror of darkness, so it illuminated everything equally, with the consequent lack of modeling and volume in the objects, a fact that resulted in the weightless and incorporeal image that was sought to emphasize spirituality. Although there is a greater interest in the representation of light, it is more symbolic than naturalistic. Just as in architecture the stained glass windows created a space where illumination took on a transcendent character, in painting a spatial staging was developed through gold backgrounds, which, although they did not represent a physical space, did evoke a metaphysical realm linked to the sacred. This "Gothic light" was a feigned illumination and created a type of unreal image that transcended mere nature.
The "unnatural" light of Gothic art is also presented as the bearer of a world of images of great figurative opulence, whose power acts with extraordinary force on the soul of man.
The gold background reinforced the sacred symbolism of light: the figures are immersed in an indeterminate space of unnatural light, a scenario of sacred character where figures and objects are part of the religious symbolism. Cennino Cennini, in "Il libro dell'Arte", compiled various technical procedures for the use of gold leaf in painting (backgrounds, draperies, nimbuses), which remained in force until the 16th century. Gold leaf was used profusely, especially in halos and backgrounds, as can be seen in Duccio's Maestà, which shone brightly in the interior of the cathedral of Siena. Sometimes, before applying the gold leaf, a layer of red clay was spread; after wetting the surface and placing the gold leaf, it was smoothed and polished with ivory or a smooth stone. To achieve more brilliance and to catch the light, incisions were made in the gilding. It is noteworthy that in early Gothic painting there are no shadows; the entire representation is uniformly illuminated. According to Hans Jantzen, "to the extent that medieval painting suppresses the shadow, it raises its sensitive light to the power of a super-sensible light".
In Gothic painting there is a progressive evolution in the use of light: the Linear or Franco-Gothic style was characterized by linear drawing and strong chromaticism, and gave greater importance to the luminosity of flat color than to tonality, emphasizing chromatic pigment as opposed to luminous gradation. With the Italic or Trecentist Gothic a more naturalistic use of light began, characterized by the approach to the representation of depth – which would crystallize in the Renaissance with linear perspective – studies of anatomy and the analysis of light to achieve tonal nuance, as seen in the work of Cimabue, Giotto, Duccio, Simone Martini, and Ambrogio Lorenzetti. In the Flemish Gothic period, the technique of oil painting emerged, which provided brighter colors and allowed their gradation in different chromatic ranges, while facilitating greater precision of detail (Jan van Eyck, Rogier van der Weyden, Hans Memling, Gerard David).
Between the 13th and 14th centuries a new sensibility towards a more naturalistic representation of reality emerged in Italy, which had as one of its contributing factors the study of a realistic light in the pictorial composition. In the frescoes of the Scrovegni Chapel (Padua), Giotto studied how to distinguish flat and curved surfaces by the presence or absence of gradients and how to distinguish the orientation of flat surfaces by three tones: lighter for horizontal surfaces, medium for frontal vertical surfaces and darker for receding vertical surfaces. Giotto was the first painter to represent sunlight, a type of soft, transparent illumination, but one that already served to model figures and enhance the quality of clothes and objects. For his part, Taddeo Gaddi – in his "Annunciation to the Shepherds" (Baroncelli Chapel, Santa Croce, Florence) – depicted divine light in a night scene with a visible light source and a rapid fall in the pattern of light distribution characteristic of point sources of light, through contrasts of yellow and violet.
In the Netherlands, the brothers Hubert and Jan van Eyck and Robert Campin sought to capture various plays of light on surfaces of different textures and sheen, imitating the reflections of light on mirrors and metallic surfaces and highlighting the brilliance of colored jewels and gems ("Triptych of Mérode", by Campin, 1425–1428; the Ghent Polyptych, by Hubert and Jan van Eyck, 1432). Hubert was the first to develop a certain sense of saturation of light in his "Hours of Turin" (1414–1417), in which he recreated the first "modern landscapes" of Western painting, according to Kenneth Clark. In these small landscapes the artist recreates effects such as the reflection of the evening sky on the water or the light sparkling on the waves of a lake, effects that would not be seen again until the Dutch landscape painting of the 17th century. In the Ghent Polyptych (1432, Saint Bavo's Cathedral, Ghent), by Hubert and Jan, the landscape of "The Adoration of the Mystic Lamb" melts into light in the celestial background, with a subtlety that only the Baroque painter Claude Lorrain would later achieve.
Jan van Eyck developed the light experiments of his brother and managed to capture an atmospheric luminosity of naturalistic aspect in paintings such as "The Virgin of Chancellor Rolin" (1435, Louvre Museum, Paris) or "The Arnolfini Marriage" (1434, The National Gallery, London), where he combines the natural light that enters through two side windows with that of a single lit candle in the chandelier, which has a more symbolic than plastic value, as an emblem of human life. In Van Eyck's workshop, oil painting was developed, which gave a greater luminosity to the painting thanks to the glazes: in general, they applied a first layer of tempera, more opaque, on which they applied the oil (pigments ground in oil), which is more transparent, in several thin layers that let the light pass through, achieving greater luminosity, depth and tonal and chromatic richness.
Other Netherlandish artists who stood out in the expression of light were Dirk Bouts, who in his works enhances with light the coloring and, in general, the plastic sense of the composition; Petrus Christus, whose use of light approaches a certain abstraction of forms; and Geertgen tot Sint Jans, author of surprising light effects in some of his works, as in his "Nativity" (1490, National Gallery, London), where the light emanates from the body of the Child Jesus in the cradle, symbol of Divine Grace.
Modern Age Art.
Renaissance.
The art of the Modern Age – not to be confused with modern art, which is often used as a synonym for contemporary art – began with the Renaissance, which emerged in Italy in the 15th century ("Quattrocento"), a style influenced by classical Greco-Roman art and inspired by nature, with a more rational and measured component, based on harmony and proportion. Linear perspective emerged as a new method of composition and light became more naturalistic, with an empirical study of physical reality. Renaissance culture meant a return to rationalism, the study of nature, empirical research, with a special influence of classical Greco-Roman philosophy. Theology took a back seat and the object of study of the philosopher returned to the human being (humanism).
In the Renaissance, the use of canvas as a support and the technique of oil painting became widespread, especially in Venice from 1460. Oil painting provided a greater chromatic richness and facilitated the representation of brightness and light effects, which could be represented in a wider range of shades. In general, Renaissance light tended to be intense in the foreground, diminishing progressively towards the background. It was a fixed lighting, which meant an abstraction with respect to reality, since it created an aseptic space subordinated to the idealizing character of Renaissance painting; to reconvert this ideal space into a real atmosphere, a slow process was followed based on the subordination of volumetric values to lighting effects, through the dissolution of the solidity of forms in the luminous space.
During this period, chiaroscuro was recovered as a method to give relief to objects, while the study of gradation as a technique to diminish the intensity of color and modeling to graduate the different values of light and shadow was deepened. Renaissance natural light not only determined the space of the pictorial composition, but also the volume of figures and objects. It is a light that loses the metaphorical character of Gothic light and becomes a tool for measuring and ordering reality, shaping a plastic space through a naturalistic representation of light effects. Even when light retains a metaphorical reference – in religious scenes – it is a light subordinated to the realistic composition.
Light had a special relevance in landscape painting, a genre in which it signified the transition from a symbolic representation in medieval art to a naturalistic transcription of reality. Light is the medium that unifies all parts of the composition into a structured and coherent whole. According to Kenneth Clark, "the sun shines for the first time in the landscape of the "Flight into Egypt" that Gentile da Fabriano painted in his "Adoration" of 1423". This sun is a golden disk, which is reminiscent of medieval symbolism, but its light is already fully naturalistic, spilling over the hillside, casting shadows and creating the compositional space of the image.
In the Renaissance, the first theoretical treatises on the representation of light in painting appeared: Leonardo da Vinci dedicated a good part of his "Treatise on Painting" to the scientific study of light. Albrecht Dürer investigated a mathematical procedure to determine the location of shadows cast by objects illuminated by point source lights, such as candlelight. Giovanni Paolo Lomazzo devoted the fourth book of his "Trattato" (1584) to light, in which he arranged light in descending order from primary sunlight, divine light and artificial light to the weaker secondary light reflected by illuminated bodies. Cennino Cennini took up in his treatise "Il libro dell'arte" the rule of Philoponus on the creation of distance by contrasts: "the farther away you want the mountains to appear, the darker you will make your color; and the closer you want them to appear, the lighter you will make the colors".
Another theoretical reference was Leon Battista Alberti, who in his treatise "De pictura" (1435) pointed out the indissolubility of light and color, and affirmed that "philosophers say that no object is visible if it is not illuminated and has no color. Therefore they affirm that between light and color there is a great interdependence, since they make themselves reciprocally visible". In his treatise, Alberti pointed out three fundamental concepts in painting: "circumscriptio" (drawing, outline), "compositio" (arrangement of the elements), and "luminum receptio" (illumination). He stated that color is a quality of light and that to color is to "give light" to a painting. Alberti pointed out that relief in painting was achieved by the effects of light and shadow ("lumina et umbrae"), and warned that "on the surface on which the rays of light fall the color is lighter and more luminous, and that the color becomes darker where the strength of the light gradually diminishes." Likewise, he spoke of the use of white as the main tool for creating brilliance: "the painter has nothing but white pigment ("album colorem") to imitate the flash ("fulgorem") of the most polished surfaces, just as he has nothing but black to represent the most extreme darkness of the night". Thus, the darker the general tone of the painting, the more possibilities the artist has to create light effects, as they will stand out more.
Alberti's theories greatly influenced Florentine painting in the mid-15th century, so much so that this style is sometimes called "pittura di luce" (light painting), represented by Domenico Veneziano, Fra Angelico, Paolo Uccello, Andrea del Castagno and the early works of Piero della Francesca.
Domenico Veneziano, who as his name indicates was originally from Venice but settled in Florence, was the introducer of a style based more on color than on line. In one of his masterpieces, "The Virgin and Child with Saint Francis, Saint John the Baptist, Saint Zenobius and Saint Lucy" (c. 1445, Uffizi, Florence), he achieved a believably naturalistic representation by combining the new techniques of representing light and space. The solidity of the forms is firmly based on the light-shadow modeling, but the image also has a serene and radiant atmosphere that comes from the clear sunlight that floods the courtyard where the scene takes place, one of the stylistic hallmarks of this artist.
Fra Angelico synthesized the symbolism of the spiritual light of medieval Christianity with the naturalism of Renaissance scientific light. He knew how to distinguish between the light of dawn, noon and twilight, a diffuse and non-contrasting light, like an eternal spring, which gives his works an aura of serenity and placidity that reflects his inner spirituality. In "Scenes from the Life of Saint Nicholas" (1437, Pinacoteca Vaticana, Rome) he applied Alberti's method of balancing illuminated and shaded halves, especially in the figure with his back turned and the mountainous background.
Uccello was also a great innovator in the field of pictorial lighting: in his works – such as "The Battle of San Romano" (1456, Musée du Louvre, Paris) – each object is conceived independently, with its own lighting that defines its corporeality, in conjunction with the geometric values that determine its volume. These objects are grouped together in a scenographic composition, with a type of artificial lighting reminiscent of that of the performing arts.
In turn, Piero della Francesca used light as the main element of spatial definition, establishing a system of volumetric composition in which even the figures are reduced to mere geometric outlines, as in "The Baptism of Christ" (1440-1445, The National Gallery, London). According to Giulio Carlo Argan, Piero did not consider "a transmission of light, but a fixation of light", which turns the figures into references of a certain definition of space. He carried out scientific studies of perspective and optics ("De prospectiva pingendi") and in his works, full of a colorful luminosity of great beauty, he uses light as both an expressive and symbolic element, as can be seen in his frescoes of San Francesco in Arezzo. Della Francesca was one of the first modern artists to paint night scenes, such as "The Dream of Constantine" (from the "Legend of the True Cross" cycle, 1452–1466, San Francesco, Arezzo). He cleverly assimilated the luminism of the Flemish school, which he combined with Florentine spatialism: in some of his landscapes there are luminous moonscapes reminiscent of the Van Eyck brothers, although transcribed with the golden Mediterranean light of his native Umbria.
Masaccio was a pioneer in using light to emphasize the drama of the scene, as seen in his frescoes in the Brancacci chapel of Santa Maria del Carmine (Florence), where he uses light to configure and model the volume, while the combination of light and shadow serves to determine the space. In these frescoes, Masaccio achieved a sense of perspective without resorting to geometry, as would be usual in linear perspective, but by distributing light among the figures and other elements of the representation. In "The Tribute Money", for example, he placed a light source outside the painting that illuminates the figures obliquely, casting shadows on the ground with which the artist plays.
Straddling the Gothic and Renaissance periods, Gentile da Fabriano was also a pioneer in the naturalistic use of light: in the predella of "The Adoration of the Magi" (1423, Uffizi, Florence) he distinguished between natural, artificial and supernatural light sources, using a technique of gold leaf and graphite to create the illusion of light through tonal modeling.
Sandro Botticelli, a painter of lingering Gothic sensibility, moved away from the naturalistic style initiated by Masaccio and returned to a certain symbolic concept of light. In "The Birth of Venus" (1483-1485, Uffizi, Florence), he symbolized the dichotomy between matter and spirit with the contrast between light and darkness, in line with the Neoplatonic theories of the Florentine Academy of which he was a follower: on the left side of the painting the light corresponds to the dawn, both physical and symbolic, since the female character embracing Zephyrus is Aurora, the goddess of dawn; on the right side, darker, are the earth and the forest, as metaphorical elements of matter, while the character who holds out a mantle to Venus is the Hour, who personifies time. Venus is in the center, between day and night, between sea and land, between the divine and the human.
A remarkable pictorial school emerged in Venice, characterized by the use of canvas and oil painting, where light played a fundamental role in the structuring of forms, while great importance was given to color: chromaticism would be the main hallmark of this school, as it would be in the 16th century with Mannerism. Its main representatives were Carlo Crivelli, Antonello da Messina, and Giovanni Bellini. In the "Altarpiece of Saint Job" (c. 1485, Gallerie dell'Accademia, Venice), Bellini brought together for the first time the Florentine linear perspective with Venetian color, combining space and atmosphere, and made the most of the new oil technique initiated in Flanders, thus creating a new artistic language that was quickly imitated. According to Kenneth Clark, Bellini "was born with the landscape painter's greatest gift: emotional sensitivity to light". In his "Christ on the Mount of Olives" (1459, National Gallery, London) he made the effects of light the driving force of the painting, with a shadowy valley in which the rising sun peeks through the hills. This emotive light is also seen in his "Resurrection" at the Staatliche Museen in Berlin (1475-1479), where the figure of Jesus radiates a light that bathes the sleeping soldiers. While his early works are dominated by sunrises and sunsets, in his mature production he came to prefer the full light of day, in which the forms merge with the general atmosphere. However, he also knew how to take advantage of the cold and pale lights of winter, as in the "Virgin of the Meadow" (1505, National Gallery, London), where a pale sun struggles with the shadows of the foreground, creating a fleeting effect of marble light.
The Renaissance saw the emergence of the "sfumato" technique, traditionally attributed to Leonardo da Vinci, which consisted of the degradation of light tones to blur the contours and thus give a sense of remoteness. This technique was intended to give greater verisimilitude to the pictorial representation, by creating effects similar to those of human vision in environments with a wide perspective. The technique consisted of a progressive application of glazes and the feathering of the shadows to achieve a smooth gradient between the various parts of light and shadow of the painting, with a tonal gradation achieved with progressive retouching, leaving no trace of the brushstroke. It is also called "aerial perspective", since its results resemble the vision in a natural environment determined by atmospheric and environmental effects. This technique was used, in addition to Leonardo, by Dürer, Giorgione and Bernardino Luini, and later by Velázquez and other Baroque painters.
Leonardo was essentially concerned with perception, the observation of nature. He sought life in painting, which he found in color, in the light of chromaticism. In his "Treatise on Painting" (1540) he stated that painting is the sum of light and darkness ("chiaroscuro"), which gives movement, life: according to Leonardo, darkness is the body and light is the spirit, and the mixture of both is life. In his treatise he established that "painting is a composition of light and shadows, combined with the various qualities of all the simple and compound colors". He also distinguished between illumination ("lume") and brilliance ("lustro"), and warned that "opaque bodies with hard and rough surface never generate luster in any illuminated part".
The Florentine polymath included light among the main components of painting and pointed it out as an element that articulates pictorial representation and conditions the spatial structure and the volume and chromaticism of objects and figures. He was also concerned with the study of shadows and their effects, which he analyzed together with light in his treatise. He also distinguished between shadow ("ombra") and darkness ("tenebre"), the former being an oscillation between light and darkness. He also studied nocturnal painting, for which he recommended the presence of fire as a means of illumination, and he wrote down the different necessary gradations of light and color according to the distance from the light source. Leonardo was one of the first artists to be concerned with the degree of illumination of the painter's studio, suggesting that for nudes or carnations the studio should have uncovered lights and red walls, while for portraits the walls should be black and the light diffused by a canopy.
Leonardo's subtle chiaroscuro effects are perceived in his female portraits, in which the shadows fall on the faces as if submerging them in a subtle and mysterious atmosphere. In these works he advocated intermediate lights, stating that "the contours and figures of dark bodies are poorly distinguished in the dark as well as in the light, but in the intermediate zones between light and shadow they are better perceived". Likewise, on color he wrote that "colors placed in shadows will participate to a greater or lesser degree in their natural beauty according as they are placed in greater or lesser darkness. But if the colors are placed in a luminous space, then they will possess a beauty all the greater the more splendorous the luminosity".
Look at the light and consider its beauty. Blink and look at it again: what you now see of the light was not there before and what was there before no longer exists.
The other great name of the early "Cinquecento" was Raphael, a serene and balanced artist whose work shows a certain idealism framed in a realistic technique of great virtuoso execution. According to Giovanni Paolo Lomazzo, Raphael "has given enchanting, loving and sweet light, so that his figures appear beautiful, pleasing and intricate in their contours, and endowed with such relief that they seem to move." Some of his lighting solutions were quite innovative, with resources halfway between Leonardo and Caravaggio, as seen in "The Transfiguration" (1517-1520, Vatican Museums, Vatican City), in which he divides the image into two halves, the heavenly and the earthly, each with different pictorial resources. In the "Liberation of Saint Peter" (1514, Vatican Museums, Vatican City) he painted a nocturnal scene in which the light radiating from the angel in the center stands out, giving a sensation of depth, while at the same time it is reflected in the breastplates of the guards, creating intense luminous effects. This was perhaps the first work to include artificial lighting with a naturalistic sense: the light radiating from the angel influences the illumination of the surrounding objects, while diluting the distant forms.
Outside Italy, Albrecht Dürer was especially concerned with light in his watercolor landscapes, treated with an almost topographical detail, in which he shows a special delicacy in the capture of light, with poetic effects that prefigure the sentimental landscape of Romanticism. Albrecht Altdorfer showed a surprising use of light in "The Battle of Alexander at Issos" (1529, Alte Pinakothek, Munich), where the appearance of the sun among the clouds produces a supernatural refulgence, with churning effects of light that also anticipate Romanticism. Matthias Grünewald was a solitary and melancholic artist, whose original work reflects a certain mysticism in the treatment of religious themes, with an emotive and expressionist style, still with medieval roots. His main work was the Isenheim Altarpiece (1512-1516, Museum Unterlinden, Colmar), notable for the refulgent halo surrounding his "Risen Christ".
Between Gothic and Renaissance is the unclassifiable work of Bosch, a Flemish artist gifted with a great imagination, author of dreamlike images that continue to surprise for their fantasy and originality. In his works – and especially in his landscape backgrounds – there is a great skill in the use of light in different temporal and environmental circumstances, but he also knew how to recreate in his infernal scenes fantastic effects of flames and fires, as well as supernatural lights and other original effects, especially in works such as "The Last Judgment" (c. 1486–1510, Groeninge Museum, Bruges), "Visions of the Beyond" (c. 1490, Doge's Palace, Venice), "The Garden of Earthly Delights" (c. 1500–1505, Museo del Prado, Madrid), "The Hay Chariot" (c. 1500–1502, Museo del Prado, Madrid) or "The Temptations of Saint Anthony" (c. 1501, Museum of Fine Arts, Lisbon). Bosch had a predilection for the effects of light generated by fire, by the glow of flames, which gave rise to a new series of paintings in which the effects of violent and fantastic lights originated by fire stood out, as seen in a work by an anonymous artist linked to the workshop of Lucas van Leyden, "Lot and his Daughters" (c. 1530, Musée du Louvre, Paris), or in some works by Joachim Patinir, such as "Charon Crossing the Styx" (c. 1520–1524, Museo del Prado, Madrid) or "Landscape with the Destruction of Sodom and Gomorrah" (c. 1520, Boymans Van Beuningen Museum, Rotterdam). These effects also influenced Giorgione, as well as some Mannerist painters such as Lorenzo Lotto, Dosso Dossi and Domenico Beccafumi.
Mannerism.
At the end of the High Renaissance, in the middle of the 16th century, Mannerism followed, a movement that abandoned nature as a source of inspiration to seek a more emotional and expressive tone, in which the artist's subjective interpretation of the work of art became more important, with a taste for sinuous and stylized form, with deformation of reality, distorted perspectives and showy atmospheres. In this style light was used in an artificial, theatrical way, with an unreal treatment, seeking a colored light of different origins, both the cold light of the moon and the warm light of fire. Mannerism broke with the full Renaissance light by introducing night scenes with intense chromatic interplay between light and shadow and a dynamic rhythm far from Renaissance harmony. Mannerist light, in contrast to Renaissance classicism, took on a more expressive function, with a natural origin but an unreal treatment, a disarticulating factor of the classicist balance, as seen in the work of Pontormo, Rosso or Beccafumi.
In Mannerism, the Renaissance optical scheme of light and shadow was broken by suppressing the visual relationship between the light source and the illuminated parts of the painting, as well as in the intermediate steps of gradation. The result was strong contrasts of color and chiaroscuro, and an artificial and refulgent aspect of the illuminated parts, independent of the light source.
Between Renaissance classicism and Mannerism lies the work of Michelangelo, one of the most renowned artists of universal stature. His use of light was generally with plastic criteria, but sometimes he used it as a dramatic resource, especially in his frescoes in the Pauline Chapel: "Crucifixion of Saint Peter" and "Conversion of Saint Paul" (1549). Placed on opposite walls, the artist valued the entry of natural light into the chapel, which illuminated one wall and left the other in semi-darkness: in the darkest part he placed the "Crucifixion", a subject more suitable for the absence of light, which emphasizes the tragedy of the scene, intensified in its symbolic aspect by the fading light of dusk that is perceived on the horizon; instead, the "Conversion" receives natural light, but at the same time the pictorial composition has more luminosity, especially for the powerful ray of light that comes from the hand of Christ and is projected on the figure of Saul, who thanks to this divine intervention is converted to Christianity.
Another reference of Mannerism was Correggio, the first artist – according to Vasari – to apply a dark tone in contrast to light to produce effects of depth, while masterfully developing the Leonardesque "sfumato" through diffuse lights and gradients. In his work "The Nativity" (1522, Gemäldegalerie Alte Meister, Dresden) he was the first to show the birth of Jesus as a "miracle of light", an assimilation that would become habitual from then on. In "The Assumption of the Virgin" (1526-1530), painted on the dome of the cathedral of Parma, he created an illusionistic effect with figures seen from below ("sotto in sù") that would be the forerunner of Baroque optical illusionism; in this work the subtle nuances of his flesh tones stand out, as well as the luminous burst of glory in its upper part.
Jacopo Pontormo, a disciple of Leonardo, developed a strongly emotional, dynamic style with unreal effects of space and scale, in which a great mastery of color and light can be glimpsed, applied in stains of color, especially red. Domenico Beccafumi stood out for his colorism, fantasy and unusual light effects, as in "The Birth of the Virgin" (1543, Pinacoteca Nazionale di Siena). Rosso Fiorentino also developed an unusual coloring and fanciful play of light and shadow, as in his "Descent from the Cross" (1521, Pinacoteca Comunale, Volterra). Luca Cambiaso showed a great interest in nocturnal illumination, which is why he is considered a forerunner of tenebrism. Bernardino Luini, a follower of Leonardo, showed a Leonardesque treatment of light in the "Madonna of the Rosebush" (c. 1525–1530, Pinacoteca di Brera).
Alongside this more whimsical mannerism, a school of a more serene style emerged in Venice that stood out for its treatment of light, which subordinated plastic form to luminous values, as can be seen in the work of Giorgione, Titian, Tintoretto and Veronese. In this school, light and color were fused, and Renaissance linear perspective was replaced by aerial perspective, the use of which would culminate in the Baroque. The technique used by these Venetian painters is called "tonalism": it consisted in the superimposition of glazes to form the image through the modulation of color and light, which are harmonized through relations of tone modulating them in a space of plausible appearance. The color assumes the function of light and shadow, and it is the chromatic relationships that create the effects of volume. In this modality, the chromatic tone depends on the intensity of light and shadow (the color value).
Giorgione brought the Leonardesque influence to Venice. He was an original artist, one of the first to specialize in cabinet paintings for private collectors, and the first to subordinate the subject of the work to the evocation of moods. Vasari considered him, together with Leonardo, one of the founders of "modern painting". A great innovator, he reformulated landscape painting both in composition and iconography, with images conceived in depth with a careful modulation of chromatic and light values, as is evident in one of his masterpieces, "The Tempest" (1508, Gallerie dell'Accademia, Venice).
Titian was a virtuoso in the recreation of vibrant atmospheres with subtle shades of light achieved with infinite variations obtained after a meticulous study of reality and a skillful handling of the brushes that demonstrated a great technical mastery. In his "Pentecost" (1546, Santa Maria della Salute, Venice) he made rays of light emanate from the dove representing the Holy Spirit, ending in tongues of fire on the heads of the Virgin and the apostles, with surprising light effects that were innovative for his time. This research gradually evolved into increasingly dramatic effects, giving more emphasis to artificial lighting, as seen in "The Martyrdom of Saint Lawrence" (1558, Jesuit Church, Venice), where he combines the light of the torches and the fire of the grill where the saint is martyred with the supernatural effect of a powerful flash of divine light in the sky that is projected on the figure of the saint. This experimentation with light influenced the work of artists such as Veronese, Tintoretto, Jacopo Bassano and El Greco.
Tintoretto liked to paint enclosed in his studio with the windows closed by the light of candles and torches, which is why his paintings are often called "di notte e di fuoco" ("by night and fire"). In his works, of deep atmospheres, with thin and vertical figures, the violent effects of artificial lights stand out, with strong chiaroscuro and phosphorescent effects. These luminous effects were adopted by other members of the Venetian school such as the Bassano (Jacopo, Leandro, and Francesco), as well as by the so-called "Lombard illuminists" (Giovanni Girolamo Savoldo, Moretto da Brescia), while influencing El Greco and Baroque tenebrism.
Another artist framed in the painting "di notte e di fuoco" was Jacopo Bassano, whose indirectly incident lights influenced Baroque naturalism. In works such as "Christ in the House of Mary, Martha and Lazarus" (c. 1577, Museum of Fine Arts, Houston), he combined natural and artificial lights with striking lighting effects.
For his part, Paolo Veronese was heir to the luminism of Giovanni Bellini and Vittore Carpaccio, in scenes of Palladian architecture with dense morning lights, golden and warm, without prominent shadows, emphasizing the brightness of fabrics and jewels. In "Allegory of the Battle of Lepanto" (1571) he divided the scene into two halves, the battle below and the Virgin with the saints who ask for her favor for the battle at the top, where angels are placed, throwing lightning bolts towards the battle, creating spectacular lighting effects.
Outside Italy it is worth mentioning the work of Pieter Brueghel the Elder, author of genre scenes and landscapes that denote a great sensitivity towards nature. In some of his works the influence of Hieronymus Bosch can be seen in his fire lights and fantastic effects, as in "The Triumph of Death" (c. 1562, Museo del Prado, Madrid). In some of his landscapes he added the sun as a direct source of luminosity, such as the yellow sun of "The Flemish Proverbs" (1559, Staatliche Museen, Berlin), the red winter sun of "The Census in Bethlehem" (1556, Royal Museums of Fine Arts of Belgium, Brussels) or the evening sun of "Landscape with the Fall of Icarus" (c. 1558, Royal Museums of Fine Arts of Belgium, Brussels).
El Greco worked in Spain during this period, a singular painter who developed an individual style, marked by the influence of the Venetian school – he lived in Venice for a time – as well as by Michelangelo, from whom he took his conception of the human figure. In El Greco's work, light always prevails over shadows, as a clear symbol of the preeminence of faith over unbelief. In one of his first works from Toledo, the "Expolio" for the sacristy of the cathedral of Toledo (1577), a zenithal light illuminates the figure of Jesus, focusing on his face, which becomes the focus of light in the painting. In the "Trinity" of the church of Santo Domingo el Antiguo (1577-1580) he introduced a dazzling light of glory of an intense golden yellow. In "The Martyrdom of Saint Maurice" (1580-1582, Royal Monastery of San Lorenzo de El Escorial) he created two areas of differentiated light: the natural light that surrounds the earthly characters and the burst of glory in the sky, filled with angels. Among his last works stands out "The Adoration of the Shepherds" (1612-1613, Museo del Prado, Madrid), where the focus of light is the Child Jesus, who radiates his luminosity around him, producing phosphorescent effects of strong chromatism and luminosity.
El Greco's illumination evolved from the light coming from a specific point – or in a diffuse way – of the Venetian school to a light rooted in Byzantine art, in which the figures are illuminated without a specific light source or even a diffuse light. It is an unnatural light, which can come from multiple sources or none at all, an arbitrary and unequal light that produces hallucinatory effects. El Greco had a plastic conception of light: his execution went from dark to light tones, finally applying touches of white that created shimmering effects. The refulgent aspect of his works was achieved through glazes, while the whites were finished with almost dry applications. His light is mystical, subjective, almost spectral in appearance, with a taste for shimmering gleams and incandescent reflections.
Baroque.
In the 17th century, the Baroque emerged, a more refined and ornamented style, with the survival of a certain classicist rationalism but with more dynamic and dramatic forms, with a taste for the surprising and the anecdotal, for optical illusions and effects. Baroque painting had a marked geographical differentiation, since it developed in different countries, in various national schools, each with a distinctive stamp. However, there is a common influence coming again from Italy, where two opposing trends emerged: naturalism (also called Caravaggism), based on the imitation of natural reality, with a certain taste for chiaroscuro – the so-called tenebrism – and classicism, which is realistic but with a more intellectual and idealized concept of reality. Later, in the so-called "full Baroque" (second half of the 17th and early 18th centuries), painting evolved to a more decorative style, with a predominance of mural painting and a certain predilection for optical effects ("trompe-l'œil") and luxurious and exuberant scenographies.
During this period, many scientific studies on light were carried out (Johannes Kepler, Francesco Maria Grimaldi, Isaac Newton, Christiaan Huygens, Robert Boyle), which influenced its pictorial representation. Newton proved that color comes from the spectrum of white light and designed the first chromatic circle showing the relationships between colors. In this period the maximum degree of perfection was reached in the pictorial representation of light; tactile form was diluted in favor of a greater visual impression, achieved by giving greater importance to light, so that form lost the precision of its contours. In the Baroque, light was studied for the first time as a system of composition, articulated as a regulating element of the painting: light fulfills several functions, such as symbolism, modeling and illumination, and began to be directed as an emphatic element, selecting the part of the painting to be highlighted, so that artificial light, which can be manipulated at the free will of the artist, became more important. Sacred light (nimbus, haloes) was abandoned and natural light was used exclusively, even as a symbolic element. On the other hand, the light of different times of the day (morning, twilight) began to be distinguished. Illumination was conceived as a luminous unit, as opposed to the multiple sources of Renaissance light; in the Baroque there may be several sources, but they are circumscribed to a global and unitary sense of the work.
In the Baroque, the nocturne genre became fashionable, which implies a special difficulty in terms of the representation of light, due to the absence of daylight, so that on numerous occasions it was necessary to resort to chiaroscuro and lighting effects from artificial light, while the natural light had to come from the moon or the stars. For artificial light, bonfires, candles, lanterns, fireworks or similar elements were used. These light sources could be direct or indirect, and they could appear in the painting or illuminate the scene from outside.
Naturalism.
Chiaroscuro resurfaced during the Baroque, especially in the Counter-Reformation, as a method of focusing the viewer's vision on the primordial parts of religious paintings, which were emphasized as didactic elements, as opposed to the Renaissance "pictorial decor". An exacerbated variant of chiaroscuro was tenebrism, a technique based on strong contrasts of light and shadow, with a violent type of lighting, generally artificial, which gives greater prominence to the illuminated areas, on which a powerful focus of directed light is placed. These effects have a strong dramatism, which emphasizes the scenes represented, generally of religious type, although they also abound in mythological scenes, still lifes or "vanitas". One of its main representatives was Caravaggio, as well as Orazio and Artemisia Gentileschi, Bartolomeo Manfredi, Carlo Saraceni, Giovanni Battista Caracciolo, Pieter van Laer ("il Bamboccio"), Adam Elsheimer, Gerard van Honthorst, Georges de La Tour, Valentin de Boulogne, the Le Nain brothers and José de Ribera ("lo Spagnoletto").
Caravaggio was a pioneer in the dramatization of light, in scenes set in dark interiors with strong spotlights of directed light that served to emphasize one or more characters. With this painter, light acquired a structural character in painting, since, together with drawing and color, it would become one of its indispensable elements. He was influenced by Leonardo's chiaroscuro through "The Virgin of the Rocks", which he was able to contemplate in the church of San Francesco Grande in Milan. For Caravaggio, light served to configure the space, controlling its direction and expressive force. He was aware of the artist's power to shape the space at will, so in the composition of a work he would establish beforehand which lighting effects he was going to use, generally opting for sharp contrasts between the figures and the background, with darkness as a starting point: the figures emerge from the dark background and it is the light that determines their position and their prominence in the scene represented. Caravaggesque light is conceptual, not imitative or symbolic, so it transcends materiality and becomes something substantial. It is a projected and solid light, which constitutes the basis of his spatial conception and becomes another volume in space.
His main hallmark in depicting light was the diagonal entry of light, which he first used in "Boy with a Basket of Fruit" (1593-1594, Galleria Borghese, Rome). In "The Fortune Teller" (1595-1598, Musée du Louvre, Paris) he used the warm golden light of sunset, which falls directly on the young man and obliquely on the gypsy woman. His pictorial maturity came with the canvases for the Contarelli chapel in the church of San Luigi dei Francesi in Rome (1599-1600): "The Martyrdom of Saint Matthew" and "The Calling of Saint Matthew". In the first, he established a composition formed by two diagonals defined by the illuminated planes and the shadows that form the volume of the figures, a complex composition made cohesive by the light, which relates the figures to each other. In the second, a powerful beam of light that enters diagonally from the upper right directly illuminates the figure of Matthew, a beam parallel to the raised arm of Jesus that seems to accompany his gesture; an open shutter of the central window cuts this beam of light at the top, leaving the left side of the image in semi-darkness. In works such as the "Crucifixion of Saint Peter" and the "Conversion of Saint Paul" (1600-1601, Cerasi Chapel, Santa Maria del Popolo, Rome) light makes objects and people glow, to the point that it becomes the true protagonist of the works; these scenes are immersed in light in a way that constitutes more than a simple attribute of reality: it is the medium through which reality manifests itself. In the final stage of his career he accentuated the dramatic tension of his works through a luminism of flashing effects, as in "Seven Works of Mercy" (1607, Pio Monte della Misericordia, Naples), a nocturne with several spotlights of light that help to emphasize the acts of mercy depicted in simultaneous action.
Artemisia Gentileschi trained with her father, Orazio Gentileschi, coinciding with the years when Caravaggio lived in Rome, whose work she could appreciate in San Luigi dei Francesi and Santa Maria del Popolo. Her work was channeled into tenebrist naturalism, assuming its most characteristic features: expressive use of light and chiaroscuro, dramatism of the scenes and figures of solid, rounded anatomy. Her most famous work is "Judith Beheading Holofernes" (two versions: 1612–1613, Museo Capodimonte, Naples; and 1620, Uffizi, Florence), where the light focuses on Judith, her maid and the Assyrian general, against a complete darkness, emphasizing the drama of the scene. In the 1630s, established in Naples, her style adopted a more classicist component, without completely abandoning naturalism, with more diaphanous spaces and clearer and sharper atmospheres, although chiaroscuro remained an essential part of the composition, as a means to create space, give volume and lend expressiveness to the image. One of her best compositions, due to the complexity of its lighting, is "The Birth of Saint John the Baptist" (1630, Museo del Prado, Madrid), where she mixes natural and artificial light: the light from the portal in the upper right part of the painting softens the light inside the room, in a "subtle transition of light values" – according to Roberto Longhi – that would later become common in Dutch painting.
Adam Elsheimer was noted for his light studies of landscape painting, with an interest in dawn and dusk lights, as well as night lighting and atmospheric effects such as mists and fogs. His light was strange and intense, with an enamel-like appearance typical of German painting, in a tradition ranging from Lukas Moser to Albrecht Altdorfer. His most famous painting is "Flight into Egypt" (1609, Alte Pinakothek, Munich), a night scene that is considered the first moonlit landscape; four sources of light are visible in this work: the shepherds' bonfire, the torch carried by Saint Joseph, the moon and its reflection in the water; the Milky Way can also be perceived, whose representation can also be considered as the first one done in a naturalistic way.
Georges de La Tour was a magnificent interpreter of artificial light, generally lamp or candle light, with a visible and precise focus, which he used to place inside the image, emphasizing its dramatic aspect. Sometimes, in order not to be dazzled, the characters place their hands in front of the candle, creating translucent effects on the skin, which acquires a reddish tone, of great realism, proving his virtuosity in capturing reality. While his early works show the influence of Italian Caravaggism, from his stay in Paris between 1636 and 1643 he came closer to Dutch Caravaggism, more prone to the direct inclusion of the light source on the canvas. He thus began his most tenebrist period, with scenes of strong half-light where the light, generally from a candle, illuminates with greater or lesser intensity certain areas of the painting. In general, two types of composition can be distinguished: the fully visible light source ("Job with his Wife", Musée Départemental des Vosges, Épinal; "Woman Catching a Flea", Musée Historique Lorrain, Nancy; "Madeleine Terff", Musée du Louvre, Paris) or the light blocked by an object or character, creating a backlit illumination ("Madeleine Fabius", Fabius collection, Paris; "Angel Appearing to Saint Joseph", Musée des Beaux-Arts, Nantes; "The Adoration of the Shepherds", Musée du Louvre, Paris). In his later works he reduces the characters to schematic figures of geometric appearance, like mannequins, to fully recreate the effects of light on masses and surfaces ("The Repentance of Saint Peter", Museum of Art, Cleveland; "The Newborn", Musée des Beaux-Arts, Rennes; "Saint Sebastian Cured by Saint Irene", parish church of Broglie).
Despite its plausible appearance, La Tour's lighting is not fully naturalistic, but is sifted by the artist's will; at all times he applies just the amount of light and shadow needed to recreate the desired effect; in general, it is a serene and diffuse lighting, which brings out the volume without excessive drama. The light serves to unite the figures and to highlight the part of the painting that best suits the plot of the work; it is a timeless light of a poetic, transcendent character, just the light necessary to provide credibility, but it serves a more symbolic than realistic purpose. It is an unreal light, since no candle generates such a serene and diffuse illumination, a conceptual and stylistic light, which serves only the compositional intention of the painter.
Another French Caravaggist was Trophime Bigot, nicknamed "Maître à la chandelle" (Master of the candle) for his scenes of artificial light, in which he showed great expertise in the technique of chiaroscuro.
The Valencian artist José de Ribera (nicknamed "lo Spagnoletto"), who lived in Naples, fully assumed Caravaggesque light, with an anti-idealist style of pasty brushstrokes and dynamic effects of movement. Ribera took up tenebrist illumination in a personal way, filtered through other influences, such as Venetian coloring or the compositional rigor of Bolognese classicism. In his early work he used the violent contrasts of light and shadow characteristic of tenebrism, but from the 1630s he evolved towards greater chromaticism and clearer, more diaphanous backgrounds. In contrast to the flat painting of Caravaggio, Ribera used a dense paste that gave more volume and emphasized the highlights. One of his best works, "Sileno ebrio" ("The Drunken Silenus", 1626, Museo di Capodimonte, Naples), stands out for the flashes of light that illuminate the various characters, with special emphasis on the naked body of Silenus, illuminated by a flat light of soft, fleshy appearance.
In addition to Ribera, Caravaggism in Spain had the figure of Juan Bautista Maíno, a Dominican friar who was drawing teacher to Philip IV and lived in Rome between 1598 and 1612, where he was a disciple of Annibale Carracci; his work stands out for its colorism and luminosity, as in "The Adoration of the Shepherds" (1611–1613, Museo del Prado, Madrid). Also noteworthy is the work of the still-life painters Juan Sánchez Cotán and Juan van der Hamen. In general, Spanish naturalism treated light with a sense close to Caravaggism, but with a certain sensuality coming from the Venetian school and a detailing with Flemish roots. Francisco de Zurbarán developed a somewhat sweetened tenebrism, although one of his best works, "San Hugo in the Refectory of the Carthusians" (c. 1630, Museo de Bellas Artes de Sevilla), is notable for its use of white and for a subtle play of light and shadow, with a multiplicity of intensities applied to each figure and object.
In Venice, Baroque painting did not produce such exceptional figures as in the Renaissance and Mannerism, but in the work of artists such as Domenico Fetti, Johann Liss, and Bernardo Strozzi one can perceive the vibrant luminism and the enveloping atmospheres so characteristic of Venetian painting.
The Caravaggist novelties had a special echo in Holland, where the so-called Caravaggist School of Utrecht emerged, a series of painters who assumed the description of reality and the chiaroscuro effects of Caravaggio as pictorial principles, on which they developed a new style based on tonal chromaticism and the search for new compositional schemes, resulting in a painting that stands out for its optical values. Among its members were Hendrik Terbrugghen, Dirck van Baburen, and Gerard van Honthorst, all three trained in Rome. The first assumed the thematic repertoire of Caravaggio but with a more sweetened tone, with a sharp drawing, a grayish-silver chromatism and an atmosphere of soft light clarity. Van Baburen sought full light effects rather than chiaroscuro contrasts, with intense volumes and contours. Honthorst was a skillful producer of night scenes, which earned him the nickname "Gherardo delle Notti" ("Gerard of the Nights"). In works such as "Christ before the High Priest" (1617), "Nativity" (1622), "The Prodigal Son" (1623) or "The Procuress" (1625), he showed great mastery in the use of artificial light, generally from candles, with one or two light sources that illuminated the scene unevenly, highlighting the most significant parts of the painting and leaving the rest in semi-darkness. Of his "Christ on the Column", Joachim von Sandrart said: "the brightness of the candles and lights illuminates everything with a naturalness that resembles life so closely that no art has ever reached such heights".
One of the greatest exponents of the symbolic use of light was Rembrandt, an original artist with a strong personal stamp, with a style close to tenebrism but more diffused, without the marked contrasts between light and shadow typical of the Caravaggists, relying instead on a more subtle and diffuse penumbra. According to Giovanni Arpino, Rembrandt "invented light, not as heat, but as value. He invented light not to illuminate, but to make his world unapproachable". In general, he elaborated images where darkness predominated, illuminated in certain parts of the scene by a ray of zenithal light of divine connotation; if the light is inside the painting, it means that the world is circumscribed to the illuminated part and nothing exists outside this light. Rembrandtian light is the reflection of an external force, which strikes objects and causes them to radiate energy, like the relaying of a message. Although he starts from tenebrism, his contrasts of light and shadow are not as sharp as those of Caravaggio; he preferred a kind of golden shadow that gives a mysterious air to his paintings. In Rembrandt, light was something structural, integrated into form, color and space, in such a way that it dematerializes bodies and plays with the texture of objects. It is a light that is not subject to the laws of physics, which he generally concentrates in one area of the painting, creating a glowing luminosity. In his work, light and shadow interact, dissolving the contours and deforming the forms, which become the sustaining object of the light. According to Wolfgang Schöne, in Rembrandt light and darkness are actually two types of light, one bright and the other dark. He often used a canvas as a reflecting or diffusing screen, which he adjusted as he wished to obtain the desired illumination in each scene. His concern for light led him not only to study it pictorially, but also to establish the correct placement of his paintings for optimal viewing; thus, in 1639 he advised Constantijn Huygens on the placement of his painting "Samson Blinded by the Philistines": "hang this painting where there is strong light, so that it can be seen from a certain distance, and thus it will have the best effect". Rembrandt also masterfully captured light in his etchings, such as "The Hundred Guilder Print" and "The Three Crosses", in which light is almost the protagonist of the scene.
Rembrandt picked up the luminous tradition of the Venetian school, as did his compatriot Johannes Vermeer, although while the former stands out for his fantastic effects of light, the latter develops in his work a luminosity of great quality in the local tones. Vermeer imprinted his works – generally everyday scenes in interior spaces – with a pale luminosity that created placid and calm atmospheres. He used a technique called "pointillé", a series of dots of pigment with which he enhanced the objects, on which he often applied a luminosity that made the surfaces reflect the light in a special way. Vermeer's light softens the contours without losing the solidity of the forms, in a combination of softness and precision that few other artists have achieved.
Nicknamed the "painter of light", Vermeer masterfully synthesized light and color, he knew how to capture the color of light like no one else. In his works, light is itself a color, while shadow is inextricably linked to light. Vermeer's light is always natural, he does not like artificial light, and generally has a tone close to lemon yellow, which together with the dull blue and light gray were the main colors of his palette. It is the light that forms the figures and objects, and in conjunction with the color is what fixes the forms. As for the shadows, they are interspersed in the light, reversing the contrast: instead of fitting the luminous part of the painting into the shadows, it is the shadows that are cut out of the luminous space. Contrary to the practice of chiaroscuro, in which the form is progressively lost in the half-light, Vermeer placed a foreground of dark color to increase the tonal intensity, which reaches its zenith in the middle light; from here he dissolves the color towards white, instead of towards black as was done in chiaroscuro. In Vermeer's work, the painting is an organized structure through which light circulates, is absorbed and diffused by the objects that appear on the scene. He builds the forms thanks to the harmony between light and color, which is saturated, with a predominance of pure colors and cold tones. The light gives visual existence to the space, which in turn receives and diffuses it.
In Vermeer, light is never artificial: it is precise and normal like that of nature, and of an accuracy capable of satisfying the most scrupulous physicist. [...] This accuracy of light in Vermeer is due to the harmony of the coloring.
Other prominent painters from the Low Countries were the Dutchman Frans Hals and the Fleming Jacob Jordaens. The former had a Caravaggist phase between 1625 and 1630, with a clear chromaticism and diffuse luminosity ("The Merry Drinker", 1627–1628, Rijksmuseum, Amsterdam; "Malle Babbe", 1629–1630, Gemäldegalerie, Berlin), before evolving to a more sober, dark and monochromatic style. Jordaens had a style characterized by a bright and fantastic coloring, with strong contrasts of light and shadow and a technique of dense impasto. Between 1625 and 1630 he had a period in which he deepened the luminous values of his images, in works such as "The Martyrdom of Saint Apollonia" (1628, Church of Saint Augustine, Antwerp) or "The Fecundity of the Earth" (1630, Royal Museums of Fine Arts of Belgium, Brussels).
One should also mention Godfried Schalcken, a disciple of Gerard Dou who worked not only in his native country but also in England and Germany. An excellent portraitist, in many of his works he used artificial candlelight, influenced by Rembrandt, as in "Portrait of William III" (1692–1697, Rijksmuseum, Amsterdam), "Portrait of James Stuart, Duke of Lennox and Richmond" (1692–1696, Leiden Collection, New York), "Young Man and Woman Studying a Statue of Venus by Lamplight" (c. 1690, Leiden Collection, New York) or "Old Man Reading by Candlelight" (c. 1700, Museo del Prado, Madrid).
A genre that flourished exceptionally in Holland in this century was landscape painting, which, following on from the mannerist landscapes of Pieter Brueghel the Elder and Joos de Momper, developed a new sensitivity to atmospheric effects and the reflections of the sun on water. Jan van Goyen was its first representative, followed by artists such as Salomon van Ruysdael, Jacob van Ruysdael, Meindert Hobbema, Aelbert Cuyp, Jan van de Cappelle and Adriaen van de Velde. Salomon van Ruysdael sought to capture the atmosphere, which he treated by tonalities, studying the light of different times of the day. His nephew Jacob van Ruysdael was endowed with a great sensitivity for natural vision, and his depressive character led him to elaborate images of great expressiveness, where the play of light and shadow accentuated the drama of the scene. His light is not the saturating and static light of the Renaissance, but a light in movement, perceptible in the effects of light and shadow in the clouds and their reflections on the plains, a light that led John Constable to formulate one of his lessons on art: "remember that light and shadow never stand still". His assistant was Meindert Hobbema, who differed from him in his chromatic contrasts and lively light effects, which reveal a certain nervousness of stroke. Aelbert Cuyp used a much lighter palette than his compatriots, with a warmer and more golden light, probably influenced by Jan Both's Italianate landscapes. He stood out for his atmospheric effects, for the detail of the light reflections on objects and landscape elements, for the use of elongated shadows and for the use of the sun's rays placed diagonally and against the light, in line with the stylistic novelties produced in Italy, especially around the figure of Claude Lorrain.
Another genre that flourished in Holland was the still life. One of its best representatives was Willem Kalf, author of still lifes of great precision in detail, which combined flowers, fruits and other foods with various objects, generally luxurious ones such as vases, Turkish carpets and bowls of Chinese porcelain, emphasizing the play of light and shadow and the bright reflections on their metallic and crystalline surfaces.
Classicism and full Baroque.
Classicism emerged in Bologna, around the so-called Bolognese School, initiated by the brothers Annibale and Agostino Carracci. This trend was a reaction against Mannerism; it sought an idealized representation of nature, depicting it not as it is, but as it should be. It pursued ideal beauty as its sole objective, drawing inspiration from classical Greco-Roman and Renaissance art. This ideal found a fitting subject of representation in the landscape, as well as in historical and mythological themes. In addition to the Carracci brothers, Guido Reni, Domenichino, Francesco Albani, Guercino and Giovanni Lanfranco stood out.
In the classicist trend, the use of light is paramount in the composition of the painting, although with slight nuances depending on the artist: from the "Incamminati" and the Academy of Bologna (Carracci brothers), Italian classicism split into several currents: one moved more towards decorativism, with the use of light tones and shiny surfaces, where the lighting is articulated in large luminous spaces (Guido Reni, Lanfranco, Guercino); another specialized in landscape painting and, starting from the Carracci influence – mainly the frescoes of Palazzo Aldobrandini – developed along two parallel lines: the first focused more on classical-style composition, with a certain scenographic character in the arrangement of landscapes and figures (Poussin, Domenichino); the other is represented by Claude Lorrain, with a more lyrical component and greater concern for the representation of light, not only as a plastic factor but as an agglutinating element of a harmonious conception of the work.
Claude Lorrain was one of the Baroque painters who best knew how to represent light in his works, giving it primordial importance when conceiving the painting: the composition of light served firstly as a plastic factor, being the basis with which he organized the composition, created space and time, and articulated the figures, the architecture and the elements of nature; secondly, it was an aesthetic factor, highlighting light as the main sensitive element, as the medium that attracts and envelops the viewer and leads him to a dream world, a world of ideal perfection recreated by the atmosphere of total serenity and placidity that Claude created with his light. Claude's light was direct and natural, coming from the sun, which he placed in the middle of the scene, in sunrises or sunsets that gently illuminated all parts of the painting, sometimes placing intense contrasts of light and shadow in certain areas, or backlighting that struck a particular element to emphasize it. The artist from Lorraine emphasized color and light over the material description of the elements, which to a great extent anticipates the luminous investigations of Impressionism.
Claude's capture of light is unparalleled by any of his contemporaries: in the landscapes of Rembrandt or Ruysdael the light has more dramatic effects, piercing the clouds or flowing in oblique or horizontal rays, but in a directed manner, the source of which can be easily located. Claude's light, on the other hand, is serene and diffuse; unlike the artists of his time, he gives light precedence whenever a stylistic choice has to be made. On numerous occasions he uses the horizon line as a vanishing point, placing there a focus of brightness that attracts the viewer, because that almost blinding luminosity acts as a focalizing element that brings the background closer to the foreground. The light is diffused from the background of the painting and, as it expands, it is enough by itself to create a sensation of depth, blurring the contours and grading the colors to create the space of the painting. Claude prefers the serene and placid light of the sun, direct or indirect, but always through a soft and uniform illumination, avoiding sensational effects such as moonlight, rainbows or storms, which were nevertheless used by other landscape painters of his time. His basic reference in the use of light is Elsheimer, but he differs from him in the choice of light sources and times represented: the German artist preferred exceptional light effects, nocturnal environments, moonlight or twilight; Claude, on the other hand, prefers more natural settings, a limpid light of dawn or the glow of a warm sunset.
On the other hand, the Flemish Peter Paul Rubens represents serenity in the face of Tenebrist dramatism. He was a master in finding the precise tonality for the flesh tones of the skin, as well as its different textures and the multiple variants of the effects of brightness and the reflections of light on the flesh. Rubens had an in-depth knowledge of the different techniques and traditions related to light, and so he was able to assimilate both Mannerist iridescent light and Tenebrist focal light, internal and external light, homogeneous and dispersed light. In his work, light serves as an organizing element of the composition, in such a way that it agglutinates all the figures and objects in a unitary mass of the same light intensity, with different compositional systems, either with central or diagonal illumination or combining a light in the foreground with another in the background. In his beginnings he was influenced by the Caravaggist chiaroscuro, but from 1615 he sought a greater luminosity based on the tradition of Flemish painting, so he accentuated the light tones and marked the contours more. His images stand out for their sinuous movement, with atmospheres built with powerful lights that helped to organize the development of the action, combining the Flemish tradition with the Venetian coloring that he learned in his travels to Italy. Perhaps where he experimented most in the use of light was in his landscapes, most of them painted in his old age, whose use of color and light with agile and vibrant brushstrokes influenced Velázquez and other painters of his time, such as Jordaens and Van Dyck, and artists of later periods such as Jean-Antoine Watteau, Jean-Honoré Fragonard, Eugène Delacroix, and Pierre-Auguste Renoir.
Diego Velázquez was undoubtedly the most brilliant artist of his time in Spain, and one of the most internationally renowned. In the evolution of his style we can perceive a profound study of pictorial illumination, of the effects of light both on objects and on the environment, with which he reached heights of great realism in the representation of his scenes, which however are not exempt from an air of classical idealization, revealing a clear intellectual background that for the artist was a vindication of the painter's craft as a creative and elevated activity. Velázquez was the architect of a space-light in which the atmosphere is a diaphanous matter full of light, freely distributed throughout a continuous space, without divisions of planes, in such a way that the light permeates the backgrounds, which acquire vitality and are as highlighted as the foreground. It is a world of instantaneous capture, alien to tangible reality, in which the light generates a dynamic effect that dilutes the contours and, together with the vibratory effect of the changing planes of light, produces a sensation of movement. He usually alternated zones of light and shadow, creating a parallel stratification of space. Sometimes he even atomized the areas of light and shadow into small corpuscles, a precedent for Impressionism.
In his youth he was influenced by Caravaggio, evolving later towards a more diaphanous light, as shown in his two paintings of the Villa Medici, in which light filters through the trees. Throughout his career he achieved great mastery in capturing a type of light of atmospheric origin, of the irradiation of light and chromatic vibration, with a fluid technique that suggested the forms rather than defining them, thus achieving a dematerialized but truthful vision of reality, a reality that transcends matter and is framed in the world of ideas. After the smoothly executed tenebrism and precise drawing of his first period in Seville ("Vieja friendo huevos", 1618, National Gallery of Scotland, Edinburgh; "El aguador de Sevilla", 1620, Apsley House, London), his arrival at the Madrid court marked a stylistic change influenced by Rubens and the Venetian school – whose work he was able to study in the royal collections – with looser brushstrokes and soft volumes, while maintaining a realistic tone derived from his youthful period. Finally, after his trip to Italy between 1629 and 1631, he reached his definitive style, in which he synthesized the multiple influences received, with a fluid technique of pasty brushstrokes and great chromatic richness, as can be seen in "La fragua de Vulcano" (1631, Museo del Prado, Madrid). "The Surrender of Breda" (1635, Museo del Prado, Madrid) was a first milestone in his mastery of atmospheric light, where color and luminosity achieve an accentuated protagonism. In works such as "Pablo de Valladolid" (1633, Museo del Prado, Madrid), he managed to define the space without any geometric reference, only with lights and shadows. The Sevillian artist was a master at recreating the atmosphere of enclosed spaces, as shown in "Las Meninas" (1656, Museo del Prado, Madrid), where he placed several light sources: the light that enters through the window and illuminates the figures of the Infanta and her ladies-in-waiting, the light from the rear window that shines around the lamp hook, and the light that enters through the door in the background.
In this work he constructed a plausible space by defining or diluting the forms according to the use of light and the nuance of color, in a display of technical virtuosity that has led to the consideration of the canvas as one of the masterpieces in the history of painting. In a similar way, he succeeded in structuring space and forms by means of light planes in "Las hilanderas" (1657, Museo del Prado, Madrid).
As it invades the room, the light is diffused irregularly over the various surfaces. The mirror shimmers with tremulous, silvery light and offers a clearer image than that of the large, dull canvases hanging above it. A sliver of light escapes from the half-closed window that opens in the last section, forming a well of luminosity around the lamp hook at the back of the ceiling. And then, in the background plane, a new light source is included that illuminates the figure in the doorway; from it emerges, thin as a beam, a ray that swiftly crosses the floor of the room under the mirror. The illusion of space and volume thus becomes irresistibly palpable.
Another outstanding Spanish Baroque painter was Bartolomé Esteban Murillo, one of whose favorite themes was the Immaculate Conception, of which he produced several versions, generally with the figure of the Virgin within an atmosphere of golden light symbolizing divinity. He generally used translucent colors applied in thin layers, with an almost watercolor appearance, a procedure that denotes the influence of Venetian painting. After a youthful period of tenebrist influence, in his mature work he rejected chiaroscuro drama and developed a serene luminosity, shown in all its splendor in his characteristic bursts of heavenly glory, of rich chromaticism and soft luminosity.
The last period of this style was the so-called "full Baroque" (second half of the 17th and early 18th centuries), a decorative style in which the illusionist, theatrical and scenographic character of Baroque painting was intensified, with a predominance of mural painting – especially on ceilings – in which Pietro da Cortona, Andrea Pozzo, Giovanni Battista Gaulli ("il Baciccio"), Luca Giordano and Charles Le Brun stood out. In works such as the ceiling of the church of the Gesù, by Gaulli, or that of the Palazzo Barberini, by Cortona, is "where the ability to combine extreme light and darkness in a painting was pushed to the limit," according to John Gage, who adds that "the Baroque decorator not only introduced into painting the contrasts between extreme darkness and extreme light, but also a careful gradation between the two." Noteworthy is Andrea Pozzo's "Glory of Saint Ignatius of Loyola" (1691–1694), on the ceiling of the church of Saint Ignatius in Rome, a scene full of heavenly light in which Christ sends a ray of light into the heart of the saint, who in turn deflects it into four beams of light directed towards the four continents. In Spain, Francisco de Herrera el Mozo, Juan Carreño de Miranda, Claudio Coello and Francisco Ricci were exponents of this style.
From Caravaggio to the last painting by Velázquez – which is the starting point – the history of painting is the great journey to the land of light, of the effective light that illuminates the world in which we live.
18th Century.
The 18th century was nicknamed the "Age of Enlightenment", as it was the period in which the Enlightenment emerged, a philosophical movement that defended reason and science against religious dogmatism. Art oscillated between the late Baroque exuberance of Rococo and neoclassicist sobriety, between artifice and naturalism. A certain autonomy of the artistic act began to take place: art moved away from religion and the representation of power to be a faithful reflection of the artist's will, and focused more on the sensitive qualities of the work than on its meaning.
In this century most national art academies were created, institutions in charge of preserving art as a cultural phenomenon, of regulating its study and conservation, and of promoting it through exhibitions and competitions; originally, they also served as training centers for artists, although over time they lost this function, which was transferred to private institutions. After the "Académie Royale de Peinture et de Sculpture", founded in Paris in 1648, this century saw the creation of the Royal Academy of Fine Arts of San Fernando in Madrid (1744), the Russian Academy of Arts in Saint Petersburg (1757), the Royal Academy of Arts in London (1768), etc. The art academies favored a classical and canonical style – academicism – often criticized for its conservatism, especially by the avant-garde movements that emerged between the 19th and 20th centuries.
During this period, when science was gaining greater interest among scholars and the general public, numerous studies of optics were carried out. In particular, the study of shadows was deepened and sciography emerged as the science that studies the perspective and two-dimensional representation of the forms produced by shadows. Claude-Nicolas Lecat wrote in 1767: "the art of drawing proves that the mere gradation of the shadow, its distributions and its nuances with simple light, suffice to form the images of all objects". The entry on shadow in the Encyclopédie, the great project of Diderot and d'Alembert, differentiates between several types of shadows: "inherent", that of the object itself; "cast", that which is projected onto another surface; "projected", that resulting from the interposition of a solid between a surface and the light source; "tilted", when the inclination is on the vertical axis; and "slanted", when it is on the horizontal axis. It also classified light sources as "point", "ambient" and "extensive": the point source produces shadows with sharp edges, ambient light produces no shadow, and the extensive source produces shadows with soft edges divided into two areas: the "umbra", the fully darkened zone that receives no light from the source, and the "penumbra", the partially darkened edge that receives light from only part of the source.
Several treatises on painting were also written in this century that studied in depth the representation of light and shadow, such as those by Claude-Henri Watelet ("L'Art de peindre, poème, avec des réflexions sur les différentes parties de la peinture", 1760) and Francesco Algarotti ("Saggio sopra la pittura", 1764). Pierre-Henri de Valenciennes ("Élémens de perspective pratique, à l'usage des artistes, suivis de réflexions et conseils à un élève sur la peinture, et particulièrement sur le genre du paysage", 1799) made several studies on the rendering of light at various times of the day, and recorded the various factors affecting the different types of light in the atmosphere, from the rotation of the Earth to the degree of humidity in the environment and the reflective characteristics of a particular place. He advised his students to paint the same landscape at different times of the day and especially recommended four distinctive moments: morning, characterized by freshness; noon, with its blinding sun; twilight and its fiery horizon; and night, with the placid effects of moonlight. Acisclo Antonio Palomino, in "El Museo pictórico y escala óptica" (1715–1724), stated that light is "the soul and life of everything visible" and that it is light which in painting "gives such an extension to sight that it not only sees the physical and real but also the apparent and feigned, persuading of bodies, distances and volumes with the elegant arrangement of light and dark, shadows and lights".
Rococo meant the survival of the main artistic manifestations of the Baroque, with a more emphasized sense of decoration and ornamental taste, which were taken to a paroxysm of richness, sophistication and elegance. Rococo painting had a special reference in France, in the court scenes of Jean-Antoine Watteau, François Boucher and Jean-Honoré Fragonard. Rococo painters preferred illuminated scenes in broad daylight or colorful sunrises and sunsets. Watteau was the painter of the "fête galante", of court scenes set in bucolic landscapes, a type of shady landscape of Flemish heritage. Boucher, an admirer of Correggio, specialized in the female nude, with a soft and delicate style in which the light emphasizes the placidity of the scenes, generally mythological. Fragonard had a sentimental style of free technique, with which he elaborated gallant scenes of a certain frivolity. In the still life genre Jean-Baptiste-Siméon Chardin stood out, a virtuoso in the creation of atmospheres and light effects on objects and surfaces, generally with a soft and warm light achieved through glazes and fading, with which he achieved intimate atmospheres of deep shadows and soft gradients.
In this century, one of the movements most concerned with the effects of light was Venetian vedutismo, a genre of urban views that meticulously depicted the canals, monuments and most typical places of Venice, alone or with the presence of the human figure, generally small in size and in large groups of people. The veduta is usually composed of wide perspectives, with a distribution of the elements close to scenography and with a careful use of light, which gathers the whole tradition of atmospheric representation, from Leonardo's sfumato to the chromatic ranges of the sunrises and sunsets of Claude Lorrain. The work of Canaletto stands out, whose sublime views of the Adriatic city captured with great precision the atmosphere of the city suspended over the water. The great precision and detail of his works was due in large part to the use of the camera obscura, a forerunner of photography. Another outstanding representative was Francesco Guardi, interested in the shimmering effects of light on the water and the Venetian atmosphere, with a light-touch technique that was a precursor of Impressionism.
The landscape genre continued with the naturalistic experimentation begun in the Baroque in the Netherlands. Another reference was Claude Lorrain, whose influence was especially felt in England. The 18th century landscape incorporated the aesthetic concepts of the picturesque and the sublime, which gave the genre greater autonomy. One of the first exponents was the French painter Michel-Ange Houasse, who settled in Spain and initiated a new way of understanding the role of light in the landscape: in addition to illuminating it, light "constructs" the landscape, configures it and gives it consistency, and determines the vision of the work, since the variation of factors involved implies a specific and particular point of view. Claude Joseph Vernet specialized in seascapes, often painted in nocturnal environments by moonlight. He was influenced by Claude Lorrain and Salvator Rosa, from whom he inherited the concept of an idealized and sentimental landscape. The same type of landscape was developed by Hubert Robert, with a greater interest in picturesqueness, as evidenced by his interest in ruins, which serve as the setting for many of his works.
Landscape painting was also prominent in England, where the influence of Claude Lorrain was felt to such an extent that it largely determined the layout of the English garden. Here there was a great love of gardens, so landscape painting was much sought after, unlike on the continent, where it was considered a minor genre. In this period many painters and watercolorists emerged who dedicated themselves to the transcription of the English landscape, capturing a new sensibility towards the luminous and atmospheric effects of nature. In this type of work the main artistic value was the capture of the atmosphere, and clients valued above all a vision comparable to the contemplation of a real landscape. Prominent artists were Richard Wilson, Alexander Cozens, John Robert Cozens, Robert Salmon, Samuel Scott, Francis Towne and Thomas Gainsborough.
One of the 18th-century painters most concerned with light was Joseph Wright of Derby, who was interested in the effects of artificial light, which he masterfully captured. He spent some formative years in Italy, where he was interested in the effects of fireworks in the sky and painted the eruptions of Vesuvius. One of his masterpieces is "An Experiment on a Bird in the Air Pump" (1768, The National Gallery, London), where he places a powerful light source in the center that illuminates all the characters, perhaps a metaphor for the light of the Enlightenment illuminating all human beings equally. The light comes from a candle hidden behind the glass jar used to perform the experiment, whose shadow is placed next to a skull, both symbols of the transience of life often used in vanitas. Wright made several paintings with artificial lighting, which he called "candle-light pictures", generally with violent contrasts of light and shadow. In addition – and especially in his paintings of scientific subjects, such as the one mentioned above or "A Philosopher Lecturing on the Orrery" (1766, Derby Museum and Art Gallery, Derby) – light symbolizes reason and knowledge, in keeping with the Enlightenment, the "Age of Enlightenment".
In the transition between the 18th and 19th centuries, one of the most outstanding artists was Francisco de Goya, who evolved from a more or less Rococo style to a certain pre-Romanticism, with a personal and expressive output of strong intimate tone. Numerous scholars of his work have emphasized Goya's metaphorical use of light as the conqueror of darkness. For Goya, light represented reason, knowledge and freedom, as opposed to the ignorance, repression and superstition associated with darkness. He also said that in painting he saw "only illuminated bodies and bodies that are not, planes that advance and planes that recede, reliefs and depths". The artist painted a self-portrait in his studio against the light of a large window that fills the room with brightness and, as if that were not enough, he wears lighted candles on his hat ("Autorretrato en el taller", 1793–1795, Real Academia de Bellas Artes de San Fernando, Madrid). At the same time, he felt a special predilection for nocturnal atmospheres, and in many of his works he took up a tradition that began with Caravaggist tenebrism and reinterpreted it in a personal way. According to Jeannine Baticle, "Goya is the faithful heir of the great Spanish pictorial tradition. In him, shadow and light create powerful volumes built in the impasto, clarified with brief luminous strokes in which the subtlety of the colors produces infinite variations".
Among his early production, devoted mainly to cartoons for the Royal Tapestry Factory of Santa Bárbara, "El quitasol" (1777, Museo del Prado, Madrid) stands out for its luminosity, in keeping with the popular and traditional tastes then fashionable at court: a boy shades a young woman with a parasol, with an intense chromatic contrast between the bluish and golden tones of the reflected light. Other works outstanding for their atmospheric light effects are "La nevada" (1786, Museo del Prado, Madrid) and "La pradera de San Isidro" (1788, Museo del Prado, Madrid). As painter of the king's chamber, his collective portrait "La familia de Carlos IV" (1800, Museo del Prado, Madrid) stands out, in which he seems to arrange the lighting according to protocol, from the most powerful light centered on the kings in the central part, through the dimmer light on the rest of the family, to the penumbra in which the artist portrayed himself in the left corner.
Of his mature work, "Los fusilamientos del 3 de mayo de 1808 en la Moncloa" (1814, Museo del Prado, Madrid) stands out, where he places the light source in a lantern located in the lower part of the painting, although it is its reflection on the white shirt of one of the men being executed that becomes the most powerful focus of light, exalting his figure as a symbol of the innocent victim in the face of barbarism. The choice of night is a clearly symbolic factor, since it is related to death, a fact accentuated by the Christ-like appearance of the figure with his arms raised. Albert Boime wrote about this work in "Historia social del arte":
A brief review of the representations of sources of "objective light" in Goya's work reveals a gradual evolution, from the exploitation of theatrical effects to glorify the royal family or a religious event, through a more symbolic expression of his ideological concerns, culminating in a mature mastery in which reality and symbol merge in a surprising synthesis.
Among his last works is "The Milkmaid of Bordeaux" (1828, Museo del Prado, Madrid), where light is captured only with color, with a soft, loose brushstroke that emphasizes the tonal values, a technique that points towards Impressionism.
Also between the two centuries, Neoclassicism developed in France after the French Revolution, a style that favored the resurgence of classical forms, purer and more austere, as opposed to the ornamental excesses of the Baroque and Rococo. The discovery of the ruins of Pompeii and Herculaneum helped to make Greco-Latin culture fashionable, along with an aesthetic ideology that advocated the perfection of classical forms as an ideal of beauty, generating a myth about classical beauty that still conditions the perception of art today. Neoclassical painting maintained an austere and balanced style, influenced by Greco-Roman sculpture and by figures such as Raphael and Poussin. Jacques-Louis David stood out, as well as François Gérard, Antoine-Jean Gros, Pierre-Paul Prud'hon, Anne-Louis Girodet-Trioson, Jean Auguste Dominique Ingres, Anton Raphael Mengs and José de Madrazo.
Neoclassicism replaced the dramatic illumination of the Baroque with the restraint and moderation of classicism, with cold tones and a preponderance of drawing over color, and gave special importance to line and contour. Neoclassical images put the idea before feeling, the truthful description of reality before the imaginative whims of the Baroque artist. Neoclassical light is clear, cold and diffuse, bathing the scenes uniformly, without violent contrasts; even so, chiaroscuro was sometimes used, intensely illuminating figures or certain objects in contrast with the darkness of the background. The light delimits the contours and the space, and generally gives an appearance of solemnity to the image, in keeping with the subjects treated, usually history paintings, mythological scenes and portraits.
The initiator of this style was Jacques-Louis David, a sober artist who completely subordinated color to drawing. He meticulously studied the light composition of his works, as can be seen in "The Oath at the Jeu de Paume" (1791, Musée National du Château de Versailles) and "The Intervention of the Sabine Women" (1794–1799, Musée du Louvre, Paris). In "The Death of Marat" (1793, Royal Museums of Fine Arts of Belgium, Brussels) he developed a play of light that shows the influence of Caravaggio. Anne-Louis Girodet-Trioson followed David's style, although his emotivism brought him closer to pre-Romanticism. He was interested in chromaticism and the concentration of light and shadow, as glimpsed in "The Sleep of Endymion" (1791, Musée du Louvre, Paris) and "The Burial of Atala" (1808, Musée du Louvre, Paris). Jean Auguste Dominique Ingres was a prolific author always faithful to classicism, to the point of being considered the champion of academic painting against 19th-century Romanticism. He devoted himself especially to portraits and nudes, which stand out for their purity of line, their marked contours and a chromatism close to enamel. Pierre-Paul Prud'hon took up Neoclassicism with a certain Rococo influence, with a predilection for feminine voluptuousness inherited from Boucher and Watteau, while his work shows a strong influence of Correggio. In his mythological paintings populated by nymphs, he showed a preference for twilight and moonlight, a dim and faint light that delicately bathes the female forms, whose white skin seems to glow.
Landscape painting was considered a minor genre by the Neoclassicists. Even so, it had several outstanding exponents, especially in Germany, where Joseph Anton Koch, Ferdinand Kobell and Wilhelm von Kobell are worth mentioning. Koch focused on the Alpine mountains, where he succeeded in capturing the cloudy atmosphere of the high mountains and the effects of sparkling light on plant and water surfaces. He usually incorporated the human presence, sometimes with a thematic pretext of a historical or literary type – such as Shakespeare's plays or the Ossian cycle. The light in his paintings is generally clear and cold, natural, without too much stridency. If Koch represented a type of idealistic landscape, heir to Poussin or Claude Lorrain, Ferdinand Kobell represents the realistic landscape, indebted to the Dutch Baroque landscape. His landscapes of valleys and plains with mountainous backgrounds are bathed in a translucent light, with intense contrasts between the various planes of the image. His son Wilhelm followed his style, with a greater concern for light, evident in his clear environments of cold light and elongated shadows, which gives his figures a hard consistency and metallic appearance.
Contemporary Art.
19th Century.
The 19th century saw the beginning of an evolutionary dynamic of styles that followed one another with increasing speed, and modern art emerged in opposition to academic art, with the artist placed at the forefront of the cultural evolution of humanity. The study of light was enriched by the appearance of photography and by new technological advances in artificial light: gaslight at the beginning of the century, kerosene in the middle and electricity at the end. These phenomena brought about a new awareness of light, since this element configures visual appearance, shifting the concept of reality from the tangible to the perceptible.
Romanticism.
The first style of the century was Romanticism, a movement of profound renewal in all artistic genres, which paid special attention to the field of spirituality, fantasy, sentiment, love of nature, along with a darker element of irrationality, attraction to the occult, madness, dreams. Popular culture, the exotic, the return to underrated artistic forms of the past – especially medieval ones – were especially valued, and the landscape gained notoriety, which became a protagonist in its own right. The Romantics had the idea of an art that arose spontaneously from the individual, emphasizing the figure of the "genius": art is the expression of the artist's emotions. The Romantics used a more expressive technique with respect to neoclassical restraint, modeling the forms by means of impasto and glazes, in such a way that the expressiveness of the artist is released.
In a certain pre-Romanticism we can place William Blake, an original writer and artist, difficult to classify, who devoted himself especially to illustration, in the manner of the ancient illuminators of codices. Most of Blake's images are set in a nocturnal world, in which light emphasizes certain parts of the image, a light of dawn or twilight, almost "liquid", unreal. Between Neoclassicism and Romanticism was also Johann Heinrich Füssli, author of dreamlike images in a style influenced by Italian Mannerism, in which he used strong contrasts of light and shadow, with lighting of a theatrical character, like footlights.
One of the pioneers of Romanticism was the prematurely deceased Frenchman Théodore Géricault, whose masterpiece, "The Raft of the Medusa" (1819, Musée du Louvre, Paris), presents a ray of light emerging from the stormy clouds in the background as a symbol of hope. The most prominent member of the movement in France was Eugène Delacroix, a painter influenced by Rubens and the Venetian school, who conceived of painting as a medium in which patches of light and color are related. He was also influenced by John Constable, whose painting "The Hay Wain" opened his eyes to a new sensitivity to light. In 1832 he traveled to Morocco, where he developed a new style that could be considered proto-impressionist, characterized by the use of white to highlight light effects, with a rapid execution technique.
In the field of landscape painting, John Constable and Joseph Mallord William Turner stood out, heirs of the rich tradition of English landscape painting of the 18th century. Constable was a pioneer in capturing atmospheric phenomena. Kenneth Clark, in "Landscape into Art", credited him with the invention of the "chiaroscuro of nature", which would be expressed in two ways: on the one hand, the contrast of light and shade that for Constable was essential in any landscape painting and, on the other, the sparkling effects of dew and breeze that the British painter was able to capture so masterfully on his canvases, with a technique of interrupted strokes and touches of pure white made with a palette knife. Constable once said that "the form of an object is indifferent; light, shadow and perspective will always make it beautiful".
Joseph Mallord William Turner was a painter with a great intuition for capturing the effects of light in nature, with environments that combine luminosity with atmospheric effects of great drama, as seen in "Hannibal Crossing the Alps" (1812, Tate Gallery, London). Turner had a predilection for violent atmospheric phenomena, such as storms, tidal waves, fog, rain, snow, fire and spectacles of destruction, in landscapes in which he made numerous experiments on chromaticism and luminosity, which gave his works an aspect of great visual realism. His technique was based on a colored light that dissolved the forms in a space-color-light relationship that gives his work an appearance of great modernity. According to Kenneth Clark, Turner "was the one who raised the key of color so that his paintings not only represented light, but also symbolized the nature of light". His early works still had a certain classical component, in which he imitated the style of artists such as Claude Lorrain, Richard Wilson, Adriaen van de Velde or Aelbert Cuyp. They are works in which he still represents light by means of contrast, executed in oil; however, his watercolors already pointed to what would be his mature style, characterized by the rendering of color and light in movement, with a clear tonality achieved through a prior application of a pearly film of paint. In 1819 he visited Italy, whose light inspired him and induced him to elaborate images where the forms were diluted in a misty luminosity, with pearly moonscapes and shades of yellow or scarlet. He then devoted himself to his most characteristic images, mainly coastal scenes in which he made a profound study of atmospheric phenomena. In "Interior at Petworth" (1830, British Museum, London) the basis of his design is already light and color, with everything else subordinated to these values. Of his later works Clark states that "Turner's imagination was capable of distilling, from light and color, poetry as delicate as Shelley's." Among his works are: "San Giorgio Maggiore: At Dawn" (1819, Tate Gallery), "Regulus" (1828, Tate Gallery), "The Burning of the Houses of Lords and Commons" (1835, Philadelphia Museum of Art), "The Fighting Temeraire" (1839, National Gallery), "The Slave Ship (Slavers Throwing Overboard the Dead and Dying)" (1840, Museum of Fine Arts, Boston), "Twilight over a Lake" (1840, Tate Gallery), "Rain, Steam and Speed" (1844, National Gallery), etc.
Mention should also be made of Richard Parkes Bonington, a prematurely deceased artist, primarily a watercolorist and lithographer, who lived most of his time in Paris. He had a light, clear and spontaneous style. His landscapes denote the same atmospheric sensibility of Constable and Turner, with a great delicacy in the treatment of light and color, to the point that he is considered a precursor of impressionism.
In Germany the figure of Caspar David Friedrich stands out, a painter with a pantheistic and poetic vision of nature, an uncorrupted and idealized nature in which the human figure plays only the role of spectator before its grandeur and infinity. From his beginnings, Friedrich developed a style marked by sure contours and a subtle play of light and shadow, in watercolor, oil or sepia ink. One of his first outstanding works is "The Cross on the Mountain" (1808, Gemäldegalerie Neue Meister, Dresden), where a cross with Christ crucified stands on a pyramid of rocks against the light, in front of a sky furrowed with clouds and crossed by five beams of light that emerge from an invisible sun intuited behind the mountain, without it being clear whether it is sunrise or sunset; one of the beams generates reflections on the crucifix, so it is understood to be a metal sculpture. During his early years he focused on landscapes and seascapes, with warm sunrise and sunset lights, although he also experimented with the effects of winter, stormy and foggy light. A more mature work is "Memorial Image for Johann Emanuel Bremer" (1817, Alte Nationalgalerie, Berlin), a night scene with a strong symbolic content alluding to death: in the foreground appears a garden in twilight, with a fence through which the rays of the moon filter; the background, with a faint light of dawn, represents the afterlife. In "Woman before the Rising Sun" (1818–1820, Folkwang Museum, Essen) – also called "Woman before the Setting Sun", since the time of day is not known with certainty – he showed one of his characteristic compositions, that of a human figure before the immensity of nature, a faithful reflection of the Romantic feeling of the sublime, with a sky of intense reddish yellow; it is usually interpreted as an allegory of life as a permanent Holy Communion, a kind of religious communion devised by August Wilhelm von Schlegel. Between 1820 and 1822 he painted several landscapes in which he captured the variation of light at different times of the day: "Morning", "Noon", "Afternoon" and "Sunset", all of them in the Niedersächsisches Landesmuseum in Hannover. For Friedrich, dawn and dusk symbolized birth and death, the cycle of life. In "Sea with Sunrise" (1826, Hamburger Kunsthalle, Hamburg) he reduced the composition to a minimum, playing with light and color to create an image of great intensity, inspired by the engravings of the 16th and 17th centuries that recreated the appearance of light on the first day of Creation. One of his last works was "The Ages of Life" (1835, Museum der bildenden Künste, Leipzig), where the five characters are related to the five boats at different distances from the horizon, symbolizing the ages of life. Other outstanding works of his are: "Abbey in the Oak Grove" (1809, Alte Nationalgalerie, Berlin), "Rainbow in a Mountain Landscape" (1809–1810, Folkwang Museum, Essen), "View of a Harbor" (1815–1816, Charlottenburg Palace, Berlin), "Wanderer above the Sea of Fog" (1818, Hamburger Kunsthalle, Hamburg), "Moonrise on the Seaside" (1821, Hermitage Museum, Saint Petersburg), "Sunset on the Baltic Sea" (1831, Gemäldegalerie Neue Meister, Dresden), "The Great Enclosure" (1832, Gemäldegalerie Neue Meister, Dresden), etc.
The Norwegian Johan Christian Dahl moved in the wake of Friedrich, although with a greater interest in light and atmospheric effects, which he captured in a naturalistic way, thus moving away from the romantic landscape. In his works he shows a special interest in the sky and clouds, as well as misty and moonlit landscapes. In many of his works the sky occupies almost the entire canvas, leaving only a narrow strip of land occupied by a solitary tree.
Georg Friedrich Kersting made a transposition of Friedrich's pantheistic mysticism to interior scenes, illuminated by a soft light of lamps or candles that gently illuminate the domestic environments that he used to represent, giving these scenes an appearance that transcends reality to become solemn images with a certain mysterious air.
Philipp Otto Runge developed his own theory of color, according to which he differentiated between opaque and transparent colors according to whether they tended to light or darkness. In his work this distinction served to highlight the figures in the foreground from the background of the scene, which was usually translucent, generating a psychological effect of transition between planes. This served to intensify the allegorical sense of his works, since his main objective was to show the mystical character of nature. Runge was a virtuoso in capturing the subtle effects of light, a mysterious light that has its roots in Altdorfer and Grünewald, as in his portraits illuminated from below with magical reflections that illuminate the character as if immersed in a halo.
The Nazarene movement also emerged in Germany, a series of painters who between 1810 and 1830 adopted a style that was supposedly old-fashioned, inspired by Renaissance classicism – mainly Fra Angelico, Perugino and Raphael – and with an accentuated religious sense. The Nazarene style was eclectic, with a preponderance of drawing over color and a diaphanous luminosity, with limitation or even rejection of chiaroscuro. Its main representatives were: Johann Friedrich Overbeck, Peter von Cornelius, Julius Schnorr von Carolsfeld and Franz Pforr.
Also in Germany and the Austro-Hungarian Empire there was the Biedermeier style, a more naturalistic tendency halfway between romanticism and realism. One of its main representatives was Ferdinand Georg Waldmüller, an advocate of the study of nature as the only goal of painting. His paintings are brimming with a resplendent clarity, a meticulously elaborated light of almost palpable quality, as an element that builds the reality of the painting, combined with well-defined shadows. Other artists of interest in this trend are Johann Erdmann Hummel, Carl Blechen, Carl Spitzweg and Moritz von Schwind. Hummel used light as a stylizing element, with a special interest in unusual light phenomena, from artificial light to glints and reflections. Blechen evolved from a typical romanticism with a heroic and fantastic tone to a naturalism that was characterized by light after a year's stay in Italy. Blechen's light is summery, a bright light that accentuates the volume of objects by giving them a tactile substance, combined with a skillful use of color. Spitzweg incorporated camera obscura effects into his paintings, in which light, whether sunlight or moonlight, appears in the form of beams that create effects that are sometimes unreal but of great visual impact. Schwind was the creator of a diaphanous and lyrical light, captured in resplendent luminous spaces with subtle tonal gradations in the reflections. Lastly, we should mention the Danish Christen Købke, author of landscapes of a delicate light reminiscent of the "Pointillé" of Vermeer or the luminosity of Gerrit Berckheyde.
In Italy in the 1830s there emerged the so-called Posillipo School, a group of anti-academic Neapolitan landscape painters among whom Giacinto Gigante, Filippo Palizzi and Domenico Morelli stood out. These artists showed a new concern for light in the landscape, with a more truthful aspect, far from the classical canons, in which shimmering effects gain prominence. Inspired by Vedutism and picturesque painting, as well as by the work of the artist they considered their direct master, Anton Sminck van Pitloo, they used to paint from life, in compositions in which the chromatism stands out without losing the solidity of the drawing.
Realism.
Romanticism was succeeded by realism, a trend that emphasized reality, the description of the surrounding world, especially of workers and peasants in the new framework of the industrial era, with a certain component of social denunciation, linked to political movements such as utopian socialism. These artists moved away from the usual historical, religious or mythological themes to deal with more mundane themes of modern life.
One of the realist painters most concerned with light was Jean-François Millet, influenced by Baroque and Romantic landscape painting, especially Caspar David Friedrich. He specialized in peasant scenes, often set at dawn or dusk, as in "On the Way to Work" (1851, private collection), "Shepherdess Watching Her Flock" (1863, Musée d'Orsay, Paris) or "A Norman Milkmaid at Gréville" (1871, Los Angeles County Museum of Art). For the composition of his works he often used wax or clay figurines that he moved around to study the effects of light and volume. His technique was one of dense, vigorous brushwork, with strong contrasts of light and shadow. His masterpiece is "The Angelus" (1857, Musée d'Orsay, Paris): the evening setting of this work allows its author to emphasize the dramatic aspect of the scene, translated pictorially into muted, uncontrasted tonalities, with the darkened figures standing out against the brightness of the sky, which heightens their volume and accentuates their outlines, resulting in an emotional vision that underlines the social message the artist wanted to convey. One of his last works was "Bird Hunters" (1874, Philadelphia Museum of Art), a nocturnal scene in which peasants dazzle birds with a torch in order to hunt them, and in which the luminosity of the torch stands out, achieved with a dense application of pictorial impasto.
The champion of realism was Gustave Courbet, whose training drew on Flemish, Dutch and Venetian painting of the 16th and 17th centuries, especially Rembrandt. His early works are still of romantic inspiration, using a dramatic tone of light borrowed from the Flemish-Dutch tradition but reinterpreted with a more modern sensibility. His mature work, by then fully realistic, shows the influence of the Le Nain brothers and is characterized by large, meticulously executed canvases, with broad shiny surfaces and a dense application of pigment, often applied with a palette knife. At the end of his career he devoted himself more to landscapes and nudes, which stand out for their luminous sensibility. Another point of reference was Honoré Daumier, a painter, lithographer and caricaturist with a strong satirical streak, a loose and free stroke, and an effective use of chiaroscuro. In his paintings he was inspired by Goya's contrasts of light, giving his works little colorism and placing greater emphasis on light ("The Fugitives", 1850; "Barabbas", 1850; "The Butcher", 1857; "The Third-Class Carriage", 1862).
Linked to realism was the French landscape school of Barbizon (Camille Corot, Théodore Rousseau, Charles-François Daubigny, Narcisse-Virgile Díaz de la Peña), marked by a pantheistic feeling for nature and a concern for the effects of light in the landscape, such as the light that filters through the branches of trees. The most outstanding was Corot, who discovered light in Italy, where he dedicated himself to painting Roman landscapes outdoors at different times of the day, in scenes of clean atmospheres in which he applied to the surfaces of the volumes exactly the dose of light needed to achieve a panoramic vision in which the volumes stand out sharply against the atmosphere. Corot had a predilection for a kind of tremulous light reflected on water or filtered through the branches of trees, a formula that satisfied him while earning him great popularity with the public.
Eugène Boudin, one of the first landscape painters to paint outdoors, especially seascapes, also stood out as an independent artist. He achieved great mastery in the elaboration of skies, shimmering and slightly misty skies of dim and transparent light, a light that is also reflected in the water with instantaneous effects that he knew how to capture with spontaneity and precision, with a fast technique that already pointed to impressionism – in fact, he was Monet's teacher.
Naturalistic landscape painting had another outstanding representative in Germany, Adolph von Menzel, who was influenced by Constable and developed a style in which light is decisive for the visual aspect of his works, with a technique that prefigured impressionism. Also noteworthy are his interior scenes with artificial light, in which he recreates a multitude of anecdotal details and luminous effects of all kinds, as in his "Dinner after the Ball" (1878, Alte Nationalgalerie, Berlin). Alongside him stands Hans Thoma, influenced by Courbet, who in his works combined the social concerns of realism with a still somewhat romantic feeling for the landscape. Thoma was an exponent of a "lyrical realism", with landscapes and paintings of peasant themes, usually set in his native Black Forest and characterized by the use of a silver-toned light.
In the Netherlands there was the figure of Johan Barthold Jongkind, considered a pre-impressionist, whom Monet also considered his master. He was a great interpreter of atmospheric phenomena and of the play of light on water and snow, as well as of winter and night lights – his moonlit landscapes were highly valued.
In Spain, Carlos de Haes, Agustín Riancho and Joaquín Vayreda deserve mention. Haes, of Belgian origin, traveled the length and breadth of Spain to paint its landscapes, which he rendered with almost topographical detail. Riancho had a predilection for mountain scenery, with a free and spontaneous coloring that tended toward dark shades. Vayreda was the founder of the so-called Olot School; influenced by the Barbizon School, he applied this style to the landscape of Girona, in works of diaphanous and serene composition with a certain lyrical component of bucolic evocation.
Also in Spain it is worth mentioning the work of Mariano Fortuny, who found his personal style in Morocco as a chronicler of the African War (1859-1860), where he discovered the colorfulness and exoticism that would characterize his work. There he began to paint quick sketches of luminous touches, capturing the action in a spontaneous and vigorous way, which would become the basis of his style: a vibrantly executed colorism with flashing light effects, as can be seen in one of his masterpieces, "La vicaría" (1868-1870, Museo Nacional de Arte de Cataluña, Barcelona).
Another landscape school was the Italian school of the Macchiaioli (Silvestro Lega, Giovanni Fattori, Telemaco Signorini), anti-academic in style and characterized by the use of patches ("macchia" in Italian, hence the name of the group) of color and unfinished, sketched forms, a movement that preceded Impressionism. These artists painted from life and had as their main objective the reduction of painting to contrasts of light and brilliance. According to Diego Martelli, one of the theorists of the group, "we affirmed that form did not exist and that, just as in light everything results from color and chiaroscuro, so it is a matter of obtaining tones, the effects of the true". The Macchiaioli revalued light contrasts and knew how to transcribe onto their canvases the power and clarity of the Mediterranean light. They captured like no one else the effects of the sun on objects and landscapes, as in the painting "The Patrol" by Giovanni Fattori, in which the artist uses a white wall as a luminous screen against which the figures are silhouetted.
In Great Britain, the school of the Pre-Raphaelites emerged, who were inspired – as their name indicates – by Italian painters before Raphael, as well as by the recently emerged medium of photography, with exponents such as Dante Gabriel Rossetti, Edward Burne-Jones, John Everett Millais, William Holman Hunt and Ford Madox Brown. The Pre-Raphaelites sought a realistic vision of the world, based on images of great detail, vivid colors and brilliant workmanship; as opposed to the side lighting advocated by academicist painting, they preferred general lighting, which turned paintings into flat images, without great contrasts of light and shadow. To achieve maximum realism, they carried out numerous investigations, as in the painting "The Rescue" (1855, National Gallery of Victoria, Melbourne), by John Everett Millais, in which a fireman saves two girls from a fire, for which the artist burned wood in his studio to find the right lighting. The almost photographic detail of these works led John Ruskin to say of William Holman Hunt's "Strayed Sheep" (1852, Tate Britain, London) that "for the first time in the history of art the absolutely faithful balance between color and shade is achieved, by which the actual brightness of the sun could be transported into a key by which possible harmonies with material pigments should produce on the mind the same impressions as are made by the light itself." Hunt was also the author of "The Light of the World" (1853, Keble College, Oxford University), in which light has a symbolic meaning, related to the biblical passage that identifies Christ with the phrase "I am the light of the world, he who follows me shall not walk in darkness, for he shall have the light of life" (John 8:12). This painter again portrayed the symbolic light of Jesus Christ in "The Awakening Conscience" (1853, Tate Britain), through the light of the garden streaming in through the window.
Romanticism and realism were the first artistic movements to reject the official art of the time, the art taught in the academies – academicism – an art that was institutionalized and anchored in the past both in the choice of subjects and in the techniques and resources made available to the artist. In France, in the second half of the 19th century, this art was called "art pompier" ("fireman's art", a pejorative name derived from the fact that many of its authors represented classical heroes wearing helmets resembling those of firemen). Although in principle the academies were in tune with the art produced at the time, so that one cannot speak of a distinct style, in the 19th century, when the evolutionary dynamics of the styles began to move away from the classical canons, academic art became constrained within a classicist style based on strict rules. Academicism was stylistically based on Greco-Roman classicism, but also on earlier classicist authors, such as Raphael, Poussin or Guido Reni. Technically, it was based on careful drawing, formal balance, perfect line, plastic purity and careful detailing, together with realistic and harmonious coloring. Many of its representatives had a special predilection for the nude as an artistic theme, as well as a special attraction to orientalism. Its main representatives were William-Adolphe Bouguereau, Alexandre Cabanel, Eugène-Emmanuel Amaury-Duval and Jean-Léon Gérôme.
Impressionism.
Light played a fundamental role in impressionism, a style based on the representation of an image according to the "impression" that light produces on the eye. In contrast to academic art and its forms of representation based on linear perspective and geometry, the Impressionists sought to capture reality on the canvas as they perceived it visually, giving all the prominence to light and color. To this end, they used to paint outdoors ("en plein air"), capturing the various effects of light on the surrounding environment at different times of the day. They studied in depth the laws of optics and the physics of light and color. Their technique was based on loose brushstrokes and a combination of colors applied according to the viewer's vision, with a preponderance of contrast between elementary colors (yellow, red and blue) and their complements (orange, green and violet). In addition, they used to apply the pigment directly on the canvas, without mixing, thus achieving greater luminosity and brilliance.
Impressionism perfected the capture of light by means of fragmented touches of color, a procedure that had already been used to a greater or lesser extent by artists such as Giorgione, Titian, Guardi and Velázquez (it is well known that the Impressionists admired the genius of the author of "Las Meninas", whom they considered "the painter of painters"). For the Impressionists, light was the protagonist of the painting, so they began to paint from life, capturing at every moment the variations of light on landscapes and objects, the fleeting "impression" of light at different times of the day, which is why they often produced series of paintings of the same place at different hours. For this they dispensed with drawing and defined form and volume directly with color, in loose brushstrokes of pure tones juxtaposed with each other. They also abandoned chiaroscuro and violent contrasts of light and shadow, for which purpose they dispensed with colors such as black, gray or brown: the chromatic research of impressionism led to the discarding of black in painting, since they claimed that it is a color that does not exist in nature. From there they began to use a luminous range of "light on light" (white, blue, pink, red, violet), elaborating the shadows with cold tones. Thus, the impressionists concluded that there is neither form nor color, and that the only real thing is the air-light relationship. In impressionist paintings the theme is light and its effects, beyond the anecdote of places and characters. Impressionism was considerably influenced by research in the field of photography, which had shown that the vision of an object depends on the quantity and quality of light.
His discovery consists precisely in having realized that full light discolors tones, that the sun reflected by objects tends, by dint of clarity, to resize them in that luminous unity that fuses the seven prismatic rays into a single colorless brightness, which is light.
Impressionist painters were especially concerned with artificial light: according to Juan Antonio Ramirez (Mass Media and Art History, 1976), "the surprise at the effect of the new phenomenon of artificial light in the street, in cafés, and in the living room, gave rise to famous paintings such as Manet's "Un bar aux Folies Bergère" (1882, Courtauld Gallery, London), Renoir's "Dancing at the Moulin de la Galette" (1876, Musée d'Orsay, Paris) and Degas' "Women in a Café" (1877, Musée d'Orsay, Paris). Such paintings show the lighted lanterns and that glaucous tonality that only artificial light produces". Numerous Impressionist works are set in bars, cafés, dances, theaters and other establishments, with lamps or candelabras of dim light that mixes with the smoky air of the atmosphere of these places, or candle lights in the case of theaters and opera houses.
The main representatives were Claude Monet, Camille Pissarro, Alfred Sisley, Pierre-Auguste Renoir, and Edgar Degas, with an antecedent in Édouard Manet. The most strictly Impressionist painters were Monet, Sisley and Pissarro, the most concerned with capturing light in the landscape. Monet was a master at capturing atmospheric phenomena and the vibration of light on water and objects, with a technique of short brushstrokes of pure colors. He produced the greatest number of series of the same landscape at different times of the day, to capture all the nuances and subtle differences of each type of light, as in his series of "The Gare Saint-Lazare", "Haystacks", "Poplars", "Rouen Cathedral", "The Houses of Parliament", "San Giorgio Maggiore" and "Water Lilies". His last works at Giverny on the water lilies approach abstraction, achieving an unparalleled synthesis of light and color. In the mid-1880s he painted coastal scenes of the French Riviera with the highest degree of luminous intensity ever achieved in painting, in which the forms dissolve in pure incandescence and whose only subject is already the sensation of light.
Sisley also showed a great interest in the changing effects of light in the atmosphere, with a fragmented touch similar to Monet's. His landscapes are of great lyricism, with a predilection for aquatic themes and a certain tendency toward the dissolution of form. Pissarro, on the other hand, focused more on rustic landscape painting, with a vigorous and spontaneous brushstroke that conveyed "an intimate and profound feeling for nature", as the critic Théodore Duret said of him. In addition to his countryside landscapes, he produced urban views of Paris, Rouen and Dieppe, and likewise painted series at various times of the day and night, such as those of the "Avenue de l'Opéra" and the "Boulevard Montmartre".
Renoir developed a more personal style, notable for its optimism and joie de vivre. He evolved from a realism of Courbetian influence to an impressionism of light and luminous colors, and for a time shared a style similar to Monet's, with whom he spent several periods painting at Argenteuil. He differed from the latter especially in the greater presence of the human figure, an essential element for Renoir, as well as in the use of tones such as black that were rejected by the other members of the group. He liked the play of light and shadow, which he achieved by means of small patches, and attained great mastery in effects such as the beams of light filtering between the branches of trees, as seen in his "Dance at the Moulin de la Galette" (1876, Musée d'Orsay, Paris) and in "Torso, Effect of Sunlight" (1875, Musée d'Orsay, Paris), where sunlight plays on the skin of a naked girl.
Degas was an individual figure, who although he shared most of the impressionist assumptions never considered himself part of the group. Contrary to the preferences of his peers, he did not paint from life and used drawing as a compositional basis. His work was influenced by photography and Japanese prints, and from his beginnings he showed interest in night and artificial light, as he himself expressed: "I work a lot on night effects, lamps, candles, etc. The curious thing is not always to show the light source, but the effect of the light". In his series of works on dancers or horse races, he studied the effects of light in movement, in a disarticulated space in which the effects of lights and backlighting stand out.
Many Impressionist works were almost exclusively about the effects of light on the landscape, which they tried to recreate as spontaneously as possible. However, this led in the 1880s to a certain reaction, an attempt to return to more classical canons of representation and to the figure as the basis of the composition. From then on, several styles derived from impressionism emerged, such as neo-impressionism (also called divisionism or pointillism) and post-impressionism. Neo-Impressionism took up the optical experimentation of Impressionism: the Impressionists used to blur the contours of objects by lowering the contrasts between light and shadow, which implied replacing the solidity of objects with a disembodied luminosity, a process that culminated in Pointillism: in this technique there is no precise source of illumination, but each point is a light source in itself. The composition is based on juxtaposed ("divided") dots of pure color, which merge in the eye of the viewer at a given distance. When these juxtaposed colors were complementary (red-green, yellow-violet, orange-blue) a greater luminosity was achieved. Pointillism, based largely on the theories of Michel-Eugène Chevreul ("The Law of Simultaneous Contrast of Colors", 1839) and Ogden Rood ("Modern Chromatics", 1879), defended the exclusive use of pure and complementary colors, applied in small brushstrokes in the form of dots that composed the image on the viewer's retina at a certain distance. Its best exponents were Georges Seurat and Paul Signac.
Seurat devoted his entire life to the search for a method that would reconcile science and aesthetics, a personal method that would transcend impressionism. His main concern was chromatic contrast, its gradation and the interaction between colors and their complementaries. He created a disc with all the tones of the rainbow united by their intermediate colors, placing the pure tones in the center and gradually lightening them towards the periphery, where pure white was located, so that he could easily locate the complementary colors. This disc allowed him to mix the colors in his mind before fixing them on the palette, thus reducing the loss of chromatic intensity and luminosity. In his works he first drew in black and white to achieve the maximum balance between light and dark masses, and then applied the color in tiny dots that blended in the retina of the viewer through optical mixing. He also took from Charles Henry his theory on the relationship between aesthetics and physiology, how certain forms or spatial directions could express pleasure or pain; according to this author, warm colors were dynamogenic and cold ones inhibitory. From 1886 he focused more on interior scenes with artificial light. His work "Chahut" (1889–1890, Kröller-Müller Museum, Otterlo) had a powerful influence on Cubism for its way of modeling volumes in space through light, without the need to simulate a third dimension.
Signac was a disciple of Seurat, although with a freer and more spontaneous style, not so scientific, in which the brilliance of color stands out. In his last years his works evolved to a search for pure sensation, with a chromatism of expressionist tendency, while he reduced the pointillist technique to a grid of tesserae of larger sizes than the divisionist dots.
In Italy there was a variant – the so-called "divisionisti" – who applied this technique to scenes of greater social commitment, due to its link with socialism, although with some changes in technical execution, since instead of confronting complementary colors they contrasted them in terms of rays of light, producing images that stand out for their luminosity and transparency, as in the work of Angelo Morbelli. Gaetano Previati developed a style in which luminosity is linked to symbolism related to life and nature, as in his "Maternity" (1890-1891, Banca Popolare di Novara), generally with a certain component of poetic evocation. Another member of the group, Vittore Grubicy de Dragon, wrote that "light is life and, if, as many rightly affirm, art is life, and light is a form of life, the divisionist technique, which tends to greatly increase the expressiveness of the canvas, can become the cradle of new aesthetic horizons for tomorrow".
Post-impressionism was, rather than a homogeneous movement, a grouping of diverse artists initially trained in impressionism who later followed individual trajectories of great stylistic diversity. Its best representatives were Henri de Toulouse-Lautrec, Paul Gauguin, Paul Cézanne, and Vincent van Gogh. Cézanne established a compositional system based on geometric figures (cube, cylinder and pyramid), which would later influence Cubism. He also devised a new method of illumination, in which light resides in the density and intensity of color rather than in the transitional values between black and white. The one who experimented most in the field of light was Van Gogh, author of works of strong dramatic force and inner introspection, with sinuous and dense brushstrokes of intense color, in which he deformed reality and gave it a dreamlike air. Van Gogh's work shows influences as disparate as those of Millet and Hiroshige, while from the Impressionist school he was particularly influenced by Renoir. Already in his early works his interest in light is noticeable, which is why he gradually lightened his palette, until he practically reached a yellow monochrome, with a fierce and temperamental luminosity.
In his early works, such as "The Potato Eaters" (1885, Van Gogh Museum, Amsterdam), the influence of Dutch realism, with its tendency to chiaroscuro and dense color applied in thick brushstrokes, is evident; here he created a dramatic atmosphere of artificial light that emphasizes the tragedy of the miserable situation of these workers marginalized by the Industrial Revolution. Later his coloring became more intense, influenced by the divisionist technique, with a method of superimposing brushstrokes in different tones; for the most illuminated areas he used yellow, orange and reddish tones, seeking a harmonious relationship among them all. After settling in Arles in 1888 he was fascinated by the limpid Mediterranean light, and in his landscapes of that period he created clear and shining atmospheres with hardly any chiaroscuro. As was usual in impressionism, he sometimes made several versions of the same motif at different times of the day to capture its variations of light. He also continued his interest in artificial and nocturnal light, as in "The Night Café" (1888, Yale University Art Gallery, New Haven), where the light of the lamps seems to vibrate thanks to the concentric halo-like circles with which he rendered their radiance; or "Café Terrace at Night" (1888, Kröller-Müller Museum, Otterlo), where the luminosity of the café terrace contrasts with the darkness of the sky, in which the stars seem like flowers of light. Light also plays a special role in his "Sunflowers" series (1888-1889), where he used all imaginable shades of yellow, which for him symbolized light and life, as he expressed in a letter to his brother Theo: "a sun, a light that, for lack of a better adjective, I can only define with yellow, a pale sulfur yellow, a pale lemon yellow". To highlight the yellow and orange, he used green and sky blue in the outlines, creating an effect of soft light intensity.
In Italy during these years there was a movement called "Scapigliatura" (1860-1880), sometimes considered a predecessor of divisionism, characterized by its interest in the purity of color and the study of light. Artists like Tranquillo Cremona, Mosè Bianchi or Daniele Ranzoni tried to capture on canvas their feelings through chromatic vibrations and blurred contours, with characters and objects almost dematerialized. Giovanni Segantini was a singular artist who combined drawing of academicist tradition with a post-impressionist coloring in which light effects play a prominent role. Segantini's specialty was the mountain landscape, which he painted outdoors, with a technique of strong brushstrokes and simple colors, with a vibrant light that he found only in the high Alpine mountains.
In Germany, impressionism was represented by Fritz von Uhde, Lovis Corinth, and Max Slevogt. The first was more a painter of the open air than strictly an impressionist, although rather than landscape he devoted himself to genre painting, especially of religious themes, works in which he also showed a special sensitivity to light. Corinth had a rather eclectic career, from academic beginnings – he was a disciple of Bouguereau – through realism and impressionism, to a certain decadentism and an approach to "Jugendstil", finally ending up in expressionism. Influenced by Rembrandt and Rubens, he painted portraits, landscapes and still lifes with a serene and brilliant chromatism. Slevogt took up the fresh and brilliant chromatism of the Impressionists, though renouncing their fragmentation of colors, and his technique was one of loose brushstrokes and energetic movement, with bold and original light effects that denote a certain influence of the baroque art of his native Bavaria.
In Great Britain, the work of James Abbott McNeill Whistler, American by birth but established in London from 1859, stood out. His landscapes are the antithesis of the sunny French landscapes, as they recreate the foggy and taciturn English climate, with a preference for night scenes, images from which he nevertheless knew how to distill an intense lyricism, with artificial light effects reflected in the waters of the Thames.
In the United States, it is worth mentioning the work of John Singer Sargent, Mary Cassatt, and Childe Hassam. Sargent was an admirer of Velázquez and Frans Hals, and excelled as a society portraitist, with a virtuoso and elegant technique, both in oil and watercolor, the latter mainly in landscapes of intense color. Cassatt lived for a long time in Paris, where she was associated with the Impressionist circle, with whom she shared the themes more than the technique, and developed an intimate and sophisticated body of work, influenced by Japanese prints. Hassam's main motif was New York life, treated in a fresh though somewhat cloying style.
Mention should also be made of Scandinavian impressionism, many of whose artists were trained in Paris. These painters had a special sensitivity to light, perhaps due to its scarcity in their native lands, and so they traveled to France and Italy attracted by the "light of the south". The main exponents were Peder Severin Krøyer, Akseli Gallen-Kallela, and Anders Zorn. Krøyer showed a special interest in highly complex lighting effects, such as the mixing of natural and artificial light. Gallen-Kallela was an original artist who later approached symbolism, with a personal, expressive and stylized painting tending towards romanticism and a special interest in Finnish folklore. Zorn specialized in portraits, nudes and genre scenes, with a brilliant brushstroke of vibrant luminosity.
In Russia, Valentin Serov and Konstantin Korovin should be mentioned. Serov had a style similar to that of Manet or Renoir, with a taste for intense chromatism and light reflections, a bright light that extols the joy of life. Korovin painted both urban landscapes and natural landscapes in which he elevates a simple sketch of chromatic impression to the category of a work of art.
In Spain, the work of Aureliano de Beruete and Darío de Regoyos stands out. Beruete was a disciple of Carlos de Haes, so he was trained in the realist landscape, but adopted the impressionist technique after a period of training in France. An admirer of Velázquez's light, he knew how to apply it to the Castilian landscape – especially the mountains of Madrid – with his own personal style. Regoyos also trained with Haes and developed an intimate style halfway between pointillism and expressionism.
Luminism and symbolism.
From the mid-19th century until practically the transition to the 20th century, various styles emerged that placed special emphasis on the representation of light, which is why they were generically referred to as "luminism", with various national schools in the United States and various European countries or regions. The term luminism was introduced by John Ireland Howe Baur in 1954 to designate the landscape painting done in the United States between 1840 and 1880, which he defines as "a polished and meticulous realism in which there are no noticeable brushstrokes and no trace of impressionism, and in which atmospheric effects are achieved by infinitely careful gradations of tone, by the most exact study of the relative clarity of nearer and more distant objects, and by an accurate rendering of the variations of texture and color produced by direct or reflected rays".
The first was American Luminism, which gave rise to a group of landscape painters generally grouped in the so-called Hudson River School, which includes to a greater or lesser extent Thomas Cole, Asher Brown Durand, Frederic Edwin Church, Albert Bierstadt, Martin Johnson Heade, Fitz Henry Lane, John Frederick Kensett, James Augustus Suydam, Francis Augustus Silva, Jasper Francis Cropsey and George Caleb Bingham. In general, their works were based on grandiose compositions, with a horizon line of great depth and a sky of veiled aspect, with atmospheres of strong expressiveness. Their light is serene and peaceful, reflecting a mood of love for nature, a nature that in the United States of the time was still largely virgin and paradisiacal, yet to be explored. It is a transcendent light, of spiritual significance, whose radiance conveys a message of communion with nature. Although they used a classical structure and composition, their treatment of light is original because of the infinity of subtle variations in tonality, achieved through a meticulous study of the natural environment of their country. According to Barbara Novak, Luminism is a more serene form of the romantic aesthetic concept of the sublime, which found its translation in the deep expanses of the North American landscape.
Some historians differentiate between pure Luminism and Hudson River School landscape painting: in the former, the landscape – more centered in the New England area – is more peaceful, more anecdotal, with delicate tonal gradations characterized by a crystalline light that seems to emanate from the canvas, in neat brushstrokes that seem to recreate the surface of a mirror and in compositions in which the excess of detail is unreal due to its straightness and geometrism, resulting in an idealization of nature. Thus understood, Luminism would encompass Heade, Lane, Kensett, Suydam and Silva. Hudson River landscape painting, on the other hand, would have a more cosmic vision and a predilection for a wilder and more grandiloquent nature, with more dramatic visual effects, as seen in the work of Cole, Durand, Church, Bierstadt, Cropsey and Bingham. It must be said, however, that neither group ever accepted these labels.
Thomas Cole was the pioneer of the school. English by birth, one of his main references was Claude Lorrain. Settled in New York from 1825, he began to paint landscapes of the Hudson River area, with the aim of achieving "an elevated style of landscape" in which the moral message was equivalent to that of history painting. He also painted biblical subjects, in which light has a symbolic component, as in his "Expulsion from the Garden of Eden" (1828, Museum of Fine Arts, Boston). Durand was a little older than Cole and, after Cole's premature death, was considered the best American landscape painter of his time. An engraver by trade, from 1837 he turned to natural landscape painting, with a more intimate and picturesque vision of nature than Cole's allegorical one. Church was Cole's first disciple; Cole transmitted to him his vision of a majestic and exuberant nature, which Church reflected in his scenes of the American West and the South American tropics. Bierstadt, of German origin, was influenced by Turner, whose atmospheric effects are seen in works such as "Among the Sierra Nevada, California" (1868, Smithsonian American Art Museum, Washington D. C.), a lake between mountains seen after a storm, with the sun's rays breaking through the clouds. Heade devoted himself to country landscapes of Massachusetts, Rhode Island and New Jersey, in meadows of endless horizons with clear or cloudy skies and the light of various times of day, sometimes refracted by humid atmospheres. Fitz Henry Lane is considered the greatest exponent of luminism. Handicapped since childhood by polio, he focused on the landscape of his native Gloucester (Massachusetts), with works that denote the influence of the English seascape painter Robert Salmon, in which light has a special role, a placid light that gives a sense of eternity, of time stopped in a serene perfection and harmony. Suydam focused on the coastal landscapes of New York and Rhode Island, in which he was able to reflect the light effects of the Atlantic coast. Kensett was influenced by Constable and devoted himself to the New England landscape with a special focus on the luminous reflections of the sky and the sea. Silva also excelled in the seascape, a genre in which he masterfully captured the subtle gradations of light in the coastal atmosphere. Cropsey combined the panoramic effect of the Hudson River School with the more serene luminism of Lane and Heade, with a meticulous and somewhat theatrical style. Bingham masterfully captured in his scenes of the Far West the limpid and clear light of dawn, his favorite when recreating scenes with American Indians and pioneers of the conquest of the West.
Winslow Homer, considered the best American painter of the second half of the 19th century, who excelled in both oil and watercolor and in both landscape and popular scenes of American society, deserves special mention. One of his favorite genres was the seascape, in which he displayed a great interest in atmospheric effects and the changing lights of the day. His painting "Moonlight, Wood Island Light" (1894, Metropolitan Museum of Art, New York) was painted entirely by moonlight, in five hours of work.
Another important school was Belgian Luminism. In Belgium, the influence of French Impressionism was strongly felt, initially in the work of the group called "Les Vingt", as well as in the School of Tervueren, a group of landscape painters who already showed their interest in light, especially in the atmospheric effects, as can be seen in the work of Isidore Verheyden. Later, Pointillism was the main influence on Belgian artists of the time, a trend embraced by Émile Claus and Théo van Rysselberghe, the main representatives of Belgian Luminism. Claus adopted Impressionist techniques, although he maintained academic drawing as the basis for his compositions, and in his work – mainly landscapes – he showed great interest in the study of the effects of light in different atmospheric conditions, with a style that sometimes recalls Monet. Rysselberghe was influenced by Manet, Degas, and Whistler, as well as by the Baroque painter Frans Hals and Spanish painting. His technique was of loose and vigorous brushwork, with great luminous contrasts.
A luminist school also emerged in the Netherlands, more closely linked to the incipient Fauvism, in which Jan Toorop, Leo Gestel, Jan Sluyters, and the early work of Piet Mondrian stood out. Toorop was an eclectic artist, who combined different styles in the search for his own language, such as symbolism, modernism, pointillism, Gauguinian synthetism, Beardsley's linearism, and Japanese printmaking. He was especially devoted to allegorical and symbolic themes and, since 1905, to religious themes.
In Germany, Max Liebermann received an initial realist influence – mainly from Millet – and a slight impressionist inclination towards 1890, until he ended up in a luminism of personal inspiration, with violent brushstrokes and brilliant light, a light of his own research with which he experimented until his death in 1935.
In Spain, luminism developed especially in Valencia and Catalonia. The main representative of the Valencian school was Joaquín Sorolla, although the work of Ignacio Pinazo, Teodoro Andreu, Vicente Castell and Francisco Benítez Mellado is also noteworthy. Sorolla was a master at capturing light in nature, as is evident in his seascapes, painted with a graduated palette of colors and a variable brushstroke, broader for specific shapes and finer to capture the different effects of light. An interpreter of the Mediterranean sun like no other, a French critic said of him that "never has a paintbrush contained so much sun". After a period of training, in the 1890s he began to consolidate his style, based on genre subjects and a technique of rapid execution, preferably outdoors, with a thick, energetic and impulsive brushstroke, and with a constant concern for the capture of light, whose subtlest effects he never ceased to investigate. "La vuelta de la pesca" (1895) is the first work that shows a particular interest in the study of light, especially in its reverberation on the water and on the sails moved by the wind. It was followed by "Pescadores valencianos" (1895), "Cosiendo la vela" (1896) and "Comiendo en la barca" (1898). In 1900 he visited the Universal Exhibition in Paris with Aureliano de Beruete, where he was fascinated by the intense chromatism of Nordic artists such as Anders Zorn, Max Liebermann and Peder Severin Krøyer; from then on he intensified his coloring and, especially, his luminosity, with a light that invaded the whole painting, emphasizing blinding whites, as in "Jávea" (1900), "Idilio" (1900), "Playa de Valencia" (1902), in two versions, morning and sunset, "Evening Sun" (1903), "The Three Sails" (1903), "Children at the Seashore" (1903), "Fisherman" (1904), "Summer" (1904), "The White Boat" (1905), "Bathing in Jávea" (1905), etc. These are mostly seascapes, bathed in a warm Mediterranean light; he had a special predilection for the more golden light of September. From 1906 he lowered the intensity of his palette, with a more nuanced tonality and a predilection for mauve tones; he continued with the seascapes, but increased his production of other types of landscapes, as well as gardens and portraits. He spent summers in Biarritz, and the pale, soft light of the Atlantic led him to lower the luminosity of his works. He also continued with his Valencian scenes: "Paseo a orillas del mar" (1909), "Después del baño" (1909). Between 1909 and 1910 his stays in Andalusia induced him to blur the contours, with a technique close to pointillism and a predominance of white, pink and mauve. Among his last works is "La bata rosa" (1916), in which he unleashes an abundance of light that filters into every part of the canvas, privileging the play of light and color over the treatment of the contours, which appear blurred.
The Luminist School of Sitges emerged in Catalonia, active in this town in the Garraf between 1878 and 1892. Its most prominent members were Arcadi Mas i Fondevila, Joaquim de Miró, Joan Batlle i Amell, Antoni Almirall and Joan Roig i Soler. Opposed in a certain way to the Olot School, whose painters treated the landscape of the interior of Catalonia with a softer and more filtered light, the Sitgetan artists opted for the warm and vibrant Mediterranean light and the atmospheric effects of the Garraf coast. Heirs to a large extent of Fortuny, the members of this school sought to faithfully reflect the luminous effects of the surrounding landscape, in harmonious compositions that combined verism and a certain poetic and idealized vision of nature, with a subtle chromaticism and a fluid brushstroke that was sometimes described as impressionist.
The Sitges School is generally considered a precursor of Catalan modernism: two of its main representatives, Ramon Casas and Santiago Rusiñol, spent several seasons in the town of Sitges, where they adopted the custom of painting "d'après nature" and assumed as the protagonist of their works the luminosity of the environment that surrounded them, although with other formal and compositional solutions in which the influence of French painting is evident. Casas studied in Paris, where he was trained in impressionism, with special influence of Degas and Whistler. His technique stands out for the synthetic brushstroke and the somewhat blurred line, with a theme focused preferably on interiors and outdoor images, as well as popular scenes and social vindication. Rusiñol showed a special sensitivity for the capture of light especially in his landscapes and his series of "Gardens of Spain" – he especially loved the gardens of Mallorca (the sones) and Granada – in which he developed a great ability for the effects of light filtered between the branches of the trees, creating unique environments where light and shadow play capriciously. Likewise, Rusiñol's light shows the longing for the past, for the time that flees, for the instant frozen in time whose memory will live on in the artist's work.
Symbolism, which developed from the 1880s until the turn of the century, was a fantastic and dreamlike style that emerged as a reaction to the naturalism of the realist and impressionist currents, placing special emphasis on the world of dreams, as well as on satanic and terrifying aspects, sex and perversion. A main characteristic of symbolism was aestheticism, a reaction to the prevailing utilitarianism of the time and to the ugliness and materialism of the industrial era. Symbolism gave art and beauty an autonomy of their own, synthesized in Théophile Gautier's formula "art for art's sake" ("L'art pour l'art"). This current was also linked to modernism (also known as "Art Nouveau" in France, "Modern Style" in the United Kingdom, "Jugendstil" in Germany, "Sezession" in Austria or "Liberty" in Italy). Symbolism was an anti-scientific and anti-naturalist movement, so light lost objectivity and was used as a symbolic element, in conjunction with the rest of the visual and iconographic resources of this style. It is a transcendent light, which behind the material world suggests a spirituality, whether religious or pantheistic, or perhaps simply a state of mind of the artist, a feeling, an emotion. Light, by its dematerialization, exerted a powerful influence on these artists, a light far removed from the physical world in its conception, although for its execution they often made use of impressionist and pointillist techniques.
The movement originated in France with figures such as Gustave Moreau, Odilon Redon and Pierre Puvis de Chavannes. Moreau was still trained in romanticism under the influence of his teacher, Théodore Chassériau, but evolved a personal style in both subject matter and technique, with mystical images with a strong component of sensuality, a resplendent chromaticism with an enamel-like finish and the use of a chiaroscuro of golden shadows. Redon developed a fantastic and dreamlike theme, influenced by the literature of Edgar Allan Poe, which largely preceded surrealism. Until the age of fifty he worked almost exclusively in charcoal drawing and lithography, although he later became an excellent colorist, both in oil and pastel. Puvis de Chavannes was an outstanding muralist, a procedure that suited him well to develop his preference for cold tones, which gave the appearance of fresco painting. His style was more serene and harmonious, with an allegorical theme evoking an idealized past, simple forms, rhythmic lines and a subjective coloring, far from naturalism. In France there was also the movement of the Nabis ("prophets" in Hebrew), formed by Paul Sérusier, Édouard Vuillard, Pierre Bonnard, Maurice Denis and Félix Vallotton. This group was influenced by Gauguin's rhythmic scheme and stood out for an intense chromatism of strong expressiveness.
Another focus of symbolism was Belgium, where the work of Félicien Rops, Fernand Khnopff and William Degouve de Nuncques should be noted. The first was a painter and graphic artist of great imagination, with a predilection for themes centered on perversity and eroticism. Khnopff developed a dreamlike-allegorical theme of women transformed into angels or sphinxes, with disturbing atmospheres of great technical refinement. Degouve de Nuncques painted urban landscapes with a preference for nocturnal settings, with a dreamlike component that prefigured surrealism: his work "The Blind House" (1892, Kröller-Müller Museum, Otterlo) influenced René Magritte's "The Empire of Light" (1954, Royal Museums of Fine Arts of Belgium, Brussels).
In Central Europe, the Swiss Arnold Böcklin and Ferdinand Hodler and the Austrian Gustav Klimt stood out. Böcklin specialized in fantastic beings, such as nymphs, satyrs, tritons and naiads, in a somber and somewhat morbid style, as in his painting "Isle of the Dead" (1880, Metropolitan Museum of Art, New York), where a pale, cold and whitish light envelops the atmosphere of the island toward which Charon's boat is headed. Hodler evolved from a certain naturalism to a personal style he called "parallelism", characterized by rhythmic schemes in which line, form and color are reproduced in a repetitive way, with simplified and monumental figures. It was in his landscapes that he showed the greatest luminosity, with pure and vibrant coloring. Klimt had an academic training, which led to a personal style that synthesized impressionism, modernism and symbolism. He had a preference for mural painting, with an allegorical theme tending towards eroticism, and a decorative style populated with arabesques, butterfly wings and peacocks, and with a taste for golden color that gave his works an intense luminosity.
In Italy, it is worth mentioning Giuseppe Pellizza da Volpedo, trained in the divisionist milieu, but who evolved towards a personal style marked by an intense and vibrant light, whose starting point is his work "Lost Hopes" (1894, Ponti-Grün collection, Rome). In "The Rising Sun" (also known as "The Sun", 1903-1904, National Gallery of Modern Art, Rome) he carried out a prodigious exercise in the exaltation of light, a refulgent dawn light that peeks over a mountainous horizon and seems to burst into a myriad of rays spreading in all directions, dazzling the viewer. A symbolic reading can be made of this work, given the social and political commitment of the artist, since the rising sun was taken up by socialism as a metaphor for the new society to which this ideology aspired.
In the Scandinavian sphere, it is worth recalling the Norwegian Christian Krohg and the Danes Vilhelm Hammershøi and Jens Ferdinand Willumsen. Krohg combined natural and artificial lights, often with theatrical effects and certain unreal connotations, as in "The Sleeping Seamstress" (1885, Nasjonalgalleriet, Oslo), where the double presence of a lamp next to a window through which daylight enters produces a sensation of timelessness, of temporal indefinition. Hammershøi was a virtuoso in the handling of light, which he considered the main protagonist of his works. Most of his paintings were set in interior spaces with light filtered through doors or windows, with figures generally seen from behind. Willumsen developed a personal style based on the influence of Gauguin, with a taste for bright colors, as in "After the Storm" (1905, Nasjonalgalleriet, Oslo), a seascape with a dazzling sun that seems to explode in the sky.
Finally, it is worth mentioning a phenomenon straddling the 19th and 20th centuries that was a precedent for avant-garde art, especially in its anti-academic component: "naïf" art (from the French "naïf", "naive"), a term applied to a series of self-taught painters who developed a spontaneous style, alien to the technical and aesthetic principles of traditional painting, sometimes labeled as childish or primitive. One of its best representatives was Henri Rousseau, a customs officer by trade, who produced a personal body of work, with a poetic tone and a taste for the exotic, in which he disregarded perspective and resorted to unreal-looking lighting, without shadows or perceptible light sources, a type of image that influenced artists such as Picasso and Kandinski and movements such as metaphysical painting and surrealism.
20th Century.
The art of the 20th century underwent a profound transformation: in a more materialistic, more consumerist society, art was directed to the senses, not to the intellect. The avant-garde movements arose, which sought to integrate art into society through a greater interrelation between artist and spectator, since it was the latter who interpreted the work, and could discover meanings that the artist did not even know. Avant-gardism rejected the traditional methods of optical representation – Renaissance perspective – to vindicate the two-dimensionality of painting and the autonomous character of the image, which implied the abandonment of spatial depth and of light contrasts. Light and shadow would no longer be instruments of a technique of spatial representation, but integral parts of the image, of the conception of the work as a homogeneous whole. Other artistic media such as photography, film and video had a notable influence on the art of this century, as did, in relation to light, the installation, one of whose variants is "light art". Finally, the new interrelationship with the spectator means that the artist does not reflect what he sees, but lets the spectator see his vision of reality, which will be interpreted individually by each person.
Advances in artificial light (carbon and tungsten filaments, neon lights) led society in general to a new sensitivity to luminous impacts and, for artists in particular, to a new reflection on the technical and aesthetic properties of the new technological advances. Many artists of the new century experimented with all kinds of lights and their interrelation, such as the mixture and interweaving of natural and artificial lights, the control of the focal point, the dense atmospheres, the shaded or transparent colors and other types of sensorial experiences, already initiated by the impressionists but which in the new century acquired a category of their own.
Avant-garde.
The emergence of the avant-garde at the turn of the century brought a rapid succession of artistic movements, each with a particular technique and a particular vision of the function of light and color in painting: fauvism and expressionism were heirs of post-impressionism and treated light to the maximum of its saturation, with strong chromatic contrasts and the use of complementary colors for shadows; cubism, futurism and surrealism had in common a subjective use of color, giving primacy to the expression of the artist over the objectivity of the image.
One of the first movements of the 20th century concerned with light and, especially, color, was Fauvism (1904-1908). This style involved experimentation in the field of color, which was conceived in a subjective and personal way, applying emotional and expressive values to it, independent of nature. For these artists, colors had to generate emotions, through a subjective chromatic range and brilliant workmanship. In this movement a new conception of pictorial illumination arose, which consisted in the negation of shadows; the light comes from the colors themselves, which acquire an intense and radiant luminosity, whose contrast is achieved through the variety of pigments used.
Fauvist painters include Henri Matisse, Albert Marquet, Raoul Dufy, André Derain, Maurice de Vlaminck and Kees van Dongen. Perhaps the most gifted was Matisse, who "discovered" light in Collioure, where he understood that intense light eliminates shadows and highlights the purity of colors; from then on he used pure colors, to which he gave an intense luminosity. According to Matisse, "color contributes to expressing light, not its physical phenomenon but the only light that exists in fact, that of the artist's brain". One of his best works is "Luxury, Calm and Voluptuousness" (1904, Musée d'Orsay, Paris), a scene of bathers on the beach illuminated by intense sunlight, in a pointillist technique of juxtaposed patches of pure and complementary colors.
Related to this style was Pierre Bonnard, who had been a member of the Nabis, an intimist painter with a predilection for the female nude, as in his "Nude against the light" (1908, Royal Museums of Fine Arts of Belgium, Brussels), in which the woman's body is elaborated with light, enclosed in a space formed by the vibrant light of a window sifted by a blind.
Expressionism emerged as a reaction to impressionism: against it, the expressionists defended a more personal and intuitive art, in which the artist's inner vision – the "expression" – prevailed over the representation of reality – the "impression". In their works they reflected a personal and intimate subject matter with a taste for the fantastic, deforming reality to accentuate the expressive character of the work. Expressionism was an eclectic movement, with multiple tendencies in its midst and a diverse variety of influences, from post-impressionism and symbolism to fauvism and cubism, as well as some aniconic tendencies that would lead to abstract art (Kandinski). Expressionist light is more conceptual than sensorial; it is a light that emerges from within and expresses the artist's mentality, his consciousness, his way of seeing the world, his subjective "expression".
With precedents in the figures of Edvard Munch and James Ensor, it was formed mainly around two groups: "Die Brücke" (Ernst Ludwig Kirchner, Erich Heckel, Karl Schmidt-Rottluff, Emil Nolde) and "Der Blaue Reiter" (Vasili Kandinski, Franz Marc, August Macke, Paul Klee). Other exponents were the "Vienna Group" (Egon Schiele, Oskar Kokoschka) and the "School of Paris" (Amedeo Modigliani, Marc Chagall, Georges Rouault, Chaïm Soutine). Edvard Munch was linked in his beginnings to symbolism, but his early work already reflects a certain existential anguish that would lead him to a personal painting of strong psychological introspection, in which light is a reflection of the emptiness of existence, of the lack of communication and of the subordination of physical reality to the artist's inner vision, as can be seen in the faces of his characters, with a spectral lighting that gives them the appearance of automatons. The members of "Die Brücke" ("The Bridge") – especially Kirchner, Heckel and Schmidt-Rottluff – developed a dark, introspective and anguished subject matter, where form, color and light are subjective, resulting in tense, unsettling works that emphasize the loneliness and rootlessness of the human being. The light in these artists is not illuminating; it does not respond to physical criteria, as can be seen in Kirchner's "Erich Heckel and Otto Müller Playing Chess" (1913, Brücke Museum, Berlin), where the lamp on the table does not radiate light and constitutes a strange object, alien to the scene. "Der Blaue Reiter" ("The Blue Rider") emerged in Munich in 1911 and more than a common stylistic stamp shared a certain vision of art, in which the creative freedom of the artist and the personal and subjective expression of his works prevailed. It was a more spiritual and abstract movement, with a technical predilection for watercolor, which gave their works an intense chromatism and luminosity.
Cubism (1907-1914) was based on the deformation of reality by destroying the spatial perspective of Renaissance origin, organizing space according to a geometric grid, with simultaneous vision of objects, a range of cold and muted colors, and a new conception of the work of art, with the introduction of "collage". It was the first movement that dissociated light from reality, by eliminating the tangible focus that in all the previous history of painting illuminated the pictures, whether natural or artificial; in its place, each part of the picture, each space that has been deconstructed into geometric planes, has its own luminosity. Jean Metzinger, in On Cubism (1912), wrote that "beams of light and shadows distributed in such a way that one engenders the other plastically justify the ruptures whose orientation creates the rhythm".
The main figure of this movement was Pablo Picasso, one of the great geniuses of the 20th century, along with Georges Braque, Jean Metzinger, Albert Gleizes, Juan Gris, and Fernand Léger. Before arriving at cubism, Picasso went through the so-called blue and rose periods: in the first one, the influence of El Greco can be seen in his elongated figures of dramatic appearance, with profiles highlighted by a yellowish or greenish light and shadows of thick black brushstrokes; in the second one, he deals with kinder and more human themes, with characteristic scenes of figures immersed in empty landscapes of luminous appearance. His cubist stage is divided into two phases: in "analytical cubism" he focused on portraits and still lifes, with images broken down into planes in which light loses its modeling and volume-defining character to become a constructive element that emphasizes contrast, giving the image an iridescent appearance; in "synthetic cubism" he expanded the chromatic range and included extra-pictorial elements, such as texts and fragments of literary works. After his cubist stage, his most famous work is "Guernica", entirely elaborated in shades of gray, a night scene illuminated by the light of a ceiling bulb – shaped like a sun and an eye at the same time – and of an oil lamp in the hands of the character leaning out of the window, with a light constructed by planes that serve as counterpoints of light in the midst of darkness.
A movement derived from Cubism was Orphism, represented especially by Robert Delaunay, who experimented with light and color in his abstracting search for rhythm and movement, as in his series on the Eiffel Tower or in "Field of Mars. The Red Tower", where he decomposes light into the colors of the prism to diffuse it through the space of the painting. Delaunay studied optics and came to the conclusion that "the fragmentation of form by light creates planes of colors", so in his work he explored with intensity the rhythms of colors, a style he called "simultaneism" taking the scientific concept of simultaneous contrasts created by Chevreul. For Delaunay, "painting is, properly speaking, a luminous language", which led him in his artistic evolution towards abstraction, as in his series of "Windows, Disks and Circular and Cosmic Forms", in which he represents beams of light elaborated with bright colors in an ideal space.
Another style concerned with optical experimentation was Futurism (1909–1930), an Italian movement that exalted the values of the technical and industrial progress of the 20th century and emphasized aspects of reality such as movement, speed and simultaneity of action. Prominent among its ranks were Giacomo Balla, Gino Severini, Carlo Carrà and Umberto Boccioni. These artists were the first to treat light in an almost abstract way, as in Boccioni's paintings, which were based on pointillist technique and the optical theories of color to carry out a study of the abstract effects of light, as in his work "The City Rises" (1910-1911, Museum of Modern Art, New York). Boccioni declared in 1910 that "movement and light destroy the matter of objects" and aimed to "represent not the optical or analytical impression, but the psychic and total experience". Gino Severini evolved from a still pointillist technique towards Cubist spatial fragmentation applied to Futurist themes, as in his "Expansión de la luz" (1912, Museo Thyssen-Bornemisza, Madrid), where the fragmentation of color planes contributes to the construction of plastic rhythms, which enhances the sensation of movement and speed. Carlo Carrà elaborated works of pointillist technique in which he experimented with light and movement, as in "La salida del teatro" (1909, private collection), where he shows a series of pedestrians barely sketched in their elemental forms and elaborated with lines of light and color, while in the street artificial lights gleam, whose flashes seem to cut the air.
Balla synthesized neo-Impressionist chromaticism, pointillist technique and cubist structural analysis in his works, decomposing light to achieve his desired effects of movement. In "La jornada del operario" (1904, private collection), he divided the work into three scenes separated by frames, two on the left and one on the right of double size. They represent dawn, noon and twilight, in which he depicts various phases of the construction of a building, recording a day's work; the two parts on the left are actually a single image separated by the frame, but with a different treatment of light for the time of day. In "Arc Lamp" (1911-1912, Museum of Modern Art, New York) he made an analytical study of the patterns and colors of a beam of light, an artificial light in conflict with moonlight, in a symbolism in which the electric light represents the energy of youth as opposed to the lunar light of classicism and romanticism. In this work the light seems to be observed under a microscope: from the incandescent center of the lamp sprouts a series of colored arrows that gradually lose chromatism as they move away from the bright focus until they merge with the darkness. Balla himself stated that "the splendor of light is obtained by bringing pure colors closer together. This painting is not only original as a work of art, but also scientific, since I sought to represent light by separating the colors that compose it".
Outside Italy, Futurism influenced various parallel movements such as English Vorticism, whose best exponent was Christopher Richard Wynne Nevinson, a painter who showed a sensitivity for luminous effects reminiscent of Severini, as seen in his "Starry Shell" (1916, Tate Gallery, London); or Russian Rayonism, represented by Mikhail Larionov and Natalia Goncharova, a style that combined the interest in light beams typical of analytical cubism with the radiant dynamism of futurism, although it later evolved towards abstraction.
The so-called metaphysical painting, considered a forerunner of surrealism, also emerged in Italy, represented mainly by Giorgio de Chirico and Carlo Carrà. Initially influenced by symbolism, De Chirico was the creator of a style opposed to futurism, more serene and static, with certain reminiscences of classical Greco-Roman art and Renaissance linear perspective. In his works he created a world of intellectual placidity, a dreamlike space where reality is transformed for the sake of a transcendent evocation, with spaces of wide perspectives populated by figures and isolated objects in which a diaphanous and uniform illumination casts elongated shadows of unreal aspect, creating an overwhelming sensation of loneliness. In his urban spaces, empty and geometrized, populated by faceless mannequins, the lights and shadows create strong contrasts that help to enhance the dreamlike factor of the image. Another artist of this movement is Giorgio Morandi, author of still lifes in which chiaroscuro has a clear protagonism, in compositions where light and shadow play a primordial role to build an unreal and dreamlike atmosphere.
With abstract art (1910-1932) the artist no longer tries to reflect reality, but his inner world, to express his feelings. Art loses all real aspect and imitation of nature to focus on the simple expressiveness of the artist, in shapes and colors that lack any referential component. Initiated by Vasili Kandinski, it was developed by the neoplasticist movement ("De Stijl"), with figures such as Piet Mondrian and Theo Van Doesburg, as well as Russian Suprematism (Kazimir Malevich). The presence of light in abstract art is inherent to its evolution: although the movement dispenses with subject matter, light remains part of the work, since the human being cannot detach himself completely from the reality that shapes his existence. Abstraction was reached along two paths: one of a psychic-emotive character originated by symbolism and expressionism, and the other objective-optical, derived from fauvism and cubism. Light played a special role in the second, since from the cubist light beams it was a logical step to isolate them from the reality that originated them and to express them in abstract forms.
In abstract art, light loses the prominence it has in an image based on natural reality, but its presence is still perceived in the various tonal gradations and chiaroscuro games that appear in numerous works by abstract artists such as Mark Rothko, whose images of intense chromaticism have a luminosity that seems to radiate from the color of the work itself. The pioneer of abstraction, Vasili Kandinski, received the inspiration for this type of work when he woke up one day and saw one of his paintings in which the sunlight was shining brightly, diluting the forms and accentuating the chromaticism, which showed an unprecedented brightness; he then began a process of experimentation to find the perfect chromatic harmony, giving total freedom to color without any formal or thematic subordination. Kandinski's research continued with Russian suprematism, especially with Kazimir Malevich, an artist with post-impressionist and fauvist roots who later adopted cubism, leading to a geometric abstraction in which color acquires special relevance, as shown in his "Black on Black" (1913) and "White on White" (1919).
In the interwar period, the New Objectivity ("Neue Sachlichkeit") movement emerged in Germany, which returned to realistic figuration and the objective representation of the surrounding reality, with a marked social and protest component. Although its members advocated realism, they did not renounce the technical and aesthetic achievements of avant-garde art, such as Fauvist and expressionist coloring, Futurist "simultaneous vision" or the application of photomontage to painting. In this movement, the urban landscape, populated with artificial lights, played a special role. Among its main representatives were Otto Dix, George Grosz, and Max Beckmann.
Surrealism (1924-1955) placed special emphasis on imagination, fantasy and the world of dreams, with a strong influence of psychoanalysis. Surrealist painting moved between figuration (Salvador Dalí, Paul Delvaux, René Magritte, Max Ernst) and abstraction (Joan Miró, André Masson, Yves Tanguy, Paul Klee). René Magritte treated light as a special object of research, as is evident in his work "The Empire of Lights" (1954, Royal Museums of Fine Arts of Belgium, Brussels), where he presents an urban landscape with a house surrounded by trees in the lower part of the painting, immersed in a nocturnal darkness, and a daytime sky furrowed with clouds in the upper part; in front of the house there is a street lamp whose light, together with that of two windows on the upper floor of the house, is reflected in a pond located at the foot of the house. The contrasting day and night represent waking and sleeping, two worlds that never come to coexist.
Dalí evolved from a formative phase in which he tried different styles (impressionism, pointillism, futurism, cubism, fauvism) to a figurative surrealism strongly influenced by Freudian psychology. In his work he showed a special interest in light, a Mediterranean light that in many of his works bathes the scene with intensity: "The Bay of Cadaqués" (1921, private collection), "The Phantom Chariot" (1933, Nahmad collection, Geneva), "Solar Table" (1936, Boijmans Van Beuningen Museum, Rotterdam), "Composition" (1942, Tel Aviv Museum of Art). It is the light of his native Empordà, a region marked by the tramuntana wind, which, according to Josep Pla, generates a "static, clear, shining, sharp, glittering" light. Dalí's treatment of light is generally surprising, with singular fantastic effects, contrasts of light and shadow, backlighting and countershadows, always in continuous research of new and surprising effects. Towards 1948 he abandoned avant-gardism and returned to classicist painting, although interpreted in a personal and subjective way, in which he continues his incessant search for new pictorial effects, as in his "atomic stage" in which he seeks to capture reality through the principles of quantum physics. Among his last works stand out for their luminosity: "Christ of Saint John of the Cross" (1951, Kelvingrove Museum, Glasgow), "The Last Supper" (1955, National Gallery of Art, Washington D. C.), "The Perpignan Station" (1965, Museum Ludwig, Cologne) and "Cosmic Athlete" (1968, Zarzuela Palace, Madrid).
Joan Miró reflected in his works a light of magical and at the same time telluric aspect, rooted in the landscape of the countryside of Tarragona that was so dear to him, as is evident in "La masía" (1921-1922, National Gallery of Art, Washington D. C.), illuminated by a twilight that bathes the objects in contrast with the incipient darkness of the sky. In his work he uses flat and dense colors, in preferably nocturnal environments with special prominence of empty space, while objects and figures seem bathed in an unreal light, a light that seems to come from the stars, for which he felt a special devotion.
In the United States, between the 1920s and 1930s, several figurative movements emerged, especially interested in everyday reality and life in cities, always associated with modern life and technological advances, including artificial lights in streets and avenues as well as commercial and indoor lights. The first of these movements was the Ashcan School, whose leader was Robert Henri, and where George Wesley Bellows and John French Sloan also stood out. In opposition to American Impressionism, these artists developed a style of cold tones and dark palette, with a theme centered on marginalization and the world of nightlife.
This school was followed by the so-called American realism or "American Scene", whose main representative was Edward Hopper, a painter concerned with the expressive power of light, in urban images of anonymous and lonely characters framed in lights and deep shadows, with a palette of cold colors influenced by the luminosity of Vermeer. Hopper took from black and white cinema the contrast between light and shadow, which would be one of the keys to his work. He had a special predilection for the light of Cape Cod (Massachusetts), his summer resort, as can be seen in "Second Story Sunlight" (1960, Whitney Museum of American Art, New York). His scenes are notable for their unusual perspectives, strong chromaticism and contrasts of light, in which metallic and electrifying glows stand out. In "New York Movie" (1939, Museum of Modern Art, New York) he showed the interior of a cinema vaguely illuminated by – as he himself expressed in his notebook – "four sources of light, with the brightest point in the girl's hair and in the flash of the handrail". On one occasion, Hopper went so far as to state that the purpose of his painting was none other than to "paint sunlight on the side wall of a house." One critic defined the light in Hopper's mysterious paintings as a light that "illuminates but never warms," a light at the service of his vision of the desolate American urban landscape.
Latest trends.
Since the Second World War, art has undergone a vertiginous evolutionary dynamic, with styles and movements following each other more and more rapidly in time. The modern project that originated with the historical avant-gardes reached its culmination with various anti-material styles that emphasized the intellectual origin of art over its material realization, such as action art and conceptual art. Once this level of analytical prospection of art was reached, the inverse effect was produced – as is usual in the history of art, where different styles confront and oppose each other, the rigor of some succeeding the excess of others, and vice versa – and a return was made to the classical forms of art, accepting its material and aesthetic component, and renouncing its revolutionary and society-transforming character. Thus postmodern art emerged, where the artist moves freely between different techniques and styles, without any protest intent, and returns to artisanal work as the essence of the artist.
The first movements after the war were abstract, such as American abstract expressionism and European informalism (1945-1960), a set of trends based on the expressiveness of the artist, who renounces any rational aspect of art (structure, composition, preconceived application of color). It is an eminently abstract art, where the material support of the work becomes relevant, which assumes the leading role over any theme or composition. Abstract expressionism – also called "action painting" – was characterized by the use of the "dripping" technique, the dripping of paint on the canvas, on which the artist intervened with various tools or with his own body. Among its members, Jackson Pollock and Mark Rothko stand out. In addition to pigments, Pollock used glitter and aluminum enamel, which stands out for its brightness, giving his works a metallic light and creating a kind of chiaroscuro. For his part, Rothko worked in oil, with overlapping layers of very fluid paint, which created glazes and transparencies. He was especially interested in color, which he combined in an unprecedented way, but with a great sense of balance and harmony, and used white as a base to create luminosity. European informalism includes various currents such as tachism, art brut and matter painting. Georges Mathieu, Hans Hartung, Jean Fautrier, Jean Dubuffet, Lucio Fontana and Antoni Tàpies stand out. The latter developed a personal and innovative style, with a mixed technique of crushed marble powder with pigments, which he applied on the canvas to later carry out various interventions by means of "grattage". He usually used a dark, almost "dirty" coloring, but in some of his works (such as "Zoom", 1946) he added Spanish white (whiting), which gave them a great luminosity.
Among the last movements especially concerned with light and color was "op-art" (optical art, also called kinetic or kinetic-luminescent), a style that emphasized the visual aspect of art, especially optical effects, which were produced either by optical illusions (ambiguous figures, persistent images, "moiré" effect), or by movement or play of light. Victor Vasarely, Jesús Rafael Soto and Yaacov Agam stood out. The technique of these artists is mixed, transcending canvas or pigment to incorporate metallic pieces, plastics and all kinds of materials; in fact, more than the material substrate of the work, the artistic matter is light, space and movement. Vasarely had a very precise and elaborate way of working, sometimes using photographs that he projected onto the canvas by means of slides, which he called "photographisms". In some works (such as "Eridan", 1956) he investigated with the contrasts between light and shadow, reaching high values of light achieved with white and yellow. His "Cappella" series (1964) focused on the opposition between light and dark combined with shapes. The "Vega" series (1967) was made with aluminum paint and gold and silver glitter, which reverberated the light. Soto carried out a type of serial painting influenced by dodecaphonism, with primary colors that stand out for their transparency and provoke a strong sensation of movement. Agam, on the other hand, was particularly interested in chromatic combinations, working with 150 different colors, in painting or sculpture-painting.
Among the figurative trends is "pop art" (1955-1970), which emerged in the United States as a movement to reject abstract expressionism. It includes a series of authors who returned to figuration, with a marked component of popular inspiration, with images inspired by the world of advertising, photography, comics, and mass media. Roy Lichtenstein, Tom Wesselmann, James Rosenquist, and Andy Warhol stood out. Lichtenstein was particularly inspired by comics, with paintings that look like vignettes, sometimes with the typical graininess of printed comics. He used flat inks, without mixtures, in pure colors. He also produced landscapes, with light colors and great luminosity. Wesselmann specialized in nudes, generally in bathrooms, with a cold and aseptic appearance. He also used pure colors, without tonal gradations, with sharp contrasts. Rosenquist had a more surrealist vein, with a preference for consumerist and advertising themes. Warhol was the most mediatic and commercial artist of this group. He used to work in silkscreen, in series ranging from portraits of famous people such as Elvis Presley, Marilyn Monroe or Mao Tse-tung to all kinds of objects, such as his series of Campbell's soup cans, made with a garish and strident colorism and a pure, impersonal technique.
Abstraction resurfaced between the 1960s and 1980s with Post-painterly abstraction and Minimalism. Post-painterly abstraction (also called "New Abstraction") focused on geometrism, with an austere, cold and impersonal language, due to an anti-anthropocentric tendency that could be glimpsed in these years in art and culture in general, also present in "pop-art", a style with which it coexisted. Post-painterly abstraction thus focuses on form and color, inviting no iconographic reading and interested only in visual impact, using striking colors, sometimes of a metallic or fluorescent nature. Barnett Newman, Frank Stella, Ellsworth Kelly and Kenneth Noland stand out. Minimalism was a trend that involved a process of dematerialization that would lead to conceptual art; its works are of marked simplicity, reduced to a minimal motif, pared back to the author's initial idea. Robert Mangold and Robert Ryman stand out; they shared a preference for monochrome, with a refined technique in which the brushstroke is not noticeable, and the use of light tones, preferably pastel colors.
Figuration returned again with hyperrealism – which emerged around 1965 – a trend characterized by its superlative and exaggerated vision of reality, which is captured with great accuracy in all its details, with an almost photographic aspect, in which Chuck Close, Richard Estes, Don Eddy, John Salt, and Ralph Goings stand out. These artists are concerned, among other things, with details such as glitter and reflections in cars and shop windows, as well as light effects, especially artificial city lights, in urban views with neon lights and the like. Linked to this movement is the Spaniard Antonio López García, author of academic works but where the most meticulous description of reality is combined with a vague unreal aspect close to magical realism. His urban landscapes of wide atmospheres stand out ("Madrid sur", 1965–1985; "Madrid desde Torres Blancas", 1976–1982), as well as images with an almost photographic aspect such as "Mujer en la bañera" (1968), in which a woman takes a bath in an atmosphere of electric light reflected on the bathroom tiles, creating an intense and vibrant composition.
Another movement especially concerned with the effects of light has been neo-luminism, an American movement inspired by American luminism and the Hudson River School, from which they adopt its majestic skies and calm water marinas, as well as the atmospheric effects of light rendered in subtle gradations. Its main representatives are: James Doolin, April Gornik, Norman Lundin, Scott Cameron, Steven DaLuz and Pauline Ziegen.
Since 1975, postmodern art has predominated in the international art scene: it emerged in opposition to so-called modern art and is the art of postmodernity, a socio-cultural theory that postulates the current validity of a historical period that would have surpassed the modern project, that is, the cultural, political and economic roots of the Contemporary Age, marked culturally by the Enlightenment, politically by the French Revolution and economically by the Industrial Revolution. These artists assume the failure of the avant-garde movements as the failure of the modern project: the avant-garde intended to eliminate the distance between art and life, to universalize art; the postmodern artist, on the other hand, is self-referential – art speaks of art – and does not intend to do social work. Postmodern painting returns to the traditional techniques and themes of art, although with a certain mixing of styles, taking advantage of the resources of all the preceding artistic periods and intermingling and deconstructing them, in a procedure that has been baptized as "appropriationism" or artistic "nomadism". Individual artists such as Jeff Koons, David Salle, Jean-Michel Basquiat, Keith Haring, Julian Schnabel, Eric Fischl or Miquel Barceló stand out, as well as various movements such as the Italian trans-avant-garde (Sandro Chia, Francesco Clemente, Enzo Cucchi, Nicola De Maria, Mimmo Paladino), German Neo-Expressionism (Anselm Kiefer, Georg Baselitz, Jörg Immendorff, Markus Lüpertz, Sigmar Polke), Neo-Mannerism, free figuration, among others.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "(n = \\frac{c}{v})"
}
]
| https://en.wikipedia.org/wiki?curid=70548926 |
70550690 | Joshua 5 | Book of Joshua, chapter 5
Joshua 5 is the fifth chapter of the Book of Joshua in the Hebrew Bible or in the Old Testament of the Christian Bible. According to Jewish tradition, the book was attributed to Joshua, with additions by the high priests Eleazar and Phinehas, but modern scholars view it as part of the Deuteronomistic History, which spans the books of Deuteronomy to 2 Kings, attributed to nationalistic and devotedly Yahwistic writers during the time of the reformer Judean king Josiah in 7th century BCE. This chapter focuses on the circumcision and Passover of the Israelites after crossing the Jordan River under the leadership of Joshua, a part of a section comprising Joshua 1:1–5:12 about the entry to the land of Canaan, and the meeting of Joshua with the Commander of the Lord's army near Jericho.
Text.
This chapter was originally written in the Hebrew language. It is divided into 15 verses.
Textual witnesses.
Some early manuscripts containing the text of this chapter in Hebrew are of the Masoretic Text tradition, which includes the Codex Cairensis (895), Aleppo Codex (10th century), and Codex Leningradensis (1008). Fragments containing parts of this chapter in Hebrew were found among the Dead Sea Scrolls including 4Q47 (4QJosha; 200–100 BCE) with extant verses 1?, 2–7.
Extant ancient manuscripts of a translation into Koine Greek known as the Septuagint (originally made in the last few centuries BCE) include Codex Vaticanus (B; formula_0B; 4th century) and Codex Alexandrinus (A; formula_0A; 5th century). Fragments of the Septuagint Greek text containing this chapter are found in manuscripts such as Washington Manuscript I (5th century CE), and a reduced version of the Septuagint text is found in the illustrated Joshua Roll.
Analysis.
The narrative of Israelites entering the land of Canaan comprises verses 1:1 to 5:12 of the Book of Joshua and has the following outline:
A. Preparations for Entering the Land (1:1–18)
B. Rahab and the Spies in Jericho (2:1–24)
C. Crossing the Jordan (3:1–4:24)
D. Circumcision and Passover (5:1–12)
1. Canaanite Fear (5:1)
2. Circumcision (5:2–9)
3. Passover (5:10–12)
The second section of the book contains the narratives of Israelites conquering the land of Canaan comprising verses 5:13 to 12:24 and has the following outline:
A. Jericho (5:13–6:27)
1. Joshua and the Commander of the Lord's Army (5:13–15)
2. Instructions for Capturing the City (6:1–5)
3. Obeying the Instructions (6:6–21)
4. The Deliverance of Rahab's Family and the City's Destruction (6:22–25)
5. Curse and Renown (6:26–27)
B. Achan and Ai (7:1–8:29)
C. Renewal at Mount Ebal (8:30–35)
D. The Gibeonite Deception (9:1–27)
E. The Campaign in the South (10:1–43)
F. The Campaign in the North and Summary List of Kings (11:1–12:24)
Circumcision of the new generation (5:1–9).
The main subject of chapter 5 is "beginning in the new land", as the people of Israel finally entered the promised land "flowing with milk and honey" (verse 6): the male population undergoes circumcision, then comes the timely and correct celebration of the Passover (cf. Exodus 12:43–49), followed by the cessation of the Manna (cf. Exodus 16:35). Circumcision was widespread among ancient Semites, but for the people of Israel it marked the covenantal relationship with God, traced back to Abraham, with a statement that 'no uncircumcised male can be regarded as an Israelite' (Genesis 17:9–14). It is also a condition of ritual purity for celebrating Passover in the new land, in contrast to God's decree banning the circumcised generation that left Egypt from setting foot in the land of Canaan (verses 4, 6; cf. Numbers 14:22–23; Deuteronomy 1:34–40). The term 'a second time' shows that Joshua did not initiate the practice in Israel. With the circumcision, the "disgrace of Egypt" has been 'rolled away' (the verb in Hebrew resembling the name "Gilgal").
"And it came to pass, when all the kings of the Amorites, which were on the side of Jordan westward, and all the kings of the Canaanites, which were by the sea, heard that the Lord had dried up the waters of Jordan from before the children of Israel, until we were passed over, that their heart melted, neither was there spirit in them any more, because of the children of Israel."
Verse 1.
The mention of 'Amorites' and 'Canaanites' for the inhabitants of the land of Canaan follows Deuteronomy 1–3, in which the Amorites are understood as the inhabitants of mountainous parts (Deuteronomy 1:7, 19, 20; Joshua 10:6), whereas the Canaanites are more to the west, toward the Mediterranean Sea, so, viewed geographically from the position of the Israelites at this time, the Amorites are mentioned before the Canaanites. Earlier it was recorded how the Israelites' hearts 'melted' because of the Amorites and the sons of Anakim (Deuteronomy 1:27–28), and how they then rashly took on the enemy unprepared (Deuteronomy 1:41–45); now it is the Amorites and the Canaanites who tremble before the approach of Israel and YHWH.
Verse 5:1 shares the same language pattern as verses 9:1; 10:1 and 11:1, each of which introduces a new part of narrative.
In the Hebrew Masoretic Text, this verse is marked as an 'unconnected unit', bracketed with a "parashah setumah" ("closed portion marking") before and after.
First Passover in the land of Canaan (5:10–12).
The first Passover in the land of Canaan was held in Gilgal at the correct date as commanded in the Book of Exodus, although here the rituals are not recorded in detail (cf. the Feast of Unleavened Bread that followed Passover for seven days, Leviticus 23:5–6). Rather, this Passover is associated with the ceasing of the Manna (cf. Exodus 16) and the eating of the produce of the land. The 'unleavened cakes' recall the 'unleavened bread' which had been the food of the hasty flight from Egypt (Exodus 12:15-20; Deuteronomy 16:3), and eating them along with 'parched grain' is consistent with a people not yet settled, but one that has already begun to enjoy the legitimate possession of the land (Deuteronomy 6:10-11).
"And the children of Israel encamped in Gilgal, and kept the passover on the fourteenth day of the month at even in the plains of Jericho."
Commander of the Lord's Army (5:13–15).
The narrative of Joshua's encounter with the 'commander of the army of the LORD' close to Jericho marks the beginning of the war of conquest. Joshua saw and presumed that this man was not an Israelite (hence the question "Are you for us or for our enemies?"). This commander indeed appears to be an 'angel (or messenger) of the LORD', who represents the presence of YHWH himself (cf. Judges 6:14; 13:20–22), sometimes in military function (Numbers 22:23; 2 Samuel 24:16–17; 2 Kings 19:35) or at other times in commissioning, as with Gideon (Judges 6:11–12); both are present here (cf. Moses' encounter with the angel of YHWH in the "burning bush"; Exodus 3:2, 4–6). Joshua evidently realized the angel's military role (verse 13) and the representation of God when he bowed down to worship this figure, experiencing a direct commissioning from God, like that of Moses, at the beginning of the real test of his leadership.
"And he said, “No; but I am the commander of the army of the Lord. Now I have come.”"
"And Joshua fell on his face to the earth and worshiped and said to him, “What does my lord say to his servant?”"
Verse 14.
The terse response "No" from the commander makes clear that Israel needs to join God's battle rather than God joining theirs, and indicates that it is possible for Israel in some way not to join God in the battles.
Notes.
<templatestyles src="Reflist/styles.css" />
References.
<templatestyles src="Reflist/styles.css" />
Sources.
<templatestyles src="Refbegin/styles.css" /> | [
{
"math_id": 0,
"text": " \\mathfrak{G}"
}
]
| https://en.wikipedia.org/wiki?curid=70550690 |
705600 | Cesàro summation | Modified summation method applicable to some divergent series
In mathematical analysis, Cesàro summation (also known as the Cesàro mean or Cesàro limit) assigns values to some infinite sums that are not necessarily convergent in the usual sense. The Cesàro sum is defined as the limit, as "n" tends to infinity, of the sequence of arithmetic means of the first "n" partial sums of the series.
This special case of a matrix summability method is named for the Italian analyst Ernesto Cesàro (1859–1906).
The term "summation" can be misleading, as some statements and proofs regarding Cesàro summation can be said to implicate the Eilenberg–Mazur swindle. For example, it is commonly applied to Grandi's series with the conclusion that the "sum" of that series is 1/2.
Definition.
Let formula_0 be a sequence, and let
formula_1
be its kth partial sum.
The sequence ("a""n") is called Cesàro summable, with Cesàro sum "A" ∈ formula_2, if, as n tends to infinity, the arithmetic mean of its first "n" partial sums "s"1, "s"2, ..., "s""n" tends to A:
formula_3
The value of the resulting limit is called the Cesàro sum of the series formula_4 If this series is convergent, then it is Cesàro summable and its Cesàro sum is the usual sum.
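The definition lends itself to a direct numerical check. The sketch below is a minimal illustration in Python (the function name and the test series are illustrative choices, not part of the article): it accumulates the partial sums and their running arithmetic means.

```python
def cesaro_means(terms):
    """Return the sequence t_n of arithmetic means of the first n partial sums."""
    means = []
    partial_sum = 0.0      # s_k = a_1 + ... + a_k
    running_total = 0.0    # s_1 + ... + s_k
    for k, a in enumerate(terms, start=1):
        partial_sum += a
        running_total += partial_sum
        means.append(running_total / k)
    return means

# Grandi's series 1 - 1 + 1 - 1 + ...: the means tend to 1/2
print(cesaro_means([(-1) ** n for n in range(1000)])[-1])    # ~0.5
# A convergent series keeps its usual sum (1/2 + 1/4 + ... = 1), though the
# Cesàro means approach it more slowly than the partial sums do.
print(cesaro_means([0.5 ** n for n in range(1, 40)])[-1])    # ~0.97, creeping toward 1
```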
Examples.
First example.
Let "a""n"
(−1)"n" for "n" ≥ 0. That is, formula_5 is the sequence
formula_6
Let G denote the series
formula_7
The series G is known as Grandi's series.
Let formula_8 denote the sequence of partial sums of G:
formula_9
This sequence of partial sums does not converge, so the series G is divergent. However, G is Cesàro summable. Let formula_10 be the sequence of arithmetic means of the first n partial sums:
formula_11
Then
formula_12
and therefore, the Cesàro sum of the series G is 1/2.
Second example.
As another example, let "a""n" = "n" for "n" ≥ 1. That is, formula_0 is the sequence
formula_13
Let G now denote the series
formula_14
Then the sequence of partial sums formula_15 is
formula_16
Since the sequence of partial sums grows without bound, the series G diverges to infinity. The sequence ("t""n") of means of partial sums of G is
formula_17
This sequence diverges to infinity as well, so G is not Cesàro summable. In fact, for any series whose sequence of partial sums diverges to (positive or negative) infinity, the Cesàro method leads to a sequence of means that diverges likewise, and hence such a series is not Cesàro summable.
(C, "α") summation.
In 1890, Ernesto Cesàro stated a broader family of summation methods which have since been called (C, "α") for non-negative integers α. The (C, 0) method is just ordinary summation, and (C, 1) is Cesàro summation as described above.
The higher-order methods can be described as follows: given a series Σ"a""n", define the quantities
formula_18
(where the upper indices do not denote exponents) and define "E""n""α" to be "A""n""α" for the series 1 + 0 + 0 + 0 + ... Then the (C, "α") sum of Σ"a""n" is denoted by (C, "α")-Σ"a""n" and has the value
formula_19
if it exists. This description represents an α-times iterated application of the initial summation method and can be restated as
formula_20
Even more generally, for "α" ∈ formula_2 \ formula_21−, let "A""n""α" be implicitly given by the coefficients of the series
formula_22
and "E""n""α" as above. In particular, "E""n""α" are the binomial coefficients of power −1 − "α". Then the (C, "α") sum of Σ"a""n" is defined as above.
If Σ"a""n" has a (C, "α") sum, then it also has a (C, "β") sum for every "β" > "α", and the sums agree; furthermore we have "an"
"o"("nα") if "α" > −1 (see little-o notation).
Cesàro summability of an integral.
Let "α" ≥ 0. The integral formula_23 is (C, "α") summable if
formula_24
exists and is finite. The value of this limit, should it exist, is the (C, "α") sum of the integral. Analogously to the case of the sum of a series, if "α" = 0, the result is convergence of the improper integral. In the case "α" = 1, (C, 1) convergence is equivalent to the existence of the limit
formula_25
which is the limit of means of the partial integrals.
As is the case with series, if an integral is (C, "α") summable for some value of "α" ≥ 0, then it is also (C, "β") summable for all "β" > "α", and the value of the resulting limit is the same.
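As a numerical illustration of the weighted integrand above with "α" = 1, the following Python sketch (trapezoidal quadrature; the step count and the choice of sin "x" as integrand are illustrative) evaluates the (C, 1) value of the integral of sin "x" over [0, ∞), whose ordinary partial integrals 1 − cos "λ" oscillate and therefore do not converge.

```python
import math

def c1_partial_integral(f, lam, steps=200_000):
    """(C, 1) weighted partial integral of f: integral over [0, lam] of (1 - x/lam) f(x) dx,
    computed with the trapezoidal rule."""
    h = lam / steps
    total = 0.5 * (f(0.0) + 0.0)   # weight (1 - x/lam) equals 1 at x = 0 and 0 at x = lam
    for i in range(1, steps):
        x = i * h
        total += (1.0 - x / lam) * f(x)
    return total * h

# The ordinary partial integrals of sin are 1 - cos(lam), which oscillate, so the
# improper integral diverges; the (C, 1) values settle near 1 as lam grows.
for lam in (50.0, 200.0, 800.0):
    print(lam, c1_partial_integral(math.sin, lam))
```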
See also.
<templatestyles src="Div col/styles.css"/>
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "(a_n)_{n=1}^\\infty"
},
{
"math_id": 1,
"text": "s_k = a_1 + \\cdots + a_k= \\sum_{n=1}^k a_n"
},
{
"math_id": 2,
"text": "\\mathbb{R}"
},
{
"math_id": 3,
"text": "\\lim_{n\\to\\infty} \\frac{1}{n}\\sum_{k=1}^n s_k = A."
},
{
"math_id": 4,
"text": "\\textstyle\\sum_{n=1}^\\infty a_n."
},
{
"math_id": 5,
"text": "(a_n)_{n=0}^\\infty"
},
{
"math_id": 6,
"text": "(1, -1, 1, -1, \\ldots)."
},
{
"math_id": 7,
"text": "G = \\sum_{n=0}^\\infty a_n = 1-1+1-1+1-\\cdots "
},
{
"math_id": 8,
"text": "(s_k)_{k=0}^\\infty"
},
{
"math_id": 9,
"text": "\\begin{align}\n s_k &= \\sum_{n=0}^k a_n \\\\\n (s_k) &= (1, 0, 1, 0, \\ldots).\n \\end{align}"
},
{
"math_id": 10,
"text": "(t_n)_{n=1}^\\infty"
},
{
"math_id": 11,
"text": "\\begin{align}\n t_n &= \\frac{1}{n}\\sum_{k=0}^{n-1} s_k \\\\\n (t_n) &= \\left(\\frac{1}{1}, \\frac{1}{2}, \\frac{2}{3}, \\frac{2}{4}, \\frac{3}{5}, \\frac{3}{6}, \\frac{4}{7}, \\frac{4}{8}, \\ldots\\right).\n \\end{align}"
},
{
"math_id": 12,
"text": "\\lim_{n\\to\\infty} t_n = 1/2,"
},
{
"math_id": 13,
"text": "(1, 2, 3, 4, \\ldots)."
},
{
"math_id": 14,
"text": "G = \\sum_{n=1}^\\infty a_n = 1+2+3+4+\\cdots "
},
{
"math_id": 15,
"text": "(s_k)_{k=1}^\\infty"
},
{
"math_id": 16,
"text": "(1, 3, 6, 10, \\ldots)."
},
{
"math_id": 17,
"text": "\\left(\\frac{1}{1}, \\frac{4}{2}, \\frac{10}{3}, \\frac{20}{4}, \\ldots\\right)."
},
{
"math_id": 18,
"text": "\\begin{align} A_n^{-1}&=a_n \\\\ A_n^\\alpha&=\\sum_{k=0}^n A_k^{\\alpha-1} \\end{align}"
},
{
"math_id": 19,
"text": "(\\mathrm{C},\\alpha)\\text{-}\\sum_{j=0}^\\infty a_j=\\lim_{n\\to\\infty}\\frac{A_n^\\alpha}{E_n^\\alpha}"
},
{
"math_id": 20,
"text": "(\\mathrm{C},\\alpha)\\text{-}\\sum_{j=0}^\\infty a_j = \\lim_{n\\to\\infty} \\sum_{j=0}^n \\frac{\\binom{n}{j}}{\\binom{n+\\alpha}{j}} a_j."
},
{
"math_id": 21,
"text": "\\mathbb{Z}"
},
{
"math_id": 22,
"text": "\\sum_{n=0}^\\infty A_n^\\alpha x^n=\\frac{\\displaystyle{\\sum_{n=0}^\\infty a_nx^n}}{(1-x)^{1+\\alpha}},"
},
{
"math_id": 23,
"text": "\\textstyle\\int_0^\\infty f(x)\\,dx"
},
{
"math_id": 24,
"text": "\\lim_{\\lambda\\to\\infty}\\int_0^\\lambda\\left(1-\\frac{x}{\\lambda}\\right)^\\alpha f(x)\\, dx "
},
{
"math_id": 25,
"text": "\\lim_{\\lambda\\to \\infty}\\frac{1}{\\lambda}\\int_0^\\lambda \\int_0^x f(y)\\, dy\\,dx"
}
]
| https://en.wikipedia.org/wiki?curid=705600 |
70561275 | Joshua 7 | Book of Joshua, chapter 7
Joshua 7 is the seventh chapter of the Book of Joshua in the Hebrew Bible or in the Old Testament of the Christian Bible. According to Jewish tradition the book was attributed to Joshua, with additions by the high priests Eleazar and Phinehas, but modern scholars view it as part of the Deuteronomistic History, which spans the books of Deuteronomy to 2 Kings, attributed to nationalistic and devotedly Yahwistic writers during the time of the reformer Judean king Josiah in 7th century BCE. This chapter focuses on the first battle against Ai under the leadership of Joshua and Achan's sin, a part of a section comprising Joshua 5:13–12:24 about the conquest of Canaan.
Text.
This chapter was originally written in the Hebrew language. It is divided into 26 verses.
Textual witnesses.
Some early manuscripts containing the text of this chapter in Hebrew are of the Masoretic Text tradition, which includes the Codex Cairensis (895), Aleppo Codex (10th century), and Codex Leningradensis (1008). Fragments containing parts of this chapter in Hebrew were found among the Dead Sea Scrolls including 4Q47 (4QJosha; 200–100 BCE) with extant verses 12–17.
Extant ancient manuscripts of a translation into Koine Greek known as the Septuagint (originally made in the last few centuries BCE) include Codex Vaticanus (B; formula_0B; 4th century) and Codex Alexandrinus (A; formula_0A; 5th century). Fragments of the Septuagint Greek text containing this chapter are found in manuscripts such as Washington Manuscript I (5th century CE), and a reduced version of the Septuagint text is found in the illustrated Joshua Roll.
Analysis.
The narrative of Israelites conquering the land of Canaan comprises verses 5:13 to 12:24 of the Book of Joshua and has the following outline:
A. Jericho (5:13–6:27)
B. Achan and Ai (7:1–8:29)
1. The Sin of Achan (7:1-26)
a. Narrative Introduction (7:1)
b. Defeat at Ai (7:2-5)
c. Joshua's Prayer (7:6-9)
d. Process for Identifying the Guilty (7:10-15)
e. The Capture of Achan (7:16-21)
f. Execution of Achan and His Family (7:22-26)
2. The Capture of Ai (8:1-29)
a. Narrative Introduction (8:1-2)
b. God's Plan for Capturing the City (8:3-9)
c. Implementation of God's Plan (8:10-13)
d. The Successful Ambush (8:14-23)
e. Destruction of Ai (8:24-29)
C. Renewal at Mount Ebal (8:30–35)
D. The Gibeonite Deception (9:1–27)
E. The Campaign in the South (10:1–43)
F. The Campaign in the North and Summary List of Kings (11:1–12:24)
The narrative of Joshua 7–8 combines the story of Achan's offence against the 'devoted things', and the battle report concerning Ai, as the two themes are linked.
Chapter 7 has the following chiastic structure:
A. YHWH's wrath: burning (7:1)
B. Disaster for Israel: defeat (7:2–5)
C. Israel's leaders before YHWH: perplexity (7:6–9)
D. Divine revelation of the problem (7:10–12a)
E. Midpoint (7:12b)
D'. Divine revelation of the solution (7:13–15)
C'. Israel before YHWH: clarity and exposure (7:16–23)
B'. Disaster for Achan: execution (7:24–26a)
A'. YHWH's wrath: turned away (7:26b)
Defeat at Ai (7:1–15).
After the triumphant conquest of Jericho, it emerges that the "herem" ("ban") on Jericho was not completely executed by the Israelites (7:1), indicated by the phrase 'break faith', meaning 'rebellion against God' that brings severe punishment (cf. 1 Chronicles 10:13–14), and the whole nation is affected by the sin of one person (Achan). Meanwhile, Joshua turns his attention to Ai (literally 'the heap'), a city east of Bethel in the central mountain ridge, to get an important foothold in the heartland. Joshua first sends spies (7:2–3), recalling both the first mission that he had authorized (2:1), and the earlier one sent by Moses (Numbers 13–14; Deuteronomy 1). Whereas the account of fearful spies to Moses (Deuteronomy 1:28) gave way to a false confidence which resulted in ignominious defeat (Deuteronomy 1:41–45), this time the message from the spies gave a false confidence (unaware of Achan's sin) resulting in similar defeat, and in both cases the people's hearts 'melt' (Deuteronomy 1:28; Joshua 7:5) at the apparent invincibility of the enemy, because YHWH withdraws his presence from them (Deuteronomy 1:42; Joshua 7:12). Ironically, the fear felt by the Israelites here also directly reverses the fear (also the 'melting hearts') felt by the Amorites before their own advance (5:1).
Joshua assumes the Mosaic role of intercessor (verses 6–9) when he prays together with the 'elders of Israel', while Israel, as a whole, cries to YHWH during this crisis. YHWH's reply to Joshua (7:10–15) is the theological centre of the passage, revealing the problem, known to the reader since verses 1–2, but not yet to Joshua, that Israel was unfaithful in respect of the "ban", so now has become subject to the "ban" itself, as the sin against the "ban" is a 'breach of the covenant' (verse 11). God now prescribes the harsh penalty for infringement of the "ban" (verses 13–15).
"But the children of Israel committed a trespass in the accursed thing: for Achan, the son of Carmi, the son of Zabdi, the son of Zerah, of the tribe of Judah, took of the accursed thing: and the anger of the LORD was kindled against the children of Israel."
"Then Joshua tore his clothes and fell to the earth on his face before the ark of the LORD until the evening, he and the elders of Israel. And they put dust on their heads."
Verse 6.
Joshua's prostration and the elders' dust-strewn heads as signs of mourning are also evident in other biblical texts (Genesis 37:34; 44:13; 1 Samuel 4:12; 2 Samuel 1:2; Job 1:20; 2:12; Lamentations 2:10; Ezekiel 27:30) as well as in extrabiblical texts, such as the Ugaritic Baal epic, in which even the gods mourn in similar ways ("(descends) from the footstool, sits on the earth. He pours dirt of mourning on his head").
Sin of Achan (7:16–26).
The sin of Achan consists not only in having stolen the goods, a kind of robbery of God, but also in having illegitimately transferred them from the holy realm to the profane one; the penalty for the infringement of holiness conventions or regulations was death (cf. Numbers 16). The culprit must be found because otherwise all Israel must bear the guilt. The method of discovering the guilty party is by sacred lot (cf. 1 Samuel 10:20–21). The remaining narrative (7:16–26) records the execution of the divine command, including the collective stoning of Achan and his family to death. The call to 'probity before God' and the 'solemnity of commitment' is also found in the New Testament (Acts 5:1–11).
"Then they raised over him a great heap of stones, still there to this day. So the Lord turned from the fierceness of His anger. Therefore the name of that place has been called the Valley of Achor to this day."
Verse 26.
In 1 Chronicles 2:7, and in some Greek manuscripts of the Septuagint (cf. Joshua 7:1), Achan is called 'Achar', the letters 'r' and 'n' being easily confused in Hebrew. The "valley of Achor" is later mentioned in Joshua 15:7 among the places forming the northern border of Judah, not repeated for Benjamin, so Achan and his family were buried within the territory of his tribe (Judah). The name "valley of Achor" ("valley of disaster") is used for messianic promises in other books of the prophets, where it would be changed into "a resting place" for God's people (Isaiah 65:10) and "a door of hope" (Hosea 2:15).
Archaeology.
Archaeological works in the 1930s at the location of Et-Tell or Khirbet Haijah showed that the city of Ai, an early target for conquest in the putative Joshua account, had existed and been destroyed, but in the 22nd century BCE. Some alternate sites for Ai, such as Khirbet el-Maqatir or Khirbet Nisya, have been proposed which would partially resolve the discrepancy in dates, but these sites have not been widely accepted.
Notes.
<templatestyles src="Reflist/styles.css" />
References.
<templatestyles src="Reflist/styles.css" />
Sources.
<templatestyles src="Refbegin/styles.css" /> | [
{
"math_id": 0,
"text": " \\mathfrak{G}"
}
]
| https://en.wikipedia.org/wiki?curid=70561275 |
705635 | Air mass (astronomy) | Amount of air seen through in astronomical observations
In astronomy, air mass or airmass is a measure of the amount of air along the line of sight when observing a star or other celestial source from below Earth's atmosphere. It is formulated as the integral of air density along the light ray.
As it penetrates the atmosphere, light is attenuated by scattering and absorption; the thicker the atmosphere through which it passes, the greater the attenuation. Consequently, celestial bodies when nearer the horizon appear less bright than when nearer the zenith. This attenuation, known as atmospheric extinction, is described quantitatively by the Beer–Lambert law.
"Air mass" normally indicates "relative air mass", the ratio of absolute air masses (as defined above) at oblique incidence relative to that at zenith. So, by definition, the relative air mass at the zenith is 1. Air mass increases as the angle between the source and the zenith increases, reaching a value of approximately 38 at the horizon. Air mass can be less than one at an elevation greater than sea level; however, most closed-form expressions for air mass do not include the effects of the observer's elevation, so adjustment must usually be accomplished by other means.
Tables of air mass have been published by numerous authors, including Bemporad (1904), Allen (1973), and Kasten and Young (1989).
Definition.
The "absolute air mass" is defined as:
formula_0
where formula_1 is volumetric density of air. Thus formula_2 is a type of oblique column density.
In the vertical direction, the "absolute air mass at zenith" is:
formula_3
So formula_4 is a type of vertical column density.
Finally, the "relative air mass" is:
formula_5
Assuming air density to be uniform allows removing it from the integrals. The absolute air mass then simplifies to a product:
formula_6
where formula_7 is the average density and the arc length formula_8 of the oblique and zenith light paths are:
formula_9
In the corresponding simplified relative air mass, the average density cancels out in the fraction, leading to the ratio of path lengths:
formula_10
Further simplifications are often made, assuming straight-line propagation (neglecting ray bending), as discussed below.
Calculation.
Background.
The angle of a celestial body with the zenith is the "zenith angle" (in astronomy, commonly referred to as the "zenith distance"). A body's angular position can also be given in terms of "altitude", the angle above the geometric horizon; the altitude formula_11 and the zenith angle formula_12 are thus related by
formula_13
Atmospheric refraction causes light entering the atmosphere to follow an approximately circular path that is slightly longer than the geometric path. Air mass must take into account the longer path. Additionally, refraction causes a celestial body to appear higher above the horizon than it actually is; at the horizon, the difference between the true zenith angle and the apparent zenith angle is approximately 34 minutes of arc. Most air mass formulas are based on the apparent zenith angle, but some are based on the true zenith angle, so it is important to ensure that the correct value is used, especially near the horizon.
Plane-parallel atmosphere.
When the zenith angle is small to moderate, a good approximation is given by assuming a homogeneous plane-parallel atmosphere (i.e., one in which density is constant and Earth's curvature is ignored). The air mass formula_14 then is simply the secant of the zenith angle formula_12:
formula_15
At a zenith angle of 60°, the air mass is approximately 2. However, because the Earth is not flat, this formula is only usable for zenith angles up to about 60° to 75°, depending on accuracy requirements. At greater zenith angles, the accuracy degrades rapidly, with formula_16 becoming infinite at the horizon; the horizon air mass in the more realistic spherical atmosphere is usually less than 40.
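A short numerical illustration in Python (the chosen zenith angles are arbitrary) shows how sec "z" grows and why the plane-parallel value cannot be trusted near the horizon.

```python
import math

# Plane-parallel approximation X = sec(z): adequate up to roughly 60-75 degrees,
# but it diverges at the horizon, whereas the real-atmosphere value there is about 38.
for z_deg in (0, 30, 60, 75, 85, 89):
    X = 1.0 / math.cos(math.radians(z_deg))
    print(f"z = {z_deg:2d} deg   sec z = {X:7.2f}")
```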
Interpolative formulas.
Many formulas have been developed to fit tabular values of air mass; one, by Young and Irvine (1967), included a simple corrective term:
formula_17
where formula_18 is the true zenith angle. This gives usable results up to approximately 80°, but the accuracy degrades rapidly at greater zenith angles. The calculated air mass reaches a maximum of 11.13 at 86.6°, becomes zero at 88°, and approaches negative infinity at the horizon. The plot of this formula on the accompanying graph includes a correction for atmospheric refraction so that the calculated air mass is for apparent rather than true zenith angle.
Hardie (1962) introduced a polynomial in formula_19:
formula_20
which gives usable results for zenith angles of up to perhaps 85°. As with the previous formula, the calculated air mass reaches a maximum, and then approaches negative infinity at the horizon.
Rozenberg (1966) suggested
formula_21
which gives reasonable results for high zenith angles, with a horizon air mass of 40.
Kasten and Young (1989) developed
formula_22
which gives reasonable results for zenith angles of up to 90°, with an air mass of approximately 38 at the horizon. Here the second formula_12 term is in "degrees".
Young (1994) developed
formula_23
in terms of the true zenith angle formula_18, for which he claimed a maximum error (at the horizon) of 0.0037 air mass.
Pickering (2002) developed
formula_24
where formula_11 is the apparent altitude formula_25 in degrees. Pickering claimed his equation to have a tenth the error of Schaefer (1998) near the horizon.
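For experimentation, the sketch below implements two of these interpolative fits in the forms in which they are commonly quoted: a Kasten–Young-style formula in the apparent zenith angle and a Pickering-style formula in the apparent altitude. The coefficients are taken from widely circulated statements of the formulas, not from the text above, and should be checked against the original papers before serious use.

```python
import math

def airmass_kasten_young(z_apparent_deg):
    """Kasten-Young-style fit; z is the apparent zenith angle in degrees.
    Coefficients as commonly quoted -- verify against the original paper."""
    z = z_apparent_deg
    return 1.0 / (math.cos(math.radians(z)) + 0.50572 * (96.07995 - z) ** -1.6364)

def airmass_pickering(h_apparent_deg):
    """Pickering-style fit; h is the apparent altitude (90 deg - z) in degrees.
    Coefficients as commonly quoted -- verify against the original paper."""
    h = h_apparent_deg
    return 1.0 / math.sin(math.radians(h + 244.0 / (165.0 + 47.0 * h ** 1.1)))

# Both fits stay close to sec(z) at small zenith angles and give roughly 38 at the horizon.
for z in (0, 60, 80, 85, 88, 90):
    print(z, round(airmass_kasten_young(z), 2), round(airmass_pickering(90 - z), 2))
```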
Atmospheric models.
Interpolative formulas attempt to provide a good fit to tabular values of air mass using minimal computational overhead. The tabular values, however, must be determined from measurements or atmospheric models that derive from geometrical and physical considerations of Earth and its atmosphere.
Nonrefracting spherical atmosphere.
If atmospheric refraction is ignored, it can be shown from simple geometrical considerations (Schoenberg 1929, 173) that the path formula_8 of a light ray at zenith angle formula_12 through a radially symmetrical atmosphere of height formula_26 above the Earth is given by
formula_27
or alternatively,
formula_28
where formula_29 is the radius of the Earth.
The relative air mass is then:
formula_30
Homogeneous atmosphere.
If the atmosphere is homogeneous (i.e., density is constant), the atmospheric height formula_26 follows from hydrostatic considerations as:
formula_31
where formula_32 is the Boltzmann constant, formula_33 is the sea-level temperature, formula_34 is the molecular mass of air, and formula_35 is the acceleration due to gravity. Although this is the same as the pressure scale height of an isothermal atmosphere, the implication is slightly different. In an isothermal atmosphere, 37% (1/e) of the atmosphere is above the pressure scale height; in a homogeneous atmosphere, there is no atmosphere above the atmospheric height.
Taking formula_36, formula_37, and formula_38 gives formula_39. Using Earth's mean radius of 6371 km, the sea-level air mass at the horizon is
formula_40
The homogeneous spherical model slightly underestimates the rate of increase in air mass near the horizon; a reasonable overall fit to values determined from more rigorous models can be had by setting the air mass to match a value at a zenith angle less than 90°. The air mass equation can be rearranged to give
formula_41
matching Bemporad's value of 19.787 at formula_12 = 88° gives formula_42 ≈ 631.01 and formula_43 ≈ 35.54. With the same value for formula_44 as above, formula_45 ≈ 10,096 m.
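The numbers quoted in this subsection can be reproduced with a few lines of Python; the constants below are the values given in the text, and the script is only an illustrative check.

```python
# Illustrative check of the homogeneous-atmosphere values quoted above.
import math

k   = 1.380649e-23          # Boltzmann constant, J/K
T0  = 288.15                # sea-level temperature, K
m   = 28.9644 * 1.6605e-27  # molecular mass of air, kg
g   = 9.80665               # acceleration due to gravity, m/s^2
R_E = 6371.0e3              # Earth's mean radius, m

y_atm = k * T0 / (m * g)
print(y_atm)                               # atmospheric height, ~8435 m

print(math.sqrt(1.0 + 2.0 * R_E / y_atm))  # sea-level horizon air mass, ~38.9

# Rearranged fit: choose R_E / y_atm so that X = 19.787 (Bemporad) at z = 88 deg.
X, z = 19.787, math.radians(88.0)
ratio = (X * X - 1.0) / (2.0 * (1.0 - X * math.cos(z)))
print(ratio)                               # ~631
print(math.sqrt(1.0 + 2.0 * ratio))        # fitted horizon air mass, ~35.5
print(R_E / ratio)                         # implied atmospheric height, ~10,096 m
```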
While a homogeneous atmosphere is not a physically realistic model, the approximation is reasonable as long as the scale height of the atmosphere is small compared to the radius of the planet. The model is usable (i.e., it does not diverge or go to zero) at all zenith angles, including those greater than 90° (see "Homogeneous spherical atmosphere with elevated observer" below). The model requires comparatively little computational overhead, and if high accuracy is not required, it gives reasonable results.
However, for zenith angles less than 90°, a better fit to accepted values of air mass can be had with several of the interpolative formulas.
Variable-density atmosphere.
In a real atmosphere, density is not constant (it decreases with elevation above mean sea level). The absolute air mass for the geometrical light path discussed above becomes, for a sea-level observer,
formula_46
Isothermal atmosphere.
Several basic models for density variation with elevation are commonly used. The simplest, an isothermal atmosphere, gives
formula_47
where formula_48 is the sea-level density and formula_49 is the density scale height. When the limits of integration are zero and infinity, the result is known as the Chapman function. An approximate result is obtained if some high-order terms are dropped, yielding
formula_50
An approximate correction for refraction can be made by taking
formula_51
where formula_29 is the physical radius of the Earth. At the horizon, the approximate equation becomes
formula_52
Using a scale height of 8435 m, Earth's mean radius of 6371 km, and including the correction for refraction,
formula_53
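For reference, the isothermal approximation above can be evaluated directly with the complementary error function in the Python standard library; the scale height, radius, and 7/6 refraction correction are the values quoted in the text, and the script is only an illustrative check.

```python
# Illustrative evaluation of the isothermal approximation with refraction correction.
import math

H   = 8435.0               # density scale height, m
R_E = 6371.0e3             # Earth's mean radius, m
R   = 7.0 / 6.0 * R_E      # refraction-corrected radius

def isothermal_air_mass(z_deg):
    a = R * math.cos(math.radians(z_deg)) ** 2 / (2.0 * H)
    return math.sqrt(math.pi * R / (2.0 * H)) * math.exp(a) * math.erfc(math.sqrt(a))

print(isothermal_air_mass(90.0))   # horizon value, ~37.2
print(isothermal_air_mass(60.0))   # ~2, close to sec z away from the horizon
```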
Polytropic atmosphere.
The assumption of constant temperature is simplistic; a more realistic model is the polytropic atmosphere, for which
formula_54
where formula_33 is the sea-level temperature and formula_55 is the temperature lapse rate. The density as a function of elevation is
formula_56
where formula_57 is the polytropic exponent (or polytropic index). The air mass integral for the polytropic model does not lend itself to a closed-form solution except at the zenith, so the integration usually is performed numerically.
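As noted above, the polytropic air mass integral is usually evaluated numerically. The sketch below does this with a simple midpoint rule; the 6.5 K/km lapse rate and the polytropic exponent κ ≈ 1.235 (the value implied by that lapse rate for dry air) are assumed here for illustration and are not values prescribed by this section.

```python
# Hedged sketch: numerical evaluation of the air-mass integral for a polytropic
# atmosphere. The lapse rate and polytropic exponent below are assumed values.
import math

R_E   = 6371.0e3    # Earth's mean radius, m
T0    = 288.15      # sea-level temperature, K
alpha = 0.0065      # temperature lapse rate, K/m (assumed)
kappa = 1.235       # polytropic exponent (assumed)
y_top = T0 / alpha  # elevation at which the polytropic density reaches zero

def rel_density(y):
    """rho / rho0 for the polytropic profile quoted above."""
    return (1.0 - alpha * y / T0) ** (1.0 / (kappa - 1.0))

def absolute_air_mass(z_deg, steps=100_000):
    """Midpoint-rule evaluation of the absolute air-mass integral."""
    c2 = math.cos(math.radians(z_deg)) ** 2
    dy = y_top / steps
    total = 0.0
    for i in range(steps):
        y = (i + 0.5) * dy
        total += rel_density(y) * (R_E + y) / math.sqrt(R_E * R_E * c2 + 2.0 * R_E * y + y * y)
    return total * dy

sigma_zen = absolute_air_mass(0.0)        # vertical column (per unit sea-level density)
for z in (0.0, 60.0, 85.0, 90.0):
    print(z, round(absolute_air_mass(z) / sigma_zen, 3))   # relative air mass
```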
Layered atmosphere.
Earth's atmosphere consists of multiple layers with different temperature and density characteristics; common atmospheric models include the International Standard Atmosphere and the US Standard Atmosphere. A good approximation for many purposes is a polytropic troposphere of 11 km height with a lapse rate of 6.5 K/km and an isothermal stratosphere of infinite height, which corresponds very closely to the first two layers of the International Standard Atmosphere. More layers can be used if greater accuracy is required.
Refracting radially symmetrical atmosphere.
When atmospheric refraction is considered, ray tracing becomes necessary, and the absolute air mass integral becomes
formula_58
where formula_59 is the index of refraction of air at the observer's elevation formula_60 above sea level, formula_61 is the index of refraction at elevation formula_62 above sea level, formula_63,
formula_64 is the distance from the center of the Earth to a point at elevation formula_62, and formula_65 is distance to the upper limit of the atmosphere at elevation formula_45. The index of refraction in terms of density is usually given to sufficient accuracy (Garfinkel 1967) by the Gladstone–Dale relation
formula_66
Rearrangement and substitution into the absolute air mass integral gives
formula_67
The quantity formula_68 is quite small; expanding the first term in parentheses, rearranging several times, and ignoring terms in formula_69 after each rearrangement, gives
formula_70
Homogeneous spherical atmosphere with elevated observer.
In the figure at right, an observer at O is at an elevation formula_71 above sea level in a uniform radially symmetrical atmosphere of height formula_45. The path length of a light ray at zenith angle formula_12 is formula_8; formula_44 is the radius of the Earth. Applying the law of cosines to triangle OAC,
formula_72
expanding the left- and right-hand sides, eliminating the common terms, and rearranging gives
formula_73
Solving the quadratic for the path length "s", factoring, and rearranging,
formula_74
The negative sign of the radical gives a negative result, which is not physically meaningful. Using the positive sign, dividing by formula_45, and cancelling common terms and rearranging gives the relative air mass:
formula_75
With the substitutions formula_76 and formula_77, this can be given as
formula_78
When the observer's elevation is zero, the air mass equation simplifies to
formula_79
In the limit of grazing incidence, the absolute air mass equals the distance to the horizon. Furthermore, if the observer is elevated, the horizon zenith angle can be greater than 90°.
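The closed-form expression above is straightforward to evaluate. The sketch below uses the same atmospheric height as in the homogeneous model earlier, and the 3000 m observer elevation is an arbitrary illustrative choice.

```python
# Illustrative evaluation of the elevated-observer air-mass expression above.
import math

R_E, y_atm = 6371.0e3, 8435.0   # Earth's radius and homogeneous-atmosphere height, m

def air_mass(z_deg, y_obs=0.0):
    r_hat = R_E / y_atm
    y_hat = y_obs / y_atm
    c = math.cos(math.radians(z_deg))
    return (math.sqrt((r_hat + y_hat) ** 2 * c * c
                      + 2.0 * r_hat * (1.0 - y_hat) - y_hat ** 2 + 1.0)
            - (r_hat + y_hat) * c)

print(air_mass(90.0, 0.0))      # sea-level horizon value, ~38.9
print(air_mass(90.0, 3000.0))   # smaller horizontal air mass for an elevated observer
print(air_mass(91.0, 3000.0))   # zenith angle slightly beyond 90 deg, possible when elevated
```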
Nonuniform distribution of attenuating species.
Atmospheric models that derive from hydrostatic considerations assume an atmosphere of constant composition and a single mechanism of extinction, which is not quite correct. There are three main sources of attenuation: Rayleigh scattering by air molecules, Mie scattering by aerosols, and molecular absorption (primarily by ozone). The relative contribution of each source varies with elevation above sea level, and the concentrations of aerosols and ozone cannot be derived simply from hydrostatic considerations.
Rigorously, when the extinction coefficient depends on elevation, it must be determined as part of the air mass integral. A compromise approach often is possible, however. Methods for separately calculating the extinction from each species using closed-form expressions have been described, including published source code for a BASIC program to perform the calculations. Reasonably accurate calculation of extinction can sometimes be done by using one of the simple air mass formulas and separately determining extinction coefficients for each of the attenuating species.
Implications.
Air mass and astronomy.
In optical astronomy, the air mass provides an indication of the deterioration of the observed image, due not only to the direct effects of spectral absorption, scattering and reduced brightness, but also to an aggregation of visual aberrations, e.g. those resulting from atmospheric turbulence, collectively referred to as the quality of the "seeing". On larger telescopes, such as the WHT and VLT, the atmospheric dispersion can be so severe that it affects the pointing of the telescope to the target. In such cases an atmospheric dispersion compensator is used, which usually consists of two prisms.
The Greenwood frequency and Fried parameter, both relevant for adaptive optics, depend on the air mass above them (or more specifically, on the zenith angle).
In radio astronomy the air mass (which influences the optical path length) is not relevant. The lower layers of the atmosphere, modeled by the air mass, do not significantly impede radio waves, which are of much lower frequency than optical waves. Instead, some radio waves are affected by the ionosphere in the upper atmosphere. Newer aperture synthesis radio telescopes are especially affected by this as they “see” a much larger portion of the sky and thus the ionosphere. In fact, LOFAR needs to explicitly calibrate for these distorting effects, but on the other hand can also study the ionosphere by instead measuring these distortions.
Air mass and solar energy.
In some fields, such as solar energy and photovoltaics, air mass is indicated by the acronym AM; additionally, the value of the air mass is often given by appending its value to AM, so that AM1 indicates an air mass of 1, AM2 indicates an air mass of 2, and so on. The region above Earth's atmosphere, where there is no atmospheric attenuation of solar radiation, is considered to have "air mass zero" (AM0).
Atmospheric attenuation of solar radiation is not the same for all wavelengths; consequently, passage through the atmosphere not only reduces intensity but also alters the spectral irradiance. Photovoltaic modules are commonly rated using spectral irradiance for an air mass of 1.5 (AM1.5); tables of these standard spectra are given in ASTM G 173-03. The extraterrestrial spectral irradiance (i.e., that for AM0) is given in ASTM E 490-00a.
For many solar energy applications, when high accuracy near the horizon is not required, air mass is commonly determined using the simple secant formula described in the section "Plane-parallel atmosphere" above.
See also.
<templatestyles src="Div col/styles.css"/>
Notes.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\sigma = \\int \\rho \\, \\mathrm ds \\,."
},
{
"math_id": 1,
"text": "\\rho"
},
{
"math_id": 2,
"text": "\\sigma"
},
{
"math_id": 3,
"text": "\\sigma_\\mathrm{zen} = \\int \\rho \\, \\mathrm dz "
},
{
"math_id": 4,
"text": "\\sigma_\\mathrm{zen}"
},
{
"math_id": 5,
"text": "X = \\frac \\sigma {\\sigma_\\mathrm{zen}} "
},
{
"math_id": 6,
"text": "\\sigma = \\bar\\rho s"
},
{
"math_id": 7,
"text": "\\bar\\rho = \\mathrm{const.}"
},
{
"math_id": 8,
"text": "s"
},
{
"math_id": 9,
"text": "\\begin{align}\ns &= \\int \\, \\mathrm ds \\\\\ns_\\mathrm{zen} &= \\int \\, \\mathrm dz\n\\end{align}"
},
{
"math_id": 10,
"text": "X = \\frac s {s_\\mathrm{zen}} \\,."
},
{
"math_id": 11,
"text": "h"
},
{
"math_id": 12,
"text": "z"
},
{
"math_id": 13,
"text": "h = 90^\\circ - z \\,."
},
{
"math_id": 14,
"text": "X"
},
{
"math_id": 15,
"text": "X = \\sec\\, z \\,."
},
{
"math_id": 16,
"text": "X = \\sec\\, z"
},
{
"math_id": 17,
"text": "X = \\sec\\,z_\\mathrm t \\, \\left [ 1 - 0.0012 \\,(\\sec^2 z_\\mathrm t - 1) \\right ] \\,,"
},
{
"math_id": 18,
"text": "z_\\mathrm t"
},
{
"math_id": 19,
"text": "\\sec\\,z - 1"
},
{
"math_id": 20,
"text": "X = \\sec\\,z \\,-\\, 0.0018167 \\,(\\sec\\,z \\,-\\, 1) \\,-\\, 0.002875 \\,(\\sec\\,z \\,-\\, 1)^2\n\t \\,-\\, 0.0008083 \\,(\\sec\\,z \\,-\\, 1)^3 "
},
{
"math_id": 21,
"text": "X = \\left (\\cos\\,z + 0.025 e^{-11 \\cos\\, z} \\right )^{-1} \\,,"
},
{
"math_id": 22,
"text": "X = \\frac{1} { \\cos\\, z + 0.50572 \\,(6.07995^\\circ + 90^\\circ - z)^{-1.6364}} \\,,"
},
{
"math_id": 23,
"text": "X = \\frac\n{ 1.002432\\, \\cos^2 z_\\mathrm t + 0.148386 \\, \\cos\\, z_\\mathrm t + 0.0096467 }\n{ \\cos^3 z_\\mathrm t + 0.149864\\, \\cos^2 z_\\mathrm t + 0.0102963 \\, \\cos\\, z_\\mathrm t + 0.000303978 } \\, "
},
{
"math_id": 24,
"text": "X = \\frac{1} { \\sin (h + {244}/(165+47 h^{1.1}) ) } \\,,"
},
{
"math_id": 25,
"text": "(90^\\circ - z)"
},
{
"math_id": 26,
"text": "y_{\\mathrm {atm}}"
},
{
"math_id": 27,
"text": "\n s = \\sqrt {R_\\mathrm {E}^2 \\cos^2 z + 2 R_\\mathrm {E} y_\\mathrm{atm}\n + y_\\mathrm{atm}^2}\n - R_\\mathrm {E} \\cos\\, z \\, "
},
{
"math_id": 28,
"text": "\n s = \\sqrt {\\left ( R_\\mathrm {E} + y_\\mathrm{atm} \\right )^2\n - R_\\mathrm {E}^2 \\sin^2 z}\n - R_\\mathrm {E} \\cos\\, z \\, "
},
{
"math_id": 29,
"text": "R_\\mathrm E"
},
{
"math_id": 30,
"text": "\n X = \\frac s {y_\\mathrm{atm}}\n = \\frac {R_\\mathrm {E}} {y_\\mathrm{atm}} \\sqrt {\\cos^2 z\n + 2 \\frac {y_\\mathrm{atm}} {R_\\mathrm {E}}\n + \\left ( \\frac {y_\\mathrm{atm}} {R_\\mathrm {E}} \\right )^2 }\n - \\frac {R_\\mathrm {E}} {y_\\mathrm{atm}} \\cos\\, z \\,.\n"
},
{
"math_id": 31,
"text": "y_\\mathrm{atm} = \\frac {kT_0} {mg} \\,,"
},
{
"math_id": 32,
"text": "k"
},
{
"math_id": 33,
"text": "T_0"
},
{
"math_id": 34,
"text": "m"
},
{
"math_id": 35,
"text": "g"
},
{
"math_id": 36,
"text": "T_0 = \\mathrm{288.15~K}"
},
{
"math_id": 37,
"text": "m = \\mathrm{ 28.9644 \\times 1.6605 \\times 10^{-27} ~ kg}"
},
{
"math_id": 38,
"text": "g = \\mathrm{9.80665 ~ m/s^2}"
},
{
"math_id": 39,
"text": "y_\\mathrm{atm} \\approx \\mathrm{8435 ~ m}"
},
{
"math_id": 40,
"text": "\n X_\\mathrm{horiz} = \\sqrt {1 + 2 \\frac {R_\\mathrm {E}} {y_\\mathrm{atm}}} \\approx 38.87 \\,.\n"
},
{
"math_id": 41,
"text": "\\frac {R_\\mathrm{E}} {y_\\mathrm{atm}}\n = \\frac {X^2 - 1} {2 \\left ( 1 - X \\cos z \\right )} \\,;"
},
{
"math_id": 42,
"text": "R_\\mathrm{E} / y_\\mathrm{atm}"
},
{
"math_id": 43,
"text": "X_\\mathrm{horiz}"
},
{
"math_id": 44,
"text": "R_\\mathrm{E}"
},
{
"math_id": 45,
"text": "y_\\mathrm{atm}"
},
{
"math_id": 46,
"text": "\n \\sigma = \\int_0^{y_\\mathrm{atm}}\n \\frac {\\rho \\, \\left ( R_\\mathrm {E} + y \\right ) \\mathrm d y}\n {\\sqrt {R_\\mathrm {E}^2 \\cos^2 z + 2 R_\\mathrm {E} y + y^2}} \\,.\n"
},
{
"math_id": 47,
"text": "\\rho = \\rho_0 e^{-y / H} \\,,"
},
{
"math_id": 48,
"text": "\\rho_0"
},
{
"math_id": 49,
"text": "H"
},
{
"math_id": 50,
"text": "\n X \\approx \\sqrt { \\frac {\\pi R} {2 H}}\n \\exp {\\left ( \\frac {R \\cos^2 z} {2 H} \\right )} \\,\n \\mathrm {erfc} \\left ( \\sqrt {\\frac {R \\cos^2 z} {2 H}} \\right ) \\,.\n"
},
{
"math_id": 51,
"text": "R = 7/6 \\, R_\\mathrm E \\,,"
},
{
"math_id": 52,
"text": "X_\\mathrm{horiz} \\approx \\sqrt { \\frac {\\pi R} {2 H}} \\,."
},
{
"math_id": 53,
"text": "X_\\mathrm{horiz} \\approx 37.20 \\,."
},
{
"math_id": 54,
"text": "T = T_0 - \\alpha y \\,,"
},
{
"math_id": 55,
"text": "\\alpha"
},
{
"math_id": 56,
"text": "\\rho = \\rho_0 \\left ( 1 - \\frac \\alpha T_0 y \\right )^{1 / (\\kappa - 1)} \\,,"
},
{
"math_id": 57,
"text": "\\kappa"
},
{
"math_id": 58,
"text": "\n \\sigma = \\int_{r_\\mathrm{obs}}^{r_\\mathrm{atm}} \\frac {\\rho\\, \\mathrm d r}\n {\\sqrt { 1 - \\left ( \\frac {n_\\mathrm{obs}} n \\frac {r_\\mathrm{obs}} r \\right )^2 \\sin^2 z}} \\, "
},
{
"math_id": 59,
"text": "n_\\mathrm{obs}"
},
{
"math_id": 60,
"text": "y_\\mathrm{obs}"
},
{
"math_id": 61,
"text": "n"
},
{
"math_id": 62,
"text": "y"
},
{
"math_id": 63,
"text": "r_\\mathrm{obs} = R_\\mathrm{E} + y_\\mathrm{obs}"
},
{
"math_id": 64,
"text": "r = R_\\mathrm{E} + y"
},
{
"math_id": 65,
"text": "r_\\mathrm{atm} = R_\\mathrm{E} + y_\\mathrm{atm}"
},
{
"math_id": 66,
"text": "\\frac {n - 1} {n_\\mathrm{obs} - 1} = \\frac {\\rho} {\\rho_\\mathrm{obs}} \\,."
},
{
"math_id": 67,
"text": "\n \\sigma = \\int_{r_\\mathrm{obs}}^{r_\\mathrm{atm}} \\frac {\\rho\\, \\mathrm d r}\n {\\sqrt { 1 - \\left ( \\frac {n_\\mathrm{obs}} {1 + ( n_\\mathrm{obs} - 1 ) \\rho/\\rho_\\mathrm{obs}} \\right )^2 \\left ( \\frac {r_\\mathrm{obs}} r \\right )^2 \\sin^2 z}} \\,.\n"
},
{
"math_id": 68,
"text": "n_\\mathrm{obs} - 1"
},
{
"math_id": 69,
"text": "(n_\\mathrm{obs} - 1)^2"
},
{
"math_id": 70,
"text": "\n \\sigma = \\int_{r_\\mathrm{obs}}^{r_\\mathrm{atm}} \\frac {\\rho\\, \\mathrm d r}\n {\\sqrt { 1 - \\left [ 1 + 2 ( n_\\mathrm{obs} - 1 )(1 - \\frac \\rho {\\rho_\\mathrm{obs}} ) \\right ]\n \\left ( \\frac {r_\\mathrm{obs}} r \\right )^2 \\sin^2 z}} \\,.\n"
},
{
"math_id": 71,
"text": "y_\\text{obs}"
},
{
"math_id": 72,
"text": " \\begin{align}\n\\left(R_{E}+y_\\text{atm}\\right)^2 & =s^2+\\left(R_{E}+y_\\text{obs}\\right)^2-2\\left(R_{E}+y_\\text{obs}\\right)s \\cos\\left(180^{\\circ}-z\\right)\\\\\n & =s^2+\\left(R_{E}+y_\\text{obs}\\right)^2+2\\left(R_{E}+y_\\text{obs}\\right)s\\cos z\\end{align}\n"
},
{
"math_id": 73,
"text": "{{s}^2}+2\\left( {R_{\\text{E}}}+{y_{\\text{obs}}} \\right)s\\cos z-2{R_{\\text{E}}}{y_{\\text{atm}}}-y_{\\text{atm}}^2+2{R_{\\text{E}}}{y_{\\text{obs}}}+y_{\\text{obs}}^2=0 \\,."
},
{
"math_id": 74,
"text": "s=\\pm \\sqrt{{{\\left( {R_{\\text{E}}}+{y_{\\text{obs}}} \\right)}^2}{{\\cos }^2}z+2{R_{\\text{E}}}\\left( {y_{\\text{atm }}}-{y_{\\text{obs}}} \\right)+y_{\\text{atm}}^2-y_{\\text{obs}}^2}-({R_{\\text{E}}}+{y_{\\text{obs}}})\\cos z \\,."
},
{
"math_id": 75,
"text": "X=\\sqrt{{{\\left( \\frac{{R_{\\text{E}}}+{y_{\\text{obs}}}}{{y_{\\text{atm}}}} \\right)}^2}{{\\cos }^2}z+\\frac{2{R_{\\text{E}}}}{y_{\\text{atm}}^2}\\left( {y_{\\text{atm}}}-{y_{\\text{obs}}} \\right)-{{\\left( \\frac{{y_{\\text{obs}}}}{{y_{\\text{atm}}}} \\right)}^2}+1}-\\frac{{R_{\\text{E}}}+{y_{\\text{obs}}}}{{y_{\\text{atm}}}}\\cos z \\,."
},
{
"math_id": 76,
"text": "\\hat{r} = R_\\mathrm{E} / y_\\mathrm{atm}"
},
{
"math_id": 77,
"text": "\\hat{y} = y_\\mathrm{obs} / y_\\mathrm{atm}"
},
{
"math_id": 78,
"text": "X = \\sqrt{{(\\hat{r}+\\hat{y})}^2 \\cos^2 z + 2 \\hat{r} (1-\\hat{y}) - \\hat{y}^2 +1} \\; - \\; (\\hat{r}+\\hat{y}) \\cos z \\,."
},
{
"math_id": 79,
"text": "X = \\sqrt{{{\\left( \\frac{R_\\text{E}}{y_\\text{atm}} \\right)}^2} \\cos^2 z + \\frac{2 R_\\text{E}}{y_\\text{atm}} + 1} - \\frac{R_\\text{E}}{y_\\text{atm}} \\cos z \\,."
}
]
| https://en.wikipedia.org/wiki?curid=705635 |
70565332 | CoreXY | 2-dimensional kinematic system
CoreXY is a technique used to move the printhead of a 3D printer or the toolhead in CNC machines in the horizontal plane. The advantage of this technique is that the two motors used to perform the movement in the horizontal plane are stationary and do not have to move themselves, which can result in less moving mass. Instead, drive belts are used which are connected in an intricate way to provide movement in a Cartesian coordinate system.
Movement.
For movement along the "x"-axis, both motors must rotate in the same direction. For movement along the "y"-axis, the motors must rotate in opposite directions. If only one motor rotates, the movement will be diagonal.
The movement can be mathematically described as follows. If formula_0 is the movement of the first motor and formula_1 the movement of the second motor, the movement in the "x" and "y" directions is given by:
formula_2
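The relations above translate directly into code. The following minimal Python sketch (names chosen here for illustration) gives the forward mapping from the two motor displacements to the toolhead displacement, and its inverse.

```python
# Minimal sketch of the CoreXY kinematics described above.
def corexy_forward(delta_a: float, delta_b: float) -> tuple[float, float]:
    """Motor displacements (A, B) -> toolhead displacement (x, y)."""
    return 0.5 * (delta_a + delta_b), 0.5 * (delta_a - delta_b)

def corexy_inverse(dx: float, dy: float) -> tuple[float, float]:
    """Toolhead displacement (x, y) -> required motor displacements (A, B)."""
    return dx + dy, dx - dy

print(corexy_forward(1.0, 1.0))    # (1.0, 0.0): same direction -> pure x motion
print(corexy_forward(1.0, -1.0))   # (0.0, 1.0): opposite directions -> pure y motion
print(corexy_forward(1.0, 0.0))    # (0.5, 0.5): one motor alone -> diagonal motion
print(corexy_inverse(1.0, 0.0))    # (1.0, 1.0)
```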
Compared to conventional printers.
Other Cartesian 3D printers that do not use the CoreXY technique most commonly also use two motors for the "xy"-plane, but with one motor independently responsible for movement along the "x"-axis and the other independently responsible for movement along the "y"-axis. This is sometimes called a "Cartesian technique". A "bed slinger" is a Cartesian variant in which the build surface moves along the "y"-axis and the print head moves along the "x"-axis; this technique is used on, among others, the Prusa i3 and its clones.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "A"
},
{
"math_id": 1,
"text": "B"
},
{
"math_id": 2,
"text": "\n\\begin{align}\n\\Delta x &= \\frac{1}{2}(\\Delta A + \\Delta B) \\\\\n\\Delta y &= \\frac{1}{2}(\\Delta A - \\Delta B)\n\\end{align}\n"
}
]
| https://en.wikipedia.org/wiki?curid=70565332 |
70569824 | Structured illumination light sheet microscopy | Structured-illumination light sheet microscopy
Structured illumination light sheet microscopy (SI-LSM) is an optical imaging technique used for achieving volumetric imaging with high temporal and spatial resolution in all three dimensions. It combines the ability of light sheet microscopy to maintain spatial resolution throughout relatively thick samples with the higher axial and spatial resolution characteristic of structured illumination microscopy. SI-LSM can achieve lateral resolution below 100 nm in biological samples hundreds of micrometers thick.
SI-LSM is most often used for fluorescent imaging of living biological samples, such as cell cultures. It is particularly useful for longitudinal studies, where high-rate imaging must be performed over long periods of time without damaging the sample. The two methods most used for fluorescent imaging of 3D samples – confocal microscopy and widefield microscopy – both have significant drawbacks for this type of application. In widefield microscopy, both in-focus light from the plane of interest and out-of-focus light from the rest of the sample are acquired together, creating the “missing cone problem” which makes high resolution imaging difficult. Although confocal microscopy largely solves this problem by using a pinhole to block unfocused light, this technique also inevitably blocks useful signal, which is particularly detrimental in fluorescent imaging when the signal is already very weak. In addition, both widefield and confocal microscopy illuminate the entirety of the sample throughout imaging, which leads to problems with photobleaching and phototoxicity in some samples. While light sheet microscopy alone can address most of these issues, its achieved resolution is still fundamentally limited by the diffraction of light and it is unable to achieve super-resolution.
SI-LSM works by using a patterned rather than uniform light sheet to illuminate a single plane of a volume being imaged. In this way, it maintains the many benefits of light-sheet microscopy while achieving the high resolution of structured illumination microscopy.
Background and Theory.
The theory behind SI-LSM is best understood by considering the separate development of structured illumination and light sheet microscopy.
Structured Illumination Microscopy.
Structured illumination microscopy (SIM) is a method of super-resolution microscopy which is performed by acquiring multiple images of the same sample under different patterns of illumination, then computationally combining these images to achieve a single reconstruction with up to 2x improvement over the diffraction limited lateral resolution. The theory was first proposed and implemented in a 1995 paper by John M. Guerra in which a silicon grating with 50 nm lines and spaces was resolved with 650 nm wavelength (in air) illumination structured by a transparent replica proximal to said grating. The name “structured illumination microscopy” was coined in 2000 by M.G.L. Gustafsson. SIM takes advantage of the “Moiré Effect”, which occurs when two patterns are multiplicatively superimposed. The superimposition causes “Moiré Fringes” to appear, which are coarser than either original pattern but still contain information about the high frequency patterns which would otherwise not be visible.
The theory behind SIM is best understood in the Fourier or frequency domain. In general, imaging systems can only resolve frequencies below the diffraction limit. Thus, in the Fourier domain, all recorded frequencies from the imaged sample would reside within a circle of a fixed radius. Any frequencies outside this limit cannot be resolved. However, the frequency spectrum can be shifted by imaging the sample with patterned illumination. Most often, the pattern is a 1D sinusoidal gradient, such as the pattern used to create the Moiré fringes in the above image. Because the Fourier transform of a sinusoid is a shifted delta function, the transform of this pattern will consist of three delta functions: one at the zero frequency and two corresponding to the positive and negative frequency components of the sinusoid (see below image). When the target is illuminated using this pattern, the target and illumination pattern are multiplicatively superimposed, which means the Fourier transform of the resulting image is the convolution of the individual transforms of the target and the illumination pattern. Convolving any function with a delta function has the effect of shifting the center of the original function to the location of the delta function. Thus, in this situation, the frequency spectrum of the target is shifted and frequencies that were previously too high to resolve now lie within the circle of resolvable frequencies. The result is that for a single image acquisition with SIM, the frequency components from three separate regions in the Fourier domain (corresponding to the center and the positive and negative shifts) are all captured together. Finally, because rotation in the spatial domain results in the same rotation in the Fourier domain, high frequencies over the full 360° can be captured by rotating the illumination pattern. Figure b) in the image below shows which frequency components would be captured by acquiring 4 separate images and rotating the illumination pattern by 45° in between each acquisition.
Once all images have been captured, a single final image can be computationally reconstructed. Using this technique, resolution can be improved up to 2x over the diffraction limit. This 2x limit is imposed because the illumination pattern itself is still diffraction limited.
The concepts behind 2D SIM can be expanded to 3D volumetric imaging. By using three mutually coherent beams of excitation light, interference patterns with multiple frequency components can be created in the imaged sample. This ultimately makes it possible to perform 3D reconstructions with up to 2x improved resolution along all three axes. However, due to the strong scattering coefficient of biological tissues, this theoretical resolution can only be achieved in samples thinner than about 10 um. Beyond that, the scattering leads to an excess of background signal which makes accurate reconstruction impossible.
Light Sheet Microscopy.
Light sheet microscopy (LSM) was developed to allow for fine optical sectioning of thick biological samples without the need for physical sectioning or clearing, which are both time consuming and detrimental to in-vivo imaging. While most fluorescent imaging techniques use aligned illumination and detection axes, LSM utilizes orthogonal axes. A focused light sheet is used to illuminate the sample from the side, while the fluorescent signal is detected from above. This both eliminates the “cone problem” of widefield microscopy by eliminating out-of-focus contributions from planes not being actively imaged and reduces the impact of photobleaching since the entire sample is not illuminated throughout imaging. In addition, because the sample is illuminated from the side, the focus of the illumination light is not depth-dependent, making volumetric imaging of biological samples far more feasible. A major ongoing challenge in LSM is in shaping the light sheet. In general, there is a tradeoff between the thickness of the light sheet at the optical axis (which largely determines axial resolution) and the field-of-view over which the light sheet maintains adequate thickness. This problem can be partially addressed by the added resolution from SI-LSM.
Techniques.
SI-LSM can be divided into two main categories. Optical Sectioning SI-LSM is the most common approach and improves axial resolution by further reducing the impact of un-focused background signal. Super-resolution SI-LSM uses the illumination and reconstruction techniques of 2D SIM to achieve super-resolution in 3D samples.
Optical Sectioning SI-LSM.
Optical sectioning SI-LSM (OS-SI-LSM) was first described in a 1997 paper by M.A. Neil et al. Rather than achieving super-resolution, this technique uses the ideas behind structured illumination to improve axial resolution by removing background haze from layers other than where the illuminating light sheet is most focused. While there are several approaches for achieving this, the most common approach is known as “three-phase” SIM, which will be described here.
It is shown in the Neil paper that the signal acquired by imaging a target with a grid illumination pattern can be represented by the following equation:
formula_0
Here, formula_1 is the background signal, while formula_2 and formula_3 are signals from the region of the target illuminated by the cosine and sine components of the grid. It is also shown that an in-focus image of the plane of interest could be reconstructed using the equation:
formula_4
This can be achieved by acquiring three separate images under the grid illumination conditions, shifting the spatial phase of the grid by 120° (one third of the grid period) between each acquisition. The desired 2D image can then be reconstructed using the equation:
formula_5
This creates a 2D image containing only information from most focused region of the grid illumination pattern. If this pattern is created using a light sheet, the sheet can then be scanned in the axial direction to generate a full 3D reconstruction of a sample. The primary drawback of using this approach for reducing background signal is that it ultimately relies on subtracting out the shared background signal between two images. Some in-focus signal will inevitably be subtracted alongside the background haze. This will result in an overall reduction of signal, which can be detrimental in low-signal fluorescent imaging. Nevertheless, this technique is the most common use of SI-LSM and has shown improved axial resolution over LSM alone.
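The effect of the reconstruction formula can be seen in a toy calculation for a single pixel: an unmodulated background plus an in-focus signal modulated by the grid. The numerical values and the simple modulation model below are illustrative assumptions, not taken from the cited references.

```python
# Toy single-pixel illustration of the three-phase reconstruction above:
# the unmodulated background cancels, leaving a value proportional to the
# in-focus (grid-modulated) signal.
import math

def measurement(in_focus, background, grid_phase, pixel_phase=0.8):
    """One simulated acquisition; only the in-focus part is modulated by the grid."""
    return background + in_focus * (1.0 + math.cos(pixel_phase + grid_phase)) / 2.0

in_focus, background = 5.0, 40.0
i1, i2, i3 = (measurement(in_focus, background, phase)
              for phase in (0.0, 2.0 * math.pi / 3.0, 4.0 * math.pi / 3.0))

i_p = math.sqrt((i1 - i2) ** 2 + (i1 - i3) ** 2 + (i2 - i3) ** 2)
print(i_p)   # ~5.3, proportional to the in-focus signal and independent of the background
```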
Super-resolution SI-LSM.
Super-resolution SI-LSM (SR-SI-LSM) uses the techniques from 2D or 3D SIM while using a light sheet as the illumination source to achieve the spatial resolution of SIM alongside the depth-independent imaging and low photobleaching of LSM. In the most common application, a light sheet is used to create a 1D sinusoidal pattern at a single plane of the 3D target sample. The pattern is then rotated multiple times at this single plane to acquire enough images for a high resolution 2D reconstruction. The light sheet is then scanned in the axial direction and the process is repeated until there are enough 2D images for a full 3D reconstruction. In general, this approach demonstrates not only improved resolution but also improved SNR over OS-SI-LSM, because no information is discarded in the reconstruction. In addition, although the theoretical resolution for SR-SI-LSM is slightly lower than that of 3D SIM, at depths >10 um this technique shows improved performance over 3D SIM due to the depth-independent focusing of illumination light characteristic of LSM.
Implementation.
A major challenge in SI-LSM is engineering systems which are physically capable of generating structured patterns in light sheets. The three main approaches for accomplishing this are using interfering light sheets, digital LSM, and spatial light modulators.
With interfering light sheets, two coherent counterpropagating sheets are sent into the sample. The interference pattern between these sheets creates the desired illumination pattern, which can be rotated and scanned using rotating mirrors to deflect the sheets. Additional flexibility can be added by using digital light-sheet microscopy to generate the illumination patterns. In digital LSM, the light sheet is created by rapidly scanning a laser beam through the sample. This allows for fine control over the specific illumination pattern by modulating the intensity of the laser as it scans. This technique has been used to create systems capable of multiple types of light sheet microscopy in addition to SI-LSM. Finally, spatial light modulators can be used to electronically control the light patterns, which has the advantage of allowing for very fine control of and fast switching between patterns.
In addition, much of the recent work around SI-LSM focuses on combining the approach with other techniques for deep imaging in biological tissues. For instance, a 2021 paper demonstrated the use of SI-LSM with NIR-II illumination to improve resolution of transcranial mouse brain imaging by ~1.7x with a penetration depth of ~750 um and almost 16x improvement in the signal to background ratio. Other promising directions include combining SIM with other techniques for shaping the light sheets in LSM, combining SI-LSM with two-photon excitation, or using non-linear fluorescence to further push the resolution limits.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "I_n = I_0 + I_c \\cos(\\phi) + I_s \\sin(\\phi)"
},
{
"math_id": 1,
"text": "I_0"
},
{
"math_id": 2,
"text": "I_c"
},
{
"math_id": 3,
"text": "I_s"
},
{
"math_id": 4,
"text": "I_p = (I_c^2 + I_s^2)^{0.5}"
},
{
"math_id": 5,
"text": "I_p = ((I_1 - I_2)^2 + (I_1 - I_3)^2 + (I_2 - I_3)^2)^{0.5}"
}
]
| https://en.wikipedia.org/wiki?curid=70569824 |
70571129 | Finite sphere packing | Mathematical theory
In mathematics, the theory of finite sphere packing concerns the question of how a finite number of equally-sized spheres can be most efficiently packed. The question of packing finitely many spheres has only been investigated in detail in recent decades, with much of the groundwork being laid by László Fejes Tóth.
The similar problem for infinitely many spheres has a longer history of investigation, from which the Kepler conjecture is most well-known. Atoms in crystal structures can be simplistically viewed as closely-packed spheres and treated as infinite sphere packings thanks to their large number.
Sphere packing problems are distinguished between packings in given containers and free packings. This article primarily discusses free packings.
Packing and convex hulls.
In general, a "packing" refers to any arrangement of a set of spatially-connected, possibly differently-sized or differently-shaped objects in space such that none of them overlap. In the case of the finite sphere packing problem, these objects are restricted to equally-sized spheres. Such a packing of spheres determines a specific volume known as the convex hull of the packing, defined as the smallest convex set that includes all the spheres.
Packing shapes.
There are many possible ways to arrange spheres, which can be classified into three basic groups: sausage, pizza, and cluster packing.
Sausage packing.
An arrangement in which the midpoint of all the spheres lie on a single straight line is called a "sausage packing", as the convex hull has a sausage-like shape. An approximate example in real life is the packing of tennis balls in a tube, though the ends must be rounded for the tube to coincide with the actual convex hull.
Pizza packing.
If all the midpoints lie on a plane, the packing is a "pizza packing". Approximate real-life examples of this kind of packing include billiard balls being packed in a triangle as they are set up. This holds for packings in three-dimensional Euclidean space.
Cluster packing.
If the midpoints of the spheres are arranged throughout 3D space, the packing is termed a "cluster packing". Real-life approximations include fruit being packed in multiple layers in a box.
Relationships between types of packing.
By the given definitions, any sausage packing is technically also a pizza packing, and any pizza packing is technically also a cluster packing. In the more general case of formula_0 dimensions, "sausages" refer to one-dimensional arrangements, "clusters" to "formula_0"-dimensional arrangements, and "pizzas" to those with an in-between number of dimensions.
One or two spheres always make a sausage. With three, a pizza packing (that is not also a sausage) becomes possible, and with four or more, clusters (that are not also pizzas) become possible.
Optimal packing.
The empty space between spheres varies depending on the type of packing. The amount of empty space is measured in the packing density, which is defined as the ratio of the volume of the spheres to the volume of the total convex hull. The higher the packing density, the less empty space there is in the packing and thus the smaller the volume of the hull (in comparison to other packings with the same number and size of spheres).
To pack the spheres efficiently, it might be asked which packing has the highest possible density. It is easy to see that such a packing should have the property that the spheres lie next to each other, that is, each sphere should touch another on the surface. A more exact phrasing is to form a graph that assigns a vertex to each sphere and connects two vertices with an edge whenever the surfaces of the corresponding spheres touch. Then the highest-density packing must satisfy the property that the corresponding graph is connected.
Sausage catastrophe.
With three or four spheres, the sausage packing is optimal. It is believed that this holds true for any formula_1 up to formula_2 along with formula_3. For formula_4 and formula_5, a cluster packing exists that is more efficient than the sausage packing, as shown in 1992 by Jörg Wills and Pier Mario Gandini. It remains unknown what these most efficient cluster packings look like. For example, in the case formula_6, it is known that the optimal packing is not a tetrahedral packing like the classical packing of cannon balls, but is likely some kind of octahedral shape.
The sudden transition in optimal packing shape is jokingly known by some mathematicians as the "sausage catastrophe" (Wills, 1985). The designation "catastrophe" comes from the fact that the optimal packing shape suddenly shifts from the orderly sausage packing to the relatively unordered cluster packing and vice versa as one goes from one number to another, without a satisfying explanation as to why this happens. Even so, the transition in three dimensions is relatively tame; in formula_7 dimensions the sudden transition is conjectured to happen around 377000 spheres.
For dimensions formula_8, the optimal packing is always either a sausage or a cluster, and never a pizza. It is an open problem whether this holds true for all dimensions. This result only concerns spheres and not other convex bodies; in fact Gritzmann and Arhelger observed that for any dimension formula_9 there exists a convex shape for which the closest packing is a pizza.
Example of the sausage packing being non-optimal.
In the following it is shown that for 455 spheres the sausage packing is non-optimal, and that there instead exists a special cluster packing that occupies a smaller volume.
The volume of the convex hull of a sausage packing with formula_1 spheres of radius formula_10 is calculable with elementary geometry. The middle part of the hull is a cylinder of length formula_11, while the caps at the ends are half-spheres of radius formula_10. The total volume formula_12 is therefore given by
formula_13
Similarly, it is possible to find the volume of the convex hull of a tetrahedral packing, in which the spheres are arranged so that they form a tetrahedral shape, which only leads to completely filled tetrahedra for specific numbers of spheres. If there are formula_14 spheres along one edge of the tetrahedron, the total number of spheres formula_1 is given by
formula_15.
Now the inradius formula_10 of a regular tetrahedron with side length formula_16 is
formula_17.
From this we have
formula_18.
The volume formula_19 of the tetrahedron is then given by the formula
formula_20
In the case of many spheres being arranged inside a tetrahedron, the length of an edge formula_16 increases by twice the radius of a sphere for each new layer, meaning that for formula_14 layers the side length becomes
formula_21.
Substituting this value into the volume formula for the tetrahedron, we know that the volume formula_22 of the convex hull must be smaller than the tetrahedron itself, so that
formula_23.
Taking the number of spheres in a tetrahedron of formula_14 layers and substituting it into the earlier expression for the volume formula_24 of the convex hull of a sausage packing with the same number of spheres, we have
formula_25.
For formula_26, which translates to formula_27 spheres, the coefficient in front of formula_28 is about 2845 for the tetrahedral packing and 2856 for the sausage packing, which implies that for this number of spheres the tetrahedron is more closely packed.
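The coefficient comparison above is easy to check numerically; the following short Python sketch simply re-evaluates the two expressions for formula_26.

```python
# Numerical check of the volume comparison above for x = 13 (n = 455 spheres).
import math

x = 13
n = x * (x + 1) * (x + 2) // 6                                  # 455 spheres
tetra_coeff   = 2.0 * (x - 1 + math.sqrt(6.0)) ** 3 * math.sqrt(2.0) / 3.0
sausage_coeff = (x * (x + 1) * (x + 2) - 2) / 3.0 * math.pi

print(n)                              # 455
print(tetra_coeff, sausage_coeff)     # ~2.84e3 versus ~2.86e3
print(tetra_coeff < sausage_coeff)    # True: the tetrahedral bound is smaller
```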
It is also possible with some more effort to derive the exact formula for the volume of the tetrahedral convex hull formula_22, which would involve subtracting the excess volume at the corners and edges of the tetrahedron. This allows the sausage packing to be proved non-optimal for smaller values of formula_14 and therefore formula_1.
Sausage conjecture.
The term "sausage" comes from the mathematician László Fejes Tóth, who posited the sausage conjecture in 1975; it concerns a generalized version of the problem for spheres, convex hulls, and volumes in higher dimensions. A generalized sphere in formula_0 dimensions is a formula_0-dimensional body in which every boundary point lies equally far from the midpoint. Fejes Tóth's sausage conjecture then states that from formula_29 upwards it is always optimal to arrange the spheres along a straight line; that is, the sausage catastrophe no longer occurs once we go above 4 dimensions. The overall conjecture remains open. The best results so far are those of Ulrich Betke and Martin Henk, who proved the conjecture for dimensions 42 and above.
Parametric density and related methods.
While it may be proved that the sausage packing is not optimal for 56 spheres, and that there must be some other packing that is optimal, it is not known what the optimal packing looks like. It is difficult to find the optimal packing as there is no "simple" formula for the volume of an arbitrarily shaped cluster. Optimality (and non-optimality) is shown through appropriate estimates of the volume, using methods from convex geometry, such as the Brunn-Minkowski inequality, mixed Minkowski volumes and Steiner's formula. A crucial step towards a unified theory of both finite and infinite (lattice and non-lattice) sphere packings was the introduction of parametric densities by Jörg Wills in 1992. The parametric density takes into account the influence of the edges of the packing.
The definition of density used earlier concerns the volume of the convex hull of the spheres (or convex bodies) formula_30:
formula_31
where formula_32 is the convex hull of the formula_1 midpoints formula_33 of the spheres formula_34 (instead of the sphere, we can also take an arbitrary convex body for formula_30). For a linear arrangement (sausage), the convex hull is a line segment through all the midpoints of the spheres. The plus sign in the formula refers to Minkowski addition of sets, so that formula_35 refers to the volume of the convex hull of the spheres.
This definition works in two dimensions, where Laszlo Fejes-Toth, Claude Rogers and others used it to formulate a unified theory of finite and infinite packings. In three dimensions, Wills gives a simple argument that such a unified theory is not possible based on this definition: The densest finite arrangement of coins in three dimensions is the sausage with formula_36. However, the optimal infinite arrangement is a hexagonal arrangement with formula_37, so the infinite value cannot be obtained as a limit of finite values. To solve this issue, Wills introduces a modification to the definition by adding a positive parameter formula_38:
formula_39
formula_38 allows the influence of the edges to be considered (giving the convex hull a certain thickness). This is then combined with methods from the theory of mixed volumes and geometry of numbers by Hermann Minkowski.
For each dimension formula_40 there are parameter values formula_41 and formula_42 such that for formula_43 the sausage is the densest packing (for all integers formula_1), while for formula_44 and sufficiently large formula_1 the cluster is densest. These parameters are dimension-specific. In two dimensions, formula_45 so that there is a transition from sausages to clusters (sausage catastrophe).
There holds an inequality:
formula_46
where the volume of the unit ball formula_47 in formula_0 dimensions is formula_48. For formula_49, we have formula_50 and it is predicted that this holds for all dimensions, in which case the value of formula_42 can be found from that of formula_51. | [
{
"math_id": 0,
"text": "d"
},
{
"math_id": 1,
"text": "n"
},
{
"math_id": 2,
"text": "55"
},
{
"math_id": 3,
"text": "n=57, 58, 63, 64"
},
{
"math_id": 4,
"text": "n=56, 59, 60, 61, 62"
},
{
"math_id": 5,
"text": "n \\geq 65"
},
{
"math_id": 6,
"text": "n = 56"
},
{
"math_id": 7,
"text": "d = 4"
},
{
"math_id": 8,
"text": "d \\leq 10"
},
{
"math_id": 9,
"text": " d \\geq 3"
},
{
"math_id": 10,
"text": "r"
},
{
"math_id": 11,
"text": "h = 2r \\cdot (n-1)"
},
{
"math_id": 12,
"text": "V_W"
},
{
"math_id": 13,
"text": "\\begin{align}\nV_W &= V_\\text{cylinder} + 2 \\cdot V_\\text{half-sphere}\\\\\n &= V_\\text{cylinder} + V_\\text{sphere}\\\\\n &= \\pi h r^2 + \\frac{4}{3}\\pi r^3\\\\\n &= \\pi 2r \\cdot (n-1) \\cdot r^2 + \\frac{4}{3}\\pi r^3\\\\\n &= 2 \\cdot \\left( n - \\frac{1}{3}\\right) \\pi r^3\n\\end{align}"
},
{
"math_id": 14,
"text": "x"
},
{
"math_id": 15,
"text": "n = \\sum_{i=1}^x\\sum_{j=1}^i j = \\sum_{i=1}^x \\frac{i\\cdot(i+1)}{2} = \\frac{x\\cdot(x+1)\\cdot(x+2)}{6}"
},
{
"math_id": 16,
"text": "a"
},
{
"math_id": 17,
"text": "r = \\frac{\\sqrt{6}}{12} \\cdot a"
},
{
"math_id": 18,
"text": "a = 2\\sqrt{6} \\cdot r"
},
{
"math_id": 19,
"text": "V_T"
},
{
"math_id": 20,
"text": "V_T = \\frac{\\sqrt{2}}{12} \\cdot a^3 = \\sqrt{192} \\cdot r^3 "
},
{
"math_id": 21,
"text": "a = 2 \\cdot \\left( x - 1 + \\sqrt{6} \\right) \\cdot r"
},
{
"math_id": 22,
"text": "V"
},
{
"math_id": 23,
"text": "V < \\frac{2 \\cdot \\left( x - 1 + \\sqrt{6} \\right)^3 \\cdot \\sqrt{2} \\cdot r^3}{3}"
},
{
"math_id": 24,
"text": "V_\\text{W}"
},
{
"math_id": 25,
"text": "V_\\text{W} = \\frac{x\\cdot(x+1)\\cdot(x+2)-2}{3} \\cdot \\pi r^3"
},
{
"math_id": 26,
"text": "x = 13"
},
{
"math_id": 27,
"text": "n = 455"
},
{
"math_id": 28,
"text": "r^3"
},
{
"math_id": 29,
"text": "d=5"
},
{
"math_id": 30,
"text": "K"
},
{
"math_id": 31,
"text": "\\delta (K, C_n) =\\frac {n V(K)}{V (C_n +K)}"
},
{
"math_id": 32,
"text": "C_n"
},
{
"math_id": 33,
"text": "c_i"
},
{
"math_id": 34,
"text": "K_i"
},
{
"math_id": 35,
"text": "V (C_n + K)"
},
{
"math_id": 36,
"text": "\\delta =1"
},
{
"math_id": 37,
"text": "\\delta \\approx 0.9"
},
{
"math_id": 38,
"text": "\\rho"
},
{
"math_id": 39,
"text": "\\delta (K, C_n) =\\frac {n V(K)}{V (C_n + \\rho K)}"
},
{
"math_id": 40,
"text": "d \\geq 2"
},
{
"math_id": 41,
"text": "\\rho_s (d)"
},
{
"math_id": 42,
"text": "\\rho_c (d)"
},
{
"math_id": 43,
"text": "\\rho \\leq \\rho_s (d)"
},
{
"math_id": 44,
"text": "\\rho \\geq \\rho_c (d)"
},
{
"math_id": 45,
"text": "\\rho_c (2)=\\rho_s (2)= \\frac {\\sqrt {3}}{2}"
},
{
"math_id": 46,
"text": "\\frac {V (B^d)}{2 V (B^{d-1})} {\\rho_c (d)}^{1-d} \\leq \\delta (B^d) \\leq \\frac {V (B^d)}{2 V (B^{d-1})} {\\rho_s (d)}^{1-d} "
},
{
"math_id": 47,
"text": "B^d"
},
{
"math_id": 48,
"text": "V (B^d)"
},
{
"math_id": 49,
"text": "d = 2"
},
{
"math_id": 50,
"text": "\\rho_s (d) =\\rho_c (d)"
},
{
"math_id": 51,
"text": "\\delta (B^d) "
}
]
| https://en.wikipedia.org/wiki?curid=70571129 |
70573417 | Leray–Schauder degree | In mathematics, the Leray–Schauder degree is an extension of the degree of a base point preserving continuous map between spheres formula_0 or equivalently to a boundary sphere preserving continuous maps between balls formula_1 to boundary sphere preserving maps between balls in a Banach space formula_2, assuming that the map is of the form formula_3 where formula_4 is the identity map and formula_5 is some compact map (i.e. mapping bounded sets to sets whose closure is compact).
The degree was invented by Jean Leray and Juliusz Schauder to prove existence results for partial differential equations.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": " (S^n, *) \\to (S^n , *)"
},
{
"math_id": 1,
"text": "(B^n, S^{n-1}) \\to (B^n, S^{n-1})"
},
{
"math_id": 2,
"text": " f: (B(V), S(V)) \\to (B(V), S(V))"
},
{
"math_id": 3,
"text": "f = id - C"
},
{
"math_id": 4,
"text": "id"
},
{
"math_id": 5,
"text": "C"
}
]
| https://en.wikipedia.org/wiki?curid=70573417 |
705749 | Intersection homology | In topology, a branch of mathematics, intersection homology is an analogue of singular homology especially well-suited for the study of singular spaces, discovered by Mark Goresky and Robert MacPherson in the fall of 1974 and developed by them over the next few years.
Intersection cohomology was used to prove the Kazhdan–Lusztig conjectures and the Riemann–Hilbert correspondence. It is closely related to "L"2 cohomology.
Goresky–MacPherson approach.
The homology groups of a compact, oriented, connected, "n"-dimensional manifold "X" have a fundamental property called Poincaré duality: there is a perfect pairing
formula_0
Classically—going back, for instance, to Henri Poincaré—this duality was understood in terms of intersection theory. An element of
formula_1
is represented by a "j"-dimensional cycle. If an "i"-dimensional and an formula_2-dimensional cycle are in general position, then their intersection is a finite collection of points. Using the orientation of "X" one may assign to each of these points a sign; in other words intersection yields a "0"-dimensional cycle. One may prove that the homology class of this cycle depends only on the homology classes of the original "i"- and formula_2-dimensional cycles; one may furthermore prove that this pairing is perfect.
When "X" has "singularities"—that is, when the space has places that do not look like formula_3—these ideas break down. For example, it is no longer possible to make sense of the notion of "general position" for cycles. Goresky and MacPherson introduced a class of "allowable" cycles for which general position does make sense. They introduced an equivalence relation for allowable cycles (where only "allowable boundaries" are equivalent to zero), and called the group
formula_4
of "i"-dimensional allowable cycles modulo this equivalence relation "intersection homology". They furthermore showed that the intersection of an "i"- and an formula_2-dimensional allowable cycle gives an (ordinary) zero-cycle whose homology class is well-defined.
Stratifications.
Intersection homology was originally defined on suitable spaces with a stratification, though the groups often turn out to be independent of the choice of stratification. There are many different definitions of stratified spaces. A convenient one for intersection homology is an "n"-dimensional topological pseudomanifold. This is a (paracompact, Hausdorff) space "X" that has a filtration
formula_5
of "X" by closed subspaces such that:
for each "i" and each point "x" of formula_6, there exists a neighborhood formula_7 of "x" in "X", a compact formula_8-dimensional stratified space "L", and a filtration-preserving homeomorphism formula_9, where formula_10 is the open cone on "L";
formula_11; and
the open subset formula_12 is dense in "X".
If "X" is a topological pseudomanifold, the "i"-dimensional stratum of "X" is the space formula_6.
Examples:
Perversities.
Intersection homology groups formula_13 depend on a choice of perversity formula_14, which measures how far cycles are allowed to deviate from transversality. A perversity formula_14 is a function
formula_15
from integers formula_16 to the integers such that formula_17 and formula_18.
The second condition is used to show invariance of intersection homology groups under change of stratification.
The complementary perversity formula_19 of formula_14 is the one with
formula_20.
Intersection homology groups of complementary dimension and complementary perversity are dually paired. Examples of perversities include the zero perversity formula_21, whose complementary perversity is the top perversity formula_22, and the lower middle perversity formula_23 (the integer part of formula_24), whose complement is the upper middle perversity with values formula_25. If the perversity is not specified, one usually means the middle perversity.
Singular intersection homology.
Fix a topological pseudomanifold "X" of dimension "n" with some stratification, and a perversity "p".
A map σ from the standard "i"-simplex formula_26 to "X" (a singular simplex) is called allowable if
formula_27
is contained in the formula_28 skeleton of formula_26.
The complex formula_29 is a subcomplex of the complex of singular chains on "X" that consists of all singular chains such that both the chain and its boundary are linear combinations of allowable singular simplexes. The singular intersection homology groups (with perversity "p")
formula_30
are the homology groups of this complex.
If "X" has a triangulation compatible with the stratification, then simplicial intersection homology groups can be defined in a similar way, and are naturally isomorphic to the singular intersection homology groups.
The intersection homology groups are independent of the choice of stratification of "X".
If "X" is a topological manifold, then the intersection homology groups (for any perversity) are the same as the usual homology groups.
Small resolutions.
A resolution of singularities
formula_31
of a complex variety "Y" is called a small resolution if for every "r" > 0, the space of points of "Y" where the fiber has dimension "r" is of codimension greater than 2"r". Roughly speaking, this means that most fibers are small. In this case the morphism induces an isomorphism from the (intersection) homology of "X" to the intersection homology of "Y" (with the middle perversity).
There is a variety with two different small resolutions that have different ring structures on their cohomology, showing that there is in general no natural ring structure on intersection (co)homology.
Sheaf theory.
Deligne's formula for intersection cohomology states that
formula_32
where formula_33 is the intersection complex, a certain complex of constructible sheaves on "X" (considered as an element of the derived category, so the cohomology on the right means the hypercohomology of the complex). The complex formula_33 is given by starting with the constant sheaf on the open set formula_34 and repeatedly extending it to larger open sets formula_35 and then truncating it in the derived category; more precisely it is given by Deligne's formula
formula_36
where formula_37 is a truncation functor in the derived category, formula_38 is the inclusion of formula_35 into formula_39, and formula_40 is the constant sheaf on formula_34.
By replacing the constant sheaf on formula_34 with a local system, one can use Deligne's formula to define intersection cohomology with coefficients in a local system.
Examples.
Given a smooth elliptic curve formula_41 defined by a cubic homogeneous polynomial formula_42, such as formula_43, the affine cone formula_44 has an isolated singularity at the origin since formula_45 and all partial derivatives formula_46 vanish. This is because it is homogeneous of degree formula_47, and the derivatives are homogeneous of degree 2. Setting formula_48 and letting formula_49 be the inclusion map, the intersection complex formula_50 is given as formula_51
This can be computed explicitly by looking at the stalks of the cohomology. At formula_52 where formula_53 the derived pushforward is the identity map on a smooth point, hence the only possible cohomology is concentrated in degree formula_54. For formula_55 the cohomology is more interesting since
formula_56
for formula_57 where the closure of formula_58 contains the origin formula_59. Since any such formula_57 can be refined by considering the intersection of an open disk in formula_60 with formula_61, we can just compute the cohomology formula_62. This can be done by observing that formula_61 is a formula_63 bundle over the elliptic curve formula_64, the hyperplane bundle, and the Wang sequence gives the cohomology groups formula_65 hence the cohomology sheaves at the stalk formula_59 are formula_66
Truncating this gives the nontrivial cohomology sheaves formula_67, hence the intersection complex formula_50 has cohomology sheaves
formula_68
Properties of the complex IC("X").
The complex IC"p"("X") has the following properties:
On the complement of some closed set of codimension 2, formula_69 is 0 for "i" + "m" ≠ 0, and for "i" = −"m" the groups form the constant local system C.
As usual, "q" is the complementary perversity to "p". Moreover, the complex is uniquely characterized by these conditions, up to isomorphism in the derived category. The conditions do not depend on the choice of stratification, so this shows that intersection cohomology does not depend on the choice of stratification either.
Verdier duality takes IC"p" to IC"q" shifted by "n" = dim("X") in the derived category.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": " H_i(X,\\Q) \\times H_{n-i}(X,\\Q) \\to H_0(X,\\Q) \\cong \\Q."
},
{
"math_id": 1,
"text": "H_j(X)"
},
{
"math_id": 2,
"text": "(n-i)"
},
{
"math_id": 3,
"text": "\\R^n"
},
{
"math_id": 4,
"text": "IH_i(X)"
},
{
"math_id": 5,
"text": " \\emptyset = X_{-1} \\subset X_0 \\subset X_1 \\subset \\cdots \\subset X_n = X "
},
{
"math_id": 6,
"text": "X_i \\setminus X_{i-1}"
},
{
"math_id": 7,
"text": " U \\subset X "
},
{
"math_id": 8,
"text": "(n-i-1)"
},
{
"math_id": 9,
"text": " U \\cong \\R^i \\times CL"
},
{
"math_id": 10,
"text": "CL"
},
{
"math_id": 11,
"text": "X_{n-1} = X_{n-2}"
},
{
"math_id": 12,
"text": "X\\setminus X_{n-1}"
},
{
"math_id": 13,
"text": "I^\\mathbf{p}H_i(X)"
},
{
"math_id": 14,
"text": "\\mathbf{p}"
},
{
"math_id": 15,
"text": "\\mathbf{p}\\colon\\Z_{\\geq 2} \\to \\Z"
},
{
"math_id": 16,
"text": "\\geq 2"
},
{
"math_id": 17,
"text": "\\mathbf{p}(2) = 0"
},
{
"math_id": 18,
"text": "\\mathbf{p}(k+1) - \\mathbf{p}(k) \\in \\{0,1\\}"
},
{
"math_id": 19,
"text": "\\mathbf{q}"
},
{
"math_id": 20,
"text": "\\mathbf{p}(k)+\\mathbf{q}(k)=k-2"
},
{
"math_id": 21,
"text": "p(k) = 0"
},
{
"math_id": 22,
"text": "q(k)=k-2"
},
{
"math_id": 23,
"text": "m(k)=[(k-2)/2]"
},
{
"math_id": 24,
"text": "(k-2)/2"
},
{
"math_id": 25,
"text": "[(k-1)/2]"
},
{
"math_id": 26,
"text": "\\Delta^i"
},
{
"math_id": 27,
"text": "\\sigma^{-1} \\left (X_{n-k}\\setminus X_{n-k-1} \\right)"
},
{
"math_id": 28,
"text": "i-k+p(k)"
},
{
"math_id": 29,
"text": "I^p(X)"
},
{
"math_id": 30,
"text": "I^pH_i(X)"
},
{
"math_id": 31,
"text": "f:X\\to Y"
},
{
"math_id": 32,
"text": "I^pH_{n-i}(X) = I^pH^i(X) = H^{i}_c(IC_p(X))"
},
{
"math_id": 33,
"text": "IC_p(X)"
},
{
"math_id": 34,
"text": "X\\setminus X_{n-2}"
},
{
"math_id": 35,
"text": "X\\setminus X_{n-k}"
},
{
"math_id": 36,
"text": "IC_p(X) = \\tau_{\\le p(n)-n}\\mathbf{R}i_{n*}\\tau_{\\le p(n-1)-n}\\mathbf{R}i_{n-1*}\\cdots\\tau_{\\le p(2)-n}\\mathbf{R}i_{2*} \\Complex_{X\\setminus X_{n-2}}"
},
{
"math_id": 37,
"text": "\\tau_{\\le p}"
},
{
"math_id": 38,
"text": "i_k"
},
{
"math_id": 39,
"text": "X\\setminus X_{n-k-1}"
},
{
"math_id": 40,
"text": "\\Complex_{X\\setminus X_{n-2}}"
},
{
"math_id": 41,
"text": "X \\subset \\mathbb{CP}^2"
},
{
"math_id": 42,
"text": "f"
},
{
"math_id": 43,
"text": "x^3 + y^3 + z^3"
},
{
"math_id": 44,
"text": "\\mathbb{V}(f) \\subset \\mathbb{C}^3"
},
{
"math_id": 45,
"text": "f(0) = 0"
},
{
"math_id": 46,
"text": "\\partial_if(0) = 0"
},
{
"math_id": 47,
"text": "3"
},
{
"math_id": 48,
"text": "U = \\mathbb{V}(f) -\\{0\\}"
},
{
"math_id": 49,
"text": "i:U \\hookrightarrow X"
},
{
"math_id": 50,
"text": "IC_{\\mathbb{V}(f)}"
},
{
"math_id": 51,
"text": "\\tau_{\\leq 1} \\mathbf{R}i_*\\mathbb{Q}_U"
},
{
"math_id": 52,
"text": "p \\in \\mathbb{V}(f)"
},
{
"math_id": 53,
"text": "p \\neq 0"
},
{
"math_id": 54,
"text": "0"
},
{
"math_id": 55,
"text": "p = 0"
},
{
"math_id": 56,
"text": "\\mathbf{R}^ki_*\\mathbb{Q}_U|_{p=0} = \\mathop{\\underset{V \\subset U}\\text{colim}} H^k(V; \\mathbb{Q})"
},
{
"math_id": 57,
"text": "V"
},
{
"math_id": 58,
"text": "i(V)"
},
{
"math_id": 59,
"text": "p=0"
},
{
"math_id": 60,
"text": "\\mathbb{C}^3"
},
{
"math_id": 61,
"text": "U"
},
{
"math_id": 62,
"text": "H^k(U;\\mathbb{Q})"
},
{
"math_id": 63,
"text": "\\mathbb{C}^*"
},
{
"math_id": 64,
"text": "X"
},
{
"math_id": 65,
"text": "\\begin{align}\nH^0(U;\\mathbb{Q})&\\cong H^0(X;\\mathbb{Q})=\\mathbb{Q} \\\\\nH^1(U;\\mathbb{Q})&\\cong H^1(X;\\mathbb{Q})=\\mathbb{Q}^{\\oplus 2}\\\\\nH^2(U;\\mathbb{Q})&\\cong H^1(X;\\mathbb{Q})=\\mathbb{Q}^{\\oplus 2} \\\\\nH^3(U;\\mathbb{Q})&\\cong H^2(X;\\mathbb{Q})=\\mathbb{Q} \\\\\n\\end{align}"
},
{
"math_id": 66,
"text": "\\begin{matrix}\n\\mathcal{H}^2\\left(\\mathbf{R}i_*\\mathbb{Q}_U|_{p=0}\\right) & = & \\mathbb{Q}_{p=0} \\\\\n\\mathcal{H}^1\\left(\\mathbf{R}i_*\\mathbb{Q}_U|_{p=0}\\right) & = & \\mathbb{Q}_{p=0}^{\\oplus 2} \\\\\n\\mathcal{H}^0\\left(\\mathbf{R}i_*\\mathbb{Q}_U|_{p=0}\\right) & = & \\mathbb{Q}_{p=0}\n\\end{matrix}"
},
{
"math_id": 67,
"text": "\\mathcal{H}^0,\\mathcal{H}^1"
},
{
"math_id": 68,
"text": "\\begin{matrix}\n\\mathcal{H}^0(IC_{\\mathbb{V}(f)}) & = & \\mathbb{Q}_{\\mathbb{V}(f)} \\\\\n\\mathcal{H}^1(IC_{\\mathbb{V}(f)}) & = & \\mathbb{Q}_{p=0}^{\\oplus 2} \\\\\n\\mathcal{H}^i(IC_{\\mathbb{V}(f)}) & = & 0 & \\text{for }i\\ne 0,1\n\\end{matrix}"
},
{
"math_id": 69,
"text": "H^i(j_x^* IC_p) "
},
{
"math_id": 70,
"text": "H^{-i}(j_x^* IC_p) "
},
{
"math_id": 71,
"text": "H^{-i}(j_x^! IC_p) "
}
]
| https://en.wikipedia.org/wiki?curid=705749 |
7058047 | History of Lorentz transformations | The history of Lorentz transformations comprises the development of linear transformations forming the Lorentz group or Poincaré group preserving the Lorentz interval formula_0 and the Minkowski inner product formula_1.
In mathematics, transformations equivalent to what was later known as Lorentz transformations in various dimensions were discussed in the 19th century in relation to the theory of quadratic forms, hyperbolic geometry, Möbius geometry, and sphere geometry, which is connected to the fact that the group of motions in hyperbolic space, the Möbius group or projective special linear group, and the Laguerre group are isomorphic to the Lorentz group.
In physics, Lorentz transformations became known at the beginning of the 20th century, when it was discovered that they exhibit the symmetry of Maxwell's equations. Subsequently, they became fundamental to all of physics, because they formed the basis of special relativity in which they exhibit the symmetry of Minkowski spacetime, making the speed of light invariant between different inertial frames. They relate the spacetime coordinates of two arbitrary inertial frames of reference with constant relative speed "v". In one frame, the position of an event is given by "x,y,z" and time "t", while in the other frame the same event has coordinates "x′,y′,z′" and "t′".
Mathematical prehistory.
Using the coefficients of a symmetric matrix A, the associated bilinear form, and a linear transformation in terms of a transformation matrix g, the Lorentz transformation is given if the following conditions are satisfied:
formula_2
These transformations form an indefinite orthogonal group called the Lorentz group O(1,n), while the case det g=+1 forms the restricted Lorentz group SO(1,n). The quadratic form becomes the Lorentz interval in terms of an indefinite quadratic form of Minkowski space (being a special case of pseudo-Euclidean space), and the associated bilinear form becomes the Minkowski inner product. Long before the advent of special relativity, such transformations were used in topics such as the Cayley–Klein metric, the hyperboloid model and other models of hyperbolic geometry, computations of elliptic functions and integrals, the transformation of indefinite quadratic forms, squeeze mappings of the hyperbola, group theory, Möbius transformations, spherical wave transformations, the transformation of the Sine-Gordon equation, biquaternion algebra, split-complex numbers, Clifford algebra, and others.
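As an illustrative numerical sketch (not part of the original article; it assumes units with c = 1, a sample velocity, and NumPy), the defining condition can be checked for a standard boost matrix in 1+2 dimensions:

```python
import numpy as np

def boost_1plus2(v):
    """Boost along the first spatial axis with speed v (c = 1), padded to 1+2 dimensions."""
    g = 1.0 / np.sqrt(1.0 - v * v)          # Lorentz factor gamma
    return np.array([[ g,     -g * v, 0.0],
                     [-g * v,  g,     0.0],
                     [ 0.0,    0.0,   1.0]])

A = np.diag([-1.0, 1.0, 1.0])                # matrix of the indefinite form
g = boost_1plus2(0.6)

# defining condition of the Lorentz group O(1,2)
assert np.allclose(g.T @ A @ g, A)
# the case with determinant +1 (and positive time-time component)
assert np.isclose(np.linalg.det(g), 1.0) and g[0, 0] > 0

# the Lorentz interval of an arbitrary event is unchanged
x = np.array([2.0, 1.0, -0.5])               # (x0, x1, x2)
assert np.isclose(x @ A @ x, (g @ x) @ A @ (g @ x))
print("g^T A g = A holds; interval preserved")
```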
Learning materials from Wikiversity trace these mathematical developments in detail:
The transformation of general indefinite quadratic forms includes contributions of Carl Friedrich Gauss (1818), Carl Gustav Jacob Jacobi (1827, 1833/34), Michel Chasles (1829), Victor-Amédée Lebesgue (1837), Thomas Weddle (1847), Edmond Bour (1856), Osip Ivanovich Somov (1863), Wilhelm Killing (1878–1893), Henri Poincaré (1881), Homersham Cox (1881–1883), George William Hill (1882), Émile Picard (1882-1884), Octave Callandreau (1885), Sophus Lie (1885-1890), Louis Gérard (1892), Felix Hausdorff (1899), Frederick S. Woods (1901-05), Heinrich Liebmann (1904/05).
The representation as an imaginary orthogonal transformation (a rotation by an imaginary angle) includes contributions of Sophus Lie (1871), Hermann Minkowski (1907–1908), Arnold Sommerfeld (1909).
The use of hyperbolic functions includes contributions of Vincenzo Riccati (1757), Johann Heinrich Lambert (1768–1770), Franz Taurinus (1826), Eugenio Beltrami (1868), Charles-Ange Laisant (1874), Gustav von Escherich (1874), James Whitbread Lee Glaisher (1878), Siegmund Günther (1880/81), Homersham Cox (1881/82), Rudolf Lipschitz (1885/86), Friedrich Schur (1885-1902), Ferdinand von Lindemann (1890–91), Louis Gérard (1892), Wilhelm Killing (1893-97), Alfred North Whitehead (1897/98), Edwin Bailey Elliott (1903), Frederick S. Woods (1903), Heinrich Liebmann (1904/05), Philipp Frank (1909), Gustav Herglotz (1909/10), Vladimir Varićak (1910).
The use of sphere transformations, including the Laguerre inversion and spherical wave transformations, includes contributions of Pierre Ossian Bonnet (1856), Albert Ribaucour (1870), Sophus Lie (1871a), Gaston Darboux (1873-87), Edmond Laguerre (1880), Cyparissos Stephanos (1883), Georg Scheffers (1899), Percey F. Smith (1900), Harry Bateman and Ebenezer Cunningham (1909–1910).
The Cayley–Hermite transformation was used by Arthur Cayley (1846–1855), Charles Hermite (1853, 1854), Paul Gustav Heinrich Bachmann (1869), Edmond Laguerre (1882), Gaston Darboux (1887), Percey F. Smith (1900), Émile Borel (1913).
The use of Cayley–Klein parameters and Möbius transformations includes contributions of Carl Friedrich Gauss (1801/63), Felix Klein (1871–97), Eduard Selling (1873–74), Henri Poincaré (1881), Luigi Bianchi (1888-93), Robert Fricke (1891–97), Frederick S. Woods (1895), Gustav Herglotz (1909/10).
The use of quaternions, biquaternions and related number systems includes contributions of James Cockle (1848), Homersham Cox (1882/83), Cyparissos Stephanos (1883), Arthur Buchheim (1884), Rudolf Lipschitz (1885/86), Theodor Vahlen (1901/02), Fritz Noether (1910), Felix Klein (1910), Arthur W. Conway (1911), Ludwik Silberstein (1911).
The use of trigonometric functions includes contributions of Luigi Bianchi (1886), Gaston Darboux (1891/94), Georg Scheffers (1899), Luther Pfahler Eisenhart (1905), Vladimir Varićak (1910), Henry Crozier Keating Plummer (1910), Paul Gruner (1921).
The use of squeeze mappings includes contributions of Antoine André Louis Reynaud (1819), Felix Klein (1871), Charles-Ange Laisant (1874), Sophus Lie (1879-84), Siegmund Günther (1880/81), Edmond Laguerre (1882), Gaston Darboux (1883–1891), Rudolf Lipschitz (1885/86), Luigi Bianchi (1886–1894), Ferdinand von Lindemann (1890/91), Mellen W. Haskell (1895), Percey F. Smith (1900), Edwin Bailey Elliott (1903), Luther Pfahler Eisenhart (1905).
Electrodynamics and special relativity.
Overview.
In special relativity, Lorentz transformations exhibit the symmetry of Minkowski spacetime by using a constant "c" as the speed of light, and a parameter "v" as the relative velocity between two inertial reference frames. Using the above conditions, the Lorentz transformation in 3+1 dimensions assumes the form:
formula_3
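A minimal numerical sketch (an illustration added here, not part of the original article; sample values are arbitrary) checks that the boost above leaves the Lorentz interval unchanged and that the light-cone combinations ct ± x are simply rescaled by reciprocal factors, i.e. the boost acts as a squeeze mapping on null coordinates:

```python
import math

c, v = 3.0e8, 1.8e8                       # sample values with v < c
gamma = 1.0 / math.sqrt(1.0 - (v / c) ** 2)

t, x, y, z = 2.0e-7, 40.0, 7.0, -3.0      # an arbitrary event
tp = gamma * (t - v * x / c**2)
xp = gamma * (x - v * t)
yp, zp = y, z

# the Lorentz interval is the same in both frames
s2  = -(c * t) ** 2 + x**2 + y**2 + z**2
s2p = -(c * tp) ** 2 + xp**2 + yp**2 + zp**2
assert math.isclose(s2, s2p)

# null coordinates are rescaled by reciprocal factors (a squeeze mapping)
k = math.sqrt((c - v) / (c + v))
assert math.isclose(c * tp + xp, (c * t + x) * k)
assert math.isclose(c * tp - xp, (c * t - x) / k)
print("interval:", s2, "squeeze factor:", k)
```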
In physics, analogous transformations were introduced by Voigt (1887) related to an incompressible medium, and by Heaviside (1888), Thomson (1889), Searle (1896) and Lorentz (1892, 1895), who analyzed Maxwell's equations. They were completed by Larmor (1897, 1900) and Lorentz (1899, 1904), and brought into their modern form by Poincaré (1905), who gave the transformation the name of Lorentz. Eventually, Einstein (1905) showed in his development of special relativity that the transformations follow from the principle of relativity and constant light speed alone by modifying the traditional concepts of space and time, without requiring a mechanical aether in contradistinction to Lorentz and Poincaré. Minkowski (1907–1908) used them to argue that space and time are inseparably connected as spacetime.
Regarding special representations of the Lorentz transformations: Minkowski (1907–1908) and Sommerfeld (1909) used imaginary trigonometric functions, Frank (1909) and Varićak (1910) used hyperbolic functions, Bateman and Cunningham (1909–1910) used spherical wave transformations, Herglotz (1909–10) used Möbius transformations, Plummer (1910) and Gruner (1921) used trigonometric Lorentz boosts, Ignatowski (1910) derived the transformations without the light speed postulate, Noether (1910) and Klein (1910) as well as Conway (1911) and Silberstein (1911) used biquaternions, Ignatowski (1910/11), Herglotz (1911), and others used vector transformations valid in arbitrary directions, and Borel (1913–14) used Cayley–Hermite parameters.
Voigt (1887).
Woldemar Voigt (1887) developed a transformation in connection with the Doppler effect and an incompressible medium, being in modern notation:
formula_4
If the right-hand sides of his equations are multiplied by γ, they become the modern Lorentz transformation. In Voigt's theory the speed of light is invariant, but his transformations mix up a relativistic boost together with a rescaling of space-time. Optical phenomena in free space are scale, conformal, and Lorentz invariant, so the combination is invariant too. For instance, Lorentz transformations can be extended by using a factor formula_5:
formula_6.
"l"=1/γ gives the Voigt transformation, "l"=1 the Lorentz transformation. But scale transformations are not a symmetry of all the laws of nature, only of electromagnetism, so these transformations cannot be used to formulate a principle of relativity in general. It was demonstrated by Poincaré and Einstein that one has to set "l"=1 in order to make the above transformation symmetric and to form a group as required by the relativity principle, therefore the Lorentz transformation is the only viable choice.
Voigt sent his 1887 paper to Lorentz in 1908, and that was acknowledged in 1909: <templatestyles src="Template:Blockquote/styles.css" />In a paper "Über das Doppler'sche Princip", published in 1887 (Gött. Nachrichten, p. 41) and which to my regret has escaped my notice all these years, Voigt has applied to equations of the form (7) (§ 3 of this book) [namely formula_7] a transformation equivalent to the formulae (287) and (288) [namely formula_8]. The idea of the transformations used above (and in § 44) might therefore have been borrowed from Voigt and the proof that it does not alter the form of the equations for the "free" ether is contained in his paper.
Also Hermann Minkowski said in 1908 that the transformations which play the main role in the principle of relativity were first examined by Voigt in 1887. Voigt responded in the same paper by saying that his theory was based on an elastic theory of light, not an electromagnetic one. However, he concluded that some results were actually the same.
Heaviside (1888), Thomson (1889), Searle (1896).
In 1888, Oliver Heaviside investigated the properties of charges in motion according to Maxwell's electrodynamics. He calculated, among other things, anisotropies in the electric field of moving bodies represented by this formula:
formula_9.
Consequently, Joseph John Thomson (1889) found a way to substantially simplify calculations concerning moving charges by using the following mathematical transformation (like other authors such as Lorentz or Larmor, Thomson also implicitly used the Galilean transformation "z-vt" in his equation):
formula_10
Thereby, inhomogeneous electromagnetic wave equations are transformed into a Poisson equation. Eventually, George Frederick Charles Searle noted in 1896 that Heaviside's expression leads to a deformation of electric fields which he called a "Heaviside ellipsoid", with axial ratio
formula_11
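A quick numerical look at Heaviside's anisotropy factor (an illustration only, with an arbitrary sample speed): the angular factor in his field expression equals 1 along the direction of motion (θ = 0) and the cube of γ transverse to it (θ = π/2), while the axial ratio of Searle's ellipsoid is 1/γ : 1 : 1:

```python
import math

v_over_c = 0.8
gamma = 1.0 / math.sqrt(1.0 - v_over_c**2)

def heaviside_factor(theta):
    """Angular factor (1 - (v/c)^2 sin^2(theta))^(-3/2) from Heaviside's field expression."""
    return (1.0 - (v_over_c * math.sin(theta)) ** 2) ** -1.5

print(heaviside_factor(0.0))                     # 1.0: along the motion
print(heaviside_factor(math.pi / 2), gamma**3)   # both ~4.63: transverse direction
print("axial ratio:", 1.0 / gamma, ": 1 : 1")    # Searle's 1/gamma : 1 : 1
```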
Lorentz (1892, 1895).
In order to explain the aberration of light and the result of the Fizeau experiment in accordance with Maxwell's equations, Lorentz in 1892 developed a model ("Lorentz ether theory") in which the aether is completely motionless, and the speed of light in the aether is constant in all directions. In order to calculate the optics of moving bodies, Lorentz introduced the following quantities to transform from the aether system into a moving system (it is unknown whether he was influenced by Voigt, Heaviside, and Thomson):
formula_12
where "x*" is the Galilean transformation "x-vt". Except the additional γ in the time transformation, this is the complete Lorentz transformation. While "t" is the "true" time for observers resting in the aether, "t′" is an auxiliary variable only for calculating processes for moving systems. It is also important that Lorentz and later also Larmor formulated this transformation in two steps. At first an implicit Galilean transformation, and later the expansion into the "fictitious" electromagnetic system with the aid of the Lorentz transformation. In order to explain the negative result of the Michelson–Morley experiment, he (1892b) introduced the additional hypothesis that also intermolecular forces are affected in a similar way and introduced length contraction in his theory (without proof as he admitted). The same hypothesis had been made previously by George FitzGerald in 1889 based on Heaviside's work. While length contraction was a real physical effect for Lorentz, he considered the time transformation only as a heuristic working hypothesis and a mathematical stipulation.
In 1895, Lorentz further elaborated on his theory and introduced the "theorem of corresponding states". This theorem states that a moving observer (relative to the ether) in his "fictitious" field makes the same observations as a resting observer in his "real" field for velocities to first order in "v/c". Lorentz showed that the dimensions of electrostatic systems in the ether and a moving frame are connected by this transformation:
formula_13
For solving optical problems Lorentz used the following transformation, in which the modified time variable was called "local time" by him:
formula_14
With this concept Lorentz could explain the Doppler effect, the aberration of light, and the Fizeau experiment.
Larmor (1897, 1900).
In 1897, Larmor extended the work of Lorentz and derived the following transformation
formula_15
Larmor noted that if it is assumed that the constitution of molecules is electrical then the FitzGerald–Lorentz contraction is a consequence of this transformation, explaining the Michelson–Morley experiment. It's notable that Larmor was the first who recognized that some sort of time dilation is a consequence of this transformation as well, because "individual electrons describe corresponding parts of their orbits in times shorter for the [rest] system in the ratio 1/γ". Larmor wrote his electrodynamical equations and transformations neglecting terms of higher order than "(v/c)"2 – when his 1897 paper was reprinted in 1929, Larmor added the following comment in which he described how they can be made valid to all orders of "v/c":
<templatestyles src="Template:Blockquote/styles.css" />Nothing need be neglected: the transformation is "exact" if "v/c"2 is replaced by "εv/c"2 in the equations and also in the change following from "t" to "t′", as is worked out in "Aether and Matter" (1900), p. 168, and as Lorentz found it to be in 1904, thereby stimulating the modern schemes of intrinsic relational relativity.
In line with that comment, in his book Aether and Matter published in 1900, Larmor used a modified local time "t″=t′-εvx′/c2" instead of the 1897 expression "t′=t-vx/c2" by replacing "v/c"2 with "εv/c"2, so that "t″" is now identical to the one given by Lorentz in 1892, which he combined with a Galilean transformation for the "x′, y′, z′, t′" coordinates:
formula_16
Larmor knew that the Michelson–Morley experiment was accurate enough to detect an effect of motion depending on the factor "(v/c)"2, and so he sought the transformations which were "accurate to second order" (as he put it). Thus he wrote the final transformations (where "x′=x-vt" and "t″" as given above) as:
formula_17
by which he arrived at the complete Lorentz transformation. Larmor showed that Maxwell's equations were invariant under this two-step transformation, "to second order in "v/c"" – it was later shown by Lorentz (1904) and Poincaré (1905) that they are indeed invariant under this transformation to all orders in "v/c".
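The two-step procedure can be checked with a small numerical sketch (an illustration added here, assuming units with c = 1 and an arbitrary sample event): a Galilean shift followed by Larmor's second step composes to the single complete Lorentz transformation:

```python
import math

c, v = 1.0, 0.5
gamma = 1.0 / math.sqrt(1.0 - (v / c) ** 2)
t, x = 3.0, 2.0

# step 1: the implicit Galilean transformation
x_g, t_g = x - v * t, t
# step 2: Larmor's rescaling of space and modified local time
x1 = gamma * x_g
t1 = t_g / gamma - gamma * v * x_g / c**2

# together they equal the complete Lorentz transformation
assert math.isclose(x1, gamma * (x - v * t))
assert math.isclose(t1, gamma * (t - v * x / c**2))
print(x1, t1)
```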
Larmor gave credit to Lorentz in two papers published in 1904, in which he used the term "Lorentz transformation" for Lorentz's first order transformations of coordinates and field configurations:
<templatestyles src="Template:Blockquote/styles.css" />p. 583: [..] Lorentz's transformation for passing from the field of activity of a stationary electrodynamic material system to that of one moving with uniform velocity of translation through the aether. p. 585: [..] the Lorentz transformation has shown us what is not so immediately obvious [..] p. 622: [..] the transformation first developed by Lorentz: namely, each point in space is to have its own origin from which time is measured, its "local time" in Lorentz's phraseology, and then the values of the electric and magnetic vectors [..] at all points in the aether between the molecules in the system at rest, are the same as those of the vectors [..] at the corresponding points in the convected system at the same local times.
Lorentz (1899, 1904).
Lorentz also extended his theorem of corresponding states in 1899. First he wrote a transformation equivalent to the one from 1892 (again, "x"* must be replaced by "x-vt"):
formula_18
Then he introduced a factor ε, which he said he had no means of determining, and modified his transformation as follows (where the above value of "t′" has to be inserted):
formula_19
This is equivalent to the complete Lorentz transformation when solved for "x″" and "t″" and with ε=1. Like Larmor, Lorentz also noticed in 1899 some sort of time dilation effect in relation to the frequency of oscillating electrons, namely "that in "S" the time of vibrations be "kε" times as great as in "S0"", where "S0" is the aether frame.
In 1904 he rewrote the equations in the following form by setting "l"=1/ε (again, "x"* must be replaced by "x-vt"):
formula_20
Under the assumption that "l=1" when "v"=0, he demonstrated that "l=1" must be the case at all velocities, therefore length contraction can only arise in the line of motion. So by setting the factor "l" to unity, Lorentz's transformations now assumed the same form as Larmor's and are now completed. Unlike Larmor, who restricted himself to show the covariance of Maxwell's equations to second order, Lorentz tried to widen its covariance to all orders in "v/c". He also derived the correct formulas for the velocity dependence of electromagnetic mass, and concluded that the transformation formulas must apply to all forces of nature, not only electrical ones. However, he didn't achieve full covariance of the transformation equations for charge density and velocity. When the 1904 paper was reprinted in 1913, Lorentz therefore added the following remark:
<templatestyles src="Template:Blockquote/styles.css" />One will notice that in this work the transformation equations of Einstein’s Relativity Theory have not quite been attained. [..] On this circumstance depends the clumsiness of many of the further considerations in this work.
Lorentz's 1904 transformation was cited and used by Alfred Bucherer in July 1904:
formula_21
or by Wilhelm Wien in July 1904:
formula_22
or by Emil Cohn in November 1904 (setting the speed of light to unity):
formula_23
or by Richard Gans in February 1905:
formula_24
Poincaré (1900, 1905).
Local time.
Neither Lorentz nor Larmor gave a clear physical interpretation of the origin of local time. However, Henri Poincaré in 1900 commented on the origin of Lorentz's "wonderful invention" of local time. He remarked that it arose when clocks in a moving reference frame are synchronised by exchanging signals which are assumed to travel with the same speed formula_25 in both directions, which leads to what is nowadays called relativity of simultaneity, although Poincaré's calculation does not involve length contraction or time dilation. In order to synchronise the clocks here on Earth (the "x*, t*" frame) a light signal from one clock (at the origin) is sent to another (at "x"*), and is sent back. It is supposed that the Earth is moving with speed "v" in the "x"-direction (= "x"*-direction) in some rest system ("x, t") ("i.e." the luminiferous aether system for Lorentz and Larmor). The time of flight outwards is
formula_26
and the time of flight back is
formula_27.
The elapsed time on the clock when the signal is returned is "δta+δtb" and the time "t*=(δta+δtb)/2" is ascribed to the moment when the light signal reached the distant clock. In the rest frame the time "t=δta" is ascribed to that same instant. Some algebra gives the relation between the different time coordinates ascribed to the moment of reflection. Thus
formula_28
identical to Lorentz (1892). By dropping the factor γ2 under the assumption that formula_29, Poincaré gave the result "t*=t-vx*/c2", which is the form used by Lorentz in 1895.
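Poincaré's bookkeeping can be reproduced with a short numerical sketch (illustrative values only; units with c = 1): the time the moving observers assign to the reflection event equals Lorentz's 1892 local time, and dropping the factor γ squared gives the 1895 form:

```python
import math

c, v = 1.0, 0.3
gamma_sq = 1.0 / (1.0 - (v / c) ** 2)   # gamma squared
x_star = 5.0                            # co-moving (Galilean) position of the far clock

delta_ta = x_star / (c - v)             # outward flight time in the rest frame
delta_tb = x_star / (c + v)             # return flight time
t_reflection = delta_ta                 # rest-frame time of the reflection (signal sent at t = 0)
t_star = (delta_ta + delta_tb) / 2      # time the moving observers ascribe to the reflection

assert math.isclose(t_star, t_reflection - gamma_sq * v * x_star / c**2)
# neglecting the factor gamma^2 (valid when v^2/c^2 << 1) gives the 1895 local time
print(t_star, t_reflection - v * x_star / c**2)
```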
Similar physical interpretations of local time were later given by Emil Cohn (1904) and Max Abraham (1905).
Lorentz transformation.
On June 5, 1905 (published June 9) Poincaré formulated transformation equations which are algebraically equivalent to those of Larmor and Lorentz and gave them the modern form:
formula_30.
Apparently Poincaré was unaware of Larmor's contributions, because he only mentioned Lorentz and therefore used for the first time the name "Lorentz transformation". Poincaré set the speed of light to unity, pointed out the group characteristics of the transformation by setting "l"=1, and modified/corrected Lorentz's derivation of the equations of electrodynamics in some details in order to fully satisfy the principle of relativity, "i.e." making them fully Lorentz covariant.
In July 1905 (published in January 1906) Poincaré showed in detail how the transformations and electrodynamic equations are a consequence of the principle of least action; he demonstrated in more detail the group characteristics of the transformation, which he called the Lorentz group, and he showed that the combination "x2+y2+z2-t2" is invariant. He noticed that the Lorentz transformation is merely a rotation in four-dimensional space about the origin by introducing formula_31 as a fourth imaginary coordinate, and he used an early form of four-vectors. He also formulated the velocity addition formula, which he had already derived in unpublished letters to Lorentz from May 1905:
formula_32.
Einstein (1905) – Special relativity.
On June 30, 1905 (published September 1905) Einstein published what is now called special relativity and gave a new derivation of the transformation, which was based only on the principle of relativity and the principle of the constancy of the speed of light. While Lorentz considered "local time" to be a mathematical stipulation device for explaining the Michelson-Morley experiment, Einstein showed that the coordinates given by the Lorentz transformation were in fact the inertial coordinates of relatively moving frames of reference. For quantities of first order in "v/c" this was also done by Poincaré in 1900, while Einstein derived the complete transformation by this method. Unlike Lorentz and Poincaré who still distinguished between real time in the aether and apparent time for moving observers, Einstein showed that the transformations applied to the kinematics of moving frames.
The notation for this transformation is equivalent to Poincaré's of 1905, except that Einstein didn't set the speed of light to unity:
formula_33
Einstein also defined the velocity addition formula:
formula_34
and the light aberration formula:
formula_35
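A hypothetical numerical check (not part of the article; "V" denotes the light speed and the sample velocities are arbitrary) confirms that applying the boost to a uniformly moving point reproduces the velocity-addition formulas quoted above:

```python
import math

V, v = 1.0, 0.4                     # light speed and frame velocity
beta = 1.0 / math.sqrt(1.0 - (v / V) ** 2)   # Einstein's beta, i.e. the Lorentz factor
ux, uy = 0.5, 0.3                   # velocity components of a particle in the original frame

def boost(t, x, y):
    return beta * (t - v * x / V**2), beta * (x - v * t), y

# transform two events on the particle's worldline and take coordinate differences
(t1, x1, y1), (t2, x2, y2) = boost(0.0, 0.0, 0.0), boost(2.0, 2.0 * ux, 2.0 * uy)
u_xi, u_eta = (x2 - x1) / (t2 - t1), (y2 - y1) / (t2 - t1)

assert math.isclose(u_xi, (ux - v) / (1 - ux * v / V**2))
assert math.isclose(u_eta, uy / (beta * (1 - ux * v / V**2)))
print(u_xi, u_eta)
```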
Minkowski (1907–1908) – Spacetime.
The work on the principle of relativity by Lorentz, Einstein, Planck, together with Poincaré's four-dimensional approach, was further elaborated and combined with the hyperboloid model by Hermann Minkowski in 1907 and 1908. Minkowski particularly reformulated electrodynamics in a four-dimensional way (Minkowski spacetime). For instance, he wrote "x, y, z, it" in the form "x1, x2, x3, x4". By defining ψ as the angle of rotation around the "z"-axis, the Lorentz transformation assumes the form (with "c"=1):
formula_36
Even though Minkowski used the imaginary number iψ, he for once directly used the hyperbolic tangent in the equation for velocity
formula_37 with formula_38.
Minkowski's expression can also be written as ψ=atanh(q) and was later called rapidity. He also wrote the Lorentz transformation in matrix form:
formula_39
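A small sketch (illustration only, c = 1) of the real-variable counterpart of Minkowski's imaginary rotation angle: with rapidity ψ = atanh(q) the boost acts on (x, t) as a hyperbolic rotation, and composing two boosts simply adds their rapidities:

```python
import math

def boost_matrix(q):
    """Boost with velocity q (c = 1), acting on the column vector (x, t)."""
    psi = math.atanh(q)
    return [[math.cosh(psi), -math.sinh(psi)],
            [-math.sinh(psi), math.cosh(psi)]], psi

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)] for i in range(2)]

(B1, psi1), (B2, psi2) = boost_matrix(0.5), boost_matrix(0.3)
B12, psi12 = boost_matrix(math.tanh(psi1 + psi2))   # boost with the added rapidities

C = matmul(B1, B2)
assert all(math.isclose(C[i][j], B12[i][j]) for i in range(2) for j in range(2))
assert math.isclose(psi12, psi1 + psi2)
print("gamma = cosh(psi):", math.cosh(psi12))
```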
As a graphical representation of the Lorentz transformation he introduced the Minkowski diagram, which became a standard tool in textbooks and research articles on relativity.
Sommerfeld (1909) – Spherical trigonometry.
Using an imaginary rapidity as Minkowski did, Arnold Sommerfeld (1909) formulated the Lorentz boost and the relativistic velocity addition in terms of trigonometric functions and the spherical law of cosines:
formula_40
Frank (1909) – Hyperbolic functions.
Hyperbolic functions were used by Philipp Frank (1909), who derived the Lorentz transformation using "ψ" as rapidity:
formula_41
Bateman and Cunningham (1909–1910) – Spherical wave transformation.
In line with Sophus Lie's (1871) research on the relation between sphere transformations with an imaginary radius coordinate and 4D conformal transformations, it was pointed out by Bateman and Cunningham (1909–1910) that by setting "u=ict" as the imaginary fourth coordinate one can produce spacetime conformal transformations. Not only the quadratic form formula_42, but also Maxwell's equations are covariant with respect to these transformations, irrespective of the choice of λ. These variants of conformal or Lie sphere transformations were called spherical wave transformations by Bateman. However, this covariance is restricted to certain areas such as electrodynamics, whereas the totality of natural laws in inertial frames is covariant under the Lorentz group. In particular, by setting λ=1 the Lorentz group SO(1,3) can be seen as a 10-parameter subgroup of the 15-parameter spacetime conformal group Con(1,3).
Bateman (1910–12) also alluded to the identity between the Laguerre inversion and the Lorentz transformations. In general, the isomorphism between the Laguerre group and the Lorentz group was pointed out by Élie Cartan (1912, 1915–55), Henri Poincaré (1912–21) and others.
Herglotz (1909/10) – Möbius transformation.
Following Felix Klein (1889–1897) and Fricke & Klein (1897) concerning the Cayley absolute, hyperbolic motion and its transformation, Gustav Herglotz (1909–10) classified the one-parameter Lorentz transformations as loxodromic, hyperbolic, parabolic and elliptic. The general case (on the left) and the hyperbolic case equivalent to Lorentz transformations or squeeze mappings are as follows:
formula_43
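An exploratory sketch (not from the article; c = 1 and arbitrary sample values): the hyperbolic case above squeezes the null coordinates t−z and t+z by reciprocal factors, which preserves the light cone and multiplies the ratio Z = (x+iy)/(t−z) by a constant, i.e. it acts as a particularly simple Möbius transformation:

```python
import math

theta = 0.7
t, x, y = 3.0, 1.0, 2.0
z = math.sqrt(t**2 - x**2 - y**2)            # place the event on the light cone

# squeeze the null coordinates by e^theta and e^(-theta)
u, w = (t - z) * math.exp(theta), (t + z) * math.exp(-theta)
t2, z2 = (u + w) / 2, (w - u) / 2
x2, y2 = x, y

assert math.isclose(x2**2 + y2**2 + z2**2 - t2**2, 0.0, abs_tol=1e-12)   # still on the cone
Z_before = complex(x, y) / (t - z)
Z_after = complex(x2, y2) / (t2 - z2)
print(Z_after / Z_before)                    # a constant factor, here exp(-theta)
```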
Varićak (1910) – Hyperbolic functions.
Following Sommerfeld (1909), hyperbolic functions were used by Vladimir Varićak in several papers starting from 1910, who represented the equations of special relativity on the basis of hyperbolic geometry in terms of Weierstrass coordinates. For instance, by setting "l=ct" and "v/c=tanh(u)" with "u" as rapidity he wrote the Lorentz transformation:
formula_44
and showed the relation of rapidity to the Gudermannian function and the angle of parallelism:
formula_45
He also related the velocity addition to the hyperbolic law of cosines:
formula_46
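A worked check of the hyperbolic law of cosines quoted above (illustration only, c = 1): for perpendicular velocities (α = π/2) the composed rapidity satisfies ch u = ch u1 ch u2, which is equivalent to the stated speed formula:

```python
import math

v1, v2 = 0.6, 0.7
u1, u2 = math.atanh(v1), math.atanh(v2)

# compose the velocities directly: the frame moves with v1 along x,
# the particle moves with v2 along y inside that frame
ux, uy = v1, v2 * math.sqrt(1 - v1**2)
v = math.hypot(ux, uy)

assert math.isclose(v, math.sqrt(v1**2 + v2**2 - (v1 * v2) ** 2))
assert math.isclose(math.acosh(math.cosh(u1) * math.cosh(u2)), math.atanh(v))
print(v, math.atanh(v))
```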
Subsequently, other authors such as E. T. Whittaker (1910) or Alfred Robb (1911, who coined the name rapidity) used similar expressions, which are still used in modern textbooks.
Plummer (1910) – Trigonometric Lorentz boosts.
Henry Crozier Keating Plummer (1910) defined the Lorentz boost in terms of trigonometric functions
formula_47
Ignatowski (1910).
While earlier derivations and formulations of the Lorentz transformation relied from the outset on optics, electrodynamics, or the invariance of the speed of light, Vladimir Ignatowski (1910) showed that it is possible to use the principle of relativity (and related group theoretical principles) alone, in order to derive the following transformation between two inertial frames:
formula_48
The variable "n" can be seen as a space-time constant whose value has to be determined by experiment or taken from a known physical law such as electrodynamics. For that purpose, Ignatowski used the above-mentioned Heaviside ellipsoid representing a contraction of electrostatic fields by "x"/γ in the direction of motion. It can be seen that this is only consistent with Ignatowski's transformation when "n=1/c"2, resulting in "p"=γ and the Lorentz transformation. With "n"=0, no length changes arise and the Galilean transformation follows. Ignatowski's method was further developed and improved by Philipp Frank and Hermann Rothe (1911, 1912), with various authors developing similar methods in subsequent years.
Noether (1910), Klein (1910) – Quaternions.
Felix Klein (1908) described Cayley's (1854) 4D quaternion multiplications as "Drehstreckungen" (orthogonal substitutions in terms of rotations leaving invariant a quadratic form up to a factor), and pointed out that the modern principle of relativity as provided by Minkowski is essentially only the consistent application of such Drehstreckungen, even though he didn't provide details.
In an appendix to Klein's and Sommerfeld's "Theory of the top" (1910), Fritz Noether showed how to formulate hyperbolic rotations using biquaternions with formula_49, which he also related to the speed of light by setting ω2=-"c"2. He concluded that this is the principal ingredient for a rational representation of the group of Lorentz transformations:
formula_50
Besides citing quaternion related standard works by Arthur Cayley (1854), Noether referred to the entries in Klein's encyclopedia by Eduard Study (1899) and the French version by Élie Cartan (1908). Cartan's version contains a description of Study's dual numbers, Clifford's biquaternions (including the choice formula_49 for hyperbolic geometry), and Clifford algebra, with references to Stephanos (1883), Buchheim (1884–85), Vahlen (1901–02) and others.
Citing Noether, Klein himself published in August 1910 the following quaternion substitutions forming the group of Lorentz transformations:
formula_51
or in March 1911
formula_52
Conway (1911), Silberstein (1911) – Quaternions.
Arthur W. Conway in February 1911 explicitly formulated quaternionic Lorentz transformations of various electromagnetic quantities in terms of velocity λ:
formula_53
Ludwik Silberstein, in November 1911 as well as in 1914, also formulated the Lorentz transformation in terms of velocity "v":
formula_54
Silberstein cites Cayley (1854, 1855) and Study's encyclopedia entry (in the extended French version of Cartan in 1908), as well as the appendix of Klein's and Sommerfeld's book.
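An exploratory sketch of the biquaternion recipe (illustration only; c = 1, boost along the first quaternion unit, sign conventions are those of the check itself, and the quaternion product is written out by hand): Silberstein's q′ = QqQ with Q a versor cos α + u sin α along the boost direction and 2α = arctg of the imaginary unit times v/c reproduces the ordinary Lorentz boost:

```python
import cmath, math

def qmul(p, q):
    """Hamilton product of quaternions with complex components (w, x, y, z)."""
    w1, x1, y1, z1 = p
    w2, x2, y2, z2 = q
    return (w1*w2 - x1*x2 - y1*y2 - z1*z2,
            w1*x2 + x1*w2 + y1*z2 - z1*y2,
            w1*y2 - x1*z2 + y1*w2 + z1*x2,
            w1*z2 + x1*y2 - y1*x2 + z1*w2)

v = 0.6
gamma = 1.0 / math.sqrt(1.0 - v**2)
alpha = cmath.atan(1j * v) / 2               # complex "rotation angle", 2*alpha = arctan(i*v)
Q = (cmath.cos(alpha), cmath.sin(alpha), 0, 0)   # versor along the x-direction

t, x, y, z = 2.0, 1.0, 0.5, -0.3
q = (1j * t, x, y, z)                        # the event as a biquaternion i*t + x*I + y*J + z*K
w, xp, yp, zp = qmul(qmul(Q, q), Q)          # Silberstein's q' = Q q Q

assert cmath.isclose(w / 1j, gamma * (t - v * x))    # t' = gamma*(t - v*x)
assert cmath.isclose(xp, gamma * (x - v * t))        # x' = gamma*(x - v*t)
assert cmath.isclose(yp, y) and cmath.isclose(zp, z)
print(w / 1j, xp)
```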
Ignatowski (1910/11), Herglotz (1911), and others – Vector transformation.
Vladimir Ignatowski (1910, published 1911) showed how to reformulate the Lorentz transformation in order to allow for arbitrary velocities and coordinates:
formula_55
Gustav Herglotz (1911) also showed how to formulate the transformation in order to allow for arbitrary velocities and coordinates v="(vx, vy, vz)" and r="(x, y, z)":
formula_56
This was simplified using vector notation by Ludwik Silberstein (1911 on the left, 1914 on the right):
formula_57
Equivalent formulas were also given by Wolfgang Pauli (1921), with Erwin Madelung (1922) providing the matrix form
formula_58
These formulas were called "general Lorentz transformation without rotation" by Christian Møller (1952), who in addition gave an even more general Lorentz transformation in which the Cartesian axes have different orientations, using a rotation operator formula_59. In this case, v′="(v′x, v′y, v′z)" is not equal to -v="(-vx, -vy, -vz)", but the relation formula_60 holds instead, with the result
formula_61
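An illustrative implementation of the vector form quoted above (an added sketch, assuming units with c = 1 and using NumPy): the boost for an arbitrary velocity direction, with the Lorentz interval and the reduction to the one-dimensional boost checked numerically:

```python
import numpy as np

def boost(t, r, v):
    """r' = r + [(gamma-1)(v.r)/v^2 - gamma*t] v,  t' = gamma*(t - v.r), with c = 1."""
    r, v = np.asarray(r, dtype=float), np.asarray(v, dtype=float)
    v2 = v @ v
    gamma = 1.0 / np.sqrt(1.0 - v2)
    return gamma * (t - v @ r), r + ((gamma - 1.0) * (v @ r) / v2 - gamma * t) * v

t, r, v = 1.5, [0.3, -1.0, 2.0], [0.2, 0.3, -0.4]
tp, rp = boost(t, r, v)
assert np.isclose(-t**2 + np.dot(r, r), -tp**2 + rp @ rp)   # interval preserved

# for v along x it reduces to the familiar one-dimensional boost
tp, rp = boost(t, r, [0.5, 0.0, 0.0])
g = 1.0 / np.sqrt(1.0 - 0.25)
assert np.isclose(rp[0], g * (r[0] - 0.5 * t)) and np.isclose(tp, g * (t - 0.5 * r[0]))
print(tp, rp)
```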
Borel (1913–14) – Cayley–Hermite parameter.
Émile Borel (1913) started by demonstrating Euclidean motions using Euler–Rodrigues parameters in three dimensions, and Cayley's (1846) parameters in four dimensions. Then he demonstrated the connection to indefinite quadratic forms expressing hyperbolic motions and Lorentz transformations. In three dimensions:
formula_62
In four dimensions:
formula_63
Gruner (1921) – Trigonometric Lorentz boosts.
In order to simplify the graphical representation of Minkowski space, Paul Gruner (1921) (with the aid of Josef Sauter) developed what is now called Loedel diagrams, using the following relations:
formula_64
In another paper Gruner used the alternative relations:
formula_65
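A short check of this trigonometric parametrisation (an added illustration; c = 1 and a sample velocity): with sin φ = v/c one has 1/cos φ = γ and tan φ = γv/c, so the relations above reduce to the usual Lorentz boost:

```python
import math

v = 0.6
phi = math.asin(v)                    # sin(phi) = v/c with c = 1
gamma = 1.0 / math.sqrt(1.0 - v**2)

t, x = 2.0, 1.0
xp = x / math.cos(phi) - t * math.tan(phi)
tp = t / math.cos(phi) - x * math.tan(phi)

assert math.isclose(1.0 / math.cos(phi), gamma)
assert math.isclose(xp, gamma * (x - v * t))
assert math.isclose(tp, gamma * (t - v * x))
print(xp, tp)
```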
References.
Historical mathematical sources.
Learning materials related to the history of Lorentz transformations at Wikiversity
Historical relativity sources.
<templatestyles src="Reflist/styles.css" />
Secondary sources.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "-x_{0}^{2}+\\cdots+x_{n}^{2}"
},
{
"math_id": 1,
"text": "-x_{0}y_{0}+\\cdots+x_{n}y_{n}"
},
{
"math_id": 2,
"text": "\\begin{matrix}\\begin{align}-x_{0}^{2}+\\cdots+x_{n}^{2} & =-x_{0}^{\\prime2}+\\dots+x_{n}^{\\prime2}\\\\\n-x_{0}y_{0}+\\cdots+x_{n}y_{n} & =-x_{0}^{\\prime}y_{0}^{\\prime}+\\cdots+x_{n}^{\\prime}y_{n}^{\\prime}\n\\end{align}\n\\\\\n\\hline \\begin{matrix}\\mathbf{x}'=\\mathbf{g}\\cdot\\mathbf{x}\\\\\n\\mathbf{x}=\\mathbf{g}^{-1}\\cdot\\mathbf{x}'\n\\end{matrix}\\\\\n\\hline \\begin{matrix}\\begin{align}\\mathbf{A}\\cdot\\mathbf{g}^{\\mathrm{T}}\\cdot\\mathbf{A} & =\\mathbf{g}^{-1}\\\\\n\\mathbf{g}^{{\\rm T}}\\cdot\\mathbf{A}\\cdot\\mathbf{g} & =\\mathbf{A}\\\\\n\\mathbf{g}\\cdot\\mathbf{A}\\cdot\\mathbf{g}^{\\mathrm{T}} & =\\mathbf{A}\n\\end{align}\n\\end{matrix}\\\\\n\\hline \\mathbf{A}={\\rm diag}(-1,1,\\dots,1)\\\\\n\\det \\mathbf{g}=\\pm1\n\\end{matrix}"
},
{
"math_id": 3,
"text": "\\begin{matrix}-c^{2}t^{2}+x^{2}+y^{2}+z^{2}=-c^{2}t^{\\prime2}+x^{\\prime2}+y^{\\prime2}+z^{\\prime2}\\\\\n\\hline \\left.\\begin{align}t' & =\\gamma\\left(t-x\\frac{v}{c^{2}}\\right)\\\\\nx' & =\\gamma(x-vt)\\\\\ny' & =y\\\\\nz' & =z\n\\end{align}\n\\right|\\begin{align}t & =\\gamma\\left(t'+x\\frac{v}{c^{2}}\\right)\\\\\nx & =\\gamma(x'+vt')\\\\\ny & =y'\\\\\nz & =z'\n\\end{align}\n\\end{matrix}\\Rightarrow\\begin{align}(ct'+x') & =(ct+x)\\sqrt{\\frac{c+v}{c-v}}\\\\\n(ct'-x') & =(ct-x)\\sqrt{\\frac{c-v}{c+v}}\n\\end{align}\n"
},
{
"math_id": 4,
"text": "\\begin{matrix}\\text{original} & \\text{modern}\\\\\n\\hline \\left.\\begin{align}\\xi_{1} & =x_{1}-\\varkappa t\\\\\n\\eta_{1} & =y_{1}q\\\\\n\\zeta_{1} & =z_{1}q\\\\\n\\tau & =t-\\frac{\\varkappa x_{1}}{\\omega^{2}}\\\\\nq & =\\sqrt{1-\\frac{\\varkappa^{2}}{\\omega^{2}}}\n\\end{align}\n\\right| & \\begin{align}x^{\\prime} & =x-vt\\\\\ny^{\\prime} & =\\frac{y}{\\gamma}\\\\\nz^{\\prime} & =\\frac{z}{\\gamma}\\\\\nt^{\\prime} & =t-\\frac{vx}{c^{2}}\\\\\n\\frac{1}{\\gamma} & =\\sqrt{1-\\frac{v^{2}}{c^{2}}}\n\\end{align}\n\\end{matrix}"
},
{
"math_id": 5,
"text": "l"
},
{
"math_id": 6,
"text": "x^{\\prime}=\\gamma l\\left(x-vt\\right),\\quad y^{\\prime}=ly,\\quad z^{\\prime}=lz,\\quad t^{\\prime}=\\gamma l\\left(t-x\\frac{v}{c^{2}}\\right)"
},
{
"math_id": 7,
"text": "\\Delta\\Psi-\\tfrac{1}{c^{2}}\\tfrac{\\partial^{2}\\Psi}{\\partial t^{2}}=0"
},
{
"math_id": 8,
"text": "x^{\\prime}=\\gamma l\\left(x-vt\\right),\\ y^{\\prime}=ly,\\ z^{\\prime}=lz,\\ t^{\\prime}=\\gamma l\\left(t-\\tfrac{v}{c^{2}}x\\right)"
},
{
"math_id": 9,
"text": "\\mathrm{E}=\\left(\\frac{q\\mathrm{r}}{r^{2}}\\right)\\left(1-\\frac{v^{2}\\sin^{2}\\theta}{c^{2}}\\right)^{-3/2}"
},
{
"math_id": 10,
"text": "\\begin{matrix}\\text{original} & \\text{modern}\\\\\n\\hline \\left.\\begin{align}z & =\\left\\{ 1-\\frac{\\omega^{2}}{v^{2}}\\right\\} ^{\\frac{1}{2}}z'\\end{align}\n\\right| & \\begin{align}z^{\\ast}=z-vt & =\\frac{z'}{\\gamma}\\end{align}\n\\end{matrix}"
},
{
"math_id": 11,
"text": "\\begin{matrix}\\text{original} & \\text{modern}\\\\\n\\hline \\left.\\begin{align} & \\sqrt{\\alpha}:1:1\\\\\n\\alpha= & 1-\\frac{u^{2}}{v^{2}}\n\\end{align}\n\\right| & \\begin{align} & \\frac{1}{\\gamma}:1:1\\\\\n\\frac{1}{\\gamma^{2}} & =1-\\frac{v^{2}}{c^{2}}\n\\end{align}\n\\end{matrix}"
},
{
"math_id": 12,
"text": "\\begin{matrix}\\text{original} & \\text{modern}\\\\\n\\hline \\left.\\begin{align}\\mathfrak{x} & =\\frac{V}{\\sqrt{V^{2}-p^{2}}}x\\\\\nt' & =t-\\frac{\\varepsilon}{V}\\mathfrak{x}\\\\\n\\varepsilon & =\\frac{p}{\\sqrt{V^{2}-p^{2}}}\n\\end{align}\n\\right| & \\begin{align}x^{\\prime} & =\\gamma x^{\\ast}=\\gamma(x-vt)\\\\\nt^{\\prime} & =t-\\frac{\\gamma^{2}vx^{\\ast}}{c^{2}}=\\gamma^{2}\\left(t-\\frac{vx}{c^{2}}\\right)\\\\\n\\gamma\\frac{v}{c} & =\\frac{v}{\\sqrt{c^{2}-v^{2}}}\n\\end{align}\n\\end{matrix}"
},
{
"math_id": 13,
"text": "\\begin{matrix}\\text{original} & \\text{modern}\\\\\n\\hline \\left.\\begin{align}x & =x^{\\prime}\\sqrt{1-\\frac{\\mathfrak{p}^{2}}{V^{2}}}\\\\\ny & =y^{\\prime}\\\\\nz & =z^{\\prime}\\\\\nt & =t^{\\prime}\n\\end{align}\n\\right| & \\begin{align}x^{\\ast}=x-vt & =\\frac{x^{\\prime}}{\\gamma}\\\\\ny & =y^{\\prime}\\\\\nz & =z^{\\prime}\\\\\nt & =t^{\\prime}\n\\end{align}\n\\end{matrix}"
},
{
"math_id": 14,
"text": "\\begin{matrix}\\text{original} & \\text{modern}\\\\\n\\hline \\left.\\begin{align}x & =\\mathrm{x}-\\mathfrak{p}_{x}t\\\\\ny & =\\mathrm{y}-\\mathfrak{p}_{y}t\\\\\nz & =\\mathrm{z}-\\mathfrak{p}_{z}t\\\\\nt^{\\prime} & =t-\\frac{\\mathfrak{p}_{x}}{V^{2}}x-\\frac{\\mathfrak{p}_{y}}{V^{2}}y-\\frac{\\mathfrak{p}_{z}}{V^{2}}z\n\\end{align}\n\\right| & \\begin{align}x^{\\prime} & =x-v_{x}t\\\\\ny^{\\prime} & =y-v_{y}t\\\\\nz^{\\prime} & =z-v_{z}t\\\\\nt^{\\prime} & =t-\\frac{v_{x}}{c^{2}}x'-\\frac{v_{y}}{c^{2}}y'-\\frac{v_{z}}{c^{2}}z'\n\\end{align}\n\\end{matrix}"
},
{
"math_id": 15,
"text": "\\begin{matrix}\\text{original} & \\text{modern}\\\\\n\\hline \\left.\\begin{align}x_{1} & =x\\varepsilon^{\\frac{1}{2}}\\\\\ny_{1} & =y\\\\\nz_{1} & =z\\\\\nt^{\\prime} & =t-vx/c^{2}\\\\\ndt_{1} & =dt^{\\prime}\\varepsilon^{-\\frac{1}{2}}\\\\\n\\varepsilon & =\\left(1-v^{2}/c^{2}\\right)^{-1}\n\\end{align}\n\\right| & \\begin{align}x_{1} & =\\gamma x^{\\ast}=\\gamma(x-vt)\\\\\ny_{1} & =y\\\\\nz_{1} & =z\\\\\nt^{\\prime} & =t-\\frac{vx^{\\ast}}{c^{2}}=t-\\frac{v(x-vt)}{c^{2}}\\\\\ndt_{1} & =\\frac{dt^{\\prime}}{\\gamma}\\\\\n\\gamma^{2} & =\\frac{1}{1-\\frac{v^{2}}{c^{2}}}\n\\end{align}\n\\end{matrix}"
},
{
"math_id": 16,
"text": "\\begin{matrix}\\text{original} & \\text{modern}\\\\\n\\hline \\left.\\begin{align}x^{\\prime} & =x-vt\\\\\ny^{\\prime} & =y\\\\\nz^{\\prime} & =z\\\\\nt^{\\prime} & =t\\\\\nt^{\\prime\\prime} & =t^{\\prime}-\\varepsilon vx^{\\prime}/c^{2}\n\\end{align}\n\\right| & \\begin{align}x^{\\prime} & =x-vt\\\\\ny^{\\prime} & =y\\\\\nz^{\\prime} & =z\\\\\nt^{\\prime} & =t\\\\\nt^{\\prime\\prime}=t^{\\prime}-\\frac{\\gamma^{2}vx^{\\prime}}{c^{2}} & =\\gamma^{2}\\left(t-\\frac{vx}{c^{2}}\\right)\n\\end{align}\n\\end{matrix}"
},
{
"math_id": 17,
"text": "\\begin{matrix}\\text{original} & \\text{modern}\\\\\n\\hline \\left.\\begin{align}x_{1} & =\\varepsilon^{\\frac{1}{2}}x^{\\prime}\\\\\ny_{1} & =y^{\\prime}\\\\\nz_{1} & =z^{\\prime}\\\\\ndt_{1} & =\\varepsilon^{-\\frac{1}{2}}dt^{\\prime\\prime}=\\varepsilon^{-\\frac{1}{2}}\\left(dt^{\\prime}-\\frac{v}{c^{2}}\\varepsilon dx^{\\prime}\\right)\\\\\nt_{1} & =\\varepsilon^{-\\frac{1}{2}}t^{\\prime}-\\frac{v}{c^{2}}\\varepsilon^{\\frac{1}{2}}x^{\\prime}\n\\end{align}\n\\right| & \\begin{align}x_{1} & =\\gamma x^{\\prime}=\\gamma(x-vt)\\\\\ny_{1} & =y'=y\\\\\nz_{1} & =z'=z\\\\\ndt_{1} & =\\frac{dt^{\\prime\\prime}}{\\gamma}=\\frac{1}{\\gamma}\\left(dt^{\\prime}-\\frac{\\gamma^{2}vdx^{\\prime}}{c^{2}}\\right)=\\gamma\\left(dt-\\frac{vdx}{c^{2}}\\right)\\\\\nt_{1} & =\\frac{t^{\\prime}}{\\gamma}-\\frac{\\gamma vx^{\\prime}}{c^{2}}=\\gamma\\left(t-\\frac{vx}{c^{2}}\\right)\n\\end{align}\n\\end{matrix}"
},
{
"math_id": 18,
"text": "\\begin{matrix}\\text{original} & \\text{modern}\\\\\n\\hline \\left.\\begin{align}x^{\\prime} & =\\frac{V}{\\sqrt{V^{2}-\\mathfrak{p}_{x}^{2}}}x\\\\\ny^{\\prime} & =y\\\\\nz^{\\prime} & =z\\\\\nt^{\\prime} & =t-\\frac{\\mathfrak{p}_{x}}{V^{2}-\\mathfrak{p}_{x}^{2}}x\n\\end{align}\n\\right| & \\begin{align}x^{\\prime} & =\\gamma x^{\\ast}=\\gamma(x-vt)\\\\\ny^{\\prime} & =y\\\\\nz^{\\prime} & =z\\\\\nt^{\\prime} & =t-\\frac{\\gamma^{2}vx^{\\ast}}{c^{2}}=\\gamma^{2}\\left(t-\\frac{vx}{c^{2}}\\right)\n\\end{align}\n\\end{matrix}"
},
{
"math_id": 19,
"text": "\\begin{matrix}\\text{original} & \\text{modern}\\\\\n\\hline \\left.\\begin{align}x & =\\frac{\\varepsilon}{k}x^{\\prime\\prime}\\\\\ny & =\\varepsilon y^{\\prime\\prime}\\\\\nz & =\\varepsilon x^{\\prime\\prime}\\\\\nt^{\\prime} & =k\\varepsilon t^{\\prime\\prime}\\\\\nk & =\\frac{V}{\\sqrt{V^{2}-\\mathfrak{p}_{x}^{2}}}\n\\end{align}\n\\right| & \\begin{align}x^{\\ast}=x-vt & =\\frac{\\varepsilon}{\\gamma}x^{\\prime\\prime}\\\\\ny & =\\varepsilon y^{\\prime\\prime}\\\\\nz & =\\varepsilon z^{\\prime\\prime}\\\\\nt^{\\prime}=\\gamma^{2}\\left(t-\\frac{vx}{c^{2}}\\right) & =\\gamma\\varepsilon t^{\\prime\\prime}\\\\\n\\gamma & =\\frac{1}{\\sqrt{1-\\frac{v^{2}}{c^{2}}}}\n\\end{align}\n\\end{matrix}"
},
{
"math_id": 20,
"text": "\\begin{matrix}\\text{original} & \\text{modern}\\\\\n\\hline \\left.\\begin{align}x^{\\prime} & =klx\\\\\ny^{\\prime} & =ly\\\\\nz^{\\prime} & =lz\\\\\nt' & =\\frac{l}{k}t-kl\\frac{w}{c^{2}}x\n\\end{align}\n\\right| & \\begin{align}x^{\\prime} & =\\gamma lx^{\\ast}=\\gamma l(x-vt)\\\\\ny^{\\prime} & =ly\\\\\nz^{\\prime} & =lz\\\\\nt^{\\prime} & =\\frac{lt}{\\gamma}-\\frac{\\gamma lvx^{\\ast}}{c^{2}}=\\gamma l\\left(t-\\frac{vx}{c^{2}}\\right)\n\\end{align}\n\\end{matrix}"
},
{
"math_id": 21,
"text": "x^{\\prime}=\\sqrt{s}x,\\quad y^{\\prime}=y,\\quad z^{\\prime}=z,\\quad t'=\\frac{t}{\\sqrt{s}}-\\sqrt{s}\\frac{u}{v^{2}}x,\\quad s=1-\\frac{u^{2}}{v^{2}}"
},
{
"math_id": 22,
"text": "x=kx',\\quad y=y',\\quad z=z',\\quad t'=kt-\\frac{v}{kc^{2}}x"
},
{
"math_id": 23,
"text": "x=\\frac{x_{0}}{k},\\quad y=y_{0},\\quad z=z_{0},\\quad t=kt_{0},\\quad t_{1}=t_{0}-w\\cdot r_{0},\\quad k^{2}=\\frac{1}{1-w^{2}}"
},
{
"math_id": 24,
"text": "x^{\\prime}=kx,\\quad y^{\\prime}=y,\\quad z^{\\prime}=z,\\quad t'=\\frac{t}{k}-\\frac{kwx}{c^{2}},\\quad k^{2}=\\frac{c^{2}}{c^{2}-w^{2}}"
},
{
"math_id": 25,
"text": "c"
},
{
"math_id": 26,
"text": "\\delta t_{a}=\\frac{x^{\\ast}}{\\left(c-v\\right)}"
},
{
"math_id": 27,
"text": "\\delta t_{b}=\\frac{x^{\\ast}}{\\left(c+v\\right)}"
},
{
"math_id": 28,
"text": "t^{\\ast}=t-\\frac{\\gamma^{2}vx^{*}}{c^{2}}"
},
{
"math_id": 29,
"text": "\\tfrac{v^{2}}{c^{2}}\\ll1"
},
{
"math_id": 30,
"text": "\\begin{align}x^{\\prime} & =kl(x+\\varepsilon t)\\\\\ny^{\\prime} & =ly\\\\\nz^{\\prime} & =lz\\\\\nt' & =kl(t+\\varepsilon x)\\\\\nk & =\\frac{1}{\\sqrt{1-\\varepsilon^{2}}}\n\\end{align}\n"
},
{
"math_id": 31,
"text": "ct\\sqrt{-1}"
},
{
"math_id": 32,
"text": "\\xi'=\\frac{\\xi+\\varepsilon}{1+\\xi\\varepsilon},\\ \\eta'=\\frac{\\eta}{k(1+\\xi\\varepsilon)}"
},
{
"math_id": 33,
"text": "\\begin{align}\\tau & =\\beta\\left(t-\\frac{v}{V^{2}}x\\right)\\\\\n\\xi & =\\beta(x-vt)\\\\\n\\eta & =y\\\\\n\\zeta & =z\\\\\n\\beta & =\\frac{1}{\\sqrt{1-\\left(\\frac{v}{V}\\right)^{2}}}\n\\end{align}\n"
},
{
"math_id": 34,
"text": "\\begin{matrix}x=\\frac{w_{\\xi}+v}{1+\\frac{vw_{\\xi}}{V^{2}}}t,\\ y=\\frac{\\sqrt{1-\\left(\\frac{v}{V}\\right)^{2}}}{1+\\frac{vw_{\\xi}}{V^{2}}}w_{\\eta}t\\\\\nU^{2}=\\left(\\frac{dx}{dt}\\right)^{2}+\\left(\\frac{dy}{dt}\\right)^{2},\\ w^{2}=w_{\\xi}^{2}+w_{\\eta}^{2},\\ \\alpha=\\operatorname{arctg}\\frac{w_{y}}{w_{x}}\\\\\nU=\\frac{\\sqrt{\\left(v^{2}+w^{2}+2vw\\cos\\alpha\\right)-\\left(\\frac{vw\\sin\\alpha}{V}\\right)^{2}}}{1+\\frac{vw\\cos\\alpha}{V^{2}}}\n\\end{matrix}\\left|\\begin{matrix}\\frac{u_{x}-v}{1-\\frac{u_{x}v}{V^{2}}}=u_{\\xi}\\\\\n\\frac{u_{y}}{\\beta\\left(1-\\frac{u_{x}v}{V^{2}}\\right)}=u_{\\eta}\\\\\n\\frac{u_{z}}{\\beta\\left(1-\\frac{u_{x}v}{V^{2}}\\right)}=u_{\\zeta}\n\\end{matrix}\\right."
},
{
"math_id": 35,
"text": "\\cos\\varphi'=\\frac{\\cos\\varphi-\\frac{v}{V}}{1-\\frac{v}{V}\\cos\\varphi}"
},
{
"math_id": 36,
"text": "\\begin{align}x'_{1} & =x_{1}\\\\\nx'_{2} & =x_{2}\\\\\nx'_{3} & =x_{3}\\cos i\\psi+x_{4}\\sin i\\psi\\\\\nx'_{4} & =-x_{3}\\sin i\\psi+x_{4}\\cos i\\psi\\\\\n\\cos i\\psi & =\\frac{1}{\\sqrt{1-q^{2}}}\n\\end{align}\n"
},
{
"math_id": 37,
"text": "-i\\tan i\\psi=\\frac{e^{\\psi}-e^{-\\psi}}{e^{\\psi}+e^{-\\psi}}=q"
},
{
"math_id": 38,
"text": "\\psi=\\frac{1}{2}\\ln\\frac{1+q}{1-q}"
},
{
"math_id": 39,
"text": "\\begin{matrix}x_{1}^{2}+x_{2}^{2}+x_{3}^{2}+x_{4}^{2}=x_{1}^{\\prime2}+x_{2}^{\\prime2}+x_{3}^{\\prime2}+x_{4}^{\\prime2}\\\\\n\\left(x_{1}^{\\prime}=x',\\ x_{2}^{\\prime}=y',\\ x_{3}^{\\prime}=z',\\ x_{4}^{\\prime}=it'\\right)\\\\\n-x^{2}-y^{2}-z^{2}+t^{2}=-x^{\\prime2}-y^{\\prime2}-z^{\\prime2}+t^{\\prime2}\\\\\n\\hline x_{h}=\\alpha_{h1}x_{1}^{\\prime}+\\alpha_{h2}x_{2}^{\\prime}+\\alpha_{h3}x_{3}^{\\prime}+\\alpha_{h4}x_{4}^{\\prime}\\\\\n\\mathrm{A}=\\mathrm{\\left|\\begin{matrix}\\alpha_{11}, & \\alpha_{12}, & \\alpha_{13}, & \\alpha_{14}\\\\\n\\alpha_{21}, & \\alpha_{22}, & \\alpha_{23}, & \\alpha_{24}\\\\\n\\alpha_{31}, & \\alpha_{32}, & \\alpha_{33}, & \\alpha_{34}\\\\\n\\alpha_{41}, & \\alpha_{42}, & \\alpha_{43}, & \\alpha_{44}\n\\end{matrix}\\right|,\\ \\begin{align}\\bar{\\mathrm{A}}\\mathrm{A} & =1\\\\\n\\left(\\det \\mathrm{A}\\right)^{2} & =1\\\\\n\\det \\mathrm{A} & =1\\\\\n\\alpha_{44} & >0\n\\end{align}\n}\n\\end{matrix}"
},
{
"math_id": 40,
"text": "\\begin{matrix}\\left.\\begin{array}{lrl}\nx'= & x\\ \\cos\\varphi+l\\ \\sin\\varphi, & y'=y\\\\\nl'= & -x\\ \\sin\\varphi+l\\ \\cos\\varphi, & z'=z\n\\end{array}\\right\\} \\\\\n\\left(\\operatorname{tg}\\varphi=i\\beta,\\ \\cos\\varphi=\\frac{1}{\\sqrt{1-\\beta^{2}}},\\ \\sin\\varphi=\\frac{i\\beta}{\\sqrt{1-\\beta^{2}}}\\right)\\\\\n\\hline \\beta=\\frac{1}{i}\\operatorname{tg}\\left(\\varphi_{1}+\\varphi_{2}\\right)=\\frac{1}{i}\\frac{\\operatorname{tg}\\varphi_{1}+\\operatorname{tg}\\varphi_{2}}{1-\\operatorname{tg}\\varphi_{1}\\operatorname{tg}\\varphi_{2}}=\\frac{\\beta_{1}+\\beta_{2}}{1+\\beta_{1}\\beta_{2}}\\\\\n\\cos\\varphi=\\cos\\varphi_{1}\\cos\\varphi_{2}-\\sin\\varphi_{1}\\sin\\varphi_{2}\\cos\\alpha\\\\\nv^{2}=\\frac{v_{1}^{2}+v_{2}^{2}+2v_{1}v_{2}\\cos\\alpha-\\frac{1}{c^{2}}v_{1}^{2}v_{2}^{2}\\sin^{2}\\alpha}{\\left(1+\\frac{1}{c^{2}}v_{1}v_{2}\\cos\\alpha\\right)^{2}}\n\\end{matrix}"
},
{
"math_id": 41,
"text": "\\begin{matrix}x'=x\\varphi(a)\\,{\\rm ch}\\,\\psi+t\\varphi(a)\\,{\\rm sh}\\,\\psi\\\\\nt'=-x\\varphi(a)\\,{\\rm sh}\\,\\psi+t\\varphi(a)\\,{\\rm ch}\\,\\psi\\\\\n\\hline {\\rm th}\\,\\psi=-a,\\ {\\rm sh}\\,\\psi=\\frac{a}{\\sqrt{1-a^{2}}},\\ {\\rm ch}\\,\\psi=\\frac{1}{\\sqrt{1-a^{2}}},\\ \\varphi(a)=1\\\\\n\\hline x'=\\frac{x-at}{\\sqrt{1-a^{2}}},\\ y'=y,\\ z'=z,\\ t'=\\frac{-ax+t}{\\sqrt{1-a^{2}}}\n\\end{matrix}"
},
{
"math_id": 42,
"text": "\\lambda\\left(dx^{2}+dy^{2}+dz^{2}+du^{2}\\right)"
},
{
"math_id": 43,
"text": "\\left.\\begin{matrix}z_{1}^{2}+z_{2}^{2}+z_{3}^{2}-z_{4}^{2}=0\\\\\nz_{1}=x,\\ z_{2}=y,\\ z_{3}=z,\\ z_{4}=t\\\\\nZ=\\frac{z_{1}+iz_{2}}{z_{4}-z_{3}}=\\frac{x+iy}{t-z},\\ Z'=\\frac{x'+iy'}{t'-z'}\\\\\nZ=\\frac{\\alpha Z'+\\beta}{\\gamma Z'+\\delta}\n\\end{matrix}\\right|\\begin{matrix}Z=Z'e^{\\vartheta}\\\\\n\\begin{align}x & =x', & t-z & =(t'-z')e^{\\vartheta}\\\\\ny & =y', & t+z & =(t'+z')e^{-\\vartheta}\n\\end{align}\n\\end{matrix}"
},
{
"math_id": 44,
"text": "\\begin{align}l' & =-x\\operatorname{sh}u+l\\operatorname{ch}u,\\\\\nx' & =x\\operatorname{ch}u-l\\operatorname{sh}u,\\\\\ny' & =y,\\quad z'=z,\\\\\n\\operatorname{ch}u & =\\frac{1}{\\sqrt{1-\\left(\\frac{v}{c}\\right)^{2}}}\n\\end{align}\n"
},
{
"math_id": 45,
"text": "\\frac{v}{c}=\\operatorname{th}u=\\operatorname{tg}\\psi=\\sin\\operatorname{gd}(u)=\\cos\\Pi(u)"
},
{
"math_id": 46,
"text": "\\begin{matrix}\\operatorname{ch}{u}=\\operatorname{ch}{u_{1}}\\operatorname ch{u_{2}}+\\operatorname{sh}{u_{1}}\\operatorname{sh}{u_{2}}\\cos\\alpha\\\\\n\\operatorname{ch}{u_{i}}=\\frac{1}{\\sqrt{1-\\left(\\frac{v_{i}}{c}\\right)^{2}}},\\ \\operatorname{sh}{u_{i}}=\\frac{v_{i}}{\\sqrt{1-\\left(\\frac{v_{i}}{c}\\right)^{2}}}\\\\\nv=\\sqrt{v_{1}^{2}+v_{2}^{2}-\\left(\\frac{v_{1}v_{2}}{c}\\right)^{2}}\\ \\left(a=\\frac{\\pi}{2}\\right)\n\\end{matrix}"
},
{
"math_id": 47,
"text": "\\begin{matrix}\\tau=t\\sec\\beta-x\\tan\\beta/U\\\\\n\\xi=x\\sec\\beta-Ut\\tan\\beta\\\\\n\\eta=y,\\ \\zeta=z,\\\\\n\\hline \\sin\\beta=v/U\n\\end{matrix}"
},
{
"math_id": 48,
"text": "\\begin{align}dx' & =p\\ dx-pq\\ dt\\\\\ndt' & =-pqn\\ dx+p\\ dt\\\\\np & =\\frac{1}{\\sqrt{1-q^{2}n}}\n\\end{align}\n"
},
{
"math_id": 49,
"text": "\\omega=\\sqrt{-1}"
},
{
"math_id": 50,
"text": "\\begin{matrix}V=\\frac{Q_{1}vQ_{2}}{T_{1}T_{2}}\\\\\n\\hline X^{2}+Y^{2}+Z^{2}+\\omega^{2}S^{2}=x^{2}+y^{2}+z^{2}+\\omega^{2}s^{2}\\\\\n\\hline \\begin{align}V & =Xi+Yj+Zk+\\omega S\\\\\nv & =xi+yj+zk+\\omega s\\\\\nQ_{1} & =(+Ai+Bj+Ck+D)+\\omega(A'i+B'j+C'k+D')\\\\\nQ_{2} & =(-Ai-Bj-Ck+D)+\\omega(A'i+B'j+C'k-D')\\\\\nT_{1}T_{2} & =T_{1}^{2}=T_{2}^{2}=A^{2}+B^{2}+C^{2}+D^{2}+\\omega^{2}\\left(A^{\\prime2}+B^{\\prime2}+C^{\\prime2}+D^{\\prime2}\\right)\n\\end{align}\n\\end{matrix}"
},
{
"math_id": 51,
"text": "\\begin{matrix}\\begin{align} & \\left(i_{1}x'+i_{2}y'+i_{3}z'+ict'\\right)\\\\\n & \\quad-\\left(i_{1}x_{0}+i_{2}y_{0}+i_{3}z_{0}+ict_{0}\\right)\n\\end{align}\n=\\frac{\\left[\\begin{align} & \\left(i_{1}(A+iA')+i_{2}(B+iB')+i_{3}(C+iC')+i_{4}(D+iD')\\right)\\\\\n & \\quad\\cdot\\left(i_{1}x+i_{2}y+i_{3}z+ict\\right)\\\\\n & \\quad\\quad\\cdot\\left(i_{1}(A-iA')+i_{2}(B-iB')+i_{3}(C-iC')-(D-iD')\\right)\n\\end{align}\n\\right]}{\\left(A^{\\prime2}+B^{\\prime2}+C^{\\prime2}+D^{\\prime2}\\right)-\\left(A^{2}+B^{2}+C^{2}+D^{2}\\right)}\\\\\n\\hline \\text{where}\\\\\nAA'+BB'+CC'+DD'=0\\\\\nA^{2}+B^{2}+C^{2}+D^{2}>A^{\\prime2}+B^{\\prime2}+C^{\\prime2}+D^{\\prime2}\n\\end{matrix}"
},
{
"math_id": 52,
"text": "\\begin{matrix}g'=\\frac{pg\\pi}{M}\\\\\n\\hline \\begin{align}g & =\\sqrt{-1}ct+ix+jy+kz\\\\\ng' & =\\sqrt{-1}ct'+ix'+jy'+kz'\\\\\np & =(D+\\sqrt{-1}D')+i(A+\\sqrt{-1}A')+j(B+\\sqrt{-1}B')+k(C+\\sqrt{-1}C')\\\\\n\\pi & =(D-\\sqrt{-1}D')-i(A-\\sqrt{-1}A')-j(B-\\sqrt{-1}B')-k(C-\\sqrt{-1}C')\\\\\nM & =\\left(A^{2}+B^{2}+C^{2}+D^{2}\\right)-\\left(A^{\\prime2}+B^{\\prime2}+C^{\\prime2}+D^{\\prime2}\\right)\\\\\n & AA'+BB'+CC'+DD'=0\\\\\n & A^{2}+B^{2}+C^{2}+D^{2}>A^{\\prime2}+B^{\\prime2}+C^{\\prime2}+D^{\\prime2}\n\\end{align}\n\\end{matrix}"
},
{
"math_id": 53,
"text": "\\begin{matrix}\\begin{align}\\mathtt{D} & =\\mathbf{a}^{-1}\\mathtt{D}'\\mathbf{a}^{-1}\\\\\n\\mathtt{\\sigma} & =\\mathbf{a}\\mathtt{\\sigma}'\\mathbf{a}^{-1}\n\\end{align}\n\\\\\ne=\\mathbf{a}^{-1}e'\\mathbf{a}^{-1}\\\\\n\\hline a=\\left(1-hc^{-1}\\lambda\\right)^{\\frac{1}{2}}\\left(1+c^{-2}\\lambda^{2}\\right)^{-\\frac{1}{4}}\n\\end{matrix}"
},
{
"math_id": 54,
"text": "\\begin{matrix}q'=QqQ\\\\\n\\hline \\begin{align}q & =\\mathbf{r}+l=xi+yj+zk+\\iota ct\\\\\nq & '=\\mathbf{r}'+l'=x'i+y'j+z'k+\\iota ct'\\\\\nQ & =\\frac{1}{\\sqrt{2}}\\left(\\sqrt{1+\\gamma}+\\mathrm{u}\\sqrt{1-\\gamma}\\right)\\\\\n & =\\cos\\alpha+\\mathrm{u}\\sin\\alpha=e^{\\alpha\\mathrm{u}}\\\\\n & \\left\\{ \\gamma=\\left(1-v^{2}/c^{2}\\right)^{-1/2},\\ 2\\alpha=\\operatorname{arctg}\\ \\left(\\iota\\frac{v}{c}\\right)\\right\\} \n\\end{align}\n\\end{matrix}"
},
{
"math_id": 55,
"text": "\\begin{matrix}\\begin{matrix}\\mathfrak{v} =\\frac{\\mathfrak{v}'+(p-1)\\mathfrak{c}_{0}\\cdot\\mathfrak{c}_{0}\\mathfrak{v}'+pq\\mathfrak{c}_{0}}{p\\left(1+nq\\mathfrak{c}_{0}\\mathfrak{v}'\\right)} & \\left|\\begin{align}\\mathfrak{A}' & =\\mathfrak{A}+(p-1)\\mathfrak{c}_{0}\\cdot\\mathfrak{c}_{0}\\mathfrak{A}-pqb\\mathfrak{c}_{0}\\\\\nb' & =pb-pqn\\mathfrak{A}\\mathfrak{c}_{0}\\\\\n\\\\\n\\mathfrak{A} & =\\mathfrak{A}'+(p-1)\\mathfrak{c}_{0}\\cdot\\mathfrak{c}_{0}\\mathfrak{A}'+pqb'\\mathfrak{c}_{0}\\\\\nb & =pb'+pqn\\mathfrak{A}'\\mathfrak{c}_{0}\n\\end{align}\n\\right.\\end{matrix}\\\\\n\\left[\\mathfrak{v}=\\mathbf{u},\\ \\mathfrak{A}=\\mathbf{x},\\ b=t,\\ \\mathfrak{c}_{0}=\\frac{\\mathbf{v}}{v},\\ p=\\gamma,\\ n=\\frac{1}{c^{2}}\\right]\n\\end{matrix}"
},
{
"math_id": 56,
"text": "\\begin{matrix}\\text{original} & \\text{modern}\\\\\n\\hline \\left.\\begin{align}x^{0} & =x+\\alpha u(ux+vy+wz)-\\beta ut\\\\\ny^{0} & =y+\\alpha v(ux+vy+wz)-\\beta vt\\\\\nz^{0} & =z+\\alpha w(ux+vy+wz)-\\beta wt\\\\\nt^{0} & =-\\beta(ux+vy+wz)+\\beta t\\\\\n & \\alpha=\\frac{1}{\\sqrt{1-s^{2}}\\left(1+\\sqrt{1-s^{2}}\\right)},\\ \\beta=\\frac{1}{\\sqrt{1-s^{2}}}\n\\end{align}\n\\right| & \\begin{align}x' & =x+\\alpha v_{x}\\left(v_{x}x+v_{y}y+v_{z}z\\right)-\\gamma v_{x}t\\\\\ny' & =y+\\alpha v_{y}\\left(v_{x}x+v_{y}y+v_{z}z\\right)-\\gamma v_{y}t\\\\\nz' & =z+\\alpha v_{z}\\left(v_{x}x+v_{y}y+v_{z}z\\right)-\\gamma v_{z}t\\\\\nt' & =-\\gamma\\left(v_{x}x+v_{y}y+v_{z}z\\right)+\\gamma t\\\\\n & \\alpha=\\frac{\\gamma^{2}}{\\gamma+1},\\ \\gamma=\\frac{1}{\\sqrt{1-v^{2}}}\n\\end{align}\n\\end{matrix}"
},
{
"math_id": 57,
"text": "\\begin{array}{c|c}\n\\begin{align}\\mathbf{r}' & =\\mathbf{r}+(\\gamma-1)(\\mathbf{ru})\\mathbf{u}+i\\beta\\gamma lu\\\\\nl' & =\\gamma\\left[l-i\\beta(\\mathbf{ru})\\right]\n\\end{align}\n & \\begin{align}\\mathbf{r}' & =\\mathbf{r}+\\left[\\frac{\\gamma-1}{v^{2}}(\\mathbf{vr})-\\gamma t\\right]\\mathbf{v}\\\\\nt' & =\\gamma\\left[t-\\frac{1}{c^{2}}(\\mathbf{vr})\\right]\n\\end{align}\n\\end{array}"
},
{
"math_id": 58,
"text": "\\begin{array}{c|c|c|c|c}\n & x & y & z & t\\\\\n\\hline x' & 1-\\frac{v_{x}^{2}}{v^{2}}\\left(1-\\frac{1}{\\sqrt{1-\\beta^{2}}}\\right) & -\\frac{v_{x}v_{y}}{v^{2}}\\left(1-\\frac{1}{\\sqrt{1-\\beta^{2}}}\\right) & -\\frac{v_{x}v_{z}}{v^{2}}\\left(1-\\frac{1}{\\sqrt{1-\\beta^{2}}}\\right) & \\frac{-v_{x}}{\\sqrt{1-\\beta^{2}}}\\\\\ny' & -\\frac{v_{x}v_{y}}{v^{2}}\\left(1-\\frac{1}{\\sqrt{1-\\beta^{2}}}\\right) & 1-\\frac{v_{y}^{2}}{v^{2}}\\left(1-\\frac{1}{\\sqrt{1-\\beta^{2}}}\\right) & -\\frac{v_{y}v_{z}}{v^{2}}\\left(1-\\frac{1}{\\sqrt{1-\\beta^{2}}}\\right) & \\frac{-v_{y}}{\\sqrt{1-\\beta^{2}}}\\\\\nz' & -\\frac{v_{x}v_{z}}{v^{2}}\\left(1-\\frac{1}{\\sqrt{1-\\beta^{2}}}\\right) & -\\frac{v_{y}v_{z}}{v^{2}}\\left(1-\\frac{1}{\\sqrt{1-\\beta^{2}}}\\right) & 1-\\frac{v_{z}^{2}}{v^{2}}\\left(1-\\frac{1}{\\sqrt{1-\\beta^{2}}}\\right) & \\frac{-v_{z}}{\\sqrt{1-\\beta^{2}}}\\\\\nt' & \\frac{-v_{x}}{c^{2}\\sqrt{1-\\beta^{2}}} & \\frac{-v_{y}}{c^{2}\\sqrt{1-\\beta^{2}}} & \\frac{-v_{z}}{c^{2}\\sqrt{1-\\beta^{2}}} & \\frac{1}{\\sqrt{1-\\beta^{2}}}\n\\end{array}"
},
{
"math_id": 59,
"text": "\\mathfrak{D}"
},
{
"math_id": 60,
"text": "\\mathbf{v}'=-\\mathfrak{D}\\mathbf{v}"
},
{
"math_id": 61,
"text": "\\begin{array}{c}\n\\begin{align}\\mathbf{x}' & =\\mathfrak{D}^{-1}\\mathbf{x}-\\mathbf{v}'\\left\\{ \\left(\\gamma-1\\right)(\\mathbf{x\\cdot v})/v^{2}-\\gamma t\\right\\} \\\\\nt' & =\\gamma\\left(t-(\\mathbf{v}\\cdot\\mathbf{x})/c^{2}\\right)\n\\end{align}\n\\end{array}"
},
{
"math_id": 62,
"text": "\\begin{matrix}x^{2}+y^{2}-z^{2}-1=0\\\\\n\\hline {\\scriptstyle \\begin{align}\\delta a & =\\lambda^{2}+\\mu^{2}+\\nu^{2}-\\rho^{2}, & \\delta b & =2(\\lambda\\mu+\\nu\\rho), & \\delta c & =-2(\\lambda\\nu+\\mu\\rho),\\\\\n\\delta a' & =2(\\lambda\\mu-\\nu\\rho), & \\delta b' & =-\\lambda^{2}+\\mu^{2}+\\nu^{2}-\\rho^{2}, & \\delta c' & =2(\\lambda\\rho-\\mu\\nu),\\\\\n\\delta a'' & =2(\\lambda\\nu-\\mu\\rho), & \\delta b'' & =2(\\lambda\\rho+\\mu\\nu), & \\delta c'' & =-\\left(\\lambda^{2}+\\mu^{2}+\\nu^{2}+\\rho^{2}\\right),\n\\end{align}\n}\\\\\n\\left(\\delta=\\lambda^{2}+\\mu^{2}-\\rho^{2}-\\nu^{2}\\right)\\\\\n\\lambda=\\nu=0\\rightarrow\\text{Hyperbolic rotation}\n\\end{matrix}"
},
{
"math_id": 63,
"text": "\\begin{matrix}F=\\left(x_{1}-x_{2}\\right)^{2}+\\left(y_{1}-y_{2}\\right)^{2}+\\left(z_{1}-z_{2}\\right)^{2}-\\left(t_{1}-t_{2}\\right)^{2}\\\\\n\\hline {\\scriptstyle \\begin{align} & \\left(\\mu^{2}+\\nu^{2}-\\alpha^{2}\\right)\\cos\\varphi+\\left(\\lambda^{2}-\\beta^{2}-\\gamma^{2}\\right)\\operatorname{ch}{\\theta} & & -(\\alpha\\beta+\\lambda\\mu)(\\cos\\varphi-\\operatorname{ch}{\\theta})-\\nu\\sin\\varphi-\\gamma\\operatorname{sh}{\\theta}\\\\\n & -(\\alpha\\beta+\\lambda\\mu)(\\cos\\varphi-\\operatorname{ch}{\\theta})-\\nu\\sin\\varphi+\\gamma\\operatorname{sh}{\\theta} & & \\left(\\mu^{2}+\\nu^{2}-\\beta^{2}\\right)\\cos\\varphi+\\left(\\mu^{2}-\\alpha^{2}-\\gamma^{2}\\right)\\operatorname{ch}{\\theta}\\\\\n & -(\\alpha\\gamma+\\lambda\\nu)(\\cos\\varphi-\\operatorname{ch}{\\theta})+\\mu\\sin\\varphi-\\beta\\operatorname{sh}{\\theta} & & -(\\beta\\mu+\\mu\\nu)(\\cos\\varphi-\\operatorname{ch}{\\theta})+\\lambda\\sin\\varphi+\\alpha\\operatorname{sh}{\\theta}\\\\\n & (\\gamma\\mu-\\beta\\nu)(\\cos\\varphi-\\operatorname{ch}{\\theta})+\\alpha\\sin\\varphi-\\lambda\\operatorname{sh}{\\theta} & & -(\\alpha\\nu-\\lambda\\gamma)(\\cos\\varphi-\\operatorname{ch}{\\theta})+\\beta\\sin\\varphi-\\mu\\operatorname{sh}{\\theta}\\\\\n\\\\\n & \\quad-(\\alpha\\gamma+\\lambda\\nu)(\\cos\\varphi-\\operatorname{ch}{\\theta})+\\mu\\sin\\varphi+\\beta\\operatorname{sh}{\\theta} & & \\quad(\\beta\\nu-\\mu\\nu)(\\cos\\varphi-\\operatorname{ch}{\\theta})+\\alpha\\sin\\varphi-\\lambda\\operatorname{sh}{\\theta}\\\\\n & \\quad-(\\beta\\mu+\\mu\\nu)(\\cos\\varphi-\\operatorname{ch}{\\theta})-\\lambda\\sin\\varphi-\\alpha\\operatorname{sh}{\\theta} & & \\quad(\\lambda\\gamma-\\alpha\\nu)(\\cos\\varphi-\\operatorname{ch}{\\theta})+\\beta\\sin\\varphi-\\mu\\operatorname{sh}{\\theta}\\\\\n & \\quad\\left(\\lambda^{2}+\\mu^{2}-\\gamma^{2}\\right)\\cos\\varphi+\\left(\\nu^{2}-\\alpha^{2}-\\beta^{2}\\right)\\operatorname{ch}{\\theta} & & \\quad(\\alpha\\mu-\\beta\\lambda)(\\cos\\varphi-\\operatorname{ch}{\\theta})+\\gamma\\sin\\varphi-\\nu\\operatorname{sh}{\\theta}\\\\\n & \\quad(\\beta\\gamma-\\alpha\\mu)(\\cos\\varphi-\\operatorname{ch}{\\theta})+\\gamma\\sin\\varphi-\\nu\\operatorname{sh}{\\theta} & & \\quad-\\left(\\alpha^{2}+\\beta^{2}+\\gamma^{2}\\right)\\cos\\varphi+\\left(\\lambda^{2}+\\mu^{2}+\\nu^{2}\\right)\\operatorname{ch}{\\theta}\n\\end{align}\n}\\\\\n\\left(\\alpha^{2}+\\beta^{2}+\\gamma^{2}-\\lambda^{2}-\\mu^{2}-\\nu^{2}=-1\\right)\n\\end{matrix}"
},
{
"math_id": 64,
"text": "\\begin{matrix}v=\\alpha\\cdot c;\\quad\\beta=\\frac{1}{\\sqrt{1-\\alpha^{2}}}\\\\\n\\sin\\varphi=\\alpha;\\quad\\beta=\\frac{1}{\\cos\\varphi};\\quad\\alpha\\beta=\\tan\\varphi\\\\\n\\hline x'=\\frac{x}{\\cos\\varphi}-t\\cdot\\tan\\varphi,\\quad t'=\\frac{t}{\\cos\\varphi}-x\\cdot\\tan\\varphi\n\\end{matrix}"
},
{
"math_id": 65,
"text": "\\begin{matrix}\\alpha=\\frac{v}{c};\\ \\beta=\\frac{1}{\\sqrt{1-\\alpha^{2}}};\\\\\n\\cos\\theta=\\alpha=\\frac{v}{c};\\ \\sin\\theta=\\frac{1}{\\beta};\\ \\cot\\theta=\\alpha\\cdot\\beta\\\\\n\\hline x'=\\frac{x}{\\sin\\theta}-t\\cdot\\cot\\theta,\\quad t'=\\frac{t}{\\sin\\theta}-x\\cdot\\cot\\theta\n\\end{matrix}"
}
]
| https://en.wikipedia.org/wiki?curid=7058047 |
70582100 | ANAIS-112 | Spanish dark matter direct detection experiment
ANAIS (Annual modulation with NaI Scintillators) is a dark matter direct detection experiment located at the Canfranc Underground Laboratory (LSC), in Spain, operated by a team of researchers of the CAPA at the University of Zaragoza.
ANAIS' goal is to confirm or refute in a model-independent way the DAMA/LIBRA experiment's positive result: an annual modulation in the low-energy detection rate having all the features expected for the signal induced by weakly interacting dark matter particles (WIMPs) in a standard galactic halo. This modulation is produced as a result of the Earth's motion around the Sun. A modulation with all the characteristics of a dark matter (DM) signal has been observed for about 20 years by DAMA/LIBRA, but it is in strong tension with the negative results of other DM direct detection experiments. Compatibility among the different experimental results in most conventional WIMP-DM scenarios is disfavored, but it is strongly dependent on the DM particle and halo models considered. A comparison using the same target material, NaI(Tl), is more direct and almost model-independent.
Experimental set up and performance.
Source:
The ANAIS-112 experimental setup consists of 112.5 kg of NaI(Tl), distributed in 9 cylindrical modules of 12.5 kg each, built by Alpha Spectra Inc. and arranged in a 3 × 3 configuration.
Among the most relevant features of the ANAIS-112 modules, it is worth highlighting their remarkable optical quality, which, combined with the use of high quantum efficiency Hamamatsu photomultiplier tubes (PMTs), results in a very high light collection, at the level of 15 photoelectrons (phe) per keV in all nine modules. The signals from the two PMTs coupled to each module are digitized at 2 GS/s in a 1.2 μs window with high resolution (14 bits). The trigger requires the coincidence of the two PMT trigger signals in a 200 ns window, while the individual PMT trigger is set at the single-phe level.
Another interesting feature is a Mylar window in the middle of one of the lateral faces of the detectors, which allows the nine modules to be calibrated simultaneously with external gamma/X-ray sources down to 10 keV in a radon-free environment. A careful low-energy calibration of the region of interest (ROI), from 1 to 6 keV, is carried out by combining information from the external calibrations and from the background. External calibrations with a 109Cd source are performed every two weeks, and every 1.5 months energy depositions at 3.2 and 0.87 keV from 40K and 22Na internal contaminations in one ANAIS module are selected by profiting from the coincidence with a high-energy gamma in a second module.
The ANAIS-112 experiment is installed inside a shielding consisting of an inner layer of 10 cm of archaeological lead and an outer layer of 20 cm of low-activity lead. This lead shielding is encased in an anti-radon box, tightly closed and kept under overpressure with radon-free nitrogen gas. The external layer of the shielding (the neutron shielding) consists of 40 cm of a combination of water tanks and polyethylene bricks. An active muon veto made up of 16 plastic scintillators is placed between the anti-radon box and the neutron shielding, covering the top and sides of the set-up and allowing the residual muon flux onsite to be effectively tagged along the ANAIS-112 data taking.
ANAIS-112 was commissioned during the spring of 2017 and started the data-taking phase at hall B of the LSC on 3 August 2017, under a rock overburden of 2450 m.w.e. The live time of the experiment, useful for analysis, amounts to more than 95% of the total time, a high duty cycle. Down time is mostly due to the periodic calibration of the modules.
A good understanding of the background has been achieved, except in the [1–2] keV energy region, where the background model underestimates the measured event rate. Crystal bulk contamination is the dominant background source, with the 210Pb, 40K, 22Na and 3H contributions being the most relevant ones in the region of interest. Considering the nine ANAIS-112 modules altogether, the average background in the ROI is 3.6 cpd/kg/keV after three years of data taking, while the DAMA/LIBRA-phase2 background is below 0.80 cpd/kg/keV in the [1–2] keV energy interval, below 0.24 cpd/kg/keV in the [2–3] keV interval, and below 0.12 cpd/kg/keV in the [3–4] keV interval.
Annual modulation analysis and results.
The development of filtering protocols based on the pulse shape and the light sharing between the two PMTs has been crucial to fulfilling the ANAIS-112 goal, since the trigger rate in the ROI is dominated by non-bulk scintillation events. The determination of the corresponding acceptance efficiency is very important; it is calculated using 109Cd, 40K and 22Na events. It is very close to 100% down to 2 keV, and then decreases steeply to about 15% at 1 keV, where the analysis threshold is set.
A blind analysis procedure for the annual modulation analysis of ANAIS-112 data has been applied: single-hit events in the ROI are kept blinded during the event selection. Up to now, three unblindings of the data have been carried out: at 1.5 years, at 2 years, and at 3 years, which correspond to exposures of 157.55, 220.69, and 313.95 kg×y, respectively. The ANAIS-112 annual modulation search is performed in the same regions explored by the DAMA/LIBRA collaboration, [1–6] keV and [2–6] keV, fixing the period to 1 year and the maximum of the modulation to 2 June.
To evaluate the statistical significance of a possible modulation in the ANAIS-112 data, the event rate of the nine detectors is calculated in 10-day bins and fitted by minimizing χ2 = Σi (ni − μi)2/σi2, where ni is the number of events in the time bin ti (corrected by live time and detector efficiency), σi is the corresponding uncertainty, accordingly corrected, and μi is the expected number of events in that time bin, which depends on the background model and can be written as: μi = [R0φbkg(ti) + Smcos(ω(ti − t0))]M∆E∆t.
Here, R0 represents the non-modulated rate in the experiment, formula_0 is the probability distribution function (PDF) in time of any non-modulated component, Sm is the modulation amplitude, ω is fixed to 2π/365 d = 0.01721 rad d−1, t0 to −62.2 d (the time origin has been taken on 3 August, so that the cosine maximum falls on 2 June), M is the total detector mass, ∆E is the energy interval width, and ∆t the time bin width. R0 is a free parameter, while Sm is either fixed to 0 (for the null hypothesis) or left unconstrained, positive or negative (for the modulation hypothesis).
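The least-squares fit described above can be illustrated with a short, self-contained sketch. The code below is not ANAIS software: the binned rates, uncertainties and the constant non-modulated term are synthetic placeholders, whereas the actual analysis fits the nine detectors separately and uses a background PDF rather than a constant R0.

```python
import numpy as np
from scipy.optimize import curve_fit

# Synthetic 10-day-binned rate with a constant background plus a cosine
# modulation; all numbers are placeholders, not ANAIS-112 data.
omega = 2.0 * np.pi / 365.0            # fixed angular frequency [rad/day]
t0 = -62.2                             # phase so the cosine peaks on 2 June [days]

t = np.arange(5.0, 1095.0, 10.0)       # bin centers over roughly 3 years [days]
rng = np.random.default_rng(0)
true_R0, true_Sm, noise = 3.0, 0.02, 0.05
rate = true_R0 + true_Sm * np.cos(omega * (t - t0)) + rng.normal(0.0, noise, t.size)
sigma = np.full(t.size, noise)

def mu(t, R0, Sm):
    # Expected rate: non-modulated term (a constant here, standing in for the
    # background PDF) plus the modulation Sm * cos(omega * (t - t0)).
    return R0 + Sm * np.cos(omega * (t - t0))

popt, pcov = curve_fit(mu, t, rate, p0=[3.0, 0.0], sigma=sigma, absolute_sigma=True)
R0_fit, Sm_fit = popt
print(f"S_m = {Sm_fit:.4f} +/- {np.sqrt(pcov[1, 1]):.4f}  (null hypothesis: S_m = 0)")
```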
The null hypothesis is well supported by the 3-year data in both energy regions, with the results for the two background models (a single exponential or a PDF based on the background model) being compatible. The standard deviation σ(Sm) is slightly lower when the detectors are considered independently, as expected from the a priori sensitivity analysis. Therefore, this fit is chosen to quote the ANAIS-112 annual modulation final result and sensitivity for the three-year exposure. The best fits are incompatible with the DAMA/LIBRA result at 3.3 and 2.6 σ in the [1–6] and [2–6] keV energy regions, for a sensitivity of 2.5 (2.7) σ at [1–6] keV ([2–6] keV). The ANAIS-112 results for 1.5, 2 and 3 years of data taking fully confirm the sensitivity projection.
ANAIS-112 results support the prospects of reaching a sensitivity above 3σ in 2022, within the scheduled 5 years of data taking.
Several consistency checks have been carried out (changing the number of detectors entering the fit, considering only the first two years or the last two years, or changing the time bin size), concluding that there is no hint of relevant systematic uncertainties in the result. A large set of pseudo-experiments sampled from the background model guarantees that the fit is not biased. A frequency analysis has also been conducted, and the conclusion is that there is no statistically significant modulation in the frequency range searched in the ANAIS-112 data.
Future prospects.
The ANAIS-112 sensitivity limitation is mostly due to the high background in the ROI, in particular in the region from 1 to 2 keV. In this context, the application of techniques based on boosted decision trees (BDTs), currently under development, could improve the rejection of non-bulk scintillation events. Preliminary results point to a relevant sensitivity improvement. Extending the data taking for a few more years could allow testing DAMA/LIBRA at the 5σ level. Operation at the Canfranc Underground Laboratory has been granted until the end of 2025.
One possible systematic effect affecting the comparison between the DAMA/LIBRA and ANAIS results is a different detector response to nuclear recoils, because both experiments are calibrated using x-rays/gammas. It is well known that scintillation is strongly quenched for energy deposited by nuclear recoils with respect to the same energy deposited by electrons. Measurements of quenching factors (QF) in NaI scintillators are affected by strong discrepancies. The QF of the ANAIS-112 detectors are being determined from measurements at TUNL. In addition, a complete calibration program for the experiment using neutron sources onsite is being developed.
ANAIS-112 published results are available in open access at the webpage of the Dark Matter Data Center: https://www.origins-cluster.de/odsl/dark-matter-data-center/available-datasets/anais
Data are available upon request.
Funding Agencies.
ANAIS experiment operation is presently financially supported by MICIU/AEI/10.13039/501100011033 (Grants No. PID2022-138357NB-C21 and PID2019-104374GB-I00), and Unión Europea NextGenerationEU/PRTR (AstroHEP) and the Gobierno de Aragón. Funding from Grant FPA2017-83133-P, Consolider-Ingenio 2010 Programme under grants MULTIDARK CSD2009-00064 and CPAN CSD2007-00042, the Gobierno de Aragón and the LSC Consortium made possible the setting-up of the detectors. The technical support from LSC and GIFNA staff as well as from Servicios de Apoyo a la Investigación de la Universidad de Zaragoza (SAIs) is warmly acknowledged.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\phi_{bkg}"
}
]
| https://en.wikipedia.org/wiki?curid=70582100 |
7058352 | Matrix pencil | In linear algebra, if formula_0 are formula_1 complex matrices for some nonnegative integer formula_2, and formula_3 (the zero matrix), then the matrix pencil of degree formula_2 is the matrix-valued function defined on the complex numbers formula_4
A particular case is a linear matrix pencil formula_5 with formula_6 (or formula_7) where formula_8 and formula_9 are complex (or real) formula_10 matrices. We denote it briefly with the notation formula_11.
A pencil is called "regular" if there is at least one value of formula_12 such that formula_13. We call "eigenvalues" of a matrix pencil formula_11 all complex numbers formula_12 for which formula_14; in particular, the eigenvalues of the matrix pencil formula_15 are the matrix eigenvalues of formula_8. The set of the eigenvalues is called the "spectrum" of the pencil and is written formula_16.
Moreover, the pencil is said to have one or more eigenvalues "at infinity" if formula_9 has one or more 0 eigenvalues.
Applications.
Matrix pencils play an important role in numerical linear algebra. The problem of finding the eigenvalues of a pencil is called the generalized eigenvalue problem. The most popular algorithm for this task is the QZ algorithm, an implicit version of the QR algorithm that solves the associated eigenvalue problem formula_17 without explicitly forming the matrix formula_18 (which could be impossible or ill-conditioned if formula_9 is singular or near-singular).
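As an illustration, the generalized eigenvalue problem for a linear pencil formula_11 can be solved numerically with SciPy, whose generalized eigensolver is backed by QZ-type LAPACK routines; the matrices below are arbitrary examples chosen for this sketch, not taken from the article.

```python
import numpy as np
from scipy.linalg import eig, qz

# Hypothetical matrices defining the linear pencil A - lambda*B.
A = np.array([[2.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 2.0]])
B = np.diag([1.0, 2.0, 1.0])

# Generalized eigenvalues, i.e. the lambdas with det(A - lambda*B) = 0.
# Passing B to scipy.linalg.eig solves the problem via QZ-based LAPACK
# routines, without ever forming inv(B) @ A explicitly.
spectrum = eig(A, B, right=False)
print("sigma(A, B) =", spectrum)

# The generalized Schur (QZ) decomposition itself is also available; for this
# real-eigenvalue example the factors are triangular, and the eigenvalues are
# the ratios of their diagonal entries.
AA, BB, Q, Z = qz(A, B)
print(np.diag(AA) / np.diag(BB))
```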
Pencil generated by commuting matrices.
If formula_19, then the pencil generated by formula_8 and formula_9:
Notes.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "A_0, A_1,\\dots,A_\\ell"
},
{
"math_id": 1,
"text": "n\\times n"
},
{
"math_id": 2,
"text": "\\ell"
},
{
"math_id": 3,
"text": "A_\\ell \\ne 0"
},
{
"math_id": 4,
"text": "L(\\lambda) = \\sum_{i=0}^\\ell \\lambda^i A_i. "
},
{
"math_id": 5,
"text": "A - \\lambda B"
},
{
"math_id": 6,
"text": "\\lambda \\in \\mathbb C"
},
{
"math_id": 7,
"text": "\\mathbb R"
},
{
"math_id": 8,
"text": "A"
},
{
"math_id": 9,
"text": "B"
},
{
"math_id": 10,
"text": "n \\times n"
},
{
"math_id": 11,
"text": "(A,B)"
},
{
"math_id": 12,
"text": "\\lambda"
},
{
"math_id": 13,
"text": "\\det(A - \\lambda B) \\neq 0"
},
{
"math_id": 14,
"text": "\\det(A - \\lambda B) = 0"
},
{
"math_id": 15,
"text": "(A,I)"
},
{
"math_id": 16,
"text": "\\sigma(A,B)"
},
{
"math_id": 17,
"text": "B^{-1}Ax = \\lambda x"
},
{
"math_id": 18,
"text": "B^{-1}A"
},
{
"math_id": 19,
"text": "AB = BA"
}
]
| https://en.wikipedia.org/wiki?curid=7058352 |
70587836 | Joshua 8 | Book of Joshua, chapter 8
Joshua 8 is the eighth chapter of the Book of Joshua in the Hebrew Bible or in the Old Testament of the Christian Bible. According to Jewish tradition the book was attributed to Joshua, with additions by the high priests Eleazar and Phinehas, but modern scholars view it as part of the Deuteronomistic History, which spans the books of Deuteronomy to 2 Kings, attributed to nationalistic and devotedly Yahwistic writers during the time of the reformer Judean king Josiah in 7th century BCE. This chapter focuses on the conquest of Ai under the leadership of Joshua and the renewal of covenant on Mounts Ebal and Gerizim, a part of a section comprising Joshua 5:13–12:24 about the conquest of Canaan.
Text.
This chapter was originally written in the Hebrew language. It is divided into 35 verses.
Textual witnesses.
Some early manuscripts containing the text of this chapter in Hebrew are of the Masoretic Text tradition, which includes the Codex Cairensis (895), Aleppo Codex (10th century), and Codex Leningradensis (1008). Fragments containing parts of this chapter in Hebrew were found among the Dead Sea Scrolls including 4Q47 (4QJosha; 200–100 BCE) with extant verses 3–14, 18, also 34–35 (before 5:1).
Extant ancient manuscripts of a translation into Koine Greek known as the Septuagint (originally was made in the last few centuries BCE) include Codex Vaticanus (B; formula_0B; 4th century) and Codex Alexandrinus (A; formula_0A; 5th century). Fragments of the Septuagint Greek text containing this chapter is found in manuscripts such as Washington Manuscript I (5th century CE), and a reduced version of the Septuagint text is found in the illustrated Joshua Roll.
Analysis.
The narrative of the Israelites conquering the land of Canaan comprises verses 5:13 to 12:24 of the Book of Joshua and has the following outline:
A. Jericho (5:13–6:27)
B. Achan and Ai (7:1–8:29)
1. The Sin of Achan (7:1-26)
a. Narrative Introduction (7:1)
b. Defeat at Ai (7:2-5)
c. Joshua's Prayer (7:6-9)
d. Process for Identifying the Guilty (7:10-15)
e. The Capture of Achan (7:16-21)
f. Execution of Achan and His Family (7:22-26)
2. The Capture of Ai (8:1-29)
a. Narrative Introduction (8:1-2)
b. God's Plan for Capturing the City (8:3-9)
c. Implementation of God's Plan (8:10-13)
d. The Successful Ambush (8:14-23)
e. Destruction of Ai (8:24-29)
C. Renewal at Mount Ebal (8:30–35)
1. Building the Altar (8:30-31)
2. Copying the Torah (8:32-33)
3. Reading the Torah (8:34-35)
D. The Gibeonite Deception (9:1–27)
E. The Campaign in the South (10:1–43)
F. The Campaign in the North and Summary List of Kings (11:1–12:24)
The narrative of Joshua 7–8 combines the story of Achan's offence against the 'devoted things', and the battle report concerning Ai, as the two themes are linked.
The first part of this chapter, concerning the battle against Ai, has the following structure:
1. YHWH encourages Joshua and command him to take Ai by ambush (8:1–2)
2. Joshua organizes Israel for battle as YHWH commanded (8:3–13)
3. Israel carries out the tactics of YHWH (8:14–17)
4. YHWH directs Israel to victory through Joshua (8:18–23)
5. The report of victory (8:24–29)
The second part (8:30–35) is an interlude for divine worship before the next military campaigns, taking place on two mountains, involving an altar, sacrifice, a copy of Torah and pronouncement of God's blessings and curses.
Fall of Ai (8:1–29).
With the problem in Joshua 7 resolved, God is with his people again in the conquest of the land, so Ai, like Jericho before it, will fall to the Israelites (verse 2). The narrative contains military and topographical details, as YHWH takes charge in the taking of Ai (verses 1–2), in contrast to the previous attempt, where Joshua took charge. Unlike at Jericho, the people of Israel may take plunder after conquering Ai. Using the stratagem of pretended flight (cf. Judges 20:36–38), simulating the first defeat (verse 6, cf. 7:4–5), the Israelites tricked the men of Ai into leaving the city without defense, so that a second unit of the Israelite army could enter from the west (the opposite direction from a direct confrontation) and conquer the city, then go out to squeeze the men of Ai from two sides and kill them all. Two memorials of the victory against Ai are established: the ash piles of the burnt city, and a heap of stones for the dead king of Ai (verses 28–29).
The report related to the sending of the unit for the ambush consists of two versions (one in verses 3–9 and the other in verses 10–13) which are both preserved in succession, starting and closing with similar phrases ("Joshua rose" in verses 3 and 10; "Joshua…that night… in the middle" in verses 9 and 13).
"So Joshua burned Ai and made it a heap forever, a desolation to this day."
The covenant renewal at Mount Ebal (8:30–35).
The taking of Ai (and the implied defeat of Bethel as well) marks an important point in the conquest, so that the ceremony reported here could be performed following the instruction in the Book of Deuteronomy that, 'on the day that you cross over the Jordan', the people should set up large stones on Mount Ebal, cover them with plaster, and write 'all the words of this law' on them, then erect an altar for sacrifice (Deuteronomy 27:2–8) and solemnly reaffirm the covenant with God (Deuteronomy 27:11–26). The ceremony on Mounts Ebal and Gerizim, near ancient Shechem, made the 'book of the law', at first binding only on Joshua himself as he led Israel into the land (Joshua 1:7–8), the rule for the whole people of Israel, which would lead to another covenant renewal ceremony at Shechem at the end of the book (Joshua 24).
"30 Then Joshua built an altar to the Lord God of Israel on Mount Ebal, 31 as Moses the servant of the Lord had commanded the children of Israel. As is written in the Book of the Law of Moses, it was “an altar of uncut stones not shaped by iron tools.” They sacrificed burnt offerings to the Lord on it, as well as peace offerings."
Archaeology.
Archaeological work in the 1930s at the location of Et-Tell or Khirbet Haijah showed that the city of Ai, an early target for conquest in the putative Joshua account, had existed and been destroyed, but in the 22nd century BCE. Some alternative sites for Ai, such as Khirbet el-Maqatir or Khirbet Nisya, have been proposed, which would partially resolve the discrepancy in dates, but these sites have not been widely accepted.
Notes.
<templatestyles src="Reflist/styles.css" />
References.
<templatestyles src="Reflist/styles.css" />
Sources.
<templatestyles src="Refbegin/styles.css" /> | [
{
"math_id": 0,
"text": " \\mathfrak{G}"
}
]
| https://en.wikipedia.org/wiki?curid=70587836 |
70588167 | Joshua 9 | Book of Joshua, chapter 9
Joshua 9 is the ninth chapter of the Book of Joshua in the Hebrew Bible or in the Old Testament of the Christian Bible. According to Jewish tradition the book was attributed to Joshua, with additions by the high priests Eleazar and Phinehas, but modern scholars view it as part of the Deuteronomistic History, which spans the books of Deuteronomy to 2 Kings, attributed to nationalistic and devotedly Yahwistic writers during the time of the reformer Judean king Josiah in 7th century BC. This chapter focuses on the deception by the people of Gibeon to avoid annihilation by having a treaty with the people of Israel under the leadership of Joshua, a part of a section comprising Joshua 5:13–12:24 about the conquest of Canaan.
Text.
This chapter was originally written in the Hebrew language. It is divided into 27 verses.
Textual witnesses.
Some early manuscripts containing the text of this chapter in Hebrew are of the Masoretic Text tradition, which includes the Codex Cairensis (895), Aleppo Codex (10th century), and Codex Leningradensis (1008).
Extant ancient manuscripts of a translation into Koine Greek known as the Septuagint (originally was made in the last few centuries BCE) include Codex Vaticanus (B; formula_0B; 4th century) and Codex Alexandrinus (A; formula_0A; 5th century). Fragments of the Septuagint Greek text containing this chapter is found in manuscripts such as Washington Manuscript I (5th century CE), and a reduced version of the Septuagint text is found in the illustrated Joshua Roll.
Analysis.
The narrative of the Israelites conquering the land of Canaan comprises verses 5:13 to 12:24 of the Book of Joshua and has the following outline:
A. Jericho (5:13–6:27)
B. Achan and Ai (7:1–8:29)
C. Renewal at Mount Ebal (8:30–35)
D. The Gibeonite Deception (9:1–27)
1. Response of Canaanite Kings to Jericho and Ai (9:1-2)
2. Report of the Gibeonites' Deception (9:3-13)
3. Israel Establishes a Covenant with Gibeon (9:14-15)
4. Israel's First Response to Discovering the Deception (9:16-21)
5. The Gibeonites Explain Their Actions to Joshua (9:22-27)
E. The Campaign in the South (10:1–43)
F. The Campaign in the North and Summary List of Kings (11:1–12:24)
Israel Establishes a Covenant with Gibeon (9:1–15).
The successes of Israel at Jericho and Ai caused the independent kings of different nations in Canaan (Deuteronomy 1:7; 7:1; Joshua 3:10; the Girgashites are not listed here) to form an alliance in anticipation of the battle with the Israelites (verses 1–2), except for the Gibeonites, a part of the Hivites, who decided to pretend that they were from a faraway land (verse 3) and to make a peace treaty with the Israelites (verse 6).
Gibeon lay to the south of Bethel and Ai, a little to the north of Jerusalem, while the Israelite camp was still at Gilgal (verse 6), near Jericho. A treaty, or 'covenant' (Hebrew: "berit", the same word used for God's covenant with Israel, Exodus 24:7), was a 'universal means of establishing relationships among peoples in the ancient Near East' (cf. Joshua 24). The Gibeonites acknowledge Israel's successes, from Egypt to the victories in Transjordan (verses 9–10), so they seek an inferior status (to be a "vassal") as the price of survival. The 'leaders' (verse 14; or 'leaders of the congregation' in verse 18) of Israel, who represent Israel in an official way, conclude the treaty, eating the Gibeonites' provisions, and then Joshua makes peace with them. The narrative, however, states that the treaty was not according to the will of YHWH, because the Israelites did not consult YHWH about it.
The responses after discovering the deception (9:16–27).
When the Gibeonites were revealed to be local inhabitants, the Israelites debated whether they should still implement the "herem" ("ban"; verses 16–21) on these people or rather honor the oath, and the decision was for the latter, with the Gibeonites consigned to servitude as retribution for their deceit. The short report in verse 21 is expanded in the final paragraph (verses 22–27) with a dialogue between Joshua and the Gibeonites, in which Joshua pronounced them 'cursed' for acquiring the treaty by deceit and the Gibeonites accepted the right of the Israelites (here, of Joshua) to decide their fate. The Gibeonites were assigned to servitude at the 'place that YHWH should choose', that is, the main worship sanctuary of Israel (Deuteronomy 12:5, 14), which may refer to Shiloh (Jeremiah 7:12), a central sanctuary for Israel before Jerusalem (1 Samuel 1–3), or to the city of Gibeon, as the great 'high place' at which Solomon would worship before building the temple (1 Kings 3:4) and where the tent of meeting was established after Shiloh (2 Chronicles 1:3). By the time of Saul's reign, the application of the treaty was already so well established that when Saul broke the covenant by killing the Gibeonites, probably to extend his territory in Benjamin, Israel suffered the consequences of a famine (2 Samuel 21).
"And the people of Israel set out and reached their cities on the third day. Now their cities were Gibeon, Chephirah, Beeroth, and Kiriath-jearim."
Verse 17.
This verse shows that the "Gibeonites" lived in four towns (a "tetrapolis"). Three of the four cities (all except Gibeon itself) appear in Ezra 2:25 and Nehemiah 7:29.
Notes.
<templatestyles src="Reflist/styles.css" />
References.
<templatestyles src="Reflist/styles.css" />
Sources.
<templatestyles src="Refbegin/styles.css" /> | [
{
"math_id": 0,
"text": " \\mathfrak{G}"
}
]
| https://en.wikipedia.org/wiki?curid=70588167 |
70594221 | Consideration set | Consideration sets in consumer behavior
Consideration set is a model used in consumer behaviour to represent all of the brands and products a consumer evaluates before making a final purchase decision. The term consideration set was first used in 1977 by Peter Wright and Fredrick Barbour. The consideration set is a subset of the awareness set, which is all of the brands and products a consumer initially thinks of when faced with a purchasing decision. The awareness set is filtered into the consideration set through the consumer's individual thoughts, preferences, and feelings — such as price, mood, previous experiences, and heuristics. Conversely, products that do not meet the criteria for the consideration set are either placed into the inert set or the inept set. These sets are fluid and the products in each set can change rapidly when the consumer is presented with new information.
Set hierarchy and theory.
Consider the universal set, formula_0, to be the set of all products and brands which will satisfy a given need. The awareness or knowledge set, formula_1, is defined as all of the products and brands within the universal set that the consumer is aware of. This awareness can be derived from a product search, brand familiarity, advertisements, word-of-mouth, or any other method which informs the consumer of a viable option. While the awareness set is largely composed of products that reside in the consumer's long-term memory, the awareness set can also be expanded by products found during the search process – such as recommended products on an e-commerce site or shelves in a supermarket. Thus formula_2.
The awareness set can be further divided into the following sets:
Thus, formula_6
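The hierarchy above can be made concrete with a small illustrative sketch using hypothetical product names; the pairing of the two non-considered subsets with the labels "inert" and "inept" below is an assumption made for the example, since the defining list is not reproduced here.

```python
# Hypothetical product names illustrating the set hierarchy.
universal = {"A", "B", "C", "D", "E", "F", "G"}   # all products that satisfy the need
awareness = {"A", "B", "C", "D", "E"}             # products the consumer knows about

consideration = {"A", "B"}   # actively evaluated before purchase
inert = {"C", "D"}           # known, but the consumer is indifferent (assumed label)
inept = {"E"}                # known, but rejected outright (assumed label)

# The three subsets partition the awareness set, which sits inside the universal set.
assert consideration | inert | inept == awareness
assert awareness <= universal
```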
Psychological basis.
The consideration set models how humans behave when faced with many choices. While the consideration set is not directly observable, researchers believe that its existence is evident by a logical conjunction of prominent economic and psychological theories.
In this model, consumers screen many product options before fully evaluating just a few options (usually 2–5), which reduces the cognitive load and fatigue of an exhaustive search. This screening process is mostly based on heuristics about the product, and is generally considered to be a lower-effort process than the evaluation of the consideration set, once it is formed. Studies on heuristic screening methodologies have shown that consumers are more satisfied with their purchase decision and less stressed by the decision-making process when the consideration set is smaller. This is particularly true when information overload is high; that is, the quantity of available information exceeds the processing power a consumer is willing to dedicate to it.
Researchers have several theories as to why the consideration set is formed. The formation of the consideration set is considered the first step in common decision-making frameworks, with the second step being choosing an alternative from the set.
Formation.
Consumer behavior researchers have identified many frameworks, methodologies, and heuristics consumers often use to form a consideration set. It is important to note that this is not an exhaustive list, and that the formation of the consideration set varies "significantly" depending on the consumer and context of the decision. Nonetheless, some commonly observed methodologies are:
Decision-making.
Once the consideration set is formed, the next step is to select an alternative from the evaluated options. There are several models that describe how consumers make these selections; however, many researchers believe that this process is concurrent with the formation process; that is, product selection is often being contemplated while the consideration set is forming. Nonetheless, the following methodologies are widely considered to be the second step of consumer decision making:
Limitations and critiques.
The decision-making process is still not well enough understood to clarify the distinction between the models used to represent the process and the process of decision-making itself. Many researchers reject the idea of a two-step decision-making process using a consideration set, and instead insist on viewing the consideration set as simply an indicator of preferences. Many researchers also claim that knowing a particular consumer's consideration set is not enough to predict their final product choice, and that this knowledge is trivial compared to something like the utility function, which is much more robust. Since neither the utility function nor the consideration set is directly observable, researchers are still unsure whether either is an accurate or useful model for describing choice.
Another criticism of the consideration set is how it is applied. Marketers often assume that all consumers have the same consideration set; that is, they assume all consumers are selecting between the same set of options. The consideration set is actually theorized to be highly individualistic — and the products within it reflect a variety of factors such as the consumer's socioeconomic status, attitudes, and perceptions. This implies that marketers should treat consideration set formation as probabilistic, rather than objective.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "U"
},
{
"math_id": 1,
"text": "A"
},
{
"math_id": 2,
"text": "A \\subseteq U"
},
{
"math_id": 3,
"text": "C"
},
{
"math_id": 4,
"text": "R"
},
{
"math_id": 5,
"text": "P"
},
{
"math_id": 6,
"text": "C \\cup R \\cup P = A \\subseteq U"
},
{
"math_id": 7,
"text": "S"
},
{
"math_id": 8,
"text": "S = 1"
}
]
| https://en.wikipedia.org/wiki?curid=70594221 |
70595668 | Integrated nested Laplace approximations | Bayesian inference method
Integrated nested Laplace approximations (INLA) is a method for approximate Bayesian inference based on Laplace's method. It is designed for a class of models called latent Gaussian models (LGMs), for which it can be a fast and accurate alternative to Markov chain Monte Carlo methods for computing posterior marginal distributions. Due to its relative speed even with large data sets for certain problems and models, INLA has been a popular inference method in applied statistics, in particular spatial statistics, ecology, and epidemiology. It is also possible to combine INLA with a finite element method solution of a stochastic partial differential equation to study e.g. spatial point processes and species distribution models. The INLA method is implemented in the R-INLA R package.
Latent Gaussian models.
Let formula_0 denote the response variable (that is, the observations) which belongs to an exponential family, with the mean formula_1 (of formula_2) being linked to a linear predictor formula_3 via an appropriate link function. The linear predictor can take the form of a (Bayesian) additive model. All latent effects (the linear predictor, the intercept, coefficients of possible covariates, and so on) are collectively denoted by the vector formula_4. The hyperparameters of the model are denoted by formula_5. As per Bayesian statistics, formula_4 and formula_5 are random variables with prior distributions.
The observations are assumed to be conditionally independent given formula_4 and formula_5:
formula_6
where formula_7 is the set of indices for observed elements of formula_8 (some elements may be unobserved, and for these INLA computes a posterior predictive distribution). Note that the linear predictor formula_9 is part of formula_4.
For the model to be a latent Gaussian model, it is assumed that formula_10 is a Gaussian Markov Random Field (GMRF) (that is, a multivariate Gaussian with additional conditional independence properties) with probability density
formula_11
where formula_12 is a formula_5-dependent sparse precision matrix and formula_13 is its determinant. The precision matrix is sparse due to the GMRF assumption. The prior distribution formula_14 for the hyperparameters need not be Gaussian. However, the number of hyperparameters, formula_15, is assumed to be small (say, less than 15).
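As a generic illustration of the sparsity that INLA exploits, the sketch below builds the precision matrix of a first-order random-walk prior, a common GMRF building block; the model choice and the value of the precision hyperparameter are arbitrary, not prescribed by the method.

```python
import numpy as np
import scipy.sparse as sp

# First-order random-walk (RW1) prior on n latent values with precision tau.
# Its precision matrix is tridiagonal, so conditional independence between
# non-neighbouring latent values shows up directly as structural zeros.
n, tau = 8, 2.0
D = sp.diags([np.ones(n - 1), -np.ones(n - 1)], offsets=[0, 1], shape=(n - 1, n))
Q = tau * (D.T @ D)        # sparse precision matrix of the GMRF

print(Q.toarray())         # zeros away from the tridiagonal band
```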
Approximate Bayesian inference with INLA.
In Bayesian inference, one wants to solve for the posterior distribution of the latent variables formula_4 and formula_5. Applying Bayes' theorem
formula_16
the joint posterior distribution of formula_4 and formula_5 is given by
formula_17
Obtaining the exact posterior is generally a very difficult problem. In INLA, the main aim is to approximate the posterior marginals
formula_18
where formula_19.
A key idea of INLA is to construct nested approximations given by
formula_20
where formula_21 is an approximated posterior density. The approximation to the marginal density formula_22 is obtained in a nested fashion by first approximating formula_23 and formula_24, and then numerically integrating out formula_5 as
formula_25
where the summation is over the values of formula_5, with integration weights given by formula_26. The approximation of formula_27 is computed by numerically integrating formula_28 out from formula_29.
To get the approximate distribution formula_29, one can use the relation
formula_30
as the starting point. Then formula_31 is obtained at a specific value of the hyperparameters formula_32 with Laplace's approximation
formula_33
where formula_34 is the Gaussian approximation to formula_35 whose mode at a given formula_36 is formula_37. The mode can be found numerically, for example with the Newton-Raphson method.
The trick in the Laplace approximation above is that the Gaussian approximation is applied to the full conditional of formula_4 in the denominator, since it is usually close to a Gaussian due to the GMRF property of formula_4. Applying the approximation here improves the accuracy of the method, since the posterior formula_38 itself need not be close to a Gaussian, and so the Gaussian approximation is not directly applied to formula_38. The second important property of a GMRF, the sparsity of the precision matrix formula_39, is required for efficient computation of formula_40 for each value formula_41.
Obtaining the approximate distribution formula_42 is more involved, and the INLA method provides three options for this: the Gaussian approximation, the Laplace approximation, or the simplified Laplace approximation. For the numerical integration needed to obtain formula_43, three options are also available: grid search, central composite design, or empirical Bayes.
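The nested scheme can be sketched end to end for a deliberately simple toy model with a single hyperparameter (the log-precision of independent Gaussian latent effects) and Poisson observations. The code below is only a didactic illustration of the Laplace approximation of formula_29 evaluated on a grid, with a flat prior on the hyperparameter as an assumption; it is not the R-INLA implementation.

```python
import numpy as np
from scipy.special import gammaln

# Toy model: y_i | x_i ~ Poisson(exp(x_i)),  x | theta ~ N(0, tau^-1 I),
# theta = log(tau), flat prior on theta (an assumption of this sketch).
rng = np.random.default_rng(1)
n = 50
y = rng.poisson(np.exp(rng.normal(0.0, 0.5, n)))

def log_post_theta(theta, y, newton_iters=30):
    """Unnormalized Laplace approximation of pi(theta | y); constants dropped."""
    tau = np.exp(theta)
    x = np.log(y + 0.5)                         # start Newton near the mode
    for _ in range(newton_iters):               # mode x*(theta) of pi(x | theta, y)
        grad = y - np.exp(x) - tau * x
        prec = np.exp(x) + tau                  # diagonal negative Hessian
        x = x + grad / prec
    prec = np.exp(x) + tau
    log_lik = np.sum(y * x - np.exp(x) - gammaln(y + 1))
    log_prior_x = 0.5 * n * np.log(tau) - 0.5 * tau * np.sum(x**2)
    log_gauss = 0.5 * np.sum(np.log(prec))      # Gaussian approximation at its mode
    return log_prior_x + log_lik - log_gauss

thetas = np.linspace(-2.0, 3.0, 41)
log_post = np.array([log_post_theta(th, y) for th in thetas])
print("approximate posterior mode of theta:", thetas[np.argmax(log_post)])
```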
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\boldsymbol{y}=(y_1,\\dots,y_n)"
},
{
"math_id": 1,
"text": "\\mu_i"
},
{
"math_id": 2,
"text": "y_i"
},
{
"math_id": 3,
"text": "\\eta_i"
},
{
"math_id": 4,
"text": "\\boldsymbol{x}"
},
{
"math_id": 5,
"text": "\\boldsymbol{\\theta}"
},
{
"math_id": 6,
"text": " \\pi(\\boldsymbol{y} |\\boldsymbol{x}, \\boldsymbol{\\theta}) = \\prod_{i \\in \\mathcal{I}}\\pi(y_i | \\eta_i, \\boldsymbol{\\theta}), "
},
{
"math_id": 7,
"text": "\\mathcal{I}"
},
{
"math_id": 8,
"text": "\\boldsymbol{y}"
},
{
"math_id": 9,
"text": "\\boldsymbol{\\eta}"
},
{
"math_id": 10,
"text": "\\boldsymbol{x}|\\boldsymbol{\\theta}"
},
{
"math_id": 11,
"text": " \\pi(\\boldsymbol{x} | \\boldsymbol{\\theta}) \\propto \\left| \\boldsymbol{Q_{\\theta}} \\right|^{1/2} \\exp \\left( -\\frac{1}{2} \\boldsymbol{x}^T \\boldsymbol{Q_{\\theta}} \\boldsymbol{x} \\right),"
},
{
"math_id": 12,
"text": "\\boldsymbol{Q_{\\theta}}"
},
{
"math_id": 13,
"text": "\\left| \\boldsymbol{Q_{\\theta}} \\right|"
},
{
"math_id": 14,
"text": "\\pi(\\boldsymbol{\\theta})"
},
{
"math_id": 15,
"text": "m=\\mathrm{dim}(\\boldsymbol{\\theta})"
},
{
"math_id": 16,
"text": "\n\\pi(\\boldsymbol{x}, \\boldsymbol{\\theta} | \\boldsymbol{y}) = \\frac{\\pi(\\boldsymbol{y} | \\boldsymbol{x}, \\boldsymbol{\\theta})\\pi(\\boldsymbol{x} | \\boldsymbol{\\theta}) \\pi(\\boldsymbol{\\theta}) }{\\pi(\\boldsymbol{y})},\n"
},
{
"math_id": 17,
"text": "\n\\begin{align}\n\\pi(\\boldsymbol{x}, \\boldsymbol{\\theta} | \\boldsymbol{y}) & \\propto \\pi(\\boldsymbol{\\theta})\\pi(\\boldsymbol{x}|\\boldsymbol{\\theta}) \\prod_i \\pi(y_i | \\eta_i, \\boldsymbol{\\theta}) \\\\\n& \\propto \\pi(\\boldsymbol{\\theta}) \\left| \\boldsymbol{Q_{\\theta}} \\right|^{1/2} \\exp \\left( -\\frac{1}{2} \\boldsymbol{x}^T \\boldsymbol{Q_{\\theta}} \\boldsymbol{x} + \\sum_i \\log \\left[\\pi(y_i | \\eta_i, \\boldsymbol{\\theta}) \\right] \\right). \n\\end{align}\n"
},
{
"math_id": 18,
"text": "\n\\begin{array}{rcl}\n\\pi(x_i | \\boldsymbol{y}) &=& \\int \\pi(x_i | \\boldsymbol{\\theta}, \\boldsymbol{y}) \\pi(\\boldsymbol{\\theta} | \\boldsymbol{y}) d\\boldsymbol{\\theta} \\\\\n\\pi(\\theta_j | \\boldsymbol{y}) &=& \\int \\pi(\\boldsymbol{\\theta} | \\boldsymbol{y}) d \\boldsymbol{\\theta}_{-j} ,\n\\end{array}\n"
},
{
"math_id": 19,
"text": "\\boldsymbol{\\theta}_{-j} = \\left(\\theta_1, \\dots, \\theta_{j-1}, \\theta_{j+1}, \\dots, \\theta_m \\right)"
},
{
"math_id": 20,
"text": "\n\\begin{array}{rcl}\n\\widetilde{\\pi}(x_i | \\boldsymbol{y}) &=& \\int \\widetilde{\\pi}(x_i | \\boldsymbol{\\theta}, \\boldsymbol{y}) \\widetilde{\\pi}(\\boldsymbol{\\theta} | \\boldsymbol{y}) d\\boldsymbol{\\theta} \\\\\n\\widetilde{\\pi}(\\theta_j | \\boldsymbol{y}) &=& \\int \\widetilde{\\pi}(\\boldsymbol{\\theta} | \\boldsymbol{y}) d \\boldsymbol{\\theta}_{-j} ,\n\\end{array}\n"
},
{
"math_id": 21,
"text": "\\widetilde{\\pi}(\\cdot | \\cdot)"
},
{
"math_id": 22,
"text": "\\pi(x_i | \\boldsymbol{y})"
},
{
"math_id": 23,
"text": "\\pi(\\boldsymbol{\\theta} | \\boldsymbol{y})"
},
{
"math_id": 24,
"text": "\\pi(x_i | \\boldsymbol{\\theta}, \\boldsymbol{y})"
},
{
"math_id": 25,
"text": "\n\\begin{align}\n\\widetilde{\\pi}(x_i | \\boldsymbol{y}) = \\sum_k \\widetilde{\\pi}\\left( x_i | \\boldsymbol{\\theta}_k, \\boldsymbol{y} \\right) \\times \\widetilde{\\pi}( \\boldsymbol{\\theta}_k | \\boldsymbol{y}) \\times \\Delta_k,\n\\end{align}\n"
},
{
"math_id": 26,
"text": "\\Delta_k"
},
{
"math_id": 27,
"text": "\\pi(\\theta_j | \\boldsymbol{y})"
},
{
"math_id": 28,
"text": "\\boldsymbol{\\theta}_{-j}"
},
{
"math_id": 29,
"text": "\\widetilde{\\pi}(\\boldsymbol{\\theta} | \\boldsymbol{y})"
},
{
"math_id": 30,
"text": "\n\\begin{align}\n{\\pi}( \\boldsymbol{\\theta} | \\boldsymbol{y}) = \\frac{\\pi\\left(\\boldsymbol{x}, \\boldsymbol{\\theta}, \\boldsymbol{y} \\right)}{\\pi\\left(\\boldsymbol{x} | \\boldsymbol{\\theta}, \\boldsymbol{y} \\right) \\pi(\\boldsymbol{y})},\n\\end{align}\n"
},
{
"math_id": 31,
"text": "\\widetilde{\\pi}( \\boldsymbol{\\theta} | \\boldsymbol{y})"
},
{
"math_id": 32,
"text": "\\boldsymbol{\\theta} = \\boldsymbol{\\theta}_k"
},
{
"math_id": 33,
"text": "\n\\begin{align}\n\\widetilde{\\pi}( \\boldsymbol{\\theta}_k | \\boldsymbol{y}) &\\propto \\left . \\frac{\\pi\\left(\\boldsymbol{x}, \\boldsymbol{\\theta}_k, \\boldsymbol{y} \\right)}{\\widetilde{\\pi}_G\\left(\\boldsymbol{x} | \\boldsymbol{\\theta}_k, \\boldsymbol{y} \\right)} \\right \\vert_{\\boldsymbol{x} = \\boldsymbol{x}^{*}(\\boldsymbol{\\theta}_k)}, \\\\\n& \\propto \\left . \\frac{\\pi(\\boldsymbol{y} | \\boldsymbol{x}, \\boldsymbol{\\theta}_k)\\pi(\\boldsymbol{x} | \\boldsymbol{\\theta}_k) \\pi(\\boldsymbol{\\theta}_k)}{\\widetilde{\\pi}_G\\left(\\boldsymbol{x} | \\boldsymbol{\\theta}_k, \\boldsymbol{y} \\right)} \\right \\vert_{\\boldsymbol{x} = \\boldsymbol{x}^{*}(\\boldsymbol{\\theta}_k)},\n\\end{align}\n"
},
{
"math_id": 34,
"text": "\\widetilde{\\pi}_G\\left(\\boldsymbol{x} | \\boldsymbol{\\theta}_k, \\boldsymbol{y} \\right)"
},
{
"math_id": 35,
"text": "{\\pi}\\left(\\boldsymbol{x} | \\boldsymbol{\\theta}_k, \\boldsymbol{y} \\right)"
},
{
"math_id": 36,
"text": "\\boldsymbol{\\theta}_k"
},
{
"math_id": 37,
"text": "\\boldsymbol{x}^{*}(\\boldsymbol{\\theta}_k)"
},
{
"math_id": 38,
"text": "{\\pi}( \\boldsymbol{\\theta} | \\boldsymbol{y})"
},
{
"math_id": 39,
"text": "\\boldsymbol{Q}_{\\boldsymbol{\\theta}_k}"
},
{
"math_id": 40,
"text": "\\widetilde{\\pi}( \\boldsymbol{\\theta}_k | \\boldsymbol{y})"
},
{
"math_id": 41,
"text": "{\\boldsymbol{\\theta}_k}"
},
{
"math_id": 42,
"text": "\\widetilde{\\pi}\\left( x_i | \\boldsymbol{\\theta}_k, \\boldsymbol{y} \\right)"
},
{
"math_id": 43,
"text": "\\widetilde{\\pi}(x_i | \\boldsymbol{y})"
}
]
| https://en.wikipedia.org/wiki?curid=70595668 |
70596336 | Kaniadakis statistics | Statistical physics approach
Kaniadakis statistics (also known as κ-statistics) is a generalization of Boltzmann–Gibbs statistical mechanics, based on a relativistic generalization of the classical Boltzmann–Gibbs–Shannon entropy (commonly referred to as Kaniadakis entropy or κ-entropy). Introduced by the Greek Italian physicist Giorgio Kaniadakis in 2001, κ-statistical mechanics preserve the main features of ordinary statistical mechanics and have attracted the interest of many researchers in recent years. The κ-distribution is currently considered one of the most viable candidates for explaining complex physical, natural or artificial systems involving power-law tailed statistical distributions. Kaniadakis statistics have been adopted successfully in the description of a variety of systems in the fields of cosmology, astrophysics, condensed matter, quantum physics, seismology, genomics, economics, epidemiology, and many others.
Mathematical formalism.
The mathematical formalism of κ-statistics is generated by κ-deformed functions, especially the κ-exponential function.
κ-exponential function.
The Kaniadakis exponential (or κ-exponential) function is a one-parameter generalization of an exponential function, given by:
formula_1
with formula_2.
The κ-exponential for formula_3 can also be written in the form:
formula_4
The first five terms of the Taylor expansion of formula_0 are given by: formula_5 where the first three are the same as those of the ordinary exponential function.
Basic properties
The κ-exponential function has the following properties of an exponential function:
formula_6
formula_7
formula_8
formula_9
formula_10
formula_11
formula_12
For a real number formula_13, the κ-exponential has the property:
formula_14.
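A short numerical sketch of the κ-exponential and of some of the properties listed above (the function and variable names are chosen for this example, not standard library calls):

```python
import numpy as np

def exp_kappa(x, kappa):
    """kappa-exponential for 0 <= kappa < 1; ordinary exponential at kappa = 0."""
    x = np.asarray(x, dtype=float)
    if kappa == 0:
        return np.exp(x)
    return (np.sqrt(1.0 + kappa**2 * x**2) + kappa * x) ** (1.0 / kappa)

x, kappa, r = np.linspace(-2.0, 2.0, 9), 0.3, 2.5
assert np.isclose(exp_kappa(0.0, kappa), 1.0)                               # exp_k(0) = 1
assert np.allclose(exp_kappa(x, kappa) * exp_kappa(-x, kappa), 1.0)         # exp_k(x) exp_k(-x) = 1
assert np.allclose(exp_kappa(x, kappa) ** r, exp_kappa(r * x, kappa / r))   # scaling property
print(exp_kappa(1.0, kappa), np.exp(1.0))   # compare with the ordinary exponential
```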
κ-logarithm function.
The Kaniadakis logarithm (or κ-logarithm) is a relativistic one-parameter generalization of the ordinary logarithm function,
formula_16
with formula_17, which is the inverse function of the κ-exponential:
formula_18
The κ-logarithm for formula_3 can also be written in the form:
formula_19
The first three terms of the Taylor expansion of formula_15 are given by:
formula_20
following the rule
formula_21
with formula_22, and
formula_23
where formula_24 and formula_25. The first two terms of the Taylor expansion of formula_15 are the same as those of the ordinary logarithmic function.
Basic properties
The κ-logarithm function has the following properties of a logarithmic function:
formula_26
formula_27
formula_28
formula_29
formula_30
formula_31
formula_32
For a real number formula_13, the κ-logarithm has the property:
formula_33
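A corresponding sketch of the κ-logarithm, numerically checking some of the properties listed above:

```python
import numpy as np

def ln_kappa(x, kappa):
    """kappa-logarithm for 0 < kappa < 1; ordinary logarithm at kappa = 0."""
    x = np.asarray(x, dtype=float)
    if kappa == 0:
        return np.log(x)
    return (x**kappa - x**(-kappa)) / (2.0 * kappa)

u, kappa, r = np.linspace(0.1, 5.0, 9), 0.4, 1.5
assert np.isclose(ln_kappa(1.0, kappa), 0.0)                              # ln_k(1) = 0
assert np.allclose(ln_kappa(1.0 / u, kappa), -ln_kappa(u, kappa))         # ln_k(1/x) = -ln_k(x)
assert np.allclose(ln_kappa(u**r, kappa), r * ln_kappa(u, r * kappa))     # ln_k(x^r) = r ln_{rk}(x)
```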
κ-Algebra.
κ-sum.
For any formula_34 and formula_35, the Kaniadakis sum (or κ-sum) is defined by the following composition law:
formula_36,
that can also be written in form:
formula_37,
where the ordinary sum is a particular case in the classical limit formula_38: formula_39.
The κ-sum, like the ordinary sum, has the following properties:
formula_40
formula_41
formula_42
formula_43
The κ-difference formula_44 is given by formula_45.
The fundamental property formula_46 arises as a special case of the more general expression below: formula_47
Furthermore, the κ-functions and the κ-sum present the following relationships:
formula_48
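The composition law linking the κ-exponential and the κ-sum can be checked numerically with the short sketch below (the values of κ, x and y are arbitrary):

```python
import numpy as np

def exp_kappa(x, kappa):
    x = np.asarray(x, dtype=float)
    return (np.sqrt(1.0 + kappa**2 * x**2) + kappa * x) ** (1.0 / kappa)

def kappa_sum(x, y, kappa):
    # kappa-sum: x*sqrt(1 + k^2 y^2) + y*sqrt(1 + k^2 x^2)
    return x * np.sqrt(1.0 + kappa**2 * y**2) + y * np.sqrt(1.0 + kappa**2 * x**2)

kappa, x, y = 0.25, 1.3, -0.7
lhs = exp_kappa(x, kappa) * exp_kappa(y, kappa)
rhs = exp_kappa(kappa_sum(x, y, kappa), kappa)
assert np.isclose(lhs, rhs)
print(kappa_sum(x, y, kappa), "vs ordinary sum", x + y)   # tends to x + y as kappa -> 0
```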
κ-product.
For any formula_34 and formula_35, the Kaniadakis product (or κ-product) is defined by the following composition law:
formula_49,
where the ordinary product is a particular case in the classical limit formula_38: formula_50.
The κ-product, like the ordinary product, has the following properties:
formula_51
formula_52
formula_53
formula_54
The κ-division formula_55 is given by formula_56.
The κ-sum formula_57 and the κ-product formula_58 obey the distributive law: formula_59.
The fundamental property formula_60 arises as a special case of the more general expression below:
formula_61
Furthermore, the κ-functions and the κ-product present the following relationships:
formula_62
formula_63
κ-Calculus.
κ-Differential.
The Kaniadakis differential (or κ-differential) of formula_64 is defined by:
formula_65.
So, the κ-derivative of a function formula_66 is related to the Leibniz derivative through:
formula_67,
where formula_68 is the Lorentz factor. The ordinary derivative formula_69 is a particular case of κ-derivative formula_70 in the classical limit formula_71.
κ-Integral.
The Kaniadakis integral (or κ-integral) is the inverse operator of the κ-derivative defined through
formula_72,
which recovers the ordinary integral in the classical limit formula_71.
κ-Trigonometry.
κ-Cyclic Trigonometry.
The Kaniadakis cyclic trigonometry (or κ-cyclic trigonometry) is based on the κ-cyclic sine (or κ-sine) and κ-cyclic cosine (or κ-cosine) functions defined by:
formula_73,
formula_74,
where the κ-generalized Euler formula is
formula_75.
The κ-cyclic trigonometry preserves fundamental expressions of the ordinary cyclic trigonometry, which is a special case in the limit κ → 0, such as:
formula_76
formula_77.
The κ-cyclic tangent and κ-cyclic cotangent functions are given by:
formula_78
formula_79.
The κ-cyclic trigonometric functions become the ordinary trigonometric function in the classical limit formula_71.
κ-Inverse cyclic function
The Kaniadakis inverse cyclic functions (or κ-inverse cyclic functions) are associated to the κ-logarithm:
formula_80,
formula_81,
formula_82,
formula_83.
κ-Hyperbolic Trigonometry.
The Kaniadakis hyperbolic trigonometry (or κ-hyperbolic trigonometry) is based on the κ-hyperbolic sine and κ-hyperbolic cosine given by:
formula_84,
formula_85,
where the κ-Euler formula is
formula_86.
The κ-hyperbolic tangent and κ-hyperbolic cotangent functions are given by:
formula_87
formula_88.
The κ-hyperbolic trigonometric functions become the ordinary hyperbolic trigonometric functions in the classical limit formula_71.
From the κ-Euler formula and the property formula_46 the fundamental expression of κ-hyperbolic trigonometry is given as follows:
formula_89
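Assuming the usual definitions of the κ-hyperbolic functions in terms of the κ-exponential, sinh_κ(x) = [exp_κ(x) - exp_κ(-x)]/2 and cosh_κ(x) = [exp_κ(x) + exp_κ(-x)]/2, the identity cosh_κ(x)^2 - sinh_κ(x)^2 = exp_κ(x) exp_κ(-x) = 1 can be verified numerically:

```python
import numpy as np

def exp_kappa(x, kappa):
    x = np.asarray(x, dtype=float)
    return (np.sqrt(1.0 + kappa**2 * x**2) + kappa * x) ** (1.0 / kappa)

def sinh_kappa(x, kappa):
    return 0.5 * (exp_kappa(x, kappa) - exp_kappa(-x, kappa))

def cosh_kappa(x, kappa):
    return 0.5 * (exp_kappa(x, kappa) + exp_kappa(-x, kappa))

x, kappa = np.linspace(-2.0, 2.0, 9), 0.35
# cosh_k^2 - sinh_k^2 = exp_k(x) * exp_k(-x) = 1
assert np.allclose(cosh_kappa(x, kappa)**2 - sinh_kappa(x, kappa)**2, 1.0)
```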
κ-Inverse hyperbolic function
The Kaniadakis inverse hyperbolic functions (or κ-inverse hyperbolic functions) are associated to the κ-logarithm:
formula_90,
formula_91,
formula_92,
formula_93,
in which are valid the following relations:
formula_94,
formula_95,
formula_96.
The κ-cyclic and κ-hyperbolic trigonometric functions are connected by the following relationships:
formula_97,
formula_98,
formula_99,
formula_100,
formula_101,
formula_102,
formula_103,
formula_104.
Kaniadakis entropy.
The Kaniadakis statistics is based on the Kaniadakis κ-entropy, which is defined through:
formula_105
where formula_106 is a probability distribution function defined for a random variable formula_107, and formula_108 is the entropic index.
The Kaniadakis κ-entropy is thermodynamically and Lesche stable and obeys the Shannon-Khinchin axioms of continuity, maximality, generalized additivity and expandability.
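A minimal numerical sketch, assuming the common discrete form of the κ-entropy, S_kappa = -sum_i p_i ln_kappa(p_i); the precise definition referenced above may differ in its continuous form or normalization.

```python
import numpy as np

def ln_kappa(p, kappa):
    return (p**kappa - p**(-kappa)) / (2.0 * kappa)

def kaniadakis_entropy(p, kappa):
    # Assumed discrete form S_kappa = -sum_i p_i ln_kappa(p_i); reduces to the
    # Boltzmann-Gibbs-Shannon entropy as kappa -> 0.
    p = np.asarray(p, dtype=float)
    if kappa == 0:
        return -np.sum(p * np.log(p))
    return -np.sum(p * ln_kappa(p, kappa))

p = np.array([0.5, 0.25, 0.15, 0.10])
print(kaniadakis_entropy(p, 0.3))     # kappa-entropy for kappa = 0.3
print(kaniadakis_entropy(p, 0.0))     # Shannon limit
```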
Kaniadakis distributions.
A Kaniadakis distribution (or "κ"-distribution) is a probability distribution derived from the maximization of Kaniadakis entropy under appropriate constraints. In this regard, several probability distributions emerge for analyzing a wide variety of phenomenology associated with experimental power-law tailed statistical distributions.
Kaniadakis integral transform.
κ-Laplace Transform.
The Kaniadakis Laplace transform (or κ-Laplace transform) is a κ-deformed integral transform of the ordinary Laplace transform. The κ-Laplace transform converts a function formula_109 of a real variable formula_110 to a new function formula_111 in the complex frequency domain, represented by the complex variable formula_112. This κ-integral transform is defined as:
formula_113
The inverse κ-Laplace transform is given by:
formula_114
The ordinary Laplace transform and its inverse transform are recovered as formula_71.
Properties
Given two functions formula_115 and formula_116 and their respective κ-Laplace transforms formula_111 and formula_117, the following table presents the main properties of the κ-Laplace transform:
The κ-Laplace transforms presented in the latter table reduce to the corresponding ordinary Laplace transforms in the classical limit formula_71.
κ-Fourier Transform.
The Kaniadakis Fourier transform (or κ-Fourier transform) is a κ-deformed integral transform of the ordinary Fourier transform, which is consistent with the κ-algebra and the κ-calculus. The κ-Fourier transform is defined as:
formula_118
which can be rewritten as
formula_119
where formula_120 and formula_121. The κ-Fourier transform imposes an asymptotically log-periodic behavior by deforming the parameters formula_64 and formula_122 in addition to a damping factor, namely formula_123.
The kernel of the κ-Fourier transform is given by:
formula_124
The inverse κ-Fourier transform is defined as:
formula_125
Let formula_126; the following table shows the κ-Fourier transforms of several notable functions:
The κ-deformed version of the Fourier transform preserves the main properties of the ordinary Fourier transform, as summarized in the following table.
The properties of the κ-Fourier transform presented in the latter table reduce to the corresponding ordinary Fourier transforms in the classical limit formula_71.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\exp_\\kappa(x)\n"
},
{
"math_id": 1,
"text": "\\exp_{\\kappa} (x) = \\begin{cases}\n\\Big(\\sqrt{1+\\kappa^2 x^2}+\\kappa x \\Big)^\\frac{1}{\\kappa} & \\text{if } 0 < \\kappa < 1. \\\\[6pt]\n\\exp(x) & \\text{if }\\kappa = 0, \\\\[8pt]\n\\end{cases}\n"
},
{
"math_id": 2,
"text": "\\exp_{-\\kappa} (x) = \\exp_{\\kappa} (x)\n"
},
{
"math_id": 3,
"text": "0 < \\kappa < 1\n"
},
{
"math_id": 4,
"text": "\\exp_{\\kappa} (x) = \\exp\\Bigg(\\frac{1}{\\kappa} \\text{arcsinh} (\\kappa x)\\Bigg).\n"
},
{
"math_id": 5,
"text": "\\exp_{\\kappa} (x) = 1 + x + \\frac{x^2}{2} + (1 - \\kappa^2) \\frac{x^3}{3!} + (1 - 4 \\kappa^2) \\frac{x^4}{4!} + \\cdots\n"
},
{
"math_id": 6,
"text": "\\exp_{\\kappa} (x) \\in \\mathbb{C}^\\infty(\\mathbb{R})\n"
},
{
"math_id": 7,
"text": "\\frac{d}{dx}\\exp_{\\kappa} (x) > 0\n"
},
{
"math_id": 8,
"text": "\\frac{d^2}{dx^2}\\exp_{\\kappa} (x) > 0\n"
},
{
"math_id": 9,
"text": "\\exp_{\\kappa} (-\\infty) = 0^+\n"
},
{
"math_id": 10,
"text": "\\exp_{\\kappa} (0) = 1\n"
},
{
"math_id": 11,
"text": "\\exp_{\\kappa} (+\\infty) = +\\infty\n"
},
{
"math_id": 12,
"text": "\\exp_{\\kappa} (x) \\exp_{\\kappa} (-x) = -1\n"
},
{
"math_id": 13,
"text": "r\n"
},
{
"math_id": 14,
"text": "\\Big[\\exp_{\\kappa} (x)\\Big]^r = \\exp_{\\kappa/r} (rx)\n"
},
{
"math_id": 15,
"text": "\\ln_\\kappa(x)\n"
},
{
"math_id": 16,
"text": "\\ln_{\\kappa} (x) = \\begin{cases}\n\\frac{x^\\kappa - x^{-\\kappa}}{2\\kappa} & \\text{if } 0 < \\kappa < 1, \\\\[8pt]\n\\ln(x) & \\text{if }\\kappa = 0, \\\\[8pt]\n\\end{cases}\n"
},
{
"math_id": 17,
"text": "\\ln_{-\\kappa} (x) = \\ln_{\\kappa} (x)\n"
},
{
"math_id": 18,
"text": "\\ln_{\\kappa}\\Big( \\exp_{\\kappa}(x)\\Big) = \\exp_{\\kappa}\\Big( \\ln_{\\kappa}(x)\\Big) = x."
},
{
"math_id": 19,
"text": "\\ln_{\\kappa}(x) = \\frac{1}{\\kappa}\\sinh\\Big(\\kappa \\ln(x)\\Big)\n"
},
{
"math_id": 20,
"text": "\\ln_{\\kappa} (1+x) = x - \\frac{x^2}{2} + \\left( 1 + \\frac{\\kappa^2}{2}\\right) \\frac{x^3}{3} - \\cdots\n"
},
{
"math_id": 21,
"text": " \\ln_{\\kappa}(1+x) = \\sum_{n=1}^{\\infty} b_n(\\kappa)\\,(-1)^{n-1}\n\\,\\frac{x^n}{n} "
},
{
"math_id": 22,
"text": " b_1(\\kappa)= 1"
},
{
"math_id": 23,
"text": " b_{n}(\\kappa) (x) = \\begin{cases}\n1 & \\text{if } n = 1, \\\\[8pt]\n\\frac{1}{2}\\Big(1-\\kappa\\Big)\\Big(1-\\frac{\\kappa}{2}\\Big)...\n\\Big(1-\\frac{\\kappa}{n-1}\\Big) ,\\,+\\,\\frac{1}{2}\\Big(1+\\kappa\\Big)\\Big(1+\\frac{\\kappa}{2}\\Big)...\n\\Big(1+\\frac{\\kappa}{n-1}\\Big) & \\text{for } n > 1, \\\\[8pt]\n\\end{cases}\n "
},
{
"math_id": 24,
"text": " b_n(0)=1 "
},
{
"math_id": 25,
"text": " b_n(-\\kappa)=b_n(\\kappa) "
},
{
"math_id": 26,
"text": "\\ln_{\\kappa} (x) \\in \\mathbb{C}^\\infty(\\mathbb{R}^+)\n"
},
{
"math_id": 27,
"text": "\\frac{d}{dx}\\ln_{\\kappa} (x) > 0\n"
},
{
"math_id": 28,
"text": "\\frac{d^2}{dx^2}\\ln_{\\kappa} (x) < 0\n"
},
{
"math_id": 29,
"text": "\\ln_{\\kappa} (0^+) = -\\infty\n"
},
{
"math_id": 30,
"text": "\\ln_{\\kappa} (1) = 0\n"
},
{
"math_id": 31,
"text": "\\ln_{\\kappa} (+\\infty) = +\\infty\n"
},
{
"math_id": 32,
"text": "\\ln_{\\kappa} (1/x) = -\\ln_{\\kappa} (x)\n"
},
{
"math_id": 33,
"text": "\\ln_{\\kappa} (x^r) = r \\ln_{r \\kappa} (x)\n"
},
{
"math_id": 34,
"text": "x,y \\in \\mathbb{R}"
},
{
"math_id": 35,
"text": "|\\kappa| < 1"
},
{
"math_id": 36,
"text": "x\\stackrel{\\kappa}{\\oplus}y=x\\sqrt{1+\\kappa^2y^2}+y\\sqrt{1+\\kappa^2x^2} \n"
},
{
"math_id": 37,
"text": "x\\stackrel{\\kappa}{\\oplus}y={1\\over\\kappa}\\,\\sinh\n\\left({\\rm arcsinh}\\,(\\kappa x)\\,+\\,{\\rm\narcsinh}\\,(\\kappa y)\\,\\right)\n"
},
{
"math_id": 38,
"text": "\\kappa \\rightarrow 0\n"
},
{
"math_id": 39,
"text": "x\\stackrel{0}{\\oplus}y=x + y\n"
},
{
"math_id": 40,
"text": "\\text{1. associativity:} \\quad (x\\stackrel{\\kappa}{\\oplus}y)\\stackrel{\\kappa}{\\oplus}z\n=x \\stackrel{\\kappa}{\\oplus} (y \\stackrel{\\kappa}{\\oplus} z) \n"
},
{
"math_id": 41,
"text": "\\text{2. neutral element:} \\quad x \\stackrel{\\kappa}{\\oplus} 0 = 0\n\n\\stackrel{\\kappa}{\\oplus}x=x \n"
},
{
"math_id": 42,
"text": "\\text{3. opposite element:} \\quad x\\stackrel{\\kappa}{\\oplus}(-x)=(-x) \\stackrel{\\kappa}{\\oplus}x=0 \n"
},
{
"math_id": 43,
"text": "\\text{4. commutativity:} \\quad x\\stackrel{\\kappa}{\\oplus}y=y\\stackrel{\\kappa}{\\oplus}x \n"
},
{
"math_id": 44,
"text": "\\stackrel{\\kappa}{\\ominus}"
},
{
"math_id": 45,
"text": "x\\stackrel{\\kappa}{\\ominus}y=x\\stackrel{\\kappa}{\\oplus}(-y)"
},
{
"math_id": 46,
"text": "\\exp_{\\kappa}(-x)\\exp_{\\kappa}(x)=1"
},
{
"math_id": 47,
"text": "\\exp_{\\kappa}(x)\\exp_{\\kappa}(y)=exp_\\kappa(x\\stackrel{\\kappa}{\\oplus}y) \n"
},
{
"math_id": 48,
"text": "\\ln_\\kappa(x\\,y) = \\ln_\\kappa(x) \\stackrel{\\kappa}{\\oplus}\\ln_\\kappa(y)\n"
},
{
"math_id": 49,
"text": "x\\stackrel{\\kappa}{\\otimes}y={1\\over\\kappa}\\,\\sinh\n\\left(\\,{1\\over\\kappa}\\,\\,{\\rm arcsinh}\\,(\\kappa x)\\,\\,{\\rm\narcsinh}\\,(\\kappa y)\\,\\right)\n"
},
{
"math_id": 50,
"text": "x\\stackrel{0}{\\otimes}y=x \\times y=xy\n"
},
{
"math_id": 51,
"text": "\\text{1. associativity:} \\quad (x \\stackrel{\\kappa}{\\otimes}y)\n \\stackrel{\\kappa}{\\otimes}z=x\n \\stackrel{\\kappa}{\\otimes}(y\n \\stackrel{\\kappa}{\\otimes}z) \n"
},
{
"math_id": 52,
"text": "\\text{2. neutral element:} \\quad x \\stackrel{\\kappa}{\\otimes}I=I\n \\stackrel{\\kappa}{\\otimes}x= x \\quad \\text{for} \\quad I=\\kappa^{-1}\\sinh\n\\kappa \\stackrel{\\kappa}{\\oplus}x=x \n"
},
{
"math_id": 53,
"text": "\\text{3. inverse element:} \\quad x \\stackrel{\\kappa}{\\otimes}\\overline x= \\overline x\n\\stackrel{\\kappa}{\\otimes}x=I \\quad \\text{for} \\quad \\overline\nx=\\kappa^{-1}\\sinh(\\kappa^2/{\\rm arcsinh} \\,(\\kappa x)) \n"
},
{
"math_id": 54,
"text": "\\text{4. commutativity:} \\quad x\\stackrel{\\kappa}{\\otimes}y=y\\stackrel{\\kappa}{\\otimes}x \n"
},
{
"math_id": 55,
"text": "\\stackrel{\\kappa}{\\oslash}"
},
{
"math_id": 56,
"text": "x\\stackrel{\\kappa}{\\oslash}y=x\\stackrel{\\kappa}{\\otimes}\\overline\ny"
},
{
"math_id": 57,
"text": "\\stackrel{\\kappa}{\\oplus}"
},
{
"math_id": 58,
"text": "\\stackrel{\\kappa}{\\otimes}"
},
{
"math_id": 59,
"text": "z\\stackrel{\\kappa}{\\otimes}(x \\stackrel{\\kappa}{\\oplus}y) = (z\n\\stackrel{\\kappa}{\\otimes}x) \\stackrel{\\kappa}{\\oplus}(z\n\\stackrel{\\kappa}{\\otimes}y) \n"
},
{
"math_id": 60,
"text": "\\ln_{\\kappa}(1/x)=-\\ln_{\\kappa}(x)"
},
{
"math_id": 61,
"text": "\\ln_\\kappa(x\\,y) = \\ln_\\kappa(x)\\stackrel{\\kappa}{\\oplus} \\ln_\\kappa(y) \n"
},
{
"math_id": 62,
"text": "\\exp_\\kappa(x) \\stackrel{\\kappa}{\\otimes} \\exp_\\kappa(y) = \\exp_\\kappa(x\\,+\\,y)\n"
},
{
"math_id": 63,
"text": "\\ln_\\kappa(x\\,\\stackrel{\\kappa}{\\otimes}\\,y) = \\ln_\\kappa(x) + \\ln_\\kappa(y)\n"
},
{
"math_id": 64,
"text": "x"
},
{
"math_id": 65,
"text": "\\mathrm{d}_{\\kappa}x= \\frac{\\mathrm{d}\\,x}{\\displaystyle{\\sqrt{1+\\kappa^2\\,x^2} }}\n"
},
{
"math_id": 66,
"text": "f(x)"
},
{
"math_id": 67,
"text": " \\frac{\\mathrm{d} f(x)}{\\mathrm{d}_{\\kappa}x} = \\gamma_\\kappa (x) \\frac{\\mathrm{d} f(x)}{\\mathrm{d} x} "
},
{
"math_id": 68,
"text": " \\gamma_\\kappa(x) = \\sqrt{1+\\kappa^2 x^2}"
},
{
"math_id": 69,
"text": "\\frac{\\mathrm{d} f(x)}{\\mathrm{d} x} "
},
{
"math_id": 70,
"text": "\\frac{\\mathrm{d} f(x)}{\\mathrm{d}_{\\kappa}x}"
},
{
"math_id": 71,
"text": "\\kappa \\rightarrow 0"
},
{
"math_id": 72,
"text": "\\int \\mathrm{d}_{\\kappa}x \\,\\, f(x)= \\int \\frac{\\mathrm{d}\\, x}{\\sqrt{1+\\kappa^2\\,x^2}}\\,\\,f(x) "
},
{
"math_id": 73,
"text": "\\sin_{\\kappa}(x) =\\frac{\\exp_{\\kappa}(ix) -\\exp_{\\kappa}(-ix)}{2i} "
},
{
"math_id": 74,
"text": "\\cos_{\\kappa}(x) =\\frac{\\exp_{\\kappa}(ix) +\\exp_{\\kappa}(-ix)}{2} "
},
{
"math_id": 75,
"text": " \\exp_{\\kappa}(\\pm ix)=\\cos_{\\kappa}(x)\\pm i\\sin_{\\kappa}(x) "
},
{
"math_id": 76,
"text": "\\cos_{\\kappa}^2(x) + \\sin_{\\kappa}^2(x)=1 "
},
{
"math_id": 77,
"text": "\\sin_{\\kappa}(x \\stackrel{\\kappa}{\\oplus} y) = \\sin_{\\kappa}(x)\\cos_{\\kappa}(y) + \\cos_{\\kappa}(x)\\sin_{\\kappa}(y) "
},
{
"math_id": 78,
"text": " \\tan_{\\kappa}(x)=\\frac{\\sin_{\\kappa}(x)}{\\cos_{\\kappa}(x)} "
},
{
"math_id": 79,
"text": " \\cot_{\\kappa}(x)=\\frac{\\cos_{\\kappa}(x)}{\\sin_{\\kappa}(x)} "
},
{
"math_id": 80,
"text": " {\\rm arcsin}_{\\kappa}(x)=-i\\ln_{\\kappa}\\left(\\sqrt{1-x^2}+ix\\right) "
},
{
"math_id": 81,
"text": " {\\rm arccos}_{\\kappa}(x)=-i\\ln_{\\kappa}\\left(\\sqrt{x^2-1}+x\\right) "
},
{
"math_id": 82,
"text": " {\\rm arctan}_{\\kappa}(x)=i\\ln_{\\kappa}\\left(\\sqrt{\\frac{1-ix}{1+ix}}\\right) "
},
{
"math_id": 83,
"text": " {\\rm arccot}_{\\kappa}(x)=i\\ln_{\\kappa}\\left(\\sqrt{\\frac{ix+1}{ix-1}}\\right) "
},
{
"math_id": 84,
"text": "\\sinh_{\\kappa}(x) =\\frac{\\exp_{\\kappa}(x) -\\exp_{\\kappa}(-x)}{2} "
},
{
"math_id": 85,
"text": "\\cosh_{\\kappa}(x) =\\frac{\\exp_{\\kappa}(x) +\\exp_{\\kappa}(-x)}{2} "
},
{
"math_id": 86,
"text": " \\exp_{\\kappa}(\\pm x)=\\cosh_{\\kappa}(x)\\pm \\sinh_{\\kappa}(x) "
},
{
"math_id": 87,
"text": " \\tanh_{\\kappa}(x)=\\frac{\\sinh_{\\kappa}(x)}{\\cosh_{\\kappa}(x)} "
},
{
"math_id": 88,
"text": " \\coth_{\\kappa}(x)=\\frac{\\cosh_{\\kappa}(x)}{\\sinh_{\\kappa}(x)} "
},
{
"math_id": 89,
"text": "\\cosh_{\\kappa}^2(x)- \\sinh_{\\kappa}^2(x)=1\n"
},
{
"math_id": 90,
"text": " {\\rm arcsinh}_{\\kappa}(x)=\\ln_{\\kappa}\\left(\\sqrt{1+x^2}+x\\right) "
},
{
"math_id": 91,
"text": " {\\rm arccosh}_{\\kappa}(x)=\\ln_{\\kappa}\\left(\\sqrt{x^2-1}+x\\right) "
},
{
"math_id": 92,
"text": " {\\rm arctanh}_{\\kappa}(x)=\\ln_{\\kappa}\\left(\\sqrt{\\frac{1+x}{1-x}}\\right) "
},
{
"math_id": 93,
"text": " {\\rm arccoth}_{\\kappa}(x)=\\ln_{\\kappa}\\left(\\sqrt{\\frac{1-x}{1+x}}\\right) "
},
{
"math_id": 94,
"text": " {\\rm arcsinh}_{\\kappa}(x) = {\\rm sign}(x){\\rm arccosh}_{\\kappa}\\left(\\sqrt{1+x^2}\\right) "
},
{
"math_id": 95,
"text": " {\\rm arcsinh}_{\\kappa}(x) = {\\rm arctanh}_{\\kappa}\\left(\\frac{x}{\\sqrt{1+x^2}}\\right) "
},
{
"math_id": 96,
"text": " {\\rm arcsinh}_{\\kappa}(x) = {\\rm arccoth}_{\\kappa}\\left(\\frac{\\sqrt{1+x^2}}{x}\\right) "
},
{
"math_id": 97,
"text": " {\\rm sin}_{\\kappa}(x) = -i{\\rm sinh}_{\\kappa}(ix) "
},
{
"math_id": 98,
"text": " {\\rm cos}_{\\kappa}(x) = {\\rm cosh}_{\\kappa}(ix) "
},
{
"math_id": 99,
"text": " {\\rm tan}_{\\kappa}(x) = -i{\\rm tanh}_{\\kappa}(ix) "
},
{
"math_id": 100,
"text": " {\\rm cot}_{\\kappa}(x) = i{\\rm coth}_{\\kappa}(ix) "
},
{
"math_id": 101,
"text": " {\\rm arcsin}_{\\kappa}(x)=-i\\,{\\rm arcsinh}_{\\kappa}(ix) "
},
{
"math_id": 102,
"text": " {\\rm arccos}_{\\kappa}(x)\\neq -i\\,{\\rm arccosh}_{\\kappa}(ix) "
},
{
"math_id": 103,
"text": " {\\rm arctan}_{\\kappa}(x)=-i\\,{\\rm arctanh}_{\\kappa}(ix) "
},
{
"math_id": 104,
"text": " {\\rm arccot}_{\\kappa}(x)=i\\,{\\rm arccoth}_{\\kappa}(ix) "
},
{
"math_id": 105,
"text": "S_\\kappa \\big(p\\big) = -\\sum_i p_i \\ln_{\\kappa}\\big(p_i\\big) = \\sum_i p_i \\ln_{\\kappa}\\bigg(\\frac{1}{p_i} \\bigg)"
},
{
"math_id": 106,
"text": "p = \\{p_i = p(x_i); x \\in \\mathbb{R}; i = 1, 2, ..., N; \\sum_i p_i = 1\\}"
},
{
"math_id": 107,
"text": "X"
},
{
"math_id": 108,
"text": "0 \\leq |\\kappa| < 1"
},
{
"math_id": 109,
"text": "f"
},
{
"math_id": 110,
"text": "t"
},
{
"math_id": 111,
"text": "F_\\kappa(s)"
},
{
"math_id": 112,
"text": "s"
},
{
"math_id": 113,
"text": "\nF_{\\kappa}(s)={\\cal L}_{\\kappa}\\{f(t)\\}(s)=\\int_{\\, 0}^{\\infty}\\!f(t) \\,[\\exp_{\\kappa}(-t)]^s\\,dt\n"
},
{
"math_id": 114,
"text": "\nf(t)={\\cal L}^{-1}_{\\kappa}\\{F_{\\kappa}(s)\\}(t)={\\frac{1}{2\\pi i}\\int_{c-i \\infty}^{c+i \\infty}\\!F_{\\kappa}(s) \\,\\frac{[\\exp_{\\kappa}(t)]^s}{\\sqrt{1+\\kappa^2t^2}}\\,ds}\n"
},
{
"math_id": 115,
"text": "f(t) = {\\cal L}^{-1}_{\\kappa}\\{F_{\\kappa}(s)\\}(t)"
},
{
"math_id": 116,
"text": "g(t) = {\\cal L}^{-1}_{\\kappa}\\{G_{\\kappa}(s)\\}(t)"
},
{
"math_id": 117,
"text": "G_\\kappa(s)"
},
{
"math_id": 118,
"text": "\n{\\cal F}_\\kappa[f(x)](\\omega)={1\\over\\sqrt{2\\,\\pi}}\\int\\limits_{-\\infty}\\limits^{+\\infty}f(x)\\,\n\\exp_\\kappa(-x\\otimes_\\kappa\\omega)^i\\,d_\\kappa x \n"
},
{
"math_id": 119,
"text": "\n{\\cal F}_\\kappa[f(x)](\\omega)={1\\over\\sqrt{2\\,\\pi}}\\int\\limits_{-\\infty}\\limits^{+\\infty}f(x)\\,\n{\\exp(-i\\,x_{\\{\\kappa\\}}\\,\\omega_{\\{\\kappa\\}})\\over\\sqrt{1+\\kappa^2\\,x^2}} \\,d x\n"
},
{
"math_id": 120,
"text": "x_{\\{\\kappa\\}}=\\frac{1}{\\kappa}\\, {\\rm arcsinh}\n\\,(\\kappa\\,x)"
},
{
"math_id": 121,
"text": "\\omega_{\\{\\kappa\\}}=\\frac{1}{\\kappa}\\, {\\rm arcsinh}\n\\,(\\kappa\\,\\omega)"
},
{
"math_id": 122,
"text": "\\omega"
},
{
"math_id": 123,
"text": "\\sqrt{1+\\kappa^2\\,x^2}"
},
{
"math_id": 124,
"text": "\nh_\\kappa(x,\\omega) = \\frac{\\exp(-i\\,x_{\\{\\kappa\\}}\\,\\omega_{\\{\\kappa\\}})}\\sqrt{1+\\kappa^2\\,x^2}\n"
},
{
"math_id": 125,
"text": "\n{\\cal F}_\\kappa[\\hat f(\\omega)](x)={1\\over\\sqrt{2\\,\\pi}}\\int\\limits_{-\\infty}\\limits^{+\\infty}\\hat f(\\omega)\\,\n\\exp_\\kappa(\\omega \\otimes_\\kappa x)^i\\,d_\\kappa \\omega \n"
},
{
"math_id": 126,
"text": "u_\\kappa(x) = \\frac 1 \\kappa \\cosh\\Big(\\kappa\\ln(x) \\Big)"
}
]
| https://en.wikipedia.org/wiki?curid=70596336 |
70598218 | Matching logic | Matching logic is a formal logic mainly used to reason about the correctness of computer programs. Its operators use pattern matching to operate on the power set of states, rather than the set of states. It was created by Grigore Roșu and is used in the K Framework.
Overview.
Matching logic operates on patterns. Statements evaluate to the set of values that "match" them, not to true or false.
Given a signature formula_0 (a set of symbols), a pattern can be one of the following:
a variable formula_1;
a structure formula_3, where formula_2 and each argument is itself a pattern;
a negation formula_4;
a conjunction formula_5;
an existential quantification formula_6.
A matching logic may also have a set formula_7 of sorts. In that case, each pattern belongs to a particular sort. Structures can be used to combine patterns of different sorts together. Some examples of sorts used when working with program semantics might be "32-bit integer values", "stack frames", or "heap memory".
Some derived concepts are defined as:
formula_8
formula_9
formula_10
formula_11
formula_12
formula_13
formula_14 is matched by all elements. formula_15 is matched by none.
"One should be careful when reasoning with such non-classic logics, as basic intuitions may deceive."
When interpreting matching logic (that is, defining its semantic meaning), a pattern is modeled with a power set. The statement's interpretation is the set of values that match the pattern.
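As an illustration of this power-set interpretation, the sketch below is a hypothetical toy encoding (not taken from the K Framework or any published matching logic tool): it represents a small fragment of the pattern syntax (nullary symbols, negation and conjunction) as Python classes and evaluates each pattern to the set of elements of a finite carrier set that match it.
<syntaxhighlight lang="python">
from dataclasses import dataclass

@dataclass
class Sym:          # nullary symbol, interpreted as a subset of the carrier set
    name: str

@dataclass
class Neg:          # negation: matched by everything the operand does not match
    p: object

@dataclass
class And:          # conjunction: matched by the intersection of the two operands
    p1: object
    p2: object

def evaluate(pattern, carrier, interp):
    """Return the set of carrier elements that match the pattern."""
    if isinstance(pattern, Sym):
        return interp[pattern.name]
    if isinstance(pattern, Neg):
        return carrier - evaluate(pattern.p, carrier, interp)
    if isinstance(pattern, And):
        return evaluate(pattern.p1, carrier, interp) & evaluate(pattern.p2, carrier, interp)
    raise TypeError(pattern)

carrier = {0, 1, 2, 3}
interp = {"even": {0, 2}, "small": {0, 1}}
# "even and not small" is matched by exactly {2}, not by "true" or "false"
print(evaluate(And(Sym("even"), Neg(Sym("small"))), carrier, interp))
</syntaxhighlight>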
Matching μ-Logic.
Matching formula_16-logic adds a fixed-point operator formula_16.
Applications.
Matching logic is used with reachability logic by the K Framework to specify an operational semantics and, from it, to create a Hoare logic.
Matching logic can be converted to first-order logic with equality, which allows the K Framework to use existing SMT-solvers to find proofs for theorems.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\Sigma"
},
{
"math_id": 1,
"text": "x \\in Var"
},
{
"math_id": 2,
"text": "\\sigma \\in \\Sigma"
},
{
"math_id": 3,
"text": "\\sigma(p_1, p_2, ..., p_n)"
},
{
"math_id": 4,
"text": "\\neg p"
},
{
"math_id": 5,
"text": "p_1 \\land p_2"
},
{
"math_id": 6,
"text": "\\exists x . p"
},
{
"math_id": 7,
"text": "S"
},
{
"math_id": 8,
"text": "\\top \\equiv \\exists x . x"
},
{
"math_id": 9,
"text": "\\bot \\equiv \\neg\\top"
},
{
"math_id": 10,
"text": "p_1 \\lor p_2 \\equiv \\neg(\\neg p_1 \\land \\neg p_2)"
},
{
"math_id": 11,
"text": "p_1 \\implies p_2 \\equiv \\neg p_1 \\lor p_2"
},
{
"math_id": 12,
"text": "p_1 \\Leftrightarrow p_2 \\equiv p_1 \\implies p_2 \\land p_2 \\implies p_1"
},
{
"math_id": 13,
"text": "\\forall x . p \\equiv \\neg (\\exists x . \\neg p)"
},
{
"math_id": 14,
"text": "\\top"
},
{
"math_id": 15,
"text": "\\bot"
},
{
"math_id": 16,
"text": "\\mu"
}
]
| https://en.wikipedia.org/wiki?curid=70598218 |
7059985 | Hitting time | Aspect of stochastic processes
In the study of stochastic processes in mathematics, a hitting time (or first hit time) is the first time at which a given process "hits" a given subset of the state space. Exit times and return times are also examples of hitting times.
Definitions.
Let T be an ordered index set such as the natural numbers, the non-negative real numbers [0, +∞), or a subset of these; elements of T can be thought of as "times". Given a probability space (Ω, Σ, Pr) and a measurable state space S, let formula_0 be a stochastic process, and let A be a measurable subset of the state space S. Then the first hit time formula_1 is the random variable defined by
formula_2
The first exit time (from A) is defined to be the first hit time for "S" \ "A", the complement of A in S. Confusingly, this is also often denoted by τA.
The first return time is defined to be the first hit time for the singleton set {"X"0("ω")}, which is usually a given deterministic element of the state space, such as the origin of the coordinate system.
Examples.
If "B" denotes standard Brownian motion on the real line starting at the origin, then for "r" > 0 the hitting time of the set formula_3 is the first exit time of "B" from the open interval (−"r", "r"). Its expected value and variance satisfy
formula_4
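This expectation can be checked by simulation. The sketch below is illustrative only (the step size, sample count and seed are arbitrary choices): it approximates a Brownian path by a Gaussian random walk and estimates the mean first exit time from (−"r", "r").
<syntaxhighlight lang="python">
import numpy as np

rng = np.random.default_rng(0)

def exit_time(r, dt=1e-3):
    """First time |B_t| >= r along a discretized Brownian path."""
    t, b = 0.0, 0.0
    while abs(b) < r:
        b += np.sqrt(dt) * rng.standard_normal()
        t += dt
    return t

r = 1.0
samples = [exit_time(r) for _ in range(1000)]
# the sample mean should be close to r**2 = 1.0, up to discretization and Monte Carlo error
print(np.mean(samples))
</syntaxhighlight>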
Début theorem.
The hitting time of a set F is also known as the "début" of F. The Début theorem says that the hitting time of a measurable set F, for a progressively measurable process with respect to a right continuous and complete filtration, is a stopping time. Progressively measurable processes include, in particular, all right and left-continuous adapted processes.
The proof that the début is measurable is rather involved and involves properties of analytic sets. The theorem requires the underlying probability space to be complete or, at least, universally complete.
The "converse of the Début theorem" states that every stopping time defined with respect to a filtration over a real-valued time index can be represented by a hitting time. In particular, for essentially any such stopping time there exists an adapted, non-increasing process with càdlàg (RCLL) paths that takes the values 0 and 1 only, such that the hitting time of the set {0} by this process is the considered stopping time. The proof is very simple.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "X :\\Omega \\times T \\to S"
},
{
"math_id": 1,
"text": "\\tau_A : \\Omega \\to [0, +\\infty]"
},
{
"math_id": 2,
"text": "\\tau_A (\\omega) := \\inf \\{ t \\in T \\mid X_t (\\omega) \\in A \\}."
},
{
"math_id": 3,
"text": "(-\\infty,-r]\\cup [r, +\\infty)."
},
{
"math_id": 4,
"text": "\\begin{align}\n\\operatorname{E} \\left[ \\tau_r \\right] &= r^2, \\\\\n\\operatorname{Var} \\left[ \\tau_r \\right] &= \\tfrac{2}{3} r^4.\n\\end{align}"
}
]
| https://en.wikipedia.org/wiki?curid=7059985 |
70600204 | Silicon isotope biogeochemistry | The study of environmental processes using the relative abundance of Si isotopes
Silicon isotope biogeochemistry is the study of environmental processes using the relative abundance of Si isotopes. As the relative abundance of Si stable isotopes varies among different natural materials, the differences in abundance can be used to trace the source of Si, and to study biological, geological, and chemical processes. The study of stable isotope biogeochemistry of Si aims to quantify the different Si fluxes in the global biogeochemical silicon cycle, to understand the role of biogenic silica within the global Si cycle, and to investigate the applications and limitations of the sedimentary Si record as an environmental and palaeoceanographic proxy.
Background.
Silicon in nature is typically bonded to oxygen, in a tetravalent oxidation state. The major forms of solid Si are silicate minerals and amorphous silica, whereas in aqueous solutions the dominant forms are orthosilicic acid and its dissociated species. There are three stable isotopes of Si, associated with the following mean natural abundances: 28Si– 92.23%, 29Si– 4.67%, and 30Si– 3.10%. The isotopic composition of Si is often expressed using the delta notation, as follows:
formula_0
The reference material (standard) for defining the δ30Si of a sample is the National Bureau of Standards (NBS) 28 Sand Quartz, which has been certified and distributed by the National Institute of Standards and Technology (NIST), and is also named NIST RM 8546. Currently, there are four main analytical methods for the measurement of Si isotopes: Gas Source Isotope-Ratio Mass Spectrometry (GC-IRMS), Secondary Ion Mass Spectrometry (SIMS), Multi-Collector Inductively Coupled Plasma Mass Spectrometry (MC–ICP–MS), and Laser Ablation MC–ICP–MS.
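A minimal sketch of the delta calculation is shown below; note that the 30Si/28Si ratio used here for the standard is only an approximation computed from the mean natural abundances quoted above, not a certified value for NBS-28.
<syntaxhighlight lang="python">
def delta_30si_permil(r_sample, r_standard=0.0310 / 0.9223):
    """delta30Si = (R_sample / R_standard) - 1, reported here in per mil (x1000)."""
    return (r_sample / r_standard - 1.0) * 1000.0

# a sample slightly enriched in 30Si relative to the standard gives a positive delta30Si
print(round(delta_30si_permil(0.03365), 2))
</syntaxhighlight>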
Si isotopes in the Si biogeochemical cycle.
Primary minerals and weathering.
Primary minerals are the minerals that crystallize during the formation of Earth's crust, and their typical δ30Si isotopic value is in the range of −0.9‰ – +1.4‰. Earth's crust is constantly undergoing weathering processes, which dissolve Si and produce secondary Si minerals simultaneously. The formation of secondary Si discriminates against the heavy Si isotope (30Si), creating minerals with relatively low δ30Si isotopic values (−3‰ – +2.5‰, mean: −1.1‰). It has been suggested that this isotopic fractionation is controlled by the kinetic isotope effect of Si adsorption to aluminum hydroxides, which takes place in early stages of weathering. As a result of incorporation of lighter Si isotopes into secondary minerals, the remaining dissolved Si will be relatively enriched in the heavy Si isotope (30Si), and associated with relatively high δ30Si isotopic values (−1‰ – +2‰, mean: +0.8‰). The dissolved Si is often transported by rivers to the oceans.
Terrestrial vegetation.
Silicon uptake by plants typically discriminates against the light Si isotope, forming 30Si-enriched plants (δ30Si of 0–6‰). The reason for this relatively large isotopic fractionation remains unclear, mainly because the mechanisms of Si uptake by plants are yet to be understood. Silicon in plants can be found in the xylem, which is associated with exceptionally high δ30Si values. Phytoliths, microscopic structures of silica in plant tissues, have relatively lower δ30Si values. For example, it was reported that the mean δ30Si values of phytoliths in various wheat organs ranged from −1.4‰ to 2.1‰, which is lower than the typical range for vegetation (δ30Si of 0–6‰). Phytoliths are relatively soluble, and as plants decay they contribute to the terrestrial dissolved Si budget.
Biomineralization in aquatic environments.
In aquatic environments (rivers, lakes and ocean), dissolved Si is utilized by diatoms, dictyochales, radiolarians and sponges to produce solid bSiO2 structures. The biomineralized silica has an amorphous structure and therefore its properties may vary among the different organisms. Biomineralization by diatoms induces the largest Si flux within the ocean, and thus it has a crucial role in the global Si cycle. During Si uptake by diatoms, there is an isotopic discrimination against the heavy isotope, forming 30Si-depleted biogenic silica minerals. As a result, the remaining dissolved Si in the surrounding water is 30Si-enriched. Since diatoms rely on sunlight for photosynthesis, they inhabit surface waters, and thus the surface waters of the ocean are typically 30Si-enriched. Although there is less available data on the isotopic fractionation during biomineralization by radiolarians, it has been suggested that radiolarians also discriminate against the heavy isotope (30Si), and that the magnitude of isotopic fractionation is of a similar range to that of biomineralization by diatoms. Sponges also show an isotopic preference for 28Si over 30Si, but the magnitude of their isotopic fractionation is often larger (for a quantitative comparison, see Figure 2).
Hydrothermal vents.
Hydrothermal vents contribute dissolved Si to the ocean Si reservoir. Currently, it is challenging to determine the magnitude of hydrothermal Si fluxes, due to a lack of data on the δ30Si values associated with this flux. There are only two published data points for the δ30Si value of hydrothermal vents (−0.4‰ and −0.2‰).
Diagenesis.
The δ30Si value of sediment porewater may be affected by post-depositional (diagenetic) precipitation or dissolution of Si. It is important to understand the extent and isotopic fractionations of these processes, as they alter the δ30Si values of the originally deposited sediments, and determine the δ30Si preserved in the rock record. Generally, precipitation of Si prefers the light isotope (28Si) and leads to 30Si-enriched dissolved Si in the hosting solution. The isotopic effect of Si dissolution in porewater is not yet clear, as some studies report a preference for 28Si during dissolution, while other studies document that isotopic fractionation was not expressed during dissolution of sediments.
Paleoceanography proxies.
The silicic acid leakage hypothesis.
The silicic acid leakage hypothesis (SALH) is a suggested mechanism that aims to explain the atmospheric CO2 variations between glacial and interglacial periods. This hypothesis proposes that during glacial periods, as a result of enhanced dust deposition in the southern ocean, diatoms consume less Si relative to nitrogen. The decrease in the Si:N uptake ratios leads to Si excess in the southern ocean, which leaks to lower latitudes of the ocean that are dominated by coccolithophores. As the Si concentrations rise, the diatom population may outcompete the coccolithophores, reducing the CaCO3 precipitation and altering ocean alkalinity and the carbonate pump. These changes would induce a new ocean-atmosphere steady state with lower atmospheric CO2 concentrations, consistent with the drawdown of CO2 observed in the last glacial period. The δ30Si and δ15N isotopic values archived in the southern ocean diatom sediments have been used to examine this hypothesis, as the dynamics of Si and N supply and utilization during the last deglaciation could be interpreted from this record. In alignment with the silicic acid leakage hypothesis, these isotopic archives suggest that Si utilization in the southern ocean increased during the deglaciation.
Si isotope palaeothermometry.
There have been attempts to reconstruct ocean paleotemperatures from the chert Si isotopic record, which proposed that the Archean seawater temperatures were significantly higher than modern (~70 °C). However, subsequent studies question this palaeothermometry method and offer alternative explanations for the δ30Si values of Archean rocks. These signals could result from diagenetic alteration processes that overprint the original δ30Si values, or reflect that Archean cherts were composed of different Si sources. It is plausible that during the Archean the dominant sources of Si sediments were weathering, erosion, silicification of clastic sediments or hydrothermal activity, in contrast to the vast SiO2 biomineralization in the modern ocean.
Paleo Si concentrations.
According to empirical calibrations, the difference in δ30Si (denoted as Δ30Si) between sponges and their hosting water is correlated with the Si concentration of the hosting solution. Therefore, it has been suggested that the Si concentrations in bottom waters of ancient oceans can be interpreted from the δ30Si of coexisting sponge spicules, which are preserved in the rock record. It has been proposed that this relation is determined by the growth rate and the Si uptake kinetics of sponges, but the current understanding of sponge biomineralization pathways is limited. Although the mechanism behind this relation is not yet clear, it appears consistent among various laboratory experiments, modern environments, and core top sediments. However, there is also evidence that the δ30Si of carnivorous sponges may deviate significantly from the expected correlation.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\delta^{30}Si = {({^{30}Si \\over ^{28}Si})_{sample}\\over ({^{30}Si \\over ^{28}Si})_{standard}}-1\n"
}
]
| https://en.wikipedia.org/wiki?curid=70600204 |
70605254 | Indirect detection of dark matter | Indirect detection of dark matter is a method of searching for dark matter that focuses on looking for the products of dark matter interactions (particularly Standard Model particles) rather than the dark matter itself. Contrastingly, direct detection of dark matter looks for interactions of dark matter directly with atoms. There are experiments aiming to produce dark matter particles using colliders. Indirect searches use various methods to detect the expected annihilation cross sections for weakly interacting massive particles (WIMPs). It is generally assumed that dark matter is stable (or has a lifetime long enough to appear stable), that dark matter interacts with Standard Model particles, that there is no production of dark matter post-freeze-out, and that the universe is currently matter-dominated, while the early universe was radiation-dominated. Searches for the products of dark matter interactions are profitable because there is an extensive amount of dark matter present in the universe, and presumably, a lot of dark matter interactions and products of those interactions (which are the focus of indirect detection searches); and many currently operational telescopes can be used to search for these products. Indirect searches help to constrain the annihilation cross section formula_0 the lifetime of dark matter formula_1, as well as the annihilation rate.
Dark matter interactions.
Indirect detection relies on the "products" of dark matter interactions. Thus, there are several different models of dark matter interactions to consider. Dark matter (DM) is often considered stable, as a lifetime greater than the age of the universe is required (formula_2 yrs) for large amounts of DM to be present today. In fact, it seems that the abundance of DM has not changed significantly while the universe has been matter-dominated. Using measurements of the CMB and other large scale structures, the lifetime of DM can be roughly constrained by formula_3 s. Thus, annihilating DM is the focus of most indirect searches.
Annihilating dark matter.
An annihilation cross section on the order of formula_4 is consistent with the measured cosmological density of DM. Thus, the targets of indirect searches are the secondary products that are expected from the annihilation of two dark matter particles. When observations of those secondary products reveal cross sections on the order of the expected formula_5 (or near that order of magnitude, with some expected or known discrepancy), the source of those products may be considered a dark matter candidate, or an indication of dark matter (an indirect signal). In general, the DM mass is expected to satisfy formula_6 for the cross section given above.
Note that the "J-factor" of a given potential source of dark matter interaction products is the energy spectrum integrated along the line of sight, taking only the term dependent on the distribution of the DM mass density. For annihilation, that J-factor is commonly given as,
formula_7
where formula_8 is the mass density of DM. The J-factor is essentially a predictive measure of the strength of a potential annihilation signal. The J-factor depends on the density, so if the density of a given region is not well known or well defined, then it can be difficult to determine the size of the expected signal. For example, since it is difficult to distinguish and remove backgrounds near the galactic center, the calculated J-factor for that region varies by several orders of magnitude, depending on the density profile used.
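The integral above can be approximated numerically. The sketch below is a rough illustration only: the NFW density profile, its parameters, the target distance and the solid angle are arbitrary, dwarf-galaxy-like choices made for this example and do not correspond to any published J-factor. It integrates the squared DM density along a line of sight through the centre of a halo, multiplies by a small solid angle, and applies the 1/(8π) prefactor used above.
<syntaxhighlight lang="python">
import numpy as np

KPC_IN_CM = 3.086e21

def nfw_density(r_kpc, rho_s=1.0, r_s=1.0):
    """NFW profile in GeV/cm^3; rho_s and r_s (in kpc) are illustrative values."""
    x = r_kpc / r_s
    return rho_s / (x * (1.0 + x) ** 2)

def j_factor(distance_kpc=80.0, delta_omega=2.4e-4, r_max_kpc=2.0, n=4000):
    """(1/8pi) * integral of rho^2 over the line of sight and a small solid angle,
    for a line of sight through the halo centre.  Result in GeV^2 cm^-5."""
    l = np.linspace(distance_kpc - r_max_kpc, distance_kpc + r_max_kpc, n)
    r = np.clip(np.abs(distance_kpc - l), 1e-3, None)  # avoid the central singularity
    rho_sq = nfw_density(r) ** 2                        # (GeV/cm^3)^2
    dl_cm = (l[1] - l[0]) * KPC_IN_CM
    los = np.sum(rho_sq) * dl_cm                        # GeV^2 cm^-5
    return los * delta_omega / (8.0 * np.pi)

print(f"J ~ {j_factor():.2e} GeV^2 cm^-5")
</syntaxhighlight>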
Decaying dark matter.
However, if DM is unstable, it would decay, producing products that could be observed. Since decay only involves one DM particle (while annihilation requires two), the flux of DM decay products is proportional to the DM density, formula_9, rather than formula_10 in the case of annihilation. There have been efforts to search for DM decay products in gamma rays, X-rays, cosmic rays, and neutrinos. For unstable dark matter of mass in the GeV–TeV range, the decay products are high-energy photons. These photons contribute to the extragalactic gamma ray background (EGRB). Studies of the EGRB using the "Fermi" satellite have revealed constraints on the lifetime of dark matter as formula_11 s, for masses between about 100 GeV and 1 TeV. The constraints derived from the EGRB are relatively unaffected by additional astrophysical uncertainties. NuSTAR observations have been used to search for X-ray lines to further constrain decaying DM for masses in the 10 to 50 keV range. For sterile neutrinos, there are several existing constraints based on X-ray limits. For DM masses formula_12 keV and formula_13 keV, there are well-defined constraints on the mixing angle, formula_14. Neutrinos have been used to derive constraints for DM masses in the range formula_15 GeV. Combined data from "Fermi" gamma-ray observations and IceCube neutrino observations give constraints depending on energy and defined by the criterion, formula_16, with formula_17 defined as the given signal, formula_18 as the muon neutrino background, and formula_19 as the Gaussian significance. For low energies, the constraints improve with time as formula_20. For high energies, the constraints are not well-defined, as neutrino flux is no longer dominant. Thus, there are constraints on the properties of decaying DM for masses formula_21 ranging from keV to TeV. Additionally, in the case of decay, the signal strength (like the J-factor in the case of annihilation) is dependent only on density, rather than density squared: formula_22. For sufficiently distant sources, the signal strength can then be approximated as formula_23, where formula_24 is the source mass.
Methods of indirect detection.
There are currently many different avenues through which indirect searches for dark matter may be carried out. In general, indirect detection searches focus on either gamma-rays, cosmic-rays, or neutrinos. There are many instruments that have been used in efforts to detect dark matter annihilation products, including H.E.S.S., VERITAS, and MAGIC (Cherenkov telescopes), "Fermi" Large Area Telescope (LAT), High Altitude Water Cherenkov Experiment (HAWC), and Antares, IceCube, and SuperKamiokande (neutrino telescopes). Each of these telescopes participates in the search for a signal from WIMPs, focusing on sources ranging from the Galactic center or galactic halo, to galaxy clusters, to dwarf galaxies, depending on the allowable energy range of each instrument. A DM annihilation signal has not yet been confirmed, and instead, constraints are placed on DM particles through limits on the annihilation cross section of WIMPs, on the lifetime of dark matter (in the case of decay), as well as on the annihilation rate and flux.
WIMP annihilation limits.
Gamma-ray searches.
In order to detect or constrain the properties of dark matter, observations of dwarf galaxies have been carried out. Limits may be placed on the annihilation cross section of WIMPs based on analysis of either gamma-rays or cosmic rays. The VERITAS, MAGIC, "Fermi", and H.E.S.S. telescopes are among those that have been involved in the observation of gamma-rays. The air Cherenkov telescopes (H.E.S.S., MAGIC, VERITAS) are most effective at constraining the annihilation cross section for high energies (formula_25 GeV).
For energies below 100 GeV, "Fermi" is more effective, as this telescope is not constrained to a view of only a small portion of the sky (as the ground-based telescopes are). From six years of "Fermi" data, which observed dwarf galaxies in the Milky Way, the DM mass is constrained to formula_26 GeV (masses below this threshold are excluded). Then, combining data from "Fermi" and MAGIC, the upper limit of the cross section is found to be formula_27 (that is, assuming no uncertainties in formula_28). This collaboration produced constraints for DM masses in the range formula_29. Note that "Fermi" data dominates for the low mass end of the range, while MAGIC dominates for the high masses.
VERITAS has been used to observe high energy gamma-rays in the range 85 GeV to 30 TeV, for the mass range formula_30.
Cosmic-ray searches.
Cosmic ray analyses primarily observe positrons and antiprotons. The AMS experiment is one such project, providing data on cosmic ray electrons and positrons in the 0.5 GeV to 350 GeV range. AMS data allows for constraints on DM masses formula_31 GeV. Results from AMS constrain the annihilation cross section to formula_32 for DM masses formula_33 GeV (with the thermally averaged cross section noted as formula_34 formula_35). The upper limit for the annihilation cross section can also be used to find a limit for the decay width of a DM particle. These analyses are also subject to substantial uncertainty, particularly pertaining to the Sun's magnetic field, as well as the production cross section for antiprotons.
Galactic center.
The galactic center is hypothesized to be a source of large amounts of dark matter annihilation products. However, the background at the galactic center is both bright and not yet well understood (based on the model of the Milky Way in use, the flux of annihilation products can vary by several orders of magnitude). The Galactic center is also a unique probe of high-mass dark matter, which cannot be produced in colliders. Thus, telescopes like "Fermi" and H.E.S.S. have observed the excess of gamma-rays coming from the galactic center, as backgrounds are lower for gamma-rays (and unknown backgrounds at the galactic center typically cause large uncertainties for dark matter searches). The annihilation cross section is consistent with the expected formula_36, and thus, in the case that those excess gamma-rays are products of dark matter annihilation, they must originate from dark matter with a mass formula_37.
H.E.S.S., an imaging atmospheric Cherenkov telescope, has been used to observe this excess of very high energy gamma-rays emanating from the galactic center. Probing energies in the range formula_38 GeV to formula_39 TeV, H.E.S.S. data allowed for limits on internal bremsstrahlung processes to be determined, which then allowed for upper limits on DM annihilation flux to be defined.
Overall, the galactic center is a focus for indirect searches due to its excess of gamma-rays. That excess corresponds to formula_40, which is on the order of the expected thermally averaged annihilation cross section, making the gamma-ray excess a potential dark matter signal.
Heavy dark matter.
Heavy DM has formula_41. Dark matter with mass in this regime is expected to result in high-energy photons that, through pair production, create a cascade of electrons and photons, eventually leading to low-energy gamma-rays. Those low-energy gamma-rays can be observed by telescopes like "Fermi" and used to constrain the annihilation rate accordingly. Additionally, for decaying DM with masses greater than the TeV range, the lifetime is constrained to formula_42 s.
Light dark matter.
Contrastingly, light DM has formula_43, and it becomes difficult to observe products for these lower masses and energies. "Fermi" is limited by its angular resolution, and cannot observe products below formula_44. To observe products at the lower mass limit, either a low-energy gamma-ray telescope or an X-ray telescope is required.
Cosmic ray positron excess.
An excess of positrons (in the ratio of the positron flux to the combined electron and positron flux) was found by PAMELA in its observations of cosmic rays. "Fermi" and AMS-02 later confirmed this excess. One possible explanation for this excess of positrons is annihilating dark matter. For energies formula_45 GeV to formula_46 GeV, the ratio of positrons to electron-positron pairs continues to increase, which, if interpreted as a dark matter signal, indicates that annihilating dark matter is producing positrons (and that the flux increases with the DM mass). There are alternative explanations for this excess of positrons, including pulsars or supernova remnants. In 2017, data from the HAWC Collaboration indicated that the increase in flux of positrons from the two nearest pulsars (Geminga and Monogem) is roughly equivalent to the excess originally observed by PAMELA.
The 3.5 keV line.
In 2014, a spectral line at an energy of formula_47 keV was found in the observation of galaxy clusters. Further investigation of this spectral line by Chandra and XMM-Newton failed to find such a line, and thus, there is debate about whether the spectral line is evidence of dark matter. There are several explanations: (1) the source is a decaying sterile neutrino, with a mass formula_48 keV (cold dark matter), and thus, is not subject to the constraints on warm dark matter. This explanation is consistent with observation of the spectral line at 3.5 keV, as expected, in both the cosmic X-ray background and the Galactic center, but inconsistent with the results from Chandra and XMM-Newton; (2) the source is heavier than 3.5 keV, but has a "metastable excited state" at 3.5 keV whose decay emits a photon of that same energy; (3) the DM source decays, producing a 3.5 keV axion-like particle, which could convert into a photon in an external magnetic field. The actual explanation cannot yet be confirmed. Thus, the 3.5 keV line remains evidence of a potential DM candidate.
In 2023, a research preprint published on arXiv questioned the existence of the 3.5 keV spectral line; when the authors attempted to replicate the results pointing to the existence of the line, they failed to reproduce them in five out of six cases, leading them to conclude: "We conclude that there is little robust evidence for the existence of the 3.5 keV line".
Cosmic microwave background.
The cosmic microwave background (CMB) can also be analyzed in order to constrain dark matter annihilation products. If the number of dark matter annihilations is given as,
formula_49
where formula_50 is the expansion rate, formula_51 is the comoving volume, and formula_52 is the averaged annihilation cross section, then the number of dark matter annihilations during both the period of matter-radiation equality and matter domination can be determined. From the above equation for the number of dark matter annihilations, and based on a typical dark matter mass of formula_53 GeV, such dark matter would ionize a small but cosmologically significant fraction (~formula_54) of hydrogen atoms at the time of recombination. Thus, dark matter would have a noticeable effect on the CMB, as observed today.
Because the anisotropies found in the CMB are sensitive to any injection of energy, those anisotropies can be calculated under the assumption that the injected energy is due to some DM annihilation, in an effort to determine constraints on that DM annihilation. The Planck Collaboration used the relation
formula_55
(where formula_56 is the efficiency with which the energy released by a DM annihilation process is deposited into the intergalactic medium) to determine a parameter, formula_57, used to constrain DM annihilations based on CMB anisotropies and polarization. The Planck Collaboration found that CMB constraints were more reliable than other methods for smaller masses (below ~10 GeV). CMB constraints are also most reliable for any DM annihilation that results in either protons or electrons (that is, excluding annihilation into neutrinos).
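As a quick numerical illustration of how formula_57 scales with the dark matter mass (the efficiency value below is an assumed, representative number used only for this example), the sketch computes it for the canonical thermal-relic cross section:
<syntaxhighlight lang="python">
SIGMA_V_THERMAL = 3e-26   # cm^3 s^-1, thermally averaged annihilation cross section
F_EFF = 0.2               # assumed energy-deposition efficiency (illustrative)

def p_ann(m_gev, f_eff=F_EFF, sigma_v=SIGMA_V_THERMAL):
    """p_ann = f_eff * <sigma v> / m_X, in cm^3 s^-1 GeV^-1."""
    return f_eff * sigma_v / m_gev

for m in (10, 100, 1000):
    print(f"m_X = {m:5d} GeV  ->  p_ann = {p_ann(m):.1e} cm^3 s^-1 GeV^-1")
</syntaxhighlight>
Because the parameter grows as the mass decreases, a fixed CMB upper limit on it is most constraining for light dark matter, which is consistent with the statement above that CMB constraints are most useful below roughly 10 GeV.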
Alternative explanations.
Some of the alternative explanations are mentioned in their respective sections above, but there are many alternative explanations for the various sources that are considered potential DM signal candidates. For example, the excess of gamma-rays at the galactic center could be due to pulsars near the galactic center, rather than dark matter. Additionally, as previously mentioned, the excess of cosmic-ray positrons could be due to nearby pulsars increasing the flux of positrons.
It should also be noted that it is possible for dark matter to annihilate with a cross section smaller than the thermally averaged value of formula_5, but current instrumentation does not allow for the investigation of such a model. Some of those additional models include velocity dependent processes, in which the cross section scales with the square of the relative velocity (formula_58) of the two annihilating dark matter particles formula_59. Another model is that of resonant annihilations, in which dark matter is assumed to annihilate near resonance, causing the cross section at the time of freeze-out to be significantly higher (or lower) than is observed today (due to the increased velocity at resonance, and the relatively low velocity assumed at present). Asymmetric dark matter is a model that suggests a primordial asymmetry in the abundance of dark matter particles and antiparticles.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\left \\langle \\sigma v \\right \\rangle,"
},
{
"math_id": 1,
"text": "\\tau_X"
},
{
"math_id": 2,
"text": "\\tau_X \\gg 10^{10}\n"
},
{
"math_id": 3,
"text": "\\tau_X \\gtrsim 2\\times 10^{19}"
},
{
"math_id": 4,
"text": "\\left \\langle \\sigma v \\right \\rangle\\sim 10^{-26} cm^3s^{-1}"
},
{
"math_id": 5,
"text": "\\sim 10^{-26}"
},
{
"math_id": 6,
"text": "5 \\ MeV \\lesssim m_X \\lesssim 120 \\ TeV"
},
{
"math_id": 7,
"text": "J_{ann}=\\frac{1}{8\\pi}\\int dr d\\Omega \\rho(\\bar{r})^2"
},
{
"math_id": 8,
"text": "\\rho(\\bar{r})"
},
{
"math_id": 9,
"text": "\\rho_X"
},
{
"math_id": 10,
"text": "\\rho_X^2"
},
{
"math_id": 11,
"text": "\\tau_X > 10^{28}"
},
{
"math_id": 12,
"text": "m_X \\lesssim 10"
},
{
"math_id": 13,
"text": "m_X \\gtrsim 50 "
},
{
"math_id": 14,
"text": "\\theta"
},
{
"math_id": 15,
"text": "10^3 \\lesssim m_X \\lesssim 10^6"
},
{
"math_id": 16,
"text": "N_{sig}/\\sqrt{N_{sig}+N_{bkg}} < \\delta"
},
{
"math_id": 17,
"text": "N_{sig}"
},
{
"math_id": 18,
"text": "N_{bkg}"
},
{
"math_id": 19,
"text": "\\delta"
},
{
"math_id": 20,
"text": "\\sim \\sqrt{t}"
},
{
"math_id": 21,
"text": "m_X"
},
{
"math_id": 22,
"text": "\\int \\rho(\\bar{r})drd\\Omega"
},
{
"math_id": 23,
"text": "M/R^2"
},
{
"math_id": 24,
"text": "M"
},
{
"math_id": 25,
"text": "E_{\\gamma} > 100 "
},
{
"math_id": 26,
"text": "m_X \\gtrsim 100"
},
{
"math_id": 27,
"text": "\\left \\langle \\sigma v \\right \\rangle \\sim 10^{-25}cm^3s^{-1}"
},
{
"math_id": 28,
"text": "J"
},
{
"math_id": 29,
"text": "10\\ GeV \\lesssim m_X \\lesssim 100\\ TeV"
},
{
"math_id": 30,
"text": "10 \\ GeV \\lesssim m_X \\lesssim 10 \\ TeV"
},
{
"math_id": 31,
"text": "m_X \\lesssim 300"
},
{
"math_id": 32,
"text": "\\left \\langle \\sigma v \\right \\rangle \\lesssim 1.1 \\times 10^{-24} cm^3s^{-1}"
},
{
"math_id": 33,
"text": "\\sim 100 "
},
{
"math_id": 34,
"text": "\\left \\langle \\sigma v \\right \\rangle_{therm} \\equiv 3 \\times 10^{-26}"
},
{
"math_id": 35,
"text": "cm^3s^{-1}"
},
{
"math_id": 36,
"text": "\\sim 10^{-26} cm^3 s^{-1}"
},
{
"math_id": 37,
"text": "40 \\ GeV \\lesssim m_X \\lesssim 70 \\ GeV"
},
{
"math_id": 38,
"text": "E_{\\gamma} \\sim 500"
},
{
"math_id": 39,
"text": "25"
},
{
"math_id": 40,
"text": "\\left \\langle \\sigma v \\right \\rangle \\sim 10^{-26} cm^3 s^{-1}"
},
{
"math_id": 41,
"text": "m_X \\gg TeV"
},
{
"math_id": 42,
"text": "\\tau_X \\gtrsim 10^{27-28}"
},
{
"math_id": 43,
"text": "m_X \\ll GeV"
},
{
"math_id": 44,
"text": "\\sim 1 \\ GeV"
},
{
"math_id": 45,
"text": "\\sim 10"
},
{
"math_id": 46,
"text": "300 "
},
{
"math_id": 47,
"text": "\\sim 3.5 "
},
{
"math_id": 48,
"text": "\\sim 7"
},
{
"math_id": 49,
"text": "N_{ann} = \\frac{1}{2}\\frac{\\rho_X^2\\left \\langle \\sigma v \\right \\rangle V_c}{m_X^2 H}"
},
{
"math_id": 50,
"text": "H"
},
{
"math_id": 51,
"text": "V_c"
},
{
"math_id": 52,
"text": "\\left \\langle \\sigma v \\right \\rangle\n"
},
{
"math_id": 53,
"text": "m_X = 100"
},
{
"math_id": 54,
"text": "10^{-3}"
},
{
"math_id": 55,
"text": "p_{ann} \\equiv f_{eff}\\frac{\\left \\langle \\sigma v \\right \\rangle}{m_X}"
},
{
"math_id": 56,
"text": "f_{eff}"
},
{
"math_id": 57,
"text": "p_{ann}"
},
{
"math_id": 58,
"text": "v_{rel}"
},
{
"math_id": 59,
"text": "(\\sigma v \\propto v_{rel}^2)"
}
]
| https://en.wikipedia.org/wiki?curid=70605254 |
70605963 | Joshua 11 | Book of Joshua, chapter 11
Joshua 11 is the eleventh chapter of the Book of Joshua in the Hebrew Bible or in the Old Testament of the Christian Bible. According to Jewish tradition the book was attributed to Joshua, with additions by the high priests Eleazar and Phinehas, but modern scholars view it as part of the Deuteronomistic History, which spans the books of Deuteronomy to 2 Kings, attributed to nationalistic and devotedly Yahwistic writers during the time of the reformer Judean king Josiah in the 7th century BCE. This chapter focuses on the conquest of the land of Canaan by the Israelites under the leadership of Joshua, a part of a section comprising Joshua 5:13–12:24 about the conquest of Canaan.
Text.
This chapter was originally written in the Hebrew language. It is divided into 23 verses.
Textual witnesses.
Some early manuscripts containing the text of this chapter in Hebrew are of the Masoretic Text tradition, which includes the Codex Cairensis (895), Aleppo Codex (10th century), and Codex Leningradensis (1008).
Extant ancient manuscripts of a translation into Koine Greek known as the Septuagint (originally made in the last few centuries BCE) include Codex Vaticanus (B; formula_0B; 4th century) and Codex Alexandrinus (A; formula_0A; 5th century).
Analysis.
The narrative of the Israelites conquering the land of Canaan comprises verses 5:13 to 12:24 of the Book of Joshua and has the following outline:
A. Jericho (5:13–6:27)
B. Achan and Ai (7:1–8:29)
C. Renewal at Mount Ebal (8:30–35)
D. The Gibeonite Deception (9:1–27)
E. The Campaign in the South (10:1–43)
F. The Campaign in the North and Summary List of Kings (11:1–12:24)
1. Victory over the Northern Alliance (11:1-15)
a. The Northern Alliance (11:1-5)
b. Divine Reassurance (11:6)
c. Victory at Merom (11:7-9)
d. Destruction of Hazor (11:10–11)
e. Summation of Obedience and Victory (11:12–15)
2. Summaries of Taking the Land (11:16-12:24)
a. Taking the Land (11:16-20)
b. Extermination of the Anakim (11:21-22)
c. Narrative Pivot: Taking and Allotting (11:23)
d. Capture of Land and Kings (12:1-24)
i. East of the Jordan (12:1-6)
ii. West of the Jordan (12:7-24)
Victory over the Northern Alliance (11:1–15).
The narrative of the northern conquest begins with the familiar formula, 'when x heard', here applied to Jabin, king of Hazor, who then formed an alliance with kings around the area to prepare for the battle against the Israelites. No specific military or strategic plan for a march north by Joshua and Israel is detailed, except for an image of a surprise attack on the alliance camp at Merom (verse 7). The fulfilment of YHWH's command (to Moses, then from Moses to Joshua) is the most important element in the narrative of Joshua's success (verses 6, 9, 15), which also includes the hamstringing of the Canaanites' horses and the burning of their chariots (verse 9). The following conquest extends to the Mediterranean far to the north at Sidon (as far as 'Lebanon and... to the Western Sea' in Deuteronomy), then turning south-eastwards over Lake Huleh (now a fertile plain) towards Hazor, which is burned after the execution of the "herem" ("ban"; verses 10–15), whereas other cities are also destroyed, but not burned.
"1And it came to pass, when Jabin king of Hazor heard these things, that he sent to Jobab king of Madon, to the king of Shimron, to the king of Achshaph, 2and to the kings who were from the north, in the mountains, in the plain south of Chinneroth, in the lowland, and in the heights of Dor on the west, 3to the Canaanites in the east and in the west, the Amorite, the Hittite, the Perizzite, the Jebusite in the mountains, and the Hivite below Hermon in the land of Mizpah."
Summaries of taking the land (11:16–23).
Verses 16–20 summarize the area of the land now under Israel's control, from south to north: Mount Halak on the borderland of Edom (Seir) in the far south-east; Baal-gad near Mt. Hermon in the far north. The victory over the kings of the land is emphasized by a repetition (verses 17–18), with the exception of Gibeon (a blemish on the record), underlined by the rationale: God 'hardened their hearts' against Israel (cf. Pharaoh against Moses in Exodus 7:13; see Joshua 10:20), so that they might be utterly destroyed (referring to "herem" or the "ban"). The Anakim (verses 21–23) had inspired fear in Israel when the spies first surveyed the land (Numbers 13:28; Deuteronomy 1:28), but the report of the victory here removed the misplaced fear, and attests the fulfillment of the promise in Deuteronomy 9:1–3. The phrase 'the land had rest from war' recalls Deuteronomy 12:10, which anticipates the life of Israel in the land after all wars are won and the land is divided among all tribes.
"So Joshua took the whole land, according to all that the Lord said unto Moses; and Joshua gave it for an inheritance unto Israel according to their divisions by their tribes. And the land rested from war."
Verse 23.
This verse summarizes the activities recorded in the book thus far. The note that the land was divided among the tribes of Israel and that the land rested describes events that were past for the narrator, but are still future in the narrative.
Notes.
<templatestyles src="Reflist/styles.css" />
References.
<templatestyles src="Reflist/styles.css" />
Sources.
<templatestyles src="Refbegin/styles.css" /> | [
{
"math_id": 0,
"text": " \\mathfrak{G}"
}
]
| https://en.wikipedia.org/wiki?curid=70605963 |
7060924 | Hankinson's equation | Hankinson's equation (also called Hankinson's formula or Hankinson's criterion) is a mathematical relationship for predicting the off-axis uniaxial compressive strength of wood. The formula can also be used to compute the fiber stress or the stress wave velocity at the elastic limit as a function of grain angle in wood. For a wood that has uniaxial compressive strengths of formula_0 parallel to the grain and formula_1 perpendicular to the grain, Hankinson's equation predicts that the uniaxial compressive strength of the wood in a direction at an angle formula_2 to the grain is given by
formula_3
Even though the original relation was based on studies of spruce, Hankinson's equation has been found to be remarkably accurate for many other types of wood. A generalized form of the Hankinson formula has also been used for predicting the uniaxial tensile strength of wood at an angle to the grain. This formula has the form
formula_4
where the exponent formula_5 can take values between 1.5 and 2.
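A direct transcription of the generalized formula into code is shown below (the function name and the numerical strengths in the example call are illustrative values, not data from the original studies).
<syntaxhighlight lang="python">
import math

def hankinson_strength(sigma_0, sigma_90, alpha_deg, n=2.0):
    """Off-axis strength sigma_alpha from the (generalized) Hankinson formula.

    sigma_0   : strength parallel to the grain
    sigma_90  : strength perpendicular to the grain
    alpha_deg : grain angle in degrees
    n         : exponent; n = 2 recovers the original Hankinson equation
    """
    a = math.radians(alpha_deg)
    return (sigma_0 * sigma_90) / (
        sigma_0 * math.sin(a) ** n + sigma_90 * math.cos(a) ** n
    )

# e.g. 45 MPa parallel and 7 MPa perpendicular to the grain, loaded at 30 degrees
print(round(hankinson_strength(45.0, 7.0, 30.0), 2))  # about 19.09 MPa
</syntaxhighlight>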
The stress wave velocity at angle formula_2 to the grain at the elastic limit can similarly be obtained from the Hankinson formula
formula_6
where formula_7 is the velocity parallel to the grain, formula_8 is the velocity perpendicular to the grain and formula_2 is the grain angle. | [
{
"math_id": 0,
"text": "\\sigma_0"
},
{
"math_id": 1,
"text": "\\sigma_{90}"
},
{
"math_id": 2,
"text": "\\alpha"
},
{
"math_id": 3,
"text": "\n \\sigma_\\alpha = \\cfrac{\\sigma_0~\\sigma_{90}}{\\sigma_0~\\sin^2\\alpha + \\sigma_{90}~\\cos^2\\alpha}\n"
},
{
"math_id": 4,
"text": "\n \\sigma_\\alpha = \\cfrac{\\sigma_0~\\sigma_{90}}{\\sigma_0~\\sin^n\\alpha + \\sigma_{90}~\\cos^n\\alpha}\n"
},
{
"math_id": 5,
"text": "n"
},
{
"math_id": 6,
"text": " \n V(\\alpha) = \\frac{V_0 V_{90}}{V_0 \\sin^2\\alpha + V_{90} \\cos^2\\alpha}\n"
},
{
"math_id": 7,
"text": "V_0"
},
{
"math_id": 8,
"text": "V_{90}"
}
]
| https://en.wikipedia.org/wiki?curid=7060924 |
706106 | Electrostatic induction | Separation of electric charge due to presence of other charges
Electrostatic induction, also known as "electrostatic influence" or simply "influence" in Europe and Latin America, is a redistribution of electric charge in an object that is caused by the influence of nearby charges. In the presence of a charged body, an insulated conductor develops a positive charge on one end and a negative charge on the other end. Induction was discovered by British scientist John Canton in 1753 and Swedish professor Johan Carl Wilcke in 1762. Electrostatic generators, such as the Wimshurst machine, the Van de Graaff generator and the electrophorus, use this principle. See also Stephen Gray in this context. Due to induction, the electrostatic potential (voltage) is constant at any point throughout a conductor. Electrostatic induction is also responsible for the attraction of light nonconductive objects, such as balloons, paper or styrofoam scraps, to static electric charges. Electrostatic induction laws apply in dynamic situations as far as the quasistatic approximation is valid.
Explanation.
A normal uncharged piece of matter has equal numbers of positive and negative electric charges in each part of it, located close together, so no part of it has a net electric charge. The positive charges are the atoms' nuclei which are bound into the structure of matter and are not free to move. The negative charges are the atoms' electrons. In electrically conductive objects such as metals, some of the electrons are able to move freely about in the object.
When a charged object is brought near an uncharged, electrically conducting object, such as a piece of metal, the force of the nearby charge due to Coulomb's law causes a separation of these internal charges. For example, if a positive charge is brought near the object (see picture of cylindrical electrode near electrostatic machine), the electrons in the metal will be attracted toward it and move to the side of the object facing it. When the electrons move out of an area, they leave an unbalanced positive charge due to the nuclei. This results in a region of negative charge on the object nearest to the external charge, and a region of positive charge on the part away from it. These are called "induced charges". If the external charge is negative, the polarity of the charged regions will be reversed.
Since this process is just a redistribution of the charges that were already in the object, it doesn't change the "total" charge on the object; it still has no net charge. This induction effect is reversible; if the nearby charge is removed, the attraction between the positive and negative internal charges causes them to intermingle again.
Charging an object by induction.
However, the induction effect can also be used to put a net charge on an object. If, while it is close to the positive charge, the above object is momentarily connected through a conductive path to electrical ground, which is a large reservoir of both positive and negative charges, some of the negative charges in the ground will flow into the object, under the attraction of the nearby positive charge. When the contact with ground is broken, the object is left with a net negative charge.
This method can be demonstrated using a gold-leaf electroscope, which is an instrument for detecting electric charge. The electroscope is first discharged, and a charged object is then brought close to the instrument's top terminal. Induction causes a separation of the charges inside the electroscope's metal rod, so that the top terminal gains a net charge of opposite polarity to that of the object, while the gold leaves gain a charge of the same polarity. Since both leaves have the same charge, they repel each other and spread apart. The electroscope has not acquired a net charge: the charge within it has merely been redistributed, so if the charged object were to be moved away from the electroscope the leaves will come together again.
But if an electrical contact is now briefly made between the electroscope terminal and ground, for example by touching the terminal with a finger, this causes charge to flow from ground to the terminal, attracted by the charge on the object close to the terminal. This charge neutralizes the charge in the gold leaves, so the leaves come together again. The electroscope now contains a net charge opposite in polarity to that of the charged object. When the electrical contact to earth is broken, e.g. by lifting the finger, the extra charge that has just flowed into the electroscope cannot escape, and the instrument retains a net charge. The charge is held in the top of the electroscope terminal by the attraction of the inducing charge. But when the inducing charge is moved away, the charge is released and spreads throughout the electroscope terminal to the leaves, so the gold leaves move apart again.
The charge left on the electroscope after grounding is always opposite in sign to the external inducing charge. The two rules of induction are:
The electrostatic field inside a conductive object is zero.
A remaining question is how large the induced charges are. The movement of charges is caused by the force exerted on them by the electric field of the external charged object, by Coulomb's law. As the charges in the metal object continue to separate, the resulting positive and negative regions create their own electric field, which opposes the field of the external charge. This process continues until very quickly (within a fraction of a second) an equilibrium is reached in which the induced charges are exactly the right size and shape to cancel the external electric field throughout the interior of the metal object. Then the remaining mobile charges (electrons) in the interior of the metal no longer feel a force and the net motion of the charges stops.
Induced charge resides on the surface.
Since the mobile charges (electrons) in the interior of a metal object are free to move in any direction, there can never be a static concentration of charge inside the metal; if there was, it would disperse due to its mutual repulsion. Therefore in induction, the mobile charges move through the metal under the influence of the external charge in such a way that they maintain local electrostatic neutrality; in any interior region the negative charge of the electrons balances the positive charge of the nuclei. The electrons move until they reach the surface of the metal and collect there, where they are constrained from moving by the boundary. The surface is the only location where a net electric charge can exist.
This establishes the principle that electrostatic charges on conductive objects reside on the surface of the object. External electric fields induce surface charges on metal objects that exactly cancel the field within.
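As a concrete numerical illustration (a sketch, not taken from the article's sources): for a point charge "q" held at a distance "d" from the centre of a grounded conducting sphere of radius "a", the textbook method of images gives the induced surface charge density, and integrating it over the sphere reproduces the total induced charge of -"qa"/"d". The parameter values below are arbitrary.

```python
import numpy as np

# Method-of-images check: total charge induced on a grounded conducting
# sphere of radius a by an external point charge q held at distance d > a
# from its centre.  All numerical values are illustrative.
q, a, d = 1.0e-9, 0.05, 0.20                      # C, m, m

# Image-charge result for the induced surface charge density as a function
# of the polar angle theta, measured from the line joining the centre to q:
theta = np.linspace(0.0, np.pi, 200001)
denom = 4.0 * np.pi * a * (a**2 + d**2 - 2.0 * a * d * np.cos(theta)) ** 1.5
sigma = -q * (d**2 - a**2) / denom

# Integrate over the sphere's surface, dA = a^2 sin(theta) dtheta dphi:
integrand = sigma * 2.0 * np.pi * a**2 * np.sin(theta)
total_induced = np.sum(0.5 * (integrand[1:] + integrand[:-1]) * np.diff(theta))

print(total_induced, -q * a / d)                  # both ≈ -2.5e-10 C
```

The induced charge is entirely on the surface and, for a grounded conductor, its total is set by the geometry of the external charge and the sphere.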
The voltage throughout a conductive object is constant.
The electrostatic potential or voltage between two points is defined as the energy (work) required to move a small positive charge through an electric field between the two points, divided by the size of the charge. If there is an electric field directed from point formula_0 to point formula_1, it will exert a force on a charge moving from formula_1 to formula_0. Work must be done on the charge to move it to formula_0 against the opposing force of the electric field, so the electrostatic potential energy of the charge increases, and the potential at point formula_0 is higher than at point formula_1. The electric field formula_2 at any point formula_3 is therefore the negative gradient (spatial rate of change) of the electrostatic potential formula_4:
formula_5
Since there can be no electric field inside a conductive object to exert force on charges formula_6, within a conductive object the gradient of the potential is zero
formula_7
Another way of saying this is that in electrostatics, electrostatic induction ensures that the potential (voltage) throughout a conductive object is constant.
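A small numerical sketch of this relation, using the field of a point charge purely as an example (the charge and distances are illustrative assumptions):

```python
import numpy as np

# Numerical check that the potential difference between two points equals
# the line integral of the electric field between them, for a point charge Q.
k = 8.9875517923e9          # Coulomb constant, N m^2 / C^2
Q = 1.0e-9                  # source charge, C

r_b, r_a = 0.10, 0.50       # two radial distances from Q, metres
r = np.linspace(r_b, r_a, 100001)
E = k * Q / r**2            # radial field magnitude at each sample point

# Integral of E dr from r_b to r_a (trapezoidal rule):
integral_E_dr = np.sum(0.5 * (E[1:] + E[:-1]) * np.diff(r))

# Potential difference V(r_b) - V(r_a) from the known point-charge potential:
delta_V = k * Q * (1.0 / r_b - 1.0 / r_a)

print(integral_E_dr, delta_V)    # both ≈ 71.9 V: the field is (minus) the gradient of V
```

Inside a conductor the field, and hence this integral between any two interior points, is zero, which is just another statement that the potential is constant there.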
Induction in dielectric objects.
A similar induction effect occurs in nonconductive (dielectric) objects, and is responsible for the attraction of small light nonconductive objects, like balloons, scraps of paper or Styrofoam, to static electric charges "(see picture of cat, above)", as well as static cling in clothes.
In nonconductors, the electrons are bound to atoms or molecules and are not free to move about the object as in conductors; however they can move a little within the molecules. If a positive charge is brought near a nonconductive object, the electrons in each molecule are attracted toward it, and move to the side of the molecule facing the charge, while the positive nuclei are repelled and move slightly to the opposite side of the molecule. Since the negative charges are now closer to the external charge than the positive charges, their attraction is greater than the repulsion of the positive charges, resulting in a small net attraction of the molecule toward the charge. This effect is microscopic, but since there are so many molecules, it adds up to enough force to move a light object like Styrofoam.
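This argument can be made quantitative with a toy Coulomb-force calculation; the following sketch models the polarized molecule as a pair of bound charges, with all values chosen only for illustration:

```python
# Toy Coulomb-force calculation showing why a polarized molecule is pulled
# toward an external charge: the nearer, opposite induced charge is attracted
# more strongly than the farther, like charge is repelled.
# All values below are illustrative assumptions.
k = 8.9875517923e9        # Coulomb constant, N m^2 / C^2
Q = 1.0e-9                # external positive charge at the origin, C
q = 1.0e-12               # magnitude of each bound charge in the molecule, C
r = 1.0e-3                # distance of the molecule from Q, m
delta = 1.0e-9            # induced charge separation inside the molecule, m

F_on_negative = -k * Q * q / (r - delta / 2) ** 2   # attraction toward Q (negative = inward)
F_on_positive = +k * Q * q / (r + delta / 2) ** 2   # repulsion away from Q

net_force = F_on_negative + F_on_positive
print(net_force)          # small negative number: a net attraction toward Q
```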
This change in the distribution of charge in a molecule due to an external electric field is called dielectric polarization, and the polarized molecules are called dipoles. This should not be confused with a polar molecule, which has a positive and negative end due to its structure, even in the absence of external charge. This is the principle of operation of a pith-ball electroscope.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\mathbf{b}"
},
{
"math_id": 1,
"text": "\\mathbf{a}"
},
{
"math_id": 2,
"text": "\\mathbf{E}(\\mathbf{x})"
},
{
"math_id": 3,
"text": "\\mathbf{x}"
},
{
"math_id": 4,
"text": "V(\\mathbf{x})"
},
{
"math_id": 5,
"text": "\\nabla V = \\mathbf{E}\\,"
},
{
"math_id": 6,
"text": "(\\mathbf{E} = 0)\\,"
},
{
"math_id": 7,
"text": "\\nabla V = \\mathbf{0}\\,"
}
]
| https://en.wikipedia.org/wiki?curid=706106 |
70620666 | Waxman-Bahcall bound | The Waxman-Bahcall bound is a computed upper limit on the observed flux of high energy neutrinos based on the observed flux of high energy cosmic rays. Since the highest energy neutrinos are produced from interactions of ultra-high-energy cosmic rays, the observed rate of production of the latter places a limit on the former. It is named for John Bahcall and Eli Waxman.
Cosmic rays.
The Waxman-Bahcall limit comes from the analysis of cosmic rays at various energy levels and their respective fluxes. Cosmic rays are high energy particles, like protons or atomic nuclei that move at near the speed of light. These rays can come from a variety of sources such as the Sun, the Solar System, the Milky Way galaxy, or even further beyond.
Upon entry into our atmosphere, these cosmic rays interact with atoms in the atmosphere, initiating cosmic-ray air showers. These showers are cascades of secondary particles, including muons and neutrinos. These "atmospheric" neutrinos can be studied, and a general plot of their energies and fluxes can be determined. The plot below shows the cosmic-ray energy spectrum. Note that the energy spectrum of atmospheric neutrinos is different, and that the Waxman-Bahcall bound does not apply to atmospheric neutrinos, but to (ultra-)high-energy neutrinos from outside of our galaxy.
During Waxman's and Bahcall's research into neutrinos, there appeared to be a gap of very energetic neutrinos, above the atmospheric neutrino limit but still below the GZK limit, implying that some extragalactic high-energy neutrino source exists that has yet to be detected.
Atmospheric neutrinos.
Atmospheric neutrinos are produced in the atmosphere, about 15 km above the Earth's surface. They are the result of particles, usually protons or light atomic nuclei, hitting other particles in the atmosphere and producing a shower of secondary particles, including neutrinos, that rains down toward the Earth's surface.
Atmospheric neutrinos were successfully detected in the 1960s, when experiments were able to find the muons produced by these neutrinos. From that, the energy of the neutrinos and the flux associated with them could be determined. Currently, neutrinos can be detected by many different experiments, such as the IceCube Neutrino Observatory, allowing for more accurate measurements of their energies and fluxes.
GZK limit.
The GZK limit is a limit on the highest possible cosmic-ray energy that can travel without interaction through the universe: cosmic rays above around 5×10^19 eV can reach Earth only from the nearby universe. The limit exists because at these higher energies, and at travel distances greater than about 50 Mpc, interactions of cosmic rays with the CMB photons increase. With these interactions, the new cosmic-ray product particles have lower and lower energy, and cosmic rays above a few 10^20 eV do not reach Earth (except if their source is very close). Important in this context is that the GZK interactions also produce neutrinos, called "cosmogenic" neutrinos. Their energy is typically one order of magnitude below the energy per nucleon of the cosmic-ray particle (e.g., a 10^20 eV proton would lead to 10^19 eV neutrinos, but a 10^20 eV iron nucleus, with 56 nucleons, would lead to neutrinos of 56 times lower energy than in the proton case).
Waxman-Bahcall upper bound.
The Waxman-Bahcall upper bound arose from the problem of neutrinos with energies higher than the atmospheric limit but still below the GZK limit discussed above. Unsure about what source could be the cause of these neutrinos, Waxman and Bahcall worked to rule out possible explanations, such as assistance from magnetic fields, redshift corrections, and sources of high energy outside the Milky Way galaxy.
The current upper bound on the muon neutrino intensity is:
formula_0
with the expected neutrino intensity being 1/2 "I"max.
Redshift losses.
Initially, in the derivation of the muon neutrino intensity above, redshift factors were ignored. However, if a correction factor is included, it can account for neutrinos that started out at higher energies and were detected at a lower energy due to redshift.
It is known, however, that if redshift is to be the prime factor in the limit, the proton would have had to originate at a redshift "z" of less than 1. If the particle started from outside this range, as implied by the GZK limit, other interactions would take place during the particle's travel, making the detected neutrinos fall far below the threshold discussed.
The correction factor, by which "I"max is multiplied to change the threshold of the system, was found to be:
formula_1
Working with nearby galaxies and clusters, it was found that the redshift correction produces no significant change in the limit, and that the explanation for any values outside of the limit would have to come from some other external source.
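For orientation, the correction factor can be evaluated numerically under simple, illustrative assumptions (no source evolution, "g"("z") = "f"("z") = 1, and a large "z"max); these choices are not taken from the original derivation:

```python
from scipy.integrate import quad

# xi_z with g(z) = 1 and f(z) = 1 (no source evolution), z_max effectively infinite.
numerator, _ = quad(lambda z: (1.0 + z) ** -3.5, 0.0, 200.0)
denominator, _ = quad(lambda z: (1.0 + z) ** -2.5, 0.0, 200.0)

print(numerator / denominator)   # ≈ 0.6 for a non-evolving source population
```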
Magnetic fields.
Neutrino Source.
Another factor to consider was the effect of magnetic fields at the source of the neutrinos and how they might allow for increased energy of an incoming charged particle from a cosmic ray. If protons could be prevented from leaving the source by a magnetic field, then only the neutrinos would escape, meaning a higher neutrino flux could be seen without a corresponding cosmic-ray flux. Bahcall and Waxman quickly ruled this out, because in a photo-meson interaction a charged pion is created and the proton is turned into a neutron. The neutron is not affected by the field, and at high energies it travels about 100 kpc before decaying. This makes it impossible to exceed the upper bound found earlier by Waxman and Bahcall.
Intergalactic magnetic field.
Another theory is that the intergalactic magnetic field would be able to change the direction of the protons on their way to Earth, while allowing the neutrinos to arrive in a relatively straight line.
To test this theory, Waxman and Bahcall started with a proton traveling with energy "E", in a magnetic field "B" with correlation length "λ". If the proton travels a distance "λ", the resulting angle of deflection is:
formula_2
Where "Rl" is the Larmor Radius.
formula_3
If the angle is kept small, then after propagating a distance "l" the accumulated deflection angle becomes:
formula_4
Plugging in values for the propagation time, which gives the maximum distance that the particle could travel in that time, we find that a uniformly distributed intergalactic magnetic field would have no effect on the limit.
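For a sense of scale, the deflection formula can be evaluated for representative parameter values; everything below (field strength, correlation length, distance, energy) is an illustrative assumption:

```python
import math

# Illustrative evaluation of the deflection formula above.
e   = 1.602176634e-19        # elementary charge, C
eV  = 1.602176634e-19        # joules per electronvolt
c   = 2.99792458e8           # speed of light, m/s
Mpc = 3.0857e22              # metres per megaparsec

E       = 1.0e20 * eV        # proton energy
B       = 1.0e-13            # magnetic field, T (roughly a nanogauss)
lam     = 1.0 * Mpc          # field correlation length
l_total = 100.0 * Mpc        # total propagation distance

# Larmor radius of an ultra-relativistic proton; the article's R_l = E/(eB)
# is written in units with c = 1, which in SI units becomes E/(e c B).
R_L = E / (e * c * B)

angle = math.sqrt(l_total / lam) * lam / R_L
print(R_L / Mpc, math.degrees(angle))    # R_L of order 100 Mpc, deflection of a few degrees
```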
Possible causes.
Active galactic nuclei jet models.
When looking out into the universe and considering what could have produced such high energy neutrinos, jets from active galactic nuclei (AGN) were thought to be the main candidate. Looking further into the details, Waxman and Bahcall saw that the predicted intensities for jets from AGNs are about two times higher than the limit discussed above.
Initially, it was thought that the photons and protons were accelerated in the jets by Fermi acceleration, with an energy spectrum:
formula_5
for both protons and photons (the same expression holds with either quantity substituted). This implies the optical depth is related to "E"p, and assuming a small optical depth gives the neutrino spectrum:
formula_6
Later, it was realized that the decay of neutral pions, which are created along with the charged pions, causes a high energy gamma ray emission. It was then found that the large energies being seen were not a result of Compton scattering of protons and photons, but of neutral pion decay. Once this emission was accounted for, the intensity of the neutrinos from AGN was under the maximum limit discussed above, and AGN then became a valid candidate for these higher energy neutrinos, provided the region is optically thin and the energy burst is caused by a single interaction of a decaying neutral pion.
Gamma ray bursts.
The gamma-ray burst (GRB) fireball model has been another candidate for explaining the higher energy neutrinos.
The high energy neutrino model already took multiple variables into account and was a match for the limit discussed above. Like AGNs, GRBs are optically thin; however, unlike AGNs, which needed additional assumptions about how the energy was expelled in order to match the flux calculations, the GRB model was able to match this limit directly.
In the fireball model, the initial burst of the GRB is followed by another shock later on, which explains the afterglow associated with GRBs. This second shock continues to push particles away and allows them to reach detectors on Earth within the limits discussed earlier.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "I_\\text{max} = 1.5 * 10^{-8} * \\xi_z \\text{GeV} \\text{cm}^{-2} \\text{s}^{-1} \\text{sr}^{-1}"
},
{
"math_id": 1,
"text": "\\xi_z = \\frac{\\int_0^{z_\\text{max}} \\mathrm{d}zg(z)(1+z)^{-7/2}f(z)}{\\int_0^{\\inf} \\mathrm{d}zg(z)(1+z)^{-5/2}}"
},
{
"math_id": 2,
"text": "\\text{angle} = \\lambda / R_l"
},
{
"math_id": 3,
"text": "R_l = E / eB"
},
{
"math_id": 4,
"text": "\\text{angle} = \\sqrt {l/\\lambda} * \\lambda / R_l"
},
{
"math_id": 5,
"text": "\\frac{\\mathrm{d}N}{\\mathrm{d}E} \\varpropto E^{-2}"
},
{
"math_id": 6,
"text": "\\frac{\\mathrm{d}N}{\\mathrm{d}E} \\varpropto E^{-1}"
}
]
| https://en.wikipedia.org/wiki?curid=70620666 |
7062356 | Anomalous diffusion | Diffusion process with a non-linear relationship to time
Anomalous diffusion is a diffusion process with a non-linear relationship between the mean squared displacement (MSD), formula_0, and time. This behavior is in stark contrast to Brownian motion, the typical diffusion process described by Einstein and Smoluchowski, where the MSD is linear in time (namely, formula_1 with "d" being the number of dimensions and "D" the diffusion coefficient).
It has been found that equations describing normal diffusion are not capable of characterizing some complex diffusion processes, for instance, diffusion processes in inhomogeneous or heterogeneous media, e.g. porous media. Fractional diffusion equations were introduced in order to characterize anomalous diffusion phenomena.
Examples of anomalous diffusion in nature have been observed in ultra-cold atoms, harmonic spring-mass systems, scalar mixing in the interstellar medium, telomeres in the nucleus of cells, ion channels in the plasma membrane, colloidal particle in the cytoplasm, moisture transport in cement-based materials, and worm-like micellar solutions.
Classes of anomalous diffusion.
Unlike typical diffusion, anomalous diffusion is described by a power law, formula_2, where formula_3 is the so-called generalized diffusion coefficient and formula_4 is the elapsed time. The behaviour is classified by the exponent "α": "α" < 1 corresponds to subdiffusion, "α" = 1 recovers ordinary Brownian motion, formula_5 corresponds to superdiffusion, "α" = 2 is ballistic motion (for which formula_6), and formula_7 corresponds to hyperballistic transport.
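The exponent can be estimated from simulated or measured trajectories by fitting the MSD on a log-log scale; a minimal sketch for ordinary Brownian motion (an anomalous process would give a fitted exponent different from 1):

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulate many independent 2-D Brownian trajectories and estimate the
# MSD exponent alpha from <r^2(tau)> ~ K * tau**alpha.
n_walkers, n_steps = 2000, 1000
steps = rng.normal(size=(n_walkers, n_steps, 2))      # unit-variance increments
paths = np.cumsum(steps, axis=1)                      # positions after each step

tau = np.arange(1, n_steps + 1)
msd = np.mean(np.sum(paths**2, axis=2), axis=0)       # ensemble-averaged r^2(tau)

alpha, log_K = np.polyfit(np.log(tau), np.log(msd), 1)
print(alpha)    # ≈ 1.0 for normal diffusion; sub/superdiffusion gives alpha < 1 or > 1
```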
In 1926, using weather balloons, Lewis Fry Richardson demonstrated that the atmosphere exhibits super-diffusion. In a bounded system, the mixing length (which determines the scale of dominant mixing motions) is given by the Von Kármán constant according to the equation formula_8, where formula_9 is the mixing length, formula_10 is the Von Kármán constant, and formula_11 is the distance to the nearest boundary. Because the scale of motions in the atmosphere is not limited, as in rivers or the subsurface, a plume continues to experience larger mixing motions as it increases in size, which also increases its diffusivity, resulting in super-diffusion.
Models of anomalous diffusion.
The classification given above allows one to measure the type of anomalous diffusion, but how does anomalous diffusion arise? There are many possible ways to mathematically define a stochastic process which then has the right kind of power law. Some models are given here.
These include long-range correlations between the signals, continuous-time random walks (CTRW), fractional Brownian motion (fBm), and diffusion in disordered media. Currently, the most studied types of anomalous diffusion processes are those involving these mechanisms.
These processes are of growing interest in cell biophysics, where the mechanism behind anomalous diffusion has direct physiological importance. Of particular interest, works by the groups of Eli Barkai, Maria Garcia Parajo, Joseph Klafter, Diego Krapf, and Ralf Metzler have shown that the motion of molecules in live cells often shows a type of anomalous diffusion that breaks the ergodic hypothesis. This type of motion requires novel formalisms for the underlying statistical physics, because approaches using the microcanonical ensemble and the Wiener–Khinchin theorem break down.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\langle r^{2}(\\tau )\\rangle "
},
{
"math_id": 1,
"text": "\\langle r^{2}(\\tau )\\rangle =2dD\\tau"
},
{
"math_id": 2,
"text": "\\langle r^{2}(\\tau )\\rangle =K_\\alpha\\tau^\\alpha"
},
{
"math_id": 3,
"text": "K_\\alpha"
},
{
"math_id": 4,
"text": "\\tau"
},
{
"math_id": 5,
"text": "1 < \\alpha < 2"
},
{
"math_id": 6,
"text": "r = v\\tau"
},
{
"math_id": 7,
"text": "\\alpha > 2"
},
{
"math_id": 8,
"text": "l_m={\\kappa}z"
},
{
"math_id": 9,
"text": "l_m"
},
{
"math_id": 10,
"text": " {\\kappa}"
},
{
"math_id": 11,
"text": " z "
}
]
| https://en.wikipedia.org/wiki?curid=7062356 |
70623658 | Joshua 12 | Book of Joshua, chapter 12
Joshua 12 is the twelfth chapter of the Book of Joshua in the Hebrew Bible or in the Old Testament of the Christian Bible. According to Jewish tradition the book was attributed to Joshua, with additions by the high priests Eleazar and Phinehas. However, modern scholars view it as part of the Deuteronomistic History, which spans the books of Deuteronomy to 2 Kings and is attributed to nationalistic and devotedly Yahwistic writers who were active during the time of the reformer Judean king Josiah in the 7th century BCE. This chapter records the list of kings defeated by the Israelites under the leadership of Moses and Joshua. It is part of a section about the conquest of Canaan, which comprises Joshua 5:13–12:24.
Text.
This chapter was originally written in the Hebrew language. It is divided into 24 verses.
Textual witnesses.
Some early manuscripts containing the text of this chapter in Hebrew are of the Masoretic Text tradition, which includes the Codex Cairensis (895), Aleppo Codex (10th century), and Codex Leningradensis (1008).
Extant ancient manuscripts of a translation into Koine Greek known as the Septuagint (originally was made in the last few centuries BCE) include Codex Vaticanus (B; formula_0B; 4th century) and Codex Alexandrinus (A; formula_0A; 5th century).
Analysis.
The narrative of the Israelites conquering the land of Canaan comprises verses 5:13 to 12:24 of the Book of Joshua and has the following outline:
A. Jericho (5:13–6:27)
B. Achan and Ai (7:1–8:29)
C. Renewal at Mount Ebal (8:30–35)
D. The Gibeonite Deception (9:1–27)
E. The Campaign in the South (10:1–43)
F. The Campaign in the North and Summary List of Kings (11:1–12:24)
1. Victory over the Northern Alliance (11:1-15)
a. The Northern Alliance (11:1-5)
b. Divine Reassurance (11:6)
c. Victory at Merom (11:7-9)
d. Destruction of Hazor (11:10–11)
e. Summation of Obedience and Victory (11:12–15)
2. Summaries of Taking the Land (11:16-12:24)
a. Taking the Land (11:16-20)
b. Extermination of the Anakim (11:21-22)
c. Narrative Pivot: Taking and Allotting (11:23)
d. Capture of Land and Kings (12:1-24)
i. East of the Jordan (12:1-6)
ii. West of the Jordan (12:7-24)
Chapter 12 provides the closing account of Israel's conquest of the territory in both Transjordan (verses 1–6) and Cisjordan (verses 7–24), as 'the promised land', which comprises the lands allocated to all the tribes of Israel.
Kings defeated east of the Jordan (12:1–6).
This section recalls once again the victories in Transjordan (cf. Joshua 1:12-15; Numbers 21; Deuteronomy 2–3), especially mentioning that the "herem" ("ban" or holy war) was applied there first (Deuteronomy 2:34; 3:6). The two defeated kings in Transjordan were Sihon of Heshbon and Og of Bashan. Sihon's territory extended from the river Arnon —running from the east into the Dead Sea, and forming the northern boundary of Moab — northwards along the east bank of the Jordan River to its next major tributary, the Jabbok (where Jacob had wrestled with God; Genesis 32:22–32), including a stretch of the Arabah well to the north of the Jabbok, east of "Chinneroth" (= Sea of Galilee), and eastwards to the Ammonites' borderland. Og's territory lay to the north and east of Sihon's, with two major cities, Ashtaroth and Edrei, in the east of the Sea of Galilee. By taking both territories, Moses took all the Transjordan from the Arnon (river) to (mount) Hermon, then distributed it to the tribes of Reuben and Gad, and half the tribe of Manasseh (cf. Numbers 32: Deuteronomy 2–3).
"Now these are the kings of the land whom the people of Israel defeated and took possession of their land beyond the Jordan toward the sunrise, from the Valley of the Arnon to Mount Hermon, with all the Arabah eastward:"
Verse 1.
The Transjordanian territory of Israel is defined by the Arnon River (Seil el-Mojib, or Wadi Mujib) in the south and Mount Hermon in the north.
Kings defeated west of the Jordan (12:7–24).
This section lists Joshua's conquests in the territory west of the Jordan, against the nations which were there (Deuteronomy 1:7; 7:1; Joshua 11:16–17), to continue the land possession and promise fulfillment that Moses had started. The conquered area is reported to span from Baal-gad in the north and Mount Halak in the south (cf. Joshua 11:17), followed by the list of the defeated kings of cities, roughly followed the progress of the conquest: first, Jericho and Ai, then the southern alliance under the king of Jerusalem, and the northern alliance under the king of Hazor. Among some new city names —Geder, Hormah, Arad, Adullam, Tappuah, Hepher, Aphek, Lasharon, Taanach, Megiddo, Kedesh, Jokneam, Tirzah— Arad, on the southern borders of Judah, is known with its temple to YHWH, and Megiddo was an important fortress on the north-south route, guarding the 'entrance to the plain of Esdraelon'. "Bethel" in verse 16 (not listed in Septuagint) is mentioned in verse 9, but the city of Bethel is recorded to fall to the 'house of Joseph' in Judges 1:22–25, 'after the death of Joshua' (Judges 1:1). The recurrence of the 'kings' implies the significance of YHWH's empowerment of Joshua, who is not a king, to defeat the kings of Canaan.
"7And these are the kings of the land whom Joshua and the people of Israel defeated on the west side of the Jordan, from Baal-gad in the Valley of Lebanon to Mount Halak, that rises toward Seir (and Joshua gave their land to the tribes of Israel as a possession according to their allotments, 8in the hill country, in the lowland, in the Arabah, in the slopes, in the wilderness, and in the Negeb, the land of the Hittites, the Amorites, the Canaanites, the Perizzites, the Hivites, and the Jebusites):"
Verses 7–8.
The geographical description matches those in Joshua 11:17, but here it is presented in order from north to south.
Notes.
<templatestyles src="Reflist/styles.css" />
References.
<templatestyles src="Reflist/styles.css" />
Sources.
<templatestyles src="Refbegin/styles.css" /> | [
{
"math_id": 0,
"text": " \\mathfrak{G}"
}
]
| https://en.wikipedia.org/wiki?curid=70623658 |
7062375 | Poincaré residue | In mathematics, the Poincaré residue is a generalization, to several complex variables and complex manifold theory, of the residue at a pole of complex function theory. It is just one of a number of such possible extensions.
Given a hypersurface formula_0 defined by a degree formula_1 polynomial formula_2 and a rational formula_3-form formula_4 on formula_5 with a pole of order formula_6 on formula_7, we can construct a cohomology class formula_8. If formula_9 we recover the classical residue construction.
Historical construction.
When Poincaré first introduced residues he was studying period integrals of the form formula_10 for formula_11, where formula_4 was a rational differential form with poles along a divisor formula_12. He was able to reduce this integral to an integral of the form formula_13 for formula_14, where formula_15, sending formula_16 to the boundary of a solid formula_17-tube around formula_16 on the smooth locus formula_18 of the divisor. If formula_19 on an affine chart where formula_20 is irreducible of degree formula_21 and formula_22 (so there are no poles on the line at infinity), then he gave a formula for computing this residue as formula_23, where the two expressions are cohomologous forms.
Construction.
Preliminary definition.
Given the setup in the introduction, let formula_24 be the space of meromorphic formula_25-forms on formula_5 which have poles of order up to formula_26. Notice that the standard differential formula_1 sends
formula_27
Define
formula_28
as the rational de Rham cohomology groups. They form a filtration formula_29 corresponding to the Hodge filtration.
Definition of residue.
Consider an formula_30-cycle formula_31. We take a tube formula_32 around formula_16 (which is locally isomorphic to formula_33) that lies within the complement of formula_7. Since this is an formula_3-cycle, we can integrate a rational formula_3-form formula_4 and get a number. If we write this as
formula_34
then we get a linear transformation on the homology classes. Homology/cohomology duality implies that this is a cohomology class
formula_8
which we call the residue. Notice if we restrict to the case formula_9, this is just the standard residue from complex analysis (although we extend our meromorphic formula_35-form to all of formula_36). This definition can be summarized as the map formula_37
Algorithm for computing this class.
There is a simple recursive method for computing the residues which reduces to the classical case of formula_9. Recall that the residue of a formula_35-form
formula_38
If we consider a chart containing formula_7 where it is the vanishing locus of formula_39, we can write a meromorphic formula_3-form with pole on formula_7 as
formula_40
Then we can write it out as
formula_41
This shows that the two cohomology classes
formula_42
are equal. We have thus reduced the order of the pole, hence we can use recursion to get a pole of order formula_35 and define the residue of formula_4 as
formula_43
Example.
For example, consider the curve formula_44 defined by the polynomial
formula_45
Then, we can apply the previous algorithm to compute the residue of
formula_46
Since
formula_47
and
formula_48
we have that
formula_49
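These rearrangements can be checked symbolically; a minimal sympy sketch, verifying Euler's identity for the homogeneous cubic, which is the only nontrivial step in the decomposition above:

```python
import sympy as sp

x, y, z, t = sp.symbols('x y z t')
F = t*(x**3 + y**3 + z**3) - 3*x*y*z                  # the cubic F_t from this example
Fx, Fy, Fz = (sp.diff(F, v) for v in (x, y, z))

# Euler's identity for the degree-3 homogeneous polynomial F_t:
# x*Fx + y*Fy + z*Fz = 3*F, i.e. 3*F - z*Fz - y*Fy = x*Fx.
print(sp.simplify(x*Fx + y*Fy + z*Fz - 3*F))          # 0

# Multiplying the claimed decomposition of omega by F*Fx, the dx^dy and
# dx^dz coefficients match term by term; the remaining dy^dz coefficient
# reduces exactly to Euler's identity:
print(sp.simplify(x*Fx - (3*F - z*Fz - y*Fy)))        # 0, confirming the decomposition above
```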
This implies that
formula_50 | [
{
"math_id": 0,
"text": "X \\subset \\mathbb{P}^n"
},
{
"math_id": 1,
"text": "d"
},
{
"math_id": 2,
"text": "F"
},
{
"math_id": 3,
"text": "n"
},
{
"math_id": 4,
"text": "\\omega"
},
{
"math_id": 5,
"text": "\\mathbb{P}^n"
},
{
"math_id": 6,
"text": "k > 0"
},
{
"math_id": 7,
"text": "X"
},
{
"math_id": 8,
"text": "\\operatorname{Res}(\\omega) \\in H^{n-1}(X;\\mathbb{C})"
},
{
"math_id": 9,
"text": "n=1"
},
{
"math_id": 10,
"text": "\\underset{\\Gamma}\\iint \\omega"
},
{
"math_id": 11,
"text": "\\Gamma \\in H_2(\\mathbb{P}^2 - D)"
},
{
"math_id": 12,
"text": "D"
},
{
"math_id": 13,
"text": "\\int_\\gamma \\text{Res}(\\omega)"
},
{
"math_id": 14,
"text": "\\gamma \\in H_1(D)"
},
{
"math_id": 15,
"text": "\\Gamma = T(\\gamma)"
},
{
"math_id": 16,
"text": "\\gamma"
},
{
"math_id": 17,
"text": "\\varepsilon"
},
{
"math_id": 18,
"text": "D^*"
},
{
"math_id": 19,
"text": "\\omega = \\frac{q(x,y)dx\\wedge dy}{p(x,y)}"
},
{
"math_id": 20,
"text": "p(x,y)"
},
{
"math_id": 21,
"text": "N"
},
{
"math_id": 22,
"text": "\\deg q(x,y) \\leq N-3"
},
{
"math_id": 23,
"text": "\\text{Res}(\\omega) = -\\frac{qdx}{\\partial p / \\partial y} = \\frac{qdy}{\\partial p / \\partial x}"
},
{
"math_id": 24,
"text": "A^p_k(X)"
},
{
"math_id": 25,
"text": "p"
},
{
"math_id": 26,
"text": "k"
},
{
"math_id": 27,
"text": "d: A^{p-1}_{k-1}(X) \\to A^p_k(X)"
},
{
"math_id": 28,
"text": "\\mathcal{K}_k(X) = \\frac{A^p_k(X)}{dA^{p-1}_{k-1}(X)}"
},
{
"math_id": 29,
"text": "\\mathcal{K}_1(X) \\subset \\mathcal{K}_2(X) \\subset \\cdots \\subset \\mathcal{K}_n(X) =\nH^{n+1}(\\mathbb{P}^{n+1}-X)"
},
{
"math_id": 30,
"text": "(n-1)"
},
{
"math_id": 31,
"text": "\\gamma \\in H_{n-1}(X;\\mathbb{C})"
},
{
"math_id": 32,
"text": "T(\\gamma)"
},
{
"math_id": 33,
"text": "\\gamma\\times S^1"
},
{
"math_id": 34,
"text": "\\int_{T(-)}\\omega : H_{n-1}(X;\\mathbb{C}) \\to \\mathbb{C}"
},
{
"math_id": 35,
"text": "1"
},
{
"math_id": 36,
"text": "\\mathbb{P}^1"
},
{
"math_id": 37,
"text": "\\text{Res}: H^{n}(\\mathbb{P}^{n}\\setminus X) \\to H^{n-1}(X)"
},
{
"math_id": 38,
"text": " \\operatorname{Res}\\left(\\frac{dz} z + a\\right) = 1"
},
{
"math_id": 39,
"text": "w"
},
{
"math_id": 40,
"text": "\\frac{dw}{w^k}\\wedge \\rho"
},
{
"math_id": 41,
"text": " \\frac{1}{(k-1)}\\left( \\frac{d\\rho}{w^{k-1}} + d\\left(\\frac{\\rho}{w^{k-1}}\\right) \\right)"
},
{
"math_id": 42,
"text": "\\left[ \\frac{dw}{w^k}\\wedge \\rho \\right] = \\left[ \\frac{d\\rho}{(k-1)w^{k-1}} \\right]"
},
{
"math_id": 43,
"text": " \\operatorname{Res}\\left( \\alpha \\wedge \\frac{dw} w + \\beta \\right) = \\alpha|_X"
},
{
"math_id": 44,
"text": "X \\subset \\mathbb{P}^2"
},
{
"math_id": 45,
"text": "F_t(x,y,z) = t(x^3 + y^3 + z^3) - 3xyz"
},
{
"math_id": 46,
"text": "\\omega = \\frac{\\Omega}{F_t} = \\frac{x\\,dy\\wedge dz - y \\, dx\\wedge dz + z \\, dx\\wedge dy}{t(x^3 + y^3 + z^3) - 3xyz}"
},
{
"math_id": 47,
"text": "\n\\begin{align}\n-z\\,dy\\wedge\\left( \\frac{\\partial F_t}{\\partial x} \\, dx + \\frac{\\partial F_t}{\\partial y} \\, dy + \\frac{\\partial F_t}{\\partial z} \\, dz \\right) &=z\\frac{\\partial F_t}{\\partial x} \\, dx\\wedge dy - z \\frac{\\partial F_t}{\\partial z} \\, dy\\wedge dz \\\\\ny \\, dz\\wedge\\left(\\frac{\\partial F_t}{\\partial x} \\, dx + \\frac{\\partial F_t}{\\partial y} \\, dy + \\frac{\\partial F_t}{\\partial z} \\, dz\\right) &= -y\\frac{\\partial F_t}{\\partial x} \\, dx\\wedge dz - y \\frac{\\partial F_t}{\\partial y} \\, dy\\wedge dz \n\\end{align}\n"
},
{
"math_id": 48,
"text": "\n3F_t - z\\frac{\\partial F_t}{\\partial x} - y\\frac{\\partial F_t}{\\partial y} = x \\frac{\\partial F_t}{\\partial x}\n"
},
{
"math_id": 49,
"text": "\n\\omega = \\frac{y\\,dz - z\\,dy}{\\partial F_t / \\partial x} \\wedge \\frac{dF_t}{F_t} + \\frac{3\\,dy\\wedge dz}{\\partial F_t/\\partial x}\n"
},
{
"math_id": 50,
"text": "\\operatorname{Res}(\\omega) = \\frac{y\\,dz - z\\,dy}{\\partial F_t / \\partial x} "
}
]
| https://en.wikipedia.org/wiki?curid=7062375 |
706244 | Conical surface | Surface drawn by a moving line passing through a fixed point
In geometry, a conical surface is a three-dimensional surface formed from the union of lines that pass through a fixed point and a space curve.
Definitions.
A ("general") conical surface is the unbounded surface formed by the union of all the straight lines that pass through a fixed point — the "apex" or "vertex" — and any point of some fixed space curve — the "directrix" — that does not contain the apex. Each of those lines is called a "generatrix" of the surface. The directrix is often taken as a plane curve, in a plane not containing the apex, but this is not a requirement.
In general, a conical surface consists of two congruent unbounded halves joined by the apex. Each half is called a nappe, and is the union of all the rays that start at the apex and pass through a point of some fixed space curve. Sometimes the term "conical surface" is used to mean just one nappe.
Special cases.
If the directrix is a circle formula_0, and the apex is located on the circle's "axis" (the line that contains the center of formula_0 and is perpendicular to its plane), one obtains the "right circular conical surface" or double cone. More generally, when the directrix formula_0 is an ellipse, or any conic section, and the apex is an arbitrary point not on the plane of formula_0, one obtains an elliptic cone (also called a "conical quadric" or "quadratic cone"), which is a special case of a quadric surface.
Equations.
A conical surface formula_1 can be described parametrically as
formula_2,
where formula_3 is the apex and formula_4 is the directrix.
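As a concrete sketch of this parametrization (the apex at the origin and the circular directrix are arbitrary illustrative choices):

```python
import numpy as np

# Sample points of a conical surface S(t, u) = v + u * q(t), with the apex v
# at the origin and a circular directrix q(t) in the plane z = 1.
v = np.array([0.0, 0.0, 0.0])                      # apex

def q(t):
    """A point of the directrix: the unit circle at height z = 1."""
    return np.array([np.cos(t), np.sin(t), 1.0])

t_vals = np.linspace(0.0, 2.0 * np.pi, 60)
u_vals = np.linspace(-2.0, 2.0, 41)                # negative u gives the second nappe

# With the apex at the origin, u * q(t) sweeps the full line through the
# apex and the directrix point q(t).
points = np.array([v + u * q(t) for t in t_vals for u in u_vals])
print(points.shape)                                # (2460, 3) sample points on the double cone
```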
Related surface.
Conical surfaces are ruled surfaces, surfaces that have a straight line through each of their points. Patches of conical surfaces that avoid the apex are special cases of developable surfaces, surfaces that can be unfolded to a flat plane without stretching. When the directrix has the property that the angle it subtends from the apex is exactly formula_5, then each nappe of the conical surface, including the apex, is a developable surface.
A cylindrical surface can be viewed as a limiting case of a conical surface whose apex is moved off to infinity in a particular direction. Indeed, in projective geometry a cylindrical surface is just a special case of a conical surface.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "C"
},
{
"math_id": 1,
"text": "S"
},
{
"math_id": 2,
"text": "S(t,u) = v + u q(t)"
},
{
"math_id": 3,
"text": "v"
},
{
"math_id": 4,
"text": "q"
},
{
"math_id": 5,
"text": "2\\pi"
}
]
| https://en.wikipedia.org/wiki?curid=706244 |
706247 | Electronic band structure | Describes the range of energies of an electron within the solid
In solid-state physics, the electronic band structure (or simply band structure) of a solid describes the range of energy levels that electrons may have within it, as well as the ranges of energy that they may not have (called "band gaps" or "forbidden bands").
Band theory derives these bands and band gaps by examining the allowed quantum mechanical wave functions for an electron in a large, periodic lattice of atoms or molecules. Band theory has been successfully used to explain many physical properties of solids, such as electrical resistivity and optical absorption, and forms the foundation of the understanding of all solid-state devices (transistors, solar cells, etc.).
<templatestyles src="Template:TOC limit/styles.css" />
Why bands and band gaps occur.
The formation of electronic bands and band gaps can be illustrated with two complementary models for electrons in solids. The first one is the nearly free electron model, in which the electrons are assumed to move almost freely within the material. In this model, the electronic states resemble free electron plane waves, and are only slightly perturbed by the crystal lattice. This model explains the origin of the electronic dispersion relation, but the explanation for band gaps is subtle in this model.
The second model starts from the opposite limit, in which the electrons are tightly bound to individual atoms. The electrons of a single, isolated atom occupy atomic orbitals with discrete energy levels. If two atoms come close enough so that their atomic orbitals overlap, the electrons can tunnel between the atoms. This tunneling splits (hybridizes) the atomic orbitals into molecular orbitals with different energies.
Similarly, if a large number "N" of identical atoms come together to form a solid, such as a crystal lattice, the atoms' atomic orbitals overlap with the nearby orbitals. Each discrete energy level splits into "N" levels, each with a different energy. Since the number of atoms in a macroscopic piece of solid is very large, the number of orbitals that hybridize with each other is very large. For this reason, the adjacent levels are very closely spaced in energy, and can be considered to form a continuum, an energy band.
This formation of bands is mostly a feature of the outermost electrons (valence electrons) in the atom, which are the ones involved in chemical bonding and electrical conductivity. The inner electron orbitals do not overlap to a significant degree, so their bands are very narrow.
Band gaps are essentially leftover ranges of energy not covered by any band, a result of the finite widths of the energy bands. The bands have different widths, with the widths depending upon the degree of overlap in the atomic orbitals from which they arise. Two adjacent bands may simply not be wide enough to fully cover the range of energy. For example, the bands associated with core orbitals (such as 1s electrons) are extremely narrow due to the small overlap between adjacent atoms. As a result, there tend to be large band gaps between the core bands. Higher bands involve comparatively larger orbitals with more overlap, becoming progressively wider at higher energies so that there are no band gaps at higher energies.
Basic concepts.
Assumptions and limits of band structure theory.
Band theory is only an approximation to the quantum state of a solid, which applies to solids consisting of many identical atoms or molecules bonded together. These are the assumptions necessary for band theory to be valid:
The above assumptions are broken in a number of important practical situations, and the use of band structure requires one to keep a close check on the limitations of band theory:
Crystalline symmetry and wavevectors.
Band structure calculations take advantage of the periodic nature of a crystal lattice, exploiting its symmetry. The single-electron Schrödinger equation is solved for an electron in a lattice-periodic potential, giving Bloch electrons as solutions
formula_0
where k is called the wavevector. For each value of k, there are multiple solutions to the Schrödinger equation labelled by "n", the band index, which simply numbers the energy bands.
Each of these energy levels evolves smoothly with changes in k, forming a smooth band of states. For each band we can define a function "E""n"(k), which is the dispersion relation for electrons in that band.
The wavevector takes on any value inside the Brillouin zone, which is a polyhedron in wavevector (reciprocal lattice) space that is related to the crystal's lattice.
Wavevectors outside the Brillouin zone simply correspond to states that are physically identical to those states within the Brillouin zone.
Special high symmetry points/lines in the Brillouin zone are assigned labels like Γ, Δ, Λ, Σ (see Fig 1).
It is difficult to visualize the shape of a band as a function of wavevector, as it would require a plot in four-dimensional space, "E" vs. "kx", "ky", "kz". In scientific literature it is common to see band structure plots which show the values of "E""n"(k) for values of k along straight lines connecting symmetry points, often labelled Δ, Λ, Σ, or [100], [111], and [110], respectively. Another method for visualizing band structure is to plot a constant-energy isosurface in wavevector space, showing all of the states with energy equal to a particular value. The isosurface of states with energy equal to the Fermi level is known as the Fermi surface.
Energy band gaps can be classified using the wavevectors of the states surrounding the band gap: in a "direct" band gap, the lowest-energy state above the gap has the same wavevector as the highest-energy state below the gap; in an "indirect" band gap, these two states have different wavevectors.
Asymmetry: Band structures in non-crystalline solids.
Although electronic band structures are usually associated with crystalline materials, quasi-crystalline and amorphous solids may also exhibit band gaps. These are somewhat more difficult to study theoretically since they lack the simple symmetry of a crystal, and it is not usually possible to determine a precise dispersion relation. As a result, virtually all of the existing theoretical work on the electronic band structure of solids has focused on crystalline materials.
Density of states.
The density of states function "g"("E") is defined as the number of electronic states per unit volume, per unit energy, for electron energies near "E".
The density of states function is important for calculations of effects based on band theory.
In Fermi's Golden Rule, a calculation for the rate of optical absorption, it provides both the number of excitable electrons and the number of final states for an electron. It appears in calculations of electrical conductivity where it provides the number of mobile states, and in computing electron scattering rates where it provides the number of final states after scattering.
For energies inside a band gap, "g"("E") = 0.
Filling of bands.
At thermodynamic equilibrium, the likelihood of a state of energy "E" being filled with an electron is given by the Fermi–Dirac distribution, a thermodynamic distribution that takes into account the Pauli exclusion principle:
formula_1
where: "T" is the absolute temperature, "k"B is the Boltzmann constant, and "µ" is the total chemical potential of electrons, also known as the Fermi level.
The density of electrons in the material is simply the integral of the Fermi–Dirac distribution times the density of states:
formula_2
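A small numerical sketch of this integral; the parabolic-band density of states and all parameter values are illustrative assumptions rather than properties of any particular material:

```python
import numpy as np
from scipy.integrate import quad

kB = 8.617333262e-5          # Boltzmann constant, eV/K
T  = 300.0                   # temperature, K
mu = 0.0                     # Fermi level, eV (measured from the band edge at E = 0)

def g(E):
    """Toy density of states: free-electron-like g(E) ~ sqrt(E) above a band edge at E = 0."""
    return np.sqrt(E) if E > 0 else 0.0          # arbitrary units

def f(E):
    """Fermi-Dirac occupation probability."""
    return 1.0 / (1.0 + np.exp((E - mu) / (kB * T)))

n, _ = quad(lambda E: g(E) * f(E), 0.0, 2.0)     # electron density, arbitrary units
print(n)
```

In a real calculation, the band structure adjusts relative to the Fermi level until such an integral matches the required charge density, as described below.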
Although there are an infinite number of bands and thus an infinite number of states, there are only a finite number of electrons to place in these bands.
The preferred value for the number of electrons is a consequence of electrostatics: even though the surface of a material can be charged, the internal bulk of a material prefers to be charge neutral.
The condition of charge neutrality means that "N"/"V" must match the density of protons in the material. For this to occur, the material electrostatically adjusts itself, shifting its band structure up or down in energy (thereby shifting "g"("E")), until it is at the correct equilibrium with respect to the Fermi level.
Names of bands near the Fermi level (conduction band, valence band).
A solid has an infinite number of allowed bands, just as an atom has infinitely many energy levels. However, most of the bands simply have too high energy, and are usually disregarded under ordinary circumstances.
Conversely, there are very low energy bands associated with the core orbitals (such as 1s electrons). These low-energy "core band"s are also usually disregarded since they remain filled with electrons at all times, and are therefore inert.
Likewise, materials have several band gaps throughout their band structure.
The most important bands and band gaps—those relevant for electronics and optoelectronics—are those with energies near the Fermi level.
The bands and band gaps near the Fermi level are given special names, depending on the material: in a semiconductor or band insulator, the Fermi level is surrounded by a band gap; the band nearest below the gap is called the "valence band" and the band nearest above it is called the "conduction band". In a metal or semimetal, the Fermi level lies inside of at least one band; these partially filled bands may be referred to as either "conduction bands" or "valence bands", depending on context.
Theory in crystals.
The ansatz is the special case of electron waves in a periodic crystal lattice using Bloch's theorem as treated generally in the dynamical theory of diffraction. Every crystal is a periodic structure which can be characterized by a Bravais lattice, and for each Bravais lattice we can determine the reciprocal lattice, which encapsulates the periodicity in a set of three reciprocal lattice vectors (b1, b2, b3). Now, any periodic potential "V"(r) which shares the same periodicity as the direct lattice can be expanded out as a Fourier series whose only non-vanishing components are those associated with the reciprocal lattice vectors. So the expansion can be written as:
formula_3
where K = "m"1b1 + "m"2b2 + "m"3b3 for any set of integers ("m"1, "m"2, "m"3).
From this theory, an attempt can be made to predict the band structure of a particular material, however most ab initio methods for electronic structure calculations fail to predict the observed band gap.
Nearly free electron approximation.
In the nearly free electron approximation, interactions between electrons are completely ignored. This approximation allows use of Bloch's Theorem which states that electrons in a periodic potential have wavefunctions and energies which are periodic in wavevector up to a constant phase shift between neighboring reciprocal lattice vectors. The consequences of periodicity are described mathematically by the Bloch's theorem, which states that the eigenstate wavefunctions have the form
formula_4
where the Bloch function formula_5 is periodic over the crystal lattice, that is,
formula_6
Here index n refers to the nth energy band, wavevector k is related to the direction of motion of the electron, r is the position in the crystal, and R is the location of an atomic site.
The NFE model works particularly well in materials like metals where distances between neighbouring atoms are small. In such materials the overlap of atomic orbitals and potentials on neighbouring atoms is relatively large. In that case the wave function of the electron can be approximated by a (modified) plane wave. The band structure of a metal like aluminium even gets close to the empty lattice approximation.
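The nearly free electron picture can be made concrete with a one-dimensional toy calculation that diagonalizes the Hamiltonian in a small plane-wave basis for a weak cosine potential; the lattice constant, potential strength and basis size are illustrative assumptions:

```python
import numpy as np

# 1-D nearly free electron bands: H_{GG'} = (1/2)(k+G)^2 delta_{GG'} + V_{G-G'}
# for a weak potential V(x) = 2*V1*cos(2*pi*x/a).  Units with hbar = m = 1.
a  = 1.0                                   # lattice constant
V1 = 0.5                                   # Fourier component of the potential (weak)
G  = 2.0 * np.pi / a * np.arange(-3, 4)    # small plane-wave basis, 7 reciprocal vectors

def bands(k):
    H = np.diag(0.5 * (k + G) ** 2)        # kinetic term
    for i in range(len(G)):
        for j in range(len(G)):
            if abs(i - j) == 1:            # V couples plane waves differing by one G
                H[i, j] = V1
    return np.linalg.eigvalsh(H)           # band energies E_n(k), sorted

k_points = np.linspace(-np.pi / a, np.pi / a, 101)       # first Brillouin zone
E = np.array([bands(k) for k in k_points])
print(E.shape)                             # (101, 7): the lowest bands E_n(k) across the zone
# The gap between the two lowest bands at the zone boundary is approximately 2*V1,
# as the nearly free electron model predicts for a weak potential.
```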
Tight binding model.
The opposite extreme to the nearly free electron approximation assumes the electrons in the crystal behave much like an assembly of constituent atoms. This tight binding model assumes the solution to the time-independent single electron Schrödinger equation formula_7 is well approximated by a linear combination of atomic orbitals formula_8.
formula_9
where the coefficients formula_10 are selected to give the best approximate solution of this form. Index n refers to an atomic energy level and R refers to an atomic site. A more accurate approach using this idea employs Wannier functions, defined by:
formula_11
in which formula_12 is the periodic part of the Bloch's theorem and the integral is over the Brillouin zone. Here index "n" refers to the "n"-th energy band in the crystal. The Wannier functions are localized near atomic sites, like atomic orbitals, but being defined in terms of Bloch functions they are accurately related to solutions based upon the crystal potential. Wannier functions on different atomic sites R are orthogonal. The Wannier functions can be used to form the Schrödinger solution for the "n"-th energy band as:
formula_13
The TB model works well in materials with limited overlap between atomic orbitals and potentials on neighbouring atoms. Band structures of materials like Si, GaAs, SiO2 and diamond for instance are well described by TB-Hamiltonians on the basis of atomic sp3 orbitals. In transition metals a mixed TB-NFE model is used to describe the broad NFE conduction band and the narrow embedded TB d-bands. The radial functions of the atomic orbital part of the Wannier functions are most easily calculated by the use of pseudopotential methods. NFE, TB or combined NFE-TB band structure calculations,
sometimes extended with wave function approximations based on pseudopotential methods, are often used as an economic starting point for further calculations.
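A minimal tight-binding sketch for a single-orbital one-dimensional chain with nearest-neighbour hopping (the on-site energy and hopping amplitude are illustrative assumptions); it compares the analytic Bloch dispersion with direct diagonalization of a finite ring:

```python
import numpy as np

# Tight-binding band of a 1-D chain: one orbital per site, on-site energy eps,
# nearest-neighbour hopping t_hop, lattice constant a.  Bloch's theorem gives
# the analytic dispersion E(k) = eps - 2*t_hop*cos(k*a).
eps, t_hop, a = 0.0, 1.0, 1.0

k = np.linspace(-np.pi / a, np.pi / a, 201)
E_analytic = eps - 2.0 * t_hop * np.cos(k * a)

# The same band from direct diagonalization of a finite ring of N sites
# (its eigenvalues sample E(k) at the allowed k = 2*pi*n/(N*a)):
N = 200
H = np.zeros((N, N))
for i in range(N):
    H[i, (i + 1) % N] = H[(i + 1) % N, i] = -t_hop     # periodic boundary conditions
E_ring = np.sort(np.linalg.eigvalsh(H) + eps)

print(E_analytic.min(), E_ring.min())     # both ≈ -2.0: the bandwidth is 4*t_hop
```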
KKR model.
The KKR method, also called "multiple scattering theory" or Green's function method, finds the stationary values of the inverse transition matrix T rather than the Hamiltonian. A variational implementation was suggested by Korringa, Kohn and Rostoker, and is often referred to as the "Korringa–Kohn–Rostoker method". The most important features of the KKR or Green's function formulation are (1) it separates the two aspects of the problem: structure (positions of the atoms) from the scattering (chemical identity of the atoms); and (2) Green's functions provide a natural approach to a localized description of electronic properties that can be adapted to alloys and other disordered systems. The simplest form of this approximation centers non-overlapping spheres (referred to as "muffin tins") on the atomic positions. Within these regions, the potential experienced by an electron is approximated to be spherically symmetric about the given nucleus. In the remaining interstitial region, the screened potential is approximated as a constant. Continuity of the potential between the atom-centered spheres and interstitial region is enforced.
Density-functional theory.
In recent physics literature, a large majority of the electronic structures and band plots are calculated using density-functional theory (DFT), which is not a model but rather a theory, i.e., a microscopic first-principles theory of condensed matter physics that tries to cope with the electron-electron many-body problem via the introduction of an exchange-correlation term in the functional of the electronic density. DFT-calculated bands are in many cases found to be in agreement with experimentally measured bands, for example by angle-resolved photoemission spectroscopy (ARPES). In particular, the band shape is typically well reproduced by DFT. But there are also systematic errors in DFT bands when compared to experiment results. In particular, DFT seems to systematically underestimate by about 30-40% the band gap in insulators and semiconductors.
It is commonly believed that DFT is a theory to predict ground state properties of a system only (e.g. the total energy, the atomic structure, etc.), and that excited state properties cannot be determined by DFT. This is a misconception. In principle, DFT can determine any property (ground state or excited state) of a system given a functional that maps the ground state density to that property. This is the essence of the Hohenberg–Kohn theorem. In practice, however, no known functional exists that maps the ground state density to excitation energies of electrons within a material. Thus, what in the literature is quoted as a DFT band plot is a representation of the DFT Kohn–Sham energies, i.e., the energies of a fictive non-interacting system, the Kohn–Sham system, which has no physical interpretation at all. The Kohn–Sham electronic structure must not be confused with the real, quasiparticle electronic structure of a system, and there is no Koopmans' theorem holding for Kohn–Sham energies, as there is for Hartree–Fock energies, which can be truly considered as an approximation for quasiparticle energies. Hence, in principle, Kohn–Sham based DFT is not a band theory, i.e., not a theory suitable for calculating bands and band-plots. In principle time-dependent DFT can be used to calculate the true band structure although in practice this is often difficult. A popular approach is the use of hybrid functionals, which incorporate a portion of Hartree–Fock exact exchange; this produces a substantial improvement in predicted bandgaps of semiconductors, but is less reliable for metals and wide-bandgap materials.
Green's function methods and the "ab initio" GW approximation.
To calculate the bands including electron-electron interaction many-body effects, one can resort to so-called Green's function methods. Indeed, knowledge of the Green's function of a system provides both ground (the total energy) and also excited state observables of the system. The poles of the Green's function are the quasiparticle energies, the bands of a solid. The Green's function can be calculated by solving the Dyson equation once the self-energy of the system is known. For real systems like solids, the self-energy is a very complex quantity and usually approximations are needed to solve the problem. One such approximation is the GW approximation, so called from the mathematical form the self-energy takes as the product Σ = "GW" of the Green's function "G" and the dynamically screened interaction "W". This approach is more pertinent when addressing the calculation of band plots (and also quantities beyond, such as the spectral function) and can also be formulated in a completely "ab initio" way. The GW approximation seems to provide band gaps of insulators and semiconductors in agreement with experiment, and hence to correct the systematic DFT underestimation.
Dynamical mean-field theory.
Although the nearly free electron approximation is able to describe many properties of electron band structures, one consequence of this theory is that it predicts the same number of electrons in each unit cell. If the number of electrons is odd, we would then expect that there is an unpaired electron in each unit cell, and thus that the valence band is not fully occupied, making the material a conductor. However, materials such as CoO that have an odd number of electrons per unit cell are insulators, in direct conflict with this result. This kind of material is known as a Mott insulator, and requires inclusion of detailed electron-electron interactions (treated only as an averaged effect on the crystal potential in band theory) to explain the discrepancy. The Hubbard model is an approximate theory that can include these interactions. It can be treated non-perturbatively within the so-called dynamical mean-field theory, which attempts to bridge the gap between the nearly free electron approximation and the atomic limit. Formally, however, the states are not non-interacting in this case and the concept of a band structure is not adequate to describe these cases.
Others.
Calculating band structures is an important topic in theoretical solid state physics. In addition to the models mentioned above, other models include the empty lattice approximation, "k"·"p" perturbation theory, and the Kronig–Penney model.
The band structure has been generalised to wavevectors that are complex numbers, resulting in what is called a "complex band structure", which is of interest at surfaces and interfaces.
Each model describes some types of solids very well, and others poorly. The nearly free electron model works well for metals, but poorly for non-metals. The tight binding model is extremely accurate for ionic insulators, such as metal halide salts (e.g. NaCl).
Band diagrams.
To understand how band structure changes relative to the Fermi level in real space, a band structure plot is often first simplified in the form of a band diagram. In a band diagram the vertical axis is energy while the horizontal axis represents real space. Horizontal lines represent energy levels, while blocks represent energy bands. When the horizontal lines in these diagrams are slanted, the energy of the level or band changes with distance. Diagrammatically, this depicts the presence of an electric field within the crystal system. Band diagrams are useful in relating the general band structure properties of different materials to one another when placed in contact with each other.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\psi_{n\\mathbf{k}}(\\mathbf{r}) = e^{i\\mathbf{k}\\cdot\\mathbf{r}} u_{n\\mathbf{k}}(\\mathbf{r}),"
},
{
"math_id": 1,
"text": "f(E) = \\frac{1}{1 + e^{{(E-\\mu)}/{k_\\text{B} T}}}"
},
{
"math_id": 2,
"text": "N/V = \\int_{-\\infty}^{\\infty} g(E) f(E)\\, dE"
},
{
"math_id": 3,
"text": "V(\\mathbf{r}) = \\sum_{\\mathbf{K}} {V_{\\mathbf{K}} e^{i \\mathbf{K}\\cdot\\mathbf{r}}}"
},
{
"math_id": 4,
"text": "\\Psi_{n,\\mathbf{k}} (\\mathbf{r}) = e^{i \\mathbf{k}\\cdot\\mathbf{r}} u_n(\\mathbf{r}) "
},
{
"math_id": 5,
"text": "u_n(\\mathbf{r})"
},
{
"math_id": 6,
"text": "u_n(\\mathbf{r}) = u_n(\\mathbf{r}-\\mathbf{R}) ."
},
{
"math_id": 7,
"text": "\\Psi"
},
{
"math_id": 8,
"text": "\\psi_n(\\mathbf{r})"
},
{
"math_id": 9,
"text": "\\Psi(\\mathbf{r}) = \\sum_{n,\\mathbf{R}} b_{n,\\mathbf{R}} \\psi_n(\\mathbf{r}-\\mathbf{R}),"
},
{
"math_id": 10,
"text": " b_{n,\\mathbf{R}}"
},
{
"math_id": 11,
"text": "a_n(\\mathbf{r}-\\mathbf{R}) = \\frac{V_{C}}{(2\\pi)^{3}} \\int_\\text{BZ} d\\mathbf{k} e^{-i\\mathbf{k}\\cdot(\\mathbf{R} -\\mathbf{r})}u_{n\\mathbf{k}};"
},
{
"math_id": 12,
"text": "u_{n\\mathbf{k}}"
},
{
"math_id": 13,
"text": "\\Psi_{n,\\mathbf{k}} (\\mathbf{r}) = \\sum_{\\mathbf{R}} e^{-i\\mathbf{k}\\cdot(\\mathbf{R}-\\mathbf{r})}a_n(\\mathbf{r} - \\mathbf{R})."
}
]
| https://en.wikipedia.org/wiki?curid=706247 |
706271 | Fuglede's theorem | In mathematics, Fuglede's theorem is a result in operator theory, named after Bent Fuglede.
The result.
Theorem (Fuglede) Let "T" and "N" be bounded operators on a complex Hilbert space with "N" being normal. If "TN" = "NT", then "TN*" = "N*T", where "N*" denotes the adjoint of "N".
Normality of "N" is necessary, as is seen by taking "T"="N". When "T" is self-adjoint, the claim is trivial regardless of whether "N" is normal:
formula_0
Tentative Proof: If the underlying Hilbert space is finite-dimensional, the spectral theorem says that "N" is of the form
formula_1
where "Pi" are pairwise orthogonal projections. One expects that "TN" = "NT" if and only if "TPi" = "PiT".
Indeed, it can be proved to be true by elementary arguments (e.g. it can be shown that all "Pi" are representable as polynomials of "N" and for this reason, if "T" commutes with "N", it has to commute with "Pi"...).
Therefore "T" must also commute with
formula_2
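The finite-dimensional argument can be checked numerically. The following sketch (with randomly generated matrices, illustrative only and not part of the proof) builds a normal matrix "N" and a matrix "T" commuting with it, and verifies that "T" also commutes with "N"*:

```python
# Numerical illustration of Fuglede's theorem in finite dimensions
# (a sketch with randomly generated matrices, not part of the original proof).
import numpy as np

rng = np.random.default_rng(0)
n = 5

# Random unitary U via QR decomposition of a random complex matrix.
Q, _ = np.linalg.qr(rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n)))

# Normal operator N = U diag(lam) U*, with arbitrary complex eigenvalues.
lam = rng.normal(size=n) + 1j * rng.normal(size=n)
N = Q @ np.diag(lam) @ Q.conj().T

# T commutes with N: any operator diagonal in the same eigenbasis will do.
T = Q @ np.diag(rng.normal(size=n) + 1j * rng.normal(size=n)) @ Q.conj().T

Nstar = N.conj().T
print("||TN  - NT || =", np.linalg.norm(T @ N - N @ T))          # ~ 1e-15
print("||TN* - N*T|| =", np.linalg.norm(T @ Nstar - Nstar @ T))  # ~ 1e-15
```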
In general, when the Hilbert space is not finite-dimensional, the normal operator "N" gives rise to a projection-valued measure "P" on its spectrum, "σ"("N"), which assigns a projection "P"Ω to each Borel subset of "σ"("N"). "N" can be expressed as
formula_3
Unlike in the finite-dimensional case, it is by no means obvious that "TN = NT" implies "TP"Ω = "P"Ω"T". Thus, it is not obvious that "T" also commutes with any simple function of the form
formula_4
Indeed, following the construction of the spectral decomposition for a bounded, normal, not self-adjoint, operator "T", one sees that to verify that "T" commutes with formula_5, the most straightforward way is to assume that "T" commutes with both "N" and "N*", giving rise to a vicious circle!
That is the relevance of Fuglede's theorem: The latter hypothesis is not really necessary.
Putnam's generalization.
The following contains Fuglede's result as a special case. The proof by Rosenblum presented below is essentially the one given by Fuglede for his theorem under the assumption "N" = "M".
Theorem (Calvin Richard Putnam) Let "T", "M", "N" be linear operators on a complex Hilbert space, and suppose that "M" and "N" are normal, "T" is bounded and "MT" = "TN".
Then "M"*"T" = "TN"*.
First proof (Marvin Rosenblum):
By induction, the hypothesis implies that "M""k""T" = "TN""k" for all "k".
Thus for any λ in formula_6,
formula_7
Consider the function
formula_8
This is equal to
formula_9
where formula_10 because formula_11 is normal, and similarly formula_12. However we have
formula_13
so U is unitary, and hence has norm 1 for all λ; the same is true for "V"(λ), so
formula_14
So "F" is a bounded analytic vector-valued function, and is thus constant, and equal to "F"(0) = "T". Considering the first-order terms in the expansion for small λ, we must have "M*T" = "TN*".
The original paper of Fuglede appeared in 1950; it was extended to the form given above by Putnam in 1951. The short proof given above was first published by Rosenblum in 1958; it is very elegant, but is less general than the original proof which also considered the case of unbounded operators. Another simple proof of Putnam's theorem is as follows:
Second proof: Consider the matrices
formula_15
The operator "N' " is normal and, by assumption, "T' N' = N' T' ". By Fuglede's theorem, one has
formula_16
Comparing entries then gives the desired result.
From Putnam's generalization, one can deduce the following:
Corollary If two normal operators "M" and "N" are similar, then they are unitarily equivalent.
Proof: Suppose "MS" = "SN" where "S" is a bounded invertible operator. Putnam's result implies "M*S" = "SN*", i.e.
formula_17
Take the adjoint of the above equation and we have
formula_18
So
formula_19
Let "S*=VR", with "V" a unitary (since "S" is invertible) and "R" the positive square root of "SS*". As "R" is a limit of polynomials on "SS*", the above implies that "R" commutes with "M". It is also invertible. Then
formula_20
Corollary If "M" and "N" are normal operators, and "MN" = "NM", then "MN" is also normal.
Proof: The argument invokes only Fuglede's theorem. One can directly compute
formula_21
By Fuglede, the above becomes
formula_22
But "M" and "N" are normal, so
formula_23
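This corollary is easy to test numerically. The sketch below (random matrices, illustrative only) checks that the product of two commuting normal matrices is normal, while the product of two non-commuting Hermitian (hence normal) matrices generally is not:

```python
# Numerical illustration of the corollary (a sketch with random matrices):
# if M and N are commuting normal matrices, MN is again normal, while the
# product of two non-commuting normal matrices generally is not.
import numpy as np

def nonnormality(A):
    """Norm of A A* - A* A (zero iff A is normal)."""
    return np.linalg.norm(A @ A.conj().T - A.conj().T @ A)

rng = np.random.default_rng(1)
n = 4
Q, _ = np.linalg.qr(rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n)))

# Commuting normal matrices: both diagonal in the same unitary basis Q.
M = Q @ np.diag(rng.normal(size=n) + 1j * rng.normal(size=n)) @ Q.conj().T
N = Q @ np.diag(rng.normal(size=n) + 1j * rng.normal(size=n)) @ Q.conj().T
print("commuting case:    ||MN - NM||        =", np.linalg.norm(M @ N - N @ M))
print("                   non-normality of MN =", nonnormality(M @ N))

# Two Hermitian (hence normal) matrices that do not commute.
A = rng.normal(size=(n, n)); A = A + A.T
B = rng.normal(size=(n, n)); B = B + B.T
print("non-commuting case: non-normality of AB =", nonnormality(A @ B))
```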
"C*"-algebras.
The theorem can be rephrased as a statement about elements of C*-algebras.
Theorem (Fuglede-Putnam-Rosenblum) Let "x, y" be two normal elements of a "C*"-algebra "A" and "z" such that "xz" = "zy". Then it follows that "x* z = z y*".
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "TN^* = (NT)^* = (TN)^* = N^*T."
},
{
"math_id": 1,
"text": "N = \\sum_i \\lambda_i P_i "
},
{
"math_id": 2,
"text": "N^* = \\sum_i {\\bar \\lambda_i} P_i."
},
{
"math_id": 3,
"text": "N = \\int_{\\sigma(N)} \\lambda d P(\\lambda). "
},
{
"math_id": 4,
"text": "\\rho = \\sum_i {\\bar \\lambda} P_{\\Omega_i}."
},
{
"math_id": 5,
"text": "P_{\\Omega_i}"
},
{
"math_id": 6,
"text": "\\Complex"
},
{
"math_id": 7,
"text": "e^{\\bar\\lambda M}T = T e^{\\bar\\lambda N}."
},
{
"math_id": 8,
"text": "F(\\lambda) = e^{\\lambda M^*} T e^{-\\lambda N^*}."
},
{
"math_id": 9,
"text": "e^{\\lambda M^*} \\left[e^{-\\bar\\lambda M}T e^{\\bar\\lambda N}\\right] e^{-\\lambda N^*} = U(\\lambda) T V(\\lambda)^{-1},"
},
{
"math_id": 10,
"text": "U(\\lambda) = e^{\\lambda M^* - \\bar\\lambda M}"
},
{
"math_id": 11,
"text": "M"
},
{
"math_id": 12,
"text": "V(\\lambda) = e^{\\lambda N^* - \\bar\\lambda N}"
},
{
"math_id": 13,
"text": "U(\\lambda)^* = e^{\\bar\\lambda M - \\lambda M^*} = U(\\lambda)^{-1}"
},
{
"math_id": 14,
"text": "\\|F(\\lambda)\\| \\le \\|T\\|\\ \\forall \\lambda."
},
{
"math_id": 15,
"text": "T' = \\begin{bmatrix} 0 & 0 \\\\ T & 0 \\end{bmatrix}\n\\quad \\text{and} \\quad\nN' = \\begin{bmatrix} N & 0 \\\\ 0 & M \\end{bmatrix}."
},
{
"math_id": 16,
"text": "T' (N')^* = (N')^*T'. "
},
{
"math_id": 17,
"text": "S^{-1} M^* S = N^*. "
},
{
"math_id": 18,
"text": "S^* M (S^{-1})^* = N. "
},
{
"math_id": 19,
"text": "S^* M (S^{-1})^* = S^{-1} M S \\quad \\Rightarrow \\quad SS^* M (SS^*)^{-1} = M."
},
{
"math_id": 20,
"text": "N = S^*M (S^*)^{-1}=VRMR^{-1}V^*=VMV^*. "
},
{
"math_id": 21,
"text": "(MN) (MN)^* = MN (NM)^* = MN M^* N^*. "
},
{
"math_id": 22,
"text": "= M M^* N N^* = M^* M N^*N. "
},
{
"math_id": 23,
"text": "= M^* N^* MN = (MN)^* MN. "
}
]
| https://en.wikipedia.org/wiki?curid=706271 |
706278 | Squeezed coherent state | Type of quantum state
In physics, a squeezed coherent state is a quantum state that is usually described by two non-commuting observables having continuous spectra of eigenvalues. Examples are position formula_0 and momentum formula_1 of a particle, and the (dimension-less) electric field in the amplitude formula_2 (phase 0) and in the mode formula_3 (phase 90°) of a light wave (the wave's quadratures). The product of the standard deviations of two such operators obeys the uncertainty principle:
formula_4 and formula_5, respectively.
Trivial examples, which are in fact not squeezed, are the ground state formula_6 of the quantum harmonic oscillator and the family of coherent states formula_7. These states saturate the uncertainty above and have a symmetric distribution of the operator uncertainties with formula_8 in "natural oscillator units" and formula_9.
The term squeezed state is actually used for states with a standard deviation below that of the ground state for one of the operators or for a linear combination of the two. The idea behind this is that the circle denoting the uncertainty of a coherent state in the quadrature phase space (see right) has been "squeezed" to an ellipse of the same area. Note that a squeezed state does not need to saturate the uncertainty principle.
Squeezed states of light were first produced in the mid 1980s. At that time, quantum noise squeezing by up to a factor of about 2 (3 dB) in variance was achieved, i.e. formula_10. As of 2017, squeeze factors larger than 10 (10 dB) have been directly observed.
Mathematical definition.
The most general wave function that satisfies the identity above is the squeezed coherent state (we work in units with formula_11)
formula_12
where formula_13 are constants (a normalization constant, the center of the wavepacket, its width, and the expectation value of its momentum). The new feature relative to a coherent state is the free value of the width formula_14, which is the reason why the state is called "squeezed".
The squeezed state above is an eigenstate of a linear operator
formula_15
and the corresponding eigenvalue equals formula_16. In this sense, it is a generalization of the ground state as well as the coherent state.
Operator representation.
The general form of a squeezed coherent state for a quantum harmonic oscillator is given by
formula_17
where formula_6 is the vacuum state, formula_18 is the displacement operator and formula_19 is the squeeze operator, given by
formula_20
where formula_21 and formula_22 are annihilation and creation operators, respectively. For a quantum harmonic oscillator of angular frequency formula_23, these operators are given by
formula_24
For a real formula_25 (note that formula_26, where "r" is the squeezing parameter), the uncertainties in formula_0 and formula_1 are given by
formula_27
Therefore, a squeezed coherent state saturates the Heisenberg uncertainty principle formula_28, with reduced uncertainty in one of its quadrature components and increased uncertainty in the other.
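These variances can be reproduced numerically in a truncated Fock space. The following sketch (assuming ħ = "m" = ω = 1, a real squeezing parameter, and an assumed Fock-space truncation) builds the squeeze operator by matrix exponentiation and evaluates the quadrature variances of the squeezed vacuum:

```python
# Minimal numerical sketch (truncated Fock space, hbar = m = omega = 1,
# real squeezing parameter r): the squeezed vacuum S(r)|0> has quadrature
# variances e^{-2r}/2 and e^{+2r}/2, saturating Delta x * Delta p = 1/2.
import numpy as np
from scipy.linalg import expm

Nfock = 60                       # truncation of the Fock space (assumed)
r = 0.5                          # squeezing parameter (assumed)

a = np.diag(np.sqrt(np.arange(1, Nfock)), k=1)   # annihilation operator
ad = a.conj().T
x = (a + ad) / np.sqrt(2)
p = -1j * (a - ad) / np.sqrt(2)

S = expm(0.5 * r * (a @ a - ad @ ad))            # squeeze operator with zeta = r
vac = np.zeros(Nfock); vac[0] = 1.0
psi = S @ vac                                    # squeezed vacuum state

def var(op, state):
    mean = state.conj() @ op @ state
    return np.real(state.conj() @ op @ op @ state - mean**2)

print("Var(x) =", var(x, psi), " expected", 0.5 * np.exp(-2 * r))
print("Var(p) =", var(p, psi), " expected", 0.5 * np.exp(+2 * r))
```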
Some expectation values for squeezed coherent states are
formula_29
formula_30
formula_31
The general form of a displaced squeezed state for a quantum harmonic oscillator is given by
formula_32
Some expectation values for displaced squeezed state are
formula_33
formula_34
formula_35
Since formula_36 and formula_37 do not commute with each other,
formula_38
formula_39
where formula_40, with formula_41
Examples.
Depending on the phase angle at which the state's width is reduced, one can distinguish amplitude-squeezed, phase-squeezed, and general quadrature-squeezed states. If the squeezing operator is applied directly to the vacuum, rather than to a coherent state, the result is called the squeezed vacuum. The figures below give a nice visual demonstration of the close connection between squeezed states and Heisenberg's uncertainty relation: Diminishing the quantum noise at a specific quadrature (phase) of the wave has as a direct consequence an enhancement of the noise of the complementary quadrature, that is, the field at the phase shifted by formula_42.
As can be seen in the illustrations, in contrast to a coherent state, the quantum noise for a squeezed state is no longer independent of the phase of the light wave. A characteristic broadening and narrowing of the noise during one oscillation period can be observed. The probability distribution of a squeezed state is defined as the norm squared of the wave function mentioned in the last paragraph. It corresponds to the square of the electric (and magnetic) field strength of a classical light wave. The moving wave packets display an oscillatory motion combined with the widening and narrowing of their distribution: the "breathing" of the wave packet. For an amplitude-squeezed state, the most narrow distribution of the wave packet is reached at the field maximum, resulting in an amplitude that is defined more precisely than the one of a coherent state. For a phase-squeezed state, the most narrow distribution is reached at field zero, resulting in an average phase value that is better defined than the one of a coherent state.
In phase space, quantum mechanical uncertainties can be depicted by the Wigner quasi-probability distribution. The intensity of the light wave, its coherent excitation, is given by the displacement of the Wigner distribution from the origin. A change in the phase of the squeezed quadrature results in a rotation of the distribution.
Photon number distributions and phase distributions.
The squeezing angle, that is the phase with minimum quantum noise, has a large influence on the photon number distribution of the light wave and its phase distribution as well.
For amplitude-squeezed light the photon number distribution is usually narrower than that of a coherent state of the same amplitude, resulting in sub-Poissonian light, whereas its phase distribution is wider. The opposite is true for phase-squeezed light, which displays large intensity (photon number) noise but a narrow phase distribution. Nevertheless, the statistics of amplitude-squeezed light has not been observed directly with a photon-number-resolving detector because of experimental difficulties.
For the squeezed vacuum state the photon number distribution displays odd-even-oscillations. This can be explained by the mathematical form of the squeezing operator, that resembles the operator for two-photon generation and annihilation processes. Photons in a squeezed vacuum state are more likely to appear in pairs.
Classification.
Based on the number of modes.
Squeezed states of light are broadly classified into single-mode squeezed states and two-mode squeezed states, depending on the number of modes of the electromagnetic field involved in the process. Recent studies have looked into multimode squeezed states showing quantum correlations among more than two modes as well.
Single-mode squeezed states.
Single-mode squeezed states, as the name suggests, consist of a single mode of the electromagnetic field, one quadrature of which has fluctuations below the shot noise level while the orthogonal quadrature has excess noise. Specifically, a single-mode squeezed "vacuum" (SMSV) state can be mathematically represented as,
formula_43
where the squeezing operator S is the same as introduced in the section on operator representations above. In the photon number basis, writing formula_44 this can be expanded as,
formula_45
which explicitly shows that the pure SMSV consists entirely of even-photon Fock state superpositions.
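The even-photon character of the series can be checked directly. The short sketch below (assumed squeezing parameter, finite cutoff) evaluates the photon-number probabilities from the coefficients above:

```python
# Sketch (illustrative, assumed r and cutoff): photon-number probabilities of a
# single-mode squeezed vacuum, P(2n) = |<2n|SMSV>|^2, from the series above.
import numpy as np
from math import factorial, cosh, tanh

r = 1.0                                    # assumed squeezing parameter
nmax = 40                                  # photon-number cutoff (assumed)
P = np.zeros(nmax + 1)
for nph in range(0, nmax + 1, 2):          # only even photon numbers contribute
    m = nph // 2
    c = (tanh(r) ** m) * np.sqrt(factorial(2 * m)) / (2 ** m * factorial(m))
    P[nph] = c ** 2 / cosh(r)

print("sum of odd-photon probabilities:", P[1::2].sum())   # exactly zero
print("sum over even terms            :", P.sum())         # -> 1 as the cutoff grows
```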
Single mode squeezed states are typically generated by degenerate parametric oscillation in an optical parametric oscillator, or using four-wave mixing.
Two-mode squeezed states.
Two-mode squeezing involves two modes of the electromagnetic field which exhibit quantum noise reduction below the shot noise level in a linear combination of the quadratures of the two fields. For example, the field produced by a nondegenerate parametric oscillator above threshold shows squeezing in the amplitude difference quadrature. The first experimental demonstration of two-mode squeezing in optics was by Heidmann "et al.". More recently, two-mode squeezing was generated on-chip using a four-wave mixing OPO above threshold. Two-mode squeezing is often seen as a precursor to continuous-variable entanglement, and hence a demonstration of the Einstein-Podolsky-Rosen paradox in its original formulation in terms of continuous position and momentum observables. A two-mode squeezed vacuum (TMSV) state can be mathematically represented as,
formula_46,
and, writing down formula_44, in the photon number basis as,
formula_47
If the individual modes of a TMSV are considered separately (i.e., formula_48), then tracing over or absorbing one of the modes leaves the remaining mode in a thermal state
formula_49
with an effective average number of photons formula_50.
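This reduced thermal state is easy to verify numerically from the coefficients above. The following sketch (assumed squeezing parameter and cutoff) checks the normalization and the mean photon number sinh²("r"):

```python
# Sketch (illustrative): tracing out one mode of a two-mode squeezed vacuum
# leaves a thermal state whose mean photon number is sinh^2(r).
import numpy as np

r = 0.8                                    # assumed squeezing parameter
nmax = 200                                 # Fock cutoff for the sum (assumed)
n = np.arange(nmax)
p_n = np.tanh(r) ** (2 * n) / np.cosh(r) ** 2     # diagonal weights of rho_1

print("normalization     :", p_n.sum())           # -> 1
print("mean photon number:", (n * p_n).sum())     # -> sinh(r)**2
print("sinh(r)**2        :", np.sinh(r) ** 2)
```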
Based on the presence of a mean field.
Squeezed states of light can be divided into squeezed vacuum and bright squeezed light, depending on the absence or presence of a non-zero mean field (also called a carrier), respectively. An optical parametric oscillator operated below threshold produces squeezed vacuum, whereas the same OPO operated above threshold produces bright squeezed light. Bright squeezed light can be advantageous for certain quantum information processing applications as it obviates the need to send a local oscillator to provide a phase reference, whereas squeezed vacuum is considered more suitable for quantum-enhanced sensing applications. The AdLIGO and GEO600 gravitational wave detectors use squeezed vacuum to achieve enhanced sensitivity beyond the standard quantum limit.
Atomic spin squeezing.
For squeezing of two-level neutral atom ensembles it is useful to consider the atoms as spin-1/2 particles with corresponding angular momentum operators defined as
formula_51
where formula_52 and formula_53 is the single-spin operator in the formula_54-direction. Here formula_55 will correspond to the population difference in the two level system, i.e. for an equal superposition of the up and down state formula_56. The formula_57−formula_58 plane represents the phase difference between the two states. This is also known as the Bloch sphere picture. We can then define uncertainty relations such as formula_59. For a coherent (unentangled) state, formula_60. Squeezing is here considered the redistribution of uncertainty from one variable (typically formula_55) to another (typically formula_58). If we consider a state pointing in the formula_57 direction, we can define the Wineland criterion for squeezing, or the metrological enhancement of the squeezed state as
formula_61.
This criterion has two factors: the first is the spin noise reduction, i.e. how much the quantum noise in formula_55 is reduced relative to the coherent (unentangled) state, and the second is how much the coherence (the length of the Bloch vector, formula_62) is reduced by the squeezing procedure. Together these quantities indicate how much metrological enhancement the squeezing procedure gives. Here, metrological enhancement is the reduction in averaging time or atom number needed to make a measurement of a specific uncertainty: 20 dB of metrological enhancement means the same precision measurement can be made with 100 times fewer atoms or a 100 times shorter averaging time.
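As a back-of-the-envelope sketch (with hypothetical numbers, not measured values), the criterion can be evaluated from an assumed spin-noise reduction and residual coherence, and converted to an enhancement in decibels:

```python
# Back-of-the-envelope sketch (hypothetical numbers): evaluating the Wineland
# parameter chi^2 from an assumed spin-noise reduction and remaining coherence,
# and converting it to a metrological enhancement in dB.
import numpy as np

var_reduction = 0.005    # Var(J_z) relative to the coherent-state value N/4 (assumed)
coherence = 0.7          # |<J_x>| relative to its maximum N/2 (assumed)

chi2 = coherence**2 / var_reduction        # enhancement factor chi^2 defined above
print(f"chi^2 = {chi2:.0f}  ->  {10 * np.log10(chi2):.1f} dB metrological enhancement")
```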
Experimental realizations.
There has been a wide variety of successful demonstrations of squeezed states. The first demonstrations were experiments with light fields using lasers and non-linear optics (see optical parametric oscillator). This is achieved by a simple process of four-wave mixing with a formula_63 crystal; similarly, travelling-wave phase-sensitive amplifiers generate spatially multimode quadrature-squeezed states of light when the formula_64 crystal is pumped in the absence of any signal. Sub-Poissonian current sources driving semiconductor laser diodes have led to amplitude-squeezed light.
Squeezed states have also been realized via motional states of an ion in a trap, phonon states in crystal lattices, and spin states in neutral atom ensembles. Much progress has been made on the creation and observation of spin squeezed states in ensembles of neutral atoms and ions, which can be used to enhance measurements of time, accelerations, and fields; the current state of the art for measurement enhancement is 20 dB. Generation of spin squeezed states has been demonstrated using both coherent evolution of a coherent spin state and projective, coherence-preserving measurements. Even macroscopic oscillators were driven into classical motional states that were very similar to squeezed coherent states. The current state of the art in noise suppression for laser radiation using squeezed light amounts to 15 dB (as of 2016), which broke the previous record of 12.7 dB (2010).
Applications.
Squeezed states of the light field can be used to enhance precision measurements. For example, phase-squeezed light can improve the phase read out of interferometric measurements (see for example gravitational waves). Amplitude-squeezed light can improve the readout of very weak spectroscopic signals.
Spin squeezed states of atoms can be used to improve the precision of atomic clocks. This is an important problem in atomic clocks and other sensors that use small ensembles of cold atoms where the quantum projection noise represents a fundamental limitation to the precision of the sensor.
Various squeezed coherent states, generalized to the case of many degrees of freedom, are used in various calculations in quantum field theory, for example Unruh effect and Hawking radiation, and generally, particle production in curved backgrounds and Bogoliubov transformations.
Recently, the use of squeezed states for quantum information processing in the continuous variables (CV) regime has been increasing rapidly. Continuous variable quantum optics uses squeezing of light as an essential resource to realize CV protocols for quantum communication, unconditional quantum teleportation and one-way quantum computing. This is in contrast to quantum information processing with single photons or photon pairs as qubits. CV quantum information processing relies heavily on the fact that squeezing is intimately related to quantum entanglement, as the quadratures of a squeezed state exhibit sub-shot-noise quantum correlations.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "x"
},
{
"math_id": 1,
"text": "p"
},
{
"math_id": 2,
"text": "X"
},
{
"math_id": 3,
"text": "Y"
},
{
"math_id": 4,
"text": "\\Delta x \\Delta p \\geq \\frac{\\hbar}2\\;"
},
{
"math_id": 5,
"text": "\\;\\Delta X \\Delta Y \\geq \\frac{1}4"
},
{
"math_id": 6,
"text": "|0\\rangle"
},
{
"math_id": 7,
"text": "|\\alpha\\rangle"
},
{
"math_id": 8,
"text": "\\Delta x_g = \\Delta p_g"
},
{
"math_id": 9,
"text": "\\Delta X_g = \\Delta Y_g = 1/2"
},
{
"math_id": 10,
"text": "\\Delta^2 X \\approx \\Delta^2 X_g/2"
},
{
"math_id": 11,
"text": "\\hbar=1"
},
{
"math_id": 12,
"text": "\\psi(x) = C\\,\\exp\\left(-\\frac{(x-x_0)^2}{2 w_0^2} + i p_0 x\\right)"
},
{
"math_id": 13,
"text": "C,x_0,w_0,p_0"
},
{
"math_id": 14,
"text": "w_0"
},
{
"math_id": 15,
"text": " \\hat x + i\\hat p w_0^2"
},
{
"math_id": 16,
"text": "x_0+ip_0 w_0^2"
},
{
"math_id": 17,
"text": " |\\alpha,\\zeta\\rangle = \\hat{S}(\\zeta)|\\alpha\\rangle = \\hat{S}(\\zeta) \\hat{D}(\\alpha)|0\\rangle "
},
{
"math_id": 18,
"text": "D(\\alpha)"
},
{
"math_id": 19,
"text": "S(\\zeta)"
},
{
"math_id": 20,
"text": "\\hat{D}(\\alpha)=\\exp (\\alpha \\hat a^\\dagger - \\alpha^* \\hat a)\\qquad \\text{and}\\qquad \\hat{S}(\\zeta)=\\exp\\bigg[\\frac{1}{2} (\\zeta^* \\hat a^2-\\zeta \\hat a^{\\dagger 2})\\bigg]"
},
{
"math_id": 21,
"text": "\\hat a"
},
{
"math_id": 22,
"text": "\\hat a^\\dagger"
},
{
"math_id": 23,
"text": "\\omega"
},
{
"math_id": 24,
"text": "\\hat a^\\dagger = \\sqrt{\\frac{m\\omega}{2\\hbar}}\\left(x-\\frac{i p}{m\\omega}\\right)\\qquad \\text{and} \\qquad \\hat a = \\sqrt{\\frac{m\\omega}{2\\hbar}}\\left(x+\\frac{i p}{m\\omega}\\right)"
},
{
"math_id": 25,
"text": "\\zeta"
},
{
"math_id": 26,
"text": "\\zeta = r e^{2 i \\phi}"
},
{
"math_id": 27,
"text": "(\\Delta x)^2=\\frac{\\hbar}{2m\\omega}\\mathrm{e}^{-2\\zeta} \\qquad\\text{and}\\qquad (\\Delta p)^2=\\frac{m\\hbar\\omega}{2}\\mathrm{e}^{2\\zeta}"
},
{
"math_id": 28,
"text": "\\Delta x\\Delta p=\\frac{\\hbar}{2}"
},
{
"math_id": 29,
"text": " \\langle\\alpha,\\zeta | \\hat a | \\alpha,\\zeta\\rangle = \\alpha \\cosh(r) - \\alpha^{*}e^{i\\theta} \\sinh(r) "
},
{
"math_id": 30,
"text": " \\langle\\alpha,\\zeta | {\\hat{a}}^2 | \\alpha,\\zeta\\rangle = \\alpha ^{2} \\cosh^{2}(r) +{\\alpha^{*}}^{2}e^{2i\\theta} \\sinh^{2}(r) - (1+2{|\\alpha|}^{2})e^{i\\theta} \\cosh (r) \\sinh (r) "
},
{
"math_id": 31,
"text": " \\langle\\alpha,\\zeta | {\\hat{a}}^{\\dagger}\\hat{a} | \\alpha,\\zeta\\rangle = |\\alpha|^2 \\cosh^{2}(r) + (1+{|\\alpha|}^{2})\\sinh^2 (r) - ({\\alpha}^2 e^{-i\\theta} + {\\alpha^{*}}^2 e^{i\\theta})\\cosh (r) \\sinh (r) "
},
{
"math_id": 32,
"text": " |\\zeta,\\alpha\\rangle = \\hat{D}(\\alpha)|\\zeta\\rangle = \\hat{D}(\\alpha) \\hat{S}(\\zeta)|0\\rangle "
},
{
"math_id": 33,
"text": " \\langle\\zeta,\\alpha | \\hat a | \\zeta,\\alpha\\rangle = \\alpha "
},
{
"math_id": 34,
"text": " \\langle\\zeta,\\alpha | {\\hat{a}}^2 | \\zeta,\\alpha\\rangle = \\alpha ^{2} - e^{i\\theta} \\cosh (r) \\sinh (r) "
},
{
"math_id": 35,
"text": " \\langle\\zeta,\\alpha | {\\hat{a}}^{\\dagger}\\hat{a} | \\zeta,\\alpha\\rangle = |\\alpha|^2 + \\sinh^2 (r) "
},
{
"math_id": 36,
"text": " \\hat{S}(\\zeta) "
},
{
"math_id": 37,
"text": " \\hat{D}(\\alpha)"
},
{
"math_id": 38,
"text": "\\hat{S}(\\zeta) \\hat{D}(\\alpha) \\neq \\hat{D}(\\alpha) \\hat{S}(\\zeta)"
},
{
"math_id": 39,
"text": " | \\alpha, \\zeta \\rangle \\neq | \\zeta, \\alpha \\rangle "
},
{
"math_id": 40,
"text": " \\hat{D}(\\alpha)\\hat{S}(\\zeta) =\\hat{S}(\\zeta)\\hat{S}^{\\dagger}(\\zeta)\\hat{D}(\\alpha)\\hat{S}(\\zeta)= \\hat{S}(\\zeta)\\hat{D}(\\gamma)"
},
{
"math_id": 41,
"text": " \\gamma=\\alpha\\cosh r + \\alpha^* e^{i\\theta} \\sinh r "
},
{
"math_id": 42,
"text": "\\tau/4"
},
{
"math_id": 43,
"text": " |\\text{SMSV}\\rangle = S(\\zeta)|0\\rangle "
},
{
"math_id": 44,
"text": "\\zeta = r e^{i\\phi}"
},
{
"math_id": 45,
"text": " |\\text{SMSV}\\rangle = \\frac{1}{\\sqrt{\\cosh r}} \\sum_{n=0}^\\infty (- e^{i\\phi} \\tanh r)^n \\frac{\\sqrt{(2n)!}}{2^n n!} |2n\\rangle"
},
{
"math_id": 46,
"text": " |\\text{TMSV}\\rangle = S_2(\\zeta)|0,0\\rangle = \\exp(\\zeta^* \\hat a \\hat b - \\zeta \\hat a^\\dagger \\hat b^\\dagger) |0,0\\rangle "
},
{
"math_id": 47,
"text": " |\\text{TMSV}\\rangle = \\frac{1}{\\cosh r} \\sum_{n=0}^\\infty (-e^{i \\phi}\\tanh r)^n |nn\\rangle"
},
{
"math_id": 48,
"text": "|nn\\rangle=|n\\rangle_1 |n\\rangle_2"
},
{
"math_id": 49,
"text": "\\begin{align}\\rho_1 &= \\mathrm{Tr}_2 [| \\mathrm{TMSV} \\rangle \\langle \\mathrm{TMSV} | ]\\\\ &= \\frac{1}{\\cosh^2(r)} \\sum_{n=0}^\\infty \\tanh^{2n}(r) \n|n \\rangle \\langle n|, \\end{align} "
},
{
"math_id": 50,
"text": "\\widetilde{n} = \\sinh^2(r)"
},
{
"math_id": 51,
"text": "J_v=\\sum_{i=1}^N j_v^{(i)}"
},
{
"math_id": 52,
"text": "v={x,y,z}"
},
{
"math_id": 53,
"text": "j_v^{(i)}"
},
{
"math_id": 54,
"text": "v"
},
{
"math_id": 55,
"text": "J_z"
},
{
"math_id": 56,
"text": "J_z=0"
},
{
"math_id": 57,
"text": "J_x"
},
{
"math_id": 58,
"text": "J_y"
},
{
"math_id": 59,
"text": "\\Delta J_z \\cdot \\Delta J_y \\geq \\left|\\Delta J_x\\right|/2"
},
{
"math_id": 60,
"text": "\\Delta J_z=\\Delta J_y=\\sqrt{N}/2"
},
{
"math_id": 61,
"text": "\\chi^2=\\left(\\frac{\\sqrt{N}/2}{\\Delta J_z}\\frac{\\left|J_x\\right|}{N/2}\\right)^2"
},
{
"math_id": 62,
"text": "\\left|J_x\\right|"
},
{
"math_id": 63,
"text": "\\chi^{(3)}"
},
{
"math_id": 64,
"text": "\\chi^{(2)}"
}
]
| https://en.wikipedia.org/wiki?curid=706278 |
70628115 | Biofoam | Foam made from biological substances
Biofoams are biological or biologically derived foams, making up lightweight and porous cellular solids. A relatively new term, its use in academia began in the 1980s in relation to the scum that formed on activated sludge plants.
Biofoams is a broad umbrella term that covers a large variety of topics including naturally occurring foams, as well as foams produced from biological materials such as soy oil and cellulose. Biofoams have been a topic of continuous research because synthesized biofoams are being considered as alternatives to traditional petroleum-based foams. Due to the variable nature of synthesized foams, they can have a variety of characteristics and material properties that make them suitable for packaging, insulation, and other applications.
Naturally occurring foams.
Foams can form naturally within a variety of living organisms. For example, wood, cork, and plant matter all can have foam components or structures. Fungi are generally composed of mycelium, which is made up of hollow filaments of chitin nanofibers bound to other components. Animal parts like cancellous bone, horseshoe crab shells, toucan beaks, sponge, coral, feathers, and antlers all contain foam-like structures which decrease overall weight at the expense of other material properties.
Structures like bone, antlers, and shells have strong materials housing weaker but lighter materials within. Bones tend to have compact, dense external regions, which protect the internal foam-like cancellous bone. The same principle applies to horseshoe crab shells, toucan beaks, and antlers. The barbs and shafts of feathers similarly contain closed-cell foam.
Protective foams can be formed externally by parent organisms or by eggs interacting with the environment: tunicate eggs mix with sea water to create a liquid-based foam; tree frog eggs grow in protein foams above and on water (see Figure 1); certain freshwater fish lay eggs in surface foam from their mucus; deep sea fish produce eggs in swimbladders of dual-layered foams; and some insects keep their larvae in foam.
Biomimetic synthetic foams.
Honeycomb.
Honeycomb refers to bioinspired patterns that provide a lightweight design for energy absorbing structures. Honeycomb design can be found in different structural biological components such as spongy bone and plant vasculature. Biologically inspired honeycomb structures include Kelvin, Weaire and Floret honeycomb (see Figure 2); each with a slightly different structure in comparison to the natural hexagonal honeycomb. These variations on the biological design have yielded significantly improved energy absorption results in comparison to traditional hexagonal honeycomb biofoam.
Due to these increased energy absorption performances, honeycomb inspired structures are being researched for use inside vehicle crumple zones. By using honeycomb structures as the inner core and surrounding the structure with a more rigid structural shell, these components can absorb impact energy during a crash and reduce the amount of energy the driver experiences.
Aerogel.
Aerogels are able to fill large volumes with minimal material yielding special properties such as low density and low thermal conductivity. These aerogels tend to have internal structures categorized as open or closed cell structures, the same cell structure that is used to define many 3-dimensional honeycomb biofoams. Aerogels are also being engineered to mirror the internal foam structures of animal hairs (see Figure 3). These biomimetic aerogels are being actively researched for their promising elastic and insulative properties.
Material properties.
Foam cell structures.
A foam is considered open-celled if at least two of its facets are holes rather than walls. In this case the entirety of the load on the foam is carried by the cross-beams that make up the edges of the cell. If no more than one of the walls of the cell is a hole, the foam is considered closed-celled in nature. For most synthetic foams, a mixture of closed-cell and open-cell character is observed because cells rupture during the foaming process before the matrix solidifies.
The mechanical properties of the foam then depend on the closed cell character of the foam as derived by Gibson and Ashby:
formula_0
Where "E" is the elastic modulus, "ρ" is the density of the material, "φ" is the ratio of the volume of the face to the volume of the edge of the material, and the subscript "s" denotes the bulk property of the material rather than that of the foam sample.
Liquid and solid foams.
For many polymeric foams, a solidified foam is formed by polymerizing and foaming a liquid polymer mixture and then allowing that foam to solidify. Thus, liquid foam aging effects do occur before solidification. In the liquid foam, gravitational forces and internal pressures cause a flow of the liquid toward the bottom of the foam. This causes some of the foam cells to form into irregular polyhedra as liquid drains, which are less stable structures than the spherical structures of a traditional foam. These structures can however be stabilized by the presence of a surfactant.
The foam structure before solidification is an inherently unstable one, as the voids present greatly increase the surface free energy of the structure. In some synthetic biofoams, a surfactant can be used in order to lower the surface free energy of the foam and therefore stabilize the foam. In some natural biofoams, proteins can act as the surfactants for the foams to form and stabilize.
Fiber reinforcement.
During the solidification of synthetic biofoams, fibers may be added as a reinforcement agent for the matrix. This additionally will create a heterogeneous nucleation site for the air pockets of the foam itself during the foaming process. However, as fiber content increases, it can begin to inhibit formation of the cellular structure of the matrix.
Applications.
Packaging.
In relation to packaging, starches and biopolyesters make up these biofoams, as they are adequate replacements for expanded polystyrene. Polylactic acids (PLAs) are a common basis for these biofoams since, owing to their bio-based and biodegradable properties, they offer a substitute for the polyolefin-based foams commonly used in automotive parts, pharmaceutical products, and short-lifetime disposable packaging. PLA is made by the ring-opening polymerization of lactide, which is formed from lactic acid produced by bacterial fermentation; the process is shown in Figure 4.
PLA does not have the most desirable traits for biodegradability in the packaging industry, as it has a low heat distortion temperature and unfavorable water barrier characteristics. On the other hand, PLA has been shown to have desirable packaging properties, including high ultraviolet light barrier properties and low melting and glass transition temperatures. Recently, PGA has been introduced in the packaging industry as it is a good solvent and comparable to PLA. Table 1 shows the characteristics of both biofoams and how they compare. As shown, PGA has a strong stereochemical structure, which gives it high barrier and mechanical properties, making it desirable for the packaging industry. Mixing PGA and PLA by copolymerization has been explored so that PGA can help enhance the barrier properties of PLA when used in packaging.
Table 1: The properties of PLA in comparison to PGA.
Biomedical.
PLA is also the most popular biofoam used in biomedical devices. Its properties are desirable in biomedical applications, especially in combination with other polymers. Specifically, its biocompatibility and biodegradability make it favorable in tissue engineering through the use of FDM 3D printing. PLA does well in these printing environments, as its glass transition temperature and shape-memory effects are small. In recent studies, PLA has been combined with hydroxyapatite (HA) in order to make the modulus of the sample more favorable for its application in repairing bone failure. Specifically in tissue engineering, HA has also been shown to promote osteogenesis by triggering osteoblasts and pre-osteoblastic cells. HA is a strong material, which makes it ideal to add to PLA, since PLA has low toughness with a 10% elongation before failure. FFF-based 3D printing was used, and compression tests were performed, as demonstrated in Figure 5. The results showed a self-healing capability of the sample, which could be used in certain biomedical practices.
Environmental impact.
With recent attention toward climate change, global warming, and sustainability, there has been a new wave of research regarding the creation and sustainability of biodegradable products. This research has evolved to include the creation of biodegradable biofoams, with the intention of replacing other foams that may be environmentally harmful or whose production may be unsustainable. Following this vein, Gunawan et al. conducted research to develop “commercially-relevant polyurethane products that can biodegrade in the natural environment”. One such product is the flip-flop, so as part of the research a flip-flop made from algae-derived polyurethane was prototyped (see Figure 7). This research ultimately concluded that significant degradation occurs in polyurethane foam formulated from algae oil in both a compost and a soil environment (different microorganisms being present in each).
Similarly, research has been done in which algae oil (AO) and residual palm oil (RPO) were formulated into polyurethane foam at different ratios to determine which ratio has the optimum biodegradability. RPO is recovered from palm oil mill waste and is a byproduct of that manufacturing process. After tests to determine biodegradability as well as a thermogravimetric analysis, the team determined that the material could be used in applications such as insulation or fire retardants, depending on the AO/RPO ratio.
Another focus of biofoam research is the development of biofoams that are not only biodegradable, but are also cost-effective and require less energy to produce. Luo et al. have conducted research in this area of biofoams and have ultimately developed a biofoam that is produced from a “higher content of nature bioresource materials” and using a “minimal [number of] processing steps”. The processing steps include the one-pot method of foam preparation published by F. Zhang and X. Luo in their paper about developing polyurethane biofoams as an alternative to petroleum based foams for specific applications.
Ongoing research.
Research efforts have been put into using natural components in the creation of potentially biodegradable foam products. Mycelium (Figure 8), chitosan (Figure 9), wheat gluten (Figure 10), and cellulose (Figure 11) have all been used to create biofoams for different purposes. The wheat gluten example was used in combination with graphene to attempt to make a conductive biofoam. The mycelium-based, chitosan-based, and cellulose-based biofoam examples are intended to become cost effective and low density material options.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\left ( \\frac{E}{E_s} \\right )\\propto\\left ( \\frac{ \\rho}{ \\rho_s} \\right )^2\\left ( \\frac{1}{(1+ \\phi)^2} \\right )\\left [ 1 + \\frac{ \\rho}{ \\rho_s} ( \\frac{ \\phi ^3}{1 + \\phi} ) \\right]"
}
]
| https://en.wikipedia.org/wiki?curid=70628115 |
70628267 | Kahn–Kalai conjecture | Mathematical proposition
The Kahn–Kalai conjecture, also known as the expectation threshold conjecture or more recently the Park-Pham Theorem, was a conjecture in the field of graph theory and statistical mechanics, proposed by Jeff Kahn and Gil Kalai in 2006. It was proven in a paper published in 2024.
Background.
This conjecture concerns the general problem of estimating when phase transitions occur in systems. For example, in a random network with formula_0 nodes, where each edge is included with probability formula_1, it is unlikely for the graph to contain a Hamiltonian cycle if formula_1 is less than a threshold value formula_2, but highly likely if formula_1 exceeds that threshold.
Threshold values are often difficult to calculate, but a lower bound for the threshold, the "expectation threshold", is generally easier to calculate. The Kahn–Kalai conjecture is that the two values are generally close together in a precisely defined way, namely that there is a universal constant formula_3 for which the ratio between the two is less than formula_4 where formula_5 is the size of a largest minimal element of an increasing family formula_6 of subsets of a power set.
Proof.
Jinyoung Park and Huy Tuan Pham announced a proof of the conjecture in 2022; it was published in 2024.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "N"
},
{
"math_id": 1,
"text": "p"
},
{
"math_id": 2,
"text": "(\\log N)/N"
},
{
"math_id": 3,
"text": "K"
},
{
"math_id": 4,
"text": "K \\log{\\ell(\\mathcal{F})}"
},
{
"math_id": 5,
"text": "\\ell(\\mathcal{F})"
},
{
"math_id": 6,
"text": "\\mathcal{F}"
}
]
| https://en.wikipedia.org/wiki?curid=70628267 |
706295 | Canonical commutation relation | Relation satisfied by conjugate variables in quantum mechanics
In quantum mechanics, the canonical commutation relation is the fundamental relation between canonical conjugate quantities (quantities which are related by definition such that one is the Fourier transform of another). For example,
formula_0
between the position operator x and momentum operator px in the x direction of a point particle in one dimension, where ["x" , "p""x"] = "x" "p""x" − "p""x" "x" is the commutator of x and px, i is the imaginary unit, ℏ is the reduced Planck constant "h"/2π, and formula_1 is the unit operator. In general, position and momentum are vectors of operators, and the commutation relation between different components of position and momentum can be expressed as
formula_2
where formula_3 is the Kronecker delta.
This relation is attributed to Werner Heisenberg, Max Born and Pascual Jordan (1925), who called it a "quantum condition" serving as a postulate of the theory; it was noted by E. Kennard (1927) to imply the Heisenberg uncertainty principle. The Stone–von Neumann theorem gives a uniqueness result for operators satisfying (an exponentiated form of) the canonical commutation relation.
Relation to classical mechanics.
By contrast, in classical physics, all observables commute and the commutator would be zero. However, an analogous relation exists, which is obtained by replacing the commutator with the Poisson bracket multiplied by "i"ℏ,
formula_4
This observation led Dirac to propose that the quantum counterparts formula_5, ĝ of classical observables f, g satisfy
formula_6
In 1946, Hip Groenewold demonstrated that a "general systematic correspondence" between quantum commutators and Poisson brackets could not hold consistently.
However, he further appreciated that such a systematic correspondence does, in fact, exist between the quantum commutator and a "deformation" of the Poisson bracket, today called the Moyal bracket, and, in general, quantum operators and classical observables and distributions in phase space. He thus finally elucidated the consistent correspondence mechanism, the Wigner–Weyl transform, that underlies an alternate equivalent mathematical representation of quantum mechanics known as deformation quantization.
Derivation from Hamiltonian mechanics.
According to the correspondence principle, in certain limits the quantum equations of states must approach Hamilton's equations of motion. The latter state the following relation between the generalized coordinate "q" (e.g. position) and the generalized momentum "p":
formula_7
In quantum mechanics the Hamiltonian formula_8, (generalized) coordinate formula_9 and (generalized) momentum formula_10 are all linear operators.
The time derivative of a quantum state is given by the operator −formula_11 acting on the state (by the Schrödinger equation). Equivalently, since the operators are not explicitly time-dependent, they can be seen to be evolving in time (see Heisenberg picture) according to their commutation relation with the Hamiltonian:
formula_12
formula_13
In order for that to reconcile in the classical limit with Hamilton's equations of motion, formula_14 must depend entirely on the appearance of formula_10 in the Hamiltonian and formula_15 must depend entirely on the appearance of formula_9 in the Hamiltonian. Further, since the Hamiltonian operator depends on the (generalized) coordinate and momentum operators, it can be viewed as a functional, and we may write (using functional derivatives):
formula_16
formula_17
In order to obtain the classical limit we must then have
formula_18
Weyl relations.
The group formula_19 generated by exponentiation of the 3-dimensional Lie algebra determined by the commutation relation formula_20 is called the Heisenberg group. This group can be realized as the group of formula_21 upper triangular matrices with ones on the diagonal.
According to the standard mathematical formulation of quantum mechanics, quantum observables such as formula_22 and formula_23 should be represented as self-adjoint operators on some Hilbert space. It is relatively easy to see that two operators satisfying the above canonical commutation relations cannot both be bounded. Certainly, if formula_22 and formula_23 were trace class operators, the relation formula_24 gives a nonzero number on the right and zero on the left.
Alternately, if formula_22 and formula_23 were bounded operators, note that formula_25, hence the operator norms would satisfy
formula_26 so that, for any "n",
formula_27
However, n can be arbitrarily large, so at least one operator cannot be bounded, and the dimension of the underlying Hilbert space cannot be finite. If the operators satisfy the Weyl relations (an exponentiated version of the canonical commutation relations, described below) then as a consequence of the Stone–von Neumann theorem, "both" operators must be unbounded.
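The trace obstruction can be made concrete in finite dimensions. The sketch below (random matrices, with ħ set to 1) shows that the trace of any commutator of finite matrices vanishes, whereas the trace of iħ times the identity does not:

```python
# Numerical sketch of the trace argument: for any finite matrices X and P,
# Tr(XP - PX) = 0, so the commutator can never equal i*hbar times the identity.
import numpy as np

rng = np.random.default_rng(2)
n = 6
X = rng.normal(size=(n, n))
P = rng.normal(size=(n, n))

comm = X @ P - P @ X
print("trace of the commutator   :", np.trace(comm))   # 0 up to round-off
print("trace of i*hbar*I (hbar=1):", 1j * n)            # nonzero for any hbar != 0
```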
Still, these canonical commutation relations can be rendered somewhat "tamer" by writing them in terms of the (bounded) unitary operators formula_28 and formula_29. The resulting braiding relations for these operators are the so-called Weyl relations
formula_30
These relations may be thought of as an exponentiated version of the canonical commutation relations; they reflect that translations in position and translations in momentum do not commute. One can easily reformulate the Weyl relations in terms of the representations of the Heisenberg group.
The uniqueness of the canonical commutation relations—in the form of the Weyl relations—is then guaranteed by the Stone–von Neumann theorem.
For technical reasons, the Weyl relations are not strictly equivalent to the canonical commutation relation formula_20. If formula_22 and formula_23 were bounded operators, then a special case of the Baker–Campbell–Hausdorff formula would allow one to "exponentiate" the canonical commutation relations to the Weyl relations. Since, as we have noted, any operators satisfying the canonical commutation relations must be unbounded, the Baker–Campbell–Hausdorff formula does not apply without additional domain assumptions. Indeed, counterexamples exist satisfying the canonical commutation relations but not the Weyl relations. (These same operators give a counterexample to the naive form of the uncertainty principle.) These technical issues are the reason that the Stone–von Neumann theorem is formulated in terms of the Weyl relations.
A discrete version of the Weyl relations, in which the parameters "s" and "t" range over formula_31, can be realized on a finite-dimensional Hilbert space by means of the clock and shift matrices.
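A minimal numerical sketch of these clock and shift matrices (assuming dimension "n" = 5) verifies the discrete braiding relation directly:

```python
# Sketch: the n-dimensional clock matrix Z and shift matrix X satisfy the
# discrete Weyl relation Z X = omega X Z with omega = exp(2*pi*i/n), the
# finite-dimensional analogue of the exponentiated commutation relation.
import numpy as np

n = 5
omega = np.exp(2j * np.pi / n)
Z = np.diag(omega ** np.arange(n))            # clock matrix
X = np.roll(np.eye(n), 1, axis=0)             # shift matrix: |j> -> |j+1 mod n>

print(np.linalg.norm(Z @ X - omega * X @ Z))  # ~ 0: the discrete Weyl relation

# More generally, powers Z**t and X**s braid with a phase omega**(s*t):
s, t = 2, 3
lhs = np.linalg.matrix_power(Z, t) @ np.linalg.matrix_power(X, s)
rhs = omega ** (s * t) * np.linalg.matrix_power(X, s) @ np.linalg.matrix_power(Z, t)
print(np.linalg.norm(lhs - rhs))              # ~ 0
```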
Generalizations.
The simple formula
formula_32
valid for the quantization of the simplest classical system, can be generalized to the case of an arbitrary Lagrangian formula_33. We identify canonical coordinates (such as x in the example above, or a field Φ("x") in the case of quantum field theory) and canonical momenta π"x" (in the example above it is p, or more generally, some functions involving the derivatives of the canonical coordinates with respect to time):
formula_34
This definition of the canonical momentum ensures that one of the Euler–Lagrange equations has the form
formula_35
The canonical commutation relations then amount to
formula_36
where "δ""ij" is the Kronecker delta.
Further, it can be shown that
formula_37
Using formula_38, it can be shown by mathematical induction that
formula_39
generally known as McCoy's formula.
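The low-order cases can be checked symbolically. The following sketch verifies the "n" = "m" = 2 case of McCoy's formula, [x2, p2] = 4iℏ xp + 2ℏ2, by letting the operators act on a test function with "p" = −iℏ d/dx:

```python
# Symbolic sketch checking the n = m = 2 case of McCoy's formula on a test
# function, with x acting by multiplication and p = -i*hbar*d/dx:
# [x^2, p^2] f = 4*i*hbar*x*(p f) + 2*hbar^2 f.
import sympy as sp

x, hbar = sp.symbols('x hbar', positive=True)
f = sp.Function('f')(x)

P = lambda g: -sp.I * hbar * sp.diff(g, x)       # momentum operator
Xop = lambda g: x * g                            # position operator

lhs = Xop(Xop(P(P(f)))) - P(P(Xop(Xop(f))))      # [x^2, p^2] applied to f
rhs = 4 * sp.I * hbar * Xop(P(f)) + 2 * hbar**2 * f   # McCoy's formula, k = 1, 2 terms
print(sp.simplify(sp.expand(lhs - rhs)))         # prints 0
```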
Gauge invariance.
Canonical quantization is applied, by definition, on canonical coordinates. However, in the presence of an electromagnetic field, the canonical momentum p is not gauge invariant. The correct gauge-invariant momentum (or "kinetic momentum") is
formula_40 (SI units) formula_41 (cgs units),
where q is the particle's electric charge, A is the vector potential, and "c" is the speed of light. Although the quantity "p"kin is the "physical momentum", in that it is the quantity to be identified with momentum in laboratory experiments, it "does not" satisfy the canonical commutation relations; only the canonical momentum does that. This can be seen as follows.
The non-relativistic Hamiltonian for a quantized charged particle of mass m in a classical electromagnetic field is (in cgs units)
formula_42
where A is the three-vector potential and φ is the scalar potential. This form of the Hamiltonian, as well as the Schrödinger equation "Hψ" = "iħ∂ψ/∂t", the Maxwell equations and the Lorentz force law are invariant under the gauge transformation
formula_43
formula_44
formula_45
formula_46
where formula_47 and Λ = Λ("x","t") is the gauge function.
The angular momentum operator is
formula_48
and obeys the canonical quantization relations
formula_49
defining the Lie algebra for so(3), where formula_50 is the Levi-Civita symbol. Under gauge transformations, the angular momentum transforms as
formula_51
The gauge-invariant angular momentum (or "kinetic angular momentum") is given by
formula_52
which has the commutation relations
formula_53
where formula_54 is the magnetic field. The inequivalence of these two formulations shows up in the Zeeman effect and the Aharonov–Bohm effect.
Uncertainty relation and commutators.
All such nontrivial commutation relations for pairs of operators lead to corresponding uncertainty relations, involving positive semi-definite expectation contributions by their respective commutators and anticommutators. In general, for two Hermitian operators A and B, consider expectation values in a system in the state ψ, the variances around the corresponding expectation values being (Δ"A")2 ≡ ⟨("A" − ⟨"A"⟩)2⟩, etc.
Then
formula_55
where ["A", "B"] ≡ "A B" − "B A" is the commutator of A and B, and {"A", "B"} ≡ "A B" + "B A" is the anticommutator.
This follows through use of the Cauchy–Schwarz inequality, since |⟨"A"2⟩| |⟨"B"2⟩| ≥ |⟨"A B"⟩|2, together with the decomposition "A B" = (["A", "B"] + {"A", "B"})/2, applied to the shifted operators "A" − ⟨"A"⟩ and "B" − ⟨"B"⟩.
Substituting for A and B (and taking care with the analysis) yields Heisenberg's familiar uncertainty relation for x and p, as usual.
Uncertainty relation for angular momentum operators.
For the angular momentum operators "L""x" = "y pz" − "z py", etc., one has that
formula_56
where formula_57 is the Levi-Civita symbol and simply reverses the sign of the answer under pairwise interchange of the indices. An analogous relation holds for the spin operators.
Here, for Lx and Ly, in angular momentum multiplets "ψ" = |"ℓ","m"⟩, one has, for the transverse components of the Casimir invariant "Lx"2 + "Ly"2+ "Lz"2, the z-symmetric relations
⟨"Lx"2⟩ = ⟨"Ly"2⟩ = ("ℓ" ("ℓ" + 1) − "m"2) ℏ2/2,
as well as ⟨"Lx"⟩ = ⟨"Ly"⟩ = 0.
Consequently, the above inequality applied to this commutation relation specifies
formula_58
hence
formula_59
and therefore
formula_60
so, then, it yields useful constraints such as a lower bound on the Casimir invariant: "ℓ" ("ℓ" + 1) ≥ |"m"| (|"m"| + 1), and hence "ℓ" ≥ |"m"|, among others.
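These relations can be checked explicitly for a small multiplet. The sketch below (with ℏ = 1) builds the ℓ = 1 angular momentum matrices and verifies both ⟨"Lx"2⟩ = ⟨"Ly"2⟩ = ("ℓ"("ℓ" + 1) − "m"2)/2 and the uncertainty bound for the stretched state "m" = "ℓ" = 1:

```python
# Numerical sketch (hbar = 1): for the l = 1, m = 1 state the relations above
# hold with equality, <Lx^2> = <Ly^2> = (l(l+1) - m^2)/2 = 1/2 and
# (Delta Lx)(Delta Ly) = |<Lz>|/2.
import numpy as np

# Spin-1 angular momentum matrices in the |1,1>, |1,0>, |1,-1> basis.
Lz = np.diag([1.0, 0.0, -1.0])
Lplus = np.array([[0.0, np.sqrt(2), 0.0],
                  [0.0, 0.0, np.sqrt(2)],
                  [0.0, 0.0, 0.0]])
Lx = (Lplus + Lplus.T) / 2
Ly = (Lplus - Lplus.T) / (2j)

psi = np.array([1.0, 0.0, 0.0])               # the |l=1, m=1> state

def ev(op):                                   # expectation value in psi
    return np.real(psi.conj() @ op @ psi)

dLx = np.sqrt(ev(Lx @ Lx) - ev(Lx) ** 2)
dLy = np.sqrt(ev(Ly @ Ly) - ev(Ly) ** 2)
print("<Lx^2> =", ev(Lx @ Lx), " <Ly^2> =", ev(Ly @ Ly))              # 0.5, 0.5
print("Delta Lx * Delta Ly =", dLx * dLy, " |<Lz>|/2 =", abs(ev(Lz)) / 2)
```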
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "[\\hat x,\\hat p_x] = i\\hbar \\mathbb{I}"
},
{
"math_id": 1,
"text": " \\mathbb{I}"
},
{
"math_id": 2,
"text": "[\\hat x_i,\\hat p_j] = i\\hbar \\delta_{ij},"
},
{
"math_id": 3,
"text": "\\delta_{ij}"
},
{
"math_id": 4,
"text": "\\{x,p\\} = 1 \\, ."
},
{
"math_id": 5,
"text": "\\hat{f}"
},
{
"math_id": 6,
"text": "[\\hat f,\\hat g]= i\\hbar\\widehat{\\{f,g\\}} \\, ."
},
{
"math_id": 7,
"text": "\\begin{cases}\n \\dot{q} = \\frac{\\partial H}{\\partial p} = \\{q, H\\}; \\\\\n \\dot{p} = -\\frac{\\partial H}{\\partial q} = \\{p, H\\}.\n\\end{cases}"
},
{
"math_id": 8,
"text": "\\hat{H}"
},
{
"math_id": 9,
"text": "\\hat{Q}"
},
{
"math_id": 10,
"text": "\\hat{P}"
},
{
"math_id": 11,
"text": "i\\hat{H}/\\hbar"
},
{
"math_id": 12,
"text": "\\frac {d\\hat{Q}}{dt} = \\frac {i}{\\hbar} [\\hat{H},\\hat{Q}]"
},
{
"math_id": 13,
"text": "\\frac {d\\hat{P}}{dt} = \\frac {i}{\\hbar} [\\hat{H},\\hat{P}] \\,\\, ."
},
{
"math_id": 14,
"text": " [\\hat{H},\\hat{Q}]"
},
{
"math_id": 15,
"text": "[\\hat{H},\\hat{P}]"
},
{
"math_id": 16,
"text": "[\\hat{H},\\hat{Q}] = \\frac {\\delta \\hat{H}}{\\delta \\hat{P}} \\cdot [\\hat{P},\\hat{Q}]"
},
{
"math_id": 17,
"text": "[\\hat{H},\\hat{P}] = \\frac {\\delta \\hat{H}}{\\delta \\hat{Q}} \\cdot [\\hat{Q},\\hat{P}] \\, . "
},
{
"math_id": 18,
"text": " [\\hat{Q},\\hat{P}] = i \\hbar ~ \\mathbb{I}."
},
{
"math_id": 19,
"text": "H_3(\\mathbb{R})"
},
{
"math_id": 20,
"text": "[\\hat{x},\\hat{p}]=i\\hbar"
},
{
"math_id": 21,
"text": "3\\times 3"
},
{
"math_id": 22,
"text": "\\hat{x}"
},
{
"math_id": 23,
"text": "\\hat{p}"
},
{
"math_id": 24,
"text": "\\operatorname{Tr}(AB)=\\operatorname{Tr}(BA)"
},
{
"math_id": 25,
"text": "[\\hat{x}^n,\\hat{p}]=i\\hbar n \\hat{x}^{n-1}"
},
{
"math_id": 26,
"text": "2 \\left\\|\\hat{p}\\right\\| \\left\\|\\hat{x}^{n-1}\\right\\| \\left\\|\\hat{x}\\right\\| \\geq n \\hbar \\left\\|\\hat{x}^{n-1}\\right\\|,"
},
{
"math_id": 27,
"text": "2 \\left\\|\\hat{p}\\right\\| \\left\\|\\hat{x}\\right\\| \\geq n \\hbar"
},
{
"math_id": 28,
"text": "\\exp(it\\hat{x})"
},
{
"math_id": 29,
"text": "\\exp(is\\hat{p})"
},
{
"math_id": 30,
"text": "\\exp(it\\hat{x})\\exp(is\\hat{p})=\\exp(-ist/\\hbar)\\exp(is\\hat{p})\\exp(it\\hat{x})."
},
{
"math_id": 31,
"text": "\\mathbb{Z}/n"
},
{
"math_id": 32,
"text": "[x,p] = i\\hbar \\, \\mathbb{I} ~,"
},
{
"math_id": 33,
"text": "{\\mathcal L}"
},
{
"math_id": 34,
"text": "\\pi_i \\ \\stackrel{\\mathrm{def}}{=}\\ \\frac{\\partial {\\mathcal L}}{\\partial(\\partial x_i / \\partial t)}."
},
{
"math_id": 35,
"text": "\\frac{\\partial}{\\partial t} \\pi_i = \\frac{\\partial {\\mathcal L}}{\\partial x_i}."
},
{
"math_id": 36,
"text": "[x_i,\\pi_j] = i\\hbar\\delta_{ij} \\, "
},
{
"math_id": 37,
"text": "[F(\\vec{x}),p_i] = i\\hbar\\frac{\\partial F(\\vec{x})}{\\partial x_i}; \\qquad [x_i, F(\\vec{p})] = i\\hbar\\frac{\\partial F(\\vec{p})}{\\partial p_i}."
},
{
"math_id": 38,
"text": "C_{n+1}^{k} = C_{n}^{k} + C_{n}^{k-1}"
},
{
"math_id": 39,
"text": "\\left[\\hat{x}^n,\\hat{p}^m\\right] = \\sum_{k=1}^{\\min\\left(m,n\\right)}{ \\frac{-\\left(-i \\hbar\\right)^k n!m!}{k!\\left(n-k\\right)!\\left(m-k\\right)!} \\hat{x}^{n-k} \\hat{p}^{m-k}} = \\sum_{k=1}^{\\min\\left(m,n\\right)}{ \\frac{\\left(i \\hbar\\right)^k n!m!}{k!\\left(n-k\\right)!\\left(m-k\\right)!} \\hat{p}^{m-k}\\hat{x}^{n-k}} ,"
},
{
"math_id": 40,
"text": "p_\\text{kin} = p - qA \\,\\!"
},
{
"math_id": 41,
"text": "p_\\text{kin} = p - \\frac{qA}{c} \\,\\!"
},
{
"math_id": 42,
"text": "H=\\frac{1}{2m} \\left(p-\\frac{qA}{c}\\right)^2 +q\\phi"
},
{
"math_id": 43,
"text": "A\\to A' = A+\\nabla \\Lambda"
},
{
"math_id": 44,
"text": "\\phi\\to \\phi' = \\phi-\\frac{1}{c} \\frac{\\partial \\Lambda}{\\partial t}"
},
{
"math_id": 45,
"text": "\\psi \\to \\psi' = U\\psi"
},
{
"math_id": 46,
"text": "H\\to H' = U H U^\\dagger,"
},
{
"math_id": 47,
"text": "U=\\exp \\left( \\frac{iq\\Lambda}{\\hbar c}\\right)"
},
{
"math_id": 48,
"text": "L=r \\times p \\,\\!"
},
{
"math_id": 49,
"text": "[L_i, L_j]= i\\hbar {\\epsilon_{ijk}} L_k"
},
{
"math_id": 50,
"text": "\\epsilon_{ijk}"
},
{
"math_id": 51,
"text": " \\langle \\psi \\vert L \\vert \\psi \\rangle \\to\n\\langle \\psi^\\prime \\vert L^\\prime \\vert \\psi^\\prime \\rangle =\n\\langle \\psi \\vert L \\vert \\psi \\rangle +\n\\frac {q}{\\hbar c} \\langle \\psi \\vert r \\times \\nabla \\Lambda \\vert \\psi \\rangle \\, .\n"
},
{
"math_id": 52,
"text": "K=r \\times \\left(p-\\frac{qA}{c}\\right),"
},
{
"math_id": 53,
"text": "[K_i,K_j]=i\\hbar {\\epsilon_{ij}}^{\\,k}\n\\left(K_k+\\frac{q\\hbar}{c} x_k\n\\left(x \\cdot B\\right)\\right)"
},
{
"math_id": 54,
"text": "B=\\nabla \\times A"
},
{
"math_id": 55,
"text": " \\Delta A \\, \\Delta B \\geq \\frac{1}{2} \\sqrt{ \\left|\\left\\langle\\left[{A},{B}\\right]\\right\\rangle \\right|^2 + \\left|\\left\\langle\\left\\{ A-\\langle A\\rangle ,B-\\langle B\\rangle \\right\\} \\right\\rangle \\right|^2} ,"
},
{
"math_id": 56,
"text": " [{L_x}, {L_y}] = i \\hbar \\epsilon_{xyz} {L_z}, "
},
{
"math_id": 57,
"text": "\\epsilon_{xyz}"
},
{
"math_id": 58,
"text": "\\Delta L_x \\Delta L_y \\geq \\frac{1}{2} \\sqrt{\\hbar^2|\\langle L_z \\rangle|^2}~, "
},
{
"math_id": 59,
"text": "\\sqrt {|\\langle L_x^2\\rangle \\langle L_y^2\\rangle |} \\geq \\frac{\\hbar^2}{2} \\vert m\\vert"
},
{
"math_id": 60,
"text": "\\ell(\\ell+1)-m^2\\geq |m| ~,"
}
]
| https://en.wikipedia.org/wiki?curid=706295 |
70629721 | Joshua 13 | Book of Joshua, chapter 13
Joshua 13 is the thirteenth chapter of the Book of Joshua in the Hebrew Bible or in the Old Testament of the Christian Bible. According to Jewish tradition the book was attributed to Joshua, with additions by the high priests Eleazar and Phinehas, but modern scholars view it as part of the Deuteronomistic History, which spans the books of Deuteronomy to 2 Kings, attributed to nationalistic and devotedly Yahwistic writers during the time of the reformer Judean king Josiah in 7th century BCE. This chapter records the list of land still to be conquered and the land allotments for the tribes Reuben, Gad and half of the Manasseh (east), a part of a section comprising Joshua 13:1–21:45 about the Israelites allotting the land of Canaan.
Text.
This chapter was originally written in the Hebrew language. It is divided into 33 verses.
Textual witnesses.
Some early manuscripts containing the text of this chapter in Hebrew are of the Masoretic Text tradition, which includes the Codex Cairensis (895), Aleppo Codex (10th century), and Codex Leningradensis (1008).
Extant ancient manuscripts of a translation into Koine Greek known as the Septuagint (originally made in the last few centuries BCE) include Codex Vaticanus (B; formula_0B; 4th century) and Codex Alexandrinus (A; formula_0A; 5th century).
Analysis.
The narrative of the Israelites allotting the land of Canaan comprises Joshua 13:1 to 21:45 and has the following outline:
A. Preparations for Distributing the Land (13:1-14:15)
1. Joshua Directed to Distribute the West Jordan Inheritance (13:1-7)
2. The East Jordan Inheritance (13:8-33)
a. The East Jordan (13:8-14)
b. Reuben (13:15-23)
c. Gad (13:24-28)
d. East Manasseh (13:29-31)
e. Summary (13:32-33)
3. Summary of the West Jordan Inheritance (14:1-5)
4. Caleb's Inheritance (14:6-15)
B. The Allotment for Judah (15:1-63)
C. The Allotment for Joseph (16:1-17:18)
D. Land Distribution at Shiloh (18:1-19:51)
E. Levitical Distribution and Conclusion (20:1-21:45)
The command to allot the Land (13:1–7).
After the completion of the conquest narrative, this passage starts the major section concerning the allocation
of territory to the tribes (Joshua 13–21). The command to Joshua (verse 1) recalls other challenges to Israel in the book, with promise and warning at the same time (23:16; 24). It is followed by the outline of the land not yet conquered, covering three areas:
Now Joshua's task is to divide the land in Cisjordan (v. 7), as the Transjordanian lands have already been allotted.
"2This is the land that yet remains: all the regions of the Philistines, and all those of the Geshurites 3(from the Shihor, which is east of Egypt, northward to the boundary of Ekron, it is counted as Canaanite; there are five rulers of the Philistines, those of Gaza, Ashdod, Ashkelon, Gath, and Ekron), and those of the Avvim,"
The settling of Transjordan (13:8–33).
The allotment of the Transjordan (the land east of the Jordan River) prefaces the section about the distribution of the Cisjordan (the land west of the Jordan), with more abbreviated lists of cities than the parallel in Numbers 32:34–38, but including other materials (e.g. Numbers 31:8 for verses 21–22; Deuteronomy 18:1 for verses 14, 33). Moses led the conquest in Transjordan (verses 12, 21), so he could 'give' the land as an 'inheritance' (verses 8, 14–15, 24, 29, 33), and this continues into chapter 14 (verses 3–4, 9, 12), until finally Joshua is the one who 'gives for an inheritance' (14:13). This Transjordan narrative therefore affirms the unity of Moses' and Joshua's work and demonstrates the unity of all the tribes of Israel. The division of the large tribe of Joseph into two, Ephraim and Manasseh (14:3–4), explains why the tribe of Levi did not receive land of its own (verses 14, 33; their compensation is elaborated in Joshua 21), so the twelvefold character of Israel is maintained. Although Moses and Joshua distribute the land, it will be an 'inheritance', as its ultimate giver is the God of Israel.
"Balaam also, the son of Beor, the one who practiced divination, was killed with the sword by the people of Israel among the rest of their slain."
Notes.
<templatestyles src="Reflist/styles.css" />
References.
<templatestyles src="Reflist/styles.css" />
Sources.
<templatestyles src="Refbegin/styles.css" /> | [
{
"math_id": 0,
"text": " \\mathfrak{G}"
}
]
| https://en.wikipedia.org/wiki?curid=70629721 |
70629723 | Joshua 14 | Book of Joshua, chapter 14
Joshua 14 is the fourteenth chapter of the Book of Joshua in the Hebrew Bible or in the Old Testament of the Christian Bible. According to Jewish tradition the book was attributed to Joshua, with additions by the high priests Eleazar and Phinehas, but modern scholars view it as part of the Deuteronomistic History, which spans the books of Deuteronomy to 2 Kings and is attributed to nationalistic and devotedly Yahwistic writers during the time of the reformer Judean king Josiah in the 7th century BCE. This chapter records the preparation for the allotment of land and the inheritance for Caleb, a part of a section comprising Joshua 13:1–21:45 about the Israelites allotting the land of Canaan.
Text.
This chapter was originally written in the Hebrew language. It is divided into 15 verses.
Textual witnesses.
Some early manuscripts containing the text of this chapter in Hebrew are of the Masoretic Text tradition, which includes the Codex Cairensis (895), Aleppo Codex (10th century), and Codex Leningradensis (1008).
Extant ancient manuscripts of a translation into Koine Greek known as the Septuagint (originally made in the last few centuries BCE) include Codex Vaticanus (B; formula_0B; 4th century) and Codex Alexandrinus (A; formula_0A; 5th century).
Analysis.
The narrative of the Israelites allotting the land of Canaan comprises Joshua 13:1 to 21:45 and has the following outline:
A. Preparations for Distributing the Land (13:1-14:15)
1. Joshua Directed to Distribute the West Jordan Inheritance (13:1-7)
2. The East Jordan Inheritance (13:8-33)
a. The East Jordan (13:8-14)
b. Reuben (13:15-23)
c. Gad (13:24-28)
d. East Manasseh (13:29-31)
e. Summary (13:32-33)
3. Summary of the West Jordan Inheritance (14:1-5)
4. Caleb's Inheritance (14:6-15)
B. The Allotment for Judah (15:1-63)
C. The Allotment for Joseph (16:1-17:18)
D. Land Distribution at Shiloh (18:1-19:51)
E. Levitical Distribution and Conclusion (20:1-21:45)
Summary of the West Jordan inheritance (14:1–5).
The allocation of the land in Cisjordan (west of the Jordan River) was carried out by Joshua together with Eleazar the priest and the tribal chiefs (verse 1) as a direct continuation of Numbers 26, which records the census taken under the leadership of Moses and Eleazar precisely for this distribution (Numbers 26:1–4, 52–56; cf. Numbers 32:28). The sacred lot was used as commanded in Numbers 26:55. The explanation for the exclusion of the Levites from land inheritance, and the division of the tribe of Joseph into two (the tribes of Ephraim and Manasseh), are additional to the information in Numbers 26.
"And these are the countries which the children of Israel inherited in the land of Canaan, which Eleazar the priest, and Joshua the son of Nun, and the heads of the fathers of the tribes of the children of Israel, distributed for inheritance to them."
Caleb's inheritance (14:6–15).
Before the allotment for the tribe of Judah, a special grant of land is given to Caleb, who (with Joshua) had dissented from the bad report of the first spies (Numbers 13:30–33; cf. Numbers 32:12), and for his faithfulness was promised a possession of his own (Numbers 14:24; Deuteronomy 1:36). The land Caleb requested was in the area of Hebron (verse 12), within the territory soon to be allotted to Caleb's tribe of Judah. In his speech making the request, Caleb emphasized his vigor into old age (cf. Moses; Deuteronomy 34:7), which was also a part of the promise to him (Numbers 26:65) because of his trust in YHWH, and that he was not afraid of the Anakim, the gigantic people who had frightened Israel at first (verse 12; cf. Numbers 13:22, 28, 32–33).
"And now, behold, the LORD has kept me alive, just as he said, these forty-five years since the time that the LORD spoke this word to Moses, while Israel walked in the wilderness. And now, behold, I am this day eighty-five years old."
Verse 10.
A time calculation is embedded in this verse: Caleb the son of Jephunneh was 40 years old when he received the promise (after returning as one of the 12 spies; verse 7), and 45 years have passed since then, so he is 85 years old at this time. According to "Sebachim 118b", the promise at Kadesh-barnea was given 2 years after the Exodus from Egypt, so of the 40 years of wandering, 38 years passed until the crossing into Canaan (Deuteronomy 2:14). Therefore, of the 45 years since the promise, the remaining seven years comprise the period of the conquest of Canaan.
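The chronology above can be restated as simple arithmetic. The following LaTeX summary only repeats the figures already given in this verse commentary; the descriptive labels are added for clarity and are not part of the biblical text.
\begin{align}
\text{Caleb's age now} &= 40 + 45 = 85 \text{ years},\\
\text{wilderness years after the promise} &= 40 - 2 = 38 \text{ years},\\
\text{years of the conquest} &= 45 - 38 = 7 \text{ years}.
\end{align}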
"And Joshua blessed him, and gave unto Caleb the son of Jephunneh Hebron for an inheritance."
Notes.
<templatestyles src="Reflist/styles.css" />
References.
<templatestyles src="Reflist/styles.css" />
Sources.
<templatestyles src="Refbegin/styles.css" /> | [
{
"math_id": 0,
"text": " \\mathfrak{G}"
}
]
| https://en.wikipedia.org/wiki?curid=70629723 |